Search results for: digital supply chain
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2013

153 Co-Administration Effects of Conjugated Linoleic Acid and L-Carnitine on Weight Gain and Biochemical Profile in Diet Induced Obese Rats

Authors: Maryam Nazari, Majid Karandish, Alihossein Saberi

Abstract:

Obesity as a global health challenge motivates pharmaceutical industries to produce anti-obesity drugs. However, the effectiveness of these agents remains unclear. Because of the popularity of dietary supplements, the aim of this study was to investigate the effects of Conjugated Linoleic Acid (CLA) and L-carnitine (LC) on serum glucose, triglyceride, cholesterol and weight changes in diet-induced obese rats. 48 male Wistar rats were randomly divided into two groups: normal fat diet (n=8) and high fat diet (HFD) (n=32). After eight weeks, the second group, which was maintained on the HFD until the end of the study, was subdivided into four categories: a) 500 mg Corn Oil (as control group), b) 500 mg CLA, c) 200 mg LC, d) 500 mg CLA + 200 mg LC. All doses were given per kg body weight and administered by oral gavage for four weeks. Body weights were measured and recorded weekly by means of a digital scale. At the end of the study, blood samples were collected for measurement of biochemical markers. SPSS Version 16 was used for statistical analysis. At the end of the 8th week, a significant difference in weight was observed between the HFD and NFD groups. After 12 weeks, LC significantly reduced weight gain by 4.2%. The trend of weight gain in the CLA and CLA+LC groups was decelerated, but not significantly. CLA+LC reduced the triglyceride level significantly, but only CLA had a significant influence on total cholesterol and an insignificant decreasing effect on FBS. Our results showed that an obesogenic diet in a relatively short time led to obesity and dyslipidemia, which can be modified by LC and CLA to some extent.

Keywords: Conjugated linoleic acid, high fat diet, L-carnitine, obesity.

152 Genotypic and Allelic Distribution of Polymorphic Variants of Gene SLC47A1 Leu125Phe (rs77474263) and Gly64Asp (rs77630697) and Their Association to the Clinical Response to Metformin in Adult Pakistani T2DM Patients

Authors: Sadaf Moeez, Madiha Khalid, Zoya Khalid, Sania Shaheen, Sumbul Khalid

Abstract:

Background: Inter-individual variation in response to metformin, which is considered a first-line therapy for T2DM treatment, is considerable. The current study aimed to investigate the impact of two genetic variants, Leu125Phe (rs77474263) and Gly64Asp (rs77630697), in gene SLC47A1 on the clinical efficacy of metformin in T2DM Pakistani patients. Methods: The study included 800 T2DM patients (400 metformin responders and 400 metformin non-responders) along with 400 ethnically matched healthy individuals. The genotypes were determined by allele-specific polymerase chain reaction. In-silico analysis was done to confirm the effect of the two SNPs on the structure of the genes. Association was statistically determined using SPSS software. Results: The minor allele frequencies for rs77474263 and rs77630697 were 0.13 and 0.12, respectively. For SLC47A1 rs77474263, carriers of one mutant 'T' allele (CT genotype) were fewer among metformin responders than among non-responders (29.2% vs. 35.5%). Likewise, the efficacy was further reduced (7.2% vs. 4.0%) in carriers of two copies of the 'T' allele (TT). Remarkably, T2DM cases with two copies of allele 'C' (CC) were 2.11 times more likely to respond to metformin monotherapy. For SLC47A1 rs77630697, carriers of one mutant 'A' allele (GA genotype) were fewer among metformin responders than among non-responders (33.5% vs. 43.0%). Likewise, the efficacy was further reduced (8.5% vs. 4.5%) in carriers of two copies of the 'A' allele (AA). Remarkably, T2DM cases with two copies of allele 'G' (GG) were 2.41 times more likely to respond to metformin monotherapy. In-silico analysis revealed that these two variants affect the structure and stability of their corresponding proteins. Conclusion: The present data suggest that the SLC47A1 Leu125Phe (rs77474263) and Gly64Asp (rs77630697) polymorphisms are associated with the therapeutic response to metformin in T2DM patients of Pakistan.
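The genotype-wise response figures above reduce to an odds ratio computed from a 2×2 table of responders and non-responders by genotype group. The Python sketch below shows that arithmetic; the counts are hypothetical and chosen only to illustrate how a figure such as the reported 2.11-fold odds is obtained, not taken from the study.

```python
# Odds ratio for metformin response by genotype, from a hypothetical 2x2 table.
# Counts are illustrative only; they are not the study's data.
cc_responders, cc_nonresponders = 300, 235   # wild-type CC carriers
t_responders, t_nonresponders = 100, 165     # carriers of at least one T allele

odds_cc = cc_responders / cc_nonresponders
odds_t = t_responders / t_nonresponders
odds_ratio = odds_cc / odds_t
print(round(odds_ratio, 2))  # > 1 means CC carriers are more likely to respond
```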

Keywords: Diabetes, T2DM, SLC47A1, Pakistan, polymorphism.

151 Generation of 3D Models Obtained with Low-Cost RGB and Thermal Sensors Mounted on Drones

Authors: Julio Manuel de Luis Ruiz, Javier Sedano Cibrián, Rubén Pérez Álvarez, Raúl Pereda García, Felipe Piña García

Abstract:

Nowadays it is common to resort to aerial photography to carry out the prospection and/or exploration of archaeological sites. In recent years, Unmanned Aerial Vehicles (UAVs) have been applied as the vehicles that carry the sensor. This implies certain advantages, such as the possibility of including low-cost sensors, given that these vehicles can carry the sensor at relatively low altitudes. Due to this, low-cost dual sensors have recently begun to be used. This new equipment can complement classic Digital Elevation Models (DEMs) in the exploration of archaeological sites, but this entails the need for a methodological setting to optimize the acquisition, processing and exploitation of the information provided by low-cost dual sensors. This research focuses on the design of an appropriate workflow to obtain 3D models with low-cost sensors carried on UAVs, both in the RGB and thermal domains. The workflow has been applied to the archaeological site of Juliobriga, located in Cantabria (Spain), for which a flight with this type of sensor was planned, developed and analyzed. The results show a strong dependence of the thermal sensor on the ground sample distance (GSD), as well as the capability of this technique to interpret underground materials. This research allows us to state that the thermal data alone do not provide the main information about the site itself, but in combination with other types of information, such as the DEM and the typology of materials, they can produce very positive results with respect to the exploration and knowledge of the site.

Keywords: Process optimization, RGB models, thermal models, UAV, workflow.

150 The Impact of Changing Political and Economic Conditions on International Production Cooperation with a Focus on Multinational Corporations and Transnational Corporations

Authors: Tomiris Tussupova

Abstract:

The research highlights the influence of political conditions on the operations, investment decisions, and international production networks of Multinational Corporations (MNCs) and Transnational Corporations (TNCs). It investigates how factors such as political instability, protectionist policies, and regulatory changes impact the structure and functioning of International Production Cooperation (IPC). Furthermore, the analysis identifies gaps in the literature and formulates pertinent research questions to address in the paper. The study explores MNCs and TNCs' responses to changing political and economic conditions, emphasizing their strategies for adaptation. Additionally, it delves into the specific mechanisms employed by these corporations to mitigate risks and challenges arising from evolving political and economic landscapes. The research provides policy recommendations for governments, international organizations, and industry associations. These recommendations focus on enhancing policy stability, promoting regional integration, supporting digital technology adoption, and encouraging responsible and sustainable practices in IPC. By incorporating these suggestions, policymakers and practitioners can foster an enabling environment for MNCs and TNCs, thereby facilitating stable and efficient international production networks. Overall, this research contributes to a deeper understanding of the role of MNCs and TNCs in IPC under changing political and economic conditions. The insights garnered from this study can guide future research and inform policy decisions to promote sustainable and resilient international production cooperation.

Keywords: International cooperation, Multinational Corporations, Transnational Corporations, international production networks, Global Value Chains.

149 The Necessity of Biomass Application for Developing Combined Heat and Power (CHP) with Biogas Fuel: Case Study

Authors: Farnaz Amin Salehi, David Edward Cotton, Mohammad Ali Abdoli, Kambiz Rezapour

Abstract:

The daily increase of organic waste materials resulting from different activities in the country is one of the main factors in environmental pollution. Today, given the low output of traditional methods, the high cost of waste disposal and the associated environmental pollution, the use of modern methods such as anaerobic digestion for the production of biogas has become prevalent. The biogas collected from the anaerobic digestion process is usable as a renewable energy source similar to natural gas, although with a lower methane content and heating value. Today, with the help of filtration and proper preparation technologies, access to biogas with features fully comparable to natural gas has become possible. At present, biogas is one of the main sources for supplying electrical and thermal energy and also an appropriate option to be used in four-stroke engines, diesel engines, Stirling engines, gas turbines, gas microturbines and fuel cells to produce electricity. The use of biogas in CHP for energy production has attracted attention around the world because of its socio-economic and environmental advantages. The production of biogas through anaerobic digestion and its application in CHP power plants in Iran can not only supply part of the country's energy demand, but can also represent a move in line with sustainable development. In this article, the necessity of developing CHP plants with biogas fuel in the country is addressed based on studies of the economic, environmental and social aspects. Also, to demonstrate the economic importance of establishing these kinds of power plants, the necessary calculations have been done as a case study for a CHP power plant with biogas fuel.

Keywords: Anaerobic Digestion, Biogas, CHP, Organic Wastes

148 Hierarchies Based On the Number of Cooperating Systems of Finite Automata on Four-Dimensional Input Tapes

Authors: Makoto Sakamoto, Yasuo Uchida, Makoto Nagatomo, Takao Ito, Tsunehiro Yoshinaga, Satoshi Ikeda, Masahiro Yokomichi, Hiroshi Furutani

Abstract:

In theoretical computer science, the Turing machine has played a number of important roles in understanding and exploiting basic concepts and mechanisms in computing and information processing [20]. It is a simple mathematical model of computers [9]. Later, M. Blum and C. Hewitt first proposed two-dimensional automata as a computational model of two-dimensional pattern processing, and investigated their pattern recognition abilities in 1967 [7]. Since then, many researchers in this field have been investigating properties of automata on two- or three-dimensional tapes. On the other hand, the question of whether processing four-dimensional digital patterns is much more difficult than two- or three-dimensional ones is of great interest from the theoretical and practical standpoints. Thus, the study of four-dimensional automata as a computational model of four-dimensional pattern processing has been meaningful [8]-[19],[21]. This paper introduces a cooperating system of four-dimensional finite automata as one model of four-dimensional automata. A cooperating system of four-dimensional finite automata consists of a finite number of four-dimensional finite automata and a four-dimensional input tape where these finite automata work independently (in parallel). Those finite automata whose input heads scan the same cell of the input tape can communicate with each other; that is, every finite automaton is allowed to know the internal states of the other finite automata on the same cell it is scanning at the moment. In this paper, we mainly investigate the accepting powers of a cooperating system of eight- or seven-way four-dimensional finite automata. The seven-way four-dimensional finite automaton is an eight-way four-dimensional finite automaton whose input head can move east, west, south, north, up, down, or in the future, but not in the past, on a four-dimensional input tape.

Keywords: computational complexity, cooperating system, finite automaton, four-dimension, hierarchy, multihead.

147 FACTS Based Stabilization for Smart Grid Applications

Authors: Adel M. Sharaf, Foad H. Gandoman

Abstract:

Nowadays, photovoltaic (PV) farms/parks and large PV–smart grid interface schemes are emerging and commonly utilized in renewable energy distributed generation. However, PV hybrid DC-AC schemes using interfacing power electronic converters usually have a negative impact on power quality and the stabilization of the modern electrical network under load excursions and network fault conditions in the smart grid. Consequently, robust FACTS-based interface schemes are required to ensure efficient energy utilization and stabilization of bus voltages as well as limiting switching/fault inrush current conditions. FACTS devices are also used in smart grid battery interface and storage schemes with PV-battery storage hybrid systems, as an elegant alternative for renewable energy utilization with backup battery storage for electric utility energy and demand side management, to provide the needed energy and power capacity under heavy load conditions. The paper presents a robust PV–Li-Ion battery storage interface scheme for low-voltage distribution/utilization using FACTS stabilization enhancement and dynamic maximum PV power tracking controllers. Digital simulation and validation of the proposed scheme are done using the MATLAB/Simulink software environment for a low-voltage distribution/utilization system feeding hybrid linear, motorized-inrush and nonlinear loads from a DC-AC interface VSC 6-pulse inverter fed from the PV park/farm with a back-up Li-Ion storage battery.

Keywords: AC FACTS, Smart grid, Stabilization, PV-Battery Storage, Switched Filter-Compensation (SFC).

146 Optimization of Doubly Fed Induction Generator Equivalent Circuit Parameters by Direct Search Method

Authors: Mamidi Ramakrishna Rao

Abstract:

The doubly-fed induction generator (DFIG) is currently the choice for many wind turbines. These generators, when connected to the grid through a converter, are subjected to varied power system conditions such as voltage variation, frequency variation, short circuit fault conditions, etc. Further, many countries like Canada, Germany, the UK and Scotland have distinct grid codes relating to wind turbines. Accordingly, following network faults, wind turbines have to supply a definite reactive current. To satisfy the requirements, including reactive current capability, an optimum electrical design becomes mandatory for the DFIG to function. This paper intends to optimize the equivalent circuit parameters of an electrical design for satisfactory DFIG performance. The direct search method has been used for optimization of the parameters. The variables selected include electromagnetic core dimensions (diameters and stack length), slot dimensions, radial air gap between stator and rotor, and winding copper cross-section area. Optimization for a 2 MW DFIG has been executed separately for three objective functions - maximum reactive power capability (Case I), maximum efficiency (Case II) and minimum weight (Case III). In the optimization analysis program, voltage variations (10%), leading and lagging power factor (0.95), and speeds corresponding to slips of -0.3 to +0.3 have been considered. The optimum designs obtained for the objective functions were compared. It can be concluded that the direct search method of optimization helps in determining an optimum electrical design for each objective function, such as efficiency, reactive power capability or weight minimization.
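As a concrete illustration of the direct search idea used above, the sketch below implements a simple compass (pattern) search in Python: it probes each design variable in turn and shrinks the step when no improvement is found. The two-variable objective is a stand-in, not the DFIG electromagnetic model.

```python
# A compass (direct) search sketch: evaluate the objective at +/- steps along each
# design variable and keep any improvement, shrinking the step when none is found.
def objective(x):
    # Hypothetical smooth objective of two design variables (e.g. stack length, air gap)
    return (x[0] - 1.2) ** 2 + 3.0 * (x[1] - 0.4) ** 2

def direct_search(f, x0, step=0.5, tol=1e-6, max_iter=1000):
    x, fx = list(x0), f(x0)
    for _ in range(max_iter):
        improved = False
        for i in range(len(x)):
            for delta in (step, -step):
                trial = x.copy()
                trial[i] += delta
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
        if not improved:
            step *= 0.5          # refine the search mesh
            if step < tol:
                break
    return x, fx

print(direct_search(objective, [0.0, 0.0]))   # converges towards (1.2, 0.4)
```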

Keywords: Direct search, DFIG, equivalent circuit parameters, optimization.

145 Reinforced Concrete Bridge Deck Condition Assessment Methods Using Ground Penetrating Radar and Infrared Thermography

Authors: Nicole M. Martino

Abstract:

Reinforced concrete bridge deck condition assessments primarily use visual inspection methods, where an inspector looks for and records locations of cracks, potholes, efflorescence and other signs of probable deterioration. Sounding is another technique used to diagnose the condition of a bridge deck; however, this method listens for damage within the subsurface as the surface is struck with a hammer or chain. Even though extensive procedures are in place for using these inspection techniques, neither one provides the inspector with a comprehensive understanding of the internal condition of a bridge deck – the location where damage originates. In order to make accurate estimates of repair locations and quantities, in addition to allocating the necessary funding, a total understanding of the deck’s deteriorated state is key. The research presented in this paper collected infrared thermography and ground penetrating radar data from reinforced concrete bridge decks without an asphalt overlay. These decks were of various ages and their condition varied from brand new to in need of replacement. The goals of this work were to first verify that these nondestructive evaluation methods could identify similar areas of healthy and damaged concrete, and then to see if combining the results of both methods would provide a higher confidence than if the condition assessment was completed using only one method. The results from each method were presented as plan view color contour plots. The results from one of the decks assessed as a part of this research, including these plan view plots, are presented in this paper. Furthermore, in order to answer the interest of transportation agencies throughout the United States, this research developed a step-by-step guide which demonstrates how to collect and assess a bridge deck using these nondestructive evaluation methods. This guide addresses setup procedures on the deck during the day of data collection, system setups and settings for different bridge decks, data post-processing for each method, and data visualization and quantification.

Keywords: Bridge deck deterioration, ground penetrating radar, infrared thermography, NDT of bridge decks.

144 Analysis of Metallothionein Gene MT1A (rs11076161) and MT2A (rs10636) Polymorphisms as a Molecular Marker in Type 2 Diabetes Mellitus among Malay Population

Authors: Norsakinah Mohammad Osman, Ali Etemad, Patimah Ismail

Abstract:

Type 2 diabetes mellitus (T2DM) is a complex metabolic disorder characterized by high blood glucose resulting from insulin resistance and insufficiency due to deterioration of β-cell (islets of Langerhans) function. T2DM is commonly caused by a combination of inherited genetic variation and lifestyle. Metallothionein (MT) is a known cysteine-rich protein responsible for helping zinc homeostasis, which is important in insulin signaling and secretion, as well as for protecting the body from reactive oxygen species (ROS). ROS and free radicals, which MT scavenges, happen to be among the causes of T2DM and its complications. The objective of this study was to investigate the association of MT1A and MT2A polymorphisms between T2DM and control subjects among Malay populations. This study involved 150 T2DM and 120 healthy individuals of Malay ethnicity and mixed genders. Genomic DNA was extracted from buccal cells and amplified for the MT1A and MT2A loci; the 347 bp and 238 bp banding patterns were respectively produced by means of the Polymerase Chain Reaction (PCR). The PCR products were digested with MluCI and Tsp45I restriction enzymes, producing fragment lengths of 158/189/347 bp and 103/135/238 bp, respectively. The ANOVA test was conducted and showed a significant difference between diabetic and control subjects for age, BMI, WHR, SBP, FPG, HBA1C, LDL, TG, TC and family history (P<0.05), while HDL, the CVD risk ratio and DBP did not show any significant difference (P>0.05). The genotype frequencies for AA, AG and GG of the MT1A polymorphism were 72.7%, 22.7% and 4.7% in cases and 15%, 55% and 30% in controls, respectively. As for MT2A, the genotype frequencies of GG, GC and CC were 42.7%, 27.3% and 30% in cases and 5%, 40% and 55% in controls, respectively. Both polymorphisms showed a significant difference between the two investigated groups (P=0.000). A post hoc test was conducted and showed a significant difference between the genotypes within each polymorphism (P=0.000). The MT1A and MT2A polymorphisms are believed to be reliable molecular markers to distinguish T2DM subjects from healthy individuals in Malay populations.

Keywords: Type 2 Diabetes Mellitus (T2DM), Metallothionein (MT), MT1A (rs11076161), MT2A (rs10636), Malay, Genetic Polymorphism.

143 A Novel Multiple Valued Logic OHRNS Modulo rn Adder Circuit

Authors: Mehdi Hosseinzadeh, Somayyeh Jafarali Jassbi, Keivan Navi

Abstract:

The Residue Number System (RNS) is a modular representation and has proved to be an instrumental tool in many digital signal processing (DSP) applications which require high-speed computations. RNS is an integer, non-weighted number system; it can support parallel, carry-free, high-speed and low-power arithmetic. A very interesting correspondence exists between the concepts of Multiple Valued Logic (MVL) and residue number arithmetic. If the number of levels used to represent MVL signals is chosen to be consistent with the moduli which create the finite rings in the RNS, MVL becomes a very natural representation for the RNS. There are two concerns related to the application of this number system: reaching the highest possible speed and the largest dynamic range. There is a conflict when one wants to resolve both problems, since augmenting the dynamic range reduces the speed at the same time. To achieve the best performance, a method named the “One-Hot Residue Number System” (OHRNS) is considered; in this implementation the propagation delay is equal to only one transistor delay. The problem with this method is the huge increase in the number of transistors, which grows in the order of m². In real applications this is practically impossible. In this paper, by combining Multiple Valued Logic and the One-Hot Residue Number System, we present a new method to resolve both of these problems, and we present a novel design of an OHRNS-based adder circuit. This circuit is usable for Multiple Valued Logic moduli; in comparison to other RNS designs, it has a considerably improved number of transistors and power consumption.
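To make the RNS idea above concrete, here is a minimal Python sketch (moduli chosen arbitrarily) that represents integers by their residues, adds them channel by channel without carries between channels, and recovers the result with the Chinese Remainder Theorem; the one-hot encoding and transistor-level aspects are not modelled.

```python
# Residue Number System sketch: pairwise coprime moduli, carry-free channel-wise addition,
# and Chinese Remainder Theorem reconstruction.
from math import prod

MODULI = (7, 11, 13)                      # dynamic range = 7 * 11 * 13 = 1001

def to_rns(x):
    return tuple(x % m for m in MODULI)

def rns_add(a, b):
    # Each residue channel adds independently -- no carries between channels
    return tuple((ai + bi) % m for ai, bi, m in zip(a, b, MODULI))

def from_rns(residues):
    # Chinese Remainder Theorem reconstruction
    M = prod(MODULI)
    x = 0
    for r, m in zip(residues, MODULI):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)      # modular inverse of Mi mod m
    return x % M

a, b = 123, 456
print(from_rns(rns_add(to_rns(a), to_rns(b))))   # 579
```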

Keywords: Computer Arithmetic, Residue Number System, Multiple Valued Logic, One-Hot, VLSI.

142 Web-Based Cognitive Writing Instruction (WeCWI): A Theoretical-and-Pedagogical e-Framework for Language Development

Authors: Boon Yih Mah

Abstract:

Web-based Cognitive Writing Instruction (WeCWI)’s contribution towards language development can be divided into linguistic and non-linguistic perspectives. From the linguistic perspective, WeCWI focuses on literacy and language discoveries, while cognitive and psychological discoveries are the hubs of the non-linguistic perspective. In the linguistic perspective, WeCWI draws attention to free reading and enterprises, which are supported by language acquisition theories. Besides, the adoption of the process genre approach as a hybrid guided writing approach fosters literacy development. Literacy and language developments are interconnected in the communication process; hence, WeCWI encourages meaningful discussion based on the interactionist theory that involves input, negotiation, output, and interactional feedback. Rooted in the e-learning interaction-based model, WeCWI promotes online discussion via synchronous and asynchronous communications, which allows interactions to happen among the learners, the instructor, and digital content. From the non-linguistic perspective, WeCWI highlights the contribution of reading, discussion, and writing towards cognitive development. Based on the inquiry models, learners’ critical thinking is fostered during the information exploration process through interaction and questioning. Lastly, to lower writing anxiety, WeCWI develops the instructional tool with supportive features to facilitate the writing process. To bring a positive user experience to the learner, WeCWI aims to create the instructional tool with different interface designs based on two different types of perceptual learning styles.

Keywords: WeCWI, literacy discovery, language discovery, cognitive discovery, psychological discovery.

141 Exploration of Sweet Potato Cultivar Markets Availability in North West Province, South Africa

Authors: V. M. Mmbengwa, J. R. M. Mabuso, C. P. Du Plooy, S. Laurrie, H. D. van Schalkwyk

Abstract:

Sweet potato products are necessary for the provision of essential nutrients in every household, regardless of poverty status. Their consumption appears to be highly influenced by socio-economic factors, such as malnutrition, food insecurity and unemployment. Therefore, market availability is crucial for these cultivars to resolve some of the socio-economic factors. The aim of the study was to investigate the market availability of sweet potato cultivars in the North West Province. In this study, both qualitative and quantitative research methodologies were used. The qualitative methodology was used to explain the quantitative outcomes of the variables, while the quantitative results were used to test the hypothesis. The study used SPSS software to analyse the data. Crosstabulation and Chi-square statistics were used to obtain the descriptive and inferential analyses, respectively. The study found that the Blesbok cultivar dominates the markets of the North West Province, with the Monate cultivar dominating in the Bojanala Platinum (75%) and Dr Ruth Segomotsi Mompati (25%) districts. It was also found that a unit increase in the supply of sweet potato cultivars in both local and district municipal markets is accompanied by a reduced demand of 28% and 33% at district and local markets, respectively. All these results were found to be significant at p<0.05. The results further revealed that in four out of nine local municipality markets of the North West Province, the Blesbok cultivar seems to be the only cultivar available. It can be concluded that Blesbok, relative to other cultivars, is the most commercialised sweet potato variety and that consumers across this Province are highly aware of it. For other cultivars to assume market prominence in this Province, a well-designed marketing campaign for creating awareness may be required. This campaign may be based on the nutritional advantages of different cultivars, of which Blesbok is relatively inferior compared to orange-fleshed sweet potato varieties.
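For readers unfamiliar with the crosstabulation and Chi-square step mentioned above, the Python sketch below runs the same kind of test with scipy.stats.chi2_contingency on a hypothetical district-by-cultivar count table; the counts are invented for illustration and are not the study's data.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Rows: districts, columns: cultivars (counts of market observations -- illustrative only)
table = np.array([[45, 15],
                  [30,  5],
                  [50, 10],
                  [25, 20]])

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
if p < 0.05:
    print("association between district and cultivar availability is significant at p < 0.05")
```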

Keywords: Cultivar, malnutrition, markets, sweet potato.

140 Beam Coding with Orthogonal Complementary Golay Codes for Signal to Noise Ratio Improvement in Ultrasound Mammography

Authors: Y. Kumru, K. Enhos, H. Köymen

Abstract:

In this paper, we report experimental results on using complementary Golay coded signals at 7.5 MHz to detect breast microcalcifications of 50 µm size. Simulations using complementary Golay coded signals show perfect consistency with the experimental results, confirming the improved signal to noise ratio for complementary Golay coded signals. For improving the success in detecting the microcalcifications, orthogonal complementary Golay sequences, whose low cross-correlation minimizes interference, are used as coded signals and compared to a tone burst pulse of equal energy in terms of resolution under weak signal conditions. The measurements are conducted using an experimental ultrasound research scanner, the Digital Phased Array System (DiPhAS) having 256 channels, and a phased array transducer with 7.5 MHz center frequency, and the results obtained through experiments are validated by the Field-II simulation software. In addition, to investigate the superiority of coded signals in terms of resolution, a multipurpose tissue-equivalent phantom containing a series of monofilament nylon targets, 240 µm in diameter, and cyst-like objects with attenuation of 0.5 dB/[MHz x cm] is used in the experiments. We obtained ultrasound images of the monofilament nylon targets for the evaluation of resolution. Simulation and experimental results show that it is possible to differentiate closely positioned small targets with increased success by using coded excitation in very weak signal conditions.
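The key property behind complementary Golay coding, that the sidelobes of the two sequences' autocorrelations cancel exactly, can be checked in a few lines. The Python sketch below builds a complementary pair by the standard recursive construction and verifies that the summed autocorrelation is zero at every non-zero lag; it illustrates the coding principle only, not the 7.5 MHz imaging chain.

```python
import numpy as np

def golay_pair(order):
    # Recursive construction: (a, b) -> (a | b, a | -b), starting from ([1], [1])
    a, b = np.array([1]), np.array([1])
    for _ in range(order):
        a, b = np.concatenate([a, b]), np.concatenate([a, -b])
    return a, b

def autocorr(x):
    # Aperiodic autocorrelation at non-negative lags
    n = len(x)
    return np.array([np.sum(x[: n - k] * x[k:]) for k in range(n)])

a, b = golay_pair(3)            # length-8 complementary pair
total = autocorr(a) + autocorr(b)
print(a, b)
print(total)                    # 2N at zero lag, 0 at every other lag
```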

Keywords: Coded excitation, complementary Golay codes, DiPhAS, medical ultrasound.

139 Arriving at an Optimum Value of Tolerance Factor for Compressing Medical Images

Authors: Sumathi Poobal, G. Ravindran

Abstract:

Medical imaging takes advantage of digital technology in imaging and teleradiology. In teleradiology systems a large amount of data is acquired, stored and transmitted. A major technology that may help to solve the problems associated with the massive data storage and data transfer capacity is data compression and decompression. There are many methods of image compression available. They are classified as lossless and lossy compression methods. In a lossy compression method the decompressed image contains some distortion. Fractal image compression (FIC) is a lossy compression method. In fractal image compression an image is coded as a set of contractive transformations in a complete metric space. The set of contractive transformations is guaranteed to produce an approximation to the original image. In this paper FIC is achieved by PIFS using quadtree partitioning. PIFS is applied to different images such as ultrasound, CT scan, angiogram, X-ray and mammograms. In each modality approximately twenty images are considered, and the average values of compression ratio and PSNR are computed. In this method of fractal encoding, the tolerance factor Tmax is varied from 1 to 10, keeping the other standard parameters constant. For all modalities of images the compression ratio and Peak Signal to Noise Ratio (PSNR) are computed and studied. The quality of the decompressed image is assessed by the PSNR values. From the results it is observed that the compression ratio increases with the tolerance factor and the mammogram has the highest compression ratio. The quality of the image is not degraded up to an optimum value of the tolerance factor, Tmax, equal to 8, because of the properties of fractal compression.
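Since the comparison above rests on PSNR and compression ratio, here is a short Python sketch of how those two figures of merit are computed; the arrays stand in for an original image and its fractal-decoded reconstruction and are synthetic, not the medical images used in the paper.

```python
import numpy as np

def psnr(original, decompressed, peak=255.0):
    # Peak Signal to Noise Ratio between an original and a decompressed image
    mse = np.mean((original.astype(float) - decompressed.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)

def compression_ratio(original_bytes, compressed_bytes):
    return original_bytes / compressed_bytes

# Hypothetical 8-bit image and a slightly distorted reconstruction
rng = np.random.default_rng(0)
original = rng.integers(0, 256, size=(256, 256), dtype=np.uint8)
decompressed = np.clip(original.astype(int) + rng.integers(-3, 4, size=original.shape), 0, 255)

print(round(psnr(original, decompressed), 2), "dB")
print(compression_ratio(256 * 256, 8192), ": 1")   # e.g. 8:1 if the code occupies 8 kB
```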

Keywords: Fractal image compression, IFS, PIFS, PSNR, Quadtree partitioning.

138 Mapping of Alteration Zones in Mineral Rich Belt of South-East Rajasthan Using Remote Sensing Techniques

Authors: Mrinmoy Dhara, Vivek K. Sengar, Shovan L. Chattoraj, Soumiya Bhattacharjee

Abstract:

Remote sensing techniques have emerged as an asset for various geological studies. Satellite images obtained by different sensors contain plenty of information related to the terrain. Digital image processing further helps in customized ways for the prospecting of minerals. In this study, an attempt has been made to map the hydrothermally altered zones using multispectral and hyperspectral datasets of South East Rajasthan. Advanced Space-borne Thermal Emission and Reflection Radiometer (ASTER) and Hyperion (Level 1R) datasets have been processed to generate different Band Ratio Composites (BRCs). For this study, ASTER-derived BRCs were generated to delineate the alteration zones, gossans, abundant clays and host rocks. ASTER and Hyperion images were further processed to extract mineral end members, and classified mineral maps have been produced using the Spectral Angle Mapper (SAM) method. Results were validated with the geological map of the area, which shows positive agreement with the image processing outputs. Thus, this study concludes that band ratios and image processing in combination play a significant role in the demarcation of alteration zones, which may provide pathfinders for mineral prospecting studies.
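The SAM classification mentioned above reduces, per pixel, to the angle between the pixel spectrum and a reference endmember spectrum. The Python sketch below computes that angle; the six-band reflectance values are invented for illustration and do not correspond to actual ASTER or Hyperion measurements.

```python
import numpy as np

def spectral_angle(pixel, reference):
    # Spectral Angle Mapper: angle (radians) between a pixel spectrum and a reference endmember
    cos_theta = np.dot(pixel, reference) / (np.linalg.norm(pixel) * np.linalg.norm(reference))
    return np.arccos(np.clip(cos_theta, -1.0, 1.0))

# Hypothetical 6-band reflectance spectra (values are illustrative only)
endmember = np.array([0.31, 0.29, 0.35, 0.22, 0.18, 0.20])   # assumed alteration-mineral spectrum
pixel     = np.array([0.33, 0.30, 0.34, 0.24, 0.19, 0.21])

angle = spectral_angle(pixel, endmember)
print(f"spectral angle = {np.degrees(angle):.2f} deg")        # small angle -> likely match
```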

Keywords: Advanced Space-borne Thermal Emission and Reflection Radiometer (ASTER), Hyperion, band ratios, alteration zones, spectral angle mapper.

137 Performance Analysis of Chrominance Red and Chrominance Blue in JPEG

Authors: Mamta Garg

Abstract:

While compressing text files is useful, compressing still image files is almost a necessity. A typical image takes up much more storage than a typical text message, and without compression images would be extremely clumsy to store and distribute. The amount of information required to store pictures on modern computers is quite large in relation to the amount of bandwidth commonly available to transmit them over the Internet and applications. Image compression addresses the problem of reducing the amount of data required to represent a digital image. The performance of any image compression method can be evaluated by measuring the root-mean-square error and the peak signal to noise ratio. The method of image compression analyzed in this paper is based on the lossy JPEG image compression technique, the most popular compression technique for color images. JPEG compression is able to greatly reduce file size with minimal image degradation by throwing away the least “important” information. In JPEG, both color components are downsampled simultaneously, but in this paper we compare the results when the compression is done by downsampling a single chroma component. We demonstrate that a higher compression ratio is achieved when the chrominance blue is downsampled, as compared to downsampling the chrominance red, in JPEG compression, but that the peak signal to noise ratio is higher when the chrominance red is downsampled, as compared to downsampling the chrominance blue. In particular we use hats.jpg as a demonstration of JPEG compression using a low pass filter and demonstrate that the image is compressed with barely any visual differences with both methods.
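The comparison above hinges on subsampling one chroma plane and measuring the resulting PSNR. The Python sketch below shows that measurement procedure on synthetic Cb and Cr planes; with real image data (e.g. the chroma planes of hats.jpg) the two PSNR values would differ as the paper describes, whereas the random planes used here only illustrate the computation.

```python
import numpy as np

def downsample_upsample(channel):
    # 2x2 chroma subsampling followed by nearest-neighbour reconstruction
    small = channel[::2, ::2]
    return np.repeat(np.repeat(small, 2, axis=0), 2, axis=1)

def psnr(a, b, peak=255.0):
    mse = np.mean((a.astype(float) - b.astype(float)) ** 2)
    return 10 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(1)
cb = rng.integers(0, 256, (128, 128)).astype(float)   # hypothetical Cb plane
cr = rng.integers(0, 256, (128, 128)).astype(float)   # hypothetical Cr plane

print("PSNR after Cb subsampling:", round(psnr(cb, downsample_upsample(cb)), 2), "dB")
print("PSNR after Cr subsampling:", round(psnr(cr, downsample_upsample(cr)), 2), "dB")
```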

Keywords: JPEG, Discrete Cosine Transform, Quantization, Color Space Conversion, Image Compression, Peak Signal to Noise Ratio & Compression Ratio.

136 Optimal Image Representation for Linear Canonical Transform Multiplexing

Authors: Navdeep Goel, Salvador Gabarda

Abstract:

Digital images are widely used in computer applications. Storing or transmitting uncompressed images requires considerable storage capacity and transmission bandwidth. Image compression is a means to perform transmission or storage of visual data in the most economical way. This paper explains how images can be encoded to be transmitted in a multiplexing time-frequency domain channel. Multiplexing involves packing signals together whose representations are compact in the working domain. In order to optimize transmission resources, each 4 × 4 pixel block of the image is transformed, by a suitable polynomial approximation, into a minimal number of coefficients. Using fewer than 4 × 4 coefficients per block spares a significant amount of transmitted information, but some information is lost. Different approximations for the image transformation have been evaluated: polynomial representation (Vandermonde matrix), least squares + gradient descent, 1-D Chebyshev polynomials, 2-D Chebyshev polynomials and singular value decomposition (SVD). Results have been compared in terms of nominal compression rate (NCR), compression ratio (CR) and peak signal-to-noise ratio (PSNR), in order to minimize the error function defined as the difference between the original pixel gray levels and the approximated polynomial output. Polynomial coefficients have been later encoded and handled for generating chirps at a target rate of about two chirps per 4 × 4 pixel block and then submitted to a transmission multiplexing operation in the time-frequency domain.
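As an illustration of approximating a 4 × 4 pixel block with fewer coefficients, the Python sketch below fits a six-term 2-D polynomial to a block by least squares and reports the reconstruction error; the basis choice and pixel values are hypothetical and are not taken from the paper.

```python
import numpy as np

# Fit a six-coefficient 2-D polynomial to a 4x4 pixel block by least squares,
# keeping 6 coefficients instead of 16 pixel values (an illustrative basis choice).
block = np.array([[ 52,  55,  61,  66],
                  [ 70,  61,  64,  73],
                  [ 63,  59,  55,  90],
                  [ 67,  61,  68, 104]], dtype=float)

x, y = np.meshgrid(np.arange(4), np.arange(4))
X, Y, Z = x.ravel(), y.ravel(), block.ravel()
basis = np.column_stack([np.ones_like(X), X, Y, X * Y, X**2, Y**2])   # 6 basis functions

coeffs, *_ = np.linalg.lstsq(basis, Z, rcond=None)
reconstruction = (basis @ coeffs).reshape(4, 4)
rmse = np.sqrt(np.mean((block - reconstruction) ** 2))

print("coefficients:", np.round(coeffs, 3))
print("RMSE:", round(rmse, 3))   # the information lost by dropping 10 of 16 values
```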

Keywords: Chirp signals, Image multiplexing, Image transformation, Linear canonical transform, Polynomial approximation.

135 Design and Modeling of Human Middle Ear for Harmonic Response Analysis

Authors: Shende Suraj Balu, A. B. Deoghare, K. M. Pandey

Abstract:

The human middle ear (ME) is a delicate and vital organ. It has a complex structure that performs various functions such as receiving sound pressure, producing vibrations of the eardrum and propagating them to the inner ear. It consists of the Tympanic Membrane (TM), three auditory ossicles, various ligament structures and muscles. Incidents such as traumata, infections, ossification of ossicular structures and other pathologies may damage the ME organs. These conditions can be surgically treated by employing a prosthesis. However, the suitability of the prosthesis needs to be examined in advance, prior to the surgery. A few decades ago, this issue was addressed and analyzed by developing an equivalent representation either in the form of a spring-mass system, an electrical system using an R-L-C circuit, or an approximated CAD model. But nowadays a three-dimensional ME model can be constructed from micro X-Ray Computed Tomography (μCT) scan data. Moreover, patient-specific concerns pertaining to the disease can be examined well in advance. The current research work emphasizes developing the ME model from stacks of μCT images which are used as input to the MIMICS Research 19.0 (Materialise Interactive Medical Image Control System) software. A stack of CT images is converted into a geometrical surface model to build an accurate morphology of the ME. The work is further extended to understand the dynamic behaviour of the harmonic response of the stapes footplate and umbo for different sound pressure levels applied at the lateral side of the eardrum using a finite element approach. The pathological condition cholesteatoma of the ME is investigated to obtain the peak-to-peak displacement of the stapes footplate and umbo. Apart from this condition, other pathologies, mainly changes in the stiffness of the stapedial ligament, TM thickness, and ossicular chain separation and fixation, are also explored. The developed model of the ME for pathologies is validated by comparing with the results available in the literature and also with the results of a normal ME to calculate the percentage loss in hearing capability.

Keywords: Computed tomography, human middle ear, harmonic response, pathologies, tympanic membrane.

134 Effect of a Gravel Bed Flocculator on the Efficiency of a Low Cost Water Treatment Plants

Authors: Alaa Hussein Wadi

Abstract:

The principal objective of a water treatment plant is to produce water that satisfies a set of drinking water quality standards at a reasonable price to the consumers. The gravel-bed flocculator provides a simple and inexpensive design for flocculation in small water treatment plants (less than 5000 m3/day capacity). The packed bed of gravel provides ideal conditions for the formation of compact settleable flocs because of the continuous recontact provided by the sinuous flow of water through the interstices formed by the gravel. The field data obtained from the operation of the water supply treatment unit cover the physical, chemical and biological qualities of the raw and settled water. The experiments were carried out with the aim of assessing the efficiency of the gravel filter in removing turbidity and pathogenic bacteria from the raw water. The water treatment plant, which was constructed for the treatment of river water, was in principle a rapid sand filter. The results show that the average turbidity of the settled water was 4.83 NTU with a standard deviation of 2.893 NTU. This indicated that the removal efficiency of the sedimentation tank (gravel filter) was about 67.8%. The pH values fluctuated between 7.75 and 8.15, indicating the alkaline nature of the raw water of the river Shatt Al-Hilla, as expected. Raw water pH is depressed slightly following alum coagulation. The pH of the settled water ranged from 7.75 to a maximum of 8.05. The bacteriological tests carried out on the water samples were: the total coliform test, the E-coli test, and the plate count test. In each test the procedure used was as outlined in the Standard Methods for the Examination of Water and Wastewater (APHA, AWWA, and WPCF, 1985). The gravel filter exhibited low performance in removing the bacterial load: the percentage bacterial removal was maximum for the total plate count (19%) and minimum for total coliform (16.82%).

Keywords: Gravel bed flocculator, turbidity, total coliform.

133 A Survey of Field Programmable Gate Array-Based Convolutional Neural Network Accelerators

Authors: Wei Zhang

Abstract:

With the rapid development of deep learning, neural network and deep learning algorithms play a significant role in various practical applications. Due to their high accuracy and good performance, Convolutional Neural Networks (CNNs) in particular have become a research hot spot in the past few years. However, the size of the networks becomes increasingly large due to the demands of practical applications, which poses a significant challenge to constructing high-performance implementations of deep learning neural networks. Meanwhile, many of these application scenarios also have strict requirements on the performance and power consumption of hardware devices. Therefore, it is particularly critical to choose a suitable computing platform for hardware acceleration of CNNs. This article surveys recent advances in Field Programmable Gate Array (FPGA)-based acceleration of CNNs. Various designs and implementations of the accelerator based on FPGA under different devices and network models are reviewed, and counterparts based on Graphics Processing Units (GPUs), Application Specific Integrated Circuits (ASICs) and Digital Signal Processors (DSPs) are compared to present our own critical analysis and comments. Finally, we give a discussion on different perspectives of these acceleration and optimization methods on FPGA platforms to further explore the opportunities and challenges for future research, and we give a prospect for the future development of FPGA-based accelerators.

Keywords: Deep learning, field programmable gate array, FPGA, hardware acceleration, convolutional neural networks, CNN.

132 Multiaxial Fatigue Analysis of a High Performance Nickel-Based Superalloy

Authors: P. Selva, B. Lorrain, J. Alexis, A. Seror, A. Longuet, C. Mary, F. Denard

Abstract:

Over the past four decades, the fatigue behavior of nickel-based alloys has been widely studied. However, in recent years, significant advances in the fabrication process leading to grain size reduction have been made in order to improve fatigue properties of aircraft turbine discs. Indeed, a change in particle size affects the initiation mode of fatigue cracks as well as the fatigue life of the material. The present study aims to investigate the fatigue behavior of a newly developed nickel-based superalloy under biaxial-planar loading. Low Cycle Fatigue (LCF) tests are performed at different stress ratios so as to study the influence of the multiaxial stress state on the fatigue life of the material. Full-field displacement and strain measurements as well as crack initiation detection are obtained using Digital Image Correlation (DIC) techniques. The aim of this presentation is first to provide an in-depth description of both the experimental set-up and protocol: the multiaxial testing machine, the specific design of the cruciform specimen and performances of the DIC code are introduced. Second, results for sixteen specimens related to different load ratios are presented. Crack detection, strain amplitude and number of cycles to crack initiation vs. triaxial stress ratio for each loading case are given. Third, from fractographic investigations by scanning electron microscopy it is found that the mechanism of fatigue crack initiation does not depend on the triaxial stress ratio and that most fatigue cracks initiate from subsurface carbides.

Keywords: Cruciform specimen, multiaxial fatigue, Nickel-based superalloy.

131 The Effect of CPU Location in Total Immersion of Microelectronics

Authors: A. Almaneea, N. Kapur, J. L. Summers, H. M. Thompson

Abstract:

Meeting the growth in demand for digital services such as social media, telecommunications, and business and cloud services requires large-scale data centres, which has led to an increase in their end-use energy demand. Generally, over 30% of data centre power is consumed by the necessary cooling overhead. Thus, energy consumption can be reduced by improving the cooling efficiency. Air and liquid can both be used as cooling media for the data centre. Traditional data centre cooling systems use air; however, liquid is recognised as a promising method that can handle the more densely packed data centres. Liquid cooling can be classified into three methods: rack heat exchanger, on-chip heat exchanger and full immersion of the microelectronics. This study quantifies the improvements in heat transfer specifically for the case of immersed microelectronics by varying the CPU and heat sink location. Immersion of the server is achieved by filling the gap between the microelectronics and a water jacket with a dielectric liquid, which convects the heat from the CPU to the water jacket on the opposite side. Heat transfer is governed by two physical mechanisms: natural convection for the fixed enclosure filled with dielectric liquid and forced convection for the water that is pumped through the water jacket. The model in this study is validated with published numerical and experimental work and shows good agreement with previous work. The results show that the heat transfer performance and Nusselt number (Nu) are improved by 89% by placing the CPU and heat sink on the bottom of the microelectronics enclosure.

Keywords: CPU location, data centre cooling, heat sink in enclosures, Immersed microelectronics, turbulent natural convection in enclosures.

130 Detecting Fake News: A Natural Language Processing, Reinforcement Learning, and Blockchain Approach

Authors: Ashly Joseph, Jithu Paulose

Abstract:

In an era where misleading information may quickly circulate on digital news channels, it is crucial to have efficient and trustworthy methods to detect and reduce the impact of misinformation. This research proposes an innovative framework that combines Natural Language Processing (NLP), Reinforcement Learning (RL), and Blockchain technologies to precisely detect and minimize the spread of false information in news articles on social media. The framework starts by gathering a variety of news items from different social media sites and performing preprocessing on the data to ensure its quality and uniformity. NLP methods are utilized to extract complete linguistic and semantic characteristics, effectively capturing the subtleties and contextual aspects of the language used. These features are utilized as input for a RL model. This model acquires the most effective tactics for detecting and mitigating the impact of false material by modeling the intricate dynamics of user engagements and incentives on social media platforms. The integration of blockchain technology establishes a decentralized and transparent method for storing and verifying the accuracy of information. The Blockchain component guarantees the unchangeability and safety of verified news records, while encouraging user engagement for detecting and fighting false information through an incentive system based on tokens. The suggested framework seeks to provide a thorough and resilient solution to the problems presented by misinformation in social media articles.
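The blockchain component described above amounts to an append-only, hash-linked record of verification decisions. The Python sketch below (standard library only) shows that linkage in miniature; the field names and verdicts are hypothetical, and the consensus, token-incentive and NLP/RL parts of the framework are not modelled.

```python
import hashlib, json, time

def make_block(prev_hash, article_id, verdict, verifier):
    # One hash-linked record of a verification decision (field names are illustrative only)
    payload = {"prev": prev_hash, "article": article_id, "verdict": verdict,
               "verifier": verifier, "ts": time.time()}
    digest = hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()
    return {**payload, "hash": digest}

chain = [make_block("0" * 64, "post-001", "verified", "node-A")]
chain.append(make_block(chain[-1]["hash"], "post-002", "flagged-false", "node-B"))

# Tampering with an earlier record would break every later link
assert chain[1]["prev"] == chain[0]["hash"]
print(chain[1]["hash"][:16], "...")
```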

Keywords: Natural Language Processing, Reinforcement Learning, Blockchain, fake news mitigation, misinformation detection.

129 Assessment of Breeding Soundness by Comparative Radiography and Ultrasonography of Rabbit Testes

Authors: Adenike O. Olatunji-Akioye, Emmanual B Farayola

Abstract:

In order to improve the recommended daily intake of animal protein among Nigerians, there is an upsurge in the breeding of hitherto shunned food animals, one of which is the rabbit. Radiography and ultrasonography are tools for diagnosing disease and evaluating the anatomical architecture of parts of the body non-invasively. As the rabbit becomes a more important food animal, improved breeding requires that the best of the species form the breeding stock; this usually depends on breeding soundness, which may be evaluated by assessment of the male reproductive organs with these tools. Four intact male rabbits weighing between 1.2 and 1.5 kg were acquired and acclimatized for 2 weeks. Dorsoventral views of the testes were acquired using a digital radiographic machine, and a 5 MHz portable ultrasound scanner was used to acquire images of the testes in longitudinal, sagittal and transverse planes. The radiographic images acquired revealed soft tissue images of the testes in all rabbits. The testes lie in individual scrotal sacs on both sides of the midline at the level of the caudal vertebrae and thus are superimposed by the caudal vertebrae and the caudal limits of the pelvic girdle. The ultrasonographic images revealed mostly homogeneously hypoechogenic testes and a hyperechogenic mediastinum testis. The dorsal and ventral poles of the testes were heterogeneously hypoechogenic and correspond to the epididymis and spermatic cord. The rabbit is unique in its ability to retract the testes, particularly when stressed, so careful, stress-free handling during the procedures is of paramount importance. The imaging of rabbit testes can be safely done using both imaging methods, but ultrasonography is the better method for assessment and evaluation of soundness for breeding.

Keywords: Breeding soundness, rabbits, radiography, ultrasonography.

128 Effect of High Injection Pressure on Mixture Formation, Burning Process and Combustion Characteristics in Diesel Combustion

Authors: Amir Khalid, B. Manshoor

Abstract:

Mixture formation prior to the ignition process is a key element in diesel combustion. Parametric studies of mixture formation and the ignition process under various injection parameters have received considerable attention for their potential to reduce emissions. The purpose of this study is to clarify the effects of injection pressure on mixture formation and ignition, especially during the ignition delay period, which significantly influences the subsequent combustion process and exhaust emissions. This study investigated the effects of injection pressure on diesel combustion fundamentally using a rapid compression machine. The detailed behavior of mixture formation during the ignition delay period was investigated using a schlieren photography system with a high-speed camera. This method can capture spray evaporation, spray interference, mixture formation and flame development clearly with real images. The ignition process and flame development were investigated by a direct photography method using a light-sensitive high-speed color digital video camera. The injection pressure and air motion are important variables that strongly affect fuel evaporation and the endothermic and pyrolysis processes during the ignition delay. An increased injection pressure makes spray tip penetration longer and promotes a greater amount of fuel-air mixing during the ignition delay. A greater quantity of fuel prepared during the ignition delay period thus promotes more rapid heat release.

Keywords: Mixture Formation, Diesel Combustion, Ignition Process, Spray, Rapid Compression Machine.

127 Parametric Approach for Reserve Liability Estimate in Mortgage Insurance

Authors: Rajinder Singh, Ram Valluru

Abstract:

The Chain Ladder (CL) method, the Expected Loss Ratio (ELR) method and the Bornhuetter-Ferguson (BF) method, in addition to more complex transition-rate modeling, are commonly used actuarial reserving methods in general insurance. There is limited published research about their relative performance in the context of Mortgage Insurance (MI). In our experience, these traditional techniques pose unique challenges and do not provide stable claim estimates for medium to longer term liabilities. The relative strengths and weaknesses among the various alternative approaches revolve around: stability in the recent loss development pattern, sufficiency and reliability of loss development data, and agreement/disagreement between reported losses to date and the ultimate loss estimate. The CL method results in volatile reserve estimates, especially for accident periods with little development experience. The ELR method breaks down especially when ultimate loss ratios are not stable and predictable. While the BF method provides a good tradeoff between the loss development approach (CL) and ELR, it generates claim development and ultimate reserves that are disconnected from the ever-to-date (ETD) development experience for some accident years that have more development experience. Further, BF is based on a subjective a priori assumption. The fundamental shortcoming of these methods is their inability to model exogenous factors, like the economy, which impact various cohorts at the same chronological time but at staggered points along their lifetime development. This paper proposes an alternative approach of parametrizing the loss development curve and using logistic regression to generate the ultimate loss estimate for each homogeneous group (accident year or delinquency period). The methodology was tested on an actual MI claim development dataset where various cohorts followed a sigmoidal trend, but levels varied substantially depending upon the economic and operational conditions during a development period spanning many years. The proposed approach provides the ability to indirectly incorporate such exogenous factors and produce more stable loss forecasts for reserving purposes as compared to the traditional CL and BF methods.
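To illustrate the idea of parametrizing a sigmoidal loss development curve, the Python sketch below fits a logistic curve to a made-up cumulative loss column with scipy's nonlinear least squares and reads the ultimate off the fitted asymptote. It is a simplified stand-in for the authors' logistic-regression procedure, and the loss figures are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(t, ult, k, t0):
    # Parametric loss-development curve: cumulative losses approach an ultimate level `ult`
    return ult / (1.0 + np.exp(-k * (t - t0)))

# Hypothetical cumulative incurred losses for one cohort at development periods 1..8 (in $000s)
dev = np.arange(1, 9)
cum_loss = np.array([120, 310, 610, 920, 1150, 1280, 1340, 1365], dtype=float)

params, _ = curve_fit(sigmoid, dev, cum_loss, p0=[1400, 1.0, 4.0], maxfev=10000)
ultimate, growth, midpoint = params
print(f"fitted ultimate = {ultimate:.0f}, indicated reserve = {ultimate - cum_loss[-1]:.0f}")
```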

Keywords: Actuarial loss reserving techniques, logistic regression, parametric function, volatility.

126 Entrepreneur Universal Education System: Future Evolution

Authors: Khaled Elbehiery, Hussam Elbehiery

Abstract:

The success of education depends on evolution and adaptation. While the traditional system has worked before, one type of education that has evolved with the digital age is virtual education, which has improved efficiency in today’s learning environments. Virtual learning has indeed proved its efficiency in overcoming the drawbacks of the physical environment such as time, facilities, location, etc., but despite what it has accomplished, the educational system overall is not yet adequate as a productive system. Earning a degree is no longer enough to obtain a career job; on its own, it is simply missing the skills and creativity. There are always two sides to a coin: a college degree and a specialized certificate each have their own merits, but having both can put you on a successful IT career path. For many job-seeking individuals across the world to have a clear, meaningful goal for work and education and to contribute positively to the community, productive correlation and cooperation among employers and universities, alongside individual technical skills, is a must for generations to come. Fortunately, the proposed research, the “Entrepreneur Universal Education System”, is an evolution to meet the needs of both employers and students, and with it gaining vital, real-world experience in the chosen field is easier than ever. The new vision is to empower education to serve organizations’ needs, with improving the world as its primary goal, by adopting universal skills of effective thinking, effective action and effective relationships, preparing students through real-world accomplishment, and encouraging them to better serve their organizations and their communities faster and more efficiently.

Keywords: Virtual education, academic degree, certificates, internship, amazon web services, Microsoft Azure, Google cloud platform, hybrid models.

125 A Data Hiding Model with High Security Features Combining Finite State Machines and PMM method

Authors: Souvik Bhattacharyya, Gautam Sanyal

Abstract:

Recent years have witnessed the rapid development of the Internet and telecommunication techniques. Information security is becoming more and more important. Applications such as covert communication, copyright protection, etc., stimulate research into information hiding techniques. Traditionally, encryption is used to realize communication security. However, important information is not protected once decoded. Steganography is the art and science of communicating in a way which hides the existence of the communication. Important information is first hidden in host data, such as a digital image, video or audio, and then transmitted secretly to the receiver. In this paper a data hiding model with high security features, combining cryptography based on a finite-state sequential machine with an image-based steganography technique, is proposed for communicating information more securely between two locations. The authors incorporated the idea of a secret key for authentication at both ends in order to achieve a high level of security. Before the embedding operation, the secret information is encrypted with the help of the finite-state sequential machine and segmented into different parts. The cover image is also segmented into different objects through normalized cut. Each part of the encoded secret information is embedded with the help of a novel image steganographic method (PMM) on different cuts of the cover image to form different stego objects. Finally, the stego image is formed by combining the different stego objects and transmitted to the receiver side. At the receiving end, the corresponding reverse processes run to get back the original secret message.
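As a toy illustration of the finite-state-machine encryption step described above, the Python sketch below runs a secret bit stream through a two-state Mealy machine, whose output depends on both the current state and the current input bit. The transition and output tables are invented for illustration; the authors' actual machine and the PMM embedding step are not reproduced.

```python
# Minimal Mealy-machine sketch of the "encrypt with a finite-state sequential machine" step.
# The state-transition and output tables here are illustrative, not the authors' design.
TRANSITION = {("s0", 0): "s0", ("s0", 1): "s1",
              ("s1", 0): "s1", ("s1", 1): "s0"}
OUTPUT     = {("s0", 0): 0,    ("s0", 1): 1,
              ("s1", 0): 1,    ("s1", 1): 0}

def mealy_transform(bits, start_state="s0"):
    # Output depends on the current state and the current input bit (Mealy behaviour)
    state, out = start_state, []
    for b in bits:
        out.append(OUTPUT[(state, b)])
        state = TRANSITION[(state, b)]
    return out

secret_bits = [1, 0, 1, 1, 0, 0, 1]
encoded = mealy_transform(secret_bits)
print(encoded)   # this encoded stream would then be split and embedded into cover-image cuts
```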

Keywords: Cover Image, Finite state sequential machine, Mealy machine, Pixel Mapping Method (PMM), Stego Image, NCUT.

124 Laser Registration and Supervisory Control of neuroArm Robotic Surgical System

Authors: Hamidreza Hoshyarmanesh, Hosein Madieh, Sanju Lama, Yaser Maddahi, Garnette R. Sutherland, Kourosh Zareinia

Abstract:

This paper illustrates the concept of an algorithm to register specified markers on the neuroArm surgical manipulators, an image-guided MR-compatible tele-operated robot for microsurgery and stereotaxy. Two range-finding algorithms, namely time-of-flight and phase-shift, are evaluated for registration and supervisory control. The time-of-flight approach is implemented in a semi-field experiment to determine the precise position of a tiny retro-reflective moving object. The moving object simulates a surgical tool tip. The tool is a target that would be connected to the neuroArm end-effector during surgery inside the magnet bore of the MR imaging system. In order to apply the time-of-flight approach, a 905-nm pulsed laser diode and an avalanche photodiode are utilized as the transmitter and receiver, respectively. For the experiment, a high-frequency time-to-digital converter was designed using a field-programmable gate array. In the phase-shift approach, a continuous green laser beam with a wavelength of 530 nm was used as the transmitter. Results showed that a positioning error of 0.1 mm occurred when the scanner-target distance was set in the range of 2.5 to 3 meters. The effectiveness of this non-contact approach showed that the method could be employed as an alternative to a conventional mechanical registration arm. Furthermore, the approach is not limited by physical contact and extension of joint angles.
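The two range-finding principles compared above come down to simple relations between the speed of light and either a round-trip time or a modulation phase delay. The Python sketch below evaluates both relations with assumed numbers in the 2.5-3 m range discussed in the abstract; the values are illustrative, not the authors' measurements.

```python
import math

C = 299_792_458.0                      # speed of light, m/s

def tof_distance(round_trip_time_s):
    # Time-of-flight: the light travels to the target and back
    return C * round_trip_time_s / 2.0

def phase_shift_distance(phase_rad, modulation_freq_hz):
    # Phase-shift: distance from the phase delay of an amplitude-modulated beam
    return C * phase_rad / (4.0 * math.pi * modulation_freq_hz)

print(tof_distance(20e-9))             # 20 ns round trip      -> about 3.0 m
print(phase_shift_distance(1.2, 10e6)) # 1.2 rad at 10 MHz     -> about 2.9 m
```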

Keywords: 3D laser scanner, intraoperative MR imaging, neuroArm, real time registration, robot-assisted surgery, supervisory control.
