Search results for: single particle ICP-MS
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6098

2018 Gas Network Noncooperative Game

Authors: Teresa Azevedo Perdicoúlis, Paulo Lopes Dos Santos

Abstract:

The conceptualisation of the problem of network optimisation as a noncooperative game sets up a holistic interactive approach that brings together different network features (e.g., compressor stations, sources, and pipelines, in the gas context) where the optimisation objectives are different, and a single optimisation procedure becomes possible without having to feed results from diverse software packages into each other. A mathematical model of this type, where independent entities take action, offers the ideal modularity and subsequent problem decomposition with a view to designing a decentralised algorithm to optimise the operation and management of the network. In a game framework, compressor stations and sources are understood as players which communicate through the network connectivity constraints, i.e., the pipeline model. That is, in a scheme similar to tâtonnement, the players appoint their best settings and then interact to check for network feasibility. The returned degree of network infeasibility informs the players about the 'quality' of their settings, and this two-phase iterative scheme is repeated until a global optimum is obtained. Due to network transients, the optimisation needs to be assessed at different points of the control interval. For this reason, the proposed approach to optimisation has two stages: (i) the first stage computes along the period of optimisation in order to fulfil the requirement just mentioned; (ii) the second stage is initialised with the solution found at the first stage and computes at the end of the period of optimisation to rectify that solution. The viability of the proposed scheme is demonstrated on an abstract prototype and three example networks.
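
A minimal sketch of the tâtonnement-style iteration described above: two players repeatedly choose their own best settings against a shared feasibility constraint until the scheme converges. The cost functions, the coupling constraint, and the penalty weight are invented for illustration and are not the authors' gas-network model.

```python
# Illustrative tatonnement-style iteration for a two-player setting game.
# Toy sketch only: costs, coupling constraint and penalty are invented.
import numpy as np

def best_response(own_cost, coupling, other_setting, penalty=10.0):
    """Grid-search the setting that minimises own cost plus an
    infeasibility penalty, given the other player's current setting."""
    grid = np.linspace(0.0, 5.0, 501)
    total = own_cost(grid) + penalty * coupling(grid, other_setting) ** 2
    return grid[np.argmin(total)]

# Hypothetical player costs (e.g., compression cost, supply cost).
cost_a = lambda x: (x - 3.0) ** 2
cost_b = lambda x: 0.5 * (x - 1.0) ** 2
# Hypothetical connectivity constraint: settings must jointly meet a demand of 4.
residual = lambda x, y: x + y - 4.0

x_a, x_b = 0.0, 0.0
for it in range(100):
    x_a_new = best_response(cost_a, residual, x_b)
    x_b_new = best_response(cost_b, residual, x_a_new)
    if abs(x_a_new - x_a) < 1e-6 and abs(x_b_new - x_b) < 1e-6:
        break
    x_a, x_b = x_a_new, x_b_new

print(f"settings: {x_a:.3f}, {x_b:.3f}, infeasibility: {residual(x_a, x_b):.3f}")
```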

Keywords: connectivity matrix, gas network optimisation, large-scale, noncooperative game, system decomposition

Procedia PDF Downloads 152
2017 Efficient Delivery of Biomaterials into Living Organism by Using Noble Metal Nanowire Injector

Authors: Kkochorong Park, Keun Cheon Kim, Hyoban Lee, Eun Ju Lee, Bongsoo Kim

Abstract:

The introduction of biomaterials such as DNA, RNA, and proteins is important for many research areas. There are many methods to introduce biomaterials into living organisms such as tissues and cells. To introduce biomaterials, several indirect methods including virus-mediated delivery, chemical reagents (i.e., lipofectamine), and electrophoresis have been used. Such methods rely on passive delivery through the cell's endocytosis process, which reduces delivery efficiency. Unlike indirect delivery methods, direct delivery of exogenous biomolecules into the nucleus has been reported to be more efficient for the expression or integration of the biomolecules. Nano-sized materials are well suited to detecting signals from cells or delivering stimuli and materials into cells at the cellular and molecular levels, owing to their comparable physical scale. In particular, because one-dimensional (1D) nanomaterials such as nanotubes, nanorods, and nanowires with high aspect ratios have nanoscale geometry and excellent mechanical, electrical, and chemical properties, they can play an important role in molecular and cellular biology. In this study, using single-crystalline 1D noble metal nanowires, we fabricated a nano-sized 1D injector that can successfully interface with living cells and directly deliver biomolecules into several cell types (e.g., stem cells, mammalian embryos) without inducing detrimental damage to living cells. This nano-bio technology could be a promising and robust tool for introducing exogenous biomaterials into living organisms.

Keywords: DNA, gene delivery, nanoinjector, nanowire

Procedia PDF Downloads 275
2016 Horizontal Bone Augmentation Using Two Membranes at Dehisced Implant Sites: A Randomized Clinical Study

Authors: Monika Bansal

Abstract:

Background: Placement of dental implants in a narrow alveolar ridge is challenging to treat. The guided bone regeneration (GBR) procedure is currently the most widely used to augment deficient alveolar ridges and to treat fenestration and dehiscence around dental implants. Thus, the objectives of the present study were to evaluate and compare the clinical performance of collagen membrane and titanium mesh for horizontal bone augmentation at dehisced implant sites. Methods and material: A total of 12 single edentulous implant sites with buccal bone deficiency in 8 subjects were equally divided and treated simultaneously with either of the two membranes and DBBM (Bio-Oss) bone graft. Primary outcome measurements in terms of defect height and defect width were made using a calibrated plastic periodontal probe. Re-entry surgery was performed at 6 months to remeasure the augmented site and to remove the Ti-mesh. Independent t-tests for the inter-group comparison and paired t-tests for the intra-group comparison were performed. The differences were considered significant at p ≤ 0.05. Results: Mean defect fill with respect to height and width was 3.50 ± 0.54 mm (87%) and 2.33 ± 0.51 mm (82%) for the collagen membrane group and 3.83 ± 0.75 mm (92%) and 2.50 ± 0.54 mm (88%) for the Ti-mesh group, respectively. Conclusions: Within the limitations of the study, it was concluded that the reduction in mean defect height and width after 6 months was statistically significant within each group, with no significant difference between the groups, although defect resolution was better with Ti-mesh.
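
A minimal sketch of the inter-group comparison using the summary statistics reported above (6 sites per group, mean defect-height fill). It uses SciPy's ttest_ind_from_stats and is only an illustration of the kind of test reported, not the authors' original analysis.

```python
# Minimal sketch: inter-group comparison of mean defect-height fill from the
# reported summary statistics (6 sites per group). Illustration only, not the
# authors' original statistical analysis.
from scipy.stats import ttest_ind_from_stats

# Collagen membrane group: 3.50 +/- 0.54 mm; Ti-mesh group: 3.83 +/- 0.75 mm.
t_stat, p_value = ttest_ind_from_stats(
    mean1=3.50, std1=0.54, nobs1=6,
    mean2=3.83, std2=0.75, nobs2=6,
    equal_var=True,
)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")  # p > 0.05 -> no inter-group difference
```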

Keywords: collagen membrane, dehiscence, dental implant, horizontal bone augmentation, ti-mesh

Procedia PDF Downloads 111
2015 The Influence of Different Flux Patterns on Magnetic Losses in Electric Machine Cores

Authors: Natheer Alatawneh

Abstract:

The finite element analysis of magnetic fields in electromagnetic devices shows that machine cores experience different flux patterns, including alternating and rotating fields. The rotating fields are generated in different configurations ranging between circular and elliptical, with different ratios between the major and minor axes of the flux locus. Experimental measurements on electrical steel exposed to different flux patterns disclose different magnetic losses in the samples under test. Consequently, electric machines require special attention during the core loss calculation process to consider the flux patterns. In this study, a circular rotational single sheet tester is employed to measure the core losses in an M36G29 electrical steel sample. The sample was exposed to an alternating field, a circular field, and elliptical fields with axis ratios of 0.2, 0.4, 0.6, and 0.8. The measured data were applied to a 6/4 switched reluctance motor at three frequencies of interest to industry: 60 Hz, 400 Hz, and 1 kHz. The results disclose the high margin of error that may occur during loss calculations if the flux patterns are neglected. The error in different parts of the machine when the flux patterns are not considered can be around 50%, 10%, and 2% at 60 Hz, 400 Hz, and 1 kHz, respectively. Future work will focus on the optimization of the machine's geometrical shape, which has a primary effect on the flux pattern, in order to minimize the magnetic losses in machine cores.

Keywords: alternating core losses, electric machines, finite element analysis, rotational core losses

Procedia PDF Downloads 252
2014 Compensatory Articulation of Pressure Consonants in Telugu Cleft Palate Speech: A Spectrographic Analysis

Authors: Indira Kothalanka

Abstract:

For individuals born with a cleft palate (CP), there is no separation between the nasal cavity and the oral cavity, due to which they cannot build up enough air pressure in the mouth for speech. Therefore, it is common for them to have speech problems. Common cleft-type speech errors include abnormal articulation (compensatory or obligatory) and abnormal resonance (hyper-, hypo- and mixed nasality). These are generally resolved after palate repair. However, in some individuals, articulation problems persist even after palate repair. Such individuals develop variant articulations in an attempt to compensate for the inability to produce the target phonemes. A spectrographic analysis is used to investigate the compensatory articulatory behaviours of pressure consonants in the speech of 10 Telugu-speaking individuals aged between 7 and 17 years with a history of cleft palate. Telugu is a Dravidian language spoken in the Andhra Pradesh and Telangana states of India. It has the third largest number of native speakers in India and is the most spoken Dravidian language. The speech of the informants is analysed using a single-word list, sentences, a passage, and conversation. Spectrographic analysis is carried out using PRAAT speech analysis software. The place and manner of articulation of consonant sounds are studied through spectrograms with the help of various acoustic cues. The types of compensatory articulation identified are glottal stops, palatal stops, uvular stops, velar stops, and nasal fricatives, which are non-native to Telugu.
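
For readers who want to reproduce the kind of spectrogram inspection described above outside PRAAT, the following is a generic SciPy sketch of computing a broadband spectrogram from a speech recording; the file name and window settings are hypothetical, and the study itself used PRAAT.

```python
# Generic spectrogram computation for inspecting acoustic cues (burst spikes,
# formant transitions, frication noise). The study used PRAAT; this SciPy
# sketch is only illustrative, and 'sample.wav' is a hypothetical file.
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram

rate, audio = wavfile.read("sample.wav")          # speech recording
if audio.ndim > 1:                                # keep a single channel
    audio = audio[:, 0]
freqs, times, sxx = spectrogram(
    audio.astype(float), fs=rate,
    nperseg=int(0.005 * rate),                    # ~5 ms window (broadband view)
    noverlap=int(0.004 * rate),
)
sxx_db = 10 * np.log10(sxx + 1e-12)               # log power for display/inspection
print(freqs.shape, times.shape, sxx_db.shape)
```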

Keywords: cleft palate, compensatory articulation, spectrographic analysis, PRAAT

Procedia PDF Downloads 443
2013 Designing Floor Planning in 2D and 3D with an Efficient Topological Structure

Authors: V. Nagammai

Abstract:

Very-large-scale integration (VLSI) is the process of creating an integrated circuit (IC) by combining thousands of transistors into a single chip. Advances in technology increase the complexity of IC manufacturing, which may vary the power consumption and increase the size and latency. Topology defines the number of connections within a network. In this project, the NoC topology is generated using the Atlas tool, which increases performance and, in turn, makes the determination of constraints effective. The routing is performed by the XY routing algorithm with wormhole flow control. In NoC topology generation, the values of power, area, and latency are predetermined. In previous work, placement, routing, and shortest path evaluation were performed using the floor planning with cluster reconstruction and path allocation algorithm (FCRPA), using four 3x3 switches, six 4x4 switches, and two 5x5 switches. The use of the 4x4 and 5x5 switches increases the power consumption and area of the block. To avoid this problem, this paper uses one 8x8 switch and four 3x3 switches. This paper uses IPRCA, which consists of three steps: placement, clustering, and shortest path evaluation. The placement is performed using min-cut placement, and the clustering is performed using a cluster generation algorithm. The shortest path is evaluated using Dijkstra's algorithm. The power consumption of each block is determined. The experimental results show that the area, power, and wire length are improved simultaneously.
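
The shortest-path step above is Dijkstra's algorithm; the sketch below is a minimal, self-contained implementation on a hypothetical switch graph (node names and link weights are invented for illustration, not the paper's topology).

```python
# Minimal Dijkstra's algorithm on a hypothetical NoC switch graph.
# Node names and link weights are invented for illustration only.
import heapq

def dijkstra(graph, source):
    """Return shortest distances from source to every node in a weighted graph."""
    dist = {node: float("inf") for node in graph}
    dist[source] = 0.0
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue                      # stale queue entry
        for v, w in graph[u].items():
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist

switch_graph = {
    "S8x8":   {"S3x3_a": 1, "S3x3_b": 2, "S3x3_c": 2},
    "S3x3_a": {"S8x8": 1, "S3x3_d": 3},
    "S3x3_b": {"S8x8": 2, "S3x3_d": 1},
    "S3x3_c": {"S8x8": 2},
    "S3x3_d": {"S3x3_a": 3, "S3x3_b": 1},
}
print(dijkstra(switch_graph, "S8x8"))
```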

Keywords: application-specific NoC, B*-tree representation, floor planning, T-tree representation

Procedia PDF Downloads 393
2012 Using Social Media to Amplify Social Entrepreneurial Message

Authors: Irfan Khairi

Abstract:

It is arguable that today's social media has dramatically redefined human contact, chiefly because the platforms enable unprecedented communication opportunities. Without question, billions of individuals globally engage with the media, a reality by no means lost on businesses and social entrepreneurs seeking to generate interest in a cause, movement, or other social effort. If, however, the opportunities are immense, so too is the competition. Private persons and entrepreneurial concerns alike virtually saturate the popular sites of Facebook, Twitter, and Instagram, and most are intent on capturing as much external interest as possible. At the same time, however, the social entrepreneur possesses an advantage over the individual concerned only with the social aspects of the sites, as they express interest in, and pursue measures applicable to, important causes of which the public at large may be unaware. There is, unfortunately, no single means of ensuring success in using the media outlets to generate interest. Nonetheless, a general awareness of how social media sites function, as well as of the psychological elements relevant to that functioning, is necessary. It is as important to comprehend the basic realities of the platforms and the approaches that fail as it is to develop strategy, for the latter relies on knowledge of the former. With this awareness in place, the social entrepreneur is better enabled to determine strategy, in terms of which sites to focus upon and how to convey their message most effectively. What is required is familiarity with the online communities, with attention to the specific advantages each provides. Ultimately, today's social entrepreneur may establish a highly effective platform of promotion and engagement, provided they fully comprehend the social investment necessary for success.

Keywords: social media, marketing, e-commerce, internet business

Procedia PDF Downloads 212
2011 Magnetic Navigation of Nanoparticles inside a 3D Carotid Model

Authors: E. G. Karvelas, C. Liosis, A. Theodorakakos, T. E. Karakasidis

Abstract:

Magnetic navigation of a drug inside the human vessels is a very important concept since the drug is delivered to the desired area. Consequently, the quantity of the drug required to reach therapeutic levels is reduced while the drug concentration at targeted sites is increased. Magnetic navigation of drug agents can be achieved with the use of magnetic nanoparticles where anti-tumor agents are loaded on the surface of the nanoparticles. The magnetic field that is required to navigate the particles inside the human arteries is produced by a magnetic resonance imaging (MRI) device. The main factors which influence the efficiency of magnetic nanoparticles for biomedical applications in magnetic driving are the size and the magnetization of the biocompatible nanoparticles. In this study, a computational platform for the simulation of the optimal gradient magnetic fields for the navigation of magnetic nanoparticles inside a carotid artery is presented. For the propulsion model of the particles, seven major forces are considered, i.e., the magnetic force from the MRI's main magnet static field as well as the magnetic field gradient force from the special propulsion gradient coils. The static field is responsible for the aggregation of nanoparticles, while the magnetic gradient contributes to the navigation of the agglomerates that are formed. Moreover, the contact forces among the aggregated nanoparticles and the wall and the Stokes drag force for each particle are considered, while only spherical particles are used in this study. In addition, the gravitational force and the buoyancy force are included. Finally, the van der Waals force and Brownian motion are taken into account in the simulation. The OpenFOAM platform is used for the calculation of the flow field and the uncoupled equations of the particles' motion. To determine the optimal gradient magnetic fields, a covariance matrix adaptation evolution strategy (CMA-ES) is used in order to navigate the particles into the desired area. A desired trajectory, along which the particles are to be navigated, is inserted into the computational geometry. Initially, the CMA-ES optimization strategy provides the OpenFOAM program with random values of the gradient magnetic field. At the end of each simulation, the computational platform evaluates the distance between the particles and the desired trajectory. The present model can simulate the motion of particles when they are navigated by the magnetic field produced by the MRI device. Under the influence of fluid flow, the model investigates the effect of different gradient magnetic fields in order to minimize the distance of the particles from the desired trajectory. In addition, the platform can navigate the particles onto the desired trajectory with an efficiency of 80-90%. On the other hand, a small number of particles get stuck to the walls and remain there for the rest of the simulation.
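
A toy stand-in for the optimisation loop described above: a simple evolution strategy (used here in place of CMA-ES) searches for a constant 2D gradient force that keeps a drifting particle close to a straight desired trajectory. The dynamics, parameters, and objective are invented; the actual study couples CMA-ES with OpenFOAM flow simulations.

```python
# Toy stand-in for the optimisation loop: a simple evolution strategy searches
# for a constant 2-D "gradient force" that keeps a drifting particle close to a
# straight desired trajectory. Dynamics, parameters and objective are invented.
import numpy as np

rng = np.random.default_rng(0)
desired_y = 0.0                       # desired trajectory: the line y = 0

def tracking_error(gradient_force, steps=200, dt=0.01):
    """Simulate a particle advected by a background flow plus the candidate
    magnetic force, and return its mean distance from the desired path."""
    pos = np.array([0.0, 0.05])       # start slightly off the desired path
    vel = np.array([1.0, 0.0])        # simple background flow along x
    err = 0.0
    for _ in range(steps):
        vel = vel + dt * np.asarray(gradient_force)
        pos = pos + dt * vel
        err += abs(pos[1] - desired_y)
    return err / steps

best = np.zeros(2)
best_err = tracking_error(best)
sigma = 0.5
for generation in range(100):         # (1+5)-style evolution strategy
    candidates = best + sigma * rng.normal(size=(5, 2))
    errs = [tracking_error(c) for c in candidates]
    i = int(np.argmin(errs))
    if errs[i] < best_err:
        best, best_err = candidates[i], errs[i]
    sigma *= 0.97                     # slowly narrow the search
print("best gradient force:", best, "mean error:", round(best_err, 4))
```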

Keywords: artery, drug, nanoparticles, navigation

Procedia PDF Downloads 107
2010 Mineralogical Study of the Triassic Clay of Maaziz and the Miocene Marl of Akrach in Morocco: Analysis and Evaluation of the Two Geomaterials for the Construction of Ceramic Bricks

Authors: Sahar El Kasmi, Ayoub Aziz, Saadia Lharti, Mohammed El Janati, Boubker Boukili, Nacer El Motawakil, Mayom Chol Luka Awan

Abstract:

Two types of geomaterials (red Triassic clay from the Maaziz region and yellow Pliocene clay from the Akrach region) were used to create different mixtures for the fabrication of ceramic bricks. This study investigated the influence of the Pliocene clay on the overall composition and mechanical properties of the Triassic clay. The red Triassic clay, sourced from Maaziz, underwent various mechanical processes and treatments to facilitate its transformation into ceramic bricks for construction. The Triassic clay was placed in a drying chamber and a heating chamber at 100°C to remove moisture. Subsequently, the dried clay samples were processed using a planetary ball mill to reduce particle size and improve homogeneity. The resulting clay material was sieved, and the fine particles below 100 mm were collected for further analysis. In parallel, the Miocene marl obtained from the Akrach region was fragmented into finer particles and subjected to the same drying, grinding, and sieving procedures as the Triassic clay. The two clay samples were then amalgamated and homogenized in different proportions. Precise measurements were taken using a weighing balance, and mixtures of 90%, 80%, and 70% Triassic clay with 10%, 20%, and 30% yellow clay were prepared, respectively. To evaluate the impact of the Pliocene marl on the composition, the prepared clay mixtures were spread evenly and treated with a water modifier to enhance plasticity. The clay was then molded using a brick-making machine, and the initial manipulation process was observed. Additional batches were prepared with incremental amounts of Pliocene marl to further investigate its effect on the fracture behavior of the clay, specifically its resistance. The molded clay bricks were subjected to compression tests to measure their strength and resistance to deformation. Additional tests, such as water absorption tests, were also conducted to assess the overall performance of the ceramic bricks fabricated from the different clay mixtures. The results were analyzed to determine the influence of the Pliocene marl on the strength and durability of the Triassic clay bricks. The results indicated that the incorporation of Pliocene clay reduced fracturing of the Triassic clay, with a noticeable reduction observed at 10% addition. No fractures were observed when 20% and 30% yellow clay were added. These findings suggest that the yellow clay can enhance the mechanical properties and structural integrity of red clay-based products.

Keywords: triassic clay, pliocene clay, mineralogical composition, geo-materials, ceramics, akrach region, maaziz region, morocco

Procedia PDF Downloads 88
2009 Simultaneous versus Sequential Model in Foreign Entry

Authors: Patricia Heredia, Isabel Saz, Marta Fernández

Abstract:

This article proposes that the decision regarding exporting and the choice of export channel are nested and non-independent decisions. We assume that firms make two sequential decisions before arriving at their final choice: the decision to access foreign markets and the decision about the type of channel. This hierarchical perspective of the choices involved in the process is appealing for two reasons. First, it supports the idea that people have a limited analytical capacity; managers often break down a complex decision into a hierarchical process because this makes it more manageable. Secondly, it recognizes that important differences exist between entry modes. In light of the above, the objective of this study is to test different entry mode choice processes: independent decisions versus nested, non-independent decisions. To do this, the methodology estimates and compares the following two models: (i) a simultaneous single-stage model with three entry mode choices (using a multinomial logit model); (ii) a two-stage model, with the export decision preceding the channel decision, using a sequential logit model. The study uses resource-based factors to determine these internationalization decision processes, and the empirical analysis is carried out on a DOC Rioja sample of 177 firms. Using the Akaike and Schwarz information criteria, the empirical evidence supports the existence of a nested structure, where the decision about exporting precedes the export mode decision. The implications and contributions of the findings are discussed.
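
A minimal sketch of the two competing specifications on synthetic data: a single-stage multinomial logit over three outcomes versus a two-stage sequential structure fitted as two binary logits and compared by summed AIC. The variables, data, and coding are invented; the study's actual specification uses resource-based factors measured on the 177 DOC Rioja firms.

```python
# Sketch of the two competing specifications on synthetic data: a simultaneous
# multinomial logit over {0: no export, 1: direct channel, 2: indirect channel}
# versus a sequential structure (export yes/no, then channel among exporters).
# Variables and data are invented.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 177
resources = rng.normal(size=n)               # hypothetical resource endowment
size = rng.normal(size=n)                    # hypothetical firm size
latent_export = 0.8 * resources + 0.4 * size + rng.logistic(size=n)
exports = (latent_export > 0).astype(int)
latent_direct = 0.9 * resources - 0.2 * size + rng.logistic(size=n)
choice = np.where(exports == 0, 0, np.where(latent_direct > 0, 1, 2))

X = sm.add_constant(np.column_stack([resources, size]))

# (i) simultaneous single-stage multinomial logit over the three choices
simultaneous = sm.MNLogit(choice, X).fit(disp=0)

# (ii) sequential: export decision first, then channel choice among exporters
stage1 = sm.Logit(exports, X).fit(disp=0)
mask = exports == 1
stage2 = sm.Logit((choice[mask] == 1).astype(int), X[mask]).fit(disp=0)

print("AIC simultaneous :", round(simultaneous.aic, 1))
print("AIC sequential   :", round(stage1.aic + stage2.aic, 1))
```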

Keywords: sequential logit model, two-stage choice process, export mode, wine industry

Procedia PDF Downloads 30
2008 Mixed Mode Fracture Analyses Using Finite Element Method of Edge Cracked Heavy Spinning Annulus Pulley

Authors: Bijit Kalita, K. V. N. Surendra

Abstract:

The rotating disk is one of the most indispensable parts of a rotating machine and has found many applications in diverse fields of science and technology. In this paper, we consider the problem of a heavy spinning disk mounted on a rotor system and acted upon by boundary traction. Finite element modelling is used at various loading conditions to determine the mixed-mode stress intensity factors. The effect of combined shear and normal traction on the boundary is incorporated in the analysis under the action of gravity. The variation near the crack tip is characterized in terms of the stress intensity factor (SIF), with the aim of finding the SIF for a wide range of parameters. The results of the finite element analyses carried out on the compressed disk of a belt pulley arrangement using fracture mechanics concepts are shown. A total of one hundred cases of the problem are solved for each of the variations in the loading arc parameter and crack orientation using finite element models of the disk under compression. All models were prepared and analyzed for the uncracked disk, the disk with a single crack at different orientations emanating from the shaft hole, as well as the disk with a pair of cracks emerging from the same center hole. Curves are plotted for various loading conditions. Finally, crack propagation paths are determined using kink angle concepts.

Keywords: crack-tip deformations, static loading, stress concentration, stress intensity factor

Procedia PDF Downloads 143
2007 Understanding Indonesian Smallholder Dairy Farmers’ Decision to Adopt Multiple Farm-Level Innovations

Authors: Rida Akzar, Risti Permani, Wahida, Wendy Umberger

Abstract:

Adoption of farm innovations may increase farm productivity and therefore improve market access and farm incomes. However, most studies that look at the level and drivers of innovation adoption focus only on a specific type of innovation. Farmers may consider multiple innovation options, as well as constraints such as budget, environment, scarcity of labour supply, and the cost of learning. There have been some studies proposing different methods to combine a broad variety of innovations into a single measurable index. However, little has been done to compare these methods and assess whether they provide similar information about farmer segmentation by 'innovativeness'. Using data from a recent survey of 220 dairy farm households in West Java, Indonesia, this study first compares different methods of deriving an innovation index, including an expert-weighted innovation index, an index derived from the total number of adopted technologies, and an index of the extent of adoption that takes into account both adoption and disadoption of multiple innovations. Second, it examines the distribution of different farming systems, taking into account their innovativeness and farm characteristics. Results from this study will inform policy makers and stakeholders in the dairy industry on how to better design, target and deliver programs to improve and encourage farm innovation, and therefore improve farm productivity and the performance of the dairy industry in Indonesia.
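
A minimal pandas sketch of two of the index constructions mentioned above: a simple count of adopted innovations and an expert-weighted index. The innovation names, expert weights, and records are invented, not the survey data.

```python
# Minimal sketch of two innovation-index constructions: a simple count of
# adopted innovations and an expert-weighted index. Innovation names, expert
# weights and records are invented, not the household-survey data.
import pandas as pd

survey = pd.DataFrame(
    {   # 1 = currently adopted, 0 = not adopted (toy records)
        "improved_forage":  [1, 0, 1, 1],
        "artificial_insem": [1, 1, 0, 1],
        "milk_recording":   [0, 0, 0, 1],
        "cooling_tank":     [0, 1, 0, 1],
    },
    index=["farm_a", "farm_b", "farm_c", "farm_d"],
)

# Hypothetical expert weights reflecting perceived importance of each innovation.
weights = pd.Series(
    {"improved_forage": 0.2, "artificial_insem": 0.3,
     "milk_recording": 0.2, "cooling_tank": 0.3}
)

count_index = survey.sum(axis=1)                          # number of adopted innovations
weighted_index = survey.mul(weights, axis=1).sum(axis=1)  # expert-weighted index

print(pd.DataFrame({"count_index": count_index,
                    "expert_weighted_index": weighted_index}))
```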

Keywords: adoption, dairy, household survey, innovation index, Indonesia, multiple innovations, West Java

Procedia PDF Downloads 336
2006 Magnesium Ameliorates Lipopolysaccharide-Induced Liver Injury in Mice

Authors: D. M. El-Tanbouly, R. M. Abdelsalam, A. S. Attia, M. T. Abdel-Aziz

Abstract:

Lipopolysaccharide (LPS) endotoxin, a component of the outer membrane of Gram-negative bacteria, is involved in the pathogenesis of sepsis. LPS administration induces systemic inflammation that mimics many of the initial clinical features of sepsis and has deleterious effects on several organs, including the liver, eventually leading to septic shock and death. The present study aimed to investigate the protective effect of magnesium, a well-known cofactor in many enzymatic reactions and a critical component of the antioxidant system, on the hepatic damage associated with LPS-induced endotoxemia in mice. Mg (20 and 40 mg/kg, po) was administered for 7 consecutive days. Systemic inflammation was induced one hour after the last dose of Mg by a single dose of LPS (2 mg/kg, ip), and three hours thereafter plasma was separated, the animals were sacrificed, and their livers were isolated. LPS-treated mice suffered from hepatic dysfunction, revealed by histological observation and elevations in plasma transaminase activities, C-reactive protein content, and caspase-3, a critical marker of apoptosis. Liver inflammation was evident from elevations in liver cytokine contents (TNF-α and IL-10) and myeloperoxidase (MPO) activity. Additionally, oxidative stress was manifested by increased liver lipoperoxidation, glutathione depletion, elevated total nitrate/nitrite (NOx) content, and glutathione peroxidase (GPx) activity. Pretreatment with Mg largely mitigated these alterations through its anti-inflammatory and antioxidant potential. Mg, therefore, could be regarded as an effective strategy for the prevention of liver damage associated with septicemia.

Keywords: LPS, liver damage, magnesium, septicemia

Procedia PDF Downloads 397
2005 Experimental Investigation on the Shear Strength Parameters of Sand-Slag Mixtures

Authors: Ayad Salih Sabbar, Amin Chegenizadeh, Hamid Nikraz

Abstract:

Utilizing waste materials in civil engineering applications has a positive influence on the environment by reducing carbon dioxide emissions and the issues associated with waste disposal. Granulated blast furnace slag (GBFS) is a by-product of the iron and steel industry, with millions of tons of slag being produced annually worldwide. Slag has been widely used in structural engineering and for stabilizing clay soils; however, studies on the effect of slag on sandy soils are scarce. This article investigates the effect of slag content on shear strength parameters through direct shear tests and unconsolidated undrained triaxial tests on mixtures of Perth sand and slag. For this purpose, sand-slag mixtures, with slag contents of 2%, 4%, and 6% by weight of the samples, were tested in direct shear under three normal stress values, namely 100 kPa, 150 kPa, and 200 kPa. Unconsolidated undrained triaxial tests were performed under a single confining pressure of 100 kPa and a relative density of 80%. The internal friction angles and shear stresses of the mixtures were determined from the direct shear tests, demonstrating that shear stresses increased with increasing normal stress and that the internal friction angle and cohesion increased with increasing slag content. There were no significant differences in the shear strength parameters when the slag content rose from 4% to 6%. The unconsolidated undrained triaxial tests demonstrated that shear strength increased with increasing slag content.
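
In a direct shear test, the friction angle and cohesion follow from a linear fit of peak shear stress against normal stress (the Mohr-Coulomb envelope, τ = c + σ·tanφ). The sketch below performs that fit at the three normal stresses used in the study; the peak shear stress values are invented for illustration and are not the paper's data.

```python
# Mohr-Coulomb fit: tau = c + sigma_n * tan(phi).
# Normal stresses follow the study (100, 150, 200 kPa); the peak shear
# stresses below are invented for illustration, not the paper's data.
import numpy as np

sigma_n = np.array([100.0, 150.0, 200.0])     # applied normal stresses (kPa)
tau_peak = np.array([78.0, 112.0, 149.0])     # hypothetical peak shear stresses (kPa)

slope, cohesion = np.polyfit(sigma_n, tau_peak, 1)   # linear envelope fit
friction_angle = np.degrees(np.arctan(slope))

print(f"cohesion c = {cohesion:.1f} kPa, friction angle phi = {friction_angle:.1f} deg")
```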

Keywords: direct shear, shear strength, slag, UU test

Procedia PDF Downloads 479
2004 Modelling and Control of Binary Distillation Column

Authors: Narava Manose

Abstract:

Distillation is a very old separation technology for separating liquid mixtures that can be traced back to the chemists of Alexandria in the first century A.D. Today, distillation is the most important industrial separation technology. By the eleventh century, distillation was being used in Italy to produce alcoholic beverages. At that time, distillation was probably a batch process based on the use of just a single stage, the boiler. The word distillation is derived from the Latin word destillare, which means dripping or trickling down. By at least the sixteenth century, it was known that the extent of separation could be improved by providing multiple vapor-liquid contacts (stages) in a so-called Rectificatorium. The term rectification is derived from the Latin words recte facere, meaning to improve. Modern distillation derives its ability to produce almost pure products from the use of multi-stage contacting. Throughout the twentieth century, multistage distillation was by far the most widely used industrial method for separating liquid mixtures of chemical components. The basic principle behind this technique relies on the different boiling temperatures of the various components of the mixture, allowing separation between the vapor of the most volatile component and the liquid of the other component(s). In this work, we developed a simple non-linear model of a binary distillation column using the Skogestad equations in Simulink and computed the steady-state operating point around which to base our analysis and controller design. However, the model contains two integrators because the condenser and reboiler levels are not controlled. One particular way of stabilizing the column is the LV-configuration, where we use D to control M_D and B to control M_B; such a model is given in cola_lv.m, where we have used two P-controllers with gains equal to 10.
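
A toy Python sketch of the stabilizing idea described above: the condenser and reboiler holdups are treated as integrators, and two proportional loops with gain 10 manipulate the distillate D and bottoms B. The flows, setpoints, and time step are invented; this is not the cola_lv.m Simulink model.

```python
# Toy sketch of the LV-configuration level stabilisation: condenser holdup M_D
# and reboiler holdup M_B are pure integrators, and two P-controllers (gain 10)
# manipulate distillate D and bottoms B. Flows, setpoints and time step are
# invented; this is not the cola_lv.m model.
Kc = 10.0                 # proportional gain for both level loops
dt = 0.01                 # integration step (min)
V, L = 3.0, 2.5           # assumed constant vapour and reflux flows (kmol/min)
F, qF = 1.0, 1.0          # assumed feed rate and feed liquid fraction
MD_set, MB_set = 0.5, 0.5 # level setpoints (kmol)
MD, MB = 0.6, 0.4         # initial (off-setpoint) holdups

for k in range(int(5.0 / dt)):                          # simulate 5 minutes
    D = max(0.0, V - L + Kc * (MD - MD_set))            # P-control of condenser level
    B = max(0.0, L + qF * F - V + Kc * (MB - MB_set))   # P-control of reboiler level
    MD += dt * (V - L - D)                              # condenser mass balance (integrator)
    MB += dt * (L + qF * F - V - B)                     # reboiler mass balance (integrator)

print(f"M_D = {MD:.3f}, M_B = {MB:.3f}  (both driven back toward the 0.5 setpoints)")
```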

Keywords: modelling, distillation column, control, binary distillation

Procedia PDF Downloads 277
2003 Human Gesture Recognition for Real-Time Control of Humanoid Robot

Authors: S. Aswath, Chinmaya Krishna Tilak, Amal Suresh, Ganesh Udupa

Abstract:

There are many technologies for controlling a humanoid robot. However, the use of electromyogram (EMG) electrodes has its own importance in setting up the control system. An EMG-based control system helps to control robotic devices with greater fidelity and precision. In this paper, the development of an electromyogram-based interface for human gesture recognition for the control of a humanoid robot is presented. To recognize control signs in the gestures, a single-channel EMG sensor is positioned on the muscles of the human body. Instead of using a remote control unit, the humanoid robot is controlled by various gestures performed by the human. The EMG electrodes attached to the muscles generate an analog signal due to the nerve impulses generated in the moving muscles. The analog signals taken from the muscles are supplied to a differential muscle sensor that processes them to generate a signal suitable for the microcontroller to gain control over the humanoid robot. The signal from the differential muscle sensor is converted to digital form using the ADC of the microcontroller, which outputs its decision to the CM-530 humanoid robot controller through a Zigbee wireless interface. The output decision of the CM-530 processor is sent to a motor driver in order to control the servo motors in the required direction for human-like actions. This method of gaining control of a humanoid robot could be used to perform actions with more accuracy and ease. In addition, a study has been conducted to investigate the controllability and ease of use of the interface and the employed gestures.

Keywords: electromyogram, gesture, muscle sensor, humanoid robot, microcontroller, Zigbee

Procedia PDF Downloads 407
2002 Optimization Based Extreme Learning Machine for Watermarking of an Image in DWT Domain

Authors: Ram Pal Singh, Vikash Chaudhary, Monika Verma

Abstract:

In this paper, we propose the implementation of an optimization-based extreme learning machine (ELM) for watermarking the B-channel of a color image in the discrete wavelet transform (DWT) domain. ELM, a regularization algorithm, is based on generalized single-hidden-layer feed-forward neural networks (SLFNs); however, the hidden layer parameters, generally called the feature mapping in the context of ELM, need not be tuned every time. This paper shows the watermark embedding and extraction processes with the help of ELM, and the results are compared with machine learning models already used for watermarking. Here, a cover image is divided into a suitable number of non-overlapping blocks of the required size, and the DWT is applied to each block to transform it into the low-frequency sub-band domain. Basically, ELM gives a unified learning platform with a feature mapping, that is, a mapping between the hidden layer and the output layer of the SLFN, which is applied for watermark embedding and extraction in a cover image. ELM has widespread applications ranging from binary and multiclass classification to regression and function estimation. Unlike SVM-based algorithms, which achieve suboptimal solutions with high computational complexity, ELM can provide better generalization performance with very low complexity. The efficacy of the optimization-based ELM algorithm is measured using quantitative and qualitative parameters on the watermarked image, even when the image is subjected to different types of geometrical and conventional attacks.
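
The core of an ELM is a random, untuned hidden layer followed by a regularized least-squares solve for the output weights. The NumPy sketch below shows that core step on synthetic data; it illustrates only the learning machinery, not the paper's DWT watermark embedding and extraction pipeline.

```python
# Core ELM step: random (untuned) hidden layer + regularised least-squares
# solve for the output weights. Synthetic regression data; this illustrates
# the learning machinery only, not the DWT watermarking pipeline.
import numpy as np

rng = np.random.default_rng(42)
X = rng.uniform(-1, 1, size=(500, 8))                       # inputs (e.g., block features)
y = np.sin(X.sum(axis=1)) + 0.05 * rng.normal(size=500)     # toy target

n_hidden, C = 100, 1e2                         # hidden nodes, regularisation strength
W = rng.normal(size=(X.shape[1], n_hidden))    # random input weights (fixed, untrained)
b = rng.normal(size=n_hidden)                  # random biases (fixed, untrained)
H = np.tanh(X @ W + b)                         # hidden-layer feature mapping

# Ridge-regularised output weights: beta = (H'H + I/C)^-1 H'y
beta = np.linalg.solve(H.T @ H + np.eye(n_hidden) / C, H.T @ y)

y_hat = np.tanh(X @ W + b) @ beta
print("training RMSE:", round(float(np.sqrt(np.mean((y - y_hat) ** 2))), 4))
```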

Keywords: BER, DWT, extreme learning machine (ELM), PSNR

Procedia PDF Downloads 311
2001 Evaluation of a Potential Metabolism-Mediated Drug-Drug Interaction between Carvedilol and Fluvoxamine in Rats

Authors: Ana-Maria Gheldiu, Bianca M. Abrudan, Maria A. Neag, Laurian Vlase, Dana M. Muntean

Abstract:

Background information: The objective of this study was to investigate the effect of multiple-dose fluvoxamine on the pharmacokinetic profile of single-dose carvedilol in rats, in order to evaluate this possible drug-drug pharmacokinetic interaction. Methods: A preclinical study in 28 white male Wistar rats was conducted. Each rat was cannulated on the femoral vein prior to being connected to the BASi Culex ABC®. Carvedilol was orally administered to rats (3.57 mg/kg body mass (b.m.)) in the absence of fluvoxamine or after pre-treatment with multiple oral doses of fluvoxamine (14.28 mg/kg b.m.). The plasma concentrations of carvedilol were determined by high performance liquid chromatography-tandem mass spectrometry. The pharmacokinetic parameters of carvedilol were analyzed by the non-compartmental method. Results: After carvedilol co-administration with fluvoxamine, an approximately 2-fold increase in the exposure of carvedilol was observed, considering the significantly elevated value of the total area under the concentration versus time curve (AUC₀₋∞). Moreover, the peak plasma concentration increased by approximately 145%, and the half-life of carvedilol increased by approximately 230%. Conclusion: Fluvoxamine co-administration led to a significant alteration of carvedilol's pharmacokinetic profile in rats; these effects could be explained by the existence of a drug-drug interaction mediated by CYP2D6 inhibition. Acknowledgement: This work was supported by CNCS Romania – project PNII-RU-TE-2014-4-0242.
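
A minimal sketch of the non-compartmental calculation referred to above: trapezoidal AUC to the last sampling point plus the standard extrapolation AUC₀₋∞ = AUC_last + C_last/λz, with λz taken from a log-linear fit of the terminal points. The concentration-time values below are invented, not the study's data.

```python
# Minimal non-compartmental sketch: AUC(0-t_last) by the trapezoidal rule,
# terminal rate constant lambda_z from a log-linear fit of the last points,
# and AUC(0-inf) = AUC(0-t_last) + C_last / lambda_z. The concentration-time
# values below are invented, not the study's data.
import numpy as np

t = np.array([0.25, 0.5, 1, 2, 4, 6, 8, 12])        # h
c = np.array([180, 240, 210, 150, 80, 45, 25, 8])   # ng/mL (hypothetical)

auc_last = np.trapz(c, t)                            # linear trapezoidal rule

terminal = slice(-4, None)                           # use the last 4 points
lambda_z = -np.polyfit(t[terminal], np.log(c[terminal]), 1)[0]
half_life = np.log(2) / lambda_z
auc_inf = auc_last + c[-1] / lambda_z

print(f"AUC(0-t)   = {auc_last:.0f} ng*h/mL")
print(f"lambda_z   = {lambda_z:.3f} 1/h, t1/2 = {half_life:.1f} h")
print(f"AUC(0-inf) = {auc_inf:.0f} ng*h/mL")
```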

Keywords: carvedilol, fluvoxamine, drug-drug pharmacokinetic interaction, rats

Procedia PDF Downloads 274
2000 Unraveling Language Contact through the Syntactic Dynamics of ‘Also’ in Hong Kong and British English

Authors: Xu Zhang

Abstract:

This article unveils an indicator of language contact between English and Cantonese in one of the Outer Circle Englishes, Hong Kong (HK) English, through an empirical investigation of 1000 tokens from the Global Web-based English (GloWbE) corpus, employing frequency analysis and logistic regression analysis. It is perceived that Cantonese and general Chinese are contextually marked by an integral underlying thinking pattern. Chinese speakers exhibit a reliance on semantic context over syntactic rules and lexical forms. This linguistic trait carries over to their use of English, affording greater flexibility to formal elements in constructing English sentences. The study focuses on the syntactic positioning of the focusing subjunct 'also', a linguistic element used to add new or contrasting prominence to specific sentence constituents. The English language generally allows flexibility in the relative position of 'also', while there is a preference for close marking relationships. This article shifts attention to Hong Kong, where Cantonese and English converge, and 'also' finds counterparts in Cantonese 'jaa' and Mandarin 'ye'. Employing a corpus-based, data-driven method, we investigate the syntactic position of 'also' in both HK and GB English. The study aims to ascertain whether HK English exhibits greater 'syntactic freedom', allowing for a more distant marking relationship with 'also' compared to GB English. The analysis involves a random extraction of 500 samples each from HK and GB English from the GloWbE corpus, forming a dataset (N=1000). Exclusions are made for cases where 'also' functions as an additive conjunct or serves as a copulative adverb, as well as sentences lacking sufficient indication that 'also' functions as a focusing particle. The final dataset comprises 820 tokens, with 416 for GB and 404 for HK, annotated according to the focused constituent and the relative position of 'also'. Frequency analysis reveals significant differences in the relative position of 'also' and the marking relationships between HK and GB English. Regression analysis indicates a preference in HK English for a distant marking relationship between 'also' and its focused constituent. Notably, the subject and other constituents emerge as significant predictors of a distant position for 'also'. Together, these findings underscore the nuanced linguistic dynamics in HK English and contribute to our understanding of language contact. It suggests that future pedagogical practice should consider incorporating syntactic variation within English varieties, facilitating learners' effective communication in diverse English-speaking environments and enhancing their intercultural communication competence.
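
A minimal sketch of the kind of logistic regression described above: the outcome is whether 'also' takes a distant position, with variety (HK vs GB) and the focused constituent as categorical predictors. The mini-dataset is generated at random for illustration; the real analysis uses the 820 annotated GloWbE tokens.

```python
# Sketch of the logistic regression described above: the outcome is whether
# 'also' is placed at a distance from its focused constituent, with variety
# (HK vs GB) and the focused constituent as predictors. The mini-dataset is
# generated at random; the real analysis uses 820 annotated GloWbE tokens.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 400
variety = rng.choice(["GB", "HK"], size=n)
constituent = rng.choice(["subject", "object", "verb", "adjunct"], size=n)

# Invented effects: distant placement more likely in HK and with subjects.
logit_p = -1.0 + 0.8 * (variety == "HK") + 0.9 * (constituent == "subject")
distant = rng.random(n) < 1 / (1 + np.exp(-logit_p))

df = pd.DataFrame({"distant": distant.astype(int),
                   "variety": variety, "constituent": constituent})
model = smf.logit("distant ~ C(variety) + C(constituent)", data=df).fit(disp=0)
print(model.summary().tables[1])   # coefficients, std errors, p-values
```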

Keywords: also, Cantonese, English, focus marker, frequency analysis, language contact, logistic regression analysis

Procedia PDF Downloads 55
1999 Model-Driven and Data-Driven Approaches for Crop Yield Prediction: Analysis and Comparison

Authors: Xiangtuo Chen, Paul-Henry Cournéde

Abstract:

Crop yield prediction is a paramount issue in agriculture. The main idea of this paper is to find an efficient way to predict corn yield based on meteorological records. The prediction models used in this paper can be classified into model-driven approaches and data-driven approaches, according to their different modeling methodologies. The model-driven approaches are based on mechanistic crop modeling: they describe crop growth in interaction with the environment as a dynamical system. However, the calibration of the dynamical system is difficult because it turns out to be a multidimensional non-convex optimization problem. An original contribution of this paper is to propose a statistical methodology, Multi-Scenarios Parameters Estimation (MSPE), for the parametrization of potentially complex mechanistic models from a new type of dataset (climatic data and final yield in many situations). It is tested with CORNFLO, a crop model for maize growth. On the other hand, the data-driven approach to yield prediction is free of the complex biophysical process, but it has strict requirements on the dataset. A second contribution of the paper is the comparison of these model-driven methods with classical data-driven methods. For this purpose, we consider two classes of regression methods: methods derived from linear regression (Ridge and Lasso regression, principal components regression, and partial least squares regression) and machine learning methods (random forest, k-nearest neighbour, artificial neural network, and SVM regression). The dataset consists of 720 records of corn yield at county scale provided by the United States Department of Agriculture (USDA) and the associated climatic data. A 5-fold cross-validation process and two accuracy metrics, root mean square error of prediction (RMSEP) and mean absolute error of prediction (MAEP), were used to evaluate the crop prediction capacity. The results show that, among the data-driven approaches, random forest is the most robust and generally achieves the best prediction error (MAEP 4.27%). It also outperforms our model-driven approach (MAEP 6.11%). However, the ability to calibrate the mechanistic model from easily accessible datasets offers several side perspectives: the mechanistic model can potentially help to underline the stresses suffered by the crop or to identify biological parameters of interest for breeding purposes. For this reason, an interesting perspective is to combine these two types of approaches.
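
A minimal sketch of the data-driven evaluation protocol: a random forest regressor assessed with 5-fold cross-validation and a mean-absolute-error score. The features and yields are synthetic stand-ins, not the USDA county records used in the paper.

```python
# Sketch of the data-driven protocol: RandomForestRegressor evaluated with
# 5-fold cross-validation and mean absolute error. Features and yields are
# synthetic stand-ins, not the 720 USDA county records used in the paper.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold, cross_val_score

rng = np.random.default_rng(0)
n = 720
climate = rng.normal(size=(n, 10))       # e.g., monthly temperature/rainfall summaries
yield_t = 9.0 + climate[:, 0] - 0.5 * climate[:, 3] + 0.3 * rng.normal(size=n)

model = RandomForestRegressor(n_estimators=300, random_state=0)
cv = KFold(n_splits=5, shuffle=True, random_state=0)
mae = -cross_val_score(model, climate, yield_t, cv=cv,
                       scoring="neg_mean_absolute_error")
print("MAE per fold:", np.round(mae, 3), "mean:", round(mae.mean(), 3))
```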

Keywords: crop yield prediction, crop model, sensitivity analysis, parameter estimation, particle swarm optimization, random forest

Procedia PDF Downloads 231
1998 Incidence, Pattern and Risk Factors of Congenital Heart Diseases in Neonates in a Tertiary Care Hospital: An Egyptian Study

Authors: Gehan Hussein, Hams Ahmad, Baher Matta, Yasmeen Mansi, Mohamad Fawzi

Abstract:

Background: Congenital heart disease (CHD) is a common problem worldwide, with variable incidence in different countries. The exact etiology is unknown but is suggested to be multifactorial. We aimed to study the incidence of various CHDs in a neonatal intensive care unit (NICU) of a tertiary care hospital in Egypt and the possible associations with various risk factors. Methods: A prospective study was conducted over a period of one year (2013/2014) at the NICU of Kasr Al Aini School of Medicine, Cairo University. A questionnaire about possible maternal and/or paternal risk factors for CHD, clinical examination, and bedside echocardiography were carried out. Cases were classified into two groups: group 1 without CHD and group 2 with CHD. Results: Of the 723 neonates admitted to the NICU, 180 cases were proved to have CHD, 58% of them males. Patent ductus arteriosus (PDA) was the most common CHD (70%), followed by atrial septal defect (ASD, 8%), while Fallot tetralogy and single ventricle were the least common (0.45% each). CHD was found in 30% of consanguineous parents. Maternal age ≥ 35 years at the time of conception was associated with an increased incidence of PDA (p = 0.45). Maternal diabetes and insulin intake were significantly associated with cases of CHD (p = 0.02 and 0.001, respectively); maternal hypertension and hypothyroidism were both associated with VSD, but the differences did not reach statistical significance (p = 0.36 and 0.44, respectively). Maternal passive smoking was significantly associated with PDA (p = 0.03). Conclusion: The most frequent CHD in the studied population was PDA, followed by ASD. Maternal conditions such as diabetes were associated with VSD occurrence.

Keywords: NICU, risk factors, congenital heart disease, echocardiography

Procedia PDF Downloads 191
1997 An Explanatory Study into the Information-Seeking Behaviour of Egyptian Beggars

Authors: Essam Mansour

Abstract:

The key purpose of this study is to provide first-hand information about beggars in Egypt, especially from the perspective of their information-seeking behaviour, including their information needs. The researcher investigates the information-seeking behaviour of Egyptian beggars with regard to their thoughts, perceptions, motivations, attitudes, habits, and preferences, as well as the challenges that may impede their use of information. The research method used was an adapted form of snowball sampling of a heterogeneous demographic group of participants in beggary in Egypt. This sampling was used to select focus groups to explore a range of relevant issues. Data on the demographic characteristics of the Egyptian beggars showed that they tend to be men, mostly with no formal education, with an average age in the 30s, labeled as low-income persons, mostly single, and mostly Muslims. A large number of Egyptian beggars were seeking information to meet their basic as well as daily needs, although some of them were not able to identify their information needs clearly. The information-seeking behaviour profile of a very large number of Egyptian beggars indicated a preference for informal sources of information over formal ones to solve different problems and to meet the challenges they face during beggary, relying on assistive devices such as mobile phones. The high degree of illiteracy and the lack of awareness about basic information rights, as well as about information needs, were the most important problems Egyptian beggars face in accessing information. The study recommends further research into the role of the library in the education of beggars. It also recommends that beggars' awareness of their information rights be promoted through educational programs that help them value the role of information in their lives.

Keywords: user studies, information-seeking behaviour, information needs, information sources, beggars, Egypt

Procedia PDF Downloads 319
1996 Modeling of Anode Catalyst against CO in Fuel Cell Using Material Informatics

Authors: M. Khorshed Alam, H. Takaba

Abstract:

The catalytic properties of a metal usually change upon intermixing with another metal in polymer electrolyte fuel cells. Pt-Ru alloy is one of the most discussed alloys for enhancing CO oxidation. In this work, we have investigated the CO coverage on Pt2Ru3 nanoparticles with different atomic conformations of Pt and Ru using a combination of material informatics and computational chemistry. Density functional theory (DFT) calculations were used to describe the adsorption strength of CO and H for different Pt/Ru conformations on the Pt2Ru3 slab surface. Then, through Monte Carlo (MC) simulations, we examined the segregation behaviour of Pt as a function of the surface atom ratio, the subsurface atom ratio, and the particle size of the Pt2Ru3 nanoparticle. We constructed a regression equation so as to reproduce the DFT results from the structural descriptors only. Descriptors were selected for the regression equation: xa-b indicates the number of bonds between the targeted atom a and a neighboring atom b in the same layer (a, b = Pt or Ru); the terms xa-H2 and xa-CO represent the number of atoms a binding H2 and CO molecules, respectively; xa-S is the number of atoms a on the surface; and xa-b- is the number of bonds between atom a and a neighboring atom b located outside the layer. The surface segregation in alloy nanoparticles is influenced by their component elements, composition, crystal lattice, shape, and size, as well as the nature of the adsorbates and their pressure, temperature, etc. Simulations were performed on nanoparticles of different sizes (2.0 nm, 3.0 nm) with Pt and Ru atoms mixed in different conformations at a temperature of 333 K. In addition to the Pt2Ru3 alloy, we also considered pure Pt and Ru nanoparticles to compare the surface coverage by adsorbates (H2, CO). We assumed that the pure and Pt-Ru alloy nanoparticles have an fcc crystal structure as well as a cubo-octahedral shape, which is bounded by (111) and (100) facets. Simulations were performed for up to 50 million MC steps. The MC results show that, in the presence of the gases (H2, CO), the surfaces are occupied by the gas molecules, and in the equilibrium structure the coverage of H and CO depends on the nature of the surface atoms. In the initial structure, the Pt/Ru ratios on the surfaces for different cluster sizes were in the range of 0.50-0.95. MC simulations were performed with partial pressures of H2 (PH2) and CO (PCO) of 70 kPa and 100-500 ppm, respectively. The Pt/Ru ratio decreases as the CO concentration increases, with little exception only for the small nanoparticle. The adsorption strength of CO on the Ru site is higher than on the Pt site, which would be one of the reasons for the decreasing Pt/Ru ratio on the surface. Therefore, our study identifies that controlling the nanoparticle size, composition, conformation of the alloying atoms, and the concentration and chemical potential of the adsorbates has an impact on the stability of nanoparticle alloys and, ultimately, on the overall catalytic performance during operation.

Keywords: anode catalysts, fuel cells, material informatics, Monte Carlo

Procedia PDF Downloads 192
1995 Supplier Risk Management: A Multivariate Statistical Modelling and Portfolio Optimization Based Approach for Supplier Delivery Performance Development

Authors: Jiahui Yang, John Quigley, Lesley Walls

Abstract:

In this paper, the authors develop a stochastic model regarding the investment in supplier delivery performance development from a buyer’s perspective. The authors propose a multivariate model through a Multinomial-Dirichlet distribution within an Empirical Bayesian inference framework, representing both the epistemic and aleatory uncertainties in deliveries. A closed form solution is obtained and the lower and upper bound for both optimal investment level and expected profit under uncertainty are derived. The theoretical properties provide decision makers with useful insights regarding supplier delivery performance improvement problems where multiple delivery statuses are involved. The authors also extend the model from a single supplier investment into a supplier portfolio, using a Lagrangian method to obtain a theoretical expression for an optimal investment level and overall expected profit. The model enables a buyer to know how the marginal expected profit/investment level of each supplier changes with respect to the budget and which supplier should be invested in when additional budget is available. An application of this model is illustrated in a simulation study. Overall, the main contribution of this study is to provide an optimal investment decision making framework for supplier development, taking into account multiple delivery statuses as well as multiple projects.
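
A minimal sketch of the Multinomial-Dirichlet updating that underlies the model: observed counts of delivery statuses (e.g., early / on-time / late) update a Dirichlet prior, giving posterior expected probabilities for each status. The statuses, prior, and counts are invented, and the paper's empirical Bayesian estimation and investment/portfolio optimization are not reproduced here.

```python
# Minimal Multinomial-Dirichlet sketch: counts of delivery statuses update a
# Dirichlet prior into a posterior over status probabilities. The statuses,
# prior and counts are invented; the paper's empirical-Bayes estimation and
# the investment/portfolio optimisation are not reproduced here.
import numpy as np

statuses = ["early", "on_time", "late"]
prior_alpha = np.array([1.0, 1.0, 1.0])      # symmetric Dirichlet prior
observed = np.array([4, 38, 8])              # hypothetical delivery counts

posterior_alpha = prior_alpha + observed
posterior_mean = posterior_alpha / posterior_alpha.sum()

for s, p in zip(statuses, posterior_mean):
    print(f"P({s}) ~ {p:.3f}")

# The posterior predictive for the next delivery follows the same probabilities,
# and uncertainty shrinks as more deliveries are observed (larger alpha sum).
```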

Keywords: decision making, empirical bayesian, portfolio optimization, supplier development, supply chain management

Procedia PDF Downloads 288
1994 Comparison of the Polyphenolic Profile of a Berry from Two Different Sources, Using an Optimized Extraction Method

Authors: G. Torabian, A. Fathi, P. Valtchev, F. Dehghani

Abstract:

The superior polyphenol content of Sambucus nigra berries has high health potential for the production of nutraceutical products. Numerous factors influence the polyphenol content of the final products, including the berries' source and the subsequent processing steps. The aim of this study is to compare the polyphenol content of berries from two different sources and also to optimise the polyphenol extraction process from elderberries. Berries from source B had more acceptable physical properties than those from source A; a single berry from source B was double the size and weight (both wet and dry weight) of a source A berry. Despite the appropriate physical characteristics of the source B berries, their polyphenolic profile was inferior, as source A berries had a 2.3-fold higher total anthocyanin content and nearly two times greater total phenolic content and total flavonoid content than source B. Moreover, the results of this study showed that almost 50 percent of the phenolic content of the berries is entrapped within their skin and pulp and potentially cannot be extracted by press juicing. To address this challenge and to increase the total polyphenol yield of the extract, we used a cold-shock blade grinding method to break the cell walls. The results of this study showed that using cultivars with higher phenolic content, as well as using the whole fruit including juice, skin and pulp, can increase the polyphenol yield significantly and thus may boost the potential of using elderberries as therapeutic products.

Keywords: different sources, elderberry, grinding, juicing, polyphenols

Procedia PDF Downloads 294
1993 A Methodology for Automatic Diversification of Document Categories

Authors: Dasom Kim, Chen Liu, Myungsu Lim, Su-Hyeon Jeon, ByeoungKug Jeon, Kee-Young Kwahk, Namgyu Kim

Abstract:

Recently, numerous documents, including unstructured data and text, have been created due to the rapid increase in the usage of social media and the Internet. Each document is usually assigned a specific category for the convenience of users. In the past, categorization was performed manually; however, manual categorization not only fails to guarantee accuracy but also requires a large amount of time and high costs. Many studies have been conducted towards the automatic creation of categories to overcome the limitations of manual categorization. Unfortunately, most of these methods cannot be applied to categorizing complex documents with multiple topics because they work by assuming that one document can be categorized into one category only. In order to overcome this limitation, some studies have attempted to categorize each document into multiple categories. However, they are also limited in that their learning process involves training with a multi-categorized document set. These methods therefore cannot be applied to the multi-categorization of most documents unless multi-categorized training sets are provided. To overcome the requirement of a multi-categorized training set imposed by traditional multi-categorization algorithms, we previously proposed a new methodology that can extend the category of a single-categorized document to multiple categories by analyzing the relationships among categories, topics, and documents. In this paper, we design a survey-based verification scenario for estimating the accuracy of our automatic categorization methodology.

Keywords: big data analysis, document classification, multi-category, text mining, topic analysis

Procedia PDF Downloads 272
1992 High-Dimensional Single-Cell Imaging Maps Inflammatory Cell Types in Pulmonary Arterial Hypertension

Authors: Selena Ferrian, Erin Mccaffrey, Toshie Saito, Aiqin Cao, Noah Greenwald, Mark Robert Nicolls, Trevor Bruce, Roham T. Zamanian, Patricia Del Rosario, Marlene Rabinovitch, Michael Angelo

Abstract:

Recent experimental and clinical observations are advancing immunotherapies to clinical trials in pulmonary arterial hypertension (PAH). However, comprehensive mapping of the immune landscape in pulmonary arteries (PAs) is necessary to understand how immune cell subsets interact to induce pulmonary vascular pathology. We used multiplexed ion beam imaging by time-of-flight (MIBI-TOF) to interrogate the immune landscape in PAs from idiopathic (IPAH) and hereditary (HPAH) PAH patients. Massive immune infiltration in I/HPAH was observed with intramural infiltration linked to PA occlusive changes. The spatial context of CD11c+DCs expressing SAMHD1, TIM-3 and IDO-1 within immune-enriched microenvironments and neutrophils were associated with greater immune activation in HPAH. Furthermore, CD11c-DC3s (mo-DC-like cells) within a smooth muscle cell (SMC) enriched microenvironment were linked to vessel score, proliferating SMCs, and inflamed endothelial cells. Experimental data in cultured cells reinforced a causal relationship between neutrophils and mo-DCs in mediating pulmonary arterial SMC proliferation. These findings merit consideration in developing effective immunotherapies for PAH.

Keywords: pulmonary arterial hypertension, vascular remodeling, indoleamine 2-3-dioxygenase 1 (IDO-1), neutrophils, monocyte-derived dendritic cells, BMPR2 mutation, interferon gamma (IFN-γ)

Procedia PDF Downloads 173
1991 Configuration as a Service in Multi-Tenant Enterprise Resource Planning System

Authors: Mona Misfer Alshardan, Djamal Ziani

Abstract:

Enterprise resource planning (ERP) systems are organizations' tickets to the global market. With the implementation of ERP, organizations can manage and coordinate all functions, processes, resources, and data from different departments with a single software system. However, many organizations consider the cost of traditional ERP to be expensive and look for alternative affordable solutions within their budget. One of these alternative solutions is providing ERP over a software-as-a-service (SaaS) model, which can be considered a cost-effective solution compared to the traditional ERP system. A key feature of any SaaS system is the multi-tenancy architecture, where multiple customers (tenants) share the system software. However, different organizations have different requirements. Thus, SaaS developers accommodate each tenant's unique requirements by allowing tenant-level customization or configuration. While customization requires source code changes and, in most cases, programming experience, the configuration process allows users to change many features within a predefined scope in an easy and controlled manner. The literature provides many techniques to accomplish the configuration process in different SaaS systems. However, the nature and complexity of SaaS ERP demand more attention to the details of the configuration process, which is only briefly described in previous research. Thus, this research builds on strong knowledge of configuration in SaaS to define specifically the configuration borders in SaaS ERP and to design a configuration service that takes the different configuration aspects into consideration. The proposed architecture ensures the ease of the configuration process by using wizard technology, while privacy and performance are guaranteed by adopting a database isolation technique.

Keywords: configuration, software as a service, multi-tenancy, ERP

Procedia PDF Downloads 393
1990 Study of Aqueous Solutions: A Dielectric Spectroscopy Approach

Authors: Kumbharkhane Ashok

Abstract:

Time domain dielectric relaxation spectroscopy (TDRS) probes the interaction of a macroscopic sample with a time-dependent electrical field. The resulting complex permittivity spectrum characterizes the amplitude (voltage) and time scale of the charge-density fluctuations within the sample. These fluctuations may arise from the reorientation of the permanent dipole moments of individual molecules or from the rotation of dipolar moieties in flexible molecules, like polymers. The time scale of these fluctuations depends on the sample and its relaxation mechanism. Relaxation times range from a few picoseconds in low-viscosity liquids to hours in glasses; therefore, the DRS technique covers an extensive range of dynamical processes, with a corresponding frequency range from 10⁻⁴ Hz to 10¹² Hz. This inherent ability to monitor the cooperative motion of a molecular ensemble distinguishes dielectric relaxation from methods like NMR or Raman spectroscopy, which yield information on the motions of individual molecules. An experimental setup for the time domain reflectometry (TDR) technique from 10 MHz to 30 GHz has been developed for aqueous solutions. This technique is very simple and covers a wide band of frequencies in a single measurement. Dielectric relaxation spectroscopy is especially sensitive to intermolecular interactions. The complex permittivity spectra of aqueous solutions have been fitted using the Cole-Davidson (CD) model to determine the static dielectric constants and relaxation times over the entire concentration range. The heterogeneous molecular interactions in aqueous solutions have been discussed through the Kirkwood correlation factor and excess properties.
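
The Cole-Davidson model used for the fits expresses the complex permittivity as ε*(ω) = ε∞ + (ε0 − ε∞)/(1 + iωτ)^β. The sketch below evaluates that expression over the 10 MHz to 30 GHz band for illustrative, roughly water-like parameter values; these are not the study's fitted results.

```python
# Cole-Davidson model: eps*(w) = eps_inf + (eps_0 - eps_inf) / (1 + 1j*w*tau)**beta.
# Parameter values are only illustrative (roughly water-like), not the study's
# fitted results for the aqueous solutions.
import numpy as np

def cole_davidson(freq_hz, eps_0, eps_inf, tau, beta):
    """Complex permittivity of the Cole-Davidson relaxation model."""
    omega = 2 * np.pi * freq_hz
    return eps_inf + (eps_0 - eps_inf) / (1 + 1j * omega * tau) ** beta

freq = np.logspace(7, np.log10(30e9), 200)     # 10 MHz to 30 GHz
eps = cole_davidson(freq, eps_0=78.4, eps_inf=5.2, tau=8.3e-12, beta=0.95)

loss = -eps.imag                               # dielectric loss eps''
loss_peak = freq[np.argmax(loss)]              # frequency of maximum loss
print(f"loss peak near {loss_peak / 1e9:.1f} GHz, "
      f"static eps' = {eps.real[0]:.1f}, high-frequency eps' = {eps.real[-1]:.1f}")
```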

Keywords: liquid, aqueous solutions, time domain reflectometry

Procedia PDF Downloads 444
1989 Evaluation of DNA Oxidation and Chemical DNA Damage Using Electrochemiluminescent Enzyme/DNA Microfluidic Array

Authors: Itti Bist, Snehasis Bhakta, Di Jiang, Tia E. Keyes, Aaron Martin, Robert J. Forster, James F. Rusling

Abstract:

DNA damage from metabolites of lipophilic drugs and pollutants, generated by enzymes, represents a major toxicity pathway in humans. These metabolites can react with DNA to form either 8-oxo-7,8-dihydro-2-deoxyguanosine (8-oxodG), which is the oxidative product of DNA, or covalent DNA adducts, both of which are genotoxic and hence considered important biomarkers for detecting cancer in humans. Therefore, detecting reactions of metabolites with DNA is an effective approach for the safety assessment of new chemicals and drugs. Here we describe a novel electrochemiluminescent (ECL) sensor array which can detect DNA oxidation and chemical DNA damage in a single array, facilitating a more accurate diagnostic tool for genotoxicity screening. Films of DNA and enzymes are assembled layer-by-layer on the pyrolytic graphite array, which is housed in a microfluidic device for sequential detection of the two types of DNA damage. Multiple enzyme reactions are run on test compounds using the array, generating toxic metabolites in situ. These metabolites react with DNA in the films to cause DNA oxidation and chemical DNA damage, which are detected by an ECL-generating osmium compound and a ruthenium polymer, respectively. The method is further validated by the formation of 8-oxodG and DNA adducts using similar DNA/enzyme films on magnetic bead biocolloid reactors, hydrolyzing the DNA, and analyzing it by liquid chromatography-mass spectrometry (LC-MS). Hence, this combined DNA/enzyme array/LC-MS approach can efficiently explore metabolic genotoxic pathways for drugs and environmental chemicals.

Keywords: biosensor, electrochemiluminescence, DNA damage, microfluidic array

Procedia PDF Downloads 367