Search results for: acid number
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 12910

8980 Use of RAPD and ISSR Markers in Detection of Genetic Variation among Colletotrichum falcatum Went Isolates from South Gujarat India

Authors: Prittesh Patel, Rushabh Shah, Krishnamurthy Ramar, Vakulbhushan Bhaskar

Abstract:

The present research work aims at finding genetic differences in the genomes of sugarcane red rot isolates of Colletotrichum falcatum Went using Random Amplified Polymorphic DNA (RAPD) and inter-simple sequence repeat (ISSR) molecular markers. Ten C. falcatum isolates obtained from the stalks of different red-rot-infected sugarcane cultivars were used in the present study. Amplified bands were scored across the lanes obtained with 15 RAPD primers and 21 ISSR primers. The data were analysed using NTSYSpc 2.2 software. The results showed 80.6% and 68.07% polymorphism in the RAPD and ISSR analyses, respectively. Based on the RAPD analysis, the ten genotypes were grouped into two major clusters at a cut-off value of 0.75. The geographically distant C. falcatum isolate cfGAN from south Gujarat showed a level of similarity with the Coimbatore isolate cf8436, and the two were placed on a separate clade of the bootstrapped dendrograms. The first and second clusters consisted of five and three isolates, respectively, indicating a close relationship among them. The 21 ISSR primers produced 119 distinct and scorable loci, of which 38 were monomorphic. The number of scorable loci per primer varied from 2 (ISSR822) to 8 (ISSR807, ISSR823 and ISSR15), with an average of 5.66 loci per primer. Primer ISSR835 amplified the highest number of bands (57), while only 16 bands were obtained with primer ISSR822. Four primers, namely ISSR830, ISSR845, ISSR4 and ISSR15, showed the highest percentage of polymorphism (100%). The results indicated that both marker systems, RAPD and ISSR, can individually be used effectively to determine the genetic relationships among C. falcatum accessions collected from different parts of south Gujarat.
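The band-scoring arithmetic behind the reported figures (percentage polymorphism across loci, similarity between isolates) can be sketched in a few lines. This is an illustrative Python sketch: the isolate names are taken from the abstract, but the band-presence data are made up, and Jaccard similarity is an assumption (the coefficient actually configured in NTSYSpc is not stated in the abstract).

```python
# Hypothetical band-presence matrix: rows = isolates, columns = scored loci
# (1 = band present, 0 = absent). Values are illustrative only.
bands = {
    "cfGAN":  [1, 0, 1, 1, 0, 1],
    "cf8436": [1, 0, 1, 0, 0, 1],
    "cfNAV":  [0, 1, 1, 1, 1, 0],
}

def percent_polymorphism(matrix):
    """A locus is polymorphic if not all isolates share the same band state."""
    loci = list(zip(*matrix.values()))
    polymorphic = sum(1 for locus in loci if len(set(locus)) > 1)
    return 100.0 * polymorphic / len(loci)

def jaccard_similarity(a, b):
    """Similarity between two isolates from shared band presence."""
    both = sum(1 for x, y in zip(a, b) if x == 1 and y == 1)
    either = sum(1 for x, y in zip(a, b) if x == 1 or y == 1)
    return both / either if either else 0.0
```

On this toy matrix, 5 of the 6 loci are polymorphic, and cfGAN and cf8436 share 3 of their 4 combined bands.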

Keywords: Colletotrichum falcatum, ISSR, RAPD, Red Rot

Procedia PDF Downloads 345
8979 Heterogeneous Catalytic Hydroesterification of Soybean Oil to Develop a Biodiesel Formation

Authors: O. Mowla, E. Kennedy, M. Stockenhuber

Abstract:

Finding alternative renewable energy resources has attracted attention as a consequence of the limitations of traditional fossil fuel resources, increasing crude oil prices and environmental concern over greenhouse gas emissions. Biodiesel, or Fatty Acid Methyl Esters (FAME), an alternative energy source, is synthesised from renewable sources such as vegetable oils and animal fats and can also be produced from waste oils. FAME can be produced via hydroesterification of oils, a process that involves two stages. In the first stage, fatty acids and glycerol are obtained by hydrolysis of the feedstock oil. In the second stage, the recovered fatty acids are esterified with an alcohol to methyl esters. The presence of a catalyst accelerates the rate of the hydroesterification reactions. The overarching aim of this study is to determine the effect of using zeolite as a catalyst in the heterogeneous hydroesterification of soybean oil. Both stages of the catalytic hydroesterification of soybean oil were conducted at atmospheric and high-pressure conditions using a reflux glass reactor and a Parr reactor, respectively. The effect of operating parameters such as temperature and reaction time on the overall yield of biodiesel was also investigated.

Keywords: biodiesel, heterogeneous catalytic hydroesterification, soybean oil, zeolite

Procedia PDF Downloads 417
8978 Identification of Disease Causing DNA Motifs in Human DNA Using Clustering Approach

Authors: G. Tamilpavai, C. Vishnuppriya

Abstract:

Studying DNA (deoxyribonucleic acid) sequences is useful in understanding biological processes and is applied in fields such as diagnostic and forensic research. DNA carries the hereditary information of humans and almost all other organisms and is passed on to their offspring. Early detection of defective DNA sequences may lead to many developments in the field of bioinformatics. Nowadays, various tedious techniques are used to identify defective DNA. The proposed work analyses a given sequence to identify cancer-causing DNA motifs. Initially, the human DNA sequence is separated into k-mers using a k-mer separation rule. The separated k-mers are then clustered using a Self-Organizing Map (SOM). Using the Levenshtein distance measure, cancer-associated DNA motifs are identified from the k-mer clusters. Experimental results of this work indicate the presence or absence of a cancer-causing DNA motif: if a cancer-associated motif is found in the DNA, the input is declared a cancer-disease-causing DNA sequence; otherwise, it is declared a normal sequence. Finally, the elapsed time for finding the cancer-causing motif via cluster formation is calculated and compared with that of the direct search process; locating the cancer-associated motif proves easier with the clustering approach. The proposed work will serve as an initial aid for research related to genetic diseases.
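Two of the pipeline's core ingredients — k-mer separation and the Levenshtein distance used to match k-mers against a known motif — can be sketched directly (the SOM clustering step is omitted here). The motif and sequence below are illustrative placeholders, not actual cancer-associated motifs.

```python
def kmers(seq, k):
    """Split a DNA sequence into overlapping k-mers."""
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

def levenshtein(a, b):
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,            # deletion
                           cur[j - 1] + 1,         # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

# Flag k-mers within a small edit distance of a known motif (both made up).
motif = "GATTACA"
sequence = "TTGATTACAGG"
hits = [m for m in kmers(sequence, len(motif)) if levenshtein(m, motif) <= 1]
```

Here only the exact occurrence of the motif survives the distance threshold, so `hits` contains the single k-mer `"GATTACA"`.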

Keywords: bioinformatics, cancer motif, DNA, k-mers, Levenshtein distance, SOM

Procedia PDF Downloads 175
8977 Ordinal Regression with Fenton-Wilkinson Order Statistics: A Case Study of an Orienteering Race

Authors: Joonas Pääkkönen

Abstract:

In sports, individuals and teams are typically interested in final rankings. Final results, such as times or distances, dictate these rankings, also known as places. Places can be further associated with ordered random variables, commonly referred to as order statistics. In this work, we introduce a simple yet accurate order-statistical ordinal regression function that predicts relay race places from changeover times. We call this function the Fenton-Wilkinson Order Statistics model. The model is built on the educated assumption that individual leg times follow log-normal distributions. Moreover, our key idea is to utilize Fenton-Wilkinson approximations of changeover times alongside an estimator for the total number of teams, as in the classic German tank problem. This original place regression function is sigmoidal and thus correctly predicts the existence of a small number of elite teams that significantly outperform the rest. Our model also describes how place increases linearly with changeover time at the inflection point of the log-normal distribution function. With real-world data from Jukola 2019, a massive orienteering relay race, the model is shown to be highly accurate even when the size of the training set is only 5% of the whole data set. Numerical results also show that our model exhibits smaller place-prediction root-mean-square errors than linear regression, mord regression and Gaussian process regression.
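The two ingredients named in the abstract — a log-normal CDF giving the sigmoidal place curve and a German-tank-style estimate of the number of teams — can be sketched with the standard library alone. The parameters mu and sigma would in practice be fitted to training leg times; the values and observed places below are placeholders, so this is an illustration of the model's shape, not the authors' fitted model.

```python
import math

def lognormal_cdf(t, mu, sigma):
    """CDF of a log-normal distribution, evaluated via the error function."""
    return 0.5 * (1.0 + math.erf((math.log(t) - mu) / (sigma * math.sqrt(2.0))))

def german_tank_estimate(observed_places):
    """Classic estimator of the total count from a sample of observed ranks."""
    m, k = max(observed_places), len(observed_places)
    return m + m / k - 1

def predict_place(t, mu, sigma, n_teams):
    """Sigmoidal place regression: expected place at changeover time t."""
    return n_teams * lognormal_cdf(t, mu, sigma)

# Placeholder parameters: median changeover time exp(mu) = 90 minutes
mu, sigma = math.log(90.0), 0.25
n_teams = german_tank_estimate([120, 340, 610, 988])  # made-up observed places
```

At the median changeover time the predicted place is exactly half the estimated field, which is the inflection point where place grows linearly with time.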

Keywords: Fenton-Wilkinson approximation, German tank problem, log-normal distribution, order statistics, ordinal regression, orienteering, sports analytics, sports modeling

Procedia PDF Downloads 113
8976 The Impact of the New Head Injury Pathway on the Number of CTs Performed in a Paediatric Population

Authors: Amel M. A. Osman, Roy Mahony, Lisa Dann, S. McKenna

Abstract:

Background: Computed Tomography (CT) is a significant source of radiation in the paediatric population. A new head injury (HI) pathway was introduced in 2021, which replaced the previous practice of jointly admitting HI patients under general paediatrics and surgery with admission under the Emergency Medicine team. Admitted patients included those with positive CT findings not requiring immediate neurosurgical intervention and those who did not meet the current NICE criteria for urgent CT brain but were still symptomatic and required prolonged observation. This approach aims to decrease the number of CT scans performed. The main aim is to assess the variation in CT scanning rates since the change in the admission process. Methods: A retrospective review of patients presenting to CHI PECU with HI over a 6-month period (01/01/19-31/05/19) was compared to a 6-month period after the introduction of the new pathway (01/06/2022-31/12/2022). Data were collected from the electronic record databases, Symphony and PACS. Results: In 2019, there were 869 presentations of HI, among which 32 (3.68%) had CT scans performed; 2 (6.25%) of those scanned had positive findings. In 2022, there were 1122 HI presentations, with 47 (4.19%) CT scans performed and positive findings in 5 (10.6%) cases. Fifty-seven patients were admitted under the new pathway for observation, with 1 having a CT scan following admission. Conclusion: Quantitative lifetime radiation risks for children are not negligible. While there was no statistically significant reduction in CTs performed among HI presentations to our department, a significant group met the criteria for admission under the PECU consultant for prolonged monitoring. There was also a greater proportion of abnormalities on the CT scans performed in 2022, suggesting improved patient selection for imaging. Further data analysis is ongoing to determine whether those admitted would have been scanned under the old pathway.
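The "no statistically significant reduction" claim can be checked against the reported counts with a standard two-proportion z-test (the abstract does not state which test the authors used, so this is an independent check, not their analysis). A stdlib-only sketch:

```python
import math

def two_proportion_ztest(x1, n1, x2, n2):
    """Two-sided z-test for a difference in proportions (pooled estimate)."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# CT rates reported in the abstract: 32/869 (2019) vs 47/1122 (2022)
z, p = two_proportion_ztest(32, 869, 47, 1122)  # z ≈ -0.57, p ≈ 0.57
```

With p well above 0.05, the difference in scanning rates between the two periods is indeed not statistically significant, consistent with the abstract's conclusion.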

Keywords: head injury, CT, admission, guideline

Procedia PDF Downloads 37
8975 Basins of Attraction for Quartic-Order Methods

Authors: Young Hee Geum

Abstract:

We compare optimal quartic-order methods for the multiple zeros of nonlinear equations by illustrating their basins of attraction. To construct the basins of attraction effectively, we take a 600×600 uniform grid of points centred at the origin of the complex plane and paint each initial value in the basins of attraction with a different color according to the number of iterations required for convergence.
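The construction described above can be sketched with plain Newton iteration on z³ − 1 as a stand-in for the paper's quartic-order multiple-root methods: the procedure is the same — iterate each grid point, record which root it converges to (the color) and how many iterations it took (the shade).

```python
def newton_basin(z, max_iter=50, tol=1e-8):
    """Iterate Newton's method for z**3 - 1 from a starting point z.
    Returns (index of the root reached or None, iterations used)."""
    roots = [1, complex(-0.5, 3**0.5 / 2), complex(-0.5, -(3**0.5) / 2)]
    for n in range(max_iter):
        if abs(z) < tol:          # derivative 3z^2 vanishes; no convergence
            return None, max_iter
        z = z - (z**3 - 1) / (3 * z**2)
        for i, r in enumerate(roots):
            if abs(z - r) < tol:
                return i, n + 1   # (root index = color, iterations = shade)
    return None, max_iter

# Scan a small uniform grid (the paper uses 600x600); each cell gets a color.
grid = [[newton_basin(complex(x / 10, y / 10))[0]
         for x in range(-10, 11)] for y in range(-10, 11)]
```

Rendering `grid` with one color per root index (and brightness from the iteration count) reproduces the familiar fractal basin boundaries.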

Keywords: basins of attraction, convergence, multiple-root, nonlinear equation

Procedia PDF Downloads 243
8974 Building an Opinion Dynamics Model from Experimental Data

Authors: Dino Carpentras, Paul J. Maher, Caoimhe O'Reilly, Michael Quayle

Abstract:

Opinion dynamics is a sub-field of agent-based modeling that focuses on people's opinions and their evolution over time. Despite the rapid increase in the number of publications in this field, it is still not clear how to apply these models to real-world scenarios. Indeed, there is no agreement on how people update their opinions while interacting. Furthermore, it is not clear whether different topics will show the same dynamics (e.g., more polarized topics may behave differently). These problems are mostly due to the lack of experimental validation of the models. Some previous studies started bridging this gap in the literature by directly measuring people's opinions before and after an interaction. However, these experiments force people to express their opinion as a number instead of using natural language (and then, eventually, encoding it as a number). This is not the way people normally interact, and it may strongly alter the measured dynamics. Another limitation of these studies is that they usually average all topics together, without checking whether different topics show different dynamics. In our work, we collected data from 200 participants on 5 unpolarized topics. Participants expressed their opinions in natural language ("agree" or "disagree"). We also measured the certainty of their answers, expressed as a number between 1 and 10; however, this value was not shown to other participants, to keep the interaction based on natural language. We then showed the opinion (but not the certainty) of another participant and, after a distraction task, repeated the measurement. To make the data compatible with opinion dynamics models, we multiplied opinion and certainty to obtain a new parameter (here called "continuous opinion") ranging from -10 to +10 (using agree = 1 and disagree = -1). We first checked the 5 topics individually, finding that all of them behaved in a similar way despite having different initial opinion distributions. This suggests that the same model can be applied to different unpolarized topics. We also observed that people tend to maintain similar levels of certainty, even when they change their opinion. This strongly violates what is suggested by common models, where a person starting at, for example, +8 will first move towards 0 instead of jumping directly to -8. We also observed social influence: people exposed to "agree" were more likely to move to higher levels of continuous opinion, while people exposed to "disagree" were more likely to move to lower levels. However, the effect of influence was smaller than the effect of random fluctuations. This configuration, too, differs from standard models, where noise, when present, is usually much smaller than the effect of social influence. Starting from this, we built an opinion dynamics model that explains more than 80% of the data variance. The model was also able to show the natural emergence of polarization from unpolarized states. This experimental approach offers a new way to build models grounded in experimental data, and the model offers new insight into the fundamental terms of opinion dynamics models.
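The encoding used in the study, plus a toy update rule reflecting the reported dynamics (certainty largely preserved, weak social pull, stronger random fluctuation), can be sketched as follows. The influence and noise magnitudes are illustrative assumptions, not the fitted model parameters.

```python
import random

def continuous_opinion(response, certainty):
    """Map a verbal answer plus a 1-10 certainty onto the [-10, 10] scale
    used in the study (agree = +1, disagree = -1)."""
    assert response in ("agree", "disagree") and 1 <= certainty <= 10
    return certainty if response == "agree" else -certainty

def update(opinion, neighbor_response, influence=0.5, noise_sd=1.5,
           rng=random.Random(0)):
    """Toy update: a small pull toward the neighbor's stated side plus a
    larger random fluctuation, clipped to the scale. Parameter values are
    placeholders chosen so that noise dominates influence, as reported."""
    pull = influence if neighbor_response == "agree" else -influence
    return max(-10.0, min(10.0, opinion + pull + rng.gauss(0, noise_sd)))
```

Note the contrast with classical bounded-confidence updates, where the deterministic pull dominates and noise, if present at all, is small.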

Keywords: experimental validation, micro-dynamics rule, opinion dynamics, update rule

Procedia PDF Downloads 99
8973 Application of Particle Swarm Optimization to Thermal Sensor Placement for Smart Grid

Authors: Hung-Shuo Wu, Huan-Chieh Chiu, Xiang-Yao Zheng, Yu-Cheng Yang, Chien-Hao Wang, Jen-Cheng Wang, Chwan-Lu Tseng, Joe-Air Jiang

Abstract:

Dynamic Thermal Rating (DTR) provides crucial information by estimating the ampacity of transmission lines to improve power dispatching efficiency. To perform DTR, it is necessary to install on-line thermal sensors to monitor conductor temperature and weather variables. A simple and intuitive strategy is to allocate a thermal sensor to every span of the transmission lines, but the cost of the sensors might be too high to bear. To deal with the cost issue, a thermal sensor placement problem must be solved. This research proposes and implements a hybrid algorithm that combines proper orthogonal decomposition (POD) with particle swarm optimization (PSO). The proposed hybrid algorithm solves a multi-objective optimization problem that jointly minimizes the number of sensors and the error on conductor temperature, and the optimal sensor placement is determined simultaneously. The method uses data from 345 kV transmission lines and hourly weather data from the Taiwan Power Company and the Central Weather Bureau (CWB), respectively. The simulated results indicate that the number of sensors can be reduced using the proposed optimal placement method while an acceptable error on conductor temperature is achieved. This study provides power companies with a reliable reference for efficiently monitoring and managing their power grids.
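The PSO half of the hybrid can be illustrated with a minimal, self-contained optimizer. The actual sensor-placement objective (POD-based temperature-reconstruction error plus sensor count) is replaced here by a toy sphere function; the update itself — inertia, cognitive and social terms — is the standard PSO scheme.

```python
import random

def pso(objective, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimizer (toy stand-in for the paper's
    POD+PSO sensor-placement objective)."""
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                    # per-particle best positions
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # swarm-wide best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])   # cognitive
                             + c2 * r2 * (gbest[d] - pos[i][d]))     # social
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy objective: sphere function, minimum 0 at the origin
best, best_val = pso(lambda x: sum(v * v for v in x), dim=3)
```

For the placement problem, each particle would instead encode a candidate set of sensor locations, with the objective scoring reconstruction error and sensor count.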

Keywords: dynamic thermal rating, proper orthogonal decomposition, particle swarm optimization, sensor placement, smart grid

Procedia PDF Downloads 419
8972 Cutting Plane Methods for Integer Programming: NAZ Cut and Its Variations

Authors: A. Bari

Abstract:

Integer programming is a branch of mathematical programming techniques in operations research in which some or all of the variables are required to be integer-valued. Various cuts have been used to solve such problems. We have developed cuts known as the NAZ cut and the A-T cut to solve integer programming problems. These cuts are used to reduce the feasible region and thereby reach the optimal solution in a minimum number of steps.

Keywords: integer programming, NAZ cut, A-T cut, cutting plane method

Procedia PDF Downloads 349
8969 Microstructure of Iranian Processed Cheese

Authors: R. Ezzati, M. Dezyani, H. Mirzaei

Abstract:

The effects of the concentration of trisodium citrate (TSC) emulsifying salt (0.25 to 2.75%) and holding time (0 to 20 min) on the textural, rheological, and microstructural properties of Iranian processed Cheddar cheese were studied using a central composite rotatable design. The loss tangent parameter (from small-amplitude oscillatory rheology), extent of flow, and melt area (from the Schreiber test) all indicated that the meltability of the process cheese decreased with increased concentration of TSC and that longer holding times led to a slight reduction in meltability. Hardness increased as the concentration of TSC increased. Fluorescence micrographs indicated that the size of fat droplets decreased with an increase in the concentration of TSC and with longer holding times. Acid-base titration curves indicated that the buffering peak at pH 4.8, which is due to residual colloidal calcium phosphate, decreased as the concentration of TSC increased. The soluble phosphate content increased as the concentration of TSC increased, while the insoluble Ca decreased. The results of this study suggest that TSC chelated Ca from colloidal calcium phosphate and dispersed casein; the citrate-Ca complex remained trapped within the process cheese matrix. Increasing the concentration of TSC helped to improve fat emulsification and casein dispersion during cooking, both of which probably helped to reinforce the structure of the process cheese.

Keywords: Iranian processed cheese, cheddar cheese, emulsifying salt, rheology

Procedia PDF Downloads 431
8970 A Two-Phase Flow Interface Tracking Algorithm Using a Fully Coupled Pressure-Based Finite Volume Method

Authors: Shidvash Vakilipour, Scott Ormiston, Masoud Mohammadi, Rouzbeh Riazi, Kimia Amiri, Sahar Barati

Abstract:

Two-phase and multi-phase flows are common flow types in fluid mechanics engineering. Among the basic and applied problems of these flow types, two-phase parallel flow is one in which two immiscible fluids flow in the vicinity of each other. In this type of flow, fluid properties (e.g., density, viscosity, and temperature) differ on the two sides of the interface between the two fluids. The most challenging part of the numerical simulation of two-phase flow is to determine the location of the interface accurately. In the present work, a coupled interface tracking algorithm is developed based on the Arbitrary Lagrangian-Eulerian (ALE) approach using a cell-centered, pressure-based, coupled solver. To validate this algorithm, an analytical solution for fully developed two-phase flow in the presence of gravity is derived, and the results of the numerical simulation of this flow are compared with the analytical solution at various flow conditions. The results of the simulations show good accuracy of the algorithm despite the use of a relatively coarse, uniform grid. Temporal variations of the interface profile toward the steady-state solution show that a greater difference between the fluids' properties (especially dynamic viscosity) results in larger traveling waves. Gravity-effect studies also show that favorable gravity reduces the thickness of the heavier fluid, while adverse gravity increases it with respect to the zero-gravity condition; the magnitude of the variation under favorable gravity is much greater than under adverse gravity.

Keywords: coupled solver, gravitational force, interface tracking, Reynolds number to Froude number, two-phase flow

Procedia PDF Downloads 301
8969 An Adaptive Oversampling Technique for Imbalanced Datasets

Authors: Shaukat Ali Shahee, Usha Ananthakumar

Abstract:

A data set exhibits the class imbalance problem when one class has very few examples compared to the other; this is also referred to as between-class imbalance. Traditional classifiers fail to classify minority-class examples correctly due to their bias towards the majority class. Apart from between-class imbalance, within-class imbalance, where classes are composed of different numbers of sub-clusters each containing a different number of examples, also deteriorates the performance of the classifier. Many methods have previously been proposed for handling the imbalanced data set problem; they can be classified into four categories: data preprocessing, algorithmic methods, cost-based methods, and ensembles of classifiers. Data preprocessing techniques have shown great potential as they attempt to improve the data distribution rather than the classifier. A data preprocessing technique handles class imbalance either by increasing the minority-class examples or by decreasing the majority-class examples. Decreasing the majority-class examples leads to loss of information, and when the minority class has an absolute rarity, removing majority-class examples is generally not recommended. Existing methods for handling class imbalance do not address between-class and within-class imbalance simultaneously. In this paper, we propose a method that handles both simultaneously for the binary classification problem; doing so eliminates the bias of the classifier towards bigger sub-clusters by minimizing the domination of bigger sub-clusters in the total error. The proposed method uses model-based clustering to find the sub-clusters or sub-concepts present in the data set, and the number of examples oversampled in each sub-cluster is determined by the complexity of that sub-cluster. The method also takes into consideration the scatter of the data in the feature space and adaptively copes with unseen test data using the Lowner-John ellipsoid to increase the accuracy of the classifier. In this study, a neural network is used as the classifier, since it minimizes the total error, and removing between-class and within-class imbalance simultaneously helps it give equal weight to all sub-clusters irrespective of class. The proposed method is validated on 9 publicly available data sets and compared with three existing oversampling techniques that rely on the spatial location of minority-class examples in the Euclidean feature space. The experimental results show the proposed method to be statistically significantly superior to the other methods in terms of various accuracy measures. Thus, the proposed method can serve as a good alternative for problem domains such as credit scoring, customer churn prediction, and financial distress prediction that typically involve imbalanced data sets.
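The core allocation idea — growing each minority sub-cluster so that no sub-cluster dominates the total error — can be sketched with a simplified scheme. The paper sizes each sub-cluster's share by its complexity (and handles unseen data with Lowner-John ellipsoids); this sketch replaces that with an equal split, so it illustrates the allocation step only.

```python
import random

def oversample_subclusters(subclusters, target_total, rng=random.Random(0)):
    """Grow each minority sub-cluster to an equal share of target_total by
    sampling (with replacement) from its own examples."""
    share = target_total // len(subclusters)
    out = []
    for cluster in subclusters:
        out.extend(cluster)                    # keep all original examples
        out.extend(rng.choice(cluster)         # then top up to the share
                   for _ in range(max(0, share - len(cluster))))
    return out

# Two minority sub-clusters of very different sizes, balanced to 10 examples
minority = [[(0.0, 0.0), (0.1, 0.2)], [(5.0, 5.0)]]
balanced = oversample_subclusters(minority, target_total=10)
```

After balancing, the lone-example sub-cluster contributes as many training points as the larger one, so a classifier trained on the result no longer ignores it.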

Keywords: classification, imbalanced dataset, Lowner-John ellipsoid, model based clustering, oversampling

Procedia PDF Downloads 402
8968 The Impact of Artificial Intelligence on Pharmacy and Pharmacology

Authors: Mamdouh Milad Adly Morkos

Abstract:

Despite having the highest rates of mortality and morbidity in the world, low- and middle-income countries (LMICs) trail high-income nations in the number of clinical trials, the number of qualified researchers, and the amount of research information specific to their populations. Health inequities and the use of precision medicine may be hampered by a lack of local genomic data, clinical pharmacology and pharmacometrics competence, and training opportunities. These issues can be addressed by developing health care infrastructure, including data gathering and well-designed clinical pharmacology training in LMICs. International cooperation aimed at enhancing education and infrastructure and promoting locally motivated clinical trials and research will be advantageous. This paper outlines various instances where clinical pharmacology knowledge could be put to use, including pharmacogenomic opportunities that could lead to better clinical guideline recommendations. Examples of how clinical pharmacology training can be successfully implemented in LMICs are also provided, including clinical pharmacology and pharmacometrics training programmes in Africa and a Tanzanian researcher's personal experience during a training sabbatical in the United States. These training initiatives will profit from advocacy for clinical pharmacologists' employment prospects and career development pathways, which are gradually becoming acknowledged and established in LMICs. Advancing training and research infrastructure to increase clinical pharmacologists' expertise in LMICs would be extremely beneficial, as they have a significant role to play in global health.

Keywords: electromagnetic solar system, nano-material, nano pharmacology, pharmacovigilance, quantum theory, clinical simulation, education, pharmacology, simulation, virtual learning, low- and middle-income, clinical pharmacology, pharmacometrics, career development pathways

Procedia PDF Downloads 57
8965 Rheological Properties and Consumer Acceptability of Bread Supplemented with Flaxseed

Authors: A. Albaridi Najla

Abstract:

Flaxseed (Linum usitatissimum) is well known to have beneficial effects on health. The seeds are rich in protein, α-linolenic fatty acid and dietary fiber. Bakery products are an important part of our daily meals, and functional foods have recently received considerable attention among consumers. The increase in daily bread consumption has led to the production of breads with functional ingredients such as flaxseed. The aim of this study was to improve the nutritional value of bread by adding flaxseed flour and to assess the effect of adding 0, 5, 10 and 15% flaxseed on the rheological and sensory properties of whole wheat bread. The total consumer acceptability of the flaxseed bread was also assessed. Dough characteristics were determined using a Farinograph (C.W. Brabender® Instruments, Inc.). The results show no change in water absorption between the standard dough (without flaxseed) and the dough with flaxseed (67%). An increase in peak time and dough stickiness was observed as the flaxseed level increased. Further, the breads were evaluated for sensory parameters, colour and texture. A high flaxseed level increased the softness of the bread crumb. Bread with 5% flaxseed was optimal in the total sensory evaluation. Overall, the flaxseed bread produced in this study was highly acceptable for daily consumption as a functional food with potential health benefits.

Keywords: bread, flaxseed, rheological properties, whole-wheat bread

Procedia PDF Downloads 419
8966 A Gauge Repeatability and Reproducibility Study for Multivariate Measurement Systems

Authors: Jeh-Nan Pan, Chung-I Li

Abstract:

Measurement system analysis (MSA) plays an important role in helping organizations improve their product quality. Generally speaking, the gauge repeatability and reproducibility (GRR) study is performed according to the MSA handbook stated in the QS9000 standards. Usually, a GRR study assessing the adequacy of gauge variation needs to be conducted prior to process capability analysis. Traditional MSA considers only a single quality characteristic. With the advent of modern technology, industrial products have become very sophisticated, with more than one quality characteristic. Thus, it becomes necessary to perform multivariate GRR analysis for a measurement system when collecting data with multiple responses. In this paper, we take the correlation coefficients among tolerances into account to revise the multivariate precision-to-tolerance (P/T) ratio proposed by Majeske (2008). We then compare the performance of our revised P/T ratio with that of the existing ratios. The simulation results show that our revised P/T ratio outperforms the others in terms of robustness and proximity to the actual value. Moreover, the optimal allocation of several parameters, such as the number of quality characteristics (v), the sample size of parts (p), the number of operators (o) and the number of replicate measurements (r), is discussed using the confidence interval of the revised P/T ratio. Finally, a standard operating procedure (S.O.P.) for performing a GRR study for multivariate measurement systems is proposed based on the research results. Hopefully, it can serve as a useful reference for quality practitioners when conducting such studies in industry.
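For reference, the univariate P/T ratio that the multivariate version generalizes is a one-liner; the paper's revised multivariate ratio additionally folds in the correlation coefficients among tolerances and is not reproduced here.

```python
def pt_ratio(sigma_grr, tolerance_width, k=6.0):
    """Univariate precision-to-tolerance ratio: the fraction of the tolerance
    band consumed by measurement-system variation (k = 6 spans ~99.73% of a
    normal distribution). Ratios below ~0.1 are conventionally acceptable."""
    return k * sigma_grr / tolerance_width

# Example: gauge standard deviation 0.02 against a tolerance width of 2.0
ratio = pt_ratio(0.02, 2.0)  # 0.06 -> an acceptable measurement system
```

The multivariate extension must combine one such ratio per quality characteristic, which is exactly where the correlation structure of the tolerances matters.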

Keywords: gauge repeatability and reproducibility, multivariate measurement system analysis, precision-to-tolerance ratio

Procedia PDF Downloads 247
8965 Design of Experiment for Optimizing Immunoassay Microarray Printing

Authors: Alex J. Summers, Jasmine P. Devadhasan, Douglas Montgomery, Brittany Fischer, Jian Gu, Frederic Zenhausern

Abstract:

Immunoassays have been utilized for several applications, including the detection of pathogens. Our laboratory is developing a tier 1 biothreat panel utilizing Vertical Flow Assay (VFA) technology for the simultaneous detection of pathogens and toxins. One method of manufacturing VFA membranes is non-contact piezoelectric dispensing, which provides advantages such as low-volume and rapid dispensing without compromising the structural integrity of the antibody or substrate. Challenges of this process include premature discontinuation of dispensing and misaligned spotting. Preliminary data revealed that the Yp 11C7 mAb (11C7) reagent exhibits a large angle of failure during printing, which may have contributed to variable printing outputs. A Design of Experiment (DOE) was executed using this reagent to investigate the effects of hydrostatic pressure and reagent concentration on microarray printing outputs. A Nano-plotter 2.1 (GeSIM, Germany) was used to print antibody reagents onto nitrocellulose membrane sheets in a clean-room environment. A spotting plan was executed using Spot-Front-End software to dispense volumes of 11C7 reagent (20-50 droplets; 1.5-5 mg/mL) in a 6-test-spot array at 50 target membrane locations. Hydrostatic pressure was controlled by raising the Pressure Compensation Vessel (PCV) above, or lowering it below, our current working level. It was hypothesized that raising or lowering the PCV by 6 inches would be sufficient to cause either liquid accumulation at the tip or discontinued droplet formation. After aspirating the 11C7 reagent, we tested this hypothesis under a stroboscope; 75% of the effective raised PCV height and of our hypothesized lowered PCV height were used. Humidity (55%) was maintained using an Airwin BO-CT1 humidifier. The number and quality of membranes were assessed after staining the printed membranes with dye. The droplet angle of failure was recorded before and after printing to determine a "stroboscope score" for each run.
The DOE set was analyzed using JMP software. Hydrostatic pressure and reagent concentration had a significant effect on the number of membranes output. As hydrostatic pressure was increased by raising the PCV 3.75 inches or decreased by lowering the PCV 4.5 inches, membrane output decreased. However, with the hydrostatic pressure closest to equilibrium, at our current working level, membrane output reached the 50-membrane target. As the reagent concentration increased from 1.5 to 5 mg/mL, the membrane output also increased. Reagent concentration likely affected membrane output through the associated dispensing volume needed to saturate the membranes. However, only hydrostatic pressure had a significant effect on the stroboscope score, which could be due to discontinuation of dispensing leaving the stroboscope check no droplet to record. Our JMP predictive model had a high degree of agreement with our observed results: it predicted that dispensing the highest concentration of 11C7 at our current PCV working level would yield the highest number of quality membranes, which correlated with our results. Acknowledgements: This work was supported by the Chemical Biological Technologies Directorate (Contract # HDTRA1-16-C-0026) and Advanced Technology International (Contract # MCDC-18-04-09-002) from the Department of Defense Chemical and Biological Defense program through the Defense Threat Reduction Agency (DTRA).

Keywords: immunoassay, microarray, design of experiment, piezoelectric dispensing

Procedia PDF Downloads 165
8964 Sardine Oil as a Source of Lipid in the Diet of Giant Freshwater Prawn (Macrobrachium rosenbergii)

Authors: A. T. Ramachandra Naik, H. Shivananda Murthy, H. N. Anjanayappa

Abstract:

The freshwater prawn, Macrobrachium rosenbergii, is a popular crustacean widely cultured in monoculture systems in India. It has high nutritional value in the human diet; hence, understanding its enzymatic and body composition is important in order to judge its flesh quality. Fish oil, especially that derived from the Indian oil sardine, is a good source of highly unsaturated fatty acids and a lipid source in fish/prawn diets. A 35% crude protein diet with graded levels of sardine oil as the fat source was formulated at four inclusion levels, viz. 2.07, 4.07, 6.07 and 8.07%, maintaining total dietary lipid levels of 8.11, 10.24, 12.28 and 14.33%, respectively. A diet without sardine oil (6.05% total lipid) served as the basal treatment. The giant freshwater prawn, Macrobrachium rosenbergii, was used as the test animal and the experiment lasted 112 days. Significantly higher weight gain was recorded in the treatment with 6.07% sardine oil, along with higher specific growth rate, food conversion ratio and protein efficiency ratio. The 8.07% sardine oil diet produced the highest RNA:DNA ratio in the prawn muscle. Digestive enzyme analyses of the digestive tract and mid-gut gland showed the greatest activity in prawns fed the 8.07% diet.
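The growth and feed-efficiency indices reported above (specific growth rate, feed conversion, protein efficiency) follow standard aquaculture definitions; a minimal sketch, with hypothetical trial numbers (the function names and example weights are illustrative, not the study's data):

```python
import math

def specific_growth_rate(w_initial, w_final, days):
    """SGR (%/day): instantaneous growth rate from initial and final wet weight."""
    return 100.0 * (math.log(w_final) - math.log(w_initial)) / days

def feed_conversion_ratio(feed_given, weight_gain):
    """FCR: dry feed fed per unit wet weight gained (lower is better)."""
    return feed_given / weight_gain

def protein_efficiency_ratio(weight_gain, protein_fed):
    """PER: wet weight gain per unit protein fed (higher is better)."""
    return weight_gain / protein_fed

# Hypothetical 112-day trial: a prawn grows from 2.0 g to 20.0 g
# while consuming 36.0 g of a 35% crude-protein diet.
sgr = specific_growth_rate(2.0, 20.0, 112)          # ~2.06 %/day
fcr = feed_conversion_ratio(36.0, 18.0)             # 2.0
per = protein_efficiency_ratio(18.0, 36.0 * 0.35)
```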

Keywords: digestive enzyme, fish diet, Macrobrachium rosenbergii, sardine oil

Procedia PDF Downloads 312
8963 Software Development for Both Small Wind Performance Optimization and Structural Compliance Analysis with International Safety Regulations

Authors: K. M. Yoo, M. H. Kang

Abstract:

Conventional commercial wind turbine design software is limited to large wind turbines because it does not incorporate the low Reynolds number aerodynamic characteristics typical of small wind turbines. To extract the maximum annual energy product from an intermediately designed small wind turbine matched to measured wind data, numerous simulations are needed to find the best-fitting planform design with a proper airfoil configuration, since the optimal planform changes with the wind speed distribution and average wind speed. Though theoretically straightforward, finalizing the conceptual layout of a desired small wind turbine is an inconveniently time-consuming design procedure. Thus, to make these simulations easier and faster, a GUI software was developed to conveniently iterate and change airfoil types, wind data, and geometric blade data. With a magnetic generator torque curve, peak power tracking simulation is also available to better match the magnetic generator. Small wind turbines often lack starting torque due to blade optimization, so this simulation is also embedded, along with yaw design. The software provides various blade cross-section details at the user's design convenience, such as skin thickness control with a fiber direction option, spar shape, and their material properties. Since small wind turbines fall under international safety regulations covering fatigue damage during normal operation and safety load analyses under ultimate excessive loads, load analyses are provided for each category mandated in the safety regulations.

Keywords: GUI software, low Reynolds number aerodynamics, peak power tracking, safety regulations, wind turbine performance optimization

Procedia PDF Downloads 287
8962 Isolation and Chemical Characterization of Residual Lignin from Areca Nut Shells

Authors: Dipti Yadav, Latha Rangan, Pinakeswar Mahanta

Abstract:

Recent fuel-development strategies to reduce oil dependency, mitigate greenhouse gas emissions, and utilize domestic resources have generated interest in the search for alternative fuel supplies. Bioenergy production from lignocellulosic biomass has great potential. Cellulose, hemicellulose and lignin are the main constituents of wood and agro-waste, and processing industries always leave lignin over as a waste product. Due to the heterogeneous nature of wood and pulp fibers and the heterogeneity that exists between individual fibers, no method is currently available for the quantitative isolation of native or residual lignin without the risk of structural changes during isolation. The potential benefits of finding alternative uses for lignin are extensive, and with a double effect: lignin can replace fossil-based raw materials in a wide range of products, from plastics to individual chemical products, activated carbon, motor fuels and carbon fibers; furthermore, if there is a market for lignin in such value-added products, mills also gain an additional economic incentive to take measures for higher energy efficiency. In this study, residual lignin was isolated from areca nut shells by acid hydrolysis, analyzed and characterized by Fourier Transform Infrared (FTIR) spectroscopy and LC-MS, and the complexity of its structure was investigated by NMR.

Keywords: Areca nut, Lignin, wood, bioenergy

Procedia PDF Downloads 464
8961 Efficiency of Pre-Treatment Methods for Biodiesel Production from Mixed Culture of Microalgae

Authors: Malith Premarathne, Shehan Bandara, Kaushalya G. Batawala, Thilini U. Ariyadasa

Abstract:

The rapid depletion of fossil fuel supplies and the emission of carbon dioxide from their continued combustion have paved the way for increased production of carbon-neutral biodiesel from naturally occurring oil sources. The high biomass growth rate and lipid productivity of microalgae make them a viable source for biodiesel production compared to conventional feedstocks. In Sri Lanka, biodiesel production employing indigenous microalgae species is at an emerging stage. This work compared various pre-treatment methods applied before lipid extraction, such as autoclaving, microwaving and sonication. A mixed culture of microalgae predominantly consisting of Chlorella sp. was obtained from Beire Lake, an algae-rich, organically polluted water body located in Colombo, Sri Lanka. After each pre-treatment method, a standard solvent extraction using Bligh and Dyer's method was used to compare the total lipid content in percentage dry weight (% dwt). The fatty acid profiles of the oils extracted with each pre-treatment method were analyzed using gas chromatography-mass spectrometry (GC-MS). The properties of the biodiesels were predicted by Biodiesel Analyzer© Version 1.1 for comparison with the ASTM 6751-08 biodiesel standard.

Keywords: biodiesel, lipid extraction, microalgae, pre-treatment

Procedia PDF Downloads 162
8960 Ionic Polymer Actuators with Fast Response and High Power Density Based on Sulfonated Phthalocyanine/Sulfonated Polysulfone Composite Membrane

Authors: Taehoon Kwon, Hyeongrae Cho, Dirk Henkensmeier, Youngjong Kang, Chong Min Koo

Abstract:

Ionic polymer actuators have been of interest for bio-inspired artificial muscle devices. However, their relatively slow response and low power density have been obstacles to practical applications. In this study, ionic polymer actuators are fabricated with ionic polymer composite membranes based on sulfonated poly(arylene ether sulfone) (SPAES) and copper(II) phthalocyanine tetrasulfonic acid (CuPCSA). CuPCSA is an organic filler with a very high ion exchange capacity (IEC, 4.5 mmol H+/g) that can be homogeneously dispersed on the molecular scale into the SPAES membrane. SPAES/CuPCSA actuators show higher ionic conductivity, better mechanical properties, larger bending deformation, exceptionally faster response to electrical stimuli, and larger mechanical power density (3028 W m–3) than Nafion actuators. This outstanding actuation performance makes SPAES/CuPCSA composite membrane actuators attractive for next-generation transducers with high power density, such as the biomimetic devices currently being developed for endoscopic surgery.

Keywords: actuation performance, composite membranes, ionic polymer actuators, organic filler

Procedia PDF Downloads 261
8959 Biological Activities of Gentiana brachyphylla Vill. Herba from Turkey

Authors: Hulya Tuba Kiyan, Nilgun Ozturk

Abstract:

Gentiana, a member of the Gentianaceae, is represented by approximately 400 species in the world and 12 species in Turkey. Flavonoids, iridoids, triterpenoids and xanthones, the major compounds of this genus, have previously been reported to have anti-inflammatory, antimicrobial, antioxidant, hepatoprotective, hypotensive, hypoglycaemic, DNA repair and immunomodulatory properties. In the present study, the methanolic extract of the aerial parts of Gentiana brachyphylla Vill. from Turkey was evaluated for its biological activities and total phenolic content. According to the antioxidant activity results, the G. brachyphylla methanolic extract showed very strong anti-DNA-damage antioxidant activity, with an inhibition of 81.82%. It showed weak ferric-reducing power, with an EC50 value of 0.65, compared to BHT (EC50 = 0.2). At 0.5 mg/ml, the methanolic extract inhibited the ABTS radical cation by 20.13%, compared to Trolox (79.01%). The chelating ability of G. brachyphylla was 44.71%, whereas EDTA showed 78.87% chelating activity at 0.2 mg/ml. G. brachyphylla also showed weak AChE (27.21%) and BChE (20.23%), strong MAO-A (67.86%), moderate MAO-B (50.06%), and weak COX-1 (19.14%) and COX-2 (29.11%) inhibitory activities at 0.25 mg/ml. The total phenolic content of G. brachyphylla was 156.23 ± 2.73 mg gallic acid equivalent/100 g extract.

Keywords: antioxidant activity, cholinesterase inhibitory activity, Gentiana brachyphylla Vill., total phenolic content

Procedia PDF Downloads 186
8958 Factors Affecting Air Surface Temperature Variations in the Philippines

Authors: John Christian Lequiron, Gerry Bagtasa, Olivia Cabrera, Leoncio Amadore, Tolentino Moya

Abstract:

Changes in air surface temperature play an important role in the Philippines' economy, industry, health, and food production. While the increase in global mean temperature over recent decades has prompted a number of climate change and variability studies in the Philippines, most studies still focus on rainfall and tropical cyclones. This study aims to investigate the trend and variability of observed air surface temperature and determine its major influencing factors in the Philippines. A non-parametric Mann-Kendall trend test was applied to the monthly mean temperature of 17 synoptic stations covering the 56 years from 1960 to 2015, and a mean change of 0.58 °C, or a positive trend of 0.0105 °C/year (p < 0.05), was found. In addition, wavelet decomposition was used to determine the frequency of temperature variability, revealing 12-month, 30-80-month and more-than-120-month cycles. This indicates strong annual variations, interannual variations that coincide with ENSO events, and interdecadal variations attributed to the PDO and CO2 concentrations. Air surface temperature was also correlated with smoothed sunspot number and galactic cosmic rays; the results show little to no effect. The influence of the ENSO teleconnection on temperature, wind pattern, cloud cover, and outgoing longwave radiation in different ENSO phases had significant effects on regional temperature variability. In particular, an anomalous anticyclonic (cyclonic) flow east of the Philippines during the peak and decay phases of El Niño (La Niña) events leads to the advection of warm southeasterly (cold northeasterly) air masses over the country. Furthermore, an apparent increasing cloud cover trend is observed over the West Philippine Sea, including portions of the Philippines, and this is believed to lessen the effect of the increasing air surface temperature.
However, relative humidity was also found to be increasing, especially in the central part of the country, which results in a strong positive trend in the heat index, exacerbating human discomfort. Finally, an assessment of gridded temperature datasets was done to examine the viability of using three high-resolution datasets in future climate analysis and model calibration and verification. Several error statistics (i.e., Pearson correlation, bias, MAE, and RMSE) were used for this validation. Results show that the gridded temperature datasets generally follow the observed surface temperature change and anomalies. They are, however, more representative of regional temperature and should not be treated as a substitute for station-observed air temperature.
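The Mann-Kendall test applied above is non-parametric: it counts concordant versus discordant pairs in the time series rather than fitting a line. A minimal sketch of the test statistic, assuming no tied values and using the usual normal approximation (function name mine; the paper may use a tie-corrected variant):

```python
import math

def mann_kendall(x):
    """Mann-Kendall trend test (normal approximation, no tie correction).
    Returns (S, Z): S > 0 and Z > 1.96 suggest a significant increasing
    trend at the 5% level."""
    n = len(x)
    # S counts pairs (i < j) with x[j] > x[i], minus pairs with x[j] < x[i].
    s = sum((x[j] > x[i]) - (x[j] < x[i])
            for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0   # variance of S under H0
    if s > 0:
        z = (s - 1) / math.sqrt(var_s)          # continuity correction
    elif s < 0:
        z = (s + 1) / math.sqrt(var_s)
    else:
        z = 0.0
    return s, z

# A strictly increasing series attains the maximum S = n(n-1)/2.
s, z = mann_kendall([1, 2, 3, 4, 5, 6, 7, 8])
```

For real station data one would apply this to each station's monthly anomalies; libraries such as pymannkendall package the same computation with tie and seasonality corrections.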

Keywords: air surface temperature, carbon dioxide, ENSO, galactic cosmic rays, smoothed sunspot number

Procedia PDF Downloads 301
8957 Application of Rapidly Exploring Random Tree Star-Smart and G2 Quintic Pythagorean Hodograph Curves to the UAV Path Planning Problem

Authors: Luiz G. Véras, Felipe L. Medeiros, Lamartine F. Guimarães

Abstract:

This work approaches the automatic planning of paths for Unmanned Aerial Vehicles (UAVs) through the application of the Rapidly Exploring Random Tree Star-Smart (RRT*-Smart) algorithm. RRT*-Smart samples positions of a navigation environment through a tree-type graph. The algorithm randomly expands a tree from an initial position (root node) until one of its branches reaches the final position of the path to be planned, and it ensures planning of the shortest path as the number of iterations tends to infinity. When a new node is inserted into the tree, each neighbor of the new node is connected to it if and only if the extension of the path between the root node and that neighbor, with this new connection, is less than the current extension of the path between those two nodes. RRT*-Smart uses an intelligent sampling strategy to plan less extensive routes with a smaller number of iterations; this strategy is based on the creation of samples/nodes near the convex vertices of the navigation environment's obstacles. The planned paths are smoothed through the application of G2 quintic Pythagorean hodograph curves. The smoothing process converts a route into a dynamically viable one based on the kinematic constraints of the vehicle. This smoothing method models the hodograph components of a curve with polynomials that obey the Pythagorean theorem. Its advantage is that the obtained structure allows the curve length to be computed exactly, without the need for quadrature techniques.
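The exact-length property mentioned above follows from the PH construction: for a planar PH curve built from preimage polynomials u(t), v(t), the hodograph is x'(t) = u(t)² − v(t)², y'(t) = 2u(t)v(t), so the parametric speed σ(t) = √(x'² + y'²) = u(t)² + v(t)² is itself a polynomial and arc length is its closed-form antiderivative. A sketch (not the authors' implementation) using NumPy polynomials:

```python
import numpy as np

def ph_arc_length(u_coeffs, v_coeffs, t0=0.0, t1=1.0):
    """Exact arc length of a planar PH curve defined by preimage
    polynomials u(t), v(t) (coefficients in increasing degree).
    The speed sigma = u^2 + v^2 is a polynomial, so its integral
    is evaluated in closed form -- no numerical quadrature."""
    u = np.polynomial.Polynomial(u_coeffs)
    v = np.polynomial.Polynomial(v_coeffs)
    sigma = u * u + v * v          # parametric speed, a polynomial
    antideriv = sigma.integ()      # exact polynomial antiderivative
    return antideriv(t1) - antideriv(t0)

# Example: u(t) = 1, v(t) = t gives sigma = 1 + t^2,
# so the length over [0, 1] is 1 + 1/3 = 4/3 exactly.
L = ph_arc_length([1.0], [0.0, 1.0])
```

For a true quintic PH curve u and v are quadratics, but the same closed-form integration applies at any degree.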

Keywords: path planning, path smoothing, Pythagorean hodograph curve, RRT*-Smart

Procedia PDF Downloads 156
8956 Adding a Degree of Freedom to Opinion Dynamics Models

Authors: Dino Carpentras, Alejandro Dinkelberg, Michael Quayle

Abstract:

Within agent-based modeling, opinion dynamics is the field that focuses on modeling people's opinions. In this prolific field, most of the literature is dedicated to the exploration of two 'degrees of freedom' and how they impact a model's properties (e.g., the average final opinion, the number of final clusters, etc.). These degrees of freedom are (1) the interaction rule, which determines how agents update their own opinion, and (2) the network topology, which defines the possible interactions among agents. In this work, we show that a third degree of freedom exists. It can be used to change a model's output by up to 100% of its initial value, or to transform two models (both from the literature) into each other. Since opinion dynamics models are representations of the real world, it is fundamental to understand how people's opinions can be measured. Even for abstract models (i.e., those not intended for fitting real-world data), it is important to understand whether the way of numerically representing opinions is unique and, if it is not, how the model dynamics change under different representations. The process of measuring opinions is non-trivial, as it requires transforming a real-world opinion (e.g., supporting most liberal ideals) into a number. Such a process is usually not discussed in the opinion dynamics literature, but it has been intensively studied in a subfield of psychology called psychometrics. In psychometrics, opinion scales can be converted into each other, similarly to how meters can be converted to feet; indeed, psychometrics routinely uses both linear and non-linear transformations of opinion scales. Here, we analyze how such transformations affect opinion dynamics models, first by mathematical modeling and then by validating our analysis with agent-based simulations. Firstly, we study the case of perfect scales.
In this way, we show that scale transformations affect a model's dynamics even at the qualitative level. This means that two researchers using the same opinion dynamics model, and even the same dataset, could make totally different predictions just because they followed different renormalization processes. A similar situation appears if two different scales are used to measure opinions in the same population. This effect may be as strong as introducing an uncertainty of 100% in the simulation's output (i.e., all results are possible). Still, using perfect scales, we show that scale transformations can be used to perfectly transform one model into another; we test this using two models from the standard literature. Finally, we test the effect of scale transformation in the case of finite precision using a 7-point Likert scale, and show how even a relatively small scale transformation introduces changes both at the qualitative level (i.e., the most shared opinion at the end of the simulation) and in the number of opinion clusters. Thus, scale transformation appears to be a third degree of freedom of opinion dynamics models. This result deeply impacts both theoretical research on models' properties and the application of models to real-world data.
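The mechanism described above can be illustrated with a minimal bounded-confidence (Deffuant-style) simulation: a monotone reparameterization of the opinion scale, such as s(x) = x², changes which agent pairs fall within the confidence bound and hence can change the final configuration. This is an illustrative sketch under my own parameter choices, not a reproduction of the authors' models:

```python
import random

def deffuant(opinions, eps=0.2, mu=0.5, steps=20000, seed=1):
    """Bounded-confidence dynamics: a randomly chosen pair of agents
    moves toward each other only if their opinions differ by less
    than the confidence bound eps."""
    rng = random.Random(seed)
    x = list(opinions)
    n = len(x)
    for _ in range(steps):
        i, j = rng.randrange(n), rng.randrange(n)
        if i != j and abs(x[i] - x[j]) < eps:
            d = mu * (x[j] - x[i])
            x[i] += d
            x[j] -= d
    return x

def n_clusters(x, tol=0.05):
    """Count opinion clusters as gaps wider than tol in the sorted list."""
    xs = sorted(x)
    return 1 + sum(1 for a, b in zip(xs, xs[1:]) if b - a > tol)

start = [i / 99 for i in range(100)]        # opinions uniform on [0, 1]
plain = deffuant(start)                     # original scale
warped = deffuant([v * v for v in start])   # same agents, squared scale
```

Comparing `n_clusters(plain)` with `n_clusters(warped)` exhibits the scale-dependence of the outcome: the transformation compresses one end of the scale, altering which interactions occur.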

Keywords: degrees of freedom, empirical validation, opinion scale, opinion dynamics

Procedia PDF Downloads 110
8955 The Nutritive Value of Fermented Sago Pith (Metroxylon sago Rottb) Enriched with Micro Nutrients for Poultry Feed

Authors: Wizna, Helmi Muis, Hafil Abbas

Abstract:

An experiment was conducted to improve the nutrient value of sago pith (Metroxylon sago Rottb) supplemented with Zn, sulfur and urea through fermentation using cellulolytic bacteria (Bacillus amyloliquefaciens) as the inoculum. The experiment determined the optimum dose combination of Zn, S and urea for sago pith fermentation based on the nutrient quality and quantity of the fermented products. The study used a completely randomized design in a factorial arrangement of three factors with the following levels: factor A (urea dose: A1 = 2.0%, A2 = 3.0%), factor B (S dose: B1 = 0.2%, B2 = 0.4%) and factor C (Zn dose: C1 = 0.0025%, C2 = 0.005%). The results showed that the optimum condition for the fermentation of sago pith with B. amyloliquefaciens, as judged by the change in nutrient content, was obtained at 3% urea, 0.2% S and 0.0025% Zn. This fermentation process increased the average amino acid content, reduced the crude fiber content by 67% and increased the crude protein by 433%, giving a product whose nutritional value on a dry matter basis was 18.22% crude protein, 12.42% crude fiber, 2525 kcal/kg metabolizable energy and 65.73% nitrogen retention.

Keywords: fermentation, sago pith, sulfur, Zn, urea, Bacillus amyloliquefaciens

Procedia PDF Downloads 496
8954 Motivation of Doctors and its Impact on the Quality of Working Life

Authors: E. V. Fakhrutdinova, K. R. Maksimova, P. B. Chursin

Abstract:

At the present stage of societal progress, health care is an integral part of both the economic and social systems; in the latter case, medicine is a major component of a number of basic and necessary social programs. Since the foundation of the health system is highly qualified health professionals, it is logical to propose that increasing doctors' professionalism improves the effectiveness of the system as a whole. A doctor's professionalism is a collection of many components, with an essential role played by such personal-psychological factors as honesty, willingness and desire to help people, and motivation. A number of researchers consider motivation an expression of basic human needs that have passed through the 'filter' of the worldview and values learned by the individual in the process of socialization, prompting actions designed to achieve an expected result. From this point of view, researchers propose the following classification of a highly skilled employee's needs: (1) the need for confirmation of competence (setting goals that match one's professionalism and receiving positive emotions from achieving them); (2) the need for independence (the ability to make one's own choices in contentious situations arising while carrying out specialist functions); (3) the need for belonging (in the case of health care workers, to the profession and, accordingly, to the high public status of the doctor). Nevertheless, it is important to understand that in a market economy a significant motivator for physicians (both legal entities and natural persons) is maximizing their own profit. For health professionals, this dual motivational structure creates an additional contrast, as the public image of the ideal physician is usually an altruistically minded person who thinks not primarily of their own benefit but of assisting others.
In this context, the question of the real motivation of health workers deserves special attention. A survey conducted by the American researcher Harrison Terni for the magazine "Med Tech" in 2010 gathered the opinions of more than 200 medical students starting their courses: the primary motivation in choosing the profession was the "desire to help people", and only 15% said that they wanted to become a doctor "to earn a lot". From the point of view of most classical theories of motivation, this trend can be called positive, as intangible incentives are more effective. However, it is likely that over time the respondents' opinions may shift toward mercantile motives. Thus, it is logical to assume that a well-designed system for motivating doctors' labor should be based on motivational foundations laid during training in higher education.

Keywords: motivation, quality of working life, health system, personal-psychological factors, motivational structure

Procedia PDF Downloads 345
8953 Nonlinear Aerodynamic Parameter Estimation of a Supersonic Air to Air Missile by Using Artificial Neural Networks

Authors: Tugba Bayoglu

Abstract:

Aerodynamic parameter estimation is crucial in the missile design phase, since an accurate, high-fidelity aerodynamic model is required for designing a high-performance and robust control system, developing high-fidelity flight simulations, and verifying computational and wind tunnel test results. However, there are few missile aerodynamic parameter identification studies in the literature, for three main reasons: (1) most air-to-air missiles cannot fly at constant speed, (2) the number and duration of missile flight tests are much smaller than those of fixed-wing aircraft, and (3) the variation of missile aerodynamic parameters with Mach number is larger than for fixed-wing aircraft. In addition to these challenges, identification of aerodynamic parameters at high wind angles with classical estimation techniques brings another difficulty: most estimation techniques employ polynomials or splines to model the behavior of the aerodynamics, and for missiles with large variation of aerodynamic parameters with respect to the flight variables, the order of the proposed model increases, which brings computational burden and complexity. Therefore, this study aims to solve the nonlinear aerodynamic parameter identification problem for a supersonic air-to-air missile using Artificial Neural Networks. The proposed method will be tested using simulated data generated with a six-degree-of-freedom missile model involving a nonlinear aerodynamic database, with the data corrupted by adding noise to the measurement model. The parameters will then be estimated from the flight variables and measurements, and finally the prediction accuracy will be investigated.
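The appeal of a neural network here is that it replaces a high-order polynomial/spline fit with a smooth universal approximator. A minimal sketch of the idea, assuming a made-up nonlinear coefficient function standing in for the aerodynamic database, and using a single hidden layer with fixed random weights whose output layer is solved by linear least squares (an illustrative shortcut, not the authors' training scheme):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical nonlinear aerodynamic coefficient over angle of attack
# (rad) and Mach number -- a stand-in for the real database.
def coeff(alpha, mach):
    return 0.8 * alpha - 2.0 * alpha**3 + 0.15 * np.sin(3.0 * mach)

# Simulated "flight data", corrupted with measurement noise.
alpha = rng.uniform(-0.4, 0.4, 600)
mach  = rng.uniform(1.2, 3.0, 600)
y = coeff(alpha, mach) + rng.normal(0.0, 0.002, 600)

# Single hidden layer with fixed random features; only the output
# weights are estimated, via linear least squares.
X = np.column_stack([alpha, mach])
W = rng.normal(0.0, 1.0, (2, 80))          # random input->hidden weights
b = rng.normal(0.0, 1.0, 80)               # random hidden biases
H = np.tanh(X @ W + b)                     # hidden-layer activations
A = np.column_stack([H, np.ones(600)])     # add output bias column
w_out, *_ = np.linalg.lstsq(A, y, rcond=None)

pred = A @ w_out
rmse = float(np.sqrt(np.mean((pred - y) ** 2)))   # close to the noise level
```

The same structure scales to the full problem by feeding in all flight variables (Mach, angles, control deflections) and fitting each force/moment coefficient.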

Keywords: air to air missile, artificial neural networks, open loop simulation, parameter identification

Procedia PDF Downloads 258
8952 Estimation of the Nutritive Value of Local Forage Cowpea Cultivars in Different Environments

Authors: Salem Alghamdi

Abstract:

Genotypes were collected from farmers in different regions of Saudi Arabia, together with an Egyptian cultivar and a new line from Yemen. Seeds of these genotypes were grown at Dirab Agriculture Research Station (Middle Region) and Al-Ahsa Palms and Dates Research Center (East Region) during the summer of 2015. Field experiments were laid out in a randomized complete block design in the first week of June with three replications. Each experimental plot contained six rows 3 m in length; inter- and intra-row spacing was 60 and 25 cm, respectively. Seed yield and its components were estimated, and qualitative characters were scored, on cowpea plants grown only in Dirab, using the cowpea descriptor from IPGRI (1982). Seeds were analyzed for chemical composition and antioxidant contents. Highly significant differences were detected between genotypes in both locations, and in the combined analysis of the two locations, for seed yield and its components. The mean data clearly show that determinate genotypes excelled in seed yield while indeterminate genotypes had higher biological yield, dividing the cowpea genotypes into two main groups: (1) forage genotypes (KSU-CO98, KSU-CO99, KSU-CO100, and KSU-CO104), which were taller and produced more branches and higher biological yield, and are suitable for feeding on haulm; and (2) food genotypes (KSU-CO101, KSU-CO102, and KSU-CO103), which produced higher seed yield with lower haulm and are characterized by high seed index and light seed color. Highly significant differences were recorded between locations for all studied characters except the number of branches, seed index, and biological yield; however, the genotype × location interaction was significant only for plant height, number of pods, and seed yield per plant.

Keywords: cowpea, genotypes, antioxidant contents, yield

Procedia PDF Downloads 236
8951 Assessing Significance of Correlation with Binomial Distribution

Authors: Vijay Kumar Singh, Pooja Kushwaha, Prabhat Ranjan, Krishna Kumar Ojha, Jitendra Kumar

Abstract:

Present-day high-throughput genomic technologies, NGS and microarrays, are producing large volumes of data that require improved analysis methods to make sense of the data. The correlation between genes and samples has been regularly used to gain insight into many biological phenomena including, but not limited to, co-expression/co-regulation, gene regulatory networks, clustering, and pattern identification. However, the presence of outliers and violation of the assumptions underlying Pearson correlation are frequent and may distort the actual correlation between genes, leading to spurious conclusions. Here, we report a method to measure the strength of association between genes. The method assumes that the expression values of a gene are Bernoulli random variables whose outcome depends on the sample being probed. It considers two genes uncorrelated if the number of samples with the same outcome for both genes (Ns) is equal to the number expected under independence (Es); the extent of correlation depends on how far Ns deviates from Es. The method does not assume normality for the parent population, is fairly unaffected by the presence of outliers, can be applied to qualitative data, and uses the binomial distribution to assess the significance of association. At this stage, we do not claim superiority over existing correlation methods; rather, our method is an additional way of calculating correlation, one that uses the binomial distribution, which has not previously been used for this purpose, to assess the significance of association between two variables. We are evaluating the performance of our method on NGS/microarray data, which is noisy and pierced by outliers, to see whether it can differentiate between spurious and actual correlation.
While working with the method, it has not escaped our notice that it could also be generalized to measure the association of more than two variables, which has proven difficult with existing methods.
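The Ns-versus-Es idea above can be sketched concretely. Under the simplifying assumption that each gene is dichotomized at its median (so each Bernoulli outcome is roughly fair) and that independent genes match with probability 1/2, Ns follows Binomial(n, 1/2) under the null, and a two-sided p-value sums the probabilities of all outcomes no more likely than the observed one. The encoding and null probability are my illustrative choices, not necessarily the authors' exact formulation:

```python
import math

def dichotomize(x):
    """One simple Bernoulli encoding: 1 if above the median, else 0."""
    med = sorted(x)[len(x) // 2]
    return [1 if v > med else 0 for v in x]

def binomial_association_pvalue(x, y):
    """Two-sided binomial test of association between two genes.
    Ns = number of samples where the dichotomized genes agree; under
    independence Ns ~ Binomial(n, 1/2), so Es = n/2."""
    a, b = dichotomize(x), dichotomize(y)
    n = len(a)
    ns = sum(int(u == v) for u, v in zip(a, b))
    pmf = lambda k: math.comb(n, k) * 0.5 ** n
    p_obs = pmf(ns)
    # Exact two-sided p-value: total probability of outcomes at most
    # as likely as the observed count.
    return min(1.0, sum(pmf(k) for k in range(n + 1)
                        if pmf(k) <= p_obs + 1e-12))

# Perfectly co-varying "genes" agree in every sample -> tiny p-value.
g1 = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
g2 = [2 * v for v in g1]
p = binomial_association_pvalue(g1, g2)
```

Note the monotone transform `2*v` leaves the dichotomized pattern, and hence the p-value, unchanged, which is why the test is insensitive to outliers in the raw values.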

Keywords: binomial distribution, correlation, microarray, outliers, transcriptome

Procedia PDF Downloads 397