Search results for: predictive biomarker
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1253

443 Nanorods Based Dielectrophoresis for Protein Concentration and Immunoassay

Authors: Zhen Cao, Yu Zhu, Junxue Fu

Abstract:

Immunoassay, i.e., the antigen-antibody reaction, is crucial for disease diagnostics. To achieve an adequate signal in antigen protein detection, a large amount of sample and a long incubation time are needed. However, the amount of protein is usually small at the early stage of disease, which makes it difficult to detect. Unlike cells and DNA, no valid chemical method exists for protein amplification. Thus, an alternative way to improve the signal is through particle manipulation techniques that concentrate proteins, among which dielectrophoresis (DEP) is an effective one. DEP is a technique that concentrates particles in a designated region through a force created by the gradient in a non-uniform electric field. Since the DEP force is proportional to the cube of the particle size and to the gradient of the square of the electric field, it is relatively easy to capture larger particles such as cells. For smaller ones like proteins, a super-high gradient is required. In this work, three-dimensional Ag/SiO2 nanorod arrays, fabricated by an easy physical vapor deposition technique called oblique angle deposition, have been integrated with a DEP device, creating a field gradient as high as 2.6×10²⁴ V²/m³. The nanorod-based DEP device is able to enrich bovine serum albumin (BSA) protein 1800-fold, at a rate of 180-fold/s, with an applied electric potential of only 5 V. Based on this nanorod-integrated DEP platform, an immunoassay of mouse immunoglobulin G (IgG) proteins has been performed. Briefly, specific antibodies are immobilized onto the nanorods, IgG proteins are then concentrated and captured, and finally, the signal from fluorescence-labelled antibodies is detected. The limit of detection (LoD) is measured as 275.3 fg/mL (~1.8 fM), a 20,000-fold enhancement compared with identical assays performed on blank glass plates. Further, prostate-specific antigen (PSA), a cancer biomarker for the diagnosis of prostate cancer after radical prostatectomy, is also quantified, with an LoD as low as 2.6 pg/mL. The time to signal saturation has been reduced significantly, to one minute. In summary, together with an easy nanorod fabrication and integration method, this nanorod-based DEP platform has demonstrated highly sensitive immunoassay performance and thus holds great potential for early point-of-care diagnostics.
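
To make the size scaling concrete, the following is a minimal sketch (not code from the paper) of the standard time-averaged DEP force on a spherical particle, F = 2*pi*eps_m*r^3*Re[K]*grad(|E|^2), evaluated at the field gradient reported above; the medium permittivity, Clausius-Mossotti factor, and particle radii are illustrative assumptions.

```python
import numpy as np

EPS0 = 8.854e-12          # vacuum permittivity, F/m
eps_m = 78 * EPS0         # aqueous medium permittivity (assumed)
Re_K = 0.5                # Clausius-Mossotti factor (assumed, positive DEP)
grad_E2 = 2.6e24          # gradient of |E|^2, V^2/m^3 (value from the abstract)

def dep_force(radius_m):
    """Time-averaged DEP force magnitude on a sphere, in newtons."""
    return 2 * np.pi * eps_m * radius_m**3 * Re_K * grad_E2

for label, r in [("cell (r = 5 um)", 5e-6), ("protein (r = 2.5 nm)", 2.5e-9)]:
    print(f"{label}: F_DEP ~ {dep_force(r):.2e} N")
# The r^3 scaling makes the force on the protein ~8e9 times smaller than on
# the cell, which is why such a high field gradient is needed for proteins.
```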

Keywords: dielectrophoresis, immunoassay, oblique angle deposition, protein concentration

Procedia PDF Downloads 103
442 A Longitudinal Study of Psychological Capital, Parent-Child Relationships, and Subjective Well-Being in Economically Disadvantaged Adolescents

Authors: Chang Li-Yu

Abstract:

Purpose: This research explores the latent growth model of psychological capital in economically disadvantaged adolescents and assesses its relationship with subjective well-being. Methods: A longitudinal design was used, with data from the Taiwan Database of Children and Youth in Poverty (TDCYP) student questionnaires of 2009, 2011, and 2013. Data were analyzed using both univariate and multivariate latent growth curve models. Results: This study finds that: (1) the initial states and growth rates of parent-child relationships, psychological capital, and subjective well-being in economically disadvantaged adolescents have predictive effects; (2) there are positive interactive effects in the development of parent-child relationships, psychological capital, and subjective well-being; and (3) the initial state and growth rate of parent-child relationships and psychological capital positively affect the initial state and growth rate of subjective well-being. Recommendations: Based on these findings, the study discusses the significance of psychological capital and family cohesion for the mental health of economically disadvantaged youth and offers suggestions for counseling, psychological therapy, and future research.

Keywords: economically disadvantaged adolescents, psychological capital, parent-child relationships, subjective well-being

Procedia PDF Downloads 61
441 Evaluation and Assessment of Bioinformatics Methods and Their Applications

Authors: Fatemeh Nokhodchi Bonab

Abstract:

Bioinformatics, in its broad sense, involves the application of computational methods to solve biological problems. A wide range of computational tools is needed to process effectively and efficiently the large amounts of data generated by recent technological innovations in biology and medicine. A number of computational tools have been developed or adapted to handle the experimental riches of complex, multivariate data and the transition from data collection to information and knowledge. These bioinformatics tools are being evaluated and applied in various medical areas, including early detection, risk assessment, classification, and prognosis of cancer. The goal of these efforts is to develop and identify bioinformatics methods with optimal sensitivity, specificity, and predictive capability. The recent flood of data from genome sequences and functional genomics has given rise to this new field, which combines elements of biology and computer science. Bioinformatics conceptualizes biology in terms of macromolecules (in the sense of physical chemistry) and then applies informatics techniques (derived from disciplines such as applied mathematics, computer science, and statistics) to understand and organize the information associated with these molecules on a large scale. Here we propose a definition for this field and review some of the research being pursued, particularly in relation to transcriptional regulatory systems.

Keywords: methods, applications, transcriptional regulatory systems, techniques

Procedia PDF Downloads 127
440 Coarse Grid Computational Fluid Dynamics Fire Simulations

Authors: Wolfram Jahn, Jose Manuel Munita

Abstract:

While computational fluid dynamics (CFD) simulations of fire scenarios are commonly used in the design of buildings, less attention has been given to CFD simulations as an operational tool for the fire services. The reason for this lack of attention lies mainly in the fact that CFD simulations typically take a long time to complete, and their results would thus not be available in time to be of use during an emergency. Firefighters often face uncertain conditions when entering a building to attack a fire, and they would greatly benefit from a technology based on predictive fire simulations that could assist their decision-making. The principal constraint to faster CFD simulations is the fine grid needed to resolve accurately the physical processes that govern a fire. This paper explores the possibility of overcoming this constraint by using coarse-grid CFD simulations of fire scenarios, and it proposes a methodology for using the simulation results in a way that is meaningful to firefighters during an emergency. Data from real-scale compartment fire tests were used to compare CFD fire models with different grid arrangements, and empirical correlations were obtained to interpolate data points into the grids. The results show that the strongly predominant effect of the heat release rate of the fire on the fluid dynamics allows the use of coarse grids with relatively little impact on the simulation results. Simulations with an acceptable level of accuracy could be run in real time, making them useful as a forecasting tool for emergency response.
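
The cost argument can be illustrated with a toy example (not the paper's fire model): an explicit 1-D heat-diffusion solver with a fixed heat source, run on a fine and a coarse grid. Because the explicit stability limit ties the time step to the square of the cell size, coarsening the grid cuts the work per simulated second dramatically, while the source-dominated peak temperature changes far less. All parameters are illustrative assumptions.

```python
import time
import numpy as np

alpha, L_dom, t_end = 1e-2, 10.0, 200.0    # diffusivity, domain (m), sim time (s)

def solve(n_cells):
    dx = L_dom / n_cells
    dt = 0.4 * dx**2 / alpha               # explicit (FTCS) stability limit
    x = (np.arange(n_cells) + 0.5) * dx
    src = np.where(np.abs(x - 5.0) < 0.5, 5.0, 0.0)  # 1 m wide "fire", K/s
    T = np.zeros(n_cells)
    for _ in range(int(t_end / dt)):
        T[1:-1] += dt * (alpha * (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dx**2
                         + src[1:-1])
    return T

for n in (400, 50):                        # fine grid vs coarse grid
    t0 = time.perf_counter()
    T = solve(n)
    print(f"n = {n:3d} cells: peak T = {T.max():6.1f} K, "
          f"wall time = {time.perf_counter() - t0:.3f} s")
```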

Keywords: CFD, fire simulations, emergency response, forecast

Procedia PDF Downloads 320
439 Recommendations Using Online Water Quality Sensors for Chlorinated Drinking Water Monitoring at Drinking Water Distribution Systems Exposed to Glyphosate

Authors: Angela Maria Fasnacht

Abstract:

Detection of anomalies due to the presence of contaminants, also known as early detection in water treatment plants, has become a critical point that deserves in-depth study for its improvement and adaptation to current requirements. The design of such systems requires detailed analysis and processing of the data in real time, so it is necessary to apply statistical methods appropriate to the data generated, such as Spearman's correlation, factor analysis, cross-correlation, and k-fold cross-validation. Statistical methods allow the evaluation of large data sets to model the behavior of variables; in this sense, statistical treatment can be considered a vital step in developing advanced machine-learning models that allow optimized data management in real time, applied to early detection systems in water treatment processes. These techniques also facilitate the development of new technologies used in advanced sensors. In this work, these methods were applied to identify possible correlations between the measured parameters and the presence of the contaminant glyphosate in a single-pass system. The interaction between the initial concentration of glyphosate and the location of the sensors on the reading of the reported parameters was also studied.
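
As an illustration of two of the named methods, here is a hedged sketch using synthetic stand-in data (the study's sensor readings are not reproduced): Spearman correlation of each sensor parameter against glyphosate concentration, followed by k-fold cross-validation of a simple predictive model. The parameter names are hypothetical.

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=(n, 4))                 # pH, conductivity, ORP, turbidity (assumed)
glyphosate = 2.0 * X[:, 1] - 1.0 * X[:, 2] + rng.normal(scale=0.5, size=n)

for j, name in enumerate(["pH", "conductivity", "ORP", "turbidity"]):
    rho, p = spearmanr(X[:, j], glyphosate)  # rank correlation per parameter
    print(f"{name:12s} Spearman rho = {rho:+.2f} (p = {p:.3g})")

scores = []
for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    model = LinearRegression().fit(X[train], glyphosate[train])
    scores.append(model.score(X[test], glyphosate[test]))
print(f"5-fold CV R^2: {np.mean(scores):.2f}")
```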

Keywords: glyphosate, emergent contaminants, machine learning, probes, sensors, predictive

Procedia PDF Downloads 124
438 Quantification of Glucosinolates in Turnip Greens and Turnip Tops by Near-Infrared Spectroscopy

Authors: S. Obregon-Cano, R. Moreno-Rojas, E. Cartea-Gonzalez, A. De Haro-Bailon

Abstract:

The potential of near-infrared spectroscopy (NIRS) for screening the total glucosinolate (tGSL) content, as well as the aliphatic glucosinolates gluconapin (GNA), progoitrin (PRO), and glucobrassicanapin (GBN), in turnip greens and turnip tops was assessed. This crop is grown for its edible leaves and stems for human consumption. The reference glucosinolate values, obtained by high-performance liquid chromatography on the vegetable samples, were regressed against different spectral transformations by modified partial least-squares (MPLS) regression (calibration set, n = 350). The resulting models were satisfactory, with calibration coefficients ranging from 0.72 (GBN) to 0.98 (tGSL). The predictive ability of the equations was tested on a set of samples (n = 70) independent of the calibration set. The determination coefficients and standard errors of prediction (SEP) obtained in the external validation were: GNA = 0.94 (SEP = 3.49); PRO = 0.41 (SEP = 1.08); GBN = 0.55 (SEP = 0.60); tGSL = 0.96 (SEP = 3.28). These results show that the equations developed for total glucosinolates and gluconapin are accurate enough for a fast, non-destructive, and reliable analysis of GNA and tGSL content directly from the NIR spectra, while the equations for PRO and GBN can be employed to classify samples as having high, medium, or low contents.
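
A hedged sketch of this calibration/external-validation workflow follows, using ordinary PLS from scikit-learn (modified PLS, as used in the study, is not available there) and simulated spectra; with real data, X would hold NIR absorbances and y the HPLC reference values, and SEP is computed on the independent validation set.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(1)
n_cal, n_val, n_wl = 350, 70, 100            # sample counts from the abstract
coef = rng.normal(size=n_wl)                 # hidden spectral signal (synthetic)

def make_set(n):
    X = rng.normal(size=(n, n_wl))           # stand-in spectra
    y = X @ coef * 0.1 + rng.normal(scale=1.0, size=n)
    return X, y

X_cal, y_cal = make_set(n_cal)
X_val, y_val = make_set(n_val)

pls = PLSRegression(n_components=8).fit(X_cal, y_cal)
residuals = y_val - pls.predict(X_val).ravel()
sep = residuals.std(ddof=1)                  # standard error of prediction
r2_val = 1 - (residuals**2).sum() / ((y_val - y_val.mean())**2).sum()
print(f"external validation: R^2 = {r2_val:.2f}, SEP = {sep:.2f}")
```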

Keywords: brassica rapa, glucosinolates, gluconapin, NIRS, turnip greens

Procedia PDF Downloads 145
437 Lung Tissue Damage under Diesel Exhaust Exposure: Modification of Proteins, Cells and Functions in Just 14 Days

Authors: Ieva Bruzauskaite, Jovile Raudoniute, Karina Poliakovaite, Danguole Zabulyte, Daiva Bironaite, Ruta Aldonyte

Abstract:

Introduction: Air pollution is a growing global problem that has been shown to be responsible for various adverse health outcomes. Immunotoxicity, such as dysregulated inflammation, has been proposed as one of the main mechanisms in air pollution-associated diseases. Chronic obstructive pulmonary disease (COPD) is among the major causes of morbidity and mortality worldwide and is characterized by persistent airflow limitation caused by small-airways disease (obstructive bronchiolitis) and irreversible parenchymal destruction (emphysema). The exact pathways behind air pollution-induced and -mediated disease states are still not clear. However, modern societies understand the dangers of polluted air, seek to mitigate its effects, and are in need of reliable biomarkers of air pollution exposure. We hypothesise that post-translational modifications of structural proteins, e.g., citrullination, might be a good candidate biomarker. We therefore designed this study, in which mice were exposed to diesel exhaust and the resulting protein modifications and inflammation in lungs and other tissues were assessed. Materials and Methods: To assess the effects of diesel exhaust, an in vivo study was designed. Mice (n=10) were subjected to a daily 2-hour exposure to diesel exhaust for 14 days. Control mice were treated the same way without diesel exhaust. The effects within lung and other tissues were assessed by immunohistochemistry of formalin-fixed, paraffin-embedded tissues. Levels of inflammation- and citrullination-related markers were investigated, and levels of parenchymal damage were measured. Results: The in vivo study corroborates our own in vitro data and reveals a diesel exhaust-initiated inflammatory shift and modulation of lung peptidyl arginine deiminase 4 (PAD4), the citrullination-associated enzyme. In addition, high levels of citrulline were observed in exposed lung tissue sections, co-localising with increased parenchymal destruction. Conclusions: Subacute exposure to diesel exhaust renders mouse lungs inflammatory and modifies certain structural proteins. Such structural changes may pave the way to loss or gain of function of the affected molecules and may also propagate autoimmune processes within the lung and systemically.

Keywords: air pollution, citrullination, in vivo, lungs

Procedia PDF Downloads 156
436 The Critical Velocity and Heat of Smoke Outflow in Z-shaped Passage Fires Under Weak Stack Effect

Authors: Zekun Li, Bart Merci, Miaocheng Weng, Fang Liu

Abstract:

The Z-shaped passage, widely used in metro entrance/exit passageways, inclined mining laneways, and other applications, features steep slopes and a combination of horizontal and inclined sections. These characteristics lead to notable differences in airflow patterns and temperature distributions compared with conventional confined passages. In fires occurring within Z-shaped passages under natural ventilation with a weak stack effect, the induced airflow may be insufficient to fully confine smoke downstream of the fire source. This can cause smoke back-layering upstream, with the possibility of smoke escaping from the lower entrance located upstream of the fire. Consequently, not all of the heat from the fire source contributes to the stack effect. This study combines theoretical analysis and fire simulations to examine the influence of various heat release rates (HRR), passage structures, and fire source locations on the induced airflow velocity driven by the stack effect. An empirical equation is proposed to quantify the strength of the stack effect under different conditions. Additionally, predictive models have been developed to determine the critical induced airflow and to estimate the heat of the smoke escaping from the lower entrance of the passage.

Keywords: stack effect, critical velocity, heat outflow, numerical simulation

Procedia PDF Downloads 12
435 Experiments on Weakly-Supervised Learning on Imperfect Data

Authors: Yan Cheng, Yijun Shao, James Rudolph, Charlene R. Weir, Beth Sahlmann, Qing Zeng-Treitler

Abstract:

Supervised predictive models require labeled data for training. Complete and accurate labeled data, i.e., a 'gold standard', is not always available, and imperfectly labeled data may need to serve as an alternative. An important question is whether the accuracy of the labeled data creates a performance ceiling for the trained model. In this study, we trained several models to recognize the presence of delirium in clinical documents using data with annotations that are not completely accurate (i.e., weakly-supervised learning). In the external evaluation, the support vector machine model with a linear kernel performed best, achieving an area under the curve of 89.3% and an accuracy of 88%, surpassing the 80% accuracy of the training sample. We then generated a set of simulated data and carried out a series of experiments which demonstrated that models trained on imperfect data can (but do not always) outperform the accuracy of the training data; e.g., the area under the curve for some models exceeded 80% when trained on data with an error rate of 40%. Our experiments also showed that the error resistance of linear modeling is associated with larger sample size, error type, and the linearity of the data (all p-values < 0.001). In conclusion, this study sheds light on the usefulness of imperfect data in clinical research via weakly-supervised learning.
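
The simulation idea can be reproduced in miniature: corrupt training labels at a chosen error rate, train a linear SVM, and score it against the true labels of a held-out set. This sketch uses synthetic data, not the clinical delirium corpus; with symmetric label noise below 50%, the learned boundary can score above the accuracy of its own training labels.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=5000, n_features=20, n_informative=5,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for error_rate in (0.1, 0.2, 0.4):
    noisy = y_tr.copy()
    flip = rng.random(len(noisy)) < error_rate
    noisy[flip] = 1 - noisy[flip]            # mislabel a fraction of cases
    acc = LinearSVC(max_iter=5000).fit(X_tr, noisy).score(X_te, y_te)
    print(f"label error {error_rate:.0%}: test accuracy vs truth = {acc:.2f}")
```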

Keywords: weakly-supervised learning, support vector machine, prediction, delirium, simulation

Procedia PDF Downloads 200
434 Risk Propagation in Electricity Markets: Measuring the Asymmetric Transmission of Downside and Upside Risks in Energy Prices

Authors: Montserrat Guillen, Stephania Mosquera-Lopez, Jorge Uribe

Abstract:

An empirical study of market risk transmission between electricity prices in the Nord Pool interconnected market is conducted. Crucially, risk propagation is differentiated between the two tails of the price-variation distribution; thus, downside risk spillovers are distinguished from upside risk spillovers. The results document an asymmetric nature of risk and risk propagation in the two tails of the electricity price log variations. Risk spillovers following price increments in the market are transmitted to a larger extent than those following price reductions. Asymmetries related both to the size of the transaction area and to whether a given area behaves as a net exporter or net importer of electricity are also documented. For instance, on the one hand, the bigger the transaction area, the smaller the volatility shocks that it receives. On the other hand, exporters of electricity, alongside countries with a significant dependence on renewable sources, tend to be net transmitters of volatility to the rest of the system. Additionally, insights into the predictive power of positive and negative semivariances for future market volatility are provided. It is shown that, depending on the forecasting horizon, downside and upside shocks to the market exhibit distinctive persistence, and that upside volatility impacts net importers of electricity more, while the opposite holds for net exporters.
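
The decomposition behind this analysis is simple to state: realized variance splits exactly into a downside and an upside semivariance. A minimal sketch with simulated returns standing in for electricity price log variations:

```python
import numpy as np

rng = np.random.default_rng(0)
r = rng.standard_t(df=4, size=24) * 0.01     # hourly log returns (synthetic)

rv = np.sum(r**2)                            # realized variance
rs_down = np.sum(r[r < 0] ** 2)              # downside semivariance, RS-
rs_up = np.sum(r[r > 0] ** 2)                # upside semivariance, RS+
print(f"RV = {rv:.6f} = RS- ({rs_down:.6f}) + RS+ ({rs_up:.6f})")
```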

Keywords: electricity prices, realized volatility, semivariances, volatility spillovers

Procedia PDF Downloads 176
433 Machine Learning for Targeting of Conditional Cash Transfers: Improving the Effectiveness of Proxy Means Tests to Identify Future School Dropouts and the Poor

Authors: Cristian Crespo

Abstract:

Conditional cash transfers (CCTs) have been targeted towards the poor; thus, their targeting assessments check whether these schemes have been allocated to low-income households or individuals. However, CCTs have more than one goal and target group. An additional goal of CCTs is to increase school enrolment, so students at risk of dropping out of school are also a target group. This paper analyses whether one of the most common targeting mechanisms of CCTs, the proxy means test (PMT), is suitable for identifying both the poor and future school dropouts. The PMT is compared with alternative approaches that use the outputs of a predictive model of school dropout. This model was built using machine learning algorithms and rich administrative datasets from Chile. The paper shows that using the machine learning outputs in conjunction with the PMT increases targeting effectiveness by identifying more students who are either poor or future dropouts. This joint targeting approach increases effectiveness in different scenarios except when the social valuations of the two target groups differ greatly; in such cases, the most likely optimal approach is to adopt solely the targeting mechanism designed to find the highly valued group.
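
A minimal sketch of the joint targeting rule evaluated here: select a household if its PMT score falls below the poverty cutoff or the model-predicted dropout risk exceeds a threshold. The scores, cutoffs, and distributions below are illustrative assumptions, not the Chilean data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
pmt_score = rng.normal(50, 10, n)            # lower = poorer (assumed scale)
p_dropout = rng.beta(2, 8, n)                # ML-predicted dropout risk

pmt_only = pmt_score < 40                    # PMT poverty cutoff (assumed)
joint = pmt_only | (p_dropout > 0.5)         # add dropout-risk students
print(f"PMT only selects {pmt_only.mean():.1%}; "
      f"joint rule selects {joint.mean():.1%} "
      f"(+{(joint & ~pmt_only).sum()} dropout-risk students added)")
```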

Keywords: conditional cash transfers, machine learning, poverty, proxy means tests, school dropout prediction, targeting

Procedia PDF Downloads 205
432 Post Pandemic Mobility Analysis through Indexing and Sharding in MongoDB: Performance Optimization and Insights

Authors: Karan Vishavjit, Aakash Lakra, Shafaq Khan

Abstract:

The COVID-19 pandemic has pushed healthcare professionals to use big data analytics as a vital tool for tracking and evaluating the effects of contagious viruses. Efficient NoSQL databases are needed to analyze such huge datasets effectively. This research integrates several datasets, which cuts down query processing time and supports predictive visual artifacts, making it possible to analyze post-COVID-19 health and well-being outcomes and to evaluate the effectiveness of government efforts during the pandemic. We recommend applying sharding and indexing technologies to improve query effectiveness and scalability as the dataset expands. Spreading the data across a sharded database and building indexes on the individual shards enables effective data retrieval and analysis. The key goal is the analysis of the connections between governmental activities, poverty levels, and post-pandemic well-being: we evaluate the effectiveness of governmental initiatives to improve health and lower poverty levels by utilising advanced data analysis and visualisations. The findings provide relevant data that supports the advancement of the UN sustainable development goals, future pandemic preparation, and evidence-based decision-making. This study shows how big data and NoSQL databases may be used to address problems in global health.
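
A hedged sketch of the recommended setup with PyMongo, run against a sharded cluster (i.e., connected through a mongos router); the database, collection, and field names are hypothetical placeholders, not the study's actual schema.

```python
from pymongo import ASCENDING, MongoClient

client = MongoClient("mongodb://localhost:27017")  # mongos router (assumed)

# Shard the collection on a hashed key to spread writes across shards.
client.admin.command("enableSharding", "pandemic")
client.admin.command("shardCollection", "pandemic.mobility",
                     key={"country": "hashed"})

# Index each shard's documents on the fields the analytical queries filter by.
client["pandemic"]["mobility"].create_index(
    [("country", ASCENDING), ("date", ASCENDING)])

# A query on the shard key routes to one shard, then walks the compound index.
cursor = client["pandemic"]["mobility"].find(
    {"country": "CA", "date": {"$gte": "2021-01-01"}})
```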

Keywords: big data, COVID-19, health, indexing, NoSQL, sharding, scalability, well-being

Procedia PDF Downloads 71
431 Tumor Cell Detection, Isolation and Monitoring Using Bi-Layer Magnetic Microfluidic Chip

Authors: Amir Seyfoori, Ehsan Samiei, Mohsen Akbari

Abstract:

The use of microtechnology for the detection and high-yield isolation of circulating tumor cells (CTCs) has shown enormous promise for clinical metastasis prognosis and cancer treatment monitoring. The immunomagnetic assay has also been coupled to microtechnology to improve the selectivity and efficiency of current methods of cancer biomarker isolation; here, the generation and configuration of the local high-gradient magnetic field play essential roles. Additionally, considering the intrinsic heterogeneity of cancer cells, real-time analysis of the isolated cells is necessary to characterize their responses to therapy. Altogether, on-chip isolation and monitoring of specific tumor cells is a pressing need on the way to modified cancer therapy. To address these challenges, we have developed a bi-layer magnetic-based microfluidic chip for enhanced CTC detection and capture. Micromagnet arrays at the bottom layer of the chip were fabricated using a new method of magnetic nanoparticle paste deposition, arranged at the center of the chain microchannel in the lowest fluid-velocity zone. Breast cancer cells labelled with EpCAM-conjugated smart microgels were immobilized on the tips of the micromagnets, where the localized magnetic field is greater and the cell-micromagnet interaction stronger. By varying the magnetic nano-powder (MnFe2O4 vs. gamma-Fe2O3) and the micromagnet shape (ellipsoidal vs. arrow), the capture efficiency of the system was tuned; the highest CTC capture efficiency, around 95.5%, was obtained for the MnFe2O4 arrow micromagnets. As a proof of concept of on-chip tumor cell monitoring, magnetic smart microgels made of a thermo-responsive poly(N-isopropylacrylamide-co-acrylic acid) (PNIPAM-AA) composition were used both for targeted cell capture and for cell monitoring, through antibody conjugation and fluorescent dye loading at the same time. In this regard, the magnetic microgels were successfully used as cell trackers after the isolation process: on raising the temperature to 37 °C, they released the contained dye and stained the targeted cells just after capture. This microfluidic device can provide a platform for the detection, isolation, and efficient real-time analysis of specific CTCs in liquid biopsies of breast cancer patients.

Keywords: circulating tumor cells, microfluidic, immunomagnetic, cell isolation

Procedia PDF Downloads 143
430 QSAR, Docking and E-pharmacophore Approach on Novel Series of HDAC Inhibitors with Thiophene Linker as Anticancer Agents

Authors: Harish Rajak, Preeti Patel

Abstract:

HDAC inhibitors can reactivate gene expression and inhibit the growth and survival of cancer cells. 3D-QSAR and pharmacophore modeling studies were performed to identify important pharmacophoric features and to correlate 3D chemical structure with biological activity. The pharmacophore hypotheses were developed using the e-pharmacophore script and the Phase module; a pharmacophore hypothesis represents the 3D arrangement of molecular features necessary for activity. A series of 55 compounds with well-assigned HDAC inhibitory activity was used for 3D-QSAR model development. The best 3D-QSAR model, a five-PLS-factor model with good statistics and predictive ability, achieved Q² = 0.7293 and R² = 0.9811 with a standard deviation of 0.0952. Molecular docking was performed using the histone deacetylase protein (PDB ID: 1t69) and the prepared series of hydroxamic acid-based HDAC inhibitors. The docking study of compound 43 shows significant binding interactions: Ser 276 with the oxygen atom of the dioxine cap region, Gly 151 with the amino group, and Asp 267 with the carboxyl group of CONHOH, which are essential for anticancer activity. On docking, most of the compounds exhibited good Glide scores, between -8 and -10.5. We have established a structure-activity correlation using docking, energetics-based pharmacophore modelling, and pharmacophore- and atom-based 3D-QSAR models. The results of these studies were further used for the design and testing of new HDAC analogs.

Keywords: docking, e-pharmacophore, HDACIs, QSAR, suberoylanilide hydroxamic acid

Procedia PDF Downloads 301
429 Time Series Forecasting (TSF) Using Various Deep Learning Models

Authors: Jimeng Shi, Mahek Jain, Giri Narasimhan

Abstract:

Time Series Forecasting (TSF) predicts target variables at a future time point based on learning from previous time points. To keep the problem tractable, learning methods use data from a fixed-length window in the past as an explicit input. In this paper, we study how the performance of predictive models changes as a function of different look-back window sizes and different amounts of time into the future to predict. We also consider the performance of the recent attention-based Transformer models, which have had good success in the image processing and natural language processing domains. In all, we compare four different deep learning methods (RNN, LSTM, GRU, and Transformer) along with a baseline method. The dataset we used is the hourly Beijing Air Quality Dataset from the UCI website, a multivariate time series of many factors measured on an hourly basis over a period of 5 years (2010-14). For each model, we also report on the relationship between performance and the look-back window size and the number of time points predicted into the future. Our experiments suggest that Transformer models have the best performance, with the lowest mean absolute errors (MAE = 14.599, 23.273) and root mean square errors (RMSE = 23.573, 38.131) for most of our single-step and multi-step predictions. The best look-back window size for predicting 1 hour into the future appears to be one day, while 2 or 4 days perform best for predicting 3 hours into the future.
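
The fixed-length look-back windowing that all the compared methods share can be sketched as follows; a 24-hour window predicting 1 hour ahead matches the setting the experiments found best for single-step forecasts. The array shapes, not the models, are the point here, and the data are random stand-ins for the air quality series.

```python
import numpy as np

def make_windows(series, look_back, horizon):
    """series: (T, n_features); returns X: (N, look_back, n_features), y: (N,)."""
    X, y = [], []
    for t in range(len(series) - look_back - horizon + 1):
        X.append(series[t:t + look_back])
        y.append(series[t + look_back + horizon - 1, 0])  # target = feature 0
    return np.asarray(X), np.asarray(y)

hourly = np.random.default_rng(0).normal(size=(5 * 365 * 24, 6))  # ~5 years, 6 factors
X, y = make_windows(hourly, look_back=24, horizon=1)
print(X.shape, y.shape)   # (43776, 24, 6) (43776,): ready for RNN/LSTM/GRU/Transformer
```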

Keywords: air quality prediction, deep learning algorithms, time series forecasting, look-back window

Procedia PDF Downloads 156
428 Influence of the Granular Mixture Properties on the Rheological Properties of Concrete: Yield Stress Determination Using Modified Chateau et al. Model

Authors: Rachid Zentar, Mokrane Bala, Pascal Boustingorry

Abstract:

The prediction of the rheological behavior of concrete is at the center of current concerns of the concrete industry, for different reasons. The shortage of good-quality standard materials, combined with the variable properties of the available materials, makes it necessary to improve existing models to take these variations into account at the concrete design stage. The main reasons for improving the predictive models are, of course, saving time and cost at the design stage, as well as optimizing concrete performance. In this study, we highlight the different properties of granular mixtures that affect the rheological properties of concrete. Our objective is to identify the intrinsic parameters of the aggregates that make it possible to predict the yield stress of concrete. The work was done using two typologies of grains: crushed and rolled aggregates. The experimental results show that the rheology of concrete is improved by increasing the packing density of the granular mixture using rolled aggregates. The experimental program carried out made it possible to model the yield stress of concrete by a modified Chateau et al. model through a dimensionless parameter following the Krieger-Dougherty law. The modelling confirms that the yield stress of concrete depends not only on the properties of the cement paste but also on the packing density of the granular skeleton and the shape of the grains.
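
For reference, here is a hedged sketch of the Chateau-Ovarlez-Trung form of this relation, tau(phi)/tau(0) = sqrt((1 - phi) * (1 - phi/phi_m)^(-[eta]*phi_m)), with the Krieger-Dougherty exponent written in terms of an intrinsic viscosity [eta] (2.5 for spheres, higher for angular grains). The packing densities and [eta] values below are illustrative, not the paper's fitted coefficients.

```python
import numpy as np

def relative_yield_stress(phi, phi_m, eta_intrinsic=2.5):
    """Yield stress of the suspension relative to the plain cement paste.

    phi: solid volume fraction; phi_m: packing density of the granular
    skeleton; eta_intrinsic: intrinsic viscosity of the grain shape.
    """
    return np.sqrt((1 - phi) * (1 - phi / phi_m) ** (-eta_intrinsic * phi_m))

for label, phi_m, eta in [("rolled (denser packing)", 0.64, 2.5),
                          ("crushed (looser packing)", 0.58, 3.2)]:
    print(f"{label}: tau/tau_paste at phi = 0.50 -> "
          f"{relative_yield_stress(0.50, phi_m, eta):.2f}")
# The denser-packing (rolled) mixture yields a lower relative yield
# stress, consistent with the improved rheology reported above.
```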

Keywords: crushed aggregates, intrinsic viscosity, packing density, rolled aggregates, slump, yield stress of concrete

Procedia PDF Downloads 127
427 Bayesian Semiparametric Geoadditive Modelling of Underweight Malnutrition of Children under 5 Years in Ethiopia

Authors: Endeshaw Assefa Derso, Maria Gabriella Campolo, Angela Alibrandi

Abstract:

Objectives: Early childhood malnutrition can have long-term and irreversible effects on a child's health and development. This study uses a Bayesian method with spatial variation to investigate the flexible trends of metrical covariates and to identify communities at high risk of underweight. Methods: Cross-sectional data on underweight were collected from the 2016 Ethiopian Demographic and Health Survey (EDHS), and a Bayesian geo-additive model was fitted. Appropriate prior distributions were provided for the scale parameters in the models, and the inference is entirely Bayesian, using Markov chain Monte Carlo (MCMC) simulation. Results: The results show that metrical covariates such as child age, maternal body mass index (BMI), and maternal age affect a child's underweight status non-linearly. Both lower and higher maternal BMI appear to have a significant impact on childhood underweight. There was also significant spatial heterogeneity, and based on IDW interpolation of the predictive values, the western, central, and eastern parts of the country are hotspot areas. Conclusion: Socio-demographic and community-based program development should be considered comprehensively in Ethiopian policy to combat childhood underweight malnutrition.

Keywords: BayesX, Ethiopia, malnutrition, MCMC, semiparametric Bayesian analysis, spatial distribution, P-splines

Procedia PDF Downloads 90
426 Transfer of Business Anti-Corruption Norms in Developing Countries: A Case Study of Vietnam

Authors: Candice Lemaitre

Abstract:

During the 1990s, an alliance of international intergovernmental and non-governmental organizations proposed a set of regulatory norms designed to reduce corruption. Many governments in developing countries, such as Vietnam, enacted these global anti-corruption norms into their domestic law. This article draws on empirical research to understand why these anti-corruption norms have failed to reduce corruption in Vietnam and many other developing countries. Rather than investigating state compliance with global anti-corruption provisions, a topic that has already attracted considerable attention, this article aims to explore the comparatively under-researched area of business compliance. Based on data collected from semi-structured interviews with business managers in Vietnam and archival research, this article examines how businesses in Vietnam interpret and comply with global anti-corruption norms. It investigates why different types of companies in Vietnam engage with and respond to these norms in different ways. This article suggests that global anti-corruption norms have not been effective in reducing corruption in Vietnam because there is fragmentation in the way companies in Vietnam interpret and respond to these norms. This fragmentation results from differences in the epistemic (or interpretive) communities that companies draw upon to interpret global anti-corruption norms. This article uses discourse analysis to understand how the communities interpret global anti-corruption norms. This investigation aims to generate some predictive insights into how companies are likely to respond to anti-corruption regimes based on global anti-corruption norms.

Keywords: anti-corruption, business law, legal transfer, Vietnam

Procedia PDF Downloads 159
425 Assessing Level of Pregnancy Rate and Milk Yield in Indian Murrah Buffaloes

Authors: V. Jamuna, A. K. Chakravarty, C. S. Patil, Vijay Kumar, M. A. Mir, Rakesh Kumar

Abstract:

Intense selection of buffaloes for milk production at organized herds of the country, without giving due attention to fertility traits such as pregnancy rate, has led to deterioration in their performance. The aim of this study is to develop an optimum model for predicting pregnancy rate and to assess the level of pregnancy rate with respect to milk production in Murrah buffaloes. Data pertaining to 1224 lactation records of Murrah buffaloes spread over a period of 21 years were analyzed, and it was observed that pregnancy rate had a negative phenotypic association with lactation milk yield (-0.08 ± 0.04). To develop an optimum model for pregnancy rate in Murrah buffaloes, seven simple and multiple regression models were built. Among the seven models, model II, with service period as the only independent reproduction variable, was found to be the best prediction model based on four statistical criteria: a high coefficient of determination (R²), a low mean sum of squares due to error (MSSe), the conceptual predictive (CP) value, and the Bayesian information criterion (BIC). To standardize the level of fertility with respect to milk production, pregnancy rate was classified into seven classes in increments of 10% for all parities and lifetime, with the corresponding average pregnancy rate related to the average lactation milk yield (MY). It was observed that to achieve around 2000 kg MY, which can be considered optimum for Indian Murrah buffaloes, the pregnancy rate should be between 30% and 50%.

Keywords: life time, pregnancy rate, production, service period, standardization

Procedia PDF Downloads 636
424 Movie Genre Preference Prediction Using Machine Learning for Customer-Based Information

Authors: Haifeng Wang, Haili Zhang

Abstract:

Most movie recommendation systems have been developed to help customers find items of interest. This work introduces a predictive model usable by small and medium-sized enterprises (SMEs) that need a data-based, analytical approach to stock the right movies for local audiences and retain more customers. We used classification models to extract features from thousands of customers' demographic, behavioral, and social information and to predict their movie genre preferences. In the implementation, a Gaussian kernel support vector machine (SVM) classification model and a logistic regression model were established to extract features from the sample data, and their in-sample test errors were compared. Out-of-sample errors were also compared under different Vapnik-Chervonenkis (VC) dimensions of the learning algorithm to detect and prevent overfitting. The Gaussian kernel SVM prediction model correctly predicts movie genre preferences in 85% of positive cases. The accuracy of the algorithm increased to 93% with a smaller VC dimension and less overfitting. These findings advance our understanding of how to use machine learning to predict customers' preferences from a small data set, and of how to design prediction tools for such enterprises.
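
A hedged sketch of this comparison on synthetic stand-in features: an RBF-kernel SVM at two capacity settings against logistic regression. The kernel width gamma and penalty C are the practical levers behind the VC-dimension discussion; a large train-test gap signals the kind of overfitting the authors reduced.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=2000, n_features=15, n_informative=6,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, clf in [("RBF-SVM, low capacity", SVC(kernel="rbf", gamma=0.01, C=1.0)),
                  ("RBF-SVM, high capacity", SVC(kernel="rbf", gamma=1.0, C=100.0)),
                  ("logistic regression", LogisticRegression(max_iter=1000))]:
    clf.fit(X_tr, y_tr)
    print(f"{name:24s} train = {clf.score(X_tr, y_tr):.2f}, "
          f"test = {clf.score(X_te, y_te):.2f}")
```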

Keywords: computational social science, movie preference, machine learning, SVM

Procedia PDF Downloads 260
423 Antidiabetic and Admet Pharmacokinetic Properties of Grewia Lasiocarpa E. Mey. Ex Harv. Stem Bark Extracts: An in Vitro and in Silico Study

Authors: Akwu N. A., Naidoo Y., Salau V. F., Olofinsan K. A.

Abstract:

Grewia lasiocarpa E. Mey. ex Harv. (Malvaceae) is a Southern African medicinal plant indigenously used with other plants for birthing problems. The anti-diabetic properties of the hexane, chloroform, and methanol extracts of Grewia lasiocarpa stem bark were assessed using an in vitro α-glucosidase enzyme inhibition assay. The predictive in silico drug-likeness and toxicity properties of the phytocompounds were evaluated using the pkCSM, ADMETlab, and SwissADME computer-aided online tools. The highest α-glucosidase percentage inhibition was observed for the hexane extract (86.76%, IC50 = 0.24 mg/mL), followed by chloroform (63.08%, IC50 = 4.87 mg/mL) and methanol (53.22%, IC50 = 9.41 mg/mL), while acarbose, the standard anti-diabetic drug, showed 84.54% inhibition (IC50 = 1.96 mg/mL). The α-glucosidase assay revealed that the hexane extract exhibited the strongest inhibitory capacity and is a better inhibitor than the standard reference drug acarbose. The computational studies also affirm the results observed in the in vitro α-glucosidase assay. Thus, the extracts of G. lasiocarpa may be considered a potential plant-sourced treatment for type 2 diabetes mellitus. This is the first study of the anti-diabetic properties of Grewia lasiocarpa hexane, chloroform, and methanol extracts using in vitro and in silico models.

Keywords: grewia lasiocarpa, α-glucosidase inhibition, anti-diabetes, ADMET

Procedia PDF Downloads 104
422 Prediction of Music Track Popularity: A Machine Learning Approach

Authors: Syed Atif Hassan, Luv Mehta, Syed Asif Hassan

Abstract:

Hit song science is a field of investigation wherein machine learning techniques are applied to music tracks in order to extract features from the audio signal that capture information explaining the popularity of the respective tracks. Record companies invest huge amounts of money in recruiting fresh talent and churning out new music each year, and insight into what makes a song popular would bring tremendous benefits to the music industry. This paper aims to extract basic musical and more advanced acoustic features from songs, while also taking into account external factors that play a role in making a particular song popular. We use a dataset derived from popular Spotify playlists divided by genre, covering ten genres (blues, classical, country, disco, hip-hop, jazz, metal, pop, reggae, rock) chosen on the basis of the clear-to-ambiguous delineation of their typical sound. We feed these features into three different classifiers, namely an SVM with RBF kernel, a deep neural network, and a recurrent neural network, to build separate predictive models, choosing the best-performing model at the end. Predicting song popularity is particularly important for the music industry, as it would allow record companies to produce better content for the masses, resulting in a more competitive market.

Keywords: classifier, machine learning, music tracks, popularity, prediction

Procedia PDF Downloads 665
421 An Open Trial of Mobile-Assisted Cognitive Behavioral Therapy for Negative Symptoms in Schizophrenia: Pupillometry Predictors of Outcome

Authors: Eric Granholm, Christophe Delay, Jason Holden, Peter Link

Abstract:

Negative symptoms are an important unmet treatment need in schizophrenia. We conducted an open trial of a novel blended intervention called mobile-assisted cognitive behavior therapy for negative symptoms (mCBTn). mCBTn is a weekly group therapy intervention combining in-person and smartphone-based CBT (the CBT2go app) to improve experiential negative symptoms in people with schizophrenia. Both the therapy group and the CBT2go app included recovery goal setting, thought challenging, scheduling of pleasurable activities and social interactions, and pleasure-savoring interventions to modify defeatist attitudes, a target mechanism associated with negative symptoms, and to improve experiential negative symptoms. We tested whether participants with schizophrenia or schizoaffective disorder (N=31) who met prospective criteria for persistent negative symptoms showed improvement in experiential negative symptoms. Retention was excellent (87% at 18 weeks), and the severity of defeatist attitudes and of motivation and pleasure negative symptoms declined significantly in mCBTn, with large effect sizes. We also tested whether pupillary responses, a measure of cognitive effort, predicted improvement in negative symptoms in mCBTn. Pupillary responses were recorded at baseline with a Tobii pupillometer during the digit span task at 3-, 6-, and 9-digit spans. Mixed models showed that greater dilation during the task at baseline significantly predicted greater reduction in experiential negative symptoms. Pupillary responses may thus provide a much-needed prognostic biomarker of which patients are most likely to benefit from CBT. Pupil dilation has been linked to motivation and engagement of executive control, so these factors may contribute to the benefits of interventions that train cognitive skills to manage negative thoughts and emotions. The findings suggest mCBTn is a feasible and effective treatment for experiential negative symptoms and justify a larger randomized controlled clinical trial. The findings also provide support for the defeatist attitude model of experiential negative symptoms and suggest that mobile-assisted interventions like mCBTn can strengthen and shorten intensive psychosocial interventions for schizophrenia.

Keywords: cognitive-behavioral therapy, mobile interventions, negative symptoms, pupillometry, schizophrenia

Procedia PDF Downloads 181
420 Optimal Geothermal Borehole Design Guided By Dynamic Modeling

Authors: Hongshan Guo

Abstract:

Ground-source heat pumps (GSHPs) provide stable and reliable heating and cooling when designed properly. The confounding effect of borehole depth on a GSHP system, however, is rarely taken into account in any optimization: the determination of the borehole depth usually comes before the selection of the corresponding system components and hence before any optimization of the GSHP system. The depth of the borehole matters because the shallower the borehole, the larger the fluctuation of the near-borehole soil temperature. This can lead to fluctuations in the coefficient of performance (COP) of the GSHP system in the long term when the heating/cooling demand is large. Yet the deeper the boreholes are drilled, the higher the drilling cost and the operational expenses for circulation. A controller was developed that reads different building load profiles, optimizes for the smallest cost and the smallest temperature fluctuation at the borehole wall, and eventually provides the borehole depth as its output. Given the nonlinear dynamics of the GSHP system, the model predictive control (MPC) formulation was found more feasible than a conventional optimal control formulation, because both the trajectory during the iterations and the final output can be computed and compared against. Aside from a few scenarios with different weighting factors, the resulting system costs were verified against the literature and reports and found to be relatively accurate, while the temperature fluctuation at the borehole wall was also within an acceptable range. It was therefore determined that MPC is adequate for optimizing the investment as well as the system performance for various outputs.
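
A toy receding-horizon sketch of this trade-off (not the paper's calibrated model): at each step, the next H extraction rates are chosen to track the building load while penalizing drift of the near-borehole soil temperature, and only the first move is applied before the horizon shifts and the problem is re-solved. The soil model, load profile, and weights are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

a, b = 0.05, 0.8          # soil recovery rate per step; temperature drop per unit load
H = 6                     # prediction horizon (steps)
load = np.array([1.0, 1.2, 1.5, 1.5, 1.0, 0.6, 0.4, 0.8, 1.3, 1.1])

T = 0.0                   # soil temperature deviation from undisturbed, K
for k in range(len(load) - H):
    def cost(q, T0=T, demand=load[k:k + H]):
        Tj, J = T0, 0.0
        for qj, dj in zip(q, demand):
            Tj = Tj + a * (0.0 - Tj) - b * qj          # simple soil model
            J += (qj - dj) ** 2 + 0.5 * Tj ** 2        # tracking + fluctuation
        return J
    q_opt = minimize(cost, x0=np.full(H, load[k]), bounds=[(0, 2)] * H).x
    T = T + a * (0.0 - T) - b * q_opt[0]               # apply first move only
    print(f"step {k}: q = {q_opt[0]:.2f} (demand {load[k]:.2f}), T = {T:+.2f} K")
```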

Keywords: geothermal borehole, MPC, dynamic modeling, simulation

Procedia PDF Downloads 287
419 An Electrocardiography Deep Learning Model to Detect Atrial Fibrillation on Clinical Application

Authors: Jui-Chien Hsieh

Abstract:

Background: 12-lead electrocardiography (ECG) is one of the most frequently used tools in clinical practice to detect atrial fibrillation (AF), which can degenerate into life-threatening stroke. In this study, AF detection by the clinically used 12-lead ECG device had a positive predictive value (PPV) of only 0.73-0.77. Objective: There is great demand for a new algorithm to improve the precision of AF detection using the 12-lead ECG. Drawing on progress in artificial intelligence (AI), we developed a deep learning ECG model that can recognize AF patterns and reduce false-positive errors. Methods: (1) 570 12-lead ECG reports whose computer interpretation by the ECG device was AF were collected as the training dataset. The ECG reports were interpreted by 2 senior cardiologists, who confirmed that the precision of AF detection by the ECG device was 0.73. (2) 88 12-lead ECG reports whose computer interpretation by the ECG device was AF were used as the test dataset. The cardiologists confirmed that 68 of the 88 reports were AF and the others were not; the precision of AF detection by the ECG device was thus about 0.77. (3) A parallel 4-layer 1-dimensional convolutional neural network (CNN) was developed to identify AF based on the limb-lead and chest-lead ECGs. Results: The results indicate that this model performs better on AF detection than the traditional computer interpretation of the ECG device on the 88 test samples, with 0.94 PPV, 0.98 sensitivity, and 0.80 specificity. Conclusions: Compared with the clinical ECG device, this AI ECG model raises the precision of AF detection from 0.77 to 0.94 and can have an impact on clinical applications.
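
A hedged sketch of such a parallel architecture in PyTorch: two stacks of four 1-D convolutional layers, one fed the six limb leads and one the six chest leads, merged into a single AF logit. The layer widths, kernel size, and 5000-sample record length (10 s at 500 Hz) are assumptions, not the paper's exact specification.

```python
import torch
import torch.nn as nn

def conv_branch(in_ch):
    layers, ch = [], in_ch
    for out_ch in (16, 32, 64, 128):                  # four conv layers
        layers += [nn.Conv1d(ch, out_ch, kernel_size=7, padding=3),
                   nn.ReLU(), nn.MaxPool1d(4)]
        ch = out_ch
    return nn.Sequential(*layers, nn.AdaptiveAvgPool1d(1), nn.Flatten())

class ParallelECGNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.limb = conv_branch(6)                    # I, II, III, aVR, aVL, aVF
        self.chest = conv_branch(6)                   # V1-V6
        self.head = nn.Linear(128 + 128, 1)           # AF logit

    def forward(self, limb, chest):
        return self.head(torch.cat([self.limb(limb), self.chest(chest)], dim=1))

model = ParallelECGNet()
logit = model(torch.randn(8, 6, 5000), torch.randn(8, 6, 5000))
print(logit.shape)  # torch.Size([8, 1]); train with BCEWithLogitsLoss
```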

Keywords: 12-lead ECG, atrial fibrillation, deep learning, convolutional neural network

Procedia PDF Downloads 114
418 Predictive Value of Primary Tumor Depth for Cervical Lymphadenopathy in Squamous Cell Carcinoma of Buccal Mucosa

Authors: Zohra Salim

Abstract:

Objective: To assess the relationship of primary tumor thickness with cervical lymphadenopathy in squamous cell carcinoma of the buccal mucosa. Methodology: A cross-sectional observational study was carried out on 80 patients with biopsy-proven oral squamous cell carcinoma of the buccal mucosa at Dow University of Health Sciences. All study participants were treated with wide local excision of the primary tumor with elective neck dissection. Patients with prior head and neck malignancy or prior radiotherapy or chemotherapy were excluded. Data were entered and analyzed in SPSS 21. A chi-squared test with a 95% confidence interval and 80% power was used to evaluate the relationship of tumor depth with cervical lymph node status. Results: 50 participants were male and 30 were female. 30 patients were in the age range of 20-40 years, 36 in the range of 40-60 years, and 14 beyond 60 years. Tumor size ranged from 0.3 cm to 5 cm, with a mean of 2.03 cm; tumor depth ranged from 0.2 cm to 5 cm. 20% of the participants presented with a tumor depth greater than 2.5 cm, while 80% presented with a depth less than 2.5 cm. Of the 80 patients, 27 presented with negative lymph nodes and 53 with positive lymph nodes. Conclusion: Our study concludes that a relationship exists between the depth of the primary tumor and cervical lymphadenopathy in squamous cell carcinoma of the buccal mucosa.

Keywords: squamous cell carcinoma, tumor depth, cervical lymphadenopathy, buccal mucosa

Procedia PDF Downloads 237
417 Firm Performance and Stock Price in Nigeria

Authors: Tijjani Bashir Musa

Abstract:

The recent global crisis, which resulted in the sudden crash of the Nigerian stock market, revealed some peculiarities of Nigerian firms. Some firms in Nigeria are performing well, yet their stock prices are not increasing, while some firms are on the brink of collapse, yet their stock prices are increasing. This study therefore examines the relationship between firm performance and stock price in Nigeria. The study covered the period 2005 to 2009, which spans both the stock boom and the stock market crash that followed the global financial meltdown. The study is a panel study: a total of 140 firms were sampled from the 216 firms listed on the Nigerian Stock Exchange (NSE). Data were collected from secondary sources and divided into four strata: the best-performing stocks, the worst-performing stocks, the best-performing firms, and the worst-performing firms, each stratum containing 35 firms. Multiple linear regression models were used to analyse the data, with the statistical/econometric package Stata 11.0 used to run the analysis. The study found that a relationship exists between the selected firm performance parameters (operating efficiency, firm profit, earnings per share, and working capital) and stock price. As such, firm performance gave sufficient information, that is, had predictive power, on stock price movements in Nigeria for all the years under study. The study recommends, among other things, that managers of firms in Nigeria should formulate policies and exert effort geared towards improving firm performance, which will enhance stock price movements.

Keywords: firm, Nigeria, performance, stock price

Procedia PDF Downloads 477
416 Logistic Regression Based Model for Predicting Students’ Academic Performance in Higher Institutions

Authors: Emmanuel Osaze Oshoiribhor, Adetokunbo MacGregor John-Otumu

Abstract:

In recent years, there has been a desire to forecast student academic achievement prior to graduation, to help students improve their grades, particularly those with poor performance. The goal of this study is to employ supervised learning techniques to construct a predictive model of student academic achievement. Many researchers have already built models that predict student academic achievement from factors such as smoking, demography, culture, social media, parents' educational background, parents' finances, and family background, to name a few; those features and models, however, may not classify students correctly in terms of their academic performance. The model presented here is built using a logistic regression classifier with basic features, namely the previous semester's course score, class attendance, class participation, and the total number of course materials or resources the student is able to cover per semester, to predict whether the student will perform well in future related courses. The model outperformed other classifiers such as Naive Bayes, support vector machine (SVM), decision tree, random forest, and AdaBoost, returning 96.7% accuracy. It is available as a desktop application, allowing both instructors and students to benefit from a user-friendly interface for predicting student academic achievement. As a result, it is recommended that both students and professors use this tool to better forecast outcomes.

Keywords: artificial intelligence, ML, logistic regression, performance, prediction

Procedia PDF Downloads 98
415 Geochemical Study of Natural Bitumen, Condensate and Gas Seeps from Sousse Area, Central Tunisia

Authors: Belhaj Mohamed, M. Saidi, N. Boucherab, N. Ouertani, I. Bouazizi, M. Ben Jrad

Abstract:

Natural hydrocarbon seepage has long aided petroleum exploration as a direct indicator of subsurface gas and/or oil accumulations. Surface macro-seeps are generally an indication of a fault in an active petroleum seepage system belonging to a total petroleum system. This paper describes a case study in which multiple analytical techniques were used to identify and characterize trace petroleum-related hydrocarbons and other volatile organic compounds in groundwater samples collected from the Sousse aquifer (Central Tunisia). The analytical techniques used for the water samples included gas chromatography-mass spectrometry (GC-MS), capillary GC with flame-ionization detection, compound-specific isotope analysis, and Rock-Eval pyrolysis. The objective was to confirm the presence of gasoline, other petroleum products, or other volatile organic pollutants in the samples, in order to assess the respective implication of each of the potentially responsible parties in the contamination of the aquifer; the degree of contamination at different depths in the aquifer was also of interest. The oil and gas seeps were investigated using biomarker and stable carbon isotope analyses to perform oil-oil and oil-source rock correlations. The seepage gases are characterized by high CH4 content, very low δ13C-CH4 values (-71.9‰), high C1/C1-5 ratios (0.95-1.0), light deuterium-hydrogen isotope ratios (-198‰), and light δ13C-C2 and δ13C-CO2 values (-23.8‰ and -23.8‰, respectively), indicating a thermogenic origin with a contribution of biogenic gas. An organic geochemistry study was carried out on more than ten oil seep samples, including light hydrocarbon and biomarker analyses (hopanes, steranes, n-alkanes, acyclic isoprenoids, and aromatic steroids) using GC and GC-MS. The studied samples fall into at least two distinct families, suggesting two different crude oil origins. The first family appears to be highly mature, shows evidence of chemical and/or biological degradation, and was derived from a clay-rich source rock deposited under suboxic conditions; it has been sourced mainly by the lower Fahdene (Albian) source rocks. The second family was derived from a carbonate-rich source rock deposited under anoxic conditions and correlates well with the Bahloul (Cenomanian-Turonian) source rock.

Keywords: biomarkers, oil and gas seeps, organic geochemistry, source rock

Procedia PDF Downloads 444
414 Bayesian Borrowing Methods for Count Data: Analysis of Incontinence Episodes in Patients with Overactive Bladder

Authors: Akalu Banbeta, Emmanuel Lesaffre, Reynaldo Martina, Joost Van Rosmalen

Abstract:

Including data from previous studies (historical data) in the analysis of a current study may reduce the sample size requirement and/or increase the power of the analysis. The most common example is incorporating historical control data in the analysis of a current clinical trial; however, this only applies when the historical control data are similar enough to the current control data. Recently, several Bayesian approaches for incorporating historical data have been proposed, such as the meta-analytic-predictive (MAP) prior and the modified power prior (MPP), both for a single historical control arm and for multiple historical control arms. Here, we examine the performance of the MAP and MPP approaches for the analysis of (over-dispersed) count data. To this end, we propose a computational method for the MPP approach for the Poisson and negative binomial models. We conducted an extensive simulation study to assess the performance of these Bayesian approaches, and we illustrate them on an overactive bladder data set. For similar data across the control arms, the MPP approach outperformed the MAP approach with respect to statistical power. When the means across the control arms differ, the MPP yielded a slightly inflated type I error (TIE) rate, whereas the MAP did not. In contrast, when the dispersion parameters differ, the MAP gave an inflated TIE rate, whereas the MPP did not. We conclude that the MPP approach is more promising than the MAP approach for incorporating historical count data.
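
The power-prior idea is easy to show for Poisson counts with a conjugate Gamma prior: historical data enter the update downweighted by a factor delta in [0, 1]. The MPP additionally treats delta as random; the sketch below fixes it for clarity and uses synthetic counts, not the overactive bladder data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
y_hist = rng.poisson(2.0, size=120)          # historical control episode counts
y_curr = rng.poisson(2.3, size=40)           # current control episode counts
a0, b0 = 0.1, 0.1                            # vague Gamma(a0, b0) prior

for delta in (0.0, 0.5, 1.0):                # 0 = discard history, 1 = pool fully
    # Conjugate Gamma-Poisson update with the historical likelihood
    # raised to the power delta: Gamma(a0 + delta*sum(y_hist) + sum(y_curr),
    #                                  b0 + delta*n_hist + n_curr).
    a = a0 + delta * y_hist.sum() + y_curr.sum()
    b = b0 + delta * len(y_hist) + len(y_curr)
    post = stats.gamma(a, scale=1 / b)       # posterior of the Poisson rate
    lo, hi = post.ppf([0.025, 0.975])
    print(f"delta = {delta:.1f}: posterior mean rate = {post.mean():.2f} "
          f"(95% CrI {lo:.2f}-{hi:.2f})")
```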

Keywords: count data, meta-analytic prior, negative binomial, Poisson

Procedia PDF Downloads 118