Search results for: click prediction
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2304

204 Estimation of Scour Using a Coupled Computational Fluid Dynamics and Discrete Element Model

Authors: Zeinab Yazdanfar, Dilan Robert, Daniel Lester, S. Setunge

Abstract:

Scour has been identified as the most common threat to bridge stability worldwide. Traditionally, scour around bridge piers is calculated using empirical approaches that have considerable limitations and are difficult to generalize. The multi-physics nature of scouring, which involves turbulent flow, soil mechanics and solid-fluid interactions, cannot be captured by simple empirical equations developed from limited laboratory data. These limitations can be overcome by direct numerical modeling of the coupled hydro-mechanical scour process, which provides a robust prediction of bridge scour and valuable insights into the scour process. Several numerical models have been proposed in the literature for bridge scour estimation, including Eulerian flow models and coupled Euler-Lagrange models incorporating an empirical sediment transport description. However, the contact forces between particles and the flow-particle interaction have not been taken into consideration. Incorporating collisional and frictional forces between soil particles, as well as the effect of flow-driven forces on particles, will facilitate accurate modeling of the complex nature of scour. In this study, a coupled Computational Fluid Dynamics and Discrete Element Model (CFD-DEM) has been developed to simulate the scour process that directly models the hydro-mechanical interactions between the sediment particles and the flowing water. This approach obviates the need for an empirical description, as the fundamental fluid-particle and particle-particle interactions are fully resolved. The sediment bed is simulated as a dense pack of particles, and the frictional and collisional forces between particles are calculated, whilst the turbulent fluid flow is modeled using a Reynolds-Averaged Navier-Stokes (RANS) approach. The CFD-DEM model is validated against experimental data in order to assess its reliability. The modeling results reveal the criticality of particle impact on the assessment of scour depth which, to the authors' best knowledge, has not been considered in previous studies. The results of this study open new perspectives on scour depth and time assessment, which is key to managing the failure risk of bridge infrastructure.
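As a rough illustration of the fluid-particle coupling described above, the sketch below advances a single sediment grain under gravity, buoyancy and a drag force computed from a local (RANS-style) fluid velocity. The drag correlation, particle properties and flow velocity are placeholder assumptions for illustration, not the solver or parameters used in the study.

```python
# Minimal sketch of one-way CFD-to-DEM coupling for a single sediment grain.
# The drag correlation (Schiller-Naumann), particle properties and the local
# fluid velocity are illustrative assumptions, not values from the study.
import numpy as np

rho_f, rho_p = 1000.0, 2650.0      # fluid / particle density [kg/m^3]
mu = 1.0e-3                        # dynamic viscosity [Pa s]
d = 1.0e-3                         # particle diameter [m]
g = np.array([0.0, 0.0, -9.81])
m = rho_p * np.pi * d**3 / 6.0     # particle mass

def drag_force(u_fluid, u_particle):
    """Schiller-Naumann drag on a sphere from the local (RANS) fluid velocity."""
    u_rel = u_fluid - u_particle
    re = rho_f * np.linalg.norm(u_rel) * d / mu + 1e-12
    cd = 24.0 / re * (1.0 + 0.15 * re**0.687) if re < 1000 else 0.44
    return 0.5 * rho_f * cd * (np.pi * d**2 / 4.0) * np.linalg.norm(u_rel) * u_rel

x = np.zeros(3)
v = np.zeros(3)
u_f = np.array([0.5, 0.0, 0.0])    # assumed local near-bed flow velocity [m/s]
dt = 1.0e-4
for _ in range(1000):              # explicit DEM-style time stepping
    buoyant_weight = (rho_p - rho_f) / rho_p * m * g
    a = (drag_force(u_f, v) + buoyant_weight) / m
    v += a * dt
    x += v * dt
print("particle displacement after 0.1 s:", x)
```

A full CFD-DEM solver additionally resolves particle-particle contact forces and feeds the particle reaction back into the fluid momentum equations; the one-way sketch only shows the flow-driven side of that exchange.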

Keywords: bridge scour, discrete element method, CFD-DEM model, multi-phase model

Procedia PDF Downloads 131
203 The Usefulness of Premature Chromosome Condensation Scoring Module in Cell Response to Ionizing Radiation

Authors: K. Rawojć, J. Miszczyk, A. Możdżeń, A. Panek, J. Swakoń, M. Rydygier

Abstract:

Due to mitotic delay, a poor mitotic index and the disappearance of lymphocytes from peripheral blood circulation, assessing DNA damage after high-dose exposure is less effective. Conventional chromosome aberration analysis or the cytokinesis-blocked micronucleus assay does not provide an accurate dose estimation or radiosensitivity prediction at doses higher than 6.0 Gy. For this reason, there is a need to establish reliable methods allowing analysis of biological effects after exposure in the high dose range, i.e., during particle radiotherapy. Lately, Premature Chromosome Condensation (PCC) has become an important method in high-dose biodosimetry and a promising tool in the treatment of cancer patients. The aim of the study was to evaluate the usefulness of the drug-induced PCC scoring procedure in an experimental mode, where 100 G2/M cells were analyzed in different dose ranges. To test the consistency of the obtained results, scoring was performed by 3 independent persons in the same mode and following identical scoring criteria. Whole-body exposure was simulated in an in vitro experiment by irradiating whole blood collected from healthy donors with 60 MeV protons and 250 keV X-rays, in the range of 4.0 - 20.0 Gy. The drug-induced PCC assay was performed on human peripheral blood lymphocytes (HPBL) isolated after in vitro exposure. Cells were cultured for 48 hours with PHA. Then, to achieve premature condensation, calyculin A was added. After Giemsa staining, chromosome spreads were photographed and manually analyzed by the scorers. The dose-effect curves were derived by counting the excess chromosome fragments. The results indicated adequate dose estimates for the whole-body exposure scenario in the high dose range for both studied types of radiation. Moreover, the compared results revealed no significant differences between scorers, which is important for reducing the analysis time. These investigations were conducted as part of an extended examination of 60 MeV protons from the AIC-144 isochronous cyclotron at the Institute of Nuclear Physics in Kraków, Poland (IFJ PAN), by cytogenetic and molecular methods, and were partially supported by grant DEC-2013/09/D/NZ7/00324 from the National Science Centre, Poland.
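A dose-effect calibration curve of this kind is typically obtained by fitting the mean number of excess chromosome fragments per cell against dose and then inverting the fit to estimate an unknown exposure. The sketch below does this with a simple linear model; the fragment yields are invented for illustration, not the study's measurements.

```python
# Hedged sketch: fit a linear dose-effect curve (excess PCC fragments vs dose)
# and invert it for dose estimation. The yields below are invented for
# illustration; they are not the data reported in the study.
import numpy as np

doses = np.array([4.0, 8.0, 12.0, 16.0, 20.0])          # Gy
excess_fragments = np.array([1.9, 3.8, 5.6, 7.7, 9.4])  # mean per cell (assumed)

slope, intercept = np.polyfit(doses, excess_fragments, 1)
print(f"yield = {intercept:.2f} + {slope:.2f} * dose")

def estimate_dose(observed_yield):
    """Invert the calibration curve to estimate an unknown exposure dose."""
    return (observed_yield - intercept) / slope

print("estimated dose for 6.5 fragments/cell:", round(estimate_dose(6.5), 1), "Gy")
```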

Keywords: cell response to radiation exposure, drug induced premature chromosome condensation, premature chromosome condensation procedure, proton therapy

Procedia PDF Downloads 353
202 The Emergence of Memory at the Nanoscale

Authors: Victor Lopez-Richard, Rafael Schio Wengenroth Silva, Fabian Hartmann

Abstract:

Memcomputing is a computational paradigm that combines information processing and storage on the same physical platform. Key elements for this topic are devices with an inherent memory, such as memristors, memcapacitors, and meminductors. Despite the widespread emergence of memory effects in various solid systems, a clear understanding of the basic microscopic mechanisms that trigger them remains a puzzle. We report basic ingredients of the theory of solid-state transport, intrinsic to a wide range of mechanisms, as sufficient conditions for a memristive response that points to the natural emergence of memory. This emergence should be discernible under an adequate set of driving inputs, as highlighted by our theoretical prediction; general common trends can thus be listed that become the rule and not the exception, with contrasting signatures according to symmetry constraints, either built-in or induced by external factors at the microscopic level. Explicit analytical figures of merit for the memory modulation of the conductance are presented, unveiling very concise and accessible correlations between general intrinsic microscopic parameters such as relaxation times, activation energies, and efficiencies (encountered throughout various fields in Physics) and external drives: voltage pulses, temperature, illumination, etc. These building blocks of memory can be extended to a vast universe of materials and devices, with combinations of parallel and independent transport channels, providing an efficient and unified physical explanation for a wide class of resistive memory devices that have emerged in recent years. Its simplicity and practicality have also allowed a direct correlation with reported experimental observations, with the potential of pointing out the optimal driving configurations. The main methodological tools combine three quantum transport approaches, a Drude-like model, the Landauer-Buttiker formalism, and field-effect transistor emulators, with the microscopic characterization of nonequilibrium dynamics. Both qualitative and quantitative agreements with available experimental responses are provided for validating the main hypothesis. This analysis also sheds light on the basic universality of complex natural impedances of systems out of equilibrium and might help pave the way for new trends in the area of memory formation as well as in its technological applications.

Keywords: memories, memdevices, memristors, nonequilibrium states

Procedia PDF Downloads 99
201 Computational Modelling of pH-Responsive Nanovalves in Controlled-Release System

Authors: Tomilola J. Ajayi

Abstract:

A category of nanovalve systems containing the α-cyclodextrin (α-CD) ring on a stalk tethered to the pores of mesoporous silica nanoparticles (MSN) is theoretically and computationally modelled. This functions to control the opening and blocking of the MSN pores for an efficient targeted drug release system. Modeling of the nanovalves is based on the interaction between α-CD and the stalk (p-anisidine) in relation to pH variation. Conformational analysis was carried out prior to the formation of the inclusion complex, to find the global minimum of both the neutral and the protonated stalk. The B3LYP/6-311G(d,p) level of theory was employed to obtain all theoretically possible conformers of the stalk. Six conformers were taken into consideration, and the dihedral angle (θ) around the reference atom (N17) of the p-anisidine stalk was scanned from 0° to 360° at 5° intervals. The most stable conformer was obtained at a dihedral angle of 85.3° and was fully optimized at the B3LYP/6-311G(d,p) level of theory. The most stable conformer obtained from the conformational analysis was used as the starting structure to create the inclusion complexes. Nine complexes were formed by moving the neutral guest into the α-CD cavity along the Z-axis in 1 Å steps while keeping the distance between the dummy atom and the OMe oxygen atom on the stalk restricted. The dummy atom and the carbon atoms on the α-CD structure were equally restricted for orientation A (see Scheme 1). The structures generated at each step were optimized with the B3LYP/6-311G(d,p) method to determine their energy minima. Protonation of the nitrogen atom on the stalk occurs at acidic pH, leading to an unsatisfactory host-guest interaction in the nanogate; hence there is dethreading. A high required interaction energy and a conformational change are theoretically established to drive the release of α-CD at a certain pH. The release was found to occur between pH 5 and 7, which agrees with reported experimental results. In this study, we applied the theoretical model for the prediction of the experimentally observed pH-responsive nanovalves, which enable the blocking and opening of mesoporous silica nanoparticle pores for a targeted drug release system. Our results show that two major factors are responsible for the cargo release at acidic pH: the higher interaction energy needed for the complex/nanovalve formation to exist after protonation, as well as the conformational change upon protonation, drives the release due to a slight pH change between 5 and 7.

Keywords: nanovalves, nanogate, mesoporous silica nanoparticles, cargo

Procedia PDF Downloads 124
200 Layouting Phase II of New Priok Using Adaptive Port Planning Frameworks

Authors: Mustarakh Gelfi, Tiedo Vellinga, Poonam Taneja, Delon Hamonangan

Abstract:

The development of New Priok/Kalibaru as an expansion terminal of the old port is being carried out by IPC (Indonesia Port Corporation) together with its subsidiary company, the Port Developer (PT Pengembangan Pelabuhan Indonesia). As stated in the master plan, of the two phases that had been proposed, Phase I has taken shape, and Container Terminal I has even been in operation since 2016. In principle, the development was planned to be divided into Phase I (2013-2018), consisting of 3 container terminals and 2 product terminals, and Phase II (2018-2023), consisting of 4 container terminals. In fact, the master plan has had to be changed due to some major uncertainties that escaped prediction. This study is focused on the design scenario of Phase II (2035 onwards) to deal with future uncertainty. The outcome is a robust design of Phase II of the Kalibaru Terminal that takes the future changes into account. Flexibility has to be a major goal in a large infrastructure project like New Priok in order to deal with and manage future uncertainty. The phasing of the project needs to be adapted and re-examined frequently before it becomes irrelevant to future challenges. One of the frameworks that has been developed by an expert in port planning is Adaptive Port Planning (APP) with scenario-based planning. The idea behind the APP framework is the adaptation that might be needed at any moment as an answer to a challenge. It is a continuous procedure that basically aims to increase the lifespan of waterborne transport infrastructure by increasing flexibility in the planning, contracting and design phases. Other methods used in this study are brainstorming with the port authority, desk study, interviews and a site visit to the real project. The result of the study is expected to provide the port authority of Tanjung Priok with insight into the future outlook and how it will impact the design of the port. Guidelines for designing in an uncertain environment are provided as well. Flexibility solutions can be divided into: 1 - physical solutions, covering all items related to hard infrastructure in the projects; common measures of this type include modularity, standardization, multi-functionality, shorter or longer design lifetimes, reusability, etc.; and 2 - non-physical solutions, usually related to the planning processes, decision making and management of the projects. To conclude, the APP framework seems robust enough to deal with the problem of designing Phase II of the New Priok project for such a long period.

Keywords: Indonesia port, port's design, port planning, scenario-based planning

Procedia PDF Downloads 240
199 Development of Gully Erosion Prediction Model in Sokoto State, Nigeria, using Remote Sensing and Geographical Information System Techniques

Authors: Nathaniel Bayode Eniolorunda, Murtala Abubakar Gada, Sheikh Danjuma Abubakar

Abstract:

The challenge of erosion in the study area is persistent, suggesting the need for a better understanding of the mechanisms that drive it. Thus, the study developed a predictive erosion model (RUSLE_Sok), deploying Remote Sensing (RS) and Geographical Information System (GIS) tools. The nature and pattern of the factors of erosion were characterized, while soil losses were quantified. Factors' impacts were also measured, and the morphometry of gullies was described. Data on the five factors of RUSLE and distances to settlements, rivers and roads (K, R, LS, P, C, DS, DRd and DRv) were combined and processed following standard RS and GIS algorithms. The Harmonized World Soil Database (HWSD), a Shuttle Radar Topographical Mission (SRTM) image, Climate Hazards Group InfraRed Precipitation with Station data (CHIRPS), a Sentinel-2 image accessed and processed within the Google Earth Engine, the road network and settlements were the data combined and calibrated into the factors for erosion modeling. A gully morphometric study was conducted at some purposively selected sites. Factors of soil erosion showed patterns ranging from low through moderate to high. Soil losses ranged from 0 to 32.81 tons/ha/year, classified into low (97.6%), moderate (0.2%), severe (1.1%) and very severe (1.05%) forms. The multiple regression analysis shows that the factors statistically significantly predicted soil loss, F(8, 153) = 55.663, p < .0005. Except for the C-Factor with a negative coefficient, all other factors were positive, with contributions in the order LS > C > R > P > DRv > K > DS > DRd. Gullies generally range from less than 100 m to about 3 km in length. Average minimum and maximum depths at gully heads are 0.6 and 1.2 m, while those at mid-stream are 1 and 1.9 m, respectively. The minimum downstream depth is 1.3 m, while the maximum is 4.7 m. Deeper gullies exist in proximity to rivers. With minimum and maximum gully elevation values ranging between 229 and 338 m and an average slope of about 3.2%, the study area is relatively flat. The study concluded that the major erosion influencers in the study area are topography and vegetation cover and that RUSLE_Sok predicted soil loss more effectively than the ordinary RUSLE. The adoption of conservation measures such as tree planting and contour ploughing on sloping farmlands was recommended.
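For orientation, the classical RUSLE estimate underlying models of this family is a cell-by-cell product of the factor rasters; the sketch below illustrates that overlay with tiny synthetic arrays standing in for the K, R, LS, C and P layers. The regression-extended RUSLE_Sok terms (DS, DRd, DRv) and the study's severity thresholds are not reproduced; the band limits below are assumptions.

```python
# Minimal raster overlay for the classical RUSLE soil-loss equation
#   A = R * K * LS * C * P   (tons/ha/year)
# Tiny synthetic 2x2 arrays stand in for the real factor rasters; the
# regression-based RUSLE_Sok extension (DS, DRd, DRv terms) is not reproduced.
import numpy as np

R  = np.array([[320.0, 340.0], [310.0, 330.0]])   # rainfall erosivity
K  = np.array([[0.20, 0.25], [0.22, 0.24]])       # soil erodibility
LS = np.array([[0.8, 1.5], [0.6, 2.1]])           # slope length/steepness
C  = np.array([[0.30, 0.10], [0.45, 0.05]])       # cover management
P  = np.array([[1.0, 0.8], [1.0, 0.7]])           # support practice

A = R * K * LS * C * P
print("soil loss (t/ha/yr):\n", A.round(2))

# Classify into severity bands (thresholds assumed, not those of the study)
bands = np.digitize(A, bins=[10.0, 20.0, 30.0])   # 0=low .. 3=very severe
print("severity class:\n", bands)
```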

Keywords: RUSLE_Sok, Sokoto, google earth engine, sentinel-2, erosion

Procedia PDF Downloads 78
198 Effects of Nutrients Supply on Milk Yield, Composition and Enteric Methane Gas Emissions from Smallholder Dairy Farms in Rwanda

Authors: Jean De Dieu Ayabagabo, Paul A.Onjoro, Karubiu P. Migwi, Marie C. Dusingize

Abstract:

This study investigated the effects of feed on milk yield and quality through feed monitoring and quality assessment, and the consequent enteric methane gas emissions from smallholder dairy farms in drier areas of Rwanda, using the Tier II approach for four seasons in three zones, namely: Mayaga and peripheral Bugesera (MPB), Eastern Savanna and Central Bugesera (ESCB), and Eastern Plateau (EP). The study was carried out using 186 dairy cows with a mean live weight of 292 kg in three communal cowsheds. The milk quality analysis was carried out on 418 samples. Methane emission was estimated using prediction equations. Data collected were subjected to ANOVA. The dry matter intake was lower (p<0.05) in the long dry season (7.24 kg), with the ESCB zone having the highest value of 9.10 kg, explained by the practice of crop-livestock integration agriculture in that zone. The dry matter digestibility varied between seasons and zones, ranging from 52.5 to 56.4% for seasons and from 51.9 to 57.5% for zones. The daily protein supply was higher (p<0.05) in the long rain season, at 969 g. The mean daily milk production of lactating cows was 5.6 L, with a lower value (p<0.05) during the long dry season (4.76 L) and the MPB zone having the lowest value of 4.65 L. The yearly milk production per cow was 1179 L. The milk fat varied from 3.79 to 5.49% with seasonal and zone variation. No variation was observed for milk protein. The seasonal daily methane emission varied from 150 g for the long dry season to 174 g for the long rain season (p<0.05). The rain season had the highest methane emission, as it is associated with high forage intake. The mean emission factor was 59.4 kg of methane/year. The present EFs were higher than the default IPCC Tier I value of 41 kg for livestock in developing countries of Africa, the Middle East, and other tropical regions, due to the higher live weight in the current study. The methane emission per unit of milk production was lower in the EP zone (46.8 g/L) due to the feed efficiency observed in that zone. Farmers should use high-quality feeds to increase the milk yield and reduce the methane gas produced per unit of milk. For an accurate assessment of the methane produced from dairy farms, there is a need for the use of the Life Cycle Assessment approach that considers all the sources of emissions.
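For readers unfamiliar with the Tier II calculation, the enteric emission factor is derived from gross energy intake and a methane conversion factor; a minimal sketch of the standard formula is given below. The gross energy and Ym inputs are assumptions chosen only to illustrate the arithmetic, not the herd measurements of this study.

```python
# Hedged sketch of the IPCC Tier 2 enteric methane emission factor:
#   EF [kg CH4/head/yr] = GE [MJ/day] * (Ym/100) * 365 / 55.65
# where 55.65 MJ/kg is the energy content of methane. Input values below are
# assumptions for illustration, not the data collected in the study.
def tier2_emission_factor(gross_energy_mj_per_day: float, ym_percent: float) -> float:
    return gross_energy_mj_per_day * (ym_percent / 100.0) * 365.0 / 55.65

ge = 140.0   # assumed gross energy intake, MJ/head/day
ym = 6.5     # assumed methane conversion factor, % of GE

ef = tier2_emission_factor(ge, ym)
print(f"emission factor: {ef:.1f} kg CH4/head/year")
print(f"daily emission:  {ef / 365 * 1000:.0f} g CH4/head/day")
```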

Keywords: footprint, forage, girinka, tier

Procedia PDF Downloads 205
197 Assessing Online Learning Paths in a Learning Management System Using a Data Mining and Machine Learning Approach

Authors: Alvaro Figueira, Bruno Cabral

Abstract:

Nowadays, students are used to being assessed through an online platform. Educators have stepped up from a period in which they endured the transition from paper to digital. The use of a diversified set of question types that range from quizzes to open questions is currently common in most university courses. In many courses, today, the evaluation methodology also fosters the students' online participation in forums, the download and upload of modified files, or even the participation in group activities. At the same time, new pedagogy theories that promote the active participation of students in the learning process, and the systematic use of problem-based learning, are being adopted using an eLearning system for that purpose. However, although there can be a lot of feedback from these activities to students, usually it is restricted to the assessments of online well-defined tasks. In this article, we propose an automatic system that informs students of abnormal deviations from a 'correct' learning path in the course. Our approach is based on the fact that obtaining this information earlier in the semester may provide students and educators with an opportunity to resolve an eventual problem regarding the student's current online actions towards the course. Our goal is to prevent situations that have a significant probability of leading to a poor grade and, eventually, to failing. In the major learning management systems (LMS) currently available, the interaction between the students and the system itself is registered in log files in the form of registers that mark the beginning of actions performed by the user. Our proposed system uses that logged information to derive new information: the time each student spends on each activity, the time and order of the resources used by the student and, finally, the online resource usage pattern. Then, using the grades assigned to the students in previous years, we built a learning dataset that is used to feed a machine learning meta classifier. The produced classification model is then used to predict the grades a learning path is heading to, in the current year. Not only does this approach serve the teacher, but also the student, who receives automatic feedback on her current situation, having past years as a perspective. Our system can be applied to online courses that integrate the use of an online platform that stores user actions in a log file, and that has access to other students' evaluations. The system is based on a data mining process on the log files and on a self-feedback machine learning algorithm that works paired with the Moodle LMS.
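The pipeline can be pictured as follows: derive per-student features (time spent per activity, resource usage) from a Moodle-style event log, then train a classifier on past grades and apply it to current learning paths. In the sketch below the log layout, column names and the RandomForest choice are illustrative assumptions, not the authors' actual feature set or meta classifier.

```python
# Hedged sketch of the log-mining pipeline: derive per-student features from a
# Moodle-style event log and train a classifier on past grades. Column names,
# the log layout and the RandomForest choice are illustrative assumptions.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

log = pd.DataFrame({                      # one row per logged user action
    "student": ["s1", "s1", "s2", "s2", "s3", "s3"],
    "resource": ["quiz1", "forum", "quiz1", "upload", "quiz1", "forum"],
    "duration_min": [30, 12, 5, 8, 22, 15],
})
grades = pd.Series({"s1": "pass", "s2": "fail", "s3": "pass"})  # previous years

# Feature matrix: total time spent per student on each resource type
features = log.pivot_table(index="student", columns="resource",
                           values="duration_min", aggfunc="sum", fill_value=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(features, grades.loc[features.index])

# Predict where a current learning path is heading, given the same features
print(clf.predict(features.iloc[[0]]))
```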

Keywords: data mining, e-learning, grade prediction, machine learning, student learning path

Procedia PDF Downloads 123
196 Integrating Data Mining with Case-Based Reasoning for Diagnosing Sorghum Anthracnose

Authors: Mariamawit T. Belete

Abstract:

Cereal production and marketing are the means of livelihood for millions of households in Ethiopia. However, cereal production is constrained by technical and socio-economic factors. Among the technical factors, cereal crop diseases are the major contributing factors to the low yield. The aim of this research is to develop an integration of data mining and a knowledge-based system for sorghum anthracnose disease diagnosis that assists agriculture experts and development agents in making timely decisions. The anthracnose diagnosis system gathers information from the Melkassa agricultural research center and attempts to score the anthracnose severity scale. Empirical research is designed for data exploration, modeling, and confirmatory procedures for testing hypotheses and prediction to draw a sound conclusion. WEKA (Waikato Environment for Knowledge Analysis) was employed for the modeling. Knowledge-based systems have come across a variety of approaches based on the knowledge representation method; case-based reasoning (CBR) is one of the popular approaches used in knowledge-based systems. CBR is a problem-solving strategy that uses previous cases to solve new problems. The system utilizes hidden knowledge extracted by employing clustering algorithms, specifically K-means clustering, from a sampled anthracnose dataset. Clustered cases with centroid values are mapped to jCOLIBRI, and then the integrator application is created using NetBeans with JDK 8.0.2. The important parts of a case-based reasoning model include retrieval, the similarity-measuring stage; reuse, which allows the domain expert to transfer the retrieved case solution to suit the current case; revise, to test the solution; and retain, to store the confirmed solution in the case base for future use. Evaluation of the system was done for both system performance and user acceptance. For testing the prototype, seven test cases were used. Experimental results show that the system achieves average precision and recall values of 70% and 83%, respectively. User acceptance testing was also performed by involving five domain experts, and an average acceptance of 83% was achieved. Although the results of this study are promising, further investigation of hybrid approaches, such as rule-based reasoning and a pictorial retrieval process, is recommended.
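A minimal sketch of the two steps named above, K-means clustering of the case base followed by nearest-case retrieval for a new query (the CBR "retrieve" step), is shown below. The feature names and values are invented, and the jCOLIBRI/NetBeans integration is not reproduced.

```python
# Hedged sketch: cluster an anthracnose case base with K-means, then retrieve
# the most similar stored case for a new query (the CBR "retrieve" step).
# Features and values are invented; the jCOLIBRI integration is not shown.
import numpy as np
from sklearn.cluster import KMeans

# columns: lesion severity score, humidity (%), temperature (C)  (assumed features)
cases = np.array([
    [1.0, 60.0, 22.0],
    [2.0, 75.0, 25.0],
    [4.0, 85.0, 27.0],
    [5.0, 90.0, 28.0],
])
solutions = ["mild: monitor", "mild: monitor", "severe: fungicide", "severe: fungicide"]

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(cases)

query = np.array([[4.5, 88.0, 27.5]])
cluster = km.predict(query)[0]                         # restrict search to one cluster
members = np.where(km.labels_ == cluster)[0]
nearest = members[np.argmin(np.linalg.norm(cases[members] - query, axis=1))]
print("retrieved case:", nearest, "->", solutions[nearest])   # reuse/revise/retain follow
```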

Keywords: sorghum anthracnose, data mining, case based reasoning, integration

Procedia PDF Downloads 82
195 Effects of Foreign-language Learning on Bilinguals' Production in Both Their Languages

Authors: Natalia Kartushina

Abstract:

Foreign (second) language (L2) learning is highly promoted in modern society. Students are encouraged to study abroad (SA) to achieve the most effective learning outcomes. However, L2 learning has side effects for native language (L1) production, as L1 sounds might show a drift from the L1 norms towards those of the L2, and this, even after a short period of L2 learning. L1 assimilatory drift has been attributed to a strong perceptual association between similar L1 and L2 sounds in the mind of L2 learners; thus, a change in the production of an L2 target leads to a change in the production of the related L1 sound. However, nowadays, it is quite common that speakers acquire two languages from birth, as is the case, for example, in many bilingual communities (e.g., Basque and Spanish in the Basque Country). Yet, it remains to be established how FL learning affects native production in individuals who have two native languages, i.e., in simultaneous or very early bilinguals. Does FL learning (here a third language, L3) affect both of the bilinguals' languages or only one? What factors determine which of the bilinguals' languages is more susceptible to change? The current study examines the effects of L3 (English) learning on the production of vowels in the two native languages of simultaneous Spanish-Basque bilingual adolescents enrolled in the Erasmus SA English program. Ten bilingual speakers read five Spanish and Basque consonant-vowel-consonant-vowel words two months before their SA and the next day after their arrival back in Spain. Each word contained the target vowel in the stressed syllable and was repeated five times. Acoustic analyses measuring vowel openness (F1) and backness (F2) were performed. Two possible outcomes were considered. First, we predicted that L3 learning would affect the production of only one language, and this would be the language used the most in contact with English during the SA period. This prediction stems from the results of recent studies showing that early bilinguals have separate phonological systems for each of their languages, and that late FL learners (as is the case for our participants), who tend to use their L1 in language-mixing contexts, have more L2-accented L1 speech. The second possibility stated that L3 learning would affect both of the bilinguals' languages, in line with studies showing that bilinguals' L1 and L2 phonologies interact and constantly co-influence each other. The results revealed that speakers who used both languages equally often (balanced users) showed an F1 drift in both languages toward the F1 of the English vowel space. Unbalanced speakers, however, showed a drift only in the less used language. The results are discussed in light of recent studies suggesting that the amount of language use is a strong predictor of authenticity in speech production, with less language use leading to more foreign-accented speech and, eventually, to language attrition.

Keywords: language-contact, multilingualism, phonetic drift, bilinguals' production

Procedia PDF Downloads 110
194 Predicting Photovoltaic Energy Profile of Birzeit University Campus Based on Weather Forecast

Authors: Muhammad Abu-Khaizaran, Ahmad Faza’, Tariq Othman, Yahia Yousef

Abstract:

This paper presents a study to provide sufficient and reliable information for constructing a Photovoltaic energy profile of the Birzeit University campus (BZU) based on the weather forecast. The developed Photovoltaic energy profile helps to predict the energy yield of the Photovoltaic systems based on the weather forecast and hence helps in planning energy production and consumption. Two models will be developed in this paper: a Clear Sky Irradiance model and a Cloud-Cover Radiation model to predict the irradiance for a clear sky day and a cloudy day, respectively. The adopted procedure for developing such models takes into consideration two levels of abstraction. First, irradiance and weather data were acquired by a sensory (measurement) system installed on the rooftop of the Information Technology College building at the Birzeit University campus. Second, power readings of a fully operational 51 kW commercial Photovoltaic system installed at the University on the rooftop of the adjacent College of Pharmacy-Nursing and Health Professions building are used to validate the output of a simulation model and to help refine its structure. Based on a comparison between a mathematical model, which calculates the Clear Sky Irradiance for the University location, and two sets of accumulated measured data, it is found that the simulation system closely resembles the installed PV power station on clear sky days. However, these comparisons show a divergence between the expected energy yield and the actual energy yield in extreme weather conditions, including clouding and soiling effects. Therefore, a more accurate prediction model for irradiance that takes into consideration weather factors, such as relative humidity and cloudiness, which affect irradiance, was developed: the Cloud-Cover Radiation Model (CRM). The equivalent mathematical formulas implement corrections to provide more accurate inputs to the simulation system. The results of the CRM show a very good match with the actual measured irradiance during a cloudy day. The developed Photovoltaic profile helps in predicting the output energy yield of the Photovoltaic system installed at the University campus based on the predicted weather conditions. The simulation and practical results for both models are in very good agreement.
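One widely quoted cloud-cover correction of this general type scales the clear-sky irradiance by an okta-based attenuation factor; the sketch below applies such a correction to an assumed clear-sky value and converts the result to an approximate PV power. The functional form and all numbers are generic textbook values, not the fitted parameters of the BZU models described in the paper.

```python
# Hedged sketch of a cloud-cover radiation correction in the spirit of the CRM:
# scale a clear-sky global irradiance by a cloudiness factor. The Kasten-Czeplak
# form and all numbers below are generic illustrative values, not the fitted
# parameters of the BZU models described in the paper.
def cloud_corrected_irradiance(ghi_clear_w_m2: float, cloud_cover_okta: float) -> float:
    """Kasten-Czeplak style correction: G = G_clear * (1 - 0.75 * (N/8)**3.4)."""
    n = max(0.0, min(cloud_cover_okta, 8.0))
    return ghi_clear_w_m2 * (1.0 - 0.75 * (n / 8.0) ** 3.4)

ghi_clear = 850.0          # assumed clear-sky irradiance at solar noon [W/m^2]
for okta in (0, 2, 4, 6, 8):
    print(f"cloud cover {okta}/8 -> {cloud_corrected_irradiance(ghi_clear, okta):7.1f} W/m^2")

# A PV energy estimate then follows from irradiance, array area and efficiency:
area_m2, efficiency = 330.0, 0.17          # assumed for an array of roughly 51 kWp
print("approx. power at 4 okta:",
      round(cloud_corrected_irradiance(ghi_clear, 4) * area_m2 * efficiency / 1000, 1), "kW")
```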

Keywords: clear-sky irradiance model, cloud-cover radiation model, photovoltaic, weather forecast

Procedia PDF Downloads 133
193 Bayesian Estimation of Hierarchical Models for Genotypic Differentiation of Arabidopsis thaliana

Authors: Gautier Viaud, Paul-Henry Cournède

Abstract:

Plant growth models have been used extensively for the prediction of the phenotypic performance of plants. However, they most often remain calibrated for a given genotype and therefore do not take into account genotype by environment interactions. One way of achieving such an objective is to consider Bayesian hierarchical models. Three levels can be identified in such models: the first level describes how a given growth model describes the phenotype of the plant as a function of individual parameters; the second level describes how these individual parameters are distributed within a plant population; the third level corresponds to the attribution of priors on the population parameters. Thanks to the Bayesian framework, choosing appropriate priors for the population parameters makes it possible to derive analytical expressions for the full conditional distributions of these population parameters. As plant growth models are of a nonlinear nature, individual parameters cannot be sampled explicitly, and a Metropolis step must be performed. This allows for the use of a hybrid Gibbs-Metropolis sampler. A generic approach was devised for the implementation of both general state space models and estimation algorithms within a programming platform. It was designed using the Julia language, which combines an elegant syntax with metaprogramming capabilities and exhibits high efficiency. Results were obtained for Arabidopsis thaliana on both simulated and real data. An organ-scale Greenlab model for the latter is thus presented, where the surface areas of each individual leaf can be simulated. It is assumed that the error made on the measurement of leaf areas is proportional to the leaf area itself; multiplicative normal noise for the observations is therefore used. Real data were obtained via image analysis of zenithal images of Arabidopsis thaliana over a period of 21 days using a two-step segmentation and tracking algorithm which notably takes advantage of the Arabidopsis thaliana phyllotaxy. Since the model formulation is rather flexible, there is no need for the data for a single individual to be available at all times, nor for the times at which data are available to be the same for all the different individuals. This allows discarding data from image analysis when they are not considered reliable enough, thereby providing low-biased data in large quantity for leaf areas. The proposed model precisely reproduces the dynamics of Arabidopsis thaliana's growth while accounting for the variability between genotypes. In addition to the estimation of the population parameters, the level of variability is an interesting indicator of the genotypic stability of model parameters. A promising perspective is to test whether some of the latter should be considered as fixed effects.
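To make the sampler structure concrete, the sketch below alternates a conjugate Gibbs draw for a population mean with a Metropolis step for each individual's parameter of a deliberately simplified (linear) growth model. It is a toy stand-in written in Python: the organ-scale Greenlab model, the multiplicative noise model and the Julia implementation are not reproduced, and all priors and data are invented.

```python
# Hedged sketch of a Metropolis-within-Gibbs sampler for a two-level hierarchy:
# individual growth rates theta_i ~ N(mu, tau^2), observations
# y_ij ~ N(theta_i * t_j, sigma^2). A toy linear "growth model" stands in for
# the organ-scale Greenlab model; priors and data are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(1.0, 11.0)                       # observation times (days)
true_theta = np.array([0.8, 1.0, 1.2])         # per-individual growth rates
y = true_theta[:, None] * t + rng.normal(0, 0.3, (3, t.size))

sigma, tau = 0.3, 0.5                          # assumed known variances
mu, theta = 0.0, np.ones(3)                    # initial states
samples = []
for it in range(2000):
    # Gibbs step: conjugate normal update of the population mean mu (N(0, 10^2) prior)
    post_var = 1.0 / (theta.size / tau**2 + 1.0 / 10.0**2)
    mu = rng.normal(post_var * theta.sum() / tau**2, np.sqrt(post_var))
    # Metropolis step per individual parameter (no closed-form conditional in general)
    for i in range(3):
        prop = theta[i] + rng.normal(0, 0.05)
        def log_post(th):
            return (-0.5 * np.sum((y[i] - th * t) ** 2) / sigma**2
                    - 0.5 * (th - mu) ** 2 / tau**2)
        if np.log(rng.uniform()) < log_post(prop) - log_post(theta[i]):
            theta[i] = prop
    samples.append((mu, theta.copy()))

burned = samples[500:]
print("posterior mean of mu:", round(np.mean([s[0] for s in burned]), 3))
print("posterior means of theta:", np.mean([s[1] for s in burned], axis=0).round(3))
```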

Keywords: bayesian, genotypic differentiation, hierarchical models, plant growth models

Procedia PDF Downloads 304
192 Bartlett Factor Scores in Multiple Linear Regression Equation as a Tool for Estimating Economic Traits in Broilers

Authors: Oluwatosin M. A. Jesuyon

Abstract:

In order to provide a simpler tool that eliminates the age-long problems associated with the traditional index method for the selection of multiple traits in broilers, the Bartlett factor regression equation is proposed as an alternative selection tool. 100 day-old chicks each of the Arbor Acres (AA) and Annak (AN) broiler strains were obtained from two rival hatcheries in Ibadan, Nigeria. These were raised in a deep litter system in a 56-day feeding trial at the University of Ibadan Teaching and Research Farm, located in South-west Tropical Nigeria. The body weight and body dimensions were measured and recorded during the trial period. Eight (8) zoometric measurements, namely live weight (g), abdominal circumference, abdominal length, breast width, leg length, height, wing length and thigh circumference (all in cm), were recorded randomly from 20 birds within each strain, at a fixed time on the first day of each new week, with a 5-kg capacity Camry scale. These records were analyzed and compared using a completely randomized design (CRD) in the SPSS analytical software, with the means procedure and Factor Scores (FS) in a stepwise Multiple Linear Regression (MLR) procedure for initial live weight equations. Bartlett Factor Score (BFS) analysis extracted 2 factors for each strain, termed Body-length and Thigh-meatiness Factors for AA, and Breast Size and Height Factors for AN. These derived orthogonal factors assisted in deducing and comparing trait combinations that best describe body conformation and meatiness in the experimental broilers. The BFS procedure yielded different body conformational traits for the two strains, thus indicating the different economic traits and advantages of the strains. These factors could be useful as selection criteria for improving desired economic traits. The final Bartlett Factor Regression equations for the prediction of body weight were highly significant, with P < 0.0001, R2 of 0.92 and above, VIF of 1.00, and DW of 1.90 and 1.47 for Arbor Acres and Annak, respectively. These FSR equations could be used as a simple and potent tool for selection during poultry flock improvement; they could also be used to estimate the selection index of flocks in order to discriminate between strains and evaluate consumer preference traits in broilers.
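The sketch below shows one way to compute Bartlett-type factor scores from a fitted factor model and then regress live weight on those scores. The zoometric data are simulated and the code is a Python illustration of the principle, not the SPSS workflow actually used in the study.

```python
# Hedged sketch: extract factors from zoometric measurements, compute
# Bartlett-type factor scores, and use them in a multiple linear regression
# for live weight. Data are simulated; this is not the SPSS procedure used.
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 40
body_length = rng.normal(30, 3, n)          # cm, simulated zoometric traits
thigh_circ = rng.normal(14, 1.5, n)
breast_width = rng.normal(10, 1.0, n)
X = np.column_stack([body_length, thigh_circ, breast_width])
live_weight = 50 * body_length + 80 * thigh_circ + rng.normal(0, 40, n)  # g

fa = FactorAnalysis(n_components=2, random_state=0).fit(X)
lam = fa.components_.T                       # loadings, shape (features, factors)
psi_inv = np.diag(1.0 / fa.noise_variance_)  # inverse uniquenesses
centered = X - fa.mean_
# Bartlett factor scores: F = (L' Psi^-1 L)^-1 L' Psi^-1 (x - mean)
bartlett = centered @ psi_inv @ lam @ np.linalg.inv(lam.T @ psi_inv @ lam)

reg = LinearRegression().fit(bartlett, live_weight)
print("R^2 of the factor-score regression:", round(reg.score(bartlett, live_weight), 3))
```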

Keywords: alternative selection tool, Bartlet factor regression model, consumer preference trait, linear and body measurements, live body weight

Procedia PDF Downloads 203
191 Data-Driven Strategies for Enhancing Food Security in Vulnerable Regions: A Multi-Dimensional Analysis of Crop Yield Predictions, Supply Chain Optimization, and Food Distribution Networks

Authors: Sulemana Ibrahim

Abstract:

Food security remains a paramount global challenge, with vulnerable regions grappling with issues of hunger and malnutrition. This study embarks on a comprehensive exploration of data-driven strategies aimed at ameliorating food security in such regions. Our research employs a multifaceted approach, integrating data analytics to predict crop yields, optimizing supply chains, and enhancing food distribution networks. The study unfolds as a multi-dimensional analysis, commencing with the development of robust machine learning models harnessing remote sensing data, historical crop yield records, and meteorological data to foresee crop yields. These predictive models, underpinned by convolutional and recurrent neural networks, furnish critical insights into anticipated harvests, empowering proactive measures to confront food insecurity. Subsequently, the research scrutinizes supply chain optimization to address food security challenges, capitalizing on linear programming and network optimization techniques. These strategies intend to mitigate loss and wastage while streamlining the distribution of agricultural produce from field to fork. In conjunction, the study investigates food distribution networks with a particular focus on network efficiency, accessibility, and equitable food resource allocation. Network analysis tools, complemented by data-driven simulation methodologies, unveil opportunities for augmenting the efficacy of these critical lifelines. This study also considers the ethical implications and privacy concerns associated with the extensive use of data in the realm of food security. The proposed methodology outlines guidelines for responsible data acquisition, storage, and usage. The ultimate aspiration of this research is to forge a nexus between data science and food security policy, bestowing actionable insights to mitigate the ordeal of food insecurity. The holistic approach converging data-driven crop yield forecasts, optimized supply chains, and improved distribution networks aspires to revitalize food security in the most vulnerable regions, elevating the quality of life for millions worldwide.
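As a concrete illustration of the supply-chain optimization step mentioned above, the sketch below solves a tiny transportation problem (minimizing shipping cost from two warehouses to three distribution points) with linear programming. All costs, capacities and demands are invented; the study's actual network and constraints are not reproduced.

```python
# Hedged sketch of the supply-chain optimization step: a tiny transportation
# LP minimizing shipping cost from 2 warehouses to 3 distribution points,
# subject to supply and demand constraints. All numbers are invented.
import numpy as np
from scipy.optimize import linprog

cost = np.array([[4.0, 6.0, 9.0],       # cost[i, j]: warehouse i -> point j
                 [5.0, 4.0, 7.0]])
supply = np.array([80.0, 70.0])          # tons available per warehouse
demand = np.array([50.0, 60.0, 40.0])    # tons required per point

c = cost.ravel()                          # decision variables x[i, j], flattened
# supply constraints: sum_j x[i, j] <= supply[i]
A_ub = np.zeros((2, 6)); A_ub[0, 0:3] = 1; A_ub[1, 3:6] = 1
# demand constraints: sum_i x[i, j] == demand[j]
A_eq = np.zeros((3, 6))
for j in range(3):
    A_eq[j, [j, j + 3]] = 1

res = linprog(c, A_ub=A_ub, b_ub=supply, A_eq=A_eq, b_eq=demand, bounds=(0, None))
print("optimal shipping plan (tons):\n", res.x.reshape(2, 3).round(1))
print("minimum total cost:", round(res.fun, 1))
```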

Keywords: data-driven strategies, crop yield prediction, supply chain optimization, food distribution networks

Procedia PDF Downloads 63
190 Parallel Fuzzy Rough Support Vector Machine for Data Classification in Cloud Environment

Authors: Arindam Chaudhuri

Abstract:

Classification of data has been actively used as one of the most effective and efficient means of conveying knowledge and information to users. The primary focus has always been on techniques for extracting useful knowledge from data such that returns are maximized. With the emergence of huge datasets, the existing classification techniques often fail to produce desirable results. The challenge lies in analyzing and understanding the characteristics of massive data sets by retrieving useful geometric and statistical patterns. We propose a supervised parallel fuzzy rough support vector machine (PFRSVM) for data classification in a cloud environment. The classification is performed by PFRSVM using a hyperbolic tangent kernel. The fuzzy rough set model takes care of the sensitivity to noisy samples and handles impreciseness in training samples, bringing robustness to the results. The membership function is a function of the center and radius of each class in feature space and is represented with the kernel. It plays an important role in sampling the decision surface. The success of PFRSVM is governed by choosing appropriate parameter values. The training samples are either linearly or nonlinearly separable. The different input points make unique contributions to the decision surface. The algorithm is parallelized with a view to reducing training times. The system is built on a support vector machine library using the Hadoop implementation of MapReduce. The algorithm is tested on large data sets to check its feasibility and convergence. The performance of the classifier is also assessed in terms of the number of support vectors. The challenges encountered in implementing big data classification in machine learning frameworks are also discussed. The experiments are done on the cloud environment available at the University of Technology and Management, India. The results are illustrated for Gaussian RBF and Bayesian kernels. The effect of variability in prediction and generalization of PFRSVM is examined with respect to values of the parameter C. It effectively resolves outliers' effects, imbalance and overlapping class problems, generalizes to unseen data, and relaxes the dependency between features and labels. The average classification accuracy for PFRSVM is better than that of other classifiers for both Gaussian RBF and Bayesian kernels. The experimental results on both synthetic and real data sets clearly demonstrate the superiority of the proposed technique.
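The fuzzy-membership idea can be approximated in a few lines on a single machine: each training sample receives a membership weight that shrinks with its distance from its class center, and those weights down-weight noisy samples when fitting an SVM with a hyperbolic tangent (sigmoid) kernel. The sketch below uses synthetic data and is only an illustration of that principle; it is not the parallel Hadoop/MapReduce PFRSVM implementation, and the membership formula is an assumption.

```python
# Hedged single-machine sketch of the fuzzy-membership idea behind FRSVM:
# samples far from their class center get lower membership, which down-weights
# noisy points when fitting an SVM with a hyperbolic tangent (sigmoid) kernel.
# Data are synthetic; the parallel Hadoop/MapReduce layer is not reproduced.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X0 = rng.normal([1, 1], 1.0, (50, 2))
X1 = rng.normal([4, 4], 1.0, (50, 2))
X = np.vstack([X0, X1])
y = np.array([0] * 50 + [1] * 50)

def fuzzy_membership(X, y):
    """Membership based on distance to the class center (plus a small floor)."""
    m = np.empty(len(y))
    for c in np.unique(y):
        idx = y == c
        center = X[idx].mean(axis=0)
        d = np.linalg.norm(X[idx] - center, axis=1)
        m[idx] = 1.0 - d / (d.max() + 1e-6) + 0.05
    return m

weights = fuzzy_membership(X, y)
clf = SVC(kernel="sigmoid", C=1.0, gamma="scale")
clf.fit(X, y, sample_weight=weights)
print("training accuracy:", clf.score(X, y))
print("support vectors per class:", clf.n_support_)
```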

Keywords: FRSVM, Hadoop, MapReduce, PFRSVM

Procedia PDF Downloads 491
189 Contribution of PALB2 and BLM Mutations to Familial Breast Cancer Risk in BRCA1/2 Negative South African Breast Cancer Patients Detected Using High-Resolution Melting Analysis

Authors: N. C. van der Merwe, J. Oosthuizen, M. F. Makhetha, J. Adams, B. K. Dajee, S-R. Schneider

Abstract:

Women representing high-risk breast cancer families, who tested negative for pathogenic mutations in BRCA1 and BRCA2, are four times more likely to develop breast cancer compared to women in the general population. Sequencing of genes involved in genomic stability and DNA repair led to the identification of novel contributors to familial breast cancer risk. These include BLM and PALB2. Bloom's syndrome is a rare homozygous autosomal recessive chromosomal instability disorder with a high incidence of various types of neoplasia and is associated with breast cancer when in a heterozygous state. PALB2, on the other hand, binds to BRCA2, and together they partake actively in DNA damage repair. Archived DNA samples of 66 BRCA1/2-negative high-risk breast cancer patients were retrospectively selected based on the presence of an extensive family history of the disease (> 3 affected individuals per family). All coding regions and splice-site boundaries of both genes were screened using High-Resolution Melting Analysis. Samples exhibiting variation were subjected to bi-directional automated Sanger sequencing. The clinical significance of each variant was assessed using various in silico and splice site prediction algorithms. Comprehensive screening identified a total of 11 BLM and 26 PALB2 variants. The variants detected ranged from global to rare and included three novel mutations. Three BLM and two PALB2 likely pathogenic mutations were identified that could account for the disease in these extensive breast cancer families in the absence of BRCA mutations (BLM c.11T > A, p.V4D; BLM c.2603C > T, p.P868L; BLM c.3961G > A, p.V1321I; PALB2 c.421C > T, p.Gln141Ter; PALB2 c.508A > T, p.Arg170Ter). Conclusion: The study confirmed the contribution of pathogenic mutations in BLM and PALB2 to the familial breast cancer burden in South Africa. It explained the presence of the disease in 7.5% of the BRCA1/2-negative families with an extensive family history of breast cancer. Segregation analysis will be performed to confirm the clinical impact of these mutations for each of these families. These results justify the inclusion of both these genes in a comprehensive breast and ovarian next-generation sequencing cancer panel; they should be screened simultaneously with BRCA1 and BRCA2, as this might explain a significant percentage of familial breast and ovarian cancer in South Africa.

Keywords: Bloom Syndrome, familial breast cancer, PALB2, South Africa

Procedia PDF Downloads 236
188 Achieving Product Robustness through Variation Simulation: An Industrial Case Study

Authors: Narendra Akhadkar, Philippe Delcambre

Abstract:

In power protection and control products, assembly process variations due to the individual parts manufactured from single or multi-cavity tooling are a major problem. The dimensional and geometrical variations on the individual parts, in the form of manufacturing tolerances and assembly tolerances, are sources of clearance in the kinematic joints, polarization effects in the joints, and tolerance stack-up. All these variations adversely affect product quality, functionality, cost, and time-to-market. Variation simulation analysis may be used in the early product design stage to predict such uncertainties. Usually, variations exist in both manufacturing processes and materials. In the tolerance analysis, the effect of the dimensional and geometrical variations of the individual parts on the functional characteristics (conditions) of the final assembled products is studied. A functional characteristic of the product may be affected by a set of interrelated dimensions (functional parameters) that usually form a geometrical closure in a 3D chain. In power protection and control products, the prerequisite is: when a fault occurs in the electrical network, the product must respond quickly to react and break the circuit to clear the fault. Usually, the response time is in milliseconds. Any failure in clearing the fault may result in severe damage to the equipment or network, and human safety is at stake. In this article, we have investigated two important functional characteristics that are associated with the robust performance of the product. It is demonstrated that the experimental data obtained at the Schneider Electric Laboratory prove the very good prediction capabilities of the variation simulation performed using CETOL (tolerance analysis software) in an industrial context. In particular, this study allows design engineers to better understand the critical parts in the product that need to be manufactured with good, capable tolerances. On the contrary, some parts are not critical for the functional characteristics (conditions) of the product and may allow some reduction of the manufacturing cost while ensuring robust performance. Capable tolerancing is one of the most important aspects in product and manufacturing process design. In the case of the miniature circuit breaker (MCB), the product's quality and its robustness are mainly impacted by two aspects: (1) allocation of design tolerances between the components of a mechanical assembly and (2) manufacturing tolerances in the intermediate machining steps of component fabrication.
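A common way to study how part-level tolerances propagate to a functional characteristic, alongside dedicated tools such as CETOL, is a Monte Carlo stack-up. The sketch below assumes a simple one-dimensional clearance chain with invented nominal dimensions and tolerances; it illustrates the principle only and is not the MCB geometry or the tolerance model analysed in the paper.

```python
# Hedged Monte Carlo tolerance stack-up sketch: a functional clearance defined
# as gap = housing - (part_a + part_b + part_c). Nominals and tolerances are
# invented; this illustrates the principle, not the CETOL model of the MCB.
import numpy as np

rng = np.random.default_rng(42)
N = 100_000

def normal_from_tolerance(nominal, tol, n):
    """Assume +/- tol spans 3 sigma of the manufacturing distribution."""
    return rng.normal(nominal, tol / 3.0, n)

housing = normal_from_tolerance(30.00, 0.10, N)   # mm
part_a  = normal_from_tolerance(12.00, 0.05, N)
part_b  = normal_from_tolerance(10.00, 0.05, N)
part_c  = normal_from_tolerance( 7.80, 0.04, N)

gap = housing - (part_a + part_b + part_c)
print(f"gap mean = {gap.mean():.3f} mm, std = {gap.std():.3f} mm")
print(f"fraction below an assumed 0.05 mm functional limit: {(gap < 0.05).mean():.4%}")
```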

Keywords: geometrical variation, product robustness, tolerance analysis, variation simulation

Procedia PDF Downloads 164
187 A Robust Optimization of Chassis Durability/Comfort Compromise Using Chebyshev Polynomial Chaos Expansion Method

Authors: Hanwei Gao, Louis Jezequel, Eric Cabrol, Bernard Vitry

Abstract:

The chassis system is composed of complex elements that take up all the loads from the tire-ground contact area and thus plays an important role in numerous specifications such as durability, comfort, crash, etc. During the development of new vehicle projects at Renault, durability validation is always the main focus, while deployment of comfort comes later in the project. Therefore, design choices sometimes have to be reconsidered because of the natural incompatibility between these two specifications. Besides, robustness is also an important point of concern, as it is related to manufacturing costs as well as to the performance after the ageing of components like shock absorbers. In this paper, an approach is proposed aiming to realize a multi-objective optimization between chassis endurance and comfort while taking the random factors into consideration. The adaptive-sparse polynomial chaos expansion method (PCE) with Chebyshev polynomial series has been applied to predict the uncertainty intervals of a system's responses according to its uncertain-but-bounded parameters. The approach can be divided into three steps. First, an initial design of experiments is realized to build the response surfaces, which statistically represent a black-box system. Secondly, within several iterations, an optimum set is proposed and validated, which will form a Pareto front. At the same time, the robustness of each response, serving as additional objectives, is calculated from the pre-defined parameter intervals and the response surfaces obtained in the first step. Finally, an inverse strategy is carried out to determine the parameters' tolerance combination with a maximally acceptable degradation of the responses in terms of manufacturing costs. A quarter-car model has been tested as an example by applying road excitations from actual road measurements for both endurance and comfort calculations. One indicator based on Basquin's law is defined to compare the global chassis durability of different parameter settings. Another indicator related to comfort is obtained from the vertical acceleration of the sprung mass. An optimum set with the best robustness has finally been obtained, and the reference tests prove a good robustness prediction of the Chebyshev PCE method. This example demonstrates the effectiveness and reliability of the approach, in particular its ability to save computational costs for a complex system.
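As a one-dimensional illustration of the surrogate idea, the sketch below expands a response over Chebyshev polynomials of a single uncertain-but-bounded parameter and reads response bounds off the cheap surrogate. The response function, parameter interval and truncation order are assumptions for illustration; the adaptive-sparse multi-parameter expansion and the quarter-car model of the paper are not reproduced.

```python
# Hedged 1-D sketch of a Chebyshev polynomial chaos surrogate: expand a
# response y(x) of an uncertain-but-bounded parameter x in [a, b] on Chebyshev
# polynomials, then use the surrogate to bound the response. The response
# function and interval are invented; the quarter-car model is not reproduced.
import numpy as np
from numpy.polynomial import chebyshev as C

a, b = 900.0, 1100.0                 # assumed bounds on a suspension stiffness [N/m]
def response(k):                     # stand-in "black box" (e.g., a comfort indicator)
    return 1.0 / np.sqrt(k) + 1e-4 * np.sin(k / 30.0)

# Fit the expansion at Chebyshev-Gauss nodes mapped from [-1, 1] to [a, b]
order = 8
nodes = np.cos((2 * np.arange(order + 1) + 1) * np.pi / (2 * (order + 1)))
x_nodes = 0.5 * (b - a) * nodes + 0.5 * (a + b)
coeffs = C.chebfit(nodes, response(x_nodes), order)

# Evaluate the surrogate densely to estimate the response interval
xs = np.linspace(-1, 1, 2001)
ys = C.chebval(xs, coeffs)
print(f"surrogate response interval: [{ys.min():.5f}, {ys.max():.5f}]")
print(f"true response at the bounds: {response(a):.5f}, {response(b):.5f}")
```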

Keywords: chassis durability, Chebyshev polynomials, multi-objective optimization, polynomial chaos expansion, ride comfort, robust design

Procedia PDF Downloads 153
186 Spectrogram Pre-Processing to Improve Isotopic Identification to Discriminate Gamma and Neutrons Sources

Authors: Mustafa Alhamdi

Abstract:

An industrial application to classify gamma-ray and neutron events is investigated in this study using deep machine learning. Identification using a convolutional neural network and a recursive neural network showed a significant improvement in prediction accuracy in a variety of applications. The ability to identify the isotope type and activity from spectral information depends on feature extraction methods, followed by classification. The features extracted from the spectrum profiles try to find patterns and relationships to represent the actual spectrum energy in a low-dimensional space. Increasing the level of separation between classes in feature space improves the possibility of enhancing classification accuracy. The nonlinear feature extraction performed by a neural network involves a variety of transformations and mathematical optimization, while principal component analysis depends on linear transformations to extract features and subsequently improve the classification accuracy. In this paper, the isotope spectrum information has been preprocessed by finding the frequency components relative to time and using them as a training dataset. The Fourier transform implementation to extract frequency components has been optimized by a suitable windowing function. Training and validation samples of different isotope profiles interacting with a CdTe crystal have been simulated using Geant4. The readout electronic noise has been simulated by optimizing the mean and variance of a normal distribution. Ensemble learning, by combining the votes of many models, managed to improve the classification accuracy of the neural networks. The ability to discriminate gamma and neutron events in a single prediction approach has shown high accuracy using deep learning. The paper's findings show the ability to improve the classification accuracy by applying the spectrogram preprocessing stage to the gamma and neutron spectrums of different isotopes. Tuning deep machine learning models by hyperparameter optimization of the neural network models enhanced the separation in the latent space and provided the ability to extend the number of detected isotopes in the training database. Ensemble learning contributed significantly to improving the final prediction.
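The preprocessing step described above, extracting time-frequency components with a windowed Fourier transform and feeding them to a classifier, can be sketched as follows. The synthetic pulse shapes, window choice and simple logistic classifier are assumptions for illustration; the Geant4-simulated CdTe waveforms and the deep models of the paper are not reproduced.

```python
# Hedged sketch of the spectrogram preprocessing stage: compute a windowed
# time-frequency representation of a detector signal and flatten it as a
# feature vector for a classifier. The synthetic pulses stand in for the
# Geant4-simulated CdTe waveforms; the deep models are not reproduced.
import numpy as np
from scipy.signal import spectrogram
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
fs = 10_000                                   # sampling rate [Hz], assumed

def synthetic_event(kind):
    """Toy pulse: 'gamma' events decay faster than 'neutron' events (assumed)."""
    t = np.arange(0, 0.05, 1 / fs)
    tau = 0.002 if kind == "gamma" else 0.006
    return np.exp(-t / tau) * np.sin(2 * np.pi * 400 * t) + rng.normal(0, 0.05, t.size)

def features(signal):
    # A Hann window limits spectral leakage before the FFT of each segment
    _, _, sxx = spectrogram(signal, fs=fs, window="hann", nperseg=64, noverlap=32)
    return np.log1p(sxx).ravel()

X = np.array([features(synthetic_event(k)) for k in ["gamma", "neutron"] * 50])
y = np.array([0, 1] * 50)

clf = LogisticRegression(max_iter=1000).fit(X, y)
print("training accuracy on the toy set:", clf.score(X, y))
```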

Keywords: machine learning, nuclear physics, Monte Carlo simulation, noise estimation, feature extraction, classification

Procedia PDF Downloads 151
185 Frequency Response of Complex Systems with Localized Nonlinearities

Authors: E. Menga, S. Hernandez

Abstract:

Finite Element Models (FEMs) are widely used in order to study and predict the dynamic properties of structures, and usually, the prediction can be obtained with much more accuracy in the case of a single component than in the case of assemblies. Especially for structural dynamics studies, in the low and middle frequency range, most complex FEMs can be seen as assemblies made of linear components joined together at interfaces. From a modelling and computational point of view, these types of joints can be seen as localized sources of stiffness and damping and can be modelled as lumped spring/damper elements, most of the time characterized by nonlinear constitutive laws. On the other side, most FE programs are able to run nonlinear analysis in the time domain. They treat the whole structure as nonlinear, even if there is one nonlinear degree of freedom (DOF) out of thousands of linear ones, making the analysis unnecessarily expensive from a computational point of view. In this work, a methodology for obtaining the nonlinear frequency response of structures, whose nonlinearities can be considered as localized sources, is presented. The work extends the well-known Structural Dynamic Modification Method (SDMM) to a nonlinear set of modifications, and allows getting the Nonlinear Frequency Response Functions (NLFRFs) through an 'updating' process of the Linear Frequency Response Functions (LFRFs). A brief summary of the analytical concepts is given, starting from the linear formulation and discussing what the implications of the nonlinear one are. The response of the system is formulated in both the time and frequency domains. First, the Modal Database is extracted and the linear response is calculated. Secondly, the nonlinear response is obtained through the NL SDMM, by updating the underlying linear behavior of the system. The methodology, implemented in MATLAB, has been successfully applied to estimate the nonlinear frequency response of two systems. The first one is a two-DOF spring-mass-damper system, and the second example takes into account a full aircraft FE Model. In spite of the different levels of complexity, both examples show the reliability and effectiveness of the method. The results highlight a feasible and robust procedure, which allows a quick estimation of the effect of localized nonlinearities on the dynamic behavior. The method is particularly powerful when most of the FE Model can be considered as acting linearly and the nonlinear behavior is restricted to few degrees of freedom. The procedure is very attractive from a computational point of view because the FEM needs to be run just once, which allows faster nonlinear sensitivity analysis and easier implementation of optimization procedures for the calibration of nonlinear models.

Keywords: frequency response, nonlinear dynamics, structural dynamic modification, softening effect, rubber

Procedia PDF Downloads 266
184 Heat Transfer Dependent Vortex Shedding of Thermo-Viscous Shear-Thinning Fluids

Authors: Markus Rütten, Olaf Wünsch

Abstract:

Non-Newtonian fluid properties can change the flow behaviour significantly; its prediction is more difficult when thermal effects come into play. Hence, the focal point of this work is the wake flow behind a heated circular cylinder in the laminar vortex shedding regime for thermo-viscous shear-thinning fluids. In the case of isothermal flows of Newtonian fluids, the vortex shedding regime is characterised by a distinct Reynolds number and an associated Strouhal number. In the case of thermo-viscous shear-thinning fluids, the flow regime can change significantly depending on the temperature of the heated wall of the cylinder. The Reynolds number alters locally and, consequently, the Strouhal number globally. In the present CFD study, the temperature dependence of the Reynolds and Strouhal numbers is investigated for the flow of a Carreau fluid around a heated cylinder. The temperature dependence of the fluid viscosity has been modelled by applying the standard Williams-Landel-Ferry (WLF) equation. In the present simulation campaign, thermal boundary conditions have been varied over a wide range in order to derive a relation between dimensionless heat transfer, Reynolds number and Strouhal number. Together with the shear thinning due to the high shear rates close to the cylinder wall, this leads to a significant decrease in viscosity of three orders of magnitude in the near field of the cylinder and a reduction of two orders of magnitude in the wake field. Yet the shear-thinning effect is able to change the flow topology: a complex Kármán vortex street occurs, also revealing distinct characteristic frequencies associated with the dominant and sub-dominant vortices. Heating up the cylinder wall leads to delayed flow separation and a narrower wake flow, giving less space for the sequence of counter-rotating vortices. This spatial limitation not only reduces the amplitude of the oscillating wake flow, it also shifts the dominant frequency to higher frequencies and, furthermore, damps higher harmonics. Eventually, the locally heated wake flow smears out. Finally, the CFD simulation results of the systematically varied thermal flow parameter study have been used to describe a relation for the main characteristic order parameters.
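The two constitutive ingredients named above, the WLF temperature shift and the Carreau shear-thinning law, combine into a temperature- and shear-rate-dependent viscosity of the kind sketched below. The coefficients are generic illustrative values (the "universal" WLF constants and an arbitrary Carreau parameter set), not the fluid parameters used in the simulations, and the way the shift factor is applied here is one common convention, assumed for illustration.

```python
# Hedged sketch of a thermo-viscous shear-thinning viscosity: the Carreau law
# for shear thinning combined with a WLF shift factor for the temperature
# dependence. All coefficients are generic illustrative values, not the fluid
# parameters used in the study's simulations.
import numpy as np

# WLF: log10(a_T) = -C1 * (T - Tref) / (C2 + T - Tref)
C1, C2, T_REF = 8.86, 101.6, 293.15          # "universal" constants, Tref in K

def wlf_shift(temperature_k: float) -> float:
    dT = temperature_k - T_REF
    return 10.0 ** (-C1 * dT / (C2 + dT))

# Carreau: eta = eta_inf + (eta_0 - eta_inf) * [1 + (lambda * gamma_dot)^2]^((n-1)/2)
ETA_0, ETA_INF, LAM, N_EXP = 10.0, 0.001, 1.0, 0.4   # Pa s, Pa s, s, -

def viscosity(shear_rate: float, temperature_k: float) -> float:
    a_t = wlf_shift(temperature_k)
    eta_iso = ETA_INF + (ETA_0 - ETA_INF) * (1 + (LAM * a_t * shear_rate) ** 2) ** ((N_EXP - 1) / 2)
    return a_t * eta_iso            # temperature shift applied to the viscosity level

for T in (293.15, 313.15, 333.15):
    for gdot in (1.0, 100.0, 10000.0):
        print(f"T = {T:.0f} K, shear rate = {gdot:>7.0f} 1/s -> eta = {viscosity(gdot, T):.4g} Pa s")
```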

Keywords: heat transfer, thermo-viscous fluids, shear thinning, vortex shedding

Procedia PDF Downloads 298
183 Deep Reinforcement Learning Approach for Trading Automation in The Stock Market

Authors: Taylan Kabbani, Ekrem Duman

Abstract:

The design of adaptive systems that take advantage of financial markets while reducing risk can bring more stagnant wealth into the global market. However, most efforts to generate successful trades in financial assets rely on Supervised Learning (SL), which suffers from various limitations. Deep Reinforcement Learning (DRL) offers a way to overcome these drawbacks of SL approaches by combining the financial asset price "prediction" step and the portfolio "allocation" step in one unified process, producing fully autonomous systems capable of interacting with their environment to make optimal decisions through trial and error. In this paper, a continuous action space approach is adopted to give the trading agent the ability to gradually adjust the portfolio's positions at each time step (dynamically re-allocating investments), resulting in better agent-environment interaction and faster convergence of the learning process. In addition, the approach supports managing a portfolio with several assets instead of a single one. This work presents a novel DRL model to generate profitable trades in the stock market, effectively overcoming the limitations of supervised learning approaches. We formulate the trading problem, i.e. the agent-environment interaction, as a Partially Observed Markov Decision Process (POMDP) model, considering the constraints imposed by the stock market, such as liquidity and transaction costs. More specifically, we design an environment that simulates the real-world trading process by augmenting the state representation with ten different technical indicators and sentiment analysis of news articles for each stock. We then solve the formulated POMDP problem using the Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm, which can learn policies in high-dimensional and continuous action spaces like those typically found in the stock market environment. From the point of view of stock market forecasting and intelligent decision-making, this paper demonstrates the superiority of deep reinforcement learning in financial markets over other types of machine learning, such as supervised learning, and shows its credibility and advantages for strategic decision-making.
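
A minimal sketch of the continuous-action formulation is given below: a toy portfolio environment whose observation is a window of recent returns and whose action is a vector of portfolio weights, trained with TD3. It assumes the gymnasium and stable-baselines3 (v2+) packages and synthetic prices; the paper's actual environment augments the state with ten technical indicators and news sentiment per stock, which are omitted here.

```python
# Minimal sketch (assumptions: gymnasium + stable-baselines3 >= 2.0, synthetic
# prices): a continuous-action portfolio environment solved with TD3.
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import TD3

class PortfolioEnv(gym.Env):
    """Toy POMDP-style trading environment: observe recent returns, act by
    choosing portfolio weights, receive the portfolio log-return minus costs."""
    def __init__(self, prices, window=10, cost=1e-3):
        super().__init__()
        self.prices, self.window, self.cost = prices, window, cost
        n_assets = prices.shape[1]
        self.action_space = spaces.Box(-1.0, 1.0, shape=(n_assets,), dtype=np.float32)
        self.observation_space = spaces.Box(-np.inf, np.inf,
                                            shape=(window * n_assets,), dtype=np.float32)

    def _obs(self):
        rets = np.diff(np.log(self.prices[self.t - self.window:self.t + 1]), axis=0)
        return rets.astype(np.float32).ravel()

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.t = self.window
        self.w = np.zeros(self.prices.shape[1])
        return self._obs(), {}

    def step(self, action):
        new_w = np.tanh(action)                              # bounded long/short weights
        turnover = np.abs(new_w - self.w).sum()              # proxy for transaction costs
        self.t += 1
        step_ret = np.log(self.prices[self.t] / self.prices[self.t - 1])
        reward = float(new_w @ step_ret - self.cost * turnover)
        self.w = new_w
        done = self.t >= len(self.prices) - 1
        return self._obs(), reward, done, False, {}

# Synthetic geometric-random-walk prices for two assets (illustration only)
rng = np.random.default_rng(0)
prices = 100 * np.exp(np.cumsum(rng.normal(0, 0.01, size=(500, 2)), axis=0))
model = TD3("MlpPolicy", PortfolioEnv(prices), verbose=0)
model.learn(total_timesteps=5_000)
```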

Keywords: the stock market, deep reinforcement learning, MDP, twin delayed deep deterministic policy gradient, sentiment analysis, technical indicators, autonomous agent

Procedia PDF Downloads 178
182 Data Mining in Healthcare for Predictive Analytics

Authors: Ruzanna Muradyan

Abstract:

Medical data mining is a crucial field in contemporary healthcare that offers cutting-edge techniques with enormous potential to transform patient care. This abstract examines how sophisticated data mining techniques could transform the healthcare industry, with a special focus on how they might improve patient outcomes. Healthcare data repositories have evolved dynamically, producing a rich tapestry of diverse, multi-dimensional information that includes genetic profiles, lifestyle markers, electronic health records, and more. By utilizing data mining techniques inside this vast library, a variety of prospects for precision medicine, predictive analytics, and insight generation become visible. Predictive modeling for illness prediction, risk stratification, and therapy efficacy evaluation are important points of focus. Healthcare providers may use this abundance of data to tailor treatment plans, identify high-risk patient populations, and forecast disease trajectories by applying machine learning algorithms and predictive analytics. Better patient outcomes, more efficient use of resources, and early treatment are made possible by this proactive strategy. Furthermore, data mining techniques act as catalysts that reveal complex relationships between apparently unrelated data pieces, providing enhanced insights into the causes of disease, genetic susceptibilities, and environmental factors. Healthcare practitioners can obtain practical insights that guide disease prevention, customized patient counseling, and focused therapies by analyzing these associations. The abstract also explores the problems and ethical issues that come with using data mining techniques in the healthcare industry. In order to use these approaches properly, it is essential to find a balance between data privacy, security issues, and the interpretability of complex models. Finally, this abstract demonstrates the revolutionary power of modern data mining methodologies in transforming the healthcare sector. Healthcare practitioners and researchers can uncover unique insights, enhance clinical decision-making, and ultimately elevate patient care to unprecedented levels of precision and efficacy by employing cutting-edge methodologies.
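
The kind of risk stratification described above can be sketched in a few lines; the example below trains a classifier on synthetic, hypothetical EHR-style features to rank patients by predicted risk. The feature names, the outcome definition and the intervention threshold are all illustrative assumptions.

```python
# Minimal sketch (synthetic data, hypothetical feature names): risk
# stratification of patients with a machine-learning classifier, the kind of
# predictive modelling the abstract describes.
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 2_000
X = pd.DataFrame({
    "age": rng.integers(20, 90, n),
    "bmi": rng.normal(27, 5, n),
    "systolic_bp": rng.normal(130, 15, n),
    "hba1c": rng.normal(5.8, 0.9, n),
    "smoker": rng.integers(0, 2, n),
})
# Synthetic outcome: readmission risk rising with age, HbA1c and smoking
logit = 0.03 * (X["age"] - 55) + 0.8 * (X["hba1c"] - 5.8) + 0.7 * X["smoker"] - 1.0
y = rng.random(n) < 1 / (1 + np.exp(-logit))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = GradientBoostingClassifier().fit(X_tr, y_tr)
risk = model.predict_proba(X_te)[:, 1]
print("AUC:", roc_auc_score(y_te, risk))
# Patients in the top decile of predicted risk would be flagged for early intervention
high_risk = X_te[risk >= np.quantile(risk, 0.9)]
```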

Keywords: data mining, healthcare, patient care, predictive analytics, precision medicine, electronic health records, machine learning, predictive modeling, disease prognosis, risk stratification, treatment efficacy, genetic profiles, precision health

Procedia PDF Downloads 63
181 Consolidated Predictive Model of the Natural History of Breast Cancer Considering Primary Tumor and Secondary Distant Metastases Growth

Authors: Ella Tyuryumina, Alexey Neznanov

Abstract:

This study is an attempt to obtain reliable data on the natural history of breast cancer growth. We analyze the opportunities for using classical mathematical models (exponential and logistic tumor growth models, Gompertz and von Bertalanffy tumor growth models) to describe growth of the primary tumor and the secondary distant metastases of human breast cancer. The research aim is to improve the accuracy of predicting breast cancer progression using an original mathematical model referred to as CoMPaS and the corresponding software. We are interested in: 1) modelling the whole natural history of the primary tumor and the secondary distant metastases; 2) developing an adequate and precise CoMPaS which reflects the relations between the primary tumor and the secondary distant metastases; 3) analyzing the scope of application of CoMPaS; 4) implementing the model as a software tool. The foundation of CoMPaS is the exponential tumor growth model, described by deterministic nonlinear and linear equations, and the model corresponds to the TNM classification. It allows different growth periods of the primary tumor and the secondary distant metastases to be calculated: 1) the ‘non-visible period’ for the primary tumor; 2) the ‘non-visible period’ for the secondary distant metastases; 3) the ‘visible period’ for the secondary distant metastases. CoMPaS is validated on clinical data of 10-year and 15-year survival depending on the tumor stage and the diameter of the primary tumor. The new predictive tool: 1) is a solid foundation for future studies of breast cancer growth models; 2) does not require any expensive diagnostic tests; 3) is the first predictor that makes a forecast using only current patient data, whereas the others are based on additional statistical data. The CoMPaS model and predictive software: a) fit clinical trials data; b) detect different growth periods of the primary tumor and the secondary distant metastases; c) forecast the period in which the secondary distant metastases appear; d) have higher average prediction accuracy than the other tools; e) can improve forecasts of breast cancer survival and facilitate optimization of diagnostic tests. The following are calculated by CoMPaS: the number of doublings for the ‘non-visible’ and ‘visible’ growth periods of the secondary distant metastases, and the tumor volume doubling time (days) for the ‘non-visible’ and ‘visible’ growth periods of the secondary distant metastases. CoMPaS enables, for the first time, the ‘whole natural history’ of the primary tumor and the secondary distant metastases to be predicted at each stage (pT1, pT2, pT3, pT4) relying only on the primary tumor sizes. Summarizing: a) CoMPaS correctly describes the primary tumor growth of IA, IIA, IIB, IIIB (T1-4N0M0) stages without metastases in lymph nodes (N0); b) it facilitates understanding of the period of appearance and inception of the secondary distant metastases.
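
The exponential bookkeeping that CoMPaS builds on can be illustrated as follows: the tumour volume doubling time inferred from two diameter measurements, and the length of a 'non-visible period' counted in doublings from a single cell to a detection threshold. The measurement values, cell volume and detection size below are illustrative assumptions, not CoMPaS itself.

```python
# Minimal sketch (illustrative numbers, not CoMPaS itself): exponential growth
# bookkeeping -- tumour volume doubling time from two diameter measurements,
# and the 'non-visible period' counted in doublings from one cell to detection.
import math

def sphere_volume(diameter_mm):
    return math.pi / 6 * diameter_mm ** 3

# Two measurements of the primary tumour (assumed values)
d1_mm, d2_mm, dt_days = 10.0, 14.0, 180.0
doublings = math.log2(sphere_volume(d2_mm) / sphere_volume(d1_mm))
doubling_time = dt_days / doublings                       # days per volume doubling
print(f"volume doubling time ~ {doubling_time:.0f} days")

# Non-visible period: growth from one cell (~1e-6 mm^3) up to a 5 mm nodule
v_cell, v_detect = 1e-6, sphere_volume(5.0)
n_doublings_hidden = math.log2(v_detect / v_cell)
print(f"'non-visible' period ~ {n_doublings_hidden:.0f} doublings "
      f"~ {n_doublings_hidden * doubling_time / 365:.1f} years")
```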

Keywords: breast cancer, exponential growth model, mathematical model, metastases in lymph nodes, primary tumor, survival

Procedia PDF Downloads 341
180 Identification of Potent and Selective SIRT7 Anti-Cancer Inhibitor via Structure-Based Virtual Screening and Molecular Dynamics Simulation

Authors: Md. Fazlul Karim, Ashik Sharfaraz, Aysha Ferdoushi

Abstract:

Background: Computational medicinal chemistry approaches are used for designing and identifying new drug-like molecules, predicting properties and pharmacological activities, and optimizing lead compounds in drug development. SIRT7, a nicotinamide adenine dinucleotide (NAD+)-dependent deacylase which regulates aging, is an emerging target for cancer therapy, with mounting evidence that SIRT7 downregulation plays important roles in reversing cancer phenotypes and suppressing tumor growth. Activation or altered expression of SIRT7 is associated with the progression and invasion of various cancers, including liver, breast, gastric, prostate, and non-small cell lung cancer. Objectives: The goal of this work was to identify potent and selective bioactive candidate inhibitors of SIRT7 by in silico screening of small-molecule compounds obtained from Nigella sativa (N. sativa). Methods: The SIRT7 structure was retrieved from the Research Collaboratory for Structural Bioinformatics Protein Data Bank (RCSB PDB), and its active site was identified using CASTp and metaPocket. Molecular docking simulation was performed with the PyRx 0.8 virtual screening software. Drug-likeness properties were tested using SwissADME and pkCSM. In silico toxicity was evaluated by Osiris Property Explorer. Bioactivity was predicted by Molinspiration software. Antitumor activity was screened by Prediction of Activity Spectra for Substances (PASS) using the Way2Drug web server. Molecular dynamics (MD) simulation was carried out with the Desmond v3.6 package. Results: A total of 159 bioactive compounds from N. sativa were screened against the SIRT7 enzyme. Five bioactive compounds: chrysin (CID:5281607), pinocembrin (CID:68071), nigellidine (CID:136828302), nigellicine (CID:11402337), and epicatechin (CID:72276) were identified as potent SIRT7 anti-cancer candidates after docking score evaluation and application of Lipinski's Rule of Five. Finally, MD simulation identified chrysin as the top SIRT7 anti-cancer candidate molecule. Conclusion: Chrysin, which shows a potential inhibitory effect against SIRT7, can act as a possible anti-cancer drug candidate. This inhibitor warrants further evaluation to check its pharmacokinetic and pharmacodynamic properties both in vitro and in vivo.
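
The Lipinski Rule-of-Five filter applied after docking-score ranking can be sketched as below, assuming RDKit is available; the two SMILES strings are well-known placeholder molecules standing in for the screened N. sativa compounds, not the actual hits.

```python
# Minimal sketch (assumes RDKit; the SMILES are placeholder molecules, not the
# screened hits): the Lipinski Rule-of-Five filter used after docking ranking.
from rdkit import Chem
from rdkit.Chem import Descriptors, Lipinski

candidates = {
    "placeholder_1 (aspirin)":  "CC(=O)OC1=CC=CC=C1C(=O)O",
    "placeholder_2 (caffeine)": "CN1C=NC2=C1C(=O)N(C(=O)N2C)C",
}

def passes_rule_of_five(mol):
    """Lipinski's Rule of Five: MW <= 500, logP <= 5, HBD <= 5, HBA <= 10."""
    return (Descriptors.MolWt(mol) <= 500
            and Descriptors.MolLogP(mol) <= 5
            and Lipinski.NumHDonors(mol) <= 5
            and Lipinski.NumHAcceptors(mol) <= 10)

for name, smiles in candidates.items():
    mol = Chem.MolFromSmiles(smiles)
    print(name, "passes Ro5:", passes_rule_of_five(mol))
```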

Keywords: SIRT7, antitumor, molecular docking, molecular dynamics simulation

Procedia PDF Downloads 80
179 Targeting and Developing the Remaining Pay in an Ageing Field: The Ovhor Field Experience

Authors: Christian Ihwiwhu, Nnamdi Obioha, Udeme John, Edward Bobade, Oghenerunor Bekibele, Adedeji Awujoola, Ibi-Ada Itotoi

Abstract:

Understanding the complexity in the distribution of hydrocarbon in a simple structure with flow baffles and connectivity issues is critical to targeting and developing the remaining pay in a mature asset. Subtle facies changes (heterogeneity) can have a drastic impact on reservoir fluid movement, and this can be crucial to identifying sweet spots in mature fields. This study aims to evaluate selected reservoirs in the Ovhor Field, Niger Delta, Nigeria, with the objective of optimising production from the field by targeting undeveloped oil reserves and bypassed pay, and gaining an improved understanding of the selected reservoirs to increase the company’s reservoir limits. The task at the Ovhor field is complicated by poor stratigraphic seismic resolution over the field. 3-D geological (sedimentology and stratigraphy) interpretation, results from quantitative interpretation, and a proper understanding of production data have been used to recognize flow baffles and undeveloped compartments in the field. The full-field 3-D model has been constructed so as to capture the heterogeneities and the various compartments in the field, to aid proper simulation of fluid flow for future production prediction, proper history matching, and the design of well trajectories that adequately target undeveloped oil in the field. Reservoir property models (porosity, permeability, and net-to-gross) have been constructed by biasing log-interpreted properties to a defined environment-of-deposition model whose interpretation captures the heterogeneities expected in the studied reservoirs. At least two scenarios have been modelled for most of the studied reservoirs to capture the range of uncertainties involved. The total original oil-in-place volume for the four reservoirs studied is 157 MMstb. The cumulative oil and gas production from the selected reservoirs is 67.64 MMstb and 9.76 Bscf, respectively, with a current production rate of about 7035 bopd and 4.38 MMscf/d (as at 31/08/2019). Dynamic simulation and production forecasting on the four reservoirs gave an undeveloped reserve of about 3.82 MMstb from two identified oil restoration activities. These activities include side-tracking and re-perforation of existing wells. This integrated approach led to the identification of bypassed oil in some areas of the selected reservoirs and an improved understanding of the studied reservoirs. New wells have been, and are being, drilled to test the results of our studies, and the results so far are confirmatory and satisfying.
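
The in-place volumes quoted above rest on the standard volumetric estimate, which can be sketched as follows; every parameter value in the example is an assumption for illustration, not an Ovhor Field figure.

```python
# Minimal sketch (all reservoir parameters are illustrative, not the Ovhor
# values): the standard volumetric estimate of stock-tank oil initially in
# place (STOIIP) that underpins in-place volumes like the 157 MMstb quoted.
def stoiip_mmstb(area_acres, thickness_ft, ntg, porosity, sw, bo):
    """STOIIP = 7758 * A * h * NTG * phi * (1 - Sw) / Bo, in stock-tank barrels."""
    stb = 7758.0 * area_acres * thickness_ft * ntg * porosity * (1.0 - sw) / bo
    return stb / 1.0e6                     # report in MMstb

# Example: one reservoir compartment with assumed properties
print(stoiip_mmstb(area_acres=2500, thickness_ft=60, ntg=0.8,
                   porosity=0.25, sw=0.30, bo=1.25))   # ~130 MMstb

# Remaining (undeveloped) oil after a recovery factor and cumulative production
def remaining_mmstb(stoiip, recovery_factor, cum_prod_mmstb):
    return stoiip * recovery_factor - cum_prod_mmstb
```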

Keywords: facies, flow baffle, bypassed pay, heterogeneities, history matching, reservoir limit

Procedia PDF Downloads 129
178 Adverse Childhood Experience of Domestic Violence and Domestic Mental Health Leading to Youth Violence: An Analysis of Selected Boroughs in London

Authors: Sandra Smart-Akande, Chaminda Hewage, Imtiaz Khan, Thanuja Mallikarachchi

Abstract:

According to UK police-recorded data, there has been a substantial increase in knife-related crime and youth violence in the UK since 2014, particularly in the London boroughs. These crimes are disproportionately distributed across London, with the majority occurring in the highly deprived areas of London and among young people aged 11 to 24, and with large discrepancies across ethnicity, age, gender and borough of residence. Comprehensive studies and literature have identified risk factors associated with knife carrying among youth to be Adverse Childhood Experiences (ACEs), poor mental health, school or social exclusion, drug dealing, drug using, being a victim of violent crime, bullying, peer pressure or gang involvement, just to mention a few. ACEs are potentially traumatic events that occur in childhood; these can be experiences or stressful events in the early life of a child and can lead to an increased risk of damaging health or social outcomes in the latter life of the individual. Research has shown that children or youths involved in youth violence have had childhoods characterised by disproportionate adverse experiences, and substantial literature links ACEs to criminal or delinquent behaviour. ACEs are commonly grouped by researchers into: Abuse (physical, verbal, sexual), Neglect (physical, emotional) and Household adversities (mental illness, incarcerated relative, domestic violence, parental separation or bereavement). To the authors' best knowledge, no study to date has investigated how household mental health (mental health of a parent or mental health of a child) and domestic violence (domestic violence on a parent or domestic violence on a child) are related to knife homicides across the local authority areas of London. This study seeks to address the gap by examining a large sample of data from the London Metropolitan Police Force and the Characteristics of Children in Need data from the UK Department for Education. The aim of this review is to identify and synthesise evidence from data and a range of literature to identify the relationship between adverse childhood experiences and youth violence in the UK. Understanding the link between ACEs and future outcomes can support preventative action.

Keywords: adverse childhood experiences, domestic violence, mental health, youth violence, prediction analysis, London knife crime

Procedia PDF Downloads 121
177 Multicollinearity and MRA in Sustainability: Application of the Raise Regression

Authors: Claudia García-García, Catalina B. García-García, Román Salmerón-Gómez

Abstract:

Much economic-environmental research includes the analysis of possible interactions by using Moderated Regression Analysis (MRA), a specific application of multiple linear regression analysis. This methodology allows analyzing how the effect of one independent variable is moderated by a second independent variable by adding a cross-product term between them as an additional explanatory variable. Due to the very specification of the methodology, the moderating term is often highly correlated with its constitutive terms, so great multicollinearity problems arise. The appearance of strong multicollinearity in a model has important consequences: the variances of the estimators may be inflated; regressors may appear non-significant when they probably are significant, even alongside a very high coefficient of determination; coefficients may show incorrect signs; and the results become highly sensitive to small changes in the dataset. Finally, the strong relationship among explanatory variables makes it difficult to isolate the individual effect of each one on the model under study. Transferred to the moderated analysis, these consequences may imply that it is not worth including an interaction term that may be distorting the model. Thus, it is important to manage the problem with a methodology that allows reliable results to be obtained. After a review of the works that applied MRA in the ten top journals of the field, it is clear that multicollinearity is mostly disregarded: less than 15% of the reviewed works take potential multicollinearity problems into account. To overcome the issue, this work studies the possible application of recent methodologies to MRA. Particularly, raise regression is analyzed. This methodology mitigates collinearity from a geometrical point of view: the collinearity problem arises because the variables under study are very close geometrically, so by separating the variables, the problem can be mitigated. Raise regression maintains the available information and modifies the problematic variables instead of, for example, deleting variables. Furthermore, the global characteristics of the initial model are also maintained (sum of squared residuals, estimated variance, coefficient of determination, global significance test and prediction). The proposal is applied to data from countries of the European Union for the last available year regarding greenhouse gas emissions, per capita GDP and a dummy variable that represents the topography of the country. The use of a dummy variable as the moderator is a special variant of MRA, sometimes called “subgroup regression analysis.” The main conclusion of this work is that applying new techniques to the field can substantially improve the results of the analysis. In particular, the use of raise regression mitigates great multicollinearity problems, so the researcher is able to rely on the interaction term when interpreting the results of a particular study.
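
The raising operation can be sketched on synthetic data: the interaction term is regressed on the other regressors and shifted along its own residual, which lowers its variance inflation factor while leaving the fitted values, and hence the global fit statistics, unchanged. The data, the choice of raised variable and the raising parameter lambda below are illustrative assumptions.

```python
# Minimal sketch (synthetic data; the raising parameter lambda is arbitrary):
# a moderated regression y ~ x1 + x2 + x1*x2 in which the interaction term is
# "raised" to mitigate collinearity with its constitutive terms.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(1)
n = 200
x1 = rng.normal(10, 2, n)                    # e.g. per-capita GDP (arbitrary units)
x2 = rng.integers(0, 2, n).astype(float)     # dummy moderator (e.g. topography)
inter = x1 * x2                              # cross-product term, highly collinear
y = 1.0 + 0.5 * x1 + 2.0 * x2 + 0.3 * inter + rng.normal(0, 1, n)

def vifs(X):
    return [variance_inflation_factor(X, i) for i in range(X.shape[1])]

X = sm.add_constant(np.column_stack([x1, x2, inter]))
print("VIFs before raising:", vifs(X)[1:])

# Raise the interaction term: keep the residual of its auxiliary regression on
# the other regressors and move the variable away from the collinear subspace.
aux = sm.OLS(inter, sm.add_constant(np.column_stack([x1, x2]))).fit()
lam = 5.0                                    # raising parameter (assumed)
inter_raised = inter + lam * aux.resid

X_raised = sm.add_constant(np.column_stack([x1, x2, inter_raised]))
print("VIFs after raising:", vifs(X_raised)[1:])
# Fitted values and R^2 match the original model; only the collinearity drops
print(sm.OLS(y, X_raised).fit().rsquared, sm.OLS(y, X).fit().rsquared)
```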

Keywords: multicollinearity, MRA, interaction, raise

Procedia PDF Downloads 107
176 Identification of Peroxisome Proliferator-Activated Receptors α/γ Dual Agonists for Treatment of Metabolic Disorders, Insilico Screening, and Molecular Dynamics Simulation

Authors: Virendra Nath, Vipin Kumar

Abstract:

Background: Type II diabetes mellitus is a foremost health problem worldwide, predisposing to increased mortality and morbidity. Undesirable effects of the current medications have prompted researchers to develop more effective drug(s) against the disease. The peroxisome proliferator-activated receptors (PPARs) are members of the nuclear receptor family and play a vital role in the regulation of metabolic equilibrium. They can induce or repress genes associated with adipogenesis and lipid and glucose metabolism. Aims: PPARα/γ agonistic hits were investigated by hierarchical virtual screening followed by molecular dynamics simulation and knowledge-based structure-activity relationship (SAR) analysis using approved PPARα/γ dual agonists. Methods: The PPARα/γ agonistic activity of compounds was assessed using Maestro through structure-based virtual screening and the molecular dynamics (MD) simulation application. Virtual screening of nuclear-receptor ligands was performed, and the binding modes and protein-ligand interactions of newer entity(s) were investigated. Further, binding energy prediction and stability studies using MD simulation of the PPARα and γ complexes were performed with the most promising hit, along with a comparative structural analysis of approved PPARα/γ agonists and the screened hit for knowledge-based SAR. Results and Discussion: The in silico approach recognized the nine most promising hits, which had better predicted binding energies than the reference drug compound (tesaglitazar). In this study, the key amino acid residues of the binding pockets of both targets, PPARα/γ, were acknowledged as essential and were found to be involved in the key interactions with the most promising dual hit (ChemDiv-3269-0443). Stability studies using MD simulation of the PPARα and γ complexes were performed with the most promising hit, and the root mean square deviation (RMSD) was found to be stable at around 2 Å and 2.1 Å, respectively. Frequency distribution data also revealed that the key residues of both proteins showed maximum contacts with the potent hit during the 20-nanosecond (ns) MD simulation. Knowledge-based SAR studies of PPARα/γ agonists were carried out using the 2D structures of approved drugs such as aleglitazar and tesaglitazar to guide the successful design and synthesis of PPARα/γ agonistic candidates with anti-hyperlipidemic potential.
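
The RMSD stability measure quoted above can be sketched as follows on synthetic coordinates; a real analysis would read the Desmond trajectory with a trajectory-analysis tool, and the frame count, atom count and fluctuation level here are illustrative assumptions.

```python
# Minimal sketch (synthetic coordinates; a real analysis would read the
# Desmond trajectory with a tool such as MDAnalysis): backbone RMSD versus
# the first frame, the stability measure quoted as ~2 Angstrom above.
import numpy as np

rng = np.random.default_rng(3)
n_frames, n_atoms = 200, 300                       # 20 ns sampled every 0.1 ns (illustrative)
ref = rng.normal(0, 5, size=(n_atoms, 3))          # reference (first-frame) coordinates
traj = ref + rng.normal(0, 1.2, size=(n_frames, n_atoms, 3))   # fluctuating frames

def rmsd(frame, reference):
    """Root-mean-square deviation in the same length units as the input (Angstrom)."""
    return np.sqrt(np.mean(np.sum((frame - reference) ** 2, axis=-1)))

rmsd_series = np.array([rmsd(f, ref) for f in traj])
print(rmsd_series.mean(), rmsd_series.std())       # a flat plateau indicates a stable complex
```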

Keywords: computational, diabetes, PPAR, simulation

Procedia PDF Downloads 103
175 Rain Gauges Network Optimization in Southern Peninsular Malaysia

Authors: Mohd Khairul Bazli Mohd Aziz, Fadhilah Yusof, Zulkifli Yusop, Zalina Mohd Daud, Mohammad Afif Kasno

Abstract:

Recently developed rainfall network design techniques have been discussed and compared by many researchers worldwide due to the demand for higher levels of accuracy from collected data. In many studies, rain-gauge networks are designed to provide good estimation of areal rainfall and for flood modelling and prediction. One study showed that, even when lumped models are used for flood forecasting, a proper gauge network can significantly improve the results. Therefore, the existing rainfall network in Johor must be optimized and redesigned in order to meet the required level of accuracy preset by rainfall data users. The well-known geostatistical variance-reduction method, combined with simulated annealing, was used as the optimization algorithm in this study to obtain the optimal number and locations of the rain gauges. Rain gauge network structure is not only dependent on station density; station location also plays an important role in determining whether information is acquired accurately. The existing network of 84 rain gauges in Johor was optimized and redesigned using rainfall, humidity, solar radiation, temperature and wind speed data during the monsoon season (November – February) for the period 1975 – 2008. Three different semivariogram models, Spherical, Gaussian and Exponential, were used, and their performances were also compared in this study. A cross-validation technique was applied to compute the errors, and the results showed that the exponential model is the best semivariogram. It was found that the required accuracy was satisfied by a network of 64 rain gauges with the minimum estimated variance, and 20 of the existing gauges were removed and relocated. An existing network may consist of redundant stations that make little or no contribution to the network's ability to provide quality data. Therefore, two different cases were considered in this study. The first case considered the removed stations being optimally relocated into new locations to investigate their influence on the calculated estimated variance, and the second case explored the possibility of relocating all 84 existing stations into new locations to determine the optimal positions. The relocation of the stations in both cases showed that the new optimal locations reduce the estimated variance, proving that location plays an important role in determining the optimal network.
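
The optimization loop can be sketched as follows: an exponential semivariogram, a cheap coverage-based proxy for the network's average estimation variance, and simulated annealing over 64-gauge subsets of 84 candidate sites. The coordinates, semivariogram parameters and cooling schedule are illustrative assumptions; the study itself uses the full variance-reduction criterion.

```python
# Minimal sketch (random coordinates, an exponential semivariogram and a
# simple coverage-based proxy for the kriging estimation variance): simulated
# annealing to pick 64 gauges out of 84 candidates.
import numpy as np

rng = np.random.default_rng(7)
stations = rng.uniform(0, 100, size=(84, 2))        # candidate gauge coordinates (km)
grid = np.stack(np.meshgrid(np.linspace(0, 100, 25),
                            np.linspace(0, 100, 25)), -1).reshape(-1, 2)

def exp_semivariogram(h, nugget=0.1, sill=1.0, rng_km=30.0):
    """Exponential model: gamma(h) = nugget + (sill - nugget) * (1 - exp(-3h/range))."""
    return nugget + (sill - nugget) * (1 - np.exp(-3 * h / rng_km))

def objective(idx):
    """Mean semivariance between each grid point and its nearest selected gauge,
    a cheap stand-in for the network's average estimation variance."""
    d = np.linalg.norm(grid[:, None, :] - stations[idx][None, :, :], axis=-1)
    return exp_semivariogram(d.min(axis=1)).mean()

# Simulated annealing over 64-gauge subsets
current = rng.choice(84, size=64, replace=False)
best, best_val, temp = current.copy(), objective(current), 1.0
for it in range(3000):
    cand = current.copy()
    cand[rng.integers(64)] = rng.choice(np.setdiff1d(np.arange(84), cand))  # move one gauge
    dv = objective(cand) - objective(current)
    if dv < 0 or rng.random() < np.exp(-dv / temp):
        current = cand
    if objective(current) < best_val:
        best, best_val = current.copy(), objective(current)
    temp *= 0.999                                    # cooling schedule
print("proxy variance of best 64-gauge network:", round(best_val, 4))
```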

Keywords: geostatistics, simulated annealing, semivariogram, optimization

Procedia PDF Downloads 304