Search results for: bioinformatic predictions
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 669

279 CFD Simulation of Thermo-Hydraulic Performance of V-Shaped Discrete Ribs on the Absorber Plate of a Solar Air Heater

Authors: J. L. Bhagoria, Ajeet Kumar Giri

Abstract:

A computational investigation of flow characteristics with artificial roughness in the form of V-shaped discrete ribs on the heated wall of a rectangular duct has been carried out for turbulent flow over a Reynolds number range of 3800-15000 and a relative roughness pitch p/e of 5 to 12. The k-epsilon turbulence model was selected by comparing the predictions of different turbulence models with experimental results available in the literature. The current study evaluates the thermal performance, heat transfer and fluid flow behavior in a rectangular duct with discrete V-shaped ribs mounted on one of the principal walls (the absorber plate) using computational fluid dynamics software (Fluent 6.3.26 solver). A three-dimensional model of the experimental solar air heater was designed, and a numerical simulation was performed to study the enhancement of turbulent heat transfer; the Reynolds-Averaged Navier-Stokes equations were solved using the k-epsilon model with near-wall treatment. The thermal efficiency enhancement due to the selected roughness is found to be 16-24%. The results predict a significant enhancement of heat transfer compared with a smooth surface for different values of P' and over the investigated range of Reynolds numbers.

Keywords: CFD, solar collector, air heater, thermal efficiency

Procedia PDF Downloads 290
278 A System Dynamics Approach to Technological Learning Impact for Cost Estimation of Solar Photovoltaics

Authors: Rong Wang, Sandra Hasanefendic, Elizabeth von Hauff, Bart Bossink

Abstract:

Technological learning and learning curve models have long been used to estimate photovoltaic (PV) cost development over time in support of climate mitigation targets. They can integrate a number of technological learning sources that influence the learning process, yet accurate and realistic cost estimates for PV development remain difficult to achieve. This paper develops four hypothetical alternative learning curve models by proposing different combinations of technological learning sources, including local and global technology experience and the knowledge stock. It focuses specifically on the non-linear relationship between costs and the technological learning sources and their dynamic interaction, and it uses the system dynamics approach to produce a more accurate estimate of future PV costs. As a case study, data from China are gathered to illustrate that the learning curve model incorporating both global and local experience is more accurate and realistic than the other three models for PV cost estimation. Further, absorbing and integrating global experience into the local industry has a positive impact on PV cost reduction. Although the learning curve model incorporating knowledge stock is not realistic for current PV deployment in China, it still plays an effective positive role in future PV cost reduction.

Keywords: photovoltaic, system dynamics, technological learning, learning curve

Procedia PDF Downloads 96
277 Making of Alloy Steel by Direct Alloying with Mineral Oxides during Electro-Slag Remelting

Authors: Vishwas Goel, Kapil Surve, Somnath Basu

Abstract:

In-situ alloying of steel during the electro-slag remelting (ESR) process has already been achieved by adding the necessary ferroalloys into the electro-slag remelting mold. However, the use of commercially available ferroalloys during ESR processing is often financially less favorable than conventional alloying techniques. A process of alloying steel with elements such as chromium and manganese via the electro-slag remelting route, without any ferrochrome addition, is therefore under development. The process utilizes in-situ reduction of refined mineral chromite (Cr₂O₃) and the resultant enrichment of chromium in the steel ingot produced. It was established in the course of this work that this process can become more advantageous than conventional alloying techniques, both economically and environmentally, for applications which inherently demand the electro-slag remelting process, such as the manufacturing of superalloys. A key advantage is the lower overall CO₂ footprint of this process relative to the conventional route of production, storage, and addition of ferrochrome. In addition to experimentally validating the feasibility of the envisaged reactions, a mathematical model simulating the reduction of chromium (III) oxide and the transfer of chromium to the molten steel droplets was also developed as part of the current work. The developed model correlates the amount of chromite input with the magnitude of chromium alloying that can be achieved through this process. Experiments are in progress to validate the predictions made by this model and to fine-tune its parameters.

Keywords: alloying element, chromite, electro-slag remelting, ferrochrome

Procedia PDF Downloads 223
276 Daily Probability Model of Storm Events in Peninsular Malaysia

Authors: Mohd Aftar Abu Bakar, Noratiqah Mohd Ariff, Abdul Aziz Jemain

Abstract:

Storm Event Analysis (SEA) provides a method to define rainfall events as storms, where each storm has its own amount and duration. By modelling the daily probability of different types of storms, the onset, offset and cycle of rainfall seasons can be determined and investigated. Furthermore, researchers in the field of meteorology will be able to study the dynamical characteristics of rainfall and make predictions for future reference. In this study, four categories of storms (short, intermediate, long and very long) are introduced based on the length of the storm duration. Daily probability models of storms are built for these four categories in Peninsular Malaysia. The models are constructed by using the Bernoulli distribution and by applying linear regression on the first Fourier harmonic equation. From the models obtained, it is found that the daily probability of storms in the eastern part of Peninsular Malaysia shows a unimodal pattern, with a high probability of rain beginning at the end of the year and lasting until early the following year. This is very likely due to the Northeast monsoon season, which occurs from November to March every year. Meanwhile, short and intermediate storms in other regions of Peninsular Malaysia experience a bimodal cycle due to the two inter-monsoon seasons. Overall, these models indicate that Peninsular Malaysia can be divided into four distinct regions based on the daily pattern of the probability of various storm events.
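
The abstract describes the fitting procedure only in outline; the following minimal sketch (an assumption about the implementation, not the authors' code, with hypothetical names such as `day_of_year` and `storm_occurred`) shows how a Bernoulli daily probability can be regressed on the first Fourier harmonic of the day of year:

```python
import numpy as np

def fit_daily_storm_probability(day_of_year, storm_occurred, period=365.25):
    """Fit p(t) = a0 + a1*cos(2*pi*t/T) + b1*sin(2*pi*t/T) by least squares.

    day_of_year    : array of day indices (1..365) for each observed day
    storm_occurred : array of 0/1 Bernoulli indicators for one storm category
    """
    t = np.asarray(day_of_year, dtype=float)
    y = np.asarray(storm_occurred, dtype=float)
    # Design matrix for the first Fourier harmonic
    X = np.column_stack([np.ones_like(t),
                         np.cos(2 * np.pi * t / period),
                         np.sin(2 * np.pi * t / period)])
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)

    def probability(day):
        day = np.asarray(day, dtype=float)
        p = (coeffs[0]
             + coeffs[1] * np.cos(2 * np.pi * day / period)
             + coeffs[2] * np.sin(2 * np.pi * day / period))
        return np.clip(p, 0.0, 1.0)  # keep estimates inside [0, 1]

    return coeffs, probability
```

A unimodal seasonal cycle appears as a single peak of the fitted harmonic; a bimodal cycle, as reported for short and intermediate storms, would require adding the second harmonic to the design matrix.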

Keywords: daily probability model, monsoon seasons, regions, storm events

Procedia PDF Downloads 343
275 A Monte Carlo Fuzzy Logistic Regression Framework against Imbalance and Separation

Authors: Georgios Charizanos, Haydar Demirhan, Duygu Icen

Abstract:

Two of the most impactful issues in classical logistic regression are class imbalance and complete separation. These can result in model predictions leaning heavily towards the imbalanced class of the binary response variable, or in over-fitting. Fuzzy methodology offers key solutions for handling these problems. However, most studies propose the transformation of the binary responses into a continuous format limited within [0,1]. This is called the possibilistic approach within fuzzy logistic regression. Following this approach is more aligned with straightforward regression, since a logit-link function is not utilized and fuzzy probabilities are not generated. In contrast, we propose a method of fuzzifying binary response variables that allows for the use of the logit-link function; hence, a probabilistic fuzzy logistic regression model with the Monte Carlo method. The fuzzy probabilities are then classified by selecting a fuzzy threshold. Different combinations of fuzzy and crisp input, output, and coefficients are explored, aiming to understand which of these perform better under different conditions of imbalance and separation. We conduct numerical experiments using both synthetic and real datasets to demonstrate the performance of the fuzzy logistic regression framework against seven crisp machine learning methods. The proposed framework shows better performance irrespective of the degree of imbalance and the presence of separation in the data, while the considered machine learning methods are significantly impacted.
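
The exact fuzzification scheme is not given in the abstract; the sketch below is only one rough interpretation (not the authors' algorithm) of a Monte Carlo, logit-link fuzzy logistic regression: the 0/1 responses are perturbed into fuzzy memberships, a logistic model is fitted to each Monte Carlo realization, and the averaged predicted probabilities are classified with a threshold.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def mc_fuzzy_logistic(X, y, n_mc=200, fuzziness=0.1, threshold=0.5, seed=None):
    """Rough sketch: fuzzify the binary responses, refit a logit-link model over
    Monte Carlo replicates, average the predicted probabilities, and classify
    with a (fuzzy) threshold."""
    rng = np.random.default_rng(seed)
    y = np.asarray(y, dtype=float)
    probs = np.zeros(len(y))
    n_used = 0
    for _ in range(n_mc):
        # Fuzzified response: membership degree near 0 or 1 with random spread
        mu = np.clip(y + rng.uniform(-fuzziness, fuzziness, size=len(y)), 0, 1)
        # Crisp labels for this replicate, drawn according to the memberships
        y_rep = (rng.uniform(size=len(y)) < mu).astype(int)
        if y_rep.min() == y_rep.max():      # skip degenerate replicates
            continue
        model = LogisticRegression(max_iter=1000).fit(X, y_rep)
        probs += model.predict_proba(X)[:, 1]
        n_used += 1
    probs /= max(n_used, 1)
    return probs, (probs >= threshold).astype(int)
```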

Keywords: fuzzy logistic regression, fuzzy, logistic, machine learning

Procedia PDF Downloads 74
274 Identification of Hepatocellular Carcinoma Using Supervised Learning Algorithms

Authors: Sagri Sharma

Abstract:

Analysis of diseases that integrates multiple factors increases the complexity of the problem, and the development of frameworks for disease analysis is therefore a topic of intense research. Because of the inter-dependence of the various parameters, traditional methodologies have not been very effective, and newer methodologies are being sought. Supervised learning algorithms are commonly used for making predictions on previously unseen data. They are applied in fields ranging from image analysis to protein structure and function prediction, and they are trained on a known dataset to produce a predictor model that generates reasonable predictions for the response to new data. Gene expression profiles generated by DNA analysis experiments can be quite complex, since these experiments can involve hypotheses spanning entire genomes. A well-known machine learning algorithm, the Support Vector Machine, is therefore applied to analyze the expression levels of thousands of genes simultaneously in a timely, automated and cost-effective way. The objectives of the presented work are the development of a methodology to identify genes relevant to Hepatocellular Carcinoma (HCC) from a gene expression dataset utilizing supervised learning algorithms and statistical evaluations, along with the development of a predictive framework that can perform classification tasks on new, unseen data.
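
As an illustration of the kind of pipeline described (a sketch under assumed data shapes, not the authors' implementation; the feature-selection step and parameter values are assumptions), gene expression samples can be filtered to the most informative genes and classified with an SVM:

```python
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def hcc_svm_pipeline(expression_matrix, labels, n_genes=100):
    """expression_matrix: samples x genes array; labels: 1 = HCC, 0 = normal.
    Select candidate marker genes, then classify with an SVM."""
    clf = Pipeline([
        ("scale", StandardScaler()),
        ("select", SelectKBest(f_classif, k=n_genes)),  # genes relevant to HCC
        ("svm", SVC(kernel="rbf", C=1.0, gamma="scale")),
    ])
    scores = cross_val_score(clf, expression_matrix, labels, cv=5)
    clf.fit(expression_matrix, labels)          # final predictor for unseen data
    return clf, scores.mean()
```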

Keywords: artificial intelligence, biomarker, gene expression datasets, hepatocellular carcinoma, machine learning, supervised learning algorithms, support vector machine

Procedia PDF Downloads 429
273 Flame Volume Prediction and Validation for Lean Blowout of Gas Turbine Combustor

Authors: Ejaz Ahmed, Huang Yong

Abstract:

The operation of aero engines is of critical importance in the vicinity of lean blowout (LBO) limits. Lefebvre’s model of LBO, based on empirical correlation, has been extended by the authors to a flame volume concept. The flame volume takes into account the effects of geometric configuration and the complex spatial interaction of mixing, turbulence, heat transfer and combustion processes inside the gas turbine combustion chamber. For these reasons, flame-volume-based LBO predictions are more accurate. Although LBO prediction accuracy has improved, the approach poses the challenge of estimating the flame volume Vf in real gas turbine combustors. This work extends the flame volume prediction approach, previously based on fuel iterative approximation with cold flow simulations, to reactive flow simulations. The flame volume for 11 combustor configurations has been simulated and validated against experimental data. To make the prediction methodology robust, as required in the preliminary design stage, reactive flow simulations were carried out with the combination of the probability density function (PDF) and discrete phase model (DPM) in FLUENT 15.0. A criterion for flame identification was defined. Two important parameters, i.e., the critical injection diameter (Dp,crit) and the critical temperature (Tcrit), were identified, and their influence on the reactive flow simulation was studied for Vf estimation. The obtained results exhibit ±15% error in Vf estimation compared with experimental data.

Keywords: CFD, combustion, gas turbine combustor, lean blowout

Procedia PDF Downloads 267
272 Predicting Options Prices Using Machine Learning

Authors: Krishang Surapaneni

Abstract:

The goal of this project is to determine how to predict important aspects of options, including the ask price. We want to compare different machine learning models to learn the best model and the best hyperparameters for that model for this purpose and data set. Option pricing is a relatively new field, and it can be very complicated and intimidating, especially to inexperienced people, so we want to create a machine learning model that can predict important aspects of an option, which can aid future research. We tested multiple different models and experimented with hyperparameter tuning, trying to find some of the best parameters for a machine-learning model. We tested three different models: a random forest regressor, a linear regressor, and an MLP (multi-layer perceptron) regressor. The most important feature in this experiment is the ask price; this is what we were trying to predict. In the field of stock price prediction, there is a large potential for error, so we are unable to determine the accuracy of the models based on whether they predict the price exactly. Due to this factor, we determined the accuracy of each model by finding the average percentage difference between the predicted and actual values. We tested the accuracy of the machine learning models by comparing the actual results in the testing data and the predictions made by the models. The linear regression model performed worst, with an average percentage error of 17.46%. The MLP regressor had an average percentage error of 11.45%, and the random forest regressor had an average percentage error of 7.42%.
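
A minimal sketch of the comparison described above (the feature set and hyperparameters are assumptions; only the three model families and the average-percentage-error metric come from the abstract):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

def compare_ask_price_models(X, y):
    """X: option features (e.g. strike, underlying price, time to expiry, volume);
    y: observed ask prices. Accuracy is the average percentage difference
    between predicted and actual values, as in the abstract."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    models = {
        "linear": LinearRegression(),
        "mlp": MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0),
        "random_forest": RandomForestRegressor(n_estimators=200, random_state=0),
    }
    results = {}
    for name, model in models.items():
        model.fit(X_tr, y_tr)
        pred = model.predict(X_te)
        pct_error = np.mean(np.abs(pred - y_te) / np.abs(y_te)) * 100.0
        results[name] = pct_error
    return results  # e.g. {"linear": ..., "mlp": ..., "random_forest": ...}
```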

Keywords: finance, linear regression model, machine learning model, neural network, stock price

Procedia PDF Downloads 75
271 Liquid Bridges in a Complex Geometry: Microfluidic Drop Manipulation Inside a Wedge

Authors: D. Baratian, A. Cavalli, D. van den Ende, F. Mugele

Abstract:

The morphology of liquid bridges inside complex geometries is the subject of interest for many years. These efforts try to find stable liquid configuration considering the boundary condition and the physical properties of the system. On the other hand precise manipulation of droplets is highly significant in many microfluidic applications. The liquid configuration in a complex geometry can be switched by means of external stimuli. We show manipulation of droplets in a wedge structure. The profile and position of a drop in a wedge geometry has been calculated analytically assuming negligible contact angle hysteresis. The characteristic length of liquid bridge and its interfacial tension inside the surrounding medium along with the geometrical parameters of the system determine the morphology and equilibrium position of drop in the system. We use electrowetting to modify one the governing parameters to manipulate the droplet. Electrowetting provides the capability to have precise control on the drop position through tuning the voltage and consequently changing the contact angle. This technique is employed to tune drop displacement and control its position inside the wedge. Experiments demonstrate precise drop movement to its predefined position inside the wedge geometry. Experimental results show promising consistency as it is compared to our geometrical model predictions. For such a drop manipulation, appealing applications in microfluidics have been considered.
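
The abstract does not state which voltage-contact-angle relation is used, but the standard description of electrowetting is the Young-Lippmann equation; the helper below evaluates it with placeholder parameter values (all numbers are illustrative assumptions, not values from the paper):

```python
import numpy as np

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def electrowetting_contact_angle(theta0_deg, voltage, eps_r, d, gamma):
    """Young-Lippmann relation:
        cos(theta_V) = cos(theta_0) + eps_r * eps_0 * V^2 / (2 * d * gamma)
    theta0_deg : zero-voltage (Young) contact angle in degrees
    voltage    : applied voltage, V
    eps_r, d   : relative permittivity and thickness (m) of the dielectric layer
    gamma      : drop/ambient interfacial tension, N/m
    """
    cos_theta = (np.cos(np.radians(theta0_deg))
                 + eps_r * EPS0 * voltage**2 / (2.0 * d * gamma))
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

# Placeholder example: 2 um dielectric, eps_r = 2, oil ambient, 80 V applied
print(electrowetting_contact_angle(160.0, 80.0, 2.0, 2e-6, 0.038))
```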

Keywords: liquid bridges, microfluidics, drop manipulation, wetting, electrowetting, capillarity

Procedia PDF Downloads 478
270 Develop a Conceptual Data Model of Geotechnical Risk Assessment in Underground Coal Mining Using a Cloud-Based Machine Learning Platform

Authors: Reza Mohammadzadeh

Abstract:

The major challenges in geotechnical engineering in underground spaces arise from uncertainties and different probabilities. The collection, collation, and collaboration of existing data, so that they can be incorporated in analysis and design for a given prospect evaluation, would be a reliable and practical problem-solving method under uncertainty. Machine learning (ML) is a subfield of artificial intelligence and statistical science which applies different techniques (e.g., regression, neural networks, support vector machines, decision trees, random forests, genetic programming, etc.) to data in order to automatically learn and improve from them without being explicitly programmed, and to make decisions and predictions. In this paper, a conceptual database schema of geotechnical risks in underground coal mining based on a cloud system architecture has been designed. A new approach to risk assessment using a three-dimensional risk matrix supported by the level of knowledge (LoK) is proposed in this model. Subsequently, the stages of the model workflow methodology are described. In order to train the data and deploy the LoK models, an ML platform has been implemented. IBM Watson Studio, a leading data science tool and data-driven cloud-integrated ML platform, is employed in this study. As a use case, a dataset of geotechnical hazards and risk assessments in underground coal mining was prepared to demonstrate the performance of the model, and the results are outlined accordingly.

Keywords: data model, geotechnical risks, machine learning, underground coal mining

Procedia PDF Downloads 274
269 Probabilistic Approach of Dealing with Uncertainties in Distributed Constraint Optimization Problems and Situation Awareness for Multi-agent Systems

Authors: Sagir M. Yusuf, Chris Baber

Abstract:

In this paper, we describe how Bayesian inferential reasoning contributes to obtaining well-satisfied predictions for Distributed Constraint Optimization Problems (DCOPs) with uncertainties. We also demonstrate how DCOPs can be merged with multi-agent knowledge understanding and prediction (i.e., Situation Awareness). The DCOP functions were merged with a Bayesian Belief Network (BBN) in the form of situation, awareness, and utility nodes. We describe how the uncertainties can be represented in the BBN and how effective predictions can be made using the expectation-maximization algorithm or the conjugate gradient descent algorithm. The idea of variable prediction using Bayesian inference may reduce the number of variables in the agents’ sampling domain and also allow estimation of missing variables. Experimental results showed that the BBN produces more compelling predictions with samples containing uncertainties than with perfect samples. That is, Bayesian inference can help in handling the uncertainties and dynamism of DCOPs, which is the current issue in the DCOP community. We show how Bayesian inference can be formalized with Distributed Situation Awareness (DSA) using uncertain and missing agents’ data. The whole framework was tested on a multi-UAV mission for forest fire searching. Future work focuses on augmenting the existing architecture to deal with dynamic DCOP algorithms and multi-agent information merging.

Keywords: DCOP, multi-agent reasoning, Bayesian reasoning, swarm intelligence

Procedia PDF Downloads 119
268 Chatter Prediction of Curved Thin-walled Parts Considering Variation of Dynamic Characteristics Based on Acoustic Signals Acquisition

Authors: Damous Mohamed, Zeroudi Nasredine

Abstract:

High-speed milling of thin-walled parts with complex curvilinear profiles often encounters machining instability, commonly referred to as chatter. This phenomenon arises due to the dynamic interaction between the cutting tool and the part, exacerbated by the part's low rigidity and varying dynamic characteristics along the tool path. This research presents a dynamic model specifically developed to predict machining stability for such curved thin-walled components. The model employs the semi-discretization method, segmenting the tool trajectory into small, straight elements to locally approximate the behavior of an inclined plane. Dynamic characteristics for each segment are extracted through experimental modal analysis and incorporated into the simulation model to generate global stability lobe diagrams. Validation of the model is conducted through cutting tests where acoustic intensity is measured to detect instabilities. The experimental data align closely with the predicted stability limits, confirming the model's accuracy and effectiveness. This work provides a comprehensive approach to enhancing machining stability predictions, thereby improving the efficiency and quality of high-speed milling operations for thin-walled parts.

Keywords: chatter, curved thin-walled part, semi-discretization method, stability lobe diagrams

Procedia PDF Downloads 26
267 Combining the Deep Neural Network with the K-Means for Traffic Accident Prediction

Authors: Celso L. Fernando, Toshio Yoshii, Takahiro Tsubota

Abstract:

Understanding the causes of road accidents and predicting their occurrence is key to preventing deaths and serious injuries from road accident events. Traditional statistical methods such as Poisson and logistic regressions have been used to find the association of traffic environmental factors with accident occurrence; more recently, the artificial neural network (ANN), a computational technique that learns from historical data to make more accurate predictions, has emerged. Despite its ability to make accurate predictions, the ANN has difficulty dealing with a highly unbalanced distribution of attribute patterns in the training dataset; in such circumstances, the ANN treats the minority group as noise. However, in real-world data, the minority group is often the group of interest; e.g., in road traffic accident data, the accident events are the group of interest. This study proposes a combination of the k-means algorithm with the ANN to improve the predictive ability of the neural network model by alleviating the effect of the unbalanced distribution of attribute patterns in the training dataset. The results show that the proposed method improves the ability of the neural network to make predictions on a dataset with a highly unbalanced distribution of attribute patterns; however, on an evenly distributed dataset, the proposed method performs almost like a standard neural network.
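
The abstract does not detail how the k-means step is combined with the network; one common variant, sketched below purely as an assumption (not the paper's exact algorithm), compresses the majority class to k-means centroids so that both classes have equal size before training the neural network:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPClassifier

def kmeans_balanced_ann(X, y, random_state=0):
    """Replace majority-class records with k-means centroids (k = minority size),
    then train an ANN on the balanced set. Labels: 1 = accident (minority)."""
    X, y = np.asarray(X, dtype=float), np.asarray(y)
    X_min, X_maj = X[y == 1], X[y == 0]
    k = len(X_min)
    km = KMeans(n_clusters=k, n_init=10, random_state=random_state).fit(X_maj)
    X_bal = np.vstack([X_min, km.cluster_centers_])
    y_bal = np.concatenate([np.ones(k), np.zeros(k)])
    ann = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000,
                        random_state=random_state)
    ann.fit(X_bal, y_bal)
    return ann
```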

Keywords: accident risk estimation, artificial neural network, deep learning, k-means, road safety

Procedia PDF Downloads 163
266 Settlement Prediction in Cape Flats Sands Using Shear Wave Velocity – Penetration Resistance Correlations

Authors: Nanine Fouche

Abstract:

The Cape Flats is a low-lying, sand-covered expanse of approximately 460 square kilometres situated to the southeast of the central business district of Cape Town in the Western Cape of South Africa. The aeolian sands masking this area are often loose and compressible in the upper 1 m to 1.5 m of the surface, and there is a general exceedance of the maximum allowable settlement in these sands. The settlement of shallow foundations on Cape Flats sands is commonly predicted using the results of in-situ tests such as the SPT or DPSH, owing to the difficulty of retrieving undisturbed samples for laboratory testing. Varying degrees of accuracy and reliability are associated with these methods. More recently, shear wave velocity (Vs) profiles obtained from seismic testing, such as continuous surface wave (CSW) tests, are being used for settlement prediction. Such predictions have the advantage of considering the non-linear stress-strain behaviour of soil and the degradation of stiffness with increasing strain. CSW tests are rarely executed in the Cape Flats, whereas SPTs are commonly performed. For this reason, and to facilitate better settlement predictions in Cape Flats sand, equations representing shear wave velocity (Vs) as a function of SPT blow count (N60) and vertical effective stress (σv’) were generated by statistical regression of site investigation data. To reveal the most appropriate method of overburden correction, analyses were performed with a separate overburden term (Pa/σv’) as well as using stress-corrected shear wave velocity and SPT blow counts (correcting Vs and N60 to Vs1 and (N1)60, respectively). Shear wave velocity profiles and SPT blow count data from three sites masked by Cape Flats sands were utilised to generate 80 Vs-SPT N data pairs for analysis. The investigated terrains included sites in the suburbs of Athlone, Muizenburg, and Atlantis, all underlain by windblown deposits comprising fine and medium sand with varying fines contents. Elastic settlement analysis was also undertaken for the Cape Flats sands using a non-linear stepwise method based on small-strain stiffness estimates obtained from the best Vs-N60 model, and the results were compared to settlement estimates using the general elastic solution with stiffness profiles determined from Stroud’s (1989) and Webb’s (1969) SPT N60-E transformation models. Stroud’s method considers strain level indirectly, whereas Webb’s method does not take account of the variation in elastic modulus with strain. The expression of Vs in terms of N60 and Pa/σv’ derived from the Atlantis data set revealed the best fit, with R² = 0.83 and a standard error of 83.5 m/s. The less accurate Vs-SPT N relations associated with the combined data set are presumably the result of inversion routines used in the analysis of the CSW results, which showed significant variation in relative density and stiffness with depth. The regression analyses revealed that the inclusion of a separate overburden term in the regression of Vs and N60 produces improved fits, as opposed to the stress-corrected equations, in which the R² of the regression is notably lower. It is the correction of Vs and N60 to Vs1 and (N1)60 with the empirical constants ‘n’ and ‘m’ prior to regression that introduces bias with respect to overburden pressure.
When comparing settlement prediction methods, both Stroud’s method (which considers strain level indirectly) and the small-strain stiffness method predict higher stiffnesses for medium dense and dense profiles than Webb’s method, which takes no account of strain level in the determination of soil stiffness. Webb’s method appears to be suitable for loose sands only. The Versak software appears to underestimate differences in settlement between square and strip footings of similar width. In conclusion, settlement analysis using small-strain stiffness data from the proposed Vs-N60 model for Cape Flats sands provides a way to take account of the non-linear stress-strain behaviour of the sands when calculating settlement.
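
As a sketch of the regression with a separate overburden term (the functional form below is an assumption for illustration; the abstract does not reproduce the paper's own equation), a power-law model Vs = a · N60^b · (Pa/σv')^c can be fitted in log space:

```python
import numpy as np

def fit_vs_model(N60, sigma_v_eff, Vs, Pa=100.0):
    """Fit log(Vs) = log(a) + b*log(N60) + c*log(Pa/sigma_v') by least squares.
    N60 in blows, sigma_v' and Pa in kPa, Vs in m/s. Returns (a, b, c) and R^2."""
    N60, sv, Vs = map(lambda v: np.asarray(v, dtype=float), (N60, sigma_v_eff, Vs))
    X = np.column_stack([np.ones(len(Vs)), np.log(N60), np.log(Pa / sv)])
    y = np.log(Vs)
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    a, b, c = np.exp(coeffs[0]), coeffs[1], coeffs[2]
    y_hat = X @ coeffs
    r2 = 1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
    return (a, b, c), r2
```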

Keywords: sands, settlement prediction, continuous surface wave test, small-strain stiffness, shear wave velocity, penetration resistance

Procedia PDF Downloads 175
265 Conversion of Glycerol to 3-Hydroxypropanoic Acid by Genetically Engineered Bacillus subtilis

Authors: Aida Kalantari, Boyang Ji, Tao Chen, Ivan Mijakovic

Abstract:

3-hydroxypropanoic acid (3-HP) is one of the most important biomass-derivable platform chemicals that can be converted into a number of industrially important compounds. There have been several attempts at production of 3-HP from renewable sources in cell factories, focusing mainly on Escherichia coli, Klebsiella pneumoniae, and Saccharomyces cerevisiae. Despite the significant progress made in this field, commercially exploitable large-scale production of 3-HP in microbial strains has still not been achieved. In this study, we investigated the potential of Bacillus subtilis to be used as a microbial platform for bioconversion of glycerol into 3-HP. Our recombinant B. subtilis strains overexpress the two-step heterologous pathway containing glycerol dehydratase and aldehyde dehydrogenase from various backgrounds. The recombinant strains harboring the codon-optimized synthetic pathway from K. pneumoniae produced low levels of 3-HP. Since the enzymes in the heterologous pathway are sensitive to oxygen, we had to perform our experiments under micro-aerobic conditions. Under these conditions, the cell produces lactate in order to regenerate NAD+, and we found the lactate production to be in competition with the production of 3-HP. Therefore, based on the in silico predictions, we knocked out the glycerol kinase (glpk), which, in combination with growth on glucose, resulted in an improved 3-HP titer of 1 g/L and the removal of lactate. Cultivation of the same strain in an enriched medium improved the 3-HP titer up to 7.6 g/L. Our findings provide the first report of successful introduction of the biosynthetic pathway for conversion of glycerol into 3-HP in B. subtilis.

Keywords: bacillus subtilis, glycerol, 3-hydroxypropanoic acid, metabolic engineering

Procedia PDF Downloads 247
264 Comparison of Leeway Space Predictions Using Moyers and Tanaka-Johnston Upper Jaw and Lower Jaw on Batak Tribe Between Male and Female in Elementary School Students in Medan City, Sumatera Utara, Indonesia: A Cross-sectional Study

Authors: Hilda Fitria Lubis, Erna Sulistyawati

Abstract:

Objective: The study aims to compare average Leeway space between the Moyers and Tanaka-Johnston analyses in elementary school students from the Batak tribe in Medan City. Material and Methods: The study involved 106 students from the Batak tribe elementary school in Medan, comprising 53 male and 53 female students. Impressions of both jaws were taken to obtain working models, and the mesiodistal width of the four permanent incisors of the lower jaw, the amount of space available in the canine-premolar region, and the predicted mesiodistal widths of the canines and premolars were determined using the Moyers probability table at the 75% confidence level and the Tanaka-Johnston formula. Results: Using the Moyers analysis, students at the Batak elementary school in Medan City have an average Leeway space value of 2 mm in the upper jaw and 2.78 mm in the lower jaw. The average Leeway space value using the Tanaka-Johnston analysis in the Batak tribe elementary school students in Medan City is 1.33 mm in the upper jaw and 2.39 mm in the lower jaw. Conclusion: According to the Moyers and Tanaka-Johnston analyses of both the upper and lower jaws in elementary school students of the Batak tribe in Medan City, there is a significant difference in the average Leeway space between the two methods.
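
For reference, the standard Tanaka-Johnston prediction uses half the summed mesiodistal width of the four lower incisors plus a constant; the small helper below evaluates it with hypothetical example numbers (the function and figures are illustrative, not taken from this study):

```python
def tanaka_johnston_prediction(sum_lower_incisors_mm, arch="lower"):
    """Standard Tanaka-Johnston estimate for the combined mesiodistal width of
    the unerupted canine and two premolars in one quadrant:
        lower arch: sum_of_four_lower_incisors / 2 + 10.5 mm
        upper arch: sum_of_four_lower_incisors / 2 + 11.0 mm
    """
    constant = 10.5 if arch == "lower" else 11.0
    return sum_lower_incisors_mm / 2.0 + constant

def space_discrepancy(space_available_mm, sum_lower_incisors_mm, arch="lower"):
    """Positive values indicate spare space in the canine-premolar segment."""
    return space_available_mm - tanaka_johnston_prediction(
        sum_lower_incisors_mm, arch)

# Hypothetical example: four lower incisors summing to 23 mm, 22 mm available
print(space_discrepancy(22.0, 23.0, arch="lower"))  # 22 - (11.5 + 10.5) = 0.0
```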

Keywords: leeways space, batak tribe, genders, diagnosis

Procedia PDF Downloads 31
263 Development of Precise Ephemeris Generation Module for Thaichote Satellite Operations

Authors: Manop Aorpimai, Ponthep Navakitkanok

Abstract:

In this paper, the development of the ephemeris generation module used for the Thaichote satellite operations is presented. It is a vital part of the flight dynamics system, which comprises the orbit determination, orbit propagation, event prediction and station-keeping maneuver modules. In the generation of the spacecraft ephemeris data, the estimated orbital state vector from the orbit determination module is used as an initial condition. The equations of motion are then integrated forward in time to predict the satellite states. The higher geopotential harmonics, as well as other disturbing forces, are taken into account to resemble the environment in low-earth orbit. Using a highly accurate numerical integrator based on the Bulirsch-Stoer algorithm, the ephemeris data can be generated for long-term predictions with a relatively small computational burden and a short calculation time. Events occurring during the prediction course that are related to the mission operations, such as the satellite’s rise/set as viewed from the ground station, Earth and Moon eclipses, the drift in ground track, as well as the drift in the local solar time of the orbital plane, are all detected and reported. When combined with other modules to form a flight dynamics system, this application is aimed to be applied to the Thaichote satellite and successive Thailand Earth-observation missions.
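
The module itself uses a Bulirsch-Stoer integrator with geopotential harmonics and other perturbations; the sketch below is only a minimal two-body illustration of the propagation step (initial state, integrator settings and sampling interval are assumptions, not the operational values):

```python
import numpy as np
from scipy.integrate import solve_ivp

MU_EARTH = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2

def two_body(t, state):
    """Point-mass gravity only; the real module adds geopotential harmonics,
    drag and other disturbing forces."""
    r = state[:3]
    a = -MU_EARTH * r / np.linalg.norm(r) ** 3
    return np.concatenate([state[3:], a])

def propagate(state0, t_span, step=60.0):
    """Integrate the orbital state vector forward and return an ephemeris
    table (time, position, velocity) sampled every `step` seconds."""
    t_eval = np.arange(t_span[0], t_span[1] + step, step)
    sol = solve_ivp(two_body, t_span, state0, t_eval=t_eval,
                    rtol=1e-10, atol=1e-9, method="DOP853")
    return sol.t, sol.y.T

# Hypothetical near-circular LEO initial state (metres, metres/second)
r0 = [7.0e6, 0.0, 0.0]
v0 = [0.0, 7546.0, 0.0]
t, ephem = propagate(np.array(r0 + v0), (0.0, 5400.0))
```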

Keywords: flight dynamics system, orbit propagation, satellite ephemeris, Thailand’s Earth Observation Satellite

Procedia PDF Downloads 377
262 Comparing Forecasting Performances of the Bass Diffusion Model and Time Series Methods for Sales of Electric Vehicles

Authors: Andreas Gohs, Reinhold Kosfeld

Abstract:

This study should be of interest for practitioners who want to predict precisely the sales numbers of vehicles equipped with an innovative propulsion technology, as well as for researchers interested in applied (regional) time series analysis. The study is based on the numbers of new registrations of pure electric and hybrid cars. Methods of time series analysis such as ARIMA are compared with the Bass diffusion model with respect to their forecasting performance for new registrations in Germany at the national and federal state levels. In particular, it is investigated whether the additional information content of regional data increases forecasting accuracy at the national level by aggregating the predictions for the federal states. Estimated Bass diffusion model parameters for Germany and its sixteen federal states are reported. While the focus of this research is on the German market, estimation results are also provided for selected European and other countries. Concerning Bass parameters and forecasting performance, we get very different results for Germany's federal states and the member states of the European Union. This corresponds to differences across the EU member states in the adoption process of this innovative technology. Within the German market, adoption is more advanced in southern Germany and lags behind in eastern Germany, with the exception of Berlin.
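
For reference, the standard Bass diffusion model expresses cumulative adoptions through an innovation coefficient p, an imitation coefficient q and a market potential m; the sketch below (not the authors' code, and the toy registration series is invented) fits these parameters to a cumulative registration series:

```python
import numpy as np
from scipy.optimize import curve_fit

def bass_cumulative(t, p, q, m):
    """Cumulative adoptions N(t) = m * F(t) for the Bass diffusion model,
    F(t) = (1 - exp(-(p+q)t)) / (1 + (q/p) exp(-(p+q)t))."""
    e = np.exp(-(p + q) * t)
    return m * (1.0 - e) / (1.0 + (q / p) * e)

def fit_bass(new_registrations):
    """Fit p, q, m to a periodic series of new registrations."""
    y_cum = np.cumsum(np.asarray(new_registrations, dtype=float))
    t = np.arange(1, len(y_cum) + 1, dtype=float)
    p0 = [0.03, 0.38, y_cum[-1] * 2.0]          # common starting guesses
    (p, q, m), _ = curve_fit(bass_cumulative, t, y_cum, p0=p0,
                             bounds=([1e-6, 1e-6, y_cum[-1]], [1.0, 1.0, 1e9]))
    return p, q, m

# Hypothetical toy series of new EV registrations per year
series = [2000, 4000, 9000, 17000, 30000, 54000, 83000, 120000]
print(fit_bass(series))
```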

Keywords: bass diffusion model, electric vehicles, forecasting performance, market diffusion

Procedia PDF Downloads 167
261 Determination of Tide Height Using Global Navigation Satellite Systems (GNSS)

Authors: Faisal Alsaaq

Abstract:

Hydrographic surveys have traditionally relied on the availability of tide information for the reduction of sounding observations to a common datum. In most cases, tide information is obtained from tide gauge observations and/or tide predictions over space and time using local, regional or global tide models. While the latter often provide a rather crude approximation, the former rely on tide gauge stations that are spatially restricted and often have a sparse and limited distribution. A more recent method that is increasingly being used is Global Navigation Satellite System (GNSS) positioning, which can be utilised to monitor height variations of a vessel or buoy, thus providing information on sea level variations during the time of a hydrographic survey. However, GNSS heights obtained under the dynamic environment of a survey vessel are affected by “non-tidal” processes such as wave activity and the attitude of the vessel (roll, pitch, heave and dynamic draft). This research seeks to examine techniques that separate the tide signal from other non-tidal signals that may be contained in GNSS heights. This requires an investigation of the processes involved and their temporal, spectral and stochastic properties in order to apply suitable techniques for recovering the tide information. In addition, different post-mission and near real-time GNSS positioning techniques will be investigated, with a focus on height estimation at sea. Furthermore, the study will investigate the possibility of transferring chart datums to the locations of tide gauges.

Keywords: hydrography, GNSS, datum, tide gauge

Procedia PDF Downloads 264
260 A Study on the Failure Modes of Steel Moment Frame in Post-Earthquake Fire Using Coupled Mechanical-Thermal Analysis

Authors: Ehsan Asgari, Meisam Afazeli, Nezhla Attarchian

Abstract:

Post-earthquake fire is considered a major threat in seismic areas, since fire may break out in structures after an earthquake. In this research, the effect of post-earthquake fire on steel moment frames with and without fireproofing coating is investigated using the finite element method. For verification of the finite element results, the results of an experimental study carried out by previous researchers are used; the predicted FE results are compared with the test results, and good agreement is observed. After ensuring the accuracy of the finite element model predictions, the effect of post-earthquake fire on the frames is investigated, taking into account parameters including the presence or absence of fire protection, frame design assumptions, earthquake type and different fire scenarios. The effects of ordinary fire and post-earthquake fire on the frames are also compared. The plastic hinges induced by the earthquake are located at the beam-to-column connections and in the panel zones; these areas should be considered carefully when providing fireproofing coatings. The results of the study show that the occurrence of fire adjacent to corner columns is the most damaging scenario, resulting in progressive collapse of the structure. It was also concluded that the behavior of a structure in fire after a strong ground motion is significantly different from that in a normal fire.

Keywords: post earthquake fire, moment frame, finite element simulation, coupled temperature-displacement analysis, fire scenario

Procedia PDF Downloads 154
259 Performance Gap and near Zero Energy Buildings Compliance of Monitored Passivhaus in Northern Ireland, the Republic of Ireland and Italy

Authors: S. Colclough, V. Costanzo, K. Fabbri, S. Piraccini, P. Griffiths

Abstract:

The near Zero Energy Building (nZEB) standard is required for all buildings from 2020. The Passive House (PH) standard is a well-established low-energy building standard, designed over 25 years ago, and could potentially be used to achieve the nZEB standard in combination with renewables. By comparing measured performance with design predictions, this paper considers whether there is a performance gap for a number of monitored properties and assesses whether the nZEB standard can be achieved by following the well-established PH scheme. The analysis is based on monitoring results from real buildings located in Northern Ireland, the Republic of Ireland and Italy, with particular focus on indoor air quality, including the assumed and measured indoor temperatures and heating periods for both standards as recorded during a full annual cycle. An analysis is also carried out on the energy performance certificates of each of the dwellings to determine whether they meet the near Zero Energy Building primary energy consumption targets set in the respective jurisdictions. Each of the dwellings is certified as complying with the Passive House standard, and accordingly has very good insulation levels, heat recovery and ventilation systems of greater than 75% efficiency, and an airtightness of less than 0.6 air changes per hour at 50 Pa. It is found that indoor temperature and relative humidity were within the comfort boundaries set at the design stage, while carbon dioxide concentrations are sometimes higher than the values suggested by the EN 15251 standard for comfort class I, especially in bedrooms.

Keywords: monitoring campaign, nZEB (near zero energy buildings), Passivhaus, performance gap

Procedia PDF Downloads 152
258 A Hierarchical Method for Multi-Class Probabilistic Classification Vector Machines

Authors: P. Byrnes, F. A. DiazDelaO

Abstract:

The Support Vector Machine (SVM) has become widely recognised as one of the leading algorithms in machine learning for both regression and binary classification. It expresses predictions in terms of a linear combination of kernel functions, referred to as support vectors. Despite its popularity amongst practitioners, SVM has some limitations, the most significant being the generation of point predictions as opposed to predictive distributions. Stemming from this issue, a probabilistic model, namely Probabilistic Classification Vector Machines (PCVM), has been proposed, which respects the original functional form of SVM whilst also providing a predictive distribution. As physical system designs become more complex, an increasing number of classification tasks involving industrial applications consist of more than two classes. Consequently, this research proposes a framework which allows for the extension of PCVM to a multi-class setting. Additionally, the original PCVM framework relies on the use of type II maximum likelihood to provide estimates for both the kernel hyperparameters and the model evidence. In a high-dimensional multi-class setting, however, this approach has been shown to be ineffective due to poor scaling as the number of classes increases. Accordingly, we propose the application of Markov Chain Monte Carlo (MCMC) based methods to provide a posterior distribution over both parameters and hyperparameters. The proposed framework will be validated against current multi-class classifiers through synthetic and real-life implementations.

Keywords: probabilistic classification vector machines, multi class classification, MCMC, support vector machines

Procedia PDF Downloads 221
257 Efficient Credit Card Fraud Detection Based on Multiple ML Algorithms

Authors: Neha Ahirwar

Abstract:

In the contemporary digital era, the rise of credit card fraud poses a significant threat to both financial institutions and consumers. As fraudulent activities become more sophisticated, there is an escalating demand for robust and effective fraud detection mechanisms. Advanced machine learning algorithms have become crucial tools in addressing this challenge. This paper conducts a thorough examination of the design and evaluation of a credit card fraud detection system, utilizing four prominent machine learning algorithms: random forest, logistic regression, decision tree, and XGBoost. The surge in digital transactions has opened avenues for fraudsters to exploit vulnerabilities within payment systems. Consequently, there is an urgent need for proactive and adaptable fraud detection systems. This study addresses this imperative by exploring the efficacy of machine learning algorithms in identifying fraudulent credit card transactions. The selection of random forest, logistic regression, decision tree, and XGBoost for scrutiny in this study is based on their documented effectiveness in diverse domains, particularly in credit card fraud detection. These algorithms are renowned for their capability to model intricate patterns and provide accurate predictions. Each algorithm is implemented and evaluated for its performance in a controlled environment, utilizing a diverse dataset comprising both genuine and fraudulent credit card transactions.
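
A minimal sketch of the comparison described above, assuming a tabular dataset with a binary fraud label; the four model families come from the abstract, but the code, hyperparameters and metric reporting are illustrative assumptions rather than the authors' implementation:

```python
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from xgboost import XGBClassifier

def evaluate_fraud_models(X, y):
    """Train and report the four classifiers on a stratified train/test split
    of genuine (0) and fraudulent (1) transactions."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.3, stratify=y, random_state=42)
    models = {
        "random_forest": RandomForestClassifier(n_estimators=300, random_state=42),
        "logistic_regression": LogisticRegression(max_iter=2000),
        "decision_tree": DecisionTreeClassifier(random_state=42),
        "xgboost": XGBClassifier(eval_metric="logloss", random_state=42),
    }
    for name, model in models.items():
        model.fit(X_tr, y_tr)
        print(name)
        print(classification_report(y_te, model.predict(X_te), digits=4))
```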

Keywords: efficient credit card fraud detection, random forest, logistic regression, XGBoost, decision tree

Procedia PDF Downloads 67
256 Numerical Analysis of the Aging Effects of RC Shear Walls Repaired by CFRP Sheets: Application of CEB-FIP MC 90 Model

Authors: Yeghnem Redha, Guerroudj Hicham Zakaria, Hanifi Hachemi Amar Lemiya, Meftah Sid Ahmed, Tounsi Abdelouahed, Adda Bedia El Abbas

Abstract:

Creep deformation of concrete is often responsible for excessive deflection at service loads, which can compromise the performance of elements within a structure. Although laboratory tests may be undertaken to determine the deformation properties of concrete, these are time-consuming, often expensive and generally not a practical option. Therefore, relatively simple empirical design code models are relied upon to predict creep strain. This paper reviews the accuracy of creep and shrinkage predictions for reinforced concrete (RC) shear wall structures strengthened with carbon fibre reinforced polymer (CFRP) sheets, which are characterized by a widthwise varying fibre volume fraction. The review is based on the CEB-FIP MC90 model. The time-dependent behavior was investigated to analyze the static behavior of the walls. In the numerical formulation, the adherents and the adhesives are all modelled as shear wall elements using the mixed finite element method. Several tests were used to demonstrate the accuracy and effectiveness of the proposed method. Numerical results from the present analysis are presented to illustrate the significance of the time-dependency of the lateral displacements.

Keywords: RC shear walls strengthened, CFRP sheets, creep and shrinkage, CEB-FIP MC90 model, finite element method, static behavior

Procedia PDF Downloads 309
255 Rheology and Structural Arrest of Dense Dairy Suspensions: A Soft Matter Approach

Authors: Marjan Javanmard

Abstract:

The rheological properties of dairy products critically depend on the underlying organisation of proteins at multiple length scales. When heated and acidified, milk proteins form a particle gel that is a viscoelastic, solvent-rich, ‘soft’ material. In this work, recent developments in the rheology of soft particle suspensions were used to interpret and potentially define the properties of dairy gel structures. It is found that at volume fractions below random close packing (RCP), the Maron-Pierce-Quemada (MPQ) model accurately predicts the viscosity of the dairy gel suspensions without fitting parameters; the MPQ model has been shown previously to provide reasonable predictions of the viscosity of hard sphere suspensions from the volume fraction, solvent viscosity and RCP. This surprising finding demonstrates that up to RCP, the dairy gel system behaves as a hard sphere suspension and that the structural aggregates behave as discrete particulates, akin to what is observed for microgel suspensions. At effective phase volumes well above RCP, the system is a soft solid. In this region, it is found that the storage modulus of the sheared AMG scales with the storage modulus of the set gel. The storage modulus in this regime is reasonably well described as a function of effective phase volume by the Evans and Lips model. The findings of this work have the potential to aid in the rational design and control of dairy food structure-properties.
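
For reference, the Maron-Pierce-Quemada form relates suspension viscosity to the solvent viscosity, the phase volume and the random close packing fraction; the helper below evaluates it with placeholder numbers (the example values are assumptions, not measurements from this work):

```python
def mpq_viscosity(phi, solvent_viscosity, phi_rcp=0.64):
    """Maron-Pierce-Quemada estimate of suspension viscosity:
        eta = eta_solvent * (1 - phi/phi_rcp) ** -2
    Valid for effective phase volumes phi below random close packing phi_rcp."""
    if phi >= phi_rcp:
        raise ValueError("MPQ form diverges at and above random close packing")
    return solvent_viscosity * (1.0 - phi / phi_rcp) ** -2

# Placeholder example: water-like solvent, 40% effective phase volume
print(mpq_viscosity(0.40, 1.0e-3))  # about 7.1e-3 Pa.s
```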

Keywords: dairy suspensions, rheology-structure, Maron-Pierce-Quemada Model, Evans and Lips Model

Procedia PDF Downloads 219
254 An Inverse Docking Approach for Identifying New Potential Anticancer Targets

Authors: Soujanya Pasumarthi

Abstract:

Inverse docking is a relatively new technique that has been used to identify potential receptor targets of small molecules. Our docking software package MDock is well suited for such an application, as it is computationally efficient while simultaneously showing adequate results in binding affinity predictions and enrichment tests. As a validation study, we present the first-stage results of an inverse-docking study which seeks to identify potential direct targets of PRIMA-1. PRIMA-1 is well known for its ability to restore the tumor suppressor function of mutant p53, leading to apoptosis in several types of cancer cells. For this reason, we believe that potential direct targets of PRIMA-1 identified in silico should be experimentally screened for their ability to inhibit cancer cell growth. The highest-ranked human protein in our PRIMA-1 docking results is oxidosqualene cyclase (OSC), which is part of the cholesterol synthetic pathway. The results of two follow-up experiments which treat OSC as a possible anti-cancer target are promising. We show that both PRIMA-1 and Ro 48-8071, a known potent OSC inhibitor, significantly reduce the viability of BT-474 breast cancer cells relative to normal mammary cells. In addition, like PRIMA-1, we find that Ro 48-8071 results in increased binding of mutant p53 to DNA in BT-474 cells (which highly express p53). For the first time, Ro 48-8071 is shown to be a potent agent in killing human breast cancer cells. The potential of OSC as a new target for developing anticancer therapies is worth further investigation.

Keywords: inverse docking, in silico screening, protein-ligand interactions, molecular docking

Procedia PDF Downloads 446
253 Modeling of Age Hardening Process Using Adaptive Neuro-Fuzzy Inference System: Results from Aluminum Alloy A356/Cow Horn Particulate Composite

Authors: Chidozie C. Nwobi-Okoye, Basil Q. Ochieze, Stanley Okiy

Abstract:

This research reports on the modeling of the age hardening process using an adaptive neuro-fuzzy inference system (ANFIS). The age hardening output (hardness) was predicted using ANFIS. The input parameters were ageing time, temperature and percentage composition of cow horn particles (CHp%). The results show that the correlation coefficient (R) of the predicted hardness values versus the measured values was 0.9985. Subsequently, values outside the experimental data points were predicted. When the temperature was kept constant and the other input parameters were varied, the average relative error of the predicted values was 0.0931%. When the temperature was varied and the other input parameters kept constant, the average relative error of the hardness predictions was 80%. The results show that ANFIS with coarse experimental data points for learning is not very effective in predicting process outputs in the age hardening operation of A356 alloy/CHp particulate composite. The fine experimental data requirements of ANFIS make it more expensive in modeling and optimization of age hardening operations of A356 alloy/CHp particulate composite.

Keywords: adaptive neuro-fuzzy inference system (ANFIS), age hardening, aluminum alloy, metal matrix composite

Procedia PDF Downloads 153
252 2D Numerical Modeling of Ultrasonic Measurements in Concrete: Wave Propagation in a Multiple-Scattering Medium

Authors: T. Yu, L. Audibert, J. F. Chaix, D. Komatitsch, V. Garnier, J. M. Henault

Abstract:

Linear ultrasonic techniques play a major role in Non-Destructive Evaluation (NDE) of civil engineering structures in concrete, since they can meet operational requirements. Interpretation of ultrasonic measurements could be improved by a better understanding of ultrasonic wave propagation in a multiple scattering medium. This work aims to develop a 2D numerical model of ultrasonic wave propagation in a heterogeneous medium, like concrete, integrating the multiple scattering phenomena in the SPECFEM software. The coherent field of multiple scattering is obtained by averaging numerical wave fields, and it is used to determine the effective phase velocity and attenuation corresponding to an equivalent homogeneous medium. First, this model is applied to one scattering element (a cylinder) in a homogeneous medium in a linear-elastic system, and it is validated by comparison with the analytical solution. Then, some cases of multiple scattering by a set of randomly located cylinders or polygons are simulated to perform parametric studies on the influence of frequency and scatterer size, concentration, and shape. The effective properties are also compared with the predictions of the Waterman-Truell model to verify its validity. Finally, the viscoelastic behavior of the mortar is introduced in the simulation in order to consider the dispersion and attenuation due to the porosity of the cement paste. In the future, different steps will be developed: the comparison with experimental results, the interpretation of NDE measurements, and the optimization of NDE parameters prior to an auscultation.

Keywords: attenuation, multiple-scattering medium, numerical modeling, phase velocity, ultrasonic measurements

Procedia PDF Downloads 275
251 Breaking the Stained-Glass Ceiling: Personality Traits and Ambivalent Sexism in Shaping Gender Income Equality

Authors: Shiza Shahid, Saba Shahid, Kenji Noguchi, Raegan Bishop, Elena Stepanova

Abstract:

According to data from the U.S. Census Bureau, in 2020, in the United States, women who worked full-time, year-round earned only 82 cents for every dollar earned by men who worked full-time, year-round. This study examined how personality traits (extraversion, agreeableness, conscientiousness, emotional stability, openness to experience) interact with ambivalent sexism to influence acceptance of gender income inequality. Using a quantitative approach, data were collected from a sample of N = 150 students from the Social Science Online Subject Pool (SONA). The study predicted that (a) extraversion and openness to experience would be positively related to acceptance of gender income inequality, while emotional stability and agreeableness would be negatively related to it, and (b) individuals who scored higher on measures of hostile sexism would show greater acceptance of gender income inequality than individuals who scored higher on measures of benevolent sexism. The results are reported in relation to these predictions. This study underscores the importance of addressing the underlying factors contributing to attitudes towards gender income inequality and contributes to ongoing efforts to achieve gender equality, which is important for promoting economic well-being.

Keywords: gender income inequality, ambivalent sexism, personality traits, sustainable development goals

Procedia PDF Downloads 65
250 Performance Comparison of Different Regression Methods for a Polymerization Process with Adaptive Sampling

Authors: Florin Leon, Silvia Curteanu

Abstract:

Developing complete mechanistic models for polymerization reactors is not easy, because complex reactions occur simultaneously, a large number of kinetic parameters are involved, and the chemical and physical phenomena for mixtures involving polymers are sometimes poorly understood. To overcome these difficulties, empirical models based on sampled data can be used instead, namely regression methods typical of the machine learning field. They have the ability to learn the trends of a process without any knowledge of its particular physical and chemical laws. Therefore, they are useful for modeling complex processes, such as the free radical polymerization of methyl methacrylate achieved in a batch bulk process. The goal is to generate accurate predictions of monomer conversion, number average molecular weight and gravimetric average molecular weight. This process is associated with non-linear gel and glass effects. For this purpose, an adaptive sampling technique is presented, which can select more samples around the regions where the values have a higher variation. Several machine learning methods are used for the modeling and their performance is compared: support vector machines, k-nearest neighbor and random forest, as well as an original algorithm, large margin nearest neighbor regression. The suggested method provides very good results compared to the other well-known regression algorithms.
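
A minimal sketch of one possible adaptive sampling rule (an assumption for illustration, not the authors' procedure): starting from a coarse grid of inputs, new sample points are repeatedly inserted in the interval where the process output currently changes the most, so regions such as the gel-effect transition receive more samples.

```python
import numpy as np

def adaptive_sample(process, x_min, x_max, n_initial=10, n_total=50):
    """Sample a 1-D process output (e.g. monomer conversion vs. time) more
    densely where it varies most. `process` is a callable returning the
    measured or simulated output at a given input value."""
    xs = list(np.linspace(x_min, x_max, n_initial))
    ys = [process(x) for x in xs]
    while len(xs) < n_total:
        order = np.argsort(xs)
        xs_s = np.array(xs)[order]
        ys_s = np.array(ys)[order]
        # Pick the interval with the largest change in the output
        i = int(np.argmax(np.abs(np.diff(ys_s))))
        x_new = 0.5 * (xs_s[i] + xs_s[i + 1])
        xs.append(x_new)
        ys.append(process(x_new))
    return np.array(xs), np.array(ys)
```

The resulting sample set can then be fed to any of the regression methods listed above (SVM, k-nearest neighbor, random forest, or large margin nearest neighbor regression) in place of a uniformly spaced design.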

Keywords: batch bulk methyl methacrylate polymerization, adaptive sampling, machine learning, large margin nearest neighbor regression

Procedia PDF Downloads 304