Search results for: e2e reliability prediction
3437 Comparing Machine Learning Estimation of Fuel Consumption of Heavy-Duty Vehicles
Authors: Victor Bodell, Lukas Ekstrom, Somayeh Aghanavesi
Abstract:
Fuel consumption (FC) is one of the key factors in determining the expenses of operating a heavy-duty vehicle. A customer may therefore request an estimate of the FC of a desired vehicle. The modular design of heavy-duty vehicles allows their construction by specifying the building blocks, such as gear box, engine and chassis type. If the combination of building blocks is unprecedented, it is unfeasible to measure the FC, since this would first require the construction of the vehicle. This paper proposes a machine learning approach to predict FC. The study uses data from around 40,000 vehicles, covering vehicle specifications and operational environmental conditions such as road slopes and driver profiles. All vehicles have diesel engines and a mileage of more than 20,000 km. The data is used to investigate the accuracy of the machine learning algorithms linear regression (LR), K-nearest neighbor (KNN) and artificial neural networks (ANN) in predicting fuel consumption for heavy-duty vehicles. Performance of the algorithms is evaluated by reporting the prediction error on both simulated data and operational measurements. The performance of the algorithms is compared using nested cross-validation and statistical hypothesis testing. The statistical evaluation procedure finds that ANNs have the lowest prediction error compared to LR and KNN in estimating fuel consumption on both simulated and operational data. The models have a mean relative prediction error of 0.3% on simulated data and 4.2% on operational data.
Keywords: artificial neural networks, fuel consumption, Friedman test, machine learning, statistical hypothesis testing
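To make the nested cross-validation comparison described above concrete, the following sketch tunes and scores LR, KNN and an ANN with scikit-learn on synthetic placeholder data; the feature set, hyperparameter grids and error metric are assumptions for illustration, not the authors' actual pipeline.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import GridSearchCV, KFold, cross_val_score

# Placeholder stand-in for vehicle-specification / operating-condition features.
X, y = make_regression(n_samples=500, n_features=10, noise=5.0, random_state=0)

models = {
    "LR": (LinearRegression(), {"fit_intercept": [True, False]}),
    "KNN": (KNeighborsRegressor(), {"n_neighbors": [3, 5, 9]}),
    "ANN": (MLPRegressor(max_iter=2000, random_state=0),
            {"hidden_layer_sizes": [(16,), (32, 16)]}),
}

outer = KFold(n_splits=5, shuffle=True, random_state=1)
inner = KFold(n_splits=3, shuffle=True, random_state=2)

for name, (est, grid) in models.items():
    # Inner loop tunes hyperparameters; outer loop estimates prediction error.
    tuned = GridSearchCV(est, grid, cv=inner, scoring="neg_mean_absolute_error")
    scores = cross_val_score(tuned, X, y, cv=outer, scoring="neg_mean_absolute_error")
    print(f"{name}: MAE = {-scores.mean():.2f} ± {scores.std():.2f}")
```

The per-fold errors produced this way can then be compared across algorithms with a statistical test such as the Friedman test, as the abstract describes.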
Procedia PDF Downloads 178
3436 Reliability-Based Maintenance Management Methodology to Minimise Life Cycle Cost of Water Supply Networks
Authors: Mojtaba Mahmoodian, Joshua Phelan, Mehdi Shahparvari
Abstract:
With a large percentage of countries’ total infrastructure expenditure attributed to water network maintenance, it is essential to optimise maintenance strategies to rehabilitate or replace underground pipes before failure occurs. The aim of this paper is to provide water utility managers with a maintenance management approach for underground water pipes, subject to external loading and material corrosion, to give the lowest life cycle cost over a predetermined time period. This reliability-based maintenance management methodology details the optimal years for intervention, the ideal number of maintenance activities to perform before replacement and specifies feasible renewal options and intervention prioritisation to minimise the life cycle cost. The study was then extended to include feasible renewal methods by determining the structural condition index and potential for soil loss, then obtaining the failure impact rating to assist in prioritising pipe replacement. A case study on optimisation of maintenance plans for the Melbourne water pipe network is considered in this paper to evaluate the practicality of the proposed methodology. The results confirm that the suggested methodology can provide water utility managers with a reliable systematic approach to determining optimum maintenance plans for pipe networks.Keywords: water pipe networks, maintenance management, reliability analysis, optimum maintenance plan
Procedia PDF Downloads 155
3435 Providing a Practical Model to Reduce Maintenance Costs: A Case Study in Golgohar Company
Authors: Iman Atighi, Jalal Soleimannejad, Ahmad Akbarinasab, Saeid Moradpour
Abstract:
In the past, profit could be increased by raising product prices, but in the current decade a competitive market no longer allows this. Therefore, the only way to increase profit is to reduce costs. A significant percentage of production costs are maintenance costs, and analysing these costs can yield more profit. Most maintenance strategies, such as RCM (Reliability-Centered Maintenance), TPM (Total Productive Maintenance) and PM (Preventive Maintenance), aim to reduce maintenance costs. In this paper, reducing the maintenance costs of the Concentration Plant of Golgohar Company (GEG) was examined using MTBF (Mean Time Between Failures) and MTTR (Mean Time To Repair) analyses. These analyses showed that, instead of buying new machines and increasing costs in order to raise capacity, improving the MTBF and MTTR indices would solve the capacity problems in the best way and decrease costs.
Keywords: Golgohar Iron Ore Mining and Industrial Company, maintainability, maintenance costs, reliability-centered maintenance
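As a minimal sketch of the MTBF/MTTR arithmetic the abstract relies on, the figures below are purely illustrative; only the standard definitions are taken as given.

```python
# Hypothetical operating log for one plant line (values are illustrative only).
operating_hours = 8000.0   # total uptime in the analysis period
repair_hours = 250.0       # total downtime spent on repairs
n_failures = 40

mtbf = operating_hours / n_failures   # mean time between failures
mttr = repair_hours / n_failures      # mean time to repair
availability = mtbf / (mtbf + mttr)   # inherent availability

print(f"MTBF = {mtbf:.1f} h, MTTR = {mttr:.2f} h, availability = {availability:.3%}")
```

Raising MTBF or lowering MTTR raises availability directly, which is the mechanism by which the study recovers capacity without buying new machines.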
Procedia PDF Downloads 302
3434 Cooperative Coevolution for Neuro-Evolution of Feed Forward Networks for Time Series Prediction Using Hidden Neuron Connections
Authors: Ravneil Nand
Abstract:
Cooperative coevolution uses problem decomposition methods to solve a larger problem by breaking it down into a number of smaller sub-problems. Different problem decomposition methods have their own strengths and limitations depending on the neural network used and the application problem. In this paper, we introduce a new problem decomposition method known as Hidden-Neuron Level Decomposition (HNL). The HNL method is compared with an established problem decomposition method in time series prediction. The results show that the proposed approach improves the results on some benchmark data sets when compared to the standalone method and is competitive with methods from the literature.
Keywords: cooperative coevolution, feed forward network, problem decomposition, neuron, synapse
Procedia PDF Downloads 335
3433 Numerical Prediction of Entropy Generation in Heat Exchangers
Authors: Nadia Allouache
Abstract:
The second law of thermodynamics is important for minimising energy losses in heat exchangers. The present study is devoted to the numerical prediction of entropy generation due to heat transfer and friction in a double tube heat exchanger partly or fully filled with a porous medium. The goal of this work is to find the optimal conditions that minimise entropy generation. For this purpose, numerical modelling based on the control volume method is used to describe the flow and heat transfer phenomena in the fluid and the porous medium. The effects of the porous layer thickness, its permeability and the effective thermal conductivity have been investigated. Unexpectedly, the fully porous heat exchanger yields lower entropy generation than the partly porous case or the fluid case, even though friction increases entropy generation.
Keywords: heat exchangers, porous medium, second law approach, turbulent flow
Procedia PDF Downloads 300
3432 Predicting Data Center Resource Usage Using Quantile Regression to Conserve Energy While Fulfilling the Service Level Agreement
Authors: Ahmed I. Alutabi, Naghmeh Dezhabad, Sudhakar Ganti
Abstract:
Data centers have been growing in size and demand continuously over the last two decades. Planning for the deployment of resources has been shallow and has always resorted to over-provisioning. Data center operators try to maximize the availability of their services by allocating multiples of the needed resources. One resource that has been wasted, with little thought, is energy. In recent years, programmable resource allocation has paved the way for more efficient and robust data centers. In this work, we examine the predictability of resource usage in a data center environment. We use a number of models that cover a wide spectrum of machine learning categories. We then establish a framework to guarantee the client service level agreement (SLA). Our results show that using prediction can cut energy loss by up to 55%.
Keywords: machine learning, artificial intelligence, prediction, data center, resource allocation, green computing
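One way quantile regression supports this kind of SLA-aware provisioning is by forecasting an upper quantile of demand rather than a worst-case constant. The sketch below uses scikit-learn's quantile loss on synthetic demand data; the demand curve and quantile levels are assumptions for illustration, not the paper's dataset or framework.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
hours = rng.uniform(0, 24, 2000)
# Synthetic CPU-demand curve with a daily peak and noise (illustrative only).
cpu = 40 + 30 * np.sin(hours / 24 * 2 * np.pi) + rng.normal(0, 5, hours.size)
X = hours.reshape(-1, 1)

# Provision for the 95th percentile of predicted demand instead of a fixed
# worst case, reducing idle capacity (and energy) while still meeting the SLA
# with high probability.
q95 = GradientBoostingRegressor(loss="quantile", alpha=0.95).fit(X, cpu)
median = GradientBoostingRegressor(loss="quantile", alpha=0.50).fit(X, cpu)

t = np.array([[6.0], [12.0], [18.0]])
print("median demand:", median.predict(t).round(1))
print("95th-percentile provision:", q95.predict(t).round(1))
```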
Procedia PDF Downloads 108
3431 Big Data: Appearance and Disappearance
Authors: James Moir
Abstract:
The mainstay of Big Data is prediction, in that it allows practitioners, researchers and policy analysts to predict trends based upon the analysis of large and varied sources of data. These range from changing social and political opinions to patterns in crime and consumer behaviour. Big Data has therefore shifted the criterion of success in science from causal explanation to predictive modelling and simulation. Nineteenth-century science sought to capture phenomena and to show their appearance through causal mechanisms, while twentieth-century science attempted to save the appearance and relinquish causal explanations. Now twenty-first-century science, in the form of Big Data, is concerned with the prediction of appearances and nothing more. However, this pulls social science back in the direction of a more rule- or law-governed reality model of science and away from a consideration of the internal nature of rules in relation to various practices. In effect, Big Data offers us no more than a world of surface appearance, and in doing so it makes any context-specific conceptual sensitivity disappear.
Keywords: big data, appearance, disappearance, surface, epistemology
Procedia PDF Downloads 420
3430 Data Mining Approach: Classification Model Evaluation
Authors: Lubabatu Sada Sodangi
Abstract:
The rapid growth in the exchange and accessibility of information via the internet leads many organisations to acquire data on their own operations. The aim of data mining is to analyse the different behaviours of a dataset using observation. However, the subset of the dataset being analysed may not display all the behaviours and relationships of the entire data and, therefore, may not represent other parts that exist in the dataset. There is a range of techniques used in data mining to determine the hidden or unknown information in datasets. In this paper, the performance of two algorithms, Chi-Square Automatic Interaction Detection (CHAID) and the multilayer perceptron (MLP), is compared using the Adult dataset to find the percentage of adults that earn > 50k and those that earn <= 50k per year. The two algorithms were studied and compared using IBM SPSS Statistics software. The result for CHAID shows that the most important predictors are relationship and education: individuals who are married (husband), hold a Bachelor, Masters, Doctorate or Prof-school qualification and are aged between 41 and 57 earn > 50k. The multilayer perceptron identifies marital status and capital gain as the most important predictors of income: individuals whose capital gain is less than 6,849 and who are single, separated or widowed earn <= 50k, whereas individuals whose capital gain is greater than 6,849, who work more than 35 hrs/wk and who are older than 27 years earn > 50k. Comparing the two algorithms shows that both are reliable, but CHAID is more strongly so, clearly showing that relationship and education contribute to the prediction, as displayed in the data visualisation.
Keywords: data mining, CHAID, multi-layer perceptron, SPSS, Adult dataset
Procedia PDF Downloads 378
3429 Weibull Cumulative Distribution Function Analysis with Life Expectancy Endurance Test Result of Power Window Switch
Authors: Miky Lee, K. Kim, D. Lim, D. Cho
Abstract:
This paper presents the planning, the rationale for test specification derivation, the sampling requirements, the test facilities and the result analysis used to conduct lifetime expectancy endurance tests on power window switches (PWS), considering thermally induced mechanical stress under diurnal cyclic temperatures during normal operation (power cycling). The detailed analysis process and the test results for the selected PWS set are discussed in this paper. A statistical approach to lifetime expectancy was applied to the measurement standards dealing with PWS lifetime determination through endurance tests, and the choice of approach, within the framework of the task, is explained. The present task was dedicated to voltage drop measurement to derive lifetime expectancy, whereas most others consider contact or surface resistance; the measurements to perform and the main instruments used are fully described accordingly. The failure data from the tests were analysed to conclude the lifetime expectancy through a statistical method using the Weibull cumulative distribution function. The first goal of this task is to develop a realistic worst-case lifetime endurance test specification, because the existing large number of switch test standards cannot induce the degradation mechanism that makes the switches less reliable. The second goal is to assess the quantitative reliability status of currently manufactured PWS based on the test specification newly developed through this project. The last and most important goal is to satisfy customers' requirements regarding product reliability.
Keywords: power window switch, endurance test, Weibull function, reliability, degradation mechanism
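A minimal sketch of the Weibull analysis step, fitting a two-parameter Weibull distribution to hypothetical cycles-to-failure data with SciPy; the failure data and the quantities reported (probability of failure by a given cycle count, B10 life) are illustrative assumptions, not the study's results.

```python
import numpy as np
from scipy import stats

# Hypothetical cycles-to-failure from an endurance test (illustrative only).
cycles = np.array([21000, 25500, 27800, 30200, 31500, 33900, 36400, 40100])

# Fit a two-parameter Weibull (location fixed at zero).
shape, loc, scale = stats.weibull_min.fit(cycles, floc=0)

# CDF F(t) = 1 - exp(-(t/scale)**shape): probability of failure by t cycles.
t = 30000
print(f"shape k = {shape:.2f}, scale = {scale:.0f} cycles")
print(f"P(failure by {t} cycles) = {stats.weibull_min.cdf(t, shape, loc, scale):.2f}")
# B10 life: cycles by which 10% of switches are expected to have failed.
print(f"B10 life ≈ {stats.weibull_min.ppf(0.10, shape, loc, scale):.0f} cycles")
```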
Procedia PDF Downloads 235
3428 Prediction of Childbearing Orientations According to Couples' Sexual Review Component
Authors: Razieh Rezaeekalantari
Abstract:
Objective: The purpose of this study was to investigate the prediction of parenting orientations in terms of the components of couples' sexual review. Methods: This was a descriptive correlational study. The population consisted of 500 couples referring to the Sari Health Center, from which 215 people were selected randomly using the Krejcie and Morgan sample size table. For data collection, the childbearing orientations scale and the Multidimensional Sexual Self-Concept Questionnaire were used. Results: The mean and standard deviation were used for data analysis, and regression, correlation and inferential statistics were used to test the research hypothesis. Conclusion: The findings indicate that there is no significant relationship between the tendency to childbearing and the predictive value of sexual review (r = 0.84) at the reported significance level (sig = 219.19) (P < 0.05). So, with 95% confidence, we conclude that there is no meaningful relationship between the sexual review components and the tendency to child-rearing.
Keywords: couples referring, health center, sexual review component, parenting orientations
Procedia PDF Downloads 219
3427 Sorghum Grains Grading for Food, Feed, and Fuel Using NIR Spectroscopy
Authors: Irsa Ejaz, Siyang He, Wei Li, Naiyue Hu, Chaochen Tang, Songbo Li, Meng Li, Boubacar Diallo, Guanghui Xie, Kang Yu
Abstract:
Background: Near-infrared spectroscopy (NIR) is a non-destructive, fast, and low-cost method to measure the grain quality of different cereals. Previously reported NIR model calibrations using the whole grain spectra had moderate accuracy. Improved predictions are achievable by using the spectra of whole grains, when compared with the use of spectra collected from the flour samples. However, the feasibility for determining the critical biochemicals, related to the classifications for food, feed, and fuel products are not adequately investigated. Objectives: To evaluate the feasibility of using NIRS and the influence of four sample types (whole grains, flours, hulled grain flours, and hull-less grain flours) on the prediction of chemical components to improve the grain sorting efficiency for human food, animal feed, and biofuel. Methods: NIR was applied in this study to determine the eight biochemicals in four types of sorghum samples: hulled grain flours, hull-less grain flours, whole grains, and grain flours. A total of 20 hybrids of sorghum grains were selected from the two locations in China. Followed by NIR spectral and wet-chemically measured biochemical data, partial least squares regression (PLSR) was used to construct the prediction models. Results: The results showed that sorghum grain morphology and sample format affected the prediction of biochemicals. Using NIR data of grain flours generally improved the prediction compared with the use of NIR data of whole grains. In addition, using the spectra of whole grains enabled comparable predictions, which are recommended when a non-destructive and rapid analysis is required. Compared with the hulled grain flours, hull-less grain flours allowed for improved predictions for tannin, cellulose, and hemicellulose using NIR data. Conclusion: The established PLSR models could enable food, feed, and fuel producers to efficiently evaluate a large number of samples by predicting the required biochemical components in sorghum grains without destruction.Keywords: FT-NIR, sorghum grains, biochemical composition, food, feed, fuel, PLSR
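The modelling step described above pairs NIR spectra with wet-chemistry reference values through partial least squares regression. The sketch below shows the general shape of such a PLSR calibration with scikit-learn on synthetic placeholder spectra; the number of components, the target biochemical and the data are assumptions, not the study's calibration.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
# Placeholder NIR spectra: 80 samples x 700 wavelengths, plus a reference
# wet-chemistry value (e.g. tannin content); both are synthetic here.
spectra = rng.normal(size=(80, 700))
tannin = spectra[:, 100] * 2.0 + spectra[:, 450] + rng.normal(0, 0.1, 80)

pls = PLSRegression(n_components=10)
pred = cross_val_predict(pls, spectra, tannin, cv=5).ravel()
print(f"cross-validated R^2 = {r2_score(tannin, pred):.2f}")
```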
Procedia PDF Downloads 69
3426 National Assessment for Schools in Saudi Arabia: Score Reliability and Plausible Values
Authors: Dimiter M. Dimitrov, Abdullah Sadaawi
Abstract:
The National Assessment for Schools (NAFS) in Saudi Arabia consists of standardized tests in Mathematics, Reading, and Science for school grade levels 3, 6, and 9. One main goal is to classify students into four categories of NAFS performance (minimal, basic, proficient, and advanced) by schools and the entire national sample. The NAFS scoring and equating is performed on a bounded scale (D-scale: ranging from 0 to 1) in the framework of the recently developed “D-scoring method of measurement.” The specificity of the NAFS measurement framework and data complexity presented both challenges and opportunities to (a) the estimation of score reliability for schools, (b) setting cut-scores for the classification of students into categories of performance, and (c) generating plausible values for distributions of student performance on the D-scale. The estimation of score reliability at the school level was performed in the framework of generalizability theory (GT), with students “nested” within schools and test items “nested” within test forms. The GT design was executed via a multilevel modeling syntax code in R. Cut-scores (on the D-scale) for the classification of students into performance categories was derived via a recently developed method of standard setting, referred to as “Response Vector for Mastery” (RVM) method. For each school, the classification of students into categories of NAFS performance was based on distributions of plausible values for the students’ scores on NAFS tests by grade level (3, 6, and 9) and subject (Mathematics, Reading, and Science). Plausible values (on the D-scale) for each individual student were generated via random selection from a statistical logit-normal distribution with parameters derived from the student’s D-score and its conditional standard error, SE(D). All procedures related to D-scoring, equating, generating plausible values, and classification of students into performance levels were executed via a computer program in R developed for the purpose of NAFS data analysis.Keywords: large-scale assessment, reliability, generalizability theory, plausible values
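A minimal sketch of generating plausible values on a bounded (0, 1) scale from a logit-normal distribution, as the abstract describes. The mapping of the D-score and its conditional standard error to logit-scale parameters below uses a delta-method approximation, which is an assumption; the exact NAFS parameterisation may differ.

```python
import numpy as np

def plausible_values(d_score, se_d, n_draws=5, rng=None):
    """Draw plausible values on the bounded D-scale (0, 1).

    The D-score and its conditional standard error SE(D) are mapped to the
    logit scale (delta-method approximation, an assumption here), normal
    draws are taken there, and the draws are mapped back, which keeps every
    plausible value inside (0, 1).
    """
    rng = rng or np.random.default_rng()
    mu = np.log(d_score / (1 - d_score))        # logit of the D-score
    sigma = se_d / (d_score * (1 - d_score))    # approximate scale on the logit
    draws = rng.normal(mu, sigma, size=n_draws)
    return 1 / (1 + np.exp(-draws))             # back to the D-scale

print(plausible_values(0.62, 0.04, rng=np.random.default_rng(1)).round(3))
```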
Procedia PDF Downloads 18
3425 Analytical Study of Data Mining Techniques for Software Quality Assurance
Authors: Mariam Bibi, Rubab Mehboob, Mehreen Sirshar
Abstract:
Satisfying customer requirements is the ultimate goal of producing or developing any product, and the quality of the product is judged by the level of customer satisfaction. Different techniques reported in this survey enhance product quality through software defect prediction and by locating missing software requirements. Some mining techniques have been proposed to assess individual performance indicators in a collaborative environment in order to reduce errors at the individual level. The basic intention is to produce a product with zero or few defects, thereby producing the best possible quality. The survey analyses techniques such as genetic algorithms, artificial neural networks, classification and clustering techniques, and decision trees. The analysis shows that these techniques contribute much to the improvement and enhancement of product quality.
Keywords: data mining, defect prediction, missing requirements, software quality
Procedia PDF Downloads 467
3424 Cardiovascular Disease Prediction Using Machine Learning Approaches
Abstract:
It is estimated that heart disease accounts for one in ten deaths worldwide. Deaths due to heart disease are among the leading causes of death in the United States according to the World Health Organization, and cardiovascular diseases (CVDs) account for one in four U.S. deaths according to the Centers for Disease Control and Prevention (CDC). According to statistics, women are more likely than men to die from heart disease as a result of strokes, while a 50% increase in men's mortality was reported by the World Health Organization in 2009. The consequences of cardiovascular disease are severe. The causes of heart disease include diabetes, high blood pressure, high cholesterol, abnormal pulse rates, etc. Machine learning (ML) can be used to make predictions and decisions in the healthcare industry, and scientists have therefore turned to modern technologies like machine learning and data mining to predict diseases. The disease prediction in this work is based on four algorithms; compared to the other algorithms, AdaBoost is much more accurate.
Keywords: heart disease, cardiovascular disease, coronary artery disease, feature selection, random forest, AdaBoost, SVM, decision tree
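A hedged sketch of the kind of four-algorithm comparison the abstract describes, using scikit-learn on synthetic tabular data; the feature set, dataset and cross-validation protocol are placeholders, not the study's data or results.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

# Placeholder for a tabular heart-disease dataset (age, blood pressure,
# cholesterol, pulse rate, ...); the real study's data are not reproduced here.
X, y = make_classification(n_samples=600, n_features=13, n_informative=6,
                           random_state=0)

models = {
    "decision tree": DecisionTreeClassifier(random_state=0),
    "random forest": RandomForestClassifier(random_state=0),
    "SVM": SVC(),
    "AdaBoost": AdaBoostClassifier(random_state=0),
}
for name, clf in models.items():
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: accuracy = {acc:.3f}")
```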
Procedia PDF Downloads 153
3423 Prediction of Sepsis Illness from Patients Vital Signs Using Long Short-Term Memory Network and Dynamic Analysis
Authors: Marcio Freire Cruz, Naoaki Ono, Shigehiko Kanaya, Carlos Arthur Mattos Teixeira Cavalcante
Abstract:
The systems that record patient care information, known as Electronic Medical Record (EMR) and those that monitor vital signs of patients, such as heart rate, body temperature, and blood pressure have been extremely valuable for the effectiveness of the patient’s treatment. Several kinds of research have been using data from EMRs and vital signs of patients to predict illnesses. Among them, we highlight those that intend to predict, classify, or, at least identify patterns, of sepsis illness in patients under vital signs monitoring. Sepsis is an organic dysfunction caused by a dysregulated patient's response to an infection that affects millions of people worldwide. Early detection of sepsis is expected to provide a significant improvement in its treatment. Preceding works usually combined medical, statistical, mathematical and computational models to develop detection methods for early prediction, getting higher accuracies, and using the smallest number of variables. Among other techniques, we could find researches using survival analysis, specialist systems, machine learning and deep learning that reached great results. In our research, patients are modeled as points moving each hour in an n-dimensional space where n is the number of vital signs (variables). These points can reach a sepsis target point after some time. For now, the sepsis target point was calculated using the median of all patients’ variables on the sepsis onset. From these points, we calculate for each hour the position vector, the first derivative (velocity vector) and the second derivative (acceleration vector) of the variables to evaluate their behavior. And we construct a prediction model based on a Long Short-Term Memory (LSTM) Network, including these derivatives as explanatory variables. The accuracy of the prediction 6 hours before the time of sepsis, considering only the vital signs reached 83.24% and by including the vectors position, speed, and acceleration, we obtained 94.96%. The data are being collected from Medical Information Mart for Intensive Care (MIMIC) Database, a public database that contains vital signs, laboratory test results, observations, notes, and so on, from more than 60.000 patients.Keywords: dynamic analysis, long short-term memory, prediction, sepsis
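To illustrate the dynamic-analysis idea of feeding position, velocity and acceleration of the vital signs into an LSTM, the sketch below builds the derivative features with NumPy and passes them through a small PyTorch LSTM classifier. The data, network size and sepsis target calculation are assumptions for illustration; this is not the authors' model or the MIMIC data.

```python
import numpy as np
import torch
import torch.nn as nn

# vitals: hourly measurements for one patient, shape (hours, n_signs); synthetic here.
vitals = np.random.rand(48, 6)
velocity = np.gradient(vitals, axis=0)        # first derivative per hour
acceleration = np.gradient(velocity, axis=0)  # second derivative per hour
features = np.concatenate([vitals, velocity, acceleration], axis=1)

class SepsisLSTM(nn.Module):
    def __init__(self, n_features, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):
        out, _ = self.lstm(x)                         # (batch, time, hidden)
        return torch.sigmoid(self.head(out[:, -1]))   # sepsis risk at last hour

model = SepsisLSTM(features.shape[1])
x = torch.tensor(features, dtype=torch.float32).unsqueeze(0)  # batch of one patient
print(model(x))  # predicted probability (untrained, illustrative only)
```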
Procedia PDF Downloads 125
3422 Statistical Analysis Approach for the e-Glassy Mortar And Radiation Shielding Behaviors Using Anova
Authors: Abadou Yacine, Faid Hayette
Abstract:
Significant investigations have been performed on the use of recycled and reused E-glass waste powder and its impact on physical properties and mechanical strength. However, how recycled display e-waste glass may affect the characteristics and qualities of dune sand mortar has only been modelled. To contribute to this field, an investigation was carried out in which recycled E-glass waste was substituted for dune sand at constant water-cement ratios. The linear relationship between the dune sand mortar and the E-glass mortar mix percentage contributes to the model's reliability. The experimental data were subjected to regression analysis using JMP statistical software. The regression model with one predictor gives the general form of the equation for predicting the five characteristic properties of dune sand mortar from the E-waste glass substitution ratio and curing age. The results show that long-term curing produced an E-glass waste mortar specimen with the highest compressive strength, 68 MPa, in the laboratory environment. ANOVA analysis indicated that long-term curing has the greatest influence on the sorptivity level and the loss of ultrasonic pulse velocity, while the E-glass waste powder percentage has the greatest influence on the compressive strength and the improvement in the dynamic elasticity modulus. A significant enhancement for radiation-shielding applications was also observed.
Keywords: ANOVA analysis, E-glass waste, durability and sustainability, radiation-shielding
Procedia PDF Downloads 59
3421 Personalized Infectious Disease Risk Prediction System: A Knowledge Model
Authors: Retno A. Vinarti, Lucy M. Hederman
Abstract:
This research describes a knowledge model for a system which give personalized alert to users about infectious disease risks in the context of weather, location and time. The knowledge model is based on established epidemiological concepts augmented by information gleaned from infection-related data repositories. The existing disease risk prediction research has more focuses on utilizing raw historical data and yield seasonal patterns of infectious disease risk emergence. This research incorporates both data and epidemiological concepts gathered from Atlas of Human Infectious Disease (AHID) and Centre of Disease Control (CDC) as basic reasoning of infectious disease risk prediction. Using CommonKADS methodology, the disease risk prediction task is an assignment synthetic task, starting from knowledge identification through specification, refinement to implementation. First, knowledge is gathered from AHID primarily from the epidemiology and risk group chapters for each infectious disease. The result of this stage is five major elements (Person, Infectious Disease, Weather, Location and Time) and their properties. At the knowledge specification stage, the initial tree model of each element and detailed relationships are produced. This research also includes a validation step as part of knowledge refinement: on the basis that the best model is formed using the most common features, Frequency-based Selection (FBS) is applied. The portion of the Infectious Disease risk model relating to Person comes out strongest, with Location next, and Weather weaker. For Person attribute, Age is the strongest, Activity and Habits are moderate, and Blood type is weakest. At the Location attribute, General category (e.g. continents, region, country, and island) results much stronger than Specific category (i.e. terrain feature). For Weather attribute, Less Precise category (i.e. season) comes out stronger than Precise category (i.e. exact temperature or humidity interval). However, given that some infectious diseases are significantly more serious than others, a frequency based metric may not be appropriate. Future work will incorporate epidemiological measurements of disease seriousness (e.g. odds ratio, hazard ratio and fatality rate) into the validation metrics. This research is limited to modelling existing knowledge about epidemiology and chain of infection concepts. Further step, verification in knowledge refinement stage, might cause some minor changes on the shape of tree.Keywords: epidemiology, knowledge modelling, infectious disease, prediction, risk
Procedia PDF Downloads 242
3420 Cross-Cultural Adaptation and Validation of the Child Engagement in Daily Life in Greek
Authors: Rigas Dimakopoulos, Marianna Papadopoulou, Roser Pons
Abstract:
Background: Participation in family, recreational activities and self-care is an integral part of health. It is also the main outcome of rehabilitation services for children and adolescents with motor disabilities. There are currently no tools in Greek to assess participation in young children. Purpose: To culturally adapt and validate the Greek version of the Child Engagement in Daily Living (CEDL). Method: The CEDL was cross-culturally translated into Greek using forward-backward translation, review by the expert committee, pretest application and final review. Internal consistency was evaluated using the Cronbach alpha and test-retest reliability using the intra-class correlation coefficient (ICC). Parents of children aged 18 months to 5 years and with motor disabilities were recruited. Participants completed the CEDL and the children’s gross motor function was classified using the Gross Motor Function Classification System (GMFCS). Results: Eighty-three children were included, GMFCS I-V. Mean ± standard deviation of the CEDL domains “frequency of participation” “enjoyment of participation” and “self-care” were 58.4±14.0, 3.8±1.0 and 49.9±24, respectively. Internal consistency of all domains was high; Cronbach alpha for “frequency of participation” was 0.83, for “enjoyment of participation” was 0.76 and for “self-care” was 0.92. Test-retest reliability (ICC) was excellent for the “self-care” (0.95) and good for “frequency of participation” and “enjoyment of participation” domains (0.90 and 0.88, respectively). Conclusion: The Greek CEDL has good reliability. It can be used to evaluate participation in Greek young children with motor disabilities GMFCS levels I-V.Keywords: participation, child, disabilities, child engagement in daily living
Procedia PDF Downloads 175
3419 Surface Roughness Prediction Using Numerical Scheme and Adaptive Control
Authors: Michael K.O. Ayomoh, Khaled A. Abou-El-Hossein., Sameh F.M. Ghobashy
Abstract:
This paper proposes a numerical modelling scheme for surface roughness prediction. The approach is premised on the use of a 3D difference analysis method enhanced with a feedback control loop in which a set of adaptive weights is generated. The surface roughness values utilized in this paper were adapted from [1], whose experiments were carried out using S55C high carbon steel. A comparison was further carried out between the proposed technique and those utilized in [1]. The experimental design has three cutting parameters, namely depth of cut, feed rate and cutting speed, with a sample space of twenty-seven experiments. The simulation trials conducted using Matlab software fall into two sub-classes: prediction of the surface roughness readings for the non-boundary cutting combinations (NBCC) with the aid of the known surface roughness readings of the boundary cutting combinations (BCC), followed by the use of the predicted outputs from the NBCC to recover the surface roughness readings for the BCC. The simulation trial for the NBCC attained a state of total stability in the 7th iteration, i.e. a point where the actual and desired roughness readings are equal such that the error is minimized to zero, by using a set of dynamic weights generated in every following simulation trial. A comparative study among the three methods showed that the proposed difference analysis technique with adaptive weights from feedback control produced a much more accurate output than the abductive and regression analysis techniques presented in [1].
Keywords: difference analysis, surface roughness, mesh analysis, feedback control, adaptive weight, boundary element
Procedia PDF Downloads 621
3418 The Design of a Vehicle Traffic Flow Prediction Model for a Gauteng Freeway Based on an Ensemble of Multi-Layer Perceptron
Authors: Tebogo Emma Makaba, Barnabas Ndlovu Gatsheni
Abstract:
The cities of Johannesburg and Pretoria, both located in the Gauteng province, are separated by a distance of 58 km. The traffic queues on the Ben Schoeman freeway, which connects these two cities, can stretch for almost 1.5 km. Vehicle traffic congestion impacts negatively on business and on commuters' quality of life. The goal of this paper is to identify variables that influence the flow of traffic and to design a vehicle traffic prediction model which will predict the traffic flow pattern in advance, enabling motorists to make appropriate travel decisions ahead of time. The data used were collected by Mikro's Traffic Monitoring (MTM). A multi-layer perceptron (MLP) was used individually to construct the model, and the MLP was also combined with the bagging ensemble method to train on the data. The cross-validation method was used for evaluating the models. The results obtained from the techniques were compared using predictive and prediction costs, where the cost was computed using a combination of the loss matrix and the confusion matrix. The prediction models designed show that the status of the traffic flow on the freeway can be predicted using the following parameters: travel time, average speed, traffic volume and day of the month. The implication of this work is that commuters will be able to spend less time travelling on the route and more time with their families, and the logistics industry will save more than twice what it is currently spending.
Keywords: bagging ensemble methods, confusion matrix, multi-layer perceptron, vehicle traffic flow
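A minimal sketch of combining an MLP with bagging, as the abstract describes, using scikit-learn on synthetic stand-ins for the traffic features; the feature values, network size and number of bagged estimators are assumptions, not the MTM data or the authors' configuration.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

# Placeholder features standing in for travel time, average speed,
# traffic volume and day of the month; the MTM data are not reproduced here.
X, y = make_classification(n_samples=1000, n_features=4, n_informative=4,
                           n_redundant=0, random_state=0)

mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
# Bagging trains several MLPs on bootstrap resamples and votes on the class
# (use base_estimator= instead of estimator= on scikit-learn versions before 1.2).
bagged = BaggingClassifier(estimator=mlp, n_estimators=10, random_state=0)

for name, clf in [("single MLP", mlp), ("bagged MLPs", bagged)]:
    print(name, cross_val_score(clf, X, y, cv=5).mean().round(3))
```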
Procedia PDF Downloads 344
3417 Springback Prediction for Sheet Metal Cold Stamping Using Convolutional Neural Networks
Abstract:
Cold stamping has been widely applied in the automotive industry for the mass production of a great range of automotive panels. Predicting the springback to ensure the dimensional accuracy of the cold-stamped components is a critical step. The main approaches for the prediction and compensation of springback in cold stamping are running Finite Element (FE) simulations and conducting experiments, which require forming-process expertise and can be time-consuming and expensive for the design of cold stamping tools. Machine learning technologies have been proven and successfully applied in learning complex system behaviours using representative samples, and they exhibit promising potential as supporting design tools for metal forming technologies. This study, for the first time, presents a novel application of a Convolutional Neural Network (CNN) based surrogate model to predict the springback fields for variable U-shape cold bending geometries. A dataset is created based on the U-shape cold bending geometries and the corresponding FE simulation results. The dataset is then applied to train the CNN surrogate model. The result shows that the surrogate model can achieve near-indistinguishable full-field predictions in real time when compared with the FE simulation results. The application of CNNs to efficient springback prediction can be adopted in industrial settings to aid both conceptual and final component designs for designers without manufacturing knowledge.
Keywords: springback, cold stamping, convolutional neural networks, machine learning
Procedia PDF Downloads 149
3416 Evaluation of Transfer Capability Considering Uncertainties of System Operating Condition and System Cascading Collapse
Authors: Nur Ashida Salim, Muhammad Murtadha Othman, Ismail Musirin, Mohd Salleh Serwan
Abstract:
Over the past few decades, the power system industry in many developing and developed countries has gone through a restructuring process of the industry where they are moving towards a deregulated power industry. This situation will lead to competition among the generation and distribution companies to achieve a certain objective which is to provide quality and efficient production of electric energy, which will reduce the price of electricity. Therefore it is important to obtain an accurate value of the Available Transfer Capability (ATC) and Transmission Reliability Margin (TRM) in order to ensure the effective power transfer between areas during the occurrence of uncertainties in the system. In this paper, the TRM and ATC is determined by taking into consideration the uncertainties of the system operating condition and system cascading collapse by applying the bootstrap technique. A case study of the IEEE RTS-79 is employed to verify the robustness of the technique proposed in the determination of TRM and ATC.Keywords: available transfer capability, bootstrap technique, cascading collapse, transmission reliability margin
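A hedged sketch of the bootstrap idea applied to transfer-capability samples: resample simulated capability values, take a lower percentile as the dependable limit, and treat the gap to the nominal value as the TRM. The sample values, the 5th-percentile choice and the simplified ATC relation (no existing-commitment term) are assumptions for illustration, not the paper's procedure or the IEEE RTS-79 results.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical transfer-capability samples (MW) obtained from repeated
# simulations under uncertain operating conditions and cascading outages.
ttc_samples = rng.normal(loc=950.0, scale=60.0, size=200)

# Bootstrap the 5th percentile of the total transfer capability.
boot = np.array([
    np.percentile(rng.choice(ttc_samples, ttc_samples.size, replace=True), 5)
    for _ in range(2000)
])
ttc_nominal = ttc_samples.mean()
trm = ttc_nominal - boot.mean()   # margin reserved for uncertainty
atc = ttc_nominal - trm           # available capability (no existing-commitment term here)
print(f"TTC ≈ {ttc_nominal:.0f} MW, TRM ≈ {trm:.0f} MW, ATC ≈ {atc:.0f} MW")
```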
Procedia PDF Downloads 408
3415 Design and Burnback Analysis of Three Dimensional Modified Star Grain
Authors: Almostafa Abdelaziz, Liang Guozhu, Anwer Elsayed
Abstract:
The determination of grain geometry is a critical step in the design of a solid propellant rocket motor. In this study, the design process involved parametric geometry modelling in CAD, MATLAB coding for performance prediction, and a 2D star grain ignition experiment. The 2D star grain burnback is achieved by creating a new surface at each web increment and calculating the geometrical properties at each step. The 2D star grain is further modified to burn as a tapered 3D star grain. A zero-dimensional method is used to calculate the internal ballistic performance. Experimental and theoretical results were compared in order to validate the performance prediction of the solid rocket motor. The results show that the use of the 3D grain geometry decreases the pressure inside the combustion chamber and enhances the volumetric loading ratio.
Keywords: burnback analysis, rocket motor, star grain, three dimensional grains
Procedia PDF Downloads 243
3414 Prediction of Flow Around a NACA 0015 Profile
Authors: Boukhadia Karima
Abstract:
Fluid mechanics is the study of the laws of fluid motion and their interaction with solid bodies. This project illustrates this interaction through in-depth studies validated by experiments in the TE44 wind tunnel, ensuring the efficiency, accuracy and reliability of these tests on a NACA 0015 profile. A symmetric NACA 0015 was placed in a subsonic wind tunnel, and measurements were made of the pressure on the upper and lower surfaces of the wing and of the velocity across the vortex trailing downstream from the tip of the wing. The aim of this work is to investigate experimentally the pressure distribution over this profile in a free airflow and the aerodynamic forces acting on it. The addition of a rounded lateral edge to the wing tip was found to eliminate the secondary vortex near the wing tip, but had little effect on the downstream characteristics of the trailing vortex. The increase in wing lift near the tip because of the presence of the trailing vortex was evident in the surface pressure, but was not captured by circulation-box measurements. The circumferential velocity within the vortex was found to reach free-stream values and produce core rotational speeds. Near the wing, the trailing vortex is asymmetric and contains definite zones where the streamwise velocity both exceeds and falls behind the free-stream value. When referenced to the free-stream velocity, the maximum vertical velocity of the vortex is directly dependent on α and is independent of Re. A numerical study was conducted with the CFD code FLUENT 6.0, and the results are compared with the experiments.
Keywords: CFD code, NACA profile, detachment, angle of incidence, wind tunnel
Procedia PDF Downloads 411
3413 Automated Driving Deep Neural Networks Model Accuracy and Performance Assessment in a Simulated Environment
Authors: David Tena-Gago, Jose M. Alcaraz Calero, Qi Wang
Abstract:
The evolution and integration of automated vehicles have become more and more tangible in recent years. State-of-the-art technological advances in the field of camera-based Artificial Intelligence (AI) and computer vision greatly favor the performance and reliability of the Advanced Driver Assistance System (ADAS), leading to a greater knowledge of vehicular operation and resembling human behavior. However, the exclusive use of this technology still seems insufficient to control vehicular operation at 100%. To reveal the degree of accuracy of the current camera-based automated driving AI modules, this paper studies the structure and behavior of one of the main solutions in a controlled testing environment. The results obtained clearly outline the lack of reliability when using exclusively the AI model in the perception stage, thereby entailing using additional complementary sensors to improve its safety and performance.Keywords: accuracy assessment, AI-driven mobility, artificial intelligence, automated vehicles
Procedia PDF Downloads 113
3412 Influence of Travel Time Reliability on Elderly Drivers Crash Severity
Authors: Ren Moses, Emmanuel Kidando, Eren Ozguven, Yassir Abdelrazig
Abstract:
Although older drivers (defined as those of age 65 and above) are less involved with speeding, alcohol use as well as night driving, they are more vulnerable to severe crashes. The major contributing factors for severe crashes include frailty and medical complications. Several studies have evaluated the contributing factors on severity of crashes. However, few studies have established the impact of travel time reliability (TTR) on road safety. In particular, the impact of TTR on senior adults who face several challenges including hearing difficulties, decreasing of the processing skills and cognitive problems in driving is not well established. Therefore, this study focuses on determining possible impacts of TTR on the traffic safety with focus on elderly drivers. Historical travel speed data from freeway links in the study area were used to calculate travel time and the associated TTR metrics that is, planning time index, the buffer index, the standard deviation of the travel time and the probability of congestion. Four-year information on crashes occurring on these freeway links was acquired. The binary logit model estimated using the Markov Chain Monte Carlo (MCMC) sampling technique was used to evaluate variables that could be influencing elderly crash severity. Preliminary results of the analysis suggest that TTR is statistically significant in affecting the severity of a crash involving an elderly driver. The result suggests that one unit increase in the probability of congestion reduces the likelihood of the elderly severe crash by nearly 22%. These findings will enhance the understanding of TTR and its impact on the elderly crash severity.Keywords: highway safety, travel time reliability, elderly drivers, traffic modeling
Procedia PDF Downloads 493
3411 Reliability Analysis for the Functioning of Complete and Low Capacity MLDB Systems in Piston Plants
Authors: Ramanpreet Kaur, Upasana Sharma
Abstract:
The purpose of this paper is to address the challenges facing the water supply for the Machine Learning Database (MLDB) system at the piston foundry plant. In the MLDB system, one main unit, i.e., the robotic unit, is connected to two sub-units. The functioning of the system depends on the robotic unit and the water supply; lack of water supply causes system failure. The system operates at full capacity with the help of the two sub-units, and if one sub-unit fails, the system runs at low capacity. Reliability modelling is performed using semi-Markov processes and the regenerative point technique. Several system measures, such as the mean time to system failure, availability at full capacity, availability at reduced capacity, busy period for repair and expected number of visits, have been obtained, and the benefits have been analysed. A graphical study is designed for a specific case using programming in C++ and MS Excel.
Keywords: MLDB system, robotic, semi-Markov process, regenerative point technique
Procedia PDF Downloads 103
3410 Time-Dependent Reliability Analysis of Corrosion Affected Cast Iron Pipes with Mixed Mode Fracture
Authors: Chun-Qing Li, Guoyang Fu, Wei Yang
Abstract:
A significant portion of current water networks is made of cast iron pipes. Due to aging and deterioration with corrosion being the most predominant mechanism, the failure rate of cast iron pipes is very high. Although considerable research has been carried out in the past few decades, most are on the effect of corrosion on the structural capacity of pipes using strength theory as the failure criterion. This paper presents a reliability-based methodology for the assessment of corrosion affected cast iron pipe cracking failures. A nonlinear limit state function taking into account all three fracture modes is proposed for brittle metal pipes with mixed mode fracture. A stochastic model of the load effect is developed, and time-dependent reliability method is employed to quantify the probability of failure and predict the remaining service life. A case study is carried out using the proposed methodology, followed by sensitivity analysis to investigate the effects of the random variables on the probability of failure. It has been found that the larger the inclination angle or the Mode I fracture toughness is, the smaller the probability of pipe failure is. It has also been found that the multiplying and exponential coefficients k and n in the power law corrosion model and the internal pressure have the most influence on the probability of failure for cast iron pipes. The methodology presented in this paper can assist pipe engineers and asset managers in developing a risk-informed and cost-effective strategy for better management of corrosion-affected pipelines.Keywords: corrosion, inclined surface cracks, pressurized cast iron pipes, stress intensity
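To show the general shape of a time-dependent reliability calculation like the one described above, the sketch below runs a Monte Carlo estimate of the probability of failure with a power-law corrosion model d(t) = k·tⁿ and a Mode-I-only limit state. The paper's limit state covers mixed-mode fracture, and every distribution and coefficient below is a hypothetical placeholder, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000                                    # Monte Carlo samples

# Power-law corrosion model d(t) = k * t**n (coefficients treated as random).
k = rng.lognormal(mean=np.log(0.3), sigma=0.2, size=N)   # mm / year**n
n = rng.normal(0.6, 0.05, size=N)
t0 = 12.0                                      # wall thickness, mm
K_Ic = rng.normal(10.0, 1.0, size=N)           # fracture toughness, MPa*sqrt(m)
sigma = rng.normal(30.0, 5.0, size=N)          # stress from internal pressure, MPa

for t in (10, 30, 50):                         # years in service
    a = np.minimum(k * t**n, t0) / 1000.0      # corrosion pit depth in metres
    K_I = 1.12 * sigma * np.sqrt(np.pi * a)    # simplified Mode-I stress intensity factor
    pf = np.mean(K_I > K_Ic)                   # limit state g = K_Ic - K_I < 0
    print(f"t = {t:2d} years: P(failure) ≈ {pf:.4f}")
```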
Procedia PDF Downloads 321
3409 Effects of Global Validity of Predictive Cues upon L2 Discourse Comprehension: Evidence from Self-paced Reading
Authors: Binger Lu
Abstract:
It remains unclear whether second language (L2) speakers can use discourse context cues to predict upcoming information as native speakers do during online comprehension. Some researchers propose that L2 learners may have a reduced ability to generate predictions during discourse processing. At the same time, there is evidence that discourse-level cues are weighed more heavily in L2 processing than in L1. Previous studies showed that L1 prediction is sensitive to the global validity of predictive cues. The current study aims to explore whether, and to what extent, L2 learners can dynamically and strategically adjust their prediction in accordance with the global validity of predictive cues in L2 discourse comprehension, as native speakers do. In a self-paced reading experiment, Chinese native speakers (N=128), C-E bilinguals (N=128), and English native speakers (N=128) read high-predictable (e.g., Jimmy felt thirsty after running. He wanted to get some water from the refrigerator.) and low-predictable (e.g., Jimmy felt sick this morning. He wanted to get some water from the refrigerator.) discourses in two-sentence frames. The global validity of predictive cues was manipulated by varying the ratio of predictable (e.g., Bill stood at the door. He opened it with the key.) and unpredictable fillers (e.g., Bill stood at the door. He opened it with the card.), such that across conditions the predictability of the final word of the fillers ranged from 100% to 0%. The dependent variable was the reading time on the critical region (the target word and the following word), analyzed with linear mixed-effects models in R. C-E bilinguals showed reliable prediction across all validity conditions (β = -35.6 ms, SE = 7.74, t = -4.601, p < .001), and Chinese native speakers showed a significant effect (β = -93.5 ms, SE = 7.82, t = -11.956, p < .001) in two of the four validity conditions (namely, the High-validity and MedLow conditions, where fillers ended with predictable words in 100% and 25% of cases, respectively), whereas English native speakers did not predict at all (β = -2.78 ms, SE = 7.60, t = -.365, p = .715). There was neither a main effect (χ²(3) = .256, p = .968) nor an interaction (Predictability:Background:Validity, χ²(3) = 1.229, p = .746; Predictability:Validity, χ²(3) = 2.520, p = .472; Background:Validity, χ²(3) = 1.281, p = .734) of Validity with speaker group. The results suggest that prediction occurs in L2 discourse processing but to a much lesser extent in L1, with a significant effect in some conditions for L1 Chinese and a null effect in L1 English processing, consistent with the view that L2 speakers are more sensitive to discourse cues compared with L1 speakers. Additionally, the pattern of L1 and L2 predictive processing was not affected by the global validity of predictive cues. C-E bilinguals' predictive processing could be partly transferred from their L1, as prior research showed that discourse information plays a more significant role in L1 Chinese processing.
Keywords: bilingualism, discourse processing, global validity, prediction, self-paced reading
Procedia PDF Downloads 138
3408 Predicting National Football League (NFL) Match with Score-Based System
Authors: Marcho Setiawan Handok, Samuel S. Lemma, Abdoulaye Fofana, Naseef Mansoor
Abstract:
This paper proposes a method to predict the outcome of a National Football League match with data from 2019 to 2022 and compares it with other popular models. The model uses open-source statistical data for each team, such as passing yards, rushing yards, fumbles lost, and scoring, with each statistic having an offensive and a defensive side. For instance, a data set of anticipated values for a specific matchup is created by comparing the offensive passing yards obtained by one team to the defensive passing yards given up by the opposition. We evaluated the model's performance by contrasting its results with those of established prediction algorithms. This research uses a neural network to predict the score of a National Football League match and then the winner of the game.
Keywords: game prediction, NFL, football, artificial neural network
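A minimal sketch of building matchup-level anticipated values from offensive and defensive averages, as the abstract describes. Blending the two averages by a simple mean is an assumption made here for illustration; the paper's exact combination rule and the team statistics below are not taken from the study.

```python
# Hypothetical per-game averages for two teams (illustrative numbers only).
team_a_offense = {"pass_yds": 260.0, "rush_yds": 115.0, "points": 24.5}
team_b_defense = {"pass_yds": 230.0, "rush_yds": 100.0, "points": 21.0}

def expected_stat(offense_avg, defense_allowed_avg):
    """Blend what the offense usually gains with what the defense usually allows."""
    return (offense_avg + defense_allowed_avg) / 2.0

matchup = {stat: expected_stat(team_a_offense[stat], team_b_defense[stat])
           for stat in team_a_offense}
print(matchup)   # anticipated passing/rushing yards and points for team A
```

Feature vectors of this kind, built for every matchup, can then be fed to the neural network that predicts the score and, from it, the winner.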
Procedia PDF Downloads 84