Search results for: cheating prediction
1764 Artificial Neural Network Based Approach in Prediction of Potential Water Pollution Across Different Land-Use Patterns
Authors: M.Rüştü Karaman, İsmail İşeri, Kadir Saltalı, A.Reşit Brohi, Ayhan Horuz, Mümin Dizman
Abstract:
Considerable attention has recently been given to the environmental hazards caused by agricultural chemicals such as excess fertilizers. In this study, a neural network approach was investigated for the prediction of potential nitrate pollution across different land-use patterns, using a properly trained feedforward multilayered artificial neural network (ANN) model. Periodic concentrations of some anions, especially nitrate (NO3-), and cations were also measured in drainage waters collected from drain pipes placed in an irrigated tomato field, an unirrigated wheat field, and fallow and pasture lands. Soil samples were collected from the irrigated tomato field and the unirrigated wheat field on a grid system with 20 m x 20 m intervals. Site-specific nitrate concentrations in the soil samples were measured for ANN-based simulation of the nitrate leaching potential of the land profiles. In the application of the ANN model, a multilayered feedforward network was evaluated, and training, validation, and testing data sets containing the measured soil nitrate values were constructed based on spatial variability. Based on the testing values, the optimal structure of 2-15-1 was obtained (R2 = 0.96, P < 0.01) for the unirrigated field, while the optimal structure of 2-10-1 was obtained (R2 = 0.96, P < 0.01) for the irrigated field. The results showed that the ANN model can be successfully used to predict potential nitrate leaching levels across different land-use patterns. However, for the most suitable results, the model should be calibrated by training different NN structures depending on site-specific soil parameters and varied agricultural management practices. Keywords: artificial intelligence, ANN, drainage water, nitrate pollution
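For readers who want the flavour of such a model, a minimal sketch of a "2-15-1" feedforward ANN in Python (scikit-learn) is shown below; the grid coordinates and nitrate values are synthetic stand-ins, not the study's measurements.

```python
# Minimal sketch (not the authors' code): a 2-15-1 style feedforward ANN regressor
# relating two grid coordinates to a site-specific soil nitrate concentration.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
X = rng.uniform(0, 200, size=(120, 2))                            # hypothetical 20 m grid coordinates (m)
y = 30 + 0.05 * X[:, 0] - 0.03 * X[:, 1] + rng.normal(0, 2, 120)  # synthetic soil nitrate (mg/kg)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# "2-15-1" structure: 2 inputs, one hidden layer with 15 neurons, 1 output
ann = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(15,), activation="tanh",
                                 max_iter=5000, random_state=0))
ann.fit(X_train, y_train)
print("test R2:", round(r2_score(y_test, ann.predict(X_test)), 3))
```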
Procedia PDF Downloads 308
1763 Legal Study on the Construction of Olympic and Paralympic Soft Law about Manipulation of Sports Competition
Authors: Clemence Collon, Didier Poracchia
Abstract:
The manipulation of sports competitions is a new type of sports integrity problem. While doping has become an organized, institutionalized struggle, the manipulation of sports competitions is gradually building up. This study aims to describe and understand how the soft Olympic and Paralympic law was gradually built. It also summarizes the legal tools for prevention, detection, and sanction developed by the international Olympic movement. Then, it analyzes the impact of this soft law on the law of the States, in particular in French law. This study is mainly based on an analysis of existing legal literature and non-binding law in the International Olympic and Paralympic movement and on the French National Olympic Committee. Interviews were carried out with experts from the Olympic movement or experts working on combating the manipulation of sports competitions; the answers are also used in this article. The International Olympic Committee has created a supranational legal base to fight against the manipulation of sports competitions. This legal basis must be respected by sports organizations. The Olympic Charter, the Olympic Code of Ethics, the Olympic Movement Code on the prevention of the manipulation of sports competitions, the rules of standards, the basic universal principles, the manuals, the declarations have been published in this perspective. This sports soft law has influences or repercussions in each state. Many states take this new form of integrity problem into account by creating state laws or measures in favor of the fight against sports manipulations. France has so far only a legal basis for manipulation related to betting on sports competitions through the infraction of sports corruption included in the penal code and also created a national platform with various actors to combat this cheating. This legal study highlights the progressive construction of the sports law rules of the Olympic movement in the fight against the manipulation of sports competitions linked to sports betting and their impact on the law of the states.Keywords: integrity, law and ethics, manipulation of sports competitions, olympic, sports law
Procedia PDF Downloads 153
1762 Statistical Comparison of Ensemble Based Storm Surge Forecasting Models
Authors: Amin Salighehdar, Ziwen Ye, Mingzhe Liu, Ionut Florescu, Alan F. Blumberg
Abstract:
Storm surge is an abnormal water level caused by a storm. Accurate prediction of a storm surge is a challenging problem. Researchers developed various ensemble modeling techniques to combine several individual forecasts to produce an overall presumably better forecast. There exist some simple ensemble modeling techniques in literature. For instance, Model Output Statistics (MOS), and running mean-bias removal are widely used techniques in storm surge prediction domain. However, these methods have some drawbacks. For instance, MOS is based on multiple linear regression and it needs a long period of training data. To overcome the shortcomings of these simple methods, researchers propose some advanced methods. For instance, ENSURF (Ensemble SURge Forecast) is a multi-model application for sea level forecast. This application creates a better forecast of sea level using a combination of several instances of the Bayesian Model Averaging (BMA). An ensemble dressing method is based on identifying best member forecast and using it for prediction. Our contribution in this paper can be summarized as follows. First, we investigate whether the ensemble models perform better than any single forecast. Therefore, we need to identify the single best forecast. We present a methodology based on a simple Bayesian selection method to select the best single forecast. Second, we present several new and simple ways to construct ensemble models. We use correlation and standard deviation as weights in combining different forecast models. Third, we use these ensembles and compare with several existing models in literature to forecast storm surge level. We then investigate whether developing a complex ensemble model is indeed needed. To achieve this goal, we use a simple average (one of the simplest and widely used ensemble model) as benchmark. Predicting the peak level of Surge during a storm as well as the precise time at which this peak level takes place is crucial, thus we develop a statistical platform to compare the performance of various ensemble methods. This statistical analysis is based on root mean square error of the ensemble forecast during the testing period and on the magnitude and timing of the forecasted peak surge compared to the actual time and peak. In this work, we analyze four hurricanes: hurricanes Irene and Lee in 2011, hurricane Sandy in 2012, and hurricane Joaquin in 2015. Since hurricane Irene developed at the end of August 2011 and hurricane Lee started just after Irene at the beginning of September 2011, in this study we consider them as a single contiguous hurricane event. The data set used for this study is generated by the New York Harbor Observing and Prediction System (NYHOPS). We find that even the simplest possible way of creating an ensemble produces results superior to any single forecast. We also show that the ensemble models we propose generally have better performance compared to the simple average ensemble technique.Keywords: Bayesian learning, ensemble model, statistical analysis, storm surge prediction
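A minimal sketch of the weighting idea (correlation-based member weights against the simple-average benchmark, scored by RMSE and peak timing) is given below; the surge series is synthetic and does not come from NYHOPS.

```python
# Minimal sketch (assumptions, synthetic data): combining several surge forecasts
# with correlation-based weights and comparing against a simple-average ensemble.
import numpy as np

def rmse(pred, obs):
    return np.sqrt(np.mean((pred - obs) ** 2))

rng = np.random.default_rng(0)
obs = 0.3 + np.sin(np.linspace(0, 6, 200))                            # synthetic observed surge (m)
members = np.stack([obs + rng.normal(0, s, obs.size) for s in (0.05, 0.15, 0.30)])

# weight each member by its correlation with the observations
corr = np.array([np.corrcoef(m, obs)[0, 1] for m in members])
w = corr / corr.sum()

ens_weighted = w @ members            # correlation-weighted ensemble
ens_simple = members.mean(axis=0)     # simple-average benchmark

print("weighted RMSE:", rmse(ens_weighted, obs))
print("simple   RMSE:", rmse(ens_simple, obs))
print("peak timing error (steps):", int(np.argmax(ens_weighted) - np.argmax(obs)))
```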
Procedia PDF Downloads 307
1761 The Ability of Forecasting the Term Structure of Interest Rates Based on Nelson-Siegel and Svensson Model
Authors: Tea Poklepović, Zdravka Aljinović, Branka Marasović
Abstract:
Given the importance of the yield curve and its estimation, valid methods for yield curve forecasting are indispensable in cases where securities issues are scarce and/or trading on the secondary market is weak. Therefore, in this paper, after estimating weekly yield curves on the Croatian financial market from October 2011 to August 2012 using the Nelson-Siegel and Svensson models, the yield curves are forecasted using a vector autoregressive model and neural networks. In general, it can be concluded that both forecasting methods have good prediction ability, and that forecasting yield curves based on the Nelson-Siegel estimation model gives better results, in the sense of a lower mean squared error, than forecasting based on the Svensson model. In this case, neural networks also provide slightly better results. Finally, it can be concluded that the most appropriate way to predict the yield curve is with neural networks applied to Nelson-Siegel estimates of the yield curves. Keywords: Nelson-Siegel model, neural networks, Svensson model, vector autoregressive model, yield curve
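As an illustration of the first stage, the Nelson-Siegel curve can be fitted to observed yields by nonlinear least squares; the maturities and yields below are purely illustrative, and the four-parameter form used is the standard one (an assumption, not the paper's exact setup).

```python
# Minimal sketch: fitting the Nelson-Siegel yield curve with scipy; the fitted betas
# would then be stacked week by week and forecasted with a VAR or a neural network.
import numpy as np
from scipy.optimize import curve_fit

def nelson_siegel(tau, b0, b1, b2, lam):
    x = tau / lam
    loading = (1 - np.exp(-x)) / x
    return b0 + b1 * loading + b2 * (loading - np.exp(-x))

maturities = np.array([0.25, 0.5, 1, 2, 3, 5, 7, 10])         # years (illustrative)
yields = np.array([3.2, 3.4, 3.7, 4.1, 4.4, 4.8, 5.0, 5.1])   # percent (illustrative)

params, _ = curve_fit(nelson_siegel, maturities, yields,
                      p0=[5.0, -2.0, 1.0, 1.5], maxfev=10000)
print("beta0, beta1, beta2, lambda:", np.round(params, 4))
print("fitted 4-year yield:", round(nelson_siegel(4.0, *params), 3))
```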
Procedia PDF Downloads 329
1760 Photo-Fenton Decolorization of Methylene Blue Adsolubilized on Co2+ -Embedded Alumina Surface: Comparison of Process Modeling through Response Surface Methodology and Artificial Neural Network
Authors: Prateeksha Mahamallik, Anjali Pal
Abstract:
In the present study, Co(II)-adsolubilized surfactant modified alumina (SMA) was prepared, and methylene blue (MB) degradation was carried out on Co-SMA surface by visible light photo-Fenton process. The entire reaction proceeded on solid surface as MB was embedded on Co-SMA surface. The reaction followed zero order kinetics. Response surface methodology (RSM) and artificial neural network (ANN) were used for modeling the decolorization of MB by photo-Fenton process as a function of dose of Co-SMA (10, 20 and 30 g/L), initial concentration of MB (10, 20 and 30 mg/L), concentration of H2O2 (174.4, 348.8 and 523.2 mM) and reaction time (30, 45 and 60 min). The prediction capabilities of both the methodologies (RSM and ANN) were compared on the basis of correlation coefficient (R2), root mean square error (RMSE), standard error of prediction (SEP), relative percent deviation (RPD). Due to lower value of RMSE (1.27), SEP (2.06) and RPD (1.17) and higher value of R2 (0.9966), ANN was proved to be more accurate than RSM in order to predict decolorization efficiency.Keywords: adsolubilization, artificial neural network, methylene blue, photo-fenton process, response surface methodology
Procedia PDF Downloads 252
1759 Air Dispersion Modeling for Prediction of Accidental Emission in the Atmosphere along Northern Coast of Egypt
Authors: Moustafa Osman
Abstract:
Modeling of air pollutants from accidental releases is performed to quantify the impact of industrial facilities on the ambient air. Mathematical methods are required to predict the accidental scenario in terms of the probability of failure-safe mode and to analyze the consequences in order to quantify the environmental damage to human health. The initial statement of the mitigation plan supports implementation during production and maintenance periods. In a number of mathematical methods, the flow rate at which gaseous and liquid pollutants might be accidentally released is determined for various source types, i.e., point, line, and area sources. These emissions are combined with meteorological conditions, expressed as simplified stability parameters, to compare dispersion coefficients of non-continuous air pollution plumes. The differences are reflected in concentration levels and in the greenhouse effect associated with transport of the parcel load in both urban and rural areas. This research reveals that the elevation effect near buildings and other structures is up to five times higher than over open terrain. These results agree with Sutton's suggestion for dispersion coefficients in different stability classes. Keywords: air pollutants, dispersion modeling, GIS, health effect, urban planning
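The abstract does not give its equations; a common screening basis for such point-source calculations is the standard Gaussian plume relation, sketched below with illustrative numbers (an assumed formulation, not necessarily the authors' model).

```python
# Minimal sketch of the standard Gaussian plume relation for an elevated point source,
# with ground reflection; all numbers are illustrative.
import numpy as np

def gaussian_plume(Q, u, y, z, H, sigma_y, sigma_z):
    """Q: emission rate (g/s), u: wind speed (m/s), H: effective release height (m),
    sigma_y, sigma_z: dispersion coefficients (m) for the stability class at the receptor distance."""
    lateral = np.exp(-y**2 / (2 * sigma_y**2))
    vertical = (np.exp(-(z - H)**2 / (2 * sigma_z**2))
                + np.exp(-(z + H)**2 / (2 * sigma_z**2)))   # ground-reflection term
    return Q / (2 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

# illustrative: 100 g/s release, 3 m/s wind, centreline receptor at breathing height,
# sigma values typical of a neutral stability class roughly 500 m downwind
print(gaussian_plume(Q=100, u=3.0, y=0.0, z=1.5, H=20.0, sigma_y=36.0, sigma_z=18.5), "g/m3")
```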
Procedia PDF Downloads 373
1758 Multi-Faceted Growth in Creative Industries
Authors: Sanja Pfeifer, Nataša Šarlija, Marina Jeger, Ana Bilandžić
Abstract:
The purpose of this study is to explore the different facets of growth among micro, small and medium-sized firms in Croatia and to analyze the differences between models designed for all micro, small and medium-sized firms and those in creative industries. Three growth prediction models were designed and tested using the growth of sales, employment and assets of the company as dependent variables. The key drivers of sales growth are: prudent use of cash, industry affiliation and higher share of intangible assets. Growth of assets depends on retained profits, internal and external sources of financing, as well as industry affiliation. Growth in employment is closely related to sources of financing, in particular, debt and it occurs less frequently than growth in sales and assets. The findings confirm the assumption that growth strategies of small and medium-sized enterprises (SMEs) in creative industries have specific differences in comparison to SMEs in general. Interestingly, only 2.2% of growing enterprises achieve growth in employment, assets and sales simultaneously.Keywords: creative industries, growth prediction model, growth determinants, growth measures
Procedia PDF Downloads 330
1757 Graph Clustering Unveiled: ClusterSyn - A Machine Learning Framework for Predicting Anti-Cancer Drug Synergy Scores
Authors: Babak Bahri, Fatemeh Yassaee Meybodi, Changiz Eslahchi
Abstract:
In the pursuit of effective cancer therapies, the exploration of combinatorial drug regimens is crucial to leverage synergistic interactions between drugs, thereby improving treatment efficacy and overcoming drug resistance. However, identifying synergistic drug pairs poses challenges due to the vast combinatorial space and limitations of experimental approaches. This study introduces ClusterSyn, a machine learning (ML)-powered framework for classifying anti-cancer drug synergy scores. ClusterSyn employs a two-step approach involving drug clustering and synergy score prediction using a fully connected deep neural network. For each cell line in the training dataset, a drug graph is constructed, with nodes representing drugs and edge weights denoting synergy scores between drug pairs. Drugs are clustered using the Markov clustering (MCL) algorithm, and vectors representing the similarity of drug pairs to each cluster are input into the deep neural network for synergy score prediction (synergy or antagonism). Clustering results demonstrate effective grouping of drugs based on synergy scores, aligning similar synergy profiles. Subsequently, neural network predictions and synergy scores of the two drugs on others within their clusters are used to predict the synergy score of the considered drug pair. This approach facilitates comparative analysis with clustering and regression-based methods, revealing the superior performance of ClusterSyn over state-of-the-art methods like DeepSynergy and DeepDDS on diverse datasets such as Oniel and Almanac. The results highlight the remarkable potential of ClusterSyn as a versatile tool for predicting anti-cancer drug synergy scores.Keywords: drug synergy, clustering, prediction, machine learning., deep learning
Procedia PDF Downloads 75
1756 Comparative Analysis of Predictive Models for Customer Churn Prediction in the Telecommunication Industry
Authors: Deepika Christopher, Garima Anand
Abstract:
To determine the best model for churn prediction in the telecom industry, this paper compares 11 machine learning algorithms, namely Logistic Regression, Support Vector Machine, Random Forest, Decision Tree, XGBoost, LightGBM, CatBoost, AdaBoost, Extra Trees, Deep Neural Network, and a Hybrid Model (MLPClassifier). It also aims to pinpoint the top three factors that lead to customer churn and conducts customer segmentation to identify vulnerable groups. According to the data, the Logistic Regression model performs the best, with an F1 score of 0.6215, 81.76% accuracy, 68.95% precision, and 56.57% recall. The top three attributes that cause churn are found to be tenure, Internet Service Fiber optic, and Internet Service DSL; meanwhile, the top three models in this article that perform the best are Logistic Regression, Deep Neural Network, and AdaBoost. The k-means algorithm is applied to establish and analyze four different customer clusters. This study has effectively identified customers that are at risk of churn and may be utilized to develop and execute strategies that lower customer attrition. Keywords: attrition, retention, predictive modeling, customer segmentation, telecommunications
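A minimal sketch of this kind of comparison, using synthetic churn-style data rather than the study's telecom records, is shown below (logistic regression plus k-means segmentation; any of the other listed classifiers could be substituted).

```python
# Minimal sketch (synthetic data): fit one of the compared classifiers, report
# accuracy/precision/recall/F1, then segment customers with k-means.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

X, y = make_classification(n_samples=2000, n_features=10, weights=[0.73], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
pred = clf.predict(X_te)
print("accuracy :", round(accuracy_score(y_te, pred), 4))
print("precision:", round(precision_score(y_te, pred), 4))
print("recall   :", round(recall_score(y_te, pred), 4))
print("F1       :", round(f1_score(y_te, pred), 4))

segments = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)  # customer segmentation
print("customers per segment:", np.bincount(segments))
```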
Procedia PDF Downloads 56
1755 Implementation of Correlation-Based Data Analysis as a Preliminary Stage for the Prediction of Geometric Dimensions Using Machine Learning in the Forming of Car Seat Rails
Authors: Housein Deli, Loui Al-Shrouf, Hammoud Al Joumaa, Mohieddine Jelali
Abstract:
When forming metallic materials, fluctuations in material properties, process conditions, and wear lead to deviations in the component geometry. Several hundred features sometimes need to be measured, especially in the case of functional and safety-relevant components. These can only be measured offline due to the large number of features and the accuracy requirements. The risk of producing components outside the tolerances is minimized but not eliminated by the statistical evaluation of process capability and control measurements. The inspection intervals are based on the acceptable risk and are at the expense of productivity but remain reactive and, in some cases, considerably delayed. Due to the considerable progress made in the field of condition monitoring and measurement technology, permanently installed sensor systems in combination with machine learning and artificial intelligence, in particular, offer the potential to independently derive forecasts for component geometry and thus eliminate the risk of defective products - actively and preventively. The reliability of forecasts depends on the quality, completeness, and timeliness of the data. Measuring all geometric characteristics is neither sensible nor technically possible. This paper, therefore, uses the example of car seat rail production to discuss the necessary first step of feature selection and reduction by correlation analysis, as otherwise, it would not be possible to forecast components in real-time and inline. Four different car seat rails with an average of 130 features were selected and measured using a coordinate measuring machine (CMM). The run of such measuring programs alone takes up to 20 minutes. In practice, this results in the risk of faulty production of at least 2000 components that have to be sorted or scrapped if the measurement results are negative. Over a period of 2 months, all measurement data (> 200 measurements/ variant) was collected and evaluated using correlation analysis. As part of this study, the number of characteristics to be measured for all 6 car seat rail variants was reduced by over 80%. Specifically, direct correlations for almost 100 characteristics were proven for an average of 125 characteristics for 4 different products. A further 10 features correlate via indirect relationships so that the number of features required for a prediction could be reduced to less than 20. A correlation factor >0.8 was assumed for all correlations.Keywords: long-term SHM, condition monitoring, machine learning, correlation analysis, component prediction, wear prediction, regressions analysis
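The correlation-based pruning step can be sketched as follows; the seat-rail features are hypothetical stand-ins, but the 0.8 correlation threshold matches the one stated above.

```python
# Minimal sketch (synthetic data): drop geometric features whose pairwise correlation
# exceeds 0.8 so that only a reduced set needs to be measured or predicted inline.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
width = rng.normal(50.0, 2.0, 200)                      # a hypothetical driving dimension (mm)
features = pd.DataFrame({
    "width_front": width,
    "width_mid": width + rng.normal(0, 0.1, 200),       # effectively redundant with width_front
    "width_rear": width + rng.normal(0, 0.1, 200),
    "hole_dia": rng.normal(8.0, 0.2, 200),              # independent feature
    "flange_angle": rng.normal(90.0, 0.5, 200),
})

corr = features.corr().abs()
upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))   # keep each pair once
to_drop = [col for col in upper.columns if (upper[col] > 0.8).any()]

print(f"features before: {features.shape[1]}, after: {features.shape[1] - len(to_drop)}")
print("dropped (recoverable from a correlated partner):", to_drop)
```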
Procedia PDF Downloads 45
1754 Comparison of Different Intraocular Lens Power Calculation Formulas in People With Very High Myopia
Authors: Xia Chen, Yulan Wang
Abstract:
Purpose: To compare the accuracy of the Haigis, SRK/T, T2, Holladay 1, Hoffer Q, Barrett Universal II, Emmetropia Verifying Optical (EVO) and Kane formulas for intraocular lens power calculation in patients with axial length (AL) ≥ 28 mm. Methods: In this retrospective single-center study, 50 eyes of 41 patients with AL ≥ 28 mm that underwent uneventful cataract surgery were enrolled. The actual postoperative refractive results were compared to the predicted refraction calculated with the different formulas (Haigis, SRK/T, T2, Holladay 1, Hoffer Q, Barrett Universal II, EVO and Kane). The mean absolute prediction errors (MAE) at 1 month postoperatively were compared. Results: The MAEs of the different formulas were as follows: Haigis (0.509), SRK/T (0.705), T2 (0.999), Holladay 1 (0.714), Hoffer Q (0.583), Barrett Universal II (0.552), EVO (0.463) and Kane (0.441). No significant difference was found among the different formulas (P = .122). The Kane and EVO formulas achieved the lowest mean prediction error (PE) and median absolute error (MedAE) (p < 0.05). Conclusion: The Kane and EVO formulas had a better success rate than the others in predicting IOL power in highly myopic eyes with an AL longer than 28 mm in this study. Keywords: cataract, power calculation formulas, intraocular lens, long axial length
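The error metrics reported above are straightforward to compute; a minimal sketch with hypothetical per-eye prediction errors (not the study's data) is shown below.

```python
# Minimal sketch: mean error (ME), mean absolute error (MAE) and median absolute
# error (MedAE) of the refractive prediction error (predicted minus achieved, in dioptres).
import numpy as np

errors = {  # hypothetical per-eye prediction errors for two formulas
    "Kane":  np.array([0.10, -0.35, 0.55, -0.20, 0.45, -0.60, 0.25]),
    "SRK/T": np.array([0.60, -0.85, 0.95, -0.40, 0.75, -1.10, 0.50]),
}

for name, e in errors.items():
    print(f"{name}: ME={e.mean():+.3f} D, MAE={np.abs(e).mean():.3f} D, "
          f"MedAE={np.median(np.abs(e)):.3f} D")
```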
Procedia PDF Downloads 82
1753 Prediction of Critical Flow Rate in Tubular Heat Exchangers for the Onset of Damaging Flow-Induced Vibrations
Authors: Y. Khulief, S. Bashmal, S. Said, D. Al-Otaibi, K. Mansour
Abstract:
The prediction of the flow rates at which vibration-induced instability takes place in tubular heat exchangers due to cross-flow is of major importance to the performance and service life of such equipment. In this paper, a semi-analytical model for square tube arrays was extended and utilized to study triangular tube patterns. A laboratory test rig with an instrumented test section is used to measure the fluidelastic coefficients used for tuning the mathematical model. The test section can be made of any bundle pattern. In this study, two test sections were constructed, one for the normal triangular and one for the rotated triangular tube array. The developed scheme is utilized in predicting the onset of flow-induced instability in the two triangular tube arrays, and the results are compared to those obtained for two other bundle configurations. The results of the four different tube patterns are viewed in the light of TEMA predictions. The comparison demonstrated that the TEMA guidelines are more conservative in all configurations considered. Keywords: fluid-structure interaction, cross-flow, heat exchangers
Procedia PDF Downloads 277
1752 Rainfall–Runoff Simulation Using WetSpa Model in Golestan Dam Basin, Iran
Authors: M. R. Dahmardeh Ghaleno, M. Nohtani, S. Khaledi
Abstract:
Flood simulation and prediction is one of the most active research areas in surface water management. WetSpa is a distributed, continuous, physically based model with a daily or hourly time step that describes precipitation, runoff, and evapotranspiration processes for both simple and complex contexts. This model uses a modified rational method for runoff calculation; runoff is routed along the flow path using the diffusion-wave equation, which depends on the slope, velocity, and flow-route characteristics. The Golestan Dam Basin is located in Golestan Province in Iran and extends between coordinates 55° 16´ 50" to 56° 4´ 25" E and 37° 19´ 39" to 37° 49´ 28" N. The area of the catchment is about 224 km2, and elevations in the catchment range from 414 m at the outlet to 2856 m, with an average slope of 29.78%. Results of the simulations show good agreement between calculated and measured hydrographs at the outlet of the basin. Based on the Nash-Sutcliffe model efficiency coefficient, during the calibration period the model estimated the daily hydrographs and the maximum flow rate with accuracies of up to 59% and 80.18%, respectively. Keywords: watershed simulation, WetSpa, stream flow, flood prediction
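The Nash-Sutcliffe efficiency used for this evaluation follows the standard definition, sketched below with illustrative discharge values.

```python
# Minimal sketch: Nash-Sutcliffe efficiency (NSE) for scoring simulated versus
# observed daily hydrographs; 1.0 is a perfect fit, values below 0 are worse than the mean.
import numpy as np

def nash_sutcliffe(observed, simulated):
    observed, simulated = np.asarray(observed), np.asarray(simulated)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum((observed - observed.mean()) ** 2)

obs = np.array([12.0, 15.5, 40.2, 88.0, 63.1, 30.4, 18.9])   # observed discharge (m3/s), illustrative
sim = np.array([10.8, 17.0, 35.6, 80.3, 70.2, 28.1, 20.5])   # simulated discharge, illustrative

print("NSE =", round(nash_sutcliffe(obs, sim), 3))
```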
Procedia PDF Downloads 242
1751 Iraqi Short Term Electrical Load Forecasting Based on Interval Type-2 Fuzzy Logic
Authors: Firas M. Tuaimah, Huda M. Abdul Abbas
Abstract:
Accurate Short Term Load Forecasting (STLF) is essential for a variety of decision-making processes. However, forecasting accuracy can drop due to the presence of uncertainty in the operation of energy systems or unexpected behavior of exogenous variables. The Interval Type-2 Fuzzy Logic System (IT2 FLS), with its additional degrees of freedom, provides an excellent tool for handling uncertainties and improves the prediction accuracy. The training data used in this study cover the period from January 1, 2012 to February 1, 2012 for the winter season and the period from July 1, 2012 to August 1, 2012 for the summer season. The actual load forecasting period runs from January 22 to 28, 2012 for the winter model and from July 22 to 28, 2012 for the summer model. The real data are for the Iraqi power system and belong to the Ministry of Electricity. Keywords: short term load forecasting, prediction interval, type 2 fuzzy logic systems, electric, computer systems engineering
Procedia PDF Downloads 395
1750 Prediction of Gully Erosion with Stochastic Modeling by using Geographic Information System and Remote Sensing Data in North of Iran
Authors: Reza Zakerinejad
Abstract:
Gully erosion is a serious problem threatening the sustainability of agricultural areas, rangeland, and water resources in a large part of Iran. This type of water erosion is the main source of sedimentation in many catchment areas in the north of Iran. Since many national assessment approaches have applied only qualitative models, the aim of this study is to predict the spatial distribution of gully erosion processes by means of detailed terrain analysis and GIS-based logistic regression in the loess deposits of a case study area in Golestan Province. In this study, a DEM with 25-meter resolution derived from ASTER data has been used, and Landsat ETM data have been used for land-use mapping. The TreeNet model, as a stochastic model, was applied to predict the areas susceptible to gully erosion. In this model, 20% of the data were set aside as a learning set and 20% as a testing set, and the ROC was used for evaluation. GIS and satellite image analysis techniques were applied to derive the input information for these stochastic models. The result of this study is a highly accurate map of gully erosion potential. Keywords: TreeNet model, terrain analysis, Golestan Province, Iran
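As an illustration of the GIS-based susceptibility-modelling step, a simple logistic-regression stand-in on hypothetical terrain attributes is sketched below; the study itself uses the TreeNet stochastic gradient boosting model, which could be swapped in for the classifier.

```python
# Minimal sketch (synthetic grid cells, hypothetical predictors): a susceptibility model
# on terrain attributes evaluated with the ROC AUC, as a stand-in for the TreeNet workflow.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 500
slope = rng.uniform(0, 45, n)                 # slope (degrees)
flow_acc = rng.lognormal(3, 1, n)             # flow accumulation (cells)
ndvi = rng.uniform(0, 0.8, n)                 # vegetation cover
X = np.column_stack([slope, np.log1p(flow_acc), ndvi])
gully = (0.08 * slope + 0.6 * np.log1p(flow_acc) - 3 * ndvi + rng.normal(0, 1, n) > 2).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, gully, test_size=0.2, random_state=1)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
prob = model.predict_proba(X_te)[:, 1]        # susceptibility score for mapping
print("ROC AUC:", round(roc_auc_score(y_te, prob), 3))
```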
Procedia PDF Downloads 535
1749 Evaluation of QSRR Models by Sum of Ranking Differences Approach: A Case Study of Prediction of Chromatographic Behavior of Pesticides
Authors: Lidija R. Jevrić, Sanja O. Podunavac-Kuzmanović, Strahinja Z. Kovačević
Abstract:
The present study deals with the selection of the most suitable quantitative structure-retention relationship (QSRR) models which should be used in prediction of the retention behavior of basic, neutral, acidic and phenolic pesticides which belong to different classes: fungicides, herbicides, metabolites, insecticides and plant growth regulators. Sum of ranking differences (SRD) approach can give a different point of view on selection of the most consistent QSRR model. SRD approach can be applied not only for ranking of the QSRR models, but also for detection of similarity or dissimilarity among them. Applying the SRD analysis, the most similar models can be found easily. In this study, selection of the best model was carried out on the basis of the reference ranking (“golden standard”) which was defined as the row average values of logarithm of retention time (logtr) defined by high performance liquid chromatography (HPLC). Also, SRD analysis based on experimental logtr values as reference ranking revealed similar grouping of the established QSRR models already obtained by hierarchical cluster analysis (HCA).Keywords: chemometrics, chromatography, pesticides, sum of ranking differences
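The SRD calculation itself is simple; a minimal sketch with hypothetical model predictions and the row-average reference ranking is given below.

```python
# Minimal sketch (standard definition, not the authors' code): sum of ranking
# differences (SRD) for comparing QSRR models against a reference column built
# from the row averages of the log(tr) values.
import numpy as np
from scipy.stats import rankdata

# rows = pesticides, columns = log(tr) predicted by three hypothetical QSRR models
predictions = np.array([
    [1.10, 1.25, 0.95],
    [2.40, 2.10, 2.55],
    [0.70, 0.85, 0.60],
    [3.10, 3.00, 3.30],
    [1.90, 2.05, 1.70],
])
reference = predictions.mean(axis=1)        # "golden standard": row averages
ref_rank = rankdata(reference)

for j in range(predictions.shape[1]):
    srd = np.abs(rankdata(predictions[:, j]) - ref_rank).sum()
    print(f"model {j + 1}: SRD = {srd:g}")   # smaller SRD = closer to the reference ranking
```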
Procedia PDF Downloads 373
1748 A New Computational Tool for Noise Prediction of Rotating Surfaces (FACT)
Authors: Ana Vieira, Fernando Lau, João Pedro Mortágua, Luís Cruz, Rui Santos
Abstract:
The environmental impact of air transport is more than ever a limiting obstacle to the continuous growth of the aeronautical industry. Over the last decades, considerable effort has been devoted to obtaining quieter aircraft solutions, whether by changing the original design or by investigating quieter maneuvers. The noise propagated by rotating surfaces is one of the most important sources of annoyance and is present in most aerial vehicles. Bearing this in mind, CEIIA developed a new computational chain for noise prediction with in-house software tools to obtain solutions in a relatively short time without using excessive computer resources. This work is based on the new acoustic tool, which aims to predict the rotor noise generated during steady and maneuvering flight, making use of the flexibility of the C language and the speed advantages of GPU programming. The acoustic tool is based on Farassat's Formulation 1A and is capable of predicting two important types of noise: loading and thickness noise. The present work describes the most important features of the acoustic tool, presenting its most relevant results and framework analyses for helicopters and UAV quadrotors. Keywords: rotor noise, acoustic tool, GPU programming, UAV noise
Procedia PDF Downloads 401
1747 Cost Sensitive Feature Selection in Decision-Theoretic Rough Set Models for Customer Churn Prediction: The Case of Telecommunication Sector Customers
Authors: Emel Kızılkaya Aydogan, Mihrimah Ozmen, Yılmaz Delice
Abstract:
In recent years, the telecommunications sector has been changing and developing continuously in the global market. In this sector, churn analysis techniques are commonly used to analyse why some customers terminate their service subscriptions prematurely. Customer churn is of utmost significance in this sector since it causes considerable business loss, and many companies carry out various studies in order to prevent losses while increasing customer loyalty. Although a large quantity of accumulated data is available in this sector, its usefulness is limited by data quality and relevance. In this paper, a cost-sensitive feature selection framework is developed with the aim of obtaining the feature reducts used to predict customer churn. The framework is a cost-based, optional pre-processing stage that removes redundant features for churn management. This cost-based feature selection algorithm is applied to a telecommunication company in Turkey, and the results obtained with the algorithm are reported. Keywords: churn prediction, data mining, decision-theoretic rough set, feature selection
Procedia PDF Downloads 444
1746 Applying Pre-Accident Observational Methods for Accident Assessment and Prediction at Intersections in Norrkoping City in Sweden
Authors: Ghazwan Al-Haji, Adeyemi Adedokun
Abstract:
Intersections account for a large share of traffic safety problems, given that accidents occur randomly in time and space. It is necessary to judge whether an intersection is dangerous or not based on short-term observations, rather than waiting many years to assess historical accident data. There are active and pro-active road infrastructure safety methods for assessing safety at intersections. This study aims to investigate the use of quantitative and qualitative pre-accident observational methods as the best practice for accident prediction, future black spot identification, and treatment. Historical accident data from STRADA (the Swedish Traffic Accident Data Acquisition) were used for Norrkoping city in Sweden. The ADT (Average Daily Traffic), capacity, and speed were used to predict accident rates. Locations with the highest accident records and predicted accident counts were identified and then audited qualitatively using a street audit. The results from these quantitative and qualitative methods were analyzed, validated, and compared. The paper provides recommendations on the methods used as well as on how to reduce accident occurrence at the chosen intersections. Keywords: intersections, traffic conflict, traffic safety, street audit, accidents predictions
Procedia PDF Downloads 231
1745 Graph Based Traffic Analysis and Delay Prediction Using a Custom Built Dataset
Authors: Gabriele Borg, Alexei Debono, Charlie Abela
Abstract:
There is a constant rise in the availability of high volumes of data gathered from multiple sources, resulting in an abundance of unprocessed information that can be used to monitor patterns and trends in user behaviour. Similarly, year after year, Malta is experiencing ongoing population growth and an increase in mobility demand. This research takes advantage of data that is continuously being sourced and converts it into useful information related to the traffic problem on Maltese roads. The scope of this paper is to provide a methodology for creating a custom dataset (MalTra - Malta Traffic), compiled from multiple participants at various locations across the island, to identify the most common routes taken and expose the main areas of activity. Such use of big data underpins Intelligent Transportation Systems (ITSs), and it has been concluded that there is significant potential in utilising such sources of data on a nationwide scale. Furthermore, a series of traffic prediction experiments with graph neural network models is conducted to compare MalTra to large-scale traffic datasets. Keywords: graph neural networks, traffic management, big data, mobile data patterns
Procedia PDF Downloads 127
1744 Physically Informed Kernels for Wave Loading Prediction
Authors: Daniel James Pitchforth, Timothy James Rogers, Ulf Tyge Tygesen, Elizabeth Jane Cross
Abstract:
Wave loading is a primary cause of fatigue within offshore structures and its quantification presents a challenging and important subtask within the SHM framework. The accurate representation of physics in such environments is difficult, however, driving the development of data-driven techniques in recent years. Within many industrial applications, empirical laws remain the preferred method of wave loading prediction due to their low computational cost and ease of implementation. This paper aims to develop an approach that combines data-driven Gaussian process models with physical empirical solutions for wave loading, including Morison’s Equation. The aim here is to incorporate physics directly into the covariance function (kernel) of the Gaussian process, enforcing derived behaviors whilst still allowing enough flexibility to account for phenomena such as vortex shedding, which may not be represented within the empirical laws. The combined approach has a number of advantages, including improved performance over either component used independently and interpretable hyperparameters.Keywords: offshore structures, Gaussian processes, Physics informed machine learning, Kernel design
Procedia PDF Downloads 188
1743 Experimental Study on the Creep Characteristics of FRC Base for Composite Pavement System
Authors: Woo-Tai Jung, Sung-Yong Choi, Young-Hwan Park
Abstract:
The composite pavement system considered in this paper is composed of a functional surface layer, a fiber reinforced asphalt middle layer and a fiber reinforced lean concrete base layer. The mix design of the fiber reinforced lean concrete corresponds to the mix composition of conventional lean concrete but reinforced by fibers. The quasi-absence of research on the durability or long-term performances (fatigue, creep, etc.) of such mix design stresses the necessity to evaluate experimentally the long-term characteristics of this layer composition. This study tests the creep characteristics as one of the long-term characteristics of the fiber reinforced lean concrete layer for composite pavement using a new creep device. The test results reveal that the lean concrete mixed with fiber reinforcement and fly ash develops smaller creep than the conventional lean concrete. The results of the application of the CEB-FIP prediction equation indicate that a modified creep prediction equation should be developed to fit with the new mix design of the layer.Keywords: creep, lean concrete, pavement, fiber reinforced concrete, base
Procedia PDF Downloads 521
1742 Establishing a Surrogate Approach to Assess the Exposure Concentrations during Coating Process
Authors: Shan-Hong Ying, Ying-Fang Wang
Abstract:
A surrogate approach was deployed for assessing exposure to multiple chemicals in a selected working area of a coating process and was applied to assess the exposure concentrations of similarly exposed groups using the same chemicals but different formula ratios. For the selected area, 6 to 12 portable photoionization detectors (PIDs) were placed uniformly in the workplace to measure total VOC concentrations (CT-VOCs) for 6 randomly selected work shifts. Simultaneously, one sampling train was placed beside one of these portable PIDs, and the collected air sample was analyzed for the individual concentrations (CVOCi) of 5 VOCs (xylene, butanone, toluene, butyl acetate, and dimethylformamide). Predictive models were established by relating the CT-VOCs to the CVOCi of each individual compound via simple regression analysis. The established predictive models were then employed to predict each CVOCi based on the CT-VOC measured with the same portable PID in each similar working area. Results show that the predictive models obtained from simple linear regression analyses had R2 = 0.83~0.99, indicating that CT-VOCs were adequate for predicting CVOCi. In order to verify the validity of the exposure prediction model, sampling analysis of the above chemical substances was further carried out, and the correlation between the measured value (Cm) and the predicted value (Cp) was analyzed. A good correlation was found between the predicted and measured values of each measured chemical substance (R2 = 0.83~0.98). Therefore, the surrogate approach can be used to assess the exposure concentrations of similarly exposed groups using the same chemicals but different formula ratios. However, it is recommended to establish the prediction model between the chemical substances belonging to each coater and the direct-reading PID, which is more representative of the real exposure situation and more accurate for estimating the long-term exposure concentration of operators. Keywords: exposure assessment, exposure prediction model, surrogate approach, TVOC
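The surrogate regression step can be sketched in a few lines; the PID readings and toluene concentrations below are synthetic, not the study's measurements.

```python
# Minimal sketch: regress an individual VOC concentration (C_VOCi) on the total VOC
# reading (C_T-VOC) from a direct-reading PID, then reuse the fitted line to predict
# C_VOCi in a similar workplace from its PID reading alone.
import numpy as np
from sklearn.linear_model import LinearRegression

ct_voc = np.array([2.1, 3.4, 5.0, 6.8, 8.5, 10.2]).reshape(-1, 1)   # ppm, illustrative
c_toluene = np.array([0.5, 0.9, 1.3, 1.8, 2.2, 2.7])                # ppm, illustrative

model = LinearRegression().fit(ct_voc, c_toluene)
print("slope:", round(model.coef_[0], 3),
      "intercept:", round(model.intercept_, 3),
      "R2:", round(model.score(ct_voc, c_toluene), 3))

print("predicted toluene at CT-VOC = 7.5 ppm:", round(model.predict([[7.5]])[0], 3))
```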
Procedia PDF Downloads 148
1741 Open Forging of Cylindrical Blanks Subjected to Lateral Instability
Authors: A. H. Elkholy, D. M. Almutairi
Abstract:
The successful and efficient execution of a forging process depends upon the correct analysis of the loading and metal flow of the blanks. This paper investigates the Upper Bound Technique (UBT) and its application in the analysis of the open forging process when a possibility of blank bulging exists. The UBT is one of the energy-rate minimization methods for the solution of metal forming processes based on the upper bound theorem. In this regard, the kinematically admissible velocity field is obtained by minimizing the total forging energy rate. A computer program is developed in this research to implement the UBT. The significant advantages of this method are its speed of execution, while maintaining a fairly high degree of accuracy, and its wide prediction capability. The information from this analysis is useful for the design of forging processes and dies. Results for the prediction of forging loads and stresses, metal flow, and surface profiles, with the assured benefits in terms of press selection and blank preform design, are outlined in some detail. The obtained predictions are ready for comparison with both laboratory and industrial results. Keywords: forging, upper bound technique, metal forming, forging energy, forging die/platen
Procedia PDF Downloads 292
1740 Urban Big Data: An Experimental Approach to Building-Value Estimation Using Web-Based Data
Authors: Sun-Young Jang, Sung-Ah Kim, Dongyoun Shin
Abstract:
Current real-estate value estimation, which is difficult for laymen, is usually performed by specialists. This paper presents an automated estimation process based on big data and machine-learning technology that calculates the influence of building conditions on real-estate price measurement. The present study analyzed actual building sales sample data for Nonhyeon-dong, Gangnam-gu, Seoul, Korea, measuring the major influencing factors among the various building conditions. Following that analysis, a prediction model was established and applied using RapidMiner Studio, a graphical user interface (GUI)-based tool for the derivation of machine-learning prototypes. The prediction model is formulated by reference to previous examples; when new examples are applied, it analyses and predicts accordingly. The analysis process discerns the crucial factors affecting price increases by calculating weighted values. The model was verified, and its accuracy determined, by comparing its predicted values with actual price increases. Keywords: apartment complex, big data, life-cycle building value analysis, machine learning
Procedia PDF Downloads 370
1739 Machine Learning Assisted Prediction of Sintered Density of Binary W(MO) Alloys
Authors: Hexiong Liu
Abstract:
Powder metallurgy is the optimal method for the consolidation and preparation of W(Mo) alloys, which exhibit excellent application prospects at high temperatures. The properties of W(Mo) alloys are closely related to the sintered density. However, controlling the sintered density and porosity of these alloys is still challenging. In the past, the regulation methods mainly focused on time-consuming and costly trial-and-error experiments. In this study, the sintering data for more than a dozen W(Mo) alloys constituted a small-scale dataset, including both solid and liquid phases of sintering. Furthermore, simple descriptors were used to predict the sintered density of W(Mo) alloys based on the descriptor selection strategy and machine learning method (ML), where the ML algorithm included the least absolute shrinkage and selection operator (Lasso) regression, k-nearest neighbor (k-NN), random forest (RF), and multi-layer perceptron (MLP). The results showed that the interpretable descriptors extracted by our proposed selection strategy and the MLP neural network achieved a high prediction accuracy (R>0.950). By further predicting the sintered density of W(Mo) alloys using different sintering processes, the error between the predicted and experimental values was less than 0.063, confirming the application potential of the model.Keywords: sintered density, machine learning, interpretable descriptors, W(Mo) alloy
Procedia PDF Downloads 79
1738 Prospectivity Mapping of Orogenic Lode Gold Deposits Using Fuzzy Models: A Case Study of Saqqez Area, Northwestern Iran
Authors: Fanous Mohammadi, Majid H. Tangestani, Mohammad H. Tayebi
Abstract:
This research aims to evaluate and compare Geographical Information Systems (GIS)-based fuzzy models for producing orogenic gold prospectivity maps in the Saqqez area, NW of Iran. Gold occurrences are hosted in sericite schist and mafic to felsic meta-volcanic rocks in this area and are associated with hydrothermal alterations that extend over ductile to brittle shear zones. The predictor maps, which represent the Pre-(Source/Trigger/Pathway), syn-(deposition/physical/chemical traps) and post-mineralization (preservation/distribution of indicator minerals) subsystems for gold mineralization, were generated using empirical understandings of the specifications of known orogenic gold deposits and gold mineral systems and were then pre-processed and integrated to produce mineral prospectivity maps. Five fuzzy logic operators, including AND, OR, Fuzzy Algebraic Product (FAP), Fuzzy Algebraic Sum (FAS), and GAMMA, were applied to the predictor maps in order to find the most efficient prediction model. Prediction-Area (P-A) plots and field observations were used to assess and evaluate the accuracy of prediction models. Mineral prospectivity maps generated by AND, OR, FAP, and FAS operators were inaccurate and, therefore, unable to pinpoint the exact location of discovered gold occurrences. The GAMMA operator, on the other hand, produced acceptable results and identified potentially economic target sites. The P-A plot revealed that 68 percent of known orogenic gold deposits are found in high and very high potential regions. The GAMMA operator was shown to be useful in predicting and defining cost-effective target sites for orogenic gold deposits, as well as optimizing mineral deposit exploitation.Keywords: mineral prospectivity mapping, fuzzy logic, GIS, orogenic gold deposit, Saqqez, Iran
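The fuzzy overlay operators compared above follow standard definitions; a minimal sketch on three hypothetical fuzzified evidence layers is given below.

```python
# Minimal sketch (standard fuzzy-overlay definitions): combining fuzzified predictor
# maps with the AND, OR, fuzzy algebraic product (FAP), fuzzy algebraic sum (FAS)
# and GAMMA operators used in GIS-based prospectivity mapping.
import numpy as np

# three fuzzified evidence layers (membership values in [0, 1]) for a few grid cells
layers = np.array([
    [0.9, 0.2, 0.6, 0.8],   # e.g. hydrothermal alteration evidence
    [0.7, 0.1, 0.5, 0.9],   # e.g. shear-zone proximity
    [0.8, 0.3, 0.4, 0.7],   # e.g. favourable host lithology
])

fuzzy_and = layers.min(axis=0)
fuzzy_or = layers.max(axis=0)
fap = layers.prod(axis=0)                       # fuzzy algebraic product
fas = 1.0 - np.prod(1.0 - layers, axis=0)       # fuzzy algebraic sum
gamma = 0.9
fuzzy_gamma = fas**gamma * fap**(1.0 - gamma)   # GAMMA operator

for name, vals in [("AND", fuzzy_and), ("OR", fuzzy_or), ("FAP", fap),
                   ("FAS", fas), ("GAMMA", fuzzy_gamma)]:
    print(f"{name:5s}", np.round(vals, 3))
```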
Procedia PDF Downloads 119
1737 Long Short-Term Memory Based Model for Modeling Nicotine Consumption Using an Electronic Cigarette and Internet of Things Devices
Authors: Hamdi Amroun, Yacine Benziani, Mehdi Ammi
Abstract:
In this paper, we want to determine whether an accurate prediction of nicotine concentration can be obtained by using a network of smart objects and an e-cigarette. The approach consists of, first, the recognition of factors influencing smoking cessation, such as physical activity and participants' behaviors (using both a smartphone and a smartwatch), and then the prediction of the configuration of the e-cigarette (in terms of nicotine concentration, power, and resistance). The study uses a network of commonly connected objects: a smartwatch, a smartphone, and an e-cigarette carried by the participants during an uncontrolled experiment. The data obtained from the sensors carried in the three devices were used to train a long short-term memory (LSTM) model. Results show that our LSTM-based model allows predicting the configuration of the e-cigarette in terms of nicotine concentration, power, and resistance with root mean square error percentages of 12.9%, 9.15%, and 11.84%, respectively. This study can help to better control nicotine consumption and offer an intelligent configuration of the e-cigarette to users. Keywords: IoT, activity recognition, automatic classification, unconstrained environment
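A minimal sketch of such an LSTM regressor is shown below; the window shape and sensor features are assumptions, and the data are synthetic rather than the study's recordings.

```python
# Minimal sketch: an LSTM mapping a window of smartwatch/smartphone sensor features
# to the three e-cigarette targets (nicotine concentration, power, resistance).
import numpy as np
import tensorflow as tf

timesteps, n_features = 30, 6            # e.g. 30 time steps of 6 activity/behaviour features
X = np.random.rand(256, timesteps, n_features).astype("float32")   # synthetic windows
y = np.random.rand(256, 3).astype("float32")                       # [nicotine, power, resistance]

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(timesteps, n_features)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(3),             # one output per e-cigarette setting
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)

pred = model.predict(X[:1], verbose=0)
print("predicted [nicotine, power, resistance]:", pred[0])
```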
Procedia PDF Downloads 222
1736 Influence of Flexible Plate's Contour on Dynamic Behavior of High Speed Flexible Coupling of Combat Aircraft
Authors: Dineshsingh Thakur, S. Nagesh, J. Basha
Abstract:
A lightweight High Speed Flexible Coupling (HSFC) is used to connect the Engine Gear Box (EGB) with an Accessory Gear Box (AGB) of the combat aircraft. The HSFC transmits power at high speeds ranging from 10000 to 18000 rpm from the EGB to the AGB. The HSFC also accommodates larger misalignments resulting from thermal expansion of the aircraft engine and the mounting arrangement. The HSFC has a series of metallic, contoured, annular, thin cross-sectioned flexible plates to accommodate the misalignments; the flexible plates accommodate the misalignment through elastic flexure of the material. As the HSFC operates at higher speeds, the flexural and axial resonance frequencies must be kept away from the operating speed, and proper prediction is required to prevent failure in the transmission line of a single-engine fighter aircraft. To study the influence of the flexible plate's contour on the lateral critical speed (LCS) of the HSFC, a mathematical model of the HSFC as an eleven-rotor system is developed. The flexible plate, being the bending member of the system, has a bending stiffness, resulting from its contour, that governs the LCS. Using the transfer matrix method, the influence of various flexible plate contours on the critical speed is analyzed. In the above analysis, the effect of support bearing flexibility on the critical speed prediction is also considered. Based on the study, a model is built with the optimum contour of the flexible plate for validation by experimental modal analysis. A good correlation between the theoretical prediction and the model behavior is observed. From the study, it is found that the flexible plate's contour plays a vital role in modifying the system's dynamic behavior, and the present model can be extended to the development of similar types of flexible couplings owing to its computational simplicity and reliability. Keywords: flexible rotor, critical speed, experimental modal analysis, high speed flexible coupling (HSFC), misalignment
Procedia PDF Downloads 214
1735 The Power House of Mind: Determination of Action
Authors: Sheetla Prasad
Abstract:
The focus of this article is to determine the mechanism of the mind through geometrical analysis of the human face. A research paradigm was designed for the study of the spatial dynamics of the face, and it was found that different face shapes have their own function in determining the action of the mind. The functional ratio (FR) of the face determines the behavioural operation of human beings. It is not based on a formulistic approach to prediction; rather, scientific dogmatism and mathematical analysis are the root of the prediction of behaviour. For the analysis, formulae were developed and standardized. It was found that the human psyche is designed in three forms: the manipulated, manifested, and real psyche. The functional output of the psyche is determined by the degree of energy flow in the psyche and the energy reserved for the future. The face is the recipient and transmitter of energy, but distribution and control are made possible by the mind, and the mind directs behaviour. The FR indicates that the face is a power house of energy, and according to its geometrical domain, the force of behaviours is shaped and actions become possible in the nature of the individual. The impact of this study is the promotion of human capital for job-fitness objectives and the minimization of criminalization in society. Keywords: functional ratio, manipulated psyche, manifested psyche, real psyche
Procedia PDF Downloads 451