Search results for: prediction interval
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3045

2355 BiFormerDTA: Structural Embedding of Protein in Drug Target Affinity Prediction Using BiFormer

Authors: Leila Baghaarabani, Parvin Razzaghi, Mennatolla Magdy Mostafa, Ahmad Albaqsami, Al Warith Al Rushaidi, Masoud Al Rawahi

Abstract:

Predicting the interaction between drugs and their molecular targets is pivotal for advancing drug development. Because of time and cost limitations, computational approaches have emerged as an effective means of drug-target interaction (DTI) prediction. Most existing computational approaches use the drug molecule and the protein sequence as input. This study not only uses these inputs but also introduces a protein representation derived from a masked protein language model, in which every amino acid residue of the protein sequence is associated with a probability distribution indicating the likelihood of each amino acid occurring at that position. The similarity between each pair of residues is then computed to create a similarity matrix. To encode the knowledge in the similarity matrix, Bi-Level Routing Attention (BiFormer) is utilized, which combines aspects of transformer-based models with protein sequence analysis and represents a significant advancement in drug-protein interaction prediction. BiFormer can pinpoint the regions of the protein sequence that are most responsible for facilitating interactions between the protein and drugs, thereby enhancing the understanding of these critical interactions. It therefore appears promising in its ability to capture the local structural relationships of proteins and how they contribute to drug-protein interactions, facilitating more accurate predictions. The proposed method was evaluated on two widely recognized datasets, Davis and KIBA, and a comprehensive series of experiments illustrates its effectiveness in comparison with cutting-edge techniques.
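
As an illustration of the similarity-matrix step described in the abstract (a minimal sketch, not the authors' code), the snippet below builds a residue-by-residue similarity matrix from per-residue amino-acid probability distributions; the probability matrix is a random placeholder standing in for the output of a masked protein language model.

```python
# Minimal sketch: residue-similarity matrix from per-residue probability distributions.
import numpy as np

rng = np.random.default_rng(0)
seq_len, n_amino_acids = 120, 20

# Placeholder: each row is a probability distribution over the 20 amino acids
# for one residue position (rows sum to 1).
probs = rng.random((seq_len, n_amino_acids))
probs /= probs.sum(axis=1, keepdims=True)

# Pairwise cosine similarity between residue distributions -> (seq_len x seq_len) matrix.
unit = probs / np.linalg.norm(probs, axis=1, keepdims=True)
similarity_matrix = unit @ unit.T  # entry (i, j) = similarity of residues i and j

print(similarity_matrix.shape)  # (120, 120), the matrix fed to the BiFormer-style encoder
```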

Keywords: BiFormer, transformer, protein language processing, self-attention mechanism, binding affinity, drug target interaction, similarity matrix, protein masked representation, protein language model

Procedia PDF Downloads 15
2354 Strategies in Customer Relationship Management and Customers’ Behavior in Making Decision on Buying Car Insurance of Southeast Insurance Co. Ltd. in Bangkok

Authors: Nattapong Techarattanased, Paweena Sribunrueng

Abstract:

The objective of this study is to investigate strategies in customer relationship management and customers' behavior in deciding to buy car insurance from Southeast Insurance Co. Ltd. in Bangkok. Subjects in this study were 400 customers aged over 20 years who completed questionnaires. The data were analyzed by arithmetic mean and multiple regression. The results revealed that the customers' opinions on the customer relationship management strategies, i.e., customer relationship, customer feedback, customer follow-up, useful service suggestions, customer communication, and service channels, were at a moderate level, while the opinion on customer retention was at a high level. Moreover, the customer relationship management strategies of customer relationship and customer feedback influenced customers' decisions to buy car insurance; these two factors predicted the buying decision at a rate of 34%. In addition, the strategies of customer retention, customer feedback, and useful service suggestions influenced the customers' decisions on how long to remain customers; these three factors predicted that decision at a rate of 45%.

Keywords: strategies, customer relationship management, behavior in buying decision, car insurance

Procedia PDF Downloads 406
2353 Using Simulation Modeling Approach to Predict USMLE Steps 1 and 2 Performances

Authors: Chau-Kuang Chen, John Hughes, Jr., A. Dexter Samuels

Abstract:

Prediction models for United States Medical Licensing Examination (USMLE) Steps 1 and 2 performance were constructed using a Monte Carlo simulation modeling approach based on linear regression. The purpose of this study was to build robust simulation models that accurately identify the most important predictors and yield valid range estimates of Step 1 and Step 2 scores. The simulation modeling approach was deemed an effective way to predict student performance on licensure examinations. Sensitivity analysis (also known as what-if analysis) within the simulation models was used to predict how Step 1 and Step 2 scores are affected by changes in the National Board of Medical Examiners (NBME) Basic Science Subject Board scores. In addition, the study results indicated that the Medical College Admission Test (MCAT) Verbal Reasoning score and the Step 1 score were significant predictors of Step 2 performance. Hence, institutions could screen qualified applicants for interviews and document the effectiveness of their basic science education programs based on the simulation results.
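
A minimal sketch of the general idea, using synthetic data and hypothetical predictor names rather than the study's dataset: fit a linear regression, use Monte Carlo draws of the residual noise to obtain a range estimate for a Step score, and rerun the simulation with an altered NBME score as a simple what-if analysis.

```python
# Minimal sketch: linear regression + Monte Carlo range estimates + what-if change.
import numpy as np

rng = np.random.default_rng(1)
n = 500
nbme_basic = rng.normal(70, 8, n)     # hypothetical NBME basic-science score
mcat_verbal = rng.normal(9, 2, n)     # hypothetical MCAT Verbal Reasoning score
step1 = 120 + 1.1 * nbme_basic + 2.0 * mcat_verbal + rng.normal(0, 8, n)

X = np.column_stack([np.ones(n), nbme_basic, mcat_verbal])
beta, *_ = np.linalg.lstsq(X, step1, rcond=None)
resid_sd = np.std(step1 - X @ beta, ddof=X.shape[1])

def simulate(nbme, mcat, n_sims=10_000):
    """Monte Carlo range estimate of the predicted score for one applicant."""
    mean = beta[0] + beta[1] * nbme + beta[2] * mcat
    draws = mean + rng.normal(0, resid_sd, n_sims)
    return np.percentile(draws, [2.5, 50, 97.5])

print(simulate(nbme=72, mcat=10))   # baseline range estimate
print(simulate(nbme=77, mcat=10))   # what-if: NBME score raised by 5 points
```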

Keywords: prediction model, sensitivity analysis, simulation method, USMLE

Procedia PDF Downloads 340
2352 Exploring Syntactic and Semantic Features for Text-Based Authorship Attribution

Authors: Haiyan Wu, Ying Liu, Shaoyun Shi

Abstract:

Authorship attribution extracts features to identify the authors of anonymous documents. Much previous work on authorship attribution focuses on statistical style features (e.g., sentence/word length) and content features (e.g., frequent words, n-grams). Modeling these features with regression or other transparent machine learning methods gives a portrait of an author's writing style, but such methods do not capture syntactic information (e.g., dependency relationships) or semantic information (e.g., topics). In recent years, some researchers have modeled syntactic trees or latent semantic information with neural networks, but few works combine the two, and predictions made by neural networks are difficult to explain, which is vital in authorship attribution tasks. In this paper, we utilize not only statistical style and content features but also syntactic and semantic features. Unlike an end-to-end neural model, our method separates feature selection and prediction into two steps: an attentive n-gram network selects useful features, and logistic regression provides predictions and an understandable representation of writing style. Experiments show that our extracted features improve on state-of-the-art methods on three benchmark datasets.
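
The two-step design (select features, then fit an interpretable classifier) can be sketched as follows; a chi-squared filter stands in for the paper's attentive n-gram network, and the texts and author labels are toy placeholders.

```python
# Minimal sketch: n-gram feature selection followed by an interpretable classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["the old man walked slowly", "slowly the night fell", "he walked and walked",
         "night after night she wrote", "she wrote of the sea", "the sea was old"]
authors = [0, 1, 0, 1, 1, 0]  # toy author labels

pipeline = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),  # word unigrams and bigrams as style/content features
    SelectKBest(chi2, k=10),              # keep the most discriminative n-grams
    LogisticRegression(max_iter=1000),    # coefficients give a readable portrait of style
)
pipeline.fit(texts, authors)
print(pipeline.predict(["the old sea at night"]))
```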

Keywords: authorship attribution, attention mechanism, syntactic feature, feature extraction

Procedia PDF Downloads 137
2351 Application of Multilinear Regression Analysis for Prediction of Synthetic Shear Wave Velocity Logs in Upper Assam Basin

Authors: Triveni Gogoi, Rima Chatterjee

Abstract:

Shear wave velocity (Vs) estimation is an important step in seismic exploration and characterization of a hydrocarbon reservoir. Various methods exist for predicting S-wave velocity when a recorded S-wave log is not available, but all of them are empirical mathematical models. The most common approach estimates shear wave velocity from P-wave velocity using Castagna's equation, whose constants vary with lithology and geological setting. In this study, multiple regression analysis has been used to estimate S-wave velocity, with the EMERGE module of the Hampson-Russell software used to generate the S-wave log. Both single-attribute and multi-attribute analyses were carried out to generate a synthetic S-wave log in the Upper Assam basin. The Upper Assam basin, situated in northeastern India, is one of the most important petroleum provinces of India. The present study used four wells from the study area; S-wave velocity was available for three of them. The main objective of the present study is the prediction of shear wave velocity for wells where S-wave velocity information is not available. The three wells with S-wave velocity were first used to test the reliability of the method, and the generated S-wave log was compared with the actual S-wave log. Single-attribute analysis was carried out for these three wells within the depth range 1700-2100 m, corresponding to the Barail Group of Oligocene age, which is the main target zone in this study and the primary producing reservoir of the basin. A system-generated list of attributes with varying degrees of correlation was produced, and the attribute with the highest correlation was used for the single-attribute analysis. A crossplot between the attributes shows the spread of points about the line of best fit. The final result of the analysis was compared with the available S-wave log and shows a good visual fit, with a correlation of 72%. Next, multi-attribute analysis was carried out for the same data using all the wells within the same analysis window, and a high correlation of 85% was observed between the output log and the recorded S-wave. The near-perfect fit between the synthetic and recorded S-wave logs validates the reliability of the method. For further authentication, the generated S-wave data from the wells were tied to the seismic data and correlated. A synthetic shear wave log was generated for well M2, where the S-wave log is not available, and it shows a good correlation with the seismic. Neutron porosity, density, acoustic impedance (AI), and P-wave velocity proved to be the most significant variables in this statistical method for S-wave generation. The multilinear regression method can thus be considered a reliable technique for generating a shear wave velocity log in this study.
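
A minimal sketch of the two baselines discussed above, on synthetic placeholder logs rather than the Upper Assam data: the single-attribute estimate via Castagna's mudrock equation and a multilinear regression of Vs on several log attributes.

```python
# Minimal sketch: Castagna single-attribute estimate vs. multilinear regression of Vs.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
n = 400
vp = rng.uniform(2.5, 4.5, n)             # P-wave velocity, km/s (placeholder)
density = rng.uniform(2.2, 2.65, n)       # bulk density, g/cc (placeholder)
neutron_phi = rng.uniform(0.05, 0.35, n)  # neutron porosity, fraction (placeholder)
vs_true = 0.8621 * vp - 1.1724 + rng.normal(0, 0.05, n)  # synthetic "recorded" Vs

# Single-attribute estimate: Castagna's mudrock relation Vs = 0.8621*Vp - 1.1724 (km/s).
vs_castagna = 0.8621 * vp - 1.1724

# Multilinear regression on several attributes (the abstract lists neutron porosity,
# density, AI, and Vp as the most significant variables; vp*density approximates AI).
X = np.column_stack([vp, density, neutron_phi, vp * density])
model = LinearRegression().fit(X, vs_true)
vs_pred = model.predict(X)

print("correlation (Castagna):   ", round(np.corrcoef(vs_castagna, vs_true)[0, 1], 3))
print("correlation (multilinear):", round(np.corrcoef(vs_pred, vs_true)[0, 1], 3))
```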

Keywords: Castagna's equation, multilinear regression, multi-attribute analysis, shear wave logs

Procedia PDF Downloads 232
2350 Survival Analysis Based Delivery Time Estimates for Display FAB

Authors: Paul Han, Jun-Geol Baek

Abstract:

In the flat panel display industry, the scheduler and dispatching system that meets production target quantities and production deadlines is the main production management system, controlling each facility's production order and the distribution of WIP (Work in Process). In the dispatching system, delivery time is a key factor determining when a lot can be supplied to a facility. In this paper, we use survival analysis methods to identify the main factors and build a forecasting model of delivery time. Among survival analysis techniques, the Cox proportional hazards model is used to select important explanatory variables, and the Accelerated Failure Time (AFT) model is used to build the prediction model. Performance comparisons were conducted with two other models: the technical statistics model based on transfer history and a linear regression model using the same explanatory variables as the AFT model. Under the Mean Square Error (MSE) criterion, the AFT model reduced the error by 33.8% compared with the existing prediction model and by 5.3% compared with the linear regression model. This survival analysis approach is applicable to implementing a delivery time estimator in display manufacturing and can contribute to improving the productivity and reliability of the production management system.
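
A minimal sketch of the two-step survival-analysis workflow with the lifelines library, using a synthetic lot-level dataset (the column names are illustrative, not taken from the FAB dispatching system).

```python
# Minimal sketch: Cox PH for variable screening, then a Weibull AFT model for prediction.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter, WeibullAFTFitter

rng = np.random.default_rng(3)
n = 1000
df = pd.DataFrame({
    "queue_length": rng.poisson(12, n),   # WIP waiting in front of the facility (placeholder)
    "priority": rng.integers(0, 2, n),    # hot lot flag (placeholder)
    "distance": rng.uniform(1, 10, n),    # transfer distance, arbitrary units (placeholder)
})
base = 2.0 + 0.15 * df["queue_length"] - 1.0 * df["priority"] + 0.2 * df["distance"]
df["delivery_time"] = rng.weibull(1.5, n) * base   # hours until the lot arrives
df["delivered"] = 1                                # 1 = observed event (no censoring here)

# Step 1: Cox proportional hazards model to identify significant explanatory variables.
cph = CoxPHFitter()
cph.fit(df, duration_col="delivery_time", event_col="delivered")
cph.print_summary()

# Step 2: Accelerated Failure Time model used as the forecasting model.
aft = WeibullAFTFitter()
aft.fit(df, duration_col="delivery_time", event_col="delivered")
print(aft.predict_median(df).head())   # predicted delivery time per lot
```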

Keywords: delivery time, survival analysis, Cox PH model, accelerated failure time model

Procedia PDF Downloads 544
2349 Crack Width Analysis of Reinforced Concrete Members under Shrinkage Effect by Pseudo-Discrete Crack Model

Authors: F. J. Ma, A. K. H. Kwan

Abstract:

Cracking caused by shrinkage movement of concrete is a serious problem, especially when restraint is provided, and it may cause severe serviceability and durability problems. The existing methods for predicting crack width due to shrinkage movement are mainly numerical methods under simplified circumstances, and they do not agree with each other. To obtain a more unified prediction method applicable to more sophisticated circumstances, finite element crack width analysis for shrinkage effects should be developed. However, no existing finite element analysis can predict the crack width of concrete due to shrinkage movement because of unresolved limitations of conventional finite element analysis. In this paper, crack width analysis implemented by finite element analysis is presented with a pseudo-discrete crack model, which combines the traditional smeared crack model with a newly proposed crack queuing algorithm. The proposed pseudo-discrete crack model is capable of simulating separate, individual cracks without adopting discrete crack elements, and the improved finite element analysis can successfully simulate the stress redistribution when concrete cracks, which is crucial for predicting crack width, crack spacing, and crack number.

Keywords: crack queuing algorithm, crack width analysis, finite element analysis, shrinkage effect

Procedia PDF Downloads 419
2348 Early Prediction of Diseases in a Cow for Cattle Industry

Authors: Ghufran Ahmed, Muhammad Osama Siddiqui, Shahbaz Siddiqui, Rauf Ahmad Shams Malick, Faisal Khan, Mubashir Khan

Abstract:

In this paper, a machine learning-based approach for early prediction of diseases in cows is proposed, in which different ML algorithms are applied to extract useful patterns from the available dataset. Technology has changed today's world in every aspect of life, and advanced technologies have likewise been developed in livestock and dairy farming to monitor dairy cows in various respects. Dairy cattle monitoring is crucial, as it plays a significant role in milk production around the globe. Moreover, it has become necessary for farmers to adopt the latest early prediction technologies as food demand increases with population growth, which highlights the importance of state-of-the-art technologies in analyzing dairy cows' activities. It is not easy to predict the activities of a large number of cows on a farm, so the system makes it very convenient for farmers by providing all the solutions under one roof. The cattle industry's productivity is boosted because any disease on a cattle farm is diagnosed early, based on the machine learning output, and hence treated early. The learning models interpret the data collected in a centralized system: different algorithms are run on the received dataset to analyze milk quality and to track cows' health, location, and safety. The algorithms draw patterns from the data, which makes it easier for farmers to study any animal's behavioral changes. With the emergence of machine learning algorithms and the Internet of Things, accurate tracking of animals is possible as the rate of error is minimized, and as a result milk productivity is increased. IoT with ML capability has given a new phase to the cattle farming industry by increasing the yield in the most cost-effective and time-saving manner.

Keywords: IoT, machine learning, health care, dairy cows

Procedia PDF Downloads 73
2347 The Effect of Multiple Environmental Conditions on Acacia senegal Seedling’s Carbon, Nitrogen, and Hydrogen Contents: An Experimental Investigation

Authors: Abdelmoniem A. Attaelmanan, Ahmed A. H. Siddig

Abstract:

This study was conducted in light of continuing global climate changes that project increasing aridity, changes in soil fertility, and pollution. Plant growth and development largely depend on the combination of available water and nutrients in the soil, and changes in climate and atmospheric chemistry can seriously affect these growth factors. Plant carbon (C), nitrogen (N), and hydrogen (H) play a fundamental role in maintaining ecosystem structure and function. Hashab (Acacia senegal), which produces gum arabic, supports dryland ecosystems in tropical zones through its potential to restore degraded soils; hence it is ecologically and economically important for the dry areas of sub-Saharan Africa. The study investigates the effects of water stress (simulated drought) and poor soil type on Acacia senegal C, N, and H contents. Seven-day-old seedlings were assigned to the treatments in a split-plot design for four weeks, with irrigation interval (well-watered and water-stressed) as the main plot and soil type (silt and sand soils) as the subplot. Seedling C%, N%, and H% were measured using a CHNS-O analyzer following the standard test method. Irrigation interval and soil type had no effect on whole-seedling or leaf C%, N%, and H%; irrigation interval affected stem C% and H%; both irrigation interval and soil type affected root N%; and an interaction effect of water and soil was found on leaf and root N%. The combined application of well-watered irrigation with soil rich in N and other nutrients would yield the greatest seedling C, N, and H content, which would enhance growth and biomass accumulation and can play a crucial role in ecosystem productivity and services in dryland regions.

Keywords: Acacia senegal, Africa, climate change, drylands, nutrients biomass, Sub-Saharan, Sudan

Procedia PDF Downloads 118
2346 A Machine Learning Approach for Intelligent Transportation System Management on Urban Roads

Authors: Ashish Dhamaniya, Vineet Jain, Rajesh Chouhan

Abstract:

Traffic management is one of the biggest issues on most urban roads in almost all metropolitan cities in India. Speed is one of the critical traffic parameters for effective Intelligent Transportation System (ITS) implementation, as it determines the arrival rate of vehicles at intersections, which are the major points of congestion. This study leverages Machine Learning (ML) models to produce precise predictions of speed on urban roadway links. The research objective was to assess how categorized traffic volume and road width, serving as variables, influence speed prediction. Four tree-based regression models, namely Decision Tree (DT), Random Forest (RF), Extra Trees (ET), and Extreme Gradient Boosting (XGB), are employed for this purpose. The models' performances were validated using test data, and the results demonstrate that Random Forest surpasses the other machine learning techniques and a conventional utility-theory-based model in speed prediction. The study is useful for managing urban roadway network performance under mixed traffic conditions and for effective implementation of ITS.
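
A minimal sketch of the model comparison, using synthetic data in place of the field measurements: the four tree-based regressors named above, evaluated on held-out data.

```python
# Minimal sketch: comparing DT, RF, ET, and XGB regressors for speed prediction.
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor, RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor
from xgboost import XGBRegressor

rng = np.random.default_rng(4)
n = 2000
volume = rng.uniform(500, 4000, n)      # categorized traffic volume, veh/h (placeholder)
road_width = rng.uniform(7.0, 14.5, n)  # carriageway width, m (placeholder)
speed = 60 - 0.008 * volume + 1.5 * road_width + rng.normal(0, 4, n)  # stream speed, km/h

X = np.column_stack([volume, road_width])
X_tr, X_te, y_tr, y_te = train_test_split(X, speed, test_size=0.2, random_state=0)

models = {
    "DT": DecisionTreeRegressor(max_depth=6, random_state=0),
    "RF": RandomForestRegressor(n_estimators=200, random_state=0),
    "ET": ExtraTreesRegressor(n_estimators=200, random_state=0),
    "XGB": XGBRegressor(n_estimators=200, max_depth=4, learning_rate=0.1),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name, "test R2:", round(r2_score(y_te, model.predict(X_te)), 3))
```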

Keywords: stream speed, urban roads, machine learning, traffic flow

Procedia PDF Downloads 71
2345 Shark Detection and Classification with Deep Learning

Authors: Jeremy Jenrette, Z. Y. C. Liu, Pranav Chimote, Edward Fox, Trevor Hastie, Francesco Ferretti

Abstract:

Suitable shark conservation depends on well-informed population assessments. Direct methods such as scientific surveys and fisheries monitoring are adequate for defining population statuses, but species-specific indices of abundance and distribution coming from these sources are rare for most shark species. We can rapidly fill these information gaps by boosting media-based remote monitoring efforts with machine learning and automation. We created a database of shark images by sourcing 24,546 images covering 219 species of sharks from the web application spark pulse and the social network Instagram. We used object detection to extract shark features and inflate this database to 53,345 images. We packaged object-detection and image classification models into a Shark Detector bundle. We developed the Shark Detector to recognize and classify sharks from videos and images using transfer learning and convolutional neural networks (CNNs). We applied these models to common data-generation approaches of sharks: boosting training datasets, processing baited remote camera footage and online videos, and data-mining Instagram. We examined the accuracy of each model and tested genus and species prediction correctness as a result of training data quantity. The Shark Detector located sharks in baited remote footage and YouTube videos with an average accuracy of 89%, and classified located subjects to the species level with 69% accuracy (n = 8 species). The Shark Detector sorted heterogeneous datasets of images sourced from Instagram with 91% accuracy and classified species with 70% accuracy (n = 17 species). Data-mining Instagram can inflate training datasets and increase the Shark Detector's accuracy as well as facilitate archiving of historical and novel shark observations. Base accuracy of genus prediction was 68% across 25 genera. The average base accuracy of species prediction within each genus class was 85%. The Shark Detector can classify 45 species. All data-generation methods were processed without manual interaction. As media-based remote monitoring strives to dominate methods for observing sharks in nature, we developed an open-source Shark Detector to facilitate common identification applications. Prediction accuracy of the software pipeline increases as more images are added to the training dataset. We provide public access to the software on our GitHub page.

Keywords: classification, data mining, Instagram, remote monitoring, sharks

Procedia PDF Downloads 122
2344 Intelligent Platform for Photovoltaic Park Operation and Maintenance

Authors: Andreas Livera, Spyros Theocharides, Michalis Florides, Charalambos Anastassiou

Abstract:

A main challenge in the quest for ensuring quality of operation, especially for photovoltaic (PV) systems, is to safeguard reliability and optimal performance by detecting and diagnosing potential failures and performance losses at an early stage, or before they occur, through real-time monitoring, supervision, fault detection, and predictive maintenance. The purpose of this work is to present the functionalities and results related to the development and validation of a software platform for PV asset diagnosis and maintenance. The platform brings together proprietary hardware sensors and software algorithms to enable the early detection and prediction of the most common and critical faults in PV systems. It was validated using field measurements from operating PV systems. The results showed the effectiveness of the platform for detecting faults and losses (e.g., inverter failures, string disconnections, and potential induced degradation) at early stages and forecasting PV power production, while also providing recommendations for maintenance actions. Increased PV energy yield and revenue can thus be achieved while minimizing operation and maintenance (O&M) costs.

Keywords: failure detection and prediction, operation and maintenance, performance monitoring, photovoltaic, platform, recommendations, predictive maintenance

Procedia PDF Downloads 52
2343 Optimal Design of RC Pier Accompanied with Multi Sliding Friction Damping Mechanism Using Combination of SNOPT and ANN Method

Authors: Angga S. Fajar, Y. Takahashi, J. Kiyono, S. Sawada

Abstract:

The structural system concept of an RC pier accompanied by a multi-sliding friction damping mechanism was developed based on a numerical analysis approach. In implementation, however, designing this kind of structural system requires considerable effort because of its high complexity. The design must account for the special behaviors of this structural system, including flexible small deformation, sufficient elastic deformation capacity, sufficient lateral force resistance, and sufficient energy dissipation, and the confinement distribution of the friction devices has a significant influence on these behaviors. Optimization and prediction with multi-function regression are expected to provide an easier and simpler design method for this structural system. The confinement distribution of the friction devices is optimized with SNOPT in OpenSees, while some design variables of the structure are predicted using multi-function regression with an ANN. Based on the optimization and prediction, this structural system can be designed easily and simply.

Keywords: RC Pier, multi sliding friction device, optimal design, flexible small deformation

Procedia PDF Downloads 367
2342 Prediction of Live Birth in a Matched Cohort of Elective Single Embryo Transfers

Authors: Mohsen Bahrami, Banafsheh Nikmehr, Yueqiang Song, Anuradha Koduru, Ayse K. Vuruskan, Hongkun Lu, Tamer M. Yalcinkaya

Abstract:

In recent years, we have witnessed an explosion of studies aimed at using a combination of artificial intelligence (AI) and time-lapse imaging data on embryos to improve IVF outcomes. However, despite promising results, no study has used a matched cohort of transferred embryos that differ only in pregnancy outcome, i.e., embryos from a single clinic that are similar in parameters such as morphokinetic condition, patient age, and overall clinic and lab performance. Here, we used time-lapse data on embryos with known pregnancy outcomes to see whether the rich spatiotemporal information embedded in these data allows prediction of the pregnancy outcome regardless of such critical parameters. Methodology: We performed a retrospective analysis of time-lapse data from our IVF clinic, which uses the Embryoscope exclusively for embryo culture to the blastocyst stage, with known clinical outcomes of live birth vs. not pregnant (embryos with spontaneous abortion outcomes were excluded). We used time-lapse data from 200 elective single-transfer embryos randomly selected from January 2019 to June 2021. Our sample included 100 embryos in each group, with no significant difference in patient age (P=0.9550) or morphokinetic scores (P=0.4032). Data from all patients were combined into a 4th-order tensor, and feature extraction was subsequently carried out by a tensor decomposition methodology. The features were then used in a machine learning classifier to classify the two groups. Major Findings: The performance of the model was evaluated using 100 random subsampling cross-validations (train 80%, test 20%). The prediction accuracy, averaged across the 100 permutations, exceeded 80%. We also performed a random grouping analysis, in which the labels (live birth, not pregnant) were randomly assigned to embryos, which yielded 50% accuracy. Conclusion: The high accuracy in the main analysis and the low accuracy in the random grouping analysis suggest a consistent spatiotemporal pattern associated with pregnancy outcomes, regardless of patient age and embryo morphokinetic condition, and beyond already known parameters such as early cleavage or early blastulation. Despite the small sample size, this ongoing analysis is the first to show the potential of AI methods to capture the complex morphokinetic changes embedded in embryo time-lapse data that contribute to successful pregnancy outcomes, regardless of already known parameters. Results on a larger sample size, with complementary analyses on the prediction of other key outcomes such as embryo euploidy and aneuploidy, will be presented at the meeting.
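
A minimal sketch of the analysis pipeline on random placeholder data (real inputs would be Embryoscope time-lapse frames): build a 4th-order tensor (embryo x height x width x time), extract per-embryo features with a CP/PARAFAC decomposition, and classify live birth vs. not pregnant; the logistic-regression classifier is a stand-in for whatever classifier the authors used.

```python
# Minimal sketch: 4th-order tensor decomposition features + a simple classifier.
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
n_embryos, h, w, t = 200, 16, 16, 30                  # tiny placeholder dimensions
tensor = tl.tensor(rng.random((n_embryos, h, w, t)))
labels = rng.integers(0, 2, n_embryos)                # 1 = live birth, 0 = not pregnant

# CP decomposition: factors[0] holds one feature vector per embryo.
weights, factors = parafac(tensor, rank=8, n_iter_max=100)
embryo_features = factors[0]                           # shape (n_embryos, rank)

clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, embryo_features, labels, cv=5)
print("cross-validated accuracy:", round(scores.mean(), 3))
```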

Keywords: IVF, embryo, machine learning, time-lapse imaging data

Procedia PDF Downloads 93
2341 Neural Network and Support Vector Machine for Prediction of Foot Disorders Based on Foot Analysis

Authors: Monireh Ahmadi Bani, Adel Khorramrouz, Lalenoor Morvarid, Bagheri Mahtab

Abstract:

Background: Foot disorders are common musculoskeletal problems, and plantar pressure distribution measurement is one of the most important parts of diagnosing them quantitatively. However, the association between plantar pressure and foot disorders is not clear. With the growth of datasets and machine learning methods, the relationship between foot disorders and plantar pressures can be detected. Significance of the study: The purpose of this study was to predict the probability of common foot disorders based on peak plantar pressure distribution and center of pressure during walking. Methodologies: 2,323 participants were assessed in a foot therapy clinic between 2015 and 2021. Foot disorders were diagnosed by an experienced physician, and the participants were then asked to walk on a force plate scanner. After data preprocessing, and because of differences in walking time and foot size, we normalized the samples based on time and foot size. Some of the force plate variables were selected as input to a deep neural network (DNN), and the probability of each foot disorder was estimated. In the next step, we used a support vector machine (SVM) and ran the dataset for each foot disorder (classification of yes or no). We compared the DNN and SVM for foot disorder prediction based on plantar pressure distributions and center of pressure. Findings: The results demonstrated that the accuracy of the deep learning architecture is sufficient for most clinical and research applications in the study population. In addition, the SVM approach was more accurate, enabling applications in foot disorder diagnosis: the detection accuracy was 71% for the deep learning algorithm and 78% for the SVM algorithm. Moreover, working with the peak plantar pressure distribution was more accurate than using the center-of-pressure dataset. Conclusion: Both algorithms, deep learning and SVM, will help therapists and patients to improve the data pool and enhance foot disorder prediction with less expense and error once some restrictions are properly removed.

Keywords: deep neural network, foot disorder, plantar pressure, support vector machine

Procedia PDF Downloads 359
2340 Uncertainty in Building Energy Performance Analysis at Different Stages of the Building’s Lifecycle

Authors: Elham Delzendeh, Song Wu, Mustafa Al-Adhami, Rima Alaaeddine

Abstract:

Over the last 15 years, prediction of energy consumption has become a common practice and necessity at different stages of the building’s lifecycle, particularly, at the design and post-occupancy stages for planning and maintenance purposes. This is due to the ever-growing response of governments to address sustainability and reduction of CO₂ emission in the building sector. However, there is a level of uncertainty in the estimation of energy consumption in buildings. The accuracy of energy consumption predictions is directly related to the precision of the initial inputs used in the energy assessment process. In this study, multiple cases of large non-residential buildings at design, construction, and post-occupancy stages are investigated. The energy consumption process and inputs, and the actual and predicted energy consumption of the cases are analysed. The findings of this study have pointed out and evidenced various parameters that cause uncertainty in the prediction of energy consumption in buildings such as modelling, location data, and occupant behaviour. In addition, unavailability and insufficiency of energy-consumption-related inputs at different stages of the building’s lifecycle are classified and categorized. Understanding the roots of uncertainty in building energy analysis will help energy modellers and energy simulation software developers reach more accurate energy consumption predictions in buildings.

Keywords: building lifecycle, efficiency, energy analysis, energy performance, uncertainty

Procedia PDF Downloads 139
2339 Improve Safety Performance of Un-Signalized Intersections in Oman

Authors: Siham G. Farag

Abstract:

The main objective of this paper is to provide a new methodology for road safety assessment in Oman through the development of suitable accident prediction models. The GLM technique with Poisson or negative binomial regression (NBR), implemented in the SAS package, was used to develop these models. The paper utilized accident data from 31 un-signalized T-intersections over three years. Five goodness-of-fit measures were used to assess the overall quality of the developed models. Two types of models were developed separately: flow-based models including only traffic exposure functions, and full models containing both exposure functions and other significant geometry and traffic variables. The results show that traffic exposure functions produced a much better fit to the accident data. The most effective geometric variables were major-road mean speed, minor-road 85th percentile speed, major-road lane width, distance to the nearest junction, and right-turn curb radius. The developed models can be used for intersection treatment or upgrading and to specify the appropriate design parameters of T-intersections. Finally, the models presented in this paper reflect intersection conditions in Oman and could represent typical conditions in several countries in the Middle East, especially the Gulf countries.
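
A minimal sketch of the modelling setup with synthetic intersection data (variable names are illustrative): GLM accident prediction models with Poisson and negative binomial error structures, here fitted with statsmodels rather than SAS.

```python
# Minimal sketch: flow-based and full GLM accident prediction models.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 31                                          # un-signalized T-intersections
df = pd.DataFrame({
    "major_aadt": rng.uniform(2000, 20000, n),  # major-road traffic exposure (placeholder)
    "minor_aadt": rng.uniform(200, 4000, n),
    "major_speed": rng.uniform(40, 90, n),      # major-road mean speed, km/h (placeholder)
    "lane_width": rng.uniform(3.0, 3.8, n),
})
mu = np.exp(-6 + 0.7 * np.log(df["major_aadt"]) + 0.3 * np.log(df["minor_aadt"]))
df["accidents"] = rng.poisson(mu)               # 3-year accident counts (synthetic)

# Flow-based model: exposure functions only, Poisson errors.
X_flow = sm.add_constant(np.log(df[["major_aadt", "minor_aadt"]]))
poisson_flow = sm.GLM(df["accidents"], X_flow, family=sm.families.Poisson()).fit()
print(poisson_flow.summary())

# Full model: exposure plus geometry/traffic variables, negative binomial errors.
X_full = sm.add_constant(pd.concat([np.log(df[["major_aadt", "minor_aadt"]]),
                                    df[["major_speed", "lane_width"]]], axis=1))
nb_full = sm.GLM(df["accidents"], X_full, family=sm.families.NegativeBinomial()).fit()
print(nb_full.params)
```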

Keywords: accidents prediction models (APMs), generalized linear model (GLM), T-intersections, Oman

Procedia PDF Downloads 273
2338 Optimizing E-commerce Retention: A Detailed Study of Machine Learning Techniques for Churn Prediction

Authors: Saurabh Kumar

Abstract:

In the fiercely competitive landscape of e-commerce, understanding and mitigating customer churn has become paramount for sustainable business growth. This paper presents a thorough investigation into the application of machine learning techniques for churn prediction in e-commerce, aiming to provide actionable insights for businesses seeking to enhance customer retention strategies. We conduct a comparative study of various machine learning algorithms, including traditional statistical methods and ensemble techniques, leveraging a rich dataset sourced from Kaggle. Through rigorous evaluation, we assess the predictive performance, interpretability, and scalability of each method, elucidating their respective strengths and limitations in capturing the intricate dynamics of customer churn. We identified the XGBoost classifier to be the best performing. Our findings not only offer practical guidelines for selecting suitable modeling approaches but also contribute to the broader understanding of customer behavior in the e-commerce domain. Ultimately, this research equips businesses with the knowledge and tools necessary to proactively identify and address churn, thereby fostering long-term customer relationships and sustaining competitive advantage.
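
A minimal sketch of the churn-modelling step with synthetic features standing in for the Kaggle dataset: training and evaluating an XGBoost classifier, the best performer identified in the study.

```python
# Minimal sketch: XGBoost churn classifier with a hold-out AUC evaluation.
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

rng = np.random.default_rng(7)
n = 5000
tenure = rng.exponential(12, n)          # months as a customer (placeholder)
orders = rng.poisson(5, n)               # orders in the last year (placeholder)
complaints = rng.poisson(0.3, n)
churn = (rng.random(n) < 1 / (1 + np.exp(0.1 * tenure + 0.3 * orders - 1.2 * complaints))).astype(int)

X = np.column_stack([tenure, orders, complaints])
X_tr, X_te, y_tr, y_te = train_test_split(X, churn, test_size=0.2, random_state=0)

clf = XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.05, eval_metric="logloss")
clf.fit(X_tr, y_tr)
print("test AUC:", round(roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]), 3))
```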

Keywords: customer churn, e-commerce, machine learning techniques, predictive performance, sustainable business growth

Procedia PDF Downloads 32
2337 Recurrence of Pterygium after Surgery and the Effect of Surgical Technique on the Recurrence of Pterygium in Patients with Pterygium

Authors: Luksanaporn Krungkraipetch

Abstract:

A pterygium is an ocular surface lesion that begins in the limbal conjunctiva and progresses onto the cornea. The lesion is more common at the nasal limbus than the temporal, and it has a distinctive wing-like appearance. Indications for surgery, in decreasing order of significance, are growth over the corneal center, decreased vision due to corneal deformation, documented growth, sensations of discomfort, and aesthetic concerns. Recurrent pterygium results in lost time, the expense of therapy, and the potential for vision impairment. The objective of this study is to determine how often pterygium recurs after surgery, what effect the surgical technique has, and what factors cause recurrence in patients with pterygium. Materials and Methods: This was a retrospective observational case-control study involving 164 patient samples. Descriptive statistics were used to summarize the pterygium surgeries and the risk of recurrent pterygium, and inferential statistics, namely the odds ratio (OR) with 95% confidence interval (CI) and ANOVA, were used for factor analysis. A p-value of 0.05 was deemed statistically significant. Results: The majority of patients were female (60.4%). Twenty-four of the 164 patients who underwent surgery (14.6%) exhibited recurrent pterygium. The average age was 55.33 years. Postoperative recurrence was reported in 19 cases (79.2%) treated with the bare sclera technique and five cases (20.8%) treated with the conjunctival autograft technique. The mean recurrence interval was 10.25 months, most commonly (54.17%) at 12 months. Follow-up was completed in 91.67% of cases. The most common recurrence grade was 1 (25%). The observed surgical complication was subconjunctival hemorrhage (33.33%). Comparing the surgical techniques among patients with recurrent pterygium showed no significant difference (F = 1.13, p = 0.339). Age significantly affected the recurrence of pterygium (OR = 20.78, 95% CI 6.79-63.56, p < 0.001). Conclusion: This study found a 14.6% rate of pterygium recurrence after surgery. Across all surgeries and patients, the recurrence rate was four times higher with the bare sclera method than with the conjunctival autograft. The researchers advise selecting a more conventional surgical technique to avoid recurrence.
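
A worked sketch of the odds-ratio calculation used in the risk-factor analysis; the 2x2 counts below are hypothetical and are not the study's data.

```python
# Minimal sketch: odds ratio with a 95% confidence interval from a 2x2 table (Woolf method).
import math

# Rows: exposed (older age group) / unexposed; columns: recurrence yes / no.
a, b = 18, 30     # exposed with / without recurrence (hypothetical counts)
c, d = 6, 110     # unexposed with / without recurrence (hypothetical counts)

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # standard error of ln(OR)
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.2f}, 95% CI {lo:.2f}-{hi:.2f}")
```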

Keywords: pterygium, recurrence pterygium, pterygium surgery, excision pterygium

Procedia PDF Downloads 90
2336 Traffic Congestions Modeling and Predictions by Social Networks

Authors: Bojan Najdenov, Danco Davcev

Abstract:

Reducing traffic congestion, along with the pollution and waste of resources that come with it, has been a big challenge in recent decades. Reliable systems that facilitate the modeling and prediction of traffic conditions would not only reduce environmental pollution but also save people time and money. Social networks play a big role in people's lives nowadays, providing means of communicating and sharing thoughts and ideas, and thereby generating huge knowledge bases through crowdsourcing. In addition, crowdsourcing as a concept provides mechanisms for fast and relatively reliable data generation, and many services are used on a regular basis precisely because they are powered mainly by the public as the content providers. In this paper we present the Social-NETS-Traffic-Control System (SNTCS), which serves as a facilitator in the process of modeling and predicting traffic congestion. The main contribution of our system is the integration of data from social networks such as Twitter, together with a custom crowdsourcing subsystem through which users report traffic conditions using an Android application. Our first experience with the system confirms that the integrated approach allows easy extension to other social networks and represents a very useful tool for traffic control.

Keywords: traffic, congestion reduction, crowdsource, social networks, twitter, android

Procedia PDF Downloads 483
2335 An Approach for Pattern Recognition and Prediction of Information Diffusion Model on Twitter

Authors: Amartya Hatua, Trung Nguyen, Andrew Sung

Abstract:

In this paper, we study the information diffusion process on Twitter as a multivariate time series problem. Our model concerns three measures (volume, network influence, and sentiment of tweets) based on 10 features, and we collected 27 million tweets to build our information diffusion time series dataset for analysis. Different time series clustering techniques with Dynamic Time Warping (DTW) distance were then used to identify different patterns of information diffusion. Finally, we built information diffusion prediction models for new hashtags comprising two phases: the first phase recognizes the pattern using k-NN with DTW distance; the second phase builds the forecasting model using the traditional Autoregressive Integrated Moving Average (ARIMA) model and the non-linear recurrent neural network of Long Short-Term Memory (LSTM). Preliminary performance evaluation of the different forecasting models shows that LSTM with clustering information notably outperforms the other models. Therefore, our approach can be applied in real-world applications to analyze and predict the information diffusion characteristics of selected topics or memes (hashtags) on Twitter.
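
A minimal sketch of the first phase, pattern recognition for a new hashtag by nearest neighbour under Dynamic Time Warping; the reference patterns and the new series are random placeholders for the volume/influence/sentiment measures used in the paper.

```python
# Minimal sketch: 1-NN pattern matching with a classic DTW distance.
import numpy as np

def dtw_distance(a, b):
    """O(len(a)*len(b)) dynamic time warping distance between two 1-D series."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

rng = np.random.default_rng(8)
# Reference diffusion patterns (e.g., cluster centroids of tweet volume over 48 hours).
patterns = {
    "burst": np.exp(-0.3 * np.arange(48)) * 100,
    "slow_build": np.linspace(0, 100, 48),
    "flat": np.full(48, 20.0),
}
new_hashtag = np.exp(-0.28 * np.arange(48)) * 90 + rng.normal(0, 3, 48)

best = min(patterns, key=lambda k: dtw_distance(new_hashtag, patterns[k]))
print("closest diffusion pattern:", best)  # the matched pattern selects the forecasting model
```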

Keywords: ARIMA, DTW, information diffusion, LSTM, RNN, time series clustering, time series forecasting, Twitter

Procedia PDF Downloads 392
2334 Spillage Prediction Using Fluid-Structure Interaction Simulation with Coupled Eulerian-Lagrangian Technique

Authors: Ravi Soni, Irfan Pathan, Manish Pande

Abstract:

The current product development process needs simultaneous consideration of different physics. The performance of the product needs to be considered under both structural and fluid loads. Examples include ducts and valves where structural behavior affects fluid motion and vice versa. Simulation of fluid-structure interaction involves modeling interaction between moving components and the fluid flow. In these scenarios, it is difficult to calculate the damping provided by fluid flow because of dynamic motions of components and the transient nature of the flow. Abaqus Explicit offers general capabilities for modeling fluid-structure interaction with the Coupled Eulerian-Lagrangian (CEL) method. The Coupled Eulerian-Lagrangian technique has been used to simulate fluid spillage through fuel valves during dynamic closure events. The technique to simulate pressure drops across Eulerian domains has been developed using stagnation pressure. Also, the fluid flow is calculated considering material flow through elements at the outlet section of the valves. The methodology has been verified on Eaton products and shows a good correlation with the test results.

Keywords: Coupled Eulerian-Lagrangian Technique, fluid structure interaction, spillage prediction, stagnation pressure

Procedia PDF Downloads 380
2333 A Predictive Model for Turbulence Evolution and Mixing Using Machine Learning

Authors: Yuhang Wang, Jorg Schluter, Sergiy Shelyag

Abstract:

The high cost associated with high-resolution computational fluid dynamics (CFD) is one of the main challenges that inhibit the design, development, and optimisation of new combustion systems adapted for renewable fuels. In this study, we propose a physics-guided CNN-based model to predict turbulence evolution and mixing without requiring a traditional CFD solver. The model architecture is built upon U-Net and the inception module, while a physics-guided loss function is designed by introducing two additional physical constraints to allow for the conservation of both mass and pressure over the entire predicted flow fields. Then, the model is trained on the Large Eddy Simulation (LES) results of a natural turbulent mixing layer with two different Reynolds number cases (Re = 3000 and 30000). As a result, the model prediction shows an excellent agreement with the corresponding CFD solutions in terms of both spatial distributions and temporal evolution of turbulent mixing. Such promising model prediction performance opens up the possibilities of doing accurate high-resolution manifold-based combustion simulations at a low computational cost for accelerating the iterative design process of new combustion systems.
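
A minimal sketch (in PyTorch, with illustrative tensor shapes) of a physics-guided loss of the kind described above: a data term plus penalties enforcing conservation of mass and pressure over the predicted fields. The global-sum constraints here merely stand in for the paper's exact formulation.

```python
# Minimal sketch: data loss plus mass- and pressure-conservation penalties.
import torch

def physics_guided_loss(pred, target, lambda_mass=0.1, lambda_pressure=0.1):
    # pred/target: (batch, channels, H, W) with channel 0 = density, channel 1 = pressure.
    data_term = torch.mean((pred - target) ** 2)
    mass_term = torch.mean((pred[:, 0].sum(dim=(1, 2)) - target[:, 0].sum(dim=(1, 2))) ** 2)
    pressure_term = torch.mean((pred[:, 1].sum(dim=(1, 2)) - target[:, 1].sum(dim=(1, 2))) ** 2)
    return data_term + lambda_mass * mass_term + lambda_pressure * pressure_term

# Toy check with random tensors in place of CNN outputs and LES snapshots.
pred = torch.rand(4, 2, 64, 64, requires_grad=True)
target = torch.rand(4, 2, 64, 64)
loss = physics_guided_loss(pred, target)
loss.backward()
print(float(loss))
```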

Keywords: computational fluid dynamics, turbulence, machine learning, combustion modelling

Procedia PDF Downloads 92
2332 The Prediction of Reflection Noise and Its Reduction by Shaped Noise Barriers

Authors: I. L. Kim, J. Y. Lee, A. K. Tekile

Abstract:

As a consequence of Korea's very high urbanization rate, the number of traffic noise damage cases in areas congested with population and facilities is steadily increasing, and current environmental noise levels in the country's major cities exceed the standards set for both day and night. This research is a comparative analysis in search of the optimal soundproof panel shape and design factors that minimize reflected noise. In addition to the normal flat-type panel, the reflection noise reduction of swelling-type, combined swelling-and-curved-type, and screen-type panels was evaluated. The noise source model Nord 2000, which often provides more detailed information than models for similar purposes, was used to determine the overall noise level. Based on the vehicle categorization used in Korea, the noise levels for varying frequencies from different source heights (the directivity heights of the Harmonoise model) were calculated for simulation, with each simulation using the ray-tracing method. The noise level was also calculated with the noise prediction program SoundPlan 7.2 for comparison. Noise level predictions were made at receiving points 15 m (R1) and 30 m (R2) from the barrier and at 2 m in the middle of the road (R3). When the noise barriers were designed by shape and the prediction program was run with the noise source placed on the 2nd of the 6 lanes considered, on the noise barrier side, the reflected noise slightly decreased or increased for all noise barriers. At R1, the screen-type barriers showed no reduction effect under any condition, whereas the swelling-type showed a decrease of 0.7-1.2 dB at R1, the best reduction effect among the tested barriers. Compared to the other forms of noise barriers, the swelling-type is therefore thought to be the most suitable for reducing reflected noise; however, since a slight increase was predicted at R2, further research based on a more refined categorization of the related design factors is necessary. Moreover, swellings are difficult to produce and the modules are smaller than other panels, so swelling-type noise barriers are challenging to install. If these problems are solved, their applicable region will not be more limited than that of other types of noise barriers. When a swelling-type noise barrier is installed in a downtown area where traffic is increasing every day, it will both preserve visibility through its transparent walls and reduce noise pollution due to reflection. Moreover, when decorated with shapes and designs, such noise barriers are more visually attractive than flat-type ones and thus alleviate the psychological burden of noise, beyond the physical soundproofing function of the panels.

Keywords: reflection noise, shaped noise barriers, sound proof panel, traffic noise

Procedia PDF Downloads 509
2331 Using Soil Texture Field Observations as Ordinal Qualitative Variables for Digital Soil Mapping

Authors: Anne C. Richer-De-Forges, Dominique Arrouays, Songchao Chen, Mercedes Roman Dobarco

Abstract:

Most digital soil mapping (DSM) products rely on machine learning (ML) prediction models and/or on pedotransfer functions (PTF) whose calibration data come from laboratory soil analyses. However, many other observations (often qualitative, nominal, or ordinal) could be used as proxies for lab measurements or as input data for ML or PTF predictions. DSM and ML are briefly described with some examples taken from the literature. We then explore the potential of an ordinal qualitative variable, the hand-feel soil texture (HFST), which estimates the mineral particle-size distribution (PSD), i.e., the percentages of clay (0-2µm), silt (2-50µm) and sand (50-2000µm), in 15 classes. The PSD can also be measured in the laboratory (LAST) to determine the exact proportions of these particle sizes; however, due to cost constraints, HFST observations are far more numerous and spatially dense than LAST. Soil texture (ST) is a very important soil parameter to map, as it controls many soil properties and functions. An essential question therefore arises: is it possible to use HFST as a proxy for LAST in the calibration and/or validation of DSM predictions of ST? To answer this question, the first step is to compare HFST with LAST on a representative set where both are available. This comparison was made on ca. 17,400 samples representative of a French region (34,000 km2). The accuracy of HFST was assessed, and each HFST class was characterized by a probability distribution function (PDF) of its LAST values. This makes it possible to randomly replace HFST observations with LAST values that respect the previously calculated PDF, resulting in a very large increase in the observations available for calibrating or validating PTF and ML predictions. Some preliminary results are shown. First, the comparison between HFST classes and LAST analyses showed accuracies that can be considered very good compared with other studies; the causes of some inconsistencies were explored, and most were well explained by other soil characteristics. We then show examples applying these relationships, and the resulting increase in data, to several DSM issues. The first issue is whether the established PDFs enable HFST class observations to improve the prediction of LAST soil texture. For this purpose, we replaced all topsoil HFST observations with values drawn from the PDFs (100 replicates), and the results were promising for the PTF we tested (a PTF predicting soil water-holding capacity). For ML prediction of LAST soil texture across the region, we made the same kind of replacement but implemented a 10-fold cross-validation using points where LAST values were available; the results are preliminary but rather promising. We then show another example illustrating the potential of using HFST as validation data. As in numerous countries, HFST observations are abundant; these promising results pave the way for an important improvement of DSM products in all the countries of the world.
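
A minimal sketch of the replacement step described above: each hand-feel texture (HFST) class is given an empirical distribution of lab-measured (LAST) values, and HFST-only observations are replaced by random draws from that distribution. The numbers are illustrative, not the French calibration data.

```python
# Minimal sketch: stochastic replacement of an ordinal class by draws from its LAST PDF.
import numpy as np

rng = np.random.default_rng(9)

# Empirical LAST clay contents (%) collected for each HFST class on the paired dataset.
last_by_hfst_class = {
    "sandy_loam": np.array([8, 10, 12, 14, 15, 17]),
    "loam":       np.array([18, 20, 22, 24, 26]),
    "clay_loam":  np.array([28, 30, 33, 36, 38, 40]),
}

def sample_last(hfst_class, n_replicates=100):
    """Draw plausible LAST values for an HFST-only observation (bootstrap from the class PDF)."""
    values = last_by_hfst_class[hfst_class]
    return rng.choice(values, size=n_replicates, replace=True)

# One HFST-only topsoil observation becomes 100 stochastic LAST replicates,
# which can then feed PTF calibration or ML training/validation.
replicates = sample_last("clay_loam")
print(replicates.mean(), replicates.std())
```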

Keywords: digital soil mapping, improvement of digital soil mapping predictions, potential of using hand-feel soil texture, soil texture prediction

Procedia PDF Downloads 226
2330 Prediction of Coronary Artery Stenosis Severity Based on Machine Learning Algorithms

Authors: Yu-Jia Jian, Emily Chia-Yu Su, Hui-Ling Hsu, Jian-Jhih Chen

Abstract:

The coronary arteries are the major suppliers of myocardial blood flow. When fat and cholesterol are deposited in the coronary arterial wall, narrowing and stenosis of the artery occur, which may lead to myocardial ischemia and eventually infarction. According to the World Health Organization (WHO), an estimated 7.4 million people died of coronary heart disease in 2015. According to statistics from the Ministry of Health and Welfare in Taiwan, heart disease (excluding hypertensive diseases) ranked second among the top 10 causes of death from 2013 to 2016, and it still shows a growing trend. According to the American Heart Association (AHA), the risk factors for coronary heart disease include age (> 65 years), sex (men to women at a 2:1 ratio), obesity, diabetes, hypertension, hyperlipidemia, smoking, family history, lack of exercise, and more. We collected a dataset of 421 patients from a hospital in northern Taiwan who received coronary computed tomography (CT) angiography. There were 300 males (71.26%) and 121 females (28.74%), with ages ranging from 24 to 92 years and a mean age of 56.3 years. Prior to coronary CT angiography, basic data of the patients, including age, gender, body mass index (BMI), diastolic blood pressure, systolic blood pressure, diabetes, hypertension, hyperlipidemia, smoking, family history of coronary heart disease, and exercise habits, were collected and used as input variables. The output variable of the prediction module is the degree of coronary artery stenosis. In this study, the dataset was randomly divided into 80% as the training set and 20% as the test set. Four machine learning algorithms (logistic regression, stepwise regression, neural network, and decision tree) were incorporated to generate prediction results. We used the area under the curve (AUC) and accuracy (Acc.) to compare the four models; the best model was the neural network, followed by stepwise logistic regression, decision tree, and logistic regression, with 0.68 / 79%, 0.68 / 74%, 0.65 / 78%, and 0.65 / 74%, respectively. The sensitivity of the neural network was 27.3% and its specificity 90.8%; stepwise logistic regression had a sensitivity of 18.2% and a specificity of 92.3%; the decision tree had a sensitivity of 13.6% and a specificity of 100%; and logistic regression had a sensitivity of 27.3% and a specificity of 89.2%. Based on the results of this study, we hope to improve accuracy in the future by refining the model parameters or other methods, and to address the low sensitivity by adjusting the imbalanced proportion of positive and negative data.
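
A minimal sketch of the evaluation protocol on random placeholder data standing in for the 421-patient dataset: an 80/20 split, then AUC, sensitivity, and specificity for a neural network, a decision tree, and logistic regression.

```python
# Minimal sketch: 80/20 split and AUC / sensitivity / specificity for three classifiers.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(10)
n = 421
age = rng.normal(56, 12, n)
bmi = rng.normal(25, 4, n)
sbp = rng.normal(130, 15, n)
stenosis = (rng.random(n) < 1 / (1 + np.exp(-(0.05 * (age - 56) + 0.03 * (sbp - 130) - 0.5)))).astype(int)

X = np.column_stack([age, bmi, sbp])
X_tr, X_te, y_tr, y_te = train_test_split(X, stenosis, test_size=0.2, random_state=0)

models = {
    "neural network": MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
    "decision tree": DecisionTreeClassifier(max_depth=4, random_state=0),
    "logistic regression": LogisticRegression(max_iter=1000),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    tn, fp, fn, tp = confusion_matrix(y_te, pred).ravel()
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: AUC={auc:.2f} sensitivity={tp/(tp+fn):.2f} specificity={tn/(tn+fp):.2f}")
```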

Keywords: decision support, computed tomography, coronary artery, machine learning

Procedia PDF Downloads 229
2329 Machine Learning Approaches to Water Usage Prediction in Kocaeli: A Comparative Study

Authors: Kasim Görenekli, Ali Gülbağ

Abstract:

This study presents a comprehensive analysis of water consumption patterns in Kocaeli province, Turkey, utilizing various machine learning approaches. We analyzed data from 5,000 water subscribers across residential, commercial, and official categories over an 80-month period from January 2016 to August 2022, resulting in a total of 400,000 records. The dataset encompasses water consumption records, weather information, weekends and holidays, previous months' consumption, and the influence of the COVID-19 pandemic. We implemented and compared several machine learning models, including Linear Regression, Random Forest, Support Vector Regression (SVR), XGBoost, Artificial Neural Networks (ANN), Long Short-Term Memory (LSTM), and Gated Recurrent Units (GRU). Particle Swarm Optimization (PSO) was applied to optimize hyperparameters for all models. Our results demonstrate varying performance across subscriber types and models. For official subscribers, Random Forest achieved the highest R² of 0.699 with PSO optimization. For commercial subscribers, Linear Regression performed best with an R² of 0.730 with PSO. Residential water usage proved more challenging to predict, with XGBoost achieving the highest R² of 0.572 with PSO. The study identified key factors influencing water consumption, with previous months' consumption, meter diameter, and weather conditions being among the most significant predictors. The impact of the COVID-19 pandemic on consumption patterns was also observed, particularly in residential usage. This research provides valuable insights for effective water resource management in Kocaeli and similar regions, considering Turkey's high water loss rate and below-average per capita water supply. The comparative analysis of different machine learning approaches offers a comprehensive framework for selecting appropriate models for water consumption prediction in urban settings.

Keywords: machine learning, water consumption prediction, particle swarm optimization, COVID-19, water resource management

Procedia PDF Downloads 19
2328 Stock Prediction and Portfolio Optimization Thesis

Authors: Deniz Peksen

Abstract:

This thesis aims to predict the trend movement of stock closing prices and to maximize a portfolio by utilizing the predictions. In this context, the study defines a stock portfolio strategy from models created using logistic regression, gradient boosting, and random forest. Predicting the trend of a stock price has recently gained a significant role in making buy and sell decisions and in generating returns with investment strategies based on machine learning decisions. There are plenty of studies in the literature on predicting stock prices in capital markets using machine learning methods, but most of them focus on closing prices instead of the direction of the price trend. Our study differs from the literature in terms of the target definition: ours is a classification problem focusing on the market trend over the next 20 trading days. To predict the trend direction, fourteen years of data were used for training, the following three years for validation, and the last three years for testing (training data: 2002-06-18 to 2016-12-30; validation data: 2017-01-02 to 2019-12-31; testing data: 2020-01-02 to 2022-03-17). We chose the hold-stock portfolio, the best-stock portfolio, and the USD-TRY exchange rate as benchmarks to outperform, and compared the test-period return of our machine-learning-based portfolio with the returns of these benchmarks. We assessed model performance with ROC-AUC scores and lift charts, and used logistic regression, gradient boosting, and random forest with a grid search approach to fine-tune hyperparameters. As a result of the empirical study, the existence of uptrends and downtrends in five stocks could not be predicted by the models. When these predictions were used to define buy and sell decisions for a model-based portfolio, the portfolio failed on the test dataset: model-based buy and sell decisions produced a stock portfolio strategy whose returns could not outperform the non-model portfolio strategies. We found that any effort to predict a trend formulated on the stock price is a challenge, and our results agree with the Random Walk Theory, which claims that stock prices and price changes are unpredictable. Although we built several good models on the validation dataset, our model iterations failed on the test dataset. We implemented random forest, gradient boosting, and logistic regression and discovered that the more complex models did not provide any advantage or additional performance compared with logistic regression; more complexity did not lead to better performance, and using a complex model is not the answer to the stock prediction problem. Our approach was to predict the trend instead of the price, which converted the problem into classification; however, this labeling approach did not solve the stock prediction problem either, nor did it refute the Random Walk Theory for stock prices.
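
A minimal sketch of the target definition and model comparison described above, on synthetic prices with illustrative features: the label asks whether the closing price rises over the next 20 trading days, and logistic regression, gradient boosting, and random forest are compared by ROC-AUC on a chronological split.

```python
# Minimal sketch: 20-day trend labels and a three-model comparison on a chronological split.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(11)
close = pd.Series(100 * np.exp(np.cumsum(rng.normal(0, 0.01, 3000))))  # synthetic closing prices

horizon = 20
label = (close.shift(-horizon) > close).astype(int)   # 1 = uptrend over the next 20 trading days
features = pd.DataFrame({
    "ret_5": close.pct_change(5),                     # illustrative momentum features
    "ret_20": close.pct_change(20),
    "vol_20": close.pct_change().rolling(20).std(),
})
data = pd.concat([features, label.rename("y")], axis=1)
data = data.iloc[:-horizon].dropna()                  # drop rows without full future/feature windows

split = int(len(data) * 0.8)                          # chronological train/test split
X_tr, y_tr = data.iloc[:split, :-1], data.iloc[:split, -1]
X_te, y_te = data.iloc[split:, :-1], data.iloc[split:, -1]

for name, model in [("logistic regression", LogisticRegression(max_iter=1000)),
                    ("gradient boosting", GradientBoostingClassifier()),
                    ("random forest", RandomForestClassifier(n_estimators=200, random_state=0))]:
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(name, "ROC-AUC:", round(auc, 3))
```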

Keywords: stock prediction, portfolio optimization, data science, machine learning

Procedia PDF Downloads 81
2327 Analysis of Residents’ Travel Characteristics and Policy Improving Strategies

Authors: Zhenzhen Xu, Chunfu Shao, Shengyou Wang, Chunjiao Dong

Abstract:

To improve the satisfaction of residents' travel, this paper analyzes the characteristics and influencing factors of urban residents' travel behavior. First, a Multinomial Logit (MNL) model is built to analyze the characteristics of residents' travel behavior, reveal the influence of individual attributes, family attributes, and travel characteristics on the choice of travel mode, and identify the significant factors; suggestions for policy improvement are then put forward. Finally, Support Vector Machine (SVM) and Multi-Layer Perceptron (MLP) models are introduced to evaluate the policy effect. Futian Street in Futian District, Shenzhen, was selected for investigation. The results show that gender, age, education, income, number of cars owned, travel purpose, departure time, journey time, travel distance, and travel frequency all have a significant influence on residents' choice of travel mode. Based on these results, two policy improvement suggestions are put forward aimed at reducing public transport and non-motorized travel times, and their effects are evaluated. Before the evaluation, the prediction performance of the MNL, SVM, and MLP models was assessed; after parameter optimization, their prediction accuracies were 72.80%, 71.42%, and 76.42%, respectively. The MLP model, which had the highest prediction accuracy, was selected to evaluate the effect of the policy improvements. The results show that after implementation of the policies, the share of public transport in Plan 1 and Plan 2 increased by 14.04% and 9.86%, respectively, while the share of private cars decreased by 3.47% and 2.54%, respectively. The proportion of car trips decreased markedly, while the proportion of public transport trips increased. The measures can therefore be considered to have a positive effect on promoting green travel and improving the satisfaction of urban residents, and they can provide a reference for relevant departments when formulating transportation policies.
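
A minimal sketch of the MNL step with synthetic survey records (not the Futian Street data): a multinomial logit model of travel-mode choice estimated with statsmodels.

```python
# Minimal sketch: multinomial logit model of travel-mode choice.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(12)
n = 1000
df = pd.DataFrame({
    "age": rng.integers(18, 70, n),
    "income": rng.integers(1, 6, n),        # income band (placeholder)
    "cars_owned": rng.integers(0, 3, n),
    "distance_km": rng.uniform(0.5, 25, n),
})
# 0 = walk/bike, 1 = public transport, 2 = private car (synthetic choice rule + noise).
utility_car = 0.8 * df["cars_owned"] + 0.05 * df["distance_km"] + rng.normal(0, 1, n)
utility_pt = 0.08 * df["distance_km"] + rng.normal(0, 1, n)
df["mode"] = np.select([utility_car > np.maximum(utility_pt, 1.0), utility_pt > 1.0], [2, 1], default=0)

X = sm.add_constant(df[["age", "income", "cars_owned", "distance_km"]])
model = sm.MNLogit(df["mode"], X).fit(disp=False)
print(model.summary())
print(model.predict(X)[:5])   # predicted choice probabilities per mode
```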

Keywords: neural network, travel characteristics analysis, transportation choice, travel sharing rate, traffic resource allocation

Procedia PDF Downloads 139
2326 Electroencephalogram Based Approach for Mental Stress Detection during Gameplay with Level Prediction

Authors: Priyadarsini Samal, Rajesh Singla

Abstract:

Many mobile games entertain while also introducing stress to the human brain. In recognizing this mental stress, the brain-computer interface (BCI) plays an important role, offering various neuroimaging approaches that help in analyzing brain signals. Electroencephalography (EEG) is the most commonly used method among them, as it is non-invasive, portable, and economical. This paper investigates the patterns in brain signals when mental stress is introduced. Two healthy volunteers played a game whose aim was to find hidden words in a grid, with levels chosen randomly, and the EEG signals during gameplay were recorded to investigate the impact of stress as the level changed from easy to medium to hard. A total of 16 EEG features were analyzed for this experiment, including power band features with relative powers and event-related desynchronization, along with statistical features. A support vector machine was used as the classifier, which resulted in an accuracy of 93.9% for three-level stress analysis; for two levels, accuracies of 92% and 98% were achieved. In addition, another game of a similar nature was played by the volunteers, and a suitable regression model was designed for level prediction in which the feature sets of the first and second games were used for testing and training, respectively, achieving an accuracy of 73%.
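
A minimal sketch of the feature/classifier combination described above, using Welch band-power features and an SVM on simulated single-channel epochs; real inputs would be the recorded gameplay EEG, and the paper's 16-feature set is reduced here to three relative band powers.

```python
# Minimal sketch: relative EEG band powers + SVM for three-level stress classification.
import numpy as np
from scipy.signal import welch
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(13)
fs = 256                                  # sampling rate, Hz
bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(epoch):
    """Relative power in each EEG band from Welch's power spectral density."""
    freqs, psd = welch(epoch, fs=fs, nperseg=fs * 2)
    total = psd.sum()
    return [psd[(freqs >= lo) & (freqs < hi)].sum() / total for lo, hi in bands.values()]

# Simulated 4-second epochs for three stress levels (easy / medium / hard).
epochs, labels = [], []
for level in range(3):
    for _ in range(40):
        t = np.arange(0, 4, 1 / fs)
        sig = np.sin(2 * np.pi * (10 - 2 * level) * t) + (0.5 + 0.3 * level) * rng.normal(0, 1, t.size)
        epochs.append(band_powers(sig))
        labels.append(level)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
print("3-level accuracy:", round(cross_val_score(clf, np.array(epochs), labels, cv=5).mean(), 3))
```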

Keywords: brain computer interface, electroencephalogram, regression model, stress, word search

Procedia PDF Downloads 188