Search results for: textile machine
2441 Machine Learning Based Approach for Measuring Promotion Effectiveness in Multiple Parallel Promotions’ Scenarios
Authors: Revoti Prasad Bora, Nikita Katyal
Abstract:
Promotion is a key element in the retail business. Thus, analysis of promotions to quantify their effectiveness in terms of Revenue and/or Margin is an essential activity in the retail industry. However, measuring the sales/revenue uplift relies on estimation, as the actual sales/revenue without the promotion is not observable. Further, the presence of Halo and Cannibalization in a multiple parallel promotions scenario complicates the problem. Calculating the Baseline from inter-brand/competitor items, or estimating the impact of Halo and Cannibalization on Revenue by interpreting the Baseline as an item's unit sales in neighboring non-promotional weeks individually, may not capture the overall Revenue uplift in the case of multiple parallel promotions. Hence, this paper proposes a Machine Learning based method for calculating the Revenue uplift that considers the Halo and Cannibalization impact on both the Baseline and the Revenue. In the first stage of the proposed methodology, the Baseline of an item is calculated by incorporating the impact of the promotions on its related items. In the second stage, the Revenue of an item is calculated by considering both Halo and Cannibalization impacts. Hence, this methodology enables correct calculation of the overall Revenue uplift due to a given promotion.
Keywords: Halo, Cannibalization, promotion, Baseline, temporary price reduction, retail, elasticity, cross price elasticity, machine learning, random forest, linear regression
Procedia PDF Downloads 175
2440 Automatic Detection of Suicidal Behaviors Using an RGB-D Camera: Azure Kinect
Authors: Maha Jazouli
Abstract:
Suicide is one of the most important causes of death in the prison environment, both in Canada and internationally. Rates of suicide attempts and self-harm have been on the rise in recent years, with hanging being the most frequent method. The objective of this article is to propose a method to automatically detect suicidal behaviors in real time. We present a gesture recognition system that consists of three modules: model-based movement tracking, feature extraction, and gesture recognition using machine learning algorithms (MLA). Our proposed system gives satisfactory results. This smart video surveillance system can help assist the staff responsible for the safety and health of inmates by alerting them when suicidal behavior is detected, which helps reduce mortality rates and save lives.
Keywords: suicide detection, Azure Kinect, RGB-D camera, SVM, machine learning, gesture recognition
Procedia PDF Downloads 187
2439 'CardioCare': A Cutting-Edge Fusion of IoT and Machine Learning to Bridge the Gap in Cardiovascular Risk Management
Authors: Arpit Patil, Atharav Bhagwat, Rajas Bhope, Pramod Bide
Abstract:
This research integrates IoT and ML to predict heart failure risks, utilizing the Framingham dataset. IoT devices gather real-time physiological data, focusing on heart rate dynamics, while ML, specifically Random Forest, predicts heart failure. Rigorous feature selection enhances accuracy, achieving a prediction rate of over 90%. This amalgamation marks a transformative step in proactive healthcare, highlighting early detection's critical role in cardiovascular risk mitigation. Challenges persist, necessitating continual refinement for improved predictive capabilities.
Keywords: cardiovascular diseases, internet of things, machine learning, cardiac risk assessment, heart failure prediction, early detection, cardio data analysis
Procedia PDF Downloads 9
2438 A Monte Carlo Fuzzy Logistic Regression Framework against Imbalance and Separation
Authors: Georgios Charizanos, Haydar Demirhan, Duygu Icen
Abstract:
Two of the most impactful issues in classical logistic regression are class imbalance and complete separation. These can result in model predictions heavily leaning towards the imbalanced class of the binary response variable, or in over-fitting. Fuzzy methodology offers key solutions for handling these problems. However, most studies propose transforming the binary responses into a continuous format limited within [0,1]. This is called the possibilistic approach within fuzzy logistic regression. Following this approach is closer to straightforward regression, since a logit-link function is not utilized and fuzzy probabilities are not generated. In contrast, we propose a method of fuzzifying binary response variables that allows the use of the logit-link function, and hence a probabilistic fuzzy logistic regression model based on the Monte Carlo method. The fuzzy probabilities are then classified by selecting a fuzzy threshold. Different combinations of fuzzy and crisp input, output, and coefficients are explored, aiming to understand which of these perform better under different conditions of imbalance and separation. We conduct numerical experiments using both synthetic and real datasets to demonstrate the performance of the fuzzy logistic regression framework against seven crisp machine learning methods. The proposed framework shows better performance irrespective of the degree of imbalance and the presence of separation in the data, while the considered machine learning methods are significantly impacted.
Keywords: fuzzy logistic regression, fuzzy, logistic, machine learning
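As a rough illustration of the kind of pipeline described above, the sketch below fuzzifies crisp 0/1 responses by Monte Carlo perturbation of their membership values, fits a logit-link model to each draw, and thresholds the averaged fuzzy probabilities. The perturbation scale, number of draws, and threshold are assumptions for illustration only; the paper's actual fuzzification scheme is not reproduced here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

n_draws, fuzzy_threshold = 100, 0.5
prob_sum = np.zeros(len(y))
for _ in range(n_draws):
    # fuzzify: perturb crisp labels into memberships in [0, 1], then draw a crisp realization
    membership = np.clip(y + rng.normal(scale=0.3, size=len(y)), 0, 1)
    y_draw = (membership >= 0.5).astype(int)
    model = LogisticRegression().fit(X, y_draw)      # logit link is retained
    prob_sum += model.predict_proba(X)[:, 1]

fuzzy_prob = prob_sum / n_draws                      # Monte Carlo fuzzy probabilities
y_pred = (fuzzy_prob >= fuzzy_threshold).astype(int)
print("training accuracy:", (y_pred == y).mean())
```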
Procedia PDF Downloads 71
2437 Advances in Machine Learning and Deep Learning Techniques for Image Classification and Clustering
Authors: R. Nandhini, Gaurab Mudbhari
Abstract:
Ranging from health care to self-driving cars, machine learning and deep learning algorithms have revolutionized many fields through the proper utilization of images and visual-oriented data. Segmentation, regression, classification, clustering, dimensionality reduction, etc., are some of the machine learning tasks that have helped machine learning and deep learning models become state-of-the-art in fields where images are key datasets. Among these tasks, classification and clustering are essential but difficult because of the intricate and high-dimensional characteristics of image data. This study examines and assesses advanced techniques in supervised classification and unsupervised clustering for image datasets, emphasizing the relative efficiency of Convolutional Neural Networks (CNNs), Vision Transformers (ViTs), Deep Embedded Clustering (DEC), and self-supervised learning approaches. Due to the distinctive structural attributes present in images, conventional methods often fail to effectively capture spatial patterns, prompting the development of models that utilize more advanced architectures and attention mechanisms. In image classification, we investigated both CNNs and ViTs. CNNs, well known for their ability to detect spatial hierarchies, serve as one core model in our study. ViTs serve as the other core model, reflecting a modern classification approach based on a self-attention mechanism, which makes them more robust because self-attention allows them to learn global dependencies in images without relying on convolutional layers. This paper evaluates the performance of these two architectures based on accuracy, precision, recall, and F1-score across different image datasets, analyzing their appropriateness for various categories of images. In the domain of clustering, we assess DEC, Variational Autoencoders (VAEs), and conventional clustering techniques such as k-means applied to embeddings derived from CNN models. DEC, a prominent clustering model, has gained attention because it combines feature learning and clustering into a single framework, with the main goal of improving clustering quality through better feature representation. VAEs, on the other hand, are well known for using latent embeddings to group similar images without requiring prior labels, by means of probabilistic clustering.
Keywords: machine learning, deep learning, image classification, image clustering
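For the clustering side, a minimal sketch of k-means applied to embeddings is shown below. The digits dataset and PCA features are stand-ins for the CNN-derived embeddings discussed in the abstract; the dataset, feature extractor, and cluster count are all assumptions.

```python
from sklearn.cluster import KMeans
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.metrics import adjusted_rand_score

X, y = load_digits(return_X_y=True)
embeddings = PCA(n_components=32, random_state=0).fit_transform(X)  # stand-in for CNN embeddings
labels = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(embeddings)
print("adjusted Rand index vs. true digit labels:", round(adjusted_rand_score(y, labels), 3))
```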
Procedia PDF Downloads 6
2436 A Non-Destructive Estimation Method for Internal Time in Perilla Leaf Using Hyperspectral Data
Authors: Shogo Nagano, Yusuke Tanigaki, Hirokazu Fukuda
Abstract:
Vegetables harvested early in the morning or late in the afternoon are valued in plant production, so the time of harvest is important. The biological functions known as circadian clocks have a significant effect on this harvest timing. The purpose of this study was to non-destructively estimate the circadian clock and thereby construct a method for determining a suitable harvest time. We took eight samples of green busil (Perilla frutescens var. crispa) every 4 hours, six times over 1 day, and analyzed all samples at the same time. A hyperspectral camera was used to collect spectrum intensities at 141 different wavelengths (350–1050 nm). Calculation of correlations between the spectrum intensity of each wavelength and harvest time suggested the suitability of the hyperspectral camera for non-destructive estimation. However, even the most highly correlated wavelength had only a weak correlation, so we used machine learning to raise the estimation accuracy and constructed a machine learning model to estimate the internal time of the circadian clock. Artificial neural networks (ANN) were used because they are an effective analysis method for large amounts of data. Using the estimation model resulted in an error between estimated and real times of 3 min. The estimations were made in less than 2 hours. Thus, we successfully demonstrated this method of non-destructively estimating internal time.
Keywords: artificial neural network (ANN), circadian clock, green busil, hyperspectral camera, non-destructive evaluation
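A minimal sketch of the kind of ANN regression described, using synthetic spectra in place of the real hyperspectral measurements; the network size, sample count, and the assumed sinusoidal dependence of intensity on internal time are illustrative assumptions only.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
n_samples, n_wavelengths = 96, 141
time_h = rng.uniform(0, 24, n_samples)                        # internal circadian time in hours
base = rng.uniform(0.5, 1.5, n_wavelengths)                   # per-wavelength amplitude
phase = rng.uniform(0, 2 * np.pi, n_wavelengths)              # per-wavelength phase shift
spectra = (np.sin(2 * np.pi * time_h[:, None] / 24 + phase[None, :]) * base[None, :]
           + rng.normal(scale=0.05, size=(n_samples, n_wavelengths)))

X_tr, X_te, t_tr, t_te = train_test_split(spectra, time_h, random_state=0)
ann = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=5000, random_state=0).fit(X_tr, t_tr)
print("mean absolute error (hours):", round(np.abs(ann.predict(X_te) - t_te).mean(), 2))
```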
Procedia PDF Downloads 297
2435 Supervised Learning for Cyber Threat Intelligence
Authors: Jihen Bennaceur, Wissem Zouaghi, Ali Mabrouk
Abstract:
The major aim of cyber threat intelligence (CTI) is to provide sophisticated knowledge about cybersecurity threats to ensure internal and external safeguards against modern cyberattacks. Inaccurate, incomplete, outdated, and low-value threat intelligence is the main problem. Therefore, data analysis based on AI algorithms is one of the emergent solutions to overcome threat information-sharing issues. In this paper, we propose a supervised machine learning-based algorithm to improve threat information sharing by providing a sophisticated classification of cyber threats and data. Extensive simulations investigate the overall accuracy, precision, recall, F1-score, and support to validate the designed algorithm and to compare it with several supervised machine learning algorithms.
Keywords: threat information sharing, supervised learning, data classification, performance evaluation
Procedia PDF Downloads 146
2434 The Lateral and Torsional Vibration Analysis of a Rotor-Bearing System Using Transfer Matrix Method
Authors: Mohammad Hadi Jalali, Mostafa Ghayour, Saeed Ziaei-Rad, Behrooz Shahriari
Abstract:
The vibration problems that can occur under the operational conditions of rotating machines may cause damage to the machine or even its complete failure. Therefore, dynamic analysis of rotors is vital in the design and development stages of rotating machines. In this study, the uncoupled torsional and lateral vibration analysis of a rotor-bearing system is carried out using the transfer matrix method. The Campbell diagram, critical speed, and the mode shape corresponding to the critical speed are obtained in order to evaluate the dynamic behavior of the rotor.
Keywords: transfer matrix method, rotor-bearing system, Campbell diagram, critical speed
Procedia PDF Downloads 489
2433 Dynamic Compensation for Environmental Temperature Variation in the Coolant Refrigeration Cycle as a Means of Increasing Machine-Tool Precision
Authors: Robbie C. Murchison, Ibrahim Küçükdemiral, Andrew Cowell
Abstract:
Thermal effects are the largest source of dimensional error in precision machining, and a major proportion is caused by ambient temperature variation. The use of coolant is a primary means of mitigating these effects, but there has been limited work on coolant temperature control. This research critically explored whether CNC-machine coolant refrigeration systems adapted to actively compensate for ambient temperature variation could increase machining accuracy. Accuracy data were collected from operators’ checklists for a CNC 5-axis mill and statistically reduced to bias and precision metrics for observations of one day over a sample period of 27 days. Temperature data were collected using three USB dataloggers in ambient air, the chiller inflow, and the chiller outflow. The accuracy and temperature data were analysed using Pearson correlation, then the thermodynamics of the system were described using system identification with MATLAB. It was found that 75% of thermal error is reflected in the hot coolant temperature but that this is negligibly dependent on ambient temperature. The effect of the coolant refrigeration process on hot coolant outflow temperature was also found to be negligible. Therefore, the evidence indicated that it would not be beneficial to adapt coolant chillers to compensate for ambient temperature variation. However, it is concluded that hot coolant outflow temperature is a robust and accessible source of thermal error data which could be used for prevention strategy evaluation or as the basis of other thermal error strategies.
Keywords: CNC manufacturing, machine-tool, precision machining, thermal error
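A small sketch of the correlation step described above, with synthetic stand-ins for the logged temperatures and the daily machining-error metric; the variable names and relationships are assumptions, not the study's data.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
ambient_temp = 20 + 3 * rng.standard_normal(27)                  # daily ambient temperature, degC
hot_coolant_temp = 25 + 0.1 * ambient_temp + 0.5 * rng.standard_normal(27)
thermal_error_um = 5 * (hot_coolant_temp - hot_coolant_temp.mean()) + rng.standard_normal(27)

for name, series in [("ambient air", ambient_temp), ("hot coolant outflow", hot_coolant_temp)]:
    r, p = pearsonr(series, thermal_error_um)
    print(f"{name}: r = {r:.2f}, p = {p:.3f}")
```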
Procedia PDF Downloads 88
2432 Assessment of DNA Sequence Encoding Techniques for Machine Learning Algorithms Using a Universal Bacterial Marker
Authors: Diego Santibañez Oyarce, Fernanda Bravo Cornejo, Camilo Cerda Sarabia, Belén Díaz Díaz, Esteban Gómez Terán, Hugo Osses Prado, Raúl Caulier-Cisterna, Jorge Vergara-Quezada, Ana Moya-Beltrán
Abstract:
The advent of high-throughput sequencing technologies has revolutionized genomics, generating vast amounts of genetic data that challenge traditional bioinformatics methods. Machine learning addresses these challenges by leveraging computational power to identify patterns and extract information from large datasets. However, biological sequence data, being symbolic and non-numeric, must be converted into numerical formats for machine learning algorithms to process effectively. So far, some encoding methods, such as one-hot encoding or k-mers, have been explored. This work proposes additional approaches for encoding DNA sequences in order to compare them with existing techniques and determine whether they can provide improvements or whether current methods offer superior results. Data from the 16S rRNA gene, a universal marker, was used to analyze eight bacterial groups that are significant in the pulmonary environment and have clinical implications. The bacterial genera included in this analysis are Prevotella, Abiotrophia, Acidovorax, Streptococcus, Neisseria, Veillonella, Mycobacterium, and Megasphaera. These data were downloaded from the NCBI database in GenBank file format, followed by a syntactic analysis to selectively extract relevant information from each file. For data encoding, a sequence normalization process was carried out as the first step. From approximately 22,000 initial data points, a subset was generated for testing purposes. Specifically, 55 sequences from each bacterial group met the length criteria, resulting in an initial sample of approximately 440 sequences. The sequences were encoded using different methods, including one-hot encoding, k-mers, the Fourier transform, and the Wavelet transform. Various machine learning algorithms, such as support vector machines, random forests, and neural networks, were trained to evaluate these encoding methods. The performance of these models was assessed using multiple metrics, including the confusion matrix, ROC curve, and F1 score, providing a comprehensive evaluation of their classification capabilities. The results show that accuracies between encoding methods vary by up to approximately 15%, with the Fourier transform obtaining the best results for the evaluated machine learning algorithms. These findings, supported by the detailed analysis using the confusion matrix, ROC curve, and F1 score, provide valuable insights into the effectiveness of different encoding methods and machine learning algorithms for genomic data analysis, potentially improving the accuracy and efficiency of bacterial classification and related genomic studies.
Keywords: DNA encoding, machine learning, Fourier transform, Fourier transformation
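For reference, a minimal sketch of two of the encodings compared in the study, one-hot and k-mer counts, applied to a toy sequence; the Fourier and Wavelet encodings and the classifier training are not reproduced here, and the sequence and parameters are illustrative only.

```python
import numpy as np
from itertools import product

BASES = "ACGT"

def one_hot(seq):
    """Encode a DNA sequence as a (len(seq), 4) binary matrix."""
    idx = {b: i for i, b in enumerate(BASES)}
    mat = np.zeros((len(seq), 4))
    for pos, base in enumerate(seq):
        if base in idx:                       # ambiguous bases are left as all-zero rows
            mat[pos, idx[base]] = 1
    return mat

def kmer_counts(seq, k=3):
    """Count occurrences of every possible k-mer, returned as a fixed-length vector."""
    kmers = ["".join(p) for p in product(BASES, repeat=k)]
    counts = dict.fromkeys(kmers, 0)
    for i in range(len(seq) - k + 1):
        kmer = seq[i:i + k]
        if kmer in counts:
            counts[kmer] += 1
    return np.array([counts[km] for km in kmers])

seq = "ACGTGCATGCTAGCTAGGTAC"
print(one_hot(seq).shape)            # (21, 4)
print(kmer_counts(seq, k=3).sum())   # number of 3-mers found in the sequence
```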
Procedia PDF Downloads 22
2431 Machine Learning for Exoplanetary Habitability Assessment
Authors: King Kumire, Amos Kubeka
Abstract:
The synergy of machine learning and advances in astronomical technology is giving rise to a new space age, marked by better habitability assessments. To initiate this discussion, it should be recorded for definition purposes that the symbiotic relationship between astronomy and improved computing has been code-named the Cis-Astro gateway concept. The cosmological fate of this phrase has been unashamedly plagiarized from the cis-lunar gateway template and its associated Lagrange points, which act as an orbital bridge from our planet Earth to the moon. However, for this study, the scientific audience is invited to bridge toward the discovery of new habitable planets. It is imperative to state that cosmic probes of this magnitude can be utilized as the starting nodes of the astrobiological search for galactic life. This research can also assist by acting as the navigation system for future space telescope launches through the delimitation of target exoplanets. The findings and the associated platforms can be harnessed as building blocks for the modeling of climate change on planet Earth. The possibility that the human species exhausts the resources of planet Earth, or that a bug of some sort makes the Earth uninhabitable for humans, explains the need to find an alternative planet to inhabit. The scientific community, through interdisciplinary discussions of the International Astronautical Federation, so far holds the common position that engineers can reduce space mission costs by constructing a stable cis-lunar orbit infrastructure for refilling and carrying out other associated in-orbit servicing activities. Similarly, the Cis-Astro gateway can be envisaged as a budget optimization technique that models extra-solar bodies and can facilitate the scoping of future mission rendezvous. It should be registered as well that this broad and voluminous catalog of exoplanets shall be narrowed along the way using machine learning filters. The gist of this topic revolves around the indirect economic rationale of establishing a habitability scoping platform.
Keywords: machine-learning, habitability, exoplanets, supercomputing
Procedia PDF Downloads 88
2430 Machine Learning for Exoplanetary Habitability Assessment
Authors: King Kumire, Amos Kubeka
Abstract:
The synergy of machine learning and advances in astronomical technology is giving rise to a new space age, marked by better habitability assessments. To initiate this discussion, it should be recorded for definition purposes that the symbiotic relationship between astronomy and improved computing has been code-named the Cis-Astro gateway concept. The cosmological fate of this phrase has been unashamedly plagiarized from the cis-lunar gateway template and its associated Lagrange points, which act as an orbital bridge from our planet Earth to the moon. However, for this study, the scientific audience is invited to bridge toward the discovery of new habitable planets. It is imperative to state that cosmic probes of this magnitude can be utilized as the starting nodes of the astrobiological search for galactic life. This research can also assist by acting as the navigation system for future space telescope launches through the delimitation of target exoplanets. The findings and the associated platforms can be harnessed as building blocks for the modeling of climate change on planet Earth. The possibility that the human species exhausts the resources of planet Earth, or that a bug of some sort makes the Earth uninhabitable for humans, explains the need to find an alternative planet to inhabit. The scientific community, through interdisciplinary discussions of the International Astronautical Federation, so far holds the common position that engineers can reduce space mission costs by constructing a stable cis-lunar orbit infrastructure for refilling and carrying out other associated in-orbit servicing activities. Similarly, the Cis-Astro gateway can be envisaged as a budget optimization technique that models extra-solar bodies and can facilitate the scoping of future mission rendezvous. It should be registered as well that this broad and voluminous catalog of exoplanets shall be narrowed along the way using machine learning filters. The gist of this topic revolves around the indirect economic rationale of establishing a habitability scoping platform.
Keywords: exoplanets, habitability, machine-learning, supercomputing
Procedia PDF Downloads 114
2429 Stable Tending Control of Complex Power Systems: An Example of Localized Design of Power System Stabilizers
Authors: Wenjuan Du
Abstract:
The phase compensation method was proposed based on the concept of damping torque analysis (DTA). It is a method for designing a PSS (power system stabilizer) to suppress local-mode power oscillations in a single-machine infinite-bus power system. This paper presents the application of the phase compensation method to the design of a PSS in a multi-machine power system. The application is achieved by examining the direct damping contribution of the stabilizer to the power oscillations. Using the linearized equal area criterion, a theoretical proof of the application to PSS design is presented. Hence, the PSS design in this paper is an example of stable tending control by a localized method.
Keywords: phase compensation method, power system small-signal stability, power system stabilizer
Procedia PDF Downloads 637
2428 Develop a Conceptual Data Model of Geotechnical Risk Assessment in Underground Coal Mining Using a Cloud-Based Machine Learning Platform
Authors: Reza Mohammadzadeh
Abstract:
The major challenges in geotechnical engineering in underground spaces arise from uncertainties and differing probabilities. The collection, collation, and collaborative use of existing data, to incorporate them in analysis and design for prospect evaluation, would be a reliable and practical problem-solving method under uncertainty. Machine learning (ML) is a subfield of artificial intelligence and statistical science that applies different techniques (e.g., regression, neural networks, support vector machines, decision trees, random forests, genetic programming, etc.) to data to automatically learn and improve from them without being explicitly programmed, and to make decisions and predictions. In this paper, a conceptual database schema of geotechnical risks in underground coal mining based on a cloud system architecture has been designed. A new approach to risk assessment using a three-dimensional risk matrix supported by the level of knowledge (LoK) has been proposed in this model. Subsequently, the stages of the model workflow methodology are described. In order to train the data and deploy the LoK models, an ML platform has been implemented. IBM Watson Studio, a leading data science tool and data-driven cloud-integration ML platform, is employed in this study. As a use case, a data set of geotechnical hazards and risk assessments in underground coal mining was prepared to demonstrate the performance of the model, and the results are outlined accordingly.
Keywords: data model, geotechnical risks, machine learning, underground coal mining
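As a purely illustrative sketch of how a three-dimensional risk matrix supported by LoK could be scored, the snippet below multiplies likelihood and consequence ratings and inflates the result when knowledge is low. The scales, factors, and class boundaries are assumptions; the paper's actual scoring rules are not given in the abstract.

```python
# Assumed rating scales; lower level of knowledge (LoK) inflates the risk score.
LIKELIHOOD = {"rare": 1, "possible": 3, "likely": 5}
CONSEQUENCE = {"minor": 1, "moderate": 3, "major": 5}
LOK_FACTOR = {"high": 1.0, "medium": 1.25, "low": 1.5}

def risk_score(likelihood, consequence, lok):
    return LIKELIHOOD[likelihood] * CONSEQUENCE[consequence] * LOK_FACTOR[lok]

def risk_class(score):
    if score >= 15:
        return "high"
    return "medium" if score >= 6 else "low"

score = risk_score("possible", "major", "low")
print(score, risk_class(score))   # 22.5 high
```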
Procedia PDF Downloads 274
2427 A Survey on Ambient Intelligence in Agricultural Technology
Abstract:
Despite the advances made in various new technologies, applying these technologies to agriculture still remains a formidable task, as it involves integrating diverse domains to monitor the different processes involved in agricultural management. Advances in ambient intelligence technology represent one of the most powerful means of increasing the yield of agricultural crops, mitigating the impact of water scarcity and climate change, and managing pests, weeds, and diseases. This paper proposes a GPS-assisted, machine-to-machine solution that combines information collected by multiple sensors for the automated management of paddy crops. To maintain the economic viability of paddy cultivation, the various techniques used in agriculture are discussed, and a novel system that uses ambient intelligence techniques is proposed. The ambient intelligence-based agricultural system offers great scope.
Keywords: ambient intelligence, agricultural technology, smart agriculture, precise farming
Procedia PDF Downloads 605
2426 Hybrid Approach for Software Defect Prediction Using Machine Learning with Optimization Technique
Authors: C. Manjula, Lilly Florence
Abstract:
Software technology is developing rapidly, which drives the growth of various industries. Nowadays, software-based applications have been widely adopted for business purposes. For any software industry, developing reliable software is a challenging task because a faulty software module may be harmful to the growth of the industry and its business. Hence there is a need to develop techniques for early prediction of software defects. Due to the complexity of manual prediction, automated software defect prediction techniques have been introduced. These techniques are based on learning patterns from previous software versions and finding the defects in the current version. They have attracted researchers due to their significant impact on industrial growth by identifying bugs in software. Several studies have been carried out on this basis, but achieving the desired defect prediction performance is still a challenging task. To address this issue, we present a machine learning based hybrid technique for software defect prediction. First, a Genetic Algorithm (GA) with an improved fitness function is used for better optimization of the features in the data sets. These features are then processed through a Decision Tree (DT) classification model. Finally, an experimental study is presented in which results from the proposed GA-DT based hybrid approach are compared with those from the DT classification technique. The results show that the proposed hybrid approach achieves better classification accuracy.
Keywords: decision tree, genetic algorithm, machine learning, software defect prediction
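A compact sketch of the GA-plus-decision-tree idea: binary feature masks are evolved with single-point crossover and bit-flip mutation, and cross-validated decision-tree accuracy acts as the fitness function. The dataset, population size, generations, and mutation rate are stand-in assumptions, and the paper's improved fitness function is not reproduced.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X, y = load_breast_cancer(return_X_y=True)          # stand-in for a software-defect dataset
n_features = X.shape[1]

def fitness(mask):
    """Cross-validated decision-tree accuracy on the selected feature subset."""
    if mask.sum() == 0:
        return 0.0
    clf = DecisionTreeClassifier(random_state=0)
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()

population = rng.integers(0, 2, size=(20, n_features))
for _ in range(10):                                              # generations
    scores = np.array([fitness(ind) for ind in population])
    parents = population[np.argsort(scores)[-10:]]               # keep the fittest half
    pairs = rng.integers(0, 10, size=(10, 2))                    # random parent pairs
    points = rng.integers(1, n_features, size=10)                # single-point crossover
    children = np.array([np.concatenate([parents[a][:p], parents[b][p:]])
                         for (a, b), p in zip(pairs, points)])
    flip = rng.random(children.shape) < 0.05                     # bit-flip mutation
    children[flip] ^= 1
    population = np.vstack([parents, children])

best = max(population, key=fitness)
print("selected features:", int(best.sum()), "| CV accuracy:", round(fitness(best), 3))
```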
Procedia PDF Downloads 328
2425 Comparative Analysis of Spectral Estimation Methods for Brain-Computer Interfaces
Authors: Rafik Djemili, Hocine Bourouba, M. C. Amara Korba
Abstract:
In this paper, we present a method for classifying EEG signals for Brain-Computer Interfaces (BCI). The EEG signals are first processed by means of spectral estimation methods to derive reliable features before the classification step. The spectral estimation methods used are the standard periodogram and the periodogram calculated by Welch's method; both are compared with Logarithm of Band Power (logBP) features. In the proposed method, we apply Linear Discriminant Analysis (LDA) followed by a Support Vector Machine (SVM). The classification accuracy reached is as high as 85%, which demonstrates the effectiveness of spectral methods for the classification of EEG signals in BCI.
Keywords: brain-computer interface, motor imagery, electroencephalogram, linear discriminant analysis, support vector machine
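A sketch of the described chain on synthetic two-class EEG: Welch-periodogram band-power features, LDA, then an SVM. The sampling rate, band limits, and the injected 10 Hz class difference are illustrative assumptions, not the paper's data or parameters.

```python
import numpy as np
from scipy.signal import welch
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(3)
fs, n_trials, n_samples = 250, 120, 1000
labels = rng.integers(0, 2, n_trials)
t = np.arange(n_samples) / fs
# synthetic two-class EEG: class 1 carries extra 10 Hz (mu-band) power
trials = rng.standard_normal((n_trials, n_samples))
trials[labels == 1] += 0.8 * np.sin(2 * np.pi * 10 * t)

freqs, psd = welch(trials, fs=fs, nperseg=256, axis=-1)
band = (freqs >= 8) & (freqs <= 30)              # keep mu/beta band power as features
features = np.log(psd[:, band])

X_tr, X_te, y_tr, y_te = train_test_split(features, labels, random_state=0)
clf = make_pipeline(LinearDiscriminantAnalysis(n_components=1), SVC(kernel="rbf"))
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```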
Procedia PDF Downloads 498
2424 Artificial Intelligence-Based Thermal Management of Battery System for Electric Vehicles
Authors: Raghunandan Gurumurthy, Aricson Pereira, Sandeep Patil
Abstract:
The escalating adoption of electric vehicles (EVs) across the globe has underscored the critical importance of advancing battery system technologies. This has catalyzed a shift towards the design and development of battery systems that not only exhibit higher energy efficiency but also boast enhanced thermal performance and sophisticated multi-material enclosures. A significant leap in this domain has been the incorporation of simulation-based design optimization for battery packs and Battery Management Systems (BMS), a move further enriched by integrating artificial intelligence/machine learning (AI/ML) approaches. These strategies are pivotal in refining the design, manufacturing, and operational processes for electric vehicles and energy storage systems. By leveraging AI/ML, stakeholders can now predict battery performance metrics, such as State of Health, State of Charge, and State of Power, with unprecedented accuracy. Furthermore, as Li-ion batteries (LIBs) become more prevalent in urban settings, the imperative for bolstering thermal and fire resilience has intensified. This has propelled Battery Thermal Management Systems (BTMs) to the forefront of energy storage research, highlighting the role of machine learning and AI not just as tools for enhanced safety management through accurate temperature forecasts and diagnostics but also as indispensable allies in the early detection and warning of potential battery fires.
Keywords: electric vehicles, battery thermal management, industrial engineering, machine learning, artificial intelligence, manufacturing
Procedia PDF Downloads 95
2423 Machining Stability of a Milling Machine with Different Preloaded Spindle
Authors: Jui-Pin Hung, Qiao-Wen Chang, Kung-Da Wu, Yong-Run Chen
Abstract:
This study aimed to investigate the machining stability of a spindle tool with different preload amounts. To this end, vibration tests were conducted on the spindle unit with different preloads to assess the dynamic characteristics and machining stability of the spindle unit. The current results demonstrate that the tool tip frequency response characteristics and the machining stabilities in the X and Y directions change for spindles with different preloads. As can be seen from the results, a highly preloaded spindle tool shows a higher limiting cutting depth at the mid position, while a spindle with low preload shows a higher limiting depth. This implies that the machining stability of the spindle tool system varies with the machine frame structure. Moreover, this effect is quite different and varies with the preload of the spindle.
Keywords: bearing preload, dynamic compliance, machining stability, spindle
Procedia PDF Downloads 384
2422 Machine Learning Techniques for Estimating Ground Motion Parameters
Authors: Farid Khosravikia, Patricia Clayton
Abstract:
The main objective of this study is to evaluate the advantages and disadvantages of various machine learning techniques in forecasting ground-motion intensity measures given source characteristics, source-to-site distance, and local site condition. Intensity measures such as peak ground acceleration and velocity (PGA and PGV, respectively), as well as 5% damped elastic pseudospectral accelerations at different periods (PSA), are indicators of the strength of shaking at the ground surface. Estimating these variables for future earthquake events is a key step in seismic hazard assessment and potentially in the subsequent risk assessment of different types of structures. Typically, linear regression-based models, with pre-defined equations and coefficients, are used in ground motion prediction. However, due to the restrictions of linear regression methods, such models may not capture the more complex nonlinear behaviors that exist in the data. Thus, this study comparatively investigates the potential benefits of employing other machine learning techniques as statistical methods in ground motion prediction, namely Artificial Neural Network, Random Forest, and Support Vector Machine. The algorithms are adjusted to quantify event-to-event and site-to-site variability of the ground motions by implementing them as random effects in the proposed models to reduce the aleatory uncertainty. All the algorithms are trained using a selected database of 4,528 ground motions, including 376 seismic events with magnitude 3 to 5.8, recorded over the hypocentral distance range of 4 to 500 km in Oklahoma, Kansas, and Texas since 2005. The main reason for considering this database stems from the recent increase in the seismicity rate of these states, attributed to petroleum production and wastewater disposal activities, which necessitates further investigation of the ground motion models developed for these states. Accuracy of the models in predicting intensity measures, generalization capability of the models for future data, as well as usability of the models are discussed in the evaluation process. The results indicate that the algorithms satisfy some physically sound characteristics, such as magnitude scaling and distance dependency, without requiring pre-defined equations or coefficients. Moreover, it is shown that, when sufficient data is available, all the alternative algorithms tend to provide more accurate estimates compared to the conventional linear regression-based method, and in particular, Random Forest outperforms the other algorithms. However, the conventional method is a better tool when limited data is available.
Keywords: artificial neural network, ground-motion models, machine learning, random forest, support vector machine
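A small sketch in the spirit of the comparison described: a random forest and a linear regression predicting a log peak-ground-acceleration target from magnitude, distance, and a site proxy. The synthetic functional form, ranges, and noise level are assumptions; the random-effects treatment of event and site terms is omitted.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
n = 4000
magnitude = rng.uniform(3.0, 5.8, n)
distance_km = rng.uniform(4, 500, n)
vs30 = rng.uniform(200, 800, n)                          # site condition proxy
log_pga = (0.9 * magnitude - 1.4 * np.log10(distance_km)
           - 0.3 * np.log10(vs30) + rng.normal(scale=0.3, size=n))

X = np.column_stack([magnitude, np.log10(distance_km), vs30])
X_tr, X_te, y_tr, y_te = train_test_split(X, log_pga, random_state=0)

for name, model in [("linear regression", LinearRegression()),
                    ("random forest", RandomForestRegressor(n_estimators=200, random_state=0))]:
    model.fit(X_tr, y_tr)
    print(name, "test R^2:", round(model.score(X_te, y_te), 3))
```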
Procedia PDF Downloads 121
2421 Infrared Spectroscopy in Tandem with Machine Learning for Simultaneous Rapid Identification of Bacteria Isolated Directly from Patients' Urine Samples and Determination of Their Susceptibility to Antibiotics
Authors: Mahmoud Huleihel, George Abu-Aqil, Manal Suleiman, Klaris Riesenberg, Itshak Lapidot, Ahmad Salman
Abstract:
Urinary tract infections (UTIs) are considered to be the most common bacterial infections worldwide, caused mainly by Escherichia (E.) coli (about 80%), Klebsiella pneumoniae (about 10%), and Pseudomonas aeruginosa (about 6%). Although antibiotics are considered the most effective treatment for bacterial infectious diseases, unfortunately, most bacteria have already developed resistance to the majority of the commonly available antibiotics. Therefore, it is crucial to identify the infecting bacterium and to determine its susceptibility to antibiotics in order to prescribe effective treatment. Classical methods are time-consuming, requiring ~48 hours to determine bacterial susceptibility. Thus, it is highly urgent to develop a new method that can significantly reduce the time required to determine both the infecting bacterium at the species level and its susceptibility to antibiotics. Fourier-Transform Infrared (FTIR) spectroscopy is well known as a sensitive and rapid method that can detect minor molecular changes in the bacterial genome associated with the development of resistance to antibiotics. The main goal of this study is to examine the potential of FTIR spectroscopy, in tandem with machine learning algorithms, to identify the infecting bacterium at the species level and to determine E. coli susceptibility to different antibiotics directly from patients' urine in about 30 minutes. To this end, 1600 different E. coli isolates were obtained from different patients' urine samples, measured by FTIR, and analyzed using different machine learning algorithms such as Random Forest, XGBoost, and CNN. We achieved 98% success in isolate-level identification and 89% accuracy in susceptibility determination.
Keywords: urinary tract infections (UTIs), E. coli, Klebsiella pneumoniae, Pseudomonas aeruginosa, bacterial susceptibility to antibiotics, infrared microscopy, machine learning
Procedia PDF Downloads 169
2420 A Fuzzy Mathematical Model for Order Acceptance and Scheduling Problem
Authors: E. Koyuncu
Abstract:
The problem of Order Acceptance and Scheduling (OAS) is defined as the joint decision of which orders to accept for processing and how to schedule them. Any linear programming model representing a real-world situation involves parameters defined by the decision maker in an uncertain way or by means of linguistic statements. Fuzzy data can be used to incorporate the vagueness of real-life situations. In this study, a fuzzy mathematical model is proposed for a single-machine OAS problem, where the orders are defined by their fuzzy due dates, fuzzy processing times, and fuzzy sequence-dependent setup times. The signed distance method, one of the fuzzy ranking methods, is used to handle the fuzzy constraints in the model.
Keywords: fuzzy mathematical programming, fuzzy ranking, order acceptance, single machine scheduling
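A brief sketch of the signed distance ranking mentioned above, using the commonly cited measure d(A, 0) = (a + 2b + c) / 4 for a triangular fuzzy number A = (a, b, c); the fuzzy numbers and the sequencing rule below are illustrative assumptions, and the paper's exact formulation may differ.

```python
from dataclasses import dataclass

@dataclass
class TriangularFuzzyNumber:
    a: float  # lower bound
    b: float  # peak (most plausible value)
    c: float  # upper bound

    def signed_distance(self) -> float:
        # commonly used signed distance of a triangular fuzzy number from zero
        return (self.a + 2 * self.b + self.c) / 4.0

# fuzzy processing times of three candidate orders on a single machine
orders = {
    "order_1": TriangularFuzzyNumber(4, 5, 7),
    "order_2": TriangularFuzzyNumber(3, 6, 8),
    "order_3": TriangularFuzzyNumber(5, 5, 6),
}
ranked = sorted(orders, key=lambda k: orders[k].signed_distance())
print({k: orders[k].signed_distance() for k in orders})
print("shortest-first sequence:", ranked)
```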
Procedia PDF Downloads 338
2419 A Review on Intelligent Systems for Geoscience
Authors: R. Palson Kennedy, P. Kiran Sai
Abstract:
This article introduces machine learning (ML) researchers to the hurdles that geoscience problems present, as well as the opportunities for improvement in both ML and the geosciences, and it presents a review from the data life cycle perspective to meet that need. Numerous facets of the geosciences present unique difficulties for the study of intelligent systems. Geoscience data are notoriously difficult to analyze since they are frequently unpredictable, intermittent, sparse, multi-resolution, and multi-scale. The first half addresses data science's essential concepts and theoretical underpinnings, while the second section covers key themes and shared experiences from current publications focused on each stage of the data life cycle. Finally, themes such as open science, smart data, and team science are considered.
Keywords: data science, intelligent system, machine learning, big data, data life cycle, recent development, geoscience
Procedia PDF Downloads 133
2418 Development of Fixture for Pipe to Pipe Friction Stir Welding of Dissimilar Materials
Authors: Aashutosh A. Tadse, Kush Mehta, Hardik Vyas
Abstract:
Friction Stir Welding is a process in which an FSW tool produces friction heat, penetrates through the junction, and, upon rotation, carries out the weld by exchange of material between the two metals being welded. It involves holding the workpieces stiffly enough to bear the force of the tool moving across the junction to carry out a successful weld. A weld with flat plates as workpieces has a much simpler geometry in terms of the fixture holding them. In the case of FSW of pipes, the pipes need to be held firmly with chucks and jaws according to the diameter of the pipes being welded; the FSW tool is then revolved around the pipes to carry out the weld. The machine requires a larger area and becomes more costly because of such a setup. To carry out the weld on a milling machine, the newly designed fixture must be set up on the milling machine table and must facilitate rotation of the pipes: a motor is shafted to one end of the fixture, while the other end rotates automatically because the rotating jaws grip the pipes tightly enough. The setup uses tapered cones as the jaws that go into the pipes, holding them by means of a knurled surface that provides the required grip. In this process, the pipes rotate while the tool, rotating in a fixed position, penetrates the junction. FSW of pipes in this process requires a very low pipe RPM to carry out a fine weld, and the speed changes with every combination of pipe material and diameter, so a variable-speed motor serves the purpose. To withstand the force of the tool, a diameter-specific attachment to the shaft is provided that resists the flow of material towards the center during the weld. The welded joint thus produced will conform to the required standards and specifications. Current industrial requirements call for space-efficient, cost-friendly, and more generalized fixtures and machine setups. The proposed design considers all the factors mentioned and thus addresses these requirements.
Keywords: force of tool, friction stir welding, milling machine, rotation of pipes, tapered cones
Procedia PDF Downloads 112
2417 A Case-Based Reasoning-Decision Tree Hybrid System for Stock Selection
Authors: Yaojun Wang, Yaoqing Wang
Abstract:
Stock selection is an important decision-making problem. Many machine learning and data mining technologies are employed to build automatic stock-selection systems. A profitable stock-selection system should consider both the stock's investment value and the market timing. In this paper, we present a hybrid system for stock selection that engages both. This system uses a case-based reasoning (CBR) model to perform the stock classification and a decision-tree model to help with market timing and stock selection. The experiments show that the performance of this hybrid system is better than that of other techniques with regard to classification accuracy, average return, and the Sharpe ratio.
Keywords: case-based reasoning, decision tree, stock selection, machine learning
Procedia PDF Downloads 417
2416 A Comparison of YOLO Family for Apple Detection and Counting in Orchards
Authors: Yuanqing Li, Changyi Lei, Zhaopeng Xue, Zhuo Zheng, Yanbo Long
Abstract:
In agricultural production and breeding, implementing an automatic picking robot in orchard farming to reduce human labour and error is challenging. Its core function is automatic identification based on machine vision. This paper focuses on apple detection and counting in orchards and implements several deep learning methods. Extensive datasets are used, and a semi-automatic annotation method is proposed. The proposed deep learning models belong to the state-of-the-art YOLO family. In view of the nature of the models with various backbones, a detailed multi-dimensional comparison is made in terms of counting accuracy, mAP, and model memory, laying the foundation for realising automatic precision agriculture.
Keywords: agricultural object detection, deep learning, machine vision, YOLO family
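A hedged sketch of detection-based counting using the third-party ultralytics package and a generic COCO-pretrained checkpoint as stand-ins; the paper fine-tunes YOLO-family models on its own annotated orchard dataset, and the image path below is hypothetical.

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")             # generic COCO-pretrained checkpoint, stand-in only
results = model("orchard.jpg")         # hypothetical orchard image path

names = results[0].names               # class-index -> class-name mapping
apple_count = sum(1 for cls in results[0].boxes.cls.tolist()
                  if names[int(cls)] == "apple")
print("apples detected:", apple_count)
```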
Procedia PDF Downloads 195
2415 Feasibility Study of the Binary Fluid Mixtures C3H6/C4H10 and C3H6/C5H12 Used in Diffusion-Absorption Refrigeration Cycles
Authors: N. Soli, B. Chaouachi, M. Bourouis
Abstract:
In this work, we propose a thermodynamic feasibility study of the operation of a diffusion-absorption refrigerating machine with hydrocarbon mixtures. The study targets a low-power (300 W) refrigerating machine operating at a generator temperature below 150 °C (fossil or solar energy) with fluids that are not harmful to the environment. Starting from the Oldham diagrams of the different hydrocarbon binaries, we determined the minimum and maximum generator operating temperatures as well as the possible enrichment. Cooling in the condenser and absorber is provided by ambient air at a temperature of 35 °C. Helium is used as the inert gas. The total pressure in the cycle is about 17.5 bar. We used suitable software to model the two binary systems, propylene/butane and propylene/pentane. Our model is validated by comparison with results from the literature.
Keywords: absorption, DAR cycle, diffusion, propylene
Procedia PDF Downloads 272
2414 Modeling Floodplain Vegetation Response to Groundwater Variability Using ArcSWAT Hydrological Model, Moderate Resolution Imaging Spectroradiometer - Normalised Difference Vegetation Index Data, and Machine Learning
Authors: Newton Muhury, Armando A. Apan, Tek Maraseni
Abstract:
This study modelled the relationships between vegetation response and water available below the soil surface using Normalised Difference Vegetation Index (NDVI) data generated from Terra's Moderate Resolution Imaging Spectroradiometer (MODIS) and soil water content (SWC) data. The Soil & Water Assessment Tool (SWAT) interface known as ArcSWAT was used in ArcGIS for the groundwater analysis. The SWAT model was calibrated and validated in SWAT-CUP software using 10 years (2001-2010) of monthly streamflow data. The average Nash-Sutcliffe Efficiency during calibration and validation was 0.54 and 0.51, respectively, indicating that the model performances were good. Twenty years (2001-2020) of monthly MODIS NDVI data for three different types of vegetation (forest, shrub, and grass) and soil water content for 43 sub-basins were analysed using the WEKA machine learning tool with two supervised machine learning algorithms, i.e., support vector machine (SVM) and random forest (RF). The modelling results show that the responses of the different vegetation types and soil water content vary between the dry and wet seasons. For example, the model generated high positive relationships (r=0.76, 0.73, and 0.81) between the measured and predicted NDVI values of all vegetation in the study area against groundwater flow (GW), soil water content (SWC), and the combination of these two variables, respectively, during the dry season. However, these relationships were reduced by 36.8% (r=0.48) and 13.6% (r=0.63) against GW and SWC, respectively, in the wet season. On the other hand, the model predicted a moderate positive relationship (r=0.63) between the shrub vegetation type and soil water content during the dry season, which was reduced by 31.7% (r=0.43) during the wet season. Our models also predicted that vegetation in the top location (upper part) of the sub-basin is highly responsive to GW and SWC (r=0.78 and 0.70) during the dry season. The results of this study indicate that the study region is suitable for seasonal crop production in the dry season. Moreover, the results predicted that the growth of vegetation in the top-point location is highly dependent on groundwater flow in both dry and wet seasons, and any instability or long-term drought can negatively affect these floodplain vegetation communities. This study has enriched our knowledge of vegetation responses to groundwater in each season, which will facilitate better floodplain vegetation management.
Keywords: ArcSWAT, machine learning, floodplain vegetation, MODIS NDVI, groundwater
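A sketch, on synthetic data, of the kind of relationship modelling described: a random forest relating NDVI to groundwater flow (GW) and soil water content (SWC), evaluated as the correlation between measured and cross-validation-predicted NDVI. The study performed this analysis in WEKA; scikit-learn and the synthetic generating equation are used here purely for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(5)
n_months = 240                                     # 20 years of monthly observations
gw = rng.gamma(2.0, 1.0, n_months)                 # groundwater flow (arbitrary units)
swc = rng.uniform(0.1, 0.4, n_months)              # volumetric soil water content
ndvi = 0.2 + 0.05 * gw + 0.8 * swc + rng.normal(scale=0.05, size=n_months)

X = np.column_stack([gw, swc])
pred = cross_val_predict(RandomForestRegressor(n_estimators=300, random_state=0),
                         X, ndvi, cv=5)
r = np.corrcoef(ndvi, pred)[0, 1]
print("r between measured and predicted NDVI:", round(r, 2))
```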
Procedia PDF Downloads 116
2413 Predicting Response to Cognitive Behavioral Therapy for Psychosis Using Machine Learning and Functional Magnetic Resonance Imaging
Authors: Eva Tolmeijer, Emmanuelle Peters, Veena Kumari, Liam Mason
Abstract:
Cognitive behavioral therapy for psychosis (CBTp) is effective in many but not all patients, making it important to better understand the factors that determine treatment outcomes. To date, no studies have examined whether neuroimaging can make clinically useful predictions about who will respond to CBTp. To this end, we used machine learning methods that make predictions about symptom improvement at the individual patient level. Prior to receiving CBTp, 22 patients with a diagnosis of schizophrenia completed a social-affective processing task during functional MRI. Multivariate pattern analysis assessed whether treatment response could be predicted by brain activation responses to facial affect that was either socially threatening or prosocial. The resulting models did significantly predict symptom improvement, with distinct multivariate signatures predicting psychotic (r=0.54, p=0.01) and affective (r=0.32, p=0.05) symptoms. Psychotic symptom improvement was accurately predicted from relatively focal threat-related activation across hippocampal, occipital, and temporal regions; affective symptom improvement was predicted by a more dispersed profile of responses to prosocial affect. These findings enrich our understanding of the neurobiological underpinning of treatment response. This study provides a foundation that will hopefully lead to greater precision and tailoring of the interventions offered to patients.
Keywords: cognitive behavioral therapy, machine learning, psychosis, schizophrenia
Procedia PDF Downloads 273
2412 Scalable Learning of Tree-Based Models on Sparsely Representable Data
Authors: Fares Hedayatit, Arnauld Joly, Panagiotis Papadimitriou
Abstract:
Many machine learning tasks, such as text annotation, usually require training over very big datasets, e.g., millions of web documents, that can be represented in a sparse input space. State-of-the-art tree-based ensemble algorithms cannot scale to such datasets, since they include operations whose running time is a function of the input space size rather than a function of the number of non-zero input elements. In this paper, we propose an efficient splitting algorithm to leverage input sparsity within decision tree methods. Our algorithm improves training time over sparse datasets by more than two orders of magnitude, and it has been incorporated into the current version of scikit-learn (scikit-learn.org), the most popular open-source Python machine learning library.
Keywords: big data, sparsely representable data, tree-based models, scalable learning
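A minimal sketch of training a scikit-learn tree ensemble directly on a sparse CSR matrix of the kind the abstract targets; the synthetic data, labels, and model settings are assumptions and do not reproduce the paper's benchmark.

```python
import numpy as np
from scipy.sparse import random as sparse_random
from sklearn.ensemble import GradientBoostingClassifier

X = sparse_random(2000, 10000, density=0.001, format="csr", random_state=0)  # ~0.1% non-zero
row_sums = np.asarray(X.sum(axis=1)).ravel()
y = (row_sums > np.median(row_sums)).astype(int)        # synthetic target from row activity

clf = GradientBoostingClassifier(n_estimators=20, max_depth=2, random_state=0)
clf.fit(X, y)                                            # sparse CSR input is handled natively
print("training accuracy:", round(clf.score(X, y), 3))
```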
Procedia PDF Downloads 261