Search results for: batch machine learning
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9050


8720 Systematic Evaluation of Convolutional Neural Network on Land Cover Classification from Remotely Sensed Images

Authors: Eiman Kattan, Hong Wei

Abstract:

In using Convolutional Neural Networks (CNNs) for classification, a set of hyperparameters is available for configuration. This study aims to evaluate the impact of a range of parameters in a CNN architecture, i.e. AlexNet, on land cover classification based on four remotely sensed datasets. The evaluation tests the influence of a set of hyperparameters on the classification performance. The parameters concerned are epoch values, batch size, and convolutional filter size against input image size. A set of experiments was conducted to specify the effectiveness of the selected parameters using two implementation approaches, namely pretrained and fine-tuned. We first explore the number of epochs under several selected batch size values (32, 64, 128 and 200). The impact of the kernel size of convolutional filters (1, 3, 5, 7, 10, 15, 20, 25 and 30) was evaluated against the image sizes under testing (64, 96, 128, 180 and 224), which gave us insight into the relationship between the size of convolutional filters and image size. To generalise the validation, four remote sensing datasets, AID, RSD, UCMerced and RSCCN, which have different land covers and are publicly available, were used in the experiments. These datasets have a wide diversity of input data, such as number of classes, amount of labelled data, and texture patterns. A specifically designed interactive deep learning GPU training platform for image classification (NVIDIA DIGITS) was employed in the experiments; it has shown efficiency in both training and testing. The results have shown that increasing the number of epochs leads to a higher accuracy rate, as expected; however, the convergence state is highly related to the dataset. For the batch size evaluation, a larger batch size slightly decreases the classification accuracy compared to a small batch size. For example, selecting a batch size of 32 on the RSCCN dataset achieves an accuracy rate of 90.34% at the 11th epoch, while decreasing the number of epochs to one makes the accuracy rate drop to 74%. At the other extreme, setting the batch size to 200 yields an accuracy rate of 86.5% at the 11th epoch and 63% when using one epoch only. On the other hand, the choice of kernel size is only loosely related to the dataset; from a practical point of view, a filter size of 20 produces 70.4286%. The final image size experiment shows that accuracy improves with larger input images, although this performance gain is computationally expensive. These conclusions open opportunities toward better classification performance in various applications, such as planetary remote sensing.
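
For readers who want to reproduce this kind of sweep, the sketch below loops over the batch sizes and epoch counts named above for an AlexNet-style network in PyTorch; the dataset objects, learning rate, and optimizer settings are illustrative assumptions rather than the authors' configuration.

```python
# Illustrative sketch only: grid over batch size and epochs for an AlexNet-style CNN.
# `train_set` / `val_set` are assumed torchvision-style datasets of remote sensing tiles.
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import models

def evaluate(model, loader, device):
    model.eval(); correct = total = 0
    with torch.no_grad():
        for x, y in loader:
            pred = model(x.to(device)).argmax(1)
            correct += (pred == y.to(device)).sum().item()
            total += y.numel()
    return correct / total

def run_experiment(train_set, val_set, num_classes, batch_sizes=(32, 64, 128, 200), epochs=11):
    device = "cuda" if torch.cuda.is_available() else "cpu"
    results = {}
    for bs in batch_sizes:
        model = models.alexnet(num_classes=num_classes).to(device)  # a fine-tuned variant would start from pretrained weights
        opt = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
        loss_fn = nn.CrossEntropyLoss()
        loader = DataLoader(train_set, batch_size=bs, shuffle=True)
        val_loader = DataLoader(val_set, batch_size=bs)
        for epoch in range(1, epochs + 1):
            model.train()
            for x, y in loader:
                opt.zero_grad()
                loss = loss_fn(model(x.to(device)), y.to(device))
                loss.backward(); opt.step()
            results[(bs, epoch)] = evaluate(model, val_loader, device)  # accuracy per (batch size, epoch)
    return results
```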

Keywords: CNNs, hyperparameters, remote sensing, land cover, land use

Procedia PDF Downloads 166
8719 Big Data in Telecom Industry: Effective Predictive Techniques on Call Detail Records

Authors: Sara ElElimy, Samir Moustafa

Abstract:

Mobile network operators are starting to face many challenges in the digital era, especially with high demands from customers. Mobile network operators are a major source of big data, and traditional techniques are not effective in the new era of big data, the Internet of Things (IoT), and 5G; as a result, handling different big datasets effectively becomes a vital task for operators with the continuous growth of data and the move from Long Term Evolution (LTE) to 5G. There is therefore an urgent need for effective big data analytics to predict future demands, traffic, and network performance to fulfill the requirements of the fifth generation of mobile network technology. In this paper, we introduce data science techniques using machine learning and deep learning algorithms: the autoregressive integrated moving average (ARIMA), Bayesian-based curve fitting, and recurrent neural networks (RNN) are employed for a data-driven application for mobile network operators. The main framework includes identification of each model's parameters, estimation, prediction, and a final data-driven application of these predictions to business and network performance use cases. These models are applied to the Telecom Italia Big Data Challenge call detail records (CDRs) datasets. The performance of these models, assessed using specific well-known evaluation criteria, shows that ARIMA (a machine learning-based model) is more accurate as a predictive model on such a dataset than the RNN (a deep learning model).
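
A minimal sketch of the ARIMA branch of this framework (identification, estimation, prediction) on an aggregated CDR traffic series, assuming statsmodels and a hypothetical cdr_traffic.csv file with a "traffic" column; the order (2, 1, 2) is a placeholder.

```python
# Illustrative sketch: fit an ARIMA model to an aggregated CDR traffic series and
# score a held-out forecast. File name, column name, and (p, d, q) are assumptions.
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA
from sklearn.metrics import mean_squared_error

series = pd.read_csv("cdr_traffic.csv", parse_dates=["timestamp"], index_col="timestamp")["traffic"]
train, test = series[:-168], series[-168:]            # hold out the last week of hourly data

model = ARIMA(train, order=(2, 1, 2)).fit()           # identification/estimation step
forecast = model.forecast(steps=len(test))            # prediction step
print("MSE:", mean_squared_error(test, forecast))
```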

Keywords: big data analytics, machine learning, CDRs, 5G

Procedia PDF Downloads 139
8718 Determination of Klebsiella Pneumoniae Susceptibility to Antibiotics Using Infrared Spectroscopy and Machine Learning Algorithms

Authors: Manal Suleiman, George Abu-Aqil, Uraib Sharaha, Klaris Riesenberg, Itshak Lapidot, Ahmad Salman, Mahmoud Huleihel

Abstract:

Klebsiella pneumoniae is one of the most aggressive multidrug-resistant bacteria associated with human infections, resulting in high mortality and morbidity. Thus, for effective treatment, it is important to diagnose both the species of the infecting bacteria and their susceptibility to antibiotics. Currently used methods for diagnosing bacterial susceptibility to antibiotics are time-consuming (about 24 h following the first culture), so there is a clear need for rapid methods to determine bacterial susceptibility to antibiotics. Infrared spectroscopy is a well-known, sensitive, and simple method that is able to detect minor biomolecular changes in biological samples associated with developing abnormalities. The main goal of this study is to evaluate the potential of infrared spectroscopy in tandem with the Random Forest and XGBoost machine learning algorithms to diagnose the susceptibility of Klebsiella pneumoniae to antibiotics within approximately 20 minutes following the first culture. In this study, 1190 Klebsiella pneumoniae isolates were obtained from different patients with urinary tract infections. The isolates were measured with an infrared spectrometer, and the spectra were analyzed with the Random Forest and XGBoost machine learning algorithms to determine their susceptibility to nine specific antibiotics. Our results confirm that it was possible to classify the isolates into sensitive and resistant to specific antibiotics with a success rate of 80%-85% for the different tested antibiotics. These results demonstrate the promising potential of infrared spectroscopy as a powerful diagnostic method for determining the susceptibility of Klebsiella pneumoniae to antibiotics.
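
The classification step described above could look like the sketch below, assuming the spectra and laboratory susceptibility labels are already stored as arrays; the file names, forest size, and cross-validation setup are illustrative, not the authors' protocol.

```python
# Illustrative sketch: classify infrared spectra of isolates as sensitive/resistant to
# one antibiotic with a Random Forest; the same pattern would be repeated per antibiotic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X = np.load("klebsiella_spectra.npy")        # assumed shape: (n_isolates, n_wavenumbers)
y = np.load("susceptibility_labels.npy")     # assumed: 0 = sensitive, 1 = resistant

clf = RandomForestClassifier(n_estimators=500, random_state=0)
scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(f"mean accuracy: {scores.mean():.3f}")  # the study reports roughly 80%-85%
```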

Keywords: urinary tract infection (UTI), Klebsiella pneumoniae, bacterial susceptibility, infrared spectroscopy, machine learning

Procedia PDF Downloads 168
8717 Improved Computational Efficiency of Machine Learning Algorithm Based on Evaluation Metrics to Control the Spread of Coronavirus in the UK

Authors: Swathi Ganesan, Nalinda Somasiri, Rebecca Jeyavadhanam, Gayathri Karthick

Abstract:

The COVID-19 crisis presents a substantial and critical hazard to worldwide health. Since the occurrence of the disease in the UK in late January 2020, the number of people confirmed to have acquired the illness has increased tremendously across the country, and the number of individuals affected is considerably high. The purpose of this research is to develop a predictive machine learning model that can forecast COVID-19 cases within the UK. This study concentrates on the statistical data collected from 31st January 2020 to 31st March 2021 in the United Kingdom. Information on total COVID cases registered, new cases encountered on a daily basis, total deaths registered, and deaths per day due to Coronavirus is collected from the World Health Organisation (WHO). Data preprocessing is carried out to identify any missing values, outliers, or anomalies in the dataset. The data is split in an 8:2 ratio for training and testing purposes to forecast future new COVID cases. Support Vector Machines (SVM), Random Forests, and linear regression algorithms are chosen to study the models' performance in predicting new COVID-19 cases. The statistical performance of the models in predicting new COVID cases is evaluated with metrics such as the r-squared value and mean squared error. Random Forest outperformed the other two machine learning algorithms with a training accuracy of 99.47% and a testing accuracy of 98.26% when n=30. The mean squared error obtained for Random Forest is 4.05e11, which is lower than that of the other predictive models used in this study. The experimental analysis shows that the Random Forest algorithm performs more effectively and efficiently in predicting new COVID cases, which could help the health sector take relevant measures to control the spread of the virus.
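
A minimal sketch of the reported setup, assuming the WHO series has been saved to a CSV and that the listed columns are the predictors; the 8:2 split, 30 trees (n=30), and the R²/MSE metrics follow the abstract, while the feature construction is an assumption.

```python
# Illustrative sketch: 8:2 time-ordered split and a Random Forest with 30 trees,
# scored with R-squared and mean squared error. Column names are assumptions.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

df = pd.read_csv("uk_covid_who.csv")                      # daily totals, new cases, deaths
X = df[["total_cases", "total_deaths", "new_deaths"]]     # assumed predictors
y = df["new_cases"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, shuffle=False)
model = RandomForestRegressor(n_estimators=30, random_state=0).fit(X_train, y_train)
pred = model.predict(X_test)
print("R2:", r2_score(y_test, pred), "MSE:", mean_squared_error(y_test, pred))
```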

Keywords: COVID-19, machine learning, supervised learning, unsupervised learning, linear regression, support vector machine, random forest

Procedia PDF Downloads 121
8716 Machine Learning Prediction of Compressive Damage and Energy Absorption in Carbon Fiber-Reinforced Polymer Tubular Structures

Authors: Milad Abbasi

Abstract:

Carbon fiber-reinforced polymer (CFRP) composite structures are increasingly being utilized in the automotive industry due to their light weight and specific energy absorption capabilities. Although composite mechanical properties cannot be predicted directly using theoretical methods, various studies have been reported in the literature on accurately simulating the energy-absorbing behavior of CFRP structures. In this research, axial compression experiments were carried out on hand lay-up unidirectional CFRP composite tubes. The fabrication method allowed the authors to extract the material properties of the CFRPs using the ASTM D3039, D3410, and D3518 standards. A neural network machine learning algorithm was then utilized to build a robust prediction model to forecast the axial compressive properties of CFRP tubes while reducing high-cost experimental efforts. The predicted results were compared with the experimental outcomes in terms of load-carrying capacity and energy absorption capability. The results showed high accuracy and precision in predicting the energy-absorption capacity of the CFRP tubes. This research also demonstrates the effectiveness and challenges of machine learning techniques in the robust simulation of composites' energy-absorption behavior. Interestingly, the proposed method considerably reduced the numerical and experimental effort required for the simulation and calibration of CFRP composite tubes subjected to compressive loading.

Keywords: CFRP composite tubes, energy absorption, crushing behavior, machine learning, neural network

Procedia PDF Downloads 153
8715 Comparing the Detection of Autism Spectrum Disorder within Males and Females Using Machine Learning Techniques

Authors: Joseph Wolff, Jeffrey Eilbott

Abstract:

Autism Spectrum Disorders (ASD) are a spectrum of social disorders characterized by deficits in social communication, verbal ability, and interaction that can vary in severity. In recent years, researchers have used magnetic resonance imaging (MRI) to detect how neural patterns in individuals with ASD differ from those of neurotypical (NT) controls for classification purposes. This study analyzed the classification of ASD within males and females using functional MRI data. Functional connectivity (FC) correlations among brain regions were used as feature inputs for machine learning algorithms. Analysis was performed on 558 cases from the Autism Brain Imaging Data Exchange (ABIDE) I dataset. When trained specifically on females, the algorithm underperformed in classifying the ASD subset of our testing population. Although the female group was relatively smaller, the manual matching of the male and female training groups helps explain the algorithm's bias, pointing to sex-related differences in functional brain networks compared to typically developing peers. These results highlight the importance of taking sex into account when considering how generalizations of findings on males with ASD apply to females.

Keywords: autism spectrum disorder, machine learning, neuroimaging, sex differences

Procedia PDF Downloads 209
8714 Methods for Enhancing Ensemble Learning or Improving Classifiers of This Technique in the Analysis and Classification of Brain Signals

Authors: Seyed Mehdi Ghezi, Hesam Hasanpoor

Abstract:

This scientific article explores enhancement methods for ensemble learning with the aim of improving the performance of classifiers in the analysis and classification of brain signals. The research approach in this field consists of two main parts, each with its own strengths and weaknesses. The choice of approach depends on the specific research question and available resources. By combining these approaches and leveraging their respective strengths, researchers can enhance the accuracy and reliability of classification results, consequently advancing our understanding of the brain and its functions. The first approach focuses on utilizing machine learning methods to identify the best features among the vast array of features present in brain signals. The selection of features varies depending on the research objective, and different techniques have been employed for this purpose. For instance, the genetic algorithm has been used in some studies to identify the best features, while optimization methods have been utilized in others to identify the most influential features. Additionally, machine learning techniques have been applied to determine the influential electrodes in classification. Ensemble learning plays a crucial role in identifying the best features that contribute to learning, thereby improving the overall results. The second approach concentrates on designing and implementing methods for selecting the best classifier or utilizing meta-classifiers to enhance the final results in ensemble learning. In a different section of the research, a single classifier is used instead of multiple classifiers, employing different sets of features to improve the results. The article provides an in-depth examination of each technique, highlighting their advantages and limitations. By integrating these techniques, researchers can enhance the performance of classifiers in the analysis and classification of brain signals. This advancement in ensemble learning methodologies contributes to a better understanding of the brain and its functions, ultimately leading to improved accuracy and reliability in brain signal analysis and classification.
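
The meta-classifier idea in the second approach can be illustrated with scikit-learn's StackingClassifier; the base learners, the synthetic stand-in features, and the logistic-regression meta-learner below are assumptions made for the example, not the article's specific configuration.

```python
# Illustrative sketch: a stacking ensemble where a meta-classifier combines several
# base classifiers over brain-signal features. Synthetic data stands in for EEG features.
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier, RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=200, n_features=64, random_state=0)  # stand-in features

base_learners = [
    ("svm", SVC(probability=True)),
    ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
    ("knn", KNeighborsClassifier()),
]
meta = LogisticRegression(max_iter=1000)                 # meta-classifier over base predictions
ensemble = StackingClassifier(estimators=base_learners, final_estimator=meta, cv=5)
print("cross-validated accuracy:", cross_val_score(ensemble, X, y, cv=5).mean())
```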

Keywords: ensemble learning, brain signals, classification, feature selection, machine learning, genetic algorithm, optimization methods, influential features, influential electrodes, meta-classifiers

Procedia PDF Downloads 75
8713 Predicting Costs in Construction Projects with Machine Learning: A Detailed Study Based on Activity-Level Data

Authors: Soheila Sadeghi

Abstract:

Construction projects are complex and often subject to significant cost overruns due to the multifaceted nature of the activities involved. Accurate cost estimation is crucial for effective budget planning and resource allocation. Traditional methods for predicting overruns often rely on expert judgment or analysis of historical data, which can be time-consuming, subjective, and may fail to consider important factors. However, with the increasing availability of data from construction projects, machine learning techniques can be leveraged to improve the accuracy of overrun predictions. This study applied machine learning algorithms to enhance the prediction of cost overruns in a case study of a construction project. The methodology involved the development and evaluation of two machine learning models: Random Forest and Neural Networks. Random Forest can handle high-dimensional data, capture complex relationships, and provide feature importance estimates. Neural Networks, particularly Deep Neural Networks (DNNs), are capable of automatically learning and modeling complex, non-linear relationships between input features and the target variable. These models can adapt to new data, reduce human bias, and uncover hidden patterns in the dataset. The findings of this study demonstrate that both Random Forest and Neural Networks can significantly improve the accuracy of cost overrun predictions compared to traditional methods. The Random Forest model also identified key cost drivers and risk factors, such as changes in the scope of work and delays in material delivery, which can inform better project risk management. However, the study acknowledges several limitations. First, the findings are based on a single construction project, which may limit the generalizability of the results to other projects or contexts. Second, the dataset, although comprehensive, may not capture all relevant factors influencing cost overruns, such as external economic conditions or political factors. Third, the study focuses primarily on cost overruns, while schedule overruns are not explicitly addressed. Future research should explore the application of machine learning techniques to a broader range of projects, incorporate additional data sources, and investigate the prediction of both cost and schedule overruns simultaneously.
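
As an illustration of how feature importances surface cost drivers, the following sketch fits a Random Forest to hypothetical activity-level columns; the file name, column names, and target are assumptions, not the study's actual schema.

```python
# Illustrative sketch: a Random Forest on activity-level records, with feature
# importances used to rank cost drivers (e.g., scope changes, material delays).
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

data = pd.read_csv("activity_level_costs.csv")            # hypothetical activity-level dataset
features = ["planned_cost", "planned_duration", "scope_change_count",
            "material_delay_days", "crew_size"]
X_train, X_test, y_train, y_test = train_test_split(
    data[features], data["cost_overrun"], test_size=0.2, random_state=42)

rf = RandomForestRegressor(n_estimators=300, random_state=42).fit(X_train, y_train)
print("R^2 on held-out activities:", rf.score(X_test, y_test))
for name, imp in sorted(zip(features, rf.feature_importances_), key=lambda t: -t[1]):
    print(f"{name}: {imp:.3f}")                            # ranked cost drivers
```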

Keywords: cost prediction, machine learning, project management, random forest, neural networks

Procedia PDF Downloads 54
8712 Hull Detection from Handwritten Digit Image

Authors: Sriraman Kothuri, Komal Teja Mattupalli

Abstract:

In this paper, we propose a novel algorithm for recognizing hulls in handwritten digits. This is an extension of the work on "Digit Recognition Using Freeman Chain Code". Finding the hulls in a user-given digit requires three steps: pre-processing, boundary extraction, and finally applying the hull detection system to attain better results. The detection of hull regions is mainly intended to increase machine learning capability in the detection of characters or digits. The approach can also be extended to obtain hull regions and their intensities, for example for black holes in space exploration.

Keywords: chain code, machine learning, hull regions, hull recognition system, SASK algorithm

Procedia PDF Downloads 400
8711 A Machine Learning Approach for Efficient Resource Management in Construction Projects

Authors: Soheila Sadeghi

Abstract:

Construction projects are complex and often subject to significant cost overruns due to the multifaceted nature of the activities involved. Accurate cost estimation is crucial for effective budget planning and resource allocation. Traditional methods for predicting overruns often rely on expert judgment or analysis of historical data, which can be time-consuming, subjective, and may fail to consider important factors. However, with the increasing availability of data from construction projects, machine learning techniques can be leveraged to improve the accuracy of overrun predictions. This study applied machine learning algorithms to enhance the prediction of cost overruns in a case study of a construction project. The methodology involved the development and evaluation of two machine learning models: Random Forest and Neural Networks. Random Forest can handle high-dimensional data, capture complex relationships, and provide feature importance estimates. Neural Networks, particularly Deep Neural Networks (DNNs), are capable of automatically learning and modeling complex, non-linear relationships between input features and the target variable. These models can adapt to new data, reduce human bias, and uncover hidden patterns in the dataset. The findings of this study demonstrate that both Random Forest and Neural Networks can significantly improve the accuracy of cost overrun predictions compared to traditional methods. The Random Forest model also identified key cost drivers and risk factors, such as changes in the scope of work and delays in material delivery, which can inform better project risk management. However, the study acknowledges several limitations. First, the findings are based on a single construction project, which may limit the generalizability of the results to other projects or contexts. Second, the dataset, although comprehensive, may not capture all relevant factors influencing cost overruns, such as external economic conditions or political factors. Third, the study focuses primarily on cost overruns, while schedule overruns are not explicitly addressed. Future research should explore the application of machine learning techniques to a broader range of projects, incorporate additional data sources, and investigate the prediction of both cost and schedule overruns simultaneously.

Keywords: resource allocation, machine learning, optimization, data-driven decision-making, project management

Procedia PDF Downloads 38
8710 Current Methods for Drug Property Prediction in the Real World

Authors: Jacob Green, Cecilia Cabrera, Maximilian Jakobs, Andrea Dimitracopoulos, Mark van der Wilk, Ryan Greenhalgh

Abstract:

Predicting drug properties is key in drug discovery to enable de-risking of assets before expensive clinical trials and to find highly active compounds faster. Interest from the machine learning community has led to the release of a variety of benchmark datasets and proposed methods. However, it remains unclear to practitioners which method or approach is most suitable, as different papers benchmark on different datasets and methods, leading to varying conclusions that are not easily compared. Our large-scale empirical study links together numerous earlier works on different datasets and methods, thus offering a comprehensive overview of the existing property classes, datasets, and their interactions with different methods. We emphasise the importance of uncertainty quantification and of the time, and therefore cost, of applying these methods in the drug development decision-making cycle. To the best of the authors' knowledge, the optimal approach varies depending on the dataset, and engineered features with classical machine learning methods often outperform deep learning. Specifically, QSAR datasets are typically best analysed with classical methods such as Gaussian Processes, while ADMET datasets are sometimes better described by trees or deep learning methods such as Graph Neural Networks or language models. Our work highlights that practitioners do not yet have a straightforward, black-box procedure to rely on and sets a precedent for creating practitioner-relevant benchmarks. Deep learning approaches must be proven on these benchmarks to become the practical method of choice in drug property prediction.

Keywords: activity (QSAR), ADMET, classical methods, drug property prediction, empirical study, machine learning

Procedia PDF Downloads 81
8709 Comparative Evaluation of Accuracy of Selected Machine Learning Classification Techniques for Diagnosis of Cancer: A Data Mining Approach

Authors: Rajvir Kaur, Jeewani Anupama Ginige

Abstract:

With recent trends in Big Data and advancements in Information and Communication Technologies, the healthcare industry is transitioning from being clinician-oriented to technology-oriented. Many people around the world die of cancer because the disease was not diagnosed at an early stage. Nowadays, computational methods in the form of Machine Learning (ML) are used to develop automated decision support systems that can diagnose cancer with high confidence in a timely manner. This paper carries out a comparative evaluation of a selected set of ML classifiers on two existing datasets: breast cancer and cervical cancer. The ML classifiers compared in this study are Decision Tree (DT), Support Vector Machine (SVM), k-Nearest Neighbor (k-NN), Logistic Regression, Ensemble (Bagged Tree), and Artificial Neural Networks (ANN). The evaluation is carried out based on the standard evaluation metrics Precision (P), Recall (R), F1-score, and Accuracy. The experimental results based on these metrics show that ANN achieved the highest accuracy (99.4%) when tested on the breast cancer dataset. On the other hand, when these ML classifiers are tested on the cervical cancer dataset, the Ensemble (Bagged Tree) technique gives better accuracy (93.1%) in comparison to the other classifiers.
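
A sketch of such a comparison on the publicly available Wisconsin breast cancer dataset, using default hyperparameters rather than the authors' settings; the Bagged Tree and ANN stand-ins are scikit-learn's BaggingClassifier and MLPClassifier.

```python
# Illustrative sketch: compare several classifiers with the same metrics used in the
# study (precision, recall, F1-score, accuracy) on a public breast cancer dataset.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score, recall_score, f1_score, accuracy_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import BaggingClassifier
from sklearn.neural_network import MLPClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

models = {
    "DT": DecisionTreeClassifier(),
    "SVM": SVC(),
    "k-NN": KNeighborsClassifier(),
    "LogReg": LogisticRegression(max_iter=5000),
    "Bagged Tree": BaggingClassifier(),
    "ANN": MLPClassifier(max_iter=2000),
}
for name, clf in models.items():
    pred = clf.fit(X_tr, y_tr).predict(X_te)
    print(name, accuracy_score(y_te, pred), precision_score(y_te, pred),
          recall_score(y_te, pred), f1_score(y_te, pred))
```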

Keywords: artificial neural networks, breast cancer, classifiers, cervical cancer, f-score, machine learning, precision, recall

Procedia PDF Downloads 276
8708 Comparison of Different Machine Learning Algorithms for Solubility Prediction

Authors: Muhammet Baldan, Emel Timuçin

Abstract:

Molecular solubility prediction plays a crucial role in various fields, such as drug discovery, environmental science, and material science. In this study, we compare the performance of five machine learning algorithms—linear regression, support vector machines (SVM), random forests, gradient boosting machines (GBM), and neural networks—for predicting molecular solubility using the AqSolDB dataset. The dataset consists of 9981 data points with their corresponding solubility values. MACCS keys (166 bits), RDKit properties (20 properties), and three structural properties are extracted for every SMILES representation in the dataset, giving a total of 189 features for training and testing for every molecule. Each algorithm is trained on a subset of the dataset and evaluated using accuracy scores. Additionally, the computational time for training and testing is recorded to assess the efficiency of each algorithm. Our results demonstrate that the random forest model outperformed the other algorithms in terms of predictive accuracy, achieving a 0.93 accuracy score. Gradient boosting machines and neural networks also exhibit strong performance, closely followed by support vector machines. Linear regression, while simpler in nature, demonstrates competitive performance but with slightly higher errors compared to the ensemble methods. Overall, this study provides valuable insights into the performance of machine learning algorithms for molecular solubility prediction, highlighting the importance of algorithm selection in achieving accurate and efficient predictions in practical applications.
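
The feature pipeline can be sketched as follows with RDKit, assuming a handful of stand-in SMILES strings and binned solubility classes; the descriptor subset shown is much smaller than the 20 RDKit properties used in the study.

```python
# Illustrative sketch of the featurization: MACCS keys plus a few RDKit descriptors per
# SMILES string, fed to a Random Forest. SMILES strings and class labels are stand-ins.
import numpy as np
from rdkit import Chem
from rdkit.Chem import MACCSkeys, Descriptors
from sklearn.ensemble import RandomForestClassifier

def featurize(smiles):
    mol = Chem.MolFromSmiles(smiles)
    bits = MACCSkeys.GenMACCSKeys(mol).ToBitString()                 # 167 bits (bit 0 unused)
    maccs = np.array([int(b) for b in bits])
    extra = [Descriptors.MolWt(mol), Descriptors.MolLogP(mol), Descriptors.TPSA(mol)]
    return np.concatenate([maccs, extra])

smiles_list = ["CCO", "c1ccccc1", "CC(=O)Oc1ccccc1C(=O)O"]            # stand-ins for AqSolDB entries
labels = [1, 0, 0]                                                    # assumed solubility classes
X = np.vstack([featurize(s) for s in smiles_list])
clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, labels)
```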

Keywords: random forest, machine learning, comparison, feature extraction

Procedia PDF Downloads 40
8707 Smart Sensor Data to Predict Machine Performance with IoT-Based Machine Learning and Artificial Intelligence

Authors: C. J. Rossouw, T. I. van Niekerk

Abstract:

The global manufacturing industry is utilizing the internet and cloud-based services to further explore the anatomy and optimize manufacturing processes in support of the movement into the Fourth Industrial Revolution (4IR). The 4IR from a third world and African perspective is hindered by the fact that many manufacturing systems that were developed in the third industrial revolution are not inherently equipped to utilize the internet and services of the 4IR, hindering the progression of third world manufacturing industries into the 4IR. This research focuses on the development of a non-invasive and cost-effective cyber-physical IoT system that will exploit a machine’s vibration to expose semantic characteristics in the manufacturing process and utilize these results through a real-time cloud-based machine condition monitoring system with the intention to optimize the system. A microcontroller-based IoT sensor was designed to acquire a machine’s mechanical vibration data, process it in real-time, and transmit it to a cloud-based platform via Wi-Fi and the internet. Time-frequency Fourier analysis was applied to the vibration data to form an image representation of the machine’s behaviour. This data was used to train a Convolutional Neural Network (CNN) to learn semantic characteristics in the machine’s behaviour and relate them to a state of operation. The same data was also used to train a Convolutional Autoencoder (CAE) to detect anomalies in the data. Real-time edge-based artificial intelligence was achieved by deploying the CNN and CAE on the sensor to analyse the vibration. A cloud platform was deployed to visualize the vibration data and the results of the CNN and CAE in real-time. The cyber-physical IoT system was deployed on a semi-automated metal granulation machine with a set of trained machine learning models. Using a single sensor, the system was able to accurately visualize three states of the machine’s operation in real-time. The system was also able to detect a variance in the material being granulated. The research demonstrates how non-IoT manufacturing systems can be equipped with edge-based artificial intelligence to establish a remote machine condition monitoring system.

Keywords: IoT, cyber-physical systems, artificial intelligence, manufacturing, vibration analytics, continuous machine condition monitoring

Procedia PDF Downloads 88
8706 Supervised/Unsupervised Mahalanobis Algorithm for Improving Performance for Cyberattack Detection over Communications Networks

Authors: Radhika Ranjan Roy

Abstract:

Deployment of machine learning (ML)/deep learning (DL) algorithms for cyberattack detection in operational communications networks (wireless and/or wire-line) is being delayed because of low performance parameters (e.g., recall, precision, and f₁-score). If datasets become imbalanced, which is the usual case for communications networks, performance tends to become worse. Reducing the dimensionality of the feature sets to increase performance is also a major challenge. Mahalanobis algorithms have been widely applied in scientific research because Mahalanobis distance metric learning is a successful framework. In this paper, we have investigated the Mahalanobis binary classifier algorithm for increasing cyberattack detection performance over communications networks as a proof of concept. We have also found that high-dimensional information in intermediate features, which is not utilized as much for classification tasks in ML/DL algorithms, is the main contributor to the state-of-the-art improved performance of the Mahalanobis method, even for imbalanced and sparse datasets. With no feature reduction, MD offers uniform results for precision, recall, and f₁-score for the unbalanced and sparse NSL-KDD dataset.
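
A generic Mahalanobis-distance binary classifier of the kind investigated here can be sketched as below; the covariance regularization and the per-class distance rule are standard choices, not necessarily the authors' exact formulation.

```python
# Illustrative sketch: assign each sample to the class whose mean it is closest to
# under that class's covariance (Mahalanobis distance). Generic formulation only.
import numpy as np

class MahalanobisClassifier:
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.means_, self.inv_covs_ = {}, {}
        for c in self.classes_:
            Xc = X[y == c]
            self.means_[c] = Xc.mean(axis=0)
            cov = np.cov(Xc, rowvar=False) + 1e-6 * np.eye(X.shape[1])  # regularize for sparse data
            self.inv_covs_[c] = np.linalg.inv(cov)
        return self

    def predict(self, X):
        # squared Mahalanobis distance to each class mean, smallest wins
        dists = np.stack([
            np.einsum("ij,jk,ik->i", X - self.means_[c], self.inv_covs_[c], X - self.means_[c])
            for c in self.classes_], axis=1)
        return self.classes_[dists.argmin(axis=1)]

# toy usage on synthetic data standing in for network-flow features
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 10)); y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
clf = MahalanobisClassifier().fit(X, y)
print("training accuracy:", (clf.predict(X) == y).mean())
```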

Keywords: Mahalanobis distance, machine learning, deep learning, NSL-KDD, local intrinsic dimensionality, chi-square, positive semi-definite, area under the curve

Procedia PDF Downloads 78
8705 High Titer Cellulosic Ethanol Production Achieved by Fed-Batch Prehydrolysis Simultaneous Enzymatic Saccharification and Fermentation of Sulfite Pretreated Softwood

Authors: Chengyu Dong, Shao-Yuan Leu

Abstract:

Cellulosic ethanol production from lignocellulosic biomass can reduce our reliance on fossil fuels, mitigate climate change, and stimulate rural economic development. The relatively low ethanol titer typically achieved (60 g/L) limits the economic viability of lignocellulose-based biorefineries; the titer can be increased up to 80 g/L by removing nearly all the non-cellulosic materials, but the capital cost of the pretreatment process then increases significantly. In this study, a fed-batch prehydrolysis simultaneous saccharification and fermentation (PSSF) process was designed to convert sulfite-pretreated softwood (~30% residual lignin) to high concentrations of ethanol (80 g/L). The liquefaction time of the hydrolysis process was shortened to 24 h by employing the fed-batch strategy. Washing out the spent liquor with water could eliminate the inhibition caused by the pretreatment spent liquor; however, the ethanol yield of the lignocellulose was reduced, as fermentable sugars were also lost during the process. Fed-batch prehydrolysis of the whole slurry (i.e., liquid plus solid fractions) of pretreated softwood for 24 h, followed by simultaneous saccharification and fermentation at 28 °C, can generate 80 g/L of ethanol. The fed-batch strategy is very effective at eliminating the 'solid effect' of high-gravity saccharification, so concentrating the cellulose to nearly 90% in the pretreatment process is not a necessary step for high ethanol production. Detoxification of the pretreatment spent liquor caused the loss of sugar and consequently reduced the ethanol yield. The tolerance of the yeast to inhibitors was better at 28 °C; therefore, reducing the temperature of the subsequent fermentation process is a simple and valid way to achieve a high ethanol titer.

Keywords: cellulosic ethanol, sulfite pretreatment, fed-batch PSSF, temperature

Procedia PDF Downloads 367
8704 Application of Granular Computing Paradigm in Knowledge Induction

Authors: Iftikhar U. Sikder

Abstract:

This paper illustrates an application of granular computing approach, namely rough set theory in data mining. The paper outlines the formalism of granular computing and elucidates the mathematical underpinning of rough set theory, which has been widely used by the data mining and the machine learning community. A real-world application is illustrated, and the classification performance is compared with other contending machine learning algorithms. The predictive performance of the rough set rule induction model shows comparative success with respect to other contending algorithms.

Keywords: concept approximation, granular computing, reducts, rough set theory, rule induction

Procedia PDF Downloads 531
8703 Bidirectional Encoder Representations from Transformers Sentiment Analysis Applied to Three Presidential Pre-Candidates in Costa Rica

Authors: Félix David Suárez Bonilla

Abstract:

A sentiment analysis service to detect polarity (positive, neutral, and negative), based on transfer learning, was built using a Spanish version of BERT and applied to tweets written in Spanish. The dataset used consisted of 11975 reviews, which were extracted from Google Play using the google-play-scrapper package. The fine-tuned BETO model used the AdamW optimizer, a batch size of 16, a learning rate of 2x10⁻⁵, and 10 epochs. The system was tested on tweets about three presidential pre-candidates from Costa Rica and finally validated using human-labeled examples, achieving an accuracy of 83.3%.
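
A sketch of this fine-tuning recipe with Hugging Face Transformers, using the reported AdamW optimizer, batch size 16, learning rate 2x10⁻⁵, and 10 epochs; the BETO checkpoint name and the stand-in reviews are assumptions, not taken from the paper.

```python
# Illustrative sketch of the reported fine-tuning setup for three-class polarity.
# The checkpoint name is the commonly used public BETO model (an assumption here).
import torch
from torch.utils.data import DataLoader
from transformers import AutoTokenizer, AutoModelForSequenceClassification

checkpoint = "dccuchile/bert-base-spanish-wwm-cased"   # assumed BETO checkpoint
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=3)  # pos/neu/neg

texts = ["Excelente propuesta", "No estoy de acuerdo", "Veremos qué pasa"]   # stand-in reviews
labels = torch.tensor([0, 2, 1])
enc = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
loader = DataLoader(list(zip(enc["input_ids"], enc["attention_mask"], labels)), batch_size=16)
model.train()
for epoch in range(10):
    for input_ids, attention_mask, y in loader:
        out = model(input_ids=input_ids, attention_mask=attention_mask, labels=y)
        out.loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```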

Keywords: NLP, transfer learning, BERT, sentiment analysis, social media, opinion mining

Procedia PDF Downloads 174
8702 Heuristic Classification of Hydrophone Recordings

Authors: Daniel M. Wolff, Patricia Gray, Rafael de la Parra Venegas

Abstract:

An unsupervised machine listening system is constructed and applied to a dataset of 17,195 30-second marine hydrophone recordings. The system is then heuristically supplemented with anecdotal listening, contextual recording information, and supervised learning techniques to reduce the number of false positives. Features for classification are assembled by extracting the following data from each of the audio files: the spectral centroid, root-mean-squared values for each frequency band of a 10-octave filter bank, and mel-frequency cepstral coefficients in 5-second frames. In this way both time- and frequency-domain information are contained in the features to be passed to a clustering algorithm. Classification is performed using the k-means algorithm and then a k-nearest neighbors search. Different values of k are experimented with, in addition to different combinations of the available feature sets. Hypothesized class labels are 'primarily anthrophony' and 'primarily biophony', where the best class result conforming to the former label has 104 members after heuristic pruning. This demonstrates how a large audio dataset has been made more tractable with machine learning techniques, forming the foundation of a framework designed to acoustically monitor and gauge biological and anthropogenic activity in a marine environment.
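
A sketch of the feature extraction and clustering pipeline, assuming librosa for the audio features and scikit-learn for k-means and the k-nearest-neighbour search; the single RMS value stands in for the 10-band filter-bank RMS features, and the frame settings are simplified relative to the 5-second frames described above.

```python
# Illustrative sketch: extract spectral centroid, RMS, and MFCC features per recording,
# cluster with k-means, and prepare a k-NN index. File names are placeholders.
import numpy as np
import librosa
from sklearn.cluster import KMeans
from sklearn.neighbors import NearestNeighbors

def extract_features(path, sr=22050):
    y, sr = librosa.load(path, sr=sr, duration=30.0)
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr).mean()
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).mean(axis=1)
    rms = librosa.feature.rms(y=y).mean()          # stand-in for the 10-band filter-bank RMS values
    return np.concatenate([[centroid, rms], mfcc])

files = ["rec_0001.wav", "rec_0002.wav"]           # placeholders for the 17,195 recordings
X = np.vstack([extract_features(f) for f in files])

kmeans = KMeans(n_clusters=2, random_state=0).fit(X)      # hypothesised anthrophony/biophony split
knn = NearestNeighbors(n_neighbors=5).fit(X)              # follow-up nearest-neighbour search
```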

Keywords: anthrophony, hydrophone, k-means, machine learning

Procedia PDF Downloads 170
8701 SAP-Reduce: Staleness-Aware P-Reduce with Weight Generator

Authors: Lizhi Ma, Chengcheng Hu, Fuxian Wong

Abstract:

Partial reduce (P-Reduce) has set state-of-the-art performance for distributed machine learning in heterogeneous environments over the All-Reduce architecture. Dynamic P-Reduce based on the exponential moving average (EMA) approach predicts all the intermediate model parameters, which introduces unreliability: this approximation trick can lead to incorrect model parameters being obtained on the nodes. In this paper, SAP-Reduce is proposed, a variant of the All-Reduce distributed training model with staleness-aware dynamic P-Reduce. SAP-Reduce directly utilizes an EMA-like algorithm to generate the normalized weights. To demonstrate the effectiveness of the algorithm, experiments were set up on a number of deep learning models, comparing the single-step training acceleration ratio and convergence time. We find that SAP-Reduce, which simplifies dynamic P-Reduce, outperforms the intermediate-approximation variant. The empirical results show that SAP-Reduce is 1.3×–2.1× faster than existing baselines.

Keywords: collective communication, decentralized distributed training, machine learning, P-Reduce

Procedia PDF Downloads 32
8700 Reinforcement Learning For Agile CNC Manufacturing: Optimizing Configurations And Sequencing

Authors: Huan Ting Liao

Abstract:

In a typical manufacturing environment, computer numerical control (CNC) machining is essential for automating production through precise computer-controlled tool operations, significantly enhancing efficiency and ensuring consistent product quality. However, traditional CNC production lines often rely on manual loading and unloading, limiting operational efficiency and scalability. Although automated loading systems have been developed, they frequently lack sufficient intelligence and configuration efficiency, requiring extensive setup adjustments for different products and impacting overall productivity. This research addresses the job shop scheduling problem (JSSP) in CNC machining environments, aiming to minimize total completion time (makespan) and maximize CNC machine utilization. We propose a novel approach using reinforcement learning (RL), specifically the Q-learning algorithm, to optimize scheduling decisions. The study simulates the JSSP, incorporating robotic arm operations, machine processing times, and work order demand allocation to determine optimal processing sequences. The Q-learning algorithm enhances machine utilization by dynamically balancing workloads across CNC machines, adapting to varying job demands and machine states. This approach offers robust solutions for complex manufacturing environments by automating decision-making processes for job assignments. Additionally, we evaluate various layout configurations to identify the most efficient setup. By integrating RL-based scheduling optimization with layout analysis, this research aims to provide a comprehensive solution for improving manufacturing efficiency and productivity in CNC-based job shops. The proposed method's adaptability and automation potential promise significant advancements in tackling dynamic manufacturing challenges.
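
The tabular Q-learning loop at the core of this approach can be sketched as below; the job-shop environment interface (reset, step, available_actions), the reward signal, and the hyperparameters are assumptions, since the abstract does not specify them.

```python
# Illustrative sketch: epsilon-greedy tabular Q-learning over a job-shop environment.
# The environment object and its reward (e.g., negative makespan increments) are assumed.
import random
from collections import defaultdict

def q_learning(env, episodes=5000, alpha=0.1, gamma=0.95, epsilon=0.1):
    Q = defaultdict(float)                      # Q[(state, action)]
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            actions = env.available_actions(state)
            if random.random() < epsilon:       # explore
                action = random.choice(actions)
            else:                               # exploit the current estimate
                action = max(actions, key=lambda a: Q[(state, a)])
            next_state, reward, done = env.step(action)
            best_next = max((Q[(next_state, a)] for a in env.available_actions(next_state)),
                            default=0.0)
            Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
            state = next_state
    return Q
```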

Keywords: job shop scheduling problem, reinforcement learning, operations sequence, layout optimization, q-learning

Procedia PDF Downloads 24
8699 Enhancing Project Performance Forecasting using Machine Learning Techniques

Authors: Soheila Sadeghi

Abstract:

Accurate forecasting of project performance metrics is crucial for successfully managing and delivering urban road reconstruction projects. Traditional methods often rely on static baseline plans and fail to consider the dynamic nature of project progress and external factors. This research proposes a machine learning-based approach to forecast project performance metrics, such as cost variance and earned value, for each Work Breakdown Structure (WBS) category in an urban road reconstruction project. The proposed model utilizes time series forecasting techniques, including Autoregressive Integrated Moving Average (ARIMA) and Long Short-Term Memory (LSTM) networks, to predict future performance based on historical data and project progress. The model also incorporates external factors, such as weather patterns and resource availability, as features to enhance the accuracy of forecasts. By applying the predictive power of machine learning, the performance forecasting model enables proactive identification of potential deviations from the baseline plan, which allows project managers to take timely corrective actions. The research aims to validate the effectiveness of the proposed approach using a case study of an urban road reconstruction project, comparing the model's forecasts with actual project performance data. The findings of this research contribute to the advancement of project management practices in the construction industry, offering a data-driven solution for improving project performance monitoring and control.
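
A sketch of the LSTM branch of the forecaster for a single WBS category, assuming a synthetic cost-variance series and Keras; the window length and layer sizes are placeholders, and the ARIMA branch would follow the usual statsmodels workflow instead.

```python
# Illustrative sketch: an LSTM one-step-ahead forecaster for one WBS category's
# cost-variance series. The synthetic series stands in for historical project data.
import numpy as np
from tensorflow import keras

series = np.cumsum(np.random.randn(200)).astype("float32")   # stand-in cost-variance history

def make_windows(x, lookback=12):
    X = np.stack([x[i:i + lookback] for i in range(len(x) - lookback)])
    y = x[lookback:]
    return X[..., None], y                                    # shape (samples, timesteps, 1)

X, y = make_windows(series)
model = keras.Sequential([
    keras.layers.LSTM(32, input_shape=(X.shape[1], 1)),
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=20, batch_size=16, verbose=0)
next_value = model.predict(X[-1:])                            # one-step-ahead forecast
```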

Keywords: project performance forecasting, machine learning, time series forecasting, cost variance, earned value management

Procedia PDF Downloads 49
8698 Optimization Based Extreme Learning Machine for Watermarking of an Image in DWT Domain

Authors: Ram Pal Singh, Vikash Chaudhary, Monika Verma

Abstract:

In this paper, we propose the implementation of an optimization-based Extreme Learning Machine (ELM) for watermarking the B-channel of a color image in the discrete wavelet transform (DWT) domain. ELM, a regularization algorithm, works on the basis of generalized single-hidden-layer feed-forward neural networks (SLFNs); however, the hidden-layer parameters, generally called the feature mapping in the context of ELM, need not be tuned every time. This paper shows the embedding and extraction processes of the watermark with the help of ELM, and the results are compared with machine learning models already used for watermarking. Here, a cover image is divided into a suitable number of non-overlapping blocks of the required size, and the DWT is applied to each block to transform it into the low-frequency sub-band domain. Essentially, ELM provides a unified learning platform with a feature mapping, that is, a mapping between the hidden layer and the output layer of the SLFNs, which is applied for watermark embedding and extraction in a cover image. ELM has widespread applications, ranging from binary and multiclass classification to regression and function estimation. Unlike SVM-based algorithms, which achieve suboptimal solutions with high computational complexity, ELM can provide better generalization performance with very low complexity. The efficacy of the optimization-based ELM algorithm is measured using quantitative and qualitative parameters on a watermarked image, even when the image is subjected to different types of geometric and conventional attacks.
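
The core ELM computation (a fixed random feature mapping plus an analytically solved output layer) can be sketched as below; the DWT block processing and the optimization of the hidden parameters described in the paper are not reproduced here.

```python
# Illustrative sketch of a basic ELM regressor: random hidden-layer weights stay fixed
# and only the output weights are solved via the Moore-Penrose pseudo-inverse.
import numpy as np

class ELM:
    def __init__(self, n_hidden=100, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, T):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))   # random feature mapping
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)                              # hidden-layer output matrix
        self.beta = np.linalg.pinv(H) @ T                             # analytic output weights
        return self

    def predict(self, X):
        return np.tanh(X @ self.W + self.b) @ self.beta
```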

Keywords: BER, DWT, extreme learning machine (ELM), PSNR

Procedia PDF Downloads 311
8697 Distributed System Computing Resource Scheduling Algorithm Based on Deep Reinforcement Learning

Authors: Yitao Lei, Xingxiang Zhai, Burra Venkata Durga Kumar

Abstract:

As the quantity and complexity of computing in large-scale software systems increase, distributed system computing becomes increasingly important. Distributed systems realize high-performance computing through collaboration between different computing resources. Without efficient resource scheduling, the misuse of distributed computing may cause resource waste and high costs. Resource scheduling is usually an NP-hard problem, so a general solution cannot be found, although optimization algorithms such as the genetic algorithm and ant colony optimization exist. The large scale of distributed systems makes these traditional optimization algorithms challenging to apply, and heuristic and machine learning algorithms are usually applied in this situation to ease the computing load. As a result, we review traditional resource scheduling optimization algorithms and introduce a deep reinforcement learning method that utilizes the perceptual ability of neural networks and the decision-making ability of reinforcement learning. Using this machine learning method, we try to find important factors that influence the performance of distributed system computing and help the distributed system perform efficient computing resource scheduling. This paper surveys the application of deep reinforcement learning to distributed system computing resource scheduling, proposes a deep reinforcement learning method that uses a recurrent neural network to optimize resource scheduling, and outlines the challenges and improvement directions for DRL-based resource scheduling algorithms.

Keywords: resource scheduling, deep reinforcement learning, distributed system, artificial intelligence

Procedia PDF Downloads 111
8696 Perception and Implementation of Machine Translation Applications by the Iranian English Translators

Authors: Abdul Amir Hazbavi

Abstract:

The present study is an attempt to provide a relatively comprehensive overview of Iranian English translators' perception of Machine Translation. Furthermore, the study tries to shed light on the status of implementation of Machine Translation among Iranian English translators. To reach these objectives, the Localization Industry Standards Association's questionnaire for measuring perceptions with regard to the adoption of a technology innovation was adapted and used to investigate three parameters among the participants of the study, namely familiarity with Machine Translation, general perception of Machine Translation, and implementation of Machine Translation systems in translation tasks. The participants of the study were 224 final-year undergraduate Iranian students of English translation at 10 universities across the country. The study revealed a very low level of adoption, a very high level of willingness to get familiar with and learn about Machine Translation, and a positive perception of and attitude toward Machine Translation among the Iranian English translators.

Keywords: translation technology, machine translation, perception, implementation

Procedia PDF Downloads 523
8695 Transformer Fault Diagnostic Predicting Model Using Support Vector Machine with Gradient Decent Optimization

Authors: R. O. Osaseri, A. R. Usiobaifo

Abstract:

The power transformer, which is responsible for voltage transformation, is of great relevance in the power system, and oil-immersed transformers are widely used all over the world. Prompt and proper maintenance of the transformer is of utmost importance. The dissolved gas content in power transformer oil is of enormous importance in detecting incipient faults of the transformer. There is a need for accurate prediction of incipient faults from transformer oil in order to facilitate prompt maintenance, reduce cost, and minimize error. Fault prediction and diagnosis have been the focus of many researchers, and many previous works have reported the use of artificial intelligence to predict incipient transformer faults. In this study, a machine learning technique employing gradient descent algorithms and a Support Vector Machine (SVM) is used for predicting incipient transformer faults. The method focuses on creating a system that improves its performance based on previous results and historical data. The system design approach has two phases: a training phase and a testing phase. The gradient descent algorithm is trained with a training dataset, while the learned algorithm is applied to a set of new data; these two datasets are used to prove the accuracy of the proposed model. In this study, a transformer fault diagnostic model based on a Support Vector Machine (SVM) and gradient descent algorithms has been presented, with satisfactory diagnostic capability and a higher success rate in predicting incipient transformer faults than existing diagnostic methods.
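
One common way to realize an SVM trained by gradient descent is scikit-learn's SGDClassifier with a hinge loss, sketched below on hypothetical dissolved-gas-analysis columns; the gas names, file name, and fault labels are assumptions.

```python
# Illustrative sketch: a linear SVM fitted by stochastic gradient descent (hinge loss)
# on dissolved-gas features, split into training and testing phases as described above.
import pandas as pd
from sklearn.linear_model import SGDClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

data = pd.read_csv("dga_records.csv")                        # assumed columns: gas ppm + fault label
X = data[["H2", "CH4", "C2H2", "C2H4", "C2H6"]]
y = data["fault_type"]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = make_pipeline(StandardScaler(), SGDClassifier(loss="hinge", max_iter=2000, tol=1e-4))
model.fit(X_tr, y_tr)                                        # training phase
print("test accuracy:", model.score(X_te, y_te))             # testing phase
```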

Keywords: diagnostic model, gradient descent, machine learning, support vector machine (SVM), transformer fault

Procedia PDF Downloads 322
8694 Feature Selection Approach for the Classification of Hydraulic Leakages in Hydraulic Final Inspection using Machine Learning

Authors: Christian Neunzig, Simon Fahle, Jürgen Schulz, Matthias Möller, Bernd Kuhlenkötter

Abstract:

Manufacturing companies are facing global competition and enormous cost pressure. The use of machine learning applications can help reduce production costs and create added value. Predictive quality enables the securing of product quality through data-supported predictions using machine learning models as a basis for decisions on test results. Furthermore, machine learning methods are able to process large amounts of data, deal with unfavourable row-column ratios and detect dependencies between the covariates and the given target as well as assess the multidimensional influence of all input variables on the target. Real production data are often subject to highly fluctuating boundary conditions and unbalanced data sets. Changes in production data manifest themselves in trends, systematic shifts, and seasonal effects. Thus, Machine learning applications require intensive pre-processing and feature selection. Data preprocessing includes rule-based data cleaning, the application of dimensionality reduction techniques, and the identification of comparable data subsets. Within the used real data set of Bosch hydraulic valves, the comparability of the same production conditions in the production of hydraulic valves within certain time periods can be identified by applying the concept drift method. Furthermore, a classification model is developed to evaluate the feature importance in different subsets within the identified time periods. By selecting comparable and stable features, the number of features used can be significantly reduced without a strong decrease in predictive power. The use of cross-process production data along the value chain of hydraulic valves is a promising approach to predict the quality characteristics of workpieces. In this research, the ada boosting classifier is used to predict the leakage of hydraulic valves based on geometric gauge blocks from machining, mating data from the assembly, and hydraulic measurement data from end-of-line testing. In addition, the most suitable methods are selected and accurate quality predictions are achieved.
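
As a rough illustration of combining boosting with feature reduction, the sketch below fits an AdaBoost classifier and keeps only the more important half of the features before refitting; the merged data file, column names, and thresholding rule are assumptions, and the concept-drift-based subsetting is not reproduced.

```python
# Illustrative sketch: AdaBoost over merged machining/assembly/end-of-line features,
# with importance-based feature reduction before the final leakage classifier.
import pandas as pd
from sklearn.ensemble import AdaBoostClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.model_selection import train_test_split

data = pd.read_csv("hydraulic_valve_data.csv")               # hypothetical cross-process records
X, y = data.drop(columns=["leakage"]), data["leakage"]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, test_size=0.2, random_state=0)
base = AdaBoostClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
selector = SelectFromModel(base, prefit=True, threshold="median")   # keep the more important half
clf = AdaBoostClassifier(n_estimators=300, random_state=0).fit(selector.transform(X_tr), y_tr)
print("leakage prediction accuracy:", clf.score(selector.transform(X_te), y_te))
```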

Keywords: classification, machine learning, predictive quality, feature selection

Procedia PDF Downloads 162
8693 Hybrid Approach for Software Defect Prediction Using Machine Learning with Optimization Technique

Authors: C. Manjula, Lilly Florence

Abstract:

Software technology is developing rapidly, which drives the growth of various industries. Nowadays, software-based applications have been widely adopted for business purposes. For any software industry, the development of reliable software is becoming a challenging task, because a faulty software module may be harmful to the growth of the industry and business. Hence, there is a need for techniques that can be used for the early prediction of software defects. Due to the complexity of manual prediction, automated software defect prediction techniques have been introduced. These techniques are based on learning patterns from previous software versions and finding the defects in the current version. Such techniques have attracted researchers due to their significant impact on industrial growth by identifying bugs in software. Several studies have been carried out on this basis, but achieving the desired defect prediction performance is still a challenging task. To address this issue, we present a machine learning-based hybrid technique for software defect prediction. First, a Genetic Algorithm (GA) is presented in which an improved fitness function is used for better optimization of features in the datasets. These features are then processed through a Decision Tree (DT) classification model. Finally, an experimental study is presented in which the results from the proposed GA-DT hybrid approach are compared with those from the DT classification technique. The results show that the proposed hybrid approach achieves better classification accuracy.
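
A toy version of the GA-DT hybrid is sketched below: a genetic algorithm evolves binary feature masks scored by the cross-validated accuracy of a Decision Tree; the plain-accuracy fitness function and the GA settings are simplifications of the paper's improved fitness function.

```python
# Illustrative sketch: GA-based feature selection (selection, one-point crossover,
# bit-flip mutation) with a Decision Tree fitness evaluation on synthetic defect data.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score
from sklearn.datasets import make_classification

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, n_features=30, n_informative=8, random_state=0)

def fitness(mask):
    if not mask.any():
        return 0.0
    return cross_val_score(DecisionTreeClassifier(random_state=0), X[:, mask], y, cv=5).mean()

pop = rng.integers(0, 2, size=(20, X.shape[1])).astype(bool)          # initial population of masks
for generation in range(15):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-10:]]                           # selection: keep the top half
    children = []
    for _ in range(10):
        a, b = parents[rng.integers(10)], parents[rng.integers(10)]
        cut = rng.integers(1, X.shape[1])                             # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        flip = rng.random(X.shape[1]) < 0.05                          # bit-flip mutation
        children.append(np.where(flip, ~child, child))
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected features:", np.flatnonzero(best))
```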

Keywords: decision tree, genetic algorithm, machine learning, software defect prediction

Procedia PDF Downloads 329
8692 Early Prediction of Diseases in a Cow for Cattle Industry

Authors: Ghufran Ahmed, Muhammad Osama Siddiqui, Shahbaz Siddiqui, Rauf Ahmad Shams Malick, Faisal Khan, Mubashir Khan

Abstract:

In this paper, a machine learning-based approach for the early prediction of diseases in cows is proposed, in which different ML algorithms are applied to extract useful patterns from the available dataset. Technology has changed today's world in every aspect of life, and advanced technologies have likewise been developed in livestock and dairy farming to monitor dairy cows in various respects. Dairy cattle monitoring is crucial, as it plays a significant role in milk production around the globe. Moreover, it has become necessary for farmers to adopt the latest early prediction technologies as food demand increases with population growth, which highlights the importance of state-of-the-art technologies in analyzing dairy cows' activities. It is not easy to predict the activities of a large number of cows on a farm, so the system makes this very convenient for farmers, as it provides all the solutions under one roof. The cattle industry's productivity is boosted because any disease on a cattle farm is detected early, based on the machine learning output, and is hence treated early. The learning models, which interpret the data collected in a centralized system, are set up in advance; different algorithms are run on the received dataset to analyze milk quality and to track cows' health, location, and safety. These algorithms draw patterns from the data, which makes it easier for farmers to study any animal's behavioral changes. With the emergence of machine learning algorithms and the Internet of Things, accurate tracking of animals is possible as the rate of error is minimized; as a result, milk productivity is increased. IoT with ML capability has given a new phase to the cattle farming industry by increasing yield in the most cost-effective and time-saving manner.

Keywords: IoT, machine learning, health care, dairy cows

Procedia PDF Downloads 70
8691 Design and Implementation of a Software Platform Based on Artificial Intelligence for Product Recommendation

Authors: Giuseppina Settanni, Antonio Panarese, Raffaele Vaira, Maurizio Galiano

Abstract:

Nowadays, artificial intelligence is used successfully in academia and industry for its ability to learn from large amounts of data. In particular, in recent years the use of machine learning algorithms in the field of e-commerce has spread worldwide. In this research study, a prototype software platform was designed and implemented in order to suggest to users the products most suitable for their needs. The platform includes a chatbot and a recommender system based on artificial intelligence algorithms that provide suggestions and decision support to the customer. Recommendation systems perform the important function of automatically filtering and personalizing information, thus helping users manage the information overload to which they are exposed on a daily basis. Recently, international research has experimented with the use of machine learning technologies with the aim of increasing the potential of traditional recommendation systems. Specifically, support vector machine algorithms have been implemented, combined with natural language processing techniques that allow the user to interact with the system, express their requests, and receive suggestions. The interested user can access the web platform on the internet using a computer, tablet, or mobile phone, register, provide the necessary information, and view the products that the system deems most appropriate for them. The platform also integrates a dashboard that allows the various functions the platform is equipped with to be used in an intuitive and simple way. The artificial intelligence algorithms have been implemented and trained on historical data collected from user browsing. Finally, the testing phase allowed the implemented model to be validated; it will be further tested by letting customers use it.

Keywords: machine learning, recommender system, software platform, support vector machine

Procedia PDF Downloads 134