Search results for: machine modelling
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4562

3152 Design and Implementation of Machine Learning Model for Short-Term Energy Forecasting in Smart Home Management System

Authors: R. Ramesh, K. K. Shivaraman

Abstract:

The main aim of this paper is to manage energy requirements efficiently by merging advanced digital communication and control technologies for smart grid applications. To reduce home loads during peak hours, utilities offer residential customers several incentives through the smart meter, such as real-time pricing, time-of-use tariffs, and demand response. However, these schemes are inconvenient in that the user must respond manually to prices that vary in real time. To overcome this inconvenience, this paper proposes a machine learning model combining a convolutional neural network (CNN) with k-means clustering, which can forecast energy requirements in the short term, i.e., for an hour of the day or a day of the week. Integrating the proposed technique with a home energy management system based on Bluetooth Low Energy provides predicted values to the user for scheduling appliances in advance. The paper describes the CNN configuration and the k-means clustering algorithm for short-term energy forecasting in detail.
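
To make the idea concrete, the following is a minimal sketch (not the authors' code) of how k-means day-profile clustering can feed a small 1-D CNN forecaster; the data, layer sizes, and training settings are illustrative assumptions.

```python
# Minimal sketch of the CNN + k-means idea (illustrative, not the authors' code).
import numpy as np
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

hourly_load = np.abs(np.random.randn(90 * 24)).astype(np.float32)  # 90 days of hourly kWh (dummy)
days = hourly_load.reshape(-1, 24)

# 1) k-means groups days with similar 24-hour load profiles (e.g. weekday vs. weekend).
day_cluster = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(days)

# 2) A small 1-D CNN forecasts the next hour from the previous 24 hours,
#    with the day's cluster label supplied as a second input channel.
class ShortTermCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(2, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.head = nn.Linear(16 * 24, 1)

    def forward(self, x):                  # x: (batch, 2, 24)
        return self.head(self.conv(x).flatten(1))

# Build (window, next-hour) training pairs.
X, y = [], []
for d in range(1, len(days)):
    cluster_ch = np.full(24, day_cluster[d], dtype=np.float32)
    X.append(np.stack([days[d - 1], cluster_ch]))   # previous day + cluster id
    y.append(days[d][0])                            # first hour of the next day
X = torch.tensor(np.array(X))
y = torch.tensor(np.array(y, dtype=np.float32)).unsqueeze(1)

model = ShortTermCNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(50):                                 # brief training loop
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(X), y)
    loss.backward()
    opt.step()
```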

Keywords: convolutional neural network, fuzzy logic, k-means clustering approach, smart home energy management

Procedia PDF Downloads 305
3151 Service Information Integration Platform as Decision Making Tools for the Service Industry Supply Chain-Indonesia Service Integration Project

Authors: Haikal Achmad Thaha, Pujo Laksono, Dhamma Nibbana Putra

Abstract:

Customer service is one of the core interests of a company in the service sector, whether as the core business or as the service part of its operation. Most of the previous research in the service industry has focused on finding the best business model solution for the service sector, usually deciding between fully in-house customer service, outsourcing, or something in between. Conventionally, taking this decision is an important part of the management job, and it is a process that usually takes some time and staff effort, while market conditions and overall company needs may meanwhile change and cause loss of income and temporary disturbance in the company's operation. In this paper, however, we offer a new concept model to assist the decision-making process in the service industry. This model features an information platform as the central tool to integrate service industry operations. The result is a service information model that would ideally improve the response time and effectiveness of decision-making. It would also help the service industry switch its service solution system quickly, through machine learning, when the company's growth and the needed service solution change.

Keywords: service industry, customer service, machine learning, decision making, information platform

Procedia PDF Downloads 622
3150 Prediction of Survival Rate after Gastrointestinal Surgery Based on the New Japanese Association for Acute Medicine (JAAM) Score with Neural Network Classification Method

Authors: Ayu Nabila Kusuma Pradana, Aprinaldi Jasa Mantau, Tomohiko Akahoshi

Abstract:

Disseminated intravascular coagulation (DIC) following gastrointestinal surgery carries a poor prognosis. It is therefore important to determine the factors that can predict its outcome. This study investigates the factors that may influence the outcome of DIC in patients after gastrointestinal surgery. Eighty-one patients admitted to the intensive care unit after gastrointestinal surgery at Kyushu University Hospital from 2003 to 2021 were included. Acute DIC scores were estimated using the new Japanese Association for Acute Medicine (JAAM) score before surgery and on postoperative days 1, 3, and 7. The acute DIC scores are compared with the Sequential Organ Failure Assessment (SOFA) score, platelet count, lactate level, and a variety of biochemical parameters. This study applies machine learning algorithms to predict the prognosis of DIC after gastrointestinal surgery. The results are expected to serve as an indicator for evaluating patient prognosis, thereby increasing life expectancy and reducing mortality among DIC patients after gastrointestinal surgery.
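
As an illustration only (the study's data cannot be shared here), a neural-network classifier over JAAM/SOFA-style features might be set up as below; the feature list and placeholder data are assumptions.

```python
# Illustrative sketch of the classification step (not the study's code): a small
# neural network predicting survival from JAAM/SOFA-style features.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Columns (assumed): JAAM day 1, JAAM day 3, JAAM day 7, SOFA, platelets, lactate
X = rng.normal(size=(81, 6))                 # placeholder values for 81 patients
y = rng.integers(0, 2, size=81)              # 1 = survived, 0 = deceased (dummy)

clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(16, 8),
                                  max_iter=2000, random_state=0))
print(cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean())
```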

Keywords: the survival rate, gastrointestinal surgery, JAAM score, neural network, machine learning, disseminated intravascular coagulation (DIC)

Procedia PDF Downloads 259
3149 Data Refinement Enhances the Accuracy of Short-Term Traffic Latency Prediction

Authors: Man Fung Ho, Lap So, Jiaqi Zhang, Yuheng Zhao, Huiyang Lu, Tat Shing Choi, K. Y. Michael Wong

Abstract:

Nowadays, a tremendous amount of data is available in the transportation system, enabling the development of various machine learning approaches for short-term latency prediction. A natural question is then the choice of relevant information to enable accurate predictions. Using traffic data collected from the Taiwan Freeway System, we consider the prediction of the short-term latency of a freeway segment with a length of 17 km covering 5 measurement points, each collecting vehicle-by-vehicle data through the electronic toll collection system. The processed data include the past latencies of the freeway segment with different time lags, the traffic conditions of the individual segments (the accumulations, the traffic fluxes, the entrance and exit rates), the total accumulations, and the weekday latency profiles obtained by Gaussian process regression of past data. We arrive at several important conclusions about how data should be refined to obtain accurate predictions, which have implications for future system-wide latency predictions. (1) We find that the prediction of median latency is much more accurate and meaningful than the prediction of average latency, as the latter is plagued by outliers. This is verified by machine-learning prediction using XGBoost, which yields a 35% improvement in the mean square error of the 5-minute averaged latencies. (2) We find that the median latency of the segment 15 minutes ago is a very good baseline for performance comparison, and we have evidence that further improvement is achieved by machine learning approaches such as XGBoost and Long Short-Term Memory (LSTM). (3) By analyzing the feature importance scores in XGBoost and calculating the mutual information between the inputs and the latencies to be predicted, we identify a sequence of inputs ranked by importance. This confirms that the past latencies are most informative of the predicted latencies, followed by the total accumulation, whereas inputs such as the entrance and exit rates are uninformative. It also confirms that the inputs are much less informative of the average latencies than of the median latencies. (4) For predicting the latencies of segments composed of two or three sub-segments, summing up the predicted latencies of each sub-segment is more accurate than one-step prediction of the whole segment, especially with the latency prediction of the downstream sub-segments trained to anticipate latencies several minutes ahead. The duration of this anticipation time is an increasing function of the traveling time of the upstream segment. The above findings have important implications for predicting the full set of latencies among the various locations in the freeway system.
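
A compact sketch of the workflow in conclusions (1)-(3): an XGBoost predictor against the 15-minutes-ago baseline, plus mutual-information ranking, on synthetic stand-in data; feature names are assumptions, not the authors' dataset.

```python
# Sketch of the prediction-and-ranking workflow (illustrative data only).
import numpy as np
from sklearn.feature_selection import mutual_info_regression
from xgboost import XGBRegressor

rng = np.random.default_rng(1)
n = 2000
past_median_latency = rng.gamma(2.0, 5.0, n)      # segment latency 15 min ago
total_accumulation = rng.gamma(3.0, 2.0, n)
entrance_rate = rng.normal(size=n)
exit_rate = rng.normal(size=n)
X = np.column_stack([past_median_latency, total_accumulation,
                     entrance_rate, exit_rate])
y = past_median_latency + 0.5 * total_accumulation + rng.normal(0, 1, n)

# Baseline: the median latency 15 minutes ago.
baseline_mse = np.mean((y - past_median_latency) ** 2)

model = XGBRegressor(n_estimators=200, max_depth=4, learning_rate=0.1)
model.fit(X, y)
model_mse = np.mean((y - model.predict(X)) ** 2)
print(f"baseline MSE {baseline_mse:.2f}, XGBoost MSE {model_mse:.2f}")

# Rank inputs by mutual information with the target, as in conclusion (3).
names = ["past median latency", "total accumulation", "entrance rate", "exit rate"]
for name, mi in sorted(zip(names, mutual_info_regression(X, y)),
                       key=lambda t: -t[1]):
    print(f"{name}: MI = {mi:.3f}")
```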

Keywords: data refinement, machine learning, mutual information, short-term latency prediction

Procedia PDF Downloads 169
3148 Single Imputation for Audiograms

Authors: Sarah Beaver, Renee Bryce

Abstract:

Audiograms detect hearing impairment, but missing values pose problems. This work explores imputation in an attempt to improve accuracy. It implements Linear Regression, Lasso, Linear Support Vector Regression, Bayesian Ridge, K-Nearest Neighbors (KNN), and Random Forest machine learning techniques to impute audiogram frequencies ranging from 125 Hz to 8000 Hz. The data contain patients who had or were candidates for cochlear implants. Accuracy is compared across two different nested cross-validation k values. Over 4000 audiograms from 800 unique patients were used. Additionally, training on combined left- and right-ear audiograms is compared against training on single-ear audiograms. The root mean square error (RMSE) of the best Random Forest models ranges from 4.74 to 6.37, and their R² values range from 0.91 to 0.96. The RMSE of the best KNN models ranges from 5.00 to 7.72, with R² values from 0.89 to 0.95. Overall, the best imputation models achieved R² between 0.89 and 0.96 and RMSE values below 8 dB. We also show that classification models performed better with our best imputations than with constant imputations, by a two percent increase in accuracy.
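
A minimal sketch of the single-imputation setup (assumed, not the authors' pipeline): hold out one frequency and train a Random Forest to predict it from the rest.

```python
# Impute the 1000 Hz threshold from the remaining frequencies (dummy data).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
freqs = [125, 250, 500, 1000, 2000, 4000, 8000]      # Hz
thresholds = np.clip(rng.normal(50, 20, size=(800, len(freqs))), 0, 120)  # dB HL

target_col = freqs.index(1000)                        # frequency to impute
X = np.delete(thresholds, target_col, axis=1)         # observed frequencies
y = thresholds[:, target_col]                         # "missing" frequency

model = RandomForestRegressor(n_estimators=200, random_state=0)
rmse = -cross_val_score(model, X, y, cv=5,
                        scoring="neg_root_mean_squared_error").mean()
r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
print(f"RMSE {rmse:.2f} dB, R^2 {r2:.2f}")
```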

Keywords: machine learning, audiograms, data imputations, single imputations

Procedia PDF Downloads 82
3147 The Potential in the Use of Building Information Modelling and Life-Cycle Assessment for Retrofitting Buildings: A Study Based on Interviews with Experts in Both Fields

Authors: Alex Gonzalez Caceres, Jan Karlshøj, Tor Arvid Vik

Abstract:

The life cycle of residential buildings is expected to span several decades, and 40% of European residential buildings have inefficient energy conservation measures. Existing buildings represent 20-40% of energy use and CO₂ emissions. Since net zero energy buildings are a short-term goal (to be achieved by EU countries after 2020), it is necessary to plan the next logical step, which is to prepare the existing, outdated building stock to be retrofitted into energy-efficient buildings. To accomplish this, two specialized and widespread tools can be used: Building Information Modelling (BIM) and life-cycle assessment (LCA). BIM and LCA are tools used by a variety of disciplines; both are able to represent and analyze constructions at different stages. The combination of these technologies could greatly improve retrofitting techniques by incorporating the carbon footprint and introducing a single database source for different material analyses. To this is added the possibility of considering different analysis approaches, such as costs and energy savings. These measures are expected to enrich decision-making. The methodology is based on two main activities. The first task involved the collection of data, accomplished through a literature review and interviews with experts in the retrofitting field and BIM technologies; the results of this task are presented as an evaluation checklist of BIM's ability to manage data and improve decision-making in retrofitting projects. The last activity involves an evaluation, using the results of the previous tasks, of how far the IFC format can support the requirements of each specialist and its use by third-party software. The results indicate that BIM/LCA have great potential to improve the retrofitting process in existing buildings, but some modifications must be made in order to meet the requirements of the specialists in both retrofitting and LCA evaluation.

Keywords: retrofitting, BIM, LCA, energy efficiency

Procedia PDF Downloads 220
3146 Examining Motivational Dynamics and L2 Learning Transitions of Air Cadets Between Year One and Year Two: A Retrodictive Qualitative Modelling Approach

Authors: Kanyaporn Sommeechai

Abstract:

Air cadets who aspire to become military pilots upon graduation undergo rigorous training at military academies. As first-year cadets are akin to civilian freshmen, they encounter numerous challenges within the seniority-based military academy system. Imposed routines, such as mandatory morning runs and restrictions on mobile phone usage for two semesters, have the potential to impact their learning process and motivation to study, including second language (L2) acquisition. This study aims to investigate the motivational dynamics and L2 learning transitions experienced by air cadets. To achieve this, a Retrodictive Qualitative Modelling approach will be employed, coupled with the adaptation of the three-barrier structure encompassing institutional factors, situational factors, and dispositional factors. Semi-structured interviews will be conducted to gather rich qualitative data. By analyzing and interpreting the collected data, this research seeks to shed light on the motivational factors that influence air cadets' L2 learning journey. The three-barrier structure will provide a comprehensive framework to identify and understand the institutional, situational, and dispositional factors that may impede or facilitate their motivation and language learning progress. Moreover, the study will explore how these factors interact and shape cadets' motivation and learning experiences. The outcomes of this research will yield fundamental data that can inform strategies and interventions to enhance the motivation and language learning outcomes of air cadets. By better understanding their motivational dynamics and transitions, educators and institutions can create targeted initiatives, tailored pedagogical approaches, and supportive environments that effectively inspire and engage air cadets as L2 learners.

Keywords: second language, education, motivational dynamics, learning transitions

Procedia PDF Downloads 69
3145 Transforming Data Science Curriculum Through Design Thinking

Authors: Samar Swaid

Abstract:

Today, corporations are moving toward the adoption of Design-Thinking techniques to develop products and services, putting the consumer at the heart of the development process. One of the leading companies in Design-Thinking, IDEO (Innovation, Design, Engineering Organization), defines Design-Thinking as an approach to problem-solving that relies on a set of multi-layered skills, processes, and mindsets that help people generate novel solutions to problems. Design thinking may result in new ideas, narratives, objects, or systems. It is about redesigning systems, organizations, infrastructures, processes, and solutions in an innovative fashion based on users' feedback. Tim Brown, president and CEO of IDEO, sees design thinking as a human-centered approach that draws from the designer's toolkit to integrate people's needs, innovative technologies, and business requirements. The application of design thinking has been witnessed to be the road to developing innovative applications, interactive systems, scientific software, and healthcare applications, and even to rethinking business operations, as in the case of Airbnb. Recently, there has been a movement to apply design thinking to machine learning and artificial intelligence to ensure creating the "wow" effect on consumers. The Association for Computing Machinery task force on the Data Science program states that "data scientists should be able to implement and understand algorithms for data collection and analysis. They should understand the time and space considerations of algorithms. They should follow good design principles developing software, understanding the importance of those principles for testability and maintainability." However, this definition hides the user behind the machine who works on data preparation, algorithm selection, and model interpretation. Thus, the Data Science program includes design thinking to ensure meeting user demands, generating more usable machine learning tools, and developing ways of framing computational thinking. Here, we describe the fundamentals of Design-Thinking and teaching modules for data science programs.

Keywords: data science, design thinking, AI, curriculum, transformation

Procedia PDF Downloads 81
3144 Methods for Enhancing Ensemble Learning or Improving Classifiers of This Technique in the Analysis and Classification of Brain Signals

Authors: Seyed Mehdi Ghezi, Hesam Hasanpoor

Abstract:

This scientific article explores enhancement methods for ensemble learning with the aim of improving the performance of classifiers in the analysis and classification of brain signals. The research approach in this field consists of two main parts, each with its own strengths and weaknesses. The choice of approach depends on the specific research question and available resources. By combining these approaches and leveraging their respective strengths, researchers can enhance the accuracy and reliability of classification results, consequently advancing our understanding of the brain and its functions. The first approach focuses on utilizing machine learning methods to identify the best features among the vast array of features present in brain signals. The selection of features varies depending on the research objective, and different techniques have been employed for this purpose. For instance, the genetic algorithm has been used in some studies to identify the best features, while optimization methods have been utilized in others to identify the most influential features. Additionally, machine learning techniques have been applied to determine the influential electrodes in classification. Ensemble learning plays a crucial role in identifying the best features that contribute to learning, thereby improving the overall results. The second approach concentrates on designing and implementing methods for selecting the best classifier or utilizing meta-classifiers to enhance the final results in ensemble learning. In a different section of the research, a single classifier is used instead of multiple classifiers, employing different sets of features to improve the results. The article provides an in-depth examination of each technique, highlighting their advantages and limitations. By integrating these techniques, researchers can enhance the performance of classifiers in the analysis and classification of brain signals. This advancement in ensemble learning methodologies contributes to a better understanding of the brain and its functions, ultimately leading to improved accuracy and reliability in brain signal analysis and classification.
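
As a concrete illustration of the second approach (a meta-classifier over base learners), the sketch below stacks an SVM and a random forest under a logistic-regression meta-classifier; the placeholder data stand in for extracted brain-signal features.

```python
# Stacking ensemble with a meta-classifier (illustrative data only).
import numpy as np
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 40))          # e.g. band-power features per electrode
y = rng.integers(0, 2, size=300)        # two mental states / classes (dummy)

ensemble = StackingClassifier(
    estimators=[("svm", SVC(probability=True)),
                ("rf", RandomForestClassifier(n_estimators=100))],
    final_estimator=LogisticRegression(),   # the meta-classifier
    cv=5,
)
print(cross_val_score(ensemble, X, y, cv=5).mean())
```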

Keywords: ensemble learning, brain signals, classification, feature selection, machine learning, genetic algorithm, optimization methods, influential features, influential electrodes, meta-classifiers

Procedia PDF Downloads 75
3143 Selecting Answers for Questions with Multiple Answer Choices in Arabic Question Answering Based on Textual Entailment Recognition

Authors: Anes Enakoa, Yawei Liang

Abstract:

Question answering (QA) is one of the most important and demanding tasks in the field of natural language processing (NLP). In QA systems, the answer generation task produces a list of candidate answers to the user's question, of which only one is correct. Answer selection is one of the main components of QA, concerned with selecting the best answer choice from the candidates suggested by the system. However, the selection process can be very challenging, especially in Arabic, due to the particularities of the language. To address this challenge, an approach is proposed to answer questions with multiple answer choices for Arabic QA systems based on Textual Entailment (TE) recognition. The developed approach employs a Support Vector Machine that considers lexical, semantic, and syntactic features in order to recognize the entailment between the generated hypotheses (H) and the text (T). A set of experiments was conducted for performance evaluation, and the overall performance of the proposed method reached an accuracy of 67.5% with a C@1 score of 80.46%. The obtained results are promising and demonstrate that the proposed method is effective for the TE recognition task.
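
A toy sketch of the entailment-classification step, using simple lexical-overlap features as a stand-in for the paper's lexical, semantic, and syntactic features; the example pairs are invented.

```python
# SVM over simple overlap features between text T and hypothesis H (toy example).
import numpy as np
from sklearn.svm import SVC

def overlap_features(text, hypothesis):
    t, h = set(text.lower().split()), set(hypothesis.lower().split())
    inter = len(t & h)
    return [inter / max(len(h), 1),        # hypothesis coverage
            inter / max(len(t), 1),        # text coverage
            abs(len(t) - len(h))]          # length difference

pairs = [("the capital of France is Paris", "Paris is the capital of France", 1),
         ("the capital of France is Paris", "Lyon is the capital of France", 0),
         ("water boils at 100 degrees", "water boils at 100 degrees Celsius", 1),
         ("water boils at 100 degrees", "water freezes at 100 degrees", 0)]
X = np.array([overlap_features(t, h) for t, h, _ in pairs])
y = np.array([label for _, _, label in pairs])

clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict([overlap_features("Paris is in France", "France contains Paris")]))
```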

Keywords: information retrieval, machine learning, natural language processing, question answering, textual entailment

Procedia PDF Downloads 145
3142 mKDNAD: A Network Flow Anomaly Detection Method Based on Multi-Teacher Knowledge Distillation

Authors: Yang Yang, Dan Liu

Abstract:

Anomaly detection models for network flow based on machine learning have poor detection performance under extremely unbalanced training data conditions, as well as slow detection speed and large resource consumption when deployed on network edge devices. Embedding multi-teacher knowledge distillation (mKD) in anomaly detection can transfer knowledge from multiple teacher models to a single student model. Inspired by this, we propose a state-of-the-art model, mKDNAD, to improve detection performance. mKDNAD mines and integrates the knowledge of the one-dimensional sequence and two-dimensional image representations implicit in network flow to improve the detection accuracy of small sample classes. The multi-teacher knowledge distillation method guides the training of the student model, thus speeding up the model's detection and reducing the number of model parameters. Experiments on the CICIDS2017 dataset verify the improvements of our method in detection speed and in detection accuracy on small sample classes.
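
A generic formulation of a multi-teacher distillation loss is sketched below (our assumption; the paper's exact weighting scheme is not given in the abstract).

```python
# Generic multi-teacher knowledge distillation loss (illustrative formulation).
import torch
import torch.nn.functional as F

def multi_teacher_kd_loss(student_logits, teacher_logits_list, labels,
                          T=4.0, alpha=0.5):
    """Cross-entropy on labels + averaged KL to each teacher's soft targets."""
    ce = F.cross_entropy(student_logits, labels)
    kd = 0.0
    for t_logits in teacher_logits_list:
        kd = kd + F.kl_div(F.log_softmax(student_logits / T, dim=1),
                           F.softmax(t_logits / T, dim=1),
                           reduction="batchmean") * (T * T)
    kd = kd / len(teacher_logits_list)
    return alpha * ce + (1 - alpha) * kd

# Example: two teachers (e.g. a 1-D sequence model and a 2-D image model).
student = torch.randn(8, 5, requires_grad=True)       # 8 flows, 5 classes
teachers = [torch.randn(8, 5), torch.randn(8, 5)]
labels = torch.randint(0, 5, (8,))
loss = multi_teacher_kd_loss(student, teachers, labels)
loss.backward()
```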

Keywords: network flow anomaly detection (NAD), multi-teacher knowledge distillation, machine learning, deep learning

Procedia PDF Downloads 122
3141 Comparison of Machine Learning-Based Models for Predicting Streptococcus pyogenes Virulence Factors and Antimicrobial Resistance

Authors: Fernanda Bravo Cornejo, Camilo Cerda Sarabia, Belén Díaz Díaz, Diego Santibañez Oyarce, Esteban Gómez Terán, Hugo Osses Prado, Raúl Caulier-Cisterna, Jorge Vergara-Quezada, Ana Moya-Beltrán

Abstract:

Streptococcus pyogenes is a gram-positive bacterium involved in a wide range of diseases and is a major human-specific bacterial pathogen. In Chile, the 'Ministerio de Salud' declared an alert this year due to the increase in strains throughout the year. This increase can be attributed to a multitude of factors, including antimicrobial resistance (AMR) and virulence factors (VF). Understanding these VF and AMR is crucial for developing effective strategies and improving public health responses. Moreover, experimental identification and characterization of these pathogenic mechanisms are labor-intensive and time-consuming. Therefore, new computational methods are required to provide robust techniques for accelerating this identification. Advances in machine learning (ML) algorithms represent an opportunity to refine and accelerate the discovery of VF associated with Streptococcus pyogenes. In this work, we evaluate the accuracy of various machine learning models in predicting the virulence factors and antimicrobial resistance of Streptococcus pyogenes, with the objective of providing new methods for identifying the pathogenic mechanisms of this organism. Our comprehensive approach involved the download of 32,798 GenBank files of S. pyogenes from the NCBI dataset, coupled with the incorporation of data from the Virulence Factor Database (VFDB) and the Comprehensive Antibiotic Resistance Database (CARD), which contains AMR gene sequences and resistance profiles. These datasets provided labeled examples of both virulent and non-virulent genes, enabling a robust foundation for feature extraction and model training. We employed preprocessing, characterization, and feature extraction techniques on primary nucleotide/amino acid sequences and selected the optimal features for model training. The feature set was constructed using sequence-based descriptors (e.g., k-mers and one-hot encoding) and functional annotations based on database prediction. The ML models compared are logistic regression, decision trees, support vector machines, and neural networks, among others. The results of this work show some differences in accuracy between the algorithms; these differences allow us to identify aspects that represent unique opportunities for more precise and efficient characterization and identification of VF and AMR. This comparative analysis underscores the value of integrating machine learning techniques in predicting S. pyogenes virulence and AMR, offering potential pathways for more effective diagnostic and therapeutic strategies. Future work will focus on incorporating additional omics data, such as transcriptomics, and exploring advanced deep learning models to further enhance predictive capabilities.
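
A minimal sketch of the k-mer featurization plus one of the compared models (logistic regression); the sequences and labels are invented placeholders.

```python
# Bag-of-k-mers features feeding a logistic regression classifier (toy data).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def to_kmers(seq, k=3):
    return " ".join(seq[i:i + k] for i in range(len(seq) - k + 1))

sequences = ["ATGCGTACGTTAGC", "ATGCCCACGTTAGC", "TTGCGTACGAAAGC",
             "ATGCGTACGTTAAA", "TTGCCCACGAAAGC", "ATGCGAACGTTAGC"]
labels = [1, 1, 0, 1, 0, 0]     # 1 = virulence-associated gene (toy labels)

model = make_pipeline(
    CountVectorizer(analyzer="word", token_pattern=r"\S+"),  # bag of k-mers
    LogisticRegression(max_iter=1000),
)
model.fit([to_kmers(s) for s in sequences], labels)
print(model.predict([to_kmers("ATGCGTACGTTTGC")]))
```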

Keywords: antibiotic resistance, Streptococcus pyogenes, virulence factors, machine learning

Procedia PDF Downloads 30
3140 The Current Home Hemodialysis Practices and Patients’ Safety Related Factors: A Case Study from Germany

Authors: Ilyas Khan, Liliane Pintelon, Harry Martin, Michael Shömig

Abstract:

The increasing costs of healthcare, on the one hand, and the rise in the aging population and associated chronic disease, on the other, are putting an increasing burden on the current healthcare systems of many Western countries. For instance, chronic kidney disease (CKD) is a common disease, and in Europe, the cost of renal replacement therapy (RRT) is a significant share of the total healthcare cost. However, recent advancements in healthcare technology provide the opportunity to treat patients at home in their own comfort. It is evident that home healthcare offers numerous advantages, notably low costs and a high quality of life for patients. Despite these advantages, the uptake of home hemodialysis (HHD) therapy is still low, in particular in Germany. Many factors account for the low HHD uptake; this paper, however, focuses on patient safety-related factors of current HHD practices in Germany. The aim of this paper is to analyze the current HHD practices in Germany and to identify risk-related factors, if any exist. A case study has been conducted in a dialysis organization consisting of four dialysis centers in the south of Germany. In total, these dialysis centers have 350 chronic dialysis patients, of whom four are on HHD. The centers have 126 staff, including six nephrologists and 120 other staff, i.e., nurses and administration. The results of the study revealed several risk-related factors. Most importantly, the centers do not offer allied health services at the pre-dialysis stage, and the HHD training did not have an established curriculum; however, a first version has just recently been developed. Only a soft copy of the machine manual is offered to patients. Surprisingly, the management was not aware of any standard available for home assessment and installation. The home assessment is done by a third party (i.e., the machine and equipment provider), which may not consider the hygienic quality of the patient's home. The type of machine provided to patients at home is the same as the one in the center; this model may not be suitable at home because of its size and complexity, even though portable hemodialysis machines specially designed for home use, such as the NxStage series, are available on the market. Besides the type of machine, no assistance is offered for space management at home, in particular for placing the machine. Moreover, the centers do not offer remote assistance to patients and their carers at home, although telephonic assistance is available. Furthermore, no alternative is offered if a carer is not available. In addition, the centers lack medical staff, including nephrologists and renal nurses.

Keywords: home hemodialysis, home hemodialysis practices, patient-related risks in current home hemodialysis practices, patient safety in home hemodialysis

Procedia PDF Downloads 119
3139 The Comparison between Modelled and Measured Nitrogen Dioxide Concentrations in Cold and Warm Seasons in Kaunas

Authors: A. Miškinytė, A. Dėdelė

Abstract:

Road traffic is one of the main sources of air pollution in urban areas and is associated with adverse effects on human health and the environment. Nitrogen dioxide (NO₂) is considered a traffic-related air pollutant whose concentrations tend to be higher near highways, along busy roads, and in city centres; exceedances are mainly observed at air quality monitoring stations located close to traffic. Atmospheric dispersion models can be used to examine emissions from many different sources and to predict the concentrations of pollutants emitted from these sources into the atmosphere. The aim of the study was to compare NO₂ concentrations modelled using the ADMS-Urban dispersion model with data from the air quality monitoring network in the cold and warm seasons in Kaunas city. Modelled average seasonal concentrations of nitrogen dioxide for the year 2011 were verified against automatic air quality monitoring data from two stations in the city: a traffic station located near a high-traffic street in an industrial district, and a background station far from the main sources of nitrogen dioxide pollution. The modelling results showed that the highest nitrogen dioxide concentration was both modelled and measured at the station located near the high-traffic street, in the cold as well as the warm season. The modelled and measured nitrogen dioxide concentrations there were, respectively, 25.7 and 25.2 µg/m³ in the cold season and 15.5 and 17.7 µg/m³ in the warm season. The lowest modelled and measured NO₂ concentrations were determined at the background monitoring station: respectively 12.2 and 13.3 µg/m³ in the cold season and 6.1 and 7.6 µg/m³ in the warm season. Comparing the two stations showed that better agreement between modelled and measured NO₂ concentrations was observed at the traffic monitoring station.
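
The agreement can be checked directly from the values reported above:

```python
# Modelled-versus-measured comparison using the reported values (µg/m³).
stations = {
    "traffic, cold":    (25.7, 25.2),
    "traffic, warm":    (15.5, 17.7),
    "background, cold": (12.2, 13.3),
    "background, warm": (6.1, 7.6),
}
for name, (modelled, measured) in stations.items():
    bias = modelled - measured
    print(f"{name}: bias {bias:+.1f}, relative {bias / measured:+.1%}")
```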

Keywords: air pollution, nitrogen dioxide, modelling, ADMS-Urban model

Procedia PDF Downloads 408
3138 Sepiolite as a Processing Aid in Fibre Reinforced Cement Produced in Hatschek Machine

Authors: R. Pérez Castells, J. M. Carbajo

Abstract:

Sepiolite has been used as a processing aid in the manufacture of fibre cement since the replacement of asbestos began in the 1980s. Sepiolite increases the inter-laminar bond between cement layers and improves the homogeneity of the slurries. A new type of processed sepiolite product, Wollatrop TF/C, has been evaluated as a retention agent for fine particles in the production of fibre cement on a Hatschek machine. The effect of Wollatrop TF/C on filtering and fine particle losses was studied, as well as its interaction with anionic polyacrylamide (APAM) and microsilica. The experimental designs were factorial, and the VDT equipment used for measuring retention and drainage was a modified Rapid Köthen laboratory sheet former. Wollatrop TF/C increased fine particle retention, improving the economy of the process and reducing the accumulation of solids in recycled process water. At the same time, drainage time increased sharply at high concentrations; however, drainage time can be improved by adjusting the APAM concentration. Wollatrop TF/C and microsilica interact very little with each other. Microsilica does not control fine particle losses, while Wollatrop TF/C does so efficiently. Further research on APAM type (molecular weight and anionic character) is advisable to improve drainage.

Keywords: drainage, fibre-reinforced cement, fine particle losses, flocculation, microsilica, sepiolite

Procedia PDF Downloads 326
3137 Modification of a Human Powered Lawn Mower

Authors: Akinwale S. O., Koya O. A.

Abstract:

The need to provide an ecologically friendly and effective lawn-mowing solution is crucial for the well-being of humans. This study involved the modification of a human-powered lawn mower designed to cut tall grasses in residential areas. The study designed and fabricated a reel-type mower blade system and a pedal-powered test rig for the blade system, and evaluated the performance of the machine. The machine was tested on overgrown grass plots at the College of Education Staff School, Ilesa. Parameters such as theoretical field capacity, field efficiency, and effective field capacity were determined from the data gathered. The quality of cut achieved by the unit was also documented. Test results showed that the fabricated cutting system produced a theoretical field capacity of 0.11 ha/h and an effective field capacity of 0.08 ha/h. Moreover, the unit's cutting system showed a substantial improvement over existing reel mower designs in its ability to cut on both the forward and reverse phases of its motion. This study established that the blade system described herein has the capacity to cut tall grasses; the device can therefore eliminate the need for powered mowers entirely on small residential lawns.
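
The implied field efficiency follows directly from the reported capacities (field efficiency = effective field capacity / theoretical field capacity):

```python
# Field efficiency implied by the reported capacities.
theoretical_capacity = 0.11   # ha/h
effective_capacity = 0.08     # ha/h
print(f"field efficiency = {effective_capacity / theoretical_capacity:.1%}")
# -> about 72.7%
```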

Keywords: effective field capacity, field efficiency, theoretical field capacity, quality of cut

Procedia PDF Downloads 147
3136 Vaccination Coverage and Its Associated Factors in India: An ML Approach to Understand the Hierarchy and Inter-Connections

Authors: Anandita Mitro, Archana Srivastava, Bidisha Banerjee

Abstract:

The present paper attempts to analyze the hierarchy and interconnection of factors responsible for the uptake of BCG vaccination in India. The study uses data from the National Family Health Survey (NFHS-5), conducted during 2019-21. The univariate logistic regression method is used to understand the univariate effects, while the interconnection effects are studied using the Conditional Inference Tree (CIT), a non-parametric machine learning (ML) model. The hierarchy of the factors is further established using the Conditional Inference Forest, an extension of the CIT approach. The results suggest that BCG vaccination coverage was influenced more by system-level factors and awareness than by education or socio-economic status. Factors such as place of delivery, antenatal care, and postnatal care were crucial, with variations based on delivery location. Region-specific differences were also observed, which could be explained by these factors. Awareness of the disease was less impactful, as were wealth and urban or rural residence, although awareness did appear to substitute for inadequate ANC. Thus, from the policy point of view, certain subpopulations show a lower prevalence of vaccination, which implies a need for population-specific policy action to achieve one hundred percent coverage.
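
Conditional inference trees are typically fit with R's partykit; as a rough Python stand-in, a shallow decision tree over invented survey-style factors illustrates the idea of hierarchical splits:

```python
# Decision-tree stand-in for a conditional inference tree (data are invented).
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
n = 1000
institutional_delivery = rng.integers(0, 2, n)   # place of delivery
anc_visits = rng.integers(0, 9, n)               # antenatal care visits
awareness = rng.integers(0, 2, n)
# Toy outcome: delivery and ANC dominate; awareness substitutes for low ANC.
p = (0.4 + 0.3 * institutional_delivery + 0.03 * anc_visits
     + 0.1 * awareness * (anc_visits < 4))
bcg = rng.random(n) < np.clip(p, 0, 1)

X = np.column_stack([institutional_delivery, anc_visits, awareness])
tree = DecisionTreeClassifier(max_depth=3).fit(X, bcg)
print(export_text(tree, feature_names=["delivery", "anc", "awareness"]))
```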

Keywords: vaccination, NFHS, machine learning, public health

Procedia PDF Downloads 59
3135 Predictive Analytics in Traffic Flow Management: Integrating Temporal Dynamics and Traffic Characteristics to Estimate Travel Time

Authors: Maria Ezziani, Rabie Zine, Amine Amar, Ilhame Kissani

Abstract:

This paper introduces a predictive model for urban transportation engineering, which is vital for efficient traffic management. Utilizing comprehensive datasets and advanced statistical techniques, the model accurately forecasts travel times by considering temporal variations and traffic dynamics. Machine learning algorithms, including regression trees and neural networks, are employed to capture sequential dependencies. Results indicate significant improvements in predictive accuracy, particularly during peak hours and holidays, with the incorporation of traffic flow and speed variables. Future enhancements may integrate weather conditions and traffic incidents. The model's applications range from adaptive traffic management systems to route optimization algorithms, facilitating congestion reduction and enhancing journey reliability. Overall, this research extends beyond travel time estimation, offering insights into broader transportation planning and policy-making realms, empowering stakeholders to optimize infrastructure utilization and improve network efficiency.
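
An illustrative sketch of the tree-based forecasting component on invented data, with temporal markers (hour, holiday) and traffic variables (flow, speed) as features:

```python
# Regression-tree ensemble forecasting travel time (illustrative data only).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 5000
hour = rng.integers(0, 24, n)
is_holiday = rng.integers(0, 2, n)
flow = rng.gamma(3.0, 300.0, n)          # vehicles/hour
speed = rng.normal(80, 15, n)            # km/h
peak = ((hour >= 7) & (hour <= 9)) | ((hour >= 16) & (hour <= 19))
travel_time = 20 + 10 * peak + 0.01 * flow - 0.1 * speed + rng.normal(0, 2, n)

X = np.column_stack([hour, is_holiday, flow, speed])
model = RandomForestRegressor(n_estimators=200, random_state=0)
print(cross_val_score(model, X, travel_time, cv=5,
                      scoring="neg_mean_absolute_error").mean())
```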

Keywords: predictive analytics, traffic flow, travel time estimation, urban transportation, machine learning, traffic management

Procedia PDF Downloads 84
3134 Multi-Particle Finite Element Modelling Simulation Based on Cohesive Zone Method of Cold Compaction Behavior of Laminar Al and NaCl Composite Powders

Authors: Yanbing Feng, Deqing Mei, Yancheng Wang, Zichen Chen

Abstract:

With the advantages of low volume density, high specific surface area, light weight, and good permeability, porous aluminum has the potential to be used in the automotive, railway, chemical, and construction industries, among others. A layered powder sintering and dissolution method was developed to fabricate the porous-surface Al structure with high efficiency. However, the densification mechanism during the cold compaction of laminar composite powders is still unclear. In this study, multi-particle finite element modelling (MPFEM) based on the cohesive zone method (CZM) is used to simulate the cold compaction behavior of laminar Al and NaCl composite powders. To obtain the densification mechanism, the macro and micro properties of the final compacts are characterized and analyzed. The robustness and accuracy of the numerical model are first verified by experimental results and data fitting. The results indicate that the CZM-based multi-particle FEM is an effective way to simulate the compaction of the laminar powders and the fracture process of the NaCl powders. In the compaction of the laminar powders, voids are mainly filled by particle rearrangement, plastic deformation of the Al powders, and brittle fracture of the NaCl powders. Large stress is mainly concentrated within the NaCl powders, and a contact force network is formed. The Al powder near the NaCl powder or the mold shows a larger stress distribution on its contact surface. Therefore, the densification process in the cold compaction of laminar Al and NaCl composite powders is successfully analyzed by the CZM-based multi-particle FEM.

Keywords: cold compaction, cohesive zone, multi-particle FEM, numerical modeling, powder forming

Procedia PDF Downloads 152
3133 Wear and Mechanical Properties of Nodular Iron Modified with Copper

Authors: J. Ramos, V. Gil, A. F. Torres

Abstract:

Nodular iron is a material that has shown great advantages with respect to other materials (steel and gray iron) in the production of machine elements. The engineering industry, especially the automotive sector, is a potential user of this material. As is known, alloying elements modify the properties of steels and castings. Copper has been investigated as a structural modifier of nodular iron, but studies of its mechanical and tribological implications still need to be addressed for industrial use. With the aim of improving the mechanical properties of nodular iron, alloying elements (Mn, Si, and Cu) are added in order to increase the pearlite (or ferrite) structure according to the percentage of the alloying element. In this research, nodular iron with three different percentages of copper (residual, 0.5%, and 1.2%) was obtained using an induction furnace process. Chemical analysis was performed by optical emission spectrometry, and microstructures were characterized by optical microscopy (ASTM E3) and scanning electron microscopy (SEM). The mechanical behavior was studied in a mechanical test machine (ASTM E8), and a pin-on-disk tribometer (ASTM G99) was used to assess wear resistance. It is observed that copper increases the pearlite structure, improving the wear and tensile behavior. The improvement is greater at 0.5% copper, since too large an increase in pearlite leads to a loss of ductility.

Keywords: copper, mechanical properties, nodular iron, pearlite structure, wear

Procedia PDF Downloads 384
3132 Automatic Adjustment of Thresholds via Closed-Loop Feedback Mechanism for Solder Paste Inspection

Authors: Chia-Chen Wei, Pack Hsieh, Jeffrey Chen

Abstract:

Surface mount technology (SMT) is widely used in electronic assembly, in which electronic components are mounted onto the surface of the printed circuit board (PCB). Most of the defects in the SMT process are related to the quality of solder paste printing. These defects lead to considerable manufacturing costs in the electronics assembly industry. Therefore, the solder paste inspection (SPI) machine for controlling and monitoring the amount of solder paste printed has become an important part of the production process. So far, SPI thresholds have been set using statistical analysis and expert experience to determine the appropriate settings. Because the production data are not normally distributed and production processes exhibit various sources of variation, defects related to solder paste printing still occur. In order to solve this problem, this paper proposes an online machine learning algorithm, called the automatic threshold adjustment (ATA) algorithm, and a closed-loop architecture in the SMT process to determine the best threshold settings. Simulation experiments show that the proposed threshold settings improve the accuracy from 99.85% to 100%.
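
The ATA algorithm itself is not published in this abstract; the sketch below shows a generic closed-loop threshold update of the kind described, with all details assumed.

```python
# Generic closed-loop threshold adjustment (assumed logic, not the paper's ATA).
def update_threshold(threshold, measured_volume, is_defect, lr=0.05):
    """Nudge the limit toward the measurement whenever the decision was wrong."""
    flagged = measured_volume < threshold        # below the limit -> flag board
    if flagged != is_defect:                     # false alarm or missed defect
        threshold += lr * (measured_volume - threshold)
    return threshold

threshold = 70.0                                 # % of nominal paste volume (assumed)
stream = [(70.2, False), (68.0, True), (69.5, False), (65.0, True)]
for volume, defect in stream:                    # feedback from downstream inspection
    threshold = update_threshold(threshold, volume, defect)
    print(f"volume {volume:.1f} -> threshold {threshold:.2f}")
```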

Keywords: big data analytics, Industry 4.0, SPI threshold setting, surface mount technology

Procedia PDF Downloads 116
3131 Service Interactions Coordination Using a Declarative Approach: Focuses on Deontic Rule from Semantics of Business Vocabulary and Rules Models

Authors: Nurulhuda A. Manaf, Nor Najihah Zainal Abidin, Nur Amalina Jamaludin

Abstract:

Coordinating service interactions is a vital part of developing distributed applications that are built up as networks of autonomous participants (e.g., software components, web services, online resources) and involve collaboration among a diverse number of participant services from different providers. The complexity of coordinating service interactions reflects how important the techniques and approaches for designing and coordinating the interaction between participant services are to ensuring that the overall goal of a collaboration between participant services is achieved. The objective of this research is to develop the capability of steering a complex service interaction towards a desired outcome. Therefore, an efficient technique for modelling, generating, and verifying the coordination of service interactions is developed. The developed model describes service interactions using the service choreography approach, focusing on a declarative approach and advocating an Object Management Group (OMG) standard, the Semantics of Business Vocabulary and Rules (SBVR). This model, namely the SBVR model for service choreographies, focuses on declarative deontic rules expressing both obligation and prohibition, which are especially useful in coordinating service interactions. The generated SBVR model is then formulated and transformed into an Alloy model, which is verified using the Alloy Analyzer. The transformation of SBVR into Alloy makes it possible to automatically generate the corresponding coordination of service interactions (service choreography), producing an immediate execution instance that satisfies the constraints of the specification and verifying whether a specific request can be realised in the generated choreography.

Keywords: service choreography, service coordination, behavioural modelling, complex interactions, declarative specification, verification, model transformation, semantics of business vocabulary and rules, SBVR

Procedia PDF Downloads 154
3130 A Machining Method of Cross-Shape Nano Channel and Experiments for Silicon Substrate

Authors: Zone-Ching Lin, Hao-Yuan Jheng, Zih-Wun Jhang

Abstract:

This paper innovatively proposes a machining method for a cross-shape nanochannel on a single-crystal silicon substrate, using the concept of specific down force energy (SDFE) and an AFM machine. The paper develops a method of machining the cross-shape nanochannel groove at a fixed down force using SDFE theory; by combining this with the planned cutting path of the cross-shape nanochannel up to the 5th machining layer, it finally achieves a cross-shape nanochannel with a cutting depth of around 20 nm. Since standing burrs may form at the machined nanochannel edge, the paper uses a smaller down force to cut the edge of the cross-shape nanochannel in order to reduce the burr height, converging the height of the standing burr at the edge to below the 0.54 nm limit set by the paper. Finally, the paper conducts experiments of machining the cross-shape nanochannel groove on single-crystal silicon with an AFM probe and compares the simulation and experimental results, proving that the proposed machining method for a cross-shape nanochannel is feasible.

Keywords: atomic force microscopy (AFM), cross-shape nanochannel, silicon substrate, specific down force energy (SDFE)

Procedia PDF Downloads 373
3129 Validating Texture Analysis as a Tool for Determining Bioplastic (Bio)Degradation

Authors: Sally J. Price, Greg F. Walker, Weiyi Liu, Craig R. Bunt

Abstract:

Plastics, due to their long lifespan, are becoming more of an environmental concern once their useful life has been completed. There is a vast array of different types of plastic; they can be found in almost every ecosystem on Earth and are of particular concern in terrestrial environments, where they can become incorporated into the food chain. Hence, bioplastics have recently become of more interest to manufacturers and the public, as they have the ability to (bio)degrade in commercial and home composting situations. However, tools with which to quantify how they degrade in response to environmental variables are still being developed. One such approach is texture analysis: a TA.XT Texture Analyser (Stable Microsystems) was used to determine the force required to break or punch holes in standard ASTM D638 Type IV 3D-printed bioplastic "dogbones", depending on their thickness. Manufacturers' recommendations for calibrating the Texture Analyser are one approach to standardising results; however, an independent technique using dummy dogbones and a substitute for the bioplastic was used alongside the samples. This approach proved unexpectedly more valuable than realised at the start of the trial, as irregular results were later discovered with the substitute material before valuable samples collected from the field were lost due to a possible machine malfunction. This work shows the value of having an independent approach to machine calibration for accurate sample analysis with a Texture Analyser when analysing bioplastic samples.

Keywords: bioplastic, degradation, environment, texture analyzer

Procedia PDF Downloads 206
3128 A Comprehensive Survey on Machine Learning Techniques and User Authentication Approaches for Credit Card Fraud Detection

Authors: Niloofar Yousefi, Marie Alaghband, Ivan Garibay

Abstract:

With the increase in credit card usage, the volume of credit card misuse has also significantly increased, which may cause appreciable financial losses for both credit card holders and the financial organizations issuing credit cards. As a result, financial organizations are working hard on developing and deploying credit card fraud detection methods, in order to adapt to ever-evolving, increasingly sophisticated defrauding strategies and to identify illicit transactions as quickly as possible to protect themselves and their customers. Compounding the complex nature of such adverse strategies, credit card fraudulent activities are rare events compared to the number of legitimate transactions. Hence, the challenge of developing fraud detection methods that are accurate and efficient is substantially intensified and, as a consequence, credit card fraud detection has lately become a very active area of research. In this work, we provide a survey of current techniques most relevant to the problem of credit card fraud detection. We carry out our survey in two main parts. In the first part, we focus on studies utilizing classical machine learning models, which mostly employ traditional transactional features to make fraud predictions. These models typically rely on some static physical characteristics, such as what the user knows (knowledge-based method) or what he/she has access to (object-based method). In the second part of our survey, we review more advanced techniques of user authentication, which use behavioral biometrics to identify an individual based on his/her unique behavior while interacting with his/her electronic devices. These approaches rely on how people behave (instead of what they do), which cannot be easily forged. By providing an overview of current approaches and the results reported in the literature, this survey aims to drive the future research agenda for the community in order to develop more accurate, reliable, and scalable models of credit card fraud detection.

Keywords: credit card fraud detection, user authentication, behavioral biometrics, machine learning, literature survey

Procedia PDF Downloads 121
3127 Modelling the Dynamics and Optimal Control Strategies of Terrorism within the Southern Borno State Nigeria

Authors: Lubem Matthew Kwaghkor

Abstract:

Terrorism, which remains one of the largest threats faced by nations and communities around the world, including Nigeria, is the calculated use of violence to create a general climate of fear in a population in order to attain particular goals that might be political, religious, or economic. Several terrorist groups are currently active in Nigeria, leading to attacks on both civilian and military targets. Among these groups, Boko Haram is the deadliest, operating mainly in Borno State. The southern part of Borno State in North-Eastern Nigeria has been plagued by terrorism, insurgency, and conflict for several years. Understanding the dynamics of terrorism is crucial for developing effective strategies to mitigate its impact on communities and to facilitate peace-building efforts. This research aims to develop a mathematical model that captures the dynamics of terrorism within the southern part of Borno State, Nigeria, incorporating both government and local community intervention strategies as control measures for combating terrorism. A compartmental model of five nonlinear differential equations is formulated. The model analyses show that a feasible solution set of the model exists and is bounded. Stability analyses show that both the terrorism-free equilibrium and the terrorism-endemic equilibrium are asymptotically stable, making the model meaningful. Optimal control theory will be employed to identify the most effective strategy to prevent or minimize acts of terrorism. The research outcomes are expected to contribute towards enhancing security and stability in southern Borno State while providing valuable insights for policymakers, security agencies, and researchers. This is ongoing research.
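
The five compartments are not specified in this abstract; purely as an illustration of the approach, the sketch below integrates a generic five-compartment terrorism-dynamics system with invented rates.

```python
# Generic five-compartment dynamics (illustrative only, not the paper's model):
# susceptible S, non-susceptible N, terrorists T, imprisoned P, reformed R.
import numpy as np
from scipy.integrate import solve_ivp

def dynamics(t, y, beta=0.3, sigma=0.1, gamma=0.2, delta=0.15, rho=0.05):
    S, N, T, P, R = y
    recruit = beta * S * T / (S + N + T + P + R)   # radicalisation pressure
    dS = -recruit - sigma * S                      # sigma: community intervention
    dN = sigma * S + rho * R                       #        moves S to non-susceptible
    dT = recruit - gamma * T                       # gamma: capture rate (government)
    dP = gamma * T - delta * P                     # delta: release/reform rate
    dR = delta * P - rho * R
    return [dS, dN, dT, dP, dR]

sol = solve_ivp(dynamics, (0, 100), [0.7, 0.2, 0.05, 0.03, 0.02],
                dense_output=True)
print("terrorist fraction after 100 time units:", sol.y[2, -1])
```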

Keywords: modelling, terrorism, optimal control, susceptible, non-susceptible, community intervention

Procedia PDF Downloads 22
3126 Career Guidance System Using Machine Learning

Authors: Mane Darbinyan, Lusine Hayrapetyan, Elen Matevosyan

Abstract:

Artificial Intelligence in Education (AIED) has been created to help students get ready for the workforce, and over the past 25 years, it has grown significantly, offering a variety of technologies to support academic, institutional, and administrative services. However, career choice is still challenging, especially considering the labor market's rapid change. While choosing a career, people face various obstacles because they do not take their own preferences into consideration, which might lead to many other problems, such as shifting jobs, work stress, occupational infirmity, reduced productivity, and manual error. Besides preferences, people should properly evaluate their technical and non-technical skills, as well as their personalities. Professional counseling has become a difficult undertaking for counselors due to the wide range of career choices brought on by changing technological trends. It is necessary to close this gap by utilizing technology that makes sophisticated predictions about a person's career goals based on their personality. Hence, there is a need to create an automated model that helps in decision-making based on user inputs. Improving career guidance can be achieved by embedding machine learning into the career consulting ecosystem. Various career guidance systems work based on the same logic: classifying applicants, matching applications with appropriate departments or jobs, making predictions, and providing suitable recommendations. Methodologies like KNN, neural networks, k-means clustering, D-Tree, and many other advanced algorithms are applied to the collected data to predict suitable careers. Besides helping users with their career choice, these systems provide numerous useful opportunities for making this hard decision. They help candidates recognize where they specifically lack sufficient skills so that they can improve them. They can also offer an e-learning platform that takes the user's knowledge gaps into account. Furthermore, users can be provided with details on a particular job, such as the abilities required to excel in that industry.
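
A minimal sketch of the classification idea using KNN on toy data (a real system would use far more skill and personality features):

```python
# KNN-based career recommendation (toy data; feature set is an assumption).
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Features: [coding skill, communication skill, creativity], each scored 0-10.
X = np.array([[9, 3, 4], [8, 5, 6], [2, 9, 5], [3, 8, 7], [5, 4, 9], [4, 5, 9]])
careers = ["developer", "developer", "manager", "manager", "designer", "designer"]

model = KNeighborsClassifier(n_neighbors=3).fit(X, careers)
print(model.predict([[7, 6, 5]]))        # suggested career for a new user
```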

Keywords: career guidance system, machine learning, career prediction, predictive decision, data mining, technical and non-technical skills

Procedia PDF Downloads 80
3124 Life Prediction of Cutting Tool by the Workpiece Cutting Condition

Authors: Noemia Gomes de Mattos de Mesquita, José Eduardo Ferreira de Oliveira, Arimatea Quaresma Ferraz

Abstract:

Stops to exchange the cutting tool, to reset the tool in a CNC turning operation, or to measure workpiece dimensions have a direct influence on production. Premature removal of the cutting tool results in a high machining cost, since the share of the cost relating to the cutting tool increases. On the other hand, late exchange of the cutting tool also increases the cost of production, because parts outside the preset tolerances may require rework, when it does not cause bigger problems such as the breaking of cutting tools or the loss of the part. Therefore, the right time to exchange the tool should be well defined if production costs are to be minimized. When flank wear is what limits tool life, predetermining the time for which a cutting tool can machine within the tolerance limits can be done without difficulty. This paper aims to show how the life of the cutting tool can be calculated taking into account the cutting parameters (cutting speed, feed, and depth of cut), the workpiece material, the power of the machine, the dimensional tolerance of the part, the surface finish, the geometry of the cutting tool, and the operating conditions of the machine tool, once the parameters of the Taylor equation are known. These parameters were determined for ABNT 1038 steel machined with carbide cutting tools.
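
For concreteness, the extended Taylor relation V * T^n * f^a * d^b = C gives the tool life directly; the constants below are illustrative, not the values fitted for ABNT 1038 steel.

```python
# Worked example of the extended Taylor tool-life relation:
# T = (C / (V * f**a * d**b)) ** (1/n). Constants are illustrative only.
def taylor_tool_life(V, f, d, C=350.0, n=0.25, a=0.3, b=0.15):
    """Tool life T in minutes for cutting speed V (m/min), feed f (mm/rev),
    and depth of cut d (mm), given Taylor constants C, n, a, b."""
    return (C / (V * f**a * d**b)) ** (1.0 / n)

print(f"T = {taylor_tool_life(V=200, f=0.2, d=1.5):.1f} min")
```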

Keywords: machining, productions, cutting condition, design, manufacturing, measurement

Procedia PDF Downloads 634
3123 Effect of Electronic Banking on the Performance of Deposit Money Banks in Nigeria: Using ATM and Mobile Phone as a Case Study

Authors: Charity Ifunanya Osakwe, Victoria Ogochuchukwu Obi-Nwosu, Chima Kenneth Anachedo

Abstract:

The study investigates how automated teller machines (ATMs) and mobile banking affect deposit money banks in the Nigerian economy. The study made use of time series data obtained from the Central Bank of Nigeria Statistical Bulletin from 2009 to 2021. Central Bank of Nigeria (CBN) data on automated teller machines and mobile phones were used to proxy electronic banking, while total deposits in banks proxied the performance of deposit money banks. The analysis was done using the ordinary least squares econometric technique with the aid of the EViews statistical package. The results show that the automated teller machine has a positive and significant effect on the total deposits of deposit money banks in Nigeria, and that mobile banking likewise has a positive and significant effect on those deposits. The study concluded that e-banking has increased banking access for customers and created room for banks to expand their operations to more customers. The study recommends that banks in Nigeria prioritize the expansion and maintenance of ATM networks and continue to invest in and develop more mobile banking services.
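
A sketch of the estimation step with simulated series standing in for the CBN data (variable names are assumptions):

```python
# OLS of total deposits on ATM and mobile-banking transaction values (dummy data).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
years = np.arange(2009, 2022)
atm_value = np.linspace(400, 2800, len(years)) + rng.normal(0, 50, len(years))
mobile_value = np.linspace(10, 1500, len(years)) + rng.normal(0, 40, len(years))
total_deposits = (5000 + 2.1 * atm_value + 3.4 * mobile_value
                  + rng.normal(0, 300, len(years)))

X = sm.add_constant(np.column_stack([atm_value, mobile_value]))
model = sm.OLS(total_deposits, X).fit()
print(model.summary())                    # coefficients on ATM and mobile use
```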

Keywords: electronic, banking, automated teller machines, mobile, deposit

Procedia PDF Downloads 53