Search results for: bare machine computing
2526 The Relationship between Human Pose and Intention to Fire a Handgun
Authors: Joshua van Staden, Dane Brown, Karen Bradshaw
Abstract:
Gun violence is a significant problem in modern-day society. Early detection of carried handguns through closed-circuit television (CCTV) can aid in preventing potential gun violence. However, CCTV operators have a limited attention span. Machine learning approaches to automating the detection of dangerous gun carriers provide a way to aid CCTV operators in identifying these individuals. This study provides insight into the relationship between human key points extracted using human pose estimation (HPE) and their intention to fire a weapon. We examine the feature importance of each keypoint and their correlations. We use principal component analysis (PCA) to reduce the feature space and optimize detection. Finally, we run a set of classifiers to determine what form of classifier performs well on this data. We find that hips, shoulders, and knees tend to be crucial aspects of the human pose when making these predictions. Furthermore, the horizontal position plays a larger role than the vertical position. Of the 66 key points, nine principal components could be used to make nonlinear classifications with 86% accuracy. Furthermore, linear classifications could be done with 85% accuracy, showing that there is a degree of linearity in the data.
Keywords: feature engineering, human pose, machine learning, security
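The pipeline described above maps naturally onto a short sketch: reduce the 66 keypoint coordinates to nine principal components and compare a linear and a nonlinear classifier, as reported in the abstract. This is an illustrative reconstruction, not the authors' code; the file names, the choice of logistic regression and an RBF SVM, and the cross-validation setup are assumptions.

```python
# Hypothetical sketch: 66 pose-keypoint features -> 9 principal components,
# then a linear and a nonlinear classifier are compared by cross-validation.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X = np.load("pose_keypoints.npy")   # shape (n_samples, 66), assumed file
y = np.load("fire_intention.npy")   # binary intention labels, assumed file

for name, clf in [("linear", LogisticRegression(max_iter=1000)),
                  ("nonlinear", SVC(kernel="rbf"))]:
    model = make_pipeline(StandardScaler(), PCA(n_components=9), clf)
    acc = cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()
    print(f"{name} classifier on 9 PCs: {acc:.2%}")
```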
Procedia PDF Downloads 93
2525 A Framework of Dynamic Rule Selection Method for Dynamic Flexible Job Shop Problem by Reinforcement Learning Method
Authors: Rui Wu
Abstract:
In the volatile modern manufacturing environment, new orders occur randomly at any time, and pre-emptive methods are infeasible. This calls for a real-time scheduling method that can produce a reasonably good schedule quickly. The dynamic Flexible Job Shop problem is an NP-hard scheduling problem that hybridizes the dynamic Job Shop problem with the Parallel Machine problem. A Flexible Job Shop contains different work centres. Each work centre contains parallel machines that can process certain operations. Many algorithms, such as genetic algorithms or simulated annealing, have been proposed to solve static Flexible Job Shop problems. However, the time efficiency of these methods is low, and they are not feasible in a dynamic scheduling problem. Therefore, a dynamic rule selection scheduling system based on the reinforcement learning method is proposed in this research, in which the dynamic Flexible Job Shop problem is divided into several parallel machine problems to decrease its complexity. Firstly, the features of jobs, machines, work centres, and flexible job shops are selected to describe the status of the dynamic Flexible Job Shop problem at each decision point in each work centre. Secondly, a reinforcement learning framework using a double-layer deep Q-learning network is applied to select a proper composite dispatching rule based on the status of each work centre. Then, based on the selected composite dispatching rule, an available operation is selected from the waiting buffer and assigned to an available machine in each work centre. Finally, the proposed algorithm is compared with well-known dispatching rules on the objectives of mean tardiness, mean flow time, mean waiting time, and mean percentage of waiting time in the real-time Flexible Job Shop problem. The results of the simulations prove that the proposed framework has reasonable performance and time efficiency.
Keywords: dynamic scheduling problem, flexible job shop, dispatching rules, deep reinforcement learning
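A minimal sketch of the rule-selection idea is given below: a double deep Q-network whose discrete actions are composite dispatching rules applied at each work-centre decision point. The state dimension, the rule list, the network size, and the hyperparameters are illustrative assumptions, not the authors' settings.

```python
# Minimal sketch (not the authors' implementation) of selecting a composite
# dispatching rule with a double DQN at each work-centre decision point.
import random
import torch
import torch.nn as nn

RULES = ["SPT+EDD", "FIFO+SPT", "LWKR+MOD"]   # assumed composite rules
STATE_DIM = 12                                # assumed number of status features

class QNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                                 nn.Linear(64, len(RULES)))
    def forward(self, s):
        return self.net(s)

online, target = QNet(), QNet()
target.load_state_dict(online.state_dict())
opt = torch.optim.Adam(online.parameters(), lr=1e-3)

def select_rule(state, eps=0.1):
    """Epsilon-greedy choice of a dispatching rule for one work centre."""
    if random.random() < eps:
        return random.randrange(len(RULES))
    with torch.no_grad():
        return int(online(state).argmax())

def double_dqn_update(s, a, r, s_next, gamma=0.99):
    """One double-DQN step: the online net picks the next action, the target net scores it."""
    a_next = online(s_next).argmax()
    y = r + gamma * target(s_next)[a_next].detach()
    loss = (online(s)[a] - y) ** 2
    opt.zero_grad(); loss.backward(); opt.step()
```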
Procedia PDF Downloads 109
2524 Enhanced Extra Trees Classifier for Epileptic Seizure Prediction
Authors: Maurice Ntahobari, Levin Kuhlmann, Mario Boley, Zhinoos Razavi Hesabi
Abstract:
For machine learning based epileptic seizure prediction, it is important for the model to be implemented in small implantable or wearable devices that can be used to monitor epilepsy patients; however, current state-of-the-art methods are complex and computationally intensive. We use Shapley Additive Explanation (SHAP) to find relevant intracranial electroencephalogram (iEEG) features and improve the computational efficiency of a state-of-the-art seizure prediction method based on the extra trees classifier while maintaining prediction performance. Results for a small contest dataset and a much larger dataset with continuous recordings of up to 3 years per patient from 15 patients yield better than chance prediction performance (p < 0.004). Moreover, while the performance of the SHAP-based model is comparable to that of the benchmark, the overall training and prediction time of the model has been reduced by a factor of 1.83. It can also be noted that the feature called zero crossing value is the best EEG feature for seizure prediction. These results suggest state-of-the-art seizure prediction performance can be achieved using efficient methods based on optimal feature selection.
Keywords: machine learning, seizure prediction, extra tree classifier, SHAP, epilepsy
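A hedged sketch of the SHAP-based feature-selection step might look as follows: rank iEEG features by mean absolute SHAP value from a trained extra trees classifier, then retrain on the top-ranked features. File names, the number of retained features, and the handling of the SHAP output shape are assumptions.

```python
# Illustrative sketch: SHAP-ranked feature selection for an extra trees classifier.
import numpy as np
import shap
from sklearn.ensemble import ExtraTreesClassifier

X, y = np.load("ieeg_features.npy"), np.load("labels.npy")  # assumed files

full_model = ExtraTreesClassifier(n_estimators=200).fit(X, y)
explainer = shap.TreeExplainer(full_model)
sv = explainer.shap_values(X)
sv = sv[1] if isinstance(sv, list) else sv   # binary case: positive-class values
if sv.ndim == 3:                             # some shap versions: (samples, features, classes)
    sv = sv[..., 1]
importance = np.abs(sv).mean(axis=0)         # mean |SHAP| per feature

top_k = np.argsort(importance)[::-1][:10]    # keep the 10 strongest features (assumed k)
reduced_model = ExtraTreesClassifier(n_estimators=200).fit(X[:, top_k], y)
```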
Procedia PDF Downloads 114
2523 Radiomics: Approach to Enable Early Diagnosis of Non-Specific Breast Nodules in Contrast-Enhanced Magnetic Resonance Imaging
Authors: N. D'Amico, E. Grossi, B. Colombo, F. Rigiroli, M. Buscema, D. Fazzini, G. Cornalba, S. Papa
Abstract:
Purpose: To characterize, through a radiomic approach, the nature of nodules considered non-specific by expert radiologists, recognized in magnetic resonance mammography (MRm) with T1-weighted (T1w) sequences with paramagnetic contrast. Material and Methods: 47 cases out of 1200 undergoing MRm, in which the MRm assessment gave an uncertain classification (non-specific nodules), were admitted to the study. The clinical outcome of the non-specific nodules was later established through follow-up or further exams (biopsy), yielding 35 benign and 12 malignant cases. All MR images were acquired at 1.5 T: a first basal T1w sequence and then four T1w acquisitions after the paramagnetic contrast injection. After manual segmentation of the lesions by a radiologist and the extraction of 150 radiomic features (30 features at each of 5 subsequent times), a machine learning (ML) approach was used. An evolutionary algorithm (the TWIST system, based on the KNN algorithm) was used to subdivide the dataset into training and validation sets and to select the features yielding the maximal amount of information. After this pre-processing, different machine learning systems were applied to develop a predictive model based on a training-testing crossover procedure. 10 cases with a benign nodule (follow-up older than 5 years) and 18 with an evident malignant tumor (clear malignant histological exam) were added to the dataset in order to allow the ML system to better learn from the data. Results: The Naive Bayes algorithm, working on 79 features selected by the TWIST system, proved to be the best-performing ML system, with a sensitivity of 96%, a specificity of 78%, and a global accuracy of 87% (average values of the two training-testing procedures a-b and b-a). The results showed that, in the subset of 47 non-specific nodules, the algorithm correctly predicted the outcome of 45 nodules that an expert radiologist could not classify. Conclusion: In this pilot study, we identified a radiomic approach allowing ML systems to perform well in the diagnosis of a non-specific nodule at MR mammography. This algorithm could be a great support for the early diagnosis of malignant breast tumors in cases where the radiologist is not able to identify the kind of lesion, and it could reduce the need for long follow-up. Clinical Relevance: This machine learning algorithm could be essential to support the radiologist in the early diagnosis of non-specific nodules, in order to avoid strenuous follow-up and painful biopsy for the patient.
Keywords: breast, machine learning, MRI, radiomics
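For illustration only, the classification step can be sketched as below. The paper's proprietary TWIST evolutionary system is replaced here by a simple mutual-information selector as a stand-in, followed by a Gaussian Naive Bayes classifier on the radiomic features; file names and the two-fold crossover split are assumptions.

```python
# Illustrative sketch only: mutual information stands in for the TWIST system,
# followed by Gaussian Naive Bayes on 79 selected radiomic features.
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_validate

X = np.load("radiomic_features.npy")   # (n_lesions, 150), assumed file
y = np.load("lesion_outcome.npy")      # 0 = benign, 1 = malignant, assumed file

model = make_pipeline(SelectKBest(mutual_info_classif, k=79), GaussianNB())
scores = cross_validate(model, X, y, cv=2,                 # two-fold "a-b / b-a" style split
                        scoring=["recall", "accuracy"])    # recall approximates sensitivity
print(scores["test_recall"].mean(), scores["test_accuracy"].mean())
```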
Procedia PDF Downloads 269
2522 Sustainable Development of Adsorption Solar Cooling Machine
Authors: N. Allouache, W. Elgahri, A. Gahfif, M. Belmedani
Abstract:
Solar radiation is by far the world's largest and most abundant, clean, and permanent energy source. The amount of solar radiation intercepted by the Earth is much higher than the annual global energy use; the energy available from the sun was roughly 5,200 times the world's need in 2006. In recent years, many promising technologies have been developed to harness the sun's energy. These technologies help in environmental protection, economizing energy, and sustainable development, which are the major issues of the world in the 21st century. One of these important technologies is solar cooling systems, which make use of either absorption or adsorption technology. Solar adsorption cooling systems are a good alternative since they operate with environmentally benign, natural refrigerants that are free from CFCs and therefore have zero ozone depletion potential (ODP). A numerical analysis of the thermal and solar performance of an adsorption solar refrigerating system using different adsorbent/adsorbate pairs, such as activated carbon AC35/methanol and activated carbon BPL/ammonia, is undertaken in this study. The modeling of the adsorption cooling machine requires the resolution of the equations describing energy and mass transfer in the tubular adsorber, which is the most important component of the machine. The Wilson and Dubinin-Astakhov models of the solid-adsorbate equilibrium are used to calculate the adsorbed quantity. The porous medium is contained in the annular space, and the adsorber is heated by solar energy. The effects of key parameters on the adsorbed quantity and on the thermal and solar performance are analysed and discussed. The performance of the system, which depends on the incident global irradiance during the whole day, is governed by the weather conditions and by the condenser and evaporator temperatures. The AC35/methanol pair outperforms the BPL/ammonia pair in terms of system performance.
Keywords: activated carbon-methanol pair, activated carbon-ammoniac pair, adsorption, performance coefficients, numerical analysis, solar cooling system
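The Dubinin-Astakhov model mentioned above can be written as W = W0·exp(-(A/E)^n) with adsorption potential A = R·T·ln(Ps/P). The numeric sketch below evaluates it for illustrative parameter values that are assumptions, not the fitted constants of the AC35/methanol or BPL/ammonia pairs.

```python
# Hedged numeric sketch of the Dubinin-Astakhov uptake model.
# All parameter values below are illustrative assumptions.
import math

R = 8.314                       # gas constant, J/(mol*K)
W0, E, n = 0.30, 6000.0, 1.3    # limiting uptake (kg/kg), characteristic energy (J/mol), exponent

def adsorbed_quantity(T, P, Ps):
    """Adsorbed mass per kg of adsorbent at temperature T (K) and pressure P (Pa)."""
    A = R * T * math.log(Ps / P)        # adsorption potential
    return W0 * math.exp(-(A / E) ** n)

print(adsorbed_quantity(T=300.0, P=8_000.0, Ps=20_000.0))
```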
Procedia PDF Downloads 79
2521 Integrating Machine Learning and Rule-Based Decision Models for Enhanced B2B Sales Forecasting and Customer Prioritization
Authors: Wenqi Liu, Reginald Bailey
Abstract:
This study proposes a comprehensive and effective approach to business-to-business (B2B) sales forecasting by integrating advanced machine learning models with a rule-based decision-making framework. The methodology addresses the critical challenge of optimizing sales pipeline performance and improving conversion rates through predictive analytics and actionable insights. The first component involves developing a classification model to predict the likelihood of conversion, aiming to outperform traditional methods such as logistic regression in terms of accuracy, precision, recall, and F1 score. Feature importance analysis highlights key predictive factors, such as client revenue size and sales velocity, providing valuable insights into conversion dynamics. The second component focuses on forecasting sales value using a regression model, designed to achieve superior performance compared to linear regression by minimizing mean absolute error (MAE), mean squared error (MSE), and maximizing R-squared metrics. The regression analysis identifies primary drivers of sales value, further informing data-driven strategies. To bridge the gap between predictive modeling and actionable outcomes, a rule-based decision framework is introduced. This model categorizes leads into high, medium, and low priorities based on thresholds for conversion probability and predicted sales value. By combining classification and regression outputs, this framework enables sales teams to allocate resources effectively, focus on high-value opportunities, and streamline lead management processes. The integrated approach significantly enhances lead prioritization, increases conversion rates, and drives revenue generation, offering a robust solution to the declining pipeline conversion rates faced by many B2B organizations. Our findings demonstrate the practical benefits of blending machine learning with decision-making frameworks, providing a scalable, data-driven solution for strategic sales optimization. This study underscores the potential of predictive analytics to transform B2B sales operations, enabling more informed decision-making and improved organizational outcomes in competitive markets.
Keywords: machine learning, XGBoost, regression, decision making framework, system engineering
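A compact sketch of the two-model-plus-rules idea is given below, assuming the xgboost library, placeholder lead features, and illustrative priority thresholds; none of these are the study's actual data or cut-offs.

```python
# Sketch: conversion classifier + sales-value regressor + rule-based prioritization.
import numpy as np
from xgboost import XGBClassifier, XGBRegressor

X = np.load("lead_features.npy")        # assumed file of encoded lead features
y_conv = np.load("converted.npy")       # 0/1 conversion label, assumed
y_value = np.load("deal_value.npy")     # realised sales value per lead, assumed

clf = XGBClassifier(n_estimators=300).fit(X, y_conv)
reg = XGBRegressor(n_estimators=300).fit(X[y_conv == 1], y_value[y_conv == 1])

def prioritise(x, p_hi=0.6, v_hi=50_000):
    """Rule-based layer on top of the two models; thresholds are assumptions."""
    p = clf.predict_proba(x.reshape(1, -1))[0, 1]   # conversion probability
    v = reg.predict(x.reshape(1, -1))[0]            # predicted sales value
    if p >= p_hi and v >= v_hi:
        return "high"
    if p >= p_hi or v >= v_hi:
        return "medium"
    return "low"
```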
Procedia PDF Downloads 25
2520 Developing Early Intervention Tools: Predicting Academic Dishonesty in University Students Using Psychological Traits and Machine Learning
Authors: Pinzhe Zhao
Abstract:
This study focuses on predicting university students' cheating tendencies using psychological traits and machine learning techniques. Academic dishonesty is a significant issue that compromises the integrity and fairness of educational institutions. While much research has been dedicated to detecting cheating behaviors after they have occurred, there is limited work on predicting such tendencies before they manifest. The aim of this research is to develop a model that can identify students who are at higher risk of engaging in academic misconduct, allowing for earlier interventions to prevent such behavior. Psychological factors are known to influence students' likelihood of cheating. Research shows that traits such as test anxiety, moral reasoning, self-efficacy, and achievement motivation are strongly linked to academic dishonesty. High levels of anxiety may lead students to cheat as a way to cope with pressure. Those with lower self-efficacy are less confident in their academic abilities, which can push them toward dishonest behaviors to secure better outcomes. Students with weaker moral judgment may also justify cheating more easily, believing it to be less wrong under certain conditions. Achievement motivation also plays a role, as students driven primarily by external rewards, such as grades, are more likely to cheat compared to those motivated by intrinsic learning goals. In this study, data on students' psychological traits is collected through validated assessments, including scales for anxiety, moral reasoning, self-efficacy, and motivation. Additional data on academic performance, attendance, and engagement in class are also gathered to create a more comprehensive profile. Using machine learning algorithms such as Random Forest, Support Vector Machines (SVM), and Long Short-Term Memory (LSTM) networks, the research builds models that can predict students' cheating tendencies. These models are trained and evaluated using metrics like accuracy, precision, recall, and F1 scores to ensure they provide reliable predictions. The findings demonstrate that combining psychological traits with machine learning provides a powerful method for identifying students at risk of cheating. This approach allows for early detection and intervention, enabling educational institutions to take proactive steps in promoting academic integrity. The predictive model can be used to inform targeted interventions, such as counseling for students with high test anxiety or workshops aimed at strengthening moral reasoning. By addressing the underlying factors that contribute to cheating behavior, educational institutions can reduce the occurrence of academic dishonesty and foster a culture of integrity. In conclusion, this research contributes to the growing body of literature on predictive analytics in education. It offers an approach that integrates psychological assessments with machine learning to predict cheating tendencies. This method has the potential to significantly improve how academic institutions address academic dishonesty, shifting the focus from punishment after the fact to prevention before it occurs. By identifying high-risk students and providing them with the necessary support, educators can help maintain the fairness and integrity of the academic environment.
Keywords: academic dishonesty, cheating prediction, intervention strategies, machine learning, psychological traits, academic integrity
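The tabular part of such a pipeline can be sketched as follows with a random forest and an SVM on psychological-trait scores; the LSTM branch for sequential data is omitted, and the files, labels, and split are assumptions.

```python
# Minimal sketch of the tabular branch: trait scores -> RF and SVM classifiers.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

X = np.load("trait_scores.npy")       # anxiety, moral reasoning, self-efficacy, motivation, ... (assumed)
y = np.load("dishonesty_label.npy")   # 1 = prior misconduct flag, assumed label definition

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)
for model in (RandomForestClassifier(n_estimators=300), SVC(kernel="rbf")):
    model.fit(X_tr, y_tr)
    print(type(model).__name__)
    print(classification_report(y_te, model.predict(X_te)))   # precision, recall, F1
```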
Procedia PDF Downloads 23
2519 Comparative Analysis of Change in Vegetation in Four Districts of Punjab through Satellite Imagery, Land Use Statistics and Machine Learning
Authors: Mirza Waseem Abbas, Syed Danish Raza
Abstract:
For many countries, agriculture is still the major force driving the economy and a critically important socioeconomic sector, despite exceptional industrial development across the globe. In countries like Pakistan, this sector is considered the backbone of the economy, and most economic decision making revolves around agricultural outputs and data. Timely and accurate facts and figures about this vital sector hold immense significance and have serious implications for the long-term development of the economy. Therefore, any significant improvements in the statistics and other forms of data regarding the agriculture sector are considered important by all policymakers. This is especially true for decision making for the betterment of crops and the agriculture sector in general. Provincial and federal agricultural departments collect data for all cash and non-cash crops and the sector in general every year. Traditional data collection for such a large sector, i.e., agriculture, being time-consuming, prone to human error, and labor-intensive, is slowly but gradually being replaced by remote sensing techniques. For this study, remotely sensed data were used for change detection (machine learning, supervised and unsupervised classification) to assess the increase or decrease in area under agriculture over the last fifteen years due to urbanization. Detailed Landsat images for the selected agricultural districts were acquired for the year 2000 and compared to images of the same areas acquired for the year 2016. Observed differences, validated through detailed analysis of the areas, show that there was a considerable decrease in vegetation during the last fifteen years in four major agricultural districts of the Punjab province due to urbanization (housing societies).
Keywords: change detection, area estimation, machine learning, urbanization, remote sensing
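The unsupervised part of the change-detection workflow can be sketched roughly as below: classify each Landsat scene with k-means into the same number of land-cover classes and compare per-class areas between the two dates. File names, band handling, the number of classes, and the 30 m pixel size are assumptions, and cluster labels still have to be matched to land-cover classes by an analyst.

```python
# Rough sketch: unsupervised classification of two Landsat scenes and per-class area comparison.
import numpy as np
import rasterio
from sklearn.cluster import KMeans

def classify(path, n_classes=6):
    with rasterio.open(path) as src:
        bands = src.read()                        # (bands, rows, cols)
    pixels = bands.reshape(bands.shape[0], -1).T  # one row per pixel
    # In practice the scene would be subsampled before clustering for speed.
    return KMeans(n_clusters=n_classes, n_init=10).fit_predict(pixels)

PIXEL_HA = 30 * 30 / 10_000                       # one Landsat pixel in hectares
for year, path in [("2000", "landsat_2000.tif"), ("2016", "landsat_2016.tif")]:
    labels = classify(path)                       # cluster IDs, not yet named classes
    areas = np.bincount(labels) * PIXEL_HA
    print(year, dict(enumerate(np.round(areas, 1))))
```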
Procedia PDF Downloads 251
2518 Short Text Classification for Saudi Tweets
Authors: Asma A. Alsufyani, Maram A. Alharthi, Maha J. Althobaiti, Manal S. Alharthi, Huda Rizq
Abstract:
Twitter is one of the most popular microblogging sites that allows users to publish short text messages called 'tweets'. Increasing the number of accounts to follow (followings) increases the number of tweets that will be displayed from different topics in an unclassified manner in the timeline of the user. Therefore, it can be a vital solution for many Twitter users to have their tweets in a timeline classified into general categories to save the user's time and to provide easy and quick access to tweets based on topics. In this paper, we developed a classifier for timeline tweets trained on a dataset consisting of 3600 tweets in total, which were collected from Saudi Twitter and annotated manually. We experimented with the well-known Bag-of-Words approach to text classification, and we used support vector machines (SVM) in the training process. The trained classifier performed well on a test dataset, with an average F1-measure equal to 92.3%. The classifier has been integrated into an application, which practically proved the classifier's ability to classify timeline tweets of the user.
Keywords: corpus creation, feature extraction, machine learning, short text classification, social media, support vector machine, Twitter
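A minimal sketch of the Bag-of-Words plus SVM pipeline described above might look like this; the CSV file and column names are assumptions.

```python
# Minimal sketch: bag-of-words features + linear SVM for tweet topic classification.
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

df = pd.read_csv("saudi_tweets.csv")      # columns: text, category (assumed)
model = make_pipeline(CountVectorizer(), LinearSVC())
f1 = cross_val_score(model, df["text"], df["category"], cv=5, scoring="f1_macro")
print(f"average F1: {f1.mean():.3f}")
```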
Procedia PDF Downloads 155
2517 Isolation and Classification of Red Blood Cells in Anemic Microscopic Images
Authors: Jameela Ali Alkrimi, Abdul Rahim Ahmad, Azizah Suliman, Loay E. George
Abstract:
Red blood cells (RBCs) are among the most commonly and intensively studied types of blood cells in cell biology. A lack of RBCs is a condition characterized by a lower-than-normal hemoglobin level; this condition is referred to as 'anemia'. In this study, software was developed to isolate RBCs by using a machine learning approach to classify anemic RBCs in microscopic images. Several features of RBCs were extracted using image processing algorithms, including principal component analysis (PCA). With the proposed method, RBCs were isolated in 34 seconds from an image containing 18 to 27 cells. We also proposed that PCA could be performed to increase the speed and efficiency of classification. Our classifier algorithm yielded accuracy rates of 100%, 99.99%, and 96.50% for the K-nearest neighbor (K-NN) algorithm, support vector machine (SVM), and artificial neural network (ANN), respectively. Classification was evaluated in terms of sensitivity, specificity, and the kappa statistic. In conclusion, the classification results were obtained in a shorter time and with greater efficiency when PCA was used.
Keywords: red blood cells, pre-processing image algorithms, classification algorithms, principal component analysis PCA, confusion matrix, kappa statistical parameters, ROC
Procedia PDF Downloads 405
2516 Critique of the City-Machine: Dismantling the Scientific Socialist Utopia of Soviet Territorialization
Authors: Rachel P. Vasconcellos
Abstract:
Russian constructivism is usually enshrined in history as another 'modernist ism', that is, as an artistic phenomenon related to the early twentieth century's zeitgeist. Our aim in this essay is to analyze the constructivist movement not within the field of art history nor through the aesthetic debate, but through a critical geographical theory, taking the main idea of construction in the concrete sense of the production of space. Seen from the perspective of the critique of space, constructivist production is presented as a plan of totality, designed as the spatiality of socialist society, contemplating and articulating all its scalar levels: the objects of everyday life, the building, the city, and the territory. The constructivist avant-garde manifests a geographical ideology, laying the foundations of modern planning ideology. Taken in its political sense, the artistic avant-garde of the Russian Revolution intended to anticipate the forms of a social future already set in motion: its plastic research pointed to new formal expressions for revolutionary contents. With the foundation of new institutions under a new State, the specialized labor of artists, architects, and planners was given the task of designing the socialist society, based on the theses of scientific socialism. Their projects were developed under the politico-economic imperatives of Soviet modernization, that is, the structural needs of industrialization and the inclusion of all people in the universe of productive work. This context shapes the creative atmosphere of the constructivist avant-garde, which uses the methods of engineering to transform everyday life. Architecture, urban planning, and state planning, integrated, must then operate as a spatial arrangement morphologically able to produce socialist life. But due to the intrinsic contradictions of the process, the rational and geometric aesthetic of the City-Machine appears, finally, as an image of a scientific socialist utopia.
Keywords: city-machine, critique of space, production of space, soviet territorialization
Procedia PDF Downloads 279
2515 Predicting Provider Service Time in Outpatient Clinics Using Artificial Intelligence-Based Models
Authors: Haya Salah, Srinivas Sharan
Abstract:
Healthcare facilities use appointment systems to schedule their appointments and to manage access to their medical services. With the growing demand for outpatient care, it is now imperative to manage physicians' time effectively. However, high variation in consultation duration affects the clinical scheduler's ability to estimate the appointment duration and allocate provider time appropriately. Underestimating consultation times can lead to physician burnout, misdiagnosis, and patient dissatisfaction. On the other hand, appointment durations that are longer than required lead to doctor idle time and fewer patient visits. Therefore, a good estimation of consultation duration has the potential to improve timely access to care, resource utilization, quality of care, and patient satisfaction. Although the literature on factors influencing consultation length abounds, little work has been done to predict it using data-driven approaches. Therefore, this study aims to predict consultation duration using supervised machine learning (ML) algorithms, which predict an outcome variable (e.g., consultation duration) based on potential features that influence the outcome. In particular, ML algorithms learn from a historical dataset without explicitly being programmed and uncover the relationship between the features and the outcome variable. A subset of the data used in this study has been obtained from the electronic medical records (EMR) of four different outpatient clinics located in central Pennsylvania, USA. Also, publicly available information on doctors' characteristics, such as gender and experience, has been extracted from online sources. This research develops three popular ML algorithms (deep learning, random forest, gradient boosting machine) to predict the treatment time required for a patient and conducts a comparative analysis of these algorithms with respect to predictive performance. The findings of this study indicate that ML algorithms have the potential to predict the provider service time with superior accuracy. While the current approach of experience-based appointment duration estimation adopted by the clinic resulted in a mean absolute percentage error (MAPE) of 25.8%, the deep learning algorithm developed in this study yielded the best performance with a MAPE of 12.24%, followed by the gradient boosting machine (13.26%) and random forests (14.71%). Besides, this research also identified the critical variables affecting consultation duration to be patient type (new vs. established), doctor's experience, zip code, appointment day, and doctor's specialty. Moreover, several practical insights are obtained based on the comparative analysis of the ML algorithms. The machine learning approach presented in this study can serve as a decision support tool and could be integrated into the appointment system for effectively managing patient scheduling.
Keywords: clinical decision support system, machine learning algorithms, patient scheduling, prediction models, provider service time
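The model comparison can be sketched as below for the two tree-based regressors and the MAPE metric reported in the abstract; the deep learning branch and the real EMR feature set are omitted, and the files and split are assumptions.

```python
# Sketch: compare regressors for consultation duration using MAPE.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_percentage_error

X = np.load("visit_features.npy")    # encoded patient type, provider experience, zip, day, specialty (assumed)
y = np.load("consult_minutes.npy")   # observed consultation duration, assumed file

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
for model in (GradientBoostingRegressor(), RandomForestRegressor(n_estimators=300)):
    model.fit(X_tr, y_tr)
    mape = mean_absolute_percentage_error(y_te, model.predict(X_te))
    print(type(model).__name__, f"MAPE = {mape:.2%}")
```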
Procedia PDF Downloads 122
2514 Corrosion Resistance Performance of Epoxy/Polyamidoamine Coating Due to Incorporation of Nano Aluminium Powder
Authors: Asiful Hossain Seikh, Mohammad Asif Alam, Ubair Abdus Samad, Jabair A. Mohammed, S. M. Al-Zahrani, El-Sayed M. Sherif
Abstract:
In the current investigation, an aliphatic amine-cured diglycidyl ether of bisphenol-A (DGEBA) based epoxy coating was mixed with a certain weight % of polyaminoamide hardener (1:2) and coated on carbon steel panels with and without 1% nanocrystalline Al powder. The corrosion behavior of the coated samples was investigated by exposing them in a salt spray chamber for 500 hours. In accordance with ASTM B-117, the bath was kept at 35 °C and a mist containing 5% NaCl was sprayed at 1.3 bar pressure. The composition of the coatings was confirmed using Fourier-transform infrared spectroscopy (FTIR). Electrochemical characterization of the coated samples was also performed using the potentiodynamic polarization technique and the electrochemical impedance spectroscopy (EIS) technique. All the experiments were done in 3.5% NaCl solution. The nano-Al-coated sample shows good corrosion resistance compared to the sample without nano Al. In fact, after salt spray exposure, no pitting or local damage was observed for the nano-coated sample, and the coating gloss was negligibly affected. The surface morphology of the coated and corroded samples was studied using scanning electron microscopy (SEM).
Keywords: epoxy, nano aluminium, potentiodynamic polarization, salt spray, electrochemical impedance spectroscopy
Procedia PDF Downloads 169
2513 Performants: A Digital Event Manager-Organizer
Authors: Ioannis Andrianakis, Manolis Falelakis, Maria Pavlidou, Konstantinos Papakonstantinou, Ermioni Avramidou, Dimitrios Kalogiannis, Nikolaos Milios, Katerina Bountakidou, Kiriakos Chatzidimitriou, Panagiotis Panagiotopoulos
Abstract:
Artistic events, such as concerts and performances, are challenging to organize because they involve many people with different skill sets. Small and medium venues often struggle to afford the costs and overheads of booking and hosting remote artists, especially if they lack sponsors or subsidies. This limits the opportunities for both venues and artists, especially those outside of big cities. However, more and more research shows that audiences prefer smaller-scale events and concerts, which benefit local economies and communities. To address this challenge, our project "PerformAnts: Digital Event Manager-Organizer" aims to develop a smart digital tool that automates and optimizes the processes and costs of live shows and tours. By using machine learning, applying best practices and training users through workshops, our platform offers a comprehensive solution for a growing market, enhances the mobility of artists and the accessibility of venues and allows professionals to focus on the creative aspects of concert production.
Keywords: event organization, creative industries, event promotion, machine learning
Procedia PDF Downloads 87
2512 An Examination of Changes on Natural Vegetation due to Charcoal Production Using Multi Temporal Land SAT Data
Authors: T. Garba, Y. Y. Babanyara, M. Isah, A. K. Muktari, R. Y. Abdullahi
Abstract:
The increased demand for fuel wood for heating, cooking, and sometimes baking has continued to exert an appreciable impact on natural vegetation. This study focuses on the use of multi-temporal data from Landsat TM of 1986, Landsat ETM of 1999, and Landsat ETM of 2006 to investigate the changes in natural vegetation resulting from charcoal production activities. The three images were classified into bare soil, built-up areas, cultivated land, natural vegetation, rock outcrop, and water bodies. The classified Landsat TM image of 1986 shows the natural vegetation of the study area to be 308,941.48 hectares, equivalent to 50% of the area; it then reduced to 278,061.21 hectares (42.92%) in 1999 and further declined to 199,647.81 hectares in 2006, equivalent to 30.83% of the area. Meanwhile, cultivated land continued to increase from 259,346.80 hectares (42%) in 1986 to 312,966.27 hectares (48.3%) in 1999 and then to 341,719.92 hectares (52.78%) in 2006. These figures show that within the span of 20 years (1986 to 2006), the natural vegetation depreciated by 119,293.81 hectares. This implies that, if the menace is not controlled, the natural vegetation might be lost within another twenty years. This is because forest cleared for charcoal production is normally converted to farmland. The study therefore concluded that there is a need for alternative sources of domestic energy, such as biomass, which can be easily accessible and affordable to people. In addition, the study recommended strong policy enforcement for the protection of forest reserves.
Keywords: charcoal, classification, data, images, land use, natural vegetation
Procedia PDF Downloads 365
2511 Carbon-Nanodots Modified Glassy Carbon Electrode for the Electroanalysis of Selenium in Water
Authors: Azeez O. Idris, Benjamin O. Orimolade, Potlako J. Mafa, Alex T. Kuvarega, Usisipho Feleni, Bhekie B. Mamba
Abstract:
We report a simple and inexpensive method for the electrochemical detection of Se(IV) using carbon nanodots (CNDTs) prepared from oat. The carbon nanodots were synthesised by a green and facile approach and characterised using scanning electron microscopy, high-resolution transmission electron microscopy, Fourier transform infrared spectroscopy, X-ray diffraction, and Raman spectroscopy. The CNDTs were used to fabricate an electrochemical sensor for the quantification of Se(IV) in water. The modification of a glassy carbon electrode (GCE) with carbon nanodots led to an increase in the electroactive surface area of the electrode, which enhances the redox current peak of [Fe(CN)₆]³⁻/⁴⁻ in comparison to the bare GCE. Using square wave voltammetry, a detection limit of 0.05 ppb and a quantification limit of 0.167 ppb were obtained under the optimised parameters: a deposition potential of -200 mV, 0.1 M HNO₃ electrolyte, an electrodeposition time of 60 s, and pH 1. The results further revealed that the GCE-CNDT was not susceptible to many interfering cations, except Cu(II), Pb(II), and Fe(II). The sensor fabrication involves a one-step electrode modification, and the sensor was used to detect Se(IV) in a real water sample; the result obtained is in agreement with the inductively coupled plasma technique. Overall, the electrode offers a cheap, fast, and sensitive way of detecting selenium in environmental matrices.
Keywords: carbon nanodots, square wave voltammetry, nanomaterials, selenium, sensor
Procedia PDF Downloads 92
2510 SVM-RBN Model with Attentive Feature Culling Method for Early Detection of Fruit Plant Diseases
Authors: Piyush Sharma, Devi Prasad Sharma, Sulabh Bansal
Abstract:
Diseases are fairly common in fruits and vegetables because of the changing climatic and environmental circumstances. Crop diseases, which are frequently difficult to control, interfere with the growth and output of the crops. Accurate disease detection and timely disease control measures are required to guarantee high production standards and good quality. In India, apples are a common crop that may be afflicted by a variety of diseases on the fruit, stem, and leaves. Fungi, bacteria, and viruses trigger the early symptoms of leaf diseases. In order to assist farmers and take the appropriate action, it is important to develop an automated system that can be used to detect the type of illness. Machine learning-based image processing can be used for this purpose: this research proposes a system that can automatically identify diseases in apple fruit and apple plants. Hence, this research utilizes a hybrid SVM-RBN model. As a result, the model produces more effective results in terms of accuracy, precision, recall, and F1 score, with respective values of 96%, 99%, 94%, and 93%.
Keywords: fruit plant disease, crop disease, machine learning, image processing, SVM-RBN
Procedia PDF Downloads 65
2509 Application of Griddization Management to Construction Hazard Management
Authors: Lingzhi Li, Jiankun Zhang, Tiantian Gu
Abstract:
Hazard management that can prevent fatal accidents and property losses is a fundamental process during the buildings' construction stage. However, due to a lack of safety supervision resources and operational pressures, the implementation of hazard management in China is poor and ineffective. In order to improve the quality of construction safety management, it is critical to explore the use of information technologies to ensure that the process of hazard management is efficient and effective. After exploring the existing problems of construction hazard management in China, this paper develops a griddization management model for construction hazard management. First, following the knowledge grid infrastructure, a griddization computing infrastructure for construction hazard management is designed, comprising five layers: the resource entity layer, information management layer, task management layer, knowledge transformation layer, and application layer. This infrastructure serves as the technical support for realizing grid management. Second, this study divides construction hazards into grids at the city, district, and construction site levels according to grid principles. Last, a griddization management process including hazard identification, assessment, and control is developed. Meanwhile, all stakeholders of construction safety management, such as owners, contractors, supervision organizations, and government departments, should take the corresponding responsibilities in this process. Finally, a case study based on actual construction hazard identification, assessment, and control is used to validate the effectiveness and efficiency of the proposed griddization management model. The advantage of the designed model is that it realizes information sharing and cooperative management among the various safety management departments.
Keywords: construction hazard, griddization computing, grid management, process
Procedia PDF Downloads 278
2508 Enhancement of Pool Boiling Regimes by Sand Deposition
Authors: G. Mazor, I. Ladizhensky, A. Shapiro, D. Nemirovsky
Abstract:
A great deal of research has been dedicated to evaluating the efficiency of uniform constant and temporary coatings in enhancing the heat transfer rate. Our goal is to investigate sand coatings distributed in both uniform and non-uniform forms. Sand of different sizes (0.2, 0.4, and 0.6 mm) was attached to the surface of a copper ball (30 mm diameter) by means of PVA adhesive as a uniform layer. At the next stage, sand spots were distributed over the ball surface with an areal density that ranges between one spot per 1.18 cm² (for low-density spots) and one spot per 0.51 cm² (for high-density spots). The spot diameter varied from 3 to 6.5 mm and the height from 0.5 to 1.5 mm. All coatings serve as heat transfer enhancers during quenching in liquid nitrogen. The highest heat flux densities achieved during quenching lie in the range of 10.8-20.2 W/cm², depending on the sand layer structure. Application of the enhancing coating increases the amount of heat evacuated by highly effective nucleate and transition boiling by a factor of 4.5 compared to the bare sample. The non-uniform sand coatings increased the heat transfer rate under all pool boiling conditions: nucleate boiling, transition boiling, and the most severe film boiling. A combination of a uniform sand coating together with high-density sand spots increased the average heat transfer rate by a factor of 3.
Keywords: heat transfer enhancement, nucleate boiling, film boiling, transfer boiling
Procedia PDF Downloads 128
2507 Motion Planning and Simulation Design of a Redundant Robot for Sheet Metal Bending Processes
Authors: Chih-Jer Lin, Jian-Hong Hou
Abstract:
Industry 4.0 is a vision of integrated industry implemented by artificial intelligence, computing, software, and Internet technologies. The main goal of Industry 4.0 is to deal with the difficulties arising from competitive pressures in the marketplace. For today's manufacturing factories, the type of production has changed from mass production (high-quantity production with low product variety) to medium-quantity, high-variety production. To offer flexibility, better quality control, and improved productivity, robot manipulators are used to combine material processing, material handling, and part positioning systems into an integrated manufacturing system. To implement an automated system for sheet metal bending operations, motion planning of a 7-degree-of-freedom (DOF) robot is studied in this paper. A virtual reality (VR) environment of a bending cell, which consists of the robot and a bending machine, is established using the virtual robot experimentation platform (V-REP) simulator. For sheet metal bending operations, the robot only needs six DOFs for the pick-and-place or tracking tasks. Therefore, this 7-DOF robot has more DOFs than required to execute a specified task; it can be called a redundant robot. The robot therefore has kinematic redundancies that can be used to deal with task-priority problems. For redundant robots, the pseudo-inverse of the Jacobian is the most popular motion planning method, but pseudo-inverse methods usually lead to a kind of chaotic motion with unpredictable arm configurations as the Jacobian matrix loses rank. To overcome this problem, we propose formulating the motion planning problem as an optimization problem. Moreover, a genetic algorithm (GA) based method is proposed to deal with motion planning of the redundant robot. Simulation results validate that the proposed method is feasible for motion planning of the redundant robot in automated sheet-metal bending operations.
Keywords: redundant robot, motion planning, genetic algorithm, obstacle avoidance
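For context, the baseline pseudo-inverse scheme that the abstract contrasts with its GA-based planner can be sketched as a damped least-squares resolved-rate step for a 7-DOF arm; the Jacobian below is a random placeholder, not the actual robot kinematics, and the damping value is an assumption.

```python
# Sketch of a damped pseudo-inverse (damped least-squares) resolved-rate step.
import numpy as np

def jacobian(q):
    """Placeholder 6x7 Jacobian; a real implementation comes from the robot model."""
    rng = np.random.default_rng(0)
    return rng.standard_normal((6, 7))

def dq_damped(q, x_dot, damping=0.05):
    """Damped pseudo-inverse keeps joint rates bounded near singularities."""
    J = jacobian(q)
    JJt = J @ J.T + (damping ** 2) * np.eye(6)
    return J.T @ np.linalg.solve(JJt, x_dot)   # joint-velocity command

q = np.zeros(7)                                     # current joint configuration
x_dot = np.array([0.01, 0.0, 0.0, 0.0, 0.0, 0.0])   # desired end-effector twist
print(dq_damped(q, x_dot))
```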
Procedia PDF Downloads 149
2506 Hate Speech Detection Using Machine Learning: A Survey
Authors: Edemealem Desalegn Kingawa, Kafte Tasew Timkete, Mekashaw Girmaw Abebe, Terefe Feyisa, Abiyot Bitew Mihretie, Senait Teklemarkos Haile
Abstract:
Currently, hate speech is a growing challenge for society, individuals, policymakers, and researchers, as social media platforms make it easy to anonymously create and grow online friends and followers and provide an online forum for debate about specific issues of community life, culture, politics, and others. Despite this, research on identifying and detecting hate speech has not achieved satisfactory performance, which is why further research on this issue is constantly called for. This paper provides a systematic review of the literature in this field, with a focus on approaches such as word embedding techniques, machine learning, deep learning technologies, hate speech terminology, and other state-of-the-art technologies, along with their challenges. In this paper, we have carried out a systematic review of the last six years of literature from ResearchGate and Google Scholar. Furthermore, limitations, challenges in algorithm selection and use, data collection and cleaning challenges, and future research directions are discussed in detail.
Keywords: Amharic hate speech, deep learning approach, hate speech detection review, Afaan Oromo hate speech detection
Procedia PDF Downloads 179
2505 Thick Data Analytics for Learning Cataract Severity: A Triplet Loss Siamese Neural Network Model
Authors: Jinan Fiaidhi, Sabah Mohammed
Abstract:
Diagnosing cataract severity is an important factor in deciding whether to undertake surgery. It is usually done by an ophthalmologist or by taking a variety of fundus photographs that must then be examined by the ophthalmologist. This paper carries out an investigation using a Siamese neural net that can be trained with small anchor samples to score cataract severity. The model used in this paper is based on a triplet loss function that draws on the ophthalmologist's experience in rating positive and negative anchors against a specific cataract scaling system. This approach, which incorporates the heuristics of the ophthalmologist, is generally called the thick data approach, a kind of machine learning that learns from a few shots. Clinical Relevance: The lens of the eye is mostly made up of water and proteins. A cataract occurs when these proteins in the eye lens start to clump together and block light, causing impaired vision. This research aims at employing thick data machine learning techniques to rate the severity of the cataract using a Siamese neural network.
Keywords: thick data analytics, siamese neural network, triplet-loss model, few shot learning
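The triplet-loss objective at the core of the Siamese model can be sketched as follows; the toy backbone, image size, batch, and margin are assumptions rather than the paper's architecture.

```python
# Sketch of the triplet-loss objective on anchor / positive / negative image batches.
import torch
import torch.nn as nn

embed = nn.Sequential(nn.Flatten(), nn.Linear(224 * 224 * 3, 128))  # toy embedding backbone
triplet = nn.TripletMarginLoss(margin=1.0)

anchor = torch.randn(4, 3, 224, 224)     # ophthalmologist-rated anchor images (random stand-ins)
positive = torch.randn(4, 3, 224, 224)   # same cataract severity grade as the anchor
negative = torch.randn(4, 3, 224, 224)   # different severity grade

loss = triplet(embed(anchor), embed(positive), embed(negative))
loss.backward()   # pulls positives toward the anchor, pushes negatives away
```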
Procedia PDF Downloads 113
2504 The Effects of Anapana Meditation Training Program Monitored by Skin Conductance and Temperature (SC/ST) Biofeedback on Stress in Bachelor’s Degree Students
Authors: Ormanee Patarathipakorn
Abstract:
Background: Stress is a major psychological problem affecting the physical and mental health of undergraduate students. The aim of the study was to determine the effectiveness of a meditation training program (MTP) for stress reduction, measured by a biofeedback (BB) machine. Material and Methods: This was a quasi-experimental study conducted in the Faculty of Dentistry, Thammasat University, Thailand. The study period was between August and December 2023. Participants were first-year dentistry students. The MTP was concentration meditation (Anapana meditation). Stress was evaluated using the Thai version of the perceived stress scale (T-PSS-10) at one week before the study and at 14 and 18 weeks. Stress evaluation by the biofeedback machine (skin conductance: SC and skin temperature: ST) was performed at one week before the study and at 4, 8, 14, and 18 weeks. Data from the T-PSS-10 and SC/ST biofeedback were collected and analyzed. Results: A total of 28 subjects were recruited. The mean age of the participants was 18.4 years. Two-thirds (19/28) were female. Stress reduction from the MTP was detected from 4 and 8 weeks by ST and SC biofeedback, respectively. T-PSS-10 scores before the MTP and at 14 and 18 weeks were 17.7 ± 5.4, 9.8 ± 3.1, and 8.4 ± 3.1, respectively, with statistical significance. Conclusion: The meditation training program could reduce stress, as measured by skin conductance and temperature biofeedback.
Keywords: stress, meditation, biofeedback, student
Procedia PDF Downloads 38
2503 T-S Fuzzy Modeling Based on Power Coefficient Limit Nonlinearity Applied to an Isolated Single Machine Load Frequency Deviation Control
Authors: R. S. Sheu, H. Usman, M. S. Lawal
Abstract:
Takagi-Sugeno (T-S) fuzzy model-based control of load frequency deviation in a single machine with a limit nonlinearity on the power coefficient is presented in this paper. Two T-S fuzzy rules, with only the rotor angle variable as input in the premise part and linear state-space models in the consequent part involving characteristic matrices determined from limits set on the power coefficient constant, are formulated. State feedback control gains for closed-loop control were determined from the formulated Linear Matrix Inequality (LMI) with an eigenvalue optimization scheme for asymptotic and exponential stability (speed of response). Numerical evaluation of the closed-loop system was carried out in Matlab. Simulation results generated for both the open- and closed-loop systems showed the effectiveness of the control scheme in maintaining load frequency stability.
Keywords: T-S fuzzy model, state feedback control, linear matrix inequality (LMI), frequency deviation control
Procedia PDF Downloads 398
2502 Revolutionizing Project Management: A Comprehensive Review of Artificial Intelligence and Machine Learning Applications for Smarter Project Execution
Authors: Wenzheng Fu, Yue Fu, Zhijiang Dong, Yujian Fu
Abstract:
The integration of artificial intelligence (AI) and machine learning (ML) into project management is transforming how engineering projects are executed, monitored, and controlled. This paper provides a comprehensive survey of AI and ML applications in project management, systematically categorizing their use in key areas such as project data analytics, monitoring, tracking, scheduling, and reporting. As project management becomes increasingly data-driven, AI and ML offer powerful tools for improving decision-making, optimizing resource allocation, and predicting risks, leading to enhanced project outcomes. The review highlights recent research that demonstrates the ability of AI and ML to automate routine tasks, provide predictive insights, and support dynamic decision-making, which in turn increases project efficiency and reduces the likelihood of costly delays. This paper also examines the emerging trends and future opportunities in AI-driven project management, such as the growing emphasis on transparency, ethical governance, and data privacy concerns. The research suggests that AI and ML will continue to shape the future of project management by driving further automation and offering intelligent solutions for real-time project control. Additionally, the review underscores the need for ongoing innovation and the development of governance frameworks to ensure responsible AI deployment in project management. The significance of this review lies in its comprehensive analysis of AI and ML's current contributions to project management, providing valuable insights for both researchers and practitioners. By offering a structured overview of AI applications across various project phases, this paper serves as a guide for the adoption of intelligent systems, helping organizations achieve greater efficiency, adaptability, and resilience in an increasingly complex project management landscape.
Keywords: artificial intelligence, decision support systems, machine learning, project management, resource optimization, risk prediction
Procedia PDF Downloads 24
2501 The Classification of Parkinson Tremor and Essential Tremor Based on Frequency Alteration of Different Activities
Authors: Chusak Thanawattano, Roongroj Bhidayasiri
Abstract:
This paper proposes a novel feature set for classifying Parkinson tremor and essential tremor. Ten ET and ten PD subjects are asked to perform kinetic, postural, and resting tests. Empirical mode decomposition (EMD) is used to decompose the collected tremor signal into a set of intrinsic mode functions (IMFs). The IMFs are used for reconstructing representative signals. The feature set is composed of the peak frequencies of the IMFs and the reconstructed signals. Hypothesizing that the dominant frequency components of subjects with PD and ET change in different directions for different tests, the differences in peak frequencies of IMFs and reconstructed signals between pairwise tests (kinetic-resting, kinetic-postural, and postural-resting) are considered as potential features. Sets of features are used for training and testing with classifiers including the quadratic discriminant classifier (QLC) and the support vector machine (SVM). The best accuracy, the best sensitivity, and the best specificity are 90%, 87.5%, and 92.86%, respectively.
Keywords: tremor, Parkinson, essential tremor, empirical mode decomposition, quadratic discriminant, support vector machine, peak frequency, auto-regressive, spectrum estimation
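The feature idea can be sketched roughly as below: decompose a tremor recording with EMD, take the peak frequency of each IMF, and use the difference between two test conditions as a feature. This assumes the PyEMD package, a 100 Hz sampling rate, and placeholder recording files.

```python
# Rough sketch of EMD-based peak-frequency features for pairwise test differences.
import numpy as np
from PyEMD import EMD   # assumes the PyEMD package (pip install EMD-signal)

FS = 100.0   # sampling rate in Hz, assumed

def peak_frequency(x):
    spectrum = np.abs(np.fft.rfft(x - x.mean()))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / FS)
    return freqs[spectrum.argmax()]

def imf_peak_freqs(signal, n_imfs=4):
    imfs = EMD().emd(signal)[:n_imfs]            # first few intrinsic mode functions
    return np.array([peak_frequency(imf) for imf in imfs])

kinetic = np.load("kinetic.npy")     # assumed tremor recordings
resting = np.load("resting.npy")
features = imf_peak_freqs(kinetic) - imf_peak_freqs(resting)   # kinetic-resting differences
```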
Procedia PDF Downloads 443
2500 Deployment of Armed Soldiers in European Cities as a Source of Insecurity among Czech Population
Authors: Blanka Havlickova
Abstract:
In the last ten years, growing numbers of troops with machine guns have been serving on the streets of European cities. We can see them around government buildings, major transport hubs, synagogues, galleries, and main tourist landmarks. As the main purpose of the armed soldiers' presence in European cities, authorities declare the prevention of terrorist attacks and psychological support for tourists and the domestic population. The main objective of the following study is to find out whether the deployment of armed soldiers in European cities has a calming and reassuring effect on Czech citizens (whether the presence of armed soldiers makes the Czech population feel more secure) or rather becomes a stress factor (the presence of soldiers standing guard in full military fatigues recalls serious criminality and terrorist attacks, which is reflected in the fears and insecurity of the Czech population). The initial hypothesis of this study is connected with the priming theory, the idea that when we are exposed to an image (armed soldier), it makes us unconsciously focus on a topic connected with this image (terrorism). This paper is based on a quantitative public survey, which was carried out in the form of electronic questioning among the citizens of the Czech Republic. Respondents answered 14 questions about two European cities, London and Paris. Besides general questions investigating the respondents' awareness of these cities, some of the questions focused on the fear that the respondents had when picturing themselves leaving next Monday for the given city (London or Paris). The questions asking about respondents' travel fears and concerns were accompanied by different photos. When answering the question about fear, some respondents were presented with a photo of the Westminster Palace and the Eiffel Tower with ordinary citizens, while other respondents were presented with a picture of the Westminster Palace and the Eiffel Tower not only with ordinary citizens but also with one soldier holding a machine gun. The main goal of this paper is to analyse and compare data about concerns for these two groups of respondents (presented with different pictures) and to find out if and how an armed soldier with a machine gun in front of the Westminster Palace or the Eiffel Tower affects the public's concerns about visiting the site. In other words, the aim of this paper is to confirm or rebut the hypothesis that seeing a soldier with a machine gun in front of the Eiffel Tower or the Westminster Palace automatically triggers an association with a terrorist attack, leading to an increase in fear and insecurity among the Czech population.
Keywords: terrorism, security measures, priming, risk perception
Procedia PDF Downloads 252
2499 Effects of Artificial Intelligence and Machine Learning on Social Media for Health Organizations
Authors: Ricky Leung
Abstract:
Artificial intelligence (AI) and machine learning (ML) have revolutionized the way health organizations approach social media. The sheer volume of data generated through social media can be overwhelming, but AI and ML can help organizations effectively manage this information to improve the health and well-being of individuals and communities. One way AI can be used to enhance social media in health organizations is through sentiment analysis. This involves analyzing the emotions expressed in social media posts to better understand public opinion and respond accordingly. This can help organizations gauge the impact of their campaigns, track the spread of misinformation, and improve communication with the public. While social media is a useful tool, researchers and practitioners have expressed fear that it will be used for the spread of misinformation, which can have serious consequences for public health. Health organizations must work to ensure that AI systems are transparent, trustworthy, and unbiased so they can help minimize the spread of misinformation. In conclusion, AI and ML have the potential to greatly enhance the use of social media in health organizations. These technologies can help organizations effectively manage large amounts of data and understand stakeholders' sentiments. However, it is important to carefully consider the potential consequences and ensure that these systems are carefully designed to minimize the spread of misinformation.
Keywords: AI, ML, social media, health organizations
Procedia PDF Downloads 90
2498 Structural Design Optimization of Reinforced Thin-Walled Vessels under External Pressure Using Simulation and Machine Learning Classification Algorithm
Authors: Lydia Novozhilova, Vladimir Urazhdin
Abstract:
An optimization problem for reinforced thin-walled vessels under uniform external pressure is considered. The conventional approaches to optimization generally start with pre-defined geometric parameters of the vessels and then employ analytic or numeric calculations and/or experimental testing to verify functionality, such as stability under the projected conditions. The proposed approach consists of two steps. First, the feasibility domain is identified in the multidimensional parameter space. Every point in the feasibility domain defines a design satisfying both geometric and functional constraints. Second, an objective function defined in this domain is formulated and optimized. The broader applicability of the suggested methodology is maximized by implementing the Support Vector Machine (SVM) classification algorithm of machine learning for identification of the feasible design region. Training data for the SVM classifier are obtained using the Simulation package of SOLIDWORKS®. Based on these data, the SVM algorithm produces a curvilinear boundary separating admissible and non-admissible sets of design parameters with maximal margins. Then, optimization of the vessel parameters in the feasibility domain is performed using standard algorithms for constrained optimization. As an example, optimization of a ring-stiffened closed cylindrical thin-walled vessel with semi-spherical caps under high external pressure is implemented. As a functional constraint, the von Mises stress criterion is used, but any other stability constraint admitting a mathematical formulation can be incorporated into the proposed approach. The suggested methodology has good potential for reducing the design time needed to find optimal parameters of thin-walled vessels under uniform external pressure.
Keywords: design parameters, feasibility domain, von Mises stress criterion, Support Vector Machine (SVM) classifier
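The two-step idea can be sketched conceptually as follows: an SVM trained on simulation-labelled designs learns the feasibility boundary, and a constrained optimizer then minimises an objective subject to the SVM decision function staying non-negative. The parameter names, bounds, and the mass-like objective are assumptions, not the paper's formulation.

```python
# Conceptual sketch: SVM feasibility boundary + constrained minimisation inside it.
import numpy as np
from sklearn.svm import SVC
from scipy.optimize import minimize

X = np.load("designs.npy")    # columns: shell thickness, ring spacing, ring height (assumed)
y = np.load("feasible.npy")   # 1 = passes the von Mises check in simulation, assumed labels

svm = SVC(kernel="rbf").fit(X, y)

def mass(p):                  # illustrative objective, not the paper's
    t, s, h = p
    return t * (1.0 + h / s)

res = minimize(
    mass,
    x0=X[y == 1].mean(axis=0),                        # start from a feasible centroid
    constraints=[{"type": "ineq",                      # decision_function >= 0 keeps the
                  "fun": lambda p: svm.decision_function(p.reshape(1, -1))[0]}],  # design feasible
    bounds=[(5e-3, 5e-2), (0.1, 1.0), (5e-3, 5e-2)],   # assumed parameter bounds
)
print(res.x, mass(res.x))
```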
Procedia PDF Downloads 328
2497 Cloud Resources Utilization and Science Teacher’s Effectiveness in Secondary Schools in Cross River State, Nigeria
Authors: Michael Udey Udam
Abstract:
Background: This study investigated the impact of cloud resources, a component of cloud computing, on science teachers' effectiveness in secondary schools in Cross River State. Three (3) research questions and three (3) alternative hypotheses guided the study. Method: The descriptive survey design was adopted for the study. The population of the study comprised 1209 science teachers in public secondary schools of Cross River State. Sample: A sample of 487 teachers was drawn from the population using a stratified random sampling technique. A researcher-made structured questionnaire with 18 items was used for data collection. Research question one was answered using the Pearson Product Moment Correlation, while research question two and the hypotheses were answered using Analysis of Variance (ANOVA) statistics in the Statistical Package for Social Sciences (SPSS) at a 0.05 level of significance. Results: The results of the study revealed that there is a positive correlation between the utilization of cloud resources in teaching and teaching effectiveness among science teachers in secondary schools in Cross River State; there is a negative correlation between gender and the utilization of cloud resources among science teachers in secondary schools in Cross River State; and there is a significant correlation between teaching experience and the utilization of cloud resources among science teachers in secondary schools in Cross River State. Conclusion: The study justifies the effectiveness of the Cross River State government policy of introducing cloud computing into the education sector. The study recommends that the policy should be sustained.
Keywords: cloud resources, science teachers, effectiveness, secondary school
Procedia PDF Downloads 76