Search results for: predictive accuracy
4078 Real-Time Pedestrian Detection Method Based on Improved YOLOv3
Authors: Jingting Luo, Yong Wang, Ying Wang
Abstract:
Pedestrian detection in image or video data is a very important and challenging task in security surveillance. The difficulty of this task is to locate and detect pedestrians of different scales in complex scenes accurately. To solve these problems, a deep neural network (RT-YOLOv3) is proposed to realize real-time pedestrian detection at different scales in security monitoring. RT-YOLOv3 improves the traditional YOLOv3 algorithm. Firstly, a deep residual network is added to extract pedestrian features. Then six convolutional neural networks with different scales are designed and fused with the corresponding scale feature maps in the residual network to form the final feature pyramid to perform pedestrian detection tasks. This method can better characterize pedestrians. In order to further improve the accuracy and generalization ability of the model, a hybrid pedestrian data set training method is used to extract pedestrian data from the VOC data set and train with the INRIA pedestrian data set. Experiments show that the proposed RT-YOLOv3 method achieves a mAP (mean average precision) of 93.57% at 46.52 frames per second. In terms of accuracy, RT-YOLOv3 performs better than Fast R-CNN, Faster R-CNN, YOLO, SSD, YOLOv2, and YOLOv3. This method reduces the missed detection rate and false detection rate, improves the positioning accuracy, and meets the requirements of real-time detection of pedestrian objects.
Keywords: pedestrian detection, feature detection, convolutional neural network, real-time detection, YOLOv3
Procedia PDF Downloads 141
4077 Comparative Evaluation of Accuracy of Selected Machine Learning Classification Techniques for Diagnosis of Cancer: A Data Mining Approach
Authors: Rajvir Kaur, Jeewani Anupama Ginige
Abstract:
With recent trends in Big Data and advancements in Information and Communication Technologies, the healthcare industry is at the stage of its transition from clinician-oriented to technology-oriented. Many people around the world die of cancer because the disease was not diagnosed at an early stage. Nowadays, computational methods in the form of Machine Learning (ML) are used to develop automated decision support systems that can diagnose cancer with high confidence in a timely manner. This paper aims to carry out a comparative evaluation of a selected set of ML classifiers on two existing datasets: breast cancer and cervical cancer. The ML classifiers compared in this study are Decision Tree (DT), Support Vector Machine (SVM), k-Nearest Neighbor (k-NN), Logistic Regression, Ensemble (Bagged Tree) and Artificial Neural Networks (ANN). The evaluation is carried out based on the standard evaluation metrics Precision (P), Recall (R), F1-score and Accuracy. The experimental results based on these metrics show that ANN achieved the highest accuracy (99.4%) when tested with the breast cancer dataset. On the other hand, when these ML classifiers are tested with the cervical cancer dataset, the Ensemble (Bagged Tree) technique gave better accuracy (93.1%) in comparison to the other classifiers.
Keywords: artificial neural networks, breast cancer, classifiers, cervical cancer, f-score, machine learning, precision, recall
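As a rough illustration of the evaluation workflow described in this abstract (not the authors' code), the sketch below trains several of the named classifiers on scikit-learn's bundled breast cancer dataset and reports precision, recall, F1-score and accuracy; the cervical cancer data and the exact model settings of the study are not reproduced.

```python
# Minimal sketch of the comparative evaluation, assuming scikit-learn's
# built-in breast cancer data as a stand-in for the study's datasets.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import BaggingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import precision_score, recall_score, f1_score, accuracy_score

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

classifiers = {
    "DT": DecisionTreeClassifier(random_state=0),
    "SVM": SVC(),
    "k-NN": KNeighborsClassifier(),
    "LogReg": LogisticRegression(max_iter=5000),
    "Bagged Tree": BaggingClassifier(random_state=0),
    "ANN": MLPClassifier(max_iter=2000, random_state=0),
}

for name, clf in classifiers.items():
    y_pred = clf.fit(X_train, y_train).predict(X_test)
    print(f"{name:12s} P={precision_score(y_test, y_pred):.3f} "
          f"R={recall_score(y_test, y_pred):.3f} "
          f"F1={f1_score(y_test, y_pred):.3f} "
          f"Acc={accuracy_score(y_test, y_pred):.3f}")
```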
Procedia PDF Downloads 277
4076 Land Cover Mapping Using Sentinel-2, Landsat-8 Satellite Images, and Google Earth Engine: A Study Case of the Beterou Catchment
Authors: Ella Sèdé Maforikan
Abstract:
Accurate land cover mapping is essential for effective environmental monitoring and natural resources management. This study focuses on assessing the classification performance of two satellite datasets and evaluating the impact of different input feature combinations on classification accuracy in the Beterou catchment, situated in the northern part of Benin. Landsat-8 and Sentinel-2 images from June 1, 2020, to March 31, 2021, were utilized. Employing the Random Forest (RF) algorithm on Google Earth Engine (GEE), a supervised classification categorized the land into five classes: forest, savannas, cropland, settlement, and water bodies. GEE was chosen due to its high-performance computing capabilities, mitigating computational burdens associated with traditional land cover classification methods. By eliminating the need for individual satellite image downloads and providing access to an extensive archive of remote sensing data, GEE facilitated efficient model training on remote sensing data. The study achieved commendable overall accuracy (OA), ranging from 84% to 85%, even without incorporating spectral indices and terrain metrics into the model. Notably, the inclusion of additional input sources, specifically terrain features like slope and elevation, enhanced classification accuracy. The highest accuracy was achieved with Sentinel-2 (OA = 91%, Kappa = 0.88), slightly surpassing Landsat-8 (OA = 90%, Kappa = 0.87). This underscores the significance of combining diverse input sources for optimal accuracy in land cover mapping. The methodology presented herein not only enables the creation of precise, expeditious land cover maps but also demonstrates the prowess of cloud computing through GEE for large-scale land cover mapping with remarkable accuracy. The study emphasizes the synergy of different input sources to achieve superior accuracy. As a future recommendation, the application of Light Detection and Ranging (LiDAR) technology is proposed to enhance vegetation type differentiation in the Beterou catchment. Additionally, a cross-comparison between Sentinel-2 and Landsat-8 for assessing long-term land cover changes is suggested.Keywords: land cover mapping, Google Earth Engine, random forest, Beterou catchment
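A minimal, illustrative Earth Engine Python sketch of a Random Forest classification of a Sentinel-2 composite is given below; it is not the study's script. The asset IDs, band list, class property name, and the catchment/training collections are placeholders.

```python
# Illustrative sketch only: supervised Random Forest land cover classification
# with the Earth Engine Python API. The two FeatureCollection asset IDs and the
# 'landcover' class property are assumed placeholders, not from the study.
import ee
ee.Initialize()

catchment = ee.FeatureCollection('users/example/beterou_catchment')       # placeholder asset
training_points = ee.FeatureCollection('users/example/training_points')   # placeholder asset

composite = (ee.ImageCollection('COPERNICUS/S2_SR')
             .filterDate('2020-06-01', '2021-03-31')
             .filterBounds(catchment)
             .median()
             .clip(catchment.geometry()))

bands = ['B2', 'B3', 'B4', 'B8', 'B11', 'B12']
ndvi = composite.normalizedDifference(['B8', 'B4']).rename('NDVI')   # optional spectral index
stack = composite.select(bands).addBands(ndvi)

samples = stack.sampleRegions(collection=training_points,
                              properties=['landcover'],  # assumed class property name
                              scale=10)

classifier = ee.Classifier.smileRandomForest(numberOfTrees=100).train(
    features=samples,
    classProperty='landcover',
    inputProperties=stack.bandNames())

classified = stack.classify(classifier)
```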
Procedia PDF Downloads 63
4075 Concussion: Clinical and Vocational Outcomes from Sport Related Mild Traumatic Brain Injury
Authors: Jack Nash, Chris Simpson, Holly Hurn, Ronel Terblanche, Alan Mistlin
Abstract:
There is an increasing incidence of mild traumatic brain injury (mTBI) cases throughout sport and, with this, a growing interest from governing bodies to ensure these are managed appropriately and player welfare is prioritised. The Berlin consensus statement on concussion in sport recommends a multidisciplinary approach when managing those patients who do not have full resolution of mTBI symptoms. There is as yet no standardised guideline to follow in the treatment of complex cases of mTBI in athletes. The aim of this project was to analyse the outcomes, both clinical and vocational, of all patients admitted to the mild Traumatic Brain Injury (mTBI) service at the UK’s Defence Military Rehabilitation Centre Headley Court between 1st June 2008 and 1st February 2017 as a result of a sport-induced injury, and to evaluate potential predictive indicators of outcome. Patients were identified from a database maintained by the mTBI service. Clinical and occupational outcomes were ascertained from medical and occupational employment records, recorded prospectively, at the time of discharge from the mTBI service. Outcomes were graded based on the vocational independence scale (VIS) and clinical documentation at discharge. Predictive indicators, including referral time, age at time of injury, previous mental health diagnosis and a financial claim in place at time of entry to the service, were assessed using logistic regression. 45 patients were treated for sport-related mTBI during this time frame. Clinically, 96% of patients had full resolution of their mTBI symptoms after input from the mTBI service. 51% of patients returned to work at their previous vocational level, 4% had ongoing mTBI symptoms, 22% had ongoing physical rehabilitation needs, 11% required mental health input and 11% required further vestibular rehabilitation. Neither age, time to referral, pre-existing mental health condition nor compensation seeking had a significant impact on either vocational or clinical outcome in this population. The vast majority of patients reviewed in the mTBI clinic had persistent symptoms which could not be managed in primary care. A consultant-led, multidisciplinary approach to the diagnosis and management of mTBI has resulted in excellent clinical outcomes in these complex cases. High levels of symptom resolution suggest that this referral and treatment pathway is successful and is a model which could be replicated in other organisations with consultant-led input. Further understanding of both predictive and individual factors would allow clinicians to focus treatments on those who are most likely to develop long-term complications following mTBI. A consultant-led, multidisciplinary service ensures that a large number of patients will have complete resolution of mTBI symptoms after sport-related mTBI. Further research is now required to ascertain the key predictive indicators of outcome following sport-related mTBI.
Keywords: brain injury, concussion, neurology, rehabilitation, sports injury
Procedia PDF Downloads 157
4074 Heritage 3D Digitalization Combining High Definition Photogrammetry with Metrologic Grade Laser Scans
Authors: Sebastian Oportus, Fabrizio Alvarez
Abstract:
3D digitalization of heritage objects is widely used nowadays. However, the most advanced 3D scanners on the market that capture topology and texture at the same time, and are specifically made for this purpose, don’t deliver the accuracy that is needed for scientific research. In the last three years, we have developed a method that combines metrologic-grade laser scans, which allow us to work with a high-accuracy topology up to 15 times more precise, with a texture obtained from high-definition photogrammetry with up to 100 times higher pixel concentration. The result is an accurate digitalization that promotes heritage preservation, scientific study, high-detail reproduction, and digital restoration, among others. In Chile, we have already performed 478 digitalizations of high-value heritage pieces and compared the results with up to five different digitalization methods; the results obtained show considerably better dimensional accuracy and texture resolution. We know the importance of high precision and resolution for academics and museology; that’s why our proposal is to set a worldwide standard using this open-source methodology.
Keywords: 3D digitalization, digital heritage, heritage preservation, digital restoration, heritage reproduction
Procedia PDF Downloads 188
4073 Modern Scotland Yard: Improving Surveillance Policies Using Adversarial Agent-Based Modelling and Reinforcement Learning
Authors: Olaf Visker, Arnout De Vries, Lambert Schomaker
Abstract:
Predictive policing refers to the usage of analytical techniques to identify potential criminal activity. It has been widely implemented by various police departments. Being a relatively new area of research, there are, to the authors’ knowledge, no absolute tried-and-true methods, and existing approaches still exhibit a variety of potential problems. One of those problems is closely related to the lack of understanding of how acting on these predictions influences crime itself. The goal of law enforcement is ultimately crime reduction. As such, a policy needs to be established that best facilitates this goal. This research aims to find such a policy by using adversarial agent-based modelling in combination with modern reinforcement learning techniques. We present baseline models for both law enforcement and criminal agents and compare their performance to their respective reinforcement-learning models. The experiments show that our smart law enforcement model is capable of reducing crime by making more deliberate choices regarding the locations of potential criminal activity. Furthermore, it is shown that the smart criminal model presents behavior consistent with popular crime theories and outperforms the baseline model in terms of crimes committed and time to capture. It does, however, still suffer from the difficulties of capturing long-term rewards and learning how to handle multiple opposing goals.
Keywords: adversarial, agent based modelling, predictive policing, reinforcement learning
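As a generic, hedged illustration of the reinforcement-learning side of such a setup (not the authors' adversarial agent-based model), the sketch below shows tabular, bandit-style Q-learning for a single patrol agent choosing among a handful of patrol locations; the locations, crime probabilities and rewards are invented for demonstration.

```python
# Toy Q-learning sketch for a patrol-allocation agent.
# The environment, rewards and crime probabilities are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n_locations = 5                                     # assumed number of patrol locations
crime_prob = np.array([0.1, 0.4, 0.2, 0.6, 0.3])    # hypothetical crime rates per location

q = np.zeros(n_locations)        # stateless Q-values, one per location
alpha, epsilon = 0.1, 0.1        # learning rate and exploration rate

for step in range(10_000):
    # epsilon-greedy choice of where to patrol
    action = rng.integers(n_locations) if rng.random() < epsilon else int(np.argmax(q))
    # reward = 1 if a crime would have occurred at the patrolled location (it is deterred)
    reward = float(rng.random() < crime_prob[action])
    q[action] += alpha * (reward - q[action])

print("Learned patrol preferences (Q-values):", np.round(q, 2))
```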
Procedia PDF Downloads 148
4072 Use of a New Multiplex Quantitative Polymerase Chain Reaction Based Assay for Simultaneous Detection of Neisseria Meningitidis, Escherichia Coli K1, Streptococcus agalactiae, and Streptococcus pneumoniae
Authors: Nastaran Hemmati, Farhad Nikkhahi, Amir Javadi, Sahar Eskandarion, Seyed Mahmuod Amin Marashi
Abstract:
Neisseria meningitidis, Escherichia coli K1, Streptococcus agalactiae, and Streptococcus pneumoniae cause 90% of bacterial meningitis cases. Almost all infected people die or have irreversible neurological complications. Therefore, it is essential to have a diagnostic kit with the ability to quickly detect these fatal infections. The project involved 212 patients from whom cerebrospinal fluid samples were obtained. After total genome extraction and performing multiplex quantitative polymerase chain reaction (qPCR), the presence or absence of each infectious agent was determined by comparison with standard strains. The specificity, sensitivity, positive predictive value, and negative predictive value calculated were 100%, 92.9%, 50%, and 100%, respectively. So, due to the high specificity and sensitivity of the designed primers, they can be used instead of bacterial culture, which takes at least 24 to 48 hours. The remarkable benefit of this method is the speed (up to 3 hours) at which the procedure can be completed. It is also worth noting that this method can reduce unintentional personnel errors which may occur in the laboratory. On the other hand, as this method simultaneously identifies four common agents that cause bacterial meningitis, it could be used as an auxiliary diagnostic technique in laboratories, particularly in cases of emergency medicine.
Keywords: cerebrospinal fluid, meningitis, quantitative polymerase chain reaction, simultaneous detection, diagnosis testing
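The reported performance figures come from a standard 2×2 comparison against a reference standard; a minimal sketch of how sensitivity, specificity, PPV and NPV follow from such a table is shown below. The counts used are placeholders, not the study's data.

```python
# Minimal sketch: diagnostic performance from a 2x2 table (placeholder counts).
def diagnostic_metrics(tp, fp, fn, tn):
    sensitivity = tp / (tp + fn)                              # true positive rate
    specificity = tn / (tn + fp)                              # true negative rate
    ppv = tp / (tp + fp) if (tp + fp) else float("nan")       # positive predictive value
    npv = tn / (tn + fn) if (tn + fn) else float("nan")       # negative predictive value
    return sensitivity, specificity, ppv, npv

# hypothetical counts for illustration only
sens, spec, ppv, npv = diagnostic_metrics(tp=13, fp=1, fn=1, tn=197)
print(f"Sensitivity={sens:.1%}  Specificity={spec:.1%}  PPV={ppv:.1%}  NPV={npv:.1%}")
```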
Procedia PDF Downloads 116
4071 Classification of Echo Signals Based on Deep Learning
Authors: Aisulu Tileukulova, Zhexebay Dauren
Abstract:
Radar plays an important role because it is widely used in civil and military fields. Target detection is one of the most important radar applications. The accuracy of detecting inconspicuous aerial objects in radar facilities is low against a background of noise. Convolutional neural networks can be used to improve the recognition of this type of aerial object. The purpose of this work is to develop an algorithm for recognizing aerial objects using convolutional neural networks, as well as to train the neural network. In this paper, the structure of the convolutional neural network (CNN) consists of different types of layers: 8 convolutional layers and 3 fully connected perceptron layers. ReLU is used as the activation function in the convolutional layers, while the last layer uses softmax. It is necessary to form a data set for training the neural network in order to detect a target. We built a confusion matrix for the CNN model to measure its effectiveness. The results showed that the accuracy when testing the model was 95.7%. Classification of echo signals using the CNN shows high accuracy and significantly speeds up the process of predicting the target.
Keywords: radar, neural network, convolutional neural network, echo signals
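A minimal Keras sketch of the layer layout described above (8 convolutional layers with ReLU followed by 3 fully connected layers with a softmax output) is given below; the input length, filter counts and number of classes are assumptions, not values from the paper.

```python
# Illustrative sketch of the described CNN topology for 1-D echo signals.
# Input length, filter widths, pooling placement and class count are assumed.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_echo_cnn(input_len=1024, n_classes=2):
    model = models.Sequential()
    model.add(tf.keras.Input(shape=(input_len, 1)))
    filters = [16, 16, 32, 32, 64, 64, 128, 128]          # 8 convolutional layers
    for i, f in enumerate(filters):
        model.add(layers.Conv1D(f, kernel_size=5, padding="same", activation="relu"))
        if i % 2 == 1:
            model.add(layers.MaxPooling1D(pool_size=2))
    model.add(layers.Flatten())
    model.add(layers.Dense(256, activation="relu"))        # fully connected perceptron
    model.add(layers.Dense(64, activation="relu"))
    model.add(layers.Dense(n_classes, activation="softmax"))
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_echo_cnn()
model.summary()
```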
Procedia PDF Downloads 353
4070 Species Distribution Modelling for Assessing the Effect of Land Use Changes on the Habitat of Endangered Proboscis Monkey (Nasalis larvatus) in Kalimantan, Indonesia
Authors: Wardatutthoyyibah, Satyawan Pudyatmoko, Sena Adi Subrata, Muhammad Ali Imron
Abstract:
The proboscis monkey is a species endemic to the island of Borneo with the IUCN (International Union for Conservation of Nature) conservation status of endangered. The monkey has a specific habitat and is sensitive to habitat disturbances. As a consequence of increasing rates of land-use change in the last four decades, its population was reported to have decreased significantly. We quantified the effect of land use change on the proboscis monkey’s habitat through the species distribution modeling (SDM) approach with Maxent software. We collected presence data and environmental variables, i.e., land cover, topography, bioclimate, distance to the river, distance to the road, and distance to anthropogenic disturbance, to generate predictive distribution maps of the monkeys. We compared the prediction maps for 2000 and 2015 data to represent the current habitat of the monkey. We overlaid the monkey’s predictive distribution map with the existing protected areas to investigate whether the habitat of the monkey is protected under the protected area networks. The results showed that almost 50% of the monkey’s habitat has been lost as an effect of land use change, and only 9% of the current proboscis monkey’s habitat lies within protected areas. These results are important for the conservation master plan of the endangered proboscis monkey and provide scientific guidance for future development incorporating biodiversity issues.
Keywords: endemic species, land use change, maximum entropy, spatial distribution
Procedia PDF Downloads 156
4069 High Accuracy Analytic Approximations for Modified Bessel Functions I₀(x)
Authors: Pablo Martin, Jorge Olivares, Fernando Maass
Abstract:
A method to obtain analytic approximations for special functions of interest in engineering and physics is described here. Each approximate function will be valid for every positive value of the variable, and accuracy will be high and increasing with the number of parameters to determine. The general technique will be shown through an application to the modified Bessel function of order zero, I₀(x). The form and the calculation of the parameters are performed with the simultaneous use of the power series and the asymptotic expansion. As in the Padé method, rational functions are used, but now they are combined with other elementary functions such as fractional powers, hyperbolic, trigonometric and exponential functions, and others. The elementary function is determined by considering that the approximate function should be a bridge between the power series and the asymptotic expansion. In the case of the I₀(x) function, two analytic approximations have already been determined. The simplest one is (1+x²/4)⁻¹/⁴(1+0.24273x²)cosh(x)/(1+0.43023x²). The parameters of I₀(x) were determined using the leading term of the asymptotic expansion and two coefficients of the power series, and the maximum relative error is 0.05. In a second case, two terms of the asymptotic expansion were used and 4 of the power series, and the maximum relative error is 0.001 at x≈9.5. Approximations with much higher accuracy will also be shown. In conclusion, a new technique is described to obtain analytic approximations to some functions of interest in the sciences, such that they have high accuracy, they are valid for every positive value of the variable, they can be integrated and differentiated like usual functions, and furthermore they can be calculated easily even with a regular pocket calculator.
Keywords: analytic approximations, mathematical-physics applications, quasi-rational functions, special functions
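The quoted two-parameter approximation can be checked numerically; the short sketch below evaluates it against SciPy's modified Bessel function and prints the maximum relative error over a grid of positive arguments (the grid itself is an arbitrary choice, not part of the paper).

```python
# Numerical check of the quoted approximation to I0(x) against scipy.special.i0;
# the evaluation grid (0, 20] is an arbitrary assumption.
import numpy as np
from scipy.special import i0

def i0_approx(x):
    # (1 + x^2/4)^(-1/4) * (1 + 0.24273 x^2) * cosh(x) / (1 + 0.43023 x^2)
    return ((1.0 + x**2 / 4.0) ** (-0.25)
            * (1.0 + 0.24273 * x**2) * np.cosh(x)
            / (1.0 + 0.43023 * x**2))

x = np.linspace(1e-6, 20.0, 20001)
rel_err = np.abs(i0_approx(x) - i0(x)) / i0(x)
print(f"max relative error on (0, 20]: {rel_err.max():.4f} at x = {x[np.argmax(rel_err)]:.2f}")
```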
Procedia PDF Downloads 251
4068 Malaria Parasite Detection Using Deep Learning Methods
Authors: Kaustubh Chakradeo, Michael Delves, Sofya Titarenko
Abstract:
Malaria is a serious disease which affects hundreds of millions of people around the world, each year. If not treated in time, it can be fatal. Despite recent developments in malaria diagnostics, the microscopy method to detect malaria remains the most common. Unfortunately, the accuracy of microscopic diagnostics is dependent on the skill of the microscopist and limits the throughput of malaria diagnosis. With the development of Artificial Intelligence tools and Deep Learning techniques in particular, it is possible to lower the cost, while achieving an overall higher accuracy. In this paper, we present a VGG-based model and compare it with previously developed models for identifying infected cells. Our model surpasses most previously developed models in a range of the accuracy metrics. The model has an advantage of being constructed from a relatively small number of layers. This reduces the computer resources and computational time. Moreover, we test our model on two types of datasets and argue that the currently developed deep-learning-based methods cannot efficiently distinguish between infected and contaminated cells. A more precise study of suspicious regions is required.
Keywords: convolution neural network, deep learning, malaria, thin blood smears
Procedia PDF Downloads 130
4067 Influence of Scalable Energy-Related Sensor Parameters on Acoustic Localization Accuracy in Wireless Sensor Swarms
Authors: Joyraj Chakraborty, Geoffrey Ottoy, Jean-Pierre Goemaere, Lieven De Strycker
Abstract:
Sensor swarms can be a cost-effective and more user-friendly alternative for location-based service systems in different applications like health care. To increase the lifetime of such swarm networks, the energy consumption should be scaled to the required localization accuracy. In this paper we have investigated some parameters for an energy model that couples localization accuracy to energy-related sensor parameters such as signal length, bandwidth and sample frequency. The goal is to use the model for the localization of undetermined environmental sounds by means of wireless acoustic sensors. We first give an overview of TDOA-based localization together with the primary sources of TDOA error (including reverberation effects and noise). Then we show that in localization, the signal sample rate can be under the Nyquist frequency, provided that enough frequency components remain present in the undersampled signal. The resulting localization error is comparable with that of similar localization systems.
Keywords: sensor swarms, localization, wireless sensor swarms, scalable energy
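As a minimal illustration of the TDOA estimation that underlies such acoustic localization (not the paper's energy model), the sketch below estimates the time difference of arrival between two sensor signals by cross-correlation; the source signal, sample rate and delay are synthetic assumptions.

```python
# Toy TDOA estimate between two acoustic sensors via cross-correlation.
# The source signal, sample rate and true delay are synthetic assumptions.
import numpy as np

fs = 8000                                   # assumed sample rate [Hz]
t = np.arange(0, 0.5, 1 / fs)
rng = np.random.default_rng(1)
source = rng.standard_normal(t.size)        # broadband "environmental" sound

true_delay = 25                             # samples (about 3.1 ms)
mic1 = source + 0.05 * rng.standard_normal(t.size)
mic2 = np.roll(source, true_delay) + 0.05 * rng.standard_normal(t.size)

corr = np.correlate(mic2, mic1, mode="full")
lag = np.argmax(corr) - (len(mic1) - 1)     # lag in samples at the correlation peak
print(f"estimated TDOA: {lag / fs * 1e3:.2f} ms (true {true_delay / fs * 1e3:.2f} ms)")
```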
Procedia PDF Downloads 422
4066 Sentiment Analysis of Ensemble-Based Classifiers for E-Mail Data
Authors: Muthukumarasamy Govindarajan
Abstract:
Detection of unwanted, unsolicited mails called spam from email is an interesting area of research. It is necessary to evaluate the performance of any new spam classifier using standard data sets. Recently, ensemble-based classifiers have gained popularity in this domain. In this research work, an efficient email filtering approach based on ensemble methods is addressed for developing an accurate and sensitive spam classifier. The proposed approach employs Naive Bayes (NB), Support Vector Machine (SVM) and Genetic Algorithm (GA) as base classifiers along with different ensemble methods. The experimental results show that the ensemble classifiers performed with greater accuracy than the individual classifiers, and the hybrid model results are found to be better than the combined models for the e-mail dataset. The proposed ensemble-based classifiers turn out to be good in terms of classification accuracy, which is considered to be an important criterion for building a robust spam classifier.
Keywords: accuracy, arcing, bagging, genetic algorithm, Naive Bayes, sentiment mining, support vector machine
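A hedged sketch of an ensemble e-mail classifier in scikit-learn follows: a simple voting ensemble over Naive Bayes and SVM base learners on TF-IDF features. The tiny corpus is a placeholder, and the GA-based learner and the exact ensemble schemes of the study are not reproduced.

```python
# Illustrative ensemble spam classifier; the four-message corpus is placeholder data,
# and the genetic-algorithm base learner used in the study is not included here.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC
from sklearn.ensemble import VotingClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

emails = ["win a free prize now", "meeting agenda attached",
          "cheap meds online", "project status update"]      # placeholder corpus
labels = [1, 0, 1, 0]                                         # 1 = spam, 0 = ham

ensemble = make_pipeline(
    TfidfVectorizer(),
    VotingClassifier(estimators=[("nb", MultinomialNB()),
                                 ("svm", LinearSVC())],
                     voting="hard"),
)

scores = cross_val_score(ensemble, emails, labels, cv=2, scoring="accuracy")
print("cross-validated accuracy:", scores.mean())
```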
Procedia PDF Downloads 142
4065 Epileptic Seizure Prediction by Exploiting Signal Transitions Phenomena
Authors: Mohammad Zavid Parvez, Manoranjan Paul
Abstract:
A seizure prediction method is proposed by extracting global features using phase correlation between adjacent epochs for detecting relative changes and local features using fluctuation/deviation within an epoch for determining fine changes of different EEG signals. A classifier and a regularization technique are applied for the reduction of false alarms and improvement of the overall prediction accuracy. The experiments show that the proposed method outperforms the state-of-the-art methods and provides high prediction accuracy (i.e., 97.70%) with a low false alarm rate using EEG signals in different brain locations from a benchmark data set.
Keywords: epilepsy, seizure, phase correlation, fluctuation, deviation
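A hedged sketch of the phase-correlation idea between adjacent epochs is shown below: the normalized cross-power spectrum of two signal windows, whose inverse transform peaks at their relative shift. The epoch length and test signals are assumptions, not the benchmark EEG data.

```python
# Minimal phase-correlation sketch between two adjacent signal epochs.
# Synthetic data; the epoch length and shift are arbitrary assumptions.
import numpy as np

rng = np.random.default_rng(2)
epoch_len = 512
base = rng.standard_normal(epoch_len)

epoch1 = base
epoch2 = np.roll(base, 7) + 0.1 * rng.standard_normal(epoch_len)   # shifted, noisy copy

F1, F2 = np.fft.fft(epoch1), np.fft.fft(epoch2)
cross_power = F1 * np.conj(F2)
phase_corr = np.fft.ifft(cross_power / (np.abs(cross_power) + 1e-12)).real

shift = int(np.argmax(phase_corr))
lag = shift if shift < epoch_len // 2 else shift - epoch_len
print("peak of phase correlation at lag:", lag)
```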
Procedia PDF Downloads 467
4064 Accuracy Analysis of the American Society of Anesthesiologists Classification Using ChatGPT
Authors: Jae Ni Jang, Young Uk Kim
Abstract:
Background: Chat Generative Pre-training Transformer-3 (ChatGPT; San Francisco, California, Open Artificial Intelligence) is an artificial intelligence chatbot based on a large language model designed to generate human-like text. As the usage of ChatGPT is increasing among less knowledgeable patients, medical students, and anesthesia and pain medicine residents or trainees, we aimed to evaluate the accuracy of ChatGPT-3 responses to questions about the American Society of Anesthesiologists (ASA) classification based on patients’ underlying diseases and assess the quality of the generated responses. Methods: A total of 47 questions were submitted to ChatGPT using textual prompts. The questions were designed for ChatGPT-3 to provide answers regarding ASA classification in response to common underlying diseases frequently observed in adult patients. In addition, we created 18 questions regarding the ASA classification for pediatric patients and pregnant women. The accuracy of ChatGPT’s responses was evaluated by cross-referencing with Miller’s Anesthesia, Morgan & Mikhail’s Clinical Anesthesiology, and the American Society of Anesthesiologists’ ASA Physical Status Classification System (2020). Results: Out of the 47 questions pertaining to adults, ChatGPT -3 provided correct answers for only 23, resulting in an accuracy rate of 48.9%. Furthermore, the responses provided by ChatGPT-3 regarding children and pregnant women were mostly inaccurate, as indicated by a 28% accuracy rate (5 out of 18). Conclusions: ChatGPT provided correct responses to questions relevant to the daily clinical routine of anesthesiologists in approximately half of the cases, while the remaining responses contained errors. Therefore, caution is advised when using ChatGPT to retrieve anesthesia-related information. Although ChatGPT may not yet be suitable for clinical settings, we anticipate significant improvements in ChatGPT and other large language models in the near future. Regular assessments of ChatGPT's ASA classification accuracy are essential due to the evolving nature of ChatGPT as an artificial intelligence entity. This is especially important because ChatGPT has a clinically unacceptable rate of error and hallucination, particularly in pediatric patients and pregnant women. The methodology established in this study may be used to continue evaluating ChatGPT.Keywords: American Society of Anesthesiologists, artificial intelligence, Chat Generative Pre-training Transformer-3, ChatGPT
Procedia PDF Downloads 47
4063 Precision Pest Management by the Use of Pheromone Traps and Forecasting Module in Mobile App
Authors: Muhammad Saad Aslam
Abstract:
In 2021, our organization launched its proprietary mobile app, the Farm Intelligence platform, an industry-first precision agriculture solution, in Pakistan. It was piloted at 47 locations (spanning around 1,200 hectares of land), addressing growers’ pain points by bringing the benefits of precision agriculture to their doorsteps. This year, we have extended its reach by more than 10 times (nearly 130,000 hectares of land) in almost 600 locations across the country. The project team selected highly infested areas to set up traps, which then enabled the sales team to initiate evidence-based conversations with the grower community about preventive crop protection products, including pesticides and insecticides. Mega farmer meetings, field visits and demonstration plots, coupled with extensive marketing activities, were set up to involve the farmer community. With the help of the app's real-time pest monitoring (using heat maps and infestation prediction through predictive analytics), we have equipped our growers with on-the-spot insights that will help them optimize pesticide applications. Heat maps allow growers to identify infestation hot spots to fine-tune pesticide delivery, while predictive analytics enable preventive application of pesticides before the situation escalates. Ultimately, they empower growers to keep their crops safe for a healthy harvest.
Keywords: precision pest management, precision agriculture, real time pest tracking, pest forecasting
Procedia PDF Downloads 90
4062 Stress Hyperglycemia: A Predictor of Major Adverse Cardiac Events in Non-Diabetic Patients With Acute Heart Failure
Authors: Fahad Raj Khan, Suleman Khan
Abstract:
There is a lack of consensus about the predictive value of raised blood glucose levels in terms of major adverse cardiac events (MACEs) in non-diabetic patients admitted for acute decompensated heart failure. The purpose of this research was to examine the long-term prognosis of acute decompensated heart failure (ADHF) in non-diabetic persons who had increased blood glucose levels, i.e., stress hyperglycemia, at the time of their ADHF hospitalization. The research involved 650 non-diabetic patients. Based on their admission stress hyperglycemia, they were divided into two groups, i.e., with and without stress hyperglycemia (SHGL). The two groups' one-year outcomes for major adverse cardiac events (MACEs) were compared, and key predictors of MACEs were identified. For statistical analysis, the two-tailed Mann-Whitney U test, Fisher's exact test, and binary logistic regression analysis were utilized. SHGL was found in 353 (54.3%) individuals. It was more frequent in men than in women. About 27% of patients with SHGL had previously been admitted for ADHF. Almost 62% were hypertensive, whereas 14% had CKD. MACEs were significantly predicted by SHGL, HTN, prior hospitalization for ADHF, CKD, and cardiogenic shock upon admission. SHGL at the time of ADHF admission, independent of DM status, may be a predictive indicator of MACEs.
Keywords: stress hyperglycemia, acute heart failure, major adverse cardiac events, MACEs
Procedia PDF Downloads 94
4061 Smart Disassembly of Waste Printed Circuit Boards: The Role of IoT and Edge Computing
Authors: Muhammad Mohsin, Fawad Ahmad, Fatima Batool, Muhammad Kaab Zarrar
Abstract:
The integration of the Internet of Things (IoT) and edge computing devices offers a transformative approach to electronic waste management, particularly in the dismantling of printed circuit boards (PCBs). This paper explores how these technologies optimize operational efficiency and improve environmental sustainability by addressing challenges such as data security, interoperability, scalability, and real-time data processing. Proposed solutions include advanced machine learning algorithms for predictive maintenance, robust encryption protocols, and scalable architectures that incorporate edge computing. Case studies from leading e-waste management facilities illustrate benefits such as improved material recovery efficiency, reduced environmental impact, improved worker safety, and optimized resource utilization. The findings highlight the potential of IoT and edge computing to revolutionize e-waste dismantling and make the case for a collaborative approach between policymakers, waste management professionals, and technology developers. This research provides important insights into the use of IoT and edge computing to make significant progress in the sustainable management of electronic waste.
Keywords: internet of things, edge computing, waste PCB disassembly, electronic waste management, data security, interoperability, machine learning, predictive maintenance, sustainable development
Procedia PDF Downloads 30
4060 Optimization of Ultrasound-Assisted Extraction and Microwave-Assisted Acid Digestion for the Determination of Heavy Metals in Tea Samples
Authors: Abu Harera Nadeem, Kingsley Donkor
Abstract:
Tea is a popular beverage due to its flavour, aroma and antioxidant properties, with the most consumed varieties being green and black tea. Antioxidants in tea can lower the risk of Alzheimer’s disease, heart disease and obesity. However, these teas contain heavy metals such as Hg, Cd, or Pb, which can cause autoimmune diseases like Graves' disease. In this study, 11 heavy metals in various commercial green, black, and oolong tea samples were determined using inductively coupled plasma-mass spectrometry (ICP-MS). Two methods of sample preparation were compared for accuracy and precision: microwave-assisted digestion and ultrasonic-assisted extraction. The developed method was further validated by detection limit, precision, and accuracy. Results showed that the proposed method was highly sensitive, with detection limits at parts-per-billion levels. Reasonable method accuracy was obtained by spiked experiments. The findings of this study can be used to delve into the link between tea consumption and disease and to provide information for future studies on metal determination in tea.
Keywords: ICP-MS, green tea, black tea, microwave-assisted acid digestion, ultrasound-assisted extraction
Procedia PDF Downloads 123
4059 Vehicle Detection and Tracking Using Deep Learning Techniques in Surveillance Image
Authors: Abe D. Desta
Abstract:
This study suggests a deep learning-based method for identifying and following moving objects in surveillance video. The proposed method uses a fast regional convolutional neural network (F-RCNN) trained on a substantial dataset of vehicle images to first detect vehicles. A Kalman filter and a data association technique based on the Hungarian algorithm are then used to monitor the observed vehicles over time. In general, F-RCNN algorithms have been shown to be effective in achieving high detection accuracy and robustness; in this research study, for example, the vehicle detection and tracking system was able to achieve an accuracy of 97.4%. The F-RCNN algorithm was also compared to other popular object detection algorithms and was found to outperform them in terms of both detection accuracy and speed. The presented system, which has application potential in actual surveillance systems, shows the usefulness of deep learning approaches in vehicle detection and tracking.
Keywords: artificial intelligence, computer vision, deep learning, fast-regional convolutional neural networks, feature extraction, vehicle tracking
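A hedged sketch of the data-association step described above (assigning new detections to existing tracks with the Hungarian algorithm over an IoU-based cost matrix) follows; the bounding boxes are invented and the Kalman filter prediction step is omitted.

```python
# Minimal detection-to-track association using the Hungarian algorithm on 1 - IoU.
# Bounding boxes are invented; the Kalman filter prediction step is omitted.
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(a, b):
    # boxes as [x1, y1, x2, y2]
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

tracks = np.array([[10, 10, 50, 50], [100, 80, 160, 140]], dtype=float)      # predicted track boxes
detections = np.array([[102, 84, 158, 142], [12, 8, 52, 48]], dtype=float)   # new detections

cost = np.array([[1.0 - iou(t, d) for d in detections] for t in tracks])
row_ind, col_ind = linear_sum_assignment(cost)
for t, d in zip(row_ind, col_ind):
    print(f"track {t} <- detection {d} (IoU = {1.0 - cost[t, d]:.2f})")
```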
Procedia PDF Downloads 126
4058 Preliminary Study of Hand Gesture Classification in Upper-Limb Prosthetics Using Machine Learning with EMG Signals
Authors: Linghui Meng, James Atlas, Deborah Munro
Abstract:
There is an increasing demand for prosthetics capable of mimicking natural limb movements and hand gestures, but precise movement control of prosthetics using only electrode signals continues to be challenging. This study considers the implementation of machine learning as a means of improving accuracy and presents an initial investigation into hand gesture recognition using models based on electromyographic (EMG) signals. EMG signals, which capture muscle activity, are used as inputs to machine learning algorithms to improve prosthetic control accuracy, functionality and adaptivity. Using logistic regression, a machine learning classifier, this study evaluates the accuracy of classifying two hand gestures from the publicly available Ninapro dataset using two time-series feature extraction algorithms: Time Series Feature Extraction (TSFE) and Convolutional Neural Networks (CNNs). Trials were conducted using varying numbers of EMG channels from one to eight to determine the impact of channel quantity on classification accuracy. The results suggest that although both algorithms can successfully distinguish between hand gesture EMG signals, CNNs outperform TSFE in extracting useful information in terms of both accuracy and computational efficiency. In addition, although more channels of EMG signals provide more useful information, they also require more complex and computationally intensive feature extractors and consequently do not perform as well as lower numbers of channels. The findings also underscore the potential of machine learning techniques in developing more effective and adaptive prosthetic control systems.
Keywords: EMG, machine learning, prosthetic control, electromyographic prosthetics, hand gesture classification, CNN, convolutional neural networks, TSFE, time series feature extraction, channel count, logistic regression, Ninapro, classifiers
Procedia PDF Downloads 29
4057 Emotional Awareness and Working Memory as Predictive Factors for the Habitual Use of Cognitive Reappraisal among Adolescents
Authors: Yuri Kitahara
Abstract:
Background: Cognitive reappraisal refers to an emotion regulation strategy in which one changes the interpretation of emotion-eliciting events. Numerous studies show that cognitive reappraisal is associated with mental health and better social functioning. However the examination of the predictive factors of adaptive emotion regulation remains as an issue. The present study examined the factors contributing to the habitual use of cognitive reappraisal, with a focus on emotional awareness and working memory. Methods: Data was collected from 30 junior high school students, using a Japanese version of the Emotion Regulation Questionnaire (ERQ), the Levels of Emotional Awareness Scale for Children (LEAS-C), and N-back task. Results: A positive correlation between emotional awareness and cognitive reappraisal was observed in the high-working-memory group (r = .54, p < .05), whereas no significant relationship was found in the low-working-memory group. In addition, the results of the analysis of variance (ANOVA) showed a significant interaction between emotional awareness and working memory capacity (F(1, 26) = 7.74, p < .05). Subsequent analysis of simple main effects confirmed that high working memory capacity significantly increases the use of cognitive reappraisal for high-emotional-awareness subjects, and significantly decreases the use of cognitive reappraisal for low-emotional-awareness subjects. Discussion: These results indicate that under the condition when one has an adequate ability for simultaneous processing of information, explicit understanding of emotion would contribute to adaptive cognitive emotion regulation. The findings are discussed along with neuroscientific claims.Keywords: cognitive reappraisal, emotional awareness, emotion regulation, working memory
Procedia PDF Downloads 231
4056 Evaluation of the CRISP-DM Business Understanding Step: An Approach for Assessing the Predictive Power of Regression versus Classification for the Quality Prediction of Hydraulic Test Results
Authors: Christian Neunzig, Simon Fahle, Jürgen Schulz, Matthias Möller, Bernd Kuhlenkötter
Abstract:
Digitalisation in production technology is a driver for the application of machine learning methods. Through the application of predictive quality, the great potential for saving necessary quality control can be exploited through the data-based prediction of product quality and states. However, the serial use of machine learning applications is often prevented by various problems. Fluctuations occur in real production data sets, which are reflected in trends and systematic shifts over time. To counteract these problems, data preprocessing includes rule-based data cleaning, the application of dimensionality reduction techniques, and the identification of comparable data subsets to extract stable features. Successful process control of the target variables aims to centre the measured values around a mean and minimise variance. Competitive leaders claim to have mastered their processes. As a result, much of the real data has a relatively low variance. For the training of prediction models, the highest possible generalisability is required, which is at least made more difficult by this data availability. The implementation of a machine learning application can be interpreted as a production process. The CRoss Industry Standard Process for Data Mining (CRISP-DM) is a process model with six phases that describes the life cycle of data science. As in any process, the costs to eliminate errors increase significantly with each advancing process phase. For the quality prediction of hydraulic test steps of directional control valves, the question arises in the initial phase whether a regression or a classification is more suitable. In the context of this work, the initial phase of the CRISP-DM, the business understanding, is critically compared for the use case at Bosch Rexroth with regard to regression and classification. The use of cross-process production data along the value chain of hydraulic valves is a promising approach to predict the quality characteristics of workpieces. Suitable methods for leakage volume flow regression and classification for inspection decision are applied. Impressively, classification is clearly superior to regression and achieves promising accuracies.Keywords: classification, CRISP-DM, machine learning, predictive quality, regression
Procedia PDF Downloads 144
4055 The Impact of Corporate Social Responsibility Information Disclosure on the Accuracy of Analysts' Earnings Forecasts
Authors: Xin-Hua Zhao
Abstract:
In recent years, the number of social responsibility reports disclosed by Chinese corporations has grown rapidly. The economic effects of the growing corporate social responsibility reports have become a hot topic. The article takes listed chemical engineering corporations in China that disclose social responsibility reports as a sample and, based on information asymmetry theory, examines the economic effect generated by corporate social responsibility disclosure with the method of ordinary least squares. The research is conducted from the perspective of analysts’ earnings forecasts and studies the impact of corporate social responsibility information disclosure on improving the accuracy of analysts' earnings forecasts. The results show that there is a statistically significant negative correlation between the corporate social responsibility disclosure index and analysts’ earnings forecast error. The conclusions confirm that enterprises can reduce the asymmetry of social and environmental information by disclosing social responsibility reports, and thus improve the accuracy of analysts’ earnings forecasts. This can promote the effective allocation of resources in the market.
Keywords: analysts' earnings forecasts, corporate social responsibility disclosure, economic effect, information asymmetry
Procedia PDF Downloads 156
4054 A Hierarchical Method for Multi-Class Probabilistic Classification Vector Machines
Authors: P. Byrnes, F. A. DiazDelaO
Abstract:
The Support Vector Machine (SVM) has become widely recognised as one of the leading algorithms in machine learning for both regression and binary classification. It expresses predictions in terms of a linear combination of kernel functions, referred to as support vectors. Despite its popularity amongst practitioners, SVM has some limitations, with the most significant being the generation of point prediction as opposed to predictive distributions. Stemming from this issue, a probabilistic model namely, Probabilistic Classification Vector Machines (PCVM), has been proposed which respects the original functional form of SVM whilst also providing a predictive distribution. As physical system designs become more complex, an increasing number of classification tasks involving industrial applications consist of more than two classes. Consequently, this research proposes a framework which allows for the extension of PCVM to a multi class setting. Additionally, the original PCVM framework relies on the use of type II maximum likelihood to provide estimates for both the kernel hyperparameters and model evidence. In a high dimensional multi class setting, however, this approach has been shown to be ineffective due to bad scaling as the number of classes increases. Accordingly, we propose the application of Markov Chain Monte Carlo (MCMC) based methods to provide a posterior distribution over both parameters and hyperparameters. The proposed framework will be validated against current multi class classifiers through synthetic and real life implementations.Keywords: probabilistic classification vector machines, multi class classification, MCMC, support vector machines
Procedia PDF Downloads 221
4053 Character Development Outcomes: A Predictive Model for Behaviour Analysis in Tertiary Institutions
Authors: Rhoda N. Kayongo
Abstract:
As behavior analysts in education continue to debate on how higher institutions can continue to benefit from their social and academic related programs, higher education is facing challenges in the area of character development. This is manifested in the percentages of college completion rates, teen pregnancies, drug abuse, sexual abuse, suicide, plagiarism, lack of academic integrity, and violence among their students. Attending college is a perceived opportunity to positively influence the actions and behaviors of the next generation of society; thus colleges and universities have to provide opportunities to develop students’ values and behaviors. Prior studies were mainly conducted in private institutions and more so in developed countries. However, with the complexity of the nature of student body currently due to the changing world, a multidimensional approach combining multiple factors that enhance character development outcomes is needed to suit the changing trends. The main purpose of this study was to identify opportunities in colleges and develop a model for predicting character development outcomes. A survey questionnaire composed of 7 scales including in-classroom interaction, out-of-classroom interaction, school climate, personal lifestyle, home environment, and peer influence as independent variables and character development outcomes as the dependent variable was administered to a total of five hundred and one students of 3rd and 4th year level in selected public colleges and universities in the Philippines and Rwanda. Using structural equation modelling, a predictive model explained 57% of the variance in character development outcomes. Findings from the results of the analysis showed that in-classroom interactions have a substantial direct influence on character development outcomes of the students (r = .75, p < .05). In addition, out-of-classroom interaction, school climate, and home environment contributed to students’ character development outcomes but in an indirect way. The study concluded that in the classroom are many opportunities for teachers to teach, model and integrate character development among their students. Thus, suggestions are made to public colleges and universities to deliberately boost and implement experiences that cultivate character within the classroom. These may contribute tremendously to the students' character development outcomes and hence render effective models of behaviour analysis in higher education.Keywords: character development, tertiary institutions, predictive model, behavior analysis
Procedia PDF Downloads 136
4052 The Predictive Power of Successful Scientific Theories: An Explanatory Study on Their Substantive Ontologies through Theoretical Change
Authors: Damian Islas
Abstract:
Debates on realism in science concern two different questions: (I) whether the unobservable entities posited by theories can be known; and (II) whether any knowledge we have of them is objective or not. Question (I) arises from the doubt that since observation is the basis of all our factual knowledge, unobservable entities cannot be known. Question (II) arises from the doubt that since scientific representations are inextricably laden with the subjective, idiosyncratic, and a priori features of human cognition and scientific practice, they cannot convey any reliable information on how their objects are in themselves. A way of understanding scientific realism (SR) is through three lines of inquiry: ontological, semantic, and epistemological. Ontologically, scientific realism asserts the existence of a world independent of human mind. Semantically, scientific realism assumes that theoretical claims about reality show truth values and, thus, should be construed literally. Epistemologically, scientific realism believes that theoretical claims offer us knowledge of the world. Nowadays, the literature on scientific realism has proceeded rather far beyond the realism versus antirealism debate. This stance represents a middle-ground position between the two according to which science can attain justified true beliefs concerning relational facts about the unobservable realm but cannot attain justified true beliefs concerning the intrinsic nature of any objects occupying that realm. That is, the structural content of scientific theories about the unobservable can be known, but facts about the intrinsic nature of the entities that figure as place-holders in those structures cannot be known. There are two possible versions of SR: Epistemological Structural Realism (ESR) and Ontic Structural Realism (OSR). On ESR, an agnostic stance is preserved with respect to the natures of unobservable entities, but the possibility of knowing the relations obtaining between those entities is affirmed. OSR includes the rather striking claim that when it comes to the unobservables theorized about within fundamental physics, relations exist, but objects do not. Focusing on ESR, questions arise concerning its ability to explain the empirical success of a theory. Empirical success certainly involves predictive success, and predictive success implies a theory’s power to make accurate predictions. But a theory’s power to make any predictions at all seems to derive precisely from its core axioms or laws concerning unobservable entities and mechanisms, and not simply the sort of structural relations often expressed in equations. The specific challenge to ESR concerns its ability to explain the explanatory and predictive power of successful theories without appealing to their substantive ontologies, which are often not preserved by their successors. The response to this challenge will depend on the various and subtle different versions of ESR and OSR stances, which show a sort of progression through eliminativist OSR to moderate OSR of gradual increase in the ontological status accorded to objects. Knowing the relations between unobserved entities is methodologically identical to assert that these relations between unobserved entities exist.Keywords: eliminativist ontic structural realism, epistemological structuralism, moderate ontic structural realism, ontic structuralism
Procedia PDF Downloads 118
4051 Predication Model for Leukemia Diseases Based on Data Mining Classification Algorithms with Best Accuracy
Authors: Fahd Sabry Esmail, M. Badr Senousy, Mohamed Ragaie
Abstract:
In recent years, there has been an explosion in the use of technologies that help in discovering diseases. For example, DNA microarrays allow us for the first time to obtain a "global" view of the cell. This has great potential to provide accurate medical diagnosis and to help in finding the right treatment and cure for many diseases. Various classification algorithms can be applied on such microarray datasets to devise methods that can predict the occurrence of leukemia. In this study, we compared the classification accuracy and response time among eleven decision tree methods and six rule classifier methods using five performance criteria. The experimental results show that Random Tree produces better results and also takes the lowest time to build the model among the tree classifiers. Among the classification rule algorithms, the nearest-neighbor-like algorithm (NNge) is the best algorithm due to its high accuracy and the lowest time taken to build the model.
Keywords: data mining, classification techniques, decision tree, classification rule, leukemia diseases, microarray data
Procedia PDF Downloads 320
4050 Clustering the Wheat Seeds Using SOM Artificial Neural Networks
Authors: Salah Ghamari
Abstract:
In this study, the ability of self-organizing map (SOM) artificial neural networks to cluster wheat seed varieties according to their morphological properties was considered. The SOM is a type of unsupervised competitive learning. Experimentally, five morphological features of 300 seeds (including three varieties: gaskozhen, Md and sardari) were obtained using an image processing technique. The results show that the artificial neural network has good performance (90.33% accuracy) in classification of the wheat varieties despite the high similarity between them. The highest classification accuracy (100%) was achieved for sardari.
Keywords: artificial neural networks, clustering, self organizing map, wheat variety
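A hedged sketch of SOM-based clustering with the MiniSom package is given below; the map size, training length and the random stand-in for the five morphological features are assumptions, since the seed measurements themselves are not available here.

```python
# Illustrative SOM clustering of seed-feature vectors with MiniSom.
# The data are random stand-ins for 300 seeds x 5 morphological features.
import numpy as np
from minisom import MiniSom

rng = np.random.default_rng(3)
features = rng.random((300, 5))          # placeholder feature matrix

som = MiniSom(x=3, y=3, input_len=5, sigma=1.0, learning_rate=0.5, random_seed=3)
som.random_weights_init(features)
som.train_random(features, num_iteration=1000)

# map each seed to its best-matching unit (BMU); BMUs act as cluster labels
clusters = [som.winner(v) for v in features]
print("first five BMU assignments:", clusters[:5])
```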
Procedia PDF Downloads 656
4049 Identifying Protein-Coding and Non-Coding Regions in Transcriptomes
Authors: Angela U. Makolo
Abstract:
Protein-coding and Non-coding regions determine the biology of a sequenced transcriptome. Research advances have shown that Non-coding regions are important in disease progression and clinical diagnosis. Existing bioinformatics tools have been targeted towards Protein-coding regions alone. Therefore, there are challenges associated with gaining biological insights from transcriptome sequence data. These tools are also limited to computationally intensive sequence alignment, which is inadequate and less accurate to identify both Protein-coding and Non-coding regions. Alignment-free techniques can overcome the limitation of identifying both regions. Therefore, this study was designed to develop an efficient sequence alignment-free model for identifying both Protein-coding and Non-coding regions in sequenced transcriptomes. Feature grouping and randomization procedures were applied to the input transcriptomes (37,503 data points). Successive iterations were carried out to compute the gradient vector that converged the developed Protein-coding and Non-coding Region Identifier (PNRI) model to the approximate coefficient vector. The logistic regression algorithm was used with a sigmoid activation function. A parameter vector was estimated for every sample in 37,503 data points in a bid to reduce the generalization error and cost. Maximum Likelihood Estimation (MLE) was used for parameter estimation by taking the log-likelihood of six features and combining them into a summation function. Dynamic thresholding was used to classify the Protein-coding and Non-coding regions, and the Receiver Operating Characteristic (ROC) curve was determined. The generalization performance of PNRI was determined in terms of F1 score, accuracy, sensitivity, and specificity. The average generalization performance of PNRI was determined using a benchmark of multi-species organisms. The generalization error for identifying Protein-coding and Non-coding regions decreased from 0.514 to 0.508 and to 0.378, respectively, after three iterations. The cost (difference between the predicted and the actual outcome) also decreased from 1.446 to 0.842 and to 0.718, respectively, for the first, second and third iterations. The iterations terminated at the 390th epoch, having an error of 0.036 and a cost of 0.316. The computed elements of the parameter vector that maximized the objective function were 0.043, 0.519, 0.715, 0.878, 1.157, and 2.575. The PNRI gave an ROC of 0.97, indicating an improved predictive ability. The PNRI identified both Protein-coding and Non-coding regions with an F1 score of 0.970, accuracy (0.969), sensitivity (0.966), and specificity of 0.973. Using 13 non-human multi-species model organisms, the average generalization performance of the traditional method was 74.4%, while that of the developed model was 85.2%, thereby making the developed model better in the identification of Protein-coding and Non-coding regions in transcriptomes. The developed Protein-coding and Non-coding region identifier model efficiently identified the Protein-coding and Non-coding transcriptomic regions. It could be used in genome annotation and in the analysis of transcriptomes.Keywords: sequence alignment-free model, dynamic thresholding classification, input randomization, genome annotation
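A hedged, much-simplified sketch of the fitting loop described above (a sigmoid logistic model over six features, gradient ascent on the log-likelihood, and a dynamically chosen classification threshold from the ROC curve) is given below. The data are random placeholders, the generating weights only loosely echo the coefficients quoted in the abstract, and the exact feature definitions, regularization and stopping rule of the study are not reproduced.

```python
# Simplified sketch of logistic-regression training with a dynamically chosen threshold;
# random placeholder data stand in for the six sequence-derived features.
import numpy as np

rng = np.random.default_rng(4)
n, d = 2000, 6
X = rng.standard_normal((n, d))
true_w = np.array([0.04, 0.52, 0.72, 0.88, 1.16, 2.58])   # synthetic generating weights only
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-(X @ true_w)))).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(d)
lr = 0.1
for epoch in range(400):
    p = sigmoid(X @ w)
    grad = X.T @ (y - p) / n          # gradient of the mean log-likelihood
    w += lr * grad

# dynamic threshold: maximize Youden's J statistic over candidate thresholds
scores = sigmoid(X @ w)
best_t, best_j = 0.5, -1.0
for t in np.linspace(0.01, 0.99, 99):
    pred = scores >= t
    tpr = (pred & (y == 1)).sum() / max((y == 1).sum(), 1)
    fpr = (pred & (y == 0)).sum() / max((y == 0).sum(), 1)
    if tpr - fpr > best_j:
        best_t, best_j = t, tpr - fpr

acc = ((scores >= best_t) == y).mean()
print(f"weights {np.round(w, 2)}, threshold {best_t:.2f}, accuracy {acc:.3f}")
```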
Procedia PDF Downloads 68