Search results for: discriminate accuracy
3711 Prediction of Extreme Precipitation in East Asia Using Complex Network
Authors: Feng Guolin, Gong Zhiqiang
Abstract:
In order to study the spatial structure and dynamical mechanism of extreme precipitation in East Asia, a corresponding climate network is constructed by employing the method of event synchronization. It is found that the area of East Asian summer extreme precipitation can be separated into two regions: one with high area-weighted connectivity receiving heavy precipitation mostly during the active phase of the East Asian Summer Monsoon (EASM), and another one with low area-weighted connectivity receiving heavy precipitation during both the active and the retreat phases of the EASM. In addition, a way to predict extreme precipitation is developed by constructing a directed climate network. The simulation accuracy in East Asia is 58% with a 0-day lead, and the prediction accuracy is 21% with a 1-day lead and 12% on average with an n-day (2≤n≤10) lead. Compared to a normal EASM year, the prediction accuracy is lower in a weak year and higher in a strong year, which is relevant to the differences in correlations and extreme precipitation rates under different EASM situations. Recognizing and identifying these effects helps in understanding and predicting extreme precipitation in East Asia.
Keywords: synchronization, climate network, prediction, rainfall
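The event synchronization measure used to build such climate networks counts near-coincident extreme events at pairs of grid points. A minimal sketch of the symmetric form of this measure (in the spirit of Quiroga et al.'s event synchronization, with a fixed coincidence window) is shown below; the window length and the example event series are illustrative assumptions, not values from the paper.

```python
import numpy as np

def event_synchronization(tx, ty, tau=3):
    """Symmetric event-synchronisation strength Q in [0, 1] between two lists
    of event times (e.g. days exceeding the 95th precipitation percentile).
    tau is the maximum delay (in time steps) for two events to count as
    synchronised (fixed-window variant of the measure)."""
    tx, ty = np.asarray(tx, float), np.asarray(ty, float)

    def c(a, b):  # events in a that follow (or coincide with) events in b
        total = 0.0
        for t in a:
            dt = t - b
            total += np.sum((dt > 0) & (dt <= tau)) + 0.5 * np.sum(dt == 0)
        return total

    return (c(tx, ty) + c(ty, tx)) / np.sqrt(len(tx) * len(ty))

# Illustrative extreme-event days at two grid points
print(event_synchronization([5, 40, 87, 120], [6, 41, 90, 200], tau=2))
```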
Procedia PDF Downloads 447
3710 A Super-Efficiency Model for Evaluating Efficiency in the Presence of Time Lag Effect
Authors: Yanshuang Zhang, Byungho Jeong
Abstract:
In many cases, there is a time lag between the consumption of inputs and the production of outputs. This time lag effect should be considered in evaluating the performance of organizations. Recently, a couple of DEA models were developed for considering the time lag effect in the efficiency evaluation of research activities. The Multi-period Input (MpI) and Multi-period Output (MpO) models are integrated models that calculate simple efficiency considering the time lag effect. However, these models cannot discriminate efficient DMUs because of the nature of the basic DEA model, in which efficiency scores are limited to '1'. That is, efficient DMUs cannot be discriminated because their efficiency scores are the same. Thus, this paper suggests a super-efficiency model for efficiency evaluation under consideration of the time lag effect, based on the MpO model. A case example using a long-term research project is given to compare the suggested model with the MpO model.
Keywords: DEA, super-efficiency, time lag, multi-periods input
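The discrimination problem described here arises because standard DEA caps all efficient DMUs at a score of 1; super-efficiency removes the DMU under evaluation from its own reference set so that efficient units can score above 1 and be ranked. The sketch below illustrates that idea with the classical Andersen-Petersen input-oriented CCR formulation (not the authors' MpO-based model, and with invented toy data), using scipy's linear-programming solver.

```python
import numpy as np
from scipy.optimize import linprog

def super_efficiency(X, Y, k):
    """Input-oriented CCR super-efficiency score for DMU k.
    X: (n, m) input matrix, Y: (n, s) output matrix."""
    n, m = X.shape
    s = Y.shape[1]
    others = [j for j in range(n) if j != k]       # DMU k excluded from reference set
    c = np.r_[1.0, np.zeros(len(others))]          # minimise theta
    A_ub, b_ub = [], []
    for i in range(m):                             # inputs: sum(lam*x_ij) <= theta*x_ik
        A_ub.append(np.r_[-X[k, i], X[others, i]])
        b_ub.append(0.0)
    for r in range(s):                             # outputs: sum(lam*y_rj) >= y_rk
        A_ub.append(np.r_[0.0, -Y[others, r]])
        b_ub.append(-Y[k, r])
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), method="highs")
    return res.x[0]

# Toy example: 4 DMUs, 2 inputs, 1 output (invented numbers)
X = np.array([[2.0, 3.0], [4.0, 2.0], [3.0, 5.0], [6.0, 7.0]])
Y = np.array([[1.0], [1.0], [1.0], [1.0]])
print([round(super_efficiency(X, Y, k), 3) for k in range(len(X))])
# efficient DMUs can score above 1, so ties at exactly 1 are broken
```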
Procedia PDF Downloads 477
3709 Object Trajectory Extraction by Using Mean of Motion Vectors From Compressed Video Bitstream
Authors: Ching-Ting Hsu, Wei-Hua Ho, Yi-Chun Chang
Abstract:
Video object tracking is one of the popular research topics in the computer graphics area. The trajectory can be applied in security, traffic control, and even sports training. The trajectory for sports training can be utilized to analyze the athlete's performance without traditional sensors. There are many relevant works which utilize the mean shift algorithm with background subtraction. These schemes must select a kernel function, which may affect the accuracy and performance. In this paper, we consider the motion information in the pre-coded bitstream. The proposed algorithm extracts the trajectory by composing the motion vectors from the pre-coded bitstream. We gather the motion vectors from the overlap area of the object and calculate the mean of the overlapped motion vectors. We implement and simulate our proposed algorithm in the H.264 video codec. The performance is better than that of relevant works while keeping the accuracy of the object trajectory. The experimental results show that the proposed trajectory extraction can extract the trajectory from the pre-coded bitstream with high accuracy and achieve higher performance than other relevant works.
Keywords: H.264, video bitstream, video object tracking, sports training
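The core update step, averaging the decoded motion vectors of the macroblocks that overlap the current object window and shifting the window by that mean, can be sketched as follows. The 16x16 macroblock grid and the array layout of the motion-vector field are assumptions for illustration; in practice the vectors would be parsed from the H.264 bitstream.

```python
import numpy as np

def update_object_position(mv_field, box, block=16):
    """Shift a tracked bounding box by the mean motion vector of the
    macroblocks it overlaps.

    mv_field : (H_blocks, W_blocks, 2) array of per-macroblock motion
               vectors in pixels (dy, dx), assumed already decoded.
    box      : (x, y, w, h) object window in pixels.
    """
    x, y, w, h = box
    bx0, by0 = x // block, y // block
    bx1, by1 = (x + w - 1) // block + 1, (y + h - 1) // block + 1
    overlap = mv_field[by0:by1, bx0:bx1].reshape(-1, 2)
    mean_dy, mean_dx = overlap.mean(axis=0)
    return (x + mean_dx, y + mean_dy, w, h)

# Illustrative 4x4-macroblock motion field, every block moving 2 px to the right
mv = np.zeros((4, 4, 2)); mv[..., 1] = 2.0
print(update_object_position(mv, (10, 10, 20, 20)))   # box drifts right
```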
Procedia PDF Downloads 429
3708 An Advanced YOLOv8 for Vehicle Detection in Intelligent Traffic Management
Authors: A. Degale Desta, Cheng Jian
Abstract:
Background: Vehicle detection accuracy is critical to intelligent transportation systems and autonomous driving. The state-of-the-art object identification technology YOLOv8 has shown significant gains in efficiency and detection accuracy. This study uses the BDD100K dataset, which is renowned for its extensive and varied annotations, to assess how well YOLOv8 performs in vehicle detection. Objectives: The primary objective of this research is to assess YOLOv8's performance in intelligent transportation system vehicle identification and its ability to accurately identify cars in urban environments for safety prioritization. Methods: The primary objective of this research is to assess YOLOv8's performance in intelligent transportation system vehicle identification and its ability to accurately identify cars in urban environments for safety prioritization. Results: The results show that YOLOv8 achieves high mAP, recall, precision, and F1-score values, indicating state-of-the-art performance. This suggests that YOLOv8 can identify cars in complex urban environments with a high degree of accuracy and reliable results in a variety of traffic scenarios. Conclusion: The results indicate that YOLOv8 is a useful tool for enhancing vehicle detection accuracy in intelligent transportation systems, hence advancing urban public safety and security. The model's demonstrated performance shows how well it may be incorporated into autonomous driving applications to improve situational awareness and responsiveness.
Keywords: vehicle detection, YOLOv8, BDD100K, object detection, deep learning
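A minimal inference sketch with the Ultralytics YOLOv8 API is given below to illustrate how such a detector would be run on BDD100K-style street images; the model size, confidence threshold, and image path are illustrative assumptions, and the paper's own fine-tuning configuration is not reproduced here.

```python
from ultralytics import YOLO   # pip install ultralytics

# Load a pretrained YOLOv8 checkpoint (nano variant chosen here for speed).
model = YOLO("yolov8n.pt")

# Run detection on a street-scene image; the 0.25 threshold is an assumption.
results = model.predict("street_scene.jpg", conf=0.25)

for r in results:
    for box in r.boxes:
        cls_name = model.names[int(box.cls)]
        if cls_name in ("car", "truck", "bus", "motorcycle"):
            print(cls_name, float(box.conf), box.xyxy.tolist())
```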
Procedia PDF Downloads 15
3707 Investigation of the EEG Signal Parameters during Epileptic Seizure Phases in Consequence to the Application of External Healing Therapy on Subjects
Authors: Karan Sharma, Ajay Kumar
Abstract:
Epileptic seizure is a type of disease in which electrical charge in the brain flows abruptly, resulting in abnormal activity by the subject. One percent of the total world population gets epileptic seizure attacks. Due to the abrupt flow of charge, EEG (electroencephalogram) waveforms change, and many spikes and sharp waves appear in the displayed EEG signals. Detection of epileptic seizures by conventional methods is time-consuming, and many methods have evolved that detect them automatically. The initial part of this paper provides a review of techniques used to detect epileptic seizures automatically. The automatic detection is based on feature extraction and classification patterns. For better accuracy, decomposition of the signal is required before feature extraction. A number of parameters are calculated by researchers using different techniques, e.g. approximate entropy, sample entropy, fuzzy approximate entropy, intrinsic mode functions, cross-correlation, etc., to discriminate between a normal signal and an epileptic seizure signal. The main objective of this review paper is to present the variations in the EEG signals at both stages: (i) interictal (recording between the epileptic seizure attacks) and (ii) ictal (recording during the epileptic seizure), using the most appropriate methods of analysis to provide better healthcare diagnosis. This research paper then investigates the effects of a noninvasive healing therapy on the subjects by studying the EEG signals using the latest signal processing techniques. The study has been conducted with Reiki as a healing technique, beneficial for restoring balance in cases of body-mind alterations associated with an epileptic seizure. Reiki is practiced around the world and is recommended for different health services as a treatment approach. Reiki is an energy medicine, specifically a biofield therapy developed in Japan in the early 20th century. It is a system involving the laying on of hands to stimulate the body's natural energetic system. Earlier studies have shown an apparent connection between Reiki and the autonomous nervous system. The Reiki sessions are applied by an experienced therapist. EEG signals are measured at baseline, during the session, and post intervention to bring about effective epileptic seizure control or its elimination altogether.
Keywords: EEG signal, Reiki, time consuming, epileptic seizure
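Of the features listed above, sample entropy is one of the most common discriminators between interictal and ictal EEG segments; a minimal (O(N²), unoptimised) sketch is shown below, with the embedding dimension and tolerance chosen as typical illustrative values rather than values from the paper.

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """Sample entropy of a 1-D signal: -ln(A/B), where B counts template
    matches of length m and A those of length m+1 (self-matches excluded).
    Lower values indicate a more regular (e.g. rhythmic, ictal-like) signal."""
    x = np.asarray(x, float)
    r = r_factor * np.std(x)
    n = len(x)

    def count_matches(length):
        templates = np.array([x[i:i + length] for i in range(n - length)])
        count = 0
        for i in range(len(templates)):
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(d <= r)
        return count

    B, A = count_matches(m), count_matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

rng = np.random.default_rng(0)
noise = rng.standard_normal(500)                     # irregular signal
rhythm = np.sin(np.linspace(0, 40 * np.pi, 500))     # regular rhythmic signal
print(sample_entropy(noise), sample_entropy(rhythm)) # noise gives the larger value
```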
Procedia PDF Downloads 408
3706 Comparative Evaluation of EBT3 Film Dosimetry Using Flatbed Scanner, Densitometer and Spectrophotometer Methods and Its Applications in Radiotherapy
Authors: K. Khaerunnisa, D. Ryangga, S. A. Pawiro
Abstract:
Over the past few decades, film dosimetry has become a tool which is used in various radiotherapy modalities, either for clinical quality assurance (QA) or dose verification. The response of the film to irradiation is usually expressed in optical density (OD) or net optical density (netOD). Since the film's response to radiation is not linear, the use of film as a dosimeter must go through a calibration process. This study aimed to compare the calibration curves obtained with various measurement methods: a flatbed scanner, a point densitometer, and a spectrophotometer. For every response function, a radiochromic film calibration curve is generated from each method, and accuracy, precision, and sensitivity analyses are performed. netOD is obtained by measuring the change in optical density (OD) of the film before and after irradiation: with the flatbed scanner, ImageJ is used to extract the pixel value of the film on the red channel of the three channels (RGB); with the point densitometer, the change in OD before and after irradiation is calculated directly; and with the spectrophotometer, the change in absorbance before and after irradiation is calculated. The results showed that the three calibration methods gave readings with a netOD precision below 3% for an uncertainty value of 1σ (one sigma). The sensitivity of all three methods has the same trend in responding to film readings against radiation but differs in magnitude. The accuracy of the three methods is below 3% for doses above 100 cGy and 200 cGy, but for doses below 100 cGy it was found to be above 3% when using the point densitometer and the spectrophotometer. When all three methods are used for clinical implementation, the results of the study show accuracy and precision below 2% for the scanner and the spectrophotometer, and above 3% for the point densitometer.
Keywords: calibration methods, film dosimetry EBT3, flatbed scanner, densitometer, spectrophotometer
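For the scanner-based workflow described above, the net optical density follows directly from the mean red-channel pixel values of the film before and after irradiation. The sketch below also fits a commonly used empirical power-law calibration curve to illustrative dose/netOD pairs; the functional form and the numbers are assumptions, not the study's data.

```python
import numpy as np
from scipy.optimize import curve_fit

def net_od(pv_before, pv_after):
    """netOD from mean red-channel pixel values of the same film region
    scanned before and after irradiation."""
    return np.log10(pv_before / pv_after)

def calib(netod, a, b, n):
    """Empirical form often used for radiochromic film: dose = a*netOD + b*netOD**n."""
    return a * netod + b * netod ** n

# Illustrative calibration points (dose in cGy vs measured netOD)
doses = np.array([50, 100, 200, 400, 800], float)
netods = np.array([0.04, 0.075, 0.14, 0.25, 0.42])

params, _ = curve_fit(calib, netods, doses, p0=[1000.0, 3000.0, 2.0], maxfev=10000)
print("fitted a, b, n:", params)
print("dose for netOD = 0.20:", calib(0.20, *params), "cGy")
```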
Procedia PDF Downloads 137
3705 Impact of External Temperature on the Speleothem Growth in the Moravian Karst
Authors: Frantisek Odvarka
Abstract:
Based on data from the Moravian Karst, the influence of selected meteorological factors on calcite speleothem growth was evaluated. External temperature was determined to be one of the main factors influencing speleothem growth in the Moravian Karst. This factor significantly influences the CO₂ concentration in the soil/epikarst and the cave atmosphere and significantly contributes to changes in the difference in CO₂ partial pressure between the soil/epikarst and the cave atmosphere, which determines the drip water supersaturation with respect to calcite and the quantity of calcite precipitated in the Moravian Karst cave environment. External air temperatures and cave air temperatures were measured using a COMET S3120 data logger, which can measure temperatures in the range from -30 to +80 °C with an accuracy of ±0.4 °C. CO₂ concentrations in the cave and soils were measured with an FT A600 CO₂H Ahlborn probe (value range 0 ppmv to 10,000 ppmv, accuracy 1 ppmv), which was connected to the ALMEMO 2290-4 V5 Ahlborn data logger. The soil temperature was measured with an FHA646E1 Ahlborn probe (temperature range -20 to 70 °C, accuracy ±0.4 °C) connected to an ALMEMO 2290-4 V5 Ahlborn data logger. The airflow velocities into and out of the cave were monitored by an FVA395 TH4 thermo-anemometer (speed range from 0.05 to 2 m s⁻¹, accuracy ±0.04 m s⁻¹), which was connected to the ALMEMO 2590-4 V5 Ahlborn data logger for recording. The flow was measured in the lower and upper entrances of the Imperial Cave. The data were analyzed in MS Office Excel 2019 and PHREEQC.
Keywords: speleothem growth, carbon dioxide partial pressure, Moravian Karst, external temperature
Procedia PDF Downloads 147
3704 Malignancy Assessment of Brain Tumors Using Convolutional Neural Network
Authors: Chung-Ming Lo, Kevin Li-Chun Hsieh
Abstract:
The World Health Organization classifies central nervous system gliomas into grades 2, 3, and 4 according to their aggressiveness. For brain tumors, image examination carries a lower risk than biopsy. Moreover, it is challenging to extract the relevant tissue in a biopsy operation, whereas observing the whole tumor structure and composition can provide a more objective assessment. This study further proposed a computer-aided diagnosis (CAD) system based on a convolutional neural network to quantitatively evaluate a tumor's malignancy from brain magnetic resonance imaging. A total of 30 grade 2, 43 grade 3, and 57 grade 4 gliomas were collected in the experiment. Transferred parameters from AlexNet were fine-tuned to classify the target brain tumors and achieved an accuracy of 98% and an area under the receiver operating characteristic curve (Az) of 0.99. Without pre-trained features, only 61% accuracy was obtained. The proposed convolutional neural network can accurately and efficiently classify grade 2, 3, and 4 gliomas. The promising accuracy can provide diagnostic suggestions to radiologists in the clinic.
Keywords: convolutional neural network, computer-aided diagnosis, glioblastoma, magnetic resonance imaging
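A minimal PyTorch sketch of the transfer-learning setup described above, replacing AlexNet's final layer with a three-class (grade 2/3/4) head and fine-tuning it, is shown below; the learning rate, optimiser, and dummy data pipeline are illustrative assumptions rather than the study's actual configuration, and recent torchvision is assumed.

```python
import torch
import torch.nn as nn
from torchvision import models

# Start from ImageNet-pretrained AlexNet and adapt the classifier head
model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
model.classifier[6] = nn.Linear(4096, 3)   # grade 2 / grade 3 / grade 4

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

# One illustrative training step on a dummy batch of image patches
images = torch.randn(8, 3, 224, 224)       # stand-in for preprocessed MRI slices
labels = torch.randint(0, 3, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print("training loss on dummy batch:", loss.item())
```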
Procedia PDF Downloads 151
3703 A Carrier Phase High Precision Ranging Theory Based on Frequency Hopping
Authors: Jie Xu, Zengshan Tian, Ze Li
Abstract:
Previous indoor ranging or localization systems achieving high-accuracy time of flight (ToF) estimation relied on two key points. One is strict time and frequency synchronization between the transmitter and receiver to eliminate equipment asynchrony errors such as carrier frequency offset (CFO), but this is difficult to achieve in a practical communication system. The other is to extend the total bandwidth of the communication, because the accuracy of ToF estimation is proportional to the bandwidth: the larger the total bandwidth, the higher the accuracy of the ToF estimate obtained. For example, ultra-wideband (UWB) technology is implemented based on this principle, but high-precision ToF estimation is difficult to achieve in common WiFi or Bluetooth systems, whose bandwidth is lower than UWB's. Therefore, it is meaningful to study how to achieve high-precision ranging with lower bandwidth when the transmitter and receiver are asynchronous. To tackle the above problems, we propose a two-way channel error elimination theory and a frequency hopping-based carrier phase ranging algorithm to achieve high-accuracy ranging under asynchronous conditions. The two-way channel error elimination theory uses the symmetry property of the two-way channel to solve the asynchronous phase error caused by the asynchronous transmitter and receiver, and we also study the effect of the two-way channel generation time difference on the phase according to the characteristics of different hardware devices. The frequency hopping-based carrier phase ranging algorithm uses frequency hopping to extend the equivalent bandwidth and incorporates a carrier phase ranging algorithm with multipath resolution to achieve, in the typical 80 MHz bandwidth of commercial WiFi, a ranging accuracy comparable to that of UWB at 400 MHz bandwidth. Finally, to verify the validity of the algorithm, we implement this theory on a software radio platform; the experimental results show that the proposed method has a median ranging error of 5.4 cm at the 5 m range, 7 cm at the 10 m range, and 10.8 cm at the 20 m range for a total bandwidth of 80 MHz.
Keywords: frequency hopping, phase error elimination, carrier phase, ranging
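The underlying principle, that the carrier phase measured at each hop frequency varies linearly with frequency at a slope proportional to the time of flight, can be illustrated with the short sketch below. The hop set, the simulated distance, and the noise level are invented for illustration, and the paper's two-way error-elimination step is not reproduced.

```python
import numpy as np

c = 3e8                                   # speed of light, m/s
freqs = 2.412e9 + 2e6 * np.arange(40)     # illustrative 80 MHz span, 2 MHz hops
true_dist = 12.3                          # metres (one-way, synchronised toy case)
tof = true_dist / c

rng = np.random.default_rng(1)
# Measured carrier phase at each hop: phi = -2*pi*f*tof (wrapped), plus phase noise
phase = np.angle(np.exp(-1j * 2 * np.pi * freqs * tof)
                 * np.exp(1j * rng.normal(0, 0.05, freqs.size)))

# Unwrap across hops and recover the ToF from the phase-vs-frequency slope
slope = np.polyfit(freqs, np.unwrap(phase), 1)[0]
est_dist = -slope / (2 * np.pi) * c
print(f"estimated distance: {est_dist:.2f} m (true {true_dist} m)")
```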
Procedia PDF Downloads 127
3702 Fluorometric Aptasensor: Evaluation of Stability and Comparison to Standard Enzyme-Linked Immunosorbent Assay
Authors: J. Carlos Kuri, Varun Vij, Raymond J. Turner, Orly Yadid-Pecht
Abstract:
Celiac disease (CD) is an immune system disorder that is triggered by ingesting gluten. As a gluten-free (GF) diet has become a concern of many people for health reasons, a gold-standard test had to be nominated, and the enzyme-linked immunosorbent assay (ELISA) has taken on this role. However, multiple limitations have been discovered, and with that, the desire for an alternative method now exists. Nucleic acid-based aptamers have become of great interest due to their selectivity, specificity, simplicity, and rapid-testing advantages. However, fluorescence-based aptasensors have been tagged as unstable, although lifespan details are rarely stated. In this work, the lifespan stability of a fluorescence-based aptasensor is shown over an 8-week-long study displaying the accuracy of the sensor and its false negatives. This study follows 22 different samples, including GF and gluten-rich (GR) products, soy sauce products, off-the-shelf products, and reference material from laboratories, giving a total of 836 tests. The analysis shows an accuracy of correctly classifying GF and GR products of 96.30% and 100%, respectively, when the protocol is augmented with molecular sieves. The overall accuracy remains around 94% within the first four weeks and then decays to 63%.
Keywords: aptasensor, PEG, rGO, FAM, RM, ELISA
Procedia PDF Downloads 128
3701 An Enhanced Support Vector Machine Based Approach for Sentiment Classification of Arabic Tweets of Different Dialects
Authors: Gehad S. Kaseb, Mona F. Ahmed
Abstract:
Arabic Sentiment Analysis (SA) is one of the most common research fields with many open areas. Few studies apply SA to Arabic dialects. This paper proposes different pre-processing steps and a modified methodology to improve the accuracy using standard Support Vector Machine (SVM) classification. The paper works on two datasets, the Arabic Sentiment Tweets Dataset (ASTD) and the Extended Arabic Tweets Sentiment Dataset (Extended-AATSD), which are publicly available for academic use. The results show that the classification accuracy approaches 86%.
Keywords: Arabic, classification, sentiment analysis, tweets
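A minimal scikit-learn pipeline for this kind of tweet-level sentiment classification, TF-IDF features feeding a linear SVM, is sketched below. The toy tweets, the character n-gram settings, and the use of LinearSVC are illustrative assumptions and not the paper's exact pre-processing or configuration.

```python
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

# Tiny illustrative corpus (English placeholders standing in for Arabic tweets)
tweets = ["great service very happy", "terrible experience never again",
          "loved the food", "worst app ever", "amazing support team", "bad quality"]
labels = ["pos", "neg", "pos", "neg", "pos", "neg"]

model = Pipeline([
    # character n-grams are often robust to dialectal spelling variation
    ("tfidf", TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4))),
    ("svm", LinearSVC()),
])
model.fit(tweets, labels)
print(model.predict(["happy with the service", "never again"]))
```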
Procedia PDF Downloads 152
3700 Analysis of the Accuracy of Earth Movement with Drone Surveys
Authors: Raúl Pereda García, Julio Manuel de Luis Ruiz, Elena Castillo López, Rubén Pérez Álvarez, Felipe Piña García
Abstract:
New technologies for the capture of point clouds have experienced great advances in recent years. In this way, their use has been extended in geomatics, providing measurement solutions that have been popularized without, in many cases, a detailed study of their accuracy. This research focuses on the study of the viability of topographic works with drones incorporating different sensors sensitive to the visible spectrum. The fundamentals have been applied to a road, located in Cantabria (Spain), where a platform extension and the refurbishment of a riprap were being constructed. A total of six flights were made during two months, all of them with GPS as part of the photogrammetric process, and the results were contrasted with those measured with a total station. The obtained results show that the choice of the camera and the planning of the flight have an important impact on the accuracy. In fact, representations with a level of detail corresponding to a 1/1000 scale are admissible, depending on the existing vegetation, with better results obtained in the area of the riprap. This set of techniques is, therefore, suitable for the control of earthworks in road works, but with certain limitations which are exposed in this paper.
Keywords: drone, earth movement control, global position system, surveying technology
Procedia PDF Downloads 189
3699 Multi-Class Text Classification Using Ensembles of Classifiers
Authors: Syed Basit Ali Shah Bukhari, Yan Qiang, Saad Abdul Rauf, Syed Saqlaina Bukhari
Abstract:
Text classification is the methodology to classify any given text into the respective category from a given set of categories. It is vital to use a proper set of pre-processing, feature selection, and classification techniques to achieve this purpose. In this paper, we have used different ensemble techniques along with variations in feature selection parameters to see the change in the overall accuracy of the result, and also in some other individual class-based measures, which include the precision value of each individual category of the text. After subjecting our data to pre-processing and feature selection techniques, different individual classifiers were tested first, and after that classifiers were combined to form ensembles to increase their accuracy. Later we also studied the impact of decreasing the number of classification categories on the overall accuracy of the data. Text classification is highly used in sentiment analysis on social media sites such as Twitter for realizing people's opinions about any cause, and it is also used to analyze customers' reviews about certain products or services. Opinion mining is a vital task in data mining, and text categorization is a backbone of opinion mining.
Keywords: natural language processing, ensemble classifier, bagging classifier, AdaBoost
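The ensemble step, wrapping base classifiers in bagging and boosting and comparing them against an individual model, can be sketched with scikit-learn as below. The 20-newsgroups subset (downloaded on first use) and the parameter choices are illustrative stand-ins for the paper's own dataset and settings.

```python
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import BaggingClassifier, AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_score

cats = ["rec.autos", "sci.med", "talk.politics.misc"]   # small multi-class subset
data = fetch_20newsgroups(subset="all", categories=cats, remove=("headers", "footers"))
X = TfidfVectorizer(max_features=5000).fit_transform(data.data)
X_tr, X_te, y_tr, y_te = train_test_split(X, data.target, test_size=0.3, random_state=0)

models = {
    "single tree": DecisionTreeClassifier(random_state=0),
    "bagging": BaggingClassifier(DecisionTreeClassifier(), n_estimators=50, random_state=0),
    "adaboost": AdaBoostClassifier(n_estimators=50, random_state=0),
}
for name, clf in models.items():
    pred = clf.fit(X_tr, y_tr).predict(X_te)
    print(name, "accuracy:", accuracy_score(y_te, pred),
          "per-class precision:", precision_score(y_te, pred, average=None))
```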
Procedia PDF Downloads 238
3698 Measuring How Brightness Mediates Auditory Salience
Authors: Baptiste Bouvier
Abstract:
While we are constantly flooded with stimuli in daily life, attention allows us to select the ones we specifically process and to ignore the others. Some salient stimuli may sometimes pass this filter independently of our will, in a "bottom-up" way. The role of the acoustic properties of a sound's timbre in its salience, i.e., its ability to capture the attention of a listener, is still not well understood. We implemented a paradigm called the "additional singleton paradigm", in which participants have to discriminate targets according to their duration. This task is perturbed (higher error rates and longer response times) by the presence of an irrelevant additional sound, of which we can manipulate a feature of our choice at equal loudness. This allows us to highlight the influence of the timbre features of a sound stimulus on its salience at equal loudness. We have shown that a stimulus that is brighter than the others, but not louder, leads to an attentional capture phenomenon in this framework. This work opens the door to the study of the influence of any timbre feature on salience.
Keywords: attention, audition, bottom-up attention, psychoacoustics, salience, timbre
Procedia PDF Downloads 174
3697 Selecting the Best RBF Neural Network Using PSO Algorithm for ECG Signal Prediction
Authors: Najmeh Mohsenifar, Narjes Mohsenifar, Abbas Kargar
Abstract:
In this paper, a stable method is presented for predicting ECG signals with RBF neural networks selected by the PSO algorithm. Although the ECG signal of a healthy person is quasi-periodic, the electrocardiographic data of a patient contain distortions; therefore, there is no precise mathematical model for prediction. Here, we have exploited neural networks that are capable of complicated nonlinear mapping. Although the architecture and spread of RBF networks are usually selected through trial and error, the PSO algorithm has been used for choosing the best neural network. In this way, 2 seconds of a recorded ECG signal are employed to predict a duration of 20 seconds in advance. Our simulations show that the PSO algorithm can find the RBF neural network with minimum MSE, and the accuracy of the predicted ECG signal is 97%.
Keywords: electrocardiogram, RBF artificial neural network, PSO algorithm, predict, accuracy
Procedia PDF Downloads 629
3696 An Optimal and Efficient Family of Fourth-Order Methods for Nonlinear Equations
Authors: Parshanth Maroju, Ramandeep Behl, Sandile S. Motsa
Abstract:
In this study, we propose a simple and interesting family of fourth-order multi-point methods without memory for obtaining simple roots. This family requires only three functional evaluations per iteration (viz. two evaluations of the function, f(xₙ) and f(yₙ), and one of its first-order derivative, f'(xₙ)). Moreover, the accuracy and validity of the new schemes are tested on a number of numerical examples, which also illustrate their accuracy by comparing them with existing optimal fourth-order methods available in the literature. It is found that they are very useful in high-precision computations. Further, the dynamic study of these methods also supports the theoretical aspect.
Keywords: basins of attraction, nonlinear equations, simple roots, Newton's method
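A classical member of this class of optimal fourth-order schemes with the same evaluation count (f(xₙ), f(yₙ), f'(xₙ)) is Ostrowski's method, sketched below as a point of reference; it is not the specific family proposed in the paper, and the test function is an illustrative choice.

```python
def ostrowski(f, df, x0, tol=1e-14, max_iter=50):
    """Ostrowski's optimal fourth-order method for a simple root of f.
    Uses two function evaluations and one derivative evaluation per step."""
    x = x0
    for _ in range(max_iter):
        fx, dfx = f(x), df(x)
        y = x - fx / dfx                                   # Newton predictor
        fy = f(y)
        x_new = y - fy * fx / (dfx * (fx - 2.0 * fy))      # Ostrowski corrector
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Illustrative test: root of f(x) = x^3 + 4x^2 - 10 near x = 1.365
f = lambda x: x**3 + 4.0 * x**2 - 10.0
df = lambda x: 3.0 * x**2 + 8.0 * x
print(ostrowski(f, df, 1.0))   # converges in very few iterations
```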
Procedia PDF Downloads 314
3695 Quantifying Uncertainties in an Archetype-Based Building Stock Energy Model by Use of Individual Building Models
Authors: Morten Brøgger, Kim Wittchen
Abstract:
Focus on reducing energy consumption in existing buildings at large scale, e.g. in cities or countries, has been increasing in recent years. In order to reduce energy consumption in existing buildings, political incentive schemes are put in place and large-scale investments are made by utility companies. Prioritising these investments requires a comprehensive overview of the energy consumption in the existing building stock, as well as of the potential energy savings. However, a building stock comprises thousands of buildings with different characteristics, making it difficult to model energy consumption accurately. Moreover, the complexity of the building stock makes it difficult to convey model results to policymakers and other stakeholders. In order to manage this complexity, building archetypes are often employed in building stock energy models (BSEMs). Building archetypes are formed by segmenting the building stock according to specific characteristics. Segmenting the building stock according to building type and building age is common, among other things because this information is often easily available. This segmentation makes it easy to convey results to non-experts. However, using a single archetypical building to represent all buildings in a segment of the building stock is associated with a loss of detail: thermal characteristics are aggregated, while other characteristics which could affect the energy efficiency of a building are disregarded. Thus, using a simplified representation of the building stock could come at the expense of the accuracy of the model. The present study evaluates the accuracy of a conventional archetype-based BSEM that segments the building stock according to building type and age. The accuracy is evaluated in terms of the archetypes' ability to accurately emulate the average energy demands of the corresponding buildings they were meant to represent. This is done for the buildings' energy demands as a whole as well as for relevant sub-demands, and both are evaluated in relation to the type and the age of the building. This should provide researchers who use archetypes in BSEMs with an indication of the expected accuracy of the conventional archetype model, as well as of the accuracy lost in specific parts of the calculation due to use of the archetype method.
Keywords: building stock energy modelling, energy-savings, archetype
Procedia PDF Downloads 157
3694 Basketball Game-Related Statistics Discriminating Teams Competing in Basketball Africa League and Euroleague: Comparative Analysis
Authors: Ng'etich K. Stephen
Abstract:
Globally, analytics in basketball has advanced tremendously in the last decade. Organizations are leveraging the insights to improve team and player performance and, in the long run, generate revenue from it. Due to the limited basketball game-related statistics available for African competitions, teams are unaware of how they compare with other continental basketball teams. The purpose of this study is to evaluate the regional differences in basketball game-related statistics between African teams that played in the newly formed Basketball Africa League and teams in the EuroLeague. The Basketball Africa League, a competition created through the partnership between the NBA and FIBA, offers a good starting point since it has valuable basketball metrics to analyze. This study sought to use multivariate linear discriminant analysis to identify the game-related statistics that discriminate the teams in the EuroLeague and the Basketball Africa League.
Keywords: Basketball Africa League, basketball, EuroLeague, FIBA, Africa
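The kind of discriminant analysis described here, finding which game-related statistics best separate teams from the two leagues, can be sketched with scikit-learn's linear discriminant analysis as below; the statistics used and the synthetic numbers are illustrative assumptions, not data from either league.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
stats = ["2P%", "3P%", "def_rebounds", "assists", "turnovers"]

# Synthetic per-team season averages for two leagues (illustrative only)
league_a = rng.normal([0.50, 0.34, 30, 17, 14], [0.03, 0.03, 3, 2, 2], size=(20, 5))
league_b = rng.normal([0.52, 0.37, 32, 19, 12], [0.03, 0.03, 3, 2, 2], size=(20, 5))
X = np.vstack([league_a, league_b])
y = np.array([0] * 20 + [1] * 20)   # 0 = league A, 1 = league B

lda = LinearDiscriminantAnalysis().fit(X, y)
# Standardised discriminant coefficients indicate which statistics separate the leagues
coef = lda.coef_[0] * X.std(axis=0)
for name, w in sorted(zip(stats, coef), key=lambda t: -abs(t[1])):
    print(f"{name:13s} {w:+.2f}")
print("classification accuracy on the same data:", lda.score(X, y))
```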
Procedia PDF Downloads 107
3693 Data Quality on Regular Childhood Immunization Programme at Degehabur District: Somali Region, Ethiopia
Authors: Eyob Seife
Abstract:
Immunization is a life-saving intervention which prevents needless suffering through sickness, disability, and death. Emphasis on data quality and use will become even stronger with the development of the Immunization Agenda 2030 (IA2030). Quality of data is a key factor in generating reliable health information that enables monitoring progress, financial planning, vaccine forecasting capacities, and making decisions for continuous improvement of the national immunization program. However, ensuring data of sufficient quality and promoting an information-use culture at the point of collection remains critical and challenging, especially in hard-to-reach and pastoralist areas. The Degehabur district was selected based on the hypothesis that 'there is no difference in reported and recounted immunization data consistency'. Data quality depends on different factors, among which organizational, behavioral, technical, and contextual factors are the ones usually mentioned. A cross-sectional quantitative study was conducted in September 2022 in the Degehabur district. The study used the World Health Organization (WHO) recommended data quality self-assessment (DQS) tools. Immunization tally sheets, registers, and reporting documents were reviewed at 5 health facilities (2 health centers and 3 health posts) of primary health care units for one fiscal year (12 months) to determine the accuracy ratio. The data were collected by trained DQS assessors to explore the quality of monitoring systems at health posts, health centers, and the district health office. A quality index (QI) was assessed, and accuracy ratios were calculated for the first and third doses of pentavalent vaccines, fully immunized (FI) children, and the first dose of measles-containing vaccine (MCV). In this study, facility-level results showed both over-reporting and under-reporting at health posts when computing the accuracy ratio of the tally sheets to the health post reports found at health centers for almost all antigens verified: pentavalent 1 was 88.3%, 60.4%, and 125.6% for health posts A, B, and C, respectively. For the first dose of measles-containing vaccine (MCV), the accuracy ratio was similarly found to be 126.6%, 42.6%, and 140.9% for health posts A, B, and C, respectively. The accuracy ratio for fully immunized children also showed 0% for health posts A and B and 100% for health post C. A relatively better accuracy ratio was seen at health centers, where the first pentavalent dose was 97.4% and 103.3% for health centers A and B, while the first dose of measles-containing vaccine (MCV) was 89.2% and 100.9% for health centers A and B, respectively. The quality index (QI) of all facilities also showed results between a maximum of 33.33% and a minimum of 0%. Most of the verified immunization data accuracy ratios were found to be relatively better at the health center level. However, the quality of the monitoring system is poor at all levels, in addition to poor data accuracy at all health posts. Attention should therefore be given to improving the capacity of staff and the quality of the monitoring system components, namely recording, reporting, archiving, data analysis, and using information for decisions at all levels, especially in pastoralist areas, in addition to improving data quality at the root and health post level.
Keywords: accuracy ratio, Degehabur District, regular childhood immunization program, quality of monitoring system, Somali Region, Ethiopia
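In DQS terminology the accuracy (verification) ratio compares doses recounted from the source documents with doses reported upward. A short sketch of that calculation, with invented tally and report counts, is given below; the convention assumed here is recounted/reported x 100, so values below 100% suggest over-reporting and values above 100% under-reporting.

```python
def accuracy_ratio(recounted, reported):
    """DQS-style verification ratio: doses recounted from tally sheets
    divided by doses reported upward, expressed as a percentage.
    < 100% suggests over-reporting, > 100% suggests under-reporting."""
    return 100.0 * recounted / reported

# Invented example counts for one antigen at three health posts
facilities = {"health post A": (265, 300),   # (recounted, reported)
              "health post B": (181, 300),
              "health post C": (377, 300)}
for name, (recount, report) in facilities.items():
    print(f"{name}: accuracy ratio = {accuracy_ratio(recount, report):.1f}%")
```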
Procedia PDF Downloads 111
3692 Integrating Optuna and Synthetic Data Generation for Optimized Medical Transcript Classification Using BioBERT
Authors: Sachi Nandan Mohanty, Shreya Sinha, Sweeti Sah, Shweta Sharma
Abstract:
The advancement of natural language processing has majorly influenced the field of medical transcript classification, providing a robust framework for enhancing the accuracy of clinical data processing. It has enormous potential to transform healthcare and improve people's livelihoods. This research focuses on improving the accuracy of medical transcript categorization using Bidirectional Encoder Representations from Transformers (BERT) and its specialized variants, including BioBERT, ClinicalBERT, SciBERT, and BlueBERT. The experimental work employs Optuna, an optimization framework, for hyperparameter tuning to identify the most effective variant, concluding that BioBERT yields the best performance. Furthermore, various optimizers, including Adam, RMSprop, and layerwise adaptive large batch optimization (LAMB), were evaluated alongside BERT's default AdamW optimizer. The findings show that the LAMB optimizer achieves performance equally as good as AdamW's. Synthetic data generation techniques from Gretel were utilized to augment the dataset, expanding it from 5,000 to 10,000 rows. Subsequent evaluations demonstrated that the model maintained its performance with synthetic data, with the LAMB optimizer showing marginally better results. The enhanced dataset and optimized model configurations improved classification accuracy, showcasing the efficacy of the BioBERT variant and the LAMB optimizer, and resulted in an accuracy of up to 98.2% and 90.8% for the original and combined datasets, respectively.
Keywords: BioBERT, clinical data, healthcare AI, transformer models
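The Optuna tuning loop itself is independent of the model being tuned. The sketch below shows the pattern with a small scikit-learn classifier standing in for the BioBERT fine-tuning run, since reproducing the transformer training loop here would be impractical; the search space, trial count, and stand-in model are illustrative assumptions.

```python
import optuna
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)   # stand-in for encoded transcripts

def objective(trial):
    # Search space: for BioBERT these would be learning rate, batch size,
    # warm-up steps, etc.; here they parametrise the stand-in model.
    c = trial.suggest_float("C", 1e-3, 1e2, log=True)
    max_iter = trial.suggest_int("max_iter", 100, 1000)
    clf = LogisticRegression(C=c, max_iter=max_iter)
    return cross_val_score(clf, X, y, cv=3, scoring="accuracy").mean()

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=25)
print("best params:", study.best_params, "best CV accuracy:", study.best_value)
```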
Procedia PDF Downloads 8
3691 Optimizing Machine Vision System Setup Accuracy by Six-Sigma DMAIC Approach
Authors: Joseph C. Chen
Abstract:
Machine vision systems provide automatic inspection that can reduce manufacturing costs considerably. However, only a few principles have been found to optimize a machine vision system and help it function more accurately in industrial practice. Mostly, the available design techniques for improving the accuracy of machine vision systems have been complicated and impractical. This paper discusses implementing the Six Sigma Define, Measure, Analyze, Improve, and Control (DMAIC) approach to optimize the setup parameters of a machine vision system when it is used as a direct measurement technique. This research follows a case study showing how the Six Sigma DMAIC methodology has been put into use.
Keywords: DMAIC, machine vision system, process capability, Taguchi parameter design
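In the Measure and Control phases of such a DMAIC project, the measurement capability is typically summarised with the process capability indices Cp and Cpk; a short sketch of that calculation on invented measurement data is given below.

```python
import numpy as np

def process_capability(samples, lsl, usl):
    """Cp and Cpk for a measured characteristic against its spec limits."""
    mu, sigma = np.mean(samples), np.std(samples, ddof=1)
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)
    return cp, cpk

# Invented vision-system measurements of a 25.00 +/- 0.05 mm feature
rng = np.random.default_rng(2)
measured = rng.normal(25.01, 0.012, size=50)
cp, cpk = process_capability(measured, lsl=24.95, usl=25.05)
print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")   # Cpk >= 1.33 is a common acceptance target
```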
Procedia PDF Downloads 441
3690 Seismic Inversion to Improve the Reservoir Characterization: Case Study in Central Blue Nile Basin, Sudan
Authors: Safwat E. Musa, Nuha E. Mohamed, Nuha A. Bagi
Abstract:
In this study, several crossplots of the P-impedance with the lithology logs (gamma ray, neutron porosity, deep resistivity, water saturation, and Vp/Vs curves) were made in three available wells, which were drilled in the central part of the Blue Nile basin at depths varying from 1460 m to 1600 m. These crossplots were successful in discriminating between sand and shale when using P-impedance values, and between wet sand and pay sand when using both P-impedance and Vp/Vs together. Also, some impedance sections were converted to porosity sections using a linear formula to characterize the reservoir in terms of porosity. The crossplots used were created at log resolution, while the seismic resolution can identify only the reservoir; if 3D seismic angle stacks were available, it would be easier to identify the pay sand with great confidence through high-resolution seismic inversion and a geostatistical approach using P-impedance and Vp/Vs volumes.
Keywords: basin, Blue Nile, inversion, seismic
Procedia PDF Downloads 433
3689 A Calibration Method of Portable Coordinate Measuring Arm Using Bar Gauge with Cone Holes
Authors: Rim Chang Hyon, Song Hak Jin, Song Kwang Hyok, Jong Ki Hun
Abstract:
The calibration of the articulated arm coordinate measuring machine (AACMM) is key to improving measurement accuracy and saving calibration time. To reduce the time consumed for calibration, we should choose proper calibration gauges and develop a reasonable calibration method. In addition, we should obtain the exact optimal solution by accurately removing the gross errors within the experimental data. In this paper, we present a calibration method for the portable coordinate measuring arm (PCMA) using a 1.2 m long bar gauge with cone holes. First, we determine the locations of the bar gauge and establish an optimal objective function for identifying the structural parameter errors. Next, we build a mathematical model of the calibration algorithm and present a new mathematical method to remove the gross errors within the calibration data. Finally, we find the optimal solution to identify the kinematic parameter errors by using the Levenberg-Marquardt algorithm. The experimental results show that our calibration method is very effective in saving calibration time and improving calibration accuracy.
Keywords: AACMM, kinematic model, parameter identification, measurement accuracy, calibration
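The identification step, adjusting the model's kinematic parameters so that probed distances reproduce the known gauge length, follows the standard nonlinear least-squares pattern. The sketch below illustrates it on a deliberately simplified two-link planar arm with invented geometry and a scaled-down gauge, using scipy's Levenberg-Marquardt solver; the real AACMM model has many more parameters, but the structure of the problem is the same.

```python
import numpy as np
from scipy.optimize import least_squares

TRUE_LINKS = np.array([0.612, 0.594])     # "unknown" true link lengths (m)
NOMINAL_LINKS = np.array([0.600, 0.600])  # nominal values to be corrected
GAUGE = 0.3                               # certified hole-to-hole distance (toy value)

def tip(links, q):
    """Tip position of a 2-link planar arm for joint angles q = (q1, q2)."""
    return np.array([links[0] * np.cos(q[0]) + links[1] * np.cos(q[0] + q[1]),
                     links[0] * np.sin(q[0]) + links[1] * np.sin(q[0] + q[1])])

def inverse_kin(links, p):
    """One inverse-kinematics branch of the planar arm reaching point p."""
    l1, l2 = links
    c2 = (p @ p - l1**2 - l2**2) / (2 * l1 * l2)
    q2 = np.arccos(np.clip(c2, -1.0, 1.0))
    q1 = np.arctan2(p[1], p[0]) - np.arctan2(l2 * np.sin(q2), l1 + l2 * np.cos(q2))
    return np.array([q1, q2])

# Simulate probing both cone holes with the *true* arm: record the joint angles
rng = np.random.default_rng(3)
pose_pairs = []
while len(pose_pairs) < 15:
    qa = rng.uniform(-np.pi, np.pi, 2)
    pa = tip(TRUE_LINKS, qa)
    ang = rng.uniform(0, 2 * np.pi)
    pb = pa + GAUGE * np.array([np.cos(ang), np.sin(ang)])
    if 0.05 < np.linalg.norm(pb) < 0.95 * TRUE_LINKS.sum():   # second hole reachable
        pose_pairs.append((qa, inverse_kin(TRUE_LINKS, pb)))

def residuals(links):
    # Probed hole-to-hole distance (from joint angles + model) minus certified length
    return np.array([np.linalg.norm(tip(links, qa) - tip(links, qb)) - GAUGE
                     for qa, qb in pose_pairs])

result = least_squares(residuals, NOMINAL_LINKS, method="lm")
print("identified link lengths:", result.x)   # should approach TRUE_LINKS
```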
Procedia PDF Downloads 85
3688 Giving Gustatory Aesthetics Its Place at the Table
Authors: Brock Decker
Abstract:
Vision and hearing have been given metaphysical, epistemic, moral and aesthetic preference over the gustatory senses since the very beginnings of Western philosophy. This unjustified prejudice has directed philosophical inquiry away from taste and smell and the values and interests of those concerned with them. The metaphysical and epistemic prejudices that have hindered work in this field are confronted by accepting an oblique invitation from David Hume to pursue a gustatory aesthetics of taste. A framework for further discussion of gustatory experience is added by arguing that taste and smell are cognitively configurable senses capable of bifurcated intentionality and that the taste perception of states of affairs is influenced both by culture and personal preference. Taste perceptions are revealed to admit an aesthetic standard. Using both a Humean aesthetic and a Brillat-Savarin-inspired understanding of taste can explain and discriminate between untrained and expert aesthetic taste experiences and contribute a perspective free from traditional prejudice for future work in the aesthetics of taste.
Keywords: aesthetics, Hume, Korsmeyer, taste, Scruton
Procedia PDF Downloads 67
3687 A Survey of Skin Cancer Detection and Classification from Skin Lesion Images Using Deep Learning
Authors: Joseph George, Anne Kotteswara Roa
Abstract:
Skin disease is one of the most common health issues faced by people nowadays. Skin cancer (SC) is one of them, and its detection relies on skin biopsy outputs and the expertise of doctors, but this consumes more time and can give inaccurate results. At an early stage, skin cancer detection is a challenging task; the disease easily spreads to the whole body and leads to an increase in the mortality rate, yet it is curable when detected early. In order to classify skin cancer correctly and accurately, the critical task is skin cancer identification and classification, which is based on disease features such as shape, size, color, symmetry, etc. Many skin diseases share similar characteristics; hence it is a challenging issue to select important features from skin cancer dataset images. The diagnostic accuracy can therefore be improved by an automated skin cancer detection and classification framework, which also addresses the scarcity of human experts. Recently, deep learning techniques like the convolutional neural network (CNN), deep belief neural network (DBN), artificial neural network (ANN), recurrent neural network (RNN), and long short-term memory (LSTM) have been widely used for the identification and classification of skin cancers. This survey reviews different DL techniques for skin cancer identification and classification. Performance metrics such as precision, recall, accuracy, sensitivity, specificity, and F-measure are used to evaluate the effectiveness of SC identification using DL techniques. By using these DL techniques, the classification accuracy increases along with the mitigation of computational complexity and time consumption.
Keywords: skin cancer, deep learning, performance measures, accuracy, datasets
Procedia PDF Downloads 134
3686 Performance Enrichment of Deep Feed Forward Neural Network and Deep Belief Neural Networks for Fault Detection of Automobile Gearbox Using Vibration Signal
Authors: T. Praveenkumar, Kulpreet Singh, Divy Bhanpuriya, M. Saimurugan
Abstract:
This study analysed the classification accuracy for gearbox faults using machine learning techniques. Gearboxes are widely used for mechanical power transmission in rotating machines. Their rotating components, such as bearings, gears, and shafts, tend to wear due to prolonged usage, causing fluctuating vibrations. Increasing the dependability of mechanical components like a gearbox is hampered by their sealed design, which makes visual inspection difficult. One way of detecting impending failure is to detect a change in the vibration signature. The current study applies various machine learning algorithms to these vibration signals to obtain the fault classification accuracy of an automotive 4-speed synchromesh gearbox. Experimental data in the form of vibration signals were acquired from a 4-speed synchromesh gearbox using a data acquisition system (DAQ). Statistical features were extracted from the acquired vibration signal under various operating conditions. The extracted features were then given as input to the algorithms for fault classification. Supervised machine learning algorithms such as Support Vector Machines (SVM) and unsupervised algorithms such as the Deep Feed Forward Neural Network (DFFNN) and Deep Belief Network (DBN) are used for fault classification. A fusion of the DBN and DFFNN classifiers was architected to further enhance the classification accuracy and to reduce the computational complexity. The fault classification accuracy for each algorithm was thoroughly studied, tabulated, and graphically analysed for fused and individual algorithms. In conclusion, the fusion of the DBN and DFFNN algorithms yielded the better classification accuracy and was selected for fault detection due to its faster computational processing and greater efficiency.
Keywords: deep belief networks, DBN, deep feed forward neural network, DFFNN, fault diagnosis, fusion of algorithm, vibration signal
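The statistical feature-extraction step described above typically reduces each vibration record to a handful of time-domain descriptors before classification. A minimal sketch of such a feature vector is shown below, computed on a synthetic signal; the chosen features and the simulated fault signature are illustrative assumptions.

```python
import numpy as np
from scipy import stats

def vibration_features(x):
    """Common time-domain statistical features of one vibration record."""
    rms = np.sqrt(np.mean(x ** 2))
    return {
        "mean": np.mean(x),
        "std": np.std(x),
        "rms": rms,
        "kurtosis": stats.kurtosis(x),          # impulsive faults raise kurtosis
        "skewness": stats.skew(x),
        "crest_factor": np.max(np.abs(x)) / rms,
    }

# Synthetic healthy vs. faulty records: the fault adds sparse periodic impulses
t = np.arange(0, 1, 1 / 10_000)
rng = np.random.default_rng(4)
healthy = np.sin(2 * np.pi * 50 * t) + 0.2 * rng.standard_normal(t.size)
faulty = healthy + 2.0 * (np.sin(2 * np.pi * 7 * t) > 0.995)
print("healthy:", vibration_features(healthy))
print("faulty :", vibration_features(faulty))
```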
Procedia PDF Downloads 122
3685 Explainable Graph Attention Networks
Authors: David Pham, Yongfeng Zhang
Abstract:
Graphs are an important structure for data storage and computation. Recent years have seen the success of deep learning on graphs such as Graph Neural Networks (GNN) on various data mining and machine learning tasks. However, most of the deep learning models on graphs cannot easily explain their predictions and are thus often labelled as "black boxes." For example, Graph Attention Network (GAT) is a frequently used GNN architecture, which adopts an attention mechanism to carefully select the neighborhood nodes for message passing and aggregation. However, it is difficult to explain why certain neighbors are selected while others are not and how the selected neighbors contribute to the final classification result. In this paper, we present a graph learning model called Explainable Graph Attention Network (XGAT), which integrates graph attention modeling and explainability. We use a single model to target both the accuracy and explainability of problem spaces and show that in the context of graph attention modeling, we can design a unified neighborhood selection strategy that selects appropriate neighbor nodes for both better accuracy and enhanced explainability. To justify this, we conduct extensive experiments to better understand the behavior of our model under different conditions and show an increase in both accuracy and explainability.
Keywords: explainable AI, graph attention network, graph neural network, node classification
Procedia PDF Downloads 205
3684 Makhraj Recognition Using Convolutional Neural Network
Authors: Zan Azma Nasruddin, Irwan Mazlin, Nor Aziah Daud, Fauziah Redzuan, Fariza Hanis Abdul Razak
Abstract:
This paper focuses on machine learning to learn the correct pronunciation of Makhraj Huroofs. Usually, people need to find an expert to pronounce the Huroof accurately. In this study, the researchers have developed a system that is able to learn the selected Huroofs, which are ha, tsa, zho, and dza, using a Convolutional Neural Network. The researchers present the chosen CNN architecture to make the system able to learn the data (Huroofs) as quickly as possible and produce high accuracy during prediction. The researchers have run experiments with the system to measure the accuracy and the cross-entropy in the training process.
Keywords: convolutional neural network, Makhraj recognition, speech recognition, signal processing, tensorflow
Procedia PDF Downloads 339
3683 The Impact of the Training Program Provided by the Saudi Archery Federation on the Electromyography of the Bow Arm Muscles
Authors: Hana Aljumayi, Mohammed Issa
Abstract:
The aim of this study was to investigate the effect of the training program for professional athletes at the Saudi Archery Federation on the electrical activity of the muscles involved in pulling the bowstring and on maximum voluntary contraction (MVC) strength, and to identify the relationship between the electrical activity of these muscles and shooting accuracy among female archers. The researcher used a descriptive approach suitable for the nature of the study, and a sample of nine female archers was selected using purposive sampling. An EMG device was used to measure signal amplitude, signal frequency, spectral energy of the signal, and MVC. The results showed statistically significant differences in signal amplitude among muscles, with F(8,1) = 5.91 and a significance level of 0.02. There were also statistically significant differences between muscles in terms of signal frequency, with F(8,1) = 8.23 and a significance level of 0.02. Bonferroni test results indicated statistically significant differences between measurements at a significance level of 0.05, with anterior measurements showing an average difference of 16.4 compared to other measurements. Furthermore, there was a significant negative correlation between signal amplitude in the calf muscle and accuracy in shooting (r = -0.78) at a significance level of 0.02, and a significant positive correlation between signal frequency in the calf muscle and accuracy in shooting (r = 0.72) at a significance level of 0.04. In conclusion, it appears that the training program for archery athletes focused more on skill development than on physical aspects such as muscle activity and strength development. However, it did have a statistically significant effect on signal amplitude, but not on signal frequency or on MVC development in the muscles involved in pulling the bowstring.
Keywords: electrical activity of muscles, archery sport, shooting accuracy, muscles
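The two EMG descriptors compared in the study, signal amplitude and signal frequency, are commonly computed as the root-mean-square of the signal and the mean (or median) frequency of its power spectrum. A short sketch on a synthetic surface-EMG-like signal is given below; the sampling rate, band limits, and signal model are illustrative assumptions.

```python
import numpy as np

def emg_amplitude_and_frequency(x, fs):
    """RMS amplitude and mean power frequency of one EMG epoch."""
    rms = np.sqrt(np.mean(x ** 2))
    spectrum = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1 / fs)
    mean_freq = np.sum(freqs * spectrum) / np.sum(spectrum)
    return rms, mean_freq

# Synthetic EMG-like epoch: band-limited noise roughly in the 20-250 Hz band
fs = 1000
rng = np.random.default_rng(5)
white = rng.standard_normal(2 * fs)
spec = np.fft.rfft(white)
f = np.fft.rfftfreq(white.size, 1 / fs)
spec[(f < 20) | (f > 250)] = 0          # crude band-pass in the frequency domain
emg = np.fft.irfft(spec, n=white.size)

rms, mpf = emg_amplitude_and_frequency(emg, fs)
print(f"RMS amplitude: {rms:.3f}, mean power frequency: {mpf:.1f} Hz")
```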
Procedia PDF Downloads 67
3682 Understanding Health-Related Properties of Grapes by Pharmacokinetic Modelling of Intestinal Absorption
Authors: Sophie N. Selby-Pham, Yudie Wang, Louise Bennett
Abstract:
Consumption of grapes promotes health and reduces the risk of chronic diseases due to the action of grape phytochemicals in the regulation of oxidative stress and inflammation (OSI). The bioefficacy of phytochemicals depends on their absorption in the human body. The time required for phytochemicals to achieve maximal plasma concentration (Tₘₐₓ) after oral intake reflects the time window of maximal bioefficacy, with Tₘₐₓ dependent on the physicochemical properties of the phytochemicals. This research collated the physicochemical properties of phytochemicals from white and red grapes to predict their Tₘₐₓ using pharmacokinetic modelling. The predicted values of Tₘₐₓ were then compared to measured Tₘₐₓ values collected from clinical studies to determine the accuracy of the prediction. In both liquid and solid intake forms, white grapes exhibit a shorter Tₘₐₓ range (0.5-2.5 h) than red grapes (1.5-5 h). The prediction accuracy of Tₘₐₓ for grape phytochemicals corresponded to a total error of prediction of 33.3% relative to the mean, indicating high prediction accuracy. Pharmacokinetic modelling allows prediction of Tₘₐₓ without costly clinical trials, informing dosing frequency for a sustained presence of phytochemicals in the body to optimize their health benefits.
Keywords: absorption kinetics, phytochemical, phytochemical absorption prediction model, Vitis vinifera
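In the simplest one-compartment model with first-order absorption, Tₘₐₓ follows in closed form from the absorption and elimination rate constants. The sketch below shows this relationship with invented rate constants, as a stand-in for the (unspecified) model used in the study.

```python
import numpy as np

def t_max(ka, ke):
    """Time of maximal plasma concentration for a one-compartment model
    with first-order absorption (ka) and elimination (ke), in hours."""
    return np.log(ka / ke) / (ka - ke)

def concentration(t, dose_f_vd, ka, ke):
    """Plasma concentration profile C(t) for the same model."""
    return dose_f_vd * ka / (ka - ke) * (np.exp(-ke * t) - np.exp(-ka * t))

# Invented rate constants for a fast- and a slow-absorbed phytochemical
for name, ka, ke in [("fast absorber", 2.0, 0.3), ("slow absorber", 0.6, 0.3)]:
    tm = t_max(ka, ke)
    peak = concentration(tm, 1.0, ka, ke)
    print(f"{name}: Tmax = {tm:.2f} h, relative peak concentration = {peak:.2f}")
```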
Procedia PDF Downloads 150