Search results for: computational accuracy
4990 Static and Dynamic Hand Gesture Recognition Using Convolutional Neural Network Models
Authors: Keyi Wang
Abstract:
Similar to the touchscreen, hand gesture-based human-computer interaction (HCI) is a technology that could allow people to perform a variety of tasks faster and more conveniently. This paper proposes a training method for an image- and video-based hand gesture recognition system using convolutional neural networks (CNNs). A dataset containing six hand gestures is used to train a 2D CNN model, achieving ~98% accuracy. Furthermore, a 3D CNN model trained on a dataset containing four hand gesture video clips achieves ~83% accuracy. It is demonstrated that a Cozmo robot loaded with the pre-trained models is able to recognize static and dynamic hand gestures. Keywords: deep learning, hand gesture recognition, computer vision, image processing
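The abstract does not specify the network architecture, so the following is only a minimal NumPy sketch (the kernel, image, and function names are illustrative assumptions, not the paper's model) of the sliding-window convolution that a 2D CNN layer applies to a gesture image:

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Single-channel 'valid' 2D convolution (cross-correlation, as used
    in CNN layers): slide the kernel over the image and sum the
    elementwise products in each window."""
    ih, iw = image.shape
    kh, kw = kernel.shape
    oh, ow = ih - kh + 1, iw - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0.0)

# A vertical-edge kernel applied to a toy 'gesture' image whose left
# half is dark (0) and right half bright (1): the response peaks at the
# boundary between the halves.
image = np.zeros((6, 6))
image[:, 3:] = 1.0
kernel = np.array([[-1.0, 0.0, 1.0]] * 3)  # Prewitt-like edge detector
feature_map = relu(conv2d_valid(image, kernel))
```

A trained model stacks many such learned kernels with pooling and dense layers; a 3D CNN extends the same windowed product over the time axis of a video clip.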
Procedia PDF Downloads 139
4989 The Selection of the Nearest Anchor Using Received Signal Strength Indication (RSSI)
Authors: Hichem Sassi, Tawfik Najeh, Noureddine Liouane
Abstract:
Localization information is crucial for the operation of wireless sensor networks (WSNs). There are principally two types of localization algorithms. Range-based localization algorithms have strict hardware requirements and are thus expensive to implement in practice. Range-free localization algorithms reduce the hardware cost but can only achieve high accuracy in ideal scenarios. In this paper, we locate unknown nodes by combining the advantages of these two types of methods. The proposed algorithm makes each unknown node select the nearest anchor using the Received Signal Strength Indicator (RSSI) and then choose the two other anchors that are most accurate for computing the estimated location. Our algorithm improves localization accuracy compared with previous algorithms, as demonstrated by the simulation results. Keywords: WSN, localization, DV-Hop, RSSI
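The propagation model behind the RSSI readings is not given in the abstract; a common assumption is the log-distance path loss model, under which nearest-anchor selection reduces to picking the strongest RSSI. A minimal sketch (the transmit power and path-loss exponent are assumed values):

```python
import math

# Log-distance path loss model: RSSI(d) = RSSI(d0) - 10*n*log10(d/d0).
# Both constants below are assumptions for illustration.
TX_POWER_DBM = -40.0   # measured RSSI at the reference distance d0 = 1 m
PATH_LOSS_EXP = 2.0    # free-space-like path-loss exponent

def rssi_at(distance_m):
    """RSSI predicted by the path loss model at a given range."""
    return TX_POWER_DBM - 10.0 * PATH_LOSS_EXP * math.log10(distance_m)

def nearest_anchor(anchor_rssi):
    """Select the anchor with the strongest (least negative) RSSI,
    which under the model above is the nearest one."""
    return max(anchor_rssi, key=anchor_rssi.get)

def rssi_to_distance(rssi_dbm):
    """Invert the path loss model to estimate range from RSSI."""
    return 10.0 ** ((TX_POWER_DBM - rssi_dbm) / (10.0 * PATH_LOSS_EXP))

# Simulated readings from three anchors placed at 2 m, 5 m and 10 m.
readings = {"A1": rssi_at(2.0), "A2": rssi_at(5.0), "A3": rssi_at(10.0)}
best = nearest_anchor(readings)
```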
Procedia PDF Downloads 361
4988 Evaluation of the Self-Efficacy and Learning Experiences of Final year Students of Computer Science of Southwest Nigerian Universities
Authors: Olabamiji J. Onifade, Peter O. Ajayi, Paul O. Jegede
Abstract:
This study investigated the preparedness of undergraduate final-year students of Computer Science as the next entrants into the workplace. It assessed their self-efficacy in computational tasks and examined the relationship between their self-efficacy and their learning experiences in Southwest Nigerian universities. The study employed a descriptive survey research design. The population of the study comprised all final-year students of Computer Science. A purposive sampling technique was adopted to select a representative sample of interest from these students. The Students’ Computational Task Self-Efficacy Questionnaire (SCTSEQ) was used to collect data. Mean, standard deviation, frequency, percentages, and linear regression were used for data analysis. The results revealed that the final-year students of Computer Science were moderately confident in performing computational tasks and that there is a significant relationship between the students’ learning experiences and their self-efficacy. The study recommends that the curriculum be improved to accommodate industry experts as lecturers in some of the courses, that provision be made for more practical sessions, and that the learning experiences of students be considered an important component of the undergraduate Computer Science curriculum development process. Keywords: computer science, learning experiences, self-efficacy, students
Procedia PDF Downloads 144
4987 Microwave Dielectric Constant Measurements of Titanium Dioxide Using Five Mixture Equations
Authors: Jyh Sheen, Yong-Lin Wang
Abstract:
This research is dedicated to finding a different measurement procedure for the microwave dielectric properties of ceramic materials with high dielectric constants. For a composite of ceramic dispersed in a polymer matrix, the dielectric constants of composites with different concentrations can be obtained by various mixture equations. A further development of the mixture rules is to calculate the permittivity of the ceramic from measurements on the composite. To this end, the analysis method and theoretical accuracy of six basic mixture laws, derived from three basic particle shapes of ceramic fillers, have been reported for ceramic dielectric constants below 40 at microwave frequencies. Similar research has been done for other well-known mixture rules. It has shown that both a good physical curve match with experimental results and a low potential theoretical error are important for improving calculation accuracy. Recently, a modified mixture equation for high-dielectric-constant ceramics at microwave frequencies was also presented for strontium titanate (SrTiO3); it was selected from five well-known mixing rules and showed good accuracy for high-dielectric-constant measurements. However, the accuracy of this modified equation for other high-dielectric-constant materials is still unclear. Therefore, the five well-known mixing rules are selected again to understand their application to other high-dielectric-constant ceramics. Another high-dielectric-constant ceramic, TiO2 with a dielectric constant of 100, was chosen for this research, and the corresponding theoretical error equations are derived. In addition to the theoretical work, experimental measurements are always required. Titanium dioxide is an interesting ceramic for microwave applications. In this research, its powder is adopted as the filler material, with polyethylene powder as the matrix material.
The dielectric constants of ceramic-polyethylene composites with various compositions were measured at 10 GHz. The theoretical curves of the five published mixture equations are shown together with the measured results to assess the curve-matching quality of each rule. Finally, based on the experimental observations and theoretical analysis, one of the five rules was selected and modified into a new powder mixture equation. This modified rule shows very good curve matching with the measurement data and low theoretical error. We can then calculate the dielectric constant of the pure filler medium (titanium dioxide) from the measured dielectric constants of the composites using the mixing equations. The accuracy of estimating the dielectric constant of the pure ceramic by the various mixture rules is compared. The modified mixture rule also shows good measurement accuracy for the dielectric constant of titanium dioxide ceramic. This study can be applied to microwave dielectric property measurements of other high-dielectric-constant ceramic materials in the future. Keywords: microwave measurement, dielectric constant, mixture rules, composites
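The five mixture equations are not named in the abstract. As one representative example, the classical Lichtenecker logarithmic mixture rule can be used in both directions discussed above: forward, to predict the composite permittivity, and inverted, to back-calculate the pure-filler permittivity from a composite measurement (the permittivity and volume-fraction values below are nominal assumptions):

```python
import math

def lichtenecker_composite(eps_filler, eps_matrix, v_filler):
    """Lichtenecker logarithmic mixture rule:
    ln(eps_c) = v_f * ln(eps_f) + (1 - v_f) * ln(eps_m)."""
    return math.exp(v_filler * math.log(eps_filler)
                    + (1.0 - v_filler) * math.log(eps_matrix))

def lichtenecker_filler(eps_composite, eps_matrix, v_filler):
    """Invert the rule to estimate the pure-filler permittivity from a
    measured composite permittivity, i.e. the back-calculation of the
    ceramic dielectric constant from composite measurements."""
    return math.exp((math.log(eps_composite)
                     - (1.0 - v_filler) * math.log(eps_matrix)) / v_filler)

# TiO2 (eps ~ 100) dispersed in polyethylene (eps ~ 2.3) at 20 vol%:
# forward prediction, then recovery of the filler permittivity.
eps_c = lichtenecker_composite(100.0, 2.3, 0.20)
eps_f = lichtenecker_filler(eps_c, 2.3, 0.20)
```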
Procedia PDF Downloads 367
4986 The Influence of Chevron Angle on Plate Heat Exchanger Thermal Performance with Considering Maldistribution
Authors: Hossein Shokouhmand, Majid Hasanpour
Abstract:
A new modification to the Strelow method for modeling chevron-type plate heat exchangers (PHEs) is proposed. The effects of maldistribution are accounted for in the resulting equation. The results of the calculations are validated against reported experimental data, and good accuracy in predicting heat transfer performance is shown. The results indicate that considering flow maldistribution improves the accuracy of predicting the flow and thermal behavior of the plate heat exchanger. Additionally, a wide-ranging parametric study is presented, which brings out the effect of the chevron angle of the PHE on its thermal efficiency when the maldistribution effect is considered. In addition, the thermally optimal corrugation is discussed for chevron-type PHEs. Keywords: chevron angle, plate heat exchangers, maldistribution, Strelow method
Procedia PDF Downloads 190
4985 Numerical Method for Heat Transfer Problem in a Block Having an Interface
Authors: Beghdadi Lotfi, Bouziane Abdelhafid
Abstract:
A finite volume method for quadrilateral unstructured meshes is developed to predict two-dimensional steady-state solutions of the conduction equation. In this scheme, based on integration around the polygonal control volume, the derivatives in the conduction equation are converted into closed line integrals using a formulation of Stokes' theorem. To validate the accuracy of the method, two numerical experiments are used: conduction in a regular block (with a known analytical solution) and conduction in a rotated block (a case with curved boundaries). The numerical results show good agreement with the analytical results. To demonstrate the accuracy of the method, the absolute and root-mean-square errors versus the grid size are examined quantitatively. Keywords: Stokes theorem, unstructured grid, heat transfer, complex geometry
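The conversion of derivatives into closed line integrals can be illustrated with a Green-Gauss gradient over a single polygonal control volume; for a linear field and straight edges, the edge-midpoint rule recovers the gradient exactly. This is a simplified sketch, not the paper's full scheme:

```python
def polygon_area(verts):
    """Signed area by the shoelace formula (counter-clockwise positive)."""
    n = len(verts)
    a = 0.0
    for i in range(n):
        x0, y0 = verts[i]
        x1, y1 = verts[(i + 1) % n]
        a += x0 * y1 - x1 * y0
    return 0.5 * a

def green_gauss_gradient(verts, field):
    """Approximate (dT/dx, dT/dy) over a polygonal control volume via
    closed line integrals (Green's theorem):
        dT/dx ~  (1/A) * closed-integral of T dy,
        dT/dy ~ -(1/A) * closed-integral of T dx,
    with T taken as the edge-midpoint value on each face."""
    area = polygon_area(verts)
    gx = gy = 0.0
    n = len(verts)
    for i in range(n):
        x0, y0 = verts[i]
        x1, y1 = verts[(i + 1) % n]
        t_mid = 0.5 * (field(x0, y0) + field(x1, y1))
        gx += t_mid * (y1 - y0)
        gy -= t_mid * (x1 - x0)
    return gx / area, gy / area

# A rotated (non-axis-aligned) quadrilateral and the linear field
# T = 3x + 2y, for which the midpoint rule is exact.
quad = [(0.0, 0.0), (2.0, 0.5), (2.5, 2.0), (0.5, 1.5)]
gx, gy = green_gauss_gradient(quad, lambda x, y: 3.0 * x + 2.0 * y)
```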
Procedia PDF Downloads 290
4984 A Quick Prediction for Shear Behaviour of RC Membrane Elements by Fixed-Angle Softened Truss Model with Tension-Stiffening
Authors: X. Wang, J. S. Kuang
Abstract:
The Fixed-angle Softened Truss Model with Tension-stiffening (FASTMT) has superior performance in predicting the shear behaviour of reinforced concrete (RC) membrane elements, especially the post-cracking behaviour. Nevertheless, massive computational work is inevitable due to the multiple transcendental equations involved in the stress-strain relationship. In this paper, an iterative root-finding technique is introduced into FASTMT to quickly solve the transcendental equations governing the tension-stiffening effect of RC membrane elements. This fast FASTMT, implemented in MATLAB, uses the bisection method to calculate the tensile stress of the membranes. By adopting this simplification, the elapsed time of each loop is reduced significantly while the transcendental equations are still solved accurately. Owing to its high efficiency and good accuracy compared with the original FASTMT, the fast FASTMT can further be applied to quick prediction of the shear behaviour of complex large-scale RC structures. Keywords: bisection method, FASTMT, iterative root-finding technique, reinforced concrete membrane
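The FASTMT stress-strain relations are not given in the abstract, so a generic transcendental equation stands in for them here; the bisection step itself, however, is the technique named above. A minimal sketch:

```python
import math

def bisect(f, lo, hi, tol=1e-12, max_iter=200):
    """Bisection root-finding: f(lo) and f(hi) must bracket a root.
    Repeatedly halve the interval, keeping the half where the sign of
    f changes, until the interval is shorter than tol."""
    flo = f(lo)
    if flo == 0.0:
        return lo
    if flo * f(hi) > 0.0:
        raise ValueError("root not bracketed")
    for _ in range(max_iter):
        mid = 0.5 * (lo + hi)
        fmid = f(mid)
        if fmid == 0.0 or (hi - lo) < tol:
            return mid
        if flo * fmid < 0.0:
            hi = mid
        else:
            lo, flo = mid, fmid
    return 0.5 * (lo + hi)

# A stand-in transcendental equation (not FASTMT's): solve x = cos(x).
root = bisect(lambda x: x - math.cos(x), 0.0, 1.0)
```

Bisection is robust (it never leaves the bracket) at the cost of only linear convergence, which is the trade-off the abstract's speed claim rests on: each iteration is a single cheap function evaluation.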
Procedia PDF Downloads 272
4983 Data Quality on Regular Immunization Programme at Birkod District: Somali Region, Ethiopia
Authors: Eyob Seife, Tesfalem Teshome, Bereket Seyoum, Behailu Getachew, Yohans Demis
Abstract:
Developing countries continue to face preventable communicable diseases, such as vaccine-preventable diseases. The Expanded Programme on Immunization (EPI) was established by the World Health Organization in 1974 to control these diseases. Health data use is crucial in decision-making, but ensuring data quality remains challenging; technical, contextual, behavioral, and organizational factors all contribute to poor data quality. The study aimed to assess the accuracy ratio, timeliness, and quality index of regular immunization programme data in the Birkod district of the Somali Region, Ethiopia. The study used a quantitative cross-sectional design conducted in September 2022 using WHO-recommended data quality self-assessment tools. The accuracy ratio and timeliness of reports on regular immunization programmes were assessed for two health centers and three health posts in the district over one fiscal year. Moreover, a quality index assessment was conducted at the district level and at the health facilities by trained assessors. The study found poor data quality in the accuracy ratio and timeliness of reports at all health units, including reports containing zeros. Over-reporting was observed for most facilities, particularly at the health post level. Health centers showed a relatively better accuracy ratio than health posts. The quality index assessment revealed poor quality at all levels. The study recommends that responsible bodies at different levels improve data quality using various approaches, such as capacity building of health professionals and strengthening of the quality index components. The study highlights the need for attention to data quality in general, specifically at the health post level, and for improving the quality index at all levels. Keywords: Birkod District, data quality, quality index, regular immunization programme, Somali Region-Ethiopia
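A minimal sketch of the two report-level checks named above, in the spirit of the WHO data quality self-assessment: the accuracy ratio (verification factor) and report timeliness. The figures are hypothetical, not the study's data:

```python
def accuracy_ratio(recounted, reported):
    """Verification-factor style accuracy ratio: doses recounted from
    source documents divided by doses reported upward.
    A value below 1.0 indicates over-reporting, above 1.0
    under-reporting."""
    if reported == 0:
        raise ValueError("no reported doses; ratio undefined")
    return recounted / reported

def timeliness(reports_on_time, reports_expected):
    """Share of expected periodic reports that arrived on time."""
    return reports_on_time / reports_expected

# Hypothetical one-year figures for a single health post (illustrative
# only): 480 doses recounted vs 600 reported, and 9 of 12 monthly
# reports submitted on time.
ratio = accuracy_ratio(480, 600)
on_time = timeliness(9, 12)
over_reporting = ratio < 1.0
```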
Procedia PDF Downloads 90
4982 ELD79-LGD2006 Transformation Techniques Implementation and Accuracy Comparison in Tripoli Area, Libya
Authors: Jamal A. Gledan, Othman A. Azzeidani
Abstract:
During the last decade, Libya established a new geodetic datum, the Libyan Geodetic Datum 2006 (LGD2006), using GPS, whereas the previous Libyan datum, the Europe Libyan Datum 79 (ELD79), had been established by ground traversing. This paper introduces an ELD79-to-LGD2006 coordinate transformation technique and compares the accuracy of transformation by stepwise multiple regression equations with that of the three-parameter (Bursa-Wolf) model. The results obtained show that the overall accuracy of the stepwise multiple regression equations is better than that determined using the Bursa-Wolf transformation model. Keywords: geodetic datum, horizontal control points, traditional similarity transformation model, unconventional transformation techniques
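A minimal sketch of the three-parameter (translation-only) model, i.e. the Bursa-Wolf model with rotations and scale dropped, applied to geocentric Cartesian coordinates. The control-point coordinates and shift below are synthetic, not real ELD79/LGD2006 values:

```python
def three_parameter_shift(xyz, dx, dy, dz):
    """Three-parameter (translation-only) datum transformation of
    geocentric Cartesian coordinates: the Bursa-Wolf model reduced to
    zero rotations and unit scale."""
    x, y, z = xyz
    return (x + dx, y + dy, z + dz)

def estimate_shift(points_old, points_new):
    """Least-squares estimate of (dx, dy, dz) from the same control
    points in both datums; for a pure translation this is simply the
    mean coordinate difference."""
    n = len(points_old)
    dx = sum(b[0] - a[0] for a, b in zip(points_old, points_new)) / n
    dy = sum(b[1] - a[1] for a, b in zip(points_old, points_new)) / n
    dz = sum(b[2] - a[2] for a, b in zip(points_old, points_new)) / n
    return dx, dy, dz

# Synthetic control points: the 'new' datum is the old one shifted by
# a fixed offset, which the estimator should recover exactly.
old_pts = [(5000000.0, 1200000.0, 3400000.0),
           (5000100.0, 1200050.0, 3400200.0)]
true_shift = (-87.0, -98.0, -121.0)
new_pts = [three_parameter_shift(p, *true_shift) for p in old_pts]
est = estimate_shift(old_pts, new_pts)
```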
Procedia PDF Downloads 307
4981 Gaze Behaviour of Individuals with and without Intellectual Disability for Nonaccidental and Metric Shape Properties
Authors: S. Haider, B. Bhushan
Abstract:
The eye gaze behaviour of individuals with and without intellectual disability is investigated in an eye-tracking study in terms of sensitivity to nonaccidental (NAP) and metric (MP) shape properties. Total fixation time is used as an indirect measure of attention allocation. Previous studies have found mean reaction times for nonaccidental properties to be shorter than those for metric properties when the MP and NAP differences were equalized. Methods: Twenty-five individuals with intellectual disability (mild and moderate levels of mental retardation) and twenty-seven normal individuals were compared on mean total fixation duration, accuracy level, and mean reaction time for mild NAP, extreme NAP, and metric properties of images. 2D images of cylinders were adapted and made into forced-choice match-to-sample tasks. A Tobii TX300 eye tracker was used to record total fixation duration, with data obtained from the areas of interest (AOIs). Variable trial duration (each participant's total reaction time) and fixed trial duration (data taken at each second from one to fifteen seconds) were used for the analyses. The two groups did not differ in fixation times (fixed or variable) across any of the three image manipulations, but they differed in reaction time and accuracy. Normal individuals had longer reaction times than individuals with intellectual disability across all image types. The two groups differed significantly on the accuracy measure across all image types, with normal individuals performing better on all three. Mild NAP vs. metric differences: there was a significant difference in reaction time between mild NAP and metric image properties. Mild NAP images had significantly longer reaction times than metric images for normal individuals, but this difference was not found for individuals with intellectual disability. Mild NAP images had significantly better accuracy than metric images for both groups.
In conclusion, the type of image manipulation did not produce differences in attention allocation for individuals with or without intellectual disability. Mild nonaccidental properties facilitate better accuracy than metric properties in both groups, but this advantage is seen only in the normal group in terms of mean reaction time. Keywords: eye gaze fixations, eye movements, intellectual disability, stimulus properties
Procedia PDF Downloads 553
4980 A Computational Model of the Thermal Grill Illusion: Simulating the Perceived Pain Using Neuronal Activity in Pain-Sensitive Nerve Fibers
Authors: Subhankar Karmakar, Madhan Kumar Vasudevan, Manivannan Muniyandi
Abstract:
The thermal grill illusion (TGI) elicits a strong and often painful burning sensation when interlaced warm and cold stimuli, which are individually non-painful, excite thermoreceptors beneath the skin. Among the several theories of the TGI, the “disinhibition” theory is the most widely accepted in the literature. According to this theory, the TGI results from the disinhibition, or unmasking, of the pain-sensitive HPC (heat-pinch-cold) nerve fibers caused by the inhibition of the cold-sensitive nerve fibers that normally mask them. Although researchers have sought to understand the TGI through experiments and models, none have investigated predicting TGI pain intensity with a computational model, and the comparison of psychophysically perceived TGI intensity with neurophysiological models has not yet been studied. Predicting pain intensity through a computational model of the TGI can help in optimizing thermal displays and in understanding pathological conditions related to temperature perception. The current study focuses on developing a computational model to predict the intensity of TGI pain and on experimentally observing the perceived TGI pain. The computational model is developed based on the disinhibition theory and utilizes existing popular models of warm and cold receptors in the skin; it aims to predict the neuronal activity of the HPC nerve fibers. With a temperature-controlled thermal grill setup, fifteen participants (ten males and five females) were presented with five temperature differences between the warm and cold grills (each repeated three times). All participants rated the perceived TGI pain sensation on a scale of one to ten. Over the range of temperature differences, the experimentally observed perceived intensity of the TGI is compared with the neuronal activity of the pain-sensitive HPC nerve fibers.
The simulation results show a monotonically increasing relationship between the temperature differences and the neuronal activity of the HPC nerve fibers. Moreover, a similar monotonically increasing relationship is experimentally observed between the temperature differences and the perceived TGI intensity. This shows the potential for comparing TGI pain intensity observed in the experimental study with the neuronal activity predicted by the model. The proposed model intends to bridge the theoretical understanding of the TGI and the experimental results obtained through psychophysics. Further studies in pain perception are needed to develop a more accurate version of the current model. Keywords: thermal grill illusion, computational modelling, simulation, psychophysics, haptics
Procedia PDF Downloads 171
4979 Predicting Survival in Cancer: How Cox Regression Model Compares to Artificial Neural Networks?
Authors: Dalia Rimawi, Walid Salameh, Amal Al-Omari, Hadeel AbdelKhaleq
Abstract:
Prediction of the survival time of patients with cancer is a core factor that influences oncologists' decisions in different aspects, such as offered treatment plans, patients’ quality of life, and medication development. For a long time, proportional hazards Cox regression (ph. Cox) was, and still is, the most well-known statistical method for predicting survival outcomes. But with the revolution in data science, new prediction models have been employed and have proved to be more flexible and to provide higher accuracy in this type of study. The artificial neural network is one of those models and is suitable for handling time-to-event prediction. In this study, we aim to compare ph. Cox regression with the artificial neural network method with respect to data handling and the accuracy of each model. Keywords: Cox regression, neural networks, survival, cancer
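The core of the ph. Cox model is the proportional-hazards form h(t | x) = h0(t) · exp(β·x), in which the unknown baseline hazard cancels out of any between-patient comparison. A minimal sketch (the coefficients and covariates are assumed for illustration, not fitted to data):

```python
import math

def relative_hazard(beta, x_a, x_b):
    """Hazard ratio between two patients under the Cox model
    h(t | x) = h0(t) * exp(beta . x). The baseline hazard h0(t)
    cancels in the ratio, which is what makes the model
    'proportional hazards'."""
    lp_a = sum(b * x for b, x in zip(beta, x_a))  # linear predictor A
    lp_b = sum(b * x for b, x in zip(beta, x_b))  # linear predictor B
    return math.exp(lp_a - lp_b)

# Assumed (not fitted) coefficients for two covariates, e.g. tumour
# stage and age in decades.
beta = [0.9, 0.2]
patient_a = [3.0, 6.5]   # stage 3, age 65
patient_b = [1.0, 6.5]   # stage 1, same age
hr = relative_hazard(beta, patient_a, patient_b)  # exp(0.9 * 2)
```

A neural-network survival model replaces the linear predictor β·x with a learned nonlinear function of the covariates, which is the added flexibility the abstract refers to.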
Procedia PDF Downloads 201
4978 Slice Bispectrogram Analysis-Based Classification of Environmental Sounds Using Convolutional Neural Network
Authors: Katsumi Hirata
Abstract:
Certain systems can function well only if they recognize the sound environment as humans do. In this research, we focus on sound classification using a convolutional neural network and aim to develop a method that automatically classifies various environmental sounds. Although the neural network is a powerful technique, its performance depends on the type of input data. Therefore, we propose an approach via the slice bispectrogram, which is a third-order spectrogram and a slice version of the amplitude of the short-time bispectrum. This paper explains the slice bispectrogram and discusses the effectiveness of the derived method by evaluating experimental results on the ESC-50 sound dataset. As a result, the proposed scheme gives high accuracy and stability. Furthermore, a relationship between the accuracy and the non-Gaussianity of the sound signals was confirmed. Keywords: environmental sound, bispectrum, spectrogram, slice bispectrogram, convolutional neural network
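A slice of the short-time bispectrum amplitude can be sketched directly from the definition B(f1, f2) = X(f1) · X(f2) · conj(X(f1 + f2)); stacking one such slice per frame yields a slice bispectrogram. Below is a minimal single-frame sketch along the diagonal f1 = f2 (the paper's exact slice and windowing are not specified in the abstract):

```python
import cmath
import math

def dft(frame):
    """Naive DFT, adequate for a short illustration frame."""
    n = len(frame)
    return [sum(frame[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n)) for k in range(n)]

def diagonal_bispectrum_slice(frame):
    """Amplitude of the bispectrum along the diagonal f1 = f2:
        |B(f, f)| = |X(f) * X(f) * conj(X(2f))|.
    Stacking this slice over successive short-time frames (one column
    per frame) gives a slice bispectrogram."""
    x = dft(frame)
    n = len(x)
    return [abs(x[k] * x[k] * x[(2 * k) % n].conjugate())
            for k in range(n // 2)]

# A quadratically phase-coupled test signal: components at bins 2 and
# 4 (4 = 2 + 2), which produces a strong bispectral peak at bin 2.
n = 64
frame = [math.cos(2 * math.pi * 2 * t / n)
         + math.cos(2 * math.pi * 4 * t / n) for t in range(n)]
slice_amp = diagonal_bispectrum_slice(frame)
peak_bin = max(range(len(slice_amp)), key=slice_amp.__getitem__)
```

Unlike the power spectrum, the bispectrum retains phase-coupling information and vanishes for Gaussian signals, which is consistent with the accuracy/non-Gaussianity relationship noted in the abstract.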
Procedia PDF Downloads 126
4977 Generating Synthetic Chest X-ray Images for Improved COVID-19 Detection Using Generative Adversarial Networks
Authors: Muneeb Ullah, Daishihan, Xiadong Young
Abstract:
Deep learning plays a crucial role in identifying COVID-19 and preventing its spread. To improve the accuracy of COVID-19 diagnoses, it is important to have access to a sufficient number of training images of CXRs (chest X-rays) depicting the disease. However, there is currently a shortage of such images. To address this issue, this paper introduces COVID-19 GAN, a model that uses generative adversarial networks (GANs) to generate realistic CXR images of COVID-19, which can be used to train identification models. Initially, a generator model is created that uses digressive channels to generate images of CXR scans for COVID-19. To differentiate between real and fake disease images, an efficient discriminator is developed by combining the dense connectivity strategy and instance normalization. This approach makes use of their feature extraction capabilities on CXR hazy areas. Lastly, the deep regret gradient penalty technique is utilized to ensure stable training of the model. With the use of 4,062 grape leaf disease images, the Leaf GAN model successfully produces 8,124 COVID-19 CXR images. The COVID-19 GAN model produces COVID-19 CXR images that outperform DCGAN and WGAN in terms of the Fréchet inception distance. Experimental findings suggest that the COVID-19 GAN-generated CXR images possess noticeable haziness, offering a promising approach to address the limited training data available for COVID-19 model training. When the dataset was expanded, CNN-based classification models outperformed other models, yielding higher accuracy rates than those of the initial dataset and other augmentation techniques. Among these models, ImagNet exhibited the best recognition accuracy of 99.70% on the testing set. These findings suggest that the proposed augmentation method is a solution to address overfitting issues in disease identification and can enhance identification accuracy effectively. Keywords: classification, deep learning, medical images, CXR, GAN
Procedia PDF Downloads 96
4976 Microchip-Integrated Computational Models for Studying Gait and Motor Control Deficits in Autism
Authors: Noah Odion, Honest Jimu, Blessing Atinuke Afuape
Abstract:
Introduction: Motor control and gait abnormalities are commonly observed in individuals with autism spectrum disorder (ASD), affecting their mobility and coordination. Understanding the underlying neurological and biomechanical factors is essential for designing effective interventions. This study focuses on developing microchip-integrated wearable devices to capture real-time movement data from individuals with autism. By applying computational models to the collected data, we aim to analyze motor control patterns and gait abnormalities, bridging a crucial knowledge gap in autism-related motor dysfunction. Methods: We designed microchip-enabled wearable devices capable of capturing precise kinematic data, including joint angles, acceleration, and velocity during movement. A cross-sectional study was conducted on individuals with ASD and a control group to collect comparative data. Computational modelling was applied using machine learning algorithms to analyse motor control patterns, focusing on gait variability, balance, and coordination. Finite element models were also used to simulate muscle and joint dynamics. The study employed descriptive and analytical methods to interpret the motor data. Results: The wearable devices effectively captured detailed movement data, revealing significant gait variability in the ASD group. For example, gait cycle time was 25% longer, and stride length was reduced by 15% compared to the control group. Motor control analysis showed a 30% reduction in balance stability in individuals with autism. Computational models successfully predicted movement irregularities and helped identify motor control deficits, particularly in the lower limbs. Conclusions: The integration of microchip-based wearable devices with computational models offers a powerful tool for diagnosing and treating motor control deficits in autism. 
These results have significant implications for patient care, providing objective data to guide personalized therapeutic interventions. The findings also contribute to the broader field of neuroscience by improving our understanding of the motor dysfunctions associated with ASD and other neurodevelopmental disorders. Keywords: motor control, gait abnormalities, autism, wearable devices, microchips, computational modeling, kinematic analysis, neurodevelopmental disorders
Procedia PDF Downloads 24
4975 Moving Object Detection Using Histogram of Uniformly Oriented Gradient
Authors: Wei-Jong Yang, Yu-Siang Su, Pau-Choo Chung, Jar-Ferr Yang
Abstract:
Moving object detection (MOD) is an important issue in advanced driver assistance systems (ADAS). Pedestrians and scooters are two important classes of moving objects in ADAS. In real-world systems, MOD faces two important challenges: computational complexity and detection accuracy. Histogram of oriented gradient (HOG) features can easily detect object edges with invariance to changes in illumination and shadowing. However, to reduce the execution time in real-time systems, the image must be down-sampled, which increases the influence of outliers. For this reason, we propose histogram of uniformly-oriented gradient (HUG) features to obtain a more accurate description of the contour of the human body. In the testing phase, a support vector machine (SVM) with a linear kernel function is employed. Experimental results show the correctness and effectiveness of the proposed method: with SVM classifiers, real testing results show that the proposed HUG features achieve better classification performance than HOG features. Keywords: moving object detection, histogram of oriented gradient, histogram of uniformly-oriented gradient, linear support vector machine
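The "uniformly-oriented" modification is not detailed in the abstract, so the sketch below shows only the standard HOG building block it starts from: a gradient-orientation histogram for one cell (the bin count and test patch are illustrative assumptions):

```python
import math

def cell_orientation_histogram(patch, n_bins=9):
    """Unsigned gradient-orientation histogram for one HOG cell:
    central-difference gradients, orientation folded into [0, 180)
    degrees, each interior pixel voting with its gradient magnitude."""
    h = len(patch)
    w = len(patch[0])
    hist = [0.0] * n_bins
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = patch[y][x + 1] - patch[y][x - 1]
            gy = patch[y + 1][x] - patch[y - 1][x]
            mag = math.hypot(gx, gy)
            ang = math.degrees(math.atan2(gy, gx)) % 180.0
            hist[int(ang / (180.0 / n_bins)) % n_bins] += mag
    return hist

# A patch with a purely vertical intensity ramp: every gradient points
# along the 90-degree direction, so all votes land in one bin.
patch = [[float(y)] * 8 for y in range(8)]
hist = cell_orientation_histogram(patch)
dominant_bin = max(range(len(hist)), key=hist.__getitem__)
```

In a full descriptor, such cell histograms are block-normalized and concatenated before being fed to the linear SVM.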
Procedia PDF Downloads 594
4974 A Recommender System Fusing Collaborative Filtering and User’s Review Mining
Authors: Seulbi Choi, Hyunchul Ahn
Abstract:
The collaborative filtering (CF) algorithm has been popularly used in recommender systems in both academic and practical applications. It basically generates recommendation results using users’ numeric ratings. However, the additional use of information other than user ratings may lead to better accuracy in CF. Considering that many people are likely to share honest opinions on items they have purchased recently, owing to the advent of Web 2.0, users’ reviews can be regarded as a new informative source for identifying users’ preferences accurately. Against this background, this study presents a hybrid recommender system that fuses CF and users’ review mining. Our system adopts conventional memory-based CF but is designed to use both users’ numeric ratings and their text reviews of items when calculating similarities between users. Keywords: recommender system, collaborative filtering, text mining, review mining
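A minimal sketch of such a hybrid similarity: memory-based CF with cosine similarity computed on numeric ratings and, separately, on per-item sentiment scores assumed to be extracted from text reviews, then blended with a mixing weight (the weight and all data below are illustrative assumptions, not the paper's design):

```python
import math

def cosine(u, v):
    """Cosine similarity over co-rated items only (u, v are dicts
    mapping item id -> value)."""
    common = [i for i in u if i in v]
    if not common:
        return 0.0
    dot = sum(u[i] * v[i] for i in common)
    nu = math.sqrt(sum(u[i] ** 2 for i in common))
    nv = math.sqrt(sum(v[i] ** 2 for i in common))
    return dot / (nu * nv) if nu and nv else 0.0

def hybrid_similarity(ra, rb, sa, sb, alpha=0.5):
    """Blend rating-based and review-sentiment-based similarity;
    alpha is an assumed mixing weight."""
    return alpha * cosine(ra, rb) + (1.0 - alpha) * cosine(sa, sb)

def predict(target, others, item, alpha=0.5):
    """Similarity-weighted mean of neighbours' ratings for `item`."""
    num = den = 0.0
    for user in others:
        if item not in user["ratings"]:
            continue
        s = hybrid_similarity(target["ratings"], user["ratings"],
                              target["sentiment"], user["sentiment"], alpha)
        num += s * user["ratings"][item]
        den += abs(s)
    return num / den if den else None

# Toy data: sentiment scores in [0, 1] stand in for mined review text.
alice = {"ratings": {"i1": 5, "i2": 4}, "sentiment": {"i1": 0.9, "i2": 0.7}}
bob = {"ratings": {"i1": 5, "i2": 4, "i3": 2},
       "sentiment": {"i1": 0.8, "i2": 0.6, "i3": 0.2}}
carol = {"ratings": {"i1": 1, "i2": 2, "i3": 5},
         "sentiment": {"i1": 0.1, "i2": 0.3, "i3": 0.9}}
pred = predict(alice, [bob, carol], "i3")
```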
Procedia PDF Downloads 357
4973 Evaluating Classification with Efficacy Metrics
Authors: Guofan Shao, Lina Tang, Hao Zhang
Abstract:
The values of image classification accuracy are affected by class size distributions and classification schemes, making it difficult to compare the performance of classification algorithms across different remote sensing data sources and classification systems. Borrowing the term efficacy from medicine and pharmacology, we have developed metrics of image classification efficacy at the map and class levels. The novelty of this approach is that a baseline classification is involved in computing the image classification efficacies, so that the effects of class statistics are reduced. Furthermore, the image classification efficacies are interpretable and comparable and thus strengthen the assessment of image data classification methods. We use real-world and hypothetical examples to explain the use of image classification efficacies. These metrics meet the critical need to rectify the strategy for assessing image classification performance as image classification methods become more diversified. Keywords: accuracy assessment, efficacy, image classification, machine learning, uncertainty
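The paper's exact efficacy formulas are not given in the abstract. One common way to involve a baseline classification, in the same spirit, is to normalize accuracy against the baseline's accuracy (an illustrative definition, not necessarily the authors'):

```python
def classification_efficacy(accuracy, baseline_accuracy):
    """Normalize observed accuracy against a baseline classifier's
    accuracy (e.g. always predicting the majority class):
        efficacy = (acc - base) / (1 - base).
    0 means no better than the baseline, 1 means a perfect map, so the
    effect of class size distribution is largely factored out."""
    if baseline_accuracy >= 1.0:
        raise ValueError("baseline is already perfect")
    return (accuracy - baseline_accuracy) / (1.0 - baseline_accuracy)

# Two maps with the same 90% accuracy look very different once class
# imbalance is accounted for: with an 85%-majority class the baseline
# is 0.85, while with a 50/50 split it is only 0.50.
eff_imbalanced = classification_efficacy(0.90, 0.85)
eff_balanced = classification_efficacy(0.90, 0.50)
```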
Procedia PDF Downloads 211
4972 Diagnostic Properties of Exercise or Pharmacological Stress Myocardial Perfusion Scintigraphy in Per-Vessel Basis: A Clinical Validation Study
Authors: Ahmadreza Bagheri, Seyyed S. Eftekhari, Shervin Rashidinia
Abstract:
Background: Various stress tests have been proposed to assess patients with suspected coronary artery disease. However, their diagnostic properties in terms of sensitivity, specificity, and accuracy are variable, and their applicability remains somewhat vague. The aim of this study is to validate the per-vessel diagnostic properties of three types of stress myocardial perfusion scintigraphy in gated SPECT (single-photon emission computed tomography), using either exercise or pharmacological stress testing with dipyridamole or dobutamine. Materials and Methods: The hospital records of 314 patients who were referred to the Imam Khomeini hospital of Tehran between September 2015 and January 2017 were completely reviewed. All patients underwent coronary angiography within 3 months after the stress myocardial perfusion scan. The results were then analyzed on a per-vessel basis to find the proper modality for each involved vessel or scanned site. Results: The mean age of the patients was 62.15 ± 4.94 years (range 30-85), and 35.03% were women. The overall sensitivity, specificity, and accuracy were 56.59%, 54.24%, and 55.09%, respectively. These values were 56.43% and 53.25%, 54.46% and 47.36%, and 56.75% and 54.83% for dipyridamole and exercise, respectively. Ischemia of the anterior wall on exercise stress testing had the highest diagnostic accuracy in detecting LAD (left anterior descending artery) involvement. Inferior wall hypokinesia and anterolateral wall ischemia during exercise stress testing had the highest diagnostic accuracy in detecting RCA (right coronary artery) and LCX (left circumflex artery) stenosis, respectively. Conclusion: Stress myocardial perfusion scans should be carried out on the basis of the findings of the preliminary investigations on suspicion of a specific coronary artery or involved myocardial wall. Keywords: dipyridamole, dobutamine, single-photon emission computed tomography, stress myocardial perfusion scintigraphy
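The per-vessel diagnostic properties reported above follow from a 2x2 confusion matrix against the angiographic reference standard. A minimal sketch (the counts below are hypothetical; the abstract reports only the final percentages):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Per-vessel diagnostic properties from a 2x2 confusion matrix,
    with coronary angiography as the reference standard:
    sensitivity = TP / (TP + FN), specificity = TN / (TN + FP),
    accuracy = (TP + TN) / all."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy

# Hypothetical per-vessel counts (illustrative only, not the study's
# underlying data).
sens, spec, acc = diagnostic_metrics(tp=40, fp=10, tn=35, fn=15)
```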
Procedia PDF Downloads 155
4971 Bi-Lateral Comparison between NIS-Egypt and NMISA-South Africa for the Calibration of an Optical Spectrum Analyzer
Authors: Osama Terra, Hatem Hussein, Adriaan Van Brakel
Abstract:
Dense wavelength division multiplexing (DWDM) technology requires tight specifications and, therefore, measurement of the wavelength accuracy and stability of telecommunication lasers. Thus, calibration of the Optical Spectrum Analyzers (OSAs) used to measure wavelength is of great importance. Proficiency testing must be performed on such measuring activity to ensure the accuracy of the measurement results. In this paper, a new comparison scheme is introduced to test the performance of such calibrations. This comparison scheme is implemented between NIS-Egypt and NMISA-South Africa for the calibration of the wavelength scale of an OSA. Both institutes employ a reference gas cell to calibrate the OSA according to the standard IEC/BS EN 62129 (2006). The results of this comparison are compiled in this paper.
Keywords: OSA calibration, HCN gas cell, DWDM technology, wavelength measurement
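In such a calibration, the OSA-measured centers of the gas-cell absorption lines are compared with the cell's certified wavelengths to obtain a wavelength-scale correction. A minimal sketch (the line values below are illustrative, not certified HCN data):

```python
def wavelength_scale_offset(measured_nm, certified_nm):
    """Mean offset of the OSA wavelength scale from gas-cell reference lines."""
    offsets = [c - m for m, c in zip(measured_nm, certified_nm)]
    return sum(offsets) / len(offsets)

# Illustrative absorption-line centers in nm (not certified HCN values):
offset = wavelength_scale_offset([1530.30, 1531.58], [1530.31, 1531.59])
# Apply as: corrected_wavelength = measured_wavelength + offset
```

In a real comparison the per-line residuals, not just the mean offset, would enter the uncertainty budget.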
Procedia PDF Downloads 303
4970 Calibration of the Radial Installation Limit Error of the Accelerometer in the Gravity Gradient Instrument
Authors: Danni Cong, Meiping Wu, Xiaofeng He, Junxiang Lian, Juliang Cao, Shaokun Cai, Hao Qin
Abstract:
The gravity gradient instrument (GGI) is the core of the gravity gradiometer, so structural errors of the sensor have a great impact on the measurement results. In order not to compromise the target measurement accuracy, a limit error is required in the installation of the accelerometer. In this paper, based on the established measuring principle model, the radial installation limit error is calibrated; it is taken as an example to provide a method for calculating the other installation limit errors under the premise of ensuring the accuracy of the measurement result. This method provides the idea for deriving the limit errors of the geometric structure of the sensor, laying the foundation for mechanical precision design and physical design.
Keywords: gravity gradient sensor, radial installation limit error, accelerometer, uniaxial rotational modulation
Procedia PDF Downloads 422
4969 Efficient Passenger Counting in Public Transport Based on Machine Learning
Authors: Chonlakorn Wiboonsiriruk, Ekachai Phaisangittisagul, Chadchai Srisurangkul, Itsuo Kumazawa
Abstract:
Public transportation is a crucial aspect of passenger transportation, with buses playing a vital role in the transportation service. Passenger counting is an essential tool for organizing and managing transportation services. However, manual counting is a tedious and time-consuming task, which is why computer vision algorithms are being utilized to make the process more efficient. In this study, different object detection algorithms combined with passenger tracking are investigated to compare passenger counting performance. The system employs the EfficientDet algorithm, which has demonstrated superior performance in terms of speed and accuracy. Our results show that the proposed system can accurately count passengers in varying conditions with an accuracy of 94%.
Keywords: computer vision, object detection, passenger counting, public transportation
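Once detection and tracking produce a centroid track per passenger, the counting stage itself can be as simple as checking when each track crosses a virtual door line. A hedged sketch of that final stage only (the detector and tracker, e.g. EfficientDet plus an association step, are assumed to already exist):

```python
def count_boarding(tracks, line_y):
    """Count tracks whose centroid crosses a virtual door line (downward)."""
    boarding = 0
    for track in tracks:                      # track: [(x, y), ...] over frames
        for (x0, y0), (x1, y1) in zip(track, track[1:]):
            if y0 < line_y <= y1:             # crossed the line this frame
                boarding += 1
                break                         # count each passenger at most once
    return boarding

# Two hypothetical tracks; only the first crosses the line y = 4
tracks = [[(5, 1), (5, 3), (5, 6)], [(2, 1), (2, 2)]]
```

A symmetric check with the inequalities reversed would count alighting passengers.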
Procedia PDF Downloads 155
4968 Nonlinear Dynamic Analysis of Base-Isolated Structures Using a Partitioned Solution Approach and an Exponential Model
Authors: Nicolò Vaiana, Filip C. Filippou, Giorgio Serino
Abstract:
The solution of the nonlinear dynamic equilibrium equations of base-isolated structures adopting a conventional monolithic solution approach, i.e., an implicit single-step time integration method employed with an iteration procedure, and the use of existing nonlinear analytical models, such as differential equation models, to simulate the dynamic behavior of seismic isolators can require a significant computational effort. In order to reduce numerical computations, a partitioned solution method and a one-dimensional nonlinear analytical model are presented in this paper. A partitioned solution approach can be easily applied to base-isolated structures in which the base isolation system is much more flexible than the superstructure. Thus, in this work, the explicit conditionally stable central difference method is used to evaluate the nonlinear response of the base isolation system, and the implicit unconditionally stable Newmark constant average acceleration method is adopted to predict the linear response of the superstructure, with the benefit of avoiding iterations in each time step of a nonlinear dynamic analysis. The proposed mathematical model is able to simulate the dynamic behavior of seismic isolators without requiring the solution of a nonlinear differential equation, as in the case of the widely used differential equation models. The proposed mixed explicit-implicit time integration method and nonlinear exponential model are adopted to analyze a three-dimensional seismically isolated structure with a lead rubber bearing system subjected to earthquake excitation. The numerical results show the good accuracy and the significant computational efficiency of the proposed solution approach and analytical model compared to the conventional solution method and mathematical model considered in this work.
Furthermore, the low stiffness of the base isolation system with lead rubber bearings allows a critical time step considerably larger than the time step of the imposed ground acceleration record, thus avoiding stability problems in the proposed mixed method.
Keywords: base-isolated structures, earthquake engineering, mixed time integration, nonlinear exponential model
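The explicit half of such a mixed scheme can be illustrated on a single degree of freedom. Below is a generic sketch of conditionally stable central-difference integration, including the critical time step check (Δt < 2/ωn) that the stability remark above relies on; this is textbook SDOF code, not the authors' implementation:

```python
import math

def central_difference_sdof(m, c, k, p, dt, u0=0.0, v0=0.0):
    """Explicit central-difference solution of m*u'' + c*u' + k*u = p(t)."""
    omega_n = math.sqrt(k / m)
    assert dt < 2.0 / omega_n, "dt exceeds the critical time step 2/omega_n"
    a0 = (p[0] - c * v0 - k * u0) / m
    u_prev = u0 - dt * v0 + 0.5 * dt * dt * a0      # fictitious displacement u_{-1}
    khat = m / dt**2 + c / (2.0 * dt)               # effective stiffness
    u = [u0]
    for i in range(len(p) - 1):
        phat = (p[i] - (k - 2.0 * m / dt**2) * u[-1]
                - (m / dt**2 - c / (2.0 * dt)) * u_prev)
        u_prev = u[-1]
        u.append(phat / khat)                        # no iteration needed
    return u
```

Because each step is a direct division by the effective stiffness, no iteration is required, which is the computational advantage exploited for the flexible isolation layer.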
Procedia PDF Downloads 280
4967 A Homogenized Mechanical Model of Carbon Nanotubes/Polymer Composite with Interface Debonding
Authors: Wenya Shu, Ilinca Stanciulescu
Abstract:
Carbon nanotubes (CNTs) possess attractive properties, such as high stiffness and strength and high thermal and electrical conductivities, making them a promising filler in multifunctional nanocomposites. Although CNTs can be efficient reinforcements, the expected level of mechanical performance of CNT-polymers is often not reached in practice due to the poor mechanical behavior of the CNT-polymer interfaces. It is believed that the interactions of CNT and polymer mainly result from van der Waals forces. Interface debonding is a fracture and delamination phenomenon; thus, cohesive zone modeling (CZM) is deemed to capture the interface behavior well. Detailed cohesive zone modeling provides an option to consider the CNT-matrix interactions, but it complicates mesh generation and leads to high computational costs. Homogenized models that smear the fibers in the ground matrix and treat the material as homogeneous have been studied in many works to simplify simulations. However, based on the perfect-interface assumption, the traditional homogenized model obtained from mixing rules severely overestimates the stiffness of the composite, even compared with CZM results for an artificially very strong interface. A mechanical model that can take into account the interface debonding and achieve accuracy comparable to the CZM is thus essential. The present study first investigates the CNT-matrix interactions by employing cohesive zone modeling. Three different coupled CZM laws, i.e., bilinear, exponential, and polynomial, are considered. These studies indicate that the shapes of the CZM constitutive laws chosen do not significantly influence the simulations of interface debonding. Assuming a bilinear traction-separation relationship, the debonding process of a single CNT in the matrix is divided into three phases and described by differential equations. The analytical solutions corresponding to these phases are derived.
A homogenized model is then developed by introducing a parameter characterizing interface sliding into the mixing theory. The proposed mechanical model is implemented in FEAP 8.5 as a user material. The accuracy and limitations of the model are discussed through several numerical examples. The CZM simulations in this study reveal important factors in the modeling of CNT-matrix interactions. The analytical solutions and the proposed homogenized model provide alternative methods to efficiently investigate the mechanical behavior of CNT/polymer composites.
Keywords: carbon nanotube, cohesive zone modeling, homogenized model, interface debonding
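The assumed bilinear traction-separation relationship can be written compactly: linear elastic loading up to a damage-onset separation, linear softening to full debonding. A sketch with hypothetical parameters (delta0 for damage onset, deltaf for full debonding, tmax for the interface strength; not the study's calibrated values):

```python
def bilinear_traction(delta, delta0, deltaf, tmax):
    """Bilinear cohesive traction-separation law (opening mode).
    Elastic up to delta0, linear softening to zero traction at deltaf."""
    if delta <= 0.0:
        return 0.0                                        # no opening
    if delta < delta0:
        return tmax * delta / delta0                      # elastic loading
    if delta < deltaf:
        return tmax * (deltaf - delta) / (deltaf - delta0)  # softening (damage)
    return 0.0                                            # fully debonded
```

The three branches correspond to the three phases of the single-CNT debonding process described above.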
Procedia PDF Downloads 129
4966 An Automated R-Peak Detection Method Using Common Vector Approach
Authors: Ali Kirkbas
Abstract:
R peaks in an electrocardiogram (ECG) are signs of cardiac activity that reveal valuable information about cardiac abnormalities, which can lead to mortality in some cases. This paper examines the problem of detecting R-peaks in ECG signals, which is, in fact, a two-class pattern classification problem. To handle this problem with reliably high accuracy, we propose to use the common vector approach, a successful machine learning algorithm. The dataset used in the proposed method is obtained from MIT-BIH, which is publicly available. The results are compared with those of other popular methods under standard performance metrics. The obtained results show that the proposed method performs better than the compared methods in terms of diagnostic accuracy and simplicity, and it can be operated on wearable devices.
Keywords: ECG, R-peak classification, common vector approach, machine learning
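While the common vector classifier itself is beyond a short sketch, the candidate-peak stage that precedes any R-peak classifier can be illustrated simply: local maxima above an amplitude threshold, separated by a physiological refractory period. This is a generic illustration, not the paper's method:

```python
def detect_r_peaks(ecg, fs, threshold, refractory_s=0.2):
    """Naive candidate R-peak detector on a sampled ECG signal.
    fs: sampling rate in Hz; refractory_s: minimum spacing between beats."""
    min_gap = int(refractory_s * fs)
    peaks = []
    for i in range(1, len(ecg) - 1):
        is_local_max = ecg[i] >= ecg[i - 1] and ecg[i] > ecg[i + 1]
        if ecg[i] > threshold and is_local_max:
            if not peaks or i - peaks[-1] >= min_gap:   # enforce refractory period
                peaks.append(i)
    return peaks
```

In the paper's framing, each candidate window would then be passed to the two-class (R-peak vs. non-R-peak) classifier.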
Procedia PDF Downloads 64
4965 Using Audit Tools to Maintain Data Quality for ACC/NCDR PCI Registry Abstraction
Authors: Vikrum Malhotra, Manpreet Kaur, Ayesha Ghotto
Abstract:
Background: Cardiac registries such as the ACC Percutaneous Coronary Intervention (PCI) Registry require high-quality data to be abstracted, including data elements such as nuclear cardiology, diagnostic coronary angiography, and PCI. Introduction: The audit tool created is used by data abstractors to provide data audits and assess the accuracy and inter-rater reliability of abstraction performed by the abstractors for a health system. This audit tool solution has been developed across 13 registries, including the ACC/NCDR registries, PCI, STS, and Get With the Guidelines. Methodology: The data audit tool was used to audit internal registry abstraction for all data elements, including stress test performed, type of stress test, date of stress test, results of stress test, risk/extent of ischemia, diagnostic catheterization detail, and PCI data elements for the ACC/NCDR PCI registries. It is being used internally across 20 hospital systems, providing abstraction and audit services for them. Results: The data audit tool showed inter-rater reliability and data accuracy greater than 95% for the PCI registry across 50 audited cases in 2021. Conclusion: The tool is being used internally for surgical societies and across hospital systems. The audit tool enables the abstractor to be assessed by an external abstractor and includes all of the data dictionary fields for each registry.
Keywords: abstraction, cardiac registry, cardiovascular registry, registry, data
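The accuracy and inter-rater reliability figures quoted can be computed as field-level agreement between the internal abstractor and the external auditor. A minimal sketch (the registry's formal IRR methodology may be more elaborate; the field names are hypothetical):

```python
def field_agreement(abstractor, auditor):
    """Fraction of audited data elements on which two abstractions agree."""
    assert len(abstractor) == len(auditor), "audits must cover the same fields"
    matches = sum(1 for a, b in zip(abstractor, auditor) if a == b)
    return matches / len(abstractor)

# Hypothetical audit of four data elements for one PCI case:
score = field_agreement(
    ["stress_echo", "2021-03-01", "positive", "moderate"],
    ["stress_echo", "2021-03-01", "positive", "severe"],
)
```

Averaging such scores over the 50 audited cases would yield the reported registry-level figure.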
Procedia PDF Downloads 105
4964 Computational Fluid Dynamics (CFD) Modeling of Local with a Hot Temperature in Sahara
Authors: Selma Bouasria, Mahi Abdelkader, Abbès Azzi, Herouz Keltoum
Abstract:
This paper reports on a concept implemented in the computational fluid dynamics (CFD) code CFX through user-defined functions to assess ventilation efficiency inside a forced-ventilation room. CFX is a simulation tool that uses powerful computing and applied mathematics to model fluid flow situations for the prediction of heat, mass, and momentum transfer and for optimal design in various heat transfer and fluid flow processes; here it is used to evaluate thermal comfort in a highly glazed ventilated room. The quality of the solutions obtained from CFD simulations makes them an effective tool for predicting indoor thermo-aeraulic (coupled heat and airflow) comfort behavior and performance.
Keywords: ventilation, thermal comfort, CFD, indoor environment, solar air heater
Procedia PDF Downloads 634
4963 Musical Instrument Recognition in Polyphonic Audio Through Convolutional Neural Networks and Spectrograms
Authors: Rujia Chen, Akbar Ghobakhlou, Ajit Narayanan
Abstract:
This study investigates the task of identifying musical instruments in polyphonic compositions using Convolutional Neural Networks (CNNs) from spectrogram inputs, focusing on binary classification. The model showed promising results, with an accuracy of 97% on solo instrument recognition. When applied to polyphonic combinations of 1 to 10 instruments, the overall accuracy was 64%, reflecting the increasing challenge with larger ensembles. These findings contribute to the field of Music Information Retrieval (MIR) by highlighting the potential and limitations of current approaches in handling complex musical arrangements. Future work aims to include a broader range of musical sounds, including electronic and synthetic sounds, to improve the model's robustness and applicability in real-time MIR systems.
Keywords: binary classifier, CNN, spectrogram, instrument
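The spectrogram front end that feeds such a CNN can be sketched with a naive DFT over framed audio. A real MIR pipeline would use an FFT, a window function, and typically a mel filter bank; this is only an illustration of the time-frequency representation:

```python
import cmath

def spectrogram(signal, frame_len, hop):
    """Magnitude spectrogram via a naive DFT on overlapping frames.
    Returns a list of frames, each a list of frame_len // 2 + 1 bin magnitudes."""
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len + 1, hop)]
    spec = []
    for frame in frames:
        mags = []
        for k in range(frame_len // 2 + 1):           # non-negative frequencies
            s = sum(x * cmath.exp(-2j * cmath.pi * k * n / frame_len)
                    for n, x in enumerate(frame))
            mags.append(abs(s))
        spec.append(mags)
    return spec
```

Stacked over time, these frames form the 2D "image" the CNN classifies.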
Procedia PDF Downloads 81
4962 Fluid Structure Interaction Study between Ahead and Angled Impact of AGM 88 Missile Entering Relatively High Viscous Fluid for K-Omega Turbulence Model
Authors: Abu Afree Andalib, Rafiur Rahman, Md Mezbah Uddin
Abstract:
The main objective of this work is to analyze various parameters of the AGM-88 missile using the FSI module in Ansys. Computational fluid dynamics is used to study the fluid flow pattern and fluidic phenomena such as drag, pressure force, energy dissipation, and shockwave distribution in water. Using the finite element analysis module of Ansys, structural parameters such as stress and stress density, localization points, deflection, and force propagation are determined. A separate analysis of the structural parameters is performed in Abaqus. A state-of-the-art coupling module is used for the FSI analysis. A fine mesh is used in every case for better simulation results, within the limits of the available computational power. The results for the above-mentioned parameters are analyzed and compared for the two phases using graphical representations, and the results of Ansys and Abaqus are also shown. Computational Fluid Dynamics and Finite Element analyses, and subsequently the Fluid-Structure Interaction (FSI) technique, are considered; the finite volume method and the finite element method are used for modeling the fluid flow and analyzing the structural parameters, respectively. Feasible boundary conditions are also applied in the research. Significant changes in the interaction and interference patterns were found at impact; both theory and simulation indicate that the angled entry condition produces the higher impact.
Keywords: FSI (Fluid-Structure Interaction), impact, missile, high viscous fluid, CFD (Computational Fluid Dynamics), FEM (Finite Element Analysis), FVM (Finite Volume Method), fluid flow, fluid pattern, structural analysis, AGM-88, Ansys, Abaqus, meshing, k-omega, turbulence model
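Among the fluidic quantities monitored, the drag on the body follows the standard drag equation. A back-of-the-envelope sketch for a body entering water; the density is that of water, but the speed, drag coefficient, and frontal area below are illustrative assumptions, not the AGM-88's actual values:

```python
def drag_force(rho, v, cd, area):
    """Standard drag equation: F_d = 0.5 * rho * v^2 * C_d * A."""
    return 0.5 * rho * v**2 * cd * area

# Illustrative values: water density (kg/m^3), 50 m/s entry speed,
# assumed drag coefficient and frontal area (m^2)
force = drag_force(rho=1000.0, v=50.0, cd=0.3, area=0.07)
```

The quadratic dependence on velocity is why water entry produces such severe loads compared with flight in air, where rho is roughly 800 times smaller.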
Procedia PDF Downloads 467
4961 Computational Pipeline for Lynch Syndrome Detection: Integrating Alignment, Variant Calling, and Annotations
Authors: Rofida Gamal, Mostafa Mohammed, Mariam Adel, Marwa Gamal, Marwa kamal, Ayat Saber, Maha Mamdouh, Amira Emad, Mai Ramadan
Abstract:
Lynch Syndrome is an inherited genetic condition associated with an increased risk of colorectal and other cancers. Detecting Lynch Syndrome in individuals is crucial for early intervention and preventive measures. This study proposes a computational pipeline for Lynch Syndrome detection that integrates alignment, variant calling, and annotation. The pipeline leverages popular tools such as FastQC, Trimmomatic, BWA, bcftools, and ANNOVAR to process the input FASTQ file, perform quality trimming, align reads to the reference genome, call variants, and annotate them. The pipeline was applied to a dataset of Lynch Syndrome cases, and its performance was evaluated. The quality-check step ensured the integrity of the sequencing data, while the trimming process removed low-quality bases and adaptors. In the alignment step, the reads were mapped to the reference genome, and the subsequent variant-calling step identified potential genetic variants. The annotation step provided functional insights into the detected variants, including their effects on known Lynch Syndrome-associated genes. The results obtained from the pipeline revealed Lynch Syndrome-related positions in the genome, providing valuable information for further investigation and clinical decision-making. The pipeline's effectiveness was demonstrated through its ability to streamline the analysis workflow and identify potential genetic markers associated with Lynch Syndrome. The computational pipeline thus presents a comprehensive and efficient approach to Lynch Syndrome detection, contributing to early diagnosis and intervention. Its modularity and flexibility enable customization and adaptation to various datasets and research settings.
Further optimization and validation are necessary to enhance performance and applicability across diverse populations.
Keywords: Lynch Syndrome, computational pipeline, alignment, variant calling, annotation, genetic markers
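The stages named in the abstract map onto a short chain of tool invocations. A hedged sketch that only assembles the command strings for inspection (the flags and file names are illustrative assumptions, not the authors' exact invocations, and ANNOVAR in particular may need additional options such as VCF-input handling):

```python
def build_pipeline(fastq, reference, sample="sample1"):
    """Assemble the pipeline's command stages as strings (illustrative flags)."""
    trimmed = f"{sample}.trimmed.fastq"
    bam = f"{sample}.sorted.bam"
    vcf = f"{sample}.vcf.gz"
    return [
        f"fastqc {fastq}",                                          # quality check
        f"trimmomatic SE {fastq} {trimmed} SLIDINGWINDOW:4:20",      # trimming
        f"bwa mem {reference} {trimmed} | samtools sort -o {bam}",   # alignment
        f"bcftools mpileup -f {reference} {bam} | bcftools call -mv -Oz -o {vcf}",  # variant calling
        f"table_annovar.pl {vcf} humandb/ -buildver hg38",           # annotation
    ]

commands = build_pipeline("reads.fastq", "hg38.fa")
```

Each string could then be executed in order with `subprocess.run(cmd, shell=True, check=True)`, which is what makes the pipeline modular: any stage can be swapped without touching the others.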
Procedia PDF Downloads 77