Search results for: heart sound classification
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3909

3309 Difficulties Arising from Cultural and Social Differences Between Languages and Its Impact on Translation and on Translator’s Performance

Authors: Belalia Douma Mohammed

Abstract:

The translator must have a wide knowledge of all fields, especially cultural and literary, so that he can smoothly translate scientific, literary, political, or any oral or written material without distorting the meaning: the goal is a transfer of the entire content, a correct and faithful translation that expresses the culture and literature of the source country. But this has always been an obstacle for translators. For example, a person who translates from Spanish into another language may face the problem of differences in speech speed, a difference that appears clearly considering that Spanish is pronounced more rapidly than many other languages, and this will certainly affect the translator's performance. Similarly, the expression "snowed my heart" is common and well known in Arabic, where it means to make one happy and delighted, but translating it without transferring its culture to a country like Russia, for example, may suggest the cold that freezes the heart. In this research paper, we aim to examine such difficulties and their impact on translation, interpretation, and the translator's performance.

Keywords: interpretation, translation, performance, difficulties, differences

Procedia PDF Downloads 82
3308 Low Cost Real Time Robust Identification of Impulsive Signals

Authors: R. Biondi, G. Dys, G. Ferone, T. Renard, M. Zysman

Abstract:

This paper describes an automated, implementable system for impulsive signal detection and recognition. The system uses a Digital Signal Processing device for the detection and identification process, analysing the signals in real time in order to produce a specific output if needed. Detection is achieved by normalizing the inputs and comparing the read signals to a dynamic threshold, thus avoiding detections triggered by loud or fluctuating ambient noise. Identification is done through neural network algorithms. As a setup, our system can receive signals to “learn” certain patterns; through “learning”, the system can recognize signals faster, giving it flexibility towards new patterns similar to those already known. Sound is captured through a simple jack input and could be exchanged for an enhanced recording device such as a wide-area recorder. Furthermore, a communication module can be added to the apparatus to send alerts to another interface if needed.
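The detection stage described above, normalizing the input and comparing it to a dynamic threshold derived from the background noise, can be sketched in a few lines. This is a minimal illustration, not the authors' implementation; the window size, the median-based noise floor, and the factor `k` are all assumptions.

```python
import numpy as np

def detect_impulses(signal, window=50, k=4.0):
    """Flag samples exceeding k times a rolling noise-floor estimate.

    Minimal sketch of dynamic-threshold impulse detection; the window
    size, factor k, and offset are illustrative choices.
    """
    x = np.abs(signal) / (np.max(np.abs(signal)) + 1e-12)  # normalise input
    hits = []
    for i in range(window, len(x)):
        floor = np.median(x[i - window:i])  # dynamic background estimate
        if x[i] > k * floor + 0.1:          # offset guards against near-silence
            hits.append(i)
    return hits

# Quiet background noise with one sharp impulse at sample 300
rng = np.random.default_rng(0)
sig = 0.05 * rng.standard_normal(500)
sig[300] = 1.0
hits = detect_impulses(sig)
print(hits)
```

Because the threshold tracks the recent noise floor rather than a fixed level, the same detector works in both quiet and steadily loud environments, which is the point the abstract makes about avoiding false detections from fluctuating ambient noise.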

Keywords: sound detection, impulsive signal, background noise, neural network

Procedia PDF Downloads 299
3307 Loudspeaker Parameters Inverse Problem for Improving Sound Frequency Response Simulation

Authors: Y. T. Tsai, Jin H. Huang

Abstract:

The sound pressure level (SPL) of a moving-coil loudspeaker (MCL) is often simulated and analyzed using the lumped parameter model. However, the SPL of an MCL cannot be simulated precisely in the high-frequency region, because the cone's effective area changes with the geometry variation of the different mode shapes, which in turn affects the acoustic radiation mass and resistance. This paper presents an inverse method that can measure the value of the cone effective area at various frequency points and simultaneously estimate the MCL electroacoustic parameters. The proposed inverse method comprises a direct problem, an adjoint problem, and a sensitivity problem, in collaboration with the nonlinear conjugate gradient method. Estimated values from the inverse method are validated experimentally by comparison with the measured SPL curve. The results presented in this paper not only improve the accuracy of the lumped parameter model but also provide valuable information for loudspeaker cone design.

Keywords: inverse problem, cone effective area, loudspeaker, nonlinear conjugate gradient method

Procedia PDF Downloads 290
3306 Comparison of Two Strategies in Thoracoscopic Ablation of Atrial Fibrillation

Authors: Alexander Zotov, Ilkin Osmanov, Emil Sakharov, Oleg Shelest, Aleksander Troitskiy, Robert Khabazov

Abstract:

Objective: Thoracoscopic surgical ablation of atrial fibrillation (AF) can be performed with two technologies: the first strategy uses the AtriCure device (bipolar, non-irrigated, non-clamping), and the second uses the Medtronic device (bipolar, irrigated, clamping). The study presents a comparative analysis of the clinical outcomes of these two strategies in thoracoscopic ablation of AF. Methods: In a two-center study, 123 patients underwent thoracoscopic ablation of AF in the period from 2016 to 2020. Patients were divided into two groups: the first group comprises patients treated with the AtriCure device (N=63) and the second group those treated with the Medtronic device (N=60). Patients were comparable in age, gender, and initial severity of the condition. Group 1 was 65% male with a median age of 57 years, while group 2 was 75% male with a median age of 60 years. Group 1 included patients with paroxysmal form (14.3%), persistent form (68.3%), and long-standing persistent form (17.5%); in group 2 the proportions were 13.3%, 13.3%, and 73.3%, respectively. Median ejection fraction and indexed left atrial volume were 63% and 40.6 ml/m2 in group 1, and 56% and 40.5 ml/m2 in group 2. In addition, group 1 consisted of 39.7% patients with chronic heart failure (NYHA Class II) and 4.8% with chronic heart failure (NYHA Class III), versus 45% and 6.7% in group 2. Follow-up consisted of laboratory tests, chest X-ray, ECG, 24-hour Holter monitoring, and a cardiopulmonary exercise test. Duration of freedom from AF, distant mortality rate, and prevalence of cerebrovascular events were compared between the two groups. Results: Exit block was achieved in all patients. According to the Clavien-Dindo classification of surgical complications, the fraction of adverse events was 14.3% in group 1 and 16.7% in group 2.
The mean follow-up period was 50.4 (31.8; 64.8) months in group 1 and 30.5 (14.1; 37.5) months in group 2 (P=0.0001). In group 1, total freedom from AF was achieved in 73.3% of patients, of whom 25% had additional antiarrhythmic drug (AAD) therapy or catheter ablation (CA); in group 2 the figures were 90% and 18.3%, respectively (for total freedom from AF, P<0.02). At follow-up, the distant mortality rate in group 1 was 4.8%, while group 2 had no fatal events. The prevalence of cerebrovascular events was higher in group 1 than in group 2 (6.7% vs. 1.7%). Conclusions: Despite the relatively shorter follow-up of group 2, the strategy using the Medtronic device showed quite encouraging results. Further research is needed to evaluate the effectiveness of this strategy in the long-term period.

Keywords: atrial fibrillation, clamping, ablation, thoracoscopic surgery

Procedia PDF Downloads 92
3305 The Optimization of Decision Rules in Multimodal Decision-Level Fusion Scheme

Authors: Andrey V. Timofeev, Dmitry V. Egorov

Abstract:

This paper introduces an original method of parametric optimization of the structure for multimodal decision-level fusion scheme which combines the results of the partial solution of the classification task obtained from assembly of the mono-modal classifiers. As a result, a multimodal fusion classifier which has the minimum value of the total error rate has been obtained.
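The idea of tuning a decision-level fusion scheme to minimise total error rate can be illustrated with a one-parameter special case: a weighted combination of two mono-modal classifier scores, with the weight chosen by grid search on labelled data. The paper's parametric optimisation is more general; the scores, labels, and grid here are invented for illustration.

```python
import numpy as np

def fuse_and_evaluate(scores_a, scores_b, labels, grid=101):
    """Grid-search a single fusion weight minimising total error rate."""
    best_w, best_err = 0.0, 1.0
    for w in np.linspace(0, 1, grid):
        fused = w * scores_a + (1 - w) * scores_b   # combined confidence
        pred = (fused > 0.5).astype(int)
        err = np.mean(pred != labels)               # total error rate
        if err < best_err:
            best_w, best_err = w, err
    return best_w, best_err

scores_a = np.array([0.1, 0.2, 0.8, 0.9])   # accurate modality
scores_b = np.array([0.6, 0.4, 0.4, 0.6])   # noisy modality
labels = np.array([0, 0, 1, 1])
w, err = fuse_and_evaluate(scores_a, scores_b, labels)
print(w, err)
```

Here the search discovers that leaning on the more accurate modality drives the fused error to zero, which is the behaviour the abstract describes: the fusion classifier attains the minimum total error rate over the assembly of mono-modal classifiers.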

Keywords: classification accuracy, fusion solution, total error rate, multimodal fusion classifier

Procedia PDF Downloads 446
3304 Evaluation of Developmental Toxicity and Teratogenicity of Perfluoroalkyl Compounds Using FETAX

Authors: Hyun-Kyung Lee, Jehyung Oh, Young Eun Jeong, Hyun-Shik Lee

Abstract:

Perfluoroalkyl compounds (PFCs) are environmental toxicants that persistently accumulate in human blood. Their widespread detection and accumulation in the environment raise concerns about whether these chemicals might be developmental toxicants and teratogens in the ecosystem. We evaluated and compared the toxicity of PFCs containing various numbers of carbon atoms (C8-C11) on vertebrate embryogenesis. We assessed the developmental toxicity and teratogenicity of various PFCs, and the toxic effects on Xenopus embryos were evaluated using different methods. We measured teratogenic indices (TIs) and investigated the mechanisms underlying developmental toxicity and teratogenicity by measuring the expression of organ-specific biomarkers such as xPTB (liver), Nkx2.5 (heart), and Cyl18 (intestine). All PFCs that we tested were found to be developmental toxicants and teratogens, and their toxic effects were strengthened with increasing length of the fluorinated carbon chain. Furthermore, we produced evidence showing that perfluorodecanoic acid (PFDA) and perfluoroundecanoic acid (PFuDA) are more potent developmental toxicants and teratogens in an animal model than the other PFCs we evaluated [perfluorooctanoic acid (PFOA) and perfluorononanoic acid (PFNA)]. In particular, severe defects resulting from PFDA and PFuDA exposure were observed in the liver and heart, respectively, using whole-mount in situ hybridization, real-time PCR, pathologic analysis of the heart, and dissection of the liver. Our studies suggest that most PFCs are developmental toxicants and teratogens; moreover, compounds with higher numbers of carbons (i.e., PFDA and PFuDA) exert more potent effects.

Keywords: PFC, Xenopus, FETAX, development

Procedia PDF Downloads 331
3303 Machine Learning Approach for Stress Detection Using Wireless Physical Activity Tracker

Authors: B. Padmaja, V. V. Rama Prasad, K. V. N. Sunitha, E. Krishna Rao Patro

Abstract:

Stress is a psychological condition that reduces the quality of sleep and affects every facet of life. Constant exposure to stress is detrimental not only to the mind but also to the body; nevertheless, to cope with stress, one should first identify it. This paper provides an effective method for cognitive stress level detection using data provided by a Fitbit physical activity tracker, a device that gathers daily data on food, weight, sleep, heart rate, and physical activity. Four major stressors, physical activity, sleep pattern, working hours, and change in heart rate, are used to assess the stress levels of individuals. The main motive of this system is to apply a machine learning approach to stress detection with the help of smartphone sensor technology. The effect of each stressor is first evaluated individually using logistic regression; a combined model is then built and assessed using variants of ordinal logistic regression (logit, probit, and complementary log-log links). The quality of each model is evaluated using the Akaike Information Criterion (AIC), and probit is assessed as the most suitable model for our dataset. The system was experimented with and evaluated in a real-time environment using data from adults working in IT and other sectors in India. The novelty of this work lies in making the stress detection system as unobtrusive as possible for the users.
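The model-selection step the abstract describes, comparing ordinal-regression link functions by AIC, reduces to simple arithmetic once each model's log-likelihood is known: AIC = 2k − 2·lnL, and the lowest score wins. The log-likelihood values below are invented for illustration and are not taken from the paper; they are chosen so that, as in the study, the probit link comes out best.

```python
def aic(log_likelihood, n_params):
    """Akaike Information Criterion: 2k - 2*lnL, lower is better."""
    return 2 * n_params - 2 * log_likelihood

# Hypothetical fitted log-likelihoods for three ordinal link functions
# (illustrative values, not the paper's results)
fits = {"logit": (-412.7, 6), "probit": (-409.3, 6), "cloglog": (-415.1, 6)}
scores = {name: aic(ll, k) for name, (ll, k) in fits.items()}
best = min(scores, key=scores.get)
print(best, round(scores[best], 1))
```

With equal parameter counts the comparison collapses to the log-likelihoods themselves, but the 2k penalty matters whenever the candidate models differ in complexity.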

Keywords: physical activity tracker, sleep pattern, working hours, heart rate, smartphone sensor

Procedia PDF Downloads 241
3302 An Overview of the Porosity Classification in Carbonate Reservoirs and Their Challenges: An Example of Macro-Microporosity Classification from Offshore Miocene Carbonate in Central Luconia, Malaysia

Authors: Hammad T. Janjuhah, Josep Sanjuan, Mohamed K. Salah

Abstract:

Biological and chemical activities in carbonates are responsible for the complexity of the pore system. Primary porosity is generally of natural origin, while secondary porosity arises from chemical reactivity through diagenetic processes. To understand this integral part of hydrocarbon exploration, it is necessary to understand the carbonate pore system. However, current porosity classification schemes are too limited to adequately predict the petrophysical properties of different reservoirs of various origins and depositional environments. Rock classification provides a descriptive method for explaining the lithofacies but makes no significant contribution to the application of porosity and permeability (poro-perm) correlation. The Central Luconia carbonate system (Malaysia) represents a good example of pore complexity (in terms of nature and origin), mainly related to diagenetic processes which have altered the original reservoir. For quantitative analysis, 32 high-resolution images of each thin section were taken using transmitted light microscopy. The quantification of grains, matrix, cement, and macroporosity (pore types) was achieved using petrographic analysis of thin sections and FESEM images. The point counting technique was used to estimate the amount of macroporosity from thin sections, which was then subtracted from the total porosity to derive the microporosity. The quantitative observation of thin sections revealed that mouldic porosity (macroporosity) is the dominant porosity type present, whereas microporosity accounts for 40 to 50% of the total porosity. It has been shown that these Miocene carbonates contain a significant amount of microporosity, which considerably complicates the estimation and production of hydrocarbons; neglecting its impact can increase the uncertainty in estimating hydrocarbon reserves.
Due to the diversity of geological parameters, applying existing porosity classifications does not allow a better understanding of the poro-perm relationship. However, a classification can be improved by including pore types and pore structures, dividing them into macro- and microporosity. Such studies of microporosity identification and classification now represent a major concern in limestone reservoirs around the world.
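The derivation of microporosity described above, point-counted macroporosity subtracted from total porosity, is simple enough to state as a formula. The figures below are illustrative only (chosen so that microporosity falls in the 40-50% band the abstract reports), not measurements from the Central Luconia samples.

```python
def microporosity(total_porosity_pct, macro_points, total_points):
    """Microporosity as total porosity minus the macroporosity
    fraction estimated by point counting on a thin section."""
    macro_pct = 100.0 * macro_points / total_points
    return total_porosity_pct - macro_pct

# Illustrative figures: 20% total porosity, 55 of 500 counted points macro
micro = microporosity(20.0, 55, 500)
print(micro)           # microporosity in percent of bulk volume
print(micro / 20.0)    # fraction of total porosity that is micro
```

In this toy case the macroporosity is 11% of bulk volume, leaving 9% microporosity, i.e. 45% of the total porosity, consistent with the range quoted in the abstract.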

Keywords: overview of porosity classification, reservoir characterization, microporosity, carbonate reservoir

Procedia PDF Downloads 132
3301 Using Time Series NDVI to Model Land Cover Change: A Case Study in the Berg River Catchment Area, Western Cape, South Africa

Authors: Adesuyi Ayodeji Steve, Zahn Munch

Abstract:

This study investigates the use of MODIS NDVI to identify agricultural land cover change areas on an annual time step (2007-2012) and to characterize the trend in the study area. An ISODATA classification was performed on the MODIS imagery to select only the agricultural class, producing three class groups, namely agriculture, agriculture/semi-natural, and semi-natural. NDVI signatures were created for the time series to identify areas dominated by cereals and vineyards, with the aid of ancillary, pictometry, and field sample data. The NDVI signature curves and training samples aided in creating a decision tree model in WEKA 3.6.9. From the training samples, two classification models were built in WEKA using the decision tree classifier (J48) algorithm: Model 1 included the ISODATA classification and Model 2 did not, with accuracies of 90.7% and 88.3%, respectively. The two models were used to classify the whole study area, producing two land cover maps with classification accuracies of 77% (Model 1) and 80% (Model 2). Model 2 was used to create change detection maps for all the other years. Subtle changes and areas of consistency (unchanged) were observed in the agricultural classes and crop practices over the years, as predicted by the land cover classification. Cereals comprise 41% of the catchment, with 35% possibly following a crop rotation system. Vineyard largely remained constant over the years, with some conversion to vineyard (1%) from other land cover classes. Some of the changes might be the result of misclassification and the crop rotation system.
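The core idea of classifying pixels by their annual NDVI signature can be sketched without WEKA: compare each pixel's NDVI time series to reference signatures and assign the nearest class. This nearest-signature rule is a simpler stand-in for the J48 decision tree used in the study, and the 12-month curves below are invented, not Berg River data.

```python
import numpy as np

# Toy annual NDVI signatures (12 monthly values) for two cover types;
# illustrative shapes only: cereals peak sharply, vineyards stay flatter
cereal   = np.array([.2, .3, .5, .7, .8, .7, .5, .3, .2, .2, .2, .2])
vineyard = np.array([.3, .3, .4, .4, .5, .5, .5, .5, .4, .4, .3, .3])

def classify_pixel(ndvi_series, signatures):
    """Assign a pixel to the reference signature at minimum
    Euclidean distance from its NDVI time series."""
    return min(signatures, key=lambda k: np.linalg.norm(ndvi_series - signatures[k]))

sigs = {"cereal": cereal, "vineyard": vineyard}
pixel = cereal + 0.05   # a noisy cereal-like observation
print(classify_pixel(pixel, sigs))
```

A decision tree learns axis-aligned thresholds on individual months instead of whole-curve distances, but both exploit the same fact the abstract relies on: different crops trace distinguishable NDVI curves through the year.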

Keywords: change detection, land cover, MODIS, NDVI

Procedia PDF Downloads 381
3300 Ontology-Based Backpropagation Neural Network Classification and Reasoning Strategy for NoSQL and SQL Databases

Authors: Hao-Hsiang Ku, Ching-Ho Chi

Abstract:

Big data applications have become imperative for many fields, and many researchers have devoted themselves to increasing classification accuracy and reducing time complexity. Hence, this study designs and proposes an ontology-based backpropagation neural network classification and reasoning strategy for NoSQL big data applications, called ON4NoSQL. ON4NoSQL is responsible for enhancing the performance of classification in NoSQL and SQL databases in order to build mass behavior models. Mass behavior models are made with MapReduce techniques and the Hadoop distributed file system on a Hadoop service platform. The inference engine of ON4NoSQL is the ontology-based backpropagation neural network classification and reasoning strategy. Simulation results indicate that ON4NoSQL can efficiently construct a high-performance environment for data storing, searching, and retrieving.

Keywords: Hadoop, NoSQL, ontology, backpropagation neural network, Hadoop distributed file system

Procedia PDF Downloads 246
3299 An Exploratory Research of Human Character Analysis Based on Smart Watch Data: Distinguish the Drinking State from Normal State

Authors: Lu Zhao, Yanrong Kang, Lili Guo, Yuan Long, Guidong Xing

Abstract:

Smart watches, as handy devices with rich functionality, have become some of the most popular wearable devices all over the world. Among their various functions, the most basic is health monitoring. The monitoring data can serve as effective evidence or as a clue in the detection of crime cases; for instance, step counting data can help to determine whether the watch wearer was still or moving during a given time period. There is, however, still little research on the analysis of human character based on these data. The purpose of this research is to analyze health monitoring data to distinguish the drinking state from the normal state. The analysis result may play a role in cases involving drinking, such as drunk driving. The experiment focused on finding which figures in smart watch health monitoring data change with drinking and quantifying the extent of the change. The chosen subjects are mostly in their 20s, each of whom wore the same smart watch for a week. Each subject drank several times during the week and noted the beginning and end times of each drinking session. The researchers then extracted and analyzed the health monitoring data from the watch. According to the descriptive statistical analysis, the heart rate changes when drinking: the average heart rate is about 10% higher than normal, and the coefficient of variation falls to less than about 30% of its normal-state value. Though more research needs to be carried out, this experiment and analysis suggest a possible application of data from smart watches.
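The two descriptive statistics the study compares between states, mean heart rate and coefficient of variation (standard deviation divided by mean), are easy to compute from a heart-rate series. The beats-per-minute samples below are invented for illustration; they merely mimic the reported pattern of a roughly 10% higher mean and a lower CV while drinking.

```python
import statistics

def summarize(rates):
    """Mean and coefficient of variation (stdev / mean) of a
    heart-rate series."""
    m = statistics.mean(rates)
    cv = statistics.stdev(rates) / m
    return m, cv

# Illustrative beats-per-minute samples, not the study's data
normal   = [68, 70, 72, 75, 69, 71, 74, 70]
drinking = [78, 80, 76, 82, 79, 77, 81, 80]

m_n, cv_n = summarize(normal)
m_d, cv_d = summarize(drinking)
rise_pct = (m_d - m_n) / m_n * 100   # percent rise in mean while drinking
print(round(rise_pct, 1))
```

Comparing a scale-free statistic like CV alongside the raw mean is useful here because resting heart rate varies widely between individuals, while relative variability is more comparable across wearers.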

Keywords: character analysis, descriptive statistics analysis, drink state, heart rate, smart watch

Procedia PDF Downloads 149
3298 Analysis of Automotive Sensor for Engine Knock System

Authors: Miroslav Gutten, Jozef Jurcik, Daniel Korenciak, Milan Sebok, Matej Kuceraa

Abstract:

This paper deals with the phenomenon of undesirable detonation combustion in internal combustion engines. The engine control unit monitors these detonations using piezoelectric knock sensors, with which the detonations can be measured objectively outside the car. If the sensor provides only a small output voltage amplitude, knocking combustion in the engine may go undetected. The paper deals with the design of a simple device for the detection of this disorder: a construction of a testing device for the knock sensor, suitable for diagnostics of knock combustion in internal combustion engines, will be presented. The output signal of the presented sensor will be described by Bessel functions. Using the first voltage extremes of the characteristics, it is possible to create a reference for the evaluation of the polynomial residue. It should be taken into account that the velocity of sound in air is 330 m/s; this sound impinges on the walls of the combustion chamber and is detected by the sensor. The resonant frequency of engine knocking is usually in the range from 5 kHz to 15 kHz. The sensor operates in the range up to 37 kHz, where its own resonance must be taken into account.

Keywords: diagnostics, knock sensor, measurement, testing device

Procedia PDF Downloads 431
3297 Land Use Change Detection Using Satellite Images for Najran City, Kingdom of Saudi Arabia (KSA)

Authors: Ismail Elkhrachy

Abstract:

Determination of land use change is an important component of regional planning, for applications ranging from urban fringe change detection to monitoring land use change; these data are very useful for natural resources management. On the other hand, the technologies and methods of change detection have also evolved dramatically over the past 20 years, and it is well recognized that change detection on multi-temporal remotely sensed data has become the best method for researching dynamic change of land use. The objective of this paper is to assess, evaluate, and monitor land use change surrounding the area of Najran city, Kingdom of Saudi Arabia (KSA), using a Landsat image (June 23, 2009) and an ETM+ image (June 21, 2014). The post-classification change detection technique was applied: the two subset images of Najran city are compared on a pixel-by-pixel basis using the post-classification comparison method, the from-to change matrix is produced, and the land use change information is obtained. Three classes were obtained, urban, bare land, and agricultural land, from an unsupervised classification method using Erdas Imagine and ArcGIS software. Accuracy assessment of the classification was performed before calculating change detection for the study area; the obtained accuracy is between 61% and 87% for all the classes. Change detection analysis shows that the urban area grew rapidly, increasing by 73.2%, the agricultural area decreased by 10.5%, and the barren area was reduced by 7% between 2009 and 2014. The quantitative study indicated that the urban class had 58.2 km² unchanged, gained 70.3 km², and lost 16 km². The bare land class had 586.4 km² unchanged, gained 53.2 km², and lost 101.5 km². The agriculture class had 20.2 km² unchanged, gained 31.2 km², and lost 37.2 km².
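The pixel-by-pixel post-classification comparison the abstract describes produces a from-to matrix whose entry [i, j] counts pixels that moved from class i to class j; the diagonal holds the unchanged areas. The 4x4 rasters below are a toy example with the paper's three classes, not Najran data.

```python
import numpy as np

# Toy 4x4 classified rasters for two dates: 0=urban, 1=bare land, 2=agriculture
before = np.array([[0, 0, 1, 1], [0, 1, 1, 2], [1, 1, 2, 2], [1, 2, 2, 2]])
after  = np.array([[0, 0, 0, 1], [0, 0, 1, 2], [1, 1, 2, 2], [1, 2, 2, 1]])

def from_to_matrix(a, b, n_classes=3):
    """Entry [i, j] counts pixels that changed from class i to class j."""
    m = np.zeros((n_classes, n_classes), dtype=int)
    for i, j in zip(a.ravel(), b.ravel()):
        m[i, j] += 1
    return m

m = from_to_matrix(before, after)
print(m)
print(int(np.trace(m)), "of", before.size, "pixels unchanged")
```

Scaling each count by the pixel area turns these tallies into the unchanged/gained/lost km² figures of the kind reported above: row sums minus the diagonal give area lost by a class, column sums minus the diagonal give area gained.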

Keywords: land use, remote sensing, change detection, satellite images, image classification

Procedia PDF Downloads 506
3296 Development of a Regression Based Model to Predict Subjective Perception of Squeak and Rattle Noise

Authors: Ramkumar R., Gaurav Shinde, Pratik Shroff, Sachin Kumar Jain, Nagesh Walke

Abstract:

Advancements in electric vehicles have significantly reduced powertrain noise and the number of moving components in vehicles. As a result, in-cab noises have become more noticeable to passengers inside the car. To ensure a comfortable ride for drivers and other passengers, it has become crucial to eliminate undesirable component noises during the development phase. Standard practices are followed to identify the severity of noises based on subjective ratings, but it can be a tedious process to rate each development sample and make changes to reduce the noise. Additionally, the severity rating can vary from jury to jury, making it challenging to arrive at a definitive conclusion. To address this, an automotive component was identified for evaluating a squeak and rattle noise issue. Physical tests were carried out for random and sine excitation profiles. The aim was to subjectively assess the noise using the jury rating method and objectively evaluate it by measuring the noise. A suitable jury evaluation method was selected for this activity, and the recorded sounds were replayed for jury rating. Objective sound quality metrics, viz. loudness, sharpness, roughness, fluctuation strength, and overall Sound Pressure Level (SPL), were measured. Based on these, correlation coefficients were established to identify the sound quality metrics contributing most to the identified noise issue. Regression analysis was then performed to establish the correlation between subjective and objective data, and a mathematical model was prepared using a machine learning algorithm. The developed model was able to predict the subjective rating with good accuracy.

Keywords: BSR, noise, correlation, regression

Procedia PDF Downloads 63
3295 The Necessity to Standardize Procedures of Providing Engineering Geological Data for Designing Road and Railway Tunneling Projects

Authors: Atefeh Saljooghi Khoshkar, Jafar Hassanpour

Abstract:

One of the main problems at the design stage of many tunneling projects is the lack of an appropriate standard for providing engineering geological data in a predefined format. This is particularly evident in highway and railroad tunnel projects, which involve a number of tunnels and different professional teams. In this regard, comprehensive software needs to be designed, using accepted methods, to help engineering geologists prepare standard reports that contain sufficient input data for the design stage. To address this necessity, applied software has been designed using macro capabilities and Visual Basic for Applications (VBA) in Microsoft Excel. In this software, all of the engineering geological input data required for designing different parts of tunnels, such as discontinuity properties, rock mass strength parameters, rock mass classification systems, boreability classification, and the penetration rate, can be calculated and reported in a standard format.

Keywords: engineering geology, rock mass classification, rock mechanic, tunnel

Procedia PDF Downloads 59
3294 Defect Classification of Hydrogen Fuel Pressure Vessels using Deep Learning

Authors: Dongju Kim, Youngjoo Suh, Hyojin Kim, Gyeongyeong Kim

Abstract:

Acoustic Emission Testing (AET) is widely used to test the structural integrity of an operational hydrogen storage container, and clustering algorithms are frequently used in pattern recognition methods to interpret AET results. However, the interpretation of AET results can vary from user to user as the tuning of the relevant parameters relies on the user's experience and knowledge of AET. Therefore, it is necessary to use a deep learning model to identify patterns in acoustic emission (AE) signal data that can be used to classify defects instead. In this paper, a deep learning-based model for classifying the types of defects in hydrogen storage tanks, using AE sensor waveforms, is proposed. As hydrogen storage tanks are commonly constructed using carbon fiber reinforced polymer composite (CFRP), a defect classification dataset is collected through a tensile test on a specimen of CFRP with an AE sensor attached. The performance of the classification model, using one-dimensional convolutional neural network (1-D CNN) and synthetic minority oversampling technique (SMOTE) data augmentation, achieved 91.09% accuracy for each defect. It is expected that the deep learning classification model in this paper, used with AET, will help in evaluating the operational safety of hydrogen storage containers.

Keywords: acoustic emission testing, carbon fiber reinforced polymer composite, one-dimensional convolutional neural network, smote data augmentation

Procedia PDF Downloads 74
3293 Classification of Manufacturing Data for Efficient Processing on an Edge-Cloud Network

Authors: Onyedikachi Ulelu, Andrew P. Longstaff, Simon Fletcher, Simon Parkinson

Abstract:

The widespread interest in 'Industry 4.0' or 'digital manufacturing' has led to significant research requiring the acquisition of data from sensors, instruments, and machine signals. In-depth research then identifies methods of analysis of the massive amounts of data generated before and during manufacture to solve a particular problem. The ultimate goal is for industrial Internet of Things (IIoT) data to be processed automatically to assist with either visualisation or autonomous system decision-making. However, the collection and processing of data in an industrial environment come with a cost. Little research has been undertaken on how to specify optimally what data to capture, transmit, process, and store at various levels of an edge-cloud network. The first step in this specification is to categorise IIoT data for efficient and effective use. This paper proposes the required attributes and classification to take manufacturing digital data from various sources to determine the most suitable location for data processing on the edge-cloud network. The proposed classification framework will minimise overhead in terms of network bandwidth/cost and processing time of machine tool data via efficient decision making on which dataset should be processed at the ‘edge’ and what to send to a remote server (cloud). A fast-and-frugal heuristic method is implemented for this decision-making. The framework is tested using case studies from industrial machine tools for machine productivity and maintenance.
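A fast-and-frugal heuristic of the kind the paper implements is an ordered list of cues, each of which can decide the placement on its own; the first cue that fires wins. The cues and thresholds below are assumptions made for illustration, not the authors' decision framework or attribute set.

```python
def place_dataset(size_mb, latency_critical, needs_history):
    """Fast-and-frugal decision list for edge vs. cloud processing.

    One-reason decision making: cues are checked in a fixed order and
    the first applicable cue determines the outcome. All cues and
    thresholds here are hypothetical.
    """
    if latency_critical:          # real-time machine response: stay local
        return "edge"
    if needs_history:             # long-term trend analysis needs cloud storage
        return "cloud"
    if size_mb > 100:             # large transfers cost bandwidth: keep local
        return "edge"
    return "cloud"                # default: cheap to ship, process remotely

print(place_dataset(5, latency_critical=True, needs_history=False))
```

The appeal for an edge-cloud network is that the decision itself is cheap: a handful of comparisons per dataset, so the routing logic adds negligible overhead to the data it is routing.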

Keywords: data classification, decision making, edge computing, industrial IoT, industry 4.0

Procedia PDF Downloads 160
3292 A Statistical Approach to Predict and Classify the Commercial Hatchability of Chickens Using Extrinsic Parameters of Breeders and Eggs

Authors: M. S. Wickramarachchi, L. S. Nawarathna, C. M. B. Dematawewa

Abstract:

Hatchery performance is critical for the profitability of poultry breeder operations, and some extrinsic parameters of eggs and breeders can increase or decrease hatchability. This study aims to identify the extrinsic parameters affecting the commercial hatchability of local chickens' eggs and to determine the most efficient classification model, with a hatchability rate greater than 90%. Seven extrinsic parameters were considered: egg weight, moisture loss, breeder age, number of fertilised eggs, shell width, shell length, and shell thickness. Multiple linear regression was performed to determine the most influential variables on hatchability. First, the correlation between each parameter and hatchability was checked. Then a multiple regression model was developed, and the accuracy of the fitted model was evaluated. Linear Discriminant Analysis (LDA), Classification and Regression Trees (CART), k-Nearest Neighbors (kNN), Support Vector Machines (SVM) with a linear kernel, and Random Forest (RF) algorithms were applied to classify the hatchability; this grouping was conducted using binary classification techniques. Hatchability was negatively correlated with egg weight, breeder age, shell width, and shell length, and positively correlated with moisture loss, number of fertilised eggs, and shell thickness. The multiple linear regression model was more accurate than single-variable linear models, with the highest coefficient of determination (R²) of 94% and minimum AIC and BIC values. According to the classification results, RF, CART, and kNN achieved the highest accuracy values of 0.99, 0.975, and 0.972, respectively, for the commercial hatchery process. Therefore, RF is the most appropriate machine learning algorithm for classifying whether breeder outcomes are economically profitable in a commercial hatchery.
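The regression step described above, fitting hatchability against several extrinsic parameters at once and judging the fit by R², can be sketched with ordinary least squares. The predictor values and hatchability percentages below are fabricated for illustration (deliberately made near-linear in egg weight); they are not the study's data.

```python
import numpy as np

# Toy predictor matrix: [egg_weight_g, moisture_loss_pct, breeder_age_wk]
X = np.array([[55, 11, 30], [58, 12, 35], [60, 10, 40],
              [62, 13, 45], [57, 12, 32], [61, 11, 42]], float)
y = np.array([92, 90, 88, 85, 91, 86], float)   # hatchability %, illustrative

# Ordinary least squares with an intercept column: the same fitting
# principle as the study's multiple linear regression
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ coef
r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
print(round(float(r2), 2))
```

On real data one would also compare AIC/BIC across candidate variable subsets, as the study does, since R² alone always favours the model with more predictors.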

Keywords: classification models, egg weight, fertilised eggs, multiple linear regression

Procedia PDF Downloads 73
3291 Local Directional Encoded Derivative Binary Pattern Based Coral Image Classification Using Weighted Distance Gray Wolf Optimization Algorithm

Authors: Annalakshmi G., Sakthivel Murugan S.

Abstract:

This paper presents a local directional encoded derivative binary pattern (LDEDBP) feature extraction method that can be applied to the classification of submarine coral reef images. The classification of coral reef images using texture features is difficult due to the dissimilarities among class samples. In coral reef image classification, texture features are extracted using the proposed LDEDBP method. The proposed approach extracts the complete structural arrangement of the local region using the local binary pattern (LBP) and also extracts edge information using the local directional pattern (LDP) from the edge response in a particular region, thereby achieving an extra discriminative feature value. Typically, the LDP extracts the edge details in all eight directions. Integrating the edge responses with the local binary pattern yields a more robust texture descriptor than the other descriptors used in texture feature extraction methods. Finally, the proposed technique is applied to an extreme learning machine (ELM) with a meta-heuristic algorithm known as the weighted distance grey wolf optimizer (WDGWO) to optimize the input weights and biases of single-hidden-layer feed-forward neural networks (SLFNs). In the empirical results, ELM-WDGWO demonstrated better performance in terms of accuracy on all coral datasets, namely RSMAS, EILAT, EILAT2, and MLC, compared with other state-of-the-art algorithms. The proposed method achieves the highest overall classification accuracy, 94%, compared to the other state-of-the-art methods.
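As a rough illustration of the texture-descriptor family that LDEDBP builds on, a basic 8-neighbour local binary pattern and its histogram feature can be sketched as below. This is a minimal sketch of plain LBP only, not of the proposed LDEDBP or LDP operators, and the toy image is synthetic.

```python
import numpy as np

def local_binary_pattern(img):
    """Basic 8-neighbour LBP codes for the interior pixels of a grayscale image.

    Each neighbour that is >= the centre pixel contributes one bit
    to an 8-bit code, encoding the local structural arrangement.
    """
    c = img[1:-1, 1:-1]
    # Neighbour offsets, clockwise from top-left, with their bit weights.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.uint8) << bit
    return code

def lbp_histogram(img):
    # 256-bin normalised histogram used as the texture feature vector.
    h = np.bincount(local_binary_pattern(img).ravel(), minlength=256)
    return h / h.sum()

rng = np.random.default_rng(1)
feat = lbp_histogram(rng.integers(0, 256, size=(32, 32)))
```

The proposed LDEDBP additionally folds in directional edge responses before histogramming, which is what makes the final descriptor more discriminative than plain LBP.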

Keywords: feature extraction, local directional pattern, ELM classifier, GWO optimization

Procedia PDF Downloads 146
3290 Anaesthetic Management of Congenitally Corrected Transposition of Great Arteries with Complete Heart Block in a Parturient for Emergency Caesarean Section

Authors: Lokvendra S. Budania, Yogesh K Gaude, Vamsidhar Chamala

Abstract:

Introduction: Congenitally corrected transposition of the great arteries (CCTGA) is a complex congenital heart disease in which there is both atrioventricular and ventriculoarterial discordance, usually accompanied by other cardiovascular malformations. Case Report: A 24-year-old primigravida, a known case of CCTGA at 37 weeks of gestation, was referred to our hospital for safe delivery. Her electrocardiogram showed a heart rate of 40 bpm; echocardiography showed an ejection fraction of 65% and CCTGA. A temporary pacemaker was inserted by a cardiologist in the catheterization laboratory before a trial of labour, in view of the complete heart block. She was planned for normal delivery, but an emergency Caesarean section was decided upon due to non-reassuring foetal cardiotocography. Pre-op vitals showed a pulse rate of 50 bpm with the temporary pacemaker, blood pressure of 110/70 mmHg, and SpO2 of 99% on room air. The nil-per-oral period was inadequate. The patency of two peripheral IV cannulae was checked, and a left radial arterial line was secured. Epidural anaesthesia was planned, and a catheter was placed at L2-L3. A test dose was given; anaesthesia was achieved with 5 mL + 5 mL of 2% lignocaine with 25 mcg fentanyl, and a further 2.5 mL of 0.5% bupivacaine was given to reach a sensory level of T6. The Caesarean section was performed, and the baby was delivered. Cautery was avoided during the procedure. IV oxytocin (15 U) was added to 500 mL of Ringer's lactate. Hypotension was treated with phenylephrine boluses. The patient was shifted to the post-operative care unit and later to the high-dependency unit for monitoring. Post-op vitals remained stable. The temporary pacemaker was removed 24 hours after surgery. Her post-operative period was uneventful, and she was discharged from the hospital. Conclusion: Rare congenital cardiac disorders require detailed knowledge of the pathophysiology and of the comorbidities associated with the disease. Meticulously planned and carefully titrated neuraxial techniques are beneficial in such cases.

Keywords: congenitally corrected transposition of great arteries, complete heart block, emergency LSCS, epidural anaesthesia

Procedia PDF Downloads 115
3289 Kannada Handwritten Character Recognition by Edge Hinge and Edge Distribution Techniques Using Manhattan and Minimum Distance Classifiers

Authors: C. V. Aravinda, H. N. Prakash

Abstract:

In this paper, we present fusion techniques and the state of the art pertaining to South Indian language (SIL) character recognition systems. In the first step, the text is preprocessed and normalized so that text identification can be performed correctly. The second step involves extracting relevant and informative features. The third step implements the classification decision. The three stages involved are data acquisition and preprocessing, feature extraction, and classification. Here we concentrate on two techniques for obtaining features: feature extraction and feature selection. The edge-hinge distribution is a feature that characterizes the changes in direction of a script stroke in handwritten text. It is extracted by means of a window that is slid over an edge-detected binary handwriting image. Whenever the mid pixel of the window is on, the two edge fragments (i.e., connected sequences of pixels) emerging from this mid pixel are traced, their directions are measured, and the direction pairs are stored. A joint probability distribution is obtained from a large sample of such pairs. Despite continuous effort, handwriting identification remains a challenging issue, because different approaches use different varieties of features. Therefore, our study focuses on handwriting recognition based on feature selection to simplify the feature extraction task, optimize classification system complexity, reduce running time, and improve classification accuracy.
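The edge-hinge idea described above can be sketched directly: visit each on-pixel of an edge-detected binary image, record the directions of the on-neighbours around it as pairs, and accumulate a joint histogram. This is a simplified sketch (single-pixel neighbourhoods rather than traced edge fragments, and a toy image), not the authors' implementation.

```python
import numpy as np

# Eight neighbour directions (clockwise from top-left), each a direction bin 0..7.
DIRS = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]

def edge_hinge_distribution(edges):
    """Joint distribution of neighbour-direction pairs around on-pixels.

    For every on-pixel, every ordered pair of distinct on-neighbour
    directions votes into an 8x8 joint histogram, which is then
    normalised into a probability distribution.
    """
    hist = np.zeros((8, 8))
    H, W = edges.shape
    for y in range(1, H - 1):
        for x in range(1, W - 1):
            if not edges[y, x]:
                continue
            on = [d for d, (dy, dx) in enumerate(DIRS) if edges[y + dy, x + dx]]
            for a in on:
                for b in on:
                    if a != b:
                        hist[a, b] += 1
    total = hist.sum()
    return hist / total if total else hist

# Toy edge image: a horizontal stroke of on-pixels.
img = np.zeros((5, 7), dtype=bool)
img[2, 1:6] = True
phi = edge_hinge_distribution(img)
```

For the horizontal stroke, all the mass lands on the (right, left) direction pair, which is exactly the kind of stroke-direction signature the feature is designed to capture.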

Keywords: word segmentation and recognition, character recognition, optical character recognition, hand written character recognition, South Indian languages

Procedia PDF Downloads 476
3288 Music Genre Classification Based on Non-Negative Matrix Factorization Features

Authors: Soyon Kim, Edward Kim

Abstract:

In order to retrieve information from the massive stream of songs in the music industry, music search by title, lyrics, artist, mood, and genre has become more important. Despite the subjectivity and controversy over the definition of music genres across different nations and cultures, automatic genre classification systems that facilitate the process of music categorization have been developed. Manual genre selection by music producers provides the statistical data for designing automatic genre classification systems. In this paper, an automatic music genre classification system utilizing non-negative matrix factorization (NMF) is proposed. Short-term characteristics of the music signal can be captured with timbre features such as the mel-frequency cepstral coefficient (MFCC), decorrelated filter bank (DFB), octave-based spectral contrast (OSC), and octave band sum (OBS). Long-term time-varying characteristics of the music signal can be summarized with (1) statistical features such as the mean, variance, minimum, and maximum of the timbre features and (2) modulation spectrum features such as the spectral flatness measure, spectral crest measure, spectral peak, spectral valley, and spectral contrast of the timbre features. NMF-based feature vectors are proposed to be used for genre classification together with these conventional basic long-term feature vectors. In the training stage, NMF basis vectors were extracted for each genre class. The NMF features were calculated in the log spectral magnitude domain (NMF-LSM) as well as in the basic feature vector domain (NMF-BFV). For NMF-LSM, the entire full-band spectrum was used. For NMF-BFV, however, only the low-band spectrum was used, since the high-frequency modulation spectrum of the basic feature vectors did not contain important information for genre classification.
In the test stage, using the set of pre-trained NMF basis vectors, the genre classification system extracted the NMF weighting values of each genre as the NMF feature vectors. A support vector machine (SVM) was used as the classifier. The GTZAN multi-genre music database, composed of 10 genres with 100 songs each, was used for training and testing. To increase the reliability of the experiments, 10-fold cross-validation was used. For a given input song, an extracted NMF-LSM feature vector was composed of 10 weighting values that corresponded to the classification probabilities for the 10 genres. An NMF-BFV feature vector likewise had a dimensionality of 10. Combined with the basic long-term features, i.e., the statistical and modulation spectrum features, the NMF features provided increased accuracy with only a slight increase in feature dimensionality. The conventional basic features by themselves yielded 84.0% accuracy, but the basic features with NMF-LSM and with NMF-BFV provided 85.1% and 84.2% accuracy, respectively. The basic features required a dimensionality of 460, whereas NMF-LSM and NMF-BFV each required a dimensionality of only 10. Combining the basic features, NMF-LSM, and NMF-BFV with an SVM using a radial basis function (RBF) kernel produced a significantly higher classification accuracy of 88.3% with a feature dimensionality of 480.
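The NMF train/test split described above (learn basis vectors in training, then fix the basis and extract only the weighting values for a new input) can be sketched with plain multiplicative updates. This is a generic NMF sketch on toy data, not the authors' implementation; the rank, iteration counts, and random data are illustrative assumptions.

```python
import numpy as np

def nmf(V, r, iters=500, seed=0):
    """Training stage: factorise non-negative V ≈ W H by multiplicative updates."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, r)) + 1e-3
    H = rng.random((r, m)) + 1e-3
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)
    return W, H

def encode(W, v, iters=200):
    """Test stage: with basis W fixed, find non-negative weights h with v ≈ W h.

    The resulting h plays the role of the NMF feature vector fed to the SVM.
    """
    h = np.full(W.shape[1], 1.0)
    for _ in range(iters):
        h *= (W.T @ v) / (W.T @ (W @ h) + 1e-9)
    return h

rng = np.random.default_rng(1)
V = rng.random((6, 2)) @ rng.random((2, 8))   # exactly rank-2 non-negative data
W, H = nmf(V, r=2)
rel_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
h = encode(W, V[:, 0])
```

In the paper this encoding is done once per genre basis, giving a 10-dimensional weighting vector per song for NMF-LSM and NMF-BFV alike.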

Keywords: mel-frequency cepstral coefficient (MFCC), music genre classification, non-negative matrix factorization (NMF), support vector machine (SVM)

Procedia PDF Downloads 278
3287 Dual-Channel Reliable Breast Ultrasound Image Classification Based on Explainable Attribution and Uncertainty Quantification

Authors: Haonan Hu, Shuge Lei, Dasheng Sun, Huabin Zhang, Kehong Yuan, Jian Dai, Jijun Tang

Abstract:

This paper focuses on the classification of breast ultrasound images and investigates the reliability measurement of classification results. A dual-channel evaluation framework was developed based on the proposed inference-reliability and predictive-reliability scores. For the inference-reliability evaluation, human-aligned, doctor-agreed inference rationales based on the improved feature attribution algorithm SP-RISA are applied. Uncertainty quantification is used to evaluate predictive reliability via test-time enhancement. The effectiveness of this reliability evaluation framework has been verified on the breast ultrasound clinical dataset YBUS, and its robustness has been verified on the public dataset BUSI. The expected calibration errors on both datasets are significantly lower than those of traditional evaluation methods, demonstrating the effectiveness of the proposed reliability measurement.
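The predictive-reliability channel, i.e., uncertainty quantification via test-time enhancement, can be illustrated in miniature: perturb the input several times, collect the model's outputs, and treat their spread as the uncertainty score. The stand-in `model` and the Gaussian perturbation below are assumptions for illustration only; the paper's actual network and enhancement operations are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    """Stand-in binary classifier: returns P(positive) for an image x.

    A placeholder for a trained network; here just a sigmoid of the
    mean pixel value so the sketch is self-contained.
    """
    return 1.0 / (1.0 + np.exp(-x.mean()))

def tta_uncertainty(x, n_aug=32, noise=0.3):
    """Test-time-enhancement sketch: perturb the input, collect the model's
    probabilities, and report the mean prediction plus its standard
    deviation as a simple predictive-reliability score (higher std =
    less reliable prediction)."""
    probs = np.array([model(x + rng.normal(scale=noise, size=x.shape))
                      for _ in range(n_aug)])
    return probs.mean(), probs.std()

x = rng.normal(loc=2.0, scale=0.5, size=(16, 16))   # toy "image"
p_mean, p_std = tta_uncertainty(x)
```

A well-calibrated score of this kind is what drives the low expected calibration errors reported on YBUS and BUSI.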

Keywords: medical imaging, ultrasound imaging, XAI, uncertainty measurement, trustworthy AI

Procedia PDF Downloads 72
3286 A Multi-Output Network with U-Net Enhanced Class Activation Map and Robust Classification Performance for Medical Imaging Analysis

Authors: Jaiden Xuan Schraut, Leon Liu, Yiqiao Yin

Abstract:

Computer vision in medical diagnosis has achieved a high level of success in diagnosing diseases with high accuracy. However, conventional classifiers that produce an image-to-label result provide insufficient information for medical professionals to judge, raising concerns over the trust and reliability of a model whose results cannot be explained. In order to gain local insight into cancerous regions, separate tasks such as image segmentation need to be implemented to aid doctors in treating patients, which doubles the training time and cost and renders the diagnosis system inefficient and difficult for the public to accept. To tackle this issue and drive AI-first medical solutions further, this paper proposes a multi-output network that follows a U-Net architecture for the image segmentation output and features an additional convolutional neural network (CNN) module for an auxiliary classification output. Class activation maps (CAMs) provide insight into the feature maps that lead to a convolutional neural network's classification; for lung diseases, the region of interest is enhanced by U-Net-assisted CAM visualization. Our proposed model therefore combines an image segmentation model and a classifier to crop out only the lung region of a chest X-ray's class activation map, providing a visualization that improves explainability while generating classification results simultaneously, which builds trust in AI-led diagnosis systems. The proposed U-Net model achieves 97.61% accuracy and a Dice coefficient of 0.97 on testing data from the COVID-QU-Ex dataset, which includes both diseased and healthy lungs.
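The U-Net-assisted CAM idea (compute a class activation map from the final convolutional features, then keep only the part inside the predicted lung mask) can be sketched as follows. The feature maps, classifier weights, and mask below are toy stand-ins, not outputs of the proposed network.

```python
import numpy as np

def class_activation_map(features, weights):
    """Plain CAM: weighted sum of the final conv feature maps (C, h, w) using
    the classifier weights for one class, followed by min-max normalisation."""
    cam = np.tensordot(weights, features, axes=1)   # -> (h, w)
    cam -= cam.min()
    if cam.max() > 0:
        cam /= cam.max()
    return cam

def lung_masked_cam(features, weights, lung_mask):
    """U-Net-assisted CAM: keep only the activation that falls inside the
    predicted lung segmentation mask."""
    cam = class_activation_map(features, weights)
    scale = lung_mask.shape[0] // cam.shape[0]
    cam_up = np.kron(cam, np.ones((scale, scale)))  # nearest-neighbour upsample
    return cam_up * lung_mask

rng = np.random.default_rng(0)
features = rng.random((8, 7, 7))       # toy final conv block (C=8, 7x7)
weights = rng.random(8)                # toy classifier weights for one class
mask = np.zeros((28, 28))
mask[:, :14] = 1.0                     # toy "left lung" mask
cam = lung_masked_cam(features, weights, mask)
```

Cropping the heatmap to the segmented lungs is what turns a generic CAM into the anatomically focused visualization described above.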

Keywords: multi-output network model, U-net, class activation map, image classification, medical imaging analysis

Procedia PDF Downloads 176
3285 Developing a Tissue-Engineered Aortic Heart Valve Based on an Electrospun Scaffold

Authors: Sara R. Knigge, Sugat R. Tuladhar, Alexander Becker, Tobias Schilling, Birgit Glasmacher

Abstract:

Commercially available mechanical and biological heart valve prostheses both tend to fail in the long term due to thrombosis, calcific degeneration, infection, or immunogenic rejection. Moreover, these prostheses are non-viable and do not grow with the patient, which is a problem for young patients. As a result, patients often need to undergo redo operations. Tissue-engineered (TE) heart valves based on degradable electrospun fiber scaffolds represent a promising approach to overcoming these limitations. Such scaffolds need sufficient mechanical properties to withstand the hydrodynamic stress of intracardiac hemodynamics. Additionally, the scaffolds should be colonized by autologous or homologous cells to facilitate the in vivo remodeling of the scaffolds into a viable structure. This study investigates how the process parameters of electrospinning and of degradation affect the mechanical properties of electrospun scaffolds made of the FDA-approved, biodegradable polymer polycaprolactone (PCL). Fiber mats were produced from a PCL/tetrafluoroethylene solution by electrospinning. The electrospinning process was varied in terms of scaffold thickness, fiber diameter, fiber orientation, and fiber interconnectivity. The morphology of the fiber mats was characterized with a scanning electron microscope (SEM). The mats were degraded in different solutions (cell culture media, SBF, PBS, and 10 M NaOH solution). At different time points of degradation (2, 4, and 6 weeks), tensile and cyclic loading tests were performed. Fresh porcine pericardium and heart valves served as controls for the mechanical assessment. The progression of polymer degradation was quantified by SEM and differential scanning calorimetry (DSC). Primary human aortic endothelial cells (HAECs) and human induced pluripotent stem cell-derived endothelial cells (iPSC-ECs) were seeded on the fiber mats to investigate the cell colonization potential.
The results showed that both the electrospinning parameters and the degradation significantly influenced the mechanical properties. The fiber orientation in particular has a considerable impact and leads to pronounced anisotropic behavior of the scaffold. Preliminary results showed that the polymer became markedly more brittle over time. However, the embrittlement could initially be detected only in the mechanical tests; in the SEM and DSC investigations, neither morphological nor thermodynamic changes were significantly detectable. Live/dead staining and SEM imaging of the cell-seeded scaffolds showed that HAECs and iPSC-ECs were able to grow on the surface of the polymer. In summary, this study's results indicate a promising approach to the development of a TE aortic heart valve based on an electrospun scaffold.

Keywords: electrospun scaffolds, long-term polymer degradation, mechanical behavior of electrospun PCL, tissue engineered aortic heart valve

Procedia PDF Downloads 123
3284 Comparative Efficacy of Angiotensin-Converting Enzyme Inhibitors and Angiotensin Receptor Blockers in Patients with Heart Failure in Tanzania: A Prospective Cohort Study

Authors: Mark P. Mayala, Henry Mayala, Khuzeima Khanbhai

Abstract:

Background: Heart failure has been a rising concern in Tanzania. New drugs have been introduced, including the group called angiotensin receptor-neprilysin inhibitors (ARNIs), but due to their high cost, angiotensin-converting enzyme inhibitors (ACEIs) and angiotensin receptor blockers (ARBs) have mostly been used in Tanzania. However, to our knowledge, the efficacy of the two groups has not yet been compared in Tanzania. The aim of this study was to compare the efficacy of ACEIs and ARBs among patients with heart failure. Methodology: This was a hospital-based prospective cohort study done at the Jakaya Kikwete Cardiac Institute (JKCI), Tanzania, from June to December 2020. Patients fulfilling the inclusion criteria were enrolled consecutively. Clinical details were recorded at baseline. We assessed the relationship between ARB and ACEI use and N-terminal pro-brain natriuretic peptide (NT-proBNP) levels at admission and at 1-month follow-up using a chi-square test. A Kaplan-Meier curve was used to estimate the survival of the two groups. Results: 155 HF patients were enrolled, with a mean age of 48 years; 52.3% were male, and the mean left ventricular ejection fraction (LVEF) was 37.3%. 52 (33.5%) heart failure patients were on ACEIs, 57 (36.8%) on ARBs, and 46 (29.7%) were on neither. Nearly half of the patients did not receive guideline-directed medical therapy (GDMT); only 82 (52.9%) received it. A drop in NT-proBNP levels was observed between admission and 1-month follow-up in both groups, from 6389.2 pg/ml to 4000.1 pg/ml for ARB users and from 5877.7 pg/ml to 1328.2 pg/ml for ACEI users. There was no statistically significant difference between the two groups in the Kaplan-Meier estimate, though more deaths were observed among those on neither ACEIs nor ARBs, with a calculated P value of 0.01.
Conclusion: This study suggests that ACEIs have greater efficacy and better overall clinical outcomes than ARBs, but treatment should be decided on a patient-by-patient basis, considering the side effects of ACEIs and patient adherence.

Keywords: angiotensin converting enzymes inhibitors, angiotensin receptor blockers, guideline direct medical therapy, N-terminal pro-brain natriuretic peptide

Procedia PDF Downloads 71
3283 Phenotype Prediction of DNA Sequence Data: A Machine and Statistical Learning Approach

Authors: Mpho Mokoatle, Darlington Mapiye, James Mashiyane, Stephanie Muller, Gciniwe Dlamini

Abstract:

Great advances in high-throughput sequencing technologies have resulted in the availability of huge amounts of sequencing data in public and private repositories, enabling a holistic understanding of complex biological phenomena. Sequence data are used for a wide range of applications such as gene annotation, expression studies, personalized treatment, and precision medicine. However, this rapid growth in sequence data poses a great challenge, calling for novel data processing and analytic methods as well as huge computing resources. In this work, a machine and statistical learning approach for DNA sequence classification based on a k-mer representation of sequence data is proposed. The approach is tested using whole-genome sequences of Mycobacterium tuberculosis (MTB) isolates to (i) reduce the size of genomic sequence data, (ii) identify an optimum k-mer size and utilize it to build classification models, (iii) predict the phenotype from the whole-genome sequence data of a given bacterial isolate, and (iv) demonstrate the computing challenges associated with the analysis of whole-genome sequence data in producing interpretable and explainable insights. The classification models were trained on 104 whole-genome sequences of MTB isolates. Cluster analysis showed that k-mers may be used to discriminate phenotypes, and the discrimination becomes more concise as the k-mer size increases. The best-performing classification model had a k-mer size of 10 (the longest k-mer) and an accuracy, recall, precision, specificity, and Matthews correlation coefficient of 72.0%, 80.5%, 80.5%, 63.6%, and 0.4, respectively. This study provides a comprehensive approach for resampling whole-genome sequencing data, objectively selecting a k-mer size, and performing classification for phenotype prediction.
The analysis also highlights the importance of increasing the k-mer size to produce more biologically explainable results, which brings to the fore the interplay among accuracy, computing resources, and the explainability of classification results. Moreover, the analysis provides a new way to elucidate genetic information from genomic data and to identify phenotype relationships, which is important especially for explaining complex biological mechanisms.
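The k-mer representation underlying the approach can be sketched in a few lines: slide a window of length k over the sequence and count occurrences over the full 4^k alphabet, yielding a fixed-length feature vector per genome. The short toy sequence is illustrative only; real MTB genomes at k = 10 would call for a sparse representation.

```python
from collections import Counter
from itertools import product

def kmer_profile(seq, k):
    """Normalised k-mer frequency vector for a DNA sequence.

    Counting over the full 4**k alphabet means every sequence maps to
    a feature vector of the same fixed length, ready for a classifier.
    """
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = max(len(seq) - k + 1, 1)
    kmers = ["".join(p) for p in product("ACGT", repeat=k)]
    return [counts[m] / total for m in kmers]

# Toy example with k=2: a 16-dimensional feature vector for one short "genome".
profile = kmer_profile("ACGTACGT", 2)
```

As the abstract notes, accuracy and explainability improve with larger k, but the 4^k feature space is exactly where the computing-resource trade-off bites.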

Keywords: AWD-LSTM, bootstrapping, k-mers, next generation sequencing

Procedia PDF Downloads 149
3281 The Use of Biofeedback to Increase Resilience and Mental Health of Supersonic Pilots

Authors: G. Kloudova, S. Kozlova, M. Stehlik

Abstract:

Pilots operate in a high-risk environment rich in potential stressors, which negatively affect aviation safety and the mental health of pilots. In the research conducted, the pilots were offered mental training with biofeedback therapy. Biofeedback is an objective tool for measuring physiological responses to stress. After only six sessions, all of the pilots tested showed significant differences between their initial condition and their condition after therapy. The biggest improvement was found in decreased heart rate (in 83.3% of the tested pilots) and respiration rate (66.7%), which are the best indicators of anxiety states and panic attacks. To incorporate all of the variables, we correlated the measured physiological state of the pilots with their personality traits. Surprisingly, we found a high correlation between peripheral temperature and confidence (0.98) and between heart rate and aggressiveness (0.97). A retest after a one-year interval showed that the majority of the tested subjects had internalized the acquired self-regulation ability.

Keywords: aviation, biofeedback, mental workload, performance psychology

Procedia PDF Downloads 227
3280 Classification of EEG Signals Based on Dynamic Connectivity Analysis

Authors: Zoran Šverko, Saša Vlahinić, Nino Stojković, Ivan Markovinović

Abstract:

In this article, the classification of target letters is performed using data from the EEG P300 speller paradigm. Neural networks trained on the results of dynamic connectivity analysis between different brain regions are used for classification. The dynamic connectivity analysis is based on an adaptive window size and the imaginary part of the complex Pearson correlation coefficient. Brain dynamics are analysed using the relative intersection of confidence intervals for the imaginary component of the complex Pearson correlation coefficient (RICI-imCPCC) method. The RICI-imCPCC method overcomes the shortcomings of currently used dynamic connectivity analysis methods: the low reliability and low temporal precision for short connectivity intervals encountered in constant sliding-window analysis with a wide window, and the high susceptibility to noise encountered in constant sliding-window analysis with a narrow window. It does so by dynamically adjusting the window size using the RICI rule, extracting information about brain connections for each time sample. Seventy percent of the extracted brain connectivity information is used for training and thirty percent for validation. Classification of the target word is also performed based on the same analysis method. As far as we know, this research shows for the first time that dynamic connectivity can be used as a parameter for classifying EEG signals.
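The connectivity measure at the core of the method, the imaginary part of the complex Pearson correlation coefficient, can be sketched without the adaptive-window (RICI) machinery: form each signal's analytic representation, correlate, and keep the imaginary part, which vanishes for zero-lag coupling. The FFT-based Hilbert transform and the toy sinusoids below are illustrative stand-ins for real EEG channels.

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via FFT (a numpy stand-in for scipy.signal.hilbert):
    zero the negative frequencies and double the positive ones."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1
    if n % 2 == 0:
        h[n // 2] = 1
        h[1:n // 2] = 2
    else:
        h[1:(n + 1) // 2] = 2
    return np.fft.ifft(X * h)

def imag_cpcc(x, y):
    """Imaginary part of the complex Pearson correlation coefficient between
    two signals' analytic representations. It is insensitive to zero-lag
    coupling (e.g. volume conduction), which motivates its use for EEG."""
    a, b = analytic_signal(x), analytic_signal(y)
    a = a - a.mean()
    b = b - b.mean()
    r = np.sum(a * np.conj(b)) / np.sqrt(
        np.sum(np.abs(a) ** 2) * np.sum(np.abs(b) ** 2))
    return r.imag

t = np.linspace(0, 1, 500, endpoint=False)
x = np.sin(2 * np.pi * 10 * t)
same = np.sin(2 * np.pi * 10 * t)                 # zero-lag copy
lagged = np.sin(2 * np.pi * 10 * t - np.pi / 2)   # quarter-cycle lag
r_zero = imag_cpcc(x, same)
r_lag = imag_cpcc(x, lagged)
```

The zero-lag copy yields an imaginary component of zero, while the quarter-cycle lag yields the maximal value, which is exactly the phase-lagged coupling the method feeds to the neural network classifier.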

Keywords: dynamic connectivity analysis, EEG, neural networks, Pearson correlation coefficients

Procedia PDF Downloads 189