Search results for: diagnostic accuracy
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4657

3937 Machine Learning Driven Analysis of Kepler Objects of Interest to Identify Exoplanets

Authors: Akshat Kumar, Vidushi

Abstract:

This paper identifies 27 KOIs, 26 of which are currently classified as candidates and one as a false positive, that have a high probability of being confirmed. For this purpose, 11 machine learning algorithms were applied to the cumulative Kepler dataset sourced from the NASA Exoplanet Archive. The best-performing models were HistGradientBoosting and XGBoost, with a test accuracy of 93.5%, and the lowest-performing model was Gaussian NB, with a test accuracy of 54%; to assess model performance, the F1 score, cross-validation score, and ROC curve were calculated. Based on the learned models, the characteristics most significant for confirmed exoplanets were identified, with emphasis on the object's transit and stellar properties; these characteristics were koi_count, koi_prad, koi_period, koi_dor, koi_ror, and koi_smass, which were later used to filter the potential KOIs. The paper also calculates the Earth similarity index, based on planetary radius and equilibrium temperature, for each KOI identified to aid in their classification.
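
A minimal sketch of the classification step described above, assuming the cumulative KOI table has been downloaded from the NASA Exoplanet Archive as a local CSV (file name hypothetical); the ESI weight exponents follow the commonly cited Schulze-Makuch form and are an assumption, not the paper's stated values:

```python
import pandas as pd
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.model_selection import train_test_split, cross_val_score

FEATURES = ["koi_count", "koi_prad", "koi_period", "koi_dor", "koi_ror", "koi_smass"]

koi = pd.read_csv("cumulative_koi.csv")                      # hypothetical local copy
labeled = koi[koi["koi_disposition"].isin(["CONFIRMED", "FALSE POSITIVE"])]
X, y = labeled[FEATURES], (labeled["koi_disposition"] == "CONFIRMED").astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = HistGradientBoostingClassifier().fit(X_tr, y_tr)     # tolerates missing values
print("test accuracy:", model.score(X_te, y_te))
print("5-fold CV:", cross_val_score(model, X, y, cv=5).mean())

def esi(radius_re, teq_k, w_r=0.57, w_t=5.58):
    """Earth Similarity Index from radius (Earth radii) and T_eq (K); weights assumed."""
    r_term = 1 - abs((radius_re - 1.0) / (radius_re + 1.0))
    t_term = 1 - abs((teq_k - 288.0) / (teq_k + 288.0))
    return (r_term ** (w_r / 2)) * (t_term ** (w_t / 2))

cand = koi[koi["koi_disposition"] == "CANDIDATE"].dropna(subset=FEATURES + ["koi_teq"]).copy()
cand["p_confirmed"] = model.predict_proba(cand[FEATURES])[:, 1]
cand["esi"] = esi(cand["koi_prad"], cand["koi_teq"])
print(cand.nlargest(10, "p_confirmed")[["kepoi_name", "p_confirmed", "esi"]])
```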

Keywords: Kepler objects of interest, exoplanets, space exploration, machine learning, earth similarity index, transit photometry

Procedia PDF Downloads 73
3936 Assessing Diagnostic and Evaluation Tools for Use in Urban Immunisation Programming: A Critical Narrative Review and Proposed Framework

Authors: Tim Crocker-Buque, Sandra Mounier-Jack, Natasha Howard

Abstract:

Background: Due to both the increasing scale and speed of urbanisation, urban areas in low- and middle-income countries (LMICs) host increasingly large populations of under-immunised children, with the additional associated risks of rapid disease transmission in high-density living environments. Multiple interdependent factors are associated with these coverage disparities in urban areas, and most evidence comes from relatively few countries, predominantly India, Kenya, and Nigeria, with some from Pakistan, Iran, and Brazil. This study aimed to identify, describe, and assess the main tools used to measure or improve coverage of immunisation services in poor urban areas. Methods: Authors used a qualitative review design, including academic and non-academic literature, to identify tools used to improve coverage of public health interventions in urban areas. Authors selected and extracted sources that provided good examples of specific tools, or categories of tools, used in a context relevant to urban immunisation. Diagnostic tools (e.g., for data collection, analysis, and insight generation), programme tools (e.g., for investigating or improving ongoing programmes), and interventions (e.g., multi-component or stand-alone with evidence) were selected for inclusion to provide a range of types and availability of relevant tools. These were then prioritised using a decision-analysis framework, and a tool selection guide for programme managers was developed. Results: Authors reviewed tools used in urban immunisation contexts and tools designed for (i) non-immunisation and/or non-health interventions in urban areas and (ii) immunisation in rural contexts that had relevance for urban areas (e.g., Reaching Every District/Child/Zone). Many approaches combined several tools and methods, which authors categorised as diagnostic, programme, and intervention. The most common diagnostic tools were cross-sectional surveys, key informant interviews, focus group discussions, secondary analysis of routine data, and geographical mapping of outcomes, resources, and services. Programme tools involved multiple stages of data collection, analysis, insight generation, and intervention planning, and included guidance documents from WHO (World Health Organisation), UNICEF (United Nations Children's Fund), USAID (United States Agency for International Development), and governments, as well as articles reporting on diagnostics, interventions, and/or evaluations to improve urban immunisation. Interventions involved service improvement, education, reminder/recall, incentives, outreach, or mass media, or were multi-component. The main gaps in existing tools were assessment of macro/policy-level factors, exploration of effective immunisation communication channels, and measurement of in/out-migration. The proposed framework uses a problem-tree approach to suggest tools to address five common challenges (i.e., identifying populations, understanding communities, issues with service access and use, improving services, and improving coverage) based on context and available data. Conclusion: This study identified many tools relevant to evaluating urban LMIC immunisation programmes, including significant crossover between tools. This was encouraging in terms of supporting the identification of common areas, but problematic as data volumes, instructions, and activities could overwhelm managers, and tools are not always applied to suitable contexts. Further research is needed on how best to combine tools and methods to suit local contexts. The authors' initial framework can be tested and developed further.

Keywords: health equity, immunisation, low and middle-income countries, poverty, urban health

Procedia PDF Downloads 139
3935 Multiphase Equilibrium Characterization Model for Hydrate-Containing Systems Based on Trust-Region Method Non-Iterative Solving Approach

Authors: Zhuoran Li, Guan Qin

Abstract:

A robust and efficient compositional equilibrium characterization model for hydrate-containing systems is required, especially for time-critical simulations such as subsea pipeline flow assurance analysis and compositional simulation in hydrate reservoirs. A multiphase flash calculation framework, which combines a Gibbs energy minimization function with the cubic-plus-association (CPA) EoS, is developed to describe the highly non-ideal phase behavior of hydrate-containing systems. A non-iterative eigenvalue problem-solving approach for the trust-region sub-problem is selected to guarantee efficiency. The developed flash model is based on the state-of-the-art objective function proposed by Michelsen to minimize the Gibbs energy of the multiphase system. A hydrate-containing system always contains polar components (such as water and hydrate inhibitors), which introduce hydrogen bonds that influence phase behavior; thus, the CPA EoS is utilized to compute the thermodynamic parameters. The solid solution theory proposed by van der Waals and Platteeuw is applied to represent hydrate phase parameters. The trust-region method, combined with the non-iterative eigenvalue problem-solving approach for the trust-region sub-problem, is utilized to ensure fast convergence. The developed multiphase flash model's accuracy is validated against three available models (one published and two commercial models). Hundreds of published equilibrium experimental data points for hydrate-containing systems were collected to serve as the reference set for the accuracy test. The accuracy comparison shows that our model outperforms two of the models and has calculation accuracy comparable to CSMGem. An efficiency performance test has also been carried out. Because the trust-region method determines the optimization step's direction and size simultaneously, fast solution progress is obtained. The comparison results show that fewer iterations are needed to optimize the objective function with trust-region methods than with line-search methods. The non-iterative eigenvalue problem approach also computes faster than the conventional iterative solving algorithm for the trust-region sub-problem, further improving calculation efficiency. A new thermodynamic framework of the multiphase flash model for hydrate-containing systems has been constructed in this work. Sensitivity analysis and numerical experiments have been carried out to demonstrate the accuracy and efficiency of this model. Furthermore, the model is simple to implement on top of the thermodynamic models currently used in the oil and gas industry.
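
The flash model itself is not reproduced here, but the optimization machinery it relies on can be illustrated with SciPy: the 'trust-exact' method solves each trust-region sub-problem by an eigenvalue-based approach, with the Rosenbrock function standing in for the far more involved Gibbs energy objective:

```python
# Toy illustration only: SciPy's 'trust-exact' trust-region method requires an
# explicit gradient and Hessian and chooses step direction and size together.
import numpy as np
from scipy.optimize import minimize, rosen, rosen_der, rosen_hess

x0 = np.array([-1.2, 1.0])
res = minimize(rosen, x0, method="trust-exact", jac=rosen_der, hess=rosen_hess)
print("trust-region iterations:", res.nit, "minimum at:", res.x)

# A line-search quasi-Newton run for comparison (typically more iterations):
res_ls = minimize(rosen, x0, method="BFGS", jac=rosen_der)
print("line-search iterations:", res_ls.nit)
```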

Keywords: equation of state, hydrates, multiphase equilibrium, trust-region method

Procedia PDF Downloads 172
3934 Machine Learning Techniques in Bank Credit Analysis

Authors: Fernanda M. Assef, Maria Teresinha A. Steiner

Abstract:

The aim of this paper is to compare and discuss classifier algorithm options for credit risk assessment by applying different machine learning techniques. Using records from a Brazilian financial institution, this study uses a database of 5,432 companies that are clients of the bank, of which 2,600 are classified as non-defaulters, 1,551 as defaulters, and 1,281 as temporarily defaulters, meaning that the clients are overdue on their payments for up to 180 days. For each case, a total of 15 attributes was considered for a one-against-all assessment using four different techniques: Artificial Neural Networks Multilayer Perceptron (ANN-MLP), Artificial Neural Networks Radial Basis Functions (ANN-RBF), Logistic Regression (LR), and Support Vector Machines (SVM). For each method, different parameters were analyzed, and the best result of each technique was compared with the others. Initially, the data were coded in thermometer code (numerical attributes) or dummy coding (nominal attributes). The methods were then evaluated for each parameter setting, and the best result of each technique was compared in terms of accuracy, false positives, false negatives, true positives, and true negatives. This comparison showed that the best method, in terms of accuracy, was ANN-RBF (79.20% for non-defaulter classification, 97.74% for defaulters, and 75.37% for temporarily defaulters). However, the best accuracy does not always indicate the best technique. For instance, on the classification of temporarily defaulters, this technique was surpassed, in terms of false positives, by SVM, which had the lowest rate (0.07%) of false positive classifications. These details are discussed in light of the results found, and an overview of what was presented is given in the conclusion of this study.
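
A hedged scikit-learn sketch of the one-against-all comparison (the bank's data and exact parameterizations are not public, so a synthetic three-class set stands in; sklearn has no direct ANN-RBF estimator, so only three of the four techniques are shown):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.multiclass import OneVsRestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

# three classes: non-defaulter / defaulter / temporarily defaulter (synthetic)
X, y = make_classification(n_samples=5432, n_features=15, n_informative=8,
                           n_classes=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "LR": LogisticRegression(max_iter=1000),
    "SVM": SVC(),
    "ANN-MLP": MLPClassifier(max_iter=1000),
}
for name, clf in models.items():
    ova = OneVsRestClassifier(clf).fit(X_tr, y_tr)        # one-against-all scheme
    print(name, "accuracy:", round(ova.score(X_te, y_te), 3))
    print(confusion_matrix(y_te, ova.predict(X_te)))      # per-class TP/FP/FN/TN
```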

Keywords: artificial neural networks (ANNs), classifier algorithms, credit risk assessment, logistic regression, machine learning, support vector machines

Procedia PDF Downloads 103
3933 Proteomic Analysis of Cytoplasmic Antigen from Brucella canis to Characterize Immunogenic Proteins Reacting with Sera of Naturally Infected Dogs

Authors: J. J. Lee, S. R. Sung, E. J. Yum, S. C. Kim, B. H. Hyun, M. Her, H. S. Lee

Abstract:

Canine brucellosis is a critical problem in dogs, leading to reproductive diseases, and is mainly caused by Brucella canis. There are, nonetheless, no clear symptoms, so it may go unnoticed in most cases. Serodiagnosis for canine brucellosis has not been standardised; moreover, it presents substantial difficulties due to broad cross-reactivity between the rough cell wall antigens of B. canis and heterospecific antibodies present in normal, uninfected dogs. Thus, this study was conducted to characterize the immunogenic proteins in the cytoplasmic antigen (CPAg) of B. canis that define the antigenic sensitivity of the humoral antibody responses in B. canis-infected dogs. For analysis of B. canis CPAg, we first extracted and purified the cytoplasmic proteins from cultured B. canis by hot-saline inactivation, ultrafiltration, sonication, and ultracentrifugation, step by step, according to the sonicated antigen extraction method. To characterize this antigen, we examined the type and size range of each protein by SDS-PAGE and verified the immunogenic proteins reacting with antisera of B. canis-infected dogs. Selected immunodominant proteins were identified using MALDI-MS/MS. As a result, in an immunoproteomic assay, several polypeptides in CPAg on one- or two-dimensional electrophoresis (DE) reacted specifically with antisera from B. canis-infected dogs but not from non-infected dogs. The polypeptides of approximately 150, 80, 60, 52, 33, 26, 17, 15, 13, and 11 kDa on 1-DE were dominantly recognized by antisera from B. canis-infected dogs. In the immunoblot profiles on 2-DE, ten immunodominant proteins in CPAg were detected with antisera of infected dogs between pI 3.5-6.5 at approximately 35 to 10 kDa, without any nonspecific reaction with sera from non-infected dogs. The ten immunodominant proteins identified by MALDI-MS/MS were superoxide dismutase, bacterioferritin, amino acid ABC transporter substrate-binding protein, extracellular solute-binding protein family 3, transaldolase, 26 kDa periplasmic immunogenic protein, rhizopine-binding protein, enoyl-CoA hydratase, arginase, and type 1 glyceraldehyde-3-phosphate dehydrogenase. Most of these proteins have cytoplasmic or periplasmic localization and metabolic or transporter functions. Consequently, this study discovered and identified the prominent immunogenic proteins in B. canis CPAg, highlighting that these antigenic proteins may enable a specific serodiagnosis for canine brucellosis. Furthermore, we will evaluate these immunodominant proteins for application in advanced diagnostic methods with high specificity and accuracy.

Keywords: Brucella canis, Canine brucellosis, cytoplasmic antigen, immunogenic proteins

Procedia PDF Downloads 146
3932 Solutions for Large Diameter Pile Stiffness Used in Offshore Wind Turbine Farms

Authors: M. H. Aissa, Amar Bouzid Dj

Abstract:

Many countries are now planning to build new wind farms with turbine capacities of up to 5 MW, and the size of the foundations is increasing accordingly. These structures are subject to fatigue damage from environmental loading, mainly due to wind and waves, as well as from cyclic loading imposed through the rotational frequency (1P), via mass and aerodynamic imbalances, and through the blade-passing frequency (3P) of the wind turbine, which makes their dynamic behaviour very sensitive. This is why the natural frequency must be determined accurately from the available soil data and the foundation stiffness, the main sources of uncertainty, in order to avoid resonance of the system. This paper presents analytical expressions for the stiffness of large-diameter foundations under linear soil behaviour for different soil stiffness profiles. To check the accuracy of the proposed formulas, a mathematical model based on non-dimensional parameters is used to calculate the natural frequency, taking into account soil-structure interaction (SSI), and the result is compared with the p-y method and with frequencies measured in North Sea wind farms.
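
As a rough illustration of why foundation stiffness matters for the natural frequency, a series-spring toy model (not the paper's analytical expressions; all numbers are invented for illustration) can be written as:

```python
# Crude sketch: tower of fixed-base stiffness k_t and modal mass m sits on
# lateral and rotational mudline springs K_L, K_R; springs act in series.
import math

m   = 350e3      # modal mass incl. rotor-nacelle assembly [kg] (assumed)
k_t = 1.5e6      # fixed-base tower stiffness at hub height [N/m] (assumed)
L   = 80.0       # hub height [m] (assumed)
K_L = 5e8        # lateral pile-head stiffness [N/m] (assumed)
K_R = 2e11       # rotational pile-head stiffness [N*m/rad] (assumed)

k_eff = 1.0 / (1.0 / k_t + 1.0 / K_L + L**2 / K_R)   # series combination
f_fixed = math.sqrt(k_t / m) / (2 * math.pi)
f_ssi   = math.sqrt(k_eff / m) / (2 * math.pi)
print(f"fixed base: {f_fixed:.3f} Hz, with SSI: {f_ssi:.3f} Hz")
# The design goal is to keep f_ssi outside the 1P and 3P excitation bands.
```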

Keywords: offshore wind turbines, semi analytical FE analysis, p-y curves, piles foundations

Procedia PDF Downloads 465
3931 Analyzing Current Transformer's Transient and Steady State Behavior for Different Burdens Using LabVIEW Data Acquisition Tool

Authors: D. Subedi, D. Sharma

Abstract:

Current transformers (CTs) are used to transform large primary currents into a small secondary current. Since most standard equipment is not designed to handle large primary currents, CTs play an important role in any electrical system for the purposes of metering and protection, both of which are integral to the power system. Nowadays, due to advancements in solid-state technology, the operating times of protective relays have come down from a few seconds to a few cycles. In such a scenario, it becomes important to study the transient response of current transformers, as it plays a vital role in the operation of protective devices. This paper shows the steady-state and transient behavior of current transformers and how it changes with the connected burden. The transient and steady-state responses are captured using the data acquisition software LabVIEW, and analysis is done on the real-time data gathered with it. The variation of current transformer characteristics with changes in burden is discussed.
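
A small post-processing sketch of the kind of analysis described, assuming the LabVIEW capture has been exported to CSV (file name, column names, and CT ratio are assumptions):

```python
import numpy as np
import pandas as pd

data = pd.read_csv("ct_capture.csv")           # columns assumed: t, i_primary, i_secondary
RATIO = 2000 / 5                               # nominal CT ratio (assumed)

def rms(x):
    return np.sqrt(np.mean(np.square(x)))

i_p = rms(data["i_primary"])
ratio_error = 100 * (rms(data["i_secondary"]) * RATIO - i_p) / i_p
print(f"steady-state ratio error: {ratio_error:.2f}%")

# Transient window: a DC offset in the primary drives the core toward saturation,
# visible as distortion of i_secondary in the first cycles after the disturbance.
early = data[data["t"] < 0.1]                  # first 100 ms (50 Hz system assumed)
print("early-window referred RMS:", rms(early["i_secondary"]) * RATIO)
```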

Keywords: accuracy, accuracy limiting factor, burden, current transformer, instrument security factor

Procedia PDF Downloads 341
3930 Algorithm Research on Traffic Sign Detection Based on Improved EfficientDet

Authors: Ma Lei-Lei, Zhou You

Abstract:

To address the low detection accuracy of deep learning algorithms in traffic sign detection, this paper proposes an improved EfficientDet-based traffic sign detection algorithm. Multi-head self-attention is introduced in the minimum-resolution layer of the EfficientDet backbone to achieve effective aggregation of local and global depth information, and this study proposes an improved feature fusion pyramid with additional vertical cross-layer connections, which improves the performance of the model while introducing only a small amount of complexity. The Balanced L1 loss is introduced to replace the original regression loss function, Smooth L1 loss, which addresses the imbalance in the loss function. Experimental results show that the algorithm proposed in this study is suitable for the task of traffic sign detection. Compared with other models, the improved EfficientDet has the best detection accuracy. Although its inference speed is not dominant, it still meets the real-time requirement.
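
For reference, the Balanced L1 loss that replaces Smooth L1 can be sketched in NumPy; the defaults follow the Libra R-CNN formulation, which this abstract appears to draw on, and b is chosen so the two branches join smoothly at |x| = 1:

```python
import numpy as np

def balanced_l1(x, alpha=0.5, gamma=1.5):
    """Balanced L1 regression loss: gentle for small errors, linear for large ones."""
    b = np.expm1(gamma / alpha)                       # e^(gamma/alpha) - 1
    ax = np.abs(x)
    inner = (alpha / b) * (b * ax + 1) * np.log(b * ax + 1) - alpha * ax
    C = (alpha / b) * (b + 1) * np.log(b + 1) - alpha - gamma   # continuity constant
    outer = gamma * ax + C
    return np.where(ax < 1, inner, outer)

print(balanced_l1(np.linspace(-2, 2, 5)))
```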

Keywords: convolutional neural network, transformer, feature pyramid networks, loss function

Procedia PDF Downloads 96
3929 An Unusual Cause of Electrocardiographic Artefact: Patient's Warming Blanket

Authors: Sanjay Dhiraaj, Puneet Goyal, Aditya Kapoor, Gaurav Misra

Abstract:

In electrocardiography, an ECG artefact indicates something that is not generated by the heart. Although technological advancements have produced monitors capable of providing accurate information and reliable heart rate alarms, interference in the displayed electrocardiogram still occurs. This interference can come from the various electrical gadgets present in the operating room or from electrical signals from other parts of the body. Artefacts may also occur due to poor electrode contact with the body or machine malfunction. Recognizing these artefacts is of utmost importance in order to avoid unnecessary and unwarranted diagnostic as well as interventional procedures. We report a case of ECG artefacts caused by a patient warming blanket and its consequences. A 20-year-old male with a preoperative diagnosis of exstrophy-epispadias complex was scheduled for surgery under epidural and general anaesthesia. Just after endotracheal intubation, we observed nonspecific ECG changes on the monitor. At first glance, the monitor strip revealed broad QRS complexes suggesting a ventricular bigeminal rhythm. Closer analysis revealed these to be artefacts: although the complexes looked broad at first glance, normal sinus complexes were clearly present, each immediately followed by a 'broad complex', or artefact, produced by some device or connection. These broad complexes were labeled as artefacts because they originated in the absolute refractory period of the preceding normal sinus beat; it would be physiologically impossible for the myocardium to depolarize rapidly enough to produce a second QRS complex there. A search for the possible cause of the artefacts was made, and after deepening the plane of anaesthesia, ruling out possible electrolyte abnormalities, checking the ECG leads and their connections, changing monitors, checking all other monitoring connections, and checking the grounding of the anaesthesia machine and OT table, we found that after switching off the patient's warming apparatus, the rhythm returned to normal sinus and the 'broad complexes', or artefacts, disappeared. As misdiagnosis of ECG artefacts may subject patients to unnecessary diagnostic and therapeutic interventions, thorough knowledge of the patient and monitors allows for quick interpretation and resolution of the problem.

Keywords: ECG artefacts, patient warming blanket, peri-operative arrhythmias, mobile messaging services

Procedia PDF Downloads 271
3928 Building Scalable and Accurate Hybrid Kernel Mapping Recommender

Authors: Hina Iqbal, Mustansar Ali Ghazanfar, Sandor Szedmak

Abstract:

Recommender systems use artificial intelligence practices to filter obscure information and can predict whether a user will like a specified item. Kernel Mapping Recommender (KMR) systems have been proposed as accurate, state-of-the-art algorithms that address key recommender system design challenges such as the long tail, cold start, and sparsity. The aim of this research is to propose a hybrid framework that can efficiently integrate different versions of the KMR algorithm, namely item-based and user-based KMR. We propose various heuristic algorithms that integrate the different versions of KMR into a unified framework, resulting in improved accuracy and the elimination of problems associated with conventional recommender systems. We tested our system on a publicly available movies dataset and benchmarked it against KMR. The results (in terms of accuracy, precision, recall, F1 measure, and ROC metrics) reveal that the proposed algorithm is quite accurate, especially under cold-start and sparse scenarios.
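
The KMR internals are not reproduced here, but one plausible integration heuristic of the kind described, blending the two variants by rating support, can be sketched as:

```python
def hybrid_predict(pred_user, pred_item, n_user_ratings, n_item_ratings):
    """Confidence-weighted blend of user-based and item-based KMR outputs
    (illustrative heuristic, not the paper's exact scheme)."""
    w = n_user_ratings / (n_user_ratings + n_item_ratings + 1e-9)
    return w * pred_user + (1 - w) * pred_item

# Cold-start user (2 ratings) rating a popular item (500 ratings): the
# item-based score dominates, exactly where item-based KMR is more reliable.
print(hybrid_predict(pred_user=3.1, pred_item=4.2,
                     n_user_ratings=2, n_item_ratings=500))
```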

Keywords: Kernel Mapping Recommender Systems, hybrid recommender systems, cold start, sparsity, long tail

Procedia PDF Downloads 337
3927 Ectopic Mediastinal Parathyroid Adenoma: A Case Report with Diagnostic and Management Challenges

Authors: Augustina Konadu Larbi-Ampofo, Ekemini Umoinwek

Abstract:

Background: Hypercalcaemia is a common electrolyte imbalance that increases mortality if poorly controlled. Primary hyperparathyroidism, with a prevalence of 0.1-0.3%, often presents in this way. Management is challenging when the cause is an ectopic parathyroid adenoma in the mediastinum, especially in a patient with a pacemaker. Case Presentation: A 79-year-old woman with a history of a previous cardiac arrest, a permanent pacemaker, ischaemic heart disease, bilateral renal calculi, rectal polyps, liver cirrhosis, and a family history of hyperthyroidism presented to the emergency department with acute back pain. Management and Outcome: The patient was diagnosed with primary hyperparathyroidism on the basis of her elevated corrected calcium and parathyroid hormone levels. Parathyroid investigations consisting of an NM MIBI scan, SPECT-CT, a 4D parathyroid scan, and an ultrasound scan of the neck and thorax confirmed an ectopic parathyroid adenoma in the mediastinum at the level of the aortic arch, along with benign thyroid nodules. The location of the adenoma warranted a thoracoscopic surgical approach; however, the presence of her pacemaker and other cardiovascular conditions predisposed her to a potentially poorer post-operative outcome. Discussion: Mediastinal ectopic parathyroid adenomas are rare and difficult to diagnose and treat, often needing a multimodal imaging approach for accurate localisation. Surgery is the definitive treatment; in this patient, however, long-term medical treatment with cinacalcet was the only suitable next option. The difficulty is that cinacalcet tackles the biochemical markers of the disease entity and not the disease itself, leaving open the question of what to do if hypercalcaemia becomes refractory or uncontrolled in this patient with a pacemaker. Moreover, the coexistence of her multiple conditions raises suspicion of an underlying multisystemic or multiple endocrine disorder, multiple endocrine neoplasia in particular, necessitating further genetic or autoimmune investigations. Conclusion: Mediastinal ectopic parathyroid adenomas are rare, with diagnostic and management challenges.

Keywords: mediastinal ectopic parathyroid adenoma, hyperparathyroidism, SPECT/CT, nuclear medicine, multimodal imaging

Procedia PDF Downloads 16
3926 Capability of Available Seismic Soil Liquefaction Potential Assessment Models Based on Shear-Wave Velocity Using Bachu Case History

Authors: Nima Pirhadi, Yong Bo Shao, Xusheng Wa, Jianguo Lu

Abstract:

Several models based on the simplified method introduced by Seed and Idriss (1971) have been developed to assess the liquefaction potential of saturated sandy soils. The procedure involves determining the cyclic resistance of the soil as the cyclic resistance ratio (CRR) and comparing it with the earthquake loading expressed as the cyclic stress ratio (CSR). Of all the methods for determining CRR, those using shear-wave velocity (Vs) are popular because of their low sensitivity to the penetration resistance reduction caused by fines content (FC). To evaluate the capability of the Vs-based models, new data from the Bachu-Jiashi earthquake case history were collected; the predictions of the models are then compared with the measured results, and the accuracy of the models is discussed via three criteria and graphs. The evaluation demonstrates reasonable accuracy of the models in the Bachu region.
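
A worked sketch of the simplified procedure, using the Seed-Idriss CSR and an Andrus-Stokoe-type Vs-based CRR (clean-sand limiting velocity assumed; all inputs are illustrative, not values from the Bachu data):

```python
def csr(a_max_g, sigma_v, sigma_v_eff, depth_m):
    """Seed-Idriss cyclic stress ratio with the usual depth reduction factor."""
    rd = 1 - 0.00765 * depth_m if depth_m <= 9.15 else 1.174 - 0.0267 * depth_m
    return 0.65 * a_max_g * (sigma_v / sigma_v_eff) * rd

def crr_from_vs(vs, sigma_v_eff, pa=100.0, vs1_limit=215.0):
    """Andrus-Stokoe-type CRR from overburden-corrected Vs (clean sand)."""
    vs1 = vs * (pa / sigma_v_eff) ** 0.25
    return 0.022 * (vs1 / 100.0) ** 2 + 2.8 * (1.0 / (vs1_limit - vs1) - 1.0 / vs1_limit)

depth, vs = 6.0, 160.0                       # m, m/s (illustrative)
sigma_v, sigma_v_eff = 110.0, 75.0           # total and effective stress [kPa]
fs = crr_from_vs(vs, sigma_v_eff) / csr(0.3, sigma_v, sigma_v_eff, depth)
print(f"factor of safety against liquefaction: {fs:.2f}")  # FS < 1 -> liquefiable
```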

Keywords: seismic liquefaction, Bachu-Jiashi earthquake, shear-wave velocity, liquefaction potential evaluation

Procedia PDF Downloads 235
3925 Using Photogrammetric Techniques to Map the Mars Surface

Authors: Ahmed Elaksher, Islam Omar

Abstract:

For many years, the surface of Mars has been a mystery for scientists. Lately, with the help of geospatial data and photogrammetric procedures, researchers have been able to gain insights about this planet. Two of the most important data sources for exploring Mars are the High Resolution Imaging Science Experiment (HiRISE) and the Mars Orbiter Laser Altimeter (MOLA). HiRISE is one of six science instruments carried by the Mars Reconnaissance Orbiter, launched August 12, 2005, and managed by NASA. The MOLA sensor is a laser altimeter carried by the Mars Global Surveyor (MGS), launched on November 7, 1996. In this project, we used MOLA-based DEMs to orthorectify HiRISE optical images in order to generate a more accurate and trustworthy surface of Mars. The MOLA data were interpolated using the kriging interpolation technique. Corresponding tie points were digitized from both datasets and employed in co-registering the datasets using GIS analysis tools. We employed three different 3D-to-2D transformation models: the parallel projection (3D affine) transformation model, the extended parallel projection transformation model, and the Direct Linear Transformation (DLT) model. The digitized tie points were split into two sets: Ground Control Points (GCPs), used to estimate the transformation parameters by least squares adjustment, and check points (ChkPs), used to evaluate the computed transformation parameters. Results were evaluated using the RMSEs between the precise horizontal coordinates of the digitized check points and those estimated through the transformation models. For each set of GCPs, three different configurations of GCPs and check points were tested, and average RMSEs are reported. It was found that for the 2D transformation models, average RMSEs were in the range of five meters; increasing the number of GCPs from six to ten improved the accuracy of the results by about two and a half meters, while further increasing the number of GCPs did not improve the results significantly. The 3D-to-2D transformation models provided two to three meters accuracy, with the best results obtained using the DLT model; here, too, increasing the number of GCPs did not have a substantial effect. The results support the use of the DLT model, as it provides the accuracy required by ASPRS large-scale mapping standards; however, well-distributed sets of GCPs are key to achieving such accuracy. The model is simple to apply and does not require substantial computation.
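
The DLT step can be sketched compactly: with at least six GCPs, the 11 independent parameters of the 3D-to-2D projection are estimated by homogeneous least squares via SVD (a generic formulation, not the authors' exact implementation):

```python
import numpy as np

def dlt_fit(xyz, uv):
    """xyz: (n,3) MOLA ground coordinates; uv: (n,2) HiRISE image coordinates.
    Returns the 3x4 projection matrix P, defined up to scale (11 parameters)."""
    rows = []
    for (X, Y, Z), (u, v) in zip(xyz, uv):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    _, _, Vt = np.linalg.svd(np.asarray(rows, dtype=float))
    return Vt[-1].reshape(3, 4)          # right singular vector of smallest value

def dlt_project(P, xyz):
    h = P @ np.c_[xyz, np.ones(len(xyz))].T
    return (h[:2] / h[2]).T

# usage with >= 6 GCPs and independent check points:
#   P = dlt_fit(gcp_xyz, gcp_uv)
#   rmse = np.sqrt(np.mean((dlt_project(P, chk_xyz) - chk_uv) ** 2))
```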

Keywords: Mars, photogrammetry, MOLA, HiRISE

Procedia PDF Downloads 56
3924 COVID-19 Analysis with Deep Learning Model Using Chest X-Ray Images

Authors: Uma Maheshwari V., Rajanikanth Aluvalu, Kumar Gautam

Abstract:

COVID-19 is a highly contagious viral infection with major worldwide health implications, and the global economy has suffered as a result. The spread of this pandemic disease can be slowed if positive patients are found early. COVID-19 disease prediction is beneficial for identifying at-risk patients' health problems, and deep learning and machine learning algorithms for COVID prediction using X-rays have the potential to be extremely useful where doctors and clinicians are scarce, such as in remote places. In this paper, a convolutional neural network (CNN) with deep layers is presented for recognizing COVID-19 patients using real-world datasets. We gathered around 6,000 X-ray scan images from various sources and split them into two categories: normal and COVID-affected. Our model examines chest X-ray images to recognize such patients. Because X-rays are widely available and affordable, our findings show that X-ray analysis is effective for COVID diagnosis. The predictions performed well, with an average accuracy of 99% on training images and 88% on X-ray test images.
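
A hedged Keras sketch of a deep CNN for this two-class task; the directory layout, image size, and layer sizes are assumptions rather than the authors' exact architecture:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# assumed layout: xrays/train/normal/*.png and xrays/train/covid/*.png
train = tf.keras.utils.image_dataset_from_directory(
    "xrays/train", image_size=(224, 224), label_mode="binary")

model = models.Sequential([
    layers.Rescaling(1.0 / 255, input_shape=(224, 224, 3)),
    layers.Conv2D(32, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Conv2D(128, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),       # P(COVID-affected)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train, epochs=10)
```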

Keywords: deep CNN, COVID–19 analysis, feature extraction, feature map, accuracy

Procedia PDF Downloads 77
3923 Classification of Political Affiliations by Reduced Number of Features

Authors: Vesile Evrim, Aliyu Awwal

Abstract:

With the evolution of technology, the expression of opinions has shifted to the digital world. The domain of politics, one of the hottest topics of opinion mining research, merges here with behavior analysis for determining affiliation from text, which constitutes the subject of this paper. This study aims to classify news/blog text as either Republican or Democrat with the minimum number of features. As an initial set, 68 features, 64 of which are Linguistic Inquiry and Word Count (LIWC) features, are tested against 14 benchmark classification algorithms. In later experiments, the dimension of the feature vector is reduced using 7 feature selection algorithms. The results show that the Decision Tree, Rule Induction, and M5 Rule classifiers, when used with the SVM and IGR feature selection algorithms, performed best, with up to 82.5% accuracy on the given dataset. Further tests on a single feature and on the linguistic-based feature sets showed similar results. The feature 'function', an aggregate feature of the linguistic category, proves to be the most differentiating of the 68 features, achieving 81% accuracy by itself in classifying articles as Republican or Democrat.
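
A small sketch of the reduction experiment, with mutual information standing in for IGR and synthetic data standing in for the LIWC vectors (neither the dataset nor the exact toolchain is public):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# synthetic stand-in: 68 features, binary label (Republican = 1, Democrat = 0)
X, y = make_classification(n_samples=800, n_features=68, n_informative=10,
                           random_state=0)

mi = mutual_info_classif(X, y, random_state=0)       # IGR-like relevance ranking
for k in (1, 5, 15, 68):
    top = np.argsort(mi)[-k:]
    acc = cross_val_score(DecisionTreeClassifier(random_state=0),
                          X[:, top], y, cv=5).mean()
    print(f"top {k:2d} features: {acc:.3f}")          # accuracy often plateaus early
```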

Keywords: feature selection, LIWC, machine learning, politics

Procedia PDF Downloads 381
3922 A Survey and Analysis on Inflammatory Pain Detection and Standard Protocol Selection Using Medical Infrared Thermography from Image Processing View Point

Authors: Mrinal Kanti Bhowmik, Shawli Bardhan Jr., Debotosh Bhattacharjee

Abstract:

Human skin, being at a temperature above absolute zero, emits infrared radiation related to body temperature, and differences in the infrared radiation from the skin surface reflect abnormality in the human body. Detecting and forecasting this temperature variation of the skin surface is the main objective of using Medical Infrared Thermography (MIT) as a diagnostic tool for pain detection. MIT is a non-invasive imaging technique that records and monitors the temperature distribution of the body by receiving the infrared radiation emitted from the skin and representing it as a thermogram. The intensity of the thermogram measures the inflammation of the skin surface related to pain in the human body, and analysis of thermograms provides automated anomaly detection of suspicious pain regions through several image processing steps. This paper presents a rigorous survey of the processing and analysis of thermograms, based on previous work published in the area of infrared thermal imaging for detecting inflammatory pain diseases such as arthritis, spondylosis, and shoulder impingement. The study also explores the performance of thermogram processing, together with thermogram acquisition protocols, thermography camera specifications, and the types of pain detected by thermography, in a summarized tabular format that provides a clear structural view of past work. The major contribution of the paper is a new thermogram acquisition standard for inflammatory pain detection in the human body to enhance the performance rate. The FLIR T650sc infrared camera, with high sensitivity and resolution, is adopted to increase the accuracy of thermogram acquisition and analysis. The survey of previous research highlights that intensity-distribution-based comparison of comparable and symmetric regions of interest, together with their statistical analysis, gives adequate results for identifying and detecting physiological disorders related to inflammatory diseases.

Keywords: acquisition protocol, inflammatory pain detection, medical infrared thermography (MIT), statistical analysis

Procedia PDF Downloads 341
3921 Evaluation of Hydrocarbons in Tissues of Bivalve Mollusks from the Red Sea Coast

Authors: Asma Ahmed Aljohani, Mohammed Orif

Abstract:

The concentration of polycyclic aromatic hydrocarbons (PAHs) in the clam A. glabrata was examined in samples collected from Alseef Beach, 30 km south of Jeddah city. Gas chromatography-mass spectrometry (GC-MS) was used to analyse 14 PAHs. The concentration of total PAHs ranged from 11.521 to 40.149 ng/g dw, with a mean of 21.857 ng/g dw, which is lower than in comparable studies. The lower-molecular-weight PAHs with three rings comprised 18.14% of the total PAH concentration in the clams, while the higher-molecular-weight PAHs with four, five, and six rings accounted for 81.86%. Diagnostic ratios for PAH source distinction suggested pyrogenic or anthropogenic sources.
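
As an illustration of the diagnostic-ratio step, ring-size grouping and the fluoranthene/(fluoranthene + pyrene) ratio can be computed with the Yunker-style cut-offs; the concentrations below are illustrative, not the measured values:

```python
conc = {"phenanthrene": 1.2, "anthracene": 0.8,               # 3-ring (LMW)
        "fluoranthene": 4.1, "pyrene": 2.9, "chrysene": 3.3,  # 4-ring
        "benzo[a]pyrene": 5.0, "benzo[ghi]perylene": 4.5}     # 5- and 6-ring (HMW)

lmw = conc["phenanthrene"] + conc["anthracene"]
print(f"LMW share: {100 * lmw / sum(conc.values()):.1f}%")

fl_fp = conc["fluoranthene"] / (conc["fluoranthene"] + conc["pyrene"])
if fl_fp > 0.5:
    source = "combustion of biomass/coal (pyrogenic)"
elif fl_fp > 0.4:
    source = "petroleum combustion"
else:
    source = "petrogenic (unburned petroleum)"
print(f"Fl/(Fl+Py) = {fl_fp:.2f} -> {source}")
```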

Keywords: bivalves, biomonitoring, hydrocarbons, PAHs

Procedia PDF Downloads 97
3920 Specific Emitter Identification Based on Refined Composite Multiscale Dispersion Entropy

Authors: Shaoying Guo, Yanyun Xu, Meng Zhang, Weiqing Huang

Abstract:

Wireless communication networks are developing rapidly, so wireless security is becoming more and more important. Specific emitter identification (SEI), a technique for identifying unique transmitters, is a vital part of wireless communication security. In this paper, an SEI method based on multiscale dispersion entropy (MDE) and refined composite multiscale dispersion entropy (RCMDE) is proposed. The MDE and RCMDE algorithms are used to extract features for the identification of five wireless devices, and a cross-validation support vector machine (CV-SVM) is used as the classifier. The experimental results show a total identification accuracy of 99.3%, even at a low signal-to-noise ratio (SNR) of 5 dB, which demonstrates that MDE and RCMDE describe the communication signal series well. In addition, compared with other methods, the proposed method is effective and provides better accuracy and stability for SEI.
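
A self-contained sketch of dispersion entropy and its refined composite multiscale extension (parameter defaults are typical choices from the Rostaghi-Azami literature, not necessarily the paper's):

```python
import numpy as np
from collections import Counter
from scipy.stats import norm

def dispersion_patterns(x, c, m):
    """Map the series to c classes via the normal CDF, then list m-length patterns."""
    y = norm.cdf(x, loc=x.mean(), scale=x.std())
    z = np.clip(np.round(c * y + 0.5).astype(int), 1, c)
    return [tuple(z[i:i + m]) for i in range(len(z) - m + 1)]

def rcmde(x, scale, c=6, m=3):
    """Refined composite MDE: average pattern probabilities over all
    coarse-graining offsets at this scale, then take one entropy."""
    counts, n = Counter(), 0
    for k in range(scale):
        s = x[k:]
        s = s[:len(s) // scale * scale].reshape(-1, scale).mean(axis=1)
        pats = dispersion_patterns(s, c, m)
        counts.update(pats)
        n += len(pats)
    p = np.array(list(counts.values())) / n
    return float(-np.sum(p * np.log(p)) / np.log(c ** m))    # normalised entropy

rng = np.random.default_rng(0)
sig = rng.normal(size=2048)                                   # stand-in for a transient
features = [rcmde(sig, s) for s in range(1, 11)]              # feature vector for CV-SVM
print(np.round(features, 3))
```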

Keywords: cross-validation support vector machine, refined composite multiscale dispersion entropy, specific emitter identification, transient signal, wireless communication device

Procedia PDF Downloads 128
3919 Rethinking the Value of Pancreatic Cyst CEA Levels from Endoscopic Ultrasound Fine-Needle Aspiration (EUS-FNA): A Longitudinal Analysis

Authors: Giselle Tran, Ralitza Parina, Phuong T. Nguyen

Abstract:

Background/Aims: Pancreatic cysts (PC) have become an increasingly common entity, often diagnosed as incidental findings on cross-sectional imaging. Clinically, management of these lesions is difficult because of uncertainty in their potential for malignant degeneration. Prior series have reported that carcinoembryonic antigen (CEA), a biomarker collected from cyst fluid aspiration, has high diagnostic accuracy for discriminating between mucinous and non-mucinous lesions at the patient's initial presentation. To the authors' best knowledge, no prior studies have reported PC CEA levels obtained from endoscopic ultrasound fine-needle aspiration (EUS-FNA) over years of serial EUS surveillance imaging. Methods: We report a consecutive retrospective series of 624 patients who underwent EUS evaluation for a PC between 11/20/2009 and 11/13/2018. Of these patients, 401 had CEA values obtained at the point of entry, and of these, 157 had two or more CEA values obtained over the course of their EUS surveillance. For the 157 patients (96 F, 61 M; mean age 68 [range, 62-76]), the mean interval of EUS follow-up was 29.7 months [3.5-128], and the mean number of EUS procedures was 3 [2-7]. To assess CEA fluctuations, we defined an appreciable increase in CEA as a "spike": a two-fold increase in CEA on a subsequent EUS-FNA of the same cyst, with the second CEA value being greater than 1000 ng/mL. Using this definition, cysts with a spike in CEA were compared to those without in a bivariate analysis to determine whether a CEA spike is associated with poorer outcomes and the presence of high-risk features. Results: Of the 157 patients analyzed, 29 had a spike in CEA. Of these 29 patients, 5 had a cyst with a size increase >0.5 cm (p=0.93); 2 had a large cyst, >3 cm (p=0.77); 1 had a cyst that developed a new solid component (p=0.03); 7 had a cyst with a solid component at any time during surveillance (p=0.08); 21 had a complex cyst (p=0.34); 4 had a cyst categorized as "Statistically Higher Risk" based on molecular analysis (p=0.11); and 0 underwent surgical resection (p=0.28). Conclusion: With serial EUS imaging in the surveillance of PC, an increase in CEA level defined as a spike did not predict poorer outcomes. Most notably, a spike in CEA did not correlate with the number of patients sent to surgery or with an appreciable increase in cyst size, nor did it correlate with the development of a solid nodule within the PC or progression on molecular analysis. Future studies should focus on the selective use of CEA analysis when patients undergo EUS surveillance for PCs.
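
The spike definition is simple enough to state as code; this sketch flags spikes in a patient's serial CEA values (illustrative numbers):

```python
def is_spike(prev_cea, curr_cea):
    """Two-fold rise on a subsequent aspiration, with the new value > 1000 ng/mL."""
    return curr_cea >= 2 * prev_cea and curr_cea > 1000

serial_cea = [480, 650, 1450, 1500]          # illustrative surveillance values (ng/mL)
spikes = [is_spike(a, b) for a, b in zip(serial_cea, serial_cea[1:])]
print(spikes)                                 # [False, True, False]
```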

Keywords: carcinoembryonic antigen (CEA), endoscopic ultrasound (EUS), fine-needle aspiration (FNA), pancreatic cyst, spike

Procedia PDF Downloads 141
3918 A Novel Heuristic for Analysis of Large Datasets by Selecting Wrapper-Based Features

Authors: Bushra Zafar, Usman Qamar

Abstract:

Large sample sizes and high dimensionality undermine the effectiveness of conventional data mining methodologies. Data mining techniques are important tools for extracting knowledge from a variety of databases; they provide supervised learning in the form of classification, designing models that describe vital data classes, with the structure of the classifier based on the class attribute. Classification efficiency and accuracy are often greatly influenced by noisy and undesirable features in real application data sets, and the inherent nature of a data set greatly masks its quality analysis, leaving few practical approaches to use. To our knowledge, we present for the first time an approach for investigating the structure and quality of datasets by providing a targeted analysis that localizes noisy and irrelevant features. Machine learning relies on feature selection as a pre-processing step, which selects a subset of the features, reducing the space according to a certain evaluation criterion. The primary objective of this study is to trim down the scope of a given data sample by searching for a small set of important features that may yield good classification performance. For this purpose, a heuristic for wrapper-based feature selection using a genetic algorithm is used, with an external classifier for discriminative feature selection; features are selected based on their number of occurrences in the chosen chromosomes. Sample datasets have been used to demonstrate the proposed idea effectively. The proposed method improved the average accuracy across different datasets to about 95%. Experimental results illustrate that the proposed algorithm increases the accuracy of prediction of different diseases.
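
A compact sketch of the wrapper idea: a genetic algorithm searches feature bit-masks, scoring each mask by cross-validated KNN accuracy as the external classifier (GA settings and dataset are illustrative, not the paper's):

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X, y = load_breast_cancer(return_X_y=True)
n_feat, pop_size, gens = X.shape[1], 20, 15

def fitness(mask):
    """Wrapper criterion: CV accuracy of the external classifier on the subset."""
    if not mask.any():
        return 0.0
    return cross_val_score(KNeighborsClassifier(), X[:, mask], y, cv=3).mean()

pop = rng.random((pop_size, n_feat)) < 0.5                    # random bit-mask population
for _ in range(gens):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-pop_size // 2:]]        # truncation selection
    cuts = rng.integers(1, n_feat, size=pop_size // 2)
    kids = np.array([np.concatenate([a[:c], b[c:]]) for a, b, c
                     in zip(parents, np.roll(parents, 1, axis=0), cuts)])  # crossover
    kids ^= rng.random(kids.shape) < 0.02                     # bit-flip mutation
    pop = np.vstack([parents, kids])

best = max(pop, key=fitness)
print(int(best.sum()), "features kept; CV accuracy:", round(fitness(best), 3))
```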

Keywords: data mining, genetic algorithm, KNN algorithm, wrapper-based feature selection

Procedia PDF Downloads 315
3917 Investigating Data Normalization Techniques in Swarm Intelligence Forecasting for Energy Commodity Spot Price

Authors: Yuhanis Yusof, Zuriani Mustaffa, Siti Sakira Kamaruddin

Abstract:

Data mining is a fundamental technique for identifying patterns in large data sets, and the extracted facts and patterns contribute to various domains such as marketing, forecasting, and medicine. Beforehand, data are consolidated so that the resulting mining process may be more efficient. This study investigates the effect of different data normalization techniques, namely min-max, Z-score, and decimal scaling, on swarm-based forecasting models. The swarm intelligence algorithms employed are the Grey Wolf Optimizer (GWO) and the Artificial Bee Colony (ABC). Forecasting models are developed to predict the daily spot prices of crude oil and gasoline. Results show that GWO works better with the Z-score normalization technique, while ABC produces better accuracy with min-max. Nevertheless, GWO is superior to ABC, as its model generates the highest accuracy for both crude oil and gasoline prices. This result indicates that GWO is a promising competitor in the family of swarm intelligence algorithms.
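
The three normalization techniques under comparison reduce to one-liners; on a short illustrative series of daily spot prices:

```python
import numpy as np

prices = np.array([61.2, 63.5, 59.8, 70.1, 68.4])        # illustrative spot prices

min_max = (prices - prices.min()) / (prices.max() - prices.min())   # maps to [0, 1]
z_score = (prices - prices.mean()) / prices.std()                   # mean 0, sd 1
j = np.ceil(np.log10(np.abs(prices).max()))              # smallest j with |v| / 10^j < 1
decimal_scaled = prices / 10 ** j

print(min_max, z_score, decimal_scaled, sep="\n")
```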

Keywords: artificial bee colony, data normalization, forecasting, Grey Wolf optimizer

Procedia PDF Downloads 475
3916 Use of Telehealth for Facilitating the Diagnostic Assessment of Autism Spectrum Disorder: A Scoping Review

Authors: Manahil Alfuraydan, Jodie Croxall, Lisa Hurt, Mike Kerr, Sinead Brophy

Abstract:

Autism Spectrum Disorder (ASD) is a developmental condition characterised by impairment in social communication and social interaction and by repetitive or restricted patterns of interest, behaviour, and activity. There is a significant delay between seeking help and a confirmed diagnosis of ASD. This may delay access to early intervention services, which are critical for positive outcomes, and the long wait times also cause stress for individuals and their families. Telehealth potentially offers a way of improving the diagnostic pathway for ASD. This review of the literature aims to examine which telehealth approaches have been used in the diagnosis and assessment of autism in children and adults, whether they are feasible and acceptable, and how they compare with face-to-face diagnosis and assessment methods. A comprehensive search of the following databases was conducted, combining the terms autism and telehealth, from 2000 to 2018: MEDLINE, CINAHL Plus with Full Text, Business Source Complete, Web of Science, Scopus, and PsycINFO, along with trial and systematic review databases including the Cochrane Library, Health Technology Assessment, Database of Abstracts of Reviews of Effects, and NHS Economic Evaluation Database. A total of 10 studies were identified for inclusion in the review. The review found two methods of using telehealth: (a) video conferencing, enabling teams in different areas to consult with families and assess the child/adult in real time, and (b) video upload to a web portal, enabling clinical assessment of behaviours in the family home. The findings were positive: there was high agreement between remote and face-to-face diagnoses, with high levels of satisfaction among families and clinicians. This field is at a very early stage, so only studies with small sample sizes were identified, but the findings suggest that telehealth methods, used in conjunction with existing methods, could improve the assessment and diagnosis of autism, especially for those with clear autism traits and for adults with autism. Larger randomised controlled trials of this technology are warranted.

Keywords: assessment, autism spectrum disorder, diagnosis, telehealth

Procedia PDF Downloads 127
3915 A Trend Based Forecasting Framework of the ATA Method and Its Performance on the M3-Competition Data

Authors: H. Taylan Selamlar, I. Yavuz, G. Yapar

Abstract:

It is difficult to make predictions, especially about the future, and making accurate predictions is not always easy. However, better predictions remain the foundation of all science, so the development of accurate, robust, and reliable forecasting methods is very important. Numerous forecasting methods have been proposed and studied in the literature. Two major forecasting methods still dominate: Box-Jenkins ARIMA and exponential smoothing (ES), and new methods are still derived from or inspired by them. After more than 50 years of widespread use, exponential smoothing remains one of the most practically relevant forecasting methods available, due to its simplicity, robustness, and accuracy as an automatic forecasting procedure, as shown especially in the famous M-competitions. Despite its success and widespread use in many areas, ES models have some shortcomings that negatively affect the accuracy of forecasts. Therefore, a new forecasting method, called the ATA method, is proposed in this study to address these shortcomings. The new method is obtained from traditional ES models by modifying the smoothing parameters; the two methods therefore have similar structural forms, and ATA can easily be adapted to each of the individual ES models, while offering many advantages thanks to its innovative new weighting scheme. In this paper, the focus is on modeling the trend component and handling seasonality patterns through classical decomposition. The ATA method is therefore extended to the higher-order ES methods with additive, multiplicative, additive damped, and multiplicative damped trend components. The proposed models, called ATA trended models, are compared in predictive performance to their ES counterparts on the M3-competition data set, which is still the most recent and comprehensive time-series data collection available. The models outperform their counterparts in almost all settings, and when model selection is carried out among these trended models, ATA outperforms all competitors in the M3-competition for both short-term and long-term forecasting horizons when forecasting accuracy is compared using popular error metrics.
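
A hedged sketch of the additive-trend ATA recursion: the fixed smoothing constants of ES are replaced by time-decaying weights p/t and q/t (initialisation conventions vary in the literature; a simple one is used here):

```python
import numpy as np

def ata_additive(x, p, q, h=1):
    """x: series; p >= q are integer smoothing parameters; returns h-step forecast."""
    level, trend = x[0], x[1] - x[0]
    for t in range(2, len(x) + 1):                 # 1-indexed time, as in the method
        xt = x[t - 1]
        if t <= p:
            new_level = xt                          # early on, follow the data exactly
        else:
            new_level = (p / t) * xt + ((t - p) / t) * (level + trend)
        if t <= q:
            trend = xt - x[t - 2]
        else:
            trend = (q / t) * (new_level - level) + ((t - q) / t) * trend
        level = new_level
    return level + h * trend

x = np.array([10.0, 12.1, 13.9, 16.2, 18.0, 20.1])
print(ata_additive(x, p=2, q=1, h=1))               # one-step-ahead forecast
```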

Keywords: accuracy, exponential smoothing, forecasting, initial value

Procedia PDF Downloads 176
3914 A Supervised Approach for Word Sense Disambiguation Based on Arabic Diacritics

Authors: Alaa Alrakaf, Sk. Md. Mizanur Rahman

Abstract:

Over the last two decades, Arabic natural language processing (ANLP) has become increasingly important. One of the key issues in ANLP is ambiguity: in Arabic, different pronunciations of the same written word may have different meanings. Ambiguity also affects the effectiveness and efficiency of machine translation (MT), limiting the usefulness and accuracy of translation from Arabic to English, and the lack of Arabic resources makes the ambiguity problem more complicated, since the orthographic level of representation does not specify the exact meaning of the word. This paper looks at the diacritics of the Arabic language and uses them to disambiguate words. The proposed approach to word sense disambiguation uses a diacritizer application to diacritize Arabic text and then finds the most likely sense of an ambiguous word using a Naive Bayes classifier. Our experimental study shows that using Arabic diacritics with a Naive Bayes classifier improves the accuracy of choosing the appropriate sense by 23% and also decreases ambiguity in machine translation.
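
A minimal sketch of the classification stage: after diacritization, a Naive Bayes classifier picks the sense from the diacritized context. The four-sentence training set below is a toy placeholder, not the study's corpus:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Diacritized contexts for the ambiguous form "علم":
# with fatha-fatha it reads 'alam (flag); with kasra-sukun, 'ilm (knowledge).
contexts = ["رفع الجندي العَلَم فوق المبنى",
            "نال الطالب العِلْم في الجامعة",
            "رفرف العَلَم في الهواء",
            "طلب العِلْم فريضة"]
senses = ["flag", "knowledge", "flag", "knowledge"]

# whitespace analyzer keeps diacritized tokens intact as distinct features
wsd = make_pipeline(CountVectorizer(analyzer=str.split), MultinomialNB())
wsd.fit(contexts, senses)
print(wsd.predict(["حمل المتظاهر العَلَم"]))     # -> ['flag']
```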

Keywords: Arabic natural language processing, machine learning, machine translation, Naive Bayes classifier, word sense disambiguation

Procedia PDF Downloads 356
3913 Acoustic Analysis of Psycho-Communication Disorders within Moroccan Students

Authors: Brahim Sabir

Abstract:

Psycho-communication disorders negatively affect the academic progress of students in higher education. Understanding these disorders, their causes, and their effects will give education specialists a decision-making tool, leading to the resolution of problems related to the integration of students with psycho-communication disorders. In this context, a statistical study was conducted on the target population, namely Moroccan students. Pathological voice samples were recorded and analyzed acoustically with the PRAAT software in order to build a model that will serve as the basis for objective diagnosis.

Keywords: psycho-communication disorders, acoustic analysis, PRAAT

Procedia PDF Downloads 388
3912 Resistivity Tomography Optimization Based on Parallel Electrode Linear Back Projection Algorithm

Authors: Yiwei Huang, Chunyu Zhao, Jingjing Ding

Abstract:

Electrical resistivity tomography (ERT) has been widely used in medicine and geology, for example in imaging lung impedance and analysing soil impedance. Linear back projection is the core algorithm of ERT, but traditional linear back projection cannot make full use of the information in the electric field. In this paper, a parallel electrode linear back projection imaging method for ERT is proposed, which generates an electric field distribution that is not linearly related to that of traditional linear back projection, captures new information, and improves imaging accuracy without increasing the number of electrodes, simply by changing the connection mode of the electrodes. Simulation results show that the accuracy of the image reconstructed by parallel electrode linear back projection is improved by about 20%.
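
A toy sketch of the back-projection step: the image is the (sensitivity-normalised) transpose of the sensitivity matrix applied to the boundary-voltage change; a random matrix stands in for one computed by finite element simulation:

```python
import numpy as np

rng = np.random.default_rng(1)
n_meas, n_pix = 104, 256                  # e.g. 16 electrodes in adjacent drive
S = rng.random((n_meas, n_pix))           # sensitivity matrix (from FEM in practice)

v_ref = S @ np.ones(n_pix)                # homogeneous reference frame
truth = np.ones(n_pix); truth[100:110] = 3.0
v_obj = S @ truth                         # frame with a conductive inclusion

g = S.T @ (v_obj - v_ref)                 # back projection of the measurement change
g /= S.T @ np.ones(n_meas)                # normalise by sensitivity coverage
print(g[95:115].round(2))                 # values elevated over the inclusion

# Re-wiring electrodes in parallel pairs changes S itself, adding rows that
# carry field information the standard drive pattern misses.
```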

Keywords: electrical resistivity tomography, finite element simulation, image optimization, parallel electrode linear back projection

Procedia PDF Downloads 151
3911 SC-LSH: An Efficient Indexing Method for Approximate Similarity Search in High Dimensional Space

Authors: Sanaa Chafik, Imane Daoudi, Mounim A. El Yacoubi, Hamid El Ouardi

Abstract:

Locality Sensitive Hashing (LSH) is one of the most promising techniques for solving the nearest-neighbour search problem in high-dimensional space. Euclidean LSH is the most popular variant of LSH and has been successfully applied in many multimedia applications. However, Euclidean LSH has limitations that affect structure and query performance; its main limitation is large memory consumption, since achieving good accuracy requires a large number of hash tables. In this paper, we propose a new hashing algorithm that overcomes the storage space problem and improves query time while keeping accuracy similar to that of the original Euclidean LSH. Experimental results on a real large-scale dataset show that the proposed approach achieves good performance and consumes less memory than Euclidean LSH.
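
For context, the Euclidean (p-stable) LSH family the paper builds on hashes a vector as h(v) = floor((a·v + b) / w); a minimal single-table sketch (parameters illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

class E2LSHTable:
    """One hash table of k concatenated Euclidean LSH functions."""
    def __init__(self, dim, k=8, w=4.0):
        self.a = rng.normal(size=(k, dim))        # p-stable (Gaussian) projections
        self.b = rng.uniform(0, w, size=k)
        self.w = w
        self.buckets = {}

    def key(self, v):
        return tuple(np.floor((self.a @ v + self.b) / self.w).astype(int))

    def insert(self, idx, v):
        self.buckets.setdefault(self.key(v), []).append(idx)

    def query(self, v):
        return self.buckets.get(self.key(v), [])

data = rng.normal(size=(1000, 32))
table = E2LSHTable(dim=32)
for i, v in enumerate(data):
    table.insert(i, v)
print(table.query(data[0]))      # candidate near-neighbours of point 0; more
                                 # tables raise recall at the memory cost the
                                 # paper targets
```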

Keywords: approximate nearest neighbor search, content based image retrieval (CBIR), curse of dimensionality, locality sensitive hashing, multidimensional indexing, scalability

Procedia PDF Downloads 320
3910 Impact of Marine Hydrodynamics and Coastal Morphology on Changes in Mangrove Forests (Case Study: West of Strait of Hormuz, Iran)

Authors: Fatemeh Parhizkar, Mojtaba Yamani, Abdolla Behboodi, Masoomeh Hashemi

Abstract:

Mangrove forests are natural and valuable gifts found in some parts of the world, including Iran. Regarding the threats faced by these forests and their declining area all over the world, including in Iran, it is very necessary to manage and monitor them. The current study investigated the changes in mangrove forests and the relationship between these changes and marine hydrodynamics and coastal morphology in the area between Qeshm Island and the west coast of Hormozgan province (i.e., the coastline between the Mehran River and Bandar-e Pol port) over a 49-year period. After preprocessing and classifying satellite images using the SVM, MLC, and ANN classifiers and evaluating the accuracy of the maps, the SVM approach, with the highest accuracy (a Kappa coefficient of 0.97 and overall accuracy of 98%), was selected for preparing the classification maps of all images. The results indicate that from 1972 to 1987 the area of these forests experienced a declining trend, and in the following years their expansion began. These forests include the mangrove forests of the Khurkhuran wetland, the Muriz Deraz Estuary, the Haft Baram Estuary, the mangrove forest south of Laft Port, and the mangrove forests between the Tabl Pier, Maleki Village, and Gevarzin Village. The marine hydrodynamic and geomorphological characteristics of the region, such as the extent of the intertidal zone, sediment data, the freshwater inlet of the Mehran River, wave stability and calmness, topography and slope, as well as mangrove conservation projects, make further expansion of mangrove forests in this area possible. By providing significant and up-to-date information on the development and decline of mangrove forests in different parts of the coast, this study can contribute significantly to measures for the conservation and restoration of mangrove forests.
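
The accuracy figures quoted come from a confusion matrix over validation pixels; as a small sketch, overall accuracy and the Kappa coefficient can be computed as:

```python
from sklearn.metrics import accuracy_score, cohen_kappa_score

# illustrative validation labels for a classified map (not the study's data)
y_true = ["mangrove", "water", "mudflat", "mangrove", "water", "mangrove"]
y_pred = ["mangrove", "water", "mangrove", "mangrove", "water", "mangrove"]

print("overall accuracy:", accuracy_score(y_true, y_pred))
print("kappa:", cohen_kappa_score(y_true, y_pred))
```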

Keywords: mangrove forests, marine hydrodynamics, coastal morphology, west of strait of Hormuz, Iran

Procedia PDF Downloads 94
3909 Modeling of Geotechnical Data Using GIS and Matlab for Eastern Ahmedabad City, Gujarat

Authors: Rahul Patel, S. P. Dave, M. V Shah

Abstract:

Ahmedabad is a rapidly growing city in western India that is experiencing significant urbanization and industrialization. With projections indicating that it will become a metropolitan city in the near future, various construction activities are taking place, making soil testing a crucial requirement before construction can commence. To this end, construction companies and contractors need to conduct soil testing periodically. This study focuses on the process of creating a digitally formatted spatial database that integrates geotechnical data with a Geographic Information System (GIS). Building a comprehensive geotechnical geo-database involves three essential steps: first, borehole data is collected from reputable sources; second, the accuracy and redundancy of the data are verified; finally, the geotechnical information is standardized and organized for integration into the database. Once the geo-database is complete, it is integrated with GIS, allowing users to visualize, analyze, and interpret geotechnical information spatially. Using a Topo to Raster interpolation process in GIS, estimated values are assigned to all locations based on the sampled geotechnical data values. The study area was contoured for SPT N-values, soil classification, Φ-values, and bearing capacity (T/m²), and various interpolation techniques were cross-validated to ensure the accuracy of the information. The GIS map generated by this study enables the calculation of SPT N-values, Φ-values, and bearing capacities for different footing widths and various depths. This approach highlights the potential of GIS to provide an efficient solution to complex problems that would otherwise be tedious to solve by other means. Not only does GIS offer greater accuracy, but it also generates valuable information that can be used as input for correlation analysis. Furthermore, the system serves as a decision support tool for geotechnical engineers: the information generated can be used to make informed decisions during construction activities, for instance, to optimize foundation designs and improve site selection.
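
A hedged sketch of the interpolation and cross-validation step (ArcGIS's Topo to Raster has no direct SciPy equivalent, so cubic griddata stands in; borehole data are synthetic):

```python
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(0)
xy = rng.uniform(0, 1000, size=(40, 2))                 # borehole locations [m]
n_spt = 10 + 0.02 * xy[:, 0] + rng.normal(0, 2, 40)     # synthetic SPT N-values

gx, gy = np.mgrid[0:1000:100j, 0:1000:100j]             # 100 x 100 output grid
surface = griddata(xy, n_spt, (gx, gy), method="cubic") # contour-ready raster

# leave-one-out cross-validation of the interpolated surface
errs = [n_spt[i] - griddata(np.delete(xy, i, 0), np.delete(n_spt, i),
                            xy[i:i + 1], method="linear")[0]
        for i in range(len(xy))]
print("LOO RMSE:", float(np.sqrt(np.nanmean(np.square(errs)))))
```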

Keywords: ArcGIS, borehole data, geographic information system (GIS), geo-database, interpolation, SPT N-value, soil classification, φ-value, bearing capacity

Procedia PDF Downloads 67
3908 Energy Consumption Forecast Procedure for an Industrial Facility

Authors: Tatyana Aleksandrovna Barbasova, Lev Sergeevich Kazarinov, Olga Valerevna Kolesnikova, Aleksandra Aleksandrovna Filimonova

Abstract:

We consider forecasting of energy consumption by separate production areas of a large industrial facility as well as by the facility itself. For the production areas, the forecast is made based on empirical dependencies between specific energy consumption and production output. For the facility itself, the task of minimizing the energy consumption forecasting error is addressed by adjusting the facility's actual energy consumption values, evaluated with the metering device, against the total design energy consumption of the separate production areas. The suggested procedure was tested on actual data of core product output and energy consumption from a group of workshops and power plants of a large iron and steel facility. Test results show that the procedure gives a mean energy consumption forecasting error for winter 2014 of 0.11% for the group of workshops and 0.137% for the power plants.
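
A minimal sketch of the two-level procedure under stated assumptions: per-area forecasts from empirical specific-consumption fits, then reconciliation against the facility-level meter (all coefficients and outputs are invented for illustration):

```python
# empirical per-area fit: energy = a + b * output (from historical data)
areas = {"workshop_1": (120.0, 0.85),
         "workshop_2": (90.0, 1.10),
         "power_plant": (300.0, 0.40)}
planned_output = {"workshop_1": 500.0, "workshop_2": 420.0, "power_plant": 900.0}

design = {k: a + b * planned_output[k] for k, (a, b) in areas.items()}
total_design = sum(design.values())

metered_total = 1520.0                        # facility-level metered value
scale = metered_total / total_design          # reconcile areas against the meter
adjusted = {k: round(v * scale, 1) for k, v in design.items()}
print(adjusted, round(sum(adjusted.values()), 1))
```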

Keywords: energy consumption, energy consumption forecasting error, energy efficiency, forecasting accuracy, forecasting

Procedia PDF Downloads 443