Search results for: slice thickness accuracy
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5251

4351 Real-Time Lane Marking Detection Using Weighted Filter

Authors: Ayhan Kucukmanisa, Orhan Akbulut, Oguzhan Urhan

Abstract:

Nowadays, advanced driver assistance systems (ADAS) have become popular, since they enable safe driving. Lane detection is a vital step for ADAS. The performance of the lane detection process is critical to obtaining a high-accuracy lane departure warning system (LDWS). Challenging factors such as road cracks, erosion of lane markings, and weather conditions might affect the performance of a lane detection system. In this paper, a 1-D weighted filter based on row filtering is proposed to detect lane markings. The 2-D input image is filtered by the 1-D weighted filter, which considers four pixel values located symmetrically around the center of each candidate pixel. Performance evaluation is carried out with two metrics, the true positive rate (TPR) and the false positive rate (FPR). Experimental results demonstrate that the proposed approach provides better lane marking detection accuracy than previous methods while achieving real-time processing performance.
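
As a rough illustration of the row-filtering idea, the sketch below applies a 1-D weighted filter to each image row, comparing every candidate pixel against four symmetrically placed neighbours; the offsets, weights, and threshold are assumptions for illustration, not the authors' tuned values.

```python
import numpy as np

def lane_marking_response(gray, half_width=8, weights=(1.0, 0.5)):
    """Row-wise 1-D weighted filter (illustrative sketch).

    For every candidate pixel, four symmetrically placed neighbours
    (at +/- half_width and +/- 2*half_width along the row) are compared
    against the centre value; bright pixels flanked by darker road
    surface respond strongly. Offsets and weights are assumptions.
    """
    g = gray.astype(np.float32)
    d1, d2 = half_width, 2 * half_width
    w1, w2 = weights
    # shifted copies of each row implement the 1-D filtering efficiently
    left1, right1 = np.roll(g, d1, axis=1), np.roll(g, -d1, axis=1)
    left2, right2 = np.roll(g, d2, axis=1), np.roll(g, -d2, axis=1)
    resp = w1 * (2 * g - left1 - right1) + w2 * (2 * g - left2 - right2)
    resp[:, :d2] = 0          # discard wrap-around columns at the borders
    resp[:, -d2:] = 0
    return np.clip(resp, 0, None)

# binary lane-marking mask via a simple global threshold (assumed value)
# mask = lane_marking_response(gray_image) > 40
```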

Keywords: lane marking filter, lane detection, ADAS, LDWS

Procedia PDF Downloads 195
4350 Measurement of Sarcopenia Associated with the Extent of Gastrointestinal Oncological Disease

Authors: Adrian Hang Yue Siu, Matthew Holyland, Sharon Carey, Daniel Steffens, Nabila Ansari, Cherry E. Koh

Abstract:

Introduction: Peritoneal malignancies are challenging cancers to manage. While cytoreductive surgery and hyperthermic intraperitoneal chemotherapy (CRS and HIPEC) may offer a cure, they are considered radical and morbid. Pre-emptive identification of deconditioned patients for optimization may mitigate the risks of surgery. However, the difficulty lies in the scarcity of validated predictive tools to identify high-risk patients. In recent times, there has been growing interest in sarcopenia, which can occur as a result of malnutrition and malignancies. Therefore, the purpose of this study was to assess the utility of sarcopenia in predicting post-operative outcomes. Methods: A single quaternary-center retrospective study of CRS and HIPEC patients between 2017 and 2020 was conducted to determine the association between pre-operative sarcopenia and post-operative outcomes. Lumbar CT images were analyzed using Slice-o-matic® to measure sarcopenia. Results: Cohort (n=94) analysis found that 40% had sarcopenia, with a majority being female (53.2%) and a mean age of 55 years. Sarcopenia was statistically associated with lower weight compared with non-sarcopenic patients, 72.7 kg vs. 82.2 kg (p=0.014), and with shorter overall survival, 1.4 years vs. 2.1 years (p=0.032). Post-operatively, patients with sarcopenia experienced more post-operative complications (p=0.001). Conclusion: Complex procedures often require optimization to prevent complications and improve survival. While patient biomarkers – BMI and weight – are used for optimization, this research advocates for the identification of sarcopenia status for pre-operative planning. Sarcopenia may be an indicator of advanced disease requiring further treatment and is an emerging area of research. Larger studies are required to confirm these findings and to assess the reversibility of sarcopenia after surgery.

Keywords: sarcopaenia, cytoreductive surgery, hyperthermic intraperitoneal chemotherapy, surgical oncology

Procedia PDF Downloads 86
4349 Artificial Neural Networks and Hidden Markov Model in Landslides Prediction

Authors: C. S. Subhashini, H. L. Premaratne

Abstract:

Landslides are the most recurrent and prominent disaster in Sri Lanka. Sri Lanka has been subjected to a number of extreme landslide disasters that resulted in a significant loss of life, material damage, and distress. A solution towards preparedness and mitigation is required to reduce the recurrent losses associated with landslides. Artificial Neural Networks (ANNs) and Hidden Markov Models (HMMs) are now widely used in many computer applications spanning multiple domains. This research examines the effectiveness of using Artificial Neural Networks and the Hidden Markov Model in landslide prediction and the possibility of applying this technology to predict landslides in a prominent geographical area in Sri Lanka. A thorough survey was conducted with the participation of resource persons from several national universities in Sri Lanka to identify and rank the influencing factors for landslides. A landslide database was created using existing topographic, soil, drainage, and land cover maps and historical data. The landslide-related factors, which include external factors (rainfall and number of previous occurrences) and internal factors (soil material, geology, land use, curvature, soil texture, slope, aspect, soil drainage, and soil effective thickness), are extracted from the landslide database. These factors are used to estimate the possibility of landslide occurrence using an ANN and an HMM. Each model acquires the relationship between the landslide factors and the hazard index during the training session. The models, with the landslide-related factors as inputs, are trained to predict three classes, namely 'landslide occurs', 'landslide does not occur', and 'landslide likely to occur'. Once trained, the models are able to predict the most likely class for the prevailing data. Finally, the two models were compared with regard to prediction accuracy, false acceptance rate, and false rejection rate. This research indicates that the Artificial Neural Network can be used as a strong decision support system to predict landslides more efficiently and effectively than the Hidden Markov Model.
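
A minimal sketch of the ANN side of such a comparison is given below: a small multilayer perceptron is trained to map landslide-related factors to the three occurrence classes. The synthetic feature matrix is a placeholder for the real landslide database, and the HMM counterpart is not shown.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import accuracy_score

# X: rows of landslide-related factors (rainfall, previous occurrences,
# soil material, geology, land use, curvature, slope, aspect, ...);
# y: 0 = "does not occur", 1 = "likely to occur", 2 = "occurs".
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 11))            # placeholder for the real database
y = rng.integers(0, 3, size=500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
ann = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                                  random_state=0))
ann.fit(X_tr, y_tr)
print("hold-out accuracy:", accuracy_score(y_te, ann.predict(X_te)))
```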

Keywords: landslides, influencing factors, neural network model, hidden markov model

Procedia PDF Downloads 385
4348 An Auxiliary Technique for Coronary Heart Disease Prediction by Analyzing Electrocardiogram Based on ResNet and Bi-Long Short-Term Memory

Authors: Yang Zhang, Jian He

Abstract:

Heart disease is one of the leading causes of death in the world, and coronary heart disease (CHD) is one of the major heart diseases. The electrocardiogram (ECG) is widely used in the detection of heart diseases, but the traditional manual method of CHD prediction by analyzing the ECG requires substantial professional knowledge from doctors. This paper introduces a sliding window and the continuous wavelet transform (CWT) to transform ECG signals into images, and then ResNet and Bi-LSTM are introduced to build the ECG feature extraction network (namely ECGNet). Finally, an auxiliary system for coronary heart disease prediction was developed based on a modified ResNet18 and Bi-LSTM, and the public ECG dataset of CHD from MIMIC-III was used to train and test the system. The experimental results show that the accuracy of the method is 83%, and the F1-score is 83%. Compared with the available methods for CHD prediction based on ECG, such as kNN, decision tree, VGGNet, etc., this method not only improves the prediction accuracy but also avoids the degradation phenomenon of deep learning networks.
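
The sliding-window CWT step can be sketched as below with PyWavelets: each window of the 1-D ECG trace becomes a 2-D scalogram that the downstream CNN/Bi-LSTM treats as an image. The wavelet name, window length, and scale range are assumptions, and the ResNet/Bi-LSTM stage is omitted.

```python
import numpy as np
import pywt

def ecg_to_scalograms(ecg, fs=250, win_sec=2.0, step_sec=1.0,
                      scales=np.arange(1, 65), wavelet="morl"):
    """Slide a window over a 1-D ECG trace and return one CWT scalogram
    (2-D array) per window. Parameter values are illustrative assumptions."""
    win, step = int(win_sec * fs), int(step_sec * fs)
    images = []
    for start in range(0, len(ecg) - win + 1, step):
        segment = ecg[start:start + win]
        coeffs, _ = pywt.cwt(segment, scales, wavelet, sampling_period=1.0 / fs)
        images.append(np.abs(coeffs))          # shape: (len(scales), win)
    return np.stack(images)

# example with a synthetic trace
scalograms = ecg_to_scalograms(np.sin(np.linspace(0, 100, 5000)))
print(scalograms.shape)
```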

Keywords: Bi-LSTM, CHD, ECG, ResNet, sliding window

Procedia PDF Downloads 92
4347 Comparison of Radiation Dosage and Image Quality: Digital Breast Tomosynthesis vs. Full-Field Digital Mammography

Authors: Okhee Woo

Abstract:

Purpose: With increasing concern over individual radiation exposure doses, studies analyzing radiation dosage in breast imaging modalities are required. The aim of this study is to compare radiation dosage and image quality between digital breast tomosynthesis (DBT) and full-field digital mammography (FFDM). Methods and Materials: 303 patients (mean age 52.1 years) who underwent both DBT and FFDM were retrospectively reviewed. Radiation dosage data were obtained from a radiation dose scoring and monitoring program, Radimetrics (Bayer HealthCare, Whippany, NJ). Entrance dose and mean glandular doses in each breast were obtained for both imaging modalities. To compare the image quality of DBT with two-dimensional synthesized mammogram (2DSM) and FFDM, 5-point scoring of lesion clarity was assessed and the better modality of the two was selected. Interobserver performance was compared with kappa values, and diagnostic accuracy was compared using the McNemar test. The parameters of radiation dosage (entrance dose, mean glandular dose) and image quality were compared between the two modalities using the paired t-test and the Wilcoxon rank sum test. Results: For entrance dose and mean glandular doses for each breast, DBT had lower values compared with FFDM (p-value < 0.0001). Diagnostic accuracy showed no statistically significant difference, but the lesion clarity score was higher for DBT with 2DSM, and DBT was chosen as the better modality compared with FFDM. Conclusion: DBT showed a lower radiation entrance dose and lower mean glandular doses to both breasts compared with FFDM. In addition, DBT with 2DSM had better image quality than FFDM with similar diagnostic accuracy, suggesting that DBT has the potential to be performed as an alternative to FFDM.
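
A minimal sketch of the per-patient dose comparison is shown below, assuming one paired dose value per patient and per modality; the dose arrays are placeholders, and the Wilcoxon signed-rank test is used here as the paired nonparametric counterpart (an assumption, since the abstract names the rank sum test).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# placeholder per-patient mean glandular doses (mGy) for the two modalities
mgd_ffdm = rng.normal(1.8, 0.3, size=303)
mgd_dbt = mgd_ffdm - rng.normal(0.3, 0.1, size=303)

t_stat, p_t = stats.ttest_rel(mgd_dbt, mgd_ffdm)   # paired t-test
w_stat, p_w = stats.wilcoxon(mgd_dbt, mgd_ffdm)    # signed-rank test
print(f"paired t-test p={p_t:.4g}, Wilcoxon p={p_w:.4g}")
```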

Keywords: radiation dose, DBT, digital mammography, image quality

Procedia PDF Downloads 351
4346 On the Solution of Boundary Value Problems Blended with Hybrid Block Methods

Authors: Kizito Ugochukwu Nwajeri

Abstract:

This paper explores the application of hybrid block methods for solving boundary value problems (BVPs), which are prevalent in various fields such as science, engineering, and applied mathematics. Traditional numerical approaches, such as finite difference and shooting methods, often encounter challenges related to stability and convergence, particularly in the context of complex and nonlinear BVPs. To address these challenges, we propose a hybrid block method that integrates features from both single-step and multi-step techniques. This method allows for the simultaneous computation of multiple solution points while maintaining high accuracy. Specifically, we employ a combination of polynomial interpolation and collocation strategies to derive a system of equations that captures the behavior of the solution across the entire domain. By directly incorporating boundary conditions into the formulation, we enhance the stability and convergence properties of the numerical solution. Furthermore, we introduce an adaptive step-size mechanism to optimize performance based on the local behavior of the solution. This adjustment allows the method to respond effectively to variations in solution behavior, improving both accuracy and computational efficiency. Numerical tests on a variety of boundary value problems demonstrate the effectiveness of the hybrid block methods. These tests showcase significant improvements in accuracy and computational efficiency compared to conventional methods, indicating that our approach is robust and versatile. The results suggest that this hybrid block method is suitable for a wide range of applications in real-world problems, offering a promising alternative to existing numerical techniques.
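
The collocation idea behind the formulation can be illustrated with a much simpler global polynomial collocation for a linear BVP, shown below. This is only a sketch of how collocation conditions and boundary conditions are assembled into one linear system, not the hybrid block method itself; the test problem and polynomial degree are assumptions.

```python
import numpy as np

# Solve y'' + y = 0, y(0) = 0, y(pi/2) = 1 (exact solution: sin x)
# with global polynomial collocation on a monomial basis.
N = 8                                    # polynomial degree (assumed)
a, b = 0.0, np.pi / 2
xc = np.linspace(a, b, N + 1)[1:-1]      # interior collocation points

A = np.zeros((N + 1, N + 1))
rhs = np.zeros(N + 1)
for j, x in enumerate(xc):               # ODE enforced at collocation points
    for k in range(N + 1):
        d2 = k * (k - 1) * x ** (k - 2) if k >= 2 else 0.0
        A[j, k] = d2 + x ** k
# boundary conditions folded directly into the same system
A[-2, :] = [a ** k for k in range(N + 1)]; rhs[-2] = 0.0
A[-1, :] = [b ** k for k in range(N + 1)]; rhs[-1] = 1.0

c = np.linalg.solve(A, rhs)
x_test = np.linspace(a, b, 5)
y_test = sum(c[k] * x_test ** k for k in range(N + 1))
print(np.max(np.abs(y_test - np.sin(x_test))))   # error should be tiny
```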

Keywords: hybrid block methods, boundary value problem, polynomial interpolation, adaptive step-size control, collocation methods

Procedia PDF Downloads 40
4345 An Effective Noise Resistant Frequency Modulation Continuous-Wave Radar Vital Sign Signal Detection Method

Authors: Lu Yang, Meiyang Song, Xiang Yu, Wenhao Zhou, Chuntao Feng

Abstract:

To address the problem that human vital sign signals extracted by frequency-modulated continuous-wave (FMCW) radar are susceptible to noise interference and suffer from low reconstruction accuracy, a new detection scheme for vital sign signals is proposed. Firstly, an improved complete ensemble empirical mode decomposition with adaptive noise (ICEEMDAN) algorithm is applied to decompose the radar-extracted thoracic signals into several intrinsic mode functions (IMFs) with different spatial scales, and then the IMF components are optimized by a BP neural network improved by an immune genetic algorithm (IGA). The simulation results show that this scheme can effectively separate the noise, accurately extract the respiratory and heartbeat signals, and improve the reconstruction accuracy and signal-to-noise ratio of the vital sign signals.
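
The decomposition step can be sketched as below on a synthetic chest-displacement signal, using PyEMD's CEEMDAN as a stand-in for ICEEMDAN (an assumption); the IGA-optimized BP network stage is omitted, and IMFs are inspected only by their dominant frequency.

```python
import numpy as np
from PyEMD import CEEMDAN   # pip install EMD-signal; CEEMDAN stands in for ICEEMDAN

fs = 20                                          # radar slow-time sampling rate (Hz)
t = np.arange(0, 30, 1 / fs)
resp = 4.0 * np.sin(2 * np.pi * 0.25 * t)        # ~15 breaths/min
heart = 0.4 * np.sin(2 * np.pi * 1.2 * t)        # ~72 beats/min
chest = resp + heart + 0.3 * np.random.randn(t.size)

imfs = CEEMDAN()(chest)                          # decompose into IMFs
print("number of IMFs:", imfs.shape[0])

# crude selection of respiration/heartbeat IMFs by dominant frequency
freqs = np.fft.rfftfreq(t.size, 1 / fs)
for i, imf in enumerate(imfs):
    f_dom = freqs[np.argmax(np.abs(np.fft.rfft(imf)))]
    print(f"IMF {i}: dominant frequency {f_dom:.2f} Hz")
```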

Keywords: frequency modulated continuous wave radar, ICEEMDAN, BP neural network, vital signs signal

Procedia PDF Downloads 169
4344 Accuracy of Trauma on Scene Triage Screen Tool (Shock Index, Reverse Shock Index Glasgow Coma Scale, and National Early Warning Score) to Predict the Severity of Emergency Department Triage

Authors: Chaiyaporn Yuksen, Tapanawat Chaiwan

Abstract:

Introduction: Emergency medical service (EMS) care for trauma patients must provide on-scene assessment, essential treatment, and appropriate transport to a trauma center. The shock index (SI), reverse shock index Glasgow Coma Scale (rSIG), and National Early Warning Score (NEWS) triage tools are easy to use in a prehospital setting, yet there is no standardized on-scene triage protocol in prehospital care. The primary objective was to determine the accuracy of SI, rSIG, and NEWS in predicting the severity of trauma patients in the emergency department (ED). Methods: This was a retrospective cross-sectional diagnostic study of injured patients who received prehospital care and were transported by EMS to the ED of Ramathibodi Hospital, a university-affiliated super tertiary care hospital in Bangkok, Thailand, from January 2015 to September 2022. We compared the on-scene parameters (SI, rSIG, and NEWS) with the ED triage (Emergency Severity Index) using the area under the ROC curve. Results: 218 trauma patients were transported by EMS to the ED; 161 were ESI level 1-2 and 57 were level 3-5. NEWS was a more accurate triage tool for discriminating the severity of trauma patients than rSIG and SI: the areas under the ROC curve were 0.743 (95%CI 0.70-0.79), 0.649 (95%CI 0.59-0.70), and 0.582 (95%CI 0.52-0.65), respectively (p-value <0.001). The cut point of NEWS for discrimination was 6 points. Conclusions: NEWS was the most accurate on-scene triage tool for trauma patients in the prehospital setting.
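
A minimal sketch of how the three scores can be computed and compared by AUC is given below. The definitions SI = HR/SBP and rSIG = GCS x SBP/HR follow the usual literature conventions (an assumption, since the abstract does not state them), and all vital-sign arrays and labels are placeholders.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
n = 218
hr = rng.normal(95, 20, n).clip(40, 180)      # heart rate (placeholder values)
sbp = rng.normal(120, 25, n).clip(60, 200)    # systolic blood pressure
gcs = rng.integers(3, 16, n)                  # Glasgow Coma Scale (3-15)
news = rng.integers(0, 15, n)                 # National Early Warning Score
severe = (rng.random(n) < 0.74).astype(int)   # 1 = ESI level 1-2 (placeholder labels)

si = hr / sbp                                 # shock index
rsig = gcs * sbp / hr                         # reverse shock index x GCS

print("AUC NEWS:", roc_auc_score(severe, news))
print("AUC SI  :", roc_auc_score(severe, si))
print("AUC rSIG:", roc_auc_score(severe, -rsig))   # lower rSIG = more severe
```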

Keywords: on-scene triage, trauma patient, ED triage, accuracy, NEWS

Procedia PDF Downloads 129
4343 Habitat Studies of Etheria elliptica in Some Water Bodies (River Ogbese and Owena Reservoir) in Ondo State, Nigeria

Authors: O. O. Olawusi-Peters, M. O. Adediran, O. A. Ajibare

Abstract:

The Etheria elliptica population is declining due to various human activities on its freshwater habitat. This necessitates a habitat study of the mussel in River Ogbese and Owena reservoir in Ondo State, Nigeria, in order to know the status of the organism within the ecosystem. Thirty (30) specimens each from River Ogbese and Owena reservoir were sampled between May and August 2012. Morphometric variables such as length, breadth, shell thickness and weight of the mussels were measured. Also, some physico-chemical parameters, the flow rate and the soil profile of the two water bodies were studied. In River Ogbese, the mean weight, length, breadth and thickness obtained were 49.73 g, 8.42 cm, 3.78 cm and 0.53 cm, respectively; in Owena reservoir, the values were 111.17 g, 8.80 cm, 6.64 cm and 0.22 cm, respectively. The condition factor showed that the samples from Owena reservoir (K = 16.33) were healthier than those from River Ogbese (K = 8.34). Also, the length-weight relationship indicated isometric growth in both water bodies (Ogbese r² = 0.68; Owena r² = 0.66). In River Ogbese, the physico-chemical parameters obtained were temperature (24.3 °C), pH (7.12), TDS (72 ppm), DO (3.2 mg/l), conductivity (145µ) and BOD (0.7 mg/l); mean values of temperature (24.1 °C), pH (7.69), TDS (102 ppm), DO (3.1 mg/l), conductivity (183µ) and BOD (0.8 mg/l) were obtained from Owena reservoir. The soil sample values obtained from the two water bodies were: River Ogbese, phosphorus 78.78, calcium 3.60, magnesium 1.90 and organic matter 0.17; Owena reservoir, phosphorus 3.34, calcium 4.40, magnesium 1.20 and organic matter 0.66. The flow rate was 0.22 m/s for Owena reservoir and 0.26 m/s for River Ogbese. The study revealed that Etheria elliptica in Owena reservoir and River Ogbese were in good and healthy condition despite the various human activities on the water bodies. The water quality parameters obtained were within the preferred requirements of the mussels.
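
The reported condition factors are consistent with Fulton's formula K = 100·W/L³ applied to the mean weight and length. The short sketch below recomputes them and fits a length-weight relationship on placeholder measurements; an exponent b close to 3 indicates isometric growth. The sample arrays are illustrative, not the study's data.

```python
import numpy as np

def fulton_k(weight_g, length_cm):
    """Fulton's condition factor, K = 100 * W / L^3."""
    return 100.0 * weight_g / length_cm ** 3

print(fulton_k(49.73, 8.42))    # River Ogbese means  -> ~8.3
print(fulton_k(111.17, 8.80))   # Owena reservoir means -> ~16.3

# length-weight relationship: log10(W) = log10(a) + b * log10(L)
lengths = np.array([7.1, 7.8, 8.2, 8.6, 9.0, 9.4])   # placeholder sample
weights = np.array([30.0, 41.0, 48.0, 56.0, 66.0, 75.0])
b, log_a = np.polyfit(np.log10(lengths), np.log10(weights), 1)
print(f"b = {b:.2f} (b close to 3 indicates isometric growth)")
```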

Keywords: Etheria elliptica, mussels, Owena reservoir, River Ogbese

Procedia PDF Downloads 512
4342 Assessment of the High-Speed Ice Friction of Bob Skeleton Runners

Authors: Agata Tomaszewska, Timothy Kamps, Stephan R. Turnock, Nicola Symonds

Abstract:

Bob skeleton is a highly competitive sport in which an athlete reaches speeds of up to 40 m/s while sliding, head first, down an ice track. It is believed that the friction between the runners and the ice contributes significantly to the total energy loss during a bob skeleton descent. Only limited experimental data are available regarding the friction of bob skeleton runners, or indeed of steel on ice, at high sliding speeds (> 20 m/s). Testing methods used to investigate the friction of steel on ice in winter sports are outlined, and their accuracy and repeatability discussed. A systems thinking approach was used to investigate the runner-ice interaction during sliding and to create concept designs of three ice tribometers. The operational envelope of the bob skeleton system has been defined through mathematical modelling. Designs of drum, linear and inertia pin-on-disk tribometers were developed specifically for bob skeleton runner testing, with the requirement of reaching speeds of up to 40 m/s and facilitating fresh-ice sliding. The design constraints have been outlined and the proposed solutions compared based on ease of operation, accuracy and development cost.

Keywords: bob skeleton, ice friction, high-speed tribometers, sliding friction

Procedia PDF Downloads 263
4341 Pretherapy Initial Dosimetry Results in Prostate Cancer Radionuclide Therapy with Lu-177-PSMA-DOTA-617

Authors: M. Abuqebitah, H. Tanyildizi, N. Yeyin, I. Cavdar, M. Demir, L. Kabasakal

Abstract:

Aim: Targeted radionuclide therapy (TRT) is an increasingly used treatment modality for a wide range of cancers. At present, dosimetry is highly required either to plan treatment or to ascertain the absorbed dose delivered to critical organs during treatment. Methods and Materials: The study comprised 7 patients who suffered from prostate cancer with progressive disease and were candidates for Lu-177-DOTA-617 therapy, following PSMA PET/CT imaging for all patients. An activity of 5.2±0.3 mCi was intravenously injected. To evaluate the bone marrow absorbed dose, 2 cc blood samples were withdrawn at short, variable times (3, 15, 30, 60, 180 minutes) after injection. Furthermore, whole-body scans were performed using a scintillation gamma camera at 4, 24, 48, and 120 hours after injection, and in order to quantify the activity taken up in the body, kidneys, liver, right parotid, and left parotid, the geometric mean of anterior and posterior counts was determined through ROI analysis; after that, background subtraction and attenuation correction were applied using the patients' PSMA PET/CT images, taking into consideration organ thickness, body thickness, and Hounsfield units from the CT scan. The OLINDA/EXM dosimetry program was used for curve fitting, residence time calculation, and absorbed dose calculations. Findings: The absorbed doses of bone marrow, left kidney, right kidney, liver, left parotid, right parotid, and total body were 1.28±0.52, 32.36±16.36, 32.7±13.68, 10.35±3.45, 38.67±21.29, 37.55±19.77, and 2.25±0.95 mGy/mCi, respectively. Conclusion: Our first results indicate that Lu-177-DOTA-617 is a safe and reliable therapy, as no complications were seen. On the other hand, the observable variation in the absorbed dose of the critical organs among the patients necessitates a patient-specific dosimetry approach to protect body organs, particularly the highly exposed kidneys and parotid glands.
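
A minimal sketch of the conjugate-view (geometric mean) quantification step, with background subtraction and a simple attenuation correction, is given below. The formula follows the standard MIRD conjugate-view method; the attenuation coefficient, thickness, calibration factor, and the neglect of source self-attenuation are all assumptions, not the authors' exact implementation.

```python
import numpy as np

def conjugate_view_activity(counts_ant, counts_post, bkg_ant, bkg_post,
                            mu_cm, body_thickness_cm, cal_counts_per_mbq):
    """Organ activity from anterior/posterior ROI counts via the
    conjugate-view (geometric mean) method:
        A = sqrt(Ia * Ip) / sqrt(exp(-mu * L)) / C
    with background-subtracted counts Ia, Ip, a linear attenuation
    coefficient mu, patient thickness L from CT, and a camera
    calibration factor C for the acquisition time (all assumed inputs).
    """
    ia = max(counts_ant - bkg_ant, 0.0)
    ip = max(counts_post - bkg_post, 0.0)
    transmission = np.exp(-mu_cm * body_thickness_cm)
    return np.sqrt(ia * ip) / np.sqrt(transmission) / cal_counts_per_mbq

# illustrative numbers only
print(conjugate_view_activity(5200.0, 4100.0, 300.0, 280.0,
                              mu_cm=0.11, body_thickness_cm=22.0,
                              cal_counts_per_mbq=500.0), "MBq")
```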

Keywords: Lu-177-PSMA, prostate cancer, radionuclide therapy

Procedia PDF Downloads 482
4340 Developing an Advanced Algorithm Capable of Classifying News, Articles and Other Textual Documents Using Text Mining Techniques

Authors: R. B. Knudsen, O. T. Rasmussen, R. A. Alphinas

Abstract:

The reason for conducting this research is to develop an algorithm that is capable of classifying news articles from the automobile industry, according to the competitive actions that they entail, with the use of Text Mining (TM) methods. It is necessary to test how to properly preprocess the data for this research by preparing pipelines that fit each algorithm best. The pipelines are tested along with nine different classification algorithms in the realm of regression, support vector machines, and neural networks. Preliminary testing for identifying the optimal pipelines and algorithms resulted in the selection of two algorithms with two different pipelines. The two algorithms are Logistic Regression (LR) and an Artificial Neural Network (ANN). These algorithms are optimized further, where several parameters of each algorithm are tested. The best result is achieved with the ANN. The final model yields an accuracy of 0.79, a precision of 0.80, a recall of 0.78, and an F1 score of 0.76. By removing three of the classes that created noise, the final algorithm is capable of reaching an accuracy of 94%.
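
A minimal sketch of two such pipelines (TF-IDF features feeding LR and an ANN) is shown below; the corpus, labels, vectorizer settings, and network size are placeholders for illustration, not the study's actual preprocessing or tuned parameters.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import Pipeline
from sklearn.model_selection import cross_val_score

# placeholder corpus: news snippets labelled with the competitive action they entail
texts = ["Automaker X cuts prices on its SUV line",
         "Automaker X announces a rebate on sedans",
         "Brand Y launches a new electric model",
         "Brand Y unveils a compact crossover",
         "Supplier Z expands its plant capacity",
         "Supplier Z opens a second assembly plant"]
labels = ["pricing", "pricing", "new_product", "new_product", "capacity", "capacity"]

lr_pipe = Pipeline([("tfidf", TfidfVectorizer(ngram_range=(1, 2))),
                    ("clf", LogisticRegression(max_iter=1000))])
ann_pipe = Pipeline([("tfidf", TfidfVectorizer()),
                     ("clf", MLPClassifier(hidden_layer_sizes=(64,),
                                           max_iter=2000, random_state=0))])

for name, pipe in [("LR", lr_pipe), ("ANN", ann_pipe)]:
    scores = cross_val_score(pipe, texts, labels, cv=2)
    print(name, "mean accuracy:", scores.mean())
```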

Keywords: Artificial Neural network, Competitive dynamics, Logistic Regression, Text classification, Text mining

Procedia PDF Downloads 122
4339 Forecasting Stock Prices Based on the Residual Income Valuation Model: Evidence from a Time-Series Approach

Authors: Chen-Yin Kuo, Yung-Hsin Lee

Abstract:

Previous studies applying the residual income valuation (RIV) model generally use panel data and single-equation models to forecast stock prices. Unlike these, this paper uses Taiwanese longitudinal data to estimate multi-equation time-series models, such as the vector autoregressive (VAR) model and the vector error correction model (VECM), and conducts out-of-sample forecasting. Further, this work assesses their forecasting performance with two instruments. Consistent with extant research, the major finding shows that the VECM outperforms the other three models in forecasting for three stock sectors over all horizons. This implies that an error correction term containing long-run information contributes to improved forecasting accuracy. Moreover, the composite pattern shows that, at longer horizons, the VECM produces the greater reduction in errors and performs substantially better than the VAR.
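
A minimal sketch of fitting a VECM and producing out-of-sample forecasts with statsmodels is shown below; the three RIV-style series, lag order, cointegration rank, and deterministic term are placeholders and assumptions, not the paper's specification.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import VECM

# placeholder data: price, book value, and residual-income style series
rng = np.random.default_rng(3)
n = 120
common = np.cumsum(rng.normal(size=n))               # shared stochastic trend
data = pd.DataFrame({
    "price": common + rng.normal(scale=0.5, size=n),
    "bv": 0.8 * common + rng.normal(scale=0.5, size=n),
    "ri": rng.normal(scale=0.3, size=n),
})

train, test = data.iloc[:108], data.iloc[108:]       # hold out 12 periods
model = VECM(train, k_ar_diff=2, coint_rank=1, deterministic="co")
res = model.fit()
forecast = res.predict(steps=len(test))              # out-of-sample forecast
rmse = np.sqrt(((forecast[:, 0] - test["price"].values) ** 2).mean())
print("price RMSE over the hold-out horizon:", round(rmse, 3))
```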

Keywords: residual income valuation model, vector error correction model, out of sample forecasting, forecasting accuracy

Procedia PDF Downloads 318
4338 Amharic Text News Classification Using Supervised Learning

Authors: Misrak Assefa

Abstract:

The Amharic language is the second most widely spoken Semitic language in the world, and the web is overloaded with news written in it. Searching for useful documents on a specific topic written in the Amharic language is a challenging task; hence, document categorization is required for managing and filtering important information. In the classification of Amharic text news, there is still a gap in this domain that needs to be addressed. This study attempts to design an automatic Amharic news classifier using a supervised learning mechanism on four previously untouched classes. To achieve this, 4,182 news articles were used. Naive Bayes (NB) and decision tree (J48) algorithms were used to classify the given Amharic dataset. In this paper, k-fold cross-validation is used to estimate the accuracy of the classifiers. The results show that these algorithms are applicable to Amharic news categorization. The best average accuracy results, achieved by the J48 decision tree and naïve Bayes, are 95.2345% and 94.6245%, respectively, using three categories. This research indicates that a typical decision tree algorithm is more applicable to Amharic news categorization.
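
A minimal sketch of the evaluation setup (bag-of-words features, naive Bayes versus a decision tree, k-fold cross-validation) is shown below. The documents are English placeholders standing in for Amharic headlines, and scikit-learn's CART decision tree is used as a stand-in for Weka's J48.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.pipeline import Pipeline
from sklearn.model_selection import cross_val_score

# placeholder headlines with news categories
docs = ["sport team wins the national cup", "parliament passes new budget",
        "bank reports quarterly profit", "coach names squad for qualifier",
        "minister discusses economic reform", "stock exchange opens trading"]
labels = ["sport", "politics", "business", "sport", "politics", "business"]

for name, clf in [("Naive Bayes", MultinomialNB()),
                  ("Decision tree (CART, J48 stand-in)", DecisionTreeClassifier(random_state=0))]:
    pipe = Pipeline([("bow", CountVectorizer()), ("clf", clf)])
    scores = cross_val_score(pipe, docs, labels, cv=2)   # k-fold cross-validation
    print(name, "mean accuracy:", scores.mean())
```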

Keywords: text categorization, supervised machine learning, naive Bayes, decision tree

Procedia PDF Downloads 212
4337 A Simple and Easy-To-Use Tool for Detecting Outer Contour of Leukocytes Based on Image Processing Techniques

Authors: Retno Supriyanti, Best Leader Nababan, Yogi Ramadhani, Wahyu Siswandari

Abstract:

Blood cell morphology is an important parameter in a hematology test. Currently, in developing countries, much hematology testing is done manually, either by physicians or by laboratory staff. Owing to the limitations of the human eye, examination based on the manual method results in lower precision and accuracy. In addition, manual hematology testing further complicates diagnosis in areas that do not have competent medical personnel. This research aims to develop a simple computer-based tool for the detection of blood cell morphology. In this paper, we focus on the detection of the outer contour of leukocytes. The results show that the system we developed is promising for detecting blood cell morphology automatically. It is expected that, by implementing this method, the problems of accuracy, precision and the limitations of medical staff can be addressed.
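
A minimal sketch of outer-contour detection with basic morphology operations is given below. The thresholding strategy (Otsu on the saturation channel), kernel size, and area filter are assumptions for illustration, not the authors' exact pipeline.

```python
import cv2
import numpy as np

def leukocyte_contours(bgr_image, min_area=200):
    """Detect outer contours of leukocytes in a stained blood-smear image."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    sat = hsv[:, :, 1]                        # leukocyte nuclei stain strongly
    _, mask = cv2.threshold(sat, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [c for c in contours if cv2.contourArea(c) > min_area]

# usage: img = cv2.imread("smear.png"); outlines = leukocyte_contours(img)
# cv2.drawContours(img, outlines, -1, (0, 255, 0), 2)
```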

Keywords: morphology operation, developing countries, hematology test, limitation of medical personnel

Procedia PDF Downloads 341
4336 Margin-Based Feed-Forward Neural Network Classifiers

Authors: Xiaohan Bookman, Xiaoyan Zhu

Abstract:

The margin-based principle was proposed a long time ago, and it has been proved that this principle can reduce the structural risk and improve performance in both theoretical and practical respects. Meanwhile, the feed-forward neural network is a traditional classifier that is currently very popular with deeper architectures. However, the training algorithm of the feed-forward neural network is derived from the Widrow-Hoff principle, which aims to minimize the squared error. In this paper, we propose a new training algorithm for feed-forward neural networks based on the margin-based principle, which can effectively promote the accuracy and generalization ability of neural network classifiers with fewer labeled samples and a flexible network. We have conducted experiments on four UCI open data sets and achieved good results as expected. In conclusion, our model can handle sparsely labeled, high-dimensional data sets with high accuracy, while the modification from the old ANN method to our method is easy and almost free of extra work.
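
A minimal sketch of the general idea (training a small feed-forward network with a multi-class margin/hinge loss instead of a squared-error criterion) is given below; the network size and synthetic data are placeholders, and the loss choice is an illustrative stand-in rather than the paper's exact objective.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(256, 20)                 # 256 samples, 20 features (placeholder)
y = torch.randint(0, 3, (256,))          # 3 classes

net = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 3))
criterion = nn.MultiMarginLoss(margin=1.0)       # margin-based objective
optimizer = torch.optim.Adam(net.parameters(), lr=1e-2)

for epoch in range(200):
    optimizer.zero_grad()
    loss = criterion(net(X), y)
    loss.backward()
    optimizer.step()

accuracy = (net(X).argmax(dim=1) == y).float().mean()
print("training accuracy:", float(accuracy))
```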

Keywords: Max-Margin Principle, Feed-Forward Neural Network, classifier, structural risk

Procedia PDF Downloads 347
4335 Computer-Aided Diagnosis of Eyelid Skin Tumors Using Machine Learning

Authors: Ofira Zloto, Ofir Fogel, Eyal Klang

Abstract:

Purpose: The aim is to develop an automated framework based on machine learning to diagnose malignant eyelid skin tumors. Methods: This study utilized eyelid lesion images from Sheba Medical Center, a large tertiary center in Israel. Before model training, we pre-trained our models on the ISIC 2019 dataset consisting of 25,332 images. The proprietary eyelid dataset was then used for fine-tuning. The dataset contained multiple images per patient, with the aim of classifying malignant lesions against their benign counterparts. Results: The analyzed dataset consisted of images representing both benign and malignant eyelid lesions. For the benign category, a total of 373 images were sourced; in comparison, the malignant category had 186 images. Based on the accuracy values, the model with 3 epochs and a learning rate of 0.0001 exhibited the best performance, achieving an accuracy of 0.748 with a standard deviation of 0.034. At a sensitivity of 69%, the model had a corresponding specificity of 82%. To further understand the decision-making process of our model, we employed heatmap visualization techniques, specifically Gradient-weighted Class Activation Mapping. Discussion: This study introduces a dependable model-aided diagnostic technology for assessing eyelid skin lesions. The model demonstrated accuracy comparable to human evaluation, effectively determining whether a lesion raises a high suspicion of malignancy or is benign. Such a model has the potential to alleviate the burden on the healthcare system, particularly benefiting rural areas and enhancing the efficiency of clinicians and overall healthcare.
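
A fine-tuning sketch matching the reported recipe (3 epochs, learning rate 0.0001) is shown below. The ResNet-18 backbone, the ImageNet weights standing in for ISIC-pretrained weights, and the data loader are all assumptions; the paper does not specify these components.

```python
import torch
import torch.nn as nn
from torchvision import models

# backbone pre-trained elsewhere (ImageNet weights used here as a stand-in
# for weights learned on ISIC 2019), head replaced for benign vs. malignant
backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
backbone.fc = nn.Linear(backbone.fc.in_features, 2)

optimizer = torch.optim.Adam(backbone.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def fine_tune(model, loader, epochs=3):
    model.train()
    for _ in range(epochs):
        for images, labels in loader:     # loader yields (B, 3, 224, 224) tensors
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
    return model

# fine_tune(backbone, eyelid_loader)   # eyelid_loader: hypothetical DataLoader
```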

Keywords: machine learning, eyelid skin tumors, decision-making process, heatmap visualization techniques

Procedia PDF Downloads 7
4334 Adhesive Bonded Joints Characterization and Crack Propagation in Composite Materials under Cyclic Impact Fatigue and Constant Amplitude Fatigue Loadings

Authors: Andres Bautista, Alicia Porras, Juan P. Casas, Maribel Silva

Abstract:

The Colombian aeronautical industry has stimulated research into the mechanical behavior of materials under the different loading conditions to which aircraft are generally exposed during operation. The Calima T-90 is the first military aircraft built in the country and is used for the primary flight training of Colombian Air Force pilots; therefore, it may be exposed to adverse operating situations, such as hard landings, which cause impact loads on the aircraft that might produce the impact fatigue phenomenon. The Calima T-90 structure is mainly manufactured from composite materials, forming assemblies and subassemblies of its different components. The main method of bonding these components is the use of adhesive joints. Each type of adhesive bond must be studied on its own, since its performance depends on the conditions of the manufacturing process and the operating characteristics. This study aims to characterize the typical adhesive joints of the aircraft under usual loads. To this end, the effect of adhesive thickness on the mechanical performance of the joint will be evaluated under quasi-static loading conditions, constant amplitude fatigue and cyclic impact fatigue using single lap-joint specimens. Additionally, using a double cantilever beam specimen, the influence of the adhesive thickness on the crack growth rate for mode I delamination failure, as a function of the critical energy release rate, will be determined. Finally, an analysis of the fracture surfaces of the test specimens, considering the mechanical interaction between the substrate (composite) and the adhesive, provides insight into the magnitude of the damage, the type of failure mechanism that occurs, and its correlation with the way the crack propagates under the proposed loading conditions.

Keywords: adhesive, composites, crack propagation, fatigue

Procedia PDF Downloads 206
4333 WebAppShield: An Approach Exploiting Machine Learning to Detect SQLi Attacks in an Application Layer in Run-time

Authors: Ahmed Abdulla Ashlam, Atta Badii, Frederic Stahl

Abstract:

In recent years, SQL injection attacks have been identified as being prevalent against web applications. They affect network security and user data, which leads to a considerable loss of money and data every year. This paper presents the use of classification algorithms in machine learning with a method that classifies login data by filtering inputs into "SQLi" or "Non-SQLi", thus increasing the reliability and accuracy of the results in terms of deciding whether an operation is an attack or a valid operation. A method based on web-app auto-generated twin data structure replication, shielding against SQLi attacks (WebAppShield), has been developed; it verifies all users and prevents attackers (SQLi attacks) from entering or accessing the database, admitting only operations that the machine learning module predicts as "Non-SQLi". A special login form has been developed with a special instance of data validation; this verification process secures the web application from its early stages. The system has been tested and validated, and up to 99% of SQLi attacks have been prevented.
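
A toy sketch of the classification step alone is shown below: character n-gram TF-IDF features capture SQL meta-characters and keywords, and a logistic regression labels inputs "SQLi" or "Non-SQLi". This illustrates only the machine learning module; the WebAppShield twin data structure replication mechanism is not modeled, and the training strings are placeholders.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# toy login inputs: benign usernames vs. classic injection payloads
inputs = ["alice", "bob_smith", "maria.lopez", "john99",
          "' OR '1'='1", "admin'--", "1; DROP TABLE users;--",
          "' UNION SELECT password FROM users--"]
labels = ["Non-SQLi"] * 4 + ["SQLi"] * 4

clf = Pipeline([("tfidf", TfidfVectorizer(analyzer="char", ngram_range=(2, 4))),
                ("lr", LogisticRegression(max_iter=1000))])
clf.fit(inputs, labels)

print(clf.predict(["carol", "x' OR 1=1 --"]))   # expected: Non-SQLi, SQLi
```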

Keywords: SQL injection, attacks, web application, accuracy, database

Procedia PDF Downloads 153
4332 Cognitive Methods for Detecting Deception During the Criminal Investigation Process

Authors: Laid Fekih

Abstract:

Background: It is difficult to detect lying, deception, and misrepresentation just by looking at verbal or non-verbal expression during the criminal investigation process, although there is a common belief that it is possible to tell whether a person is lying or telling the truth just by looking at the way they act or behave. The process of detecting lies and deception during the criminal investigation process needs more studies and research to overcome the difficulties facing investigators. Method: The present study aimed to identify the effectiveness of cognitive methods and techniques in detecting deception during the criminal investigation. It adopted a quasi-experimental method and covered a sample of (20) defendants distributed randomly into two homogeneous groups: an experimental group of (10) defendants subjected to criminal investigation with cognitive deception detection techniques applied, and a second group of (10) defendants subjected to the direct investigation method. The tool used was a guided interview based on models of investigative questions following the cognitive deception detection approach, which consists of three of Vrij's techniques: imposing cognitive load, encouraging the interviewee to provide more information, and asking unexpected questions, as well as the direct investigation method. Results: The results revealed a significant difference between the two groups in terms of lie detection accuracy in favour of the defendants subjected to criminal investigation with the cognitive techniques; the cognitive deception detection approach produced superior total accuracy rates both with human observers and through an analysis of objective criteria. The cognitive deception detection approach produced superior accuracy results in truth detection (71%) and deception detection (70%) compared to the direct investigation method (truth detection: 52%; deception detection: 49%). Conclusion: The study recommends that if practitioners use a cognitive deception detection technique, they will correctly classify more individuals than when they use a direct investigation method.

Keywords: the cognitive lie detection approach, deception, criminal investigation, mental health

Procedia PDF Downloads 68
4331 Predicting Wealth Status of Households Using Ensemble Machine Learning Algorithms

Authors: Habtamu Ayenew Asegie

Abstract:

Wealth, as opposed to income or consumption, implies a more stable and permanent status. Due to natural and human-made difficulties, household economies can be diminished and their well-being can fall into trouble. Hence, governments and humanitarian agencies devote considerable resources to poverty and malnutrition reduction efforts. One key factor in the effectiveness of such efforts is the accuracy with which low-income or poor populations can be identified. As a result, this study aims to predict a household's wealth status using ensemble machine learning (ML) algorithms. In this study, the design science research methodology (DSRM) is employed, and four ML algorithms, Random Forest (RF), Adaptive Boosting (AdaBoost), Light Gradient Boosted Machine (LightGBM), and Extreme Gradient Boosting (XGBoost), have been used to train models. The Ethiopian Demographic and Health Survey (EDHS) dataset was accessed for this purpose from the Central Statistical Agency (CSA)'s database. Various data pre-processing techniques were employed, and the model training was conducted using scikit-learn Python library functions. Model evaluation was executed using metrics such as accuracy, precision, recall, F1-score, the area under the receiver operating characteristic curve (AUC-ROC), and subjective evaluations by domain experts. An optimal subset of hyper-parameters for the algorithms was selected through the grid search function for the best prediction. The RF model performed better than the rest of the algorithms, achieving an accuracy of 96.06%, and is better suited as a solution model for our purpose. Following RF, the LightGBM, XGBoost, and AdaBoost algorithms achieved accuracies of 91.53%, 88.44%, and 58.55%, respectively. The findings suggest that features such as 'Age of household head', 'Total children ever born' in a family, 'Main roof material' of their house, 'Region' they live in, whether a household uses 'Electricity' or not, and 'Type of toilet facility' of a household are determinant factors and should be a focal point for economic policymakers. The determinant risk factors, extracted rules, and designed artifact achieved 82.28% in the domain experts' evaluation. Overall, the study shows that ML techniques are effective in predicting the wealth status of households.
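
A minimal sketch of the best-performing setup (Random Forest with grid-searched hyper-parameters) is shown below. The feature matrix is a placeholder standing in for the encoded EDHS variables named above, the binary target is assumed, and the parameter grid is illustrative rather than the study's actual search space.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(4)
X = rng.normal(size=(1000, 6))     # age of head, children ever born, roof material,
                                   # region, electricity, toilet facility (encoded)
y = rng.integers(0, 2, size=1000)  # 0 = poor, 1 = non-poor (assumed binary target)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
grid = GridSearchCV(RandomForestClassifier(random_state=0),
                    param_grid={"n_estimators": [100, 300],
                                "max_depth": [None, 10, 20]},
                    scoring="accuracy", cv=5)
grid.fit(X_tr, y_tr)
print("best parameters:", grid.best_params_)
print(classification_report(y_te, grid.best_estimator_.predict(X_te)))
```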

Keywords: ensemble machine learning, households wealth status, predictive model, wealth status prediction

Procedia PDF Downloads 43
4330 Detection of Powdery Mildew Disease in Strawberry Using Image Texture and Supervised Classifiers

Authors: Sultan Mahmud, Qamar Zaman, Travis Esau, Young Chang

Abstract:

Strawberry powdery mildew (PM) is a serious disease that has a significant impact on strawberry production. Field scouting is still a major way to find PM disease, which is not only labor-intensive but also makes it almost impossible to monitor disease severity. To reduce the loss caused by PM disease and achieve faster automatic detection of the disease, this paper proposes an approach for detecting the disease based on image texture, classified with support vector machines (SVMs) and k-nearest neighbors (kNNs). The methodology of the proposed study is based on image processing and is composed of five main steps: image acquisition, pre-processing, segmentation, feature extraction and classification. Two strawberry fields were used in this study, and images of healthy leaves and leaves infected with PM (Sphaerotheca macularis) disease were acquired under artificial cloud lighting conditions. Colour thresholding was utilized to segment all images before textural analysis. The colour co-occurrence matrix (CCM) was introduced for the extraction of textural features. Forty textural features, related to physiological parameters of the leaves, were extracted from the CCM of the National Television System Committee (NTSC) luminance and the hue, saturation and intensity (HSI) images. The normalized feature data were utilized for training and validation using the developed classifiers. The classifiers were tested with internal, external and cross-validation. The best classifier was selected based on performance and accuracy. Experimental results suggested that the SVM classifier showed 98.33%, 85.33%, 87.33%, 93.33% and 95.0% accuracy on internal, external-I, external-II, 4-fold cross and 5-fold cross-validation, respectively, whereas the kNN results represented 90.0%, 72.00%, 74.66%, 89.33% and 90.3% classification accuracy, respectively. The outcome of this study demonstrated that the SVM classified PM disease with the highest overall accuracy of 91.86% and a processing time of 1.1211 seconds. Therefore, the overall results conclude that the proposed study can significantly support accurate and automatic identification and recognition of strawberry PM disease with the SVM classifier.
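
A simplified sketch of the texture-plus-classifier idea is given below, using scikit-image's gray-level co-occurrence matrix as a stand-in for the full 40-feature CCM and training SVM and kNN on synthetic patches; the feature subset, distances/angles, and data are illustrative assumptions.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops   # scikit-image >= 0.19
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

def ccm_features(channel_8bit):
    """A few co-occurrence texture features from one image channel
    (a simplified stand-in for the 40 CCM features used in the paper)."""
    glcm = graycomatrix(channel_8bit, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    return np.hstack([graycoprops(glcm, p).ravel()
                      for p in ("contrast", "homogeneity", "energy", "correlation")])

# placeholder "leaf" patches: noisy (infected-like) vs. smooth (healthy-like)
rng = np.random.default_rng(5)
infected = [rng.integers(0, 256, (64, 64), dtype=np.uint8) for _ in range(10)]
healthy = [np.full((64, 64), 120, dtype=np.uint8)
           + rng.integers(0, 5, (64, 64), dtype=np.uint8) for _ in range(10)]
X = np.array([ccm_features(p) for p in infected + healthy])
y = np.array([1] * 10 + [0] * 10)

for clf in (SVC(kernel="rbf"), KNeighborsClassifier(n_neighbors=3)):
    print(type(clf).__name__, clf.fit(X, y).score(X, y))
```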

Keywords: powdery mildew, image processing, textural analysis, color co-occurrence matrix, support vector machines, k-nearest neighbors

Procedia PDF Downloads 122
4329 Evaluating Factors Affecting Audiologists’ Diagnostic Performance in Auditory Brainstem Response Reading: Training and Experience

Authors: M. Zaitoun, S. Cumming, A. Purcell

Abstract:

This study aims to determine whether audiologists' experience characteristics in ABR (Auditory Brainstem Response) reading are associated with their performance in interpreting ABR results. Fifteen ABR traces with varying degrees of hearing level were presented twice, making a total of 30. Audiologists were asked to determine the hearing threshold for each of the cases after completing a brief survey regarding their experience and training in ABR administration. Sixty-one audiologists completed all tasks. Correlations between audiologists' performance measures and experience variables suggested significant associations (p < 0.05) between the training period in ABR testing and audiologists' performance in terms of both sensitivity and accuracy. In addition, the number of years conducting ABR testing correlated with specificity. No other correlations approached significance. While there are relatively few significant correlations between ABR performance and experience, accuracy in ABR reading is associated with audiologists' length of experience and period of training. To improve audiologists' performance in reading ABR results, the importance of training should be emphasized, and standardized levels and periods of training in ABR testing should be set.

Keywords: ABR, audiology, performance, training, experience

Procedia PDF Downloads 167
4328 Structural Equation Modeling Semiparametric in Modeling the Accuracy of Payment Time for Customers of Credit Bank in Indonesia

Authors: Adji Achmad Rinaldo Fernandes

Abstract:

The research was conducted to apply semiparametric SEM modeling to the timeliness of credit payments. Semiparametric SEM is structural modeling in which parametric and nonparametric approaches are combined. The analysis method in this research is semiparametric SEM with a nonparametric approach using a truncated spline. The data in the study were obtained through questionnaires distributed to Bank X mortgage debtors and are confidential. The study used 3 variables consisting of one exogenous variable, one intervening endogenous variable, and one endogenous variable. The results showed that (1) the effect of the capacity and willingness-to-pay variables on timeliness of payment is significant, (2) modeling the capacity variable on willingness to pay also produces a significant estimate, (3) the effect of the capacity variable on the timeliness-of-payment variable is not influenced by the willingness-to-pay variable as an intervening variable, and (4) the R² value of 0.763, or 76.33%, indicates that the model has good predictive relevance.
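
The nonparametric piece of such a model can be illustrated with a truncated power spline basis, as in the minimal sketch below: a design matrix [1, x, (x - k)+] is built and fitted by least squares. The knot location and the synthetic capacity/willingness-to-pay data are assumptions, and the full structural model is not shown.

```python
import numpy as np

def truncated_linear_spline_basis(x, knots):
    """Design matrix [1, x, (x - k1)_+, (x - k2)_+, ...] for a linear
    truncated spline, the nonparametric component of the semiparametric SEM."""
    cols = [np.ones_like(x), x]
    cols += [np.clip(x - k, 0, None) for k in knots]
    return np.column_stack(cols)

# illustrative fit: capacity score (x) -> willingness-to-pay score (y)
rng = np.random.default_rng(6)
x = np.sort(rng.uniform(0, 10, 200))
y = np.where(x < 5, 0.5 * x, 2.5 + 1.5 * (x - 5)) + rng.normal(0, 0.3, 200)

B = truncated_linear_spline_basis(x, knots=[5.0])    # knot location assumed
coef, *_ = np.linalg.lstsq(B, y, rcond=None)
print("fitted coefficients:", np.round(coef, 2))
```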

Keywords: structural equation modeling semiparametric, credit bank, accuracy of payment time, willingness to pay

Procedia PDF Downloads 48
4327 Flood-Prone Urban Area Mapping Using Machine Learning, a Case Study of M'sila City (Algeria)

Authors: Medjadj Tarek, Ghribi Hayet

Abstract:

This study aims to develop a flood sensitivity assessment tool using machine learning (ML) techniques and a geographic information system (GIS). The importance of this study lies in integrating geographic information systems (GIS) and machine learning (ML) techniques for mapping flood risks, which helps decision-makers identify the most vulnerable areas and take the necessary precautions to face this type of natural disaster. To reach this goal, we study the case of the city of M'sila, which is among the areas most vulnerable to floods. This study drew a map of flood-prone areas based on a methodology in which three machine learning algorithms were compared: the XGBoost model, the Random Forest algorithm and the k-Nearest Neighbour algorithm, which gave accuracies of 97.92, 95 and 93.75, respectively. In the process of mapping flood-prone areas, the model that gave the greatest accuracy (XGBoost) was relied upon.
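
A minimal sketch of that three-model comparison is given below; the per-cell conditioning factors and labels are synthetic placeholders for the real GIS layers, and the hyper-parameters are illustrative. The best model would then be applied cell by cell to produce the susceptibility map.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from xgboost import XGBClassifier   # pip install xgboost

# placeholder conditioning factors per grid cell (e.g. elevation, slope,
# distance to channel, land cover); labels: 1 = flooded, 0 = not flooded
rng = np.random.default_rng(7)
X = rng.normal(size=(2000, 5))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=2000) > 0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {"XGBoost": XGBClassifier(n_estimators=200, eval_metric="logloss"),
          "Random Forest": RandomForestClassifier(n_estimators=200, random_state=0),
          "KNN": KNeighborsClassifier(n_neighbors=7)}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name, accuracy_score(y_te, model.predict(X_te)))
```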

Keywords: Geographic information systems (GIS), machine learning (ML), emergency mapping, flood disaster management

Procedia PDF Downloads 96
4326 Machine Learning Driven Analysis of Kepler Objects of Interest to Identify Exoplanets

Authors: Akshat Kumar, Vidushi

Abstract:

This paper identifies 27 KOIs, 26 of which are currently classified as candidates and one as a false positive, that have a high probability of being confirmed. For this purpose, 11 machine learning algorithms were implemented on the cumulative Kepler dataset sourced from the NASA Exoplanet Archive; it was observed that the best-performing models were HistGradientBoosting and XGBoost, with a test accuracy of 93.5%, and the lowest-performing model was Gaussian NB, with a test accuracy of 54%. To assess model performance, the F1 score, cross-validation score and ROC curve were calculated. Based on the learned models, the significant characteristics of confirmed exoplanets were identified, putting emphasis on the object's transit and stellar properties; these characteristics were namely koi_count, koi_prad, koi_period, koi_dor, koi_ror, and koi_smass, which were later considered to filter out the potential KOIs. The paper also calculates the Earth Similarity Index, based on the planetary radius and equilibrium temperature, for each KOI identified to aid in their classification.
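
A sketch of a two-parameter Earth Similarity Index from radius and equilibrium temperature is given below. The Earth reference values and weight exponents follow the commonly used ESI parameterisation and are assumptions, since the paper does not list them.

```python
import numpy as np

def esi(radius_re, teq_k, ref=(1.0, 255.0), weights=(0.57, 5.58)):
    """Two-parameter Earth Similarity Index:
        ESI = prod_i (1 - |(x_i - x_i0)/(x_i + x_i0)|) ** (w_i / n)
    with radius in Earth radii and equilibrium temperature in kelvin.
    Reference values and weights are assumed, not taken from the paper."""
    values = np.array([radius_re, teq_k], dtype=float)
    ref = np.array(ref)
    w = np.array(weights)
    terms = (1.0 - np.abs((values - ref) / (values + ref))) ** (w / len(values))
    return float(np.prod(terms))

print(esi(1.0, 255.0))   # Earth itself -> 1.0
print(esi(2.3, 780.0))   # a hot, super-Earth-sized KOI -> much lower
```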

Keywords: Kepler objects of interest, exoplanets, space exploration, machine learning, earth similarity index, transit photometry

Procedia PDF Downloads 77
4325 Multiphase Equilibrium Characterization Model For Hydrate-Containing Systems Based On Trust-Region Method Non-Iterative Solving Approach

Authors: Zhuoran Li, Guan Qin

Abstract:

A robust and efficient compositional equilibrium characterization model for hydrate-containing systems is required, especially for time-critical simulations such as subsea pipeline flow assurance analysis, compositional simulation in hydrate reservoirs, etc. A multiphase flash calculation framework, which combines a Gibbs energy minimization function and the cubic-plus-association (CPA) EoS, is developed to describe the highly non-ideal phase behavior of hydrate-containing systems. A non-iterative eigenvalue problem-solving approach for the trust-region sub-problem is selected to guarantee efficiency. The developed flash model is based on the state-of-the-art objective function proposed by Michelsen to minimize the Gibbs energy of the multiphase system. It is conceivable that a hydrate-containing system always contains polar components (such as water and hydrate inhibitors), introducing hydrogen bonds that influence phase behavior. Thus, the cubic-plus-association (CPA) EoS is utilized to compute the thermodynamic parameters. The solid solution theory proposed by van der Waals and Platteeuw is applied to represent the hydrate phase parameters. The trust-region method, combined with the non-iterative eigenvalue problem-solving approach for the trust-region sub-problem, is utilized to ensure fast convergence. The developed multiphase flash model's accuracy is validated against three available models (one published and two commercial models). Hundreds of published equilibrium experimental data points for hydrate-containing systems were collected to act as the reference group for the accuracy test. The accuracy comparison shows that our model outperforms two of the models and has calculation accuracy comparable to CSMGem. Efficiency tests have also been carried out. Because the trust-region method can determine the optimization step's direction and size simultaneously, fast solution progress can be obtained. The comparison results show that fewer iterations are needed to optimize the objective function using trust-region methods than line search methods. The non-iterative eigenvalue problem approach also achieves faster computation than the conventional iterative solving algorithm for the trust-region sub-problem, further improving the calculation efficiency. A new thermodynamic framework for the multiphase flash model of hydrate-containing systems has been constructed in this work. Sensitivity analysis and numerical experiments have been carried out to prove the accuracy and efficiency of this model. Furthermore, based on the current thermodynamic models in the oil and gas industry, implementing this model is simple.
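
As a rough illustration of trust-region minimization only, the sketch below minimizes a toy strictly convex Gibbs-energy-like function with SciPy's 'trust-exact' solver, which handles the trust-region sub-problem internally. It is a generic stand-in: neither the paper's non-iterative eigenvalue sub-problem solver nor the CPA EoS and hydrate phase model is represented.

```python
import numpy as np
from scipy.optimize import minimize

def gibbs_like(x):
    # strictly convex surrogate of a mixture Gibbs energy in mole numbers
    return np.sum(x * np.log(np.clip(x, 1e-12, None))) + 0.5 * np.sum((x - 0.3) ** 2)

def grad(x):
    return np.log(np.clip(x, 1e-12, None)) + 1.0 + (x - 0.3)

def hess(x):
    return np.diag(1.0 / np.clip(x, 1e-12, None) + 1.0)

x0 = np.array([0.2, 0.5, 0.8])
res = minimize(gibbs_like, x0, method="trust-exact", jac=grad, hess=hess)
print(res.x, "iterations:", res.nit)
```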

Keywords: equation of state, hydrates, multiphase equilibrium, trust-region method

Procedia PDF Downloads 173
4324 Transient Response of Elastic Structures Subjected to a Fluid Medium

Authors: Helnaz Soltani, J. N. Reddy

Abstract:

The presence of a fluid medium interacting with a structure can lead to failure of the structure. Since developing efficient computational models for fluid-structure interaction (FSI) problems has a broad impact on realistic problems encountered in the aerospace industry, ship industry, oil and gas industry, and so on, there is an increasing need for methods to investigate the effect of the fluid domain on the structural response. A coupled finite element formulation of problems involving FSI is an accurate way to predict the response of structures in contact with a fluid medium. This study proposes a finite element approach for studying the transient response of structures interacting with a fluid medium. Since beams and plates are considered to be the fundamental elements of almost any structure, the developed method is applied to beam and plate benchmark problems in order to demonstrate its efficiency. The formulation is a combination of various structural theories and the solid-fluid interface boundary condition, which is used to represent the interaction between the solid and fluid regimes. Here, three different beam theories as well as three different plate theories are considered to model the solid medium, and the Navier-Stokes equations are used as the theoretical equations governing the fluid domain. For each theory, a coupled set of equations is derived, where the element matrices of both regimes are calculated by Gaussian quadrature integration. The main feature of the proposed methodology is to model the fluid domain as an added mass, i.e., an external distributed force due to the presence of the fluid. We validate the accuracy of this formulation by means of some numerical examples. Since the formulation presented in this study covers several theories in the literature, the applicability of our proposed approach is independent of any structure geometry. The effects of varying parameters such as the structure thickness ratio, fluid density and immersion depth are studied using numerical simulations. The results indicate that the maximum vertical deflection of the structure is affected considerably by the presence of a fluid medium.
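
The added-mass treatment can be sketched with a minimal Euler-Bernoulli cantilever model, as below: the fluid appears only as an extra mass per unit length added to the structural mass, which lowers the natural frequencies. The element matrices are the standard consistent beam FE matrices; the section, material, and fluid values are illustrative assumptions, and none of the higher-order beam/plate theories of the paper are included.

```python
import numpy as np
from scipy.linalg import eigh

E, I, rho, A, L, n = 210e9, 8.33e-9, 7850.0, 1e-3, 1.0, 20   # steel strip (assumed)
m_added = 5.0          # added fluid mass per unit length (kg/m), assumed

def beam_matrices(m_per_len):
    le = L / n
    k = E * I / le**3 * np.array([[12, 6*le, -12, 6*le],
                                  [6*le, 4*le**2, -6*le, 2*le**2],
                                  [-12, -6*le, 12, -6*le],
                                  [6*le, 2*le**2, -6*le, 4*le**2]])
    m = m_per_len * le / 420 * np.array([[156, 22*le, 54, -13*le],
                                         [22*le, 4*le**2, 13*le, -3*le**2],
                                         [54, 13*le, 156, -22*le],
                                         [-13*le, -3*le**2, -22*le, 4*le**2]])
    ndof = 2 * (n + 1)
    K, M = np.zeros((ndof, ndof)), np.zeros((ndof, ndof))
    for e in range(n):
        dofs = slice(2 * e, 2 * e + 4)
        K[dofs, dofs] += k
        M[dofs, dofs] += m
    return K[2:, 2:], M[2:, 2:]      # clamp the first node (w = theta = 0)

for label, m_len in [("dry", rho * A), ("wetted (added mass)", rho * A + m_added)]:
    K, M = beam_matrices(m_len)
    w2 = eigh(K, M, eigvals_only=True)[0]
    print(label, "first natural frequency:", np.sqrt(w2) / (2 * np.pi), "Hz")
```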

Keywords: beam and plate, finite element analysis, fluid-structure interaction, transient response

Procedia PDF Downloads 569
4323 Sildenafil Citrate (Viagra) Suppositories Are Promising Approach for Treatment of Unexplained Infertility

Authors: Shahinaz El-Shourbagy El-Shourbagy, Ahmed M. E Ossman Ossman, Ashraf El-Mohamady El-Mohamady

Abstract:

Objective: To investigate whether sildenafil citrate (Viagra) has a role in the treatment of couples with unexplained infertility. Design: An observational study. Setting: Infertility outpatient clinic of Tanta University Hospital, Egypt. Patient(s): 50 women with unexplained infertility {endometrial thickness (EM) and the mean resistance index (RI) were assessed} compared with a control group of 50 fertile women who attended for check-up in the same period and received no treatment. Intervention(s): Women with unexplained infertility were given 25 mg sildenafil citrate suppositories four times per day for seven days, starting from the 5th day of the menstrual cycle, for three cycles. Main Outcome Measures: EM and the RI of the endometrial spiral artery were assessed by transvaginal color-pulsed Doppler ultrasound in the women with unexplained infertility before and after sildenafil citrate treatment and compared with the controls. The conception rate and pregnancy outcome were recorded in the two groups. Result(s): Women with unexplained infertility had significantly thinner endometrium and a higher spiral artery resistance index, meaning lower peri-implantation blood flow, than the fertile controls. Sildenafil citrate-treated women showed a statistically significant increase in endometrial thickness (p < 0.001) and a significant decrease in the mean spiral artery resistance index (p < 0.001), giving a better conception rate. Conclusion: Treatment with sildenafil citrate suppositories enhances endometrial blood flow by decreasing the spiral artery resistance index (RI) and consequently improves endometrial growth and receptivity in cases of unexplained infertility, thus giving a better conception rate.

Keywords: unexplained infertility, endometrial blood flow, endometrial receptivity, color-pulsed Doppler ultrasound, RI (resistance index), sildenafil citrate (Viagra)

Procedia PDF Downloads 220
4322 Machine Learning Techniques in Bank Credit Analysis

Authors: Fernanda M. Assef, Maria Teresinha A. Steiner

Abstract:

The aim of this paper is to compare and discuss better classifier algorithm options for credit risk assessment by applying different machine learning techniques. Using records from a Brazilian financial institution, this study uses a database of 5,432 companies that are clients of the bank, where 2,600 clients are classified as non-defaulters, 1,551 are classified as defaulters and 1,281 are temporarily defaulters, meaning that the clients are overdue on their payments for up to 180 days. For each case, a total of 15 attributes was considered for a one-against-all assessment using four different techniques: Artificial Neural Networks Multilayer Perceptron (ANN-MLP), Artificial Neural Networks Radial Basis Functions (ANN-RBF), Logistic Regression (LR) and finally Support Vector Machines (SVM). For each method, different parameters were analyzed in order to obtain different results when the best of each technique was compared. Initially, the data were coded in thermometer code (for numerical attributes) or dummy coding (for nominal attributes). The methods were then evaluated for each parameter, and the best result of each technique was compared in terms of accuracy, false positives, false negatives, true positives and true negatives. This comparison showed that the best method, in terms of accuracy, was ANN-RBF (79.20% for non-defaulter classification, 97.74% for defaulters and 75.37% for the temporarily defaulter classification). However, the best accuracy does not always represent the best technique. For instance, in the classification of temporarily defaulters, this technique, in terms of false positives, was surpassed by SVM, which had the lowest rate (0.07%) of false positive classifications. All these intrinsic details are discussed considering the results found, and an overview of what was presented is given in the conclusion of this study.
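
A minimal sketch of the one-against-all comparison is given below, with a feature matrix standing in for the 15 coded attributes and class proportions matching those reported. LR, SVM, and an MLP are included; scikit-learn has no direct ANN-RBF counterpart, so that model is omitted, and all parameters are illustrative.

```python
import numpy as np
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(8)
X = rng.normal(size=(5432, 15))          # placeholder for the 15 coded attributes
y = rng.choice(["non-defaulter", "defaulter", "temporarily defaulter"],
               size=5432, p=[0.48, 0.285, 0.235])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

models = {"LR": LogisticRegression(max_iter=1000),
          "SVM": SVC(kernel="rbf"),
          "ANN-MLP": MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)}
for name, base in models.items():
    clf = make_pipeline(StandardScaler(), OneVsRestClassifier(base))
    clf.fit(X_tr, y_tr)
    print(name)
    print(confusion_matrix(y_te, clf.predict(X_te)))   # true/false positives and negatives
```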

Keywords: artificial neural networks (ANNs), classifier algorithms, credit risk assessment, logistic regression, machine learning, support vector machines

Procedia PDF Downloads 104