Search results for: dimensional accuracy

5385 Evaluating Accuracy of Foetal Weight Estimation by Clinicians in Christian Medical College Hospital, India and Its Correlation to Actual Birth Weight: A Clinical Audit

Authors: Aarati Susan Mathew, Radhika Narendra Patel, Jiji Mathew

Abstract:

A retrospective study was conducted at Christian Medical College (CMC) Teaching Hospital, Vellore, India, on 14th August 2014 to assess the accuracy of clinically estimated foetal weight upon labour admission. Estimating foetal weight is a crucial factor in assessing maternal and foetal complications during and after labour. Medical notes of ninety-eight postnatal women who fulfilled the inclusion criteria were studied to evaluate the correlation between their recorded Estimated Foetal Weight (EFW) on admission and the actual birth weight (ABW) of the newborn after delivery. Data concerning maternal and foetal demographics were also noted. Accuracy was determined by the absolute percentage error and the proportion of estimates within 10% of ABW. Actual birth weights ranged from 950-4080 g. A strong positive correlation between EFW and ABW (r = 0.904) was noted. Term deliveries (≥40 weeks) in the normal weight range (2500-4000 g) had a 59.5% estimation accuracy (n = 74), compared to pre-term deliveries (<40 weeks) with an estimation accuracy of 0% (n = 2). Among the term deliveries, macrosomic babies (>4000 g) were underestimated by 25% (n = 3) and low birthweight (LBW) babies were overestimated by 12.7% (n = 9). Registrars who estimated foetal weight were accurate for babies within normal weight ranges. However, there needs to be an improvement in predicting the weight of macrosomic and LBW foetuses. We suggest the use of an amended version of Johnson's formula for the Indian population and a re-audit once it is implemented.
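
As a rough illustration of the audit's accuracy metrics, the sketch below computes the absolute percentage error and the within-10% criterion, with Johnson's formula in its commonly cited form; the formula's constants are an assumption here, not taken from the audit (which proposes an amended version):

```python
import numpy as np

def johnson_efw(fundal_height_cm, n=12):
    # Commonly cited Johnson's formula (assumed form): EFW in grams,
    # with n typically 12 when the vertex is above the ischial spines.
    return (fundal_height_cm - n) * 155

def accuracy_metrics(efw, abw):
    efw, abw = np.asarray(efw, float), np.asarray(abw, float)
    ape = np.abs(efw - abw) / abw * 100       # absolute percentage error
    within_10 = (ape <= 10).mean() * 100      # % of estimates within 10% of ABW
    return ape.mean(), within_10

# Hypothetical example values, for illustration only
print(accuracy_metrics([3100, 2400, 3900], [3000, 2700, 4100]))
```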

Keywords: clinical palpation, estimated foetal weight, pregnancy, India, Johnson’s formula

Procedia PDF Downloads 368
5384 Adaptation of Hough Transform Algorithm for Text Document Skew Angle Detection

Authors: Kayode A. Olaniyi, Olabanji F. Omotoye, Adeola A. Ogunleye

Abstract:

Skew detection and correction form an important part of digital document analysis, because uncompensated skew can deteriorate document features and complicate further document image processing steps. Efficient text document analysis and digitization can rarely be achieved when a document is skewed, even at a small angle. Once documents have been digitized through the scanning system and binarization has been achieved, document skew correction is required before further image analysis. Research efforts have been put into this area, with algorithms developed to eliminate document skew. Skew angle correction algorithms can be compared on several performance criteria, the most important being the accuracy of skew angle detection, the range of detectable skew angles, the processing speed, the computational complexity and, consequently, the memory space used. The standard Hough Transform has successfully been applied to text document skew angle estimation. However, the accuracy of the standard Hough Transform depends largely on how fine the angle step size is; higher accuracy consequently consumes more time and memory, especially where the number of pixels is considerably large. Whenever the Hough transform is used, there is always a trade-off between accuracy and speed, so a more efficient solution is needed that optimizes space as well as time. In this paper, an improved Hough transform (HT) technique that optimizes space as well as time to robustly detect document skew is presented. The modified Hough Transform algorithm resolves the contradiction between memory space, running time and accuracy. Our algorithm starts with an angle estimate accurate to zero decimal places using the standard Hough Transform, achieving minimal running time and space but limited accuracy. To increase accuracy, if the angle estimated by the basic algorithm is x degrees, we run the basic algorithm again over the range around ±x degrees with an accuracy of one decimal place. The same process is iterated until the desired level of accuracy is achieved. Our skew estimation and correction procedure for text images is implemented in MATLAB. The memory space and processing time are also tabulated under the assumption of a skew angle between 0° and 45°. The simulation results, demonstrated in MATLAB, show the high performance of our algorithm, with less computational time and memory space used in detecting document skew for a variety of documents with different levels of complexity.
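
A minimal coarse-to-fine sketch of this idea in Python, using scikit-image's hough_line; the refinement schedule and peak selection are illustrative assumptions, not the paper's MATLAB implementation:

```python
import numpy as np
from skimage.transform import hough_line

def estimate_skew(binary_img, decimals=2):
    """Coarse-to-fine Hough skew estimate: start with a 1-degree step over
    0..45 degrees, then repeatedly re-search a narrow band around the current
    estimate with a 10x finer step."""
    center, half_range, step = 22.5, 22.5, 1.0
    for _ in range(decimals + 1):
        thetas = np.deg2rad(np.arange(center - half_range,
                                      center + half_range + step, step))
        hspace, angles, _ = hough_line(binary_img, theta=thetas)
        _, best = np.unravel_index(np.argmax(hspace), hspace.shape)
        center = np.rad2deg(angles[best])     # dominant Hough angle
        half_range, step = step, step / 10.0  # refine around the estimate
    # Converting the dominant line angle to page skew may need a 90-degree
    # offset depending on the Hough angle convention used.
    return center
```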

Keywords: hough-transform, skew-detection, skew-angle, skew-correction, text-document

Procedia PDF Downloads 164
5383 Examining the Development of Complexity, Accuracy and Fluency in L2 Learners' Writing after L2 Instruction

Authors: Khaled Barkaoui

Abstract:

Research on second-language (L2) learning tends to compare students at different levels of proficiency at one point in time. However, to understand L2 development, we need more longitudinal research. In this study, we adopt a longitudinal approach to examine changes in three indicators of L2 ability, complexity, accuracy, and fluency (CAF), as reflected in the writing of L2 learners on different tasks before and after a period of L2 instruction. Each of 85 Chinese learners of English at three levels of English language proficiency responded to two writing tasks (independent and integrated) before and after nine months of English-language study in China. Each essay (N = 276) was analyzed in terms of numerous CAF indices using both computer coding and human rating: number of words written, number of errors per 100 words, ratings of error severity, global syntactic complexity (MLS), complexity by coordination (T/S), complexity by subordination (C/T), clausal complexity (MLC), phrasal complexity (NP density), syntactic variety, lexical density, lexical variation, lexical sophistication, and lexical bundles. Results were then compared statistically across tasks, L2 proficiency levels, and time. Overall, task type had significant effects on fluency, on some syntactic complexity indices (complexity by coordination, structural variety, clausal complexity, phrasal complexity), and on lexical density, sophistication, and bundles, but not on accuracy. L2 proficiency had significant effects on fluency, accuracy, and lexical variation, but not on syntactic complexity. Finally, fluency and frequency of errors (but not accuracy ratings), syntactic complexity indices (clausal complexity, global complexity, complexity by subordination, phrasal complexity, structural variety), and lexical complexity (lexical density, variation, and sophistication) exhibited significant changes after instruction, particularly for the independent task. We discuss the findings and their implications for assessment, instruction, and research on CAF in the context of L2 writing.
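
For concreteness, a toy sketch of a few of the simpler indices named above; real CAF coding relies on parsed T-units and clauses, which this deliberately omits:

```python
import re

def simple_caf_indices(text):
    """Toy approximations: fluency = word count, MLS = mean length of
    sentence, lexical variation = type-token ratio. The sentence and word
    splitting here is naive by design."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text.lower())
    return {
        "fluency_words": len(words),
        "mls": len(words) / max(len(sentences), 1),
        "lexical_variation_ttr": len(set(words)) / max(len(words), 1),
    }

print(simple_caf_indices("The cat sat. It was happy, and it purred!"))
```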

Keywords: second language writing, fluency, accuracy, complexity, longitudinal

Procedia PDF Downloads 156
5382 The Use of the Matlab Software as the Best Way to Recognize Penumbra Region in Radiotherapy

Authors: Alireza Shayegan, Morteza Amirabadi

Abstract:

The γ (gamma) tool was developed to quantitatively compare dose distributions, either measured or calculated. Before computing γ, the dose and distance scales of the two distributions, referred to as evaluated and reference, are re-normalized by dose and distance criteria, respectively. The re-normalization allows the dose distribution comparison to be conducted simultaneously along the dose and distance axes. Several two-dimensional images were acquired using a Scanning Liquid Ionization Chamber EPID and Extended Dose Range (EDR2) films for regular and irregular radiation fields. The raw images were then converted into two-dimensional dose maps. Translational and rotational manipulations were performed on the images using MATLAB software. As evaluated dose distribution maps, they were then compared with the corresponding original dose maps as the reference dose maps.
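
A minimal 1-D NumPy sketch of the underlying γ-index computation, with 3%/3 mm criteria assumed; real evaluations are 2-D or 3-D, and the paper's implementation is in MATLAB:

```python
import numpy as np

def gamma_index_1d(ref_dose, eval_dose, positions_mm, dose_crit=0.03, dist_crit=3.0):
    """For each reference point, gamma is the minimum over evaluated points of
    sqrt((distance/DTA)^2 + (dose difference/dose criterion)^2);
    gamma <= 1 means the point passes."""
    ref_dose = np.asarray(ref_dose, float)
    eval_dose = np.asarray(eval_dose, float)
    positions_mm = np.asarray(positions_mm, float)
    gamma = np.empty_like(ref_dose)
    for i, (x_r, d_r) in enumerate(zip(positions_mm, ref_dose)):
        dist_term = ((positions_mm - x_r) / dist_crit) ** 2
        dose_term = ((eval_dose - d_r) / (dose_crit * ref_dose.max())) ** 2
        gamma[i] = np.sqrt(dist_term + dose_term).min()
    return gamma
```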

Keywords: energetic electron, gamma function, penumbra, Matlab software

Procedia PDF Downloads 305
5381 Dissolved Oxygen Prediction Using Support Vector Machine

Authors: Sorayya Malek, Mogeeb Mosleh, Sharifah M. Syed

Abstract:

In this study, the Support Vector Machine (SVM) technique was applied to predict the dichotomized value of dissolved oxygen (DO) from two freshwater lakes in Malaysia, namely Chini Lake and Bera Lake. The data sample contained 11 water quality parameters recorded from 2005 to 2009. All parameters were used to predict the dissolved oxygen concentration, which was categorized into 3 different levels (High, Medium, and Low). The input parameters were ranked, and a forward selection method was applied to determine the optimum parameters that yield the lowest errors and highest accuracy. Initial results showed that pH, water temperature, and conductivity are the most important parameters that significantly affect the prediction of DO. An SVM model using the ANOVA kernel with those parameters then yielded a 74% accuracy rate. We conclude that using SVM models to predict DO is feasible, and that using categorized DO values yields higher prediction accuracy than using precise DO values.
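
scikit-learn has no built-in ANOVA kernel, but SVC accepts a custom kernel callable; a hedged sketch follows, in which the kernel hyperparameters and the synthetic data are illustrative assumptions:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

def anova_kernel(X, Y, sigma=1.0, d=2):
    # ANOVA kernel: (sum over features of exp(-sigma * (x_k - y_k)^2))^d
    diff = X[:, None, :] - Y[None, :, :]
    return np.exp(-sigma * diff ** 2).sum(axis=2) ** d

# Hypothetical stand-ins for the three selected features:
# pH, water temperature, conductivity
rng = np.random.default_rng(0)
X = rng.random((200, 3))
y = rng.choice(["Low", "Medium", "High"], size=200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = SVC(kernel=anova_kernel).fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te))
```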

Keywords: dissolved oxygen, water quality, DO prediction, support vector machine

Procedia PDF Downloads 294
5380 Image Reconstruction Method Based on L0 Norm

Authors: Jianhong Xiang, Hao Xiang, Linyu Wang

Abstract:

Compressed sensing (CS) has a wide range of applications in sparse signal reconstruction. To address the low recovery accuracy and long reconstruction times of existing reconstruction algorithms in medical imaging, this paper proposes a corrected smoothed L0 algorithm based on compressed sensing (CSL0). First, an approximate hyperbolic tangent function (AHTF) that more closely resembles the L0 norm is proposed to approximate it. Secondly, in view of the "sawtooth phenomenon" of the steepest descent method and the sensitivity of the modified Newton method to the choice of initial value, the two methods are jointly optimized to improve reconstruction accuracy. Finally, the CSL0 algorithm is simulated on various images. The results show that the proposed algorithm improves the reconstruction accuracy of the test images by 0-0.98 dB.
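
For orientation, a NumPy sketch of the classic smoothed-L0 (SL0) scheme this builds on; the paper's CSL0 variant replaces the Gaussian smoother below with its tanh-based AHTF and adds a corrected Newton step, neither of which is reproduced here:

```python
import numpy as np

def sl0_reconstruct(A, y, sigma_min=0.01, sigma_decay=0.7, mu=2.0, inner=3):
    """Classic SL0: approximate the L0 norm with a Gaussian of width sigma,
    take small gradient steps, project back onto {x : Ax = y}, and anneal
    sigma toward zero."""
    A_pinv = np.linalg.pinv(A)
    x = A_pinv @ y                       # minimum-L2 feasible starting point
    sigma = 2.0 * np.abs(x).max()
    while sigma > sigma_min:
        for _ in range(inner):
            grad = x * np.exp(-x**2 / (2 * sigma**2))  # smoothed-L0 gradient
            x = x - mu * grad
            x = x - A_pinv @ (A @ x - y)               # project onto Ax = y
        sigma *= sigma_decay
    return x
```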

Keywords: smoothed L0, compressed sensing, image processing, sparse reconstruction

Procedia PDF Downloads 121
5379 Numerical Simulation of Three-Dimensional Cavitating Turbulent Flow in Francis Turbines with ANSYS

Authors: Raza Abdulla Saeed

Abstract:

In this study, the three-dimensional cavitating turbulent flow in a complete Francis turbine is simulated using a mixture model for cavity/liquid two-phase flows. The numerical analysis is carried out using ANSYS CFX release 12, and the standard k-ε turbulence model is adopted. The computational fluid domain consists of the spiral casing, stay vanes, guide vanes, runner and draft tube, and is discretized with a three-dimensional unstructured tetrahedral mesh. The finite volume method (FVM) is used to solve the governing equations of the mixture model. Results of cavitation on the runner's blades under three different boundary conditions are presented and discussed. The numerical results show that the method was successfully applied to simulate the cavitating two-phase turbulent flow through a Francis turbine, with cavitation clearly predicted in the form of water vapor formation inside the turbine. Comparison of the numerical predictions with a real runner shows that the region of higher vapor volume fraction obtained by simulation is consistent with the region of cavitation damage on the runner.

Keywords: computational fluid dynamics, hydraulic Francis turbine, numerical simulation, two-phase mixture cavitation model

Procedia PDF Downloads 566
5378 The Learning Impact of a 4-Dimensional Digital Construction Learning Environment

Authors: Chris Landorf, Stephen Ward

Abstract:

This paper addresses a virtual environment approach to work-integrated learning for students in construction-related disciplines. The virtual approach provides a safe and pedagogically rigorous environment where students can apply theoretical knowledge in a simulated real-world context. The paper describes the development of a 4-dimensional digital construction environment and associated learning activities funded by the Australian Office for Learning and Teaching. The environment was trialled with over 1,300 students and evaluated through questionnaires, observational studies and coursework analysis. Results demonstrate a positive impact on students' technical learning and collaboration skills, but there is a need for further research in relation to critical thinking skills and work-readiness.

Keywords: architectural education, construction industry, digital learning environments, immersive learning

Procedia PDF Downloads 414
5377 The Impact of the Cross Race Effect on Eyewitness Identification

Authors: Leah Wilck

Abstract:

Eyewitness identification is arguably one of the most utilized practices within our legal system; however, exoneration cases indicate that this practice may lead to accuracy and conviction errors. The purpose of this study was to examine the impact of the cross-race effect, the phenomenon whereby people can more easily and accurately identify faces from within their own racial category, on the accuracy of eyewitness identification. Participants watched three separate videos of a perpetrator trying to steal a bicycle. In each video, the perpetrator was of a different race and gender: a Black male, a White male, and a White female. After watching each video, participants were asked to recall everything they could about the perpetrator they had witnessed. The initial results did not show the expected effect of cross-race bias on eyewitness identification accuracy. These surprising results are discussed in terms of cross-race bias and recognition theory, as well as applied implications.

Keywords: cross race effect, eyewitness identification, own-race bias, racial profiling

Procedia PDF Downloads 166
5376 Generic Hybrid Models for Two-Dimensional Ultrasonic Guided Wave Problems

Authors: Manoj Reghu, Prabhu Rajagopal, C. V. Krishnamurthy, Krishnan Balasubramaniam

Abstract:

A thorough understanding of guided ultrasonic wave behavior in structures is essential for the application of existing Non-Destructive Evaluation (NDE) technologies, as well as for the development of new methods. However, the analysis of guided wave phenomena is challenging because of their complex dispersive and multimodal nature. Although numerical solution procedures have proven very useful in this regard, the increasing complexity of the features and defects to be considered, as well as the desire to improve inspection accuracy, often imposes a large computational cost. Hybrid models that combine numerical solutions for wave scattering with faster alternative methods for wave propagation have long been considered a solution to this problem, but such models usually require modification of the base code of the solution procedure. Here we aim to develop generic hybrid models that can be directly applied to any two different solution procedures. With this goal in mind, a Numerical Hybrid model and an Analytical-Numerical Hybrid model have been developed. The concept and implementation of these hybrid models are discussed in this paper.
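
As a flavor of the fast propagation half of such a hybrid scheme, a NumPy sketch that advances a time-domain guided-wave signal analytically between two numerically modelled regions, given a known dispersion relation; the callable k_of_omega is an assumed user input, e.g. interpolated from mode dispersion curves:

```python
import numpy as np

def propagate_mode(signal, dt, distance, k_of_omega):
    """Advance a single guided-wave mode by `distance` in the frequency
    domain: each spectral component picks up the phase exp(-i k(w) d)."""
    spectrum = np.fft.rfft(signal)
    omega = 2 * np.pi * np.fft.rfftfreq(len(signal), dt)
    spectrum *= np.exp(-1j * k_of_omega(omega) * distance)  # dispersive delay
    return np.fft.irfft(spectrum, n=len(signal))
```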

Keywords: guided ultrasonic waves, Finite Element Method (FEM), Hybrid model

Procedia PDF Downloads 469
5375 DenseNet and Autoencoder Architecture for COVID-19 Chest X-Ray Image Classification and Improved U-Net Lung X-Ray Segmentation

Authors: Jonathan Gong

Abstract:

Purpose: AI-driven solutions are at the forefront of many pathology and medical imaging methods. Using algorithms designed to better the experience of medical professionals within their respective fields, the efficiency and accuracy of diagnosis can improve. In particular, X-rays are a fast and relatively inexpensive test that can diagnose diseases. In recent years, however, X-rays have not been widely used to detect and diagnose COVID-19, mainly because of low diagnostic accuracy and confounding with pneumonia, another respiratory disease. Research in this field has nevertheless suggested that artificial neural networks can successfully diagnose COVID-19 with high accuracy. Models and Data: The dataset used is the COVID-19 Radiography Database, which includes images and masks of chest X-rays under the labels COVID-19, normal, and pneumonia. The classification model developed uses an autoencoder and a pre-trained convolutional neural network (DenseNet201) to provide transfer learning. The model then uses a deep neural network to finalize the feature extraction and predict the diagnosis for the input image. This model was trained on 4035 images and validated on 807 separate images. An important feature of the training images is that they are cropped beforehand to eliminate distractions. The image segmentation model uses an improved U-Net architecture to extract the lung mask from the chest X-ray image; it is trained on 8577 images and validated on a 20% validation split. The models' accuracy, precision, recall, F1-score, IoU, and loss are calculated on an external validation dataset. Results: The classification model achieved an accuracy of 97.65% and a loss of 0.1234 when differentiating COVID-19-infected, pneumonia-infected, and normal lung X-rays. The segmentation model achieved an accuracy of 97.31% and an IoU of 0.928. Conclusion: The proposed models can detect COVID-19, pneumonia, and normal lungs with high accuracy and derive the lung mask from a chest X-ray with similarly high accuracy. The hope is for these models to elevate the experience of medical professionals and provide insight into the future of the methods used.
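
A hedged Keras sketch of a DenseNet201 transfer-learning classifier of this kind; input size, head layers, and hyperparameters are illustrative assumptions, and the paper's autoencoder stage is omitted:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Pre-trained DenseNet201 backbone with frozen ImageNet weights
base = tf.keras.applications.DenseNet201(
    include_top=False, weights="imagenet",
    input_shape=(224, 224, 3), pooling="avg")
base.trainable = False  # transfer learning: reuse pre-trained features

model = models.Sequential([
    base,
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.3),
    layers.Dense(3, activation="softmax"),  # COVID-19 / normal / pneumonia
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```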

Keywords: artificial intelligence, convolutional neural networks, deep learning, image processing, machine learning

Procedia PDF Downloads 134
5374 A Study of the Performance Parameter for Recommendation Algorithm Evaluation

Authors: C. Rana, S. K. Jain

Abstract:

The enormous amount of Web data has challenged its efficient usage in the past few years. A range of techniques is applied to tackle this problem; prominent among them are personalization and recommender systems. These are, in fact, the tools that assist users in finding relevant information on the web, and most e-commerce websites apply them in one way or another. In the past decade, a large number of recommendation algorithms have been proposed, but there has not been much research into evaluation criteria for these algorithms: the traditional accuracy and classification metrics are still used, providing only a static view. This paper studies how the evolution of user preference over a period of time can be mapped in a recommender system using a new evaluation methodology that explicitly uses the time dimension. We also present the different types of experimental setups that are generally used for recommender system evaluation. Furthermore, an overview of major accuracy metrics, and of metrics that go beyond the scope of accuracy as researched in the past few years, is discussed in detail.
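
A pandas sketch of what a time-aware evaluation can look like: train on the past, score successive future windows, so drift in user preference shows up in the metric. The model interface, column names, and window length are illustrative assumptions, not the paper's protocol:

```python
import pandas as pd

def temporal_evaluation(ratings: pd.DataFrame, model, window="90D"):
    """Assumes columns user, item, rating, timestamp (datetime), and a
    `model` object with fit(df) and predict(user, item) methods."""
    ratings = ratings.sort_values("timestamp")
    cutoffs = pd.date_range(ratings.timestamp.min(),
                            ratings.timestamp.max(), freq=window)
    scores = []
    for cutoff in cutoffs[1:]:
        train = ratings[ratings.timestamp < cutoff]
        test = ratings[(ratings.timestamp >= cutoff) &
                       (ratings.timestamp < cutoff + pd.Timedelta(window))]
        if len(train) == 0 or len(test) == 0:
            continue
        model.fit(train)
        sq_err = [(model.predict(r.user, r.item) - r.rating) ** 2
                  for r in test.itertuples()]
        scores.append((cutoff, (sum(sq_err) / len(sq_err)) ** 0.5))  # RMSE
    return scores
```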

Keywords: collaborative filtering, data mining, evolutionary, clustering, algorithm, recommender systems

Procedia PDF Downloads 419
5373 Behaviour of Model Square Footing Resting on Three Dimensional Geogrid Reinforced Sand Bed

Authors: Femy M. Makkar, S. Chandrakaran, N. Sankar

Abstract:

The concept of reinforced earth has been used in geotechnical engineering since the 1960s for many applications, such as the construction of road and rail embankments, pavements, retaining walls, shallow foundations, and soft ground improvement. Conventionally, planar geosynthetic materials such as geotextiles and geogrids were used as the reinforcing elements. Recently, the use of three-dimensional reinforcements has become one of the emerging trends in this field, so in the present investigation a three-dimensional geogrid is proposed as the reinforcing material. Laboratory-scale plate load tests were conducted on a model square footing resting on a 3D geogrid-reinforced sand bed. The performance of 3D geogrids in triangular and square patterns was compared with conventional geogrids, and the improvement in bearing capacity and the reduction in settlement and heave were evaluated. When a single layer of reinforcement was placed at an optimum depth of 0.25B from the bottom of the footing, the bearing capacity of conventional geogrid-reinforced soil improved by 1.85 times compared to unreinforced soil, whereas 3D geogrid-reinforced soil in the triangular and square patterns showed improvements of 2.69 and 3.05 times, respectively. 3D geogrids also perform better than conventional geogrids in reducing the settlement and heave of the sand bed around the model footing.

Keywords: 3D reinforcing elements, bearing capacity, heave, settlement

Procedia PDF Downloads 305
5372 Robust Image Registration Based on an Adaptive Normalized Mutual Information Metric

Authors: Huda Algharib, Amal Algharib, Hanan Algharib, Ali Mohammad Alqudah

Abstract:

Image registration is an important topic for many imaging systems and computer vision applications. Standard image registration techniques, such as mutual information and normalized mutual information based methods, have limited performance because they do not consider spatial information or the relationships between neighbouring pixels or voxels. In addition, the amount of image noise may significantly affect registration accuracy. Therefore, this paper proposes an efficient method that explicitly considers the relationships between adjacent pixels: the gradient information of the reference and scene images is extracted first, and then the cosine similarity of the extracted gradient information is computed and used to improve the accuracy of the standard normalized mutual information measure. Our experimental results on different data types (i.e., CT, MRI and thermal images) show that the proposed method outperforms a number of image registration techniques in terms of accuracy.
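
A NumPy sketch of the two ingredients: NMI from a joint histogram, and a gradient cosine-similarity term. How the paper combines the two into its adaptive metric is not reproduced here, so the combination rule is left to the caller:

```python
import numpy as np

def normalized_mutual_information(a, b, bins=32):
    """NMI from a joint histogram: (H(A) + H(B)) / H(A, B)."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    def entropy(p):
        p = p[p > 0]
        return -(p * np.log2(p)).sum()
    return (entropy(px) + entropy(py)) / entropy(pxy.ravel())

def gradient_cosine(a, b, eps=1e-9):
    """Cosine similarity of image gradients, a sketch of the spatial term."""
    ga, gb = np.gradient(a.astype(float)), np.gradient(b.astype(float))
    num = sum((x * y).sum() for x, y in zip(ga, gb))
    den = (np.sqrt(sum((x ** 2).sum() for x in ga)) *
           np.sqrt(sum((y ** 2).sum() for y in gb)))
    return num / (den + eps)
```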

Keywords: image registration, mutual information, image gradients, image transformations

Procedia PDF Downloads 252
5371 Automated Driving Deep Neural Networks Model Accuracy and Performance Assessment in a Simulated Environment

Authors: David Tena-Gago, Jose M. Alcaraz Calero, Qi Wang

Abstract:

The evolution and integration of automated vehicles have become more and more tangible in recent years. State-of-the-art technological advances in camera-based Artificial Intelligence (AI) and computer vision greatly favor the performance and reliability of Advanced Driver Assistance Systems (ADAS), leading to greater knowledge of vehicular operation and behavior that more closely resembles a human driver's. However, the exclusive use of this technology still seems insufficient to fully control vehicular operation. To reveal the degree of accuracy of current camera-based automated driving AI modules, this paper studies the structure and behavior of one of the main solutions in a controlled testing environment. The results clearly show a lack of reliability when the AI model is used exclusively in the perception stage, entailing the use of additional complementary sensors to improve safety and performance.

Keywords: accuracy assessment, AI-driven mobility, artificial intelligence, automated vehicles

Procedia PDF Downloads 118
5370 Protein Tertiary Structure Prediction by a Multiobjective Optimization and Neural Network Approach

Authors: Alexandre Barbosa de Almeida, Telma Woerle de Lima Soares

Abstract:

Protein structure prediction is a challenging task in the bioinformatics field. The biological function of all proteins relies largely on the shape of their three-dimensional conformational structure, yet less than 1% of all known proteins have had their structure solved. This work proposes a deep learning model to address this problem, attempting to predict some aspects of protein conformations. Through a process of multiobjective dominance, a recurrent neural network was trained to abstract the particular bias of each individual multiobjective algorithm, generating a heuristic that could be useful for predicting some of the relevant aspects of the three-dimensional conformation formation process known as protein folding.
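
The multiobjective dominance relation such training relies on is Pareto dominance; a minimal sketch for minimization objectives (the example objective pair is hypothetical):

```python
def dominates(a, b):
    """Pareto dominance for minimization: objective vector `a` dominates `b`
    if it is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

# Example: hypothetical (energy, clash score) pairs for two conformations
print(dominates((1.2, 0.4), (1.5, 0.4)))  # True
```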

Keywords: ab initio heuristic modeling, multiobjective optimization, protein structure prediction, recurrent neural network

Procedia PDF Downloads 209
5369 Metropolis-Hastings Sampling Approach for High Dimensional Testing Methods of Autonomous Vehicles

Authors: Nacer Eddine Chelbi, Ayet Bagane, Annie Saleh, Claude Sauvageau, Denis Gingras

Abstract:

As recently stated by the National Highway Traffic Safety Administration (NHTSA), demonstrating the expected performance of a highly automated vehicle system requires test approaches that include a combination of simulation, test track, and on-road testing. In this paper, we propose a new validation method for autonomous vehicles involving on-road tests (Field Operational Tests), test track (Test Matrix) and simulation (Worst-Case Scenarios). We concentrate our discussion on the simulation aspects; in particular, we extend recent work based on Importance Sampling by using a Metropolis-Hastings algorithm (MHS) to sample data collected from the Safety Pilot Model Deployment (SPMD) in lane-change scenarios. Our proposed MH sampling method is compared to the Importance Sampling method, which does not perform well in high-dimensional problems. The importance of this study lies in obtaining a sampler that can be applied to high-dimensional simulation problems in order to reduce and optimize the number of test scenarios necessary for the validation and certification of autonomous vehicles.
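
A generic random-walk Metropolis-Hastings sketch; the log-density would in practice be one fitted to the SPMD lane-change data, and here it is an assumed caller-supplied callable:

```python
import numpy as np

def metropolis_hastings(log_target, x0, n_samples, step=0.5, rng=None):
    """Draw samples from an unnormalized target density via a symmetric
    Gaussian random-walk proposal and the MH accept/reject rule."""
    rng = rng or np.random.default_rng()
    x = np.asarray(x0, dtype=float)
    samples, log_p = [], log_target(x)
    for _ in range(n_samples):
        proposal = x + step * rng.standard_normal(x.shape)
        log_p_new = log_target(proposal)
        if np.log(rng.random()) < log_p_new - log_p:  # accept with prob. ratio
            x, log_p = proposal, log_p_new
        samples.append(x.copy())
    return np.array(samples)
```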

Keywords: automated driving, autonomous emergency braking (AEB), autonomous vehicles, certification, evaluation, importance sampling, metropolis-hastings sampling, tests

Procedia PDF Downloads 290
5368 3-D Visualization and Optimization for SISO Linear Systems Using Parametrization of Two-Stage Compensator Design

Authors: Kazuyoshi Mori, Keisuke Hashimoto

Abstract:

In this paper, we consider two-stage compensator designs for SISO plants. Because the characteristics of such designs are not yet well investigated, we implement a three-dimensional visualization system for output signals and an optimization system for SISO plants via the parametrization of stabilizing controllers based on the two-stage compensator design. The system runs on Mathematica using "Three Dimensional Surface Plots," so that the visualization can be interactively manipulated by users. In this paper, we use the discrete-time LTI system model. Even so, our approach is the factorization approach, so the result can be applied to many linear models.
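
For reference, the parametrization underlying the factorization approach in its standard SISO (Youla-Kučera) form; the notation below is ours, assumed rather than taken from the paper:

```latex
% Factor the plant P = n/d with n, d stable and coprime, and choose stable
% x, y satisfying the Bezout identity. Then every stabilizing controller is
% generated by one free stable parameter q:
\[
  n x + d y = 1, \qquad
  C(q) = \frac{x + d\,q}{y - n\,q}, \quad q \in RH_\infty,\; y - n q \neq 0 .
\]
% Sweeping q is what the 3-D visualization and optimization explore.
```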

Keywords: linear systems, visualization, optimization, Mathematica

Procedia PDF Downloads 304
5367 Performance and Emission Prediction in a Biodiesel Engine Fuelled with Honge Methyl Ester Using RBF Neural Networks

Authors: Shiva Kumar, G. S. Vijay, Srinivas Pai P., Shrinivasa Rao B. R.

Abstract:

In the present study, RBF neural networks were used to predict the performance and emission parameters of a biodiesel engine. Engine experiments were carried out on a 4-stroke diesel engine using blends of diesel and Honge methyl ester as the fuel. Performance parameters such as BTE, BSEC and Tech, together with emissions from the engine, were measured, and these experimental results were used for ANN modeling. RBF center initialization was done both by random selection and by using clustering techniques, and the network was trained using fixed and varying widths for the RBF units. The RBF results were in good agreement with the experimental results. Networks trained using the clustering technique gave better results than those using random selection of centers, in terms of reduced MRE and increased prediction accuracy. The average MRE for the performance parameters was 3.25% with a prediction accuracy of 98%, and for emissions it was 10.4% with a prediction accuracy of 80%.
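
A compact scikit-learn/NumPy sketch of the approach: k-means supplies the RBF centers (the clustered initialization found superior here), a fixed width defines the Gaussian units, and a linear output layer is fitted on top. All hyperparameters are illustrative assumptions:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import Ridge

class RBFNetwork:
    """Minimal RBF network: Gaussian hidden units at k-means centers,
    linear readout fitted by ridge regression."""
    def __init__(self, n_centers=10, width=1.0):
        self.n_centers, self.width = n_centers, width

    def _features(self, X):
        d2 = ((X[:, None, :] - self.centers[None, :, :]) ** 2).sum(axis=2)
        return np.exp(-d2 / (2 * self.width ** 2))

    def fit(self, X, y):
        km = KMeans(n_clusters=self.n_centers, n_init=10).fit(X)
        self.centers = km.cluster_centers_          # clustered initialization
        self.linear = Ridge(alpha=1e-6).fit(self._features(X), y)
        return self

    def predict(self, X):
        return self.linear.predict(self._features(X))
```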

Keywords: radial basis function networks, emissions, performance parameters, fuzzy c means

Procedia PDF Downloads 562
5366 Partial Least Square Regression for High-Dimensional and Highly Correlated Data

Authors: Mohammed Abdullah Alshahrani

Abstract:

The research focuses on investigating the use of partial least squares (PLS) methodology for addressing challenges associated with high-dimensional correlated data. Recent technological advancements have led to experiments producing data characterized by a large number of variables compared to observations, with substantial inter-variable correlations. Such data patterns are common in chemometrics, where near-infrared (NIR) spectrometer calibrations record chemical absorbance levels across hundreds of wavelengths, and in genomics, where thousands of genomic regions' copy number alterations (CNA) are recorded from cancer patients. PLS serves as a widely used method for analyzing high-dimensional data, functioning as a regression tool in chemometrics and a classification method in genomics. It handles data complexity by creating latent variables (components) from original variables. However, applying PLS can present challenges. The study investigates key areas to address these challenges, including unifying interpretations across three main PLS algorithms and exploring unusual negative shrinkage factors encountered during model fitting. The research presents an alternative approach to addressing the interpretation challenge of predictor weights associated with PLS. Sparse estimation of predictor weights is employed using a penalty function combining a lasso penalty for sparsity and a Cauchy distribution-based penalty to account for variable dependencies. The results demonstrate sparse and grouped weight estimates, aiding interpretation and prediction tasks in genomic data analysis. High-dimensional data scenarios, where predictors outnumber observations, are common in regression analysis applications. Ordinary least squares regression (OLS), the standard method, performs inadequately with high-dimensional and highly correlated data. Copy number alterations (CNA) in key genes have been linked to disease phenotypes, highlighting the importance of accurate classification of gene expression data in bioinformatics and biology using regularized methods like PLS for regression and classification.
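
A scikit-learn sketch of PLS in the p ≫ n regime the abstract describes; the synthetic data and component count are illustrative assumptions:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# Synthetic p >> n data: 50 samples, 500 predictors correlated through
# 5 shared latent factors (mimicking, e.g., NIR wavelengths or CNA regions)
rng = np.random.default_rng(0)
latent = rng.standard_normal((50, 5))
X = latent @ rng.standard_normal((5, 500)) + 0.1 * rng.standard_normal((50, 500))
y = latent @ rng.standard_normal(5) + 0.1 * rng.standard_normal(50)

pls = PLSRegression(n_components=5).fit(X[:40], y[:40])  # latent components
print("held-out R^2:", round(pls.score(X[40:], y[40:]), 3))
```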

Keywords: partial least square regression, genetics data, negative filter factors, high dimensional data, high correlated data

Procedia PDF Downloads 55
5365 Email Phishing Detection Using Natural Language Processing and Convolutional Neural Network

Authors: M. Hilani, B. Nassih

Abstract:

Phishing is one of the oldest and best-known scams on the Internet. It can be defined as any type of telecommunications fraud that uses social engineering tricks to obtain confidential data from its victims: a cybercrime aimed at stealing sensitive information. Phishing is generally done via private email, with scammers impersonating large companies or other trusted entities to encourage victims to voluntarily provide information such as login credentials or, worse yet, credit card numbers. The COVID-19 theme has been used by cybercriminals in multiple malicious campaigns, including phishing. In this environment, message filtering solutions have become essential to protect devices that are now used outside of the secure perimeter. Despite constantly updated defences against these cyberattacks, current results remain insufficient, and many researchers are still looking for optimal solutions for filtering phishing emails. In this work, we concentrated on detecting phishing emails using the different steps of NLP preprocessing, and we proposed and trained a model using a one-dimensional CNN. Our results show that the model obtained an accuracy of 99.99%, which demonstrates how well it works.
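
A hedged Keras sketch of a one-dimensional CNN text classifier of this kind, applied after tokenization and padding; vocabulary size, sequence length, and layer sizes are illustrative assumptions, not the paper's configuration:

```python
from tensorflow.keras import layers, models

VOCAB, SEQ_LEN = 20000, 200  # illustrative assumptions
model = models.Sequential([
    layers.Input(shape=(SEQ_LEN,)),
    layers.Embedding(VOCAB, 64),
    layers.Conv1D(128, 5, activation="relu"),  # n-gram-like local features
    layers.GlobalMaxPooling1D(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),     # phishing vs. legitimate
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()
```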

Keywords: phishing, e-mail, NLP preprocessing, CNN, e-mail filtering

Procedia PDF Downloads 132
5364 Estimation of Stress Intensity Factors from near Crack Tip Field

Authors: Zhuang He, Andrei Kotousov

Abstract:

All current experimental methods for the determination of stress intensity factors are based on the assumption that the state of stress near the crack tip is plane stress. Therefore, these methods rely on strain and displacement measurements made outside the near-crack-tip region affected by three-dimensional effects or by the process zone. In this paper, we develop and validate an experimental procedure for the evaluation of stress intensity factors from measurements of the out-of-plane displacements in the surface area controlled by 3D effects. The evaluation of stress intensity factors is possible when the process zone is sufficiently small and the displacement field generated by the 3D effects is fully encapsulated by the K-dominance region.
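
For context, the first-order plane-stress relation linking the out-of-plane surface displacement to the mode I stress intensity factor, stated here as textbook background in our notation (the paper's procedure accounts for the 3D effects this relation neglects):

```latex
\[
  u_z = -\frac{\nu h}{2E}\left(\sigma_{xx} + \sigma_{yy}\right)
      = -\frac{\nu h}{E}\,\frac{K_I}{\sqrt{2\pi r}}\,\cos\frac{\theta}{2},
\]
% h: plate thickness, nu: Poisson's ratio, E: Young's modulus,
% (r, theta): polar coordinates at the crack tip. Fitting measured u_z
% inside the K-dominance region yields K_I.
```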

Keywords: digital image correlation, stress intensity factors, three-dimensional effects, transverse displacement

Procedia PDF Downloads 620
5363 Behavior Consistency Analysis for Workflow Nets Based on Branching Processes

Authors: Wang Mimi, Jiang Changjun, Liu Guanjun, Fang Xianwen

Abstract:

Loop structures often appear in business process modeling, and analyzing the consistency of the corresponding workflow net models containing loop structures is a problem: existing behavior consistency methods cannot effectively analyze process models with loop structures. In this paper, by analyzing five kinds of behavior relations between transitions, a three-dimensional figure and a two-dimensional behavior relation matrix are proposed. Based on these, an analysis method for the behavior consistency of business processes based on Petri net branching processes is proposed. Finally, an example is given that shows the method is effective.

Keywords: workflow net, behavior consistency measures, loop, branching process

Procedia PDF Downloads 392
5362 Indoor Localization Algorithm and Appropriate Implementation Using Wireless Sensor Networks

Authors: Adeniran K. Ademuwagun, Alastair Allen

Abstract:

The dependence between RSS and distance in an enclosed environment is an important consideration because it can influence the reliability of any localization algorithm founded on RSS. Several algorithms reduce the variance of RSS effectively to improve localization accuracy. Our proposed algorithm essentially avoids this pitfall and is consequently highly adaptable in the face of erratic radio signals. Using 3 anchors in close proximity to each other, we are able to establish that RSS can be used as a reliable indicator for localization with an acceptable degree of accuracy. Inherent in this concept is the ability of each prospective anchor to validate (guarantee) the position or proximity of the other 2 anchors involved in the localization, and vice versa. This procedure ensures that the uncertainties of radio signals due to multipath effects in enclosed environments are minimized. A major driver of this idea is the implicit topological relationship among sensors due to raw radio signal strength. The algorithm is an area-based algorithm; however, it does not trade accuracy for precision (i.e., the size of the returned area).
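
A small sketch in the spirit of such RSS/centroid schemes: a weighted centroid in which stronger RSS pulls the estimate toward that anchor. The RSS-to-weight conversion via a log-distance model is our assumption, not the authors' exact scheme:

```python
import numpy as np

def weighted_centroid(anchor_positions, rss_dbm, path_loss_exp=2.5):
    """Estimate position as an RSS-weighted centroid of the anchors;
    weights are monotone in proximity under a log-distance path-loss model."""
    rss = np.asarray(rss_dbm, dtype=float)
    weights = 10 ** (rss / (10 * path_loss_exp))
    weights /= weights.sum()
    return weights @ np.asarray(anchor_positions, dtype=float)

# Example: three nearby anchors and hypothetical measured RSS values (dBm)
print(weighted_centroid([(0, 0), (4, 0), (2, 3)], [-40, -55, -48]))
```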

Keywords: anchor nodes, centroid algorithm, communication graph, radio signal strength

Procedia PDF Downloads 512
5361 An Accurate Computer-Aided Diagnosis: CAD System for Diagnosis of Aortic Enlargement by Using Convolutional Neural Networks

Authors: Mahdi Bazarganigilani

Abstract:

Aortic enlargement, also known as an aortic aneurysm, can occur when the walls of the aorta become weak, and the disease can become deadly if overlooked and undiagnosed. In this paper, a computer-aided diagnosis (CAD) system is introduced to accurately diagnose aortic enlargement from chest X-ray images. An enhanced convolutional neural network (CNN) was employed and trained by transfer learning using three main areas from the original images: the left lung, heart, and right lung. The accuracy of the system was evaluated on 1001 samples using 4-fold cross-validation, and a promising accuracy of 90% was achieved in terms of the F-measure indicator. The results showed that using different areas from the original image in the training phase of the CNN could increase the accuracy of predictions. This encouraged the author to evaluate this method on a larger dataset, and even on different CAD systems, for further enhancement of this methodology.

Keywords: computer-aided diagnosis systems, aortic enlargement, chest X-ray, image processing, convolutional neural networks

Procedia PDF Downloads 167
5360 Estimation of Functional Response Model by Supervised Functional Principal Component Analysis

Authors: Hyon I. Paek, Sang Rim Kim, Hyon A. Ryu

Abstract:

In functional linear regression, one typical problem is dimension reduction. Compared with multivariate linear regression, functional linear regression is an infinite-dimensional case, and the main task is to reduce the dimensions of the functional response and the functional predictors. One common approach is to apply functional principal component analysis (FPCA) to the functional predictors and then use a few leading functional principal components (FPCs) to fit the functional model. The leading FPCs estimated by typical FPCA explain a major portion of the variation in the functional predictor, but they may not be the components most correlated with the functional response, so they may not be significant for predicting the response. In this paper, we propose a supervised functional principal component analysis method for a functional response model, with FPCs obtained by taking the correlation with the functional response into account. Our method is expected to have better prediction accuracy than the typical FPCA method.
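
A toy NumPy sketch of the supervised idea on a discretized functional predictor: compute principal component scores, then rank components by their correlation with the response rather than by explained variance. The selection rule is an illustrative assumption, not the paper's exact criterion:

```python
import numpy as np

def supervised_pc_scores(X, y, n_keep=3):
    """X: (n_samples, n_gridpoints) discretized curves; y: scalar summary of
    the response. Returns the n_keep PC scores most correlated with y."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = U * s                                    # all PC scores
    corr = np.array([abs(np.corrcoef(scores[:, j], y)[0, 1])
                     for j in range(scores.shape[1])])
    order = np.argsort(corr)[::-1][:n_keep]           # response-driven ranking
    return scores[:, order], order
```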

Keywords: supervised, functional principal component analysis, functional response, functional linear regression

Procedia PDF Downloads 82
5359 The Effect of Explicit Focus on Form on Second Language Learning Writing Performance

Authors: Keivan Seyyedi, Leila Esmaeilpour, Seyed Jamal Sadeghi

Abstract:

The aim of this study was to investigate the effectiveness of explicit focus on form on the written performance of EFL learners. To provide empirical support, sixty male English learners were selected and randomly assigned to two groups: explicit focus on form and meaning-focused. Narrative writing was employed for data collection: to measure writing performance, participants were required to narrate a story, were given 20 minutes to finish the task, and were asked to write at least 150 words. The participants' output was coded and then analyzed using independent t-tests for the grammatical accuracy and fluency of the learners' performance. Results indicated that learners in the explicit focus on form group appear to benefit from error correction and rule explanation, two pedagogical techniques of explicit focus on form, with respect to accuracy; regarding fluency, however, they did not show any significant differences compared to the participants in the meaning-focused group.

Keywords: explicit focus on form, rule explanation, accuracy, fluency

Procedia PDF Downloads 518
5358 Effects of Topic Familiarity on Linguistic Aspects in EFL Learners’ Writing Performance

Authors: Jeong-Won Lee, Kyeong-Ok Yoon

Abstract:

The current study aimed to investigate the effects of topic familiarity and language proficiency on linguistic aspects (lexical complexity, syntactic complexity, accuracy, and fluency) of EFL learners' argumentative essays. For the study, 64 college students were asked to write an argumentative essay on each of two topics (driving and smoking) chosen in consideration of topic familiarity. The students were divided into two language proficiency groups (high-level and intermediate) according to their English writing proficiency. The findings are as follows: 1) the participants exhibited lower levels of lexical and syntactic complexity, as well as accuracy, when performing writing tasks on unfamiliar topics; and 2) the high-level students demonstrated the use of a wider range of vocabulary and of longer and more complex structures, and produced more accurate and lengthier texts, compared to their intermediate peers. Discussion and pedagogical implications for the instruction of writing classes in EFL contexts are addressed.

Keywords: topic familiarity, complexity, accuracy, fluency

Procedia PDF Downloads 54
5357 A Developmental Survey of Local Stereo Matching Algorithms

Authors: André Smith, Amr Abdel-Dayem

Abstract:

This paper presents an overview of the history and development of stereo matching algorithms, from their inception up to relatively recent techniques, noting the challenges that have been surmounted across the past decades. Different components of these algorithms are explored, though the focus is directed towards local matching techniques. While global approaches have existed for some time and have demonstrated greater accuracy than their counterparts, they are generally quite slow. Many strides have been made more recently, allowing local methods to catch up in terms of accuracy without sacrificing overall performance.
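
As a concrete example of the local techniques the survey focuses on, a deliberately naive SAD block-matching sketch over rectified images; window size and disparity range are illustrative assumptions, and real local methods add cost aggregation and refinement:

```python
import numpy as np

def sad_block_match(left, right, max_disp=32, win=5):
    """For each pixel, slide a window along the epipolar line and pick the
    disparity with the lowest sum of absolute differences (SAD)."""
    h, w = left.shape
    half = win // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y-half:y+half+1, x-half:x+half+1].astype(float)
            costs = [np.abs(patch - right[y-half:y+half+1,
                                          x-d-half:x-d+half+1]).sum()
                     for d in range(max_disp)]
            disp[y, x] = int(np.argmin(costs))
    return disp
```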

Keywords: developmental survey, local stereo matching, rectification, stereo correspondence

Procedia PDF Downloads 297
5356 The Combination of Aortic Dissection Detection Risk Score (ADD-RS) with D-Dimer as a Diagnostic Tool to Exclude the Diagnosis of Acute Aortic Syndrome (AAS)

Authors: Mohamed Hamada Abdelkader Fayed

Abstract:

Background: To evaluate the diagnostic accuracy of the ADD-RS combined with D-dimer as a screening test to exclude AAS. Methods: We searched for studies examining the diagnostic accuracy of ADD-RS plus D-dimer for excluding the diagnosis of AAS, covering MEDLINE, Embase, and the Cochrane trials register up to 31 December 2020. Results: We identified 3 studies using ADD-RS with D-dimer as a diagnostic tool for AAS, involving 3261 patients, of whom 559 (17.14%) were diagnosed with AAS. Overall, the pooled sensitivities were 97.6% (95% CI 95.6-99.6) for ADD-RS ≤ 1 (low-risk group) with D-dimer and 97.4% (95% CI 95.4-99.4) for ADD-RS > 1 (high-risk group) with D-dimer; the failure rate was 0.48% in the low-risk group and 4.3% in the high-risk group, respectively. Conclusions: The ADD-RS combined with D-dimer is a useful screening test with high sensitivity to exclude acute aortic syndrome.

Keywords: aortic dissection detection risk score, D-dimer, acute aortic syndrome, diagnostic accuracy

Procedia PDF Downloads 219