Search results for: accuracy
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3720

2430 Design and Analysis of an Electro Thermally Symmetrical Actuated Microgripper

Authors: Sh. Foroughi, V. Karamzadeh, M. Packirisamy

Abstract:

This paper presents the design and analysis of an electrothermally actuated symmetrical microgripper applicable to micro-assembly and biological cell manipulation. Integrating micro-optics with the microdevice makes it possible to achieve extremely precise control over the operation of the device. Geometry, material, actuation, control, measurement accuracy, and temperature distribution are important factors that have to be taken into account when designing an efficient microgripper. In this work, four different geometries are analyzed in COMSOL Multiphysics 5.2 using the Finite Element Method. The temperature distribution along the fingertip, the displacement of the gripping site, and the optical efficiency versus displacement and electrical potential are then illustrated. The results show that, in addition to the industrial applications of this device, its use as a cell manipulator is also possible.

Keywords: electro thermal actuator, MEMS, microgripper, MOEMS

Procedia PDF Downloads 165
2429 Parametric Template-Based 3D Reconstruction of the Human Body

Authors: Jiahe Liu, Hongyang Yu, Feng Qian, Miao Luo, Linhang Zhu

Abstract:

This study proposes a 3D human body reconstruction method that integrates multi-view joint information into a single set of joints and processes it with a parametric human body template. First, human body images were captured from multiple viewpoints; the multi-view information mitigates self-occlusion and occlusion problems during the reconstruction process. Then, the MvP algorithm was used to integrate the multi-view joint information into a single set of joints. Next, the parametric human body template SMPL-X was used to obtain more accurate three-dimensional reconstruction results. Compared with traditional single-view parametric template reconstruction, this method significantly improves the accuracy and stability of the reconstruction.

Keywords: parametric human body templates, reconstruction of the human body, multi-view, joint

Procedia PDF Downloads 79
2428 Context-Aware Recommender System Using Collaborative Filtering, Content-Based Algorithm and Fuzzy Rules

Authors: Xochilt Ramirez-Garcia, Mario Garcia-Valdez

Abstract:

Contextual recommendations are implemented in recommender systems to improve user satisfaction: by making accurate recommendations suited to a particular situation, the system delivers personalized recommendations. The context provides information relevant to the recommender system and is used as a filter for selecting items relevant to the user. This paper presents a context-aware recommender system that combines collaborative filtering and content-based techniques with fuzzy rules to recommend items within the context. The dataset used to test the system is TripAdvisor, and the accuracy of the recommendations is evaluated with the Mean Absolute Error (MAE).
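As a rough illustration of how such a hybrid score and the MAE evaluation could look, the sketch below blends a collaborative-filtering prediction with a content-based prediction using a simple fuzzy context weight; the rule shape, weighting scheme, and sample ratings are assumptions made for illustration and are not taken from the paper.

```python
import numpy as np

def fuzzy_context_weight(context_match):
    """Toy fuzzy membership: how strongly the current context matches the item's context (0..1)."""
    # simple piecewise-linear membership (assumed rule shape, not the paper's rules)
    return max(0.0, min(1.0, 2.0 * context_match - 0.5))

def hybrid_prediction(cf_pred, cb_pred, context_match):
    """Blend CF and content-based predictions; a stronger context match favours the CF rating."""
    w = fuzzy_context_weight(context_match)
    return w * cf_pred + (1.0 - w) * cb_pred

def mean_absolute_error(y_true, y_pred):
    return float(np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred))))

# hypothetical ratings on a 1-5 scale
true_ratings = [4, 3, 5, 2]
cf_preds     = [4.2, 2.6, 4.7, 2.9]
cb_preds     = [3.8, 3.1, 4.9, 2.2]
context      = [0.9, 0.4, 0.8, 0.6]   # how well each item fits the user's current context

preds = [hybrid_prediction(c, b, m) for c, b, m in zip(cf_preds, cb_preds, context)]
print("MAE:", mean_absolute_error(true_ratings, preds))
```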

Keywords: algorithms, collaborative filtering, intelligent systems, fuzzy logic, recommender systems

Procedia PDF Downloads 421
2427 Forecasting Amman Stock Market Data Using a Hybrid Method

Authors: Ahmad Awajan, Sadam Al Wadi

Abstract:

In this study, a hybrid method based on Empirical Mode Decomposition and Holt-Winters (EMD-HW) is used to forecast Amman stock market data. First, the data are decomposed by the EMD method into Intrinsic Mode Functions (IMFs) and a residual component. Then, each component is forecasted with the HW technique. Finally, the component forecasts are aggregated to obtain the forecast of the stock market data. Empirical results show that EMD-HW outperforms the individual forecasting models. The strength of EMD-HW lies in its ability to forecast non-stationary and non-linear time series without the need for any transformation method. Moreover, EMD-HW achieves relatively high accuracy compared with eight existing forecasting methods on five forecast error measures.
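A minimal sketch of the EMD-HW pipeline described above is given below, assuming the PyEMD and statsmodels packages are available; the Holt-Winters settings (additive trend, no seasonality) and the forecast horizon are illustrative assumptions rather than the paper's exact configuration.

```python
import numpy as np
from PyEMD import EMD                      # assumed: PyEMD package for Empirical Mode Decomposition
from statsmodels.tsa.holtwinters import ExponentialSmoothing

def emd_hw_forecast(series, horizon=10):
    """Decompose the series into IMFs, forecast each with Holt-Winters, and sum the forecasts."""
    series = np.asarray(series, dtype=float)
    emd = EMD()
    imfs = emd(series)                     # each row is one IMF
    residue = series - imfs.sum(axis=0)    # whatever remains after the extracted IMFs
    forecasts = []
    for comp in list(imfs) + [residue]:
        fit = ExponentialSmoothing(comp, trend="add", seasonal=None).fit()
        forecasts.append(fit.forecast(horizon))
    return np.sum(forecasts, axis=0)       # aggregate the component forecasts

# usage on a hypothetical closing-price series:
# prediction = emd_hw_forecast(closing_prices, horizon=5)
```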

Keywords: Holt-Winter method, empirical mode decomposition, forecasting, time series

Procedia PDF Downloads 129
2426 Phantom and Clinical Evaluation of Block Sequential Regularized Expectation Maximization Reconstruction Algorithm in Ga-PSMA PET/CT Studies Using Various Relative Difference Penalties and Acquisition Durations

Authors: Fatemeh Sadeghi, Peyman Sheikhzadeh

Abstract:

Introduction: The Block Sequential Regularized Expectation Maximization (BSREM) reconstruction algorithm was recently developed to suppress excessive noise by applying a relative difference penalty. The aim of this study was to investigate the effect of various strengths of the noise penalization factor in the BSREM algorithm under different acquisition durations and lesion sizes, in order to determine an optimum penalty factor by considering both quantitative and qualitative image evaluation parameters in clinical use. Materials and Methods: The NEMA IQ phantom and 15 clinical whole-body patients with prostate cancer were evaluated. The phantom and patients were injected with Gallium-68 Prostate-Specific Membrane Antigen (68Ga-PSMA) and scanned on a non-time-of-flight Discovery IQ Positron Emission Tomography/Computed Tomography (PET/CT) scanner with BGO crystals. The data were reconstructed using BSREM with β-values of 100-500 in intervals of 100. These reconstructions were compared to OSEM as a widely used reconstruction algorithm. Following the standard NEMA measurement procedure, background variability (BV), recovery coefficient (RC), contrast recovery (CR), and residual lung error (LE) were measured from the phantom data, and signal-to-noise ratio (SNR), signal-to-background ratio (SBR), and tumor SUV were measured from the clinical data. Qualitative features of the clinical images were visually ranked by one nuclear medicine expert. Results: The β-value acts as a noise suppression factor, so BSREM showed decreasing image noise with increasing β-value. BSREM with a β-value of 400 at a decreased acquisition duration (2 min/bp) produced approximately the same noise level as OSEM at an increased acquisition duration (5 min/bp). For a β-value of 400 at a 2 min/bp duration, SNR increased by 43.7% and LE decreased by 62% compared with OSEM at a 5 min/bp duration. In both phantom and clinical data, an increase in the β-value translated into a decrease in SUV. The lowest levels of SUV and noise were reached with the highest β-value (β=500), resulting in the highest SNR and lowest SBR, because noise is reduced more than SUV at the highest β-value. In comparing BSREM with different β-values, the relative difference in the quantitative parameters was generally larger for smaller lesions. As the β-value decreased from 500 to 100, the increase in CR was 160.2% for the smallest sphere (10 mm) and 12.6% for the largest sphere (37 mm), and the trend was similar for SNR (-58.4% and -20.5%, respectively). BSREM was visually ranked higher than OSEM on all qualitative features. Conclusions: The BSREM algorithm, using more iterations, leads to better quantitative accuracy without excessive noise, which translates into higher overall image quality and lesion detectability. This improvement can be used to shorten acquisition time.

Keywords: BSREM reconstruction, PET/CT imaging, noise penalization, quantification accuracy

Procedia PDF Downloads 97
2425 Analysis and Simulation of TM Fields in Waveguides with Arbitrary Cross-Section Shapes by Means of Evolutionary Equations of Time-Domain Electromagnetic Theory

Authors: Ömer Aktaş, Olga A. Suvorova, Oleg Tretyakov

Abstract:

The boundary value problem on a non-canonical, arbitrarily shaped contour is solved with a numerically efficient method called the Analytical Regularization Method (ARM) to calculate propagation parameters. As a result of the regularization, the equation of the first kind is reduced to an infinite system of linear algebraic equations of the second kind in the space L2. This system can be solved numerically to any desired accuracy using the truncation method. The resulting parameters, the cut-off wavenumber and cut-off frequency, are used in the waveguide evolutionary equations of time-domain electromagnetic theory to illustrate the real-valued TM fields in lossy and lossless media.

Keywords: analytical regularization method, electromagnetic theory evolutionary equations of time-domain, TM Field

Procedia PDF Downloads 500
2424 Supervised Learning for Cyber Threat Intelligence

Authors: Jihen Bennaceur, Wissem Zouaghi, Ali Mabrouk

Abstract:

The major aim of cyber threat intelligence (CTI) is to provide sophisticated knowledge about cybersecurity threats to ensure internal and external safeguards against modern cyberattacks. Inaccurate, incomplete, outdated, and low-value threat intelligence is the main problem with current threat-information sharing. Therefore, data analysis based on AI algorithms is one of the emerging solutions for overcoming these issues. In this paper, we propose a supervised machine-learning algorithm to improve threat-information sharing by providing a sophisticated classification of cyber threats and data. Extensive simulations investigate the accuracy, precision, recall, F1-score, and support to validate the designed algorithm and to compare it with several other supervised machine-learning algorithms.
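The evaluation described above maps directly onto scikit-learn's classification report, which prints precision, recall, F1-score, and support per class; the sketch below uses a random forest on synthetic stand-in features, since the paper's actual feature set and classifier configuration are not disclosed here.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, classification_report
from sklearn.model_selection import train_test_split

# synthetic stand-in for numeric features extracted from threat records
# (e.g., indicator counts, source reliability) with three hypothetical threat classes
X, y = make_classification(n_samples=1000, n_features=20, n_informative=8,
                           n_classes=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    stratify=y, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)

print("accuracy:", accuracy_score(y_test, y_pred))
print(classification_report(y_test, y_pred))   # precision, recall, f1-score, support per class
```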

Keywords: threat information sharing, supervised learning, data classification, performance evaluation

Procedia PDF Downloads 148
2423 A Dislocation-Based Explanation to Quasi-Elastic Release in Shock Loaded Aluminum

Authors: Song L. Yao, Ji D. Yu, Xiao Y. Pei

Abstract:

An explanation is introduced for the quasi-elastic release phenomenon in shock-compressed aluminum. A dislocation-based model, taking into account dislocation substructures and their evolution, is applied to simulate the elastic-plastic response of both single-crystal and polycrystalline aluminum. Simulated results indicate that dislocation immobilization during dynamic deformation results in a smooth increase of the yield stress, which leads to the quasi-elastic release, while the generation of dislocations caused by the plastic release wave results in the appearance of a transition point between the quasi-elastic release and the plastic release in the profile. The calculated shear strength and dislocation density are in accordance with experimental results, which demonstrates the accuracy of our simulations.

Keywords: dislocation density, quasi-elastic release, wave profile, shock wave

Procedia PDF Downloads 282
2422 Overview of Fiber Optic Gyroscopes as Ring Laser Gyros and Fiber Optic Gyros and the Comparison Between Them

Authors: M. Abdo, Mohamed Shalaby

Abstract:

A key development in the field of inertial sensors, fiber-optic gyroscopes (FOGs) are currently considered a competitive alternative to mechanical gyroscopes for inertial navigation and control applications. For the past few years, research and development efforts have been conducted around the world using the FOG as a crucial sensor for high-accuracy inertial navigation systems. This paper covers the fundamentals of optical gyros, then discusses the two main types of optical gyros, fiber-optic gyroscopes and ring laser gyroscopes, and compares them. Different types of fiber-optic gyros are also discussed, including interferometric, resonator, and Brillouin fiber-optic gyroscopes.

Keywords: mechanical gyros, ring laser gyros, interferometric fiber optic gyros, resonator fiber optic gyros

Procedia PDF Downloads 80
2421 Using Scale Invariant Feature Transform Features to Recognize Characters in Natural Scene Images

Authors: Belaynesh Chekol, Numan Çelebi

Abstract:

The main purpose of this work is to recognize individual characters extracted from natural scene images using scale-invariant feature transform (SIFT) features as input to a K-nearest neighbor (KNN) classification learner. For this task, 1,068 and 78 images of English alphabet characters taken from the Chars74k data set are used to train and test the classifier, respectively. For each character image, describing features are generated using the SIFT algorithm, and this set of features is fed to the learner so that it can recognize and label new images of English characters. Two types of KNN (fine KNN and weighted KNN) were trained, and the resulting classification accuracies are 56.9% and 56.5%, respectively. The training time was the same for both fine and weighted KNN.
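A sketch of this pipeline using OpenCV's SIFT and scikit-learn's KNN is shown below. The abstract does not state how the variable-length SIFT descriptor sets are turned into fixed-length inputs, so averaging the descriptors per character image is an assumption made here purely for illustration, as are the k values chosen to mimic "fine" and "weighted" KNN presets; the image paths and labels are placeholders for the Chars74k data.

```python
import cv2
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

sift = cv2.SIFT_create()

def sift_feature(path):
    """Average the 128-D SIFT descriptors of one character image (aggregation is an assumption)."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, desc = sift.detectAndCompute(img, None)
    if desc is None:                         # no keypoints found
        return np.zeros(128)
    return desc.mean(axis=0)

# train_paths/test_paths and y_train/y_test are assumed to come from the Chars74k data set
X_train = np.array([sift_feature(p) for p in train_paths])
X_test  = np.array([sift_feature(p) for p in test_paths])

fine_knn     = KNeighborsClassifier(n_neighbors=1)                          # "fine" KNN (assumed k=1)
weighted_knn = KNeighborsClassifier(n_neighbors=10, weights="distance")     # distance-weighted KNN
for name, knn in [("fine", fine_knn), ("weighted", weighted_knn)]:
    knn.fit(X_train, y_train)
    print(name, "accuracy:", knn.score(X_test, y_test))
```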

Keywords: character recognition, KNN, natural scene image, SIFT

Procedia PDF Downloads 281
2420 Localization of Mobile Robots with Omnidirectional Cameras

Authors: Tatsuya Kato, Masanobu Nagata, Hidetoshi Nakashima, Kazunori Matsuo

Abstract:

Localization is an important task in developing autonomous mobile robots. This paper proposes a method to estimate the position of a mobile robot using an omnidirectional camera mounted on the robot. Landmarks serving as reference points are set up on the field where the robot works. The omnidirectional camera, which captures images of the full 360-degree surroundings, photographs these landmarks. The positions of the robot are estimated from the directions of the landmarks extracted from the images by image processing. This method obtains the robot positions without accumulating position errors. The accuracy of the positions estimated by the proposed method is evaluated through experiments. The results show that the positions are obtained with small standard deviations; therefore, the method has the potential for even more accurate localization through tuning of appropriate offset parameters.
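The paper does not spell out the estimation step, but a common way to recover a position from landmark directions is a linear least-squares intersection of bearing lines, sketched below under the assumption that the bearings have already been converted to the global frame (i.e., the robot's heading offset is known or compensated); the landmark layout is a made-up example.

```python
import numpy as np

def localize_from_bearings(landmarks, bearings):
    """
    landmarks: (N, 2) known landmark positions in the field frame.
    bearings:  (N,) global-frame angles [rad] from the robot to each landmark.
    Each bearing constrains the robot to a line through the landmark; the lines'
    intersection is solved in the least-squares sense.
    """
    landmarks = np.asarray(landmarks, dtype=float)
    t = np.asarray(bearings, dtype=float)
    A = np.column_stack([np.sin(t), -np.cos(t)])
    b = np.sin(t) * landmarks[:, 0] - np.cos(t) * landmarks[:, 1]
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos    # estimated (x, y) of the robot

# example with three hypothetical landmarks and noise-free bearings from (1.0, 2.0):
marks = [(0.0, 0.0), (5.0, 0.0), (0.0, 5.0)]
true = np.array([1.0, 2.0])
angles = [np.arctan2(m[1] - true[1], m[0] - true[0]) for m in marks]
print(localize_from_bearings(marks, angles))   # approximately [1.0, 2.0]
```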

Keywords: mobile robots, localization, omnidirectional camera, estimating positions

Procedia PDF Downloads 442
2419 Item-Trait Pattern Recognition of Replenished Items in Multidimensional Computerized Adaptive Testing

Authors: Jianan Sun, Ziwen Ye

Abstract:

Multidimensional computerized adaptive testing (MCAT) is a popular research topic in psychometrics. It is important for practitioners to know clearly the item-trait patterns of administered items when a test like MCAT is operated. Item-trait pattern recognition refers to detecting which latent traits in a psychological test are measured by each of the specified items. If the item-trait patterns of the replenished items in the MCAT item pool are well detected, the interpretability of the items can be improved, which further helps the abilities of the examinees taking the MCAT to be accurately estimated. This research addresses the item-trait pattern recognition problem for the replenished items in the MCAT item pool from the perspective of statistical variable selection. The popular multidimensional item response theory model, the multidimensional two-parameter logistic model, is assumed to fit the response data of the MCAT. The proposed method uses the least absolute shrinkage and selection operator (LASSO) to detect the item-trait patterns of replenished items based on the essential information of item responses and the ability estimates of examinees collected from a designed MCAT procedure. Several advantages of the proposed method are outlined. First, the proposed method does not strictly depend on the relative order between the replenished items and the selected operational items, so it allows the replenished items to be mixed into the operational items in a reasonable order, for example, considering content constraints or other test requirements. Second, the LASSO used in this research improves the interpretability of the multidimensional replenished items in MCAT. Third, the proposed method exploits the advantages of shrinkage-based variable selection, so it helps to check item quality and the key dimensional features of replenished items and saves time and labor in response data collection compared with the traditional factor analysis method. Moreover, the proposed method ensures that the dimensions of replenished items are recognized consistently with the dimensions of operational items in the MCAT item pool. Simulation studies are conducted to investigate the performance of the proposed method under different conditions, varying the dimensionality of the item pool, latent trait correlation, item discrimination, test length, and item selection criteria in MCAT. Results show that the proposed method can accurately detect the item-trait patterns of the replenished items in two-dimensional and three-dimensional item pools. Selecting enough operational items from an item pool consisting of highly discriminating items by Bayesian A-optimality in MCAT improves the recognition accuracy of the item-trait patterns of replenished items for the proposed method. The pattern recognition accuracy for conditions with correlated traits is better than for those with independent traits, especially for item pools consisting of comparatively low discriminating items. To sum up, the proposed data-driven method based on the LASSO can accurately and efficiently detect the item-trait patterns of replenished items in MCAT.
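As a rough data-driven analogue of the procedure described above, the sketch below fits an L1-penalized logistic regression for one replenished item, using examinees' ability estimates as predictors of their 0/1 responses; traits whose slope survives the shrinkage are taken as the item's measured dimensions. The penalty strength and simulated data are illustrative assumptions, and the actual study works within the multidimensional two-parameter logistic model rather than plain logistic regression.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# hypothetical MCAT by-products: ability estimates for 2000 examinees on 3 traits,
# and their 0/1 responses to one replenished item that truly loads on traits 0 and 2
theta_hat = rng.normal(size=(2000, 3))
true_a = np.array([1.2, 0.0, 0.9])                      # discriminations (trait 1 not measured)
p = 1.0 / (1.0 + np.exp(-(theta_hat @ true_a - 0.3)))   # 2PL-style response probabilities
responses = rng.binomial(1, p)

# LASSO-type shrinkage: L1-penalized slopes on the ability estimates
lasso_logit = LogisticRegression(penalty="l1", solver="liblinear", C=0.3)
lasso_logit.fit(theta_hat, responses)

pattern = (np.abs(lasso_logit.coef_[0]) > 1e-6).astype(int)
print("estimated item-trait pattern:", pattern)          # expected: [1 0 1]
```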

Keywords: item-trait pattern recognition, least absolute shrinkage and selection operator, multidimensional computerized adaptive testing, variable selection

Procedia PDF Downloads 130
2418 The Complete Modal Derivatives

Authors: Sebastian Andersen, Peter N. Poulsen

Abstract:

Basis projection is frequently applied in structural dynamic analysis. The purpose of the method is to improve computational efficiency, while maintaining high solution accuracy, by projecting the governing equations onto a small set of carefully selected basis vectors. The present work considers basis projection in kinematically nonlinear systems, with a focus on two widely used types of basis vectors: the system mode shapes and their modal derivatives. The latter basis vectors are given special attention, since only approximate modal derivatives have been used until now. In the present work, the complete modal derivatives, derived from perturbation methods, are presented and compared to the previously applied approximate modal derivatives. The correctness of the complete modal derivatives is illustrated with an example of a harmonically loaded, kinematically nonlinear structure modeled by beam elements.

Keywords: basis projection, finite element method, kinematic nonlinearities, modal derivatives

Procedia PDF Downloads 237
2417 Extracting an Experimental Relation between SMD, Mass Flow Rate, Velocity and Pressure in Swirl Fuel Atomizers

Authors: Mohammad Hassan Ziraksaz

Abstract:

Fuel atomizers are used in a wide range of IC engines, turbojets, and a variety of liquid propellant rocket engines. As the fuel spray fully develops, its characteristics approach their final values. Fuel spray characteristics such as SMD, injection pressure, mass flow rate, droplet velocity, and spray cone angle play important roles in atomizing the liquid fuel into finely atomized droplets and finally forming the fine fuel spray. A well-performing, fully developed, fine spray without any defects motivates the idea of finding an experimental relation between the main effective spray characteristics. Extracting an experimental relation between SMD and the other physical spray characteristics in swirl fuel atomizers is the main scope of this experimental work. Droplet velocity, fuel mass flow rate, SMD, and spray cone angle are the parameters measured. A set of twelve reverse-engineered atomizers without any spray defects and a set of eight original atomizers, serving as the reference for a well-performing spray, are included in this work. More than 350 tests, mostly repeated, were performed. This work shows that although the spray cone angle plays a very effective role in spray formation, after formation it smoothly approaches an almost constant value while the other characteristics change to create fine droplets. Therefore, the effort to find a relation between the characteristics is focused on SMD, droplet velocity, fuel mass flow rate, and injection pressure. The process of fuel spray formation begins at a 5 psig injection pressure, where a tiny fuel onion attaches to the injector tip, and ends at a 250 psig injection pressure, where a fully developed fine fuel spray forms. The injection pressure is gradually increased to observe how the spray forms. In each step, all parameters are measured and recorded carefully to provide a data bank. Various diagrams have been drawn to study the behavior of the parameters in more detail. Experiments and graphs show that a power-law equation best captures the changes in the parameters. The experimental relation of SMD with pressure P, fuel mass flow rate Q̇, and droplet velocity V is first extracted for each pair individually, giving the proportional dependence of SMD on the other parameters. The next step is to find an experimental relation including all the parameters. Using the obtained proportional relation, replacing the parameters with experimentally measured ones, and plotting the experimental SMD versus the proportional SMD (SMD_P), a correction equation and consequently the final experimental equation are obtained. This experimental equation is specific to swirl fuel atomizers, and its use under different conditions shows about 3% error; lower error and consequently higher accuracy are expected by increasing the number of experiments and the accuracy of data collection.
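Since the relations reported above are power-law proportionalities, the fitting step can be illustrated with a multi-variable power-law regression as sketched below; the variable values, initial guesses, and resulting exponents are placeholders, not the paper's measured data or final equation.

```python
import numpy as np
from scipy.optimize import curve_fit

def smd_model(X, c, a, b, d):
    """Assumed power-law form: SMD = c * P**a * Qdot**b * V**d."""
    P, Qdot, V = X
    return c * P**a * Qdot**b * V**d

# hypothetical measurements: injection pressure [psig], mass flow rate, droplet velocity, SMD [micron]
P    = np.array([25, 50, 100, 150, 200, 250], dtype=float)
Qdot = np.array([1.1, 1.6, 2.3, 2.8, 3.2, 3.6])
V    = np.array([8, 12, 18, 22, 26, 29], dtype=float)
SMD  = np.array([140, 110, 85, 72, 64, 58], dtype=float)

params, _ = curve_fit(smd_model, (P, Qdot, V), SMD, p0=(100, -0.3, 0.2, -0.2), maxfev=20000)
print("fitted c, a, b, d:", params)
print("predicted SMD:", smd_model((P, Qdot, V), *params))
```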

Keywords: droplet velocity, experimental relation, mass flow rate, SMD, swirl fuel atomizer

Procedia PDF Downloads 161
2416 A New Floating Point Implementation of Base 2 Logarithm

Authors: Ahmed M. Mansour, Ali M. El-Sawy, Ahmed T. Sayed

Abstract:

Logarithms reduce products to sums and powers to products; they play an important role in signal processing, communication, and information theory. They are primarily used in hardware calculations, handling multiplications, divisions, powers, and roots effectively. There are three commonly used bases for logarithms: the base-10 (common) logarithm, the base-e (natural) logarithm, and the base-2 (binary) logarithm. This paper demonstrates different methods of calculating log2, showing the complexity of each, determines the most accurate and efficient method, and gives insights into their hardware design. We present a new method called Floor Shift for fast calculation of log2, then combine this algorithm with a Taylor series to improve the accuracy of the output, which we illustrate with two examples. We finally compare the algorithms and conclude with our remarks.
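The paper's exact Floor Shift algorithm is not reproduced here, but the general idea of splitting log2(x) into an integer part obtained from the exponent (a floor/shift operation on the floating-point representation) plus a truncated series for the fractional part can be sketched as follows; the series length is an illustrative choice.

```python
import math

def log2_floor_shift(x, terms=6):
    """Split x = m * 2**e with m in [1, 2): log2(x) = e + ln(m)/ln(2).
    The exponent e plays the role of the floor/shift step; ln(m) comes from a
    truncated atanh-type series (a Taylor-style refinement)."""
    if x <= 0.0:
        raise ValueError("log2 is only defined for positive x")
    m, e = math.frexp(x)          # x = m * 2**e, with m in [0.5, 1)
    m, e = 2.0 * m, e - 1         # renormalise so that m is in [1, 2)
    z = (m - 1.0) / (m + 1.0)
    ln_m, zk = 0.0, z
    for k in range(terms):        # ln(m) = 2 * (z + z^3/3 + z^5/5 + ...)
        ln_m += zk / (2 * k + 1)
        zk *= z * z
    return e + 2.0 * ln_m / math.log(2.0)

for val in (0.1, 1.0, 3.0, 1024.0, 1e6):
    print(val, log2_floor_shift(val), math.log2(val))
```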

Keywords: logarithms, log2, floor, iterative, CORDIC, Taylor series

Procedia PDF Downloads 532
2415 Development and Total Error Concept Validation of Common Analytical Method for Quantification of All Residual Solvents Present in Amino Acids by Gas Chromatography-Head Space

Authors: A. Ramachandra Reddy, V. Murugan, Prema Kumari

Abstract:

Residual solvents in pharmaceutical samples are monitored using gas chromatography with headspace (GC-HS). Based on current regulatory and compendial requirements, measuring residual solvents is mandatory for all release testing of active pharmaceutical ingredients (API). Generally, isopropyl alcohol is used as the residual solvent in proline and tryptophan; methanol in cysteine monohydrate hydrochloride, glycine, methionine, and serine; ethanol in glycine and lysine monohydrate; and acetic acid in methionine. In order to have a single method for determining these residual solvents (isopropyl alcohol, ethanol, methanol, and acetic acid) in all seven amino acids, a sensitive and simple method was developed using the gas chromatography headspace technique with flame ionization detection. During development, poor reproducibility, retention time variation, and bad peak shape were identified for the acetic acid peak, due to the reaction of acetic acid with the stationary phase (cyanopropyl dimethyl polysiloxane) of the column and the dissociation of acetic acid in water (if used as the diluent) while applying a temperature gradient. Therefore, dimethyl sulfoxide was used as the diluent to avoid these issues, whereas most of the methods published for acetic acid quantification by GC-HS use a derivatization technique to protect the acetic acid. As per the compendia, a risk-based approach was selected as appropriate to determine the degree and extent of the validation process to assure the fitness of the procedure; therefore, the total error concept was selected to validate the analytical procedure. An accuracy profile of ±40% was selected for the lower level (quantitation limit level) and ±30% for the other levels, with a 95% confidence interval (5% risk profile). The method was developed using a DB-WAXetr column manufactured by Agilent (internal diameter 530 µm, film thickness 2.0 µm, length 30 m). A constant helium carrier-gas flow of 6.0 mL/min in constant make-up mode was selected. The present method is simple, rapid, and accurate, and is suitable for the rapid analysis of isopropyl alcohol, ethanol, methanol, and acetic acid in amino acids. The range of the method is 50 ppm to 200 ppm for isopropyl alcohol, 50 ppm to 3000 ppm for ethanol, 50 ppm to 400 ppm for methanol, and 100 ppm to 400 ppm for acetic acid, which covers the specification limits provided in the European Pharmacopoeia. The accuracy profile and risk profile generated as part of the validation were found to be satisfactory. Therefore, this method can be used for testing residual solvents in amino acid drug substances.

Keywords: amino acid, head space, gas chromatography, total error

Procedia PDF Downloads 148
2414 Complex Rigid-Plastic Deformation Model of Two Degree of Freedom Mechanical System under Impulsive Force

Authors: Abdelouaheb Rouabhi

Abstract:

In order to study the plastic resource of structures, the elastic-plastic single-degree-of-freedom model described by the Prandtl diagram is widely used. Generalizing this model to two degrees of freedom, beyond the scope of a simple rigid-plastic system, allows the plastic resource of structures to be investigated under loading that is complex and disproportionate in its individual deformation components (e.g., earthquakes). This macro-model greatly increases the accuracy of the calculations carried out. At the same time, the proposed macro-model is easier to implement in calculations than the detailed dynamic elastic-plastic calculations of existing software systems such as ANSYS.

Keywords: elastic-plastic, single degree of freedom model, rigid-plastic system, plastic resource, complex plastic deformation, macro-model

Procedia PDF Downloads 379
2413 Enhanced Furfural Extraction from Aqueous Media Using Neoteric Hydrophobic Solvents

Authors: Ahmad S. Darwish, Tarek Lemaoui, Hanifa Taher, Inas M. AlNashef, Fawzi Banat

Abstract:

This research reports a systematic top-down approach for designing neoteric hydrophobic solvents, particularly deep eutectic solvents (DES) and ionic liquids (IL), as furfural extractants from aqueous media for the application of sustainable biomass conversion. The first stage of the framework entailed screening 32 neoteric solvents to determine their efficacy against toluene as the application's conventional benchmark for comparison. The selection criteria for the best solvents encompassed not only their efficiency in extracting furfural but also low viscosity and minimal toxicity levels. Additionally, for the DESs, their natural origins, availability, and biodegradability were also taken into account. From the screening pool, two neoteric solvents were selected: thymol:decanoic acid 1:1 (Thy:DecA) and trihexyltetradecylphosphonium bis(trifluoromethylsulfonyl)imide [P₁₄,₆,₆,₆][NTf₂]. These solvents outperformed the toluene benchmark, achieving efficiencies of 94.1% and 97.1%, respectively, compared to toluene's 81.2%, while also possessing the desired properties. These solvents were then characterized thoroughly in terms of their physical properties, thermal properties, critical properties, and cross-contamination solubilities. The selected neoteric solvents were then extensively tested under various operating conditions and exhibited exceptionally stable performance, maintaining high efficiency across a broad range of temperatures (15–100 °C), pH levels (1–13), and furfural concentrations (0.1–2.0 wt%), with a remarkable equilibrium time of only 2 minutes and, most notably, high efficiencies even at low solvent-to-feed ratios. The durability of the neoteric solvents was also validated as stable over multiple extraction-regeneration cycles, with limited leachability to the aqueous phase (≈0.1%). Moreover, the extraction performance of the solvents was modeled through machine learning, specifically multiple non-linear regression (MNLR) and artificial neural networks (ANN). The models demonstrated high accuracy, indicated by their low absolute average relative deviations, with values of 2.74% and 2.28% for Thy:DecA and [P₁₄,₆,₆,₆][NTf₂], respectively, using MNLR, and 0.10% for Thy:DecA and 0.41% for [P₁₄,₆,₆,₆][NTf₂] using ANN, highlighting the significantly enhanced predictive accuracy of the ANN. The neoteric solvents presented herein offer noteworthy advantages over traditional organic solvents, including their high efficiency in both extraction and regeneration processes and their stability and minimal leachability, making them particularly suitable for applications involving aqueous media. Moreover, these solvents are more environmentally friendly, incorporating renewable and sustainable components like thymol and decanoic acid. The exceptional efficacy of the newly developed neoteric solvents marks a significant advancement, providing a green and sustainable alternative for furfural production from biowaste.

Keywords: sustainable biomass conversion, furfural extraction, ionic liquids, deep eutectic solvents

Procedia PDF Downloads 70
2412 Developing a Leukemia Diagnostic System Based on Hybrid Deep Learning Architectures in Actual Clinical Environments

Authors: Skyler Kim

Abstract:

An early diagnosis of leukemia has always been a challenge for doctors and hematologists. On a worldwide basis, approximately 350,000 new cases were reported in 2012, and diagnosing leukemia was time-consuming and inefficient because of an endemic shortage of flow cytometry equipment in current clinical practice. As the number of medical diagnosis tools increased and a large volume of high-quality data was produced, there was an urgent need for more advanced data analysis methods. One of these methods is the AI approach. This approach has become a major trend in recent years, and several research groups have been working on developing such diagnostic models. However, designing and implementing a leukemia diagnostic system in real clinical environments based on a deep learning approach with larger sets remains complex. Leukemia is a major hematological malignancy that results in mortality and morbidity across different ages. We selected acute lymphocytic leukemia to develop our diagnostic system, since it is the most common type of leukemia, accounting for 74% of all children diagnosed with leukemia; the results from this development work can be applied to all other types of leukemia. To develop our model, the Kaggle dataset was used, which consists of 15,135 images in total, 8,491 of which are images of abnormal cells and 5,398 of which are normal. In this paper, we design and implement a leukemia diagnostic system in a real clinical environment based on deep learning approaches with larger sets. The proposed diagnostic system detects and classifies leukemia. Differently from other AI approaches, we explore hybrid architectures to improve on current performance. First, we developed two independent convolutional neural network models: VGG19 and ResNet50. Then, using both VGG19 and ResNet50, we developed a hybrid deep learning architecture employing transfer learning techniques to extract features from each input image. In our approach, the features fused from specific abstraction layers can be treated as auxiliary features and lead to further improvement of the classification accuracy. Features extracted from the lower levels are combined into higher-dimensional feature maps to help improve the discriminative capability of intermediate features and also to mitigate vanishing or exploding gradients. By comparing VGG19, ResNet50, and the proposed hybrid model, we concluded that the hybrid model has a significant advantage in accuracy. The detailed results of each model's performance and their pros and cons will be presented at the conference.
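A compact sketch of such a hybrid, transfer-learning architecture in Keras is shown below. For brevity it fuses the global-average-pooled outputs of the two frozen backbones rather than the specific intermediate abstraction layers described above, and the head sizes, input resolution, and shared preprocessing are simplifying assumptions (each backbone normally has its own preprocess_input).

```python
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import VGG19, ResNet50

def build_hybrid(input_shape=(224, 224, 3), n_classes=2):
    inp = layers.Input(shape=input_shape)
    vgg = VGG19(weights="imagenet", include_top=False, pooling="avg")
    res = ResNet50(weights="imagenet", include_top=False, pooling="avg")
    vgg.trainable = False          # transfer learning: freeze both pretrained backbones
    res.trainable = False
    fused = layers.Concatenate()([vgg(inp), res(inp)])   # feature-level fusion
    x = layers.Dense(256, activation="relu")(fused)
    x = layers.Dropout(0.5)(x)
    out = layers.Dense(n_classes, activation="softmax")(x)
    model = Model(inp, out)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# model = build_hybrid()
# model.fit(cell_images, cell_labels, validation_split=0.1, epochs=10)  # dataset arrays assumed prepared
```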

Keywords: acute lymphoblastic leukemia, hybrid model, leukemia diagnostic system, machine learning

Procedia PDF Downloads 187
2411 Self-Supervised Learning for Hate-Speech Identification

Authors: Shrabani Ghosh

Abstract:

Automatic offensive language detection in social media has become a pressing task in today's NLP. Manual offensive language detection is tedious and laborious, so automatic methods based on machine learning are the only practical alternative. Previous works have performed sentiment analysis over social media in supervised, semi-supervised, and unsupervised settings. Domain adaptation in a semi-supervised setting has also been explored in NLP, where the source domain and the target domain are different. In domain adaptation, the source domain usually has a large amount of labeled data, while only a limited amount of labeled data is available in the target domain. Pretrained transformers such as BERT and RoBERTa are further pre-trained with the masked language modeling (MLM) task in an unsupervised manner and then fine-tuned to perform text classification. In previous work, hate speech detection has been explored on Gab.ai, a free-speech platform described as hosting extremism in varying degrees in online social media. In the domain adaptation process, Twitter data is used as the source domain and Gab data is used as the target domain. The performance of domain adaptation also depends on cross-domain similarity. Different distance measures, such as L2 distance, cosine distance, Maximum Mean Discrepancy (MMD), Fisher Linear Discriminant (FLD), and CORAL, have been used to estimate domain similarity. Certainly, in-domain distances are small, and between-domain distances are expected to be large. Previous findings show that a masked language model further pre-trained on a mixture of posts from the source and target domains gives higher accuracy. However, the in-domain performance of the hate classifier on Twitter data is 71.78% accuracy, while the out-of-domain performance of the hate classifier on Gab data drops to 56.53%. Recently, self-supervised learning has received a lot of attention, as it is more applicable when labeled data are scarce. A few works have already applied self-supervised learning to NLP tasks such as sentiment classification. The self-supervised language representation model ALBERT focuses on modeling inter-sentence coherence and helps downstream tasks with multi-sentence inputs. A self-supervised attention learning approach shows better performance, as it exploits the extracted context words in the training process. In this work, a self-supervised attention mechanism is proposed to detect hate speech on Gab.ai. The framework first classifies the Gab dataset in an attention-based self-supervised manner. In the next step, a semi-supervised classifier is trained on the combination of labeled data from the first step and unlabeled data. The performance of the proposed framework will be compared with the results described earlier, as well as with optimized outcomes obtained from different optimization techniques.
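Of the domain-distance measures listed above, MMD is simple to compute directly; a minimal numpy sketch with an RBF kernel is given below, assuming the Twitter (source) and Gab (target) posts have already been turned into fixed-length embedding vectors (e.g., by a pretrained transformer). The random vectors here are stand-ins for such embeddings.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """Gaussian kernel matrix between the rows of X and Y."""
    sq = np.sum(X**2, axis=1)[:, None] + np.sum(Y**2, axis=1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-gamma * np.maximum(sq, 0.0))

def mmd2(X, Y, gamma=1.0):
    """Biased estimate of the squared Maximum Mean Discrepancy between two samples."""
    return (rbf_kernel(X, X, gamma).mean()
            + rbf_kernel(Y, Y, gamma).mean()
            - 2.0 * rbf_kernel(X, Y, gamma).mean())

# source_emb, target_emb: (n_posts, d) embeddings of Twitter and Gab posts (assumed precomputed)
rng = np.random.default_rng(0)
source_emb = rng.normal(0.0, 1.0, size=(500, 64))
target_emb = rng.normal(0.3, 1.0, size=(500, 64))     # shifted stand-in for the "far" domain
print("in-domain MMD^2:", mmd2(source_emb[:250], source_emb[250:]))
print("cross-domain MMD^2:", mmd2(source_emb, target_emb))
```

As the abstract notes, the in-domain value should come out small and the cross-domain value noticeably larger.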

Keywords: attention learning, language model, offensive language detection, self-supervised learning

Procedia PDF Downloads 105
2410 Machine Learning Approach to Project Control Threshold Reliability Evaluation

Authors: Y. Kim, H. Lee, M. Park, B. Lee

Abstract:

Planning is understood as the determination, before execution, of what has to be performed, how, in which sequence, when, what resources are needed, and at what cost within the organization. In most construction projects, it is evident that planning is inherently dynamic, and the initial plan is subject to change due to the various uncertain conditions of a construction project. Planners therefore engage in a continuous revision process during the course of a project, up to its very end. However, current practice lacks a reliable, systematic tool for setting variance thresholds to determine when and what corrective actions should be taken; rather, it depends heavily on the level of experience and knowledge of the planner. Thus, this paper introduces a machine learning approach to evaluate project control threshold reliability incorporating project-specific data and presents a method to automate the process. The results show that the model improves the efficiency and accuracy of the monitoring process as an early-warning mechanism.

Keywords: machine learning, project control, project progress monitoring, schedule

Procedia PDF Downloads 244
2409 Innovative Food Related Modification of the Day-Night Task Demonstrates Impaired Inhibitory Control among Patients with Binge-Purge Eating Disorder

Authors: Sigal Gat-Lazer, Ronny Geva, Dan Ramon, Eitan Gur, Daniel Stein

Abstract:

Introduction: Eating disorders (ED) are common psychopathologies that involve distorted body image and eating disturbances. Binge-purge eating disorders (B/P ED) are characterized by repetitive episodes of binge eating followed by purges. Patients with B/P ED may appear impulsive, especially in relation to food stimuli and affective conditions. The current study included an innovative modification of the day-night task designed to assess inhibitory control among patients with B/P ED. Methods: This prospective study included 50 patients with B/P ED during the acute phase of illness (T1), upon their admission to a specialized ED department in a tertiary center. Thirty-four patients repeated the study towards discharge to ambulatory care (T2). The treatment effect was evaluated by BMI and by depression and anxiety questionnaires, the Beck Depression Inventory and the State-Trait Anxiety Inventory. The control group included 36 healthy controls with matched demographic parameters who performed both T1 and T2 assessments. The current modification is based on the emotional day-night task (EDNT), which adds five emotional stimuli to the sun and moon pictures presented to participants. In the current study, we designed the food-emotional day-night task (F-EDNT), adding food stimuli of an egg and a banana, which resemble the sun and moon, respectively, in five emotional states (angry, sad, happy, scrambled, and neutral). During this computerized task, participants were instructed to press the 'day' button in response to moon and banana stimuli and the 'night' button when a sun or egg was presented. Accuracy (A) and reaction time (RT) were evaluated and compared between the EDNT and F-EDNT as a reflection of participants' inhibitory control. Results: Patients with B/P ED had significantly improved BMI, depression, and anxiety scores at T2 compared to T1 (all p<0.001). Task performance was similar among patients and controls on the EDNT, with no significant A or RT differences at either T1 or T2. On the F-EDNT at T1, B/P ED patients had significantly reduced accuracy on four of the five emotional stimuli compared to controls: angry (73±25% vs. 84±15%, respectively), sad (69±25% vs. 80±18%, respectively), happy (73±24% vs. 82±18%, respectively), and scrambled (74±24% vs. 84±13%, respectively; all p<0.05). Additionally, patients' RT to food stimuli was significantly faster than to neutral ones, in both the cry and neutral emotional conditions (356±146 vs. 400±141 and 378±124 vs. 412±116 msec, respectively, p<0.05). These significant differences between groups as a function of stimulus type were diminished at T2. Conclusion: Processing food-related content, particularly in an emotional context, appears to be impaired in patients with B/P ED during the acute phase of their illness and elicits greater impulsivity. Such task modifications seem to be sensitive to patients' illness phase and thus may be implemented during screening and follow-up throughout the clinical management of these patients.

Keywords: binge purge eating disorders, day night task modification, eating disorders, food related stimulations

Procedia PDF Downloads 380
2408 The Audit Quality Effects on Reputation of the Certified Public Accountants in Thailand

Authors: Prateep Wajeetongratana

Abstract:

This research studies the audit quality factors that affect the reputation of certified public accountants in Thailand. The population was defined as certified public accountants in Thailand who are members of the Federation of Accounting Professions under the Royal Patronage of His Majesty the King and who disclose their information; the total sample size is 325. The results show that audit quality influences the reputation of certified public accountants in Thailand through accuracy in auditing, objectiveness in auditing, and clearness in auditing. These factors are captured by the regression y1 = 1.381 + 0.372x1.1 + 0.309x1.2 + 0.305x1.3, where reputation is described by strictly following professional standards (Y1.1) and by new clients regularly gained through word of mouth from existing clients (Y1.2). With a coefficient of determination (R²) of 0.242, these variables predict the audit quality variable to the extent of 24.2 percent.

Keywords: audit quality, certified public accountants in Thailand, reputation

Procedia PDF Downloads 259
2407 Comparative Analysis of Spectral Estimation Methods for Brain-Computer Interfaces

Authors: Rafik Djemili, Hocine Bourouba, M. C. Amara Korba

Abstract:

In this paper, we present a method to classify EEG signals for Brain-Computer Interfaces (BCI). The EEG signals are first processed by spectral estimation methods to derive reliable features before the classification step. The spectral estimation methods used are the standard periodogram and the periodogram calculated by the Welch method; both are compared with Logarithm of Band Power (logBP) features. In the proposed method, we apply Linear Discriminant Analysis (LDA) followed by a Support Vector Machine (SVM). The classification accuracy reached is as high as 85%, which demonstrates the effectiveness of spectral methods for classifying EEG signals in BCI.
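The feature-extraction and classification chain described above can be sketched with SciPy and scikit-learn as follows; the sampling rate, frequency bands, and LDA/SVM settings are illustrative assumptions rather than the paper's exact configuration, and the random trials stand in for real motor-imagery recordings.

```python
import numpy as np
from scipy.signal import welch
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def band_power_features(trials, fs=250, bands=((8, 12), (12, 30))):
    """trials: array (n_trials, n_channels, n_samples) -> log band-power features per channel."""
    feats = []
    for trial in trials:
        f, psd = welch(trial, fs=fs, nperseg=fs)      # Welch periodogram per channel
        row = [np.log(psd[:, (f >= lo) & (f < hi)].mean(axis=1)) for lo, hi in bands]
        feats.append(np.concatenate(row))
    return np.array(feats)

# demo with random stand-in "EEG" trials: 40 trials, 8 channels, 1000 samples, 2 classes
rng = np.random.default_rng(0)
X = band_power_features(rng.normal(size=(40, 8, 1000)))
y = np.repeat([0, 1], 20)

clf = make_pipeline(LinearDiscriminantAnalysis(n_components=1), SVC(kernel="rbf"))
print("demo accuracy:", clf.fit(X, y).score(X, y))
```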

Keywords: brain-computer interface, motor imagery, electroencephalogram, linear discriminant analysis, support vector machine

Procedia PDF Downloads 499
2406 Numerical Simulation and Analysis on Liquid Nitrogen Spray Heat Exchanger

Authors: Wenjing Ding, Weiwei Shan, Zijuan Wang, Chao He

Abstract:

The liquid nitrogen spray heat exchanger is the critical piece of equipment in a gaseous-nitrogen temperature-regulating system, which achieves environment temperatures in the range of -180 °C to +180 °C. Liquid nitrogen is atomized into small liquid drops by the liquid nitrogen sprayer and then contacts the gaseous nitrogen, which is thereby cooled. By adjusting the pressures of the liquid nitrogen and gaseous nitrogen, the flow rate of liquid nitrogen is changed to achieve the required outlet temperature of the heat exchanger. The temperature accuracy of the shrouds is ±1 °C. The liquid nitrogen spray heat exchanger is modeled in CATIA, and the numerical simulation is performed in FLUENT. The simulation results are compared with tests, and the results help to improve the design of the liquid nitrogen spray heat exchanger.

Keywords: liquid nitrogen spray, temperature regulating system, heat exchanger, numerical simulation

Procedia PDF Downloads 326
2405 A New Mathematical Method for Heart Attack Forecasting

Authors: Razi Khalafi

Abstract:

Myocardial Infarction (MI), or acute Myocardial Infarction (AMI), commonly known as a heart attack, occurs when blood flow to part of the heart stops, causing damage to the heart muscle. An ECG can often show evidence of a previous heart attack or of one that is in progress. The patterns on the ECG may indicate which part of the heart has been damaged, as well as the extent of the damage. In chaos theory, the correlation dimension is a measure of the dimensionality of the space occupied by a set of random points, often referred to as a type of fractal dimension. In this research, by considering the ECG signal as a random walk, we work on forecasting an oncoming heart attack by analysing ECG signals using the correlation dimension. To test the model, a set of ECG signals from patients before and after heart attacks was used, and the ability of the model to forecast the behaviour of these signals was checked. The results show that this methodology can forecast the ECG and, accordingly, a heart attack with high accuracy.
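The correlation dimension itself is typically estimated with the Grassberger-Procaccia procedure: embed the signal with time delays, compute the correlation sum C(r), and take the slope of log C(r) versus log r. A minimal sketch is below; the embedding dimension, delay, and synthetic random-walk series are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.spatial.distance import pdist

def embed(x, dim=5, tau=4):
    """Time-delay embedding of a 1-D signal (treated here as a random walk)."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau:i * tau + n] for i in range(dim)])

def correlation_dimension(x, dim=5, tau=4, n_radii=20):
    """Grassberger-Procaccia estimate: slope of log C(r) versus log r."""
    d = pdist(embed(np.asarray(x, dtype=float), dim, tau))   # pairwise distances
    d = d[d > 0]
    radii = np.logspace(np.log10(d.min()), np.log10(d.max()), n_radii)
    C = np.array([(d < r).mean() for r in radii])            # correlation sum C(r)
    keep = C > 0
    slope, _ = np.polyfit(np.log(radii[keep]), np.log(C[keep]), 1)
    return slope

# demo on a synthetic stand-in for an ECG-derived series
rng = np.random.default_rng(0)
walk = np.cumsum(rng.normal(size=3000))
print("estimated correlation dimension:", correlation_dimension(walk))
```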

Keywords: heart attack, ECG, random walk, correlation dimension, forecasting

Procedia PDF Downloads 506
2404 Performance Analysis with the Combination of Visualization and Classification Technique for Medical Chatbot

Authors: Shajida M., Sakthiyadharshini N. P., Kamalesh S., Aswitha B.

Abstract:

Natural Language Processing (NLP) continues to play a strategic part in complaint discovery and medicine discovery during the current epidemic. This abstract provides an overview of a performance analysis combining visualization and classification techniques of NLP for a medical chatbot. Sentiment analysis is an important aspect of NLP that is used to determine the emotional tone behind a piece of text, and the technique has been applied to various domains, including medical chatbots. In this work, we compared the combination of a decision tree with a heatmap against Naïve Bayes with a word cloud. The performance of the chatbot was evaluated using accuracy, and the results indicate that combining visualization and classification techniques significantly improves the chatbot's performance.

Keywords: sentimental analysis, NLP, medical chatbot, decision tree, heatmap, naïve bayes, word cloud

Procedia PDF Downloads 74
2403 Annular Hyperbolic Profile Fins with Variable Thermal Conductivity Using Laplace Adomian Transform and Double Decomposition Methods

Authors: Yinwei Lin, Cha'o-Kuang Chen

Abstract:

In this article, the Laplace Adomian transform method (LADM) and the double decomposition method (DDM) are used to solve annular hyperbolic profile fins with variable thermal conductivity. When the thermal conductivity parameter ε is relatively large, the numerical solution obtained with DDM becomes inaccurate. Moreover, when more than seven DDM terms are used, the DDM numerical solution becomes very complicated, whereas the present method is still easily calculated beyond seven terms and yields more precise numerical solutions. When ε is relatively large, LADM also has better accuracy than DDM.

Keywords: fins, thermal conductivity, Laplace transform, Adomian, nonlinear

Procedia PDF Downloads 334
2402 Diagnosis of Diabetes Using Computer Methods: Soft Computing Methods for Diabetes Detection Using Iris

Authors: Piyush Samant, Ravinder Agarwal

Abstract:

Complementary and Alternative Medicine (CAM) techniques are quite popular and effective for chronic diseases. Iridology is a more than 150-year-old CAM technique that analyzes the patterns, tissue weakness, color, shape, structure, etc. of the iris for disease diagnosis. The objective of this paper is to validate the use of iridology for the diagnosis of diabetes, a systemic disease with ocular effects. Data from 200 subjects, 100 diabetic and 100 non-diabetic, were evaluated. The complete procedure was kept very simple and free from the involvement of any iridologist. From the normalized iris, the region of interest was cropped, and 63 features were extracted using statistical measures, texture analysis, and the two-dimensional discrete wavelet transform. A comparison of the accuracies of six different classifiers is presented; the random forest classifier achieves 89.66% accuracy.

Keywords: complementary and alternative medicine, classification, iridology, iris, feature extraction, disease prediction

Procedia PDF Downloads 407
2401 Road Vehicle Recognition Using Magnetic Sensing Feature Extraction and Classification

Authors: Xiao Chen, Xiaoying Kong, Min Xu

Abstract:

This paper presents a road vehicle detection approach for intelligent transportation systems. The approach mainly uses a low-cost magnetic sensor and an associated data-collection system to collect magnetic signals. The system measures changes in the magnetic field and can detect and count vehicles. We extend Mel Frequency Cepstral Coefficients to analyze vehicle magnetic signals, and vehicle-type features are extracted using cepstrum, frame energy, and gap-cepstrum representations of the magnetic signals. We design a two-dimensional map algorithm using Vector Quantization to classify vehicle magnetic features into four typical vehicle types found in Australian suburbs: sedan, van, truck, and bus. Experimental results show that our approach achieves a high level of accuracy for vehicle detection and classification.

Keywords: vehicle classification, signal processing, road traffic model, magnetic sensing

Procedia PDF Downloads 320