Search results for: quantification accuracy
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4051

3091 Creep Analysis and Rupture Evaluation of High Temperature Materials

Authors: Yuexi Xiong, Jingwu He

Abstract:

The structural components in an energy facility, such as steam turbine machines, operate under high stress and elevated temperature over extended periods, so creep deformation and creep rupture failure are important issues that need to be addressed in the design of such components. Numerous creep models are used for creep analysis, each with advantages and disadvantages in terms of accuracy and efficiency. Isochronous Creep Analysis is one of the simplified approaches, in which a full time-dependent creep analysis is avoided and instead an elastic-plastic analysis is conducted at each time point. This approach is based on the rupture-dependent creep equations using the well-known Larson-Miller parameter. In this paper, some fundamental aspects of creep deformation and the rupture-dependent creep models are reviewed, and the analysis procedures using isochronous creep curves are discussed. Four rupture failure criteria are examined from creep fundamentals: the Stress Damage, Strain Damage, Strain Rate Damage, and Strain Capability criteria. The accuracy of these criteria in predicting creep life is discussed, and applications of the creep analysis procedures and failure predictions for simple models are presented. In addition, a new failure criterion is proposed to improve the accuracy and effectiveness of the existing criteria; comparisons are made between the existing criteria and the new one using several example materials. Strain increase and stress relaxation together form a full picture of the creep behaviour of a material at high temperature over an extended period, and it is important to bear this in mind when dealing with creep problems. Accordingly, there are two sets of rupture-dependent creep equations.
While the rupture strength vs. LMP equation shows how the rupture time depends on the stress level under load-controlled conditions, the strain rate vs. rupture time equation reflects how the rupture time behaves under strain-controlled conditions. Among the four existing failure criteria for rupture life prediction, the Stress Damage and Strain Damage criteria provide the most conservative and non-conservative predictions, respectively. The Strain Rate and Strain Capability criteria provide predictions in between, which are believed to be more accurate because strain rate and strain capability are more directly determined quantities than stress for reflecting creep rupture behaviour. A modified Strain Capability Criterion is proposed that makes use of both sets of creep equations and is therefore considered more accurate than the original Strain Capability Criterion.
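The Larson-Miller relation underlying the rupture-dependent creep equations can be sketched in a few lines. The constant C (here 20, a commonly used default) and the units (kelvin, hours) are illustrative assumptions, not values from the paper:

```python
import math

def larson_miller(temp_k, t_rupture_h, C=20.0):
    """Larson-Miller parameter: LMP = T * (C + log10(t_r))."""
    return temp_k * (C + math.log10(t_rupture_h))

def rupture_time(temp_k, lmp, C=20.0):
    """Invert the LMP relation to estimate rupture time (hours)
    at a given absolute temperature."""
    return 10.0 ** (lmp / temp_k - C)
```

Given a stress-vs.-LMP master curve for a material, the same LMP value maps to different rupture times at different temperatures, which is what isochronous creep curves exploit.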

Keywords: creep analysis, high temperature materials, rupture evaluation, steam turbine machines

Procedia PDF Downloads 272
3090 Damage Identification Using Experimental Modal Analysis

Authors: Niladri Sekhar Barma, Satish Dhandole

Abstract:

Damage identification, in the context of safety, has nowadays become a fundamental research area for mechanical, civil, and aerospace engineering structures. This research aims to identify damage in a mechanical beam structure, quantify the severity or extent of damage in terms of loss of stiffness, and obtain an updated analytical Finite Element (FE) model. An FE model is used for analysis, and the location of damage for single and multiple damage cases is identified numerically using the modal strain energy method and the mode shape curvature method. Experimental data were acquired with an accelerometer. A Fast Fourier Transform (FFT) algorithm is applied to the measured signal, and post-processing is done in MEscopeVES software. The two sets of data, from the numerical FE model and the experiments, are compared to locate the damage accurately. The extent of the damage is identified via modal frequencies using a mixed numerical-experimental technique, and mode shape comparison is performed with the Modal Assurance Criterion (MAC). The analytical FE model is adjusted by the direct method of model updating. The same study has been extended to real-life structures such as plate and GARTEUR structures.
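The mode shape comparison step can be illustrated with the standard MAC formula for real mode shapes; the vectors in the usage example below are hypothetical, standing in for analytical and measured shapes:

```python
def mac(phi_a, phi_b):
    """Modal Assurance Criterion between two real mode shape vectors:
    MAC = (phi_a . phi_b)^2 / ((phi_a . phi_a) * (phi_b . phi_b))."""
    dot = sum(a * b for a, b in zip(phi_a, phi_b))
    return dot * dot / (sum(a * a for a in phi_a) * sum(b * b for b in phi_b))
```

Values near 1 indicate the two shapes describe the same mode; values well below 1 suggest a damage-induced change or a mode mismatch.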

Keywords: damage identification, damage quantification, damage detection using modal analysis, structural damage identification

Procedia PDF Downloads 96
3089 A Mixing Matrix Estimation Algorithm for Speech Signals under the Under-Determined Blind Source Separation Model

Authors: Jing Wu, Wei Lv, Yibing Li, Yuanfan You

Abstract:

The separation of speech signals has become a research hotspot in signal processing in recent years, with many applications in teleconferencing, hearing aids, machine speech recognition, and so on. The sounds received are usually noisy, and identifying the sounds of interest and obtaining clear signals in such an environment is a problem worth exploring, namely the problem of blind source separation. This paper focuses on under-determined blind source separation (UBSS), for which sparse component analysis is generally used. The method is divided into two parts: first, a clustering algorithm is used to estimate the mixing matrix from the observed signals; then the signals are separated based on the estimated mixing matrix. This paper studies the mixing matrix estimation problem and proposes an improved algorithm to estimate the mixing matrix for speech signals under the UBSS model. The traditional potential-function algorithm is not accurate for mixing matrix estimation, especially at low signal-to-noise ratio (SNR). In response, this paper develops an improved potential-function method to estimate the mixing matrix. The algorithm not only avoids the influence of insufficient prior information in traditional clustering algorithms but also improves the estimation accuracy of the mixing matrix. The mixing of four speech signals into two channels is taken as an example. Simulation results show that the proposed approach not only improves the accuracy of estimation but also applies to any mixing matrix.
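A minimal sketch of the potential-function idea for a two-channel mixture: each observation votes for its direction angle with a smoothing kernel, and the peaks of the resulting potential estimate the directions of the mixing matrix columns. The bin count, kernel width, and peak-separation rule are illustrative choices, not the paper's tuned values:

```python
import math

def estimate_mixing_angles(samples, n_sources, bins=180, sigma=2.0):
    """Vote for mixture directions with a Gaussian kernel over angle bins
    (the 'potential function'), then pick the strongest separated peaks."""
    pot = [0.0] * bins
    for x1, x2 in samples:
        r = math.hypot(x1, x2)
        if r < 1e-9:
            continue  # near-silent frames carry no direction information
        theta = math.atan2(x2, x1) % math.pi  # directions are sign-invariant
        b = theta / math.pi * bins
        for k in range(bins):
            d = min(abs(k + 0.5 - b), bins - abs(k + 0.5 - b))  # circular distance
            pot[k] += r * math.exp(-d * d / (2 * sigma * sigma))
    peaks = sorted(range(bins), key=lambda k: -pot[k])
    chosen = []
    for k in peaks:
        if all(min(abs(k - c), bins - abs(k - c)) > 5 for c in chosen):
            chosen.append(k)
        if len(chosen) == n_sources:
            break
    return sorted((k + 0.5) / bins * math.pi for k in chosen)
```

In the sparse (single-source-dominant) time-frequency points assumed by sparse component analysis, these peak angles recover the mixing matrix columns up to scale and sign.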

Keywords: DBSCAN, potential function, speech signal, the UBSS model

Procedia PDF Downloads 119
3088 Carbohydrates Quantification from Agro-Industrial Waste and Fermentation with Lactic Acid Bacteria

Authors: Prittesh Patel, Bhavika Patel, Ramar Krishnamurthy

Abstract:

The present study was conducted to isolate lactic acid bacteria (LAB) from the gut of Oreochromis niloticus and Nemipterus japonicus fish. The isolated LAB were confirmed through 16S rRNA sequencing. The isolated Lactococcus spp. were able to tolerate NaCl and bile acid up to a certain range and were also able to survive in acidic and alkaline conditions. Further, agro-industrial waste such as peels of pineapple, orange, lemon, sugarcane, pomegranate, and sweet lemon was analyzed for polysaccharide content and prebiotic properties. In the present study, orange, sweet lemon, and pineapple peels gave the maximum indigestible polysaccharides. To evaluate the synbiotic effect, combinations of probiotic and prebiotic were analyzed under in vitro conditions. The isolates Lactococcus garvieae R3 and Lactococcus sp. R4 showed better fermentation efficiency with orange, sweet lemon, and pineapple than with lemon, sugarcane, and pomegranate. The agro-industrial wastes evaluated in this research thus proved to be cheap, fermentable carbon sources for LAB.

Keywords: agro-industrial waste, lactic acid bacteria, prebiotic, probiotic, synbiotic

Procedia PDF Downloads 149
3087 Influence of Alcohol Consumption on Attention in Wistar Albino Rats

Authors: Adekunle Adesina, Dorcas Adesina

Abstract:

This research investigated the influence of alcohol consumption on attention in Wistar albino rats and was designed to test whether alcohol consumption affects visual and auditory attention. The sample comprised 3 male and 3 female albino rats, randomly assigned (one male and one female each) to groups 1, 2, and 3. Experimental group 1 received 4 ml of alcohol by cannula twice daily (morning and evening); experimental group 2 received 2 ml of alcohol by cannula twice daily; the third, control group received only water (placebo). This regimen lasted 2 days. Three hypotheses were advanced and tested. Hypothesis 1 stated that there would be no significant difference between the response speed of albino rats that consumed alcohol and those that consumed water on visual attention using the five-choice serial reaction time task (5-CSRTT); this was confirmed (F(2, 9) = 0.72, p > .05). Hypothesis 2 stated that albino rats that consumed alcohol would perform better than those that consumed water on auditory accuracy using the 5-CSRTT; this was tested but not confirmed (F(2, 9) = 2.10, p > .05). The third hypothesis, that female albino rats that consumed alcohol would not perform better than male albino rats that consumed alcohol on auditory accuracy using the 5-CSRTT, was tested and not confirmed (t(4) = 0.17, p > .05). Data were analyzed using one-way ANOVA and the t-test for independent measures. It is therefore recommended that government policies and programs be directed at reducing alcohol consumption to the barest minimum, especially among males, as it is detrimental to auditory attention.
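For reference, the one-way ANOVA F statistic used in this design can be computed directly from the group scores; the numbers in the test below are made-up values, not the study's measurements:

```python
def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA over a list of groups of scores.
    Returns (F, df_between, df_within)."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    # between-group sum of squares: group sizes times squared mean offsets
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # within-group sum of squares: spread of each score around its group mean
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    df_b, df_w = k - 1, n - k
    return (ss_between / df_b) / (ss_within / df_w), df_b, df_w
```

With three groups of four rats each, this yields the F(2, 9) shape reported above.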

Keywords: alcohol, attention, influence, rats, Wistar

Procedia PDF Downloads 244
3086 Estimating View-Through Ad Attribution from User Surveys Using Convex Optimization

Authors: Yuhan Lin, Rohan Kekatpure, Cassidy Yeung

Abstract:

In digital marketing, robust quantification of view-through attribution (VTA) is necessary for evaluating channel effectiveness. VTA occurs when a product purchase is aided by an ad but without an explicit click (e.g., a TV ad), and the lack of a tracking mechanism makes VTA estimation challenging. The most prevalent VTA estimation techniques rely on post-purchase, in-product user surveys. User surveys enable the calculation of channel multipliers, the ratio of view-attributed to click-attributed purchases for each marketing channel; channel multipliers thus provide a way to estimate the unknown VTA for a channel from its known click attribution. In this work, we use convex optimization to compute channel multipliers in a way that enables a mathematical encoding of the expected channel behavior. Large fluctuations in channel attributions often result from overfitting the calculations to user surveys, and casting channel attribution as a convex optimization problem allows the introduction of constraints that limit such fluctuations. The result of our study is a distribution of channel multipliers across the entire marketing funnel, with important implications for marketing spend optimization. Our technique can be broadly applied to estimate ad effectiveness in a privacy-centric world that increasingly limits user tracking.
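A toy version of the constrained fit can be sketched as box-constrained least squares solved by projected gradient descent. The smoothness penalty `lam` stands in for the paper's behavioral constraints across the funnel, and all numbers, bounds, and step sizes here are illustrative assumptions:

```python
def fit_multipliers(clicks, views, lam=0.0, lo=0.0, hi=10.0, steps=5000, lr=1e-3):
    """Minimize sum_c (views[c] - m[c]*clicks[c])^2
               + lam * sum_c (m[c+1] - m[c])^2
    subject to lo <= m[c] <= hi, by projected gradient descent."""
    n = len(clicks)
    m = [1.0] * n
    for _ in range(steps):
        # gradient of the data-fit term
        g = [2 * clicks[c] * (m[c] * clicks[c] - views[c]) for c in range(n)]
        # gradient of the smoothness penalty coupling adjacent channels
        for c in range(n - 1):
            d = 2 * lam * (m[c + 1] - m[c])
            g[c] -= d
            g[c + 1] += d
        # gradient step, then projection onto the box [lo, hi]
        m = [min(hi, max(lo, m[c] - lr * g[c])) for c in range(n)]
    return m
```

With `lam > 0`, adjacent channels are pulled toward similar multipliers, which damps the survey-noise fluctuations the abstract describes.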

Keywords: digital marketing, survey analysis, operational research, convex optimization, channel attribution

Procedia PDF Downloads 164
3085 An Enhanced Approach in Validating Analytical Methods Using Tolerance-Based Design of Experiments (DoE)

Authors: Gule Teri

Abstract:

The effective validation of analytical methods forms a crucial component of pharmaceutical manufacturing. However, traditional validation techniques can occasionally fail to fully account for inherent variations within datasets, which may result in inconsistent outcomes. This deficiency in validation accuracy is particularly noticeable when quantifying low concentrations of active pharmaceutical ingredients (APIs), excipients, or impurities, introducing a risk to the reliability of the results and, subsequently, the safety and effectiveness of the pharmaceutical products. In response to this challenge, we introduce an enhanced, tolerance-based Design of Experiments (DoE) approach for the validation of analytical methods. This approach distinctly measures variability with reference to tolerance or design margins, enhancing the precision and trustworthiness of the results. This method provides a systematic, statistically grounded validation technique that improves the truthfulness of results. It offers an essential tool for industry professionals aiming to guarantee the accuracy of their measurements, particularly for low-concentration components. By incorporating this innovative method, pharmaceutical manufacturers can substantially advance their validation processes, subsequently improving the overall quality and safety of their products. This paper delves deeper into the development, application, and advantages of this tolerance-based DoE approach and demonstrates its effectiveness using High-Performance Liquid Chromatography (HPLC) data for verification. This paper also discusses the potential implications and future applications of this method in enhancing pharmaceutical manufacturing practices and outcomes.
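As one concrete ingredient of a tolerance-based analysis, a two-sided normal tolerance factor can be approximated with Howe's method. The Wilson-Hilferty chi-square approximation below keeps the sketch dependency-free; exact tables or statistical software should be preferred in a real validation:

```python
from statistics import NormalDist

def chi2_quantile(p, df):
    """Wilson-Hilferty approximation to the chi-square quantile."""
    z = NormalDist().inv_cdf(p)
    return df * (1 - 2 / (9 * df) + z * (2 / (9 * df)) ** 0.5) ** 3

def tolerance_k(n, coverage=0.95, confidence=0.95):
    """Approximate two-sided normal tolerance factor (Howe's method):
    the interval mean +/- k*s covers `coverage` of the population
    with the stated confidence, for a sample of size n."""
    z = NormalDist().inv_cdf((1 + coverage) / 2)
    chi2 = chi2_quantile(1 - confidence, n - 1)
    return z * ((n - 1) * (1 + 1 / n) / chi2) ** 0.5
```

The factor shrinks toward the plain normal quantile as n grows, which is why tolerance-based acceptance limits widen for small validation runs.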

Keywords: tolerance-based design, design of experiments, analytical method validation, quality control, biopharmaceutical manufacturing

Procedia PDF Downloads 54
3084 Data Augmentation for Early-Stage Lung Nodules Using Deep Image Prior and Pix2pix

Authors: Qasim Munye, Juned Islam, Haseeb Qureshi, Syed Jung

Abstract:

Lung nodules are commonly identified in computed tomography (CT) scans by experienced radiologists at a relatively late stage, and early diagnosis can greatly increase survival. We propose using a pix2pix conditional generative adversarial network to generate realistic images simulating early-stage lung nodule growth. We applied deep image prior to 2341 slices from 895 CT scans from the Lung Image Database Consortium (LIDC) dataset to generate pseudo-healthy medical images, and 819 of these were chosen to train a pix2pix network. For most of the images, the pix2pix network was able to generate images in which the nodule increased in size and intensity across epochs. To evaluate the output, 400 generated images were chosen at random and shown to a medical student alongside their corresponding original images. Of these 400 generated images, 384 were judged satisfactory, meaning they resembled a nodule and were visually similar to the corresponding image. We believe this generated dataset could be used as training data for neural networks that detect lung nodules at an early stage, or to improve the accuracy of such networks; this is particularly significant because datasets capturing the growth of early-stage nodules are scarce. The project shows that combining deep image prior with generative models could open the door to creating larger datasets than currently possible and has the potential to increase the accuracy of medical classification tasks.

Keywords: medical technology, artificial intelligence, radiology, lung cancer

Procedia PDF Downloads 53
3083 A Sensitive Uric Acid Electrochemical Sensing in Biofluids Based on Ni/Zn Hydroxide Nanocatalyst

Authors: Nathalia Florencia Barros Azeredo, Josué Martins Gonçalves, Pamela De Oliveira Rossini, Koiti Araki, Lucio Angnes

Abstract:

This work demonstrates the electroanalysis of uric acid (UA) at a very low working potential (0 V vs. Ag/AgCl) directly in body fluids such as saliva and sweat, using electrodes modified with mixed α-Ni0.75Zn0.25(OH)2 nanoparticles that exhibit stable electrocatalytic responses from alkaline down to weakly acidic media (pH 14 to 3). These materials were prepared for the first time and fully characterized by TEM, XRD, and spectroscopic techniques. The electrochemical properties of the modified electrodes were evaluated in a fast and simple procedure for uric acid analysis based on cyclic voltammetry and chronoamperometry, pushing down the detection and quantification limits (2.3 × 10⁻⁸ and 7.6 × 10⁻⁸ mol L⁻¹, respectively) with good repeatability (RSD = 3.2% for 30 successive analyses at pH 14). Finally, the possibility of real application was demonstrated with unexpectedly robust and sensitive modified FTO (fluorine-doped tin oxide) glass and screen-printed sensors for measuring uric acid directly in real saliva and sweat samples, with no significant interference from the usual concentrations of ascorbic acid, acetaminophen, lactate, and glucose present in those body fluids (Fig. 1).
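Detection and quantification limits of the kind quoted above are conventionally derived from the blank noise and the calibration slope (the IUPAC 3.3σ/S and 10σ/S rules); a minimal helper, with made-up inputs in the test, looks like:

```python
def detection_limits(blank_sd, slope):
    """IUPAC-style limits from the standard deviation of the blank
    response and the calibration slope:
    LOD = 3.3 * sd / slope, LOQ = 10 * sd / slope."""
    return 3.3 * blank_sd / slope, 10.0 * blank_sd / slope
```

The ratio LOQ/LOD under this convention is about 3, consistent with the 2.3 × 10⁻⁸ and 7.6 × 10⁻⁸ mol L⁻¹ figures reported in the abstract.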

Keywords: nickel hydroxide, mixed catalyst, uric acid sensors, biofluids

Procedia PDF Downloads 113
3082 Effect of Knowledge of Bubble Point Pressure on Estimating PVT Properties from Correlations

Authors: Ahmed El-Banbi, Ahmed El-Maraghi

Abstract:

PVT properties are needed as input data in all reservoir, production, and surface facilities engineering calculations. In the absence of PVT reports on valid reservoir fluid samples, engineers rely on PVT correlations to generate the required PVT data. The accuracy of PVT correlations varies, and no correlation group has been found to provide accurate results for all oil types. The effect of inaccurate PVT data can be significant in engineering calculations and is well documented in the literature. Bubble point pressure can sometimes be obtained from external sources. In this paper, we show how to utilize the known bubble point pressure to improve the accuracy of calculated PVT properties from correlations. We conducted a systematic study using around 250 reservoir oil samples to quantify the effect of pre-knowledge of bubble point pressure. The samples spanned a wide range of oils, from very volatile oils to black oils and all the way to low-GOR oils. A method for shifting both undersaturated and saturated sections of the PVT properties curves to the correct bubble point is explained. Seven PVT correlation families were used in this study. All PVT properties (e.g., solution gas-oil ratio, formation volume factor, density, viscosity, and compressibility) were calculated using the correct bubble point pressure and the correlation estimated bubble point pressure. Comparisons between the calculated PVT properties and actual laboratory-measured values were made. It was found that pre-knowledge of bubble point pressure and using the shifting technique presented in the paper improved the correlation-estimated values by 10% to more than 30%. The most improvement was seen in the solution gas-oil ratio and formation volume factor.
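The shifting of the saturated branch toward a known bubble point can be sketched as a rescaling of the pressure axis so the correlation curve terminates at the correct pressure, followed by resampling. The linear interpolation and the monotonic pressure grid are simplifying assumptions for illustration, not the paper's exact procedure:

```python
def shift_saturated_curve(p_grid, prop, pb_est, pb_true):
    """Stretch the saturated-branch pressure axis so the estimated bubble
    point lands on the known one, then resample on the original grid."""
    scale = pb_true / pb_est
    shifted = [p * scale for p in p_grid]
    out = []
    for p in p_grid:
        if p <= shifted[0]:
            out.append(prop[0])
        elif p >= shifted[-1]:
            out.append(prop[-1])
        else:
            # linear interpolation back onto the original pressure grid
            j = next(i for i in range(1, len(shifted)) if shifted[i] >= p)
            t = (p - shifted[j - 1]) / (shifted[j] - shifted[j - 1])
            out.append(prop[j - 1] + t * (prop[j] - prop[j - 1]))
    return out
```

When the correlation already predicts the correct bubble point, the transform is the identity; otherwise the whole saturated section slides to end at the known pb.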

Keywords: PVT data, PVT properties, PVT correlations, bubble point pressure

Procedia PDF Downloads 46
3081 Automatic Staging and Subtype Determination for Non-Small Cell Lung Carcinoma Using PET Image Texture Analysis

Authors: Seyhan Karaçavuş, Bülent Yılmaz, Ömer Kayaaltı, Semra İçer, Arzu Taşdemir, Oğuzhan Ayyıldız, Kübra Eset, Eser Kaya

Abstract:

In this study, our goal was to perform tumor staging and subtype determination automatically, using different texture analysis approaches, for a very common cancer type, non-small cell lung carcinoma (NSCLC). In particular, we introduced a texture analysis approach, Laws' texture filters, used in this context for the first time. The 18F-FDG PET images of 42 patients with NSCLC were evaluated; the number of patients in each tumor stage group (I-II, III, and IV) was 14. The patients had ~45% adenocarcinoma (ADC) and ~55% squamous cell carcinoma (SqCC). The MATLAB technical computing language was employed to extract 51 features using first-order statistics (FOS), the gray-level co-occurrence matrix (GLCM), the gray-level run-length matrix (GLRLM), and Laws' texture filters. The feature selection method employed was sequential forward selection (SFS). The selected textural features were used for automatic classification by k-nearest neighbors (k-NN) and support vector machines (SVM). In the automatic classification of tumor stage, the accuracy was approximately 59.5% with the k-NN classifier (k=3) and 69% with SVM (one-versus-one paradigm), using 5 features. In the automatic classification of tumor subtype, the accuracy was around 92.7% with one-versus-one SVM. Texture analysis of FDG-PET images might be used, in addition to metabolic parameters, as an objective tool to assess tumor histopathological characteristics and for automatic classification of tumor stage and subtype.
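The k-NN step of the classification can be sketched in a few lines of plain Python; the feature vectors and labels here are stand-ins for the selected texture features and stage groups:

```python
from collections import Counter

def knn_predict(train_x, train_y, query, k=3):
    """Classify `query` by majority vote among its k nearest
    training vectors (squared Euclidean distance)."""
    ranked = sorted(
        (sum((a - b) ** 2 for a, b in zip(row, query)), label)
        for row, label in zip(train_x, train_y)
    )
    votes = Counter(label for _, label in ranked[:k])
    return votes.most_common(1)[0][0]
```

In practice the texture features would be normalized first, since Euclidean distance is scale-sensitive.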

Keywords: cancer stage, cancer cell type, non-small cell lung carcinoma, PET, texture analysis

Procedia PDF Downloads 308
3080 Evaluation of Machine Learning Algorithms and Ensemble Methods for Prediction of Students’ Graduation

Authors: Soha A. Bahanshal, Vaibhav Verdhan, Bayong Kim

Abstract:

Six-year graduation rates at colleges are becoming an increasingly essential indicator for incoming freshmen and for university rankings. Predicting student graduation is extremely beneficial to schools and has huge potential for targeted intervention. It is important for educational institutions because it enables the development of strategic plans to assist students in achieving their degrees on time (graduating on time, GOT). Machine learning techniques offer a first step, and a helping hand, in extracting useful information from these data and gaining insights into the prediction of students' progress and performance. Data analysis and visualization techniques were applied to understand and interpret the data, which cover science-major students who graduated within 6 years as of the academic year 2017-2018; this analysis can be used to predict the graduation of students in the next academic year. Several predictive models, including logistic regression, decision trees, support vector machines, random forest, naïve Bayes, and k-nearest neighbors (KNeighborsClassifier), were applied to predict whether a student will graduate. These classifiers were evaluated with 5-fold cross-validation, and their performance was compared on accuracy. The results indicated that an ensemble classifier achieves the best accuracy, about 91.12%. This GOT prediction model should be useful to university administration and academics in developing measures for assisting and boosting students' academic performance and ensuring they graduate on time.
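The 5-fold evaluation used to compare the classifiers can be sketched as a plain index-splitting helper. The round-robin fold assignment is an illustrative simplification (a library utility such as scikit-learn's KFold would normally shuffle and stratify):

```python
def kfold_splits(n, k=5):
    """Yield (train_indices, test_indices) pairs for k-fold
    cross-validation using a simple round-robin fold assignment."""
    folds = [i % k for i in range(n)]
    for f in range(k):
        test = [i for i in range(n) if folds[i] == f]
        train = [i for i in range(n) if folds[i] != f]
        yield train, test
```

Each model is trained k times and its accuracies averaged, so every student record is scored exactly once as held-out data.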

Keywords: prediction, decision trees, machine learning, support vector machine, ensemble model, student graduation, graduate on time (GOT)

Procedia PDF Downloads 60
3079 Path-Tracking Controller for Tracked Mobile Robot on Rough Terrain

Authors: Toshifumi Hiramatsu, Satoshi Morita, Manuel Pencelli, Marta Niccolini, Matteo Ragaglia, Alfredo Argiolas

Abstract:

Automation technologies for agricultural fields are needed to promote labor saving. One of the most relevant problems in automated agriculture is controlling the robot along a predetermined path in the presence of rough terrain or inclined ground. Unfortunately, disturbances originating from interaction with the ground, such as slipping, make it quite difficult to achieve the required accuracy: in general, the robot must stay within 5-10 cm of the predetermined path. Moreover, the lateral velocity caused by gravity on an inclined field also contributes to slipping. In this paper, a path-tracking controller for tracked mobile robots moving on the rough, inclined terrain of fields such as vineyards is presented. The controller is composed of a disturbance observer and an adaptive controller based on the kinematic model of the robot. The disturbance observer measures the difference between the measured and reference yaw rate and linear velocity in order to estimate slip. The adaptive controller then adapts the 'virtual' parameters of the kinematic model, the Instantaneous Centers of Rotation (ICRs), and the target angular velocity reference is computed according to the adapted parameters. This solution allows the effects of slip to be estimated without making the model too complex. Finally, the effectiveness of the proposed solution is tested in a simulation environment.
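The ICR-based kinematic model that the controller adapts can be sketched as follows; the sign conventions and the symmetric no-slip ICR positions used in the test are illustrative assumptions:

```python
def body_velocities(v_left, v_right, y_icr_left, y_icr_right):
    """Forward speed and yaw rate of a tracked robot from the track
    speeds and the lateral ICR positions of the two tracks. Under slip
    the ICRs move outward; an adaptive controller updates them online."""
    spread = y_icr_left - y_icr_right
    yaw_rate = (v_right - v_left) / spread
    v_forward = (v_right * y_icr_left - v_left * y_icr_right) / spread
    return v_forward, yaw_rate
```

With no slip and track spacing b, setting y_icr_left = b/2 and y_icr_right = -b/2 recovers the ideal differential-drive model; widening the ICR spread models the reduced yaw response caused by slipping.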

Keywords: agricultural robot, autonomous control, path-tracking control, tracked mobile robot

Procedia PDF Downloads 160
3078 Performance and Limitations of Likelihood Based Information Criteria and Leave-One-Out Cross-Validation Approximation Methods

Authors: M. A. C. S. Sampath Fernando, James M. Curran, Renate Meyer

Abstract:

Model assessment, in the Bayesian context, involves evaluating goodness-of-fit and comparing several alternative candidate models for predictive accuracy and improvement. In posterior predictive checks, data simulated under the fitted model are compared with the actual data. Predictive model accuracy is estimated using information criteria such as the Akaike information criterion (AIC), the Bayesian information criterion (BIC), the Deviance information criterion (DIC), and the Watanabe-Akaike information criterion (WAIC). The goal of an information criterion is to obtain an unbiased measure of out-of-sample prediction error. Since posterior checks use the data twice (once for model estimation and once for testing), a bias correction that penalises model complexity is incorporated in these criteria. Cross-validation (CV) is another method for examining out-of-sample prediction accuracy. Leave-one-out cross-validation (LOO-CV) is the most computationally expensive CV variant, as it fits as many models as there are observations. Importance sampling (IS), truncated importance sampling (TIS), and Pareto-smoothed importance sampling (PSIS) are generally used as approximations to the exact LOO-CV; they reuse the existing MCMC results and so avoid the expensive refitting. The reciprocals of the predictive densities calculated over posterior draws for each observation are treated as the raw importance weights, which are in turn used to calculate the approximate LOO-CV for each observation as a weighted average of posterior densities. In IS-LOO, the raw weights are used directly; in TIS-LOO and PSIS-LOO, the larger weights are replaced by their truncated or smoothed counterparts. Although information criteria and LOO-CV cannot reflect goodness-of-fit in an absolute sense, their differences can be used to measure the relative performance of the models of interest.
However, the use of these measures is only valid under specific circumstances. This study developed 11 models using normal, log-normal, gamma, and Student's t distributions to improve PCR stutter prediction with forensic data: four with profile-wide variances, four with locus-specific variances, and three two-component mixture models. The mean stutter ratio in each model is modeled as a locus-specific simple linear regression against a feature of the alleles under study known as the longest uninterrupted sequence (LUS). The use of AIC, BIC, DIC, and WAIC in model comparison has practical limitations, and even though IS-LOO, TIS-LOO, and PSIS-LOO are considered approximations of the exact LOO-CV, the study observed some drastic deviations in their results. However, there are interesting relationships among the logarithms of the pointwise predictive densities (lppd) calculated under WAIC and under the LOO approximation methods. The estimated overall lppd is a relative measure that reflects the overall goodness-of-fit of the model, and parallel log-likelihood profiles were observed for models with equal posterior variances in their lppds. This study illustrates the limitations of the information criteria in practical model comparison problems, discusses the relationships among the LOO-CV approximation methods and WAIC together with their limitations, and provides useful recommendations that may help in practical model comparisons with these methods.
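The lppd and WAIC quantities discussed above can be computed directly from an S×N matrix of pointwise log-likelihoods over posterior draws; this sketch uses the variance-based p_WAIC and a log-sum-exp for numerical stability:

```python
import math

def waic(log_lik):
    """WAIC = -2 * (lppd - p_waic), from log_lik[s][i]: the
    log-likelihood of observation i under posterior draw s."""
    S, N = len(log_lik), len(log_lik[0])
    lppd, p_waic = 0.0, 0.0
    for i in range(N):
        col = [log_lik[s][i] for s in range(S)]
        m = max(col)  # log-sum-exp trick for log of the mean density
        lppd += m + math.log(sum(math.exp(c - m) for c in col) / S)
        # p_waic term: posterior variance of the pointwise log-likelihood
        mean = sum(col) / S
        p_waic += sum((c - mean) ** 2 for c in col) / (S - 1)
    return -2.0 * (lppd - p_waic)
```

The same matrix feeds the LOO approximations: the raw importance weight for draw s and observation i is proportional to 1 / exp(log_lik[s][i]), with truncation or Pareto smoothing applied for TIS-LOO and PSIS-LOO.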

Keywords: cross-validation, importance sampling, information criteria, predictive accuracy

Procedia PDF Downloads 378
3077 A Robust and Efficient Segmentation Method Applied for Cardiac Left Ventricle with Abnormal Shapes

Authors: Peifei Zhu, Zisheng Li, Yasuki Kakishita, Mayumi Suzuki, Tomoaki Chono

Abstract:

Segmentation of the left ventricle (LV) from cardiac ultrasound images provides a quantitative functional analysis of the heart for diagnosing disease. The Active Shape Model (ASM) is a widely used approach for LV segmentation but suffers from the drawback that the initialization of the shape model may not be sufficiently close to the target, especially when dealing with the abnormal shapes that occur in disease. In this work, a two-step framework is proposed to improve the accuracy and speed of model-based segmentation. First, a robust and efficient detector based on Hough forests is proposed to localize cardiac feature points, and these points are used to predict the initial fit of the LV shape model. Second, to achieve more accurate and detailed segmentation, ASM is applied to further fit the LV shape model to the cardiac ultrasound image. The performance of the proposed method is evaluated on a dataset of 800 cardiac ultrasound images, mostly of abnormal shapes, and compared to several combinations of ASM and existing initialization methods. The experimental results demonstrate that the accuracy of feature point detection for initialization was improved by 40% compared to the existing methods. Moreover, the proposed method significantly reduces the number of ASM fitting loops required, speeding up the whole segmentation process. The proposed method therefore achieves more accurate and efficient segmentation and is applicable to unusual heart shapes associated with cardiac disease, such as left atrial enlargement.

Keywords: Hough forest, active shape model, segmentation, cardiac left ventricle

Procedia PDF Downloads 326
3076 Prediction Modeling of Alzheimer’s Disease and Its Prodromal Stages from Multimodal Data with Missing Values

Authors: M. Aghili, S. Tabarestani, C. Freytes, M. Shojaie, M. Cabrerizo, A. Barreto, N. Rishe, R. E. Curiel, D. Loewenstein, R. Duara, M. Adjouadi

Abstract:

A major challenge in medical studies, especially longitudinal ones, is the problem of missing measurements, which hinders the effective application of many machine learning algorithms. Furthermore, recent Alzheimer's disease studies have focused on delineating Early Mild Cognitive Impairment (EMCI) and Late Mild Cognitive Impairment (LMCI) from cognitively normal controls (CN), which is essential for developing effective and early treatment methods. To address these challenges, this paper explores the potential of the eXtreme Gradient Boosting (XGBoost) algorithm for handling missing values in multiclass classification. We seek a generalized classification scheme where all prodromal stages of the disease are considered simultaneously in the classification and decision-making processes. Given the large number of subjects (1631) included in this study, and in the presence of almost 28% missing values, we investigated the performance of XGBoost on the classification of the four classes AD, CN, EMCI, and LMCI. Using a 10-fold cross-validation technique, XGBoost is shown to outperform other state-of-the-art classification algorithms by 3% in terms of accuracy and F-score. Our model achieved an accuracy of 80.52%, a precision of 80.62%, and a recall of 80.51%, supporting the more natural and promising multiclass classification.
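XGBoost copes with missing values by learning a per-split "default direction" for them. The mechanism can be illustrated with a single regression stump on one feature, where `None` marks a missing measurement; this is a simplification of XGBoost's actual gradient-based, sparsity-aware split finding:

```python
def sse(values):
    """Sum of squared errors around the mean (leaf impurity)."""
    if not values:
        return 0.0
    mean = sum(values) / len(values)
    return sum((v - mean) ** 2 for v in values)

def find_split(xs, ys):
    """Best threshold and default direction for missing values (None),
    chosen to minimize the total squared error of the two leaves."""
    observed = sorted((x, y) for x, y in zip(xs, ys) if x is not None)
    missing = [y for x, y in zip(xs, ys) if x is None]
    best = None
    for i in range(1, len(observed)):
        thr = (observed[i - 1][0] + observed[i][0]) / 2
        left = [y for x, y in observed if x < thr]
        right = [y for x, y in observed if x >= thr]
        # try routing all missing-value rows to each side in turn
        for default in ("left", "right"):
            l = left + (missing if default == "left" else [])
            r = right + (missing if default == "right" else [])
            err = sse(l) + sse(r)
            if best is None or err < best[0]:
                best = (err, thr, default)
    return best[1], best[2]
```

Because the default direction is chosen per split from the data, no imputation step is needed, which is what makes XGBoost attractive for the ~28% missing values in this cohort.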

Keywords: eXtreme gradient boosting, missing data, Alzheimer's disease, early mild cognitive impairment, late mild cognitive impairment, multiclass classification, ADNI, support vector machine, random forest

Procedia PDF Downloads 168
3075 An Integrated Pipeline Risk Management Methodology for Natural Gas Transmission Networks

Authors: Maria M. Giannakou, Athanasios K. Ziliaskopoulos

Abstract:

Transmission pipelines carrying natural gas are often routed through populated cities, industrial areas and environmentally sensitive areas. While the need for these networks is unquestionable, there are serious concerns about the risk these lifeline networks pose to people, to their habitat and to critical infrastructures, especially in view of natural disasters such as earthquakes. This work presents an Integrated Pipeline Risk Management (IPRM) methodology for assessing the hazard associated with a natural gas pipeline failure due to natural or manmade disasters. IPRM aims to optimize the allocation of the available resources to countermeasures in order to minimize the impacts of pipeline failure on humans, the environment, the infrastructure and economic activity. A knapsack mathematical programming formulation is introduced that optimally selects the proper mitigation policies based on the estimated cost-benefit ratios. The proposed model is demonstrated with a small numerical example. The vulnerability analysis of these pipelines and the quantification of consequences from such failures can be useful for natural gas industries in deciding which mitigation measures to implement on existing pipeline networks at minimum cost while maintaining an acceptable level of hazard.
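The countermeasure-selection step can be illustrated with a standard 0/1 knapsack dynamic program: pick the subset of mitigation measures that maximizes total benefit under a budget constraint. The measure names, costs and benefit scores below are hypothetical placeholders, not the paper's formulation or data:

```python
def select_measures(measures, budget):
    """0/1 knapsack: choose the subset of mitigation measures that
    maximizes total benefit without exceeding the budget.
    measures: list of (name, integer cost, benefit)."""
    n = len(measures)
    dp = [0] * (budget + 1)                 # dp[b] = best benefit with budget b
    keep = [[False] * (budget + 1) for _ in range(n)]
    for i, (_, cost, benefit) in enumerate(measures):
        for b in range(budget, cost - 1, -1):   # reverse scan: each item used once
            if dp[b - cost] + benefit > dp[b]:
                dp[b] = dp[b - cost] + benefit
                keep[i][b] = True
    chosen, b = [], budget                  # trace back the chosen measures
    for i in range(n - 1, -1, -1):
        if keep[i][b]:
            chosen.append(measures[i][0])
            b -= measures[i][1]
    return dp[budget], list(reversed(chosen))

# Hypothetical costs (k-euro) and benefit scores for three countermeasures
measures = [("valve automation", 4, 40), ("pipe reburial", 3, 30), ("patrols", 2, 15)]
best, chosen = select_measures(measures, budget=7)
print(best, chosen)  # 70 ['valve automation', 'pipe reburial']
```

With cost-benefit ratios as the benefit values, the same routine reproduces the resource-allocation logic described in the abstract.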

Keywords: cost benefit analysis, knapsack problem, natural gas distribution network, risk management, risk mitigation

Procedia PDF Downloads 277
3074 Automatic Detection of Traffic Stop Locations Using GPS Data

Authors: Areej Salaymeh, Loren Schwiebert, Stephen Remias, Jonathan Waddell

Abstract:

Extracting information from new data sources has emerged as a crucial task in many traffic planning processes, such as identifying traffic patterns, route planning, traffic forecasting, and locating infrastructure improvements. Given the advanced technologies used to collect Global Positioning System (GPS) data from dedicated GPS devices, GPS-equipped phones, and navigation tools, intelligent data analysis methodologies are necessary to mine this raw data. In this research, an automatic detection framework is proposed to help identify and classify the locations of stopped GPS waypoints into two main categories: signalized intersections or highway congestion. The Delaunay triangulation is used to perform this assessment in the clustering phase. While most existing clustering algorithms need assumptions about the data distribution, the effectiveness of the Delaunay triangulation relies on triangulating geographical data points without such assumptions. Our proposed method starts by cleaning noise from the data and normalizing it. Next, the framework identifies stoppage points by calculating the traveled distance. The last step is to use clustering to form groups of waypoints for signalized traffic and highway congestion. Finally, a binary classifier is applied to distinguish highway congestion from signalized stop points, using the length of the cluster as the discriminating feature. The proposed framework shows high accuracy, identifying the stop positions and congestion points in around 99.2% of trials. We show that it is possible, using limited GPS data, to distinguish signalized intersections from highway congestion with high accuracy.
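The stoppage-point step (flagging waypoints where the traveled distance between consecutive GPS fixes is near zero) can be sketched as follows; the coordinates, sampling rate and 5 m threshold are illustrative assumptions, not the study's parameters:

```python
import math

def haversine_m(p, q):
    """Great-circle distance in meters between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371000 * math.asin(math.sqrt(a))

def stop_indices(track, max_move_m=5.0):
    """Flag waypoints whose distance to the previous fix is below a
    threshold, i.e. the vehicle is effectively stationary."""
    return [i for i in range(1, len(track))
            if haversine_m(track[i - 1], track[i]) < max_move_m]

# Hypothetical 1 Hz track: moving, idling at a stop, then moving again
track = [(42.33, -83.04), (42.3303, -83.04), (42.3303, -83.04),
         (42.33031, -83.04), (42.3309, -83.04)]
print(stop_indices(track))  # indices of the stationary fixes
```

The flagged waypoints would then be fed to the Delaunay-based clustering phase, which groups them without distributional assumptions.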

Keywords: Delaunay triangulation, clustering, intelligent transportation systems, GPS data

Procedia PDF Downloads 260
3073 Feasibility Studies through Quantitative Methods: The Revamping of a Tourist Railway Line in Italy

Authors: Armando Cartenì, Ilaria Henke

Abstract:

Recently, the Italian government has approved a new law for public contracts and has been laying the groundwork for restarting a planning phase. The government has adopted the indications given by the European Commission regarding the estimation of external costs within Cost-Benefit Analysis and has approved the 'Guidelines for the Assessment of Investment Projects'. In compliance with the new Italian law, the aim of this research was to perform a feasibility study applying quantitative methods to the revamping of an Italian tourist railway line. A Cost-Benefit Analysis was performed, starting from the quantification of the passenger demand potentially interested in using the revamped rail services. The benefits due to the reduction of external costs were also quantified in terms of variations (with respect to the no-project scenario) in climate change, air pollution, noise, congestion, and accidents. Estimation results have been expressed in terms of Measures of Effectiveness, indicating a positive Net Present Value of about 27 million Euros, an Internal Rate of Return much greater than the discount rate, a benefit/cost ratio equal to 2, and a payback period of 15 years.
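The Measures of Effectiveness cited above follow directly from discounted cash flows. A minimal sketch of the Net Present Value and payback computations, with hypothetical cash flows rather than the project's actual figures:

```python
def npv(rate, cash_flows):
    """Net Present Value; cash_flows[0] occurs at year 0 (undiscounted)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def payback_period(cash_flows):
    """First year at which cumulative (undiscounted) cash flow turns non-negative."""
    total = 0.0
    for t, cf in enumerate(cash_flows):
        total += cf
        if total >= 0:
            return t
    return None

# Hypothetical project: 100 invested at year 0, then net benefits of 12/year
flows = [-100.0] + [12.0] * 30
print(npv(0.03, flows) > 0)   # positive NPV at a 3% discount rate
print(payback_period(flows))  # years to recover the initial investment
```

The Internal Rate of Return is simply the rate at which `npv` crosses zero, and the benefit/cost ratio divides the discounted benefits by the discounted costs.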

Keywords: cost-benefit analysis, evaluation analysis, demand management, external cost, transport planning, quality

Procedia PDF Downloads 205
3072 A Human Factors Approach to Workload Optimization for On-Screen Review Tasks

Authors: Christina Kirsch, Adam Hatzigiannis

Abstract:

Rail operators and maintainers worldwide are increasingly replacing walking patrols in the rail corridor with mechanized track patrols (essentially data capture on trains) and on-screen reviews of track infrastructure in centralized review facilities. The benefit is that infrastructure workers are less exposed to the dangers of the rail corridor. The impact is a significant change in work design from walking track sections and direct observation in the real world to sedentary jobs in the review facility reviewing captured data on screens. Defects in rail infrastructure can have catastrophic consequences. Reviewer performance regarding accuracy and efficiency of reviews within the available time frame is essential to ensure safety and operational performance. Rail operators must optimize workload and resource loading to transition to on-screen reviews successfully. Therefore, they need to know what workload assessment methodologies will provide reliable and valid data to optimize resourcing for on-screen reviews. This paper compares objective workload measures, including track difficulty ratings and review distance covered per hour, and subjective workload assessments (NASA TLX) and analyses the link between workload and reviewer performance, including sensitivity, precision, and overall accuracy. An experimental study was completed with eight on-screen reviewers, including infrastructure workers and engineers, reviewing track sections with different levels of track difficulty over nine days. Each day the reviewers completed four 90-minute sessions of on-screen inspection of the track infrastructure. Data regarding the speed of review (km/hour), detected defects, false negatives, and false positives were collected. Additionally, all reviewers completed a subjective workload assessment (NASA TLX) after each 90-minute session and a short employee engagement survey at the end of the study period that captured impacts on job satisfaction and motivation.
The results showed that objective measures of track difficulty align with subjective mental demand, temporal demand, effort, and frustration in the NASA TLX. Interestingly, review speed correlated with subjective assessments of physical and temporal demand, but not with mental demand. Subjective performance ratings correlated with all accuracy measures and review speed. The results showed that subjective NASA TLX workload assessments accurately reflect objective workload. The analysis of the impact of workload on performance showed that subjective mental demand correlated with high precision (accurately detected defects rather than false positives). Conversely, high temporal demand was negatively correlated with sensitivity and the percentage of detected existing defects. Review speed was significantly correlated with false negatives. With an increase in review speed, accuracy declined. On the other hand, review speed correlated with subjective performance assessments. Reviewers thought their performance was higher when they reviewed the track sections faster, despite the decline in accuracy. The study results were used to optimize resourcing and ensure that reviewers had enough time to review the allocated track sections to improve defect detection rates in accordance with the efficiency-thoroughness trade-off. Overall, the study showed the importance of a multi-method approach to workload assessment and optimization, combining subjective workload assessments with objective workload and performance measures to ensure that recommendations for work system optimization are evidence-based and reliable.
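The NASA TLX overall workload score used in the study is conventionally computed by weighting each of the six 0-100 subscale ratings by its number of wins in the 15 pairwise comparisons. A sketch with hypothetical ratings and weights (not the study's data):

```python
def tlx_weighted(ratings, weights):
    """NASA TLX overall workload: each 0-100 subscale rating is weighted
    by its number of wins across the 15 pairwise comparisons
    (so the weights sum to 15), then averaged."""
    assert sum(weights.values()) == 15
    return sum(ratings[k] * weights[k] for k in ratings) / 15

# Hypothetical session scores from one on-screen reviewer
ratings = {"mental": 70, "physical": 20, "temporal": 60,
           "performance": 40, "effort": 65, "frustration": 35}
weights = {"mental": 5, "physical": 0, "temporal": 4,
           "performance": 2, "effort": 3, "frustration": 1}
print(tlx_weighted(ratings, weights))  # overall workload on a 0-100 scale
```

The unweighted "raw TLX" variant simply averages the six ratings; either form can then be correlated against objective measures such as review speed.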

Keywords: automation, efficiency-thoroughness trade-off, human factors, job design, NASA TLX, performance optimization, subjective workload assessment, workload analysis

Procedia PDF Downloads 101
3071 Hourly Solar Radiation Predictions for Anticipatory Control of Electrically Heated Floors: Use of Online Weather Condition Forecasts

Authors: Helene Thieblemont, Fariborz Haghighat

Abstract:

Energy storage systems play a crucial role in decreasing building energy consumption during peak periods and expanding the use of renewable energies in buildings. To provide high building thermal performance, the energy storage system has to be properly controlled to ensure good energy performance while maintaining satisfactory thermal comfort for the building's occupants. In the case of passive discharge storage, the required amount of energy must be defined in advance to avoid overheating the building. Consequently, anticipatory supervisory control strategies have been developed that forecast future energy demand and production to coordinate systems. Anticipatory supervisory control strategies are based on predictions, mainly of the weather forecast. However, while the forecasted hourly outdoor temperature may be found online with high accuracy, solar radiation predictions are most of the time not available online. To estimate them, this paper proposes an advanced approach based on the forecast of weather conditions. Several methods to correlate hourly weather condition forecasts with real hourly solar radiation are compared. Results show that using weather condition forecasts allows estimating the next day's solar radiation with acceptable accuracy. Moreover, this technique yields hourly data that may be used in building models. As a result, this solar radiation prediction model may help to implement model-based controllers such as Model Predictive Control.
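Correlating a forecasted weather variable with measured solar radiation can be as simple as an ordinary least-squares fit. The cloud-cover and radiation values below are hypothetical training pairs, not the paper's dataset or its actual correlation method:

```python
def linear_fit(x, y):
    """Ordinary least squares for y = a + b*x (closed form)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b          # intercept, slope

# Hypothetical training pairs: forecast cloud-cover fraction vs measured
# hourly global radiation (W/m2) at the same hour of day
cloud = [0.0, 0.2, 0.5, 0.8, 1.0]
radiation = [820, 700, 480, 260, 120]
a, b = linear_fit(cloud, radiation)
predicted = a + b * 0.4            # estimate for tomorrow's 40% cloud cover
print(round(predicted))            # estimated W/m2 for that hour
```

In practice, one such correlation would be fitted per hour of day (or per solar-elevation band), since a given cloud cover implies very different radiation at noon and at dusk.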

Keywords: anticipatory control, model predictive control, solar radiation forecast, thermal storage

Procedia PDF Downloads 257
3070 Characterizing Aquifer Layers of Karstic Springs in Nahavand Plain Using Geoelectrical and Electromagnetic Methods

Authors: A. Taheri Tizro, Rojin Fasihi

Abstract:

The geoelectrical method is one of the most effective tools for determining subsurface lithological layers. The electromagnetic method is a newer method that can also play an important role in determining and separating subsurface layers with acceptable accuracy. In the present research, 10 electromagnetic soundings were collected upstream of 5 karstic springs (Famaseb, Faresban, Ghale Baroodab, Gian and Gonbad kabood) in the Nahavand plain of Hamadan province. Using these data, electromagnetic logs were prepared at different depths and compared with 5 logs from the geoelectric method. The comparison showed that the NRMSE values of the geoelectric method for the springs of Famaseb, Faresban, Ghale Baroodab, Gian and Gonbad kabood were 7.11, 7.50, 44.93, 3.99, and 2.99, respectively, while for the electromagnetic method the values for the same springs were about 1.4, 1.1, 1.2, 1.5, and 1.3, respectively. In addition to the similarity of the results of the two methods, it was found that the accuracy of the electromagnetic method, based on the NRMSE values, is higher than that of the geoelectric method. The advantages of the electromagnetic method over the geoelectric method are that it is less time consuming and less costly. The final result of this research is the depth to the water table, which is about 6, 20, 10, 2, and 36 meters in the springs of Famaseb, Faresban, Ghale Baroodab, Gian and Gonbad kabood, respectively. The maximum thickness of the aquifer layer was estimated in Gonbad kabood spring (36 meters) and the lowest in Gian spring (2 meters). These results can be used to identify the water potential of the region in order to better manage water resources.
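The NRMSE figures above normalize the root-mean-square misfit by the observed data range. A minimal sketch of the computation, with hypothetical resistivity values rather than the survey data:

```python
import math

def nrmse(observed, predicted):
    """Root-mean-square error normalized by the observed range,
    expressed as a percentage (lower means a better model fit)."""
    rmse = math.sqrt(sum((o - p) ** 2 for o, p in zip(observed, predicted))
                     / len(observed))
    return 100 * rmse / (max(observed) - min(observed))

# Hypothetical layer resistivities: measured vs inverted-model values
observed = [120.0, 85.0, 60.0, 40.0]
model = [118.0, 88.0, 57.0, 43.0]
print(round(nrmse(observed, model), 2))
```

Other normalizations (by the mean or the standard deviation of the observations) are also in use, so NRMSE values are only comparable when the same convention is applied to both methods.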

Keywords: karst spring, geoelectric, aquifer layers, nahavand

Procedia PDF Downloads 57
3069 Screening of Antioxidant Activity of Exopolysaccharides Produced by Lactic Acid Bacteria of Human Origin

Authors: Piña-Ronces Laura Gabriela, Reyes-Escogido María de Lourdes

Abstract:

There is large variability in the exopolysaccharides (EPS) produced by LAB, depending on the carbon source; they have multiple applications, mainly in the food industry, but they have also become important for health. In this study, we identified EPS-producing strains belonging to the LAB group that had previously been isolated from humans. We then extracted the EPS and evaluated the antioxidant activity of the EPS produced by all strains. Antioxidant activity was determined by the DPPH method using ascorbic acid as the standard for both comparison and quantification. 31 strains (51.66%) produced EPS at concentrations between 451 and 1,561 mg/L, and 16 of the extracted EPS showed an antioxidant effect superior to ascorbic acid at the same concentrations. The EPS-producing strains were L. plantarum, L. sp. and L. fermentum of the Lactobacillus genus and E. faecium, E. durans, and E. hirae of the Enterococcus genus. The antioxidant activity shown by EPS from 3 strains of L. plantarum and 3 strains of E. faecium differed within the species, while the antioxidant activity determined for EPS obtained from the other strains did not differ at the species level but was superior to ascorbic acid. EPS produced by L. plantarum and E. hirae had the best activity, which could justify selecting them as a possible new alternative for the therapy or treatment of diseases related to oxidative stress. Further studies on the biological functions of EPS have to be conducted for new applications in health.
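The DPPH assay expresses antioxidant activity as the percentage drop in absorbance of the radical solution. A minimal sketch of the calculation, with hypothetical absorbance readings (the 517 nm wavelength is the conventional one for DPPH, not stated in the abstract):

```python
def dpph_inhibition(a_control, a_sample):
    """DPPH radical scavenging activity (%): relative drop in absorbance
    (conventionally read at 517 nm) of the sample vs the DPPH control."""
    return 100 * (a_control - a_sample) / a_control

# Hypothetical absorbance readings comparing an EPS extract with the
# ascorbic acid standard at the same concentration
a_control = 0.80
eps = dpph_inhibition(a_control, 0.26)        # EPS extract
ascorbic = dpph_inhibition(a_control, 0.30)   # ascorbic acid standard
print(eps > ascorbic)  # "superior to ascorbic acid" in the abstract's sense
```

A higher inhibition percentage at equal concentration is what the abstract describes as an antioxidant effect superior to the ascorbic acid standard.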

Keywords: oxidative stress, lactic acid bacteria, exopolysaccharides, antioxidant activity

Procedia PDF Downloads 344
3068 Circular Tool and Dynamic Approach to Grow the Entrepreneurship of Macroeconomic Metabolism

Authors: Maria Areias, Diogo Simões, Ana Figueiredo, Anishur Rahman, Filipa Figueiredo, João Nunes

Abstract:

It is expected that close to 7 billion people will live in urban areas by 2050. In order to improve the sustainability of territories and their transition towards a circular economy, it is necessary to understand their metabolism and to promote and guide the entrepreneurial response. The study of a macroeconomic metabolism involves the quantification of the inputs, outputs and storage of energy, water, materials and wastes for an urban region. This quantification and analysis represent an opportunity for the promotion of green entrepreneurship. There are several methods to assess the environmental impacts of an urban territory, such as human and environmental risk assessment (HERA), life cycle assessment (LCA), ecological footprint assessment (EF), material flow analysis (MFA), physical input-output tables (PIOT), ecological network analysis (ENA), and multicriteria decision analysis (MCDA), among others. However, no consensus exists about which of these assessment methods is best for analyzing the sustainability of these complex systems. Taking into account the weaknesses and needs identified, the CiiM - Circular Innovation Inter-Municipality project aims to define a uniform and globally accepted methodology through the integration of various methodologies and dynamic approaches to increase the efficiency of macroeconomic metabolisms and promote entrepreneurship in a circular economy. The pilot territory considered in the CiiM project has a total area of 969,428 ha, comprising a total of 897,256 inhabitants (about 41% of the population of the Center Region). The main economic activities in the pilot territory, which contribute to a gross domestic product of 14.4 billion euros, are: social support activities for the elderly; construction of buildings; road transport of goods; retailing in supermarkets and hypermarkets; mass production of other garments; inpatient health facilities; and the manufacture of other components and accessories for motor vehicles.
The region's business network consists mostly of micro and small companies (similar to the Central Region of Portugal), with a total of 53,708 companies identified in the CIM Region of Coimbra (39 large companies), 28,146 in the CIM Viseu Dão Lafões (22 large companies) and 24,953 in the CIM Beiras and Serra da Estrela (13 large companies). The database was constructed taking into account data available from the National Institute of Statistics (INE), the General Directorate of Energy and Geology (DGEG), Eurostat, Pordata, the Strategy and Planning Office (GEP), the Portuguese Environment Agency (APA), the Commission for Coordination and Regional Development (CCDR) and the Inter-municipal Community (CIM), as well as dedicated databases. In addition to the collection of statistical data, it was necessary to identify and characterize the different stakeholder groups in the pilot territory that are relevant to the different metabolism components under analysis. The CiiM project also adds the potential of a Geographic Information System (GIS) so that it will be possible to obtain geospatial results for the territorial metabolisms (rural and urban) of the pilot region. This platform will be a powerful tool for visualizing the flows of products/services that occur within the region and will support the stakeholders, improving their circular performance and identifying new business ideas and symbiotic partnerships.
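At its core, the material flow analysis (MFA) component of such a metabolism study reduces to a per-category balance of inputs against outputs. A trivial sketch with hypothetical territorial flows, not the CiiM project's data:

```python
def storage_change(inputs, outputs):
    """MFA balance per flow category: stock change = inputs - outputs.
    A positive value means the territory is accumulating that flow."""
    return {k: inputs.get(k, 0.0) - outputs.get(k, 0.0)
            for k in set(inputs) | set(outputs)}

# Hypothetical annual territorial flows (materials in kt, water in
# thousand m3, energy in PJ) -- placeholders, not CiiM figures
inputs = {"materials": 5200.0, "water": 88000.0, "energy": 60.0}
outputs = {"materials": 4100.0, "water": 86500.0, "energy": 58.0}
balance = storage_change(inputs, outputs)
print(balance["materials"])  # net material accumulation in the territory
```

Layered per municipality and per flow category, balances like this are what a GIS platform can render as geospatial flow maps.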

Keywords: circular economy tools, life cycle assessment macroeconomic metabolism, multicriteria decision analysis, decision support tools, circular entrepreneurship, industrial and regional symbiosis

Procedia PDF Downloads 79
3067 Railway Accidents: Using the Global Railway Accident Database and Evaluation for Risk Analysis

Authors: Mathias Linden, André Schneider, Harald F. O. von Korflesch

Abstract:

The risk of train accidents is an ongoing concern for railway organizations, governments, insurance companies and other dependent sectors. Safety technologies are installed to reduce and prevent potential damages from train accidents. Since the safety budget of railway organizations is limited, it is necessary not only to achieve high availability and a high safety standard but also to be cost effective. Therefore, an economic assessment of safety technologies is fundamental to creating an accurate risk analysis. In order to conduct an economic assessment of a railway safety technology and a quantification of the costs of the accident causes, the Global Railway Accident Database & Evaluation (GRADE) has been developed. The aim of this paper is to describe the structure of this accident database and to show how it can be used for risk analyses. A number of risk analysis methods, such as the probabilistic safety assessment (PSA) method, were used to demonstrate this accident database's different possibilities for risk analysis. In conclusion, it can be noted that these analyses would not be as accurate without GRADE, as the information gathered in the accident database was not available in this form before. Our findings are relevant for railway operators, safety technology suppliers, insurers, governments and other concerned railway organizations.
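In its simplest form, the PSA-style analysis enabled by such a database multiplies accident-cause frequencies by consequence costs and compares the expected annual cost with and without a safety technology. The scenario frequencies and costs below are hypothetical, not GRADE data:

```python
def expected_annual_cost(scenarios):
    """Probabilistic safety assessment in its simplest form:
    risk = sum over scenarios of frequency (per year) x consequence cost."""
    return sum(freq * cost for _, freq, cost in scenarios)

# Hypothetical accident-cause scenarios derived from an accident database
baseline = [("signal passed at danger", 0.02, 5.0e6),
            ("level-crossing collision", 0.05, 2.0e6),
            ("derailment", 0.01, 8.0e6)]
# Same scenarios after installing a train protection system (assumed effect)
with_protection = [("signal passed at danger", 0.002, 5.0e6),
                   ("level-crossing collision", 0.05, 2.0e6),
                   ("derailment", 0.008, 8.0e6)]
saved = expected_annual_cost(baseline) - expected_annual_cost(with_protection)
print(saved > 0)  # annual risk reduction to weigh against the technology's cost
```

If the annualized cost of the technology is below `saved`, it is cost effective in this simple expected-value sense; a full PSA would also model uncertainty in the frequencies.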

Keywords: accident causes, accident costs, accident database, global railway accident database & evaluation, GRADE, probabilistic safety assessment, PSA, railway accidents, risk analysis

Procedia PDF Downloads 341
3066 Development and Application of an Intelligent Masonry Modulation in BIM Tools: Literature Review

Authors: Sara A. Ben Lashihar

Abstract:

Heritage building information modelling (HBIM) of historical masonry buildings has expanded lately to meet urgent needs for conservation and structural analysis. Masonry structures are unique features of ancient building architecture worldwide with special cultural, spiritual, and historical significance. However, there is a research gap regarding the reliability of the HBIM modeling process for these structures. The HBIM modeling process of masonry structures faces significant challenges due to the inherent complexity and uniqueness of their structural systems. Most of these processes are based on tracing point clouds and rarely follow documents, archival records, or direct observation. The results of these techniques are highly abstracted models whose accuracy does not exceed LOD 200. The masonry assemblages, especially curved elements such as arches, vaults, and domes, are generally modeled with standard BIM components or in-place models, and the brick textures are graphically input. Hence, future investigation is necessary to establish a methodology to automatically generate parametric masonry components. These components would be developed algorithmically according to mathematical and geometric accuracy and the validity of the survey data. The main aim of this paper is to provide a comprehensive review of the state of the art of existing research on the HBIM modeling of masonry structural elements and the latest approaches to achieving parametric models that have both visual fidelity and high geometric accuracy. The paper reviewed more than 800 articles, proceedings papers, and book chapters focused on the keywords "HBIM and Masonry" from 2017 to 2021. The studies were downloaded from well-known, trusted bibliographic databases such as Web of Science, Scopus, Dimensions, and Lens. As a starting point, a scientometric analysis was carried out using the VOSviewer software.
This software extracts the main keywords in these studies to retrieve the relevant works and calculates the strength of the relationships between these keywords. Subsequently, an in-depth qualitative review was conducted of the studies with the highest frequency of occurrence and the strongest links with the topic, according to VOSviewer's results. The qualitative review focused on the latest approaches and the future directions proposed in these studies. The findings of this paper can serve as a valuable reference for researchers and BIM specialists seeking to make more accurate and reliable HBIM models of historic masonry buildings.

Keywords: HBIM, masonry, structure, modeling, automatic, approach, parametric

Procedia PDF Downloads 149
3065 Separation of Oryzanol from Rice Bran Oil Using Silica: Equilibrium of Batch Adsorption

Authors: A. D. Susanti, W. B. Sediawan, S. K. Wirawan, Budhijanto, Ritmaleni

Abstract:

Rice bran oil contains significant amounts of oryzanol, a natural antioxidant that is considered to have higher antioxidant activity than vitamin E (tocopherol). Oryzanol has several reported health properties and is of interest in pharmacy, nutrition, and cosmetics. For practical use, isolation and purification are necessary due to the low concentration of oryzanol in crude rice bran oil (0.9-2.9%). Batch chromatography has proved to be a promising process for oryzanol recovery, but productivity is still low and scale-up processes of industrial interest have not yet been described. In order to improve the productivity of batch chromatography, a continuous chromatography design, namely the Simulated Moving Bed (SMB) concept, has been proposed. The SMB concept is of interest for continuous commercial-scale separation of the binary system (oryzanol and rice bran oil), with rice bran oil still obtained as a side product. The design of SMB chromatography for oryzanol separation requires quantification of its equilibrium. In this study, the equilibrium of oryzanol separation was studied by batch adsorption using silica as the adsorbent and n-hexane/acetone (9:1) as the eluent. Three isotherm models, namely the Henry, Langmuir, and Freundlich equations, were applied to and modified for the experimental data to establish an appropriate correlation for each sample. It turned out that the models quantitatively describe the experimental equilibrium data and will guide the design of SMB chromatography.
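Of the three isotherms, the Langmuir equation q = qmax*K*C/(1 + K*C) is commonly fitted through its linearized form C/q = 1/(qmax*K) + C/qmax. A sketch of such a fit with hypothetical batch adsorption data (not the study's measurements):

```python
def langmuir_fit(c_eq, q_eq):
    """Fit the Langmuir isotherm q = qmax*K*C/(1 + K*C) via its
    linearized form C/q = 1/(qmax*K) + C/qmax (ordinary least squares)."""
    x, y = c_eq, [c / q for c, q in zip(c_eq, q_eq)]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    intercept = my - slope * mx
    qmax = 1 / slope            # saturation capacity
    k = slope / intercept       # affinity constant
    return qmax, k

# Hypothetical batch data: equilibrium concentration (mg/mL) vs uptake (mg/g),
# generated near qmax = 50, K = 0.5 and lightly rounded
c_eq = [0.5, 1.0, 2.0, 4.0]
q_eq = [10.0, 16.7, 25.0, 33.3]
qmax, k = langmuir_fit(c_eq, q_eq)
print(qmax, k)  # recovered parameters, close to the generating values
```

The fitted (qmax, K) pair is exactly the kind of equilibrium quantification an SMB design needs to set zone flow rates and switching times.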

Keywords: adsorption, equilibrium, oryzanol, rice bran oil, simulated moving bed

Procedia PDF Downloads 267
3064 Markov Random Field-Based Segmentation Algorithm for Detection of Land Cover Changes Using Uninhabited Aerial Vehicle Synthetic Aperture Radar Polarimetric Images

Authors: Mehrnoosh Omati, Mahmod Reza Sahebi

Abstract:

Information on land use/land cover change plays an essential role in environmental assessment, planning and management in regional development. Remotely sensed imagery is widely used to provide information in many change detection applications. Polarimetric synthetic aperture radar (PolSAR) imagery, with its capability to discriminate between different scattering mechanisms, is a powerful tool for environmental monitoring applications. This paper proposes a new boundary-based segmentation algorithm as a fundamental step for land cover change detection. In this method, first, two PolSAR images are segmented using an integration of the marker-controlled watershed algorithm and a coupled Markov random field (MRF). Then, object-based classification is performed to determine changed/unchanged image objects. Compared with a pixel-based support vector machine (SVM) classifier, this novel segmentation algorithm significantly reduces the speckle effect in PolSAR images and improves the accuracy of binary classification at the object level. The experimental results on Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR) polarimetric images show a 3% and 6% improvement in overall accuracy and kappa coefficient, respectively. Also, the proposed method can correctly distinguish homogeneous image parcels.
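An MRF segmentation is typically optimized with an iterative scheme such as iterated conditional modes (ICM). The toy example below applies ICM to a binary Ising-like label field in pure Python; it illustrates only the smoothness prior that suppresses speckle-like isolated labels, and is not the paper's marker-controlled watershed/coupled-MRF integration:

```python
def icm_denoise(noisy, beta=2.0, iters=5):
    """Iterated conditional modes on an Ising-like MRF: each pixel label
    in {-1, +1} is set to minimize a data term (disagreement with the
    observation) plus beta times disagreement with its 4-neighbours."""
    h, w = len(noisy), len(noisy[0])
    labels = [row[:] for row in noisy]
    for _ in range(iters):
        for i in range(h):
            for j in range(w):
                best, best_e = labels[i][j], float("inf")
                for cand in (-1, 1):
                    e = 0.0 if cand == noisy[i][j] else 1.0   # data term
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < h and 0 <= nj < w and labels[ni][nj] != cand:
                            e += beta                          # smoothness term
                    if e < best_e:
                        best, best_e = cand, e
                labels[i][j] = best
    return labels

# A 4x4 field of +1 with one speckle-like flipped pixel: the smoothness
# prior outweighs the data term and ICM restores the homogeneous parcel
noisy = [[1, 1, 1, 1], [1, -1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1]]
print(icm_denoise(noisy))
```

This is the same mechanism, at toy scale, by which an MRF prior over segments reduces speckle relative to a purely pixel-based classifier.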

Keywords: coupled Markov random field (MRF), environment, object-based analysis, polarimetric SAR (PolSAR) images

Procedia PDF Downloads 203
3063 Fake Accounts Detection in Twitter Based on Minimum Weighted Feature Set

Authors: Ahmed ElAzab, Amira M. Idrees, Mahmoud A. Mahmoud, Hesham Hefny

Abstract:

Social networking sites such as Twitter and Facebook attract over 500 million users across the world, and for those users, their social and even practical lives have become interrelated with these platforms. Their interaction with social networking has affected their lives forever. Accordingly, social networking sites have become among the main channels responsible for the vast dissemination of different kinds of information during real-time events. This popularity of social networking has led to different problems, including the possibility of exposing incorrect information to users through fake accounts, which results in the spread of malicious content during live events. This situation can result in huge real-world damage to society in general, including citizens, business entities, and others. In this paper, we present a classification method for detecting fake accounts on Twitter. The study determines the minimized set of the main factors that influence the detection of fake accounts on Twitter; the determined factors are then applied using different classification techniques, the results of these techniques are compared, and the most accurate algorithm is selected according to the accuracy of the results. The study has been compared with different recent research in the same area, and this comparison has confirmed the accuracy of the proposed study. We claim that this study can be continuously applied on Twitter to automatically detect fake accounts; moreover, it can be applied to different social network sites such as Facebook with minor changes according to the nature of the social network, which are discussed in this paper.

Keywords: fake accounts detection, classification algorithms, twitter accounts analysis, features based techniques

Procedia PDF Downloads 387
3062 The Correlation between Three-Dimensional Implant Positions and Esthetic Outcomes of Single-Tooth Implant Restoration

Authors: Pongsakorn Komutpol, Pravej Serichetaphongse, Soontra Panmekiate, Atiphan Pimkhaokham

Abstract:

Statement of Problem: The important parameters of esthetic assessment for anterior maxillary implants include the pink esthetics of the gingiva and the white esthetics of the restoration, while the three-dimensional (3D) implant position has recently been regarded as a key to successful implant treatment. However, to our knowledge, the authors did not come across any publication that demonstrated the relationship between esthetic outcomes and 3D implant position. Objectives: To investigate the correlation between the positional accuracy of single-tooth implant restorations (STIR) in all 3 dimensions and their esthetic outcomes. Materials and Methods: Data from 17 patients who had an STIR at a central incisor with a pristine contralateral tooth were included in this study. Intraoral photographs, dental models, and cone beam computed tomography (CBCT) images were retrieved. The esthetic outcome was assessed in accordance with the pink esthetic score and white esthetic score (PES/WES), while the correctness of the implant position in each dimension (mesiodistal, labiolingual, apicocoronal) was evaluated and defined as 'right' or 'wrong' according to the ITI consensus conference by one investigator using the CBCT data. The difference in mean scores between right and wrong positions in all dimensions was analyzed by the Mann-Whitney U test, with 0.05 as the significance level of the study. Results: The average PES/WES score was 15.88 ± 1.65, which is considered clinically acceptable. The average PES/WES scores with 1, 2 and 3 right dimensions of the implant position were 16.71, 15.75 and 15.17, respectively. None of the implants was placed wrongly in all three dimensions. A statistically significant difference in PES/WES score was found between the implants placed right in 3 dimensions and those placed right in only 1 dimension (p = 0.041). Conclusion: This study supports the principle of the 3D position of implants: the more properly an implant was placed, the higher the esthetic outcome.
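The Mann-Whitney U statistic used for the group comparison can be computed from rank sums. A sketch with hypothetical PES/WES totals for the two groups (not the study's patient data):

```python
def mann_whitney_u(a, b):
    """Mann-Whitney U statistic via rank sums, using midranks for ties;
    returns the smaller of the two U values."""
    combined = sorted((v, g) for g, grp in enumerate((a, b)) for v in grp)
    values = [v for v, _ in combined]
    ranks, pos = {}, 0
    while pos < len(values):            # assign 1-based midranks to ties
        end = pos
        while end + 1 < len(values) and values[end + 1] == values[pos]:
            end += 1
        midrank = (pos + end) / 2 + 1
        for k in range(pos, end + 1):
            ranks[k] = midrank
        pos = end + 1
    r_a = sum(ranks[k] for k, (_, g) in enumerate(combined) if g == 0)
    u_a = r_a - len(a) * (len(a) + 1) / 2
    return min(u_a, len(a) * len(b) - u_a)

# Hypothetical PES/WES totals: implants correct in 3 dimensions vs 1 dimension
three_dim = [17, 18, 16, 17, 18, 17]
one_dim = [15, 16, 14, 15]
print(mann_whitney_u(three_dim, one_dim))
```

The p-value then follows from the exact U distribution (or a normal approximation for larger samples), which is how a threshold such as 0.05 is applied.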

Keywords: accuracy, dental implant, esthetic, 3D implant position

Procedia PDF Downloads 155