Search results for: compass error
606 Blood Volume Pulse Extraction for Non-Contact Photoplethysmography Measurement from Facial Images
Authors: Ki Moo Lim, Iman R. Tayibnapis
Abstract:
According to WHO estimates, 38 of the 56 million global deaths in 2012 (68%) were due to noncommunicable diseases (NCDs). One approach to averting NCDs is early detection of disease. To that end, we developed a 'U-Healthcare Mirror', which measures vital signs such as heart rate (HR) and respiration rate without physical contact or user awareness. To measure HR, a digital camera in the mirror records red, green, and blue (RGB) discoloration from sequences of the user's facial images. We extracted the blood volume pulse (BVP) from the RGB discoloration, since the discoloration of the facial skin follows the BVP. We used blind source separation (BSS) to extract the BVP from the RGB discoloration, and adaptive filters to remove noise; both the BSS and the adaptive filters were implemented with the singular value decomposition (SVD) method. HR was estimated from the obtained BVP. We conducted HR measurement experiments using our method and a previous method based on independent component analysis (ICA), comparing both against HR measurements from a commercial oximeter. The experiments covered distances between 30 and 110 cm and light intensities between 5 and 2000 lux; measurements were repeated 7 times for each condition. The estimated HR showed a mean error of 2.25 bpm and a Pearson correlation coefficient of 0.73, an improvement in accuracy over previous work. The optimal distance between the mirror and the user for HR measurement was 50 cm with medium light intensity, around 550 lux.
Keywords: blood volume pulse, heart rate, photoplethysmography, independent component analysis
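The SVD-based separation step can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: it mean-centres the RGB traces, treats the rows of the right singular matrix as candidate sources, and picks the source with the strongest spectral peak in a plausible heart-rate band. The function name, band limits, and synthetic data below are all assumptions.

```python
import numpy as np

def estimate_hr(rgb_traces, fps):
    """Estimate heart rate (bpm) from mean-centred RGB traces.

    rgb_traces: array of shape (3, n_samples), one row per colour channel.
    SVD acts here as a simple blind-source-separation step: the rows of vt
    are uncorrelated source estimates, and we keep the one whose spectrum
    peaks most strongly in the plausible HR band (0.7-4 Hz, i.e. 42-240 bpm).
    """
    x = rgb_traces - rgb_traces.mean(axis=1, keepdims=True)
    _, _, vt = np.linalg.svd(x, full_matrices=False)  # rows of vt = sources
    freqs = np.fft.rfftfreq(x.shape[1], d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 4.0)
    best_bpm, best_power = 0.0, -1.0
    for source in vt:
        power = np.abs(np.fft.rfft(source)) ** 2
        i = np.argmax(power[band])
        if power[band][i] > best_power:
            best_power = power[band][i]
            best_bpm = 60.0 * freqs[band][i]
    return best_bpm
```

A real pipeline would add the adaptive filtering stage before the frequency estimate; this sketch only shows why SVD can stand in for BSS on short, roughly linear mixtures.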
Procedia PDF Downloads 327
605 Applicability of Cameriere’s Age Estimation Method in a Sample of Turkish Adults
Authors: Hatice Boyacioglu, Nursel Akkaya, Humeyra Ozge Yilanci, Hilmi Kansu, Nihal Avcu
Abstract:
A strong relationship between the reduction in pulp cavity size and increasing age has been reported in the literature. This relationship can be exploited to estimate an individual's age by measuring pulp cavity size on dental radiographs, a non-destructive method. The purpose of this study is to develop a population-specific regression model for age estimation in a sample of Turkish adults by applying Cameriere's method to panoramic radiographs. The sample consisted of 100 panoramic radiographs of Turkish patients (40 men, 60 women) aged between 20 and 70 years. Pulp and tooth area ratios (AR) of the maxillary canines were measured by two maxillofacial radiologists, and the results were subjected to regression analysis. There were no statistically significant intra-observer or inter-observer differences. The correlation coefficient between age and the AR of the maxillary canines was -0.71, and the following regression equation was derived: Estimated Age = 77.365 − (351.193 × AR). The mean prediction error was 4 years, which is within acceptable error limits for age estimation. This shows that the pulp/tooth area ratio is a useful variable for assessing age with reasonable accuracy. Based on these results, it was concluded that Cameriere's method is suitable for dental age estimation and can be used in forensic procedures for Turkish adults.
Keywords: age estimation by teeth, forensic dentistry, panoramic radiograph, Cameriere's method
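Written as code, the fitted relation is a one-line function (coefficients as reported above, with the original's decimal commas read as decimal points):

```python
def estimated_age(area_ratio):
    """Age (years) from the pulp/tooth area ratio (AR) of a maxillary
    canine, using the regression fitted on the Turkish sample:
    Estimated Age = 77.365 - 351.193 * AR."""
    return 77.365 - 351.193 * area_ratio
```

For example, an AR of 0.10 gives an estimate of about 42.2 years; the abstract reports a mean prediction error of 4 years around such estimates.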
Procedia PDF Downloads 448
604 Phase Behavior Modelling of Libyan Near-Critical Gas-Condensate Field
Authors: M. Khazam, M. Altawil, A. Eljabri
Abstract:
Fluid properties in states near a vapor-liquid critical region are the most difficult to measure and to predict with EoS models. The principal difficulty is that near-critical property variations do not follow the same mathematics as conditions far from the critical region. The Libyan NC98 field in the Sirte Basin is a typical example of a near-critical fluid, characterized by a high initial condensate-gas ratio (CGR) greater than 160 bbl/MMscf and a maximum liquid drop-out of 25%. The objective of this paper is to model the phase behavior of NC98 with a proper selection of EoS parameters, and to model reservoir depletion versus a gas cycling option using measured PVT data and EoS models. Our study revealed that, for an accurate gas and condensate recovery forecast during depletion, the most important PVT data to match are the gas-phase Z-factor and the C7+ fraction as functions of pressure. A reasonable match, within −3% error, was achieved for ultimate condensate recovery at an abandonment pressure of 1500 psia. The smooth transition from gas condensate to volatile oil was fairly well simulated by the tuned PR-EoS, with the predicted GOC at approximately 14,380 ftss. To maximize condensate recovery, the optimum gas cycling scheme should not be performed at pressures below 5700 psia. The contribution of condensate vaporization in this field is marginal, within 8% to 14%, compared to gas-gas miscible displacement. Therefore, if a gas recycling scheme is to be considered for this field, it is recommended to start it at an early stage of field development.
Keywords: EoS models, gas-condensate, gas cycling, near critical fluid
Procedia PDF Downloads 317
603 Assessing Level of Pregnancy Rate and Milk Yield in Indian Murrah Buffaloes
Authors: V. Jamuna, A. K. Chakravarty, C. S. Patil, Vijay Kumar, M. A. Mir, Rakesh Kumar
Abstract:
Intense selection of buffaloes for milk production at organized herds in the country, without due attention to fertility traits such as pregnancy rate, has led to deterioration in their performance. The aim of this study is to develop an optimum model for predicting pregnancy rate and to assess the level of pregnancy rate with respect to milk production in Murrah buffaloes. Data from 1224 lactation records of Murrah buffaloes spread over a period of 21 years were analyzed, and pregnancy rate showed a negative phenotypic association with lactation milk yield (-0.08 ± 0.04). Seven simple and multiple regression models were developed to find an optimum model for pregnancy rate in Murrah buffaloes. Among the seven, model II, with service period as the only independent reproduction variable, was found to be the best prediction model based on four statistical criteria: high coefficient of determination (R²), low mean sum of squares due to error (MSSe), conceptual predictive (CP) value, and Bayesian information criterion (BIC). To standardize the level of fertility with milk production, pregnancy rate was classified into seven classes in increments of 10% across all parities and lifetime, with the corresponding average pregnancy rate related to the average lactation milk yield (MY). It was observed that to achieve around 2000 kg MY, which can be considered optimum for Indian Murrah buffaloes, the pregnancy rate should be between 30% and 50%.
Keywords: life time, pregnancy rate, production, service period, standardization
Procedia PDF Downloads 633
602 Aquatic Intervention Research for Children with Autism Spectrum Disorders
Authors: Mehmet Yanardag, Ilker Yilmaz
Abstract:
Children with autism spectrum disorders (ASD) enjoy, and succeed at, aquatic-based exercise and play skills in a pool more readily than land-based exercise in a gym. Several authors have observed that many children with ASD experience more success in attaining movement skills in an aquatic environment. The properties of water and hydrodynamic principles provide buoyancy, decreasing the effects of gravity and allowing a child with limited motor skills to practice important aquatic skills. Authors have also reported that parents appreciated the effects of aquatic intervention programs on children with ASD, such as improved motor performance, movement capacity, and basic swimming skills. The purpose of this study was to investigate the effects of aquatic exercise training on water orientation and underwater working capacity, both measured in the pool. The study included four male children with ASD between 5 and 7 years old (mean age 6.25 ± 0.5 years). Aquatic exercise skills were taught using an errorless teaching procedure known as 'most-to-least prompting' over 12 weeks, three times a week, 60 minutes per session. The findings indicated improvements in both the water orientation skills and the underwater working capacity of the children after the 12-week training. The aquatic exercise intervention thus appears effective for improving working capacity and orientation skills when special education approaches are applied to children with ASD within multidisciplinary teamwork.
Keywords: aquatic, autism, orientation, ASD, children
Procedia PDF Downloads 431
601 Dynamic Fault Diagnosis for Semi-Batch Reactor Under Closed-Loop Control via Independent RBFNN
Authors: Abdelkarim M. Ertiame, D. W. Yu, D. L. Yu, J. B. Gomm
Abstract:
In this paper, a new robust fault detection and isolation (FDI) scheme is developed to monitor a multivariable nonlinear chemical process, the Chylla-Haase polymerization reactor, under cascade PI control. The scheme employs a radial basis function neural network (RBFNN) in independent mode to model the process dynamics, using the weighted sum-squared prediction error as the residual. The recursive orthogonal least squares (ROLS) algorithm is employed to train the model, overcoming the training difficulty of the network's independent mode. Another RBFNN is then used as a fault classifier to isolate faults from the different features in the residual vector. Several actuator and sensor faults are simulated in a nonlinear simulation of the reactor in Simulink, and the scheme is used to detect and isolate the faults on-line. The simulation results show the effectiveness of the scheme even when the process is subjected to disturbances and uncertainties, including significant changes in the monomer feed rate, fouling factor, impurity factor, ambient temperature, and measurement noise. The results illustrate the effectiveness and robustness of the proposed method.
Keywords: robust fault detection, cascade control, independent RBF model, RBF neural networks, Chylla-Haase reactor, FDI under closed-loop control
Procedia PDF Downloads 495
600 Optimal Sliding Mode Controller for Knee Flexion during Walking
Authors: Gabriel Sitler, Yousef Sardahi, Asad Salem
Abstract:
This paper presents an optimal and robust sliding mode controller (SMC) to regulate the knee joint angle for patients suffering from knee injuries. The controller imitates the role of active orthoses that produce the joint torques required to overcome gravity and loading forces and regain natural human movements. To this end, a mathematical model of the shank, the lower part of the leg, is derived first and then used for control system design and computer simulations. The design of the controller is carried out in an optimal, multi-objective setting with four objectives: minimization of the control effort and tracking error, and maximization of the control signal smoothness and the closed-loop system's speed of response. Optimal solutions in terms of the Pareto set and its image, the Pareto front, are obtained. The results show that there are trade-offs among the design objectives and many optimal solutions from which the decision-maker can choose to implement. Computer simulations conducted at different points of the Pareto set, assuming a knee squat movement, demonstrate the competing relationships among the design goals. In addition, the proposed control algorithm shows robustness in tracking a standard gait signal when accounting for uncertainty in the shank's parameters.
Keywords: optimal control, multi-objective optimization, sliding mode control, wearable knee exoskeletons
Procedia PDF Downloads 81
599 Impact of Climate on Sugarcane Yield Over Belagavi District, Karnataka Using Statistical Model
Authors: Girish Chavadappanavar
Abstract:
The impact of climate on agriculture could create problems with food security and may threaten the livelihood activities on which much of the population depends. In the present study, a statistical yield forecast model was developed for sugarcane production in Belagavi district, Karnataka, using weather variables of the crop growing season and observed yield data for the period 1971 to 2010. The study shows that this type of statistical model can forecast sugarcane yield 5 and even 10 weeks in advance of the harvest within an acceptable error limit. The performance of the model in predicting yields at the district level is quite satisfactory for both validation (2007 and 2008) and forecasting (2009 and 2010). In addition, the climate variability of the area was studied by applying the Mann-Kendall rank statistical test to the data series. The maximum and minimum temperatures were found to be significant with opposite trends (a decreasing trend in maximum and an increasing trend in minimum temperature), while the other three variables were found insignificant, with different trends (rainfall and evening relative humidity increasing, and morning relative humidity decreasing).
Keywords: climate impact, regression analysis, yield and forecast model, sugar models
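The trend test used here can be sketched in a few lines. This is a minimal Mann-Kendall implementation without the tie correction, so it illustrates the test's logic rather than reproducing the exact procedure applied to the Belagavi series; the function name and the 5% critical value are assumptions.

```python
from math import sqrt

def mann_kendall(series, z_crit=1.96):
    """Minimal Mann-Kendall trend test (no tie correction).

    Returns (trend, z), where trend is 'increasing', 'decreasing' or
    'no trend' at roughly the 5% two-sided level. S counts concordant
    minus discordant pairs; z is the continuity-corrected normal score.
    """
    n = len(series)
    s = sum(
        (series[j] > series[i]) - (series[j] < series[i])
        for i in range(n - 1)
        for j in range(i + 1, n)
    )
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / sqrt(var_s)
    elif s < 0:
        z = (s + 1) / sqrt(var_s)
    else:
        z = 0.0
    if z > z_crit:
        return "increasing", z
    if z < -z_crit:
        return "decreasing", z
    return "no trend", z
```

Applied to a climate series, a significant positive z corresponds to the "increasing trend" findings reported above (e.g. minimum temperature), and a significant negative z to the decreasing ones.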
Procedia PDF Downloads 69
598 Starting Order Eight Method Accurately for the Solution of First Order Initial Value Problems of Ordinary Differential Equations
Authors: James Adewale, Joshua Sunday
Abstract:
In this paper, we develop a linear multistep method implemented in predictor-corrector mode. The corrector is derived by collocation and interpolation of power series approximate solutions at selected grid points, giving a continuous linear multistep method, which is evaluated at selected grid points to give a discrete linear multistep method. The predictors are developed in the same way, by collocation and interpolation of power series approximate solutions, to give a continuous linear multistep method. The continuous linear multistep method is then solved for the independent solution to give a continuous block formula, which is evaluated at selected grid points to give a discrete block method. The basic properties of the corrector were investigated, and it was found to be zero-stable, consistent, and convergent. The efficiency of the method was tested on linear, nonlinear, oscillatory, and stiff first-order initial value problems of ordinary differential equations. The results were better in terms of computation time and error bound when compared with existing methods.
Keywords: predictor, corrector, collocation, interpolation, approximate solution, independent solution, zero stable, consistent, convergent
Procedia PDF Downloads 498
597 A Numerical Investigation of Total Temperature Probes Measurement Performance
Authors: Erdem Meriç
Abstract:
Measuring the total temperature of an air flow accurately is a very important requirement in the development of many industrial products, including gas turbines and rockets. Thermocouples are practical devices for measuring temperature in such cases, but in high-speed, high-temperature flows the temperature of the thermocouple junction may deviate considerably from the real flow total temperature due to convection, conduction, and radiation heat transfer. To avoid errors in total temperature measurement, special probe designs that are experimentally characterized are used. In this study, a validation case, an experimental characterization of a specific class of total temperature probes, was selected from the literature to develop a numerical conjugate heat transfer analysis methodology for studying the probe flow field and solid temperature distribution. The validated methodology is used to investigate flow structures inside and around the probe, and the effects of probe design parameters such as the ratio between inlet and outlet hole areas and the probe tip geometry on measurement accuracy. Lastly, a thermal model is constructed to account for total temperature measurement errors for this class of probes under different operating conditions. The outcomes of this work can guide experimentalists in designing a very accurate total temperature probe and in quantifying the possible error for their specific case.
Keywords: conjugate heat transfer, recovery factor, thermocouples, total temperature probes
Procedia PDF Downloads 132
596 Development and Verification of the Idom Shielding Optimization Tool
Authors: Omar Bouhassoun, Cristian Garrido, César Hueso
Abstract:
Radiation shielding design is an optimization problem with multiple constrained objective functions (radiation dose, weight, price, etc.) that depend on several parameters (material, thickness, position, etc.). The classical approach to shielding design is a brute-force trial-and-error process guided by the designer's prior experience; the result is an empirical, not optimal, solution, which can degrade the overall performance of the shielding. To automate the shielding design procedure, the IDOM Shielding Optimization Tool (ISOT) has been developed. This software combines optimization algorithms with the ability to read/write input files, run calculations, and parse output files for different radiation transport codes. In the first stage, the software was set up to adjust the input files of two well-known Monte Carlo codes (MCNP and Serpent) and to optimize the result (weight, volume, price, dose rate) using multi-objective genetic algorithms. Its modular implementation easily allows the inclusion of further radiation transport codes and optimization algorithms. We present the development of ISOT and its verification on a simple 3D multi-layer shielding problem using both MCNP and Serpent. ISOT looks very promising for achieving optimal solutions to complex shielding problems.
Keywords: optimization, shielding, nuclear, genetic algorithm
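The optimization loop at the core of such a tool can be sketched generically. ISOT couples multi-objective genetic algorithms to MCNP/Serpent runs; the toy below is single-objective, with a cheap analytic cost function standing in for a transport calculation, and all names and parameter values are illustrative assumptions rather than ISOT's implementation.

```python
import random

def genetic_minimize(cost, bounds, pop=30, gens=60, seed=0):
    """Tiny single-objective genetic algorithm sketch.

    cost:   callable taking a list of design variables (e.g. layer
            thicknesses) and returning a scalar to minimize -- in a real
            shielding tool this would launch a transport calculation.
    bounds: list of (lo, hi) per design variable.
    Uses elitism (top half survives), averaging crossover, and Gaussian
    mutation of one coordinate, clipped back into bounds.
    """
    rng = random.Random(seed)
    def rand_ind():
        return [rng.uniform(lo, hi) for lo, hi in bounds]
    population = [rand_ind() for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=cost)
        parents = population[: pop // 2]
        children = []
        while len(children) < pop - len(parents):
            a, b = rng.sample(parents, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]   # crossover
            i = rng.randrange(len(child))                 # mutate one gene
            lo, hi = bounds[i]
            child[i] = min(hi, max(lo, child[i] + rng.gauss(0, 0.1 * (hi - lo))))
            children.append(child)
        population = parents + children
    return min(population, key=cost)
```

A multi-objective version would replace the single `sort(key=cost)` with non-dominated sorting over the competing objectives (dose, weight, price), which is the design choice that distinguishes tools like ISOT from this sketch.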
Procedia PDF Downloads 109
595 Distributional and Dynamic Impact of Energy Subsidy Reform
Authors: Ali Hojati Najafabadi, Mohamad Hosein Rahmati, Seyed Ali Madanizadeh
Abstract:
Governments execute energy subsidy reforms by either increasing energy prices or reducing energy price dispersion. These policies reduce energy use per plant (intensive margin), vary the total number of firms (extensive margin), promote technological progress (technology channel), and free additional resources for redistribution (resource channel). We estimate a structural dynamic firm model with endogenous technology adaptation using data on manufacturing firms in Iran, a country with the second-largest energy subsidy plan as ranked by the IMF. The findings show significant dynamic and distributional effects of an energy reform plan. The price elasticity of energy consumption in the industrial sector is about -2.34, and -3.98 for large firms. The dispersion elasticity, defined as the change in energy consumption from a one-percent reduction in the standard deviation of the energy price distribution, is about 1.43, suggesting significant room for a distributional policy. We show that the intensive margin is the main driver of the energy price elasticity, whereas the other channels mostly offset it; in contrast, the labor response works mainly through the extensive margin. Total factor productivity improves slightly in light of the reduction in energy consumption if, at the same time, the redistribution policy boosts aggregate demand.
Keywords: energy reform, firm dynamics, structural estimation, subsidy policy
Procedia PDF Downloads 91
594 Surface Pressure Distributions for a Forebody Using Pressure Sensitive Paint
Authors: Yi-Xuan Huang, Kung-Ming Chung, Ping-Han Chung
Abstract:
Pressure-sensitive paint (PSP), which relies on the oxygen quenching of a luminescent molecule, is an optical technique used on wind-tunnel models. It yields a full-field pressure pattern with low aerodynamic interference and is becoming an alternative to pressure measurements using pressure taps. In this study, a polymer-ceramic PSP was used, with toluene as the solvent; the porous particle and polymer were silica gel (SiO₂) and RTV-118 (3 g : 7 g), respectively, and the compound was sprayed onto the model surface with a spray gun. The absorption and emission spectra of the Ru(dpp) luminophore were 441-467 nm and 597 nm, respectively. A Revox SLG-55 light source with a short-pass filter (550 nm) and a 14-bit CCD camera with a long-pass filter (600 nm) were used to illuminate the PSP and capture images. This study determines surface pressure patterns for the forebody of an AGARD-B model in compressible flow. Since no experimental data for the surface pressure distributions are available, a numerical simulation was conducted using ANSYS Fluent; the computed lift and drag coefficients were compared with data in the open literature. The experiments were conducted in a transonic wind tunnel at the Aerospace Science and Research Center, National Cheng Kung University. The freestream Mach number was 0.83, and the angle of attack ranged from -4 to 8 degrees. The deviation between PSP and the numerical simulation is within 5%; however, the effect of the light-source setup should be taken into account to address the relative error.
Keywords: pressure sensitive paint, forebody, surface pressure, compressible flow
Procedia PDF Downloads 125
593 Deep Learning Application for Object Image Recognition and Robot Automatic Grasping
Authors: Shiuh-Jer Huang, Chen-Zon Yan, C. K. Huang, Chun-Chien Ting
Abstract:
Since vision systems are in intense demand for autonomous applications in industrial environments, image recognition has become an important research topic. Here, a deep learning algorithm is employed in an image system to recognize industrial objects, integrated with a 7A6 series manipulator for automatic object-grasping tasks. A PC and a graphics processing unit (GPU) were chosen to construct the 3D vision recognition system, and a depth camera (Intel RealSense SR300) is employed to extract images for object recognition and coordinate derivation. The YOLOv2 scheme is adopted in a convolutional neural network (CNN) structure for object classification and center-point prediction, and an image processing strategy is used to find the object contour for calculating the object orientation angle. The specified object location and orientation information are then sent to the robot controller. Finally, a six-axis manipulator can grasp the specified object in a random environment based on the user command and the extracted image information. The experimental results show that YOLOv2 successfully detects the object location and category with confidence near 0.9 and 3D position error less than 0.4 mm. This is useful for future intelligent robotic applications in Industry 4.0 environments.
Keywords: deep learning, image processing, convolutional neural network, YOLOv2, 7A6 series manipulator
Procedia PDF Downloads 248
592 Poly (Diphenylamine-4-Sulfonic Acid) Modified Glassy Carbon Electrode for Voltammetric Determination of Gallic Acid in Honey and Peanut Samples
Authors: Zelalem Bitew, Adane Kassa, Beyene Misgan
Abstract:
In this study, a sensitive and selective voltammetric method based on a poly(diphenylamine-4-sulfonic acid)-modified glassy carbon electrode (poly(DPASA)/GCE) was developed for the determination of gallic acid. An irreversible oxidative peak for gallic acid appeared at both the bare GCE and the poly(DPASA)/GCE, with about a three-fold current enhancement and a much reduced potential at the poly(DPASA)/GCE, showing the catalytic activity of the modifier towards gallic acid oxidation. Under optimized conditions, the adsorptive stripping square-wave voltammetric peak current response of the poly(DPASA)/GCE depended linearly on gallic acid concentration in the range 5.00 × 10⁻⁷ to 3.00 × 10⁻⁴ mol L⁻¹, with a limit of detection of 4.35 × 10⁻⁹. Spike recoveries of 94.62-99.63, 95.00-99.80, and 97.25-103.20% of gallic acid in honey, raw peanut, and commercial peanut butter samples, respectively, interference recoveries with less than 4.11% error in the presence of uric acid and ascorbic acid, a lower LOD, and a relatively wider dynamic range than most previously reported methods validated the applicability of the method for determining gallic acid in real samples, including honey and peanut samples.
Keywords: gallic acid, diphenyl amine sulfonic acid, adsorptive anodic stripping square wave voltammetry, honey, peanut
Procedia PDF Downloads 76
591 From Electroencephalogram to Epileptic Seizures Detection by Using Artificial Neural Networks
Authors: Gaetano Zazzaro, Angelo Martone, Roberto V. Montaquila, Luigi Pavone
Abstract:
Seizures are the main factor affecting the quality of life of epileptic patients. The diagnosis of epilepsy, and hence the identification of the epileptogenic zone, is commonly made by continuous electroencephalogram (EEG) monitoring. Seizure identification on EEG signals is performed manually by epileptologists, a process that is usually very long and error prone. The aim of this paper is to describe an automated method to detect seizures in EEG signals, using the knowledge discovery in databases process and data mining methods and algorithms, which can support physicians during seizure detection. Our detection method is based on an artificial neural network classifier, trained with the multilayer perceptron algorithm, and on a software application called Training Builder developed for the massive extraction of features from EEG signals. This tool covers all data preparation steps, ranging from signal processing to data analysis techniques, including the sliding window paradigm, dimensionality reduction algorithms, information theory, and feature selection measures. The final model shows excellent performance, reaching an accuracy of over 99% in tests on data of a single patient retrieved from a publicly available EEG dataset.
Keywords: artificial neural network, data mining, electroencephalogram, epilepsy, feature extraction, seizure detection, signal processing
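The sliding window paradigm mentioned above can be illustrated in a few lines: a continuous EEG channel is cut into overlapping fixed-length windows, and each window is summarized by a feature vector for the classifier. The window width, step, and the two toy features below are assumptions for illustration, standing in for the richer feature set the Training Builder tool extracts.

```python
def sliding_windows(signal, width, step):
    """Yield successive (possibly overlapping) windows of a 1-D signal.

    This is the 'sliding window paradigm': it turns a continuous EEG
    channel into fixed-length segments suitable for feature extraction.
    """
    for start in range(0, len(signal) - width + 1, step):
        yield signal[start:start + width]

def window_features(window):
    """Toy per-window feature vector (mean, peak-to-peak amplitude),
    standing in for richer spectral/entropy features."""
    mean = sum(window) / len(window)
    return [mean, max(window) - min(window)]
```

In a real pipeline each feature vector would then be fed to the MLP classifier, with windows labeled seizure/non-seizure from expert annotations.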
Procedia PDF Downloads 187
590 Early Warning System of Financial Distress Based On Credit Cycle Index
Authors: Bi-Huei Tsai
Abstract:
Previous studies on financial distress prediction adopt the conventional failing/non-failing dichotomy; however, the extent of distress differs substantially among financial distress events. To address this, we use the categories "non-distressed", "slightly-distressed", and "reorganization and bankruptcy" to approximate the continuum of corporate financial health. This paper explains different financial distress events using a two-stage method. First, firm-specific financial ratios, corporate governance, and market factors are used to measure the probability of the various financial distress events with multinomial logit models; a bootstrapping simulation is performed to examine differences in the estimated misclassification cost (EMC). Second, macroeconomic factors are used to establish a credit cycle index, which determines the distressed cut-off indicator of the two-stage models. Two models, a one-stage and a two-stage prediction model, are developed to forecast financial distress, and their results are compared with each other and with the collected data. The findings show that the two-stage model incorporating financial ratios, corporate governance, and market factors has the lowest misclassification error rate; it is more accurate than the one-stage model because its distressed cut-off indicators are adjusted according to the macroeconomic credit cycle index.
Keywords: multinomial logit model, corporate governance, company failure, reorganization, bankruptcy
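The first-stage probabilities follow the standard multinomial logit form: one linear score per non-base category, mapped to probabilities by a softmax. The sketch below shows only this mechanic; the state labels and the coefficient layout are illustrative assumptions, not the paper's estimates.

```python
from math import exp

def multinomial_logit_probs(x, betas):
    """Multinomial logit class probabilities.

    x:     feature vector (e.g. financial ratios, governance and
           market factors).
    betas: one coefficient vector per non-base state; the base state
           ('non-distressed') has score fixed at 0 for identification.
    Returns probabilities over [base, state_1, ..., state_k],
    e.g. [non-distressed, slightly-distressed, reorganization/bankruptcy].
    """
    scores = [0.0] + [sum(b * xi for b, xi in zip(beta, x)) for beta in betas]
    exps = [exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]
```

A two-stage scheme like the one above would then compare these probabilities against a cut-off that shifts with the credit cycle index, rather than a fixed threshold.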
Procedia PDF Downloads 377
589 Evaluating the Nexus between Energy Demand and Economic Growth Using the VECM Approach: Case Study of Nigeria, China, and the United States
Authors: Rita U. Onolemhemhen, Saheed L. Bello, Akin P. Iwayemi
Abstract:
The effectiveness of energy demand policy depends on identifying the key drivers of energy demand in both the short run and the long run. This paper examines the influence of regional differences on the link between energy demand and other explanatory variables for Nigeria, China, and the USA using the vector error correction model (VECM) approach. The study employed annual time series data on energy consumption (ED), real gross domestic product per capita (RGDP), real energy prices (P), and urbanization (N) over a thirty-six-year sample period, sourced from the World Bank's World Development Indicators (WDI, 2016) and the US Energy Information Administration (EIA). Results show that all the independent variables (income, urbanization, and price) substantially affect long-run energy consumption in Nigeria, the USA, and China, whereas income has no significant effect on short-run energy demand in the USA and Nigeria. In addition, the long-run effect of urbanization is relatively stronger in China. Since urbanization is a key factor in energy demand, it is recommended that more attention be given to the development of rural communities to reduce the inflow of migrants into urban communities, which increases energy demand; excess energy use should be penalized, while energy management should be incentivized.
Keywords: economic growth, energy demand, income, real GDP, urbanization, VECM
Procedia PDF Downloads 311
588 A Transformer-Based Approach for Multi-Human 3D Pose Estimation Using Color and Depth Images
Authors: Qiang Wang, Hongyang Yu
Abstract:
Multi-human 3D pose estimation is a challenging task in computer vision that aims to recover the 3D joint locations of multiple people from multi-view images. In contrast to traditional methods, which typically use only color (RGB) images as input, our approach utilizes both the color and depth (D) information contained in RGB-D images. We also employ a transformer-based model as the backbone of our approach, which captures long-range dependencies and has been shown to perform well on various sequence modeling tasks. Our method is trained and tested on the Carnegie Mellon University (CMU) Panoptic dataset, which contains a diverse set of indoor and outdoor scenes with multiple people in varying poses and clothing. We evaluate the performance of our model on the standard 3D pose estimation metric of mean per-joint position error (MPJPE). Our results show that the transformer-based approach outperforms traditional methods and achieves competitive results on the CMU Panoptic dataset. We also perform an ablation study to understand the impact of different design choices on overall performance. In summary, our work demonstrates the effectiveness of a transformer-based approach with RGB-D images for multi-human 3D pose estimation, with potential applications in real-world scenarios such as human-computer interaction, robotics, and augmented reality.
Keywords: multi-human 3D pose estimation, RGB-D images, transformer, 3D joint locations
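The evaluation metric is simple to state in code. A minimal sketch of MPJPE, assuming joints arrive as (n_joints, 3) arrays and, in the multi-person case, that predictions have already been matched to ground-truth people:

```python
import numpy as np

def mpjpe(pred, gt):
    """Mean per-joint position error: the average Euclidean distance
    between predicted and ground-truth 3D joint positions.

    pred, gt: arrays of shape (n_joints, 3), in the same units
    (typically millimetres for pose benchmarks).
    """
    return float(np.linalg.norm(pred - gt, axis=1).mean())
```

Benchmark protocols differ in whether poses are root-aligned or Procrustes-aligned before computing this distance; the sketch applies no alignment.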
Procedia PDF Downloads 78587 NOx Prediction by Quasi-Dimensional Combustion Model of Hydrogen Enriched Compressed Natural Gas Engine
Authors: Anas Rao, Hao Duan, Fanhua Ma
Abstract:
The dependency on fossil fuels can be reduced by using hydrogen-enriched compressed natural gas (HCNG) in transportation vehicles. However, the NOx emissions of HCNG engines are significantly higher, which has turned out to be their major drawback; the study of NOx emissions from HCNG engines is therefore an important area of research. In this context, experiments were performed at different hydrogen percentages, ignition timings, air-fuel ratios, manifold absolute pressures, loads and engine speeds. The simulation was then accomplished with a quasi-dimensional combustion model of the HCNG engine, to which an NO mechanism was coupled in order to investigate NOx emission. Three NOx mechanisms were used to predict NOx emission: thermal NOx, prompt NOx and the N2O mechanism. For validation, the NO curve was transformed into NO packets based on a temperature difference of 100 K for lean-burn and 60 K for stoichiometric conditions, with the width of each packet taken as the ratio of the crank duration of the packet to the total burn duration. The combustion chamber of the engine was divided into three zones, with each zone equal to the product of the summation of NO packets and space. To check the accuracy of the model, the percentage error of NOx emission was evaluated; it lies within ±6% and ±10% for the lean-burn and stoichiometric conditions, respectively. Finally, the percentage contribution of each NO formation mechanism was evaluated. Keywords: quasi-dimensional combustion, thermal NO, prompt NO, NO packet
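The packet transformation described above might be sketched as follows, on a synthetic burned-gas history (the temperatures, NO values and packet bookkeeping here are illustrative assumptions, not the paper's model):

```python
import numpy as np

# Synthetic burned-zone history over crank angle (deg): temperature rises
# then falls; NO concentration tracked alongside (hypothetical values).
crank = np.linspace(-10.0, 40.0, 501)            # crank angle, deg
temp = 1800.0 + 900.0 * np.exp(-((crank - 10.0) / 12.0) ** 2)
no_ppm = 1e-3 * np.maximum(temp - 1800.0, 0.0) ** 1.5

def to_packets(crank, temp, no_ppm, dT=100.0):
    """Split the NO curve into packets whenever the burned-gas
    temperature has moved by dT (100 K lean-burn, 60 K stoichiometric).
    Packet width = crank duration of the packet / total burn duration."""
    total = crank[-1] - crank[0]
    packets, start, t_ref = [], 0, temp[0]
    for i in range(1, len(temp)):
        if abs(temp[i] - t_ref) >= dT or i == len(temp) - 1:
            width = (crank[i] - crank[start]) / total
            packets.append((no_ppm[start:i + 1].mean(), width))
            start, t_ref = i, temp[i]
    return packets

packets = to_packets(crank, temp, no_ppm)
print(len(packets), "packets; widths sum to",
      round(sum(w for _, w in packets), 3))
```

The packet widths partition the burn duration, so they sum to one by construction.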
Procedia PDF Downloads 250586 Bayesian Borrowing Methods for Count Data: Analysis of Incontinence Episodes in Patients with Overactive Bladder
Authors: Akalu Banbeta, Emmanuel Lesaffre, Reynaldo Martina, Joost Van Rosmalen
Abstract:
Including data from previous studies (historical data) in the analysis of a current study may reduce the sample size requirement and/or increase the power of the analysis. The most common example is incorporating historical control data in the analysis of a current clinical trial; however, this only applies when the historical control data are similar enough to the current control data. Recently, several Bayesian approaches for incorporating historical data have been proposed, such as the meta-analytic-predictive (MAP) prior and the modified power prior (MPP), both for a single control arm as well as for multiple historical control arms. Here, we examine the performance of the MAP and MPP approaches for the analysis of (over-dispersed) count data. To this end, we propose a computational method for the MPP approach for the Poisson and negative binomial models. We conducted an extensive simulation study to assess the performance of these Bayesian approaches, and we additionally illustrate them on an overactive bladder data set. For similar data across the control arms, the MPP approach outperformed the MAP approach with respect to statistical power. When the means across the control arms differ, the MPP yielded a slightly inflated type I error (TIE) rate, whereas the MAP did not. In contrast, when the dispersion parameters differ, the MAP gave an inflated TIE rate, whereas the MPP did not. We conclude that the MPP approach is more promising than the MAP approach for incorporating historical count data. Keywords: count data, meta-analytic prior, negative binomial, Poisson
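For intuition, a conjugate power prior for Poisson counts with a fixed discounting parameter can be written in a few lines (a simplified sketch: the modified power prior in the paper treats the discounting parameter as random, and the data below are simulated):

```python
import numpy as np

def poisson_power_prior_posterior(current, historical, delta, a0=0.5, b0=0.5):
    """Posterior Gamma(a, b) for a Poisson rate, discounting the
    historical counts by a fixed power parameter delta in [0, 1].
    delta = 0 ignores history; delta = 1 pools it fully.
    (The modified power prior treats delta as random; fixed here.)"""
    a = a0 + delta * np.sum(historical) + np.sum(current)
    b = b0 + delta * len(historical) + len(current)
    return a, b  # posterior mean a/b, variance a/b**2

rng = np.random.default_rng(1)
hist = rng.poisson(4.0, size=120)   # historical control arm
cur = rng.poisson(4.0, size=30)     # current control arm

for delta in (0.0, 0.5, 1.0):
    a, b = poisson_power_prior_posterior(cur, hist, delta)
    print(f"delta={delta}: posterior mean {a / b:.3f}, sd {np.sqrt(a) / b:.3f}")
```

As delta grows, the effective sample size increases and the posterior standard deviation shrinks, which is exactly the borrowing-versus-robustness trade-off studied in the paper.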
Procedia PDF Downloads 116585 Global Navigation Satellite System and Precise Point Positioning as Remote Sensing Tools for Monitoring Tropospheric Water Vapor
Authors: Panupong Makvichian
Abstract:
Global Navigation Satellite Systems (GNSS) are nowadays a common technology that improves navigation functions in our lives, and GNSS is also now being employed as an accurate atmospheric sensor. Meteorology is a practical application of GNSS that runs unnoticed in the background of everyday life. GNSS Precise Point Positioning (PPP) is a positioning method that requires data from a single dual-frequency receiver together with precise information about satellite positions and satellite clocks; in addition, careful attention to mitigating the various error sources is required. All of these data are combined in a sophisticated mathematical algorithm. This research demonstrates how GNSS and the PPP method can provide high-precision estimates, such as 3D positions or zenith tropospheric delays (ZTDs). ZTDs combined with pressure and temperature information allow us to estimate the water vapor in the atmosphere as precipitable water vapor (PWV). If the process is replicated for a network of GNSS sensors, we can create thematic maps that allow the extraction of water content information at any location within the network area. All of this is possible thanks to advances in GNSS data processing, which enable the use of GNSS data for climatic trend analysis and the acquisition of further knowledge about atmospheric water content. Keywords: GNSS, precise point positioning, zenith tropospheric delays, precipitable water vapor
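The ZTD-to-PWV conversion outlined above can be sketched with the commonly used Saastamoinen hydrostatic model and Bevis mean-temperature approximation (the constants are typical literature values; treat them as assumptions rather than the author's exact choices):

```python
import numpy as np

def ztd_to_pwv(ztd_m, pressure_hpa, temp_k, lat_deg, height_m):
    """Convert a zenith tropospheric delay (m) to precipitable water
    vapor (mm) using the Saastamoinen hydrostatic delay and the Bevis
    mean-temperature approximation (typical literature constants)."""
    # Zenith hydrostatic delay (Saastamoinen), metres
    phi = np.radians(lat_deg)
    zhd = 0.0022768 * pressure_hpa / (
        1.0 - 0.00266 * np.cos(2.0 * phi) - 0.00000028 * height_m)
    zwd = ztd_m - zhd                      # zenith wet delay
    tm = 70.2 + 0.72 * temp_k              # Bevis mean temperature of the column
    k2p, k3 = 0.221, 3739.0                # K/Pa and K^2/Pa (literature values)
    rho_w, r_v = 1000.0, 461.5             # kg/m^3, J/(kg K)
    pi = 1.0e6 / (rho_w * r_v * (k3 / tm + k2p))   # ~0.15, dimensionless
    return pi * zwd * 1000.0               # mm of precipitable water

# Example: ZTD of 2.45 m at sea level, 1013 hPa, 288 K, latitude 14 N
print(round(ztd_to_pwv(2.45, 1013.0, 288.0, 14.0, 0.0), 1), "mm")
```

The hydrostatic part is removed with surface pressure, and the remaining wet delay is scaled by the dimensionless factor (about 0.15) into water vapor content.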
Procedia PDF Downloads 196584 Wrong Site Surgery Should Not Occur In This Day And Age!
Authors: C. Kuoh, C. Lucas, T. Lopes, I. Mechie, J. Yoong, W. Yoong
Abstract:
For all surgeons, there is one preventable but still recurring complication: wrong site surgery. Wrong site surgeries can have potentially catastrophic, irreversible, or even fatal consequences for patients. With the exponential development of microsurgery and the use of advanced technological tools, operating on the wrong side, the wrong anatomical part, or even the wrong person is seen as the most visible and destructive of all surgical errors, and perhaps the error most dreaded by clinicians, as it threatens their licenses and arouses feelings of guilt. Despite the implementation of the WHO surgical safety checklist more than a decade ago, the incidence of wrong site surgery remains relatively high, leading to tremendous physical and psychological repercussions for the clinicians involved, as well as a financial burden for the healthcare institution. In this presentation, the authors explore the various factors that can lead to wrong site surgery, a combination of environmental and human factors, and evaluate their impact on patients, practitioners, their families, and the medical profession. Major contributing factors to these “never events” include deviations from checklists, excessive workload, and poor communication. Two real-life cases are discussed, and systems that can be implemented to prevent these errors are highlighted alongside lessons learnt from other industries. The authors suggest that reinforcing a culture of speaking up, implementing professional training, and greater patient involvement can improve safety in surgery and electrosurgery. Keywords: wrong side surgery, never events, checklist, workload, communication
Procedia PDF Downloads 182583 A Case Study for User Rating Prediction on Automobile Recommendation System Using Mapreduce
Authors: Jiao Sun, Li Pan, Shijun Liu
Abstract:
Recommender systems have been widely used in contemporary industry, and plenty of work has been done in this field to help users identify items of interest. The Collaborative Filtering (CF) algorithm is an important technology in recommender systems. However, less work has been done on automobile recommendation systems despite the sharp increase in the number of automobiles. What’s more, computational speed is a major weakness of collaborative filtering technology; using the MapReduce framework to optimize the CF algorithm is therefore a viable solution to this performance problem. In this paper, we present a recommendation of users’ comments on industrial automobiles with various properties, based on real-world industrial datasets of user-automobile comment data, and provide recommendations for automobile providers to help them predict users’ comments on automobiles with newly introduced properties. Firstly, we address the sparseness of the matrix through prior construction of the score matrix. Secondly, we solve the data normalization problem by removing dimensional effects from the raw automobile data, since the different dimensions of automobile properties introduce substantial error into the CF calculation. Finally, we use the MapReduce framework to optimize the CF algorithm, and the computational speed is improved considerably. The UV decomposition used in this paper is a commonly used matrix factorization technique in CF algorithms that does not require calculating the interpolation weights of neighbors, which is more convenient in industry. Keywords: collaborative filtering, recommendation, data normalization, mapreduce
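A minimal single-machine sketch of UV decomposition on observed ratings looks like this (hypothetical ratings; the paper distributes the equivalent computation with MapReduce):

```python
import numpy as np

rng = np.random.default_rng(2)

# Sparse user-automobile rating matrix (0 = missing), hypothetical data.
R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [0, 1, 5, 4]], dtype=float)
mask = R > 0

k, lr, reg = 2, 0.01, 0.02                 # latent factors, step size, ridge
U = 0.1 * rng.standard_normal((R.shape[0], k))
V = 0.1 * rng.standard_normal((k, R.shape[1]))

# Stochastic gradient descent on observed entries only: UV decomposition
# needs no neighbour interpolation weights, unlike neighbourhood CF.
for _ in range(2000):
    for i, j in zip(*np.nonzero(mask)):
        err = R[i, j] - U[i] @ V[:, j]
        U[i] += lr * (err * V[:, j] - reg * U[i])
        V[:, j] += lr * (err * U[i] - reg * V[:, j])

pred = U @ V
rmse = np.sqrt(((R - pred)[mask] ** 2).mean())
print("training RMSE:", round(rmse, 3))
print("predicted rating for user 0, automobile 2:", round(pred[0, 2], 2))
```

In a MapReduce setting, each map task would emit per-entry gradient contributions keyed by row or column, and reducers would aggregate the factor updates.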
Procedia PDF Downloads 215582 Estimation of the Road Traffic Emissions and Dispersion in the Developing Countries Conditions
Authors: Hicham Gourgue, Ahmed Aharoune, Ahmed Ihlal
Abstract:
We present in this work our model of road traffic emissions (line sources) and of the dispersion of these emissions, named DISPOLSPEM (Dispersion of Poly Sources and Pollutants Emission Model). In its emission part, the model was designed to keep the bottom-up and top-down approaches consistent. It also allows the generation of emission inventories from a reduced set of input parameters, adapted to the conditions existing in Morocco and in other developing countries. While several simplifications are made, the performance of the model is preserved. A further important advantage of the model is that it allows the calculation of the uncertainty in the emission rate with respect to each of the input parameters. In the dispersion part of the model, an improved line source model has been developed, implemented and tested against a reference solution. It improves on the accuracy of previous line source Gaussian plume formulas without being too demanding in terms of computational resources. In the case study presented here, the biggest errors were associated with the ends of line source sections; these errors are canceled by adjacent sections of line sources during the simulation of a road network. In cases where the wind is parallel to the source line, the combination of a discretized source with analytical line source formulas reduces the error remarkably; because this combination is applied only for a small number of wind directions, it should not excessively increase the calculation time. Keywords: air pollution, dispersion, emissions, line sources, road traffic, urban transport
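The discretized-source approach mentioned above can be illustrated by summing Gaussian point-plume contributions along a road segment (a generic sketch with Briggs-style dispersion curves, not the DISPOLSPEM formulas):

```python
import numpy as np

def point_plume(q, u, y, z, h, sy, sz):
    """Gaussian plume from a point source of strength q (g/s):
    wind speed u (m/s), effective release height h (m), dispersion
    parameters sy, sz (m), with ground reflection included."""
    lat = np.exp(-y**2 / (2 * sy**2))
    vert = (np.exp(-(z - h)**2 / (2 * sz**2)) +
            np.exp(-(z + h)**2 / (2 * sz**2)))
    return q / (2 * np.pi * u * sy * sz) * lat * vert

def line_source(q_per_m, x0, y0, x1, y1, rx, ry, u, n=200):
    """Approximate a finite road segment as n point sources summed;
    receptor at (rx, ry), ground level, release height 0.5 m, wind
    along +x. Sigmas are Briggs-like open-country curves (illustrative)."""
    xs, ys = np.linspace(x0, x1, n), np.linspace(y0, y1, n)
    seg = np.hypot(x1 - x0, y1 - y0) / n       # element length, m
    c = 0.0
    for ex, ey in zip(xs, ys):
        dx, dy = rx - ex, ry - ey
        if dx <= 0:
            continue                           # element downwind of receptor
        sig_y = 0.22 * dx / np.sqrt(1 + 0.0001 * dx)
        sig_z = 0.20 * dx
        c += point_plume(q_per_m * seg, u, dy, 0.0, 0.5, sig_y, sig_z)
    return c  # g/m^3

# Road along y from -200 m to 200 m at x = 0; receptor 50 m downwind.
conc = line_source(q_per_m=0.001, x0=0, y0=-200, x1=0, y1=200,
                   rx=50.0, ry=0.0, u=3.0)
print(f"{conc * 1e6:.1f} ug/m3")
```

The end-of-segment errors noted in the abstract correspond to the truncation of this sum at the segment boundaries, which adjacent segments compensate in a full road network.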
Procedia PDF Downloads 439581 Prediction of Boundary Shear Stress with Gradually Tapering Flood Plains
Authors: Spandan Sahu, Amiya Kumar Pati, Kishanjit Kumar Khatua
Abstract:
Rivers are the main source of water. A river is a natural open channel that gives rise to many complex phenomena that need to be tackled, such as critical flow conditions, boundary shear stress and depth-averaged velocity. The development of society depends to a large extent on the flow of rivers, which are major sources of sediments and of specific ingredients essential for human beings. During floods, part of the flow is carried by the main channel and the rest by the flood plains. For such compound asymmetric channels, the flow structure becomes complicated due to momentum exchange between the main channel and the adjoining flood plains. The distribution of boundary shear among the subsections provides insight into the momentum transfer across the interface between the main channel and the flood plains. Obtaining accurate experimental data is difficult because of the complexity of the problem. Hence, the Conveyance Estimation System (CES) software has been used to determine the shear stresses at different sections of an open channel having asymmetric flood plains on both sides of the main channel, and the results are compared with those for symmetric flood plains for various geometrical shapes and flow conditions. An error analysis is also performed to assess the accuracy of the model. Keywords: depth-averaged velocity, non-prismatic compound channel, relative flow depth, velocity distribution
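For orientation, a naive subsection estimate of boundary shear follows from the uniform-flow balance tau = rho * g * R * S (hypothetical geometry below; this simple split ignores the interface momentum exchange that CES models):

```python
# Reach-averaged boundary shear stress from the uniform-flow balance
# tau = rho * g * R * S (hydraulic radius R, energy slope S) for a
# compound section: main channel plus one flood plain subsection.
RHO, G = 1000.0, 9.81            # kg/m^3, m/s^2

def shear(area, wetted_perimeter, slope):
    r = area / wetted_perimeter  # hydraulic radius, m
    return RHO * G * r * slope   # N/m^2

# Hypothetical geometry: main channel 2 m wide flowing 0.6 m deep
# (0.1 m above its 0.5 m banks), flood plain 3 m wide under 0.1 m of
# flow; energy slope 0.001.
main = shear(area=2.0 * 0.6, wetted_perimeter=2.0 + 2 * 0.5, slope=0.001)
plain = shear(area=3.0 * 0.1, wetted_perimeter=3.0 + 0.1, slope=0.001)
print(f"main channel: {main:.2f} N/m2, flood plain: {plain:.2f} N/m2")
```

The deeper main channel carries the larger shear; the redistribution of shear toward the interface under momentum exchange is what the CES computation captures and the naive split misses.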
Procedia PDF Downloads 121580 A Non-Destructive Estimation Method for Internal Time in Perilla Leaf Using Hyperspectral Data
Authors: Shogo Nagano, Yusuke Tanigaki, Hirokazu Fukuda
Abstract:
Vegetables harvested early in the morning or late in the afternoon are valued in plant production, so the time of harvest is important, and the biological functions known as circadian clocks have a significant effect on this harvest timing. The purpose of this study was to estimate the circadian clock non-destructively and so construct a method for determining a suitable harvest time. We took eight samples of green perilla (Perilla frutescens var. crispa) every 4 hours, six times over 1 day, and analyzed all samples at the same time. A hyperspectral camera was used to collect spectrum intensities at 141 different wavelengths (350–1050 nm). Calculation of the correlations between the spectrum intensity at each wavelength and the harvest time suggested the suitability of the hyperspectral camera for non-destructive estimation. However, even the most highly correlated wavelength had only a weak correlation, so we used machine learning to raise the accuracy of the estimation and constructed a machine learning model to estimate the internal time of the circadian clock. Artificial neural networks (ANNs) were used because they are an effective analysis method for large amounts of data. Using the estimation model resulted in an error between estimated and real times of 3 min, and the estimations were made in less than 2 hours. Thus, we successfully demonstrated this method of non-destructively estimating internal time. Keywords: artificial neural network (ANN), circadian clock, green perilla, hyperspectral camera, non-destructive evaluation
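The circular nature of internal time suggests regressing on the (sin, cos) of the clock phase rather than the raw hour; a small numpy network on synthetic spectra illustrates the idea (everything below is simulated, not the study's data or architecture):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-in for the hyperspectral data: 141 wavelength
# intensities whose first two bands oscillate with internal time.
n, n_bands = 240, 141
hour = rng.uniform(0, 24, n)
phase = 2 * np.pi * hour / 24
X = rng.normal(0, 0.05, (n, n_bands))
X[:, 0] += np.sin(phase)
X[:, 1] += np.cos(phase)
Y = np.column_stack([np.sin(phase), np.cos(phase)])  # circular target

# One-hidden-layer network trained by plain gradient descent; predicting
# (sin, cos) avoids the 24 h -> 0 h wrap-around discontinuity.
W1 = 0.1 * rng.standard_normal((n_bands, 16)); b1 = np.zeros(16)
W2 = 0.1 * rng.standard_normal((16, 2)); b2 = np.zeros(2)
lr = 0.05
for _ in range(3000):
    H = np.tanh(X @ W1 + b1)
    P = H @ W2 + b2
    dP = 2 * (P - Y) / n                     # d(MSE)/dP
    dW2, db2 = H.T @ dP, dP.sum(0)
    dH = dP @ W2.T * (1 - H**2)
    dW1, db1 = X.T @ dH, dH.sum(0)
    W1 -= lr * dW1; b1 -= lr * db1; W2 -= lr * dW2; b2 -= lr * db2

pred_hour = (np.arctan2(P[:, 0], P[:, 1]) * 24 / (2 * np.pi)) % 24
err = np.abs(pred_hour - hour)
err = np.minimum(err, 24 - err)              # circular error
print("mean absolute error:", round(err.mean() * 60, 1), "min")
```

The recovered hour comes from arctan2 on the predicted (sin, cos) pair, so errors are measured on the circle rather than across the midnight wrap.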
Procedia PDF Downloads 297579 Flame Volume Prediction and Validation for Lean Blowout of Gas Turbine Combustor
Authors: Ejaz Ahmed, Huang Yong
Abstract:
The operation of aero engines in the vicinity of lean blowout (LBO) limits is of critical importance. Lefebvre’s model of LBO, based on empirical correlation, has been extended by the authors to a flame volume (Vf) concept. The flame volume takes into account the effects of geometric configuration and the complex spatial interaction of mixing, turbulence, heat transfer and combustion processes inside the gas turbine combustion chamber; for these reasons, flame volume based LBO predictions are more accurate. Although the LBO prediction accuracy has improved, estimating Vf in real gas turbine combustors remains a challenge. This work extends the flame volume prediction approach, previously based on fuel iterative approximation with cold flow simulations, to reactive flow simulations. The flame volume for 11 combustor configurations has been simulated and validated against experimental data. To make the prediction methodology robust, as required in the preliminary design stage, reactive flow simulations were carried out with a combination of the probability density function (PDF) and discrete phase model (DPM) in FLUENT 15.0, and a criterion for flame identification was defined. Two important parameters, the critical injection diameter (Dp,crit) and the critical temperature (Tcrit), were identified, and their influence on the reactive flow simulation was studied for Vf estimation. The obtained results exhibit ±15% error in Vf estimation compared with experimental data. Keywords: CFD, combustion, gas turbine combustor, lean blowout
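A temperature-threshold criterion for flame identification, of the kind used for Vf estimation, can be sketched on a toy temperature field (the grid, temperatures and threshold below are illustrative assumptions, not the paper's values):

```python
import numpy as np

def flame_volume(temp_field, cell_volume, t_crit=1600.0):
    """Sum the volume of all CFD cells hotter than the critical
    temperature t_crit (K) used as the flame-identification criterion."""
    return np.count_nonzero(temp_field > t_crit) * cell_volume

# Hypothetical 40x40x40 uniform grid of a 0.1 m cube combustor with a
# hot Gaussian kernel standing in for the reaction zone.
n = 40
cell = (0.1 / n) ** 3
ax = np.linspace(-0.05, 0.05, n)
xg, yg, zg = np.meshgrid(ax, ax, ax, indexing="ij")
r2 = xg**2 + yg**2 + zg**2
temp = 300.0 + 1900.0 * np.exp(-r2 / (2 * 0.02**2))

vf = flame_volume(temp, cell)
print(f"flame volume: {vf * 1e6:.1f} cm^3")
```

In a real CFD post-processing step, the same thresholding would run over the solver's unstructured cells and their individual volumes rather than a uniform grid.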
Procedia PDF Downloads 266578 Sequence Analysis and Structural Implications of Rotavirus Capsid Proteins
Authors: Nishal Parbhoo, John B. Dewar, Samantha Gildenhuys
Abstract:
Rotavirus is the major cause of severe gastroenteritis worldwide in children aged 5 and younger, and death rates are high, particularly in developing countries. The mature rotavirus is a non-enveloped, triple-layered nucleocapsid containing 11 double-stranded RNA segments. Here, a global view of the sequence and structure of the three main capsid proteins, VP7, VP6 and VP2, is taken by generating a consensus sequence for each of these proteins for each species, from published data on representative rotavirus genotypes from across the world and across species. The degree of conservation between species was mapped onto homology models of each protein. VP7 shows the highest level of variation, with 14-45 amino acid sites showing conservation of less than 60%. These changes are localized to the outer surface, which is exposed to antibodies, suggesting a possible mechanism for evading the immune system. The middle layer, VP6, shows lower variability, with only 14-32 sites having less than 70% conservation, and the inner structural layer, VP2, shows the lowest variability, with only 1-16 sites having less than 70% conservation across species. These results correlate with the proteins’ multiple structural roles. Although the nucleotide sequences vary, owing to error-prone replication and a lack of proofreading, the corresponding amino acid sequences of VP2, VP6 and VP7 remain conserved. The sequence conservation maintained by the virus results in stable protein structures fit for function, which can be exploited in drug design, molecular studies and biotechnological applications. Keywords: amino acid sequence conservation, capsid protein, protein structure, vaccine candidate
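Consensus building and per-site conservation, as described above, reduce to simple column-wise counting over aligned sequences (the toy sequences below are invented, not actual VP7 data):

```python
from collections import Counter

def consensus_and_conservation(aligned):
    """Column-wise consensus residue and % conservation for a set of
    pre-aligned, equal-length amino acid sequences."""
    cons, pct = [], []
    for column in zip(*aligned):
        residue, count = Counter(column).most_common(1)[0]
        cons.append(residue)
        pct.append(100.0 * count / len(column))
    return "".join(cons), pct

# Toy alignment standing in for VP7 sequences from different genotypes.
seqs = ["MYGIEYTTVL",
        "MYGIEYTSVL",
        "MYGVEYTTVL",
        "MYGIEYTTIL"]
consensus, conservation = consensus_and_conservation(seqs)
print(consensus)
low = [i for i, p in enumerate(conservation) if p < 60.0]
print("sites under 60% conservation:", low)
```

Mapping the per-site percentages onto a homology model then amounts to coloring each residue by its conservation value.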
Procedia PDF Downloads 289577 Estimation of Normalized Glandular Doses Using a Three-Layer Mammographic Phantom
Authors: Kuan-Jen Lai, Fang-Yi Lin, Shang-Rong Huang, Yun-Zheng Zeng, Po-Chieh Hsu, Jay Wu
Abstract:
The normalized glandular dose (DgN) is used to estimate the energy deposited by mammography in clinical practice. Monte Carlo simulations frequently use a uniformly mixed phantom to calculate the conversion factor; however, breast tissues are not uniformly distributed, which leads to errors in the conversion factor estimation. This study constructed a three-layer phantom to estimate the normalized glandular dose more accurately. The MCNP code (Monte Carlo N-Particle code) was used to create the geometric structure. We simulated three target/filter combinations (Mo/Mo, Mo/Rh, Rh/Rh), six voltages (25~35 kVp), six HVL parameters and nine breast phantom thicknesses (2~10 cm) for the three-layer mammographic phantom, and the conversion factors for 25%, 50% and 75% glandularity were calculated. The error of the conversion factors compared with the results of the American College of Radiology (ACR) was within 6%; for Rh/Rh, the difference was within 9%. The difference between the 50% average glandularity and the uniform phantom was 7.1% ~ -6.7% for the Mo/Mo combination at a voltage of 27 kVp, a half value layer of 0.34 mmAl, and a breast thickness of 4 cm. According to the simulation results, regression analysis found that the three-layer mammographic phantom can be used to accurately calculate the conversion factors at 0% ~ 100% glandularity. The difference in glandular tissue distribution leads to errors in the conversion factor calculation; the three-layer mammographic phantom can provide accurate estimates of glandular dose in clinical practice. Keywords: Monte Carlo simulation, mammography, normalized glandular dose, glandularity
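The regression over glandularity mentioned in the results can be sketched as follows (the DgN values below are invented placeholders, not the study's simulated factors):

```python
import numpy as np

# Hypothetical DgN conversion factors for one beam quality
# (e.g. Mo/Mo, 27 kVp, 4 cm breast), simulated at three glandularities.
glandularity = np.array([25.0, 50.0, 75.0])      # percent
dgn = np.array([0.192, 0.172, 0.156])            # illustrative values

# Linear regression over glandularity, as suggested by the study's
# regression analysis, to estimate DgN anywhere in 0-100%.
slope, intercept = np.polyfit(glandularity, dgn, 1)
estimate = lambda g: slope * g + intercept

for g in (0.0, 40.0, 100.0):
    print(f"{g:5.1f}% glandular: DgN ~ {estimate(g):.3f}")
```

DgN decreases with glandularity because glandular tissue absorbs a larger share of the dose at higher fractions; the fitted line interpolates and extrapolates the three simulated points.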
Procedia PDF Downloads 188