Search results for: error analysis
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 28258

27898 Heat-Induced Uncertainty of Industrial Computed Tomography Measuring a Stainless Steel Cylinder

Authors: Verena M. Moock, Darien E. Arce Chávez, Mariana M. Espejel González, Leopoldo Ruíz-Huerta, Crescencio García-Segundo

Abstract:

Uncertainty analysis in industrial computed tomography is commonly tied to metrologically traceable tools, which offer precision measurements of external part features. Unfortunately, no such reference tool exists for internal measurements, which would profit from the unique imaging potential of X-rays. Uncertainty approximations for computed tomography are still based on general aspects of the industrial machine and do not adapt to acquisition parameters or part characteristics. The present study investigates the impact of acquisition time on the dimensional uncertainty when measuring a stainless steel cylinder with a circular tomography scan. The authors develop the figure difference method for X-ray radiography to evaluate the volumetric differences introduced within the projected absorption maps of the metal workpiece. The dimensional uncertainty is dominated by photon energy dissipated as heat, which causes thermal expansion of the metal, as monitored by an infrared camera inside the industrial tomograph. With the proposed methodology, we are able to show evolving temperature differences throughout the tomography acquisition. This is an early study showing that the number of projections in computed tomography induces dimensional error through energy absorption; the magnitude of the error depends on the thermal properties of the sample and on the acquisition parameters, which introduce apparent, non-uniform, unwanted volumetric expansion. We introduce infrared imaging for the experimental display of metrological uncertainty in a metal part of symmetric geometry. We argue that these results are of fundamental value for balancing the number of projections against the uncertainty tolerance when performing precision measurements with industrial X-ray tomography.
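To make the scale of the effect concrete, the following is a minimal Python sketch of the linear thermal expansion relation ΔL = αLΔT that drives the reported dimensional error; the expansion coefficient, diameter, and temperature rise are illustrative assumptions, not values from the study.

```python
# Minimal sketch: dimensional error of a heated stainless steel cylinder
# via linear thermal expansion. Coefficient, diameter, and temperature
# rise are illustrative assumptions, not values from the study.

ALPHA_SS = 16e-6      # 1/K, typical linear expansion coefficient of stainless steel
DIAMETER_MM = 25.0    # nominal cylinder diameter (assumed)

def expansion_error_mm(diameter_mm: float, delta_t_kelvin: float,
                       alpha: float = ALPHA_SS) -> float:
    """Apparent diameter increase caused by a temperature rise."""
    return alpha * diameter_mm * delta_t_kelvin

# e.g. a 5 K rise accumulated over a long scan with many projections
print(f"{expansion_error_mm(DIAMETER_MM, 5.0) * 1000:.2f} um")  # ~2.00 um
```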

Keywords: computed tomography, digital metrology, infrared imaging, thermal expansion

Procedia PDF Downloads 100
27897 Corrective Feedback and Uptake Patterns in English Speaking Lessons at Hanoi Law University

Authors: Nhac Thanh Huong

Abstract:

New teaching methods have changed teachers' roles in the English class, in which error correction is an integral part. Language errors and corrective feedback have long interested researchers in foreign language teaching; however, the techniques and effectiveness of teachers' feedback remain a matter of much controversy. The present case study was carried out with a view to identifying the patterns of teachers' corrective feedback and their impact on students' uptake in English speaking lessons of legal English majors at Hanoi Law University. To achieve those aims, the study uses classroom observation as the main method of data collection to seek answers to two questions: 1. What patterns of corrective feedback occur in English speaking lessons for second-year legal English majors at Hanoi Law University? 2. To what extent does that corrective feedback lead to students' uptake? The study provided some important findings, among which was a close relationship between corrective feedback and uptake. In particular, recast was the most commonly used feedback type, yet it was the least effective in terms of students' uptake and repair, while the most successful feedback types, namely meta-linguistic feedback, clarification requests and elicitation, which led to student-generated repair, were used at a much lower rate by teachers. Furthermore, the study revealed that different types of errors call for different types of feedback, and that the use of feedback depended on students' English proficiency level. In light of the findings, a number of pedagogical implications are drawn in the hope of enhancing the effectiveness of teachers' corrective feedback on students' uptake in the foreign language acquisition process.

Keywords: corrective feedback, error, uptake, English speaking lesson

Procedia PDF Downloads 233
27896 Performance of High Efficiency Video Codec over Wireless Channels

Authors: Mohd Ayyub Khan, Nadeem Akhtar

Abstract:

Due to recent advances in wireless communication technologies and hand-held devices, there is huge demand for video-based applications such as video surveillance, video conferencing, remote surgery, Digital Video Broadcast (DVB), IPTV, online learning, YouTube, WhatsApp, Instagram, Facebook, and interactive video games. However, raw video occupies very high bandwidth, which makes compression a must before transmission over wireless channels. High Efficiency Video Coding (HEVC), also called H.265, is the latest state-of-the-art video coding standard, developed jointly by ITU-T and ISO/IEC. HEVC targets high-resolution video such as 4K and 8K and can fulfil the recent demand for video services. The compression ratio achieved by HEVC is twice that of its predecessor H.264/AVC at the same quality level. Compression efficiency is generally increased by removing more correlation between frames/pixels using complex techniques such as extensive intra and inter prediction. As more correlation is removed, the interdependency among coded bits increases, so bit errors may have a large effect on the reconstructed video; sometimes even a single bit error can lead to catastrophic failure of the reconstructed video. In this paper, we study the performance of the HEVC bitstream over an additive white Gaussian noise (AWGN) channel. Moreover, HEVC over Quadrature Amplitude Modulation (QAM) combined with forward error correction (FEC) schemes is also explored over the noisy channel. The video is encoded using HEVC, and the coded bitstream is channel coded to provide some redundancy. The channel-coded bitstream is then modulated using QAM and transmitted over the AWGN channel. At the receiver, the symbols are demodulated and channel decoded to obtain the video bitstream, which is then used to reconstruct the video with the HEVC decoder. It is observed that as the signal-to-noise ratio of the channel decreases, the quality of the reconstructed video degrades drastically; using proper FEC codes, the quality can be restored to a certain extent. Thus, the performance analysis of HEVC presented in this paper may assist in designing the optimized FEC code rate such that the quality of the reconstructed video is maximized over wireless channels.
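As an illustration of the transmission chain described above, here is a minimal Python sketch of the channel stage only: Gray-mapped QPSK over AWGN with a Monte Carlo BER estimate. In the full system the HEVC and FEC encoders would precede this stage and the corresponding decoders would follow it; the modulation order and SNR points are assumptions.

```python
# Minimal sketch of the channel part of the pipeline: map a bitstream
# to Gray-mapped QPSK, pass it through AWGN, and measure BER.
import numpy as np

rng = np.random.default_rng(0)

def qpsk_awgn_ber(n_bits=200_000, ebn0_db=6.0):
    bits = rng.integers(0, 2, n_bits)
    # Gray-mapped QPSK: one bit per I rail, one per Q rail
    symbols = ((2 * bits[0::2] - 1) + 1j * (2 * bits[1::2] - 1)) / np.sqrt(2)
    ebn0 = 10 ** (ebn0_db / 10)
    sigma = np.sqrt(1 / (2 * 2 * ebn0))   # 2 bits/symbol, unit symbol energy
    noisy = symbols + sigma * (rng.standard_normal(symbols.size)
                               + 1j * rng.standard_normal(symbols.size))
    rx = np.empty(n_bits, dtype=int)
    rx[0::2] = (noisy.real > 0).astype(int)
    rx[1::2] = (noisy.imag > 0).astype(int)
    return np.mean(rx != bits)

for snr in (2, 4, 6, 8):
    print(snr, "dB ->", qpsk_awgn_ber(ebn0_db=snr))
```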

Keywords: AWGN, forward error correction, HEVC, video coding, QAM

Procedia PDF Downloads 129
27895 GIS Application in Surface Runoff Estimation for Upper Klang River Basin, Malaysia

Authors: Suzana Ramli, Wardah Tahir

Abstract:

Estimation of surface runoff depth is a vital part of any rainfall-runoff modeling: it leads to stream flow calculation and, later, prediction of flood occurrences. GIS (Geographic Information System) is an advanced and appropriate tool for simulating hydrological models owing to its realistic representation of topography. This paper discusses the calculation of surface runoff depth for two selected events using GIS with the Curve Number method for the Upper Klang River basin. GIS enables the intersection of soil type and land use maps to produce a curve number map. The results show good correlation between simulated and observed values, with R² greater than 0.7. Acceptable performance on statistical measures, namely mean error, absolute mean error, RMSE, and bias, is also reported in the paper.
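For reference, a minimal sketch of the SCS Curve Number relation that the GIS layer intersection feeds, computing runoff depth Q from rainfall P and a cell's CN; the storm depth and CN value below are illustrative assumptions.

```python
# Minimal sketch of the SCS Curve Number runoff-depth calculation:
# each cell's land-use/soil-derived CN gives a retention S, and
# rainfall P gives runoff depth Q.

def scs_runoff_depth_mm(p_mm: float, cn: float) -> float:
    """Runoff depth Q (mm) from rainfall P (mm) and curve number CN."""
    s = 25400.0 / cn - 254.0   # potential maximum retention (mm)
    ia = 0.2 * s               # initial abstraction (standard assumption)
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm + 0.8 * s)

# e.g. an 80 mm storm over a cell with CN = 75 (illustrative values)
print(f"{scs_runoff_depth_mm(80.0, 75.0):.1f} mm")  # ~26.9 mm
```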

Keywords: surface runoff, geographic information system, curve number method, environment

Procedia PDF Downloads 258
27894 Exploring Time-Series Phosphoproteomic Datasets in the Context of Network Models

Authors: Sandeep Kaur, Jenny Vuong, Marcel Julliard, Sean O'Donoghue

Abstract:

Time-series data are useful for modelling as they enable model evaluation. However, when reconstructing models from phosphoproteomic data, non-exact methods are often utilised, as knowledge of the network structure, such as which kinases and phosphatases lead to the observed phosphorylation state, is incomplete. Such reactions are therefore often hypothesised, which gives rise to uncertainty. Here, we propose a framework, implemented via a web-based tool (as an extension to Minardo), which, given time-series phosphoproteomic datasets, can generate κ (kappa) models. The incompleteness and uncertainty in the generated model and reactions are presented clearly to the user visually. Furthermore, we demonstrate, via a toy EGF signalling model, the use of algorithmic verification to verify κ models: manually formulated requirements were evaluated against the model, leading to the highlighting of the nodes causing unsatisfiability (i.e., error-causing nodes). We aim to integrate such methods into our web-based tool and demonstrate how the identified erroneous nodes can be presented to the user visually. Thus, in this research we present a framework that enables a user to explore phosphoproteomic time-series data in the context of models. The observer can visualise which reactions in the model are highly uncertain and which nodes cause incorrect simulation outputs. Such a tool enables an end-user to determine which empirical analysis to perform to reduce uncertainty in the presented model, thus enabling a better understanding of the underlying system.

Keywords: κ-models, model verification, time-series phosphoproteomic datasets, uncertainty and error visualisation

Procedia PDF Downloads 233
27893 Perceptual Image Coding by Exploiting Internal Generative Mechanism

Authors: Kuo-Cheng Liu

Abstract:

In perceptual image coding, the objective is to shape the coding distortion such that its amplitude does not exceed the error visibility threshold, or to remove perceptually redundant signals from the image. While most research focuses on color image coding, perceptually based quantizers developed for luminance signals are often applied directly to chrominance signals, making such color image compression methods inefficient. In this paper, the internal generative mechanism (IGM) is integrated into the design of a color image compression method. The IGM working model, based on structure-based spatial masking, is used to assess subjective distortion visibility thresholds that are more consistent with human visual perception. An estimation method for the structure-based distortion visibility thresholds of the color components is further presented in a locally adaptive way to design the quantization process in the wavelet color image compression scheme. Since the lowest-subband coefficient matrix of an image in the wavelet domain preserves the local property of the image in the spatial domain, the error visibility threshold inherent in each coefficient of the lowest subband of each color component is estimated using the proposed spatial error visibility threshold assessment. The threshold inherent in each coefficient of the other subbands of each color component is then estimated in a locally adaptive fashion based on the distortion energy allocation. Because the error visibility thresholds are estimated from predicted and reconstructed signals of the color image, the coding scheme incorporating the locally adaptive perceptual color quantizer requires no side information. Experimental results show that the entropies of the three color components obtained with the proposed IGM-based color image compression scheme are lower than those obtained with an existing color image compression method at perceptually lossless visual quality.

Keywords: internal generative mechanism, structure-based spatial masking, visibility threshold, wavelet domain

Procedia PDF Downloads 223
27892 Iterative Method for Lung Tumor Localization in 4D CT

Authors: Sarah K. Hagi, Majdi Alnowaimi

Abstract:

In the last decade, there have been immense advances in medical imaging modalities, which can now scan the whole lung volume in high-resolution images within a short time. With this capability, physicians can clearly identify the complicated anatomical and pathological structures of the lung, opening large opportunities for advancing all available types of lung cancer treatment and increasing the survival rate. However, lung cancer is still one of the major causes of death, accounting for around 19% of all cancer patients. Several factors may affect the survival rate; one serious effect is the breathing process, which can reduce the accuracy of diagnosis and of the lung tumor treatment plan. We have therefore developed a semi-automated algorithm to localize 3D lung tumor positions across all phases of respiratory motion. The algorithm can be divided into two stages. First, the lung tumor is segmented in the first phase of the 4D computed tomography (CT) using an active contours method. Then, the tumor's 3D position is localized across all subsequent phases using a 12-degree-of-freedom affine transformation. Two data sets were used in this study: a simulated 4D CT set generated with the extended cardiac-torso (XCAT) phantom and a clinical 4D CT data set. Errors are reported as root mean square error (RMSE); the average error across the data sets is 0.94 ± 0.36 mm. Finally, an evaluation and quantitative comparison of the results with a state-of-the-art registration algorithm is presented. The results show that the proposed localization algorithm is promising for localizing a lung tumor in 4D CT data.
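The localization step can be illustrated with a small sketch: a 12-parameter (3x4) affine transform maps the tumor position from the reference phase to a later phase, and accuracy is summarized as RMSE. The matrix and coordinates below are hypothetical, not from the study.

```python
# Minimal sketch of the localization step: a 12-DOF affine transform
# maps the tumor position to a later phase; accuracy is reported as
# RMSE against known positions. Values are illustrative.
import numpy as np

A = np.array([[1.00, 0.01, 0.00, 1.5],    # 3x3 linear part + translation
              [0.00, 0.99, 0.02, -0.8],
              [0.01, 0.00, 1.01, 0.3]])

def localize(points_xyz):
    """Apply the 12-DOF affine transform to an (N, 3) array of points."""
    homog = np.hstack([points_xyz, np.ones((len(points_xyz), 1))])
    return homog @ A.T

def rmse(predicted, reference):
    return np.sqrt(np.mean(np.sum((predicted - reference) ** 2, axis=1)))

tumor_ref = np.array([[120.0, 95.0, 60.0]])    # position in phase 0 (mm)
tumor_true = np.array([[121.8, 93.5, 61.7]])   # ground truth in a later phase
print(f"RMSE = {rmse(localize(tumor_ref), tumor_true):.2f} mm")
```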

Keywords: automated algorithm, computed tomography, lung tumor, tumor localization

Procedia PDF Downloads 583
27891 Analysis and Control of Camera Type Weft Straightener

Authors: Jae-Yong Lee, Gyu-Hyun Bae, Yun-Soo Chung, Dae-Sub Kim, Jae-Sung Bae

Abstract:

In general, fabric is heat-treated using a stenter machine in order to dry it and fix its shape. It is important to shape the fabric before heat treatment because the shape is difficult to revert once the fabric is formed. To produce a product of the right shape, camera-type weft straighteners have recently been applied to capture and process fabric images quickly; they are more powerful in determining the final textile quality than photo-sensors. Positioned in front of a stenter machine, the weft straightener helps to spread the fabric evenly and keep the angle between warp and weft constant at a right angle by handling the skew and bow rollers. To handle this tricky procedure, a structural analysis should be carried out in advance, from which the control technology can be derived. The structural analysis establishes the specific contact/slippage characteristics between fabric and roller. We have already examined the applicability of the camera-type weft straightener to plain weave fabric and established its feasibility and the specific working conditions of the machine and rollers. In this research, we aimed to explore a further application: whether the camera-type weft straightener can be used for special fabrics. To find the optimum condition, we increased the number of rollers. The analysis is done with ANSYS software using the Finite Element Analysis method, and the control function is demonstrated by experiment. In conclusion, the structural analysis of the weft straightener identifies the specific characteristics between roller and fabric, the control of the skew and bow rollers decreases the error in the angle between warp and weft, and it is proved that the camera-type straightener can also be used for special fabrics.

Keywords: camera type weft straightener, structure analysis, control, skew and bow roller

Procedia PDF Downloads 278
27890 Hierarchical Operation Strategies for Grid Connected Building Microgrid with Energy Storage and Photovoltaic Source

Authors: Seon-Ho Yoon, Jin-Young Choi, Dong-Jun Won

Abstract:

This paper presents hierarchical operation strategies that minimize the operation error between the day-ahead operation plan and real-time operation. Operating power systems between centralized and decentralized approaches can be represented as a hierarchical control scheme featuring primary, secondary and tertiary control. Primary control is known as local control and features a fast response; secondary control is referred to as the microgrid Energy Management System (EMS); tertiary control is responsible for coordinating the operation of multiple microgrids. In this paper, we formulate three-stage microgrid operation strategies that mirror this hierarchical control scheme. The first stage sets the day-ahead scheduled output power of the Battery Energy Storage System (BESS), the only controllable source in the microgrid, optimized to minimize the cost of power exchanged with the main grid using Particle Swarm Optimization (PSO). The second stage controls the active and reactive power of the BESS so that the day-ahead schedule is followed when a State of Charge (SOC) error arises between real time and the scheduled plan. The third stage reschedules the system when the predicted error exceeds a limit. The first stage is comparable to secondary control in that it adjusts active power; the second stage is comparable to primary control in that it corrects the error locally; the third stage is comparable to secondary control in that it manages power balancing. The proposed strategies will be applied to one of the buildings at the Electronics and Telecommunications Research Institute (ETRI). The building microgrid is composed of photovoltaic (PV) generation, a BESS and load, and it will be interconnected with the main grid. The main purpose is to minimize operation cost while following the scheduled plan. Simulation results support the validity of the proposed strategies.
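The first stage can be illustrated with a compact PSO sketch that searches for a 24-hour BESS schedule minimizing the cost of power exchanged with the main grid; the load, PV and price profiles, power limits, and penalty term are illustrative assumptions rather than the paper's actual data.

```python
# Minimal sketch of stage 1: PSO searching for a 24-hour BESS schedule
# that minimizes the cost of power exchanged with the main grid.
import numpy as np

rng = np.random.default_rng(1)
H = 24
load = 50 + 20 * np.sin(np.linspace(0, 2 * np.pi, H))    # kW, assumed
pv = np.clip(40 * np.sin(np.linspace(-np.pi/2, 3*np.pi/2, H)), 0, None)
price = 0.1 + 0.05 * (load / load.max())                 # $/kWh, assumed

def cost(bess):                      # bess > 0: discharge, < 0: charge
    grid = load - pv - bess          # power drawn from the main grid
    return np.sum(price * grid) + 1e3 * abs(np.sum(bess))  # penalize net SOC drift

n, w, c1, c2 = 30, 0.7, 1.5, 1.5
x = rng.uniform(-20, 20, (n, H))     # particle positions (kW)
v = np.zeros((n, H))
pbest, pbest_f = x.copy(), np.array([cost(p) for p in x])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(200):
    r1, r2 = rng.random((n, H)), rng.random((n, H))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = np.clip(x + v, -20, 20)      # enforce BESS power limits
    f = np.array([cost(p) for p in x])
    better = f < pbest_f
    pbest[better], pbest_f[better] = x[better], f[better]
    gbest = pbest[pbest_f.argmin()].copy()

print("daily exchange cost:", round(cost(gbest), 2), "$")
```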

Keywords: Battery Energy Storage System (BESS), Energy Management System (EMS), Microgrid (MG), Particle Swarm Optimization (PSO)

Procedia PDF Downloads 232
27889 Application of Double Side Approach Method on Super Elliptical Winkler Plate

Authors: Hsiang-Wen Tang, Cheng-Ying Lo

Abstract:

In this study, the static behavior of a super elliptical Winkler plate is analyzed by applying the double side approach method. The lack of information about super elliptical Winkler plates motivates this study, and the double side approach method is used because of its ability to treat problems with complex boundary shapes efficiently. The method has the advantages of high accuracy, an easy calculation procedure and a light computational load; most importantly, it gives the error bound of the approximate solution. The numerical results not only show that the double side approach method works well on this problem but also provide knowledge of the static behavior of super elliptical Winkler plates in practical use.

Keywords: super elliptical Winkler plate, double side approach method, error bound, mechanics

Procedia PDF Downloads 330
27888 Study of Syntactic Errors for Deep Parsing at Machine Translation

Authors: Yukiko Sasaki Alam, Shahid Alam

Abstract:

Syntactic parsing is vital for semantic treatment in many applications related to natural language processing (NLP), because form and content coincide in many cases. However, parsing has not yet reached reliable levels of performance. By manually examining and analyzing individual machine translation output errors involving syntax as well as semantics, this study attempts to discover what is required to improve syntactic and semantic parsing.

Keywords: syntactic parsing, error analysis, machine translation, deep parsing

Procedia PDF Downloads 530
27887 An Adaptive Back-Propagation Network and Kalman Filter Based Multi-Sensor Fusion Method for Train Location System

Authors: Yu-ding Du, Qi-lian Bao, Nassim Bessaad, Lin Liu

Abstract:

The Global Navigation Satellite System (GNSS) is regarded as an effective means of replacing the large number of track-side balises used in modern train localization systems. This paper describes a method based on the fusion of data from a GNSS receiver and an odometer that can significantly improve positioning accuracy. A digital track map is needed as a further sensor to project the two-dimensional GNSS position onto a one-dimensional along-track distance, since the train's position is constrained to the track. A model trained with a BP neural network is used to estimate the trend of the positioning error, which is related to the specific location and to the processing of the digital track map. Because satellite signal failure can increase the GNSS positioning error under some conditions, a GNSS signal detection step is applied. An adaptive weighted fusion algorithm is presented to reduce the standard deviation of the train speed measurement. Finally, an Extended Kalman Filter (EKF) fuses the projected 1-D GNSS position data and the 1-D train speed data to obtain the estimated position. Experimental results suggest that the proposed method performs well and can reduce the positioning error notably.
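The final fusion stage can be sketched as follows; for brevity a linear Kalman step over the 1-D state [along-track position, speed] stands in for the full EKF, and all noise covariances and measurements are illustrative assumptions.

```python
# Minimal sketch of the fusion stage: one Kalman update over the 1-D
# along-track state [position, speed], fusing map-projected GNSS
# position with odometer speed. The paper's full pipeline adds the
# BP-network bias correction and GNSS-failure detection before this.
import numpy as np

dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-speed motion model
H = np.eye(2)                            # we observe position and speed
Q = np.diag([0.5, 0.1])                  # process noise (assumed)
R = np.diag([25.0, 0.04])                # GNSS ~5 m, odometer ~0.2 m/s

x = np.array([0.0, 20.0])                # initial along-track state
P = np.eye(2) * 10.0

def kf_step(x, P, z):
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    y = z - H @ x_pred                   # innovation
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    return x_pred + K @ y, (np.eye(2) - K @ H) @ P_pred

z = np.array([21.3, 19.8])               # [projected GNSS pos, odometer speed]
x, P = kf_step(x, P, z)
print("fused position:", round(x[0], 2), "m, speed:", round(x[1], 2), "m/s")
```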

Keywords: multi-sensor data fusion, train positioning, GNSS, odometer, digital track map, map matching, BP neural network, adaptive weighted fusion, Kalman filter

Procedia PDF Downloads 229
27886 Bayesian Borrowing Methods for Count Data: Analysis of Incontinence Episodes in Patients with Overactive Bladder

Authors: Akalu Banbeta, Emmanuel Lesaffre, Reynaldo Martina, Joost Van Rosmalen

Abstract:

Including data from previous studies (historical data) in the analysis of a current study may reduce the sample size requirement and/or increase the power of the analysis. The most common example is incorporating historical control data in the analysis of a current clinical trial; however, this only applies when the historical control data are similar enough to the current control data. Recently, several Bayesian approaches for incorporating historical data have been proposed, such as the meta-analytic-predictive (MAP) prior and the modified power prior (MPP), both for a single control arm and for multiple historical control arms. Here, we examine the performance of the MAP and MPP approaches for the analysis of (over-dispersed) count data. To this end, we propose a computational method for the MPP approach for the Poisson and negative binomial models. We conducted an extensive simulation study to assess the performance of these Bayesian approaches, and we illustrate them on an overactive bladder data set. For similar data across the control arms, the MPP approach outperformed the MAP approach with respect to statistical power. When the means across the control arms differed, the MPP yielded a slightly inflated type I error (TIE) rate, whereas the MAP did not; in contrast, when the dispersion parameters differed, the MAP gave an inflated TIE rate, whereas the MPP did not. We conclude that the MPP approach is more promising than the MAP approach for incorporating historical count data.
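For the Poisson case, the power prior idea can be sketched with conjugate updating: with a Gamma prior, historical counts enter the posterior down-weighted by a factor δ in [0, 1] (the MPP additionally treats δ as random, which this sketch omits). The counts and δ below are illustrative.

```python
# Minimal sketch of a fixed-delta power prior for Poisson counts with
# a conjugate Gamma prior: historical data are down-weighted by delta.
import numpy as np

a0, b0 = 0.5, 0.5                  # vague Gamma(a0, b0) initial prior
hist = np.array([4, 6, 5, 7, 5])   # historical control counts (assumed)
curr = np.array([5, 4, 6, 6])      # current control counts (assumed)
delta = 0.5                        # fixed borrowing weight in [0, 1]

# Gamma-Poisson conjugacy under the power prior:
#   shape: a0 + delta * sum(y_hist) + sum(y_curr)
#   rate:  b0 + delta * n_hist      + n_curr
a_post = a0 + delta * hist.sum() + curr.sum()
b_post = b0 + delta * len(hist) + len(curr)

samples = np.random.default_rng(0).gamma(a_post, 1 / b_post, 100_000)
print("posterior mean event rate:", round(a_post / b_post, 3))
print("95% credible interval:", np.quantile(samples, [0.025, 0.975]).round(3))
```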

Keywords: count data, meta-analytic prior, negative binomial, Poisson

Procedia PDF Downloads 95
27885 Application of Adaptive Neural Network Algorithms for Determination of Salt Composition of Waters Using Laser Spectroscopy

Authors: Tatiana A. Dolenko, Sergey A. Burikov, Alexander O. Efitorov, Sergey A. Dolenko

Abstract:

In this study, a comparative analysis is performed of approaches that use neural network algorithms to solve a complex inverse problem effectively: identifying and determining the individual concentrations of inorganic salts in multicomponent aqueous solutions from Raman scattering spectra. It is shown that artificial neural networks determine the concentration of each salt with an average accuracy no worse than 0.025 M. The results of a comparative analysis of input data compression methods are presented; it is demonstrated that uniform aggregation of input features decreases the error in determining the individual component concentrations by 16-18% on average.
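The inverse problem can be framed as multi-output regression, as in this minimal sketch mapping spectra to concentrations with an MLP; the synthetic linear-mixture "spectra" merely stand in for measured Raman data.

```python
# Minimal sketch of the inverse problem as multi-output regression:
# map spectra (inputs) to salt concentrations (outputs) with an MLP.
# The data here are synthetic stand-ins for measured Raman spectra.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_channels, n_salts = 500, 100, 3
conc = rng.uniform(0, 2, (n_samples, n_salts))      # concentrations (M)
basis = rng.random((n_salts, n_channels))           # fake component spectra
spectra = conc @ basis + 0.01 * rng.standard_normal((n_samples, n_channels))

X_tr, X_te, y_tr, y_te = train_test_split(spectra, conc, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000,
                     random_state=0).fit(X_tr, y_tr)
err = np.abs(model.predict(X_te) - y_te).mean(axis=0)
print("mean abs. error per salt (M):", err.round(3))  # target: <= 0.025 M
```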

Keywords: inverse problems, multi-component solutions, neural networks, Raman spectroscopy

Procedia PDF Downloads 501
27884 Management of Fitness-For-Duty for Human Error Prevention in Nuclear Power Plants

Authors: Hyeon-Kyo Lim, Tong-Il Jang, Yong-Hee Lee

Abstract:

For the past several decades, many researchers have warned that even a trivial human error may result in unexpected accidents, especially in nuclear power plants. To prevent such accidents, it is indispensable to bring under effective control any factor that may raise the possibility of human error. This study aimed to develop a risk management program for guaranteeing the Fitness-for-Duty (FFD) of people working in nuclear power plants. Through a literature survey, work stress and fatigue were found to be the major psychophysical factors requiring sophisticated management. A set of major management factors related to work stress and fatigue was identified through repeated literature surveys and classified into several categories. To maintain workers' fitness, a four-level approach - individual worker, team, staff within plants, and external professionals - was adopted for the FFD management program. Moreover, the program was arranged to cover the whole employment cycle, from selection and screening of workers to job allocation and job rotation. A managerial care program was also introduced for employee assistance, based on the concept of an Employee Assistance Program (EAP). The developed program was reviewed repeatedly by former nuclear power plant operators and assessed positively. On the whole, the responses implied that additional measures are needed to guarantee high performance of workers not only in normal operations but also in emergency situations. Consequently, the program is undergoing administrative modification for practical application.

Keywords: fitness-for-duty (FFD), human error, work stress, fatigue, Employee-Assistance-Program (EAP)

Procedia PDF Downloads 283
27883 Soil Moisture Control System: A Product Development Approach

Authors: Swapneel U. Naphade, Dushyant A. Patil, Satyabodh M. Kulkarni

Abstract:

In this work, we propose the concept and geometric design of a soil moisture control system (SMCS) module, following the product development approach, to develop an inexpensive, easy-to-use and quick-to-install product targeted at agriculture practitioners. The module delivers water to agricultural land efficiently by sensing the soil moisture and activating the delivery valve. We start by identifying the general needs of the potential customer. Then, based on customer needs, we establish product specifications and identify the important measurable quantities for evaluating our product. Keeping the specifications in mind, we develop several conceptual solutions and select the best one through concept screening and selection matrices. We then develop the product architecture by integrating the subsystems into the final product. Finally, the geometric design is done using human factors engineering concepts such as heuristic analysis, task analysis, and human error reduction analysis. The human factors analysis reveals the remedies to apply when designing the geometry and software components of the product. We find that, for a power-type grip, a grip diameter of 35 mm is ideal in terms of comfort and applied force.
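The module's core control logic can be sketched as a hysteresis loop, so the valve does not chatter around a single setpoint; the thresholds and the toy moisture dynamics below are hypothetical stand-ins for the real sensor and valve.

```python
# Minimal sketch of the SMCS control logic: a hysteresis controller
# drives the delivery valve from moisture readings. Thresholds and
# the simulated dynamics are hypothetical, not from the study.
LOW, HIGH = 30.0, 45.0   # % volumetric moisture thresholds (assumed)

def control_step(moisture: float, valve_open: bool) -> bool:
    """Hysteresis controller: returns the new valve state."""
    if moisture < LOW:
        return True       # soil too dry: irrigate
    if moisture > HIGH:
        return False      # wet enough: stop
    return valve_open     # inside the band: hold current state

# simulate: moisture rises 2%/step while irrigating, falls 1%/step otherwise
moisture, valve = 35.0, False
for step in range(30):
    valve = control_step(moisture, valve)
    moisture += 2.0 if valve else -1.0
    print(f"step {step:2d}: moisture {moisture:5.1f}%  valve {'open' if valve else 'shut'}")
```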

Keywords: agriculture, human factors, product design, soil moisture control

Procedia PDF Downloads 158
27882 Generative Adversarial Network Based Fingerprint Anti-Spoofing Limitations

Authors: Yehjune Heo

Abstract:

Fingerprint anti-spoofing approaches have been actively developed and applied in real-world applications. One of the main problems of fingerprint anti-spoofing is that it is not robust to unseen samples, especially in real-world scenarios. A possible solution is to generate artificial but realistic fingerprint samples and use them for training in order to achieve good generalization. This paper contains experimental and comparative results with currently popular GAN-based methods and uses realistic synthesis of fingerprints in training in order to increase performance. Among the various GAN models, the popular StyleGAN is used for the experiments. The CNN models were first trained on a dataset that did not contain generated fake images, and the accuracy along with the mean average error rate were recorded. Then, the generated fake images (fake images of live fingerprints and fake images of spoof fingerprints) were each combined with the original images (real images of live fingerprints and real images of spoof fingerprints), and various CNN models were trained. The best performance of each CNN model trained with the dataset of generated fake images was recorded, along with the accuracy and the mean average error rate. We observe that current GAN-based approaches need significant improvement in anti-spoofing performance, although the overall quality of the synthesized fingerprints seems reasonable. We include an analysis of this performance degradation, especially with a small number of samples. In addition, we suggest several approaches towards improved generalization with a small number of samples, by focusing on what GAN-based approaches should and should not learn.

Keywords: anti-spoofing, CNN, fingerprint recognition, GAN

Procedia PDF Downloads 170
27881 Lyapunov-Based Tracking Control for Nonholonomic Wheeled Mobile Robot

Authors: Raouf Fareh, Maarouf Saad, Sofiane Khadraoui, Tamer Rabie

Abstract:

This paper presents a tracking control strategy based on the Lyapunov approach for a nonholonomic wheeled mobile robot. The strategy consists of two levels. First, a kinematic controller is developed to adjust the right and left wheel velocities; using this velocity control law, the stability of the tracking error is guaranteed via the Lyapunov approach. Since the velocities commanded by this kinematic controller cannot be generated directly by the motors, a second level, the dynamic controller, is designed. This dynamic control law is also developed based on Lyapunov theory in order to track the desired trajectories of the mobile robot, and the stability of the tracking error is proved using Lyapunov and Barbalat approaches. Simulation results on a nonholonomic wheeled mobile robot demonstrate the feasibility and effectiveness of the presented approach.
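The kinematic level can be illustrated with the classic Lyapunov-based (Kanayama-style) posture tracking law for a unicycle model, whose commanded velocities the dynamic controller would then realize; the gains and poses below are illustrative assumptions, not the paper's specific design.

```python
# Minimal sketch of a Lyapunov-based (Kanayama-style) kinematic
# tracking law for a unicycle-model robot. Gains are illustrative.
import numpy as np

kx, ky, kth = 1.0, 4.0, 2.0   # positive controller gains (assumed)

def kinematic_control(pose, ref_pose, v_r, w_r):
    """pose = (x, y, theta); returns commanded (v, w)."""
    x, y, th = pose
    xr, yr, thr = ref_pose
    # tracking error expressed in the robot frame
    ex = np.cos(th) * (xr - x) + np.sin(th) * (yr - y)
    ey = -np.sin(th) * (xr - x) + np.cos(th) * (yr - y)
    eth = thr - th
    v = v_r * np.cos(eth) + kx * ex
    w = w_r + v_r * (ky * ey + kth * np.sin(eth))
    return v, w

v, w = kinematic_control((0.0, 0.1, 0.05), (0.2, 0.0, 0.0), v_r=0.5, w_r=0.0)
print(f"v = {v:.3f} m/s, w = {w:.3f} rad/s")
```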

Keywords: mobile robot, trajectory tracking, Lyapunov, stability

Procedia PDF Downloads 355
27880 End-to-End Performance of MPPM in Multihop MIMO-FSO System Over Dependent GG Atmospheric Turbulence Channels

Authors: Hechmi Saidi, Noureddine Hamdi

Abstract:

The performance of a decode-and-forward (DF) multihop free-space optical (FSO) scheme deploying a multiple-input multiple-output (MIMO) configuration under the gamma-gamma (GG) statistical distribution and adopting M-ary pulse position modulation (MPPM) is investigated. We derive exact and approximate values of the symbol error rate (SER), and a closed-form formula for the probability density function (PDF) is expressed for the designed system. Thanks to the DF multihop MIMO FSO configuration and MPPM signaling, atmospheric turbulence is combatted and the transmitted signal quality is improved.
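A single-hop, single-aperture slice of the system can be sketched by Monte Carlo: M-ary PPM with direct detection over a gamma-gamma channel, generated as the product of two unit-mean Gamma variates. The turbulence parameters (alpha, beta) and SNR values are assumptions, and this omits the MIMO combining and multihop relaying of the actual scheme.

```python
# Minimal sketch: Monte Carlo SER of M-ary PPM with direct detection
# over a gamma-gamma fading channel (product of two Gamma variates).
import numpy as np

rng = np.random.default_rng(0)

def gamma_gamma(alpha, beta, size):
    """GG irradiance with unit mean: product of two Gamma r.v.s."""
    return rng.gamma(alpha, 1 / alpha, size) * rng.gamma(beta, 1 / beta, size)

def mppm_ser(M=4, snr_db=12.0, alpha=4.0, beta=2.0, n_sym=100_000):
    amp = 10 ** (snr_db / 20)
    h = gamma_gamma(alpha, beta, n_sym)
    tx = rng.integers(0, M, n_sym)              # pulse position per symbol
    slots = rng.standard_normal((n_sym, M))     # unit-variance receiver noise
    slots[np.arange(n_sym), tx] += amp * h      # faded pulse in the chosen slot
    return np.mean(slots.argmax(axis=1) != tx)  # detect by the largest slot

for snr in (6, 10, 14):
    print(snr, "dB ->", mppm_ser(snr_db=snr))
```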

Keywords: free space optical, gamma gamma channel, radio frequency, decode and forward, multiple-input multiple-output, M-ary pulse position modulation, symbol error rate

Procedia PDF Downloads 229
27879 Artificial Neural Networks Based Calibration Approach for Six-Port Receiver

Authors: Nadia Chagtmi, Nejla Rejab, Noureddine Boulejfen

Abstract:

This paper presents a calibration approach based on artificial neural networks (ANN) to determine the envelope signal (I+jQ) of a six-port-based receiver (SPR). The memory effects (also called dynamic behavior) and the nonlinearity introduced by the diode-based power detector are taken into account by the ANN. An experimental setup was built to validate the efficiency of this method, and the obtained results confirm it in terms of waveforms. Moreover, the error vector magnitude (EVM) and the mean absolute error (MAE) were calculated in order to test the ANN's ability to achieve I/Q recovery from the output voltage of the power detector. The baseband signal was recovered using the ANN with EVMs no higher than 1% and an MAE no higher than 17.26 for the SPR excited with different types of signals such as QAM (quadrature amplitude modulation) and LTE (Long Term Evolution).

Keywords: six-port based receiver, calibration, nonlinearity, memory effect, artificial neural network

Procedia PDF Downloads 48
27878 Dynamic Modeling of a Robot for Playing a Curved 3D Percussion Instrument Utilizing a Finite Element Method

Authors: Prakash Persad, Kelvin Loutan, Trichelle Seepersad

Abstract:

The Finite Element Method is commonly used in the analysis of flexible manipulators to predict elastic displacements and to develop joint control schemes for reducing positioning error. To preserve simplicity, regular geometries and ideal joints and connections are assumed. This paper presents the dynamic FE analysis of a 4-degree-of-freedom open-chain manipulator intended for striking a curved 3D-surface percussion musical instrument. This was done utilizing the new Multibody Dynamics Module in COMSOL, which is capable of modeling the elastic behavior of a body undergoing rigid-body-type motion.

Keywords: dynamic modeling, entertainment robots, finite element method, flexible robot manipulators, multibody dynamics, musical robots

Procedia PDF Downloads 319
27877 Multi-Point Dieless Forming Product Defect Reduction Using Reliability-Based Robust Process Optimization

Authors: Misganaw Abebe Baye, Ji-Woo Park, Beom-Soo Kang

Abstract:

The product quality of multi-point dieless forming (MDF) is known to depend on the process parameters. Moreover, variation in friction and material properties may have a substantially adverse influence on the final product quality. This study proposes how to compensate for MDF product defects by minimizing the sensitivity to noise parameter variations, which can be attained by a reliability-based robust optimization (RRO) technique that finds the optimal settings of the controllable process parameters. Initially, two MDF Finite Element (FE) simulations of an AA3003-H14 saddle shape showed substantial dimpling, wrinkling, and shape error. FE analyses were consequently performed in the commercial software ABAQUS to obtain the correlation between the control process settings and the noise variation with regard to the product defects. The best prediction models are chosen from a family of metamodels to replace the computationally expensive FE simulation. A genetic algorithm (GA) is applied to determine the optimal settings of the control parameters, and Monte Carlo Analysis (MCA) is executed to determine how the noise parameter variation affects the final product quality. Finally, the RRO FE simulation and the experimental results show that amending the control parameters in the final forming process leads to a considerably better-quality product.

Keywords: dimpling, multi-point dieless forming, reliability-based robust optimization, shape error, variation, wrinkling

Procedia PDF Downloads 228
27876 Estimating Estimators: An Empirical Comparison of Non-Invasive Analysis Methods

Authors: Yan Torres, Fernanda Simoes, Francisco Petrucci-Fonseca, Freddie-Jeanne Richard

Abstract:

Non-invasive samples are an alternative to collecting genetic samples directly; they are collected without manipulating the animal (e.g., scats, feathers and hairs). Nevertheless, their use has some limitations. The main issue is degraded DNA, leading to poorer extraction efficiency and genotyping; these errors delayed the widespread use of non-invasive genetic information for some years. Genotyping errors can be limited by using analysis methods that accommodate the errors and singularities of non-invasive samples. Genotype matching and population estimation algorithms can be highlighted as important analysis tools that have been adapted to deal with those errors. Despite the recent development of such analysis methods, there is still a lack of empirical comparison of their performance. A comparison of methods on datasets differing in size and structure can be useful for future studies, since non-invasive samples are a powerful tool for getting information, especially for endangered and rare populations. To compare the analysis methods, four datasets obtained from the Dryad digital repository were used. Three matching algorithms (Cervus, Colony and Error Tolerant Likelihood Matching - ETLM) were used for matching genotypes, and two algorithms (Capwire and BayesN) for population estimation. The three matching algorithms showed different patterns of results. ETLM produced fewer unique individuals and recaptures. A similarity between the genotypes matched by Colony and Cervus was observed, which is no surprise given the similarity of their pairwise likelihood and clustering algorithms. The matches produced by ETLM showed almost no similarity with the genotypes matched by the other methods; its different clustering system and error model seem to lead to a more discriminating selection, although ETLM had the worst processing time and user interface of the compared methods. The population estimators performed differently across the datasets, with consensus between them for only one dataset. BayesN gave both higher and lower estimates compared with Capwire. Unlike Capwire, BayesN does not consider the total number of recaptures, only the recapture events, which makes the estimator sensitive to data heterogeneity - here meaning different capture rates between individuals. In these examples, tolerance of heterogeneity seems crucial for BayesN to work properly. Both methods are user-friendly and have reasonable processing times. A broader analysis with simulated genotype data could clarify the sensitivity of the algorithms. The present comparison of the matching methods indicates that Colony is the most appropriate for general use, considering the balance of time, interface and robustness. The heterogeneity of the recaptures strongly affected the BayesN estimates, leading to over- and underestimation of population numbers; Capwire is therefore advisable for general use, since it performs better across a wide range of situations.

Keywords: algorithms, genetics, matching, population

Procedia PDF Downloads 121
27875 Handling Missing Data by Using Expectation-Maximization and Expectation-Maximization with Bootstrapping for Linear Functional Relationship Model

Authors: Adilah Abdul Ghapor, Yong Zulina Zubairi, A. H. M. R. Imon

Abstract:

The missing value problem is common in statistics and has been of interest for years. This article considers two modern techniques for handling missing data in the linear functional relationship model (LFRM), namely the Expectation-Maximization (EM) algorithm and the Expectation-Maximization with Bootstrapping (EMB) algorithm, using three performance indicators: the mean absolute error (MAE), the root mean square error (RMSE) and the estimated bias (EB). In this study, we applied the methods of imputing missing values to two types of LFRM, namely the full LFRM and the LFRM in which the slope is estimated using a nonparametric method. Results of the simulation study suggest that the EMB algorithm performs much better than the EM algorithm in both models. We also illustrate the applicability of the approach on a real data set.
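The contrast between EM and EMB can be sketched on a toy linear data set; here sklearn's IterativeImputer stands in for a full EM implementation, and the bootstrap loop reflects the EMB idea of capturing imputation uncertainty. All data are simulated.

```python
# Minimal sketch contrasting EM-style iterative imputation with a
# bootstrapped variant (EMB) on a toy linear relationship.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(0)
x = rng.normal(10, 2, 200)
y = 2.0 + 1.5 * x + rng.normal(0, 1, 200)   # linear relationship
data = np.column_stack([x, y])
mask = rng.random(200) < 0.15               # 15% missing in y
data[mask, 1] = np.nan

em_filled = IterativeImputer(random_state=0).fit_transform(data)

boot_means = []
for b in range(50):                          # EMB: impute bootstrap resamples
    idx = rng.integers(0, len(data), len(data))
    filled = IterativeImputer(random_state=b).fit_transform(data[idx])
    boot_means.append(filled[:, 1].mean())

print("EM  imputed mean of y:", round(em_filled[:, 1].mean(), 3))
print("EMB mean +/- sd      :", round(np.mean(boot_means), 3),
      "+/-", round(np.std(boot_means), 3))
```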

Keywords: expectation-maximization, expectation-maximization with bootstrapping, linear functional relationship model, performance indicators

Procedia PDF Downloads 432
27874 On-Site Coaching of Freshly Graduated Nurses to Improve the Quality of Clinical Handover and Avoid Clinical Error

Authors: Sau Kam Adeline Chan

Abstract:

The World Health Organization has listed 'Communication during Patient Care Handovers' as one of its top five patient safety initiatives. Clinical handover means the transfer of accountability and responsibility for clinical information from one health professional to another, and its main goal is to convey the patient's current condition and treatment plan accurately. Ineffective communication at the point of care is globally regarded as the main cause of sentinel events. Situation, Background, Assessment and Recommendation (SBAR) is widely regarded as an effective communication tool in healthcare settings. Nonetheless, scenario-based programs in nursing school or SBAR workshops alone are not enough for freshly graduated nurses to apply the tool competently in complex clinical practice: how much and in what depth information should be conveyed during handover is not easy to learn. As such, on-site coaching is essential to upgrade their expertise in using SBAR and, ultimately, to avoid clinical error. On-site coaching of all freshly graduated nurses in the use of SBAR in clinical handover commenced in August 2014. During the preceptorship period, freshly graduated nurses were coached by a preceptor; after that, they were gradually assigned to take care of a group of patients independently. Nurse leaders joined their shift handover process at the patient's bedside, gave feedback and support accordingly, and shared and documented discrepancies in the handover process for further improvement work. Owing to nurse leader manpower constraints, about 30 coaching sessions per year were provided to each nurse. A staff satisfaction survey was conducted to gauge their feelings about the coaching and to look into areas for further improvement, and the number of clinical errors avoided was documented as well. The nurses reported significant improvement, particularly in their confidence and knowledge of the clinical handover process. In addition, a sense of empowerment developed when liaising with senior and experienced nurses. Their proficiency in applying SBAR was enhanced, and they became more alert to the critical criteria of an effective clinical handover. Most importantly, the accuracy of transferring the patient's condition improved and repetition of information was avoided; clinical errors were prevented and quality patient care was ensured. Using SBAR as a communication tool looks simple: the tool only provides a framework to guide the handover process. Nevertheless, without on-site training, loopholes in clinical handover remain, patient safety is affected, and clinical errors still happen.

Keywords: freshly graduated nurse, competency of clinical handover, quality, clinical error

Procedia PDF Downloads 129
27873 Comparative Analysis of Universal Filtered Multi Carrier and Filtered Orthogonal Frequency Division Multiplexing Systems for Wireless Communications

Authors: Raja Rajeswari K

Abstract:

Orthogonal Frequency Division Multiplexing (OFDM) is a multi-carrier transmission technique that has been used to implement the majority of wireless applications, including wireless network protocol standards (such as IEEE 802.11a and IEEE 802.11n), telecommunications standards (such as LTE and LTE-Advanced) and digital audio and video broadcast standards. The latest research and development in the area of orthogonal frequency division multiplexing, Universal Filtered Multi-Carrier (UFMC) and Filtered OFDM (F-OFDM), has attracted much attention for wideband wireless communications. In this paper, UFMC and F-OFDM systems are implemented, and a comparative analysis is carried out in terms of M-ary QAM modulation with Dolph-Chebyshev and rectangular window filters, estimating the Bit Error Rate (BER) over a Rayleigh fading channel.

Keywords: UFMC, F-OFDM, BER, M-ary QAM

Procedia PDF Downloads 149
27872 Prediction of the Mechanical Power in Wind Turbine Powered Car Using Velocity Analysis

Authors: Abdelrahman Alghazali, Youssef Kassem, Hüseyin Çamur, Ozan Erenay

Abstract:

The Savonius turbine is a drag-type vertical-axis wind turbine. Savonius wind turbines have a low cut-in speed and can operate at low wind speeds, which makes them suitable for electricity or mechanical power generation in low-power applications such as individual domestic installations. The primary purpose of this work was therefore to investigate the relationship between the type of Savonius rotor and the torque and mechanical power generated, and to illustrate how the rotor type might play an important role in predicting the mechanical power of a wind-turbine-powered car. The main purpose of this paper is to predict and investigate, by means of velocity analysis, the aerodynamic effects on the performance of a wind-turbine-powered car that converts wind energy into the mechanical energy needed to overcome the load on the main shaft. The predicted results based on theoretical analysis were compared with experimental results obtained from the literature; the error between the two was approximately 20%. Prediction of the torque was done at a wind speed of 4 m/s and an angular velocity of 130 RPM, according to meteorological statistics for Northern Cyprus.
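The theoretical estimate can be sketched directly from the wind power relation P = Cp · ½ρAv³ and T = P/ω, evaluated at the stated 4 m/s and 130 RPM; the rotor dimensions and power coefficient below are illustrative assumptions, not the paper's rotor.

```python
# Minimal sketch of the theoretical estimate: wind power through the
# rotor's swept area scaled by a power coefficient, then shaft torque
# from the angular velocity. Dimensions and Cp are assumed.
import numpy as np

RHO = 1.225                   # air density (kg/m^3)
HEIGHT, DIAMETER = 1.0, 0.8   # Savonius rotor dimensions (m), assumed
CP = 0.18                     # typical Savonius power coefficient (assumed)

def mech_power_w(v_wind: float) -> float:
    area = HEIGHT * DIAMETER             # swept area of a Savonius rotor
    return CP * 0.5 * RHO * area * v_wind ** 3

def torque_nm(v_wind: float, rpm: float) -> float:
    omega = rpm * 2 * np.pi / 60         # rad/s
    return mech_power_w(v_wind) / omega

print(f"P = {mech_power_w(4.0):.2f} W, T = {torque_nm(4.0, 130):.3f} N*m")
```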

Keywords: mechanical power, torque, Savonius rotor, wind car

Procedia PDF Downloads 311
27871 Analysis of Moving Loads on Bridges Using Surrogate Models

Authors: Susmita Panda, Arnab Banerjee, Ajinkya Baxy, Bappaditya Manna

Abstract:

Vehicle-bridge interaction is an essential aspect of the design of short-to-medium-span high-speed bridges in critical locations. Due to the dynamic interaction between the moving load and the bridge, mathematical modeling and finite element computations become time-consuming. Thus, to reduce the computational effort, a universal approximator using an artificial neural network (ANN) has been used to evaluate the dynamic response of the bridge. Data set generation and training of the surrogate models were conducted on results obtained from mathematical modeling. Further, the robustness of the surrogate model was investigated, showing an error of less than 10% compared with conventional methods. Additionally, the dependency of the bridge's dynamic response on various load and bridge parameters is highlighted through a parametric study.

Keywords: artificial neural network, mode superposition method, moving load analysis, surrogate models

Procedia PDF Downloads 78
27870 Simulation of Optimal Runoff Hydrograph Using Ensemble of Radar Rainfall and Blending of Runoffs Model

Authors: Myungjin Lee, Daegun Han, Jongsung Kim, Soojun Kim, Hung Soo Kim

Abstract:

Recently, localized heavy rainfall and typhoons have occurred frequently due to climate change, and the resulting damage is growing, so more accurate prediction of rainfall and runoff may be needed. However, gauge rainfall has limited spatial accuracy. Radar rainfall captures the spatial variability of rainfall better than gauge rainfall, but it is mostly underestimated, with uncertainty involved. Therefore, an ensemble of radar rainfall was simulated using an error structure to overcome this uncertainty. The simulated ensemble was used as input data for rainfall-runoff models to obtain an ensemble of runoff hydrographs. Previous studies have discussed the accuracy of rainfall-runoff models: even if the same input rainfall is used for runoff analysis in the same basin, different models can give different results because of the uncertainty involved in the models themselves. Therefore, we used two models, the lumped SSARR model and the distributed Vflo model, and tried to simulate the optimum runoff considering the uncertainty of each rainfall-runoff model. The study basin is located in the Han River basin, and we obtained one integrated, optimum runoff hydrograph using blending methods such as Multi-Model Super Ensemble (MMSE), Simple Model Average (SMA) and Mean Square Error (MSE) weighting. From this study, we could confirm the accuracy of the rainfall ensemble scenario and the various rainfall-runoff models, and the results can be used to study flood control measures under climate change. Acknowledgements: This work is supported by the Korea Agency for Infrastructure Technology Advancement (KAIA) grant funded by the Ministry of Land, Infrastructure and Transport (Grant 18AWMP-B083066-05).
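The blending step can be sketched as follows: given members from the two models and an observed hydrograph, an MMSE-style blend fits weights by least squares while SMA simply averages. The synthetic series below are stand-ins for the actual SSARR and Vflo outputs.

```python
# Minimal sketch of the blending step: MMSE-style least-squares
# weighting versus a simple model average (SMA) of two runoff members.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 100)
observed = 80 * np.exp(-((t - 0.4) ** 2) / 0.02)     # observed hydrograph
ssarr = observed * 0.9 + rng.normal(0, 3, t.size)    # lumped-model member
vflo = observed * 1.1 + rng.normal(0, 3, t.size)     # distributed member

members = np.column_stack([ssarr, vflo])
sma = members.mean(axis=1)                           # simple model average
weights, *_ = np.linalg.lstsq(members, observed, rcond=None)
mmse = members @ weights                             # MMSE-weighted blend

for name, sim in (("SMA ", sma), ("MMSE", mmse)):
    print(name, "RMSE:", round(np.sqrt(np.mean((sim - observed) ** 2)), 2))
```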

Keywords: radar rainfall ensemble, rainfall-runoff models, blending method, optimum runoff hydrograph

Procedia PDF Downloads 257
27869 Optimal Design of Reference Node Placement for Wireless Indoor Positioning Systems in Multi-Floor Building

Authors: Kittipob Kondee, Chutima Prommak

Abstract:

In this paper, we propose an optimization technique that can be used to optimize the placement of reference nodes and improve location determination performance in multi-floor buildings. The proposed technique, called MSMR-M, is based on the Simulated Annealing (SA) algorithm. The performance study in this work is based on simulation, and we compare node-placement techniques found in the literature with the optimal node placements obtained from our optimization. The results show that using the optimal node placement obtained by the proposed technique can improve positioning error distances by up to 20% over the other techniques, providing an average error distance within 1.42 meters.
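The idea behind the SA search can be sketched with a simplified objective: perturb node coordinates on one floor and keep moves that reduce the mean distance from test points to their nearest reference node, occasionally accepting worse moves at a temperature-dependent rate. The floor size, node count, and cooling schedule are assumptions, and this distance objective is only a stand-in for the paper's actual positioning-error model.

```python
# Minimal sketch of simulated annealing for reference-node placement
# on a single 20 m x 10 m floor, minimizing a stand-in objective.
import numpy as np

rng = np.random.default_rng(0)
grid = np.array([(x, y) for x in range(20) for y in range(10)], float)  # test points

def mean_nearest_distance(nodes):
    d = np.linalg.norm(grid[:, None, :] - nodes[None, :, :], axis=2)
    return d.min(axis=1).mean()

nodes = rng.uniform((0, 0), (20, 10), (4, 2))   # 4 reference nodes (assumed)
cost, temp = mean_nearest_distance(nodes), 5.0

for _ in range(5000):
    cand = np.clip(nodes + rng.normal(0, 0.5, nodes.shape), (0, 0), (20, 10))
    c = mean_nearest_distance(cand)
    if c < cost or rng.random() < np.exp((cost - c) / temp):
        nodes, cost = cand, c                    # accept better (or lucky worse) move
    temp *= 0.999                                # geometric cooling

print("mean nearest-node distance:", round(cost, 2), "m")
```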

Keywords: indoor positioning system, optimization system design, multi-floor building, wireless sensor networks

Procedia PDF Downloads 222