Search results for: first order error analysis
37311 Automatic Fluid-Structure Interaction Modeling and Analysis of Butterfly Valve Using Python Script
Authors: N. Guru Prasath, Sangjin Ma, Chang-Wan Kim
Abstract:
A butterfly valve is a quarter-turn valve used to control the flow of a fluid through a section of pipe. Butterfly valves are used in a wide range of applications such as water distribution, sewage, and oil and gas plants. In particular, large-diameter butterfly valves are widely used in hydro power plants to control fluid flow. Given the cost and size constraints of running a laboratory setup, large-diameter valves are mostly studied by computational methods, which are an effective and inexpensive alternative. CFD and FEM software are used to perform the large-scale fluid and structural valve analyses, respectively. To perform such analyses of a butterfly valve, the CAD model has to be recreated and meshed in conventional software for each set of valve dimensions, which makes the process time-consuming. To overcome this issue, a Python script was created to generate the complete pre-processing setup automatically in the Salome software. Specifying the model dimensions directly in the Python script lowers the setup time and makes the valve analysis easier to perform. Hence, in this paper, an attempt was made to study the fluid-structure interaction (FSI) of butterfly valves by varying the valve angles and dimensions using Python code in the pre-processing software, and results are presented.
Keywords: butterfly valve, flow coefficient, automatic CFD analysis, FSI analysis
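The abstract does not include the script itself; the following is a minimal sketch of how such a parametric pre-processing driver could be organized in Python. The helper functions build_valve_geometry and mesh_valve are hypothetical placeholders standing in for the actual Salome geometry and meshing API calls, and the diameters and angles are assumed values, not the paper's cases.
```python
# Minimal sketch of a parametric pre-processing driver (assumed structure).
# build_valve_geometry() and mesh_valve() are hypothetical placeholders for
# the Salome geometry/meshing calls actually used by the authors.
from itertools import product

def build_valve_geometry(diameter_mm, disc_angle_deg):
    """Placeholder: would create the valve body and disc CAD model in Salome."""
    return {"diameter": diameter_mm, "angle": disc_angle_deg}

def mesh_valve(geometry, element_size_mm):
    """Placeholder: would mesh the fluid and structural domains."""
    return {"geometry": geometry, "element_size": element_size_mm}

diameters = [500.0, 1000.0, 1500.0]   # valve diameters to study (mm), assumed values
angles = [30.0, 45.0, 60.0, 90.0]     # disc opening angles (degrees), assumed values

cases = []
for d, a in product(diameters, angles):
    geo = build_valve_geometry(d, a)
    msh = mesh_valve(geo, element_size_mm=d / 50.0)  # element size scaled with diameter
    cases.append(msh)
    print(f"Prepared case: D={d} mm, angle={a} deg")

print(f"{len(cases)} pre-processing cases generated for CFD/FEM export")
```
The point of such a driver is that changing a dimension only requires editing one list, after which all geometry and mesh cases are regenerated without manual CAD work.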
Procedia PDF Downloads 241
37310 A Second Order Genetic Algorithm for Traveling Salesman Problem
Authors: T. Toathom, M. Munlin, P. Sugunnasil
Abstract:
The traveling salesman problem (TSP) is one of the best-known optimization problems, and there is a large body of research on it. One of the most widely used tools for this problem is the genetic algorithm (GA). The chromosome of a GA for the TSP is normally encoded by the order of the visited cities. However, the traditional chromosome encoding scheme has two main limitations: a large solution space and the inability to encapsulate some information. The number of solutions grows exponentially with the number of cities. Moreover, the traditional chromosome encoding scheme fails to recognize relations that are correct but merely misplaced, which implies that the traditional method focuses only on exact solutions. In this work, we relax the exactness requirement of the GA for the TSP. The proposed work exploits the relations between cities in order to reduce the solution space in the chromosome encoding. In this paper, a second order GA is proposed to solve the TSP. The term second order refers to how the solution is encoded into the chromosome. The chromosomes are divided into two types: the high order chromosome and the low order chromosome. The high order chromosome focuses on the relations between cities, such as city A should be visited before city B. On the other hand, the low order chromosome is derived from a high order chromosome; in other words, a low order chromosome is encoded by the traditional chromosome encoding scheme. The genetic operations, mutation and crossover, are performed on the high order chromosome. Then, the high order chromosome is mapped to a group of low order chromosomes whose characteristics satisfy the high order chromosome. From the mapped set of chromosomes, the champion chromosome is selected based on the fitness value and is later used as a representative of the high order chromosome. The experiment is performed on city data from TSPLIB.
Keywords: genetic algorithm, traveling salesman problem, initial population, chromosomes encoding
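The paper's encoding is only described in prose; the sketch below illustrates, under assumed details, the central mapping step: a high order chromosome expressed as precedence relations ("city A before city B") is decoded into low order chromosomes (ordinary tours) consistent with those relations, and the shortest one is kept as the champion. The toy coordinates and the number of decoding trials are assumptions, not values from the paper.
```python
import random
from math import dist, inf

# Toy city coordinates (assumed instance, not TSPLIB data).
cities = {0: (0, 0), 1: (2, 1), 2: (5, 0), 3: (6, 4), 4: (1, 5), 5: (3, 3)}

def tour_length(tour):
    return sum(dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def decode(high_order, n, trials=20):
    """Map a high order chromosome (precedence pairs 'a before b') to the best
    low order chromosome (an ordinary tour) among random consistent tours."""
    best, best_len = None, inf
    for _ in range(trials):
        preds = {c: set() for c in range(n)}
        for a, b in high_order:
            preds[b].add(a)
        tour, remaining = [], set(range(n))
        while remaining:
            # Cities whose required predecessors have all been visited.
            ready = [c for c in remaining if preds[c] <= set(tour)]
            city = random.choice(ready if ready else list(remaining))
            tour.append(city)
            remaining.remove(city)
        length = tour_length(tour)
        if length < best_len:
            best, best_len = tour, length
    return best, best_len

# A high order chromosome: a small set of "visit a before b" relations (assumed).
high_order = [(0, 2), (1, 3), (4, 5)]
champion, length = decode(high_order, n=len(cities))
print("champion low order chromosome:", champion, "length:", round(length, 2))
```
In the full algorithm, mutation and crossover would operate on the list of precedence pairs, and only this decoding step would touch concrete tours.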
Procedia PDF Downloads 271
37309 Management of Fitness-For-Duty for Human Error Prevention in Nuclear Power Plants
Authors: Hyeon-Kyo Lim, Tong-Il Jang, Yong-Hee Lee
Abstract:
For the past several decades, many researchers have warned that even a trivial human error may result in unexpected accidents, especially in Nuclear Power Plants. To prevent accidents in Nuclear Power Plants, it is indispensable to bring under effective control any factor that may raise the possibility of human error. This study aimed to develop a risk management program, specifically one guaranteeing the Fitness-for-Duty (FFD) of people working in Nuclear Power Plants. Through a literature survey, it was found that work stress and fatigue are major psychophysical factors requiring sophisticated management. A set of major management factors related to work stress and fatigue was identified through repeated literature surveys and classified into several categories. To maintain the fitness of workers, a four-level approach (individual worker, team, in-plant staff, and external professionals) was adopted for the FFD management program. Moreover, the program was arranged to cover the whole employment cycle, from selection and screening of workers to job allocation and job rotation. Also, a managerial care program was introduced for employee assistance based on the concept of the Employee Assistance Program (EAP). The developed program was repeatedly reviewed by former operators of nuclear power plants and assessed favorably. As a whole, the responses implied additional treatment to guarantee high performance of workers not only in normal operations but also in emergency situations. Consequently, the program is under administrative modification for practical application.
Keywords: fitness-for-duty (FFD), human error, work stress, fatigue, Employee-Assistance-Program (EAP)
Procedia PDF Downloads 302
37308 Characterization of Onboard Reliable Error Correction Code for SDRAM Controller
Authors: N. Pitcheswara Rao
Abstract:
In the process of conveying information, there is a chance that the signal is corrupted, which leads to erroneous bits in the message. The message may contain single, double, or multiple bit errors. In high-reliability applications, memory can sustain multiple soft errors due to single or multiple event upsets caused by environmental factors. The traditional Hamming code with SEC-DED capability cannot address these types of errors. It is possible to use a powerful non-binary BCH code such as the Reed-Solomon code to address multiple errors. However, it can take at least a couple of dozen cycles of latency to complete the first correction, and it runs at a relatively slow speed. In order to overcome this drawback, i.e., to increase speed and reduce latency, we use a Reed-Muller code.
Keywords: SEC-DED, BCH code, Reed-Solomon code, Reed-Muller code
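As background to the single-error-correction capability the abstract contrasts against, the sketch below shows syndrome decoding with a textbook systematic Hamming(7,4) code; it is a generic illustration, not the authors' SDRAM controller or Reed-Muller design.
```python
import numpy as np

# Generator and parity-check matrices of a systematic Hamming(7,4) code
# (textbook construction, G = [I | P], H = [P^T | I]).
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def encode(data4):
    return (np.array(data4) @ G) % 2

def correct(received7):
    """Correct at most one bit error using the syndrome."""
    r = np.array(received7).copy()
    syndrome = (H @ r) % 2
    if syndrome.any():
        # The syndrome equals the column of H at the position of the flipped bit.
        for col in range(H.shape[1]):
            if np.array_equal(H[:, col], syndrome):
                r[col] ^= 1
                break
    return r[:4]  # systematic code: the first 4 bits are the data

codeword = encode([1, 0, 1, 1])
corrupted = codeword.copy()
corrupted[5] ^= 1                      # inject a single bit error
print("decoded:", correct(corrupted))  # recovers [1 0 1 1]
```
A second event upset in the same word would defeat this scheme, which is exactly the limitation that motivates the stronger codes discussed in the abstract.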
Procedia PDF Downloads 428
37307 Consensus Problem of High-Order Multi-Agent Systems under Predictor-Based Algorithm
Authors: Cheng-Lin Liu, Fei Liu
Abstract:
For multi-agent systems with agent dynamics described by a high-order integrator, a usual consensus algorithm composed of state coordination control parts is proposed. Under communication delay, the consensus algorithm in asynchronously-coupled form can only make the agents achieve a stationary consensus, and a sufficient consensus condition is obtained based on frequency-domain analysis. To recover the original consensus state of the high-order agents without communication delay, a predictor-based consensus algorithm is also constructed by multiplying the delayed neighboring agents' states by a delay-related compensation part, and a sufficient consensus condition is again obtained. Simulation illustrates the correctness of the results.
Keywords: high-order dynamic agents, communication delay, consensus, predictor-based algorithm
Procedia PDF Downloads 570
37306 Analysis of a Strengthening of a Building Reinforced Concrete Structure
Authors: Nassereddine Attari
Abstract:
Each strengthening or repair operation requires special consideration and the use of methods, tools, and techniques appropriate to the situation and to the specific problems of each structure. The aim of this paper is to study the seismic pathology of a reinforced concrete building, to assess its vulnerability using a non-linear pushover analysis, and to develop curves for a medium-capacity building in order to estimate the damaged condition of the building.
Keywords: pushover analysis, earthquake, damage, strengthening
Procedia PDF Downloads 430
37305 Performance of High Efficiency Video Codec over Wireless Channels
Authors: Mohd Ayyub Khan, Nadeem Akhtar
Abstract:
Due to recent advances in wireless communication technologies and hand-held devices, there is a huge demand for video-based applications such as video surveillance, video conferencing, remote surgery, Digital Video Broadcast (DVB), IPTV, online learning courses, YouTube, WhatsApp, Instagram, Facebook, and interactive video games. However, raw video requires very high bandwidth, which makes compression a must before transmission over wireless channels. The High Efficiency Video Codec (HEVC), also called H.265, is the latest state-of-the-art video coding standard, developed jointly by ITU-T and ISO/IEC. HEVC targets high-resolution videos, such as 4K or 8K, and can fulfil the recent demands for video services. The compression ratio achieved by HEVC is twice that of its predecessor H.264/AVC at the same quality level. The compression efficiency is generally increased by removing more correlation between frames/pixels using complex techniques such as extensive intra and inter prediction. As more correlation is removed, the interdependency among coded bits increases. Thus, bit errors may have a large effect on the reconstructed video; sometimes even a single bit error can lead to catastrophic failure of the reconstructed video. In this paper, we study the performance of the HEVC bitstream over an additive white Gaussian noise (AWGN) channel. Moreover, HEVC over Quadrature Amplitude Modulation (QAM) combined with forward error correction (FEC) schemes is also explored over the noisy channel. The video is encoded using HEVC, and the coded bitstream is channel coded to provide some redundancy. The channel coded bitstream is then modulated using QAM and transmitted over the AWGN channel. At the receiver, the symbols are demodulated and channel decoded to obtain the video bitstream. The bitstream is then used to reconstruct the video using the HEVC decoder. It is observed that as the signal-to-noise ratio of the channel decreases, the quality of the reconstructed video decreases drastically. Using proper FEC codes, the quality of the video can be restored to a certain extent. Thus, the performance analysis of HEVC presented in this paper may assist in designing the optimized FEC code rate such that the quality of the reconstructed video is maximized over wireless channels.
Keywords: AWGN, forward error correction, HEVC, video coding, QAM
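HEVC encoding itself is not reproduced here; the sketch below illustrates only the channel portion of the described chain under simplifying assumptions: a random bitstream (standing in for HEVC-coded bits) is protected by a simple rate-1/3 repetition code (a stand-in for the paper's FEC), mapped to 4-QAM (QPSK), passed through an AWGN channel, and decoded, after which the raw and post-FEC bit error rates are measured. The bit count, repetition factor, and SNR are assumed values.
```python
import numpy as np

rng = np.random.default_rng(0)
n_bits, rep, snr_db = 30_000, 3, 4        # assumed simulation parameters

bits = rng.integers(0, 2, n_bits)
coded = np.repeat(bits, rep)              # rate-1/3 repetition code (stand-in FEC)

# 4-QAM (QPSK) mapping: pairs of coded bits -> unit-energy complex symbols.
pairs = coded.reshape(-1, 2)
symbols = ((2 * pairs[:, 0] - 1) + 1j * (2 * pairs[:, 1] - 1)) / np.sqrt(2)

# AWGN channel at the chosen symbol SNR.
snr = 10 ** (snr_db / 10)
noise_std = np.sqrt(1 / (2 * snr))
received = symbols + noise_std * (rng.standard_normal(symbols.shape)
                                  + 1j * rng.standard_normal(symbols.shape))

# Hard-decision demodulation and majority-vote FEC decoding.
demod = np.empty_like(coded)
demod[0::2] = (received.real > 0).astype(int)
demod[1::2] = (received.imag > 0).astype(int)
decoded = (demod.reshape(-1, rep).sum(axis=1) > rep // 2).astype(int)

print("raw channel BER :", np.mean(demod != coded))
print("post-FEC BER    :", np.mean(decoded != bits))
```
In the paper's setting, the post-FEC error rate would then determine how often the HEVC decoder sees corrupted syntax elements, and hence how badly the reconstructed video degrades.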
Procedia PDF Downloads 149
37304 Sensitive Analysis of the ZF Model for ABC Multi Criteria Inventory Classification
Authors: Makram Ben Jeddou
Abstract:
The ABC classification is widely used by managers for inventory control. The classical ABC classification is based on the Pareto principle and uses the criterion of annual use value only. Single-criterion classification is often insufficient for close inventory control. Multi-criteria inventory classification models have been proposed by researchers in order to take other important criteria into account. Among these models, we consider the ZF model in order to carry out a sensitivity analysis of the composite score calculated for each item. In fact, this score, based on a normalized average between a good and a bad optimized index, can affect the ABC classification of items. We then focus on the weights assigned to each index and propose a classification compromise.
Keywords: ABC classification, multi criteria inventory classification models, ZF-model
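As an illustration of the kind of composite score discussed above, the sketch below normalizes a few assumed criteria, forms a simplified "good" and "bad" index per item, and averages them with an equal weight. The per-item max/min indices and the weight lam are crude stand-ins for the ZF model's optimistic and pessimistic weighting optimizations, not the actual ZF formulation.
```python
import numpy as np

# Toy inventory data: rows = items, columns = criteria (assumed values):
# annual use value, average unit cost, lead time.
criteria = np.array([
    [12000.0, 50.0, 2.0],
    [ 8000.0,  5.0, 7.0],
    [ 3000.0, 80.0, 4.0],
    [  500.0, 10.0, 9.0],
])

# Normalize each criterion to [0, 1] (higher = more important).
mins, maxs = criteria.min(axis=0), criteria.max(axis=0)
norm = (criteria - mins) / (maxs - mins)

# Good index: the item's most favourable normalized criterion;
# bad index: its least favourable one (simplified stand-ins for the ZF optimizations).
good_index = norm.max(axis=1)
bad_index = norm.min(axis=1)

lam = 0.5  # compromise weight between the two indices (assumed)
composite = lam * good_index + (1 - lam) * bad_index

order = np.argsort(-composite)
print("items ranked by composite score:", order)
print("composite scores:", np.round(composite[order], 3))
```
Varying lam in such a scheme shows directly how the weighting between the good and bad indices can move an item across the A/B/C boundaries, which is the sensitivity the paper studies.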
Procedia PDF Downloads 508
37303 Comprehensive Experimental Study to Determine Energy Dissipation of Nappe Flows on Stepped Chutes
Authors: Abdollah Ghasempour, Mohammad Reza Kavianpour, Majid Galoie
Abstract:
This study investigated the fundamental parameters that play an effective role in the energy dissipation of nappe flows on stepped chutes, in order to estimate an empirical relationship using dimensional analysis. To achieve this goal, a comprehensive experimental study on several large-scale physical models with various step geometries, slopes, discharges, etc. was carried out. For all models, hydraulic parameters such as velocity, pressure, water depth, and flow regime were measured precisely. The effective parameters could then be determined by analysis of the experimental data. Finally, a dimensional analysis was performed in order to estimate an empirical relationship for evaluating the energy dissipation of nappe flows on stepped chutes. Because large-scale physical models were used in this study, the empirical relationship is in very good agreement with the experimental results.
Keywords: nappe flow, energy dissipation, stepped chute, dimensional analysis
Procedia PDF Downloads 361
37302 Characterization of Onboard Reliable Error Correction Code for SDRAM Controller
Authors: Pitcheswara Rao Nelapati
Abstract:
In the process of conveying information, there is a chance that the signal is corrupted, which leads to erroneous bits in the message. The message may contain single, double, or multiple bit errors. In high-reliability applications, memory can sustain multiple soft errors due to single or multiple event upsets caused by environmental factors. The traditional Hamming code with SEC-DED capability cannot address these types of errors. It is possible to use a powerful non-binary BCH code such as the Reed-Solomon code to address multiple errors. However, it can take at least a couple of dozen cycles of latency to complete the first correction, and it runs at a relatively slow speed. In order to overcome this drawback, i.e., to increase speed and reduce latency, we use a Reed-Muller code.
Keywords: SEC-DED, BCH code, Reed-Solomon code, Reed-Muller code
Procedia PDF Downloads 429
37301 End-to-End Performance of MPPM in Multihop MIMO-FSO System Over Dependent GG Atmospheric Turbulence Channels
Authors: Hechmi Saidi, Noureddine Hamdi
Abstract:
The performance of a decode-and-forward (DF) multihop free space optical (FSO) scheme deploying a multiple-input multiple-output (MIMO) configuration under the gamma-gamma (GG) statistical distribution and adopting M-ary pulse position modulation (MPPM) coding is investigated. Exact and approximate values of the symbol error rate (SER) are extracted, respectively. A closed-form formula for the probability density function (PDF) is expressed for the designed system. Thanks to the use of the DF multihop MIMO FSO configuration and MPPM signaling, atmospheric turbulence is combatted; hence the transmitted signal quality is improved.
Keywords: free space optical, gamma gamma channel, radio frequency, decode and forward, multiple-input multiple-output, M-ary pulse position modulation, symbol error rate
Procedia PDF Downloads 250
37300 Floodplain Modeling of River Jhelum Using HEC-RAS: A Case Study
Authors: Kashif Hassan, M.A. Ahanger
Abstract:
Floods have become more frequent and severe due to the effects of global climate change and human alteration of the natural environment. Flood prediction/forecasting and control is one of the greatest challenges facing the world today. Floods are forecast using hydraulic models such as HEC-RAS, which are designed to simulate the flow processes of surface water. Extreme flood events in the river Jhelum, lasting from a day to a few days, are a major disaster in the State of Jammu and Kashmir, India. In the present study, the HEC-RAS model was applied to two different reaches of the river Jhelum in order to estimate the flood levels corresponding to 25-, 50-, and 100-year return period flood events at important locations and to deduce the flood vulnerability of important areas and structures. The flow rates for the two reaches were derived from flood-frequency analysis of 50 years of historic peak flow data. Manning's roughness coefficient n was selected using detailed analysis. Rating curves were also generated to serve as a basis for determining the boundary conditions. Calibration and validation procedures were applied in order to ensure the reliability of the model. A sensitivity analysis was also performed in order to ensure the accuracy of Manning's n in generating water surface profiles.
Keywords: flood plain, HEC-RAS, Jhelum, return period
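The paper's 50-year peak flow record is not reproduced here, so the sketch below shows one common way such 25-, 50-, and 100-year design flows could be estimated: a Gumbel (Extreme Value Type I) frequency analysis using the frequency-factor method, applied to synthetic annual peak flows standing in for the historic data. The distribution choice and the numbers are assumptions, not the paper's.
```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic annual peak flows in m^3/s (stand-in for the 50-year historic record).
peaks = rng.gumbel(loc=2500.0, scale=600.0, size=50)

mean, std = peaks.mean(), peaks.std(ddof=1)

def gumbel_quantile(T):
    """Gumbel (EV1) frequency-factor method: Q_T = mean + K_T * std."""
    K = -(np.sqrt(6) / np.pi) * (0.5772 + np.log(np.log(T / (T - 1))))
    return mean + K * std

for T in (25, 50, 100):
    print(f"{T:>3}-year design flow ≈ {gumbel_quantile(T):,.0f} m^3/s")
```
The resulting design flows are what would be fed into HEC-RAS as steady-flow boundary conditions for each return period.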
Procedia PDF Downloads 426
37299 X-Ray Dynamical Diffraction Rocking Curves in Case of Third Order Nonlinear Renninger Effect
Authors: Minas Balyan
Abstract:
In the third-order nonlinear Takagi equations for monochromatic waves and in the third-order nonlinear time-dependent dynamical diffraction equations for X-ray pulses, the Fourier coefficients of the linear and the third-order nonlinear susceptibilities are zero for forbidden reflections. Dynamical diffraction in the nonlinear case is related to the presence, in the nonlinear equations, of terms proportional to the zero order and the second order nonzero Fourier coefficients of the third-order nonlinear susceptibility. Thus, in the third-order nonlinear Bragg diffraction case, a nonlinear analogue of the well-known Renninger effect takes place. In this work, this 'third order nonlinear Renninger effect' is considered theoretically and numerically. If the reflection is exactly forbidden, the diffracted wave's amplitude is zero both in the Laue and the Bragg cases, since the boundary conditions and the dynamical diffraction equations are compatible with the zero solution. But in real crystals, due to a small percentage of dislocations and other localized defects, the atoms are displaced with respect to their equilibrium positions. Thus, in real crystals, the susceptibilities of a forbidden reflection are some orders of magnitude smaller than for usual, non-forbidden reflections, but are not exactly equal to zero. Numerical calculations for susceptibilities two orders of magnitude smaller than those of a non-forbidden reflection show that in the Bragg geometry the nonlinear reflection curve behaves the same as for a non-forbidden reflection, but for the forbidden reflection the rocking curves' width, center, and boundaries are sensitive to the input intensity value by two orders of magnitude. This gives an opportunity to investigate third-order nonlinear X-ray dynamical diffraction for beams of low intensity, about 0.001 in units of the critical intensity.
Keywords: third order nonlinearity, Bragg diffraction, nonlinear Renninger effect, rocking curves
Procedia PDF Downloads 406
37298 A Study on the Quantitative Evaluation Method of Asphalt Pavement Condition through the Visual Investigation
Authors: Sungho Kim, Jaechoul Shin, Yujin Baek
Abstract:
In recent years, due to environmental impacts, time, and other factors, various types of pavement deterioration, such as cracking, potholes, rutting, and roughness degradation, have been increasing rapidly. In Korea, the Ministry of Land, Infrastructure and Transport regularly maintains the pavement condition of highways and national highways using pavement condition survey equipment and structural survey equipment. Local governments that maintain local roads, farm roads, etc. find it difficult to monitor the pavement condition using such survey equipment because of economic conditions, skills shortages, and local conditions such as narrow roads. This study presents a quantitative method for evaluating the pavement condition through visual inspection in order to overcome these problems for roads managed by local governments. It is difficult to evaluate rutting and roughness with the naked eye; however, the condition of cracks can be evaluated visually. Linear cracks (m), area cracks (m²), and potholes (number, m²) were investigated with the naked eye every 100 meters in order to survey the cracks. In this paper, the crack ratio was calculated from the crack survey results, and the pavement condition was evaluated from the calculated crack ratio. The pavement condition survey equipment also surveyed the same sections in order to evaluate the reliability of the pavement condition evaluation based on the calculated crack ratio. The pavement condition was evaluated through the SPI (Seoul Pavement Index) and the calculated crack ratio using the field survey results. A comparison between the SPI considering only the crack ratio and the SPI considering rutting and roughness as well, using the equipment survey data, showed a margin of error below 5% when the SPI is less than 5. An SPI of 5 is considered the base point for deciding whether to maintain the pavement. This showed that the pavement condition can be evaluated using only the crack ratio. According to the analysis of the crack ratios from the visual inspection and the equipment survey, the average error is 1.86% (minimum 0.03%, maximum 9.58%). Economically, the visual inspection costs only 10% of the equipment survey and will also help the economy by creating new jobs. This paper therefore advises that local governments maintain the pavement condition through visual investigations; however, more research is needed to improve reliability. Acknowledgment: The author would like to thank the MOLIT (Ministry of Land, Infrastructure, and Transport). This work was carried out through a project funded by the MOLIT named 'development of 20mm grade for road surface detecting roadway condition and rapid detection automation system for removal of pothole'.
Keywords: asphalt pavement maintenance, crack ratio, evaluation of asphalt pavement condition, SPI (Seoul Pavement Index), visual investigation
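As an illustration of the crack ratio computation described above, the sketch below converts the quantities recorded per 100 m survey section (linear cracks in m, area cracks in m², potholes in m²) into a crack ratio. The lane width and the 0.3 m equivalent width assigned to linear cracks are illustrative assumptions, not values from the paper.
```python
# Crack ratio per 100 m survey section (illustrative values and assumptions).
SECTION_LENGTH_M = 100.0
LANE_WIDTH_M = 3.5            # assumed surveyed pavement width
LINEAR_CRACK_WIDTH_M = 0.3    # assumed equivalent width for linear cracks

sections = [
    # (linear cracks [m], area cracks [m^2], potholes [m^2])
    (45.0, 12.0, 0.6),
    (10.0,  3.5, 0.0),
    (80.0, 25.0, 1.2),
]

for i, (linear_m, area_m2, pothole_m2) in enumerate(sections, start=1):
    pavement_area = SECTION_LENGTH_M * LANE_WIDTH_M
    cracked_area = linear_m * LINEAR_CRACK_WIDTH_M + area_m2 + pothole_m2
    crack_ratio = 100.0 * cracked_area / pavement_area
    print(f"section {i}: crack ratio = {crack_ratio:.1f} %")
```
The resulting section-by-section crack ratios are the quantity that would then be fed into an SPI-style index for the maintenance decision.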
Procedia PDF Downloads 167
37297 The Influence of Music Education and the Order of Sounds on the Grouping of Sounds into Sequences of Six Tones
Authors: Adam Rosiński
Abstract:
This paper discusses an experiment conducted with two groups of participants, composed of musicians and non-musicians, in order to investigate the impact of the speed of a sound sequence and the order of sounds on the grouping of sounds into sequences of six tones. Significant differences were observed between musicians and non-musicians with respect to the threshold sequence speed at which the sequence was split into two streams. The differences in the results for the two groups suggest that the musical education of the participating listeners may be a vital factor. The criterion of musical education should be taken into account during experiments so that the results obtained are reliable, uniform, and free from interpretive errors.
Keywords: auditory scene analysis, education, hearing, psychoacoustics
Procedia PDF Downloads 102
37296 Exploring Time-Series Phosphoproteomic Datasets in the Context of Network Models
Authors: Sandeep Kaur, Jenny Vuong, Marcel Julliard, Sean O'Donoghue
Abstract:
Time-series data are useful for modelling as they enable model evaluation. However, when reconstructing models from phosphoproteomic data, non-exact methods are often utilised, as knowledge regarding the network structure, such as which kinases and phosphatases lead to the observed phosphorylation state, is incomplete. Thus, such reactions are often hypothesised, which gives rise to uncertainty. Here, we propose a framework, implemented via a web-based tool (as an extension to Minardo), which, given time-series phosphoproteomic datasets, can generate κ models. The incompleteness and uncertainty in the generated model and reactions are clearly presented to the user via the visual method. Furthermore, we demonstrate, via a toy EGF signalling model, the use of algorithmic verification to verify κ models. Manually formulated requirements were evaluated with regard to the model, leading to the highlighting of the nodes causing unsatisfiability (i.e. error-causing nodes). We aim to integrate such methods into our web-based tool and demonstrate how the identified erroneous nodes can be presented to the user via the visual method. Thus, in this research we present a framework to enable a user to explore phosphoproteomic time-series data in the context of models. The observer can visualise which reactions in the model are highly uncertain and which nodes cause incorrect simulation outputs. Such a tool enables an end-user to determine which empirical analysis to perform in order to reduce uncertainty in the presented model, thus enabling a better understanding of the underlying system.
Keywords: κ-models, model verification, time-series phosphoproteomic datasets, uncertainty and error visualisation
Procedia PDF Downloads 255
37295 A Tuning Method for Microwave Filter via Complex Neural Network and Improved Space Mapping
Authors: Shengbiao Wu, Weihua Cao, Min Wu, Can Liu
Abstract:
This paper presents an intelligent tuning method for microwave filters based on a complex neural network and improved space mapping. The tuning process consists of two stages: initial tuning and fine tuning. At the beginning of the tuning, the return loss of the filter is transferred to the passband via the phase error. During the fine tuning, the phase shift caused by the transmission line and the higher order modes is removed by curve fitting. Then, a Cauchy method based on the admittance parameter (Y-parameter) is used to extract the coupling matrix. The influence of the resonant cavity loss is eliminated during the parameter extraction process. Using the processed data pairs (the amount of screw variation and the variation of the coupling matrix), a tuning model is established by the complex neural network. Based on the improved space mapping algorithm, the mapping relationship between the actual model and the ideal model is established, and the amplitude and direction of the tuning are constantly updated. Finally, a tuning experiment on an eighth-order coaxial cavity filter shows that the proposed method performs well in terms of tuning time and tuning precision.
Keywords: microwave filter, scattering parameter, coupling matrix, intelligent tuning
Procedia PDF Downloads 311
37294 Application of Modal Analysis for Commissioning of a Ball Screw System
Authors: T. D. Tran, H. Schlegel, R. Neugebauer
Abstract:
Ball screws are an important component in machine tools. In mechatronic systems and machine tools, a ball screw usually has to work at high speed. Otherwise, the axial compliance of the ball screw, in combination with the inertia of the slide, the motor, the coupling, and the screw, will cause an oscillation resonance, which limits the system's bandwidth and consequently influences the performance of the motion controller. In this paper, the modal analysis method, which measures and analyses the vibration parameters of the ball screw system in order to determine the dynamic characteristics of the existing structure, is used. On the one hand, the results of this study were obtained by theoretical analysis and by modal testing of a ball screw system test station with the help of an impact hammer and, respectively, excitation by the motor. The experimental study showed the oscillation shapes of the ball screw for each frequency and yielded the eigenfrequencies of the ball screw system. On the other hand, a simulation based on numerical modal analysis is used in order to analyse the oscillation and to find the eigenfrequencies of the ball screw system. Furthermore, model order reduction by modal reduction and also according to Guyan is carried out. On the basis of these results, a reliable and rapid commissioning of the control loops, so that they operate at their optimal function, is targeted.
Keywords: modal analysis, ball screw, controller system, machine tools
Procedia PDF Downloads 460
37293 Handling Missing Data by Using Expectation-Maximization and Expectation-Maximization with Bootstrapping for Linear Functional Relationship Model
Authors: Adilah Abdul Ghapor, Yong Zulina Zubairi, A. H. M. R. Imon
Abstract:
The missing value problem is common in statistics and has been of interest for years. This article considers two modern techniques for handling missing data in the linear functional relationship model (LFRM), namely the Expectation-Maximization (EM) algorithm and the Expectation-Maximization with Bootstrapping (EMB) algorithm, using three performance indicators: the mean absolute error (MAE), the root mean square error (RMSE), and the estimated bias (EB). In this study, we applied the methods of imputing missing values to two types of LFRM, namely the full LFRM and the LFRM in which the slope is estimated using a nonparametric method. Results of the simulation study suggest that the EMB algorithm performs much better than the EM algorithm in both models. We also illustrate the applicability of the approach on a real data set.
Keywords: expectation-maximization, expectation-maximization with bootstrapping, linear functional relationship model, performance indicators
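The three performance indicators are standard quantities; the sketch below shows how they could be computed for a set of imputed values against the true values that were deleted in a simulation. The numbers are toy values for illustration, not results from the paper's simulation design.
```python
import numpy as np

# True values deleted in the simulation, and the corresponding imputations
# produced by an algorithm (toy numbers for illustration).
true_vals = np.array([4.2, 5.1, 3.8, 6.0, 5.5])
imputed = np.array([4.0, 5.4, 3.9, 5.6, 5.8])

mae = np.mean(np.abs(imputed - true_vals))           # mean absolute error
rmse = np.sqrt(np.mean((imputed - true_vals) ** 2))  # root mean square error
eb = np.mean(imputed - true_vals)                    # estimated bias

print(f"MAE = {mae:.3f}, RMSE = {rmse:.3f}, EB = {eb:.3f}")
```
In the simulation study, these indicators would be averaged over many replications to compare the EM and EMB imputations.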
Procedia PDF Downloads 455
37292 On-Site Coaching of Freshly-Graduated Nurses to Improve Quality of Clinical Handover and to Avoid Clinical Error
Authors: Sau Kam Adeline Chan
Abstract:
The World Health Organization has listed 'Communication during Patient Care Handovers' as one of its top five patient safety initiatives. Clinical handover means the transfer of accountability and responsibility for clinical information from one health professional to another. The main goal of clinical handover is to convey the patient's current condition and treatment plan accurately. Ineffective communication at the point of care is globally regarded as the main cause of sentinel events. Situation, Background, Assessment and Recommendation (SBAR), a communication tool, is widely regarded as effective in the healthcare setting. Nonetheless, scenario-based programs in nursing school or workshops on SBAR alone are not enough for freshly graduated nurses to apply it competently in complex clinical practice. To what extent and in what depth information should be conveyed during the handover process is not easy to learn. As such, on-site coaching is essential to upgrade their expertise in the use of SBAR and ultimately to avoid clinical errors. On-site coaching of all freshly graduated nurses on the use of SBAR in clinical handover commenced in August 2014. During the preceptorship period, freshly graduated nurses were coached by the preceptor. After that, they were gradually assigned to take care of a group of patients independently. Nurse leaders would join their shift handover process at the patient's bedside. Feedback and support were given to them accordingly. Discrepancies in their clinical handover process were shared with them and documented for further improvement work. Owing to manpower constraints among nurse leaders, about 30 coaching sessions were provided to each nurse in a year. A staff satisfaction survey was conducted to gauge their feelings about the coaching and to look into areas for further improvement. The number of clinical errors avoided was documented as well. The nurses reported a significant improvement, particularly in their confidence and knowledge in the clinical handover process. In addition, a sense of empowerment developed when liaising with senior and experienced nurses. Their proficiency in applying SBAR was enhanced, and they became more alert to the critical criteria of an effective clinical handover. Most importantly, the accuracy of transferring the patient's condition was improved, and repetition of information was avoided. Clinical errors were prevented and quality patient care was ensured. Using SBAR as a communication tool looks simple, as the tool only provides a framework to guide the handover process. Nevertheless, without on-site training, loopholes in clinical handover still exist, patient safety is affected, and clinical errors still happen.
Keywords: freshly graduated nurse, competency of clinical handover, quality, clinical error
Procedia PDF Downloads 148
37291 Heat-Induced Uncertainty of Industrial Computed Tomography Measuring a Stainless Steel Cylinder
Authors: Verena M. Moock, Darien E. Arce Chávez, Mariana M. Espejel González, Leopoldo Ruíz-Huerta, Crescencio García-Segundo
Abstract:
Uncertainty analysis in industrial computed tomography is commonly related to metrological reference tools, which offer precision measurements of external part features. Unfortunately, there is no such reference tool for internal measurements that could profit from the unique imaging potential of X-rays. Uncertainty approximations for computed tomography are still based on general aspects of the industrial machine and do not adapt to acquisition parameters or part characteristics. The present study investigates the impact of the acquisition time on the dimensional uncertainty when measuring a stainless steel cylinder with a circular tomography scan. The authors develop the figure difference method for X-ray radiography to evaluate the volumetric differences introduced within the projected absorption maps of the metal workpiece. The dimensional uncertainty is dominantly influenced by photon energy dissipated as heat, which causes thermal expansion of the metal, as monitored by an infrared camera within the industrial tomograph. With the proposed methodology, we are able to show evolving temperature differences throughout the tomography acquisition. This is an early study showing that the number of projections in computed tomography induces dimensional error due to energy absorption. The error magnitude depends on the thermal properties of the sample and on the acquisition parameters, which introduce an apparent, non-uniform, unwanted volumetric expansion. We introduce infrared imaging for the experimental display of metrological uncertainty in a particular metal part of symmetric geometry. We assess that the current results are of fundamental value for reaching a balance between the number of projections and the uncertainty tolerance when performing X-ray dimensional exploration in precision measurements with industrial tomography.
Keywords: computed tomography, digital metrology, infrared imaging, thermal expansion
Procedia PDF Downloads 121
37290 Contested Visions of Exploration in IR: Theoretical Engagements, Reflections and New Agendas on the Dynamics of Global Order
Authors: Ananya Sharma
Abstract:
International Relations is a discipline of paradoxes. The State is the dominant political institution, and mainstream analysis theorizes the State, but theory remains at best a reactionary monolith. Critical theorists have been pushing the envelope, and to that extent there has been a clear shift in the dominant discourse away from State-centrism towards individual- and group-level behaviour. This paradigm shift has been accompanied by more nuanced conceptualizations of other variables at play: power, security, and trust, to name a few. Yet the ambit of 'what is discussed' remains primarily embedded in realist conceptualizations. With this background in mind, this paper attempts to understand, juxtapose, and evaluate how 'order' has been conceptualized in International Relations theory. The paper is a tentative attempt to present a 'state of the art' and, in the process, to set the stage for a deeper study, drawing attention to what the author feels is a gaping lacuna in IR theory. The paper looks at how different branches of international relations theory envisage world order, and the silences embedded therein. Further, by locating order and disorder, which inhabit the same reality, along a continuum, alternative readings of world orders are drawn from the critical theoretical traditions, in which various articulations of justice provide the key normative pillar of the world order.
Keywords: global justice, international relations theory, legitimacy, world order
Procedia PDF Downloads 346
37289 Diesel Fault Prediction Based on Optimized Gray Neural Network
Authors: Han Bing, Yin Zhenjie
Abstract:
In order to analyze the status of a diesel engine, as well as to conduct fault prediction, a new prediction model based on a gray system is proposed in this paper, which takes advantage of a neural network and a genetic algorithm. The proposed GBPGA prediction model builds on the GM(1,5) model and uses a neural network, optimized by a genetic algorithm, to construct the error compensator. We verify the proposed model on diesel fault simulation data, and the experimental results show that GBPGA has the potential to perform fault prediction for diesel engines.
Keywords: fault prediction, neural network, GM(1,5), genetic algorithm, GBPGA
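The GM(1,5) model with its GA-optimized neural-network compensator is not reproduced here; as a minimal illustration of the gray-model component, the sketch below fits a univariate GM(1,1) model (the simpler single-variable case) to an assumed condition-monitoring series and makes a one-step prediction. The residuals it leaves are the kind of error a neural-network compensator of the type described would be trained to correct.
```python
import numpy as np

# Assumed monitoring series (e.g., a wear-related indicator sampled over time).
x0 = np.array([2.87, 3.28, 3.34, 3.39, 3.68, 3.74])

# GM(1,1): accumulate, build background values, estimate parameters by least squares.
x1 = np.cumsum(x0)
z1 = 0.5 * (x1[1:] + x1[:-1])                  # background (mean generating) sequence
B = np.column_stack((-z1, np.ones_like(z1)))
Y = x0[1:]
a, b = np.linalg.lstsq(B, Y, rcond=None)[0]    # development coefficient, gray input

def x1_hat(k):
    return (x0[0] - b / a) * np.exp(-a * k) + b / a

n = len(x0)
fitted = np.array([x0[0]] + [x1_hat(k) - x1_hat(k - 1) for k in range(1, n)])
next_value = x1_hat(n) - x1_hat(n - 1)

print("residuals (input for an error compensator):", np.round(x0 - fitted, 3))
print("one-step-ahead prediction:", round(next_value, 3))
```
A GM(1,5) model extends this idea by regressing the accumulated target series on four additional accumulated input series, but the parameter estimation and prediction structure are analogous.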
Procedia PDF Downloads 304
37288 An Optimization of Machine Parameters for Modified Horizontal Boring Tool Using Taguchi Method
Authors: Thirasak Panyaphirawat, Pairoj Sapsmarnwong, Teeratas Pornyungyuen
Abstract:
This paper presents the findings of an experimental investigation of the important machining parameters for a horizontal boring tool modified to mount on a horizontal lathe machine in order to bore an over-length workpiece. To verify the usability of the modified tool, a design of experiment based on the Taguchi method is performed. The parameters investigated are spindle speed, feed rate, depth of cut, and length of workpiece. A Taguchi L9 orthogonal array is selected for the four factors at three levels in order to minimize the surface roughness (Ra and Rz) of S45C steel tubes. Signal-to-noise ratio analysis and analysis of variance (ANOVA) are performed to study the effect of these parameters and to optimize the machine setting for the best surface finish. The controlled factors with the most effect are depth of cut, spindle speed, length of workpiece, and feed rate, in that order. A confirmation test is performed to check the optimal setting obtained from the Taguchi method, and the result is satisfactory.
Keywords: design of experiment, Taguchi design, optimization, analysis of variance, machining parameters, horizontal boring tool
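Since surface roughness is to be minimized, the smaller-the-better signal-to-noise ratio is the usual Taguchi choice; the sketch below computes S/N for each run of a standard L9 array and the mean S/N per factor level, from which the best level of each factor is read off. The roughness values are assumed placeholders, not the paper's measurements.
```python
import numpy as np

# Standard L9(3^4) orthogonal array: 9 runs x 4 factors, levels coded 1..3.
L9 = np.array([
    [1, 1, 1, 1], [1, 2, 2, 2], [1, 3, 3, 3],
    [2, 1, 2, 3], [2, 2, 3, 1], [2, 3, 1, 2],
    [3, 1, 3, 2], [3, 2, 1, 3], [3, 3, 2, 1],
])
factors = ["spindle speed", "feed rate", "depth of cut", "workpiece length"]

# Assumed measured surface roughness Ra (um) for each run (illustrative only).
Ra = np.array([1.9, 2.3, 2.8, 1.6, 2.1, 2.5, 1.8, 2.0, 2.4])

# Smaller-the-better S/N ratio per run: -10 log10(mean(y^2)) (one replicate here).
sn = -10.0 * np.log10(Ra ** 2)

for j, name in enumerate(factors):
    level_means = [sn[L9[:, j] == lvl].mean() for lvl in (1, 2, 3)]
    best = int(np.argmax(level_means)) + 1
    print(f"{name:17s} mean S/N per level: {np.round(level_means, 2)} -> best level {best}")
```
The range of the level means for each factor also indicates its relative influence, which is what the ANOVA in the paper quantifies formally.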
Procedia PDF Downloads 440
37287 Stability of Concrete Moment Resisting Frames in View of Current Codes Requirements
Authors: Mahmoud A. Mahmoud, Ashraf Osman
Abstract:
In this study, the different approaches currently followed by design codes to assess the stability of buildings utilizing the concrete moment resisting frame structural system are evaluated. For this purpose, a parametric study was performed. It involved analyzing a group of concrete moment resisting frames having different slenderness ratios (height/width ratios), designed for different lateral-to-vertical load ratios, and constructed using ordinary reinforced concrete and high strength concrete, for stability checks and overall buckling using the code approaches and computer buckling analysis. The objectives were to examine the influence of these parameters, which are directly linked to the frames' lateral stiffness, on the buildings' stability, and to evaluate the code approaches in view of the buckling analysis results. Based on this study, it was concluded that the buildings most susceptible to instability and magnification of second order effects are those having high aspect ratios (height/width ratios), low lateral-to-vertical load ratios, and construction materials of high strength. In addition, the study showed that the instability limits imposed by the codes are mainly mathematical limits intended to ensure reliable analysis, not physical ones, and that they are in general conservative. Also, it was shown that the upper limit set by one of the codes, namely that the second order moment for structural elements should be limited to 1.4 times the first order moment, is not justified; instead, the overall story check is more reliable.
Keywords: buckling, lateral stability, p-delta, second order
Procedia PDF Downloads 257
37286 Lifting Wavelet Transform and Singular Values Decomposition for Secure Image Watermarking
Authors: Siraa Ben Ftima, Mourad Talbi, Tahar Ezzedine
Abstract:
In this paper, we present a technique for the secure watermarking of grayscale and color images. The technique consists of applying the Singular Value Decomposition (SVD) in the LWT (Lifting Wavelet Transform) domain in order to insert the watermark image (grayscale) into the host image (grayscale or color). It also uses a signature in the embedding and extraction steps. The technique is applied to a number of grayscale and color images. The performance of the technique is demonstrated by PSNR (Peak Signal-to-Noise Ratio), MSE (Mean Square Error), and SSIM (structural similarity) computations.
Keywords: lifting wavelet transform (LWT), sub-space vectorial decomposition, secure, image watermarking, watermark
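The sketch below illustrates one common wavelet-plus-SVD embedding of the general kind described above, not the authors' exact scheme: the host's approximation sub-band is obtained with a standard 2D DWT from PyWavelets (used here as a stand-in for the lifting wavelet transform), its singular values are additively modified by the watermark's singular values, and the original singular values act as the signature for non-blind extraction. The Haar wavelet, image sizes, and embedding strength alpha are assumptions.
```python
import numpy as np
import pywt

rng = np.random.default_rng(0)
host = rng.integers(0, 256, (256, 256)).astype(float)       # stand-in grayscale host image
watermark = rng.integers(0, 256, (128, 128)).astype(float)  # stand-in grayscale watermark
alpha = 0.05                                                 # embedding strength (assumed)

# One-level 2D DWT of the host (standard DWT as a stand-in for the LWT).
cA, (cH, cV, cD) = pywt.dwt2(host, "haar")

# SVD of the approximation sub-band and of the watermark.
Ua, Sa, Vat = np.linalg.svd(cA, full_matrices=False)
Uw, Sw, Vwt = np.linalg.svd(watermark, full_matrices=False)

# Embedding: add the scaled watermark singular values to the host singular values.
cA_marked = Ua @ np.diag(Sa + alpha * Sw) @ Vat
watermarked = pywt.idwt2((cA_marked, (cH, cV, cD)), "haar")

# Extraction (non-blind: the original singular values Sa act as the signature).
cA_w, _ = pywt.dwt2(watermarked, "haar")
Sw_extracted = (np.linalg.svd(cA_w, compute_uv=False) - Sa) / alpha
wm_recovered = Uw @ np.diag(Sw_extracted) @ Vwt  # reconstructed watermark estimate

psnr = 10 * np.log10(255.0 ** 2 / np.mean((host - watermarked) ** 2))
print(f"PSNR of watermarked image: {psnr:.1f} dB")
print("max singular-value recovery error:", float(np.abs(Sw - Sw_extracted).max()))
```
A larger alpha makes the watermark more robust but lowers the PSNR of the watermarked image, which is the trade-off the PSNR/MSE/SSIM evaluation in the paper quantifies.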
Procedia PDF Downloads 276
37285 Optimal Design of Reference Node Placement for Wireless Indoor Positioning Systems in Multi-Floor Building
Authors: Kittipob Kondee, Chutima Prommak
Abstract:
In this paper, we propose an optimization technique that can be used to optimize the placement of reference nodes and improve the location determination performance for multi-floor buildings. The proposed technique is based on the Simulated Annealing (SA) algorithm and is called MSMR-M. The performance study in this work is based on simulation. We compare node-placement techniques found in the literature with the optimal node placements obtained from our optimization. The results show that using the optimal node placement obtained by the proposed technique can improve the positioning error distances by up to 20% compared with the other techniques. The proposed technique can provide an average error distance within 1.42 meters.
Keywords: indoor positioning system, optimization system design, multi-floor building, wireless sensor networks
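The MSMR-M objective and building model are not given in the abstract; the sketch below shows a generic simulated annealing loop of the kind described, minimizing an assumed placement cost (mean distance from a grid of test points to their nearest reference node on one floor) as a crude stand-in for the real positioning-error model. The floor size, node count, cooling schedule, and cost function are all assumptions.
```python
import math
import random

random.seed(0)
FLOOR_W, FLOOR_H, N_NODES = 40.0, 25.0, 4     # assumed floor size (m) and node count
test_points = [(x + 1.0, y + 1.0) for x in range(0, 40, 2) for y in range(0, 25, 2)]

def cost(nodes):
    """Assumed surrogate for positioning error: mean distance to the nearest node."""
    return sum(min(math.dist(p, n) for n in nodes) for p in test_points) / len(test_points)

def neighbour(nodes, step=2.0):
    """Perturb one randomly chosen node, keeping it inside the floor."""
    new = list(nodes)
    i = random.randrange(len(new))
    x, y = new[i]
    new[i] = (min(max(x + random.uniform(-step, step), 0.0), FLOOR_W),
              min(max(y + random.uniform(-step, step), 0.0), FLOOR_H))
    return new

current = [(random.uniform(0, FLOOR_W), random.uniform(0, FLOOR_H)) for _ in range(N_NODES)]
cur_cost = cost(current)
best, best_cost, T = current, cur_cost, 5.0

for it in range(2000):
    cand = neighbour(current)
    cand_cost = cost(cand)
    # Accept improvements always; accept worse moves with a temperature-dependent probability.
    if cand_cost < cur_cost or random.random() < math.exp((cur_cost - cand_cost) / T):
        current, cur_cost = cand, cand_cost
        if cur_cost < best_cost:
            best, best_cost = current, cur_cost
    T *= 0.998  # geometric cooling schedule

print("best mean error surrogate: %.2f m" % best_cost)
print("node placement:", [(round(x, 1), round(y, 1)) for x, y in best])
```
For a multi-floor problem, the candidate solution would carry node coordinates per floor and the cost would come from the actual location-determination model rather than a plain nearest-node distance.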
Procedia PDF Downloads 246
37284 Landsat 8-TIRS NEΔT at Kīlauea Volcano and the Active East Rift Zone, Hawaii
Authors: Flora Paganelli
Abstract:
The radiometric performance of remotely sensed images is important for volcanic monitoring. The Thermal Infrared Sensor (TIRS) on board Landsat 8 was designed with specific requirements in regard to the noise-equivalent change in temperature (NEΔT): ≤ 0.4 K at 300 K for the two thermal infrared bands B10 and B11. This study investigated the on-orbit NEΔT of the two TIRS bands using a scene-based method with clear-sky images over the volcanic activity of Kīlauea Volcano and the active East Rift Zone (Hawaii), in order to optimize the use of TIRS data. Results showed that the NEΔTs of the two bands exceeded the design specification by an order of magnitude at 300 K. Both the separate bands and the split-window algorithm were examined to estimate the effect of NEΔT on the land surface temperature (LST) retrieval and the contribution of NEΔT to the final LST error. These results are also useful in the current efforts to assess the requirements for a volcanology research campaign using the Hyperspectral Infrared Imager (HyspIRI), whose airborne prototype MODIS/ASTER instrument is planned to be flown by NASA as a single campaign to the Hawaiian Islands in support of volcanology and coastal area monitoring in 2016.
Keywords: landsat 8, radiometric performance, thermal infrared sensor (TIRS), volcanology
Procedia PDF Downloads 241
37283 Inverse Prediction of Thermal Parameters of an Annular Hyperbolic Fin Subjected to Thermal Stresses
Authors: Ashis Mallick, Rajeev Ranjan
Abstract:
The closed-form solution for thermal stresses in an annular fin with a hyperbolic profile is derived using the Adomian decomposition method (ADM). A conductive-convective fin with variable thermal conductivity is considered in the analysis. The nonlinear heat transfer equation is efficiently solved by ADM, considering insulated convective boundary conditions at the fin tip. The constant of integration in the solution is estimated using the minimum decomposition error method. The temperature field solution is represented in polynomial form for convenience of use in the thermo-elasticity equation. The non-dimensional thermal stress fields are obtained using the ADM solution of the temperature field coupled with the thermo-elasticity solution. The influence of the various thermal parameters on the temperature field and the stress fields is presented. In order to show the accuracy of the ADM solution, the present results are compared with results available in the literature. The stress fields in the fin with a hyperbolic profile are compared with those of a uniform-thickness profile. Results show that the hyperbolic fin profile is the better choice for enhancing heat transfer; moreover, lower thermal stresses are developed in the hyperbolic profile than in the rectangular profile. Next, the Nelder-Mead based simplex search method is employed for the inverse estimation of unknown non-dimensional thermal parameters from a given stress field. Owing to the correlated nature of the unknowns, the best combinations of the model parameters that satisfy the predefined stress field are to be estimated. The stress fields calculated using the inverse parameters are in very good agreement with the stress fields obtained from the forward solution. The estimated parameters are suitable for use in efficient and cost-effective fin design.
Keywords: Adomian decomposition, inverse analysis, hyperbolic fin, variable thermal conductivity
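The forward ADM thermo-elastic solution is not reproduced here; the sketch below illustrates only the inverse step, using SciPy's Nelder-Mead simplex search to recover assumed thermal parameters from a synthetic "predefined" stress field produced by a simple stand-in forward model. The forward_stress function, the parameter values, and the grid are all assumptions for illustration, not the paper's formulation.
```python
import numpy as np
from scipy.optimize import minimize

xi = np.linspace(0.0, 1.0, 30)  # non-dimensional radial coordinate

def forward_stress(params, xi):
    """Stand-in forward model mapping thermal parameters to a stress profile
    (NOT the paper's ADM thermo-elastic solution; illustrative only)."""
    conductivity_param, convection_param = params
    temp = np.exp(-convection_param * xi) * (1.0 + conductivity_param * xi ** 2)
    return temp - temp.mean()   # crude zero-mean "thermal stress" profile

true_params = np.array([0.4, 1.2])
target_stress = forward_stress(true_params, xi)   # the predefined stress field

def objective(params):
    # Sum-of-squares mismatch between predicted and predefined stress fields.
    return np.sum((forward_stress(params, xi) - target_stress) ** 2)

result = minimize(objective, x0=[0.1, 0.5], method="Nelder-Mead",
                  options={"xatol": 1e-8, "fatol": 1e-10})

print("recovered parameters:", np.round(result.x, 4), " true:", true_params)
```
Because the simplex search is derivative-free, the same driver works unchanged when the forward model is replaced by the full ADM-based stress computation.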
Procedia PDF Downloads 327
37282 Design and Simulation of Unified Power Quality Conditioner based on Adaptive Fuzzy PI Controller
Authors: Brahim Ferdi, Samira Dib
Abstract:
The unified power quality conditioner (UPQC), a combination of shunt and series active power filters, is one of the best solutions for mitigating voltage and current harmonics problems in distribution power systems. The PI controller is very commonly used in the control of the UPQC. However, one disadvantage of this conventional controller is the difficulty of tuning its gains (Kp and Ki). To overcome this problem, an adaptive fuzzy logic PI controller is proposed. The controller is composed of a fuzzy controller and a PI controller. According to the error and error rate of the control system and the fuzzy control rules, the fuzzy controller can adjust the two gains of the PI controller online to obtain better UPQC performance. Simulations using MATLAB/SIMULINK are carried out to verify the performance of the proposed controller. The results show that the proposed controller has a fast dynamic response and high accuracy in tracking the current and voltage references.
Keywords: adaptive fuzzy PI controller, current harmonics, PI controller, voltage harmonics, UPQC
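The full UPQC model requires MATLAB/Simulink; the sketch below illustrates only the adaptive-gain idea on a generic first-order plant: a coarse rule-like scheme driven by the error and the error rate scales the nominal Kp and Ki online. The membership function, the two rules, the nominal gains, and the plant are assumptions for illustration, not the paper's design.
```python
dt, steps = 0.01, 600
ref = 1.0                         # step reference to track
kp0, ki0 = 2.0, 5.0               # nominal PI gains (assumed)

def mu_large(x, scale):
    """Saturating membership for 'large magnitude': 0 at x = 0, 1 for |x| >= scale."""
    return min(abs(x) / scale, 1.0)

y, integral, prev_err = 0.0, 0.0, ref
for k in range(steps):
    err = ref - y
    derr = (err - prev_err) / dt
    # Coarse fuzzy-style adaptation of the PI gains from error and error rate (assumed rules):
    #  - large error                 -> increase Kp for a faster response
    #  - large error or error rate   -> decrease Ki to limit overshoot and windup
    kp = kp0 * (1.0 + 0.5 * mu_large(err, 0.5))
    ki = ki0 * (1.0 - 0.5 * max(mu_large(err, 0.5), mu_large(derr, 20.0)))
    integral += err * dt
    u = kp * err + ki * integral
    y += dt * (-y + u)            # first-order plant standing in for the controlled quantity
    prev_err = err

print(f"tracked output after {steps * dt:.1f} s: {y:.4f} (reference = {ref})")
```
In the UPQC application, the same adaptation would act on the DC-link voltage or compensation-current PI loops, with proper fuzzy membership functions and a full rule table in place of the two coarse rules used here.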
Procedia PDF Downloads 556