Search results for: stochastic errors.

241 The Use of Classifiers in Image Analysis of Oil Wells Profiling Process and the Automatic Identification of Events

Authors: Jaqueline M. R. Vieira

Abstract:

Different strategies and tools are available in the oil and gas industry for detecting and analyzing tension and possible fractures in borehole walls. Most of these techniques are based on manual observation of the captured borehole images. While this strategy may be feasible and convenient with small images and little data, it becomes difficult and prone to errors when large databases of images must be processed, and the patterns may differ across the image area depending on many characteristics (drilling strategy, rock components, rock strength, etc.). In this work we propose the inclusion of data-mining classification strategies in order to create a knowledge database of the segmented curves. These classifiers allow that, after some time of use and of manually marking parts of borehole images that correspond to tension regions and breakout areas, the system will automatically indicate and suggest new candidate regions with higher accuracy. We suggest the use of different classifier methods in order to obtain different knowledge dataset configurations.

Keywords: Brazil, classifiers, data-mining, image segmentation, oil well visualization.

240 Influence of Noise on the Inference of Dynamic Bayesian Networks from Short Time Series

Authors: Frank Emmert Streib, Matthias Dehmer, Gökhan H. Bakır, Max Mühlhauser

Abstract:

In this paper we investigate the influence of external noise on the inference of network structures. The purpose of our simulations is to gain insights into the experimental design of microarray experiments in order to infer, e.g., transcription regulatory networks from microarray data. Here external noise means that the dynamics of the system under investigation, e.g., temporal changes of mRNA concentration, is affected by measurement errors. In addition to external noise, another problem occurs in the context of microarray experiments: in practice, it is not possible to monitor the mRNA concentration over an arbitrarily long time period as demanded by the statistical methods used to learn the underlying network structure. For this reason, we use only short time series to make our simulations more biologically plausible.

Keywords: Dynamic Bayesian networks, structure learning, gene networks, Markov chain Monte Carlo, microarray data.

239 Mathematical Modeling of Gas Turbine Blade Cooling

Authors: A. Pashayev, C. Ardil, D. Askerov, R. Sadiqov, A. Samedov

Abstract:

In contrast to existing methods, which do not take multiconnectivity in the broad sense of this term into account, we develop mathematical models and a highly effective combined (BIEM and FDM) numerical method for calculating the stationary and quasi-stationary temperature field of the profile part of a blade with convective cooling (from the point of view of implementation on a PC). The theoretical substantiation of these methods is proved by appropriate theorems. For this, converging quadrature processes have been developed and error estimates in terms of A. Zygmund continuity moduli have been obtained. For visualization of the profiles, the method of least squares with automatic conjecture, device splines, smooth replenishment and neural nets are used. Boundary conditions of heat exchange are determined from the solution of the corresponding integral equations and from empirical relationships. The reliability of the designed methods is proved by computational and experimental investigations of the heat and hydraulic characteristics of the gas turbine first stage nozzle blade.

Keywords: Mathematical Modeling, Gas Turbine Blade Cooling, Neural Networks, BIEM and FDM.

238 The Contribution of Edgeworth, Bootstrap and Monte Carlo Methods in Financial Data

Authors: Edlira Donefski, Tina Donefski, Lorenc Ekonomi

Abstract:

Edgeworth approximation, bootstrap and Monte Carlo simulations have a considerable impact on achieving certain results related to the different problems under study. In our paper, we treat a financial case concerning the effect that the components of the cash flow of one of the most successful businesses in the world, namely the financing activity, operating activity and investing activity, have on the cash and cash equivalents at the end of the three-month period. To obtain a better view of this case, we create a Vector Autoregression model, and then generate the impulse responses in terms of asymptotic analysis (Edgeworth approximation), Monte Carlo simulations and residual bootstrap based on the standard errors of every series created. The generated results show common tendencies for the three methods applied, which consequently verified the advantage of the three methods in the optimization of the model that contains many variants.

Keywords: Autoregression, Bootstrap, Edgeworth Expansion, Monte Carlo Method.
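For readers unfamiliar with the residual bootstrap used above, a minimal sketch on synthetic data (a bivariate VAR(1) estimated by least squares, not the authors' cash-flow model) might look as follows:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a bivariate VAR(1): y_t = A y_{t-1} + e_t (synthetic stand-in data)
A_true = np.array([[0.5, 0.1], [0.2, 0.4]])
T, k = 200, 2
y = np.zeros((T, k))
for t in range(1, T):
    y[t] = A_true @ y[t - 1] + rng.normal(scale=0.1, size=k)

def fit_var1(y):
    """Least-squares estimate of A in y_t = A y_{t-1} + e_t."""
    Y, X = y[1:], y[:-1]
    A = np.linalg.lstsq(X, Y, rcond=None)[0].T
    resid = Y - X @ A.T
    return A, resid

def irf(A, horizons=10):
    """Impulse responses of a VAR(1): Psi_h = A^h."""
    return np.stack([np.linalg.matrix_power(A, h) for h in range(horizons + 1)])

A_hat, resid = fit_var1(y)

# Residual bootstrap: resample residuals, rebuild the series, re-estimate, recompute IRFs
boot_irfs = []
for _ in range(500):
    e_star = resid[rng.integers(0, len(resid), size=len(resid))]
    y_star = np.zeros_like(y)
    y_star[0] = y[0]
    for t in range(1, T):
        y_star[t] = A_hat @ y_star[t - 1] + e_star[t - 1]
    A_star, _ = fit_var1(y_star)
    boot_irfs.append(irf(A_star))

lo, hi = np.percentile(np.array(boot_irfs), [2.5, 97.5], axis=0)
print("IRF point estimate at horizon 1:\n", irf(A_hat)[1])
print("95% bootstrap band at horizon 1:\n", lo[1], "\n", hi[1])
```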

237 Kernel Matching versus Inverse Probability Weighting: A Comparative Study

Authors: Andy Handouyahia, Tony Haddad, Frank Eaton

Abstract:

Recent quasi-experimental evaluation of the Canadian Active Labour Market Policies (ALMP) by Human Resources and Skills Development Canada (HRSDC) has provided an opportunity to examine alternative methods of estimating the incremental effects of Employment Benefits and Support Measures (EBSMs) on program participants. The focus of this paper is to assess the efficiency and robustness of inverse probability weighting (IPW) relative to kernel matching (KM) in the estimation of program effects. To accomplish this objective, the authors compare pairs of 1,080 estimates, along with their associated standard errors, to assess which type of estimate is generally more efficient and robust. In the interest of practicality, the authors also document the computational time it took to produce the IPW and KM estimates, respectively.

Keywords: Treatment effect, causal inference, observational studies, Propensity score based matching, Kernel Matching, Inverse Probability Weighting, Estimation methods for incremental effect.
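As a generic illustration of the IPW estimator compared in this paper, the following sketch estimates the average treatment effect on the treated with logistic propensity scores and a bootstrap standard error; the data are synthetic and the variable names hypothetical, not the EBSM evaluation data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Synthetic observational data (stand-in for the evaluation data)
n = 5000
X = rng.normal(size=(n, 3))                                   # covariates
p_true = 1 / (1 + np.exp(-(X[:, 0] - 0.5 * X[:, 1])))
D = rng.binomial(1, p_true)                                   # treatment indicator
Y = 2.0 * D + X @ np.array([1.0, 0.5, -0.3]) + rng.normal(size=n)  # outcome, true effect = 2

# Step 1: estimate propensity scores
ps = LogisticRegression(max_iter=1000).fit(X, D).predict_proba(X)[:, 1]

# Step 2: IPW estimate of the average treatment effect on the treated (ATT):
# treated units get weight 1, controls get weight ps / (1 - ps)
w = np.where(D == 1, 1.0, ps / (1 - ps))
att = np.average(Y[D == 1]) - np.average(Y[D == 0], weights=w[D == 0])
print(f"IPW ATT estimate: {att:.3f}")

# Step 3: bootstrap standard error (re-estimating the propensity model each draw)
boots = []
for _ in range(200):
    idx = rng.integers(0, n, n)
    Xb, Db, Yb = X[idx], D[idx], Y[idx]
    psb = LogisticRegression(max_iter=1000).fit(Xb, Db).predict_proba(Xb)[:, 1]
    wb = np.where(Db == 1, 1.0, psb / (1 - psb))
    boots.append(np.average(Yb[Db == 1]) - np.average(Yb[Db == 0], weights=wb[Db == 0]))
print(f"Bootstrap standard error: {np.std(boots):.3f}")
```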

236 Stackelberg Security Game for Optimizing Security of Federated Internet of Things Platform Instances

Authors: Violeta Damjanovic-Behrendt

Abstract:

This paper presents an approach for optimal cyber security decisions to protect instances of a federated Internet of Things (IoT) platform in the cloud. The presented solution implements the repeated Stackelberg Security Game (SSG) and a model called Stochastic Human behaviour model with AttRactiveness and Probability weighting (SHARP). SHARP employs the Subjective Utility Quantal Response (SUQR) for formulating a subjective utility function, which is based on the evaluations of alternative solutions during decision-making. We augment the repeated SSG (including SHARP and SUQR) with a reinforcement learning algorithm called Naïve Q-Learning. Naïve Q-Learning belongs to the category of active and model-free Machine Learning (ML) techniques in which the agent (either the defender or the attacker) attempts to find an optimal security solution. In this way, we combine game-theoretic (GT) and ML algorithms for discovering optimal cyber security policies. The proposed security optimization components will be validated in a collaborative cloud platform that is based on the Industrial Internet Reference Architecture (IIRA) and its recently published security model.

Keywords: Security, internet of things, cloud computing, Stackelberg security game, machine learning, Naïve Q-learning.
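A drastically simplified, stateless sketch of model-free Q-learning for a defender choosing which instance to protect is shown below; the attacker distribution, losses and learning parameters are made up, and the sketch does not implement SSG, SHARP or SUQR:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy repeated security game: the defender protects one of n_targets instances;
# the attacker strikes a target drawn from a fixed distribution unknown to the defender.
n_targets = 4
attack_probs = np.array([0.5, 0.2, 0.2, 0.1])   # hypothetical attacker behaviour
loss = np.array([10.0, 6.0, 4.0, 2.0])          # loss if a target is hit while unprotected

# Stateless ("naive") Q-learning over defender actions
Q = np.zeros(n_targets)
alpha, epsilon = 0.1, 0.1
for episode in range(20000):
    # epsilon-greedy action selection
    a = rng.integers(n_targets) if rng.random() < epsilon else int(np.argmax(Q))
    attacked = rng.choice(n_targets, p=attack_probs)
    reward = 0.0 if a == attacked else -loss[attacked]
    # Q-learning update (bandit-style formulation, no next state)
    Q[a] += alpha * (reward - Q[a])

print("Learned action values:", np.round(Q, 2))
print("Defender's learned policy: protect target", int(np.argmax(Q)))
```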

235 Automatic Inspection of Percussion Caps by Means of Combined 2D and 3D Machine Vision Techniques

Authors: A. Tellaeche, R. Arana, I. Maurtua

Abstract:

Exhaustive quality control is becoming more and more important when commercializing competitive products in the world's globalized market. Taking this as a given, it becomes critical in certain market sectors that must meet the tightest quality restrictions. One of these examples is the mass production of percussion caps, a critical element assembled in firearm ammunition. These elements, built in great quantities at very high speed, must achieve a minimum tolerance deviation in their fabrication, due to their vital importance in firing the piece of ammunition in which they are built. This paper outlines a machine vision development for the 100% inspection of percussion caps, obtaining data from simultaneous 2D and 3D images. The acquisition speed and precision of these images from a metallic reflective piece such as a percussion cap, the accuracy of the measurements taken from these images and the multiple fabrication errors detected constitute the main findings of this work.

Keywords: critical tolerance, high speed decision making, simultaneous 2D/3D machine vision.

234 New Technologies for Modeling of Gas Turbine Cooled Blades

Authors: A. Pashayev, D. Askerov, R. Sadiqov, A. Samedov, C. Ardil

Abstract:

In contrast to existing methods, which do not take multiconnectivity in the broad sense of this term into account, we develop mathematical models and a highly effective combined (BIEM and FDM) numerical method for calculating the stationary and quasi-stationary temperature field of the profile part of a blade with convective cooling (from the point of view of implementation on a PC). The theoretical substantiation of these methods is proved by appropriate theorems. For this, converging quadrature processes have been developed and error estimates in terms of A. Zygmund continuity moduli have been obtained. For visualization of the profiles, the method of least squares with automatic conjecture, device splines, smooth replenishment and neural nets are used. Boundary conditions of heat exchange are determined from the solution of the corresponding integral equations and from empirical relationships. The reliability of the designed methods is proved by computational and experimental investigations of the heat and hydraulic characteristics of the gas turbine 1st stage nozzle blade.

Keywords: multiconnected systems, method of the boundary integrated equations, splines, neural networks.

233 Numerical Modeling of Gas Turbine Engines

Authors: A. Pashayev, D. Askerov, C. Ardil, R. Sadiqov

Abstract:

In contrast to existing methods, which do not take multiconnectivity in the broad sense of this term into account, we develop mathematical models and a highly effective combined (BIEM and FDM) numerical method for calculating the stationary and quasi-stationary temperature field of the profile part of a blade with convective cooling (from the point of view of implementation on a PC). The theoretical substantiation of these methods is proved by appropriate theorems. For this, converging quadrature processes have been developed and error estimates in terms of A. Zygmund continuity moduli have been obtained. For visualization of the profiles, the method of least squares with automatic conjecture, device splines, smooth replenishment and neural nets are used. Boundary conditions of heat exchange are determined from the solution of the corresponding integral equations and from empirical relationships. The reliability of the designed methods is proved by computational and experimental investigations of the heat and hydraulic characteristics of the gas turbine first stage nozzle blade.

Keywords: Multiconnected systems, method of the boundary integrated equations, splines, neural networks.

232 Design and Analysis of Gauge R&R Studies: Making Decisions Based on ANOVA Method

Authors: Afrooz Moatari Kazerouni

Abstract:

In a competitive production environment, critical decisions are based on data obtained by random sampling of product units. The efficiency of these decisions depends on the quality of the data and on their reliability. This leads to the necessity of a reliable measurement system; the process of examining and analysing the errors of a measurement system is known as Measurement System Analysis (MSA). The aim of this research is to determine the necessity of, and support for, further development in analysing measurement systems, particularly through the use of Gage Repeatability and Reproducibility (GR&R) studies to improve physical measurements. Nowadays, repeatability and reproducibility gage studies are well established in manufacturing industries, but they are not applied as widely as other measurement system analysis methods. To introduce this method and provide feedback for improving measurement systems, this survey focuses on the ANOVA method as the most widespread way of calculating Repeatability and Reproducibility (R&R).

Keywords: Analysis of Variance (ANOVA), Measurement System Analysis (MSA), Part-Operator interaction effect, Repeatability and Reproducibility.
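For reference, the ANOVA-method conversion from mean squares to gauge R&R variance components for a balanced crossed (parts x operators) design can be sketched as follows; the mean-square values in the example are hypothetical:

```python
# Gauge R&R variance components from ANOVA mean squares (balanced crossed design).
# p = number of parts, o = number of operators, r = replicates per part-operator cell.

def gauge_rr_from_anova(ms_part, ms_operator, ms_interaction, ms_error, p, o, r):
    """Standard ANOVA-method estimators of the gauge R&R variance components."""
    repeatability = ms_error
    interaction = max((ms_interaction - ms_error) / r, 0.0)
    operator = max((ms_operator - ms_interaction) / (p * r), 0.0)
    part = max((ms_part - ms_interaction) / (o * r), 0.0)
    grr = repeatability + operator + interaction          # total gauge R&R variance
    total = grr + part
    return {
        "repeatability": repeatability,
        "reproducibility": operator + interaction,
        "gauge R&R": grr,
        "part-to-part": part,
        "%GRR (share of total variance)": 100.0 * grr / total,
    }

# Example with made-up mean squares: 10 parts, 3 operators, 2 replicates
print(gauge_rr_from_anova(ms_part=8.5, ms_operator=0.9, ms_interaction=0.3,
                          ms_error=0.2, p=10, o=3, r=2))
```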

231 Computational Evaluation of a C-A Heat Pump

Authors: Young-Jin Baik, Minsung Kim, Young-Soo Lee, Ki-Chang Chang, Seong-Ryong Park

Abstract:

The compression-absorption heat pump (C-A HP), one of the promising heat recovery technologies that produce process hot water from the low-temperature heat of wastewater, was evaluated by computer simulation. A simulation program was developed based on the continuity equation and the first and second laws of thermodynamics. Both the absorber and desorber were modeled using the UA-LMTD method. In order to prevent an unfeasible temperature profile and to reduce calculation errors arising from the curved temperature profile of a mixture, the heat loads were divided into many segments. A single-stage compressor was considered, and the compressor cooling load was also taken into account. The isentropic efficiency was computed from map data. Simulation conditions were set based on a system consisting of ordinarily designed components. The simulation results show that most of the total entropy generation occurs during the compression and cooling process, suggesting that system performance can be enhanced if a rectifier is introduced.

Keywords: Waste heat recovery, Heat Pump.

230 Discrete Polynomial Moments and Savitzky-Golay Smoothing

Authors: Paul O'Leary, Matthew Harker

Abstract:

This paper presents a unified theory for local (Savitzky-Golay) and global polynomial smoothing. The algebraic framework can represent any polynomial approximation and is seamless from low-degree local to high-degree global approximations. The representation of the smoothing operator as a projection onto orthonormal basis functions enables the computation of: the covariance matrix for noise propagation through the filter; the noise gain; and the frequency response of the polynomial filters. A virtually perfect Gram polynomial basis is synthesized, whereby polynomials of degree d = 1000 can be synthesized without significant errors. The perfect basis ensures that the filters are strictly polynomial preserving. Given n points and a support length l_s = 2m + 1, the smoothing operator is strictly linear phase for the points x_i, i = m+1, ..., n-m. The method is demonstrated on geometric surface data lying on an invariant 2D lattice.

Keywords: Gram polynomials, Savitzky-Golay Smoothing, Discrete Polynomial Moments
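A minimal numerical sketch of the projection view of polynomial smoothing is given below; the orthonormal basis is obtained here by QR factorization at a modest degree, not by the paper's high-degree Gram-polynomial synthesis:

```python
import numpy as np

def poly_smoothing_operator(n, degree):
    """Return P = B B^T, the projector onto polynomials of the given degree
    sampled on n equispaced points, with B an orthonormal basis."""
    x = np.linspace(-1.0, 1.0, n)
    V = np.vander(x, degree + 1, increasing=True)   # monomial basis
    B, _ = np.linalg.qr(V)                          # orthonormalize the columns
    return B @ B.T

n, degree = 101, 7
x = np.linspace(-1.0, 1.0, n)
rng = np.random.default_rng(3)
y = np.sin(2.5 * x) + 0.1 * rng.normal(size=n)      # noisy test signal

P = poly_smoothing_operator(n, degree)
y_smooth = P @ y

# Pointwise noise gain of the filter: diagonal of P (P is symmetric and idempotent)
noise_gain = np.diag(P)
print("max pointwise noise gain:", noise_gain.max())
print("residual RMS:", np.sqrt(np.mean((y - y_smooth) ** 2)))
```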

229 DRE - A Quality Metric for Component based Software Products

Authors: K. S. Jasmine, R. Vasantha

Abstract:

The overriding goal of software engineering is to provide a high quality system, application or product. To achieve this goal, software engineers must apply effective methods coupled with modern tools within the context of a mature software process [2]. In addition, it must be ensured that high quality is actually realized. Although many quality measures can be collected at the project level, the most important measures are errors and defects. Deriving a quality measure for reusable components has proven to be a challenging task nowadays. The results obtained from the study are based on empirical evidence of reuse practices, as it emerged from the analysis of industrial projects. Both large and small companies, working in a variety of business domains and using object-oriented and procedural development approaches, contributed to this study. This paper proposes a quality metric that provides benefit at both the project and process level, namely defect removal efficiency (DRE).

Keywords: Software Reuse, Defect density, Reuse metrics, Defect Removal efficiency.
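The DRE metric itself is a one-line computation; a small sketch under the usual definition (errors found before delivery versus defects found after delivery) is:

```python
def defect_removal_efficiency(errors_found_before_release: int,
                              defects_found_after_release: int) -> float:
    """DRE = E / (E + D), where E is the number of errors found before delivery
    and D is the number of defects found after delivery."""
    e, d = errors_found_before_release, defects_found_after_release
    return e / (e + d) if (e + d) > 0 else 1.0

# Example: 120 errors removed during the project, 8 defects reported by users
print(f"DRE = {defect_removal_efficiency(120, 8):.2%}")   # -> 93.75%
```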

228 Supervisory Fuzzy Learning Control for Underwater Target Tracking

Authors: C. Kia, M. R. Arshad, A. H. Adom, P. A. Wilson

Abstract:

This paper presents recent work on improving the robotic vision-based control strategy for an underwater pipeline tracking system. The study focuses on developing image processing algorithms and a fuzzy inference system for the analysis of the terrain. The main goal is to implement a supervisory fuzzy learning control technique to reduce navigation decision errors caused by the pipeline occlusion problem. The system developed is capable of interpreting underwater images containing occluded pipeline, seabed and other unwanted noise. The algorithm proposed in previous work does not exploit the cooperation between fuzzy controllers, knowledge and learnt data to improve the outputs for underwater pipeline tracking. Computer simulations and prototype simulations demonstrate the effectiveness of this approach, and the accuracy level of the system is also discussed.

Keywords: Fuzzy logic, Underwater target tracking, Autonomous underwater vehicles, Artificial intelligence, Simulations, Robot navigation, Vision system.

227 Planar Tracking Control of an Underactuated Autonomous Underwater Vehicle

Authors: Santhakumar M., Asokan T.

Abstract:

This paper addresses the problem of trajectory tracking control of an underactuated autonomous underwater vehicle (AUV) in the horizontal plane. The underwater vehicle under consideration is not actuated in the sway direction, and the system matrices are not assumed to be diagonal and linear, as often found in the literature. In addition, the effect of constant bias of environmental disturbances is considered. Using backstepping techniques and the tracking error dynamics, the system states are stabilized by forcing the tracking errors to an arbitrarily small neighborhood of zero. The effectiveness of the proposed control method is demonstrated through numerical simulations. Simulations are carried out for an experimental vehicle for smooth, inertial, two dimensional (2D) reference trajectories such as constant velocity trajectory (a circle maneuver – constant yaw rate), and time varying velocity trajectory (a sinusoidal path – sinusoidal yaw rate).

Keywords: autonomous underwater vehicle, system matrices, tracking control, time-varying feedback, underactuated control.

226 Dissolved Oxygen Prediction Using Support Vector Machine

Authors: Sorayya Malek, Mogeeb Mosleh, Sharifah M. Syed

Abstract:

In this study, the Support Vector Machine (SVM) technique was applied to predict the dichotomized value of dissolved oxygen (DO) in two freshwater lakes, namely Chini and Bera Lake (Malaysia). The data sample contained 11 water quality parameters from 2005 until 2009. All parameters were used to predict the dissolved oxygen concentration, which was dichotomized into three levels (High, Medium, and Low). The input parameters were ranked, and a forward selection method was applied to determine the optimum parameters that yield the lowest errors and highest accuracy. Initial results showed that pH, water temperature, and conductivity are the most important parameters that significantly affect the prediction of DO. An SVM model was then applied using the ANOVA kernel with those parameters, which yielded a 74% accuracy rate. We conclude that using SVM models to predict DO is feasible, and that using the dichotomized value of DO yields higher prediction accuracy than using the precise DO value.

Keywords: Dissolved oxygen, water quality, DO prediction, Support Vector Machine.
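As a rough illustration of the workflow (forward feature selection followed by an SVM), the sketch below uses synthetic data and an RBF kernel as a stand-in, since scikit-learn has no built-in ANOVA kernel:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for the lake data: 11 water-quality features, 3 DO classes
X, y = make_classification(n_samples=600, n_features=11, n_informative=4,
                           n_classes=3, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Forward selection of the most informative features, then an SVM classifier
svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
selector = SequentialFeatureSelector(svm, n_features_to_select=3,
                                     direction="forward", cv=5)
selector.fit(X_tr, y_tr)
selected = np.flatnonzero(selector.get_support())
print("Selected feature indices:", selected)

svm.fit(X_tr[:, selected], y_tr)
print("Hold-out accuracy:", round(svm.score(X_te[:, selected], y_te), 3))
```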

225 Determinants of Profitability in Indian Pharmaceutical Firms in the New Intellectual Property Rights Regime

Authors: Shilpi Tyagi, D. K. Nauriyal

Abstract:

This study investigates the firm-level determinants of profitability of the Indian drug and pharmaceutical industry. The study uses inflation-adjusted panel data for the period 2000-2013 and applies an OLS regression model with Driscoll-Kraay standard errors. It has been found that export intensity, A&M intensity, the firm's market power and the stronger patent regime dummy exercise a positive influence on profitability. The negative and statistically significant influence of R&D intensity and raw material import intensity points to the need for firms to adopt suitable investment strategies. The study suggests that firms need to pay far more attention to optimizing their operating expenditures and advertisement and marketing expenditures, and to improving their export orientation, as part of their long-term strategy.

Keywords: Indian drug and pharmaceutical industry, trade related intellectual property rights, research and development, food and drug administration.

224 Real-Time Image Encryption Using a 3D Discrete Dual Chaotic Cipher

Authors: M. F. Haroun, T. A. Gulliver

Abstract:

In this paper, an encryption algorithm is proposed for real-time image encryption. The scheme employs a dual chaotic generator based on a three dimensional (3D) discrete Lorenz attractor. Encryption is achieved using non-autonomous modulation where the data is injected into the dynamics of the master chaotic generator. The second generator is used to permute the dynamics of the master generator using the same approach. Since the data stream can be regarded as a random source, the resulting permutations of the generator dynamics greatly increase the security of the transmitted signal. In addition, a technique is proposed to mitigate the error propagation due to the finite precision arithmetic of digital hardware. In particular, truncation and rounding errors are eliminated by employing an integer representation of the data which can easily be implemented. The simple hardware architecture of the algorithm makes it suitable for secure real-time applications.

Keywords: Chaotic systems, image encryption, 3D Lorenz attractor, non-autonomous modulation, FPGA.
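A generic keystream-XOR sketch built on a forward-Euler discretization of the Lorenz system is shown below for illustration; it does not reproduce the authors' non-autonomous modulation or dual-generator permutation scheme, and the quantization step only hints at the integer-representation idea:

```python
import numpy as np

def lorenz_keystream(length, key=(0.1, 0.0, 20.0), sigma=10.0, rho=28.0,
                     beta=8.0 / 3.0, dt=0.01, skip=1000):
    """Generate a byte keystream from a forward-Euler discretization of the
    Lorenz system, seeded by the initial state (the 'key')."""
    x, y, z = key
    # discard a transient so the trajectory settles onto the attractor
    for _ in range(skip):
        x, y, z = (x + dt * sigma * (y - x),
                   y + dt * (x * (rho - z) - y),
                   z + dt * (x * y - beta * z))
    out = np.empty(length, dtype=np.uint8)
    for i in range(length):
        x, y, z = (x + dt * sigma * (y - x),
                   y + dt * (x * (rho - z) - y),
                   z + dt * (x * y - beta * z))
        # quantize the state to a byte (the paper advocates full integer
        # arithmetic; this float-then-quantize step is only illustrative)
        out[i] = int(abs(x + y + z) * 1e6) % 256
    return out

# Encrypt/decrypt a small "image" (random bytes as a stand-in) by XOR with the keystream
rng = np.random.default_rng(4)
image = rng.integers(0, 256, size=64 * 64, dtype=np.uint8)
ks = lorenz_keystream(image.size)
cipher = image ^ ks
recovered = cipher ^ lorenz_keystream(image.size)   # same key regenerates the keystream
print("Decryption exact:", np.array_equal(image, recovered))
```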

223 Increase of Organization in Complex Systems

Authors: Georgi Yordanov Georgiev, Michael Daly, Erin Gombos, Amrit Vinod, Gajinder Hoonjan

Abstract:

Measures of complexity and entropy have not converged to a single quantitative description of levels of organization of complex systems. The need for such a measure is increasingly pressing in all disciplines studying complex systems. To address this problem, starting from the most fundamental principle in physics, a new measure for the quantity of organization and rate of self-organization in complex systems, based on the principle of least (stationary) action, is applied here to a model system: the central processing unit (CPU) of computers. The quantity of organization for several generations of CPUs shows a double exponential rate of change of organization with time. The exact functional dependence has a fine, S-shaped structure, revealing some of the mechanisms of self-organization. The principle of least action helps to explain the mechanism of increase of organization through quantity accumulation and through constraint and curvature minimization with an attractor, the least average sum of actions of all elements and for all motions. This approach can help describe, quantify, measure, manage, design and predict the future behavior of complex systems, so as to achieve the highest rates of self-organization and improve their quality. It can be applied to other complex systems in physics, chemistry, biology, ecology, economics, cities, network theory and other fields where complex systems are present.

Keywords: Organization, self-organization, complex system, complexification, quantitative measure, principle of least action, principle of stationary action, attractor, progressive development, acceleration, stochastic.

222 Linear Quadratic Gaussian/Loop Transfer Recover Control Flight Control on a Nonlinear Model

Authors: T. Sanches, K. Bousson

Abstract:

As part of the development of a 4D autopilot system for unmanned aerial vehicles (UAVs), i.e. a time-dependent robust trajectory generation and control algorithm, this work addresses the problem of optimal path control based on flight sensor data output that may be unreliable due to noise in data acquisition and/or transmission under certain circumstances. Although several filtering methods, such as the Kalman-Bucy filter or Linear Quadratic Gaussian/Loop Transfer Recovery (LQG/LTR) control, are available, the utter complexity of the control system, together with the robustness and reliability required of such a system on a UAV for airworthiness-certifiable autonomous flight, required the development of a proper robust filter for a nonlinear system, as a way of further mitigating error propagation to the control system and improving its performance. As such, a nonlinear algorithm based upon the LQG/LTR, validated through computational simulation testing, is proposed in this paper.

Keywords: Autonomous flight, LQG/LTR, nonlinear state estimator, robust flight control and stability.

221 A Self Adaptive Genetic Based Algorithm for the Identification and Elimination of Bad Data

Authors: A. A. Hossam-Eldin, E. N. Abdallah, M. S. El-Nozahy

Abstract:

The identification and elimination of bad measurements is one of the basic functions of a robust state estimator, as bad data corrupt the results of state estimation obtained with the popular weighted least squares method. However, this is a difficult problem to handle, especially when dealing with multiple errors of the interacting, conforming type. In this paper, a self-adaptive genetic-based algorithm is proposed. The algorithm utilizes the results of the classical linearized normal residuals approach to tune the genetic operators; thus, instead of performing a randomized search throughout the whole search space, it performs a directed search, and the optimum solution is obtained at very early stages (a maximum of 5 generations). The algorithm utilizes accumulated databases of already computed cases to reduce the computational burden to a minimum. Tests are conducted with reference to the standard IEEE test systems, and the test results are very promising.

Keywords: Bad Data, Genetic Algorithms, Linearized Normal residuals, Observability, Power System State Estimation.

220 Multipurpose Cadastre, Essential for Urban Development Plans in Iran

Authors: Mehrshad Khalaj, Elham Lashkari

Abstract:

The majority of research conducted on Iranian urban development plans indicates that they have been largely unsuccessful in terms of drafting, execution and goal achievement. The lack or shortage of essential statistics and information can be listed as an important reason for the failure of these plans; the lack of figures and information has become an evident shortcoming of the country's statistics authorities. This problem has forced urban planners themselves to embark on physical surveys, including real estate and land pricing and population and economic censuses of the city. Apart from the problems this creates for urban developers, the possibility of errors is high in such surveys. In the present article, applying the interview technique, it is argued that utilizing a multipurpose cadastre system as a land information system is essential for urban development plans in Iran, as it can minimize or even remove the failures facing these plans.

Keywords: Multipurpose Cadastre, Urban Development Plan (UDP), Land Information System (LIS), Interview Technique.

219 Development of a Fiber based Interferometric Sensor for Non-contact Displacement Measurement

Authors: S. Pullteap

Abstract:

In this paper, a fiber-based Fabry-Perot interferometer is proposed and demonstrated for non-contact displacement measurement. A micro-prism attached to a mechanical vibrator serves as the target reflector. An interference signal is generated from the superposition of the sensing beam and the reference beam within the sensing arm of the fiber sensor. This signal is then converted to a displacement value, with a resolution of λ/8, by a purpose-developed program written in visual Cµ. A standard function generator is used to drive the vibrator. For a fixed excitation frequency of 100 Hz and excitation amplitudes varied over the range 0.1-3 V, the displacements measured by the fiber sensor range from 1.55 μm to 30.225 μm. A reference displacement sensor with a sensitivity of ~0.4 μm is also employed to compare the displacement errors of the two sensors. We found that, over the entire displacement range, maximum and average measurement errors of 0.977% and 0.44%, respectively, are obtained.

Keywords: Non-contact displacement measurement, extrinsic fiber-based Fabry-Perot interferometer, interference signal, zero-crossing fringe counting technique.

218 Control of Commutation of SR Motor Using Its Magnetic Characteristics and Back-of-Core Saturation Effects

Authors: Dr. N.H. Mvungi

Abstract:

The control of commutation of a switched reluctance (SR) motor has normally depended on a physical position detector. The physical rotor position sensor limits robustness and increases the size and inertia of the SR drive system. This paper describes a method to overcome these limitations by using the magnetization characteristics of the motor to indicate the rotor and stator teeth overlap status. The method uses active current probing pulses of the same magnitude as that used to simulate the flux linkage in the winding being probed. A microprocessor is used to process the magnetization data in order to deduce the rotor-stator teeth overlap status and hence the rotor position. However, back-of-core saturation and mutual coupling introduce overlap detection errors, and hence commutation control errors. This paper presents the concept of the detection scheme and the effects of back-of-core saturation.

Keywords: Microprocessor control, rotor position, sensorless, switched reluctance.

217 Nodal Load Profiles Estimation for Time Series Load Flow Using Independent Component Analysis

Authors: Mashitah Mohd Hussain, Salleh Serwan, Zuhaina Hj Zakaria

Abstract:

This paper presents a method to estimate the load profile from multiple power flow solutions for every minute of a 24-hour day. A method to calculate multiple solutions of the nonlinear profile is introduced. The Power System Simulation for Engineering (PSS®E) software and Python have been used to solve the load power flow. The results of these power flow solutions have been used to estimate the load profiles for each load bus using Independent Component Analysis (ICA), without any knowledge of the parameters or network topology of the system. The proposed algorithm is tested on the IEEE 69-bus test system, which represents the distribution part, and the ICA method has been programmed in MATLAB R2012b. Simulation results and estimation errors are discussed in this paper.

Keywords: Electrical Distribution System, Power Flow Solution, Distribution Network, Independent Component Analysis, Newton Raphson, Power System Simulation for Engineering.
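A minimal FastICA sketch on synthetic minute-resolution load mixtures (stand-in data, not the IEEE 69-bus results) illustrates the blind-separation step:

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(5)

# Synthetic stand-in: three "true" nodal load profiles over 1440 minutes (24 h)
t = np.arange(1440)
sources = np.column_stack([
    1.0 + 0.5 * np.sin(2 * np.pi * t / 1440),              # slowly varying base load
    np.where((t > 480) & (t < 600), 1.5, 0.5),              # morning-peak load
    0.8 + 0.2 * np.sign(np.sin(2 * np.pi * t / 180)),       # cyclic industrial load
])

# Observed bus measurements are unknown mixtures of the underlying profiles
A_mix = rng.uniform(0.2, 1.0, size=(3, 3))
observed = sources @ A_mix.T + 0.01 * rng.normal(size=(1440, 3))

# Blind separation with FastICA: no knowledge of A_mix or the network topology is used
ica = FastICA(n_components=3, random_state=0)
estimated = ica.fit_transform(observed)
print("estimated profile matrix shape:", estimated.shape)   # (1440, 3), up to scale/order
```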

216 Effects of Incident Angle and Distance on Visible Light Communication

Authors: Taegyoo Woo, Jong Kang Park, Jong Tae Kim

Abstract:

Visible Light Communication (VLC) provides wireless communication features in illumination systems. One of the key applications is to recognize the user location by means of indoor illuminators such as light emitting diodes. For localization of individual receivers in these systems, it is usually assumed that receivers and transmitters are placed in parallel. However, it is difficult to satisfy this assumption because receivers move randomly in real scenarios. It is therefore necessary to analyze the case when the transmitter is not placed perfectly parallel to the receiver, and it is important to identify how the optical gain changes with the tilt angles of the receivers and their distances from the illuminators. In this paper, we simulate the optical gain for various cases in which the tilt of the receiver and the distance change, and we identify the resulting patterns of optical gain as functions of the receiver tilt angle and distance. These results can help many VLC applications understand the extent of location errors with regard to the optical gains of the receivers and identify the root cause.

Keywords: Visible light communication, optical channel, indoor positioning, Lambertian radiation.
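For context, the standard Lambertian line-of-sight channel gain commonly used in such VLC analyses (not necessarily the paper's exact simulation model) can be computed as follows:

```python
import numpy as np

def lambertian_los_gain(distance_m, irradiance_deg, incidence_deg,
                        half_power_deg=60.0, detector_area_m2=1e-4, fov_deg=70.0):
    """Line-of-sight channel gain of a Lambertian LED source:
    H(0) = (m+1) * A / (2*pi*d^2) * cos(phi)^m * cos(psi), for psi within the FOV,
    with Lambertian order m = -ln(2) / ln(cos(half-power semi-angle))."""
    phi = np.radians(irradiance_deg)    # angle of irradiance at the LED
    psi = np.radians(incidence_deg)     # angle of incidence at the receiver
    if psi > np.radians(fov_deg):
        return 0.0                      # outside the receiver field of view
    m = -np.log(2) / np.log(np.cos(np.radians(half_power_deg)))
    return ((m + 1) * detector_area_m2 / (2 * np.pi * distance_m ** 2)
            * np.cos(phi) ** m * np.cos(psi))

# Gain drops as the receiver tilts away from the LED or moves farther from it
for tilt in (0, 20, 40, 60):
    print(f"tilt {tilt:2d} deg -> gain {lambertian_los_gain(2.0, 30.0, tilt):.3e}")
```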

215 Study of Remote Sensing and Satellite Images Ability in Preparing Agricultural Land Use Map (ALUM)

Authors: Ali Gholami

Abstract:

In this research, the preparation of a land use map from LISS-III scanner data of the IRS satellite for the Aghche region in Isfahan province is studied. For this purpose, the IRS satellite images of August 2008 were used, and the various land uses in the region, including rangelands, irrigated farming, dry farming, gardens and urban areas, were separated and identified. GPS and the Erdas Imagine software were used, and the three methods of Maximum Likelihood, Mahalanobis Distance and Minimum Distance were analyzed. For each of these methods, the error matrix and Kappa index were calculated, and accuracies of 53.13%, 56.64% and 48.44%, respectively, were obtained. Considering the low accuracy of these methods in separating the land uses, visual interpretation of the map was used instead. Finally, 150 regional check points were visited at random and no error was observed, which shows that the map prepared by visual interpretation has high accuracy. Although errors due to visual interpretation and geometric correction might occur, the map achieves the desired accuracy of more than 85 percent and is reliable.

Keywords: Land use map, Aghche region, Erdas Imagine, satellite images.

214 Assessment of Menus in a Selected Social Welfare Home with Regard to Nutritional Recommendations

Authors: E. Grochowska-Niedworok, K. Brukalo, B. Całyniuk, J. Piekorz, M. Kardas

Abstract:

The aim of the study was to assess the diets of residents of nursing homes. Ten-day menus provided by a social welfare home were entered into the computer program Diet 5 and analyzed with respect to protein, fats, carbohydrates, energy, vitamin D and calcium. The resulting mean values of the 10-day menus were compared with the existing Nutrition Standards for the Polish population. The analysis of the menus showed that the average amount of energy supplied by the food is not sufficient. The carbohydrate supply is too high, representing 257% of the norm. The average supply of fats and proteins is adequate, at 85.2 g/day and 75.2 g/day, respectively. The calcium content of the diet is 513.9 mg/day, and the amount of vitamin D supplied in the age group 51-65 years is 2.3 µg/day. The dietary errors that have been shown are due to the lack of detailed nutritional guidelines for nursing homes, as well as for state-owned care facilities in general.

Keywords: Assessment of diet, essential nutrients, social welfare home, nutrition.

213 A Study of Adaptive Fault Detection Method for GNSS Applications

Authors: Je Young Lee, Hee Sung Kim, Kwang Ho Choi, Joonhoo Lim, Sebum Chun, Hyung Keun Lee

Abstract:

The purpose of this study is to develop an efficient fault detection method for Global Navigation Satellite System (GNSS) applications based on adaptive noise covariance estimation. Due to their dependence on radio frequency signals, GNSS measurements are dominated by systematic errors in the receiver's operating environment. In the proposed method, the pseudorange and carrier-phase measurement noise covariances are obtained at the time propagations and measurement updates of Carrier-Smoothed Code (CSC) filtering, respectively. The test statistics for fault detection are generated from the estimated measurement noise covariances. To evaluate the fault detection capability, intentional faults were added to the field-collected measurements. The experimental results show that the proposed method is efficient in detecting unhealthy measurements and improves GNSS positioning accuracy in the presence of faults.

Keywords: Adaptive estimation, fault detection, GNSS, residual.
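A much-simplified sketch of carrier-smoothed code (Hatch) filtering with a normalized-residual fault test is given below; the noise levels, window length and threshold are illustrative, and the paper's adaptive covariance estimation is more elaborate:

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic single-satellite data: true range plus code noise and precise carrier-derived range
n = 600
true_range = 2.0e7 + 0.5 * np.arange(n)                 # metres, slowly increasing
code = true_range + rng.normal(scale=1.0, size=n)       # pseudorange (noisy)
carrier = true_range + rng.normal(scale=0.01, size=n)   # carrier-derived range (precise)
code[400] += 15.0                                       # inject a fault at epoch 400

# Hatch filter: carrier-smoothed code with window length M
M = 100
smoothed = np.empty(n)
smoothed[0] = code[0]
for k in range(1, n):
    w = min(k + 1, M)
    smoothed[k] = code[k] / w + (w - 1) / w * (smoothed[k - 1] + carrier[k] - carrier[k - 1])

# Fault detection: normalized innovation against a running estimate of the code noise variance
innov = code - smoothed
sigma2 = np.array([np.var(innov[max(0, k - M):k + 1]) for k in range(n)])
sigma2 = np.maximum(sigma2, 1e-6)
test_stat = innov ** 2 / sigma2
threshold = 25.0                                        # roughly 5-sigma, illustrative choice
print("flagged epochs:", np.flatnonzero(test_stat > threshold))
```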

212 A Hybrid Gene Selection Technique Using Improved Mutual Information and Fisher Score for Cancer Classification Using Microarrays

Authors: M. Anidha, K. Premalatha

Abstract:

Feature selection is significant in order to perform effective classification in the area of cancer diagnosis. However, in microarray gene expression datasets, the large number of features compared to the number of samples makes the task of classification computationally very hard and prone to errors. In this paper, we present an innovative method for selecting highly informative gene subsets of gene expression data that effectively classify the cancer data into tumorous and non-tumorous. The hybrid gene selection technique combines Mutual Information and the Fisher score to select informative genes. The gene selection is validated by classification using a Support Vector Machine (SVM), a supervised learning algorithm capable of solving complex classification problems. The improved Mutual Information and Fisher score combined with an SVM classifier produce efficient results.

Keywords: Gene selection, mutual information, Fisher score, classification, SVM.
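A rough sketch of the hybrid ranking idea on synthetic microarray-like data is shown below, using scikit-learn's ANOVA F statistic as a proxy for the Fisher score:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import f_classif, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Synthetic stand-in for microarray data: many features, few samples, two classes
X, y = make_classification(n_samples=80, n_features=2000, n_informative=20,
                           n_redundant=0, random_state=0)

# Rank features by mutual information and by the ANOVA F statistic
# (a proxy for the Fisher score), then combine the two rankings.
mi = mutual_info_classif(X, y, random_state=0)
f_stat, _ = f_classif(X, y)

def to_rank(scores):
    """Higher score -> better (smaller) rank."""
    order = np.argsort(-scores)
    ranks = np.empty_like(order)
    ranks[order] = np.arange(len(scores))
    return ranks

combined = to_rank(mi) + to_rank(f_stat)        # simple rank aggregation
top_genes = np.argsort(combined)[:30]           # keep the 30 best-ranked features

# Validate the selected subset with an SVM classifier
acc = cross_val_score(SVC(kernel="linear", C=1.0), X[:, top_genes], y, cv=5)
print("selected genes:", top_genes[:10], "...")
print("cross-validated accuracy:", round(acc.mean(), 3))
```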
