Search results for: supplementary variable technique
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3827

3437 Optical Flow Technique for Supersonic Jet Measurements

Authors: H. D. Lim, Jie Wu, T. H. New, Shengxian Shi

Abstract:

This paper outlines the development of an experimental technique for quantifying supersonic jet flows, in an attempt to avoid the seeding particle problems frequently associated with particle-image velocimetry (PIV) techniques at high Mach numbers. Based on optical flow algorithms, the idea behind the technique is to use high-speed cameras to capture Schlieren images of the supersonic jet shear layers, before subjecting them to an adapted optical flow algorithm based on the Horn-Schunck method to determine the associated flow fields. The proposed method is capable of offering full-field unsteady flow information with potentially higher accuracy and resolution than existing point-measurement or PIV techniques. A preliminary study via numerical simulations of a circular de Laval jet nozzle successfully reveals the flow and shock structures typically associated with supersonic jet flows, which serve as useful data for subsequent validation of the optical flow based experimental results. For the experimental technique, a Z-type Schlieren setup is proposed with the supersonic jet operated in cold mode at a stagnation pressure of 4 bar and an exit Mach number of 1.5. High-speed single-frame or double-frame cameras are used to capture successive Schlieren images. As the application of the optical flow technique to supersonic flows remains rare, the current focus revolves around methodology validation through synthetic images. The results of the validation test offer valuable insight into how the optical flow algorithm can be further refined to improve robustness and accuracy. Despite these challenges, this supersonic flow measurement technique may potentially offer a simpler way to identify and quantify the fine spatial structures within the shock shear layer.

Keywords: Schlieren, optical flow, supersonic jets, shock shear layer.
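A minimal Python sketch of the classical Horn-Schunck iteration referenced in this abstract, run on a synthetic image pair with assumed parameter values; it illustrates the basic scheme only and is not the authors' adapted algorithm for Schlieren data:

```python
import numpy as np
from scipy.ndimage import convolve

def horn_schunck(im1, im2, alpha=1.0, n_iter=200):
    """Basic Horn-Schunck optical flow between two grayscale frames."""
    im1 = im1.astype(np.float64)
    im2 = im2.astype(np.float64)

    # Spatial and temporal image gradients (simple averaged differences).
    kx = np.array([[-1.0, 1.0], [-1.0, 1.0]]) * 0.25
    ky = np.array([[-1.0, -1.0], [1.0, 1.0]]) * 0.25
    kt = np.ones((2, 2)) * 0.25
    Ix = convolve(im1, kx) + convolve(im2, kx)
    Iy = convolve(im1, ky) + convolve(im2, ky)
    It = convolve(im2, kt) - convolve(im1, kt)

    # Averaging kernel for the neighbourhood means of u and v.
    avg = np.array([[1/12, 1/6, 1/12],
                    [1/6,  0.0, 1/6],
                    [1/12, 1/6, 1/12]])

    u = np.zeros_like(im1)
    v = np.zeros_like(im1)
    for _ in range(n_iter):
        u_bar = convolve(u, avg)
        v_bar = convolve(v, avg)
        # Update derived from the Horn-Schunck Euler-Lagrange equations.
        num = Ix * u_bar + Iy * v_bar + It
        den = alpha ** 2 + Ix ** 2 + Iy ** 2
        u = u_bar - Ix * num / den
        v = v_bar - Iy * num / den
    return u, v

# Synthetic example: a bright blob shifted by one pixel between frames.
frame1 = np.zeros((64, 64))
frame1[20:30, 20:30] = 1.0
frame2 = np.roll(frame1, shift=1, axis=1)
u, v = horn_schunck(frame1, frame2, alpha=1.0, n_iter=200)
print("mean horizontal flow component in the blob region:", u[20:30, 20:30].mean())
```

The regularization weight alpha trades data fidelity against smoothness of the recovered field; in practice it is tuned to the noise level of the Schlieren images.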

3436 Integration of Acceleration Feedback Control with Automatic Generation Control in Intelligent Load Frequency Control

Authors: H. Zainuddin, F. Hanafi, M. H. Hairi, A. Aman, M.H.N. Talib

Abstract:

This paper investigates the effects of knowledge-based acceleration feedback control integrated with Automatic Generation Control (AGC) to enhance the quality of frequency control of the governing system. The Intelligent Acceleration Feedback Controller (IAFC) is proposed to counter the over- and under-frequency occurrences caused by major load changes in the power system network, so that generator tripping and load shedding operations can be reduced. Meanwhile, the integration of IAFC with AGC, a well-known Load Frequency Control (LFC) scheme, is essential to ensure the system frequency is restored to the nominal value. Computer simulations of the frequency response of the governing system are used to optimize the parameters of the IAFC. As a result, there is a substantial improvement in the LFC of the governing system that employs the proposed control strategy.

Keywords: Knowledge-based Supplementary Control, Acceleration Feedback, Load Frequency Control, Automatic Generation Control.

3435 Pattern Recognition of Partial Discharge by Using Simplified Fuzzy ARTMAP

Authors: S. Boonpoke, B. Marungsri

Abstract:

This paper presents the effectiveness of an artificial intelligence technique applied to pattern recognition and classification of Partial Discharge (PD). Characteristics of the PD signal for pattern recognition and classification are computed from the relation of the voltage phase angle, the discharge magnitude and the repeated occurrence of partial discharges using statistical and fractal methods. The simplified fuzzy ARTMAP (SFAM) is used as the artificial intelligence technique for pattern recognition and classification. PD quantities, namely 13 parameters obtained from the statistical and fractal methods, are input to the simplified fuzzy ARTMAP to train the system for pattern recognition and classification. The results confirm the effectiveness of the proposed technique.

Keywords: Partial discharges, PD pattern recognition, PD classification, Artificial intelligence, Simplified Fuzzy ARTMAP

3434 A Hybrid Metaheuristic Framework for Evolving the PROAFTN Classifier

Authors: Feras Al-Obeidat, Nabil Belacel, Juan A. Carretero, Prabhat Mahanti

Abstract:

In this paper, a new learning algorithm based on a hybrid metaheuristic integrating Differential Evolution (DE) and Reduced Variable Neighborhood Search (RVNS) is introduced to train the classification method PROAFTN. To apply PROAFTN, the values of several parameters need to be determined prior to classification; these parameters include the boundaries of intervals and the relative weights for each attribute. Based on these requirements, the hybrid approach, named DEPRO-RVNS, is presented in this study. A major problem when applying DE to some classification problems is the premature convergence of some individuals to local optima. To eliminate this shortcoming and to improve the exploration and exploitation capabilities of DE, such individuals are iteratively re-explored using RVNS. Based on the results generated on both training and testing data, it is shown that the performance of PROAFTN is significantly improved. Furthermore, the experimental study shows that DEPRO-RVNS outperforms well-known machine learning classifiers on a variety of problems.

Keywords: Knowledge Discovery, Differential Evolution, Reduced Variable Neighborhood Search, Multiple criteria classification, PROAFTN, Supervised Learning.
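A hedged sketch of the kind of hybrid described above: standard DE/rand/1/bin with stagnant individuals re-explored by a reduced variable neighborhood search of growing perturbation radius. A toy sphere objective stands in for the PROAFTN training criterion; this is not the authors' DEPRO-RVNS implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def objective(x):                       # toy stand-in for the PROAFTN fitness
    return np.sum((x - 0.5) ** 2)

def rvns(x, fx, radii=(0.05, 0.1, 0.2)):
    """Reduced Variable Neighborhood Search: random shaking with growing radius."""
    k = 0
    while k < len(radii):
        cand = np.clip(x + rng.normal(0.0, radii[k], size=x.size), 0.0, 1.0)
        fc = objective(cand)
        if fc < fx:
            x, fx, k = cand, fc, 0      # improvement: restart from smallest radius
        else:
            k += 1                      # no improvement: enlarge the neighbourhood
    return x, fx

def depro_like(dim=10, pop=20, gens=200, F=0.7, CR=0.9, stall=15):
    X = rng.random((pop, dim))
    fit = np.apply_along_axis(objective, 1, X)
    last_improve = np.zeros(pop, dtype=int)
    for g in range(gens):
        for i in range(pop):
            a, b, c = rng.choice([j for j in range(pop) if j != i], 3, replace=False)
            mutant = np.clip(X[a] + F * (X[b] - X[c]), 0.0, 1.0)
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True
            trial = np.where(cross, mutant, X[i])
            ft = objective(trial)
            if ft < fit[i]:
                X[i], fit[i], last_improve[i] = trial, ft, g
            elif g - last_improve[i] > stall:
                # premature-convergence guard: re-explore the stagnant individual
                X[i], fit[i] = rvns(X[i], fit[i])
                last_improve[i] = g
    best = int(np.argmin(fit))
    return X[best], fit[best]

print(depro_like())
```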

3433 Structural Damage Detection via Incomplete Modal Data Using Output Data Only

Authors: Ahmed Noor Al-Qayyim, Barlas Ozden Caglayan

Abstract:

Structural failure is caused mainly by damage that often occurs in structures. Many researchers focus on obtaining efficient tools to detect damage in structures at an early stage. Over the past decades, a subject that has received considerable attention in the literature is damage detection based on variations in the dynamic characteristics or response of structures. This study presents a new damage identification technique that detects the damage location for an incomplete structure system using output data only. The method indicates the damage from free vibration test data using the Two Points Condensation (TPC) technique, which creates a set of matrices by reducing the structural system to two-degree-of-freedom systems. The current stiffness matrices are obtained by optimizing the equation of motion using the measured test data and are compared with the original (undamaged) stiffness matrices; large percentage changes in the matrix coefficients point to the location of the damage. The TPC technique is applied to experimental data from a simply supported steel beam model structure after inducing a thickness change in one element, where two cases are considered. The method detects the damage and determines its location accurately in both cases. In addition, the results illustrate that these changes in the stiffness matrix can be a useful tool for continuous monitoring of structural safety using ambient vibration data. Furthermore, its efficiency shows that this technique can also be used for large structures.

Keywords: Damage detection, two points–condensation, structural health monitoring, signals processing, optimization.
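A loose NumPy illustration of the comparison step described above, assuming the condensed 2x2 stiffness matrices for a few measurement pairs have already been identified from reference and current data (the condensation and optimization stages are not shown, and the numbers are hypothetical):

```python
import numpy as np

def damage_indicator(K_ref, K_cur):
    """Percentage change of condensed stiffness coefficients for one pair."""
    return 100.0 * np.abs(K_cur - K_ref) / np.abs(K_ref)

# Hypothetical condensed 2x2 stiffness matrices for three measurement pairs.
K_ref = {1: np.array([[2.00e6, -1.00e6], [-1.00e6, 2.00e6]]),
         2: np.array([[2.00e6, -1.00e6], [-1.00e6, 2.00e6]]),
         3: np.array([[2.00e6, -1.00e6], [-1.00e6, 2.00e6]])}
K_cur = {1: np.array([[1.98e6, -0.99e6], [-0.99e6, 1.97e6]]),
         2: np.array([[1.60e6, -0.82e6], [-0.82e6, 1.58e6]]),   # damaged region
         3: np.array([[1.99e6, -1.00e6], [-1.00e6, 1.98e6]])}

changes = {pair: damage_indicator(K_ref[pair], K_cur[pair]).max() for pair in K_ref}
suspect = max(changes, key=changes.get)
for pair, pct in changes.items():
    print(f"pair {pair}: max stiffness change {pct:.1f}%")
print("largest change (likely damage location): pair", suspect)
```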

3432 Evaluation of Transfer Capability Considering Uncertainties of System Operating Condition and System Cascading Collapse

Authors: N. A. Salim, M. M. Othman, I. Musirin, M. S. Serwan

Abstract:

Over the past few decades, the power system industry in many developing and developed countries has gone through a restructuring process, moving towards a deregulated power industry. This situation leads to competition among the generation and distribution companies to provide quality and efficient production of electric energy, which will reduce the price of electricity. Therefore, it is important to obtain an accurate value of the available transfer capability (ATC) and transmission reliability margin (TRM) in order to ensure effective power transfer between areas during the occurrence of uncertainties in the system. In this paper, the TRM and ATC are determined by taking into consideration the uncertainties of the system operating condition and system cascading collapse by applying the bootstrap technique. A case study of the IEEE RTS-79 is employed to verify the robustness of the proposed technique in the determination of TRM and ATC.

Keywords: Available transfer capability, bootstrap technique, cascading collapse, transmission reliability margin.
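A minimal sketch of the bootstrap step mentioned above, assuming a sample of ATC values has already been computed under randomly drawn operating conditions and cascading-collapse scenarios (the power-flow computations are not shown, and the data are made up). TRM is taken here as the gap between the nominal ATC and a lower bootstrap percentile:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical ATC samples (MW) from repeated transfer-capability evaluations
# under uncertain operating conditions and cascading-collapse scenarios.
atc_samples = rng.normal(loc=480.0, scale=35.0, size=200)

def bootstrap_percentile(data, q, n_boot=5000):
    """Bootstrap estimate of the q-th percentile of the ATC distribution."""
    stats = np.empty(n_boot)
    for b in range(n_boot):
        resample = rng.choice(data, size=data.size, replace=True)
        stats[b] = np.percentile(resample, q)
    return stats.mean()

atc_nominal = atc_samples.mean()                      # expected transfer capability
atc_secure = bootstrap_percentile(atc_samples, q=5)   # 5th-percentile ATC
trm = atc_nominal - atc_secure                        # margin held back for uncertainty
print(f"nominal ATC = {atc_nominal:.1f} MW, secure ATC = {atc_secure:.1f} MW, "
      f"TRM = {trm:.1f} MW")
```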

3431 An Efficient Technique for EMI Mitigation in Fluorescent Lamps using Frequency Modulation and Evolutionary Programming

Authors: V. Sekar, T. G. Palanivelu, B. Revathi

Abstract:

Electromagnetic interference (EMI) is one of the serious problems in most electrical and electronic appliances, including fluorescent lamps. The electronic ballast used to regulate the power flow through the lamp is the major cause of EMI; the interference arises from the high-frequency switching operation of the ballast. Earlier EMI mitigation techniques have been in practice, but they were not satisfactory because of hardware complexity in the circuit design, increased parasitic components, higher power consumption and so on. Most researchers have focused only on EMI mitigation without considering other constraints such as cost and effective operation of the equipment. In this paper, we propose a technique for EMI mitigation in fluorescent lamps by integrating Frequency Modulation and Evolutionary Programming. With the Frequency Modulation technique, switching at a single central frequency is extended to a range of frequencies, so the power is distributed throughout that range, leading to EMI mitigation. However, in order to meet the operating frequency of the ballast and the operating power of the fluorescent lamps, an optimal modulation index is necessary for Frequency Modulation. The optimal modulation index is determined using Evolutionary Programming. Thereby, the proposed technique mitigates the EMI to a satisfactory level without disturbing the operation of the fluorescent lamp.

Keywords: Ballast, Electromagnetic interference (EMI), EMI mitigation, Evolutionary programming (EP), Fluorescent lamp, Frequency Modulation (FM), Modulation index.
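To make the spreading idea concrete, the sketch below compares the peak spectral magnitude of a fixed-frequency switching waveform with a frequency-modulated one; the ballast model, component values and the evolutionary-programming search are not reproduced here (a crude grid scan over the modulation index is used instead), so this is purely illustrative:

```python
import numpy as np

fs = 10e6                        # sampling rate [Hz]
t = np.arange(0, 0.02, 1 / fs)   # 20 ms analysis window
f_sw = 50e3                      # central switching frequency [Hz] (assumed)
f_mod = 500.0                    # modulation (sweep) frequency [Hz] (assumed)

def switching_wave(mod_index):
    """Square switching waveform whose instantaneous frequency is FM-spread."""
    inst_freq = f_sw * (1.0 + mod_index * np.sin(2 * np.pi * f_mod * t))
    phase = 2 * np.pi * np.cumsum(inst_freq) / fs
    return np.sign(np.sin(phase))

def peak_spectrum(x):
    """Largest spectral component magnitude (a rough proxy for peak EMI)."""
    return np.abs(np.fft.rfft(x * np.hanning(x.size))).max()

ref = peak_spectrum(switching_wave(0.0))      # single-frequency switching
for m in (0.02, 0.05, 0.10):                  # crude scan instead of an EP search
    change_db = 20 * np.log10(peak_spectrum(switching_wave(m)) / ref)
    print(f"modulation index {m:.2f}: peak spectral component {change_db:.1f} dB "
          f"relative to fixed-frequency switching")
```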

3430 A Simple Constellation Precoding Technique over MIMO-OFDM Systems

Authors: Fuh-Hsin Hwang, Tsui-Tsai Lin, Chih-Wen Chan, Cheng-Yuan Chang

Abstract:

This paper studies the design of a simple constellation precoding for a multiple-input multiple-output orthogonal frequency division multiplexing (MIMO-OFDM) system over Rayleigh fading channels, where OFDM is used to keep the diversity replicas orthogonal and reduce ISI effects. A multi-user environment with K synchronous co-channel users is considered. The proposed scheme provides bandwidth-efficient transmission for individual users by increasing the system throughput. In comparison with existing coded MIMO-OFDM schemes, the precoding technique is designed with low implementation complexity while providing error performance comparable to the existing schemes. Analytic and simulation results are presented to demonstrate the achieved error performance.

Keywords: coded modulation, diversity technique, OFDM, MIMO, constellation precoding

3429 Matching Pursuit based Removal of Cardiac Pulse-Related Artifacts in EEG/fMRI

Authors: Rainer Schneider, Stephan Lau, Levin Kuhlmann, Simon Vogrin, Maciej Gratkowski, Mark Cook, Jens Haueisen

Abstract:

Cardiac pulse-related artifacts in the EEG recorded simultaneously with fMRI are complex and highly variable. Their effective removal is an unsolved problem. Our aim is to develop an adaptive removal algorithm based on the matching pursuit (MP) technique and to compare it to established methods using a visual evoked potential (VEP). We recorded the VEP inside the static magnetic field of an MR scanner (with artifacts) as well as in an electrically shielded room (artifact free). The MP-based artifact removal outperformed average artifact subtraction (AAS) and optimal basis set removal (OBS) in terms of restoring the EEG field map topography of the VEP. Subsequently, a dipole model was fitted to the VEP under each condition using a realistic boundary element head model. The source location of the VEP recorded inside the MR scanner was closest to that of the artifact free VEP after cleaning with the MP-based algorithm as well as with AAS. While none of the tested algorithms offered complete removal, MP showed promising results due to its ability to adapt to variations of latency, frequency and amplitude of individual artifact occurrences while still utilizing a common template.

Keywords: matching pursuit, ballistocardiogram, artifact removal, EEG/fMRI.

3428 Artificial Neural Networks and Multi-Class Support Vector Machines for Classifying Magnetic Measurements in Tokamak Reactors

Authors: A. Greco, N. Mammone, F. C. Morabito, M. Versaci

Abstract:

This paper is mainly concerned with the application of a novel technique of data interpretation for classifying measurements of plasma columns in Tokamak reactors for nuclear fusion applications. The proposed method exploits several concepts derived from soft computing theory. In particular, Artificial Neural Networks and Multi-Class Support Vector Machines have been exploited to classify magnetic variables useful to determine shape and position of the plasma with a reduced computational complexity. The proposed technique is used to analyze simulated databases of plasma equilibria based on ITER geometry configuration. As well as demonstrating the successful recovery of scalar equilibrium parameters, we show that the technique can yield practical advantages compared with earlier methods.

Keywords: Tokamak, Classification, Artificial Neural Network, Support Vector Machines.

3427 Reduction of Leakage Power in Digital Logic Circuits Using Stacking Technique in 45 Nanometer Regime

Authors: P.K. Sharma, B. Bhargava, S. Akashe

Abstract:

Power dissipation due to leakage current in digital circuits is one of the biggest factors that must be considered when designing nanoscale circuits. This paper explores the idea of reducing leakage current in static CMOS circuits by stacking transistors in increasing numbers: stacking OFF transistors in large numbers results in a significant reduction in power dissipation, since the increase in the source voltage of the NMOS transistor minimizes the leakage current. Thus, the stacking technique yields circuits with minimal power dissipation losses due to leakage current. Several digital circuits, such as a full adder, a D flip-flop and a 6T SRAM cell, have been simulated in this paper with the leakage reduction technique applied, using the Cadence Virtuoso tool with Spectre at 45 nm technology and a supply voltage of 0.7 V.

Keywords: Stack, 6T SRAM cell, low power, threshold voltage
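The stack effect described above can be illustrated with a first-order subthreshold-current calculation (assumed 45 nm-class parameter values, DIBL included, body effect neglected; none of this is extracted from the authors' Cadence/Spectre setup): with two series OFF NMOS transistors, the intermediate node settles at the voltage where the two device currents balance, giving the upper device a negative gate-source voltage and a markedly lower leakage.

```python
import numpy as np

VT = 0.026     # thermal voltage [V]
n = 1.4        # subthreshold slope factor (assumed)
Vth = 0.35     # threshold voltage [V] (assumed 45 nm-class value)
eta = 0.1      # DIBL coefficient (assumed)
I0 = 1e-7      # process-dependent current prefactor [A] (assumed)
VDD = 0.7      # supply voltage [V]

def i_sub(vgs, vds):
    """First-order subthreshold leakage of an NMOS device, including DIBL."""
    return I0 * np.exp((vgs - Vth + eta * vds) / (n * VT)) * (1.0 - np.exp(-vds / VT))

# Single OFF transistor: gate at 0 V, the full supply across drain-source.
i_single = i_sub(0.0, VDD)

# Two-transistor stack, both gates at 0 V: bisect for the intermediate node
# voltage vx at which the top and bottom device currents balance.
def balance(vx):
    top = i_sub(-vx, VDD - vx)   # top device: its source sits at vx, so Vgs = -vx
    bottom = i_sub(0.0, vx)      # bottom device: source at ground
    return top - bottom

lo, hi = 0.0, VDD
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if balance(mid) > 0 else (lo, mid)
vx = 0.5 * (lo + hi)
i_stack = i_sub(0.0, vx)
print(f"intermediate node at ~{vx * 1000:.0f} mV; "
      f"stack leakage lower by a factor of ~{i_single / i_stack:.1f}")
```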

3426 The Comparison of Anchor and Star Schema from a Query Performance Perspective

Authors: Radek Němec

Abstract:

Today's business environment requires that companies have access to highly relevant information in a matter of seconds. Modern Business Intelligence tools rely on data structured mostly in traditional dimensional database schemas, typically represented by star schemas. Dimensional modeling is already recognized as a leading industry standard in the field of data warehousing, although several drawbacks and pitfalls have been reported. This paper analyzes another data warehouse modeling technique, anchor modeling, and its characteristics in comparison with the standard dimensional modeling technique from a query performance perspective. The results of the analysis report the performance of queries executed on database schemas structured according to the principles of each database modeling technique.

Keywords: Data warehousing, anchor modeling, star schema, anchor schema, query performance.

3425 Measuring the Effect of Intercollegiate Athletic Success on Private Giving and Enrollment

Authors: Jamie L. Stangel

Abstract:

Increased popularity and visibility of college athletics has contributed to an environment in which institutions—most of which lack self-sufficient athletics department budgets—reallocate monies from the university general fund and seek additional funding sources to keep up with increasing levels of spending on athletics. Given the prevalence of debates on student debt levels, coach salaries, and athlete pay, empirical evidence on whether this spending yields the expected return on investment is necessary. This study considered the relationship between the independent variable of winning percentage of the men's basketball team at a mid-major university, moderated by National Collegiate Athletic Association (NCAA) tournament appearance, and the number of applicants, number of enrollments, average SAT score of students, and donor giving to the university general and athletic funds. The results indicate that, apart from a small correlation between athletic success and the number of applicants when NCAA tournament appearance is used as a moderating variable, the purported benefits are not supported, suggesting the need for a reevaluation of athletic department spending and of perceptions about tangible and intangible benefits for universities.

Keywords: Athletic success, enrollment, NCAA, private giving.

3424 Internal Leakage Analysis from Pd to Pc Port Direction in ECV Body Used in External Variable Type A/C Compressor

Authors: Md. Iqbal Mahmud, Haeng Muk Cho, Seo Hyun Sang, Wang Wen Hai, Chang Heon Yi, Man Ik Hwang, Dae Hoon Kang

Abstract:

A solenoid-operated electromagnetic control valve (ECV) plays an important role in a car's air conditioning control system. The ECV is used in an external variable displacement swash plate type compressor and controls the entire air conditioning system by means of a pulse width modulation (PWM) input signal supplied from an external source (controller). A complete ECV contains a number of internal components, such as the valve body, core, valve guide, plunger, guide pin, plunger spring and bellows. While designing the ECV, the dimensions of the different internal components must meet standard requirements, which is quite challenging. This paper considers in particular the dimensioning of the ECV body and its three pressure ports through which the air/refrigerant passes. An internal leakage test analysis of the ECV body is carried out from its discharge port (Pd) to its crankcase port (Pc) with the valve guide placed inside it. The experiments were made in both ordinary and digital systems using different assumptions, and the results are then compared.

Keywords: Electromagnetic control valve (ECV), Leakage, Pressure port, Valve body, Valve guide.

3423 Pseudo-polynomial Motion Commands for Vibration Suppression of Belt-driven Rotary Platforms

Authors: Giovanni Incerti

Abstract:

The motion planning technique described in this paper has been developed to eliminate or reduce the residual vibrations of belt-driven rotary platforms, while maintaining unchanged the motion time and the total angular displacement of the platform. The proposed approach is based on a suitable choice of the motion command given to the servomotor that drives the mechanical device; this command is defined by some numerical coefficients which determine the shape of the displacement, velocity and acceleration profiles. Using a numerical optimization technique, these coefficients can be changed without altering the continuity conditions imposed on the displacement and its time derivatives at the initial and final time instants. The proposed technique can be easily and quickly implemented on an actual device, since it requires only a simple modification of the motion command profile mapped in the memory of the electronic motion controller.

Keywords: Command shaping, residual vibrations, belt transmission, servomechanism.

3422 A Dynamic RGB Intensity Based Steganography Scheme

Authors: Mandep Kaur, Surbhi Gupta, Parvinder S. Sandhu, Jagdeep Kaur

Abstract:

Steganography means covered writing; it involves the concealment of information within computer files [1]. In other words, it is secret communication that hides the existence of the message. In this paper, we refer to a cover image to indicate an image that does not yet contain a secret message, and to a stego image to indicate an image with an embedded secret message; the secret message itself is referred to as the stego-message or hidden message. We propose a technique called the RGB intensity based steganography model, as the RGB model is the technique used in this field to hide the data. The methods used here are based on the manipulation of the least significant bits of pixel values [3][4] or the rearrangement of colors to create least significant bit or parity bit patterns, which correspond to the message being hidden. The proposed technique attempts to overcome the problem of embedding in a sequential fashion, using a stego-key to select the pixels.

Keywords: Steganography, Stego Image, RGB Image, Cryptography, LSB.
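A minimal NumPy sketch of key-driven (non-sequential) LSB embedding of the kind the abstract alludes to; it is an illustrative scheme with made-up data, not the authors' dynamic RGB-intensity model:

```python
import numpy as np

def embed(cover, message, key):
    """Hide message bytes in the LSBs of pseudo-randomly selected channel values."""
    stego = cover.copy()
    flat = stego.reshape(-1)                       # flatten the H x W x 3 RGB values
    bits = np.unpackbits(np.frombuffer(message, dtype=np.uint8))
    rng = np.random.default_rng(key)               # the stego-key seeds pixel selection
    idx = rng.choice(flat.size, size=bits.size, replace=False)
    flat[idx] = (flat[idx] & 0xFE) | bits          # overwrite the least significant bit
    return stego

def extract(stego, n_bytes, key):
    """Recover the hidden bytes using the same stego-key."""
    flat = stego.reshape(-1)
    rng = np.random.default_rng(key)
    idx = rng.choice(flat.size, size=n_bytes * 8, replace=False)
    return np.packbits(flat[idx] & 1).tobytes()

cover = np.random.default_rng(1).integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
secret = b"hidden"
stego = embed(cover, secret, key=2024)
print(extract(stego, len(secret), key=2024))       # b'hidden'
print("channel values changed:", int(np.count_nonzero(cover != stego)))
```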

3421 A Comparison of Signal Processing Techniques for the Extraction of Breathing Rate from the Photoplethysmogram

Authors: Susannah G. Fleming, Lionel Tarassenko

Abstract:

The photoplethysmogram (PPG) is the pulsatile waveform produced by the pulse oximeter, which is widely used for monitoring arterial oxygen saturation in patients. Various methods for extracting the breathing rate from the PPG waveform have been compared using a consistent data set, and a novel technique using autoregressive modelling is presented. This novel technique is shown to outperform the existing techniques, with a mean error in breathing rate of 0.04 breaths per minute.

Keywords: Autoregressive modelling, breathing rate, photoplethysmogram, pulse oximetry.
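To make the autoregressive idea concrete, the sketch below fits an AR model by least squares to a synthetic respiratory-modulation signal and converts the dominant low-frequency pole angle into a breathing rate; it is a simplified illustration, not the authors' pole-selection procedure on real PPG recordings:

```python
import numpy as np

fs = 4.0                                   # resampled respiratory signal rate [Hz]
t = np.arange(0, 120, 1 / fs)              # two minutes of data
true_bpm = 15.0
# Synthetic respiratory-induced modulation extracted from a PPG, plus noise.
resp = (np.sin(2 * np.pi * (true_bpm / 60.0) * t)
        + 0.3 * np.random.default_rng(0).normal(size=t.size))

def ar_fit(x, order):
    """Least-squares fit of AR coefficients: x[n] = sum_k a_k x[n-k] + e[n]."""
    X = np.column_stack([x[order - k - 1:-k - 1] for k in range(order)])
    y = x[order:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    return a

order = 8
a = ar_fit(resp - resp.mean(), order)
# Poles of the AR transfer function 1 / (1 - a1 z^-1 - ... - ap z^-p).
poles = np.roots(np.concatenate(([1.0], -a)))
freqs = np.abs(np.angle(poles)) * fs / (2 * np.pi)
# Keep poles in a plausible breathing band (6 to 42 breaths/min) and pick the
# one closest to the unit circle.
band = (freqs > 0.1) & (freqs < 0.7)
best = int(np.argmax(np.abs(poles[band])))
print(f"estimated breathing rate: {freqs[band][best] * 60:.1f} breaths/min")
```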

3420 An Efficient Algorithm for Computing all Program Forward Static Slices

Authors: Jehad Al Dallal

Abstract:

Program slicing is the task of finding all statements in a program that directly or indirectly influence the value of a variable occurrence. The set of statements that can affect the value of a variable at some point in a program is called a program backward slice. In several software engineering applications, such as program debugging and measuring program cohesion and parallelism, several slices are computed at different program points. Existing algorithms for computing program slices are designed to compute a slice at a single program point: the program, or the model that represents the program, is traversed completely or partially once, and to compute more than one slice the same algorithm is applied at every point of interest, so the same program, or program representation, is traversed several times. In this paper, an algorithm is introduced to compute all forward static slices of a computer program by traversing the program representation graph once. The introduced algorithm is therefore useful for software engineering applications that require computing program slices at different points of a program. The program representation graph used in this paper is the Program Dependence Graph (PDG).

Keywords: Program slicing, static slicing, forward slicing, program dependence graph (PDG).
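A compact sketch of the underlying idea, assuming the PDG is already built as an adjacency map from each node to the nodes that are data- or control-dependent on it: a node's forward slice is the set of nodes reachable from it, and a memoised depth-first traversal computes every slice while visiting each edge once. This is an illustrative acyclic-PDG version, not the paper's algorithm verbatim.

```python
# Hypothetical PDG: an edge u -> v means statement v is data/control dependent on u.
pdg = {
    1: [2, 3],      # 1: x = input()
    2: [4],         # 2: y = x + 1
    3: [4, 5],      # 3: if x > 0:
    4: [5],         # 4:     y = y * 2
    5: [],          # 5: print(y)
}

def all_forward_slices(graph):
    """Compute the forward static slice of every node with one memoised traversal."""
    slices = {}

    def visit(node):
        if node in slices:                 # already computed: reuse it
            return slices[node]
        s = {node}
        for succ in graph[node]:
            s |= visit(succ)               # union of slices of dependent statements
        slices[node] = s
        return s

    for node in graph:
        visit(node)
    return slices

for stmt, sl in sorted(all_forward_slices(pdg).items()):
    print(f"forward slice of statement {stmt}: {sorted(sl)}")
```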

3419 Generalized Noise Analysis of Log Domain Static Translinear Circuits

Authors: E. Farshidi

Abstract:

This paper presents a new general technique for the analysis of noise in static log-domain translinear circuits. It is demonstrated that employing this technique leads to a general, simple and routine method of noise analysis. The circuit has been simulated with HSPICE. The simulation results are seen to conform to the theoretical analysis and show the benefits of the proposed circuit.

Keywords: Noise analysis, log-domain, static, dynamic, translinear loop, companding.

3418 The Effect of Energy Consumption and Losses on the Nigerian Manufacturing Sector: Evidence from the ARDL Approach

Authors: Okezie A. Ihugba

Abstract:

The bounds testing ARDL (2, 2, 2, 2, 0) approach to cointegration was used in this study to investigate the effect of energy consumption and energy loss on Nigeria's manufacturing sector from 1981 to 2020. The model was created to determine the relationship between these three variables while also accounting for interactions with control variables such as inflation and commercial bank loans to the manufacturing sector. When the dependent variables are energy consumption and energy loss, the bounds tests show that the variables of interest are cointegrated in the long run. Because electricity consumption is a critical factor in determining manufacturing value-added in Nigeria, some intriguing observations were made. According to the findings, the relationship between the log of electricity consumption (LELC) and the log of manufacturing value added (LMVA) is statistically significant, and electricity consumption reduces manufacturing value-added. The target variable (energy loss) is statistically significant and has a positive sign: in Nigeria, a 1% reduction in energy loss increases manufacturing value-added by 36% in the first lag and 35% in the second. According to the study, the government should speed up the ongoing renovation of existing power plants across the country, as well as the construction of new gas-fired power plants. This will address a number of issues, including the overpricing of electricity as a result of grid failure.

Keywords: ARDL, cointegration, Nigeria's manufacturing, electricity.

3417 Integrated Cultivation Technique for Microbial Lipid Production by Photosynthetic Microalgae and Locally Oleaginous Yeast

Authors: Mutiyaporn Puangbut, Ratanaporn Leesing

Abstract:

The objective of this research is to study microbial lipid production by local photosynthetic microalgae and oleaginous yeast via an integrated cultivation technique using CO2 emissions from yeast fermentation. A maximum specific growth rate of Chlorella sp. KKU-S2 of 0.284 (1/d) was obtained under integrated cultivation, and a maximum lipid yield of 1.339 g/L was found after cultivation for 5 days, while a lipid yield of 0.969 g/L was obtained after day 6 of cultivation using CO2 from air. High values of the volumetric lipid production rate (QP, 0.223 g/L/d), specific product yield (YP/X, 0.194) and volumetric cell mass production rate (QX, 1.153 g/L/d) were found using ambient air CO2 coupled with CO2 emissions from yeast fermentation. An overall lipid yield of 8.33 g/L was obtained (1.339 g/L from Chlorella sp. KKU-S2 and 7.06 g/L from T. maleeae Y30), while a low lipid yield of 0.969 g/L was found using the non-integrated cultivation technique. To our knowledge, this is the first report on lipid production from the local microalga Chlorella sp. KKU-S2 and the yeast T. maleeae Y30 in an integrated technique to improve the biomass and lipid yield by using CO2 emissions from yeast fermentation.

Keywords: Microbial lipid, Chlorella sp. KKU-S2, Torulaspora maleeae Y30, oleaginous yeast, biodiesel, CO2 emissions

3416 T-DOF PI Controller Design for a Speed Control of Induction Motor

Authors: Tianchai Suksri, Satean Tunyasrirut

Abstract:

This paper presents the design and implementation of a T-DOF PI controller for speed control of an induction motor. The voltage source inverter type space vector pulse width modulation technique is used in the drive system. This scheme makes it possible to adjust the speed of the motor by controlling the frequency and amplitude of the input voltage, with the ratio of input stator voltage to frequency kept constant. The T-DOF PI controller designed by the root locus technique is also introduced to the system to regulate and track the speed response. The experimental results in testing the 120 W induction motor from the no-load condition to the rated condition show the effectiveness of the proposed control scheme.

Keywords: PI controller, root locus technique, space vector pulse width modulation, induction motor.

3415 Solving the Set Covering Problem Using the Binary Cat Swarm Optimization Metaheuristic

Authors: Broderick Crawford, Ricardo Soto, Natalia Berrios, Eduardo Olguin

Abstract:

In this paper, we present a binary cat swarm optimization for solving the set covering problem. The set covering problem is a well-known NP-hard problem with many practical applications, including those involving scheduling, production planning and location problems. Binary cat swarm optimization is a recent swarm metaheuristic technique, operating over a discrete search space, based on the behavior of cats. Domestic cats show the ability to hunt and are curious about moving objects; accordingly, the cats have two modes of behavior: seeking mode and tracing mode. We illustrate this approach with 65 instances of the problem from the OR-Library. Moreover, we solve this problem with 40 new binarization techniques and select the one with the best results. Finally, we compare the results obtained in previous studies with the new binarization technique, that is, roulette wheel as the transfer function and V3 as the discretization technique.

Keywords: Binary cat swarm optimization, set covering problem, metaheuristic, binarization methods.

3414 The Effect of Board Composition and Ownership Concentration on Earnings Management: Evidence from IRAN

Authors: F. Rahnamay Roodposhti, S. A. Nabavi Chashmi

Abstract:

The role of corporate governance is to reduce the divergence of interests between shareholders and managers, and it is most useful when managers have an incentive to deviate from shareholders' interests. One example of management's deviation from shareholders' interests is the management of earnings through the use of accounting accruals. This paper examines the association between the internal corporate governance mechanisms of ownership concentration, board independence and the existence of CEO-Chairman duality, and earnings management; firm size and leverage are control variables. The population used in this study comprises firms listed on the Tehran Stock Exchange (TSE) between 2004 and 2008, and the sample comprises 196 firms. The panel data method is employed as the technique to estimate the model. We find a significant negative association between ownership concentration and earnings management, between board independence and earnings management, and between the existence of CEO-Chairman duality and earnings management. The study also finds a significant positive association between the control variables (firm size and leverage) and earnings management.

Keywords: Earnings management, board independence, ownership concentration, corporate governance.

3413 Characterization and Geochemical Modeling of Cu and Zn Sorption Using Mixed Mineral Systems Injected with Iron Sulfide under Sulfidic-Anoxic Conditions I: Case Study of Cwmheidol Mine Waste Water, Wales, United Kingdom

Authors: D. E. Egirani, J. E. Andrews, A. R. Baker

Abstract:

This study investigates the sorption of Cu and Zn contained in natural mine wastewater, using mixed mineral systems under sulfidic-anoxic conditions. The mine wastewater was obtained from disused mine workings at Cwmheidol in Wales, United Kingdom. These contaminants flow into watercourses, including the River Rheidol, in which fishing activities exist. In an attempt to reduce the Cu-Zn levels of fish intake in the watercourses, single mineral systems and 1:1 mixed mineral systems of clay and goethite were tested with the mine wastewater for copper and zinc removal at variable pH. Modelling of hydroxyl complexes was carried out using the PHREEQC method. Reactions were conducted in batch mode at room temperature. There were significant differences in the behaviour of copper and zinc removal using mixed mineral systems when compared to single mineral systems. All mixed mineral systems sorb more Cu than Zn when tested with mine wastewater.

Keywords: Cu-Zn, hydroxyl complexes, kinetics, mixed mineral systems, reactivity

3412 Data Envelopment Analysis under Uncertainty and Risk

Authors: P. Beraldi, M. E. Bruni

Abstract:

Data Envelopment Analysis (DEA) is one of the most widely used techniques for evaluating the relative efficiency of a set of homogeneous decision making units. Traditionally, it assumes that input and output variables are known in advance, ignoring the critical issue of data uncertainty. In this paper, we deal with the problem of efficiency evaluation under uncertain conditions by adopting the general framework of stochastic programming. We assume that output parameters are represented by discretely distributed random variables, and we propose two different models defined according to a risk-neutral and a risk-averse perspective. The models have been validated on a real case study concerning the evaluation of the technical efficiency of a sample of individual firms operating in the Italian leather manufacturing industry. Our findings show the validity of the proposed approach as an ex-ante evaluation technique by providing the decision maker with useful insights depending on his degree of risk aversion.

Keywords: DEA, Stochastic Programming, Ex-ante evaluation technique, Conditional Value at Risk.
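As background for the stochastic models discussed above, the deterministic building block is the input-oriented CCR envelopment linear program; a minimal SciPy sketch with made-up data follows. The paper's risk-neutral and risk-averse stochastic formulations, which treat outputs as discretely distributed random variables, are not reproduced here.

```python
import numpy as np
from scipy.optimize import linprog

# Made-up data: 5 DMUs, 2 inputs (rows of X), 1 output (row of Y).
X = np.array([[4.0, 7.0, 8.0, 4.0, 2.0],
              [3.0, 3.0, 1.0, 2.0, 4.0]])
Y = np.array([[1.0, 1.0, 1.0, 1.0, 1.0]])

def ccr_efficiency(j0):
    """Input-oriented CCR efficiency of DMU j0:
    min theta  s.t.  X @ lam <= theta * x0,  Y @ lam >= y0,  lam >= 0."""
    m, n_dmu = X.shape
    s = Y.shape[0]
    c = np.zeros(n_dmu + 1)                 # decision vector: [theta, lam_1..lam_n]
    c[0] = 1.0                              # minimise theta
    A_in = np.hstack([-X[:, [j0]], X])      # X @ lam - theta * x0 <= 0
    b_in = np.zeros(m)
    A_out = np.hstack([np.zeros((s, 1)), -Y])   # -Y @ lam <= -y0
    b_out = -Y[:, j0]
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([b_in, b_out]),
                  bounds=[(None, None)] + [(0, None)] * n_dmu,
                  method="highs")
    return res.x[0]

for j in range(X.shape[1]):
    print(f"DMU {j + 1}: efficiency = {ccr_efficiency(j):.3f}")
```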

3411 Pension Plan Member’s Investment Strategies with Transaction Cost and Couple Risky Assets Modelled by the O-U Process

Authors: Udeme O. Ini, Edikan E. Akpanibah

Abstract:

This paper studies the optimal investment strategies for a plan member (PM) in a defined contribution (DC) pension scheme with transaction costs, taxes on invested funds and two risky assets (stocks) under the Ornstein-Uhlenbeck (O-U) process. The PM's portfolio is assumed to consist of a risk-free asset and two risky assets, where the two risky assets are driven by the O-U process. The Legendre transformation and dual theory are used to transform the resultant optimal control problem, which is a nonlinear partial differential equation (PDE), into a linear PDE, and the resultant linear PDE is then solved for explicit solutions of the optimal investment strategies for a PM exhibiting constant absolute risk aversion (CARA) using the change of variable technique. Furthermore, theoretical analysis is used to study the influence of some sensitive parameters on the optimal investment strategies, with the observation that the optimal investment strategies for the two risky assets increase with an increase in the dividend and decrease with an increase in the tax on invested funds, the risk aversion coefficient, the initial fund size and the transaction cost.

Keywords: Ornstein-Uhlenbeck process, portfolio management, Legendre transforms, CARA utility.

3410 Automated Process Quality Monitoring with Prediction of Fault Condition Using Measurement Data

Authors: Hyun-Woo Cho

Abstract:

Detection of incipient abnormal events is important to improve the safety and reliability of machine operations and to reduce losses caused by failures. Improper set-up or alignment of parts often leads to severe problems in many machines. The construction of prediction models for predicting faulty conditions is therefore essential in making decisions on when to perform machine maintenance. This paper presents a multivariate calibration monitoring approach based on the statistical analysis of machine measurement data. The calibration model is used to predict two faulty conditions from historical reference data. This approach utilizes genetic algorithm (GA) based variable selection, and we evaluate the predictive performance of several prediction methods using real data. The results show that the calibration model based on supervised probabilistic principal component analysis (SPPCA) yielded the best performance in this work. By adopting a proper variable selection scheme in calibration models, the prediction performance can be improved by excluding non-informative variables from the model building steps.

Keywords: Prediction, operation monitoring, on-line data, nonlinear statistical methods, empirical model.

3409 Study of Tribological Behaviour of Al6061/Silicon Carbide/Graphite Hybrid Metal Matrix Composite Using Taguchi's Techniques

Authors: Mohamed Zakaulla, A. R. Anwar Khan

Abstract:

Hybrid composites consisting of an Al6061 alloy base matrix reinforced with silicon carbide particles (10 wt%) and graphite powder (1 wt%) have been fabricated by the liquid metallurgy route (stir casting technique) and optimized with respect to parameters such as applied load, sliding speed and sliding distance by the Taguchi method. A plan of experiments generated through the Taguchi technique was used to perform experiments based on an L27 orthogonal array. The developed ANOVA and regression equations are used to find the optimum coefficient of friction and wear under the influence of applied load, sliding speed and sliding distance. On the basis of "smaller the better", the dry sliding wear resistance was analysed, and finally confirmation tests were carried out to verify the experimental results.

Keywords: Analysis of variance, dry sliding wear, Hybrid composite, orthogonal array, Taguchi technique.
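As a worked illustration of the "smaller the better" criterion mentioned above, the signal-to-noise ratio is S/N = -10 log10((1/n) * sum(y_i^2)), and the level with the highest mean S/N is selected for each factor. The sketch below uses hypothetical wear values on a small L9-style layout rather than the paper's L27 array:

```python
import numpy as np

def sn_smaller_better(y):
    """Taguchi S/N ratio for a 'smaller the better' response (e.g. wear)."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y ** 2))

# Hypothetical layout: factor levels (load, speed, distance) and measured wear.
runs = [
    (1, 1, 1, [0.0042, 0.0045]), (1, 2, 2, [0.0051, 0.0050]), (1, 3, 3, [0.0063, 0.0060]),
    (2, 1, 2, [0.0048, 0.0047]), (2, 2, 3, [0.0058, 0.0061]), (2, 3, 1, [0.0052, 0.0054]),
    (3, 1, 3, [0.0066, 0.0068]), (3, 2, 1, [0.0055, 0.0057]), (3, 3, 2, [0.0064, 0.0067]),
]

factors = ["applied load", "sliding speed", "sliding distance"]
for f, name in enumerate(factors):
    level_sn = {}
    for run in runs:
        level_sn.setdefault(run[f], []).append(sn_smaller_better(run[3]))
    best = max(level_sn, key=lambda lv: np.mean(level_sn[lv]))
    summary = ", ".join(f"level {lv}: {np.mean(v):.2f} dB" for lv, v in sorted(level_sn.items()))
    print(f"{name}: best level = {best} ({summary})")
```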

3408 Development of Accident Predictive Model for Rural Roadway

Authors: Fajaruddin Mustakim, Motohiro Fujita

Abstract:

This paper presents an accident analysis, a black spot study and the development of accident predictive models based on data collected on a rural roadway, Federal Route 50 (F050), Malaysia. The road accident trends and black spot ranking were established for the F050. The development of the accident prediction model concentrates on the Parit Raja area from KM 19 to KM 23. The multiple non-linear regression method was used to relate the discrete accident data to the road and traffic flow explanatory variables. The dependent variable was modeled as the number of crashes expressed as an accident point weighting; however, accident point weighting has rarely been accounted for in road accident prediction models. The results show that the existing number of major access points without traffic lights, rises in speed, increasing Annual Average Daily Traffic (AADT), growing numbers of motorcycles and motorcars, and reduced time gaps are potential contributors to increased accident rates on rural roadways.

Keywords: Accident Trends, Black Spot Study, Accident Prediction Model
