Search results for: linear regression.
1632 Stochastic Estimation of Cavity Flowfield
Authors: Yin Yin Pey, Leok Poh Chua, Wei Long Siauw
Abstract:
Linear stochastic estimation and quadratic stochastic estimation techniques were applied to estimate the entire velocity flow-field of an open cavity with a length-to-depth ratio of 2. The estimations used instantaneous velocity magnitude as the estimator, obtained by Particle Image Velocimetry. The predicted flow was compared against the original flow-field in terms of the Reynolds stresses and turbulent kinetic energy. Quadratic stochastic estimation proved superior to linear stochastic estimation in resolving the shear layer flow. When the velocity fluctuations were scaled up in the quadratic estimate, both the time-averaged quantities and the instantaneous cavity flow could be predicted rather accurately.
Keywords: Open cavity, Particle Image Velocimetry, Stochastic estimation, Turbulent kinetic energy.
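The linear estimate described above maps a few measured events to the full field through correlation tensors. A minimal sketch on synthetic data is given below; the array names, sizes, and probe/grid layout are illustrative assumptions, not the authors' PIV dataset.

```python
# Linear stochastic estimation (LSE) sketch: estimate a full field from a few
# reference-point "events" using the least-squares correlation coefficients.
import numpy as np

rng = np.random.default_rng(0)
n_snapshots, n_probes, n_grid = 500, 4, 200

# E: conditional events (e.g., velocity magnitude at a few reference points)
# U: full-field velocity fluctuations to be estimated at every grid point
E = rng.standard_normal((n_snapshots, n_probes))
U = E @ rng.standard_normal((n_probes, n_grid)) + 0.1 * rng.standard_normal((n_snapshots, n_grid))

# LSE coefficients minimise the mean-square error: A = <U E^T> <E E^T>^-1
Rue = U.T @ E / n_snapshots          # cross-correlation <u_i e_j>
Ree = E.T @ E / n_snapshots          # event auto-correlation <e_i e_j>
A = Rue @ np.linalg.inv(Ree)

U_est = E @ A.T                      # linear estimate of the full field
err = np.mean((U - U_est) ** 2) / np.mean(U ** 2)
print(f"normalised mean-square estimation error: {err:.3f}")
```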
1631 Laser Ultrasonic Imaging Based on Synthetic Aperture Focusing Technique Algorithm
Authors: Sundara Subramanian Karuppasamy, Che Hua Yang
Abstract:
In this work, the laser ultrasound technique has been used for analyzing and imaging inner defects in metal blocks. To detect defects in blocks, researchers have traditionally used piezoelectric transducers for the generation and reception of ultrasonic signals. These transducers can be configured into sparse and phased arrays, but both configurations have drawbacks, including the need for many transducers, time-consuming calculations, limited bandwidth, and confined image resolution. Here, we focus on a non-contact method for generating and receiving the ultrasound to examine inner defects in aluminum blocks. A Q-switched pulsed laser is used for generation, and reception is done with a Laser Doppler Vibrometer (LDV). Based on the Doppler effect, the LDV provides a rapid, high-spatial-resolution way of sensing ultrasonic waves. From the LDV, a series of scanning points is selected which serves as the phased array elements. A side-drilled hole of 10 mm diameter at a depth of 25 mm was introduced, and the defect is interrogated by the linear array of scanning points obtained from the LDV. With the aid of the Synthetic Aperture Focusing Technique (SAFT) algorithm, based on the time-shifting principle, the inspection images are generated from the A-scan data acquired from the 1-D linear phased array elements. Thus the defect can be precisely detected with good resolution.
Keywords: Laser ultrasonics, linear phased array, nondestructive testing, synthetic aperture focusing technique, ultrasonic imaging.
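The SAFT reconstruction described above is essentially a delay-and-sum over the scan points. The sketch below runs on synthetic A-scans and assumes a pulse-echo geometry (generation and reception at the same scan point); the wave speed, sampling rate, and defect location are illustrative assumptions.

```python
# Delay-and-sum SAFT sketch: shift each A-scan by the two-way travel time to a
# pixel and sum over the linear array of scan points.
import numpy as np

c = 6300.0            # assumed longitudinal wave speed in aluminium, m/s
fs = 50e6             # assumed sampling frequency, Hz
xs = np.linspace(-0.02, 0.02, 41)       # scan-point positions along the surface, m
defect = (0.005, 0.025)                 # assumed defect location (x, z), m
t = np.arange(2048) / fs

# Build synthetic A-scans: a short pulse at the two-way travel time to the defect
ascans = np.zeros((len(xs), len(t)))
for i, x in enumerate(xs):
    tof = 2 * np.hypot(x - defect[0], defect[1]) / c
    ascans[i] = np.exp(-((t - tof) * fs / 20) ** 2)   # Gaussian pulse envelope

# Delay-and-sum reconstruction over an image grid
xi = np.linspace(-0.02, 0.02, 81)
zi = np.linspace(0.005, 0.04, 71)
image = np.zeros((len(zi), len(xi)))
for k, x in enumerate(xs):
    d = np.hypot(xi[None, :] - x, zi[:, None])        # pixel-to-element distance
    idx = np.clip(np.round(2 * d / c * fs).astype(int), 0, len(t) - 1)
    image += ascans[k, idx]                           # shift and sum

print("image peak near (x, z) =", xi[np.argmax(image.max(0))], zi[np.argmax(image.max(1))])
```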
1630 Parameter Estimation for Viewing Rank Distribution of Video-on-Demand
Authors: Hyoup-Sang Yoon
Abstract:
Video-on-demand (VOD) is designed by using content delivery networks (CDN) to minimize the overall operational cost and to maximize scalability. Estimation of the viewing pattern (i.e., the relationship between the number of viewings and the ranking of VOD contents) plays an important role in minimizing the total operational cost and maximizing the performance of the VOD systems. In this paper, we have analyzed a large body of commercial VOD viewing data and found that the viewing rank distribution fits well with the parabolic fractal distribution. The weighted linear model fitting function is used to estimate the parameters (coefficients) of the parabolic fractal distribution. This paper presents an analytical basis for designing an optimal hierarchical VOD contents distribution system in terms of its cost and performance.
Keywords: VOD, CDN, parabolic fractal distribution, viewing rank, weighted linear model fitting
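The fitting step described above amounts to a weighted regression of log viewings on log rank and its square (the parabolic fractal law). A minimal sketch with synthetic viewing counts is shown below; the weighting scheme and all numbers are assumptions, not the commercial VOD data.

```python
# Weighted least-squares fit of the parabolic fractal law
# log(N_r) = a + b*log(r) + c*log(r)^2, where N_r is the viewings of rank r.
import numpy as np

rank = np.arange(1, 201)
true = np.exp(8.0 - 0.9 * np.log(rank) - 0.05 * np.log(rank) ** 2)
views = np.random.default_rng(1).poisson(true)

x, y = np.log(rank), np.log(np.maximum(views, 1))
w = np.sqrt(views + 1)                  # weight popular titles more heavily (assumed)
c2, c1, c0 = np.polyfit(x, y, deg=2, w=w)
print(f"fitted law: log N_r = {c0:.2f} + {c1:.2f} log r + {c2:.3f} (log r)^2")
```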
1629 Financial Literacy Testing: Results of Conducted Research and Introduction of a Project
Authors: J. Nesleha, H. Florianova
Abstract:
The goal of the study is to present the results of research on financial literacy in the Czech Republic and to introduce a project related to financial education in the Czech Republic. Financial education has become an important part of education in the country, yet it is still neglected at the lowest level of formal education, primary schools. The project is based on an investigation of financial literacy in primary schools in the Czech Republic. Consequently, the authors aim to formulate possible amendments related to this type of education. The dataset gained is intended to be used for analysis concerning financial education in the Czech Republic. With regard to the methods used, the most important one is regression analysis, for disclosure of predictors causing different levels of financial literacy. Furthermore, comparison of different groups is planned, for which t-tests are intended to be used. The study also employs descriptive statistics to introduce basic relationships in the data file.
Keywords: Czech Republic, financial education, financial literacy, primary school, regression analysis.
1628 Jitter Transfer in High Speed Data Links
Authors: Tsunwai Gary Yip
Abstract:
Phase locked loops for data links operating at 10 Gb/s or faster are low phase noise devices designed to operate with a low jitter reference clock. Characterization of their jitter transfer function is difficult because the intrinsic noise of the device is comparable to the random noise level in the reference clock signal. A linear model is proposed to account for the intrinsic noise of a PLL. The intrinsic noise data of a PLL for 10 Gb/s links are presented. The jitter transfer function of a PLL in a test chip for 12.8 Gb/s data links was determined in experiments using the 400 MHz reference clock as the source of simultaneous excitations over a wide range of frequencies. The results show that the PLL jitter transfer function can be approximated by a second order linear model.
Keywords: Intrinsic phase noise, jitter in data link, PLL jitter transfer function, high speed clocking in electronic circuit.
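For context, a second order linear model of this kind is commonly written as a low-pass closed-loop response with a natural frequency and damping ratio. The sketch below evaluates such a model; the loop bandwidth and damping values are illustrative assumptions, not measurements from the test chip.

```python
# Second-order low-pass model often used to approximate PLL jitter transfer.
import numpy as np

def jitter_transfer(f, fn=4e6, zeta=0.9):
    """Closed-loop magnitude |H(j2*pi*f)| of a second-order (type-II) PLL model."""
    s = 1j * 2 * np.pi * f
    wn = 2 * np.pi * fn
    H = (2 * zeta * wn * s + wn**2) / (s**2 + 2 * zeta * wn * s + wn**2)
    return np.abs(H)

for fi in np.logspace(4, 8, 9):
    print(f"{fi:10.3g} Hz  |H| = {jitter_transfer(fi):.3f}")
```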
1627 Complexity Reduction Approach with Jacobi Iterative Method for Solving Composite Trapezoidal Algebraic Equations
Authors: Mohana Sundaram Muthuvalu, Jumat Sulaiman
Abstract:
In this paper, the application of the complexity reduction approach based on half- and quarter-sweep iteration concepts with the Jacobi iterative method for solving composite trapezoidal (CT) algebraic equations is discussed. The performance of the methods for CT algebraic equations is studied comparatively through their application to solving linear Fredholm integral equations of the second kind. Furthermore, a computational complexity analysis and numerical results for three test problems are included in order to verify the performance of the methods.
Keywords: Complexity reduction approach, Composite trapezoidal scheme, Jacobi method, Linear Fredholm integral equations
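To make the setting concrete, the sketch below discretises a linear Fredholm integral equation of the second kind with the composite trapezoidal rule and solves the resulting algebraic system with a plain (full-sweep) Jacobi iteration; the half- and quarter-sweep variants of the paper are not reproduced, and the kernel and right-hand side are assumptions.

```python
# Composite trapezoidal discretisation of u(x) - lam * int K(x,t) u(t) dt = f(x),
# solved by Jacobi iteration on the algebraic system (I - lam*K*W) u = f.
import numpy as np

n = 101
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
w = np.full(n, h); w[0] = w[-1] = h / 2          # trapezoidal weights

K = np.exp(-np.abs(x[:, None] - x[None, :]))     # assumed kernel k(x, t)
f = np.sin(np.pi * x)                            # assumed right-hand side
lam = 0.5

A = np.eye(n) - lam * K * w[None, :]             # system matrix
D = np.diag(A)                                   # diagonal for the Jacobi split
u = np.zeros(n)
for it in range(1, 501):
    u_new = (f - (A @ u - D * u)) / D            # Jacobi update
    delta = np.max(np.abs(u_new - u))
    u = u_new
    if delta < 1e-10:
        break
print(f"converged in {it} iterations, residual = {np.max(np.abs(A @ u - f)):.2e}")
```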
1626 Modeling Exponential Growth Activity Using Technology: A Research with Bachelor of Business Administration Students
Authors: V. Vargas-Alejo, L. E. Montero-Moguel
Abstract:
Understanding the concept of function has been important in mathematics education for many years. In this study, the models built by a group of five business administration and accounting undergraduate students when carrying out a population growth activity are analyzed. The theoretical framework is the Models and Modeling Perspective. The results show how the students included tables, graphics, and algebraic representations in their models. Using technology was useful to interpret, describe, and predict the situation. The first model the students built to describe the situation was linear. After that, they modified and refined their ways of thinking and finally created an exponential growth model. Modeling the activity was useful for deepening understanding of mathematical concepts such as covariation, rate of change, and the exponential function, and also for differentiating between linear and exponential growth.
Keywords: Covariation reasoning, exponential function, modeling, representations.
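The linear-versus-exponential distinction at the heart of the activity can be illustrated numerically. The snippet below fits the same synthetic population data with both model forms; the data points and growth rate are invented for illustration, not the activity's data.

```python
# Fit a linear model and an exponential model to the same growth data.
import numpy as np

t = np.arange(0, 10)                       # years
pop = 120 * 1.08 ** t                      # assumed 8% annual growth

# Linear model: pop ~ a + b*t
b_lin, a_lin = np.polyfit(t, pop, 1)

# Exponential model via a log transform: log(pop) ~ log(a) + t*log(r)
logr, loga = np.polyfit(t, np.log(pop), 1)
a_exp, r_exp = np.exp(loga), np.exp(logr)

print(f"linear fit:       pop ≈ {a_lin:.1f} + {b_lin:.1f} t")
print(f"exponential fit:  pop ≈ {a_exp:.1f} x {r_exp:.3f}^t")
```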
1625 The Risk Factors Associated with Under-Five Mortality in Lesotho Using the 2009 Lesotho Demographic and Health Survey
Authors: T. Motsima
Abstract:
The under-5 mortality rate is high in sub-Saharan Africa, and Lesotho has one of the highest under-5 mortality rates in the world. The objective of the study is to determine the factors associated with under-5 mortality in Lesotho. The data used for this analysis come from the nationally representative household survey called the 2009 Lesotho Demographic and Health Survey. Odds ratios produced by the logistic regression models were used to measure the effect of each independent variable on the dependent variable. Female children were significantly 38% less likely to die than male children. Children who were breastfed for 13 to 18 months and those who were breastfed for more than 19 months were significantly less likely to die than those who were breastfed for 12 months or less. Furthermore, children of mothers who stayed in Quthing, Qacha’s Nek and Thaba Tseka ran the greatest risk of dying. The results suggested that sex of child, type of birth, breastfeeding duration, district, source of energy and marital status were significant predictors of under-5 mortality, after correcting for all variables.
Keywords: Under-5 mortality, risk factors, millennium development goals, breastfeeding, logistic regression.
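The odds ratios reported above come from exponentiating logistic regression coefficients. A hedged sketch of that step on simulated data (not the 2009 LDHS) is given below; the variable names and effect sizes are illustrative assumptions.

```python
# Binary logistic regression with odds ratios taken as exp(coefficients).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 2000
female = rng.integers(0, 2, n)
breastfed_19plus = rng.integers(0, 2, n)

# Simulate lower odds of death for females and for long breastfeeding
logit = -3.0 - 0.48 * female - 0.6 * breastfed_19plus
died = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(np.column_stack([female, breastfed_19plus]))
fit = sm.Logit(died, X).fit(disp=0)
odds_ratios = np.exp(fit.params)
print("odds ratios [const, female, breastfed 19+ months]:", np.round(odds_ratios, 2))
```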
1624 Reducing the Number of Constraints in Non Safe Petri Net
Authors: M. Zareiee, A. Dideban
Abstract:
This paper addresses the problem of forbidden states in non-safe Petri nets. To prevent the system from entering forbidden states, linear constraints can be assigned to them, and these constraints can then be enforced on the system using control places. But when the number of constraints is large, a large number of control places must be added to the model of the system, which complicates the model. There are methods for reducing the number of constraints in safe Petri nets, but there is no systematic method for non-safe Petri nets. In this paper we propose a method for reducing the number of constraints in non-safe Petri nets which is based on solving an integer linear programming problem.
Keywords: Discrete event system, supervisory control, Petri net, constraint.
1623 Explicit Solutions and Stability of Linear Differential Equations with Multiple Delays
Authors: Felix Che Shu
Abstract:
We give an explicit formula for the general solution of a one-dimensional linear delay differential equation with multiple delays, which are integer multiples of the smallest delay. For an equation of this class with two delays, we derive two equations with single delays whose stability is sufficient for the stability of the equation with two delays. This presents a new approach to the study of the stability of such systems. The approach avoids requiring knowledge of the location of the characteristic roots of the equation with multiple delays, which are generally more difficult to determine than the characteristic roots of equations with a single delay.
Keywords: Delay Differential Equation, Explicit Solution, Exponential Stability, Lyapunov Exponents, Multiple Delays.
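A numerical companion can make the class of equations concrete. The sketch below integrates a scalar linear delay differential equation with two commensurate delays by simple forward-Euler stepping with a constant history; the coefficients are illustrative assumptions, and the simulation is not a substitute for the paper's analytical stability results.

```python
# Forward-Euler integration of x'(t) = a x(t) + b1 x(t - tau) + b2 x(t - 2*tau)
# with a constant history on [-2*tau, 0].
import numpy as np

a, b1, b2 = -1.0, -0.4, -0.2               # assumed coefficients (|b1|+|b2| < -a)
tau, dt, T = 1.0, 0.001, 30.0
lag1, lag2 = int(round(tau / dt)), int(round(2 * tau / dt))

n = int(T / dt)
x = np.empty(n + lag2 + 1)
x[: lag2 + 1] = 1.0                        # constant history

for k in range(lag2, lag2 + n):
    x[k + 1] = x[k] + dt * (a * x[k] + b1 * x[k - lag1] + b2 * x[k - lag2])

print(f"|x(T)| = {abs(x[-1]):.2e} (decay suggests exponential stability here)")
```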
1622 Phase Jitter Transfer in High Speed Data Links
Authors: Tsunwai Gary Yip
Abstract:
Phase locked loops in 10 Gb/s and faster data links are low phase noise devices. Characterization of their phase jitter transfer functions is difficult because the intrinsic noise of the PLLs is comparable to the phase noise of the reference clock signal. The problem is solved by using a linear model to account for the intrinsic noise. This study also introduces a novel technique for measuring the transfer function. It involves the use of the reference clock as a source of wideband excitation, in contrast to the commonly used sinusoidal excitations at discrete frequencies. The data reported here include the intrinsic noise of a PLL for 10 Gb/s links and the jitter transfer function of a PLL for 12.8 Gb/s links. The measured transfer function suggests that the PLL responded like a second order linear system to a low noise reference clock.
Keywords: Intrinsic phase noise, jitter in data link, PLL jitter transfer function, high speed clocking in electronic circuit.
1621 On the Representation of Actuator Faults Diagnosis and Systems Invertibility
Authors: Sallem F., Dahhou B., Kamoun A.
Abstract:
In this work, the main problem considered is the detection and isolation of actuator faults. A new formulation of the linear system is generated to obtain the conditions for actuator fault detection and isolation. The proposed method is based on representing the actuator as a subsystem connected with the process system in a cascade manner. Detectability conditions are expressed in terms of invertibility notions. An example and a comparative analysis with the classic formulation illustrate the performance of this approach for simple actuator fault diagnosis using the linear model of a nuclear reactor.
Keywords: Actuator fault, Fault detection, left invertibility, nuclear reactor, observability, parameter intervals, system inversion.
1620 Linear Stability of Convection in a Viscoelastic Nanofluid Layer
Authors: Long Jye Sheu
Abstract:
This paper presents a linear stability analysis of natural convection in a horizontal layer of a viscoelastic nanofluid. The Oldroyd B model was utilized to describe the rheological behavior of a viscoelastic nanofluid. The model used for the nanofluid incorporated the effects of Brownian motion and thermophoresis. The onset criterion for stationary and oscillatory convection was derived analytically. The effects of the Deborah number, retardation parameters, concentration Rayleigh number, Prandtl number, and Lewis number on the stability of the system were investigated. Results indicated that there was competition among the processes of thermophoresis, Brownian diffusion, and viscoelasticity which caused oscillatory rather than stationary convection to occur. Oscillatory instability is possible with both bottom- and top-heavy nanoparticle distributions. Regimes of stationary and oscillatory convection for various parameters were derived and are discussed in detail.
Keywords: Instability, viscoelastic, nanofluids, oscillatory, Brownian, thermophoresis.
1619 A Laplace Transform Dual-Reciprocity Boundary Element Method for Axisymmetric Elastodynamic Problems
Authors: B. I. Yun
Abstract:
A dual-reciprocity boundary element method is presented for the numerical solution of a class of axisymmetric elastodynamic problems. The domain integrals that arise in the integro-differential formulation are converted to line integrals by using the dual-reciprocity method together with suitably constructed interpolating functions. The second order time derivatives of the displacement in the governing partial differential equations are suppressed by using Laplace transformation. In the Laplace transform domain, the problem under consideration is eventually reduced to solving a system of linear algebraic equations. Once the linear algebraic equations are solved, the displacement and stress fields in the physical domain can be recovered by using a numerical technique for inverting Laplace transforms.
Keywords: Axisymmetric elasticity, boundary element method, dual-reciprocity method, Laplace transform.
1618 Simulation of Hydrogenated Boron Nitride Nanotube’s Mechanical Properties for Radiation Shielding Applications
Authors: Joseph E. Estevez, Mahdi Ghazizadeh, James G. Ryan, Ajit D. Kelkar
Abstract:
Radiation shielding is an obstacle in long duration space exploration. Boron Nitride Nanotubes (BNNTs) have attracted attention as an additive to radiation shielding material due to B10's large neutron capture cross section. B10 has an effective neutron capture cross section suitable for low energy neutrons ranging from 10^-5 to 10^4 eV, and hydrogen is effective at slowing down high energy neutrons. Hydrogenated BNNTs are therefore potentially an ideal nanofiller for radiation shielding composites. We use Molecular Dynamics (MD) simulation via Accelrys Materials Studio 6.0 to model the Young's modulus of hydrogenated BNNTs. An extrapolation technique was employed to determine the Young's modulus from the deformation of the nanostructure at its theoretical density. A linear regression was used to extrapolate the data to the theoretical density of 2.62 g/cm^3. Simulation data show that hydrogenated BNNTs experience an 11% decrease in Young's modulus for (6,6) BNNTs and an 8.5% decrease for (8,8) BNNTs compared to non-hydrogenated BNNTs. Hydrogenated BNNTs are a viable option as a nanofiller for radiation shielding nanocomposite materials for long range and long duration space exploration.
Keywords: Boron Nitride Nanotube, Radiation Shielding, Young Modulus, Atomistic Modeling.
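The extrapolation step described above is a straight-line regression of modulus against density, evaluated at the theoretical density of 2.62 g/cm^3. A minimal sketch is shown below; the sample points are invented, not the MD results.

```python
# Linear extrapolation of Young's modulus to the theoretical density.
import numpy as np

density = np.array([2.30, 2.38, 2.45, 2.52, 2.58])      # g/cm^3 (assumed samples)
modulus = np.array([680., 720., 755., 790., 820.])      # GPa (assumed values)

slope, intercept = np.polyfit(density, modulus, 1)
E_theoretical = slope * 2.62 + intercept
print(f"extrapolated Young's modulus at 2.62 g/cm^3: {E_theoretical:.0f} GPa")
```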
1617 Crash Severity Modeling in Urban Highways Using Backward Regression Method
Authors: F. Rezaie Moghaddam, T. Rezaie Moghaddam, M. Pasbani Khiavi, M. Ali Ghorbani
Abstract:
Identifying and classifying intersections according to severity is very important for the implementation of safety-related countermeasures, and effective models are needed to compare and assess severity. Highway safety organizations have considered intersection safety among their priorities. In spite of significant advances in highway safety, large numbers of crashes with high severities still occur on highways. Investigation of the factors that influence crashes enables engineers to carry out calculations in order to reduce crash severity. Previous studies lacked a model capable of simultaneously illustrating the influence of human factors, road, vehicle, weather conditions and traffic features, including traffic volume and flow speed, on crash severity. Thus, this paper is aimed at developing models to illustrate the simultaneous influence of these variables on crash severity in urban highways. The models presented in this study were developed using binary logit models, calibrated with SPSS software. The backward regression method in SPSS was used to identify the significant variables in the model. Considering the obtained results, it can be concluded that the main factors increasing crash severity in urban highways are driver age, movement in reverse gear, technical defects of the vehicle, collisions with motorcycles and bicycles, collisions with bridges, frontal impact collisions, frontal-lateral collisions, and multi-vehicle crashes.
Keywords: Backward regression, crash severity, speed, urban highways.
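Backward regression of the kind used above drops the weakest predictor at each step until all remaining terms are significant. A hedged sketch of that loop for a binary logit model is given below; the crash data, variable names, and the 0.05 threshold are placeholders, not the study's dataset or SPSS settings.

```python
# Backward elimination for a binary logit model based on p-values.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 1500
X = pd.DataFrame({
    "driver_age": rng.normal(40, 12, n),
    "reverse_gear": rng.integers(0, 2, n),
    "vehicle_defect": rng.integers(0, 2, n),
    "noise": rng.normal(0, 1, n),              # irrelevant column to be dropped
})
logit = -4 + 0.03 * X.driver_age + 0.8 * X.reverse_gear + 0.9 * X.vehicle_defect
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

cols = list(X.columns)
while True:
    fit = sm.Logit(y, sm.add_constant(X[cols])).fit(disp=0)
    pvals = fit.pvalues.drop("const")
    worst = pvals.idxmax()
    if pvals[worst] <= 0.05:
        break
    cols.remove(worst)                         # backward step: drop the weakest term
print("retained predictors:", cols)
```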
1616 Effect of Mica Content in Sand on Site Response Analyses
Authors: Volkan Isbuga, Joman M. Mahmood, Ali Firat Cabalar
Abstract:
This study presents the site response analysis of mica-sand mixtures available in certain parts of the world, including Izmir, a highly populated city located in a seismically active region in the western part of Turkey. We performed site response analyses by employing SHAKE, an equivalent linear approach, for micaceous soil deposits consisting of layers with different mica contents and thicknesses. The dynamic behavior of micaceous sands, such as shear modulus reduction and damping ratio curves, is the input for the ground response analyses. Micaceous sands exhibit a unique dynamic response under a scenario earthquake with a magnitude of Mw = 6. Results showed that a higher amount of mica caused higher spectral accelerations.
Keywords: Micaceous sands, site response, equivalent linear approach, SHAKE.
1615 Speed Control of a Permanent Magnet Synchronous Machine (PMSM) Fed by an Inverter Voltage Fuzzy Control Approach
Authors: Jamel Khedri, Mohamed Chaabane, Mansour Souissi, Driss Mehdi
Abstract:
This paper deals with the synthesis of a fuzzy controller applied to a permanent magnet synchronous machine (PMSM) with a guaranteed H∞ performance. To design this fuzzy controller, the nonlinear model of the PMSM is approximated by a Takagi-Sugeno fuzzy model (T-S fuzzy model); then the so-called parallel distributed compensation (PDC) is employed. Next, we derive the H∞ norm property. The latter is cast in terms of linear matrix inequalities (LMIs) while minimizing the H∞ norm of the transfer function T_ev between the disturbance and the error. Experimental and simulation results obtained on a permanent magnet synchronous machine illustrate the effects of the fuzzy modelling and the controller design via the PDC.
Keywords: Feedback controller, Takagi-Sugeno fuzzy model, Linear Matrix Inequality (LMI), PMSM, H∞ performance.
1614 Assessment of Reliability and Quality Measures in Power Systems
Authors: Badr M. Alshammari, Mohamed A. El-Kady
Abstract:
The paper presents new results of a recent industry supported research and development study in which an efficient framework for evaluating practical and meaningful power system reliability and quality indices was applied. The system-wide integrated performance indices are capable of addressing and revealing areas of deficiencies and bottlenecks as well as redundancies in the composite generation-transmission-demand structure of large-scale power grids. The technique utilizes a linear programming formulation, which simulates practical operating actions and offers a general and comprehensive framework to assess the harmony and compatibility of generation, transmission and demand in a power system. Practical applications to a reduced system model as well as a portion of the Saudi power grid are also presented in the paper for demonstration purposes.
Keywords: Power systems, Linear programming, Quality assessment, Reliability.
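As a small illustration of a linear programming formulation in this spirit (not the authors' framework), the sketch below minimises load curtailment in a two-area system subject to generation and transfer limits; the system data are assumptions.

```python
# Minimise load curtailment in a two-area test system with scipy's LP solver.
from scipy.optimize import linprog

# Decision variables: [g1, g2, flow_1to2, curtail1, curtail2]
demand = [600.0, 900.0]            # MW demand in areas 1 and 2 (assumed)
gen_max = [800.0, 500.0]           # MW generation capacity (assumed)
tie_max = 250.0                    # MW transfer limit between areas (assumed)

c = [0, 0, 0, 1, 1]                # objective: total curtailment
A_eq = [
    [1, 0, -1, 1, 0],              # area 1 balance: g1 - flow + curtail1 = d1
    [0, 1,  1, 0, 1],              # area 2 balance: g2 + flow + curtail2 = d2
]
bounds = [(0, gen_max[0]), (0, gen_max[1]), (-tie_max, tie_max), (0, None), (0, None)]

res = linprog(c, A_eq=A_eq, b_eq=demand, bounds=bounds, method="highs")
print("curtailment per area (MW):", res.x[3:], " total:", res.fun)
```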
1613 Novel Anti-leukemia Calanone Compounds by Quantitative Structure-Activity Relationship AM1 Semiempirical Method
Authors: Ponco Iswanto, Mochammad Chasani, Muhammad Hanafi, Iqmal Tahir, Eva Vaulina YD, Harjono, Lestari Solikhati, Winkanda S. Putra, Yayuk Yuliantini
Abstract:
A Quantitative Structure-Activity Relationship (QSAR) approach for discovering novel, more active Calanone derivatives as anti-leukemia compounds has been conducted. Six experimental activities of Calanone compounds against the leukemia cell line L1210 are used as the material of the research. Calculation of the theoretical predictors (independent variables) was performed by the AM1 semiempirical method. The QSAR equation is determined by Principal Component Regression (PCR) analysis, with Log IC50 as the dependent variable and atomic net charges, dipole moment (μ), and the n-octanol/water partition coefficient (Log P) as the independent variables. Three novel Calanone derivatives obtained in this research have higher activity against the leukemia cell line L1210 than pure Calanone.
Keywords: AM1 semiempirical calculation, Calanone, Principal Component Regression, QSAR approach.
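Principal Component Regression compresses the descriptors with PCA and then regresses the activity on the leading components. A hedged sketch on synthetic stand-ins for the AM1-derived descriptors is shown below; the descriptor matrix and response are illustrative, not the six Calanone compounds.

```python
# Principal Component Regression (PCR): PCA on descriptors, then linear regression.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)
X = rng.standard_normal((6, 8))            # 6 compounds x 8 descriptors (assumed)
log_ic50 = X[:, 0] * 0.7 - X[:, 1] * 0.4 + rng.normal(0, 0.05, 6)

pcr = make_pipeline(StandardScaler(), PCA(n_components=3), LinearRegression())
pcr.fit(X, log_ic50)
print("training R^2 of the PCR model:", round(pcr.score(X, log_ic50), 3))
```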
1612 Traffic Signal Coordinated Control Optimization: A Case Study
Authors: Pengdi Diao, Zhuo Wang, Zundong Zhang, Hua Cheng
Abstract:
In the urban traffic network, intersections are the "bottleneck points" of road network capacity, and the arterials are the main body of the road network and the key factor that guarantees the normal operation of the city's social and economic activities. The rapid increase in vehicles leads to serious traffic jams and increases vehicle delay. Most cities in our country still use traditional single-intersection control, which can no longer meet the needs of city traffic. In this paper, Synchro 6.0 is used as a platform to minimize intersection delay and to optimize the signal cycle and splits for Zhonghua Street in Handan City. Meanwhile, a linear control system is used to optimize the phases for the arterial road in this system. Comparing conditions before and after the coordinated control, the capacities and service levels of this road and the adjacent roads improved significantly.
Keywords: Linear control system, delay mode, signal optimization, Synchro 6.0 simulation.
1611 Model-Based Small Area Estimation with Application to Unemployment Estimates
Authors: Hichem Omrani, Philippe Gerber, Patrick Bousch
Abstract:
The problem of Small Area Estimation (SAE) is complex because of various information sources and insufficient data. In this paper, an approach for SAE is presented for decision-making at the national, regional and local levels. We propose an Empirical Best Linear Unbiased Predictor (EBLUP) as an estimator in order to combine several information sources to evaluate various indicators. First, we present the urban audit project and its environmental, social and economic indicators. Secondly, we propose an approach for decision making in order to estimate indicators. An application is used to validate the theoretical proposal. Finally, a decision support system is presented, based on an open-source environment.
Keywords: Small area estimation, statistical method, sampling, empirical best linear unbiased predictor (EBLUP), decision-making.
1610 A Comparative Study of Rigid and Modified Simplex Methods for Optimal Parameter Settings of ACO for Noisy Non-Linear Surfaces
Authors: Seksan Chunothaisawat, Pongchanun Luangpaiboon
Abstract:
There are two common types of operational research techniques: optimisation and metaheuristic methods. The latter may be defined as a sequential process that intelligently performs exploration and exploitation, adopted from natural intelligence and strong inspiration, to form several iterative searches. The aim is to effectively determine near-optimal solutions in a solution space. In this work, a type of metaheuristic called Ant Colony Optimisation, ACO, inspired by the foraging behaviour of ants, was adapted to find optimal solutions of eight non-linear continuous mathematical models. Considering a solution space in a specified region of each model, sub-solutions may contain a global optimum or multiple local optima. Moreover, the algorithm has several common parameters, namely the number of ants, moves, and iterations, which act as the algorithm's drivers. A series of computational experiments for initialising the parameters was conducted using the Rigid Simplex, RS, and Modified Simplex, MSM, methods. Experimental results were analysed in terms of the best-so-far solutions, mean and standard deviation. Finally, recommended level settings of the ACO parameters are stated for all eight functions. These parameter settings can be applied as a guideline for future uses of ACO, to promote ease of use of ACO in real industrial processes. It was found that the results obtained from MSM were similar to those gained from RS. However, when the results with noise standard deviations of 1 and 3 are compared, MSM reaches optimal solutions more efficiently than RS in terms of speed of convergence.
Keywords: Ant colony optimisation, metaheuristics, modified simplex, non-linear, rigid simplex.
1609 The Long Run Relationship between Exports and Imports in South Africa: Evidence from Cointegration Analysis
Authors: Sagaren Pillay
Abstract:
This study empirically examines the long run equilibrium relationship between South Africa's exports and imports using quarterly data from 1985 to 2012. The theoretical framework used for the study is based on Johansen's maximum likelihood cointegration technique, which tests for both the existence and the number of cointegrating vectors. The study finds that both series are integrated of order one and are cointegrated. A statistically significant cointegrating relationship is found to exist between exports and imports. The study models this unique linear and lagged relationship using a Vector Error Correction Model (VECM). The findings of the study confirm the existence of a long run equilibrium relationship between exports and imports.
Keywords: Cointegration lagged, linear, maximum likelihood, vector error correction model.
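The Johansen test and VECM estimation are available in statsmodels. The sketch below runs both on simulated quarterly series that share one common trend; the data are not the South African export and import series.

```python
# Johansen cointegration test followed by VECM estimation on simulated data.
import numpy as np
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import coint_johansen, VECM

rng = np.random.default_rng(5)
n = 112                                    # quarters, roughly 1985-2012
trend = np.cumsum(rng.normal(0, 1, n))     # shared stochastic trend
exports = 10 + trend + rng.normal(0, 0.5, n)
imports = 12 + 0.9 * trend + rng.normal(0, 0.5, n)
data = pd.DataFrame({"exports": exports, "imports": imports})

jres = coint_johansen(data, det_order=0, k_ar_diff=1)
print("trace statistics:      ", np.round(jres.lr1, 2))
print("95% critical values:   ", jres.cvt[:, 1])

vecm = VECM(data, k_ar_diff=1, coint_rank=1, deterministic="ci").fit()
print("cointegrating vector (beta):", np.round(vecm.beta.ravel(), 3))
```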
1608 Image Compression Using Multiwavelet and Multi-Stage Vector Quantization
Authors: S. Esakkirajan, T. Veerakumar, V. Senthil Murugan, P. Navaneethan
Abstract:
Existing image coding standards generally degrade at low bit-rates because of the underlying block-based Discrete Cosine Transform scheme. Over the past decade, the success of wavelets in solving many different problems has contributed to their unprecedented popularity. Due to implementation constraints, scalar wavelets do not simultaneously possess all the properties, such as orthogonality, short support, linear phase symmetry, and a high order of approximation through vanishing moments, which are essential for signal processing. A new class of wavelets called 'multiwavelets', which possess more than one scaling function, overcomes this problem. This paper presents a new image coding scheme based on non-linear approximation of multiwavelet coefficients along with multistage vector quantization. The performance of the proposed scheme is compared with the results obtained from scalar wavelets.
Keywords: Image compression, Multiwavelets, Multi-stage vector quantization.
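In multi-stage vector quantization, each stage quantizes the residual left by the previous stage with a small codebook. The sketch below illustrates the idea with K-means codebooks; the input vectors are synthetic stand-ins for blocks of multiwavelet coefficients, and the codebook sizes are assumptions.

```python
# Multi-stage vector quantization: successive K-means codebooks on residuals.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(6)
coeffs = rng.standard_normal((4096, 8))        # assumed 8-dimensional coefficient vectors

residual = coeffs.copy()
reconstruction = np.zeros_like(coeffs)
for stage in range(3):                         # three VQ stages of 16 codewords each
    km = KMeans(n_clusters=16, n_init=4, random_state=stage).fit(residual)
    quantized = km.cluster_centers_[km.labels_]
    reconstruction += quantized
    residual -= quantized
    mse = np.mean((coeffs - reconstruction) ** 2)
    print(f"stage {stage + 1}: MSE = {mse:.4f}")
```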
1607 A Single-chip Proportional to Absolute Temperature Sensor Using CMOS Technology
Authors: AL.AL, M. B. I. Reaz, S. M. A. Motakabber, Mohd Alauddin Mohd Ali
Abstract:
Nowadays it is a trend for electronic circuit designers to integrate all system components on a single chip. This paper proposes the design of a single-chip proportional to absolute temperature (PTAT) sensor, including a voltage reference circuit, using CEDEC 0.18 µm CMOS technology. It is a challenge to design a single-chip, wide-range, linear-response temperature sensor for many applications. The channel widths of the compensation transistor and the reference transistor are critical for the design of the PTAT temperature sensor circuit. The designed temperature sensor shows excellent linearity between -100 °C and 200 °C, and the sensitivity is about 0.05 mV/°C. The chip is designed to operate with a single voltage source of 1.6 V.
Keywords: PTAT, single-chip circuit, linear temperature sensor, CMOS technology.
1606 Machine Learning Based Approach for Measuring Promotion Effectiveness in Multiple Parallel Promotions’ Scenarios
Authors: Revoti Prasad Bora, Nikita Katyal
Abstract:
Promotion is a key element in the retail business. Thus, analysis of promotions to quantify their effectiveness in terms of revenue and/or margin is an essential activity in the retail industry. However, measuring the sales/revenue uplift is based on estimations, as the actual sales/revenue without the promotion is not observed. Further, the presence of halo and cannibalization in a multiple parallel promotions scenario complicates the problem. Calculating the baseline by considering inter-brand/competitor items, or accounting for halo and cannibalization by treating the baseline as the items' unit sales in neighboring non-promotional weeks individually, may not capture the overall revenue uplift in the case of multiple parallel promotions. Hence, this paper proposes a machine learning based method for calculating the revenue uplift by considering the halo and cannibalization impact on the baseline and the revenue. In the first section of the proposed methodology, the baseline of an item is calculated by incorporating the impact of promotions on its related items. In the later section, the revenue of an item is calculated by considering both halo and cannibalization impacts. Hence, this methodology enables correct calculation of the overall revenue uplift due to a given promotion.
Keywords: Halo, cannibalization, promotion, baseline, temporary price reduction, retail, elasticity, cross price elasticity, machine learning, random forest, linear regression.
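A hedged sketch of the baseline idea is given below: a regression model predicts an item's non-promotional unit sales from calendar features and the promotion flags of related items, and the uplift is read off against that baseline. The feature names, data, and model choice are illustrative assumptions, not the paper's methodology in detail.

```python
# Baseline estimation with a random forest, then uplift against that baseline.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(7)
n = 200                                            # weeks of history (assumed)
df = pd.DataFrame({
    "week_of_year": rng.integers(1, 53, n),
    "related_item_on_promo": rng.integers(0, 2, n),   # halo / cannibalization driver
    "own_promo": rng.integers(0, 2, n),
})
df["units"] = (50 + 8 * np.sin(2 * np.pi * df.week_of_year / 52)
               - 6 * df.related_item_on_promo + 20 * df.own_promo
               + rng.normal(0, 3, n))

features = ["week_of_year", "related_item_on_promo"]
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(df.loc[df.own_promo == 0, features], df.loc[df.own_promo == 0, "units"])

promo_weeks = df[df.own_promo == 1]
baseline = model.predict(promo_weeks[features])    # expected units without the promotion
uplift = promo_weeks["units"].values - baseline
print(f"mean weekly unit uplift attributed to the promotion: {uplift.mean():.1f}")
```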
1605 The Relationship between Excreta Viscosity and TMEn in SBM
Authors: Ali Nouri Emamzadeh
Abstract:
The experiment was performed to study the relationship between excreta viscosity and the nitrogen-corrected true metabolisable energy (TMEn) of soybean meals, using the conventional addition method (CAM) in adult cockerels for 7 d: a 3-d pre-experiment period and a 4-d experiment period. Results indicated that the differences between the excreta viscosity values were significant (P<0.01) for the SBMs. The excreta viscosity values were lower (P<0.01) for SBMs 6, 2, 8, 1 and 3 than for the other SBMs. The mean TMEn (kcal/kg) values differed significantly (P<0.01) between SBMs. The highest TMEn values (P<0.01) were for SBMs 6, 2, 8 and 1, and the lowest TMEn values (P<0.01) were for SBMs 3, 7, 4, 9 and 5. In conclusion, there was an inverse linear relationship between excreta viscosity and TMEn in the SBMs, probably due to their differing soluble NSP contents.
Keywords: Soybean meals (SBMs), nitrogen-corrected true metabolisable energy (TMEn), viscosity.
1604 A Linear Relation for Voltage Unbalance Factor Evaluation in Three-Phase Electrical Power System Using Space Vector
Authors: Dana M. Ragab, Jasim A Ghaeb
Abstract:
The Voltage Unbalance Factor (VUF) index is recommended to evaluate system performance under unbalanced operation. However, its calculation requires complex algebra, which limits its use in the field. Furthermore, at least one system cycle is required to detect unbalance using the VUF, whereas ideally unbalance mitigation must be performed within 10 ms for 50 Hz systems. In this work, a linear relation for VUF evaluation in a three-phase electrical power system using the space vector (SV) is derived. It is proposed to determine the voltage unbalance quickly and accurately and to overcome the constraints associated with the traditional methods of VUF evaluation. The Aqaba-Qatrana-South Amman (AQSA) power system is considered to study the system performance under unbalanced conditions. The results show that both the complexity of calculations and the time required to evaluate VUF are reduced significantly.
Keywords: Power quality, space vector, unbalance evaluation.
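For reference, the traditional VUF that the paper seeks to simplify is the ratio of negative- to positive-sequence voltage obtained from the three phase phasors via symmetrical components. The snippet below evaluates it numerically; the phasor values are illustrative.

```python
# Traditional VUF = |V2| / |V1| * 100 from symmetrical components.
import numpy as np

a = np.exp(2j * np.pi / 3)                       # 120-degree rotation operator

def vuf_percent(Va, Vb, Vc):
    V1 = (Va + a * Vb + a**2 * Vc) / 3           # positive-sequence component
    V2 = (Va + a**2 * Vb + a * Vc) / 3           # negative-sequence component
    return 100 * abs(V2) / abs(V1)

Va = 230 * np.exp(1j * np.deg2rad(0))
Vb = 225 * np.exp(1j * np.deg2rad(-122))         # slightly unbalanced magnitudes/angles
Vc = 232 * np.exp(1j * np.deg2rad(119))
print(f"VUF = {vuf_percent(Va, Vb, Vc):.2f} %")
```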
1603 Daily Probability Model of Storm Events in Peninsular Malaysia
Authors: Mohd Aftar Abu Bakar, Noratiqah Mohd Ariff, Abdul Aziz Jemain
Abstract:
Storm Event Analysis (SEA) provides a method to define rainfall events as storms, where each storm has its own amount and duration. By modelling the daily probability of different types of storms, the onset, offset and cycle of rainfall seasons can be determined and investigated. Furthermore, researchers from the field of meteorology will be able to study the dynamical characteristics of rainfall and make predictions for future reference. In this study, four categories of storms (short, intermediate, long and very long) are introduced based on the length of storm duration. Daily probability models of storms are built for these four categories in Peninsular Malaysia. The models are constructed by using the Bernoulli distribution and by applying linear regression on the first Fourier harmonic equation. From the models obtained, it is found that the daily probability of storms in the eastern part of Peninsular Malaysia shows a unimodal pattern, with a high probability of rain beginning at the end of the year and lasting until early the next year. This is very likely due to the Northeast monsoon season, which occurs from November to March every year. Meanwhile, short and intermediate storms in other regions of Peninsular Malaysia experience a bimodal cycle due to the two inter-monsoon seasons. Overall, these models indicate that Peninsular Malaysia can be divided into four distinct regions based on the daily pattern of the probability of various storm events.
Keywords: Daily probability model, monsoon seasons, regions, storm events.
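The modelling step described above can be sketched as follows: daily storm occurrence is treated as Bernoulli, and the daily occurrence proportion is fitted by linear regression on the first Fourier harmonic of the annual cycle. The occurrence data below are simulated, not the Peninsular Malaysia records, and the single-harmonic design is an assumption.

```python
# Bernoulli occurrence probability fitted with the first Fourier harmonic.
import numpy as np

rng = np.random.default_rng(8)
days = np.arange(1, 366)
true_p = 0.3 + 0.2 * np.cos(2 * np.pi * (days - 330) / 365)   # peaks near year end
occurrences = rng.binomial(30, true_p)                        # 30 years of daily records
prop = occurrences / 30                                       # observed daily proportion

# Design matrix with intercept plus the first annual harmonic
X = np.column_stack([np.ones_like(days, dtype=float),
                     np.cos(2 * np.pi * days / 365),
                     np.sin(2 * np.pi * days / 365)])
b, *_ = np.linalg.lstsq(X, prop, rcond=None)
fitted = X @ b
print("fitted coefficients (b0, b1, b2):", np.round(b, 3))
print("day of peak storm probability:", days[np.argmax(fitted)])
```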