Search results for: Mean Squared Error
895 Appraisal of Relativistic Effects on GNSS Receiver Positioning
Authors: I. Yakubu, Y. Y. Ziggah, E. A. Gyamera
Abstract:
The Global Navigation Satellite System (GNSS) era started with the launch of the United States Department of Defense Global Positioning System (GPS). GNSS has grown over the years to include GLONASS (Russia), Galileo (European Union), and BeiDou (China). Any GNSS architecture consists of three major segments: the space, control, and user segments. Errors such as multipath, ionospheric and tropospheric effects, satellite clock errors, receiver noise, and orbit errors (relativistic effects) significantly affect GNSS positioning. To obtain centimeter-level accuracy, the impact of the relative motion of the satellites and the earth needs to be taken into account. This paper discusses, based on the available literature, the relevance of the theory of relativity as a source of error for GNSS receiver position fixes. The review reveals that, due to relativity, time dilation, gravitational frequency shift, and the Sagnac effect significantly influence GNSS positioning, contributing an error range of ±2.5 m in the pseudo-range computation.
Keywords: GNSS, relativistic effects, pseudo-range, accuracy.
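The magnitude of the effects listed above is easy to reproduce: the sketch below combines special-relativistic time dilation with the general-relativistic gravitational frequency shift for a GPS-like orbit, giving the well-known clock drift of roughly +38 microseconds per day, which would accumulate to kilometers of pseudo-range error if left uncorrected. The orbital constants are nominal textbook values assumed for illustration, not figures from the paper.

```python
import math

# Back-of-envelope computation of the two dominant relativistic effects
# on a GPS satellite clock; constants are nominal textbook values.
c = 299_792_458.0          # speed of light, m/s
GM = 3.986004418e14        # Earth's gravitational parameter, m^3/s^2
r_earth = 6.371e6          # mean Earth radius, m
r_sat = 2.6561e7           # GPS orbital radius, m

v_sat = math.sqrt(GM / r_sat)                  # circular orbital speed, ~3.9 km/s

dt_sr = -0.5 * (v_sat / c) ** 2                # time dilation: orbiting clock runs slow
dt_gr = GM / c**2 * (1 / r_earth - 1 / r_sat)  # gravitational shift: higher clock runs fast

net = dt_sr + dt_gr                            # net fractional frequency offset
print(f"net fractional offset: {net:.3e}")                     # ~ +4.5e-10
print(f"clock drift: {net * 86400e6:.1f} microseconds/day")    # ~ +38.5
print(f"uncorrected range error: {net * 86400 * c / 1000:.1f} km/day")
```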
894 Performance Evaluation of a Minimum Mean Square Error-Based Physical Sidelink Shared Channel Receiver under Fading Channel
Authors: Yang Fu, Jaime Rodrigo Navarro, Jose F. Monserrat, Faiza Bouchmal, Oscar Carrasco Quilis
Abstract:
Cellular Vehicle-to-Everything (C-V2X) is considered a promising solution for future autonomous driving. From Release 16 to Release 17, the Third Generation Partnership Project (3GPP) has introduced the definitions and services for 5G New Radio (NR) V2X. Since establishing a simulator for C-V2X communications is an essential preliminary step toward reliable and stable communication links, this paper proposes a complete link-level simulator framework based on the 3GPP specifications for the Physical Sidelink Shared Channel (PSSCH) of the 5G NR Physical Layer (PHY). In this framework, several algorithms in the receiver part, i.e., a sliding window in channel estimation and Minimum Mean Square Error (MMSE)-based equalization, are developed. Finally, the performance of the developed PSSCH receiver is validated through extensive simulations under different assumptions.
Keywords: C-V2X, 5G NR, link-level simulator, PSSCH, channel estimation, MMSE equalization.
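For readers unfamiliar with the equalizer named in the abstract, a minimal per-subcarrier MMSE sketch follows; the QPSK constellation, 64-symbol block, and noise variance are illustrative assumptions, not the paper's PSSCH receiver configuration.

```python
import numpy as np

# Per-subcarrier MMSE equalizer sketch: w = conj(h) / (|h|^2 + sigma^2).
# Constellation, block length, and noise level are illustrative.
def mmse_equalize(y, h_est, noise_var):
    w = np.conj(h_est) / (np.abs(h_est) ** 2 + noise_var)
    return w * y

rng = np.random.default_rng(0)
n, noise_var = 64, 0.01
h = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)   # Rayleigh taps
x = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], n) / np.sqrt(2)        # QPSK symbols
w_noise = np.sqrt(noise_var / 2) * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
y = h * x + w_noise
x_hat = mmse_equalize(y, h, noise_var)
print("post-equalization MSE:", np.mean(np.abs(x_hat - x) ** 2).round(4))
```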
893 Mathematical Analysis of Stock Prices Prediction in a Financial Market Using Geometric Brownian Motion Model
Authors: Edikan E. Akpanibah, Ogunmodimu Dupe Catherine
Abstract:
The relevance of geometric Brownian motion (GBM) in modelling the behaviour of stock market prices (SMP) cannot be overemphasized, given the volatility of the SMP. Consequently, there is a need to investigate how GBM models are estimated and used in financial markets to predict SMP. To achieve this, GBM estimation and its application to the SMP of some selected companies are studied. The normal and log-normal distributions were used to determine the expected value, variance, and covariance. Furthermore, the GBM model was used to predict the SMP of the selected companies over a period of time, and the mean absolute percentage error (MAPE) was calculated and used to determine the accuracy of the GBM model in predicting the SMP of the four companies under consideration. It was observed that, for all four companies, the MAPE values were within the region of acceptance. Also, the MAPE values of our data were compared to those in the existing literature to test the accuracy of our prediction with respect to the time of investment. Finally, numerical simulations of the SMP, expectations, and variances of the four companies over a period of time were presented using the MATLAB programming software.
Keywords: Stock Market, Geometric Brownian Motion, normal and log-normal distribution, mean absolute percentage error.
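The prediction-and-scoring loop described above condenses to a few lines: simulate a GBM path, forecast with the GBM mean E[S_t] = S0*exp(mu*t), and score with MAPE. The drift, volatility, and initial price below are made-up values, not data from the four companies studied.

```python
import numpy as np

# GBM simulation plus MAPE scoring; mu, sigma, and S0 are made up.
rng = np.random.default_rng(42)
S0, mu, sigma, T, n = 100.0, 0.08, 0.2, 1.0, 252
dt = T / n

# Exact GBM update: S_{t+dt} = S_t * exp((mu - sigma^2/2) dt + sigma sqrt(dt) Z)
z = rng.standard_normal(n)
path = S0 * np.exp(np.cumsum((mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z))

# Point forecast from the GBM expectation E[S_t] = S0 * exp(mu t)
t = np.arange(1, n + 1) * dt
forecast = S0 * np.exp(mu * t)

mape = np.mean(np.abs((path - forecast) / path)) * 100
print(f"MAPE: {mape:.2f}%")   # values under ~20% are commonly read as acceptable
```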
892 On Hyperbolic Gompertz Growth Model
Authors: Angela Unna Chukwu, Samuel Oluwafemi Oyamakin
Abstract:
We propose a Hyperbolic Gompertz Growth Model (HGGM), developed by introducing an allometric shape parameter. This was achieved by convoluting a hyperbolic sine function on the intrinsic rate of growth in the classical Gompertz growth equation. The resulting integral solution, obtained deterministically, was reprogrammed into a statistical model and used to model the height and diameter of pines (Pinus caribaea). Its predictive ability was compared with that of the classical Gompertz growth model using goodness-of-fit tests and model selection criteria; the approach mimics the natural variability of height/diameter increment with respect to age and therefore provides more realistic height/diameter predictions. The Kolmogorov-Smirnov and Shapiro-Wilk tests were used to check the compliance of the error term with the normality assumption, while the independence of the error term was confirmed using the runs test. The mean function of top height/Dbh over age predicted the observed values more closely under the hyperbolic Gompertz growth model than under the source model (the classical Gompertz growth model), and the R2, adjusted R2, MSE, and AIC results confirmed the predictive power of the HGGM over its source model.
Keywords: Height, Dbh, forest, Pinus caribaea, hyperbolic, Gompertz.
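Since the abstract does not give the HGGM's closed form, the sketch below fits only the classical Gompertz source model, H(t) = a*exp(-b*exp(-kt)), illustrating the baseline against which the hyperbolic variant is compared; the height-age data and starting values are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

# Fit of the classical Gompertz source model to invented height-age data.
def gompertz(t, a, b, k):
    return a * np.exp(-b * np.exp(-k * t))

age = np.array([2, 4, 6, 8, 10, 12, 14, 16], dtype=float)        # years
height = np.array([1.9, 4.2, 7.1, 9.8, 12.0, 13.6, 14.7, 15.4])  # m, invented

params, _ = curve_fit(gompertz, age, height, p0=(18.0, 3.0, 0.2))
mse = np.mean((height - gompertz(age, *params)) ** 2)
print("a, b, k =", np.round(params, 3), "  MSE =", round(float(mse), 4))
```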
891 The Relationship between Representational Conflicts, Generalization, and Encoding Requirements in an Instance Memory Network
Authors: Mathew Wakefield, Matthew Mitchell, Lisa Wise, Christopher McCarthy
Abstract:
This paper aims to provide an interpretation of artificial neural networks (ANNs) and explore some of its implications. The interpretation views ANNs as a memory which encodes instances of experience. An experiment explores the behavior of encoding and retrieval of instances from memory. A localised-representation ANN is created that allows control over the encoding and the retrieved memory sample size, and it is experimented with using the MNIST digits dataset. The relationship between input familiarity, conflict within retrieved samples, and error rates is described and demonstrated to be an effective driver for memory encoding. Results indicate that selective encoding and retrieval samples that allow detection of memory conflicts produce optimal performance, and that error rates are normally distributed with input familiarity and conflict. By using input familiarity and sample consistency to guide memory encoding, the number of encoding trials on the dataset was reduced to 18.33% of the training data while maintaining good recognition performance on the test data.
Keywords: Artificial Neural Networks, ANNs, representation, memory, conflict monitoring, confidence.
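A toy version of the conflict-guided selective encoding described above, assuming a nearest-neighbour instance store; the sample size k, the familiarity threshold, and the data are invented for illustration.

```python
import numpy as np

# Toy instance memory: store a presentation only when the retrieved
# sample is unfamiliar (nearest neighbour too far) or conflicted
# (retrieved labels disagree). Thresholds are illustrative.
class InstanceMemory:
    def __init__(self, k=5, dist_thresh=4.0):
        self.X, self.y, self.k, self.dist_thresh = [], [], k, dist_thresh

    def retrieve(self, x):
        d = np.linalg.norm(np.array(self.X) - x, axis=1)
        idx = np.argsort(d)[: self.k]
        return d[idx], [self.y[i] for i in idx]

    def maybe_encode(self, x, label):
        if not self.X:
            self.X.append(x); self.y.append(label); return True
        dists, labels = self.retrieve(x)
        unfamiliar = dists.min() > self.dist_thresh
        conflict = len(set(labels)) > 1          # retrieved labels disagree
        if unfamiliar or conflict:
            self.X.append(x); self.y.append(label); return True
        return False

mem = InstanceMemory()
rng = np.random.default_rng(1)
encoded = sum(mem.maybe_encode(rng.standard_normal(8), int(lbl))
              for lbl in rng.integers(0, 2, 200))
print(f"encoded {encoded} of 200 presentations")
```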
890 Reducing Unplanned Extubation in Psychiatric LTC
Authors: Jih-Rue Pan, Feng-Chuan Pan
Abstract:
Today's healthcare industry has become more patient-centric than profession-centric, so the quality of healthcare and patient safety are major concerns in modern healthcare facilities. An unplanned extubation (UE) may be detrimental to the patient's life and thus is one of the major indexes of patient safety and healthcare quality. A high UE rate defeats not only the healthcare quality and patient safety policy but also the nurses' morale and job satisfaction. The UE problem in a psychiatric hospital is unique and can be a tough challenge for healthcare professionals, since the patients mostly lack communication capabilities. This essay reports on a project organized to reduce the UE rate from the then-current 2.3% to a lower and satisfactory level in the long-term care units of a psychiatric hospital. The project was conducted between March 1st, 2011 and August 31st, 2011. Based on the error information gathered from various units of the hospital, the team analyzed the root causes, and possible solutions were proposed at the meetings. Four solutions were then agreed upon by consensus and launched in the units in question. The UE rate was reduced to 0.17%. The experience from this project, and the procedures and tools adopted, would be a good reference for other hospitals.
Keywords: Unplanned extubation, patient safety, error information.
889 An Improved Performance of the SRM Drives Using Z-Source Inverter with the Simplified Fuzzy Logic Rule Base
Authors: M. Hari Prabhu
Abstract:
This paper presents the performance of Switched Reluctance Motor (SRM) drives using a Z-source inverter with a simplified Fuzzy Logic Controller (FLC) rule base and an output scaling factor (SF) self-tuning mechanism. The aim is to simplify the program complexity of the controller by reducing the number of fuzzy sets of the membership functions (MFs) without losing system performance and stability, via an adjustable controller gain. The Z-source inverter exhibits both voltage-buck and voltage-boost capability; it reduces line harmonics, improves reliability, and extends the output voltage range. The output SF of the controller can be tuned continuously by a gain updating factor, whose value is derived from fuzzy logic with the plant error and the error change ratio as input variables. Results, obtained on a four-phase 6/8-pole SRM based on the dSPACE DS1104 platform, show the feasibility and effectiveness of the devised methods, and the performance of the proposed controllers is compared with their conventional counterparts.
Keywords: Fuzzy logic controller, scaling factor (SF), switched reluctance motor (SRM), variable-speed drives.
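The gain-updating mechanism described above can be illustrated with a deliberately tiny rule base: three triangular sets each for the error and the error change, with singleton outputs combined by a weighted average. The membership shapes and rule values below are illustrative stand-ins, not the paper's simplified rule base.

```python
# Toy sketch of an output-SF self-tuning step: a gain-updating factor
# is inferred from normalized error e and error change de.
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    return max(min((x - a) / (b - a + 1e-12), (c - x) / (c - b + 1e-12)), 0.0)

def memberships(x):
    # N (negative), Z (zero), P (positive) over [-1, 1]
    return {"N": tri(x, -2.0, -1.0, 0.0), "Z": tri(x, -1.0, 0.0, 1.0),
            "P": tri(x, 0.0, 1.0, 2.0)}

RULES = {("N", "N"): 1.0, ("N", "Z"): 0.7, ("N", "P"): 0.4,
         ("Z", "N"): 0.7, ("Z", "Z"): 0.2, ("Z", "P"): 0.7,
         ("P", "N"): 0.4, ("P", "Z"): 0.7, ("P", "P"): 1.0}

def gain_updating_factor(e, de):
    mu_e, mu_de = memberships(e), memberships(de)
    w = {(i, j): min(mu_e[i], mu_de[j]) for i in mu_e for j in mu_de}
    num = sum(w[r] * RULES[r] for r in w)
    den = sum(w.values()) or 1.0
    return num / den        # weighted-average (singleton) defuzzification

print(gain_updating_factor(0.8, -0.1))   # large error -> larger correction gain
print(gain_updating_factor(0.05, 0.0))   # near setpoint -> small gain
```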
888 Adaptive Pulse Coupled Neural Network Parameters for Image Segmentation
Authors: Thejaswi H. Raya, Vineetha Bettaiah, Heggere S. Ranganath
Abstract:
For over a decade, Pulse Coupled Neural Network (PCNN) based algorithms have been successfully used in image interpretation applications, including image segmentation. There are several versions of PCNN-based image segmentation methods, and the segmentation accuracy of all of them is very sensitive to the values of the network parameters. Most methods treat PCNN parameters such as the linking coefficient and the primary firing threshold as global parameters and determine them by trial and error. The automatic determination of appropriate values for the linking coefficient and the primary firing threshold is a challenging problem and deserves further research. This paper presents a method for obtaining global as well as local values for the linking coefficient and the primary firing threshold for neurons directly from the image statistics. Extensive simulation results show that the proposed approach achieves excellent segmentation accuracy, comparable to the best accuracy obtainable by trial and error, for a variety of images.
Keywords: Automatic Selection of PCNN Parameters, Image Segmentation, Neural Networks, Pulse Coupled Neural Network.
887 Geometric Simplification Method of Building Energy Model Based on Building Performance Simulation
Authors: Yan Lyu, Yiqun Pan, Zhizhong Huang
Abstract:
In the design stage of a new building, an energy model of the building is often required to analyze its energy-efficiency performance. In practice, a certain degree of geometric simplification has to be done when establishing building energy models, since the detailed geometric features of a real building are hard to describe perfectly in most energy simulation engines, such as ESP-r, eQuest, or EnergyPlus. In fact, a detailed description is unnecessary when extremely high accuracy is not demanded. Therefore, this paper analyzes the relationship between the simulation error of building energy models and the geometric simplification of the models. Two parameters are selected as indices to characterize the geometric features in building energy simulation: the southward projected area and the total side surface area of the building. Based on this parameterization, an arbitrary column-shaped building can be simplified to a typical cuboid building for energy modeling. The results indicate that the geometric simplification causes no more than 7% prediction error in annual cooling/heating load for buildings whose ratio of southward projection length to total bottom perimeter lies in the range 0.25-0.35, which means the method is applicable for building performance simulation.
Keywords: building energy model, simulation, geometric simplification, design, regression
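To make the two indices concrete, the sketch below computes the southward projected area and the total side surface area for a column building with a polygonal footprint, then forms an equivalent cuboid; the rule used to pick the cuboid depth (preserving the bottom perimeter) is an assumption for illustration, not the paper's regression procedure.

```python
import numpy as np

# The two indices named above, for a column building with a polygonal
# footprint (x = east, y = north) and uniform height, mapped to an
# equivalent cuboid. The depth rule is an illustrative assumption.
def perimeter(pts):
    d = np.diff(np.vstack([pts, pts[:1]]), axis=0)
    return np.hypot(d[:, 0], d[:, 1]).sum()

def simplify_to_cuboid(footprint, height):
    pts = np.asarray(footprint, dtype=float)
    south_len = pts[:, 0].max() - pts[:, 0].min()    # east-west extent
    p = perimeter(pts)
    south_area = south_len * height                  # southward projected area
    side_area = p * height                           # total side surface area
    depth = p / 2 - south_len                        # cuboid keeping the same perimeter
    return south_area, side_area, (south_len, depth, height)

footprint = [(0, 0), (20, 0), (20, 6), (8, 6), (8, 12), (0, 12)]  # L-shape, m
south_area, side_area, cuboid = simplify_to_cuboid(footprint, 10.0)
print("southward projected area:", south_area, "m^2")
print("side surface area:", side_area, "m^2")
print("equivalent cuboid (w, d, h):", cuboid)
print("south/perimeter ratio:", 20 / 64)   # 0.3125, inside the 0.25-0.35 range
```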
886 Enhancing K-Means Algorithm with Initial Cluster Centers Derived from Data Partitioning along the Data Axis with the Highest Variance
Authors: S. Deelers, S. Auwatanamongkol
Abstract:
In this paper, we propose an algorithm to compute initial cluster centers for K-means clustering. Data in a cell is partitioned using a cutting plane that divides the cell into two smaller cells. The plane is perpendicular to the data axis with the highest variance and is designed to reduce the sum of squared errors of the two cells as much as possible while keeping the two cells as far apart as possible. Cells are partitioned one at a time until the number of cells equals the predefined number of clusters, K. The centers of the K cells become the initial cluster centers for K-means. The experimental results suggest that the proposed algorithm is effective, converging to better clustering results than those of the random initialization method. The research also indicates that the proposed algorithm greatly improves the likelihood of every cluster containing some data.
Keywords: Clustering algorithm, K-means algorithm, Data partitioning, Initial cluster centers.
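A condensed sketch of the seeding procedure, under two simplifying assumptions the paper refines: the cell with the largest sum of squared errors is split next, and the cut is placed at the mean along the highest-variance axis rather than at the SSE-optimal position.

```python
import numpy as np

# Variance-axis partitioning for K-means seeding (simplified cut rule).
def initial_centers(X, k):
    cells = [X]
    while len(cells) < k:
        sse = [((c - c.mean(axis=0)) ** 2).sum() for c in cells]
        c = cells.pop(int(np.argmax(sse)))            # split the worst cell
        axis = int(np.argmax(c.var(axis=0)))          # highest-variance axis
        cut = c[:, axis].mean()                       # simplified cut location
        cells += [c[c[:, axis] <= cut], c[c[:, axis] > cut]]
    return np.array([c.mean(axis=0) for c in cells])

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(m, 0.3, size=(100, 2)) for m in (0.0, 2.0, 5.0)])
print(initial_centers(X, 3))      # seeds for K-means, one per cell
```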
885 Study of Effect of Gear Tooth Accuracy on Transmission Mount Vibration
Authors: Kalyan Deepak Kolla, Ketan Paua, Rajkumar Bhagate
Abstract:
Transmission dynamics play a major role in customer perception of the product, in both the sense of touch and the quality of sound. The quantity and quality of the perceived sound are mostly governed by the whine noise of the engaged gears. Whine noise is tonal in nature, and tonal noise causes fatigue and irritation to customers, which in turn affects the perceived quality of the product. Transmission error is the usual suspect for whine noise; it can be caused by misalignments, tolerances, and manufacturing variability. In-cabin noise is also highly sensitive to the gear design. As the details of gear tooth design and manufacturing are specified in microns, anything out of the tolerance zone, either in design or in manufacturing, will cause whine noise. It will also cause high variation in stress and deformation due to the change in load, leading to fatigue failure of the gears. Hence, gear design and development take priority in the transmission development process. This paper studies such variability by considering five pairs of helical spur gears and their effect on the transmission error, the contact pattern, and the vibration level of the transmission.
Keywords: Gears, whine noise, manufacturing variability, mount vibration variability.
884 Different Approaches for the Design of IFIR Compaction Filter
Authors: Sheeba V.S, Elizabeth Elias
Abstract:
Optimization of filter banks based on knowledge of the input statistics has been of interest for a long time. Finite impulse response (FIR) compaction filters are used in the design of optimal signal-adapted orthonormal FIR filter banks. In this paper, we discuss three different approaches for the design of interpolated finite impulse response (IFIR) compaction filters. In the first method, the magnitude-squared response satisfies the Nyquist constraint approximately. In the second and third methods, the Nyquist constraint is satisfied exactly. These methods yield FIR compaction filters whose responses are comparable with those of existing methods. At the same time, IFIR filters enjoy significant savings in the number of multipliers and can be implemented efficiently. Since the eigenfilter approach is used here, the method is less complex. The design of IFIR filters in the least-squares sense is presented.
Keywords: Principal Component Filter Bank, Interpolated Finite Impulse Response filter, Orthonormal Filter Bank, Eigenfilter.
883 An Efficient Burst Errors Combating for Image Transmission over Mobile WPANs
Authors: Mohsen A. M. El-Bendary, Mostafa A. R. El-Tokhy
Abstract:
This paper presents an efficient burst-error spreading tool and studies a vital issue in wireless communications: the transmission of images over wireless networks. IEEE 802.15.4 ZigBee is a short-range communication standard that can be used for short-distance multimedia transmissions. In fact, the ZigBee network is a Wireless Personal Area Network (WPAN), which needs a strong interleaving mechanism for protection against error bursts. It is also a low-power technology utilized in Wireless Sensor Network (WSN) implementations. This paper presents a chaotic interleaving scheme as a data randomization tool for this purpose. The scheme depends on the chaotic Baker map. The effects of mobility on image transmission are studied at different velocities using the Jakes' model. A comparison between the proposed chaotic interleaving scheme and the traditional block and convolutional interleaving schemes for image transmission over a correlated fading channel is presented. The simulation results show the superiority of the proposed chaotic interleaving scheme over the traditional schemes.
Keywords: WPANs, Burst Errors, Mobility, Interleaving Techniques, Fading channels.
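The paper's interleaver is built on the chaotic Baker map; the sketch below substitutes a simpler logistic-map-keyed permutation, plainly named as a stand-in, to show the principle: a key-driven chaotic ordering scatters a burst of channel errors after de-interleaving.

```python
import numpy as np

# Illustrative chaotic interleaver (logistic map standing in for the
# Baker map): a chaotic sequence orders the symbol positions, so a
# contiguous channel burst lands on scattered payload positions.
def chaotic_permutation(n, x0=0.37, r=3.99):
    x, xs = x0, np.empty(n)
    for i in range(n):
        x = r * x * (1 - x)        # logistic map iteration
        xs[i] = x
    return np.argsort(xs)          # chaotic ordering of positions

def interleave(symbols, perm):
    return symbols[perm]

def deinterleave(symbols, perm):
    out = np.empty_like(symbols)
    out[perm] = symbols
    return out

payload = np.arange(64)            # stand-in payload (indices for clarity)
perm = chaotic_permutation(64)
rx = interleave(payload, perm).copy()
rx[10:18] = -1                     # an 8-symbol burst error on the channel
recovered = deinterleave(rx, perm)
print("corrupted positions:", np.flatnonzero(recovered == -1))  # scattered
```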
882 Design of Parity-Preserving Reversible Logic Signed Array Multipliers
Authors: Mojtaba Valinataj
Abstract:
Reversible logic, as a new favorable design domain, can be used in various fields, especially in creating quantum computers, because of its speed and negligible power consumption. However, its susceptibility to a variety of environmental effects may lead to incorrect results. In this paper, because of the importance of the multiplication operation in various computing systems, some novel reversible logic array multipliers are proposed with error detection capability by incorporating parity-preserving gates. The new designs are presented for the two main parts of array multipliers, partial product generation and multi-operand addition, by exploiting new arrangements of existing gates, which results in two signed parity-preserving array multipliers. The experimental results reveal that the best proposed 4×4 multiplier in this paper achieves 12%, 24%, and 26% improvements in the number of constant inputs, the number of required gates, and the quantum cost, respectively, compared to the previous design. Moreover, the best proposed design is generalized to n×n multipliers, with general formulations to estimate the main reversible logic criteria as functions of the multiplier size.
Keywords: Array multipliers, Baugh-Wooley method, error detection, parity-preserving gates, quantum computers, reversible logic.
881 Role of Membership Functions in Fuzzy Logic for Prediction of Shoot Length of Mustard Plant Based on Residual Analysis
Authors: Satyendra Nath Mandal, J. Pal Choudhury, Dilip De, S. R. Bhadra Chaudhuri
Abstract:
The selection of a particular type of mustard plant for plantation depends on its productivity (pod yield) at the stage of maturity. The growth of a mustard plant depends on some parameters of the plant: shoot length, number of leaves, number of roots, root length, etc. As the plant grows, some leaves may fall and new leaves may appear, so the number of leaves cannot be used to develop a relationship with the seed weight at the mature stage of the plant. It is also not possible to count the roots or measure the root length at the growing stage without harming the plant, as the roots go ever deeper into the soil. Only the shoot length, which increases over time, can be measured at different time instants. Weather parameters such as maximum and minimum humidity, rainfall, and maximum and minimum temperature may affect the growth of the plant. Pollution, water, soil, distance, and crop management may also be dominant factors in the growth of the plant and its productivity. Considering all of these parameters, the growth of the plant is highly uncertain, so a fuzzy environment can be considered for the prediction of shoot length at maturity. Fuzzification of the data is based on certain membership functions. Here, an effort has been made to fuzzify the original data based on the Gaussian, triangular, s-, trapezoidal, and L-functions. All fuzzified data are then defuzzified to get the normal form. Finally, error analysis (calculation of the forecasting error and the average error) indicates the membership function appropriate for fuzzification of the data and for predicting the shoot length at maturity. The result is also verified using residual analysis (absolute residual, maximum of absolute residuals, mean absolute residual, mean of mean absolute residuals, median of absolute residuals, and standard deviation).
Keywords: Fuzzification, defuzzification, Gaussian function, triangular function, trapezoidal function, s-function, membership function, residual analysis.
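For reference, minimal definitions of the membership function families the abstract compares follow (the L-function is the mirror, 1 minus the S-function); the centres, widths, and shoot-length scale are placeholder values.

```python
import numpy as np

# Membership functions for fuzzifying shoot-length data; parameters
# are placeholders, not values fitted to the mustard-plant data.
def gaussian(x, c, s):
    return np.exp(-((x - c) ** 2) / (2 * s ** 2))

def triangular(x, a, b, c):
    return np.clip(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0, 1)

def trapezoidal(x, a, b, c, d):
    return np.clip(np.minimum.reduce([(x - a) / (b - a), np.ones_like(x),
                                      (d - x) / (d - c)]), 0, 1)

def s_function(x, a, b):
    # smooth S-shaped ramp from a to b; the L-function is 1 - s_function
    t = np.clip((x - a) / (b - a), 0, 1)
    return np.where(t < 0.5, 2 * t ** 2, 1 - 2 * (1 - t) ** 2)

x = np.linspace(0, 100, 5)          # shoot length, cm (illustrative scale)
for name, mu in [("gaussian", gaussian(x, 50, 15)),
                 ("triangular", triangular(x, 20, 50, 80)),
                 ("trapezoidal", trapezoidal(x, 10, 40, 60, 90)),
                 ("s-function", s_function(x, 20, 80))]:
    print(f"{name:12s}", np.round(mu, 3))
```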
880 A Comparative Study of Additive and Nonparametric Regression Estimators and Variable Selection Procedures
Authors: Adriano Z. Zambom, Preethi Ravikumar
Abstract:
One of the biggest challenges in nonparametric regression is the curse of dimensionality. Additive models are known to overcome this problem by estimating only the individual additive effect of each covariate. However, if the model is misspecified, the accuracy of the estimator relative to the fully nonparametric one is unknown. In this work, the efficiency of completely nonparametric regression estimators such as the loess is compared to that of estimators assuming additivity, in several situations including additive and non-additive regression scenarios. The comparison is done by computing the oracle mean square error of the estimators with respect to the true nonparametric regression function. Then, a backward elimination selection procedure based on the Akaike Information Criterion is proposed, computed from either the additive or the nonparametric model. Simulations show that if the additive model is misspecified, the percentage of time it fails to select important variables can be higher than that of the fully nonparametric approach. A dimension reduction step is included when the nonparametric estimator cannot be computed due to the curse of dimensionality. Finally, the Boston housing dataset is analyzed using the proposed backward elimination procedure and the selected variables are identified.
Keywords: Additive models, local polynomial regression, residuals, mean square error, variable selection.
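A compact sketch of AIC-based backward elimination, with an ordinary least-squares working model standing in for the additive or nonparametric fits the paper actually uses; the simulated data make x1 and x3 the truly relevant covariates.

```python
import numpy as np
import statsmodels.api as sm

# Greedy backward elimination: drop the covariate whose removal most
# lowers the AIC, and stop when no removal improves it.
def backward_eliminate(X, y, names):
    keep = list(range(X.shape[1]))
    best = sm.OLS(y, sm.add_constant(X[:, keep])).fit().aic
    improved = True
    while improved and len(keep) > 1:
        improved = False
        for j in list(keep):
            trial = [c for c in keep if c != j]
            aic = sm.OLS(y, sm.add_constant(X[:, trial])).fit().aic
            if aic < best:                 # dropping j lowers AIC, so drop it
                best, keep, improved = aic, trial, True
    return [names[c] for c in keep]

rng = np.random.default_rng(7)
X = rng.standard_normal((200, 5))
y = 2 * X[:, 0] - X[:, 2] + 0.5 * rng.standard_normal(200)   # x1, x3 matter
print(backward_eliminate(X, y, ["x1", "x2", "x3", "x4", "x5"]))
```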
879 On Adaptive Optimization of Filter Performance Based on Markov Representation for Output Prediction Error
Authors: Hong Son Hoang, Remy Baraille
Abstract:
This paper addresses the problem of how one can improve the performance of a non-optimal filter. First, the theoretical question of a dynamical representation for a given time-correlated random process is studied. It is demonstrated that for a wide class of random processes having a canonical form, there exists an equivalent dynamical system, in the sense that its output has the same covariance function. It is shown that the dynamical approach is more effective for simulating and estimating Markov and non-Markovian random processes and is computationally less demanding, especially as the dimension of the simulated process increases. Numerical examples and estimation problems in low-dimensional systems are given to illustrate the advantages of the approach. A very useful application of the proposed approach is shown for the problem of state estimation in very high-dimensional systems. Here, a modified filter for data assimilation in an oceanic numerical model is presented, which proves to be very efficient thanks to a simple Markovian structure introduced for the output prediction error process and adaptive tuning of some parameters of the Markov equation.
Keywords: Statistical simulation, canonical form, dynamical system, Markov and non-Markovian processes, data assimilation.
878 Comparison of Alternative Models to Predict Lean Meat Percentage of Lamb Carcasses
Authors: Vasco A. P. Cadavez, Fernando C. Monteiro
Abstract:
The objective of this study was to develop and compare alternative prediction equations for the lean meat proportion (LMP) of lamb carcasses. Forty (40) male lambs, 22 of the Churra Galega Bragançana Portuguese local breed and 18 of the Suffolk breed, were used. Lambs were slaughtered, and carcasses were weighed approximately 30 min later in order to obtain the hot carcass weight (HCW). After cooling at 4 °C for 24 h, a set of seventeen carcass measurements was recorded. The left side of each carcass was dissected into muscle, subcutaneous fat, inter-muscular fat, bone, and remainder (major blood vessels, ligaments, tendons, and thick connective tissue sheets associated with muscles), and the LMP was evaluated as the dissected muscle percentage. Prediction equations for LMP were developed, and fitting quality was evaluated through the coefficient of determination of estimation (R2e) and the standard error of estimate (SEE). Model validation was performed by k-fold cross-validation, and the coefficient of determination of prediction (R2p) and the standard error of prediction (SEP) were computed. The BT2 measurement was the best single predictor and accounted for 37.8% of the LMP variation, with a SEP of 2.30%. The prediction of the LMP of lamb carcasses can thus be based on simple models using the HCW and one fat thickness measurement as predictors.
Keywords: Bootstrap, Carcass, Lambs, Lean meat
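The validation scheme described above reduces to a few lines: regress LMP on HCW and one fat-thickness measurement, then compute R2p and SEP from k-fold cross-validated predictions. The data below are synthetic stand-ins, not the study's 40 lambs.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold

# Synthetic LMP ~ HCW + fat-thickness example with k-fold validation.
rng = np.random.default_rng(11)
n = 40
hcw = rng.uniform(8, 16, n)                  # hot carcass weight, kg
fat = rng.uniform(1, 6, n)                   # fat thickness, mm
lmp = 62 - 2.1 * fat + 0.4 * hcw + rng.normal(0, 2.0, n)   # % muscle (made up)

X = np.column_stack([hcw, fat])
pred = np.empty(n)
for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    model = LinearRegression().fit(X[train], lmp[train])
    pred[test] = model.predict(X[test])

sep = np.sqrt(np.mean((lmp - pred) ** 2))    # standard error of prediction
r2p = 1 - np.sum((lmp - pred) ** 2) / np.sum((lmp - lmp.mean()) ** 2)
print(f"R2p = {r2p:.3f}, SEP = {sep:.2f}%")
```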
877 Image Transmission: A Case Study on Combined Scheme of LDPC-STBC in Asynchronous Cooperative MIMO Systems
Authors: Shan Ding, Lijia Zhang, Hongming Xu
Abstract:
This paper presents a novel scheme that is capable of reducing the error rate and improving the transmission performance of asynchronous cooperative MIMO systems. A case study of image transmission is used to prove the efficiency of the scheme. A linear dispersion structure is employed to accommodate the cooperative wireless communication network in a dynamic topology, as well as to achieve higher throughput than conventional space-time codes based on orthogonal designs. An LDPC encoder without girth-4 cycles and an STBC encoder with guard intervals are introduced. The experimental results show that the combined LDPC-STBC coder with guard intervals provides good error correction and BER performance in asynchronous cooperative communication. In the image transmission case study, the image quality obtained with the combined scheme is much better than that obtained without it in asynchronous cooperative MIMO systems.
Keywords: Cooperative MIMO, image transmission, linear dispersion codes, Low-Density Parity-Check (LDPC).
876 Impact of Government Spending on Private Consumption and on the Economy: Case of Thailand
Authors: Paitoon Kraipornsak
Abstract:
The recent global financial problem urges governments to play a role in stimulating the economy, because the private sector has little ability to purchase during a recession. A key question is whether increased government spending crowds out private consumption and whether it helps stimulate the economy. If the government spending policy is effective, private consumption is expected to increase and can compensate for the recent extra government expense. In this study, government spending is categorized into government consumption spending and government capital spending. The study first examines consumer consumption along the lines of the demand function in microeconomic theory. Three categories of private consumption are used: food, non-food, and services consumption. A dynamic Almost Ideal Demand System over the three categories of private consumption is estimated using a Vector Error Correction Mechanism model. The estimated model indicates substitution effects (negative impacts) of government consumption spending on the budget share of private non-food consumption and of government capital spending on the budget share of private food consumption, respectively. Nevertheless, the result does not necessarily indicate that the negative effects on the budget shares of non-food and food consumption mean a fall in total private consumption. The microeconomic consumer demand analysis clearly indicates changes in the component structure of aggregate expenditure in the economy as a result of the government spending policy. The macroeconomic concept of aggregate demand, comprising consumption, investment, government spending (government consumption spending and government capital spending), exports, and imports, is then estimated with a Vector Error Correction Mechanism model. The macroeconomic study found no effect of government capital spending on either private consumption or GDP growth, while government consumption spending has a negative effect on GDP growth. Therefore, no crowding-out effect of government spending on private consumption is found, but such spending is ineffective and even inefficient, as it is found to reduce GDP growth in the context of Thailand.
Keywords: government consumption spending, government capital spending, private consumption on food, non-food, and services, Vector Error Correction Mechanism, Almost Ideal Demand System, substitution effect, complementary effect, consumer demand, aggregate demand.
875 Capacity of Overloaded DS-CDMA System on Rayleigh Fading Channel with Timing Error
Authors: Preetam Kumar
Abstract:
The number of users supported in a DS-CDMA cellular system is typically less than the spreading factor (N), and the system is said to be underloaded. Overloading is a technique to accommodate more users than the spreading factor N. In the O/O overloading scheme, the first set of sequences is assigned to N synchronous users and the second set is assigned to the additional synchronous users. An iterative multistage soft-decision interference cancellation (SDIC) receiver is used to remove the high level of interference between the two sets. Performance is evaluated in terms of the maximum number of acceptable users such that the system performance degrades only slightly compared to the single-user performance at a specified BER. In this paper, the capacity of the CDMA-based O/O overloading scheme is evaluated with the SDIC receiver. It is observed that the O/O scheme using orthogonal Gold codes provides 25% channel overloading (N = 64) for a synchronous DS-CDMA system on an AWGN channel in the uplink at a BER of 1e-5. For a Rayleigh-faded channel, the critical capacity is 40% at a BER of 5e-5, assuming synchronous users. In practical systems, however, perfect chip timing is very difficult to maintain in the uplink. We show that the overloading performance reduces to 11% for a timing synchronization error of 0.02Tc at a BER of 1e-5.
Keywords: DS-CDMA, Interference Cancellation, Multiuser Detection, Orthogonal codes, Overloading.
874 Analysis of Message Authentication in Turbo Coded Halftoned Images using Exit Charts
Authors: Andhe Dharani, P. S. Satyanarayana, Andhe Pallavi
Abstract:
Considering payload, reliability, security, and operational lifetime as major constraints in the transmission of images, we put forward in this paper a steganographic technique implemented at the physical layer. We suggest the transmission of halftoned images (payload constraint) in wireless sensor networks to reduce the amount of transmitted data. For low-power and interference-limited applications, turbo codes provide suitable reliability. Ensuring security is one of the highest priorities in many sensor networks. The turbo code structure, apart from providing forward error correction, can be utilized to provide encryption. We first consider the halftoned image, and then present the method of embedding a block of data (called the secret) in this halftoned image during the turbo encoding process. The small modifications required at the turbo decoder end to extract the embedded data are presented next. The implementation complexity and the degradation of the BER (bit error rate) in the turbo-based stego system are analyzed. Using some entropy-based cryptanalytic techniques, we show that the strength of our turbo-based stego system approaches that of one-time pads (OTPs).
Keywords: Halftoning, Turbo codes, security, operational lifetime, Turbo based stego system.
873 Validation and Selection between Machine Learning Technique and Traditional Methods to Reduce Bullwhip Effects: a Data Mining Approach
Authors: Hamid R. S. Mojaveri, Seyed S. Mousavi, Mojtaba Heydar, Ahmad Aminian
Abstract:
The aim of this paper is to present a three-step methodology to forecast supply chain demand. In the first step, various data mining techniques are applied to prepare the data for entry into the forecasting models. In the second step, the modeling step, an artificial neural network and a support vector machine are presented, after defining the Mean Absolute Percentage Error (MAPE) index for measuring error. The structure of the artificial neural network is selected based on previous researchers' results, and in this article the accuracy of the network is increased by using sensitivity analysis. The best forecast from the classical forecasting methods (Moving Average, Exponential Smoothing, and Exponential Smoothing with Trend) is obtained from the prepared data, and this forecast is compared with the results of the support vector machine and the proposed artificial neural network. The results show that the artificial neural network forecasts more precisely than the other methods. Finally, the stability of the forecasting methods is analyzed using the raw data, and the effectiveness of clustering analysis is also measured.
Keywords: Artificial Neural Networks (ANN), bullwhip effect, demand forecasting, Support Vector Machine (SVM).
872 Forecasting 24-Hour Ahead Electricity Load Using Time Series Models
Authors: Ramin Vafadary, Maryam Khanbaghi
Abstract:
Forecasting electricity load is important for various purposes such as planning, operation, and control. Forecasts can save operating and maintenance costs, increase the reliability of power supply and delivery systems, and support correct decisions for future development. This paper compares various time series methods for forecasting electricity load 24 hours ahead. The methods considered are Holt-Winters smoothing, SARIMA modeling, an LSTM network, Fbprophet, and TensorFlow Probability. The performance of each method is evaluated using the forecasting accuracy criteria, namely the Mean Absolute Error and the Root Mean Square Error. The National Renewable Energy Laboratory (NREL) residential energy consumption data are used to train the models. The results of this study show that the SARIMA model is superior to the others for 24-hour-ahead forecasts. Furthermore, a bagging technique is used to make the predictions more robust. The obtained results show that by bagging multiple time-series forecasts we can improve the robustness of the models for 24-hour-ahead electricity load forecasting.
Keywords: Bagging, Fbprophet, Holt-Winters, LSTM, Load Forecast, SARIMA, TensorFlow Probability, time series.
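A runnable sketch of the 24-hour-ahead SARIMA forecast with a simple bagging step (averaging forecasts from models fitted on random sub-windows of the series); the synthetic hourly load, model orders, and ensemble size are illustrative choices, not the NREL setup used in the paper.

```python
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Synthetic hourly load with a 24 h season, plus SARIMA bagging.
rng = np.random.default_rng(0)
t = np.arange(24 * 30)                                   # 30 days, hourly
load = 50 + 10 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 2, t.size)

def sarima_24h(y):
    model = SARIMAX(y, order=(1, 0, 1), seasonal_order=(1, 0, 1, 24))
    return model.fit(disp=False).forecast(steps=24)      # next 24 hours

# Bagging variant: average forecasts of models fit on random sub-windows.
forecasts = [sarima_24h(load[int(s):])                   # drop a random prefix
             for s in rng.integers(0, 24 * 5, size=5)]
bagged = np.mean(forecasts, axis=0)

truth = 50 + 10 * np.sin(2 * np.pi * (t[-1] + 1 + np.arange(24)) / 24)
print(f"bagged 24 h forecast MAE: {np.mean(np.abs(bagged - truth)):.2f}")
```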
871 A Finite Precision Block Floating Point Treatment to Direct Form, Cascaded and Parallel FIR Digital Filters
Authors: Abhijit Mitra
Abstract:
This paper proposes an efficient finite precision block floating point (BFP) treatment of the fixed-coefficient finite impulse response (FIR) digital filter. The treatment includes effective implementation of all three forms of the conventional FIR filter, namely direct form, cascaded and parallel, and a roundoff error analysis of them in the BFP format. An effective block formatting algorithm together with an adaptive scaling factor is proposed to make the realizations simpler from a hardware viewpoint. To this end, a generic relation between the tap weight vector length and the input block length is deduced. The implementation scheme also emphasises a simple block exponent update technique to prevent overflow, even during the block-to-block transition phase. The roundoff noise is investigated along analogous lines, taking these implementational issues into consideration. The simulation results show that the BFP roundoff errors depend on the signal level in almost the same way as floating point roundoff noise, resulting in an approximately constant signal-to-noise ratio over a relatively large dynamic range.
Keywords: Finite impulse response digital filters, Cascade structure, Parallel structure, Block floating point arithmetic, Roundoff error.
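The block formatting idea is easy to demonstrate: one shared exponent per block, chosen from the block maximum, with fixed-point mantissas underneath. The 12-bit mantissa word length below is an illustrative choice, not the paper's design point.

```python
import numpy as np

# Minimal block floating point formatting: a single block exponent
# taken from the block maximum, fixed-point mantissas underneath.
def bfp_encode(block, mant_bits=12):
    exp = int(np.ceil(np.log2(np.max(np.abs(block)) + 1e-30)))  # block exponent
    scale = 2.0 ** (mant_bits - 1)
    mant = np.round(block / 2.0 ** exp * scale).astype(int)     # fixed-point mantissas
    return mant, exp

def bfp_decode(mant, exp, mant_bits=12):
    return mant / 2.0 ** (mant_bits - 1) * 2.0 ** exp

x = np.array([0.11, -0.52, 0.37, 0.93, -0.20, 0.05])
mant, exp = bfp_encode(x)
x_hat = bfp_decode(mant, exp)
print("block exponent:", exp)
print("max roundoff error:", np.max(np.abs(x - x_hat)))   # ~ half a mantissa LSB
```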
870 Software Maintenance Severity Prediction for Object Oriented Systems
Authors: Parvinder S. Sandhu, Roma Jaswal, Sandeep Khimta, Shailendra Singh
Abstract:
As the majority of faults are found in a few modules, there is a need to investigate the modules that are severely affected compared to the others, so that proper maintenance can be done in time, especially for critical applications. Neural networks have already been applied in software engineering to build reliability growth models and to predict gross change or reusability metrics. Neural networks are sophisticated non-linear modeling techniques that are able to model complex functions. Neural network techniques are used when the exact nature of the inputs and outputs is not known; a key feature is that they learn the relationship between inputs and outputs through training. In the present work, various neural-network-based techniques are explored, and a comparative analysis is performed for predicting the level of maintenance needed, by predicting the severity of faults present in NASA's public-domain defect dataset. The comparison of the different algorithms is made on the basis of Mean Absolute Error, Root Mean Square Error, and accuracy values. It is concluded that the Generalized Regression Network is the best algorithm for classifying software components into different levels of severity of fault impact. The algorithm can be used to develop a model for identifying modules that are heavily affected by faults.
Keywords: Neural Network, Software faults, Software Metric.
869 Monitoring and Fault-Recovery Capacity with Waveguide Grating-based Optical Switch over WDM/OCDMA-PON
Authors: Yao-Tang Chang, Chuen-Ching Wang, Shu-Han Hu
Abstract:
In order to provide flexibility as well as survivability over a passive optical network (PON), a new automatic random fault-recovery mechanism with an arrayed-waveguide-grating-based (AWG-based) optical switch (OSW) is presented. Firstly, a combined wavelength-division-multiplexing and optical code-division multiple-access (WDM/OCDMA) scheme is configured to meet the varied geographical-location requirements between the optical network units (ONUs) and the optical line terminal (OLT). The AWG-based optical switch is designed as a central star-mesh topology to eliminate or reduce duplicated redundant elements such as fibers and transceivers. Hence, with a simple monitoring and routing switch algorithm, random fault-recovery capacity is achieved over the bi-directional (up/downstream) WDM/OCDMA scheme. When a distribution fiber (DF) error takes place, or the bit error rate (BER) is higher than the 10^-9 requirement, the primary/slave AWG-based OSWs are adjusted and controlled dynamically to restore the affected ONU groups via the other working DFs immediately.
Keywords: Random fault-recovery mechanism, arrayed-waveguide-grating-based optical switch (AWG-based OSW), wavelength-division-multiplexing and optical code-division multiple-access (WDM/OCDMA).
868 Discrete Polyphase Matched Filtering-based Soft Timing Estimation for Mobile Wireless Systems
Authors: Thomas O. Olwal, Michael A. van Wyk, Barend J. van Wyk
Abstract:
In this paper, we present a soft timing phase estimation (STPE) method for wireless mobile receivers operating at low signal-to-noise ratios (SNRs). Discrete Polyphase Matched (DPM) filters, a log-maximum a posteriori probability (Log-MAP) algorithm, and/or a Soft-Output Viterbi Algorithm (SOVA) are combined to derive a new timing recovery (TR) scheme. We apply this scheme to a wireless cellular communication system model that comprises a raised cosine filter (RCF) and a bit-interleaved turbo-coded multi-level modulation (BITMM) scheme; the channel is assumed to be memoryless. Furthermore, no clock signals are transmitted to the receiver, contrary to the classical data-aided (DA) models. This new model ensures that both the bandwidth and the power of the communication system are conserved. However, the computational complexity of ideal turbo synchronization is increased by 50%. Several simulation tests of bit error rate (BER) and block error rate (BLER) versus low SNR reveal that the proposed iterative soft timing recovery (ISTR) scheme outperforms the conventional schemes.
Keywords: discrete polyphase matched filters, maximum likelihood estimators, soft timing phase estimation, wireless mobile systems.
867 In Cognitive Radio the Analysis of Bit-Error-Rate (BER) by using PSO Algorithm
Authors: Shrikrishan Yadav, Akhilesh Saini, Krishna Chandra Roy
Abstract:
The electromagnetic spectrum is a natural resource, and hence well-organized usage of this limited natural resource is a necessity for better communication. The present static frequency allocation schemes cannot accommodate the demands of the rapidly increasing number of higher-data-rate services. Therefore, dynamic usage of the spectrum must be distinguished from static usage to increase the availability of the frequency spectrum. Cognitive radio is not a single piece of apparatus but a technology that can incorporate components spread across a network. It offers great promise for improving system efficiency and spectrum utilization, enabling more effective applications, reducing interference, and reducing the complexity of usage for users. A cognitive radio is aware of its environment, internal state, and location, and autonomously adjusts its operation to achieve the designed objectives: it first senses its spectral environment over a wide frequency band and then adapts its parameters to maximize spectrum efficiency with high performance. This paper focuses on the analysis of the bit error rate (BER) in cognitive radio using the Particle Swarm Optimization (PSO) algorithm. The BER is analyzed and interpreted both theoretically and practically, in terms of advantages and drawbacks, and of how it affects the efficiency and performance of the communication system.
Keywords: BER, Cognitive Radio, Environmental Parameters, PSO, Radio spectrum, Transmission Parameters.
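A generic PSO sketch of the kind the abstract refers to, minimizing a made-up smooth cost that stands in for a measured BER over two normalized transmission parameters; the swarm size, inertia, and acceleration constants are conventional illustrative values, not the paper's settings.

```python
import numpy as np

# Generic particle swarm optimization over a BER-like objective.
def ber_proxy(p):
    # hypothetical cost surface with a minimum at (0.6, 0.3)
    return 1e-2 * ((p[..., 0] - 0.6) ** 2 + (p[..., 1] - 0.3) ** 2) + 1e-5

def pso(objective, n_particles=20, dim=2, iters=100, w=0.7, c1=1.5, c2=1.5):
    rng = np.random.default_rng(5)
    x = rng.uniform(0, 1, (n_particles, dim))        # positions
    v = np.zeros_like(x)                             # velocities
    pbest, pbest_f = x.copy(), objective(x)          # personal bests
    gbest = pbest[np.argmin(pbest_f)].copy()         # global best
    for _ in range(iters):
        r1, r2 = rng.uniform(size=(2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, 0, 1)
        f = objective(x)
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest, pbest_f.min()

params, ber = pso(ber_proxy)
print("best parameters:", np.round(params, 3), " BER proxy:", ber)
```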
866 Methods for Data Selection in Medical Databases: The Binary Logistic Regression - Relations with the Calculated Risks
Authors: Cristina G. Dascalu, Elena Mihaela Carausu, Daniela Manuc
Abstract:
Medical studies often require different methods for parameter selection as a second processing step, after the database has been designed and filled with information. One common task is the selection of fields that act as risk factors, using well-known methods, in order to find the most relevant risk factors and to establish a possible hierarchy between them. Different methods are available for this purpose, one of the best known being binary logistic regression. We present the mathematical principles of this method and a practical example of using it to analyze the influence of 10 different psychiatric diagnoses on 4 different types of offences (in a database of 289 psychiatric patients involved in different types of offences). Finally, we make some observations about the relation between the risk-factor hierarchy established through binary logistic regression and the individual risks, as well as the results of the chi-squared test. We show that the hierarchy built using binary logistic regression does not agree with the direct order of risk factors, even though it would be natural to assume this hypothesis is always true.
Keywords: Databases, risk factors, binary logistic regression, hierarchy.
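A self-contained sketch of the two analyses the abstract relates: a binary logistic regression ranking risk factors by odds ratio, and per-factor chi-squared tests on 2×2 contingency tables. The 0/1 data are simulated, not the 289-patient psychiatric database.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import chi2_contingency

# Simulated binary risk factors and outcome; factor 3 is pure noise.
rng = np.random.default_rng(13)
n = 289
X = rng.integers(0, 2, size=(n, 3)).astype(float)     # three diagnoses, 0/1
logit_p = -1.0 + 1.2 * X[:, 0] + 0.4 * X[:, 1]
y = rng.uniform(size=n) < 1 / (1 + np.exp(-logit_p))  # offence indicator

# Logistic regression hierarchy: one odds ratio per factor.
fit = sm.Logit(y.astype(float), sm.add_constant(X)).fit(disp=False)
print("odds ratios:", np.round(np.exp(fit.params[1:]), 2))

# Individual chi-squared tests on 2x2 tables, for comparison.
for j in range(3):
    table = [[np.sum((X[:, j] == a) & (y == b)) for b in (0, 1)] for a in (0, 1)]
    chi2, p, _, _ = chi2_contingency(table)
    print(f"factor {j + 1}: chi2 = {chi2:.2f}, p = {p:.3f}")
```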