Search results for: syntactic errors
161 Using Analytical Hierarchy Process and TOPSIS Approaches in Designing a Finite Element Analysis Automation Program
Authors: Ming Wen, Nasim Nezamoddini
Abstract:
Sophisticated numerical simulations like finite element analysis (FEA) involve a complicated process, from model setup to post-processing tasks, that requires replication of time-consuming steps. Utilizing an FEA automation program simplifies the complexity of the involved steps while minimizing human errors in analysis setup, calculations, and results processing. One of the main challenges in designing FEA automation programs is to identify user requirements and link them to possible design alternatives. This paper presents a decision-making framework to design a Python-based FEA automation program for modal analysis, frequency response analysis, and random vibration fatigue (RVF) analysis procedures. The analytical hierarchy process (AHP) and the technique for order preference by similarity to ideal solution (TOPSIS) are applied to evaluate design alternatives considering the feedback received from experts and program users.
Keywords: FEA, random vibration fatigue, process automation, AHP, TOPSIS, multiple-criteria decision-making, MCDM.
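Illustration: the abstract above combines AHP-derived criterion weights with TOPSIS ranking. A minimal sketch of the TOPSIS step is given below, assuming hypothetical alternatives, criteria, and weights; it is not the authors' implementation.

```python
import numpy as np

def topsis(decision_matrix, weights, benefit_mask):
    """Rank alternatives by relative closeness to the ideal solution."""
    X = np.asarray(decision_matrix, dtype=float)
    w = np.asarray(weights, dtype=float)
    # Vector-normalize each criterion column, then apply the (AHP-derived) weights.
    V = w * X / np.linalg.norm(X, axis=0)
    # Ideal/anti-ideal values depend on whether a criterion is a benefit or a cost.
    ideal = np.where(benefit_mask, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit_mask, V.min(axis=0), V.max(axis=0))
    d_plus = np.linalg.norm(V - ideal, axis=1)
    d_minus = np.linalg.norm(V - anti, axis=1)
    return d_minus / (d_plus + d_minus)        # higher score = better alternative

# Three hypothetical design alternatives scored on four criteria
# (say usability, run time, development effort, robustness).
scores = topsis([[7, 120, 3, 8], [9, 200, 5, 7], [6, 90, 2, 6]],
                weights=[0.4, 0.2, 0.2, 0.2],
                benefit_mask=[True, False, False, True])
print(np.argsort(scores)[::-1])                # alternatives ranked best to worst
```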
160 Accuracy of Displacement Estimation and Selection of Capacitors for a Four Degrees of Freedom Capacitive Force Sensor
Authors: Chisato Murakami, Makoto Takahashi
Abstract:
Force sensors are required for obtaining information on the magnitude and directions of forces on the skin surface. We have developed a four-degrees-of-freedom capacitive force sensor (approximately 20×20×5 mm³) that has a flexible structure and sixteen parallel-plate capacitors. An iterative algorithm was developed for estimating the four displacements from the sixteen capacitances using a fourth-order polynomial approximation of the characteristic between capacitance and displacement. The estimates obtained from measured capacitances had large errors caused by deterioration of the characteristics. In this study, the effective capacitors carrying the most information were selected on the basis of the capacitance change range and the shape of the characteristic. The maximum errors at calibration and non-calibration points were 25% and 6.8%, respectively. Although the maximum error was larger than the desired value, the small averaged error indicated that only a few points had large errors. Moreover, the error at non-calibration points was within the desired value.
Keywords: Force sensors, capacitive sensors, estimation, iterative algorithms.
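Illustration: the abstract above estimates displacements from capacitances through a fourth-order polynomial characteristic. The sketch below inverts one such polynomial with Newton's method, using made-up calibration coefficients rather than the sensor's real data.

```python
import numpy as np

# Hypothetical fourth-order characteristic C(d) (pF) vs. displacement d (mm).
coeffs = np.array([0.002, -0.01, 0.05, -0.4, 3.0])
dcoeffs = np.polyder(coeffs)

def estimate_displacement(c_measured, d0=0.5, tol=1e-9, max_iter=50):
    """Invert the polynomial characteristic with Newton's method."""
    d = d0
    for _ in range(max_iter):
        step = (np.polyval(coeffs, d) - c_measured) / np.polyval(dcoeffs, d)
        d -= step
        if abs(step) < tol:
            break
    return d

# Recover a displacement of 0.8 mm from its (noise-free) capacitance.
print(estimate_displacement(np.polyval(coeffs, 0.8)))
```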
159 Measurement of Convective Heat Transfer from a Vertical Flat Plate Using Mach-Zehnder Interferometer with Wedge Fringe Setting
Authors: Divya Haridas, C. B. Sobhan
Abstract:
Laser interferometric methods have been utilized for the measurement of natural convection heat transfer from a heated vertical flat plate in the investigation presented here. The study mainly aims at comparing two different fringe orientations in the wedge fringe setting of the Mach-Zehnder interferometer (MZI) used for the measurements. The interference fringes are set in horizontal and vertical orientations with respect to the heated surface, and two different fringe analysis methods, namely the stepping method and the method proposed by Naylor and Duarte, are used to obtain the heat transfer coefficients. The experimental system is benchmarked against theoretical results, thus validating its reliability for heat transfer measurements. The interference fringe patterns are analyzed digitally using MATLAB 7 and MOTIC Plus software, which ensures improved efficiency in fringe analysis and hence reduces the errors associated with conventional fringe tracing. The work also discusses the relative merits and limitations of the two methods used.
Keywords: MZI, Natural Convection, Naylor Method, Vertical Flat Plate.
158 A Fully Implicit Finite-Difference Solution to One Dimensional Coupled Nonlinear Burgers’ Equations
Authors: Vineet K. Srivastava, Mukesh K. Awasthi, Mohammad Tamsir
Abstract:
A fully implicit finite-difference method is proposed for the numerical solution of one-dimensional coupled nonlinear Burgers’ equations on uniform mesh points. The method forms a system of nonlinear difference equations which is to be solved at each iteration. Newton’s iterative method has been implemented to solve this nonlinear assembled system of equations. The linear system is solved by the Gauss elimination method with partial pivoting at each iteration of Newton’s method. Three test examples are carried out to illustrate the accuracy of the method. Computed solutions obtained by the proposed scheme are compared with analytical solutions and with those already available in the literature by computing the L2 and L∞ errors.
Keywords: Burgers’ equation, Implicit Finite-difference method, Newton’s method, Gauss elimination with partial pivoting.
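Illustration: the scheme above builds a nonlinear system of difference equations solved by Newton's method with Gauss elimination. A simplified sketch for a single (uncoupled) Burgers' equation u_t + u u_x = ν u_xx, with assumed grid and viscosity values, is shown below.

```python
import numpy as np

nu, nx, dx, dt = 0.1, 41, 1.0 / 40, 0.01
x = np.linspace(0.0, 1.0, nx)
u = np.sin(np.pi * x)          # initial condition, u = 0 at both boundaries

def newton_step(u_old):
    """Advance one time level of the fully implicit scheme with Newton iterations."""
    u_new = u_old.copy()
    for _ in range(20):
        F = np.zeros(nx - 2)
        J = np.zeros((nx - 2, nx - 2))
        for k in range(1, nx - 1):          # interior nodes only (Dirichlet ends)
            i = k - 1
            ux = (u_new[k + 1] - u_new[k - 1]) / (2 * dx)
            uxx = (u_new[k + 1] - 2 * u_new[k] + u_new[k - 1]) / dx**2
            F[i] = (u_new[k] - u_old[k]) / dt + u_new[k] * ux - nu * uxx
            J[i, i] = 1 / dt + ux + 2 * nu / dx**2
            if i > 0:
                J[i, i - 1] = -u_new[k] / (2 * dx) - nu / dx**2
            if i < nx - 3:
                J[i, i + 1] = u_new[k] / (2 * dx) - nu / dx**2
        delta = np.linalg.solve(J, -F)      # dense solve: Gauss elimination, partial pivoting
        u_new[1:-1] += delta
        if np.linalg.norm(delta, np.inf) < 1e-10:
            break
    return u_new

for _ in range(10):
    u = newton_step(u)
print(u.max())
```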
157 Animated Versus Static User Interfaces: A Study of Mathsigner™
Authors: Scott Dyer, Nicoletta Adamo-Villani
Abstract:
In this paper we report a study aimed at determining the effects of animation on the usability and appeal of educational software user interfaces. Specifically, the study compares three interfaces developed for the Mathsigner™ program: a static interface, an interface with highlighting/sound feedback, and an interface that incorporates five Disney animation principles. The main objectives of the comparative study were to: (1) determine which interface is the most effective for the target users of Mathsigner™ (children ages 5-11), and (2) identify any gender and age differences in using the three interfaces. To accomplish these goals we designed an experiment consisting of a cognitive walkthrough and a survey with rating questions. Sixteen children ages 7-11 participated in the study, ten males and six females. Results showed no significant interface effect on user task performance (task completion time and number of errors); however, interface differences were seen in the rating of appeal, with the animated interface rated more 'likeable' than the other two. Task performance and rating of appeal were not affected significantly by the gender or age of the subjects.
Keywords: Animation, Animated interfaces, Educational Software, Human Computer Interaction, Multimedia.
156 Small Sample Bootstrap Confidence Intervals for Long-Memory Parameter
Authors: Josu Arteche, Jesus Orbe
Abstract:
The log periodogram regression is widely used in empirical applications because of its simplicity (only a least squares regression is required to estimate the memory parameter d), its good asymptotic properties, and its robustness to misspecification of the short-term behavior of the series. However, the asymptotic distribution is a poor approximation of the (unknown) finite sample distribution if the sample size is small. Here the finite sample performance of different nonparametric residual bootstrap procedures is analyzed when applied to construct confidence intervals. In particular, in addition to the basic residual bootstrap, the local and block bootstraps, which may adequately replicate the structure that can arise in the errors of the regression when the series shows weak dependence in addition to the long memory component, are considered. A bias-correcting bootstrap to adjust the bias caused by that structure is also considered. Finally, the performance of the bootstrap in log periodogram regression based confidence intervals is assessed for different types of models, along with how its performance changes as the sample size increases.
Keywords: bootstrap, confidence interval, log periodogram regression, long memory.
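Illustration: the log periodogram (GPH) regression mentioned above estimates the memory parameter d as minus the slope of a least squares fit. A minimal sketch, assuming an illustrative bandwidth of m = √n, follows.

```python
import numpy as np

def gph_estimate(series, m=None):
    """Log-periodogram (GPH) estimate of the memory parameter d."""
    x = np.asarray(series, float) - np.mean(series)
    n = len(x)
    m = int(np.sqrt(n)) if m is None else m
    j = np.arange(1, m + 1)
    lam = 2 * np.pi * j / n                          # Fourier frequencies
    periodogram = np.abs(np.fft.fft(x)[1:m + 1]) ** 2 / (2 * np.pi * n)
    regressor = np.log(4 * np.sin(lam / 2) ** 2)
    slope, _ = np.polyfit(regressor, np.log(periodogram), 1)
    return -slope                                    # d is minus the slope

rng = np.random.default_rng(0)
print(gph_estimate(rng.standard_normal(512)))        # white noise: d should be near 0
```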
155 Dynamic Self-Scheduling of Pumped-Storage Power Plant in Energy and Ancillary Service Markets Using Sliding Window Technique
Authors: P. Kanakasabapathy, Radhika S.
Abstract:
In the competitive electricity market environment, the profit of a pumped-storage plant in the energy market can be maximized by operating it as a generator when the market clearing price is high, and as a pump, moving water from the lower reservoir to the upper reservoir, when the price is low. An optimal self-scheduling plan has been developed for a pumped-storage plant, carried out on a weekly basis in order to maximize the profit of the plant, taking into account all the major uncertainties such as sudden ancillary service delivery requests and price forecasting errors. For a pumped-storage power plant to operate in a real-time market, successive self-scheduling has to be done by considering the forecast of the day-ahead market and the reservoir storage as modified by the ancillary service requests of the previous day. The sliding window technique has been used for successive self-scheduling to ensure profit for the plant.
Keywords: Ancillary services, BPSO, Power System Economics (Electricity markets), Self-Scheduling, Sliding Window Technique.
154 Visual Attention Analysis on Mutated Brand Name using Eye-Tracking: A Case Study
Authors: Anirban Chowdhury, Sougata Karmakar, Swathi Matta Reddy, Sanjog J., Subrata Ghosh, Debkumar Chakrabarti
Abstract:
Brand name plays a vital role in the in-shop buying behavior of consumers, and a mutated brand name may affect the sales of leading branded products. In the Indian market, there are many products with mutated brand names which are either orthographically or phonologically similar. Due to the presence of such products, Indian consumers are often confused when buying regularly used items. The authors of the present paper attempt to demonstrate the relationship between reduced attention and false recognition of mutated brand names during a product selection process. To achieve this goal, a visual attention study was conducted on 15 male college students using an eye-tracker against a mutated brand name, and recognition errors were noted using a questionnaire. Statistical analysis of the acquired data revealed that there was more false recognition of the mutated brand name when less attention was paid during selection of the favorite product. Moreover, it was found that eye tracking is an effective tool for analyzing false recognition of brand name mutation.
Keywords: Brand Name Mutation, Consumer Behavior, Visual Attention, Orthography.
153 A Comparison of Bias Among Relaxed Divisor Methods Using 3 Bias Measurements
Authors: Sumachaya Harnsukworapanich, Tetsuo Ichimori
Abstract:
The apportionment method is used by many countries to calculate the distribution of seats in political bodies. For example, this method is used in the United States (U.S.) to distribute House seats proportionally based on the population of the electoral district. Famous apportionment methods include the divisor methods known as the Adams, Dean, Hill, Jefferson and Webster methods. Sometimes the results from the implementation of these divisor methods are unfair and include errors. Therefore, it is important to examine the optimization of these methods by using a bias measurement to obtain precise and fair results. In this research we investigate the bias of divisor methods in the U.S. House of Representatives toward large and small states by applying the Stolarsky mean method. We compare the bias of the apportionment methods by using two well-known bias measurements: the Balinski and Young measurement and the Ernst measurement. Both measurements have a formula for large and small states. The third measurement, created by the researchers, however, does not factor large and small states into the formula. All three measurements are compared, and the results show that our measurement produces results similar to the other two well-known measurements.
Keywords: Apportionment, Bias, Divisor, Fair, Simulation
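Illustration: the divisor methods named in the abstract differ only in their rounding (signpost) sequences. The sketch below implements them as a highest-averages allocation with hypothetical state populations; it is not the authors' bias-measurement code.

```python
import heapq, math

SIGNPOSTS = {
    "adams":     lambda k: k,                          # round up
    "dean":      lambda k: k * (k + 1) / (k + 0.5),    # harmonic mean
    "hill":      lambda k: math.sqrt(k * (k + 1)),     # geometric mean
    "webster":   lambda k: k + 0.5,                    # arithmetic mean
    "jefferson": lambda k: k + 1,                      # round down
}

def apportion(populations, seats, method="webster"):
    d = SIGNPOSTS[method]
    alloc = [0] * len(populations)
    # Priority of giving a state its (k+1)-th seat is population / d(k);
    # a zero signpost (Adams, Dean, Hill) guarantees every state one seat first.
    heap = [(-p / max(d(0), 1e-12), i) for i, p in enumerate(populations)]
    heapq.heapify(heap)
    for _ in range(seats):
        _, i = heapq.heappop(heap)
        alloc[i] += 1
        heapq.heappush(heap, (-populations[i] / d(alloc[i]), i))
    return alloc

# Hypothetical populations of four states sharing 10 seats.
print(apportion([2700, 1500, 900, 400], 10, "webster"))
```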
152 Some Issues on Integrating Telepresence Technology into Industrial Robotic Assembly
Authors: Gunther Reinhart, Marwan Radi
Abstract:
Since the 1940s, many promising telepresence research results have been obtained. However, telepresence technology still has not reached industrial usage. Although human intelligence is necessary for the successful execution of most manual assembly tasks, the human's ability is hindered in some cases, such as the assembly of heavy parts in small/medium lots or prototypes. In such cases of manual assembly, the help of industrial robots is mandatory. Telepresence technology can therefore be considered a solution for performing assembly tasks where human intelligence and the haptic sense are needed to identify and minimize errors during an assembly process, while a robot is needed to carry heavy parts. In this paper, preliminary steps to integrate telepresence technology into industrial robot systems are introduced. The system described here combines both the human haptic sense and the industrial robot's capability to perform a manual assembly task remotely using a force feedback joystick. The mapping between the joystick's degrees of freedom (DOF) and the robot's is introduced. Simulation and experimental results are shown and future work is discussed.
Keywords: Assembly, Force Feedback, Industrial Robot, Teleassembly, Telepresence.
151 An Image Segmentation Algorithm for Gradient Target Based on Mean-Shift and Dictionary Learning
Authors: Yanwen Li, Shuguo Xie
Abstract:
In electromagnetic imaging, because the system is diffraction limited, pixel values can change slowly near the edges of image targets, and they also change with location within the same target. Using traditional digital image segmentation methods to segment electromagnetic gradient images can therefore result in many errors. To address this issue, this paper proposes a novel image segmentation and extraction algorithm based on Mean-Shift and dictionary learning. First, the preliminary segmentation results from the adaptive-bandwidth Mean-Shift algorithm are expanded, merged and extracted. Then the overlap rate of the extracted image blocks is checked before determining a segmentation region with a single complete target. Finally, the gradient edge of the extracted targets is recovered and reconstructed using a dictionary-learning algorithm, yielding final segmentation results that are very close to the gradient target in the original image. Both the experimental and the simulated results show that the segmentation results are very accurate. The Dice coefficients are improved by 70% to 80% compared with the Mean-Shift-only method.
Keywords: Gradient image, segmentation and extract, mean-shift algorithm, dictionary learning.
150 Enhance Security in XML Databases: XLog File for Severity-Aware Trust-Based Access Control
Authors: Asmawi A., Affendey L. S., Udzir N. I., Mahmod R.
Abstract:
The topic of enhancing security in XML databases is important, as it includes protecting sensitive data and providing a secure environment for users. In order to improve security and provide dynamic access control for XML databases, we present the XLog file, which calculates user trust values by recording users’ bad transactions, errors and query severities. Severity-aware trust-based access control for XML databases manages the access policy depending on users' trust values and prevents unauthorized processes, malicious transactions and insider threats. Privileges are automatically modified and adjusted over time depending on user behaviour and query severity. Logging in databases is an important process used for recovery and security purposes. In this paper, the XLog file is presented as a dynamic and temporary log file for XML databases to enhance the level of security.
Keywords: XML database, trust-based access control, severity-aware, trust values, log file.
149 Study of EEGs from Somatosensory Cortex and Alzheimer's Disease Sources
Authors: Md R. Bashar, Yan Li, Peng Wen
Abstract:
This study investigates the differences between electroencephalograms (EEGs) generated from a normal source and from Alzheimer's disease (AD) sources. We also investigate the effects of brain tissue distortions due to AD on the EEG. We develop a realistic head model from T1-weighted magnetic resonance imaging (MRI) using the finite element method (FEM) for a normal source (somatosensory cortex (SC) in the parietal lobe) and AD sources (right amygdala (RA) and left amygdala (LA) in the medial temporal lobe). We then compare the AD-sourced EEGs to the SC-sourced EEG to study the nature of the potential changes due to the sources and to 5% to 20% brain tissue distortions. We find an average magnification error of 0.15 produced by the AD-sourced EEGs. Different brain tissue distortion models also generate a maximum magnification of 0.07. EEGs obtained from AD sources and different brain tissue distortion levels shift the scalp potentials away from those of the normal source, and the electrodes residing in the parietal and temporal lobes are more sensitive than other electrodes for AD-sourced EEG.
Keywords: Alzheimer's disease (AD), brain tissue distortion, electroencephalogram, finite element method.
148 Performance Evaluation of Data Mining Techniques for Predicting Software Reliability
Authors: Pradeep Kumar, Abdul Wahid
Abstract:
Accurate software reliability prediction not only enables developers to improve the quality of software but also provides useful information that helps them plan valuable resources. This paper examines the performance of three well-known data mining techniques (CART, TreeNet and Random Forest) for predicting software reliability. We evaluate and compare the performance of the proposed models with a Cascade Correlation Neural Network (CCNN) using sixteen empirical databases from the Data and Analysis Center for Software. The goal of our study is to help project managers concentrate their testing efforts to minimize software failures in order to improve the reliability of software systems. Two performance measures, Normalized Root Mean Squared Error (NRMSE) and Mean Absolute Error (MAE), show that the CART model is more accurate than the models built using Random Forest, TreeNet and CCNN on all datasets used in our study. Finally, we conclude that such methods can help in reliability prediction using real-life failure datasets.
Keywords: Classification, Cascade Correlation Neural Network, Random Forest, Software reliability, TreeNet.
147 DODR: Delay On-Demand Routing
Authors: Dong Wan-li, Gu Nai-jie, Tu Kun, Bi Kun, Liu Gang
Abstract:
As originally designed for wired networks, the TCP (transmission control protocol) congestion control mechanism is triggered into action when packet loss is detected. This implicit assumption that packet loss is mostly due to network congestion does not work well in a Mobile Ad Hoc Network (MANET), where there is a comparatively high likelihood of packet loss due to channel errors, node mobility, etc. Such non-congestion packet loss, when dealt with by the congestion control mechanism, causes poor TCP performance in MANETs. In this study, we continue to investigate the impact of the interaction between transport protocols and on-demand routing protocols on the performance and stability of 802.11 multihop networks. We evaluate the important wireless networking events that cause routing changes and propose a cross-layer method to delay unnecessary routing changes; it only requires adding a sensitivity parameter α, which represents the on-demand routing protocol's reaction to link failures at the MAC layer. Our proposal is applicable to the plain 802.11 networking environment, and the simulation results show that this method can remarkably improve the stability and performance of TCP without any modification to the TCP and MAC protocols.
Keywords: Mobile ad hoc networks (MANET), on-demand routing, performance, transmission control protocol (TCP).
146 Periodic Control of a Wastewater Treatment Process to Improve Productivity
Authors: Muhammad Rizwan Azhar, Emadadeen Ali
Abstract:
In this paper, periodic forced operation of a wastewater treatment process is studied with the aim of improving process performance. A previously developed dynamic model of the process is used to conduct the performance analysis. The static version of the model is utilized first to determine the optimal productivity conditions for the process. Then, the feed flow rate, expressed as the dilution rate (D), is transformed into a sinusoidal function. A nonlinear model predictive control algorithm is utilized to regulate the amplitude and period of the sinusoidal function. The parameters of the cyclic feed function are determined, resulting in improved productivity over the optimal productivity under steady-state conditions. The improvement in productivity is found to be marginal, while substrate conversion is satisfactory compared to the optimal condition and to the steady-state condition, which corresponds to the average value of the periodic function. Successful results were also obtained in the presence of modeling errors and external disturbances.
Keywords: Dilution rate, nonlinear model predictive control, sinusoidal function, wastewater treatment.
145 Performance Comparison of Particle Swarm Optimization with Traditional Clustering Algorithms used in Self-Organizing Map
Authors: Anurag Sharma, Christian W. Omlin
Abstract:
The self-organizing map (SOM) is a well known data reduction technique used in data mining. It can reveal structure in data sets through data visualization that is otherwise hard to detect from the raw data alone. However, interpretation through visual inspection is prone to errors and can be very tedious. There are several techniques for the automatic detection of clusters in the code vectors found by SOM, but they generally do not take into account the distribution of the code vectors; this may lead to unsatisfactory clustering and poor definition of cluster boundaries, particularly where the density of data points is low. In this paper, we propose the use of an adaptive heuristic particle swarm optimization (PSO) algorithm for finding cluster boundaries directly from the code vectors obtained from SOM. The application of our method to several standard data sets demonstrates its feasibility. The PSO algorithm utilizes the so-called U-matrix of SOM to determine cluster boundaries; the results of this novel automatic method compare very favorably to boundary detection through traditional algorithms, namely k-means and a hierarchical approach, which are normally used to interpret the output of SOM.
Keywords: cluster boundaries, clustering, code vectors, data mining, particle swarm optimization, self-organizing maps, U-matrix.
144 Using Linear Quadratic Gaussian Optimal Control for Lateral Motion of Aircraft
Authors: A. Maddi, A. Guessoum, D. Berkani
Abstract:
The purpose of this paper is to provide a practical example of the Linear Quadratic Gaussian (LQG) controller. This includes a description and some discussion of the discrete Kalman state estimator. One aspect of its optimality is that the estimator incorporates all information that can be provided to it. It processes all available measurements, regardless of their precision, to estimate the current values of the variables of interest, using knowledge of the system and measurement device dynamics, the statistical description of the system noises, measurement errors, and uncertainty in the dynamics models. Since the time of its introduction, the Kalman filter has been the subject of extensive research and application, particularly in the area of autonomous or assisted navigation. For example, to determine the velocity of an aircraft or its sideslip angle, one could use a Doppler radar, the velocity indications of an inertial navigation system, or the relative wind information in the air data system. Rather than ignore any of these outputs, a Kalman filter can be built to combine all of this data and knowledge of the various systems' dynamics to generate an overall best estimate of velocity and sideslip angle.
Keywords: Aircraft motion, Kalman filter, LQG control, Lateral stability, State estimator.
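Illustration: the sensor-fusion role of the Kalman filter described above can be shown with a scalar filter that sequentially fuses two velocity sensors of different accuracy; the noise values below are assumed, not the paper's aircraft model.

```python
import numpy as np

def kalman_fuse(z1, z2, r1=4.0, r2=1.0, q=0.01):
    """Estimate a slowly varying velocity from two measurement streams."""
    x, p = z1[0], 10.0                     # initial state estimate and variance
    estimates = []
    for m1, m2 in zip(z1, z2):
        p += q                             # predict: random-walk process noise
        for z, r in ((m1, r1), (m2, r2)):  # sequential measurement updates
            k = p / (p + r)                # Kalman gain
            x += k * (z - x)
            p *= (1 - k)
        estimates.append(x)
    return np.array(estimates)

rng = np.random.default_rng(1)
truth = 50.0 + 0.1 * np.arange(200)        # true velocity, m/s
radar = truth + rng.normal(0, 2.0, 200)    # noisier sensor
ins = truth + rng.normal(0, 1.0, 200)      # more accurate sensor
print(np.abs(kalman_fuse(radar, ins) - truth).mean())
```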
143 Rapid Frequency Response Measurement of Power Conversion Products with Coherence-Based Confidence Analysis
Authors: Tomi Roinila, Aki Taskinen, Matti Vilkko
Abstract:
Switched-mode converters now play a significant role in modern society. Their operation is often crucial in various electrical applications affecting everyday life. Therefore, the quality of the converters needs to be reliably verified. Recent studies have shown that converters can be fully characterized by a set of frequency responses, which can be efficiently used to validate their proper operation. Consequently, several methods have been proposed to measure the frequency responses quickly and accurately; most often, correlation-based techniques have been applied. The presented measurement methods are highly sensitive to external errors and system nonlinearities. This fact has often been forgotten, and the necessary uncertainty analysis of the measured responses has been neglected. This paper presents a simple approach to analyzing the noise and nonlinearities in frequency-response measurements of switched-mode converters. Coherence analysis is applied to form a confidence interval characterizing the noise and nonlinearities involved in the measurements. The presented method is verified by practical measurements from a high-frequency switched-mode converter.
Keywords: Switched-mode converters, Frequency analysis, Coherence Analysis.
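Illustration: a minimal sketch of coherence-based confidence analysis, assuming a synthetic excitation/response pair in place of real converter measurements; frequencies with magnitude-squared coherence near one are treated as reliable.

```python
import numpy as np
from scipy import signal

fs = 10_000.0                                   # Hz, illustrative sampling rate
t = np.arange(0, 2.0, 1 / fs)
rng = np.random.default_rng(0)
excitation = rng.standard_normal(t.size)        # broadband perturbation signal
# Hypothetical converter response: low-pass filtered excitation plus noise.
b, a = signal.butter(2, 0.1)
response = signal.lfilter(b, a, excitation) + 0.2 * rng.standard_normal(t.size)

f, coh = signal.coherence(excitation, response, fs=fs, nperseg=1024)
reliable = coh > 0.9                            # crude per-frequency confidence mask
print(int(reliable.sum()), "of", coh.size, "frequency bins deemed reliable")
```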
142 Fall Avoidance Control of Wheeled Inverted Pendulum Type Robotic Wheelchair While Climbing Stairs
Authors: Nan Ding, Motoki Shino, Nobuyasu Tomokuni, Genki Murata
Abstract:
The wheelchair is the major means of transport for physically disabled people. However, it cannot overcome architectural barriers such as curbs and stairs. In this paper, the authors propose a method to keep a wheeled inverted pendulum type robotic wheelchair from falling while climbing stairs. The problem with this system is that the feedback gain of the wheels cannot be set high due to modeling errors and gear backlash, which results in unwanted movement of the wheels. As a consequence, the wheels slide down the stairs or collide with the sides of the stairs, and the wheelchair finally falls down. To avoid falling, the authors propose a slider control strategy based on the skyhook model to decrease the movement of the wheels, and a rotary link control strategy based on the staircase dimensions to avoid collisions or sliding down. The effectiveness of the proposed fall avoidance control strategy was validated by ODE simulations and by a prototype wheelchair.
Keywords: EPW, fall avoidance control, skyhook, wheeled inverted pendulum.
141 An Approach for Coagulant Dosage Optimization Using Soft Jar Test: A Case Study of Bangkhen Water Treatment Plant
Authors: Ninlawat Phuangchoke, Waraporn Viyanon, Setta Sasananan
Abstract:
The most important step in the water treatment plant process is coagulation, which uses alum and poly aluminum chloride (PACL); therefore, determining the dosages of alum and PACL is essential. This research applies an artificial neural network (ANN), trained with the Levenberg–Marquardt algorithm, to create a mathematical model (soft jar test) for predicting the chemical doses used for coagulation, such as alum and PACL, with input data consisting of turbidity, pH, alkalinity, conductivity, and oxygen consumption (OC) of the Bangkhen Water Treatment Plant (BKWTP), under the authority of the Metropolitan Waterworks Authority of Thailand. The data were collected from 1 January 2019 to 31 December 2019 in order to cover the changing seasons of Thailand. The input data of the ANN are divided into three groups: a training set, a test set, and a validation set. The coefficient of determination and the mean absolute error are 0.73 and 3.18 for the alum model and 0.59 and 3.21 for the PACL model, respectively.
Keywords: Soft jar test, jar test, water treatment plant process, artificial neural network.
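Illustration: a hedged sketch of the soft jar test idea, fitting a neural network that maps raw-water quality to coagulant dose. scikit-learn's MLPRegressor does not offer the Levenberg–Marquardt solver used in the paper, so its default solver stands in, and the data below are synthetic.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score, mean_absolute_error

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.uniform(5, 300, n),    # turbidity (NTU)
    rng.uniform(6.5, 8.5, n),  # pH
    rng.uniform(50, 200, n),   # alkalinity (mg/L)
    rng.uniform(100, 500, n),  # conductivity (uS/cm)
    rng.uniform(1, 10, n),     # oxygen consumption (mg/L)
])
# Toy target: alum dose loosely driven by turbidity and pH.
alum_dose = 5 + 0.1 * X[:, 0] + 2 * (8 - X[:, 1]) + rng.normal(0, 2, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, alum_dose, test_size=0.25, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0).fit(X_tr, y_tr)
pred = model.predict(X_te)
print(r2_score(y_te, pred), mean_absolute_error(y_te, pred))
```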
140 Forecast Based on an Empirical Probability Function with an Adjusted Error Using Propagation of Error
Authors: Oscar Javier Herrera, Manuel Ángel Camacho
Abstract:
This paper addresses a cutting-edge method of business demand forecasting based on an empirical probability function for cases where the historical behavior of the data is random. Additionally, it presents error determination based on the numerical technique of propagation of errors. The methodology began with a characterization and diagnosis of the demand planning process as part of production management; new ways to predict demand through probability techniques and to calculate the associated error using numerical methods were then investigated, all based on the behavior of the data. The analysis considered the specific business circumstances of a company in the communications sector located in the city of Bogotá, Colombia. In conclusion, using this application it was possible to obtain the adequate stock of the products required by the company to provide its services, helping the company reduce its service time, increase the client satisfaction rate, reduce stock which has not been in rotation for a long time, code its inventory, and plan reorder points for the replenishment of stock.
Keywords: Demand Forecasting, Empirical Distribution, Propagation of Error.
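Illustration: the propagation-of-errors rule used above attaches an uncertainty to a forecast computed from several uncertain inputs, via sigma_f² = Σ (∂f/∂x_i)² sigma_i² for independent inputs. A small numerical sketch with a hypothetical forecast function follows.

```python
import numpy as np

def propagate_error(f, x, sigmas, h=1e-6):
    """Numerically propagate independent input uncertainties through f."""
    x = np.asarray(x, float)
    grads = []
    for i in range(len(x)):
        step = np.zeros_like(x)
        step[i] = h
        grads.append((f(x + step) - f(x - step)) / (2 * h))  # central difference
    return np.sqrt(np.sum((np.array(grads) * np.asarray(sigmas)) ** 2))

# Hypothetical forecast: demand = base * seasonal_factor + promotions_uplift.
forecast = lambda v: v[0] * v[1] + v[2]
print(propagate_error(forecast, x=[1000.0, 1.2, 150.0], sigmas=[50.0, 0.05, 20.0]))
```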
139 Inadequate Requirements Engineering Process: A Key Factor for Poor Software Development in Developing Nations: A Case Study
Authors: K. Adu Michael, K. Alese Boniface
Abstract:
Developing reliable and sustainable software products is today a big challenge among up-and-coming software developers in Nigeria. The ability to develop the comprehensive problem statement needed to execute a proper requirements engineering process is often missing. Describing the 'what' of a system in one document, written in a natural language, is a major step in the overall process of software engineering. Requirements engineering is a process used to discover, analyze and validate system requirements, and it is needed to reduce software errors at the early stages of software development. The importance of each of the steps in requirements engineering is clearly explained in the context of using a detailed problem statement from the client/customer to get an overview of an existing system along with expectations for the new system. This paper identifies an inadequate requirements engineering process as the major cause of poor software development in developing nations, using a case study of final-year computer science students of a tertiary-education institution in Nigeria.
Keywords: Client/Customer, Problem Statement, Requirements Engineering, Software Developers.
138 Performance Analysis of MIMO-OFDM Using Convolution Codes with QAM Modulation
Authors: I Gede Puja Astawa, Yoedy Moegiharto, Ahmad Zainudin, Imam Dui Agus Salim, Nur Annisa Anggraeni
Abstract:
The performance of an Orthogonal Frequency Division Multiplexing (OFDM) system can be improved by adding channel coding (an error correction code) to detect and correct errors that occur during data transmission; one option is the convolution code. This paper presents the performance of OFDM using the Space Time Block Code (STBC) diversity technique with QAM modulation and a code rate of ½. The evaluation is done by analyzing the Bit Error Rate (BER) versus the Energy per Bit to Noise Power Spectral Density Ratio (Eb/No). The scheme uses 256 subcarriers transmitted over a Rayleigh multipath channel in the OFDM system. To achieve a BER of 10⁻³, an SNR of 10 dB is required in the SISO-OFDM scheme. The 2x2 MIMO-OFDM scheme requires 10 dB to achieve a BER of 10⁻³, and the 4x4 MIMO-OFDM scheme requires 5 dB, while adding convolution coding to the 4x4 MIMO-OFDM improves performance down to 0 dB for the same BER. This demonstrates a power saving of 3 dB relative to the 4x4 MIMO-OFDM system without coding, a power saving of 7 dB relative to 2x2 MIMO-OFDM, and significant power savings relative to the SISO-OFDM system.
Keywords: Convolution code, OFDM, MIMO, QAM, BER.
137 Array Signal Processing: DOA Estimation for Missing Sensors
Authors: Lalita Gupta, R. P. Singh
Abstract:
Array signal processing involves signal enumeration and source localization. It is centered on the ability to fuse temporal and spatial information, captured by sampling signals emitted from a number of sources at the sensors of an array, in order to carry out a specific estimation task: estimating source characteristics (mainly the localization of the sources) and/or array characteristics (mainly the array geometry). Array signal processing is a part of signal processing that uses sensors organized in patterns, or arrays, to detect signals and to determine information about them. Beamforming is a general signal processing technique used to control the directionality of the reception or transmission of a signal; using beamforming, we can direct the majority of the signal energy received by a group of array elements. Multiple signal classification (MUSIC) is a highly popular eigenstructure-based method for estimating the direction of arrival (DOA) with high resolution. This paper examines the effect of missing sensors on DOA estimation. The accuracy of MUSIC-based DOA estimation is degraded significantly both by missing sensors among the receiving array elements and by the unequal channel gain and phase errors of the receiver.
Keywords: Array Signal Processing, Beamforming, ULA, Direction of Arrival, MUSIC
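Illustration: a compact sketch of MUSIC DOA estimation for a uniform linear array with simulated snapshots; the degradation from missing sensors studied in the paper is not reproduced here.

```python
import numpy as np

def steering(angles_deg, n_sensors, spacing=0.5):
    """ULA steering vectors; element spacing is in wavelengths."""
    k = np.arange(n_sensors)[:, None]
    return np.exp(-2j * np.pi * spacing * k * np.sin(np.radians(angles_deg)))

def music_spectrum(snapshots, n_sources, scan=np.arange(-90, 90.5, 0.5)):
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]   # sample covariance
    eigval, eigvec = np.linalg.eigh(R)
    En = eigvec[:, :-n_sources]                               # noise subspace
    A = steering(scan, snapshots.shape[0])
    p = 1.0 / np.sum(np.abs(En.conj().T @ A) ** 2, axis=0)    # MUSIC pseudo-spectrum
    return scan, p

rng = np.random.default_rng(0)
true_doas, n_sensors, n_snap = [-20.0, 30.0], 8, 500
A = steering(np.array(true_doas), n_sensors)
signals = rng.standard_normal((2, n_snap)) + 1j * rng.standard_normal((2, n_snap))
noise = 0.1 * (rng.standard_normal((n_sensors, n_snap))
               + 1j * rng.standard_normal((n_sensors, n_snap)))
x = A @ signals + noise
scan, p = music_spectrum(x, n_sources=2)
print(scan[np.argsort(p)[-2:]])   # the two largest peaks should lie near -20 and 30
```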
136 Microscopic Emission and Fuel Consumption Modeling for Light-duty Vehicles Using Portable Emission Measurement System Data
Authors: Wei Lei, Hui Chen, Lin Lu
Abstract:
Microscopic emission and fuel consumption models have been widely recognized as an effective way to quantify real traffic emissions and energy consumption when they are applied together with microscopic traffic simulation models. This paper presents a framework for developing microscopic emission (HC, CO, NOx, and CO2) and fuel consumption (MEF) models for light-duty vehicles. The variable of composite acceleration is introduced into the MEF model to capture the effects of historical accelerations, interacting with the current speed, on emissions and fuel consumption. The MEF model is calibrated by the multivariate least-squares method for two types of light-duty vehicle using on-board data collected in Beijing, China by a Portable Emission Measurement System (PEMS). The instantaneous validation results show that the MEF model performs better, with a lower Mean Absolute Percentage Error (MAPE), compared to two other models. Moreover, the aggregate validation results show that the MEF model produces reasonable estimates compared to actual measurements, with prediction errors within 12%, 10%, 19%, and 9% for HC, CO, NOx emissions and fuel consumption, respectively.
Keywords: Emission, Fuel consumption, Light-duty vehicle, Microscopic, Modeling.
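Illustration: a sketch of calibrating such a microscopic model by multivariate least squares, regressing an instantaneous fuel rate on speed, acceleration, and a history-weighted composite acceleration; the data and coefficients are synthetic, not PEMS measurements.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
speed = rng.uniform(0, 20, n)                                  # m/s
accel = rng.normal(0, 1, n)                                    # m/s^2
# "Composite" acceleration: exponentially weighted history of acceleration.
composite = np.convolve(accel, 0.3 * 0.7 ** np.arange(10), mode="same")
fuel = (0.4 + 0.05 * speed + 0.3 * np.maximum(accel, 0) * speed
        + 0.1 * composite * speed + rng.normal(0, 0.05, n))    # toy ground truth

# Multivariate least-squares calibration of the model coefficients.
X = np.column_stack([np.ones(n), speed, np.maximum(accel, 0) * speed, composite * speed])
beta, *_ = np.linalg.lstsq(X, fuel, rcond=None)
rmse = np.sqrt(np.mean((X @ beta - fuel) ** 2))
print(beta, rmse)
```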
135 MIMO Antenna Selections using CSI from Reciprocal Channel
Authors: P. Uthansakul, K. Attakitmongkol, N. Promsuvana, M. Uthansakul
Abstract:
It is well known that the channel capacity of a Multiple-Input Multiple-Output (MIMO) system increases as the number of antenna pairs between the transmitter and receiver increases, but such a system requires multiple expensive RF chains. To reduce the cost of RF chains, the Antenna Selection (AS) method can offer a good trade-off between expense and performance. In a transmit AS system, Channel State Information (CSI) feedback is required to choose the best subset of antennas, and the delays and errors occurring in the feedback channel are the most dominant factors degrading the performance of the AS method. This paper presents the concept of an AS method using CSI obtained from channel reciprocity instead of feedback. The reciprocity technique can easily obtain CSI by utilizing the reverse channel, where the forward and reverse channels are symmetrically considered in time, frequency and location. In this work, the capacity performance of a MIMO system using the AS method at the transmitter with reciprocal channels is investigated using our own developed testbed. The obtained results show that the reciprocity technique offers capacity close to that of a system with perfect CSI and achieves 0.9 to 2.2 bps/Hz higher capacity than a system without the AS method at an SNR of 10 dB.
Keywords: Antenna Selection, Capacity, Channel, Measurement, MIMO, Reciprocity.
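Illustration: a sketch of transmit antenna selection given channel knowledge, choosing the antenna subset that maximizes the MIMO capacity expression log2 det(I + (SNR/Nt) H Hᴴ) for one Rayleigh channel realization; the dimensions and SNR are assumed.

```python
import numpy as np
from itertools import combinations

def capacity(H, snr_linear):
    """MIMO capacity via the eigenvalues of H H^H."""
    lam = np.linalg.eigvalsh(H @ H.conj().T)
    return float(np.sum(np.log2(1 + (snr_linear / H.shape[1]) * lam)))

def select_antennas(H_full, n_select, snr_db=10.0):
    snr = 10 ** (snr_db / 10)
    best = max(combinations(range(H_full.shape[1]), n_select),
               key=lambda cols: capacity(H_full[:, list(cols)], snr))
    return best, capacity(H_full[:, list(best)], snr)

rng = np.random.default_rng(0)
# 4 receive antennas, 6 candidate transmit antennas, Rayleigh-fading entries.
H = (rng.standard_normal((4, 6)) + 1j * rng.standard_normal((4, 6))) / np.sqrt(2)
print(select_antennas(H, n_select=2))
```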
134 A Soft Error Rates Evaluation Method of Combinational Logic Circuit Based on Linear Energy Transfers
Authors: Man Li, Wanting Zhou, Lei Li
Abstract:
Communication stability is the primary concern for communication satellites. Communication satellites are easily affected by particle radiation, which generates single event effects (SEE) and leads to soft errors (SE) in combinational logic circuits. Existing research on the soft error rates (SER) of combinational logic circuits is mostly based on the assumption that the logic gates being bombarded produce pulses of the same width. However, in an actual radiation environment, the pulse widths of the bombarded logic gates differ because of different linear energy transfers (LET). In order to improve the accuracy of the SER evaluation model, this paper proposes a soft error rate evaluation method based on LET. We analyze the influence of LET on the pulse width in combinational logic and establish a pulse width model based on LET. Using this model, the error rate of the ISCAS’85 test circuits is calculated. Experimental results show that this model can be used for SER evaluation.
Keywords: Communication satellite, pulse width, soft error rates, linear energy transfer, LET.
133 Bandwidth Efficient Diversity Scheme Using STTC Concatenated With STBC: MIMO Systems
Authors: Sameru Sharma, Sanjay Sharma, Derick Engles
Abstract:
Multiple-input multiple-output (MIMO) systems are widely used to improve the quality and reliability of wireless transmission and to increase spectral efficiency. However, in MIMO systems, multiple copies of the data are received after experiencing various channel effects. The complexity limitations of conventional decoding techniques arising from the number of antennas are examined. Accordingly, we propose a modified sphere decoder (MSD-1) algorithm with lower complexity, giving rise to a system with high spectral efficiency. To increase signal diversity, we apply a rotated quadrature amplitude modulation (QAM) constellation in multi-dimensional space. Finally, we propose a new architecture involving a space time trellis code (STTC) concatenated with a space time block code (STBC), using MSD-1 at the receiver, to improve system performance. The system gains have been verified in the presence of channel state information (CSI) errors.
Keywords: Channel State Information, Diversity, Multi-Antenna, Rotated Constellation, Space Time Codes.
132 Determine of Constant Coefficients to Relate Total Dissolved Solids to Electrical Conductivity
Authors: M. Siosemarde, F. Kave, E. Pazira, H. Sedghi, S. J. Ghaderi
Abstract:
Salinity is a measure of the amount of salts in water. Total Dissolved Solids (TDS), as a salinity parameter, is often determined using laborious and time-consuming laboratory tests, but it may be more appropriate and economical to develop a method which uses a simpler soil salinity index. Because dissolved ions increase salinity as well as conductivity, the two measures are related. The aim of this research was to determine constant coefficients for predicting Total Dissolved Solids (TDS) from Electrical Conductivity (EC), evaluated with the statistics of correlation coefficient, root mean square error, maximum error, mean bias error, mean absolute error, relative error and coefficient of residual mass. For this purpose, two experimental areas (S1, S2) of Khuzestan province, Iran, were selected, and four treatments with three replications were applied using series of double rings. The treatments consisted of 25 cm, 50 cm, 75 cm and 100 cm water applications. The results showed that 16.3 and 12.4 were the best constant coefficients for predicting TDS from EC in pilots S1 and S2, with correlation coefficients of 0.977 and 0.997 and root mean square errors (RMSE) of 191.1 and 106.1, respectively.
Keywords: constant coefficients, electrical conductivity, Khuzestan plain, total dissolved solids.
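Illustration: applying and checking the linear TDS–EC relation TDS ≈ k·EC with the reported coefficients (k = 16.3 for pilot S1, 12.4 for S2); the sample data below are synthetic stand-ins for the field measurements, with units left unspecified.

```python
import numpy as np

def evaluate_coefficient(ec, tds, k):
    """RMSE and correlation of the prediction TDS = k * EC against observations."""
    predicted = k * ec
    rmse = np.sqrt(np.mean((predicted - tds) ** 2))
    r = np.corrcoef(predicted, tds)[0, 1]
    return rmse, r

rng = np.random.default_rng(0)
ec = rng.uniform(1, 60, 30)                     # hypothetical EC samples
tds = 16.3 * ec + rng.normal(0, 150, 30)        # synthetic TDS generated around k = 16.3
print(evaluate_coefficient(ec, tds, k=16.3))
```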