Search results for: Gaussian process regression
6169 Designing of the Heating Process for Fiber-Reinforced Thermoplastics with Middle-Wave Infrared Radiators
Abstract:
Manufacturing components from fiber-reinforced thermoplastics requires three steps: heating the matrix, forming and consolidating the composite, and finally cooling the matrix. For the heating process, a pre-determined temperature distribution through the layers and the thickness of the pre-consolidated sheets is recommended to enable the forming mechanism. Thus, a design of the heating process for forming composites with thermoplastic matrices is necessary. To obtain a constant temperature through the thickness and width of the sheet, the heating process was analyzed with the help of the finite element method. The simulation models were validated by experiments with resistance thermometers as well as with an infrared camera. Based on the finite element simulation, heating methods for infrared radiators have been developed. Using the numerical simulation, many iteration loops are required to determine the process parameters. Hence, the construction of a model for calculating the relevant process parameters was initiated using regression functions.
Keywords: Fiber-reinforced thermoplastics, heating strategies, middle-wave infrared radiator.
6168 A Hybrid Particle Swarm Optimization Solution to Ramping Rate Constrained Dynamic Economic Dispatch
Authors: Pichet Sriyanyong
Abstract:
This paper presents the application of an enhanced Particle Swarm Optimization (EPSO) combined with Gaussian Mutation (GM) for solving the Dynamic Economic Dispatch (DED) problem considering the operating constraints of generators. The EPSO consists of the standard PSO and a modified heuristic search approach: the ability of the traditional PSO is enhanced by applying the modified heuristic search to prevent the solutions from violating the constraints. In addition, Gaussian Mutation increases the diversity of the global search and prevents the search from being trapped in suboptimal points. To illustrate its efficiency and effectiveness, the developed EPSO-GM approach is tested on 3-unit and 10-unit 24-hour systems considering the valve-point effect. From the experimental results, it can be concluded that the proposed EPSO-GM provides accurate solutions, high efficiency, and robust computation compared with the other algorithms under consideration.
Keywords: Particle Swarm Optimization (PSO), Gaussian Mutation (GM), Dynamic Economic Dispatch (DED).
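Below is a minimal Python sketch of the PSO-with-Gaussian-mutation idea on a toy dispatch problem; the 3-unit cost coefficients, demand, penalty weight, and all hyperparameters are illustrative assumptions, not the authors' EPSO-GM or their test systems.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 3-unit fuel-cost function (coefficients are assumptions,
# not the paper's test system), with the demand met via a penalty term.
a, b, c = np.array([0.008, 0.009, 0.007]), np.array([7.0, 6.3, 6.8]), np.array([200., 180., 140.])
pmin, pmax, demand = np.array([100., 100., 50.]), np.array([500., 400., 200.]), 850.0

def cost(P):
    fuel = np.sum(a * P**2 + b * P + c)
    return fuel + 1e3 * abs(np.sum(P) - demand)   # penalty for power balance

n_particles, n_iter, dim = 30, 200, 3
X = rng.uniform(pmin, pmax, (n_particles, dim))
V = np.zeros_like(X)
pbest, pbest_f = X.copy(), np.array([cost(x) for x in X])
gbest = pbest[np.argmin(pbest_f)]

for t in range(n_iter):
    r1, r2 = rng.random((2, n_particles, dim))
    V = 0.7 * V + 1.5 * r1 * (pbest - X) + 1.5 * r2 * (gbest - X)
    X = np.clip(X + V, pmin, pmax)                # keep within generator limits
    k = rng.integers(n_particles)                 # Gaussian mutation for diversity
    X[k] = np.clip(X[k] + rng.normal(0, 0.1 * (pmax - pmin)), pmin, pmax)
    f = np.array([cost(x) for x in X])
    better = f < pbest_f
    pbest[better], pbest_f[better] = X[better], f[better]
    gbest = pbest[np.argmin(pbest_f)]

print("dispatch:", gbest.round(1), "cost:", round(cost(gbest), 2))
```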
6167 Predicting Bridge Pier Scour Depth with SVM
Authors: Arun Goel
Abstract:
Prediction of the maximum local scour is necessary for the safe and economical design of bridges. A number of equations have been developed over the years to predict local scour depth using laboratory data, and a few pier equations have also been proposed using field data. Most of these equations are empirical in nature, as indicated by past publications. In this paper, attempts have been made to compute the local depth of scour around a bridge pier, in dimensional and non-dimensional form, using linear regression, simple regression, and SVM (polynomial and RBF kernel) techniques, along with a few conventional empirical equations. The outcome of this study suggests that SVM-based modeling can be employed as an alternative to linear regression, simple regression, and the conventional empirical equations in predicting the scour depth of bridge piers. The results also indicate that the SVM models perform better on the non-dimensional form of the pier scour data than on the dimensional form.
Keywords: Modeling, pier scour, regression, prediction, SVM (Poly & RBF kernels).
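A brief scikit-learn sketch of SVR with polynomial and RBF kernels on synthetic stand-in data; the real predictors would be hydraulic and pier variables from the laboratory measurements.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)

# Synthetic stand-in for laboratory pier-scour data (the predictors, e.g.
# flow depth, velocity, pier width, are assumptions here).
X = rng.uniform(0.1, 1.0, (200, 3))
y = 1.5 * X[:, 0]**0.6 * X[:, 1]**0.4 / X[:, 2]**0.2 + rng.normal(0, 0.05, 200)

X_train, X_test, y_train, y_test = X[:150], X[150:], y[:150], y[150:]

for kernel, params in [("poly", dict(degree=2, C=10.0)), ("rbf", dict(gamma="scale", C=10.0))]:
    model = make_pipeline(StandardScaler(), SVR(kernel=kernel, epsilon=0.01, **params))
    model.fit(X_train, y_train)
    print(kernel, "R^2 =", round(r2_score(y_test, model.predict(X_test)), 3))
```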
6166 A Comparative Study of Additive and Nonparametric Regression Estimators and Variable Selection Procedures
Authors: Adriano Z. Zambom, Preethi Ravikumar
Abstract:
One of the biggest challenges in nonparametric regression is the curse of dimensionality. Additive models are known to overcome this problem by estimating only the individual additive effect of each covariate. However, if the model is misspecified, the accuracy of the estimator compared to the fully nonparametric one is unknown. In this work, the efficiency of completely nonparametric regression estimators, such as Loess, is compared to that of estimators that assume additivity in several situations, including additive and non-additive regression scenarios. The comparison is done by computing the oracle mean square error of the estimators with respect to the true nonparametric regression function. Then, a backward elimination selection procedure based on the Akaike Information Criterion is proposed, computed from either the additive or the nonparametric model. Simulations show that if the additive model is misspecified, the percentage of time it fails to select important variables can be higher than that of the fully nonparametric approach. A dimension reduction step is included when the nonparametric estimator cannot be computed due to the curse of dimensionality. Finally, the Boston housing dataset is analyzed using the proposed backward elimination procedure and the selected variables are identified.
Keywords: Additive models, local polynomial regression, residuals, mean square error, variable selection.
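A compact sketch of AIC-based backward elimination; ordinary linear least squares stands in here for the additive/nonparametric fits used in the paper, and the data are synthetic.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)

# Synthetic data: only covariates 0, 1 and 3 matter.
X = rng.normal(size=(300, 5))
y = 2 * X[:, 0] - X[:, 1] + 0.5 * X[:, 3] + rng.normal(0, 0.5, 300)

active = list(range(X.shape[1]))
while len(active) > 1:
    base_aic = sm.OLS(y, sm.add_constant(X[:, active])).fit().aic
    # AIC of each candidate model with one covariate removed
    drop_aics = [sm.OLS(y, sm.add_constant(X[:, [j for j in active if j != k]])).fit().aic
                 for k in active]
    best = int(np.argmin(drop_aics))
    if drop_aics[best] >= base_aic:   # no removal improves AIC: stop
        break
    active.pop(best)

print("selected covariates:", active)
```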
6165 Computational Aspects of Regression Analysis of Interval Data
Authors: Michal Cerny
Abstract:
We consider linear regression models where both the input data (the values of the independent variables) and the output data (the observations of the dependent variable) are interval-censored. We introduce a possibilistic generalization of the least squares estimator, the so-called OLS-set for the interval model. This set captures the impact of the loss of information caused by interval censoring on the OLS estimator and provides a tool for quantifying this effect. We study complexity-theoretic properties of the OLS-set. We also deal with restricted versions of the general interval linear regression model, in particular the crisp-input, interval-output model. We give an argument that natural descriptions of the OLS-set in the crisp-input, interval-output case cannot be computed in polynomial time. Then we derive easily computable approximations of the OLS-set which can be used instead of the exact description. We illustrate the approach with an example.
Keywords: Linear regression, interval-censored data, computational complexity.
6164 A Comparison of Real Valued Transforms for Image Compression
Authors: Shivali D. Kulkarni, Ameya K. Naik, Nitin S. Nagori
Abstract:
In this paper, we present simulation results for the application of a bandwidth-efficient algorithm (mapping algorithm) to an image transmission system. This system considers three different real-valued transforms to generate energy-compact coefficients. Results are first presented for grayscale and color image transmission in the absence of noise. It is seen that the system performs best when the discrete cosine transform is used. Also, the performance of the system is dominated more by the size of the transform block than by the number of coefficients transmitted or the number of bits used to represent each coefficient. Similar results are obtained in the presence of additive white Gaussian noise: the varying values of the bit error rate have very little or no impact on the performance of the algorithm. Optimum results are obtained for the system using an 8x8 transform block and transmitting 15 coefficients from each block using 8 bits each.
Keywords: Additive white Gaussian noise channel, mapping algorithm, peak signal to noise ratio, transform encoding.
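A small sketch of the block-DCT scheme described above, assuming a random array as a stand-in image: each 8x8 block is transformed, only the 15 largest-magnitude coefficients are retained, and PSNR is computed after reconstruction.

```python
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(3)
img = rng.random((64, 64))        # stand-in for a grayscale image
B, keep = 8, 15                   # 8x8 blocks, 15 retained coefficients

rec = np.zeros_like(img)
for i in range(0, img.shape[0], B):
    for j in range(0, img.shape[1], B):
        block = dctn(img[i:i+B, j:j+B], norm="ortho")
        # keep only the `keep` largest-magnitude DCT coefficients
        thresh = np.sort(np.abs(block).ravel())[-keep]
        block[np.abs(block) < thresh] = 0.0
        rec[i:i+B, j:j+B] = idctn(block, norm="ortho")

mse = np.mean((img - rec)**2)
psnr = 10 * np.log10(1.0 / mse)   # peak value is 1.0 for this float image
print("PSNR:", round(psnr, 2), "dB")
```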
6163 Modeling of Coagulation Process for the Removal of Carbofuran in Aqueous Solution
Authors: Roli Saini, Pradeep Kumar
Abstract:
A coagulation/flocculation process was adopted for the removal of the carbamate insecticide carbofuran from aqueous solution. Ferric chloride (FeCl3) was used as the coagulant. To explore the reduction efficiency for pesticide concentration and COD, jar-test experiments were carried out and the process was optimized through response surface methodology (RSM). The effects of two independent factors, FeCl3 dosage and pH, on the reduction efficiency were estimated by using a central composite design (CCD). The initial COD of the 30 mg/L carbofuran solution was found to be 510 mg/L. Results showed that the maximum reduction occurred at the optimal condition of FeCl3 = 80 mg/L and pH = 5.0, at which the reductions in concentration and COD were 75.13% and 65.34%, respectively. The present study also suggests that the obtained regression equations could serve as a theoretical basis for the coagulation treatment of pesticide wastewater.
Keywords: Carbofuran, coagulation, optimization, response surface methodology.
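A minimal sketch of the CCD-plus-quadratic-RSM workflow; the design is a standard two-factor central composite design in coded units, and the response values are simulated, not the jar-test measurements.

```python
import numpy as np

# Coded central composite design for two factors (FeCl3 dosage, pH):
# factorial, axial (alpha = sqrt(2)) and center points.
a = np.sqrt(2)
design = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
                   [-a, 0], [a, 0], [0, -a], [0, a], [0, 0], [0, 0]])

rng = np.random.default_rng(4)
# Illustrative response with a maximum at the design center (assumed shape).
y = 70 - 5 * design[:, 0]**2 - 3 * design[:, 1]**2 + rng.normal(0, 0.5, len(design))

# Full quadratic response-surface model: 1, x1, x2, x1*x2, x1^2, x2^2
x1, x2 = design[:, 0], design[:, 1]
M = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
beta, *_ = np.linalg.lstsq(M, y, rcond=None)
print("fitted coefficients:", beta.round(2))
```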
6162 A Hybrid Model of ARIMA and Multiple Polynomial Regression for Uncertainties Modeling of a Serial Production Line
Authors: Amir Azizi, Amir Yazid b. Ali, Loh Wei Ping, Mohsen Mohammadzadeh
Abstract:
Uncertainties of a serial production line affect the production throughput. These uncertainties cannot be prevented in a real production line; however, the uncertain conditions can be controlled by a robust prediction model. Thus, a hybrid model including autoregressive integrated moving average (ARIMA) and multiple polynomial regression is proposed to model the nonlinear relationship of production uncertainties with throughput. The uncertainties under consideration in this study are demand, break time, scrap, and lead time. The nonlinear relationship of production uncertainties with throughput is examined in the form of quadratic and cubic regression models, for which the adjusted R-squared values were 98.3% and 98.2%, respectively. We optimized the multiple quadratic regression (MQR) by considering the time series trend of the uncertainties using an ARIMA model. Finally, the hybrid model of ARIMA and MQR is formulated, with a better adjusted R-squared of 98.9%.
Keywords: ARIMA, multiple polynomial regression, production throughput, uncertainties.
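A short sketch of the hybrid idea, assuming a single synthetic uncertainty series: a quadratic regression links the uncertainty to throughput, and an ARIMA forecast of the uncertainty is pushed through the fitted regression.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(5)
n = 200

# Synthetic uncertainty series (e.g. demand) with a trend, and a throughput
# that depends on it quadratically; both are illustrative assumptions.
demand = 50 + 0.1 * np.arange(n) + rng.normal(0, 2, n)
throughput = 100 + 0.8 * demand - 0.004 * demand**2 + rng.normal(0, 1, n)

# Step 1: quadratic regression of throughput on the uncertainty.
M = np.column_stack([np.ones(n), demand, demand**2])
beta, *_ = np.linalg.lstsq(M, throughput, rcond=None)

# Step 2: ARIMA forecast of the uncertainty, fed into the regression.
demand_hat = ARIMA(demand, order=(1, 1, 1)).fit().forecast(steps=5)
M_new = np.column_stack([np.ones(5), demand_hat, demand_hat**2])
print("forecast throughput:", (M_new @ beta).round(2))
```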
6161 A Statistical Approach for Predicting and Optimizing Depth of Cut in AWJ Machining for 6063-T6 Al Alloy
Authors: Farhad Kolahan, A. Hamid Khajavi
Abstract:
In this paper, a set of experimental data has been used to assess the influence of abrasive water jet (AWJ) process parameters in cutting 6063-T6 aluminum alloy. The process variables considered here include nozzle diameter, jet traverse rate, jet pressure, and abrasive flow rate. The effects of these input parameters are studied on the depth of cut (h), one of the most important characteristics of AWJ machining. The Taguchi method and regression modeling are used to establish the relationships between input and output parameters. The adequacy of the model is evaluated using the analysis of variance (ANOVA) technique. In the next stage, the proposed model is embedded into a Simulated Annealing (SA) algorithm to optimize the AWJ process parameters. The objective is to determine a set of process parameters that can produce a desired depth of cut within the ranges of the process parameters. Computational results prove the effectiveness of the proposed model and optimization procedure.
Keywords: AWJ machining, Mathematical modeling, Simulated Annealing, Optimization
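A minimal simulated annealing sketch over an assumed fitted regression model for the depth of cut; the regression coefficients, parameter bounds, target depth, and cooling schedule are all illustrative, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(6)

# Illustrative fitted regression h(pressure, traverse rate, abrasive flow);
# coefficients and bounds are assumptions, not the paper's model.
def h(x):
    p, v, m = x
    return 0.02 * p - 0.05 * v + 0.3 * m + 0.001 * p * m

lo, hi = np.array([100., 1., 0.5]), np.array([350., 10., 5.])
target = 6.0                                   # desired depth of cut (mm)

x = rng.uniform(lo, hi)
best, best_err = x, abs(h(x) - target)
T = 1.0
for _ in range(5000):
    cand = np.clip(x + rng.normal(0, 0.05 * (hi - lo)), lo, hi)
    err, cur = abs(h(cand) - target), abs(h(x) - target)
    # accept better moves always, worse moves with Boltzmann probability
    if err < cur or rng.random() < np.exp(-(err - cur) / T):
        x = cand
    if err < best_err:
        best, best_err = cand, err
    T *= 0.999                                 # geometric cooling schedule

print("parameters:", best.round(2), "h =", round(h(best), 3))
```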
6160 Arabic Character Recognition Using Regression Curves with the Expectation Maximization Algorithm
Authors: Abdullah A. AlShaher
Abstract:
In this paper, we demonstrate how regression curves can be used to recognize 2D non-rigid handwritten shapes. Each shape is represented by a set of non-overlapping, uniformly distributed landmarks. The underlying models utilize second-order polynomials to model the shapes within a training set. To estimate the regression models, we need to extract the coefficients which describe the variations for each shape class; hence, a least squares method is used to estimate these coefficients. We then proceed by training the coefficients using the Expectation Maximization algorithm. Recognition is carried out by finding the least landmark-displacement error with respect to the model curves. Handwritten isolated Arabic characters are used to evaluate our approach.
Keywords: Shape recognition, Arabic handwritten characters, regression curves, expectation maximization algorithm.
6159 An Ensemble of Weighted Support Vector Machines for Ordinal Regression
Authors: Willem Waegeman, Luc Boullart
Abstract:
Instead of traditional (nominal) classification, we investigate the subject of ordinal classification, or ranking. An enhanced method based on an ensemble of Support Vector Machines (SVMs) is proposed, in which each binary classifier is trained with specific weights for each object in the training data set. Experiments on benchmark datasets and synthetic data indicate that the performance of our approach is comparable to state-of-the-art kernel methods for ordinal regression. The ensemble method, which is straightforward to implement, provides a very good sensitivity-specificity trade-off for the highest and lowest ranks.
Keywords: Ordinal regression, support vector machines, ensemble learning.
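A sketch of one standard reduction of ordinal regression to an ensemble of binary SVMs (one classifier per rank threshold, with the predicted rank being the number of thresholds exceeded); this omits the per-object weighting that defines the authors' method.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(7)

# Synthetic ordinal data: ranks 0..3 driven by a latent score (an assumption).
X = rng.normal(size=(400, 2))
ranks = np.digitize(X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.3, 400), [-1, 0, 1])

# One binary SVM per threshold k, modeling "rank > k".
models = []
for k in range(ranks.max()):
    clf = SVC(kernel="rbf", C=1.0)
    clf.fit(X, (ranks > k).astype(int))
    models.append(clf)

def predict(Xq):
    # predicted rank = number of thresholds the object exceeds
    votes = np.stack([m.predict(Xq) for m in models])
    return votes.sum(axis=0)

print("training accuracy:", round(np.mean(predict(X) == ranks), 3))
```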
6158 Economic Dispatch Fuzzy Linear Regression and Optimization
Authors: A. K. Al-Othman
Abstract:
This study presents a new approach based on Tanaka's fuzzy linear regression (FLP) algorithm to solve the well-known power system economic load dispatch (ELD) problem. Tanaka's FLP formulation is employed to compute the optimal solution of the optimization problem after linearization. The unknowns are expressed as fuzzy numbers with a triangular membership function, whose middle and spread values are reflected in the unknowns. The proposed fuzzy model is formulated as a linear optimization problem, where the objective is to minimize the sum of the spreads of the unknowns, subject to double inequality constraints. A linear programming technique is employed to obtain the middle and the symmetric spread for every unknown (power generation level). Simulation results of the proposed approach are compared with those reported in the literature.
Keywords: Economic Dispatch, Fuzzy Linear Regression (FLP), Optimization.
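A minimal sketch of Tanaka-style fuzzy linear regression as a linear program, on toy data: the centers and non-negative spreads of the fuzzy coefficients are chosen to minimize the total spread subject to every observation lying inside the predicted interval.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(8)

# Toy data; in the paper the rows would come from the linearized ELD model.
X = np.column_stack([np.ones(20), rng.uniform(0, 10, 20)])   # [1, x]
y = 3.0 + 1.5 * X[:, 1] + rng.normal(0, 0.4, 20)
n, p = X.shape
hdeg = 0.0                                # degree-of-fit parameter h

# Variables: centers a (p entries) and non-negative spreads c (p entries).
# Objective: minimize the total spread sum_i c^T |x_i|.
cost = np.concatenate([np.zeros(p), np.abs(X).sum(axis=0)])

A = np.abs(X) * (1 - hdeg)
# Each y_i must lie inside [a^T x_i - (1-h) c^T|x_i|, a^T x_i + (1-h) c^T|x_i|]
A_ub = np.vstack([np.hstack([-X, -A]),    # lower-side constraint
                  np.hstack([ X, -A])])   # upper-side constraint
b_ub = np.concatenate([-y, y])
bounds = [(None, None)] * p + [(0, None)] * p

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print("centers:", res.x[:p].round(3), "spreads:", res.x[p:].round(3))
```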
6157 Fuzzy Cost Support Vector Regression
Authors: Hadi Sadoghi Yazdi, Tahereh Royani, Mehri Sadoghi Yazdi, Sohrab Effati
Abstract:
In this paper, a new version of support vector regression (SVR) is presented, namely Fuzzy Cost SVR (FCSVR). A distinctive property of FCSVR is operation over fuzzy data, whereby the fuzzy cost (fuzzy margin and fuzzy penalty) is maximized. This idea admits uncertainty in the penalty and margin terms jointly. Robustness against noise is shown in the experimental results as a property of the proposed method, along with its superiority relative to conventional SVR.
Keywords: Support vector regression, Fuzzy input, Fuzzy cost.
6156 Landslide Susceptibility Mapping: A Comparison between Logistic Regression and Multivariate Adaptive Regression Spline Models in the Municipality of Oudka, Northern Morocco
Authors: S. Benchelha, H. C. Aoudjehane, M. Hakdaoui, R. El Hamdouni, H. Mansouri, T. Benchelha, M. Layelmam, M. Alaoui
Abstract:
Logistic regression (LR) and multivariate adaptive regression splines (MarSpline) are applied and verified for analysis of a landslide susceptibility map in Oudka, Morocco, using a geographic information system. From a spatial database containing data such as landslide mapping, topography, soil, hydrology, and lithology, eight factors related to landslides (elevation, slope, aspect, distance to streams, distance to roads, distance to faults, lithology map, and Normalized Difference Vegetation Index, NDVI) were calculated or extracted. Using these factors, landslide susceptibility indexes were calculated by the two methods. Before the calculation, the database was divided into two parts, the first for training the model and the second for validation. The results of the landslide susceptibility analysis were verified using success and prediction rates to evaluate the quality of these probabilistic models. This verification showed that the MarSpline model is the better model, with a success rate (AUC = 0.963) and a prediction rate (AUC = 0.951) higher than those of the LR model (success rate AUC = 0.918, prediction rate AUC = 0.901).
Keywords: Landslide susceptibility mapping, logistic regression, multivariate adaptive regression spline, Oudka, Taounate, Morocco.
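A compact sketch of the logistic-regression half of such a comparison, with synthetic stand-ins for the eight conditioning factors; a MARS fit would require an additional package such as py-earth.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(9)

# Synthetic stand-ins for the eight conditioning factors (elevation, slope, ...).
X = rng.normal(size=(1000, 8))
p = 1 / (1 + np.exp(-(1.2 * X[:, 0] - 0.8 * X[:, 1] + 0.5 * X[:, 2])))
y = rng.random(1000) < p                    # 1 = landslide cell, 0 = stable cell

X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, random_state=0)
lr = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# "Success rate" = AUC on training data; "prediction rate" = AUC on validation.
print("success AUC:   ", round(roc_auc_score(y_tr, lr.predict_proba(X_tr)[:, 1]), 3))
print("prediction AUC:", round(roc_auc_score(y_va, lr.predict_proba(X_va)[:, 1]), 3))
```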
6155 Subjective Versus Objective Assessment for Magnetic Resonance Images
Authors: Heshalini Rajagopal, Li Sze Chow, Raveendran Paramesran
Abstract:
Magnetic Resonance Imaging (MRI) is one of the most important medical imaging modalities. Subjective assessment of image quality is regarded as the gold standard for evaluating MR images. In this study, a database of 210 MR images, containing ten reference images and 200 distorted images, is presented. The reference images were distorted with four types of distortion: Rician noise, Gaussian white noise, Gaussian blur, and DCT compression. The 210 images were assessed by ten subjects, and the subjective scores were presented as Difference Mean Opinion Scores (DMOS). The DMOS values were compared with four full-reference image quality assessment (FR-IQA) metrics. We used the Pearson Linear Correlation Coefficient (PLCC) and the Spearman Rank Order Correlation Coefficient (SROCC) to validate the DMOS values. The high PLCC and SROCC values show that the DMOS values are close to the objective FR-IQA metrics.
Keywords: Magnetic Resonance (MR) images, Difference Mean Opinion Score (DMOS), Full Reference Image Quality Assessment (FR-IQA).
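Computing PLCC and SROCC between an objective metric and DMOS values is a one-liner each with SciPy; the scores below are simulated stand-ins, not the study's data.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

rng = np.random.default_rng(10)

# Stand-in scores: an objective metric (e.g. PSNR) and subjective DMOS values
# that track it monotonically with some observer noise (illustrative only).
metric = rng.uniform(20, 45, 200)
dmos = 100 - 2 * metric + rng.normal(0, 3, 200)

plcc, _ = pearsonr(metric, dmos)
srocc, _ = spearmanr(metric, dmos)
print("PLCC =", round(plcc, 3), "SROCC =", round(srocc, 3))
```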
6154 An Intelligent Text Independent Speaker Identification Using VQ-GMM Model Based Multiple Classifier System
Authors: Cheima Ben Soltane, Ittansa Yonas Kelbesa
Abstract:
Speaker Identification (SI) is the task of establishing the identity of an individual based on his or her voice characteristics. The SI task is typically achieved by two-stage signal processing: training and testing. The training process calculates speaker-specific feature parameters from the speech and generates speaker models accordingly. In the testing phase, speech samples from unknown speakers are compared with the models and classified. Even though the performance of speaker identification systems has improved due to recent advances in speech processing techniques, there is still room for improvement. In this paper, a Closed-Set Text-Independent Speaker Identification System (CISI) based on a Multiple Classifier System (MCS) is proposed, using Mel Frequency Cepstrum Coefficients (MFCC) for feature extraction and a suitable combination of Vector Quantization (VQ) and the Gaussian Mixture Model (GMM), together with the Expectation Maximization (EM) algorithm, for speaker modeling. The use of a Voice Activity Detector (VAD) with a hybrid approach based on Short Time Energy (STE) and statistical modeling of background noise in the pre-processing step of feature extraction yields a better and more robust automatic speaker identification system. Investigation of the Linde-Buzo-Gray (LBG) clustering algorithm for initialization of the GMM, for estimating the underlying parameters in the EM step, also improved the convergence rate and the system's performance. The system further uses a relative index as a confidence measure in case of contradiction between the GMM and VQ identification results. Simulation results carried out on the voxforge.org speech database using MATLAB highlight the efficacy of the proposed method compared to earlier work.
Keywords: Feature extraction, speaker modeling, feature matching, Mel Frequency Cepstrum Coefficient (MFCC), Gaussian Mixture Model (GMM), Vector Quantization (VQ), Linde-Buzo-Gray (LBG), Expectation Maximization (EM), pre-processing, Voice Activity Detection (VAD), Short Time Energy (STE), background noise statistical modeling, Closed-Set Text-Independent Speaker Identification System (CISI).
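A minimal GMM-based identification sketch with scikit-learn, assuming pre-extracted MFCC frames (random arrays stand in for them here); k-means initialization of the EM fit stands in for the LBG/VQ initialization, and the VAD and classifier-combination stages are omitted.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(11)

# Stand-ins for per-speaker MFCC frames (n_frames x 13); in practice these
# would come from a feature extractor such as librosa.feature.mfcc.
train = {s: rng.normal(loc=s, scale=1.0, size=(500, 13)) for s in range(3)}
test_utterance = rng.normal(loc=1, scale=1.0, size=(200, 13))   # speaker 1

# One GMM per enrolled speaker, fitted with EM.
models = {s: GaussianMixture(n_components=8, covariance_type="diag",
                             random_state=0).fit(frames)
          for s, frames in train.items()}

# Identify: average log-likelihood of the test frames under each model.
scores = {s: m.score(test_utterance) for s, m in models.items()}
print("scores:", {s: round(v, 2) for s, v in scores.items()})
print("identified speaker:", max(scores, key=scores.get))
```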
6153 A Study of Classification Models to Predict Drill-Bit Breakage Using Degradation Signals
Authors: Bharatendra Rai
Abstract:
Cutting tools are widely used in manufacturing processes, and drilling is the most commonly used machining process. Although the drill-bits used in drilling may not be expensive, their breakage can damage the expensive workpiece being drilled and, at the same time, has a major impact on productivity. Predicting drill-bit breakage is therefore important for reducing cost and improving productivity. This study uses twenty features extracted from two degradation signals, namely thrust force and torque. The methodology involves developing and comparing decision tree, random forest, and multinomial logistic regression models for classifying and predicting drill-bit breakage using the degradation signals.
Keywords: Degradation signal, drill-bit breakage, random forest, multinomial logistic regression.
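A short random-forest sketch on synthetic stand-ins for the twenty degradation-signal features:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(12)

# Stand-in for twenty features derived from thrust-force and torque signals
# (e.g. windowed means, variances, peaks); labels mark breakage events.
X = rng.normal(size=(500, 20))
y = (X[:, 0] + 0.7 * X[:, 3] + rng.normal(0, 0.5, 500)) > 1.0

rf = RandomForestClassifier(n_estimators=200, random_state=0)
print("CV accuracy:", cross_val_score(rf, X, y, cv=5).mean().round(3))
```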
6152 Peakwise Smoothing of Data Models Using Wavelets
Authors: D Sudheer Reddy, N Gopal Reddy, P V Radhadevi, J Saibaba, Geeta Varadan
Abstract:
Smoothing or filtering of data is the first preprocessing step for noise suppression in many applications involving data analysis. The moving average is the most popular method of smoothing data; its generalization led to the development of the Savitzky-Golay filter. Many window smoothing methods have been developed by convolving the data with different window functions for different applications; the most widely used window functions are the Gaussian and the Kaiser. Function approximation of the data by polynomial regression, Fourier expansion, or wavelet expansion also gives smoothed data, and wavelets smooth the data to a great extent by thresholding the wavelet coefficients. Almost all smoothing methods destroy peaks and flatten them as the support of the window is increased. In certain applications, it is desirable to retain peaks while smoothing the data as much as possible. In this paper, we present a methodology, called peakwise smoothing, that smooths the data to any desired level without losing the major peak features.
Keywords: Smoothing, moving average, peakwise smoothing, spatial density models, planar shape models, wavelets.
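The peak-flattening effect is easy to see by comparing a moving average with a Savitzky-Golay filter of the same window length; the signal below is synthetic.

```python
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(13)

# Noisy signal with a sharp peak that plain averaging tends to flatten.
x = np.linspace(0, 10, 500)
signal = np.exp(-((x - 5) ** 2) / 0.05) + 0.1 * np.sin(x)
noisy = signal + rng.normal(0, 0.05, x.size)

moving_avg = np.convolve(noisy, np.ones(31) / 31, mode="same")
savgol = savgol_filter(noisy, window_length=31, polyorder=3)

# The polynomial fit inside each window preserves peak height far better.
print("true peak:      ", round(signal.max(), 3))
print("moving average: ", round(moving_avg.max(), 3))
print("Savitzky-Golay: ", round(savgol.max(), 3))
```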
6151 Small Sample Bootstrap Confidence Intervals for Long-Memory Parameter
Authors: Josu Arteche, Jesus Orbe
Abstract:
Log periodogram regression is widely used in empirical applications because of its simplicity (only a least squares regression is required to estimate the memory parameter d), its good asymptotic properties, and its robustness to misspecification of the short-term behavior of the series. However, the asymptotic distribution is a poor approximation of the (unknown) finite sample distribution if the sample size is small. Here, the finite sample performance of different nonparametric residual bootstrap procedures is analyzed when they are applied to construct confidence intervals. In particular, in addition to the basic residual bootstrap, the local and block bootstraps, which might adequately replicate the structure that may arise in the errors of the regression when the series shows weak dependence in addition to the long memory component, are considered. A bias-correcting bootstrap, to adjust for the bias caused by that structure, is also considered. Finally, the performance of the bootstrap in log periodogram regression based confidence intervals is assessed for different types of models, together with how this performance changes as the sample size increases.
Keywords: Bootstrap, confidence interval, log periodogram regression, long memory.
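A minimal sketch of the log periodogram (GPH) regression itself, the estimator whose confidence intervals the paper bootstraps; the input series here is a short-memory AR(1), so the estimated d is illustrative only.

```python
import numpy as np

rng = np.random.default_rng(14)

# AR(1) series as a toy input; a genuinely long-memory series (e.g. ARFIMA)
# would be used in practice.
n = 1024
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.5 * y[t - 1] + rng.normal()

# Periodogram at the first m ~ sqrt(n) Fourier frequencies.
m = int(np.sqrt(n))
freqs = 2 * np.pi * np.arange(1, m + 1) / n
I = np.abs(np.fft.fft(y)[1:m + 1]) ** 2 / (2 * np.pi * n)

# GPH log-periodogram regression: the slope on -log(4 sin^2(lambda/2)) is d.
X = np.column_stack([np.ones(m), -np.log(4 * np.sin(freqs / 2) ** 2)])
d_hat = np.linalg.lstsq(X, np.log(I), rcond=None)[0][1]
print("estimated memory parameter d:", round(d_hat, 3))
```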
6150 Identifying Factors Contributing to the Spread of Lyme Disease: A Regression Analysis of Virginia’s Data
Authors: Fatemeh Valizadeh Gamchi, Edward L. Boone
Abstract:
This research focuses on Lyme disease, a widespread infectious condition in the United States caused by the bacterium Borrelia burgdorferi sensu stricto. It is critical to identify the environmental and economic factors that contribute to the spread of the disease. This study examined data from Virginia to identify a subset of explanatory variables significant for Lyme disease case counts. To identify relevant variables and avoid overfitting, linear Poisson regression and regularization methods such as the ridge, lasso, and elastic net penalties were employed, with cross-validation performed to acquire the tuning parameters. The proposed methods can automatically identify relevant disease-count covariates. The efficacy of the techniques was assessed using four criteria on three simulated datasets. Finally, using the Virginia Department of Health’s Lyme disease dataset, the study successfully identified key factors, and the results were consistent with previous studies.
Keywords: Lyme disease, Poisson generalized linear model, ridge regression, lasso regression, elastic net regression.
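A compact sketch of a ridge-penalized Poisson GLM with cross-validated tuning, on simulated counts; scikit-learn's PoissonRegressor provides the L2 penalty, while lasso/elastic-net Poisson fits would need other tooling (e.g. statsmodels or glmnet).

```python
import numpy as np
from sklearn.linear_model import PoissonRegressor
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(15)

# Synthetic county-level data: only two of six covariates drive case counts.
X = rng.normal(size=(300, 6))
mu = np.exp(0.8 * X[:, 0] - 0.5 * X[:, 1] + 1.0)
y = rng.poisson(mu)

# Ridge-penalized Poisson GLM with the penalty weight chosen by CV.
grid = GridSearchCV(PoissonRegressor(max_iter=500),
                    {"alpha": [0.01, 0.1, 1.0, 10.0]}, cv=5)
grid.fit(X, y)
print("best alpha:", grid.best_params_["alpha"])
print("coefficients:", grid.best_estimator_.coef_.round(3))
```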
6149 Developing New Processes and Optimizing Performance Using Response Surface Methodology
Authors: S. Raissi
Abstract:
Response surface methodology (RSM) is a very efficient tool for providing good practical insight into developing new processes and optimizing them. This methodology can help engineers to build a mathematical model representing the behavior of a system as a convincing function of the process parameters. In this paper, the sequential nature of RSM is surveyed for process engineers, and its relationship to design of experiments (DOE), regression analysis, and robust design is reviewed. The proposed four-step procedure, in two different phases, can help a system analyst to resolve a parameter design problem involving responses. To check the accuracy of the designed model, residual analysis and the prediction error sum of squares (PRESS) are described. It is believed that the proposed procedure can resolve a complex parameter design problem with one or more responses. It can be applied to areas where there are large data sets and a number of responses are to be optimized simultaneously. In addition, the proposed procedure is relatively simple and can be implemented easily using ready-made standard statistical packages.
Keywords: Response Surface Methodology (RSM), Design of Experiments (DOE), process modeling, process setting, process optimization.
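PRESS can be computed without refitting, using the leave-one-out identity on the hat matrix; a quadratic one-factor surface on simulated data illustrates this.

```python
import numpy as np

rng = np.random.default_rng(16)

# Quadratic response-surface fit for one factor (illustrative data).
x = rng.uniform(-1, 1, 30)
y = 5 + 2 * x - 3 * x**2 + rng.normal(0, 0.2, 30)
M = np.column_stack([np.ones_like(x), x, x**2])

beta, *_ = np.linalg.lstsq(M, y, rcond=None)
resid = y - M @ beta

# PRESS from the hat matrix: each leave-one-out residual is e_i / (1 - h_ii),
# so no model needs to be refitted.
H = M @ np.linalg.inv(M.T @ M) @ M.T
press = np.sum((resid / (1 - np.diag(H))) ** 2)
print("PRESS:", round(press, 4))
```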
6148 The Extension of Monomeric Computational Results to Polymeric Measurable Properties: An Introductory Computational Chemistry Experiment
Authors: Zhao Jing, Bai Yongqing, Shi Qiaofang, Zang Yang, Zhang Huaihao
Abstract:
Advances in software technology enable computational chemistry to be commonly applied in various research fields, especially in pedagogy. Thus, in order to expand and improve experimental instruction in computational chemistry for undergraduates, we designed an introductory experiment: research on the acrylamide molecular structure and its physicochemical properties. Initially, students construct molecular models of acrylamide and polyacrylamide in the Gaussian and Materials Studio software packages, respectively. Then, the infrared spectral data, atomic charges, and molecular orbitals of acrylamide, as well as the solvation effect of polyacrylamide, are calculated to predict their physicochemical performance. Finally, rheological experiments are used to validate these predictions. Through the combination of molecular simulation (performed in Gaussian and Materials Studio) with experimental verification (rheology experiments), learners gain a deep comprehension of the chemical nature of acrylamide and polyacrylamide, achieving good learning outcomes.
Keywords: Upper-division undergraduate, computer-based learning, laboratory instruction, amides, molecular modeling, spectroscopy.
6147 The Effect of Measurement Distribution on System Identification and Detection of Behavior of Nonlinearities of Data
Authors: Mohammad Javad Mollakazemi, Farhad Asadi, Aref Ghafouri
Abstract:
In this paper, we consider and apply parametric modeling to experimental data from a dynamical system. We investigate different distributions of the output measurements from several dynamical systems. Also, by processing the variance of the experimental data, we obtain the region of nonlinearity in the data, and identification of the output section is then applied for different situations and data distributions. Finally, the effect of the span of the measurements, such as their variance, on identification, and the limitations of this approach, are explained.
Keywords: Gaussian process, Nonlinearity distribution, Particle filter.
6146 A Study on Inference from Distance Variables in Hedonic Regression
Authors: Yan Wang, Yasushi Asami, Yukio Sadahiro
Abstract:
In an urban area, several landmarks may affect housing prices and rents, and hedonic analysis should employ distance variables corresponding to each landmark. Unfortunately, the estimated effects of distances to landmarks on housing prices are generally not consistent with the true prices. These distance variables may cause magnitude errors in regression, pointing to a problem of spatial multicollinearity. In this paper, we provide some approaches for obtaining samples with less bias, and a method for locating the specific sampling area that avoids the multicollinearity problem in the case of two specific landmarks.
Keywords: Landmarks, hedonic regression, distance variables, collinearity, multicollinearity.
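Diagnosing the collinearity between two landmark-distance variables is straightforward with variance inflation factors; the distances below are simulated so that the two landmarks nearly coincide.

```python
import numpy as np
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(17)

# Distances from sampled dwellings to two nearby landmarks: when the
# landmarks are close together, the two distances are nearly collinear.
base = rng.uniform(0, 5, 300)
d1 = base + rng.normal(0, 0.1, 300)
d2 = base + rng.normal(0, 0.1, 300)

X = np.column_stack([np.ones(300), d1, d2])   # constant + two distances
for j, name in [(1, "dist_landmark_1"), (2, "dist_landmark_2")]:
    print(name, "VIF =", round(variance_inflation_factor(X, j), 1))
```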
6145 An Internet of Things-Based Weight Monitoring System for Honey
Authors: Zheng-Yan Ruan, Chien-Hao Wang, Hong-Jen Lin, Chien-Peng Huang, Ying-Hao Chen, En-Cheng Yang, Chwan-Lu Tseng, Joe-Air Jiang
Abstract:
Bees play a vital role in pollination. This paper focuses on the weighing process of honey. Honey is usually stored in combs in a hive. Bee farmers brush bees away from the comb, collect the honey, and weigh it afterward. However, such a process has a strongly negative influence on the bees and can even lead to their death. This paper therefore presents an Internet of Things-based weight monitoring system which uses weight sensors to measure the weight of honey and simplifies the whole weighing procedure. To verify the system, the weight measured by the system was compared to the weight of standard calibration weights by employing a linear regression model. The R2 of the regression model is 0.9788, which suggests that the weighing system is highly reliable and can be applied to obtain the actual weight of honey. In the future, the weight data can be used to find the relationship between honey production and ecological parameters such as the bees’ foraging behavior and weather conditions. It is expected that the findings can serve as critical information for improving honey production.
Keywords: Internet of Things, weight, honey, bee.
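A minimal sketch of the calibration regression described above; the standard weights and sensor readouts are illustrative values, not the paper's measurements.

```python
import numpy as np

# Illustrative calibration: sensor readouts against known standard weights
# (values are assumptions; real data would come from the load-cell ADC).
standard_g = np.array([0, 100, 200, 500, 1000, 2000, 5000], dtype=float)
readout = np.array([12, 118, 224, 542, 1069, 2131, 5310], dtype=float)

slope, intercept = np.polyfit(readout, standard_g, 1)
predicted = slope * readout + intercept

ss_res = np.sum((standard_g - predicted) ** 2)
ss_tot = np.sum((standard_g - standard_g.mean()) ** 2)
print("weight = %.4f * readout + %.2f" % (slope, intercept))
print("R^2 =", round(1 - ss_res / ss_tot, 4))
```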
6144 Contrast Enhancement in Digital Images Using an Adaptive Unsharp Masking Method
Authors: Z. Mortezaie, H. Hassanpour, S. Asadi Amiri
Abstract:
Captured images may suffer from Gaussian blur due to poor lens focus or camera motion. Unsharp masking is a simple and effective technique for boosting image contrast and improving digital images that suffer from Gaussian blur. The technique is based on sharpening object edges by appending the scaled high-frequency components of the image to the original. The quality of the enhanced image is highly dependent on the characteristics of both the high-frequency components and the scaling/gain factor. Since the quality of an image may not be the same throughout, we propose an adaptive unsharp masking method in this paper. In this method, the gain factor is computed for individual pixels of the image by considering the gradient variations. Subjective and objective image quality assessments are used to compare the performance of the proposed method with both the classic and recently developed unsharp masking methods. The experimental results show that the proposed method performs better than the other existing methods.
Keywords: Unsharp masking, blurred image, sub-region gradient, image enhancement.
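A small sketch of gradient-adaptive unsharp masking; the gain rule below (damping the gain where the local gradient is already strong) is an illustrative choice, not the paper's exact formula.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(18)
img = ndimage.gaussian_filter(rng.random((128, 128)), 2.0)  # stand-in blurred image

# Classic unsharp masking: add a scaled high-frequency residual to the image.
low = ndimage.gaussian_filter(img, sigma=1.5)
high = img - low

# Adaptive gain: weaker amplification where the local gradient is already
# strong, so edges are sharpened without overshooting.
gx, gy = np.gradient(img)
grad = np.hypot(gx, gy)
gain = 0.5 + 1.5 / (1.0 + grad / (grad.mean() + 1e-12))

enhanced = np.clip(img + gain * high, 0.0, 1.0)
print("contrast (std) before/after:", img.std().round(4), enhanced.std().round(4))
```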
6143 Applications of Stable Distributions in Time Series Analysis, Computer Sciences and Financial Markets
Authors: Mohammad Ali Baradaran Ghahfarokhi, Parvin Baradaran Ghahfarokhi
Abstract:
In this paper, we first introduce stable distributions, stable processes, and their characteristics. The α-stable distribution family has received great interest in the last decade due to its success in modeling data which are too impulsive to be accommodated by the Gaussian distribution. In the second part, we propose major applications of the alpha-stable distribution in telecommunications, in computer science (such as network delays and signal processing), and in financial markets. At the end, we focus on using stable distributions to estimate measures of risk in stock markets, and we show simulated data using statistical software.
Keywords: Stable distribution, SαS, infinite variance, heavy-tail networks, VaR.
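A brief sketch of the risk-estimation use case: simulate alpha-stable returns with SciPy and compare the empirical 99% VaR with a Gaussian fit, which understates the heavy tail. All parameter values are assumptions.

```python
import numpy as np
from scipy.stats import levy_stable, norm

# Simulated daily returns from an alpha-stable law (alpha < 2 gives the
# heavy tails that a Gaussian fit would understate); parameters are assumed.
alpha, beta, loc, scale = 1.7, 0.0, 0.0, 0.01
returns = levy_stable.rvs(alpha, beta, loc=loc, scale=scale,
                          size=10000, random_state=0)

# 99% Value-at-Risk: the loss threshold exceeded on 1% of days.
var_stable = -np.quantile(returns, 0.01)
var_normal = -norm.ppf(0.01, loc=returns.mean(), scale=returns.std())
print("empirical 99% VaR (stable):", round(var_stable, 4))
print("Gaussian-fit 99% VaR:      ", round(var_normal, 4))
```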
6142 Maximizer of the Posterior Marginal Estimate for Noise Reduction of JPEG-Compressed Images
Authors: Yohei Saika, Yuji Haraguchi
Abstract:
We constructed a method of noise reduction for JPEG-compressed images based on Bayesian inference using the maximizer of the posterior marginal (MPM) estimate. In this method, we tried the MPM estimate using two kinds of likelihood, both of which enhance grayscale images degraded by lossy JPEG compression. One is a deterministic model of the likelihood; the other is a probabilistic one expressed by a Gaussian distribution. Then, using Monte Carlo simulation on grayscale images, such as the 256-grayscale standard image "Lena" with 256 x 256 pixels, we examined the performance of the MPM estimate with a performance measure based on the mean square error. We found that the MPM estimate via the Gaussian probabilistic model of the likelihood is effective for reducing noise, such as blocking artifacts and mosquito noise, if the parameters are set appropriately. On the other hand, the MPM estimate via the deterministic model of the likelihood is not effective for noise reduction, due to the low acceptance ratio of the Metropolis algorithm.
Keywords: Noise reduction, JPEG-compressed image, Bayesian inference, maximizer of the posterior marginal estimate.
6141 Using Linear Quadratic Gaussian Optimal Control for Lateral Motion of Aircraft
Authors: A. Maddi, A. Guessoum, D. Berkani
Abstract:
The purpose of this paper is to provide a practical example of the Linear Quadratic Gaussian (LQG) controller. This includes a description and some discussion of the discrete Kalman state estimator. One aspect of its optimality is that the estimator incorporates all the information that can be provided to it: it processes all available measurements, regardless of their precision, to estimate the current values of the variables of interest, using knowledge of the system and measurement-device dynamics, the statistical description of the system noises, measurement errors, and uncertainty in the dynamics models. Since the time of its introduction, the Kalman filter has been the subject of extensive research and application, particularly in the area of autonomous or assisted navigation. For example, to determine the velocity of an aircraft or its sideslip angle, one could use a Doppler radar, the velocity indications of an inertial navigation system, or the relative wind information in the air data system. Rather than ignore any of these outputs, a Kalman filter can be built to combine all of this data with knowledge of the various systems' dynamics to generate an overall best estimate of velocity and sideslip angle.
Keywords: Aircraft motion, Kalman filter, LQG control, lateral stability, state estimator.
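A toy one-dimensional Kalman filter fusing two velocity sensors of different precision, in the spirit of the Doppler-radar/INS example above; the dynamics, noise levels, and data are all assumed.

```python
import numpy as np

rng = np.random.default_rng(20)

# Estimate a slowly varying velocity from two sensors with different noise
# levels (e.g. Doppler radar vs. INS); all values are illustrative.
n = 100
true_v = 50 + np.cumsum(rng.normal(0, 0.1, n))        # random-walk velocity
z1 = true_v + rng.normal(0, 2.0, n)                   # noisy sensor 1
z2 = true_v + rng.normal(0, 0.5, n)                   # more precise sensor 2

q, r1, r2 = 0.01, 4.0, 0.25                           # process/measurement vars
x, p = z1[0], 1.0                                     # initial state estimate
for k in range(1, n):
    p += q                                            # predict (identity dynamics)
    for z, r in ((z1[k], r1), (z2[k], r2)):           # sequential measurement updates
        K = p / (p + r)                               # Kalman gain
        x += K * (z - x)
        p *= (1 - K)

print("final estimate: %.2f  (true %.2f)" % (x, true_v[-1]))
```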
6140 Estimate of Maximum Expected Intensity of One-Half-Wave Lines Dancing
Authors: A. Bekbaev, M. Dzhamanbaev, R. Abitaeva, A. Karbozova, G. Nabyeva
Abstract:
In this paper, a regression dependence of dancing intensity on wind speed and span length was established from statistical data obtained through multi-year observations of line wire dancing accumulated by the power systems of Kazakhstan and the Russian Federation. The lower and upper limits of the equation parameters were estimated, as well as the adequacy of the regression model. The constructed model will be used in research on the dancing phenomenon, for the development of methods and means of protection against dancing, and for zoning territories by line wire dancing intensity.
Keywords: Power lines, line wire dancing, dancing intensity, regression equation, dancing area intensity.
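A minimal sketch of fitting such a regression dependence by least squares, with simulated observations standing in for the multi-year dancing records.

```python
import numpy as np

rng = np.random.default_rng(21)

# Synthetic observations: wind speed (m/s), span length (m), and an assumed
# linear intensity relationship (coefficients are illustrative only).
wind = rng.uniform(5, 25, 80)
span = rng.uniform(100, 500, 80)
intensity = 0.05 * wind + 0.002 * span + rng.normal(0, 0.1, 80)

# Least squares fit of intensity = b0 + b1*wind + b2*span.
M = np.column_stack([np.ones_like(wind), wind, span])
beta, *_ = np.linalg.lstsq(M, intensity, rcond=None)
print("coefficients (b0, b1, b2):", beta.round(4))
```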