Search results for: least squares method
19132 Least Squares Method Identification of Corona Current-Voltage Characteristics and Electromagnetic Field in Electrostatic Precipitator
Authors: H. Nouri, I. E. Achouri, A. Grimes, H. Ait Said, M. Aissou, Y. Zebboudj
Abstract:
This paper aims to analyse the behaviour of DC corona discharge in wire-to-plate electrostatic precipitators (ESP). Current-voltage curves are particularly analysed. Experimental results show that the discharge current is strongly affected by the applied voltage. The proposed method of current identification is the method of least squares. Least squares problems fall into two categories: linear or ordinary least squares and non-linear least squares, depending on whether or not the residuals are linear in all unknowns. The linear least-squares problem occurs in statistical regression analysis; it has a closed-form solution. A closed-form solution (or closed-form expression) is any formula that can be evaluated in a finite number of standard operations. The non-linear problem has no closed-form solution and is usually solved by iterative refinement.
Keywords: electrostatic precipitator, current-voltage characteristics, least squares method, electric field, magnetic field
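The closed-form solution referenced in this abstract is the normal equations of ordinary least squares. As an illustrative sketch only (the voltage-current values below are invented, not taken from the paper), a quadratic corona current model can be fitted as:

```python
import numpy as np

# Hypothetical corona-current data: applied voltage (kV) and measured
# current (mA). These numbers are illustrative only.
V = np.array([20.0, 25.0, 30.0, 35.0, 40.0])
I = np.array([0.10, 0.22, 0.41, 0.63, 0.90])

# Linear least squares has a closed-form solution: for the model I ~ X @ beta,
# beta = (X^T X)^{-1} X^T I (the normal equations).
X = np.column_stack([np.ones_like(V), V, V**2])   # quadratic model in V
beta = np.linalg.solve(X.T @ X, X.T @ I)

residuals = I - X @ beta
print(beta, np.sum(residuals**2))
```

Because the model is linear in `beta`, the same coefficients come out of any standard least-squares solver; the normal equations are shown only to make the closed form explicit.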
Procedia PDF Downloads 431
19131 Online Estimation of Clutch Drag Torque in Wet Dual Clutch Transmission Based on Recursive Least Squares
Authors: Hongkui Li, Tongli Lu , Jianwu Zhang
Abstract:
This paper focuses on developing an estimation method for clutch drag torque in wet DCT. The modelling of clutch drag torque is investigated. As the main factor affecting the clutch drag torque, the dynamic viscosity of oil is discussed. The paper proposes an estimation method of clutch drag torque based on recursive least squares by utilizing the dynamic equations of the gear shifting synchronization process. The results demonstrate that the estimation method has good accuracy and efficiency.
Keywords: clutch drag torque, wet DCT, dynamic viscosity, recursive least squares
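A generic recursive least squares (RLS) recursion of the kind this abstract builds on can be sketched as follows; the two-parameter linear model and all numerical values are illustrative assumptions, not the paper's drag-torque model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative linear system y = phi @ theta + noise; theta stands in for
# unknown model coefficients (e.g. of a drag-torque model). Values are made up.
theta_true = np.array([2.0, -1.0])
n = 200
Phi = rng.normal(size=(n, 2))
y = Phi @ theta_true + 0.01 * rng.normal(size=n)

# Standard RLS recursion with forgetting factor lam.
lam = 0.99
theta = np.zeros(2)
P = 1e3 * np.eye(2)                        # inverse-correlation matrix estimate
for phi, yk in zip(Phi, y):
    k = P @ phi / (lam + phi @ P @ phi)    # gain vector
    theta = theta + k * (yk - phi @ theta) # update with a-priori error
    P = (P - np.outer(k, phi) @ P) / lam

print(theta)
```

The forgetting factor `lam < 1` lets the estimate track slowly varying parameters, which is why RLS suits online estimation.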
Procedia PDF Downloads 318
19130 Genetic Algorithm to Construct and Enumerate 4×4 Pan-Magic Squares
Authors: Younis R. Elhaddad, Mohamed A. Alshaari
Abstract:
Since 2700 B.C., the problem of constructing magic squares has attracted many researchers. Magic squares are one of the most difficult challenges for mathematicians. In this work, we describe how to construct and enumerate pan-magic squares using a genetic algorithm with a new chromosome encoding technique. The results were promising within a reasonable time.
Keywords: genetic algorithm, magic square, pan-magic square, computational intelligence
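The fitness evaluation at the heart of such a genetic algorithm is a check of the pan-magic property: all rows, columns, and broken diagonals must sum to the magic constant (34 for the numbers 1-16). A minimal sketch of that check, with a classic pan-magic square as input:

```python
import numpy as np

def is_pan_magic(sq):
    """Check whether a 4x4 square is pan-magic: all rows, columns, and all
    (broken) diagonals sum to the magic constant."""
    sq = np.asarray(sq)
    m = sq.sum() // 4                      # magic constant (34 for 1..16)
    rows = all(r.sum() == m for r in sq)
    cols = all(c.sum() == m for c in sq.T)
    # k indexes the 4 broken diagonals in each direction (wrap-around).
    diags = all(sum(sq[i, (i + k) % 4] for i in range(4)) == m and
                sum(sq[i, (k - i) % 4] for i in range(4)) == m
                for k in range(4))
    return rows and cols and diags

# A classic 4x4 pan-magic square.
square = [[1, 8, 13, 12],
          [14, 11, 2, 7],
          [4, 5, 16, 9],
          [15, 10, 3, 6]]
print(is_pan_magic(square))   # True
```

A GA fitness function would typically count how many of these sums deviate from 34, driving the count to zero.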
Procedia PDF Downloads 576
19129 Hybrid Artificial Bee Colony and Least Squares Method for Rule-Based Systems Learning
Authors: Ahcene Habbi, Yassine Boudouaoui
Abstract:
This paper deals with the problem of automatic rule generation for fuzzy systems design. The proposed approach is based on hybrid artificial bee colony (ABC) optimization and the weighted least squares (LS) method and aims to find the structure and parameters of fuzzy systems simultaneously. More precisely, two ABC-based fuzzy modeling strategies are presented and compared. The first strategy uses global optimization to learn fuzzy models; the second one hybridizes ABC and the weighted least squares estimation method. The performances of the proposed ABC and ABC-LS fuzzy modeling strategies are evaluated on complex modeling problems and compared to other advanced modeling methods.
Keywords: automatic design, learning, fuzzy rules, hybrid, swarm optimization
Procedia PDF Downloads 437
19128 Solving SPDEs by Least Squares Method
Authors: Hassan Manouzi
Abstract:
We present in this paper a useful strategy to solve stochastic partial differential equations (SPDEs) involving stochastic coefficients. Using the Wick product of higher order and the Wiener-Itô chaos expansion, the SPDE is reformulated as a large system of deterministic partial differential equations. To reduce the computational complexity of this system, we use a decomposition-coordination method. To obtain the chaos coefficients in the corresponding deterministic equations, we use a least squares formulation. Once this approximation is performed, the statistics of the numerical solution can be easily evaluated.
Keywords: least squares, Wick product, SPDEs, finite element, Wiener chaos expansion, gradient method
Procedia PDF Downloads 419
19127 Application of the Least Squares Method in the Adjustment of Chlorodifluoromethane (HCFC-142b) Regression Models
Authors: L. J. de Bessa Neto, V. S. Filho, J. V. Ferreira Nunes, G. C. Bergamo
Abstract:
There are many situations in which human activities have significant effects on the environment. Damage to the ozone layer is one of them. The objective of this work is to use the least squares method, considering the linear, exponential, logarithmic, power and second-degree polynomial models, to analyze through the coefficient of determination (R²) which model best fits the behavior of chlorodifluoromethane (HCFC-142b), in parts per trillion, between 1992 and 2018, as well as to estimate future concentrations 5 and 10 periods ahead, i.e. the concentration of this pollutant in the years 2023 and 2028 under each of the fits. A total of 809 observations of the concentration of HCFC-142b at one of the monitoring stations for gases precursors of the deterioration of the ozone layer were selected for the period studied and, using these data, the statistical software Excel was used to make the scatter plots for each of the fitted models. With the development of the present study, it was observed that the logarithmic fit was the model that best fit the data set, since besides having a significant R², its fitted curve was compatible with the natural trend curve of the phenomenon.
Keywords: chlorodifluoromethane (HCFC-142b), ozone, least squares method, regression models
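The model-comparison procedure described here, fitting several candidate models and ranking them by R², can be sketched generically. The series below is synthetic (the HCFC-142b data are not reproduced), and only candidates that are linear in their parameters after a transform are shown:

```python
import numpy as np

# Synthetic stand-in for a concentration time series: a roughly
# logarithmic rise with noise. Values are invented for illustration.
rng = np.random.default_rng(1)
t = np.arange(1, 101, dtype=float)          # time index
y = 5.0 + 3.0 * np.log(t) + rng.normal(0, 0.2, t.size)

def r_squared(y, y_hat):
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Each candidate model is linear in its parameters after a transform,
# so ordinary least squares applies to all of them.
designs = {
    "linear":      np.column_stack([np.ones_like(t), t]),
    "logarithmic": np.column_stack([np.ones_like(t), np.log(t)]),
    "quadratic":   np.column_stack([np.ones_like(t), t, t ** 2]),
}
scores = {}
for name, X in designs.items():
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    scores[name] = r_squared(y, X @ beta)

best = max(scores, key=scores.get)
print(best, scores[best])
```

The exponential and power models in the abstract fit the same way after taking logarithms of `y` (and of `t` for the power model).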
Procedia PDF Downloads 124
19126 Public Squares and Their Potential for Social Interactions: A Case Study of Historical Public Squares in Tehran
Authors: Asma Mehan
Abstract:
Under the thrust of technological changes, population growth and vehicular traffic, Iranian historical squares have lost their significance and are no longer the main social nodes of society. This research focuses on how historical public squares can inspire designers to enhance social interactions among citizens in the Iranian urban context. Moreover, the recent master plan of Tehran demonstrates the lack of public spaces designed for the purpose of people's social gatherings. To fill this gap, first the current situations of 7 selected primary historical public squares in Tehran, including Sabze Meydan, Arg, Topkhaneh, Baherstan, Mokhber-al-dole, Rah Ahan and Hassan Abad, have been compared. Later, the elements influencing social interactions in the public squares, such as subjective factors (human relationships and memories) and objective factors (natural and built environment), have been investigated. As a conclusion, some strategies are proposed for improving social interactions in historical public squares, such as holding cultural, national, athletic and religious events; defining different and new functions in the public squares' surroundings; increasing pedestrian routes; reviving the collective memory; demonstrating the historical importance of the square; eliminating visual obstacles across the square; organizing the natural elements of the square; and providing appropriate pavement for social activities. Finally, it is argued that the combination of all influencing factors, which are human interactions, natural elements and built environment criteria, will enhance the historical public squares' potential for social interaction.
Keywords: historical square, Iranian public square, social interaction, Tehran
Procedia PDF Downloads 405
19125 Variogram Fitting Based on the Wilcoxon Norm
Authors: Hazem Al-Mofleh, John Daniels, Joseph McKean
Abstract:
Within geostatistics research, effective estimation of the variogram points has been examined, particularly in developing robust alternatives. The parametric fit of these variogram points, which eventually defines the kriging weights, however, has not received the same attention from a robust perspective. This paper proposes the use of the non-linear Wilcoxon norm over weighted non-linear least squares as a robust variogram fitting alternative. First, we introduce the concept of variogram estimation and fitting. Then, as an alternative to non-linear weighted least squares, we discuss the non-linear Wilcoxon estimator. Next, the robustness properties of the non-linear Wilcoxon are demonstrated using a contaminated spatial data set. Finally, under simulated conditions, increasing levels of contaminated spatial processes have their variogram points estimated and fitted. In the fitting of these variogram points, both non-linear weighted least squares and non-linear Wilcoxon fits are examined for efficiency. At all levels of contamination (including 0%), using a robust estimation and robust fitting procedure, the non-weighted Wilcoxon outperforms weighted least squares.
Keywords: non-linear Wilcoxon, robust estimation, variogram estimation, Wilcoxon norm
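The Wilcoxon norm underlying the proposed fit is a rank-based dispersion measure. A sketch of one common version, with Wilcoxon scores a(i) = sqrt(12)·(i/(n+1) − 1/2), illustrating its bounded sensitivity to a gross outlier relative to the squared-error criterion (the residual values are invented):

```python
import numpy as np

def wilcoxon_norm(e):
    """Wilcoxon pseudo-norm of a residual vector e: sum_i a(R(e_i)) * e_i,
    with rank scores a(i) = sqrt(12) * (i/(n+1) - 1/2)."""
    e = np.asarray(e, dtype=float)
    n = e.size
    ranks = np.argsort(np.argsort(e)) + 1           # ranks 1..n
    scores = np.sqrt(12.0) * (ranks / (n + 1.0) - 0.5)
    return float(np.sum(scores * e))

# Residuals with and without a single gross outlier (illustrative numbers).
clean = np.array([0.1, -0.2, 0.05, -0.1, 0.15])
dirty = np.array([0.1, -0.2, 0.05, -0.1, 50.0])

# The squared-error criterion blows up quadratically with the outlier,
# while the Wilcoxon norm grows only linearly in it (the rank caps the
# score, so only the residual itself enters linearly).
print(np.sum(clean**2), wilcoxon_norm(clean))
print(np.sum(dirty**2), wilcoxon_norm(dirty))
```

Minimizing this norm over the variogram model parameters (instead of a weighted sum of squares) is what yields the robust fit the abstract describes.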
Procedia PDF Downloads 458
19124 Applying Element Free Galerkin Method on Beam and Plate
Authors: Mahdad M’hamed, Belaidi Idir
Abstract:
This paper develops a meshless approach, called the Element Free Galerkin (EFG) method, which is based on the weak form of the partial differential governing equations and employs moving least squares (MLS) interpolation to construct the meshless shape functions. The variational weak form is used in the EFG, where the trial and test functions are approximated by the MLS approximation. Since the shape functions constructed by this discretization have the weight function property based on the randomly distributed points, the essential boundary conditions can be implemented easily. The local weak form of the partial differential governing equations is obtained by the weighted residual method within the simple local quadrature domain. A spline function with high continuity is used as the weight function. The presently developed EFG method is a truly meshless method, as it does not require a mesh, either for the construction of the shape functions or for the integration of the local weak form. Several numerical examples of two-dimensional static structural analysis are presented to illustrate the performance of the present EFG method. They show that the EFG method is highly efficient in implementation and highly accurate in computation. The present method is used to analyze the static deflection of beams and plates with holes.
Keywords: numerical computation, element-free Galerkin (EFG), moving least squares (MLS), meshless methods
Procedia PDF Downloads 283
19123 Equity Risk Premiums and Risk Free Rates in Modelling and Prediction of Financial Markets
Authors: Mohammad Ghavami, Reza S. Dilmaghani
Abstract:
This paper presents an adaptive framework for modelling financial markets using equity risk premiums, risk-free rates and volatilities. The recorded economic factors are initially used to train four adaptive filters for a certain limited period of time in the past. Once the systems are trained, the adjusted coefficients are used for modelling and prediction of an important financial market index. Two different approaches based on least mean squares (LMS) and recursive least squares (RLS) algorithms are investigated. Performance analysis of each method in terms of the mean squared error (MSE) is presented and the results are discussed. Computer simulations carried out using recorded data show MSEs of 4% and 3.4% for the next-month prediction using LMS and RLS adaptive algorithms, respectively. In terms of twelve-month prediction, the RLS method shows a better trend estimation compared to the LMS algorithm.
Keywords: adaptive methods, LSE, MSE, prediction of financial markets
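The LMS recursion used for one of the two adaptive filters can be sketched generically; the factor vectors and target series below are synthetic stand-ins for the recorded economic data:

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative setup: predict a target series from a vector of factors via
# a linear model, adapting the weights with LMS. All data are synthetic.
w_true = np.array([0.5, -0.3, 0.8, 0.1])
n = 5000
X = rng.normal(size=(n, 4))                 # factor vectors
d = X @ w_true + 0.01 * rng.normal(size=n)  # desired output

mu = 0.01                                   # LMS step size
w = np.zeros(4)
for x, dk in zip(X, d):
    e = dk - x @ w                          # a-priori error
    w = w + mu * e * x                      # LMS weight update

print(w)
```

LMS costs O(p) per sample versus O(p²) for RLS, which is the usual trade-off behind comparing the two, as this abstract does.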
Procedia PDF Downloads 336
19122 Design of Wide-Range Variable Fractional-Delay FIR Digital Filters
Authors: Jong-Jy Shyu, Soo-Chang Pei, Yun-Da Huang
Abstract:
In this paper, the design of wide-range variable fractional-delay (WR-VFD) finite impulse response (FIR) digital filters is proposed. Whereas the conventional VFD filter is designed such that its delay is adjustable within one unit, the proposed VFD FIR filter is designed such that its delay is tunable within a wider range. From the traces of the coefficients of the fractional-delay FIR filter, it is found that the conventional method of polynomial substitution for the filter coefficients no longer satisfies the design demand, and circuits performing the sinc function (sinc converters) are added to overcome this problem. In this paper, the least-squares method is adopted to design the WR-VFD FIR filter. Throughout this paper, several examples are presented to demonstrate the effectiveness of the proposed methods.
Keywords: digital filter, FIR filter, variable fractional-delay (VFD) filter, least-squares approximation
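A plain (fixed-delay) least-squares fractional-delay FIR design, the building block a wide-range variable design extends, can be sketched as follows; the filter length, delay, and band edge are illustrative choices, not the paper's specifications:

```python
import numpy as np

# Least-squares design of a fractional-delay FIR filter: choose taps h so
# that H(e^{jw}) approximates e^{-jwd} on a frequency grid, where the
# desired delay d need not be an integer.
N, d = 21, 9.3                        # filter length, delay in samples
w = np.linspace(0.0, 0.8 * np.pi, 400)   # approximation band (grid)
n = np.arange(N)
A = np.exp(-1j * np.outer(w, n))      # response of each tap on the grid
b = np.exp(-1j * w * d)               # ideal fractional-delay response

# Solve the complex least-squares problem for real taps by stacking
# real and imaginary parts.
A_ri = np.vstack([A.real, A.imag])
b_ri = np.concatenate([b.real, b.imag])
h, *_ = np.linalg.lstsq(A_ri, b_ri, rcond=None)

err = np.max(np.abs(A @ h - b))       # worst-case response error in band
print(err)
```

In a VFD filter the taps become functions of the delay parameter (classically polynomials in d), which is where the paper's sinc-converter modification comes in.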
Procedia PDF Downloads 491
19121 Influence of Optimization Method on Parameters Identification of Hyperelastic Models
Authors: Bale Baidi Blaise, Gilles Marckmann, Liman Kaoye, Talaka Dya, Moustapha Bachirou, Gambo Betchewe, Tibi Beda
Abstract:
This work highlights the capabilities of the particle swarm optimization (PSO) method to identify parameters of hyperelastic models. The study compares this method with the genetic algorithm (GA) method, the least squares (LS) method, the pattern search algorithm (PSA) method, the Beda-Chevalier (BC) method and the Levenberg-Marquardt (LM) method. Four classic hyperelastic models are used to test the different methods through parameter identification. Then, the study compares the ability of these models to reproduce experimental Treloar data in simple tension, biaxial tension and pure shear.
Keywords: particle swarm optimization, identification, hyperelastic, model
Procedia PDF Downloads 171
19120 Least Squares Solution for Linear Quadratic Gaussian Problem with Stochastic Approximation Approach
Authors: Sie Long Kek, Wah June Leong, Kok Lay Teo
Abstract:
The linear quadratic Gaussian model is a standard mathematical model for the stochastic optimal control problem. The combination of the linear quadratic estimator and the linear quadratic regulator allows the state estimation and the optimal control policy to be designed separately. This is known as the separation principle. In this paper, an efficient computational method is proposed to solve the linear quadratic Gaussian problem. In our approach, the Hamiltonian function is defined and the necessary conditions are derived. In addition to this, the output error is defined and a least-squares optimization problem is introduced. By determining the first-order necessary condition, the gradient of the sum of squares of the output error is established. From this point of view, the stochastic approximation approach is employed such that the optimal control policy is updated. Within a given tolerance, the iteration procedure is stopped and the optimal solution of the linear quadratic Gaussian problem is obtained. For illustration, an example of the linear quadratic Gaussian problem is studied. The result shows the efficiency of the approach proposed. In conclusion, the applicability of the approach proposed for solving the linear quadratic Gaussian problem is highly demonstrated.
Keywords: iteration procedure, least squares solution, linear quadratic Gaussian, output error, stochastic approximation
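The gradient-based stochastic approximation update described here can be illustrated on a toy scalar problem; the plant model, noise level, and step sizes below are invented for illustration and are not the paper's LQG formulation:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy stochastic approximation: minimize J(u) = E[(y(u) - y_ref)^2] for a
# scalar "plant" y = 3u + noise, iterating u <- u - a_k * g_k with a noisy
# gradient estimate g_k and diminishing step sizes a_k = a / k.
y_ref = 2.0
u = 0.0
for k in range(1, 2001):
    y = 3.0 * u + 0.05 * rng.normal()   # noisy output measurement
    g = 2.0 * (y - y_ref) * 3.0         # noisy gradient of (y - y_ref)^2
    u = u - (0.05 / k) * g

print(u)   # should approach y_ref / 3
```

The diminishing step sizes average out the measurement noise, which is the essential mechanism that lets the control update converge despite noisy gradients.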
Procedia PDF Downloads 187
19119 Shape Management Method for Safety Evaluation of Bridge Based on Terrestrial Laser Scanning Using Least Squares
Authors: Gichun Cha, Dongwan Lee, Junkyeong Kim, Aoqi Zhang, Seunghee Park
Abstract:
Around the world, the construction technology of double deck tunnels is being studied in order to respond to increasing urban traffic demands and environmental changes. Advanced countries already have the construction technology for the double deck tunnel structure, but domestically research on it has only begun. Construction technologies are important, but safety evaluation of the structure is necessary to prevent possible accidents during construction. Thus, the double deck tunnel requires shape management of its middle slabs. Domestically, the construction of a double deck tunnel is being prepared to provide an alternate route and a pleasant urban environment. There has been no research on the shape management of double deck tunnels because it is a newly attempted technology. At present, a similar study exists for bridge structures, where the shape model is implemented using terrestrial laser scanning (TLS). Therefore, we proceed with research on bridge slabs because they are structurally similar to the double deck tunnel. In this study, we develop a shape management method for bridge slabs using TLS. We select a test bed as the measurement site: a bridge located on the Sungkyunkwan University Natural Sciences Campus. This bridge has a total length of 34 m and a vertical height of 8.7 m from the ground, and it connects Engineering Building #1 and Engineering Building #2. Point cloud data for shape management are acquired with a TLS; we utilized the Leica ScanStation C10/C5 model. We identify the maximum displacement area of the middle slabs using least-squares fitting. We expect to raise the stability of double deck tunnels through shape management of the middle slabs.
Keywords: bridge slabs, least squares, safety evaluation, shape management method, terrestrial laser scanning
Procedia PDF Downloads 241
19118 Analysis of Two Methods to Estimation Stochastic Demand in the Vehicle Routing Problem
Authors: Fatemeh Torfi
Abstract:
Estimation of stochastic demand in physical distribution in general, and efficient transport route management in particular, is emerging as a crucial factor in the urban planning domain. It is particularly important in municipalities such as Tehran, where sound demand management calls for a realistic analysis of the routing system. The methodology involved critically investigating a fuzzy least-squares linear regression (FLLR) approach to estimate the stochastic demands in the vehicle routing problem (VRP), bearing in mind the customers' preference order. An FLLR method is proposed for solving the VRP with stochastic demands. An approximate-distance fuzzy least-squares (ADFL) estimator is applied to original data taken from a case study. The SSR values of the ADFL estimator and the real demand are obtained and then compared to the SSR values of the nominal demand and the real demand. Empirical results showed that the proposed methods can be viable in solving problems under circumstances of vague and imprecise performance ratings. The results further proved that the ADFL was a realistic and efficient estimator for facing the stochastic demand challenges in vehicle routing system management and solving relevant problems.
Keywords: fuzzy least-squares, stochastic, location, routing problems
Procedia PDF Downloads 434
19117 Application of the Total Least Squares Estimation Method for an Aircraft Aerodynamic Model Identification
Authors: Zaouche Mohamed, Amini Mohamed, Foughali Khaled, Aitkaid Souhila, Bouchiha Nihad Sarah
Abstract:
The aerodynamic coefficients are important in the evaluation of an aircraft's performance and stability-control characteristics. These coefficients can also be used in automatic flight control systems and in the mathematical model of a flight simulator. The study of the aerodynamic aspects of flying systems is a reserved domain, inaccessible to most developers. Performing tests in a wind tunnel to extract aerodynamic forces and moments requires specific and expensive means. Besides, the glaring lack of published documentation in this field of study makes the determination of aerodynamic coefficients complicated. This work is devoted to the identification of an aerodynamic model using an aircraft in a virtual simulated environment. We deal with the identification of the system, present an environment framework based on the Software In the Loop (SIL) methodology, and use Microsoft™ Flight Simulator (FS-2004) as the environment for plane simulation. We propose the Total Least Squares Estimation (TLSE) technique to identify the aerodynamic parameters, which are unknown, variable, classified and used in the expression of the piloting law. In this paper, we define each aerodynamic coefficient as the mean of its numerical values. All other variations are considered as modeling uncertainties that will be compensated by the robustness of the piloting control.
Keywords: aircraft aerodynamic model, total least squares estimation, piloting the aircraft, robust control, Microsoft Flight Simulator, MQ-1 Predator
Procedia PDF Downloads 287
19116 QSRR Analysis of 17-Picolyl and 17-Picolinylidene Androstane Derivatives Based on Partial Least Squares and Principal Component Regression
Authors: Sanja Podunavac-Kuzmanović, Strahinja Kovačević, Lidija Jevrić, Evgenija Djurendić, Jovana Ajduković
Abstract:
There are several methods for determining the lipophilicity of biologically active compounds; however, chromatography has been shown to be a very suitable method for this purpose. Chromatographic (C18-RP-HPLC) analysis of a series of 24 17-picolyl and 17-picolinylidene androstane derivatives was carried out. The obtained retention indices (log k, methanol (90%) / water (10%)) were correlated with calculated physicochemical and lipophilicity descriptors. The QSRR analysis was carried out applying principal component regression (PCR) and partial least squares regression (PLS). The PCR and PLS models were selected on the basis of the highest variance and the lowest root mean square error of cross-validation. The obtained PCR and PLS models successfully correlate the calculated molecular descriptors with the log k parameter, indicating the significance of the lipophilicity of the compounds in the chromatographic process. On the basis of the obtained results, it can be concluded that the obtained log k parameters of the analyzed androstane derivatives can be considered as their chromatographic lipophilicity. These results are part of project No. 114-451-347/2015-02, financially supported by the Provincial Secretariat for Science and Technological Development of Vojvodina and CMST COST Action CM1105.
Keywords: androstane derivatives, chromatography, molecular structure, principal component regression, partial least squares regression
Procedia PDF Downloads 277
19115 UV-Vis Spectroscopy as a Tool for Online Tar Measurements in Wood Gasification Processes
Authors: Philip Edinger, Christian Ludwig
Abstract:
The formation and control of tars remain one of the major challenges in the implementation of biomass gasification technologies. Robust, on-line analytical methods are needed to investigate the fate of tar compounds when different measures for their reduction are applied. This work establishes an on-line UV-Vis method, based on a liquid quench sampling system, to monitor tar compounds in biomass gasification processes. Recorded spectra from the liquid phase were analyzed for their tar composition by means of classical least squares (CLS) and partial least squares (PLS) approaches. This allowed for the detection of UV-Vis active tar compounds with detection limits in the low parts per million by volume (ppmV) region. The developed method was then applied to two case studies. The first involved a lab-scale reactor, intended to investigate the decomposition of a limited number of tar compounds across a catalyst. The second study involved a gas scrubber as part of a pilot-scale wood gasification plant. Tar compound quantification results showed good agreement with off-line reference methods (GC-FID) when the complexity of the tar composition was limited. The two case studies show that the developed method can provide rapid, qualitative information on the tar composition for the purpose of process monitoring. In cases with a limited number of tar species, quantitative information about the individual tar compound concentrations provides an additional benefit of the analytical method.
Keywords: biomass gasification, on-line, tar, UV-Vis
Procedia PDF Downloads 259
19114 Dynamic Process Monitoring of an Ammonia Synthesis Fixed-Bed Reactor
Authors: Bothinah Altaf, Gary Montague, Elaine B. Martin
Abstract:
This study involves the modeling and monitoring of an ammonia synthesis fixed-bed reactor using partial least squares (PLS) and its variants. The process exhibits complex dynamic behavior due to the presence of heat recycling and feed quench. One limitation of a static PLS model in this situation is that it does not take account of the process dynamics, and hence dynamic PLS was used. Although it showed superior performance to static PLS in terms of prediction, the monitoring scheme was inappropriate, and hence adaptive PLS was considered. A limitation of adaptive PLS is that non-conforming observations also contribute to the model; therefore, a new adaptive approach was developed: robust adaptive dynamic PLS. This approach updates a dynamic PLS model and is robust to non-representative data. The developed methodology showed a clear improvement over existing approaches in terms of the modeling of the reactor and the detection of faults.
Keywords: ammonia synthesis fixed-bed reactor, dynamic partial least squares modeling, recursive partial least squares, robust modeling
Procedia PDF Downloads 393
19113 Determination of the Effective Economic and/or Demographic Indicators in Classification of European Union Member and Candidate Countries Using Partial Least Squares Discriminant Analysis
Authors: Esra Polat
Abstract:
Partial least squares discriminant analysis (PLSDA) is a statistical method for classification and consists of a classical partial least squares regression (PLSR) in which the dependent variable is a categorical one expressing the class membership of each observation. PLSDA can be applied in many cases where classical discriminant analysis cannot: for example, when the number of observations is low and the number of independent variables is high. When there are missing values, PLSDA can be applied to the data that are available. Finally, it is suitable when multicollinearity between the independent variables is high. The aim of this study is to determine the economic and/or demographic indicators which are effective in grouping the 28 European Union (EU) member countries and 7 candidate countries (including the potential candidates Bosnia and Herzegovina (BiH) and Kosova) by using the data set obtained from the database of the World Bank for 2014. Leaving political issues aside, the analysis is only concerned with the economic and demographic variables that have a potential influence on a country's eligibility for EU entrance. Hence, in this study, both the performance of the PLSDA method in classifying the countries correctly into their pre-defined groups (candidate or member) and the differences between the EU countries and candidate countries in terms of these indicators are analyzed. As a result of the PLSDA, the percentage correctness value of 100% indicates that all of the 35 countries are classified correctly. Moreover, the most important variables that determine the statuses of member and candidate countries in terms of economic indicators are identified as 'external balance on goods and services (% GDP)', 'gross domestic savings (% GDP)' and 'gross national expenditure (% GDP)', which means that for 2014 the economic structure of countries is the most important determinant of EU membership.
Subsequently, the model was validated to prove its predictive ability by using the data set for 2015. For the prediction sample, 97.14% of the countries are correctly classified. An interesting result is obtained for BiH alone, which is still a potential candidate for the EU but is predicted as a member of the EU when the indicators data set for 2015 is used as a prediction sample. Although BiH has made a significant transformation from a war-torn country to a semi-functional state, ethnic tensions, nationalistic rhetoric and political disagreements are still evident, which inhibit Bosnian progress towards the EU.
Keywords: classification, demographic indicators, economic indicators, European Union, partial least squares discriminant analysis
Procedia PDF Downloads 280
19112 Sparse Signal Restoration Algorithm Based on Piecewise Adaptive Backtracking Orthogonal Least Squares
Authors: Linyu Wang, Jiahui Ma, Jianhong Xiang, Hanyu Jiang
Abstract:
The traditional greedy compressed sensing algorithm needs to know the signal sparsity when recovering the signal, but the signal sparsity in practical applications cannot be obtained as a priori information, and the recovery accuracy is low, which does not meet the needs of practical applications. To solve this problem, this paper puts forward the piecewise adaptive backtracking orthogonal least squares algorithm. The algorithm is divided into two stages. In the first stage, a sparsity pre-estimation strategy is adopted, which can quickly approach the real sparsity and reduce time consumption. In the second-stage iteration, a correction strategy and an adaptive step size are used to accurately estimate the sparsity, and the backtracking idea is introduced to improve the accuracy of signal recovery. Experimental simulation shows that the algorithm can accurately recover the estimated signal with fewer iterations when the sparsity is unknown.
Keywords: compressed sensing, greedy algorithm, least squares method, adaptive reconstruction
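For context, a minimal orthogonal-matching-pursuit-style greedy recovery, the family the proposed algorithm refines, can be sketched as below. This sketch assumes the sparsity k is known, which is exactly the limitation the paper addresses; the sensing matrix and signal are synthetic:

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic compressed-sensing problem: y = A @ x_true with x_true k-sparse.
m, n, k = 60, 100, 4
A = rng.normal(size=(m, n)) / np.sqrt(m)         # sensing matrix
x_true = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x_true[support] = np.array([5.0, -4.0, 3.5, -3.0])
y = A @ x_true

# Greedy recovery: at each step pick the atom most correlated with the
# residual, then re-fit all chosen atoms by least squares.
residual, chosen = y.copy(), []
for _ in range(k):                               # sparsity assumed known here
    j = int(np.argmax(np.abs(A.T @ residual)))   # most correlated atom
    chosen.append(j)
    coef, *_ = np.linalg.lstsq(A[:, chosen], y, rcond=None)
    residual = y - A[:, chosen] @ coef

x_hat = np.zeros(n)
x_hat[chosen] = coef
print(np.linalg.norm(x_hat - x_true))
```

The paper's two-stage scheme replaces the fixed iteration count `k` with a pre-estimated and then adaptively corrected sparsity, plus backtracking over the chosen atoms.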
Procedia PDF Downloads 148
19111 Mobile Platform’s Attitude Determination Based on Smoothed GPS Code Data and Carrier-Phase Measurements
Authors: Mohamed Ramdani, Hassen Abdellaoui, Abdenour Boudrassen
Abstract:
Mobile platform attitude estimation approaches are mainly based on combined positioning techniques and developed algorithms, which aim to reach a fast and accurate solution. In this work, we describe the design and implementation of an attitude determination (AD) process using only measurements from GPS sensors. The approach is based on GPS code data smoothed with the Hatch filter and raw carrier-phase measurements, integrated into an attitude algorithm based on vector measurements using the least squares (LSQ) estimation method. A GPS dataset from a static experiment is used to investigate the effectiveness of the presented approach and consequently to check the accuracy of the attitude estimation algorithm. Attitude results from a GPS multi-antenna setup over short baselines are introduced and analyzed. The 3D accuracy of the estimated attitude parameters using smoothed measurements is over 0.27°.
Keywords: attitude determination, GPS code data smoothing, Hatch filter, carrier-phase measurements, least-squares attitude estimation
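The Hatch filter mentioned here blends noisy code pseudoranges with low-noise carrier-phase increments. A sketch with synthetic measurements (the geometry and noise levels are invented, not taken from the experiment):

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic single-satellite scenario: slowly changing range, noisy code
# pseudoranges, and carrier phase (expressed in metres) with far less noise.
n = 100
true_range = 2.0e7 + 5.0 * np.arange(n)          # metres
code = true_range + 1.5 * rng.normal(size=n)     # code measurement (noisy)
phase = true_range + 0.01 * rng.normal(size=n)   # carrier phase (precise)

smoothed = np.empty(n)
smoothed[0] = code[0]
for k in range(1, n):
    w = 1.0 / (k + 1)                            # growing smoothing window
    # Hatch recursion: blend the new code value with the previous smoothed
    # value propagated forward by the carrier-phase increment.
    smoothed[k] = w * code[k] + (1 - w) * (smoothed[k - 1] + phase[k] - phase[k - 1])

print(np.std(code - true_range), np.std(smoothed - true_range))
```

The smoothed pseudorange keeps the unambiguous absolute level of the code while inheriting the low noise of the carrier, which is what improves the downstream least-squares attitude solution.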
Procedia PDF Downloads 155
19110 Voltage Problem Location Classification Using Performance of Least Squares Support Vector Machine LS-SVM and Learning Vector Quantization LVQ
Authors: M. Khaled Abduesslam, Mohammed Ali, Basher H. Alsdai, Muhammad Nizam Inayati
Abstract:
This paper presents voltage problem location classification using the performance of the least squares support vector machine (LS-SVM) and learning vector quantization (LVQ) in an electrical power system for proper voltage problem location, implemented on the IEEE 39-bus New England system. The data were collected from time-domain simulation using the Power System Analysis Toolbox (PSAT). Outputs from the simulation data, such as voltage, phase angle, real power and reactive power, were taken as input to estimate voltage stability at particular buses based on the Power Transfer Stability Index (PTSI). The simulation was carried out on the IEEE 39-bus test system by considering increased load on the system. To verify the proposed LS-SVM, its performance was compared to learning vector quantization (LVQ). The results showed that LS-SVM is faster and better as compared to LVQ. The results also demonstrated that the LS-SVM had 0% misclassification, whereas LVQ had 7.69% misclassification.
Keywords: IEEE 39 bus, least squares support vector machine, learning vector quantization, voltage collapse
Procedia PDF Downloads 441
19109 Near-Infrared Hyperspectral Imaging Spectroscopy to Detect Microplastics and Pieces of Plastic in Almond Flour
Authors: H. Apaza, L. Chévez, H. Loro
Abstract:
Plastic and microplastic pollution in the human food chain is a serious problem for human health, and it requires more elaborate techniques that can identify their presence in different kinds of food. Hyperspectral imaging is an optical technique that can detect the presence of different elements in an image and can be used to detect plastics and microplastics in a scene. This requires statistical techniques that must be evaluated and compared in order to find the most efficient ones. In this work, two problems related to the presence of plastics are addressed: the first is to detect and identify pieces of plastic immersed in almond seeds, and the second is to detect and quantify microplastics in almond flour. To do this, we analyze hyperspectral images taken in the range of 900 to 1700 nm using four unmixing techniques: least squares unmixing (LSU), non-negatively constrained least squares unmixing (NCLSU), fully constrained least squares unmixing (FCLSU), and scaled constrained least squares unmixing (SCLSU). The NCLSU, FCLSU, and SCLSU techniques manage to find the region where the plastic is located and also quantify the amount of microplastic contained in the almond flour. The SCLSU technique estimated an abundance of 13.03% microplastics and 86.97% almond flour, compared to the 16.66% microplastics and 83.33% almond flour prepared for the experiment. The results show the feasibility of applying near-infrared hyperspectral image analysis to the detection of plastic contaminants in food. Keywords: food, plastic, microplastic, NIR hyperspectral imaging, unmixing
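The LSU family above reduces to linear algebra per pixel. As a hedged sketch (not the authors' code; names are illustrative), the snippet below solves plain least-squares unmixing and the sum-to-one constrained variant in closed form via its KKT system; the nonnegativity constraints of NCLSU/FCLSU would additionally require an iterative active-set solver such as NNLS.

```python
import numpy as np

def lsu(E, x):
    """Unconstrained least-squares unmixing: abundances a minimizing ||E a - x||,
    where the columns of E are endmember spectra and x is a pixel spectrum."""
    a, *_ = np.linalg.lstsq(E, x, rcond=None)
    return a

def sum_to_one_lsu(E, x):
    """Least-squares unmixing with the sum-to-one constraint 1^T a = 1,
    solved exactly via the KKT system of the equality-constrained problem."""
    p = E.shape[1]
    K = np.zeros((p + 1, p + 1))
    K[:p, :p] = E.T @ E        # normal-equations block
    K[:p, p] = 1.0             # constraint gradient
    K[p, :p] = 1.0
    rhs = np.concatenate((E.T @ x, [1.0]))
    sol = np.linalg.solve(K, rhs)
    return sol[:p]             # abundances (last entry is the multiplier)
```

Applied per pixel, the abundance map directly yields percentage estimates of the kind reported above (e.g. a 13%/87% split between two endmembers).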
Procedia PDF Downloads 130
19108 Introduction of Artificial Intelligence for Estimating Fractal Dimension and Its Applications in the Medical Field
Authors: Zerroug Abdelhamid, Danielle Chassoux
Abstract:
Various models are given to simulate homogeneous or heterogeneous cancerous tumors and to extract the boundary in each case. The fractal dimension is then estimated by the least squares method and compared to some previous methods. Keywords: simulation, cancerous tumor, Markov fields, fractal dimension, extraction, recovering
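The abstract does not specify the estimator, but a standard reading of "fractal dimension estimated by the least squares method" is box counting with a least-squares slope fit over log-log scales, sketched here as an assumption-laden illustration.

```python
import numpy as np

def box_counting_dimension(points, epsilons):
    """Estimate fractal dimension of a 2-D point set (e.g. a tumor boundary):
    least-squares slope of log N(eps) against log(1/eps), where N(eps) is the
    number of grid boxes of side eps occupied by the points."""
    counts = []
    for eps in epsilons:
        # Map each point to its grid cell and count distinct occupied cells
        cells = np.unique(np.floor(points / eps), axis=0)
        counts.append(len(cells))
    log_inv_eps = np.log(1.0 / np.asarray(epsilons, dtype=float))
    log_counts = np.log(counts)
    slope, _ = np.polyfit(log_inv_eps, log_counts, 1)   # least-squares fit
    return slope
```

A smooth boundary curve yields a slope near 1, a filled region near 2, and irregular tumor boundaries fall in between.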
Procedia PDF Downloads 365
19107 Using the Technology Acceptance Model to Examine Seniors’ Attitudes toward Facebook
Authors: Chien-Jen Liu, Shu Ching Yang
Abstract:
Using the technology acceptance model (TAM), this study examined the external variable of technological complexity (TC) to acquire a better understanding of the factors that influence the acceptance of computer application courses by learners at Active Aging Universities. After the learners in this study had completed a 27-hour Facebook course, 44 learners responded to a modified TAM survey. Data were collected to examine the path relationships among the variables that influence the acceptance of Facebook-mediated community learning. The partial least squares (PLS) method was used to test the measurement and the structural model. The study results demonstrated that attitudes toward Facebook use directly influence behavioral intentions (BI) with respect to Facebook use, evincing a high prediction rate of 58.3%. In addition to the perceived usefulness (PU) and perceived ease of use (PEOU) measures that are proposed in the TAM, other external variables, such as TC, also indirectly influence BI. These four variables can explain 88% of the variance in BI and demonstrate a high level of predictive ability. Finally, limitations of this investigation and implications for further research are discussed. Keywords: technology acceptance model (TAM), technological complexity, partial least squares (PLS), perceived usefulness
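Full PLS path modeling as used in TAM studies is more elaborate, but its core building block is the PLS latent-component step. The sketch below (a one-component NIPALS-style PLS regression, purely illustrative and not the study's analysis) shows how a latent score and an explained-variance figure of the kind quoted above are obtained.

```python
import numpy as np

def pls1_component(X, y):
    """One PLS component for a single response: weight vector w, scores
    t = Xc w, and the fraction of variance in y explained by t (analogous
    to the R^2 a PLS path model reports for an endogenous construct)."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    w = Xc.T @ yc                 # covariance-maximizing direction
    w /= np.linalg.norm(w)
    t = Xc @ w                    # latent scores
    coef = (t @ yc) / (t @ t)     # inner regression of y on t
    r2 = 1.0 - np.sum((yc - coef * t) ** 2) / np.sum(yc ** 2)
    return w, t, r2
```

In a path model, indicator blocks (e.g. PU, PEOU, TC items) each get such latent scores, which are then regressed on one another to estimate the structural paths.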
Procedia PDF Downloads 346
19106 Comparison between Some of Robust Regression Methods with OLS Method with Application
Authors: Sizar Abed Mohammed, Zahraa Ghazi Sadeeq
Abstract:
The classic least squares method (OLS) is used to estimate the linear regression parameters when its assumptions hold, and the resulting estimators then have good properties such as unbiasedness, minimum variance, and consistency. Alternative statistical techniques have been developed to estimate the parameters when the data are contaminated with outliers; these are the robust (or resistant) methods. In this paper, three robust methods are studied: the maximum-likelihood-type estimator (M-estimator), the modified maximum-likelihood-type estimator (MM-estimator), and the least trimmed squares estimator (LTS-estimator), and their results are compared with the OLS method. These methods were applied to real data taken from the Duhok company for manufacturing furniture, and the obtained results were compared using the criteria mean squared error (MSE), mean absolute percentage error (MAPE), and mean sum of absolute error (MSAE). Important conclusions of this study are the following. In the furniture-line data, a number of outlying values detected by the four methods lie very close to the bulk of the data, which indicates that the distribution of the standard errors is close to normal; in the doors-line data, however, OLS detected fewer outliers than the robust methods, which means that the distribution of the standard errors departs far from normality. Another important conclusion is that the parameter values estimated by OLS are very far from those estimated by the robust methods for the doors line: the LTS-estimator gave better results under the MSE criterion, the M-estimator gave better results under the MAPE criterion, and the MM-estimator was better under the MSAE criterion. The programs S-Plus (version 8.0, Professional 2007), Minitab (version 13.2), and SPSS (version 17) were used to analyze the data. Keywords: robust regression, LTS, M-estimator, MSE
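To make the OLS-versus-robust contrast concrete, the sketch below fits a Huber M-estimator by iteratively reweighted least squares (IRLS). This is a generic illustration of the M-estimation idea, not the paper's S-Plus/Minitab/SPSS analysis; the function name and tuning constant are standard choices, not taken from the source.

```python
import numpy as np

def huber_m_estimate(X, y, c=1.345, n_iter=50):
    """Huber M-estimator of linear regression coefficients via IRLS;
    c = 1.345 gives ~95% efficiency at the normal model. Returns
    [intercept, slopes...]."""
    Xd = np.column_stack([np.ones(len(y)), X])       # design matrix with intercept
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)    # OLS starting values
    for _ in range(n_iter):
        r = y - Xd @ beta
        # Robust scale estimate: normalized median absolute deviation
        s = np.median(np.abs(r - np.median(r))) / 0.6745
        u = r / (s + 1e-12)
        # Huber weights: 1 inside [-c, c], c/|u| outside (downweights outliers)
        w = np.where(np.abs(u) <= c, 1.0, c / np.abs(u))
        beta = np.linalg.solve(Xd.T @ (w[:, None] * Xd), Xd.T @ (w * y))
    return beta
```

On data with a few gross outliers, the M-estimate stays near the true line while OLS is pulled toward the contamination, which is the behavior the comparison above measures with MSE, MAPE, and MSAE.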
Procedia PDF Downloads 232
19105 Automating and Optimization Monitoring Prognostics for Rolling Bearing
Authors: H. Hotait, X. Chiementin, L. Rasolofondraibe
Abstract:
This paper presents continuing work to detect abnormal states in rolling bearings by studying the vibration signature and calculating the remaining useful life. To achieve these aims, two methods are used: the first is classification, to detect the degradation state, by the AOM-OPTICS (Acousto-Optic Modulator) method; the second is prediction of the degradation state using least-squares support vector regression, which is then compared with a linear degradation model. An experimental investigation on a ball bearing was conducted to assess the effectiveness of the method by applying it to the acquired vibration signals. The proposed model for predicting the state of the bearing gives accurate results on the experimental and numerical data. Keywords: bearings, automatization, optimization, prognosis, classification, defect detection
Procedia PDF Downloads 120
19104 A Periodogram-Based Spectral Method Approach: The Relationship between Tourism and Economic Growth in Turkey
Authors: Mesut BALIBEY, Serpil TÜRKYILMAZ
Abstract:
A popular topic in the econometrics and time series area is the cointegrating relationships among the components of a nonstationary time series. Engle and Granger's least squares method and Johansen's conditional maximum likelihood method are the most widely used methods to determine the relationships among variables. Furthermore, a unit root test based on the periodogram ordinates has certain advantages over conventional tests: periodograms can be calculated without any model specification, and the exact distribution under the assumption of a unit root is obtained; for higher-order processes the distribution remains the same asymptotically. In this study, in order to demonstrate these advantages over conventional tests, we examine a possible relationship between tourism and economic growth in Turkey during the period 1999:01-2010:12 by using the periodogram method, Johansen's conditional maximum likelihood method, and Engle and Granger's ordinary least squares method. Keywords: cointegration, economic growth, periodogram ordinate, tourism
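The periodogram ordinates on which such a test is built are simple to compute; the sketch below uses a common normalization, I(w_j) = (1/n)|sum_t x_t exp(-i w_j t)|^2 at the Fourier frequencies, which may differ by a constant from the one in the specific test the study applies.

```python
import numpy as np

def periodogram(x):
    """Periodogram ordinates I(w_j) = (1/n) |sum_t x_t exp(-i w_j t)|^2
    at the Fourier frequencies w_j = 2*pi*j/n, j = 1, ..., floor(n/2)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    dft = np.fft.fft(x - x.mean())          # remove the mean before transforming
    j = np.arange(1, n // 2 + 1)
    freqs = 2.0 * np.pi * j / n
    ordinates = (np.abs(dft[j]) ** 2) / n
    return freqs, ordinates
```

For a unit-root (random-walk) series the low-frequency ordinates dominate and grow with the sample size, which is the behavior a periodogram-based unit root test exploits, with no model specification required.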
Procedia PDF Downloads 270
19103 Sparse Principal Component Analysis: A Least Squares Approximation Approach
Authors: Giovanni Merola
Abstract:
Sparse Principal Components Analysis aims to find principal components with few non-zero loadings. We derive such sparse solutions by adding a genuine sparsity requirement to the original Principal Components Analysis (PCA) objective function. This approach differs from others because it preserves PCA's original optimality: uncorrelatedness of the components and least squares approximation of the data. To identify the best subset of non-zero loadings, we propose a branch-and-bound search and an iterative elimination algorithm. The latter algorithm finds sparse solutions with large loadings and can be run without specifying in advance the cardinality of the loadings or the number of components to compute. We give thorough comparisons with the existing sparse PCA methods and several examples on real datasets. Keywords: SPCA, uncorrelated components, branch-and-bound, backward elimination
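As a rough illustration of the backward-elimination idea (a naive sketch, not Merola's actual algorithm or objective), the snippet below repeatedly drops the variable with the smallest absolute loading from the first principal component and re-solves on the surviving variables.

```python
import numpy as np

def sparse_pc1_backward(X, k):
    """First sparse principal component by naive backward elimination:
    repeatedly drop the variable with the smallest absolute loading and
    recompute the leading eigenvector of the covariance restricted to the
    surviving variables, until only k non-zero loadings remain."""
    S = np.cov(X, rowvar=False)
    support = list(range(S.shape[0]))
    while True:
        Ssub = S[np.ix_(support, support)]
        vals, vecs = np.linalg.eigh(Ssub)
        v = vecs[:, -1]                         # leading eigenvector
        if len(support) <= k:
            break
        support.pop(int(np.argmin(np.abs(v))))  # eliminate the weakest variable
    loadings = np.zeros(S.shape[0])
    loadings[support] = v
    return loadings
```

Unlike this fixed-cardinality sketch, the elimination algorithm described in the abstract can decide the cardinality itself from the least-squares approximation criterion.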
Procedia PDF Downloads 381