Search results for: least squares method
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 18590

18560 Sparse Principal Component Analysis: A Least Squares Approximation Approach

Authors: Giovanni Merola

Abstract:

Sparse Principal Components Analysis aims to find principal components with few non-zero loadings. We derive such sparse solutions by adding a genuine sparsity requirement to the original Principal Components Analysis (PCA) objective function. This approach differs from others because it preserves PCA's original optimality: uncorrelatedness of the components and least squares approximation of the data. To identify the best subset of non-zero loadings we propose a branch-and-bound search and an iterative elimination algorithm. This last algorithm finds sparse solutions with large loadings and can be run without specifying the cardinality of the loadings and the number of components to compute in advance. We give thorough comparisons with the existing sparse PCA methods and several examples on real datasets.
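
The abstract does not spell out the elimination steps; as a loose illustration only, a backward-elimination search for a single sparse component might be sketched as below (the eigen-refit on the active set and the stopping rule are assumptions, not the authors' algorithm).

```python
import numpy as np

def sparse_pc_backward(X, min_card=2):
    """Backward elimination for one sparse principal component (rough sketch).

    Repeatedly drop the variable with the smallest loading and refit the
    leading eigenvector on the remaining variables, recording the variance
    explained at each cardinality.
    """
    X = X - X.mean(axis=0)                 # centre the data
    S = X.T @ X / (len(X) - 1)             # sample covariance matrix
    active = list(range(S.shape[0]))       # indices with non-zero loadings
    path = []
    while len(active) >= min_card:
        sub = S[np.ix_(active, active)]
        vals, vecs = np.linalg.eigh(sub)
        v = vecs[:, -1]                    # leading eigenvector on the active set
        loadings = np.zeros(S.shape[0])
        loadings[active] = v
        path.append((len(active), vals[-1], loadings))
        active.pop(int(np.argmin(np.abs(v))))   # eliminate the smallest loading
    return path

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))
for card, explained_var, w in sparse_pc_backward(X, min_card=3):
    print(card, round(explained_var, 3))
```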

Keywords: SPCA, uncorrelated components, branch-and-bound, backward elimination

Procedia PDF Downloads 341
18559 Chaotic Sequence Noise Reduction and Chaotic Recognition Rate Improvement Based on Improved Local Geometric Projection

Authors: Rubin Dan, Xingcai Wang, Ziyang Chen

Abstract:

A chaotic time series noise reduction method based on the fusion of the local projection method, wavelet transform, and the particle swarm algorithm (referred to as the LW-PSO method) is proposed to address false recognition caused by noise when identifying chaotic time series. The method first uses phase space reconstruction to recover the characteristics of the original dynamical system and removes the noise subspace by selecting the neighborhood radius; it then uses the wavelet transform to remove the D1-D3 high-frequency components so as to retain as much signal information as possible, while least-squares optimization is performed by the particle swarm algorithm. The Lorenz system contaminated with 30% Gaussian white noise is simulated for verification, and the phase space, SNR, RMSE, and the K value of the 0-1 test before and after noise reduction are compared and analyzed for the Schreiber method, the local projection method, the wavelet transform method, and the LW-PSO method, showing that the LW-PSO method achieves a better noise reduction effect than the other three common methods. The method is also applied to a classical system to evaluate the noise reduction and system identification performance of the four methods, further verifying the superiority of the LW-PSO method. Finally, it is applied to the Chengdu rainfall chaotic sequence, and the results show that the LW-PSO method can effectively reduce noise and improve the chaos recognition rate.
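
As an illustration of the wavelet step described above (removing the D1-D3 high-frequency detail bands), a minimal sketch using PyWavelets is given below; the wavelet family, decomposition depth, and test signal are assumptions, and the local projection and particle swarm parts of LW-PSO are not shown.

```python
import numpy as np
import pywt

def remove_high_freq_details(signal, wavelet="db4", drop_levels=3, total_levels=6):
    """Zero the D1-D3 detail coefficients and reconstruct the signal.

    Illustrative only: wavelet choice and decomposition depth are assumptions,
    not the parameters used in the paper.
    """
    coeffs = pywt.wavedec(signal, wavelet, level=total_levels)
    # coeffs = [cA_n, cD_n, ..., cD2, cD1]; the last `drop_levels` entries
    # are the finest (highest-frequency) detail bands D1..D3.
    for i in range(1, drop_levels + 1):
        coeffs[-i] = np.zeros_like(coeffs[-i])
    return pywt.waverec(coeffs, wavelet)

t = np.linspace(0, 10, 2048)
noisy = np.sin(2 * np.pi * t) + 0.3 * np.random.randn(t.size)
denoised = remove_high_freq_details(noisy)
```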

Keywords: Schreiber noise reduction, wavelet transform, particle swarm optimization, 0-1 test method, chaotic sequence denoising

Procedia PDF Downloads 164
18558 Design of Compact UWB Multilayered Microstrip Filter with Wide Stopband

Authors: N. Azadi-Tinat, H. Oraizi

Abstract:

The design of a compact UWB multilayered microstrip filter with E-shaped resonators is presented, which provides a wide stopband up to 20 GHz and arbitrary impedance matching. The design procedure is developed based on the method of least squares and the theory of N-coupled transmission lines. The dimensions of the designed filter are about 11 mm × 11 mm, and the three E-shaped resonators are placed among four dielectric layers. The average insertion loss in the passband is less than 1 dB, and in the stopband it is about 30 dB up to 20 GHz. Its group delay in the UWB region is about 0.5 ns. The performance of the optimized filter design agrees closely with microwave simulation software.

Keywords: method of least square, multilayer microstrip filter, n-coupled transmission lines, ultra-wideband

Procedia PDF Downloads 359
18557 Analytical Authentication of Butter Using Fourier Transform Infrared Spectroscopy Coupled with Chemometrics

Authors: M. Bodner, M. Scampicchio

Abstract:

Fourier Transform Infrared (FT-IR) spectroscopy coupled with chemometrics was used to distinguish between butter samples and non-butter samples. Further, quantification of the content of margarine in adulterated butter samples was investigated. The fingerprint region (1400-800 cm⁻¹) was used to develop unsupervised pattern recognition (Principal Component Analysis, PCA), supervised modeling (Soft Independent Modelling by Class Analogy, SIMCA), classification (Partial Least Squares Discriminant Analysis, PLS-DA) and regression (Partial Least Squares Regression, PLS-R) models. PCA of the fingerprint region shows a clustering of the two sample types. All samples were classified in their rightful class by the SIMCA approach; however, nine adulterated samples (between 1% and 30% w/w of margarine) were classified as belonging to both the butter class and the non-butter class. In the two-class PLS-DA model (R² = 0.73; RMSEP, Root Mean Square Error of Prediction = 0.26% w/w), the sensitivity was 71.4% and the Positive Predictive Value (PPV) 100%. Its threshold was calculated at 7% w/w of margarine in adulterated butter samples. Finally, a PLS-R model (R² = 0.84, RMSEP = 16.54%) was developed. PLS-DA was a suitable classification tool and PLS-R a proper quantification approach. The results demonstrate that FT-IR spectroscopy combined with PLS-R can be used as a rapid, simple and safe method to distinguish pure butter samples from adulterated ones and to determine the grade of adulteration of margarine in butter samples.
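
As a rough sketch of how PLS-DA and PLS-R models of this kind can be fitted, the snippet below uses scikit-learn's PLSRegression on synthetic stand-in spectra; the data, number of latent variables, and 0.5 decision threshold are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# Hypothetical data: rows are FT-IR spectra restricted to 1400-800 cm-1,
# y_class is 1 for butter and 0 for non-butter, y_frac is % margarine (w/w).
rng = np.random.default_rng(1)
X = rng.normal(size=(60, 300))
y_class = rng.integers(0, 2, size=60)
y_frac = rng.uniform(0, 30, size=60)

# PLS-DA: regress the 0/1 class label on the spectra and threshold the score.
plsda = PLSRegression(n_components=5).fit(X, y_class)
predicted_class = (plsda.predict(X).ravel() > 0.5).astype(int)

# PLS-R: regress the adulteration level directly on the spectra.
plsr = PLSRegression(n_components=5).fit(X, y_frac)
predicted_frac = plsr.predict(X).ravel()
```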

Keywords: adulterated butter, margarine, PCA, PLS-DA, PLS-R, SIMCA

Procedia PDF Downloads 113
18556 Interaction between Mutual Fund Performance and Portfolio Turnover

Authors: Sheng-Ching Wu

Abstract:

This paper examines the interaction between mutual fund performance and portfolio turnover. Active trading could affect fund performance, but underperforming funds may also be traded more actively in an attempt to improve performance. Therefore, we used two-stage least squares to address simultaneity. The results indicate that funds with higher portfolio turnover exhibit inferior performance compared with funds having lower turnover. Moreover, funds with poor performance exhibit higher portfolio turnover. The findings support the assumptions that active trading erodes performance, and that fund managers with poor performance attempt to trade actively to retain employment.
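
A minimal sketch of the two-stage least squares estimator itself (not the paper's exact specification of the performance and turnover equations) can be written with plain NumPy:

```python
import numpy as np

def two_stage_least_squares(y, X_endog, Z_instruments, X_exog=None):
    """Plain 2SLS via two consecutive OLS fits (illustrative sketch).

    Stage 1 regresses the endogenous regressor(s) on the instruments and any
    exogenous controls; stage 2 regresses y on the controls plus the fitted
    values from stage 1.
    """
    n = len(y)
    X_endog = np.asarray(X_endog).reshape(n, -1)
    Z_instruments = np.asarray(Z_instruments).reshape(n, -1)
    const = np.ones((n, 1))
    exog = const if X_exog is None else np.hstack([const, np.asarray(X_exog).reshape(n, -1)])
    Z = np.hstack([exog, Z_instruments])
    X_hat = Z @ np.linalg.lstsq(Z, X_endog, rcond=None)[0]   # stage 1 fitted values
    W = np.hstack([exog, X_hat])
    beta = np.linalg.lstsq(W, y, rcond=None)[0]              # stage 2 coefficients
    return beta

# Hypothetical call, with made-up variable names:
# beta = two_stage_least_squares(performance, turnover, instruments)
```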

Keywords: mutual funds, portfolio turnover, simultaneity, two-stage least squares

Procedia PDF Downloads 406
18555 Channel Estimation for LTE Downlink

Authors: Rashi Jain

Abstract:

LTE systems employ Orthogonal Frequency Division Multiplexing (OFDM) as the multiple access technology for the downlink channels. For enhanced performance, accurate channel estimation is required. Various algorithms such as Least Squares (LS), Minimum Mean Square Error (MMSE) and Recursive Least Squares (RLS) can be employed for the purpose. The paper proposes a channel estimation algorithm based on the Kalman filter for the LTE downlink. Using the frequency-domain pilots, the initial channel response is obtained using the LS criterion. The Kalman filter is then employed to track the channel variations in the time domain. To suppress the noise within a symbol, threshold processing is employed. The paper draws a comparison among the LS, MMSE, RLS and Kalman filter approaches for channel estimation. The parameters for evaluation are Bit Error Rate (BER), Mean Square Error (MSE) and run-time.
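
A minimal sketch of the two ingredients described above, an LS estimate at the pilot positions followed by Kalman tracking of a single subcarrier under an assumed random-walk channel model, might look as follows (the noise variances are illustrative assumptions):

```python
import numpy as np

def ls_pilot_estimate(y_pilots, x_pilots):
    """Least-squares channel estimate at pilot subcarriers: H = Y / X."""
    return y_pilots / x_pilots

def kalman_track(h_ls, q=1e-3, r=1e-2):
    """Track one subcarrier's channel gain across OFDM symbols.

    Random-walk state model with process noise q and measurement noise r;
    h_ls is the sequence of per-symbol LS estimates for that subcarrier.
    """
    h_est, p = h_ls[0], 1.0
    tracked = [h_est]
    for z in h_ls[1:]:
        p_pred = p + q                     # predict (random-walk model)
        k = p_pred / (p_pred + r)          # Kalman gain
        h_est = h_est + k * (z - h_est)    # update with the new LS observation
        p = (1 - k) * p_pred
        tracked.append(h_est)
    return np.array(tracked)
```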

Keywords: LTE, channel estimation, OFDM, RLS, Kalman filter, threshold

Procedia PDF Downloads 330
18554 Adaptive Online Object Tracking via Positive and Negative Models Matching

Authors: Shaomei Li, Yawen Wang, Chao Gao

Abstract:

To reduce the tracking drift that often occurs in adaptive tracking, an algorithm based on the fusion of tracking and detection is proposed in this paper. Firstly, object tracking is posed as a binary classification problem and is modeled by partial least squares (PLS) analysis. Secondly, the object is tracked frame by frame via particle filtering. Thirdly, tracking reliability is validated by matching against both positive and negative models. Finally, when drift occurs, the object is relocated based on SIFT feature matching and voting, and the object appearance model is updated at the same time. The algorithm can not only sense tracking drift but also relocate the object whenever needed. Experimental results demonstrate that this algorithm outperforms state-of-the-art algorithms on many challenging sequences.

Keywords: object tracking, tracking drift, partial least squares analysis, positive and negative models matching

Procedia PDF Downloads 494
18553 Microwave Dielectric Relaxation Study of Diethanolamine with Triethanolamine from 10 MHz-20 GHz

Authors: A. V. Patil

Abstract:

A microwave dielectric relaxation study of the diethanolamine-triethanolamine binary mixture has been carried out over the frequency range of 10 MHz to 20 GHz at various temperatures, using the time domain reflectometry (TDR) method, for 11 concentrations of the system. The present work reveals molecular interactions between the same multi-functional groups [-OH and -NH2] of the alkanolamines (diethanolamine and triethanolamine) using different models, such as the Debye, excess, and Kirkwood models. The dielectric parameters, viz. the static dielectric constant (ε0) and the relaxation time (τ), have been obtained by least-squares fitting of the Debye equation, characterized by a single relaxation time without a distribution of relaxation times.
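
As an illustration of least-squares fitting of the single-relaxation Debye equation, the sketch below fits synthetic complex permittivity data with SciPy; the frequency grid and parameter values are assumptions standing in for the measured TDR data.

```python
import numpy as np
from scipy.optimize import least_squares

def debye(omega, eps_inf, eps_s, tau):
    """Single-relaxation Debye model for the complex permittivity."""
    return eps_inf + (eps_s - eps_inf) / (1 + 1j * omega * tau)

def residuals(params, omega, eps_measured):
    eps_inf, eps_s, tau = params
    diff = debye(omega, eps_inf, eps_s, tau) - eps_measured
    return np.concatenate([diff.real, diff.imag])   # stack real and imaginary parts

# Synthetic data standing in for TDR measurements over 10 MHz - 20 GHz
omega = 2 * np.pi * np.logspace(7, np.log10(2e10), 200)
eps_data = debye(omega, 3.0, 30.0, 50e-12)
fit = least_squares(residuals, x0=[2.0, 20.0, 10e-12],
                    args=(omega, eps_data), x_scale=[1.0, 10.0, 1e-11])
eps_inf, eps_s, tau = fit.x   # static permittivity eps_s and relaxation time tau
```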

Keywords: diethanolamine, excess properties, kirkwood properties, time domain reflectometry, triethanolamine

Procedia PDF Downloads 272
18552 Online Prediction of Nonlinear Signal Processing Problems Based on Kernel Adaptive Filtering

Authors: Hamza Nejib, Okba Taouali

Abstract:

This paper presents two of the best-known kernel adaptive filtering (KAF) approaches, kernel least mean squares (KLMS) and kernel recursive least squares (KRLS), for predicting new outputs in nonlinear signal processing. Both methods implement a nonlinear transfer function using kernel methods in a particular space named the reproducing kernel Hilbert space (RKHS), where the model is a linear combination of kernel functions applied to the observed data after transforming them from the input space to a high-dimensional feature space; this idea is known as the kernel trick. KAF thus amounts to developing adaptive filters in the RKHS. We use two nonlinear signal processing problems, Mackey-Glass chaotic time series prediction and nonlinear channel equalization, to assess the performance of the presented approaches and to determine which of them is better suited.
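
A bare-bones sketch of the KLMS idea, an online filter whose output is a growing kernel expansion over past inputs, is given below; the Gaussian kernel width, step size, and toy series are illustrative assumptions, and KRLS is not shown.

```python
import numpy as np

def gaussian_kernel(a, b, sigma=1.0):
    return np.exp(-np.sum((a - b) ** 2) / (2 * sigma ** 2))

def klms_predict_sequence(X, d, step=0.2, sigma=1.0):
    """Kernel least mean squares: an online predictor built in an RKHS."""
    centers, alphas, predictions = [], [], []
    for x, target in zip(X, d):
        y = sum(a * gaussian_kernel(c, x, sigma) for a, c in zip(alphas, centers))
        predictions.append(y)
        error = target - y
        centers.append(x)          # every sample becomes a new kernel centre
        alphas.append(step * error)
    return np.array(predictions)

# One-step-ahead prediction of a toy nonlinear series
t = np.arange(400)
s = np.sin(0.05 * t) + 0.1 * np.sin(0.3 * t)
X = np.stack([s[i:i + 5] for i in range(len(s) - 6)])   # 5-sample input windows
d = s[5:-1]                                             # next-sample targets
y_hat = klms_predict_sequence(X, d)
```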

Keywords: online prediction, KAF, signal processing, RKHS, Kernel methods, KRLS, KLMS

Procedia PDF Downloads 366
18551 A Quadratic Approach for Generating Pythagorean Triples

Authors: P. K. Rahul Krishna, S. Sandeep Kumar, Jayanthi Sunder Raj

Abstract:

The article explores one of the important relations between numbers: the Pythagorean triples (triplets), which find application in distance measurement and in the construction of roads, towers, and buildings, and wherever the Pythagorean theorem applies. Pythagorean triples are sets of three natural numbers in which the sum of the squares of two of the numbers equals the square of the third. There are numerous methods and equations for obtaining the triplets, each with its own merits and demerits. Here, a quadratic approach for generating triples uses the hypotenuse-leg difference method. The advantage is that the variables are few, and finally only three independent variables are present.
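
The abstract does not reproduce the equations; one common reading of the hypotenuse-leg difference idea is sketched below. Fixing d = c - a, the identity b² = c² - a² = d(2a + d) gives a = (b² - d²)/(2d), so every b that makes this a natural number yields a triple. This is an illustrative reconstruction, not necessarily the authors' exact formulation.

```python
def triples_by_hypotenuse_leg_difference(d, b_max):
    """Generate triples (a, b, c) with c - a = d, using b^2 = d * (2a + d)."""
    triples = []
    for b in range(d + 1, b_max + 1):
        numerator = b * b - d * d
        if numerator > 0 and numerator % (2 * d) == 0:
            a = numerator // (2 * d)
            triples.append((a, b, a + d))      # then a^2 + b^2 = (a + d)^2 = c^2
    return triples

print(triples_by_hypotenuse_leg_difference(1, 20))  # e.g. (4, 3, 5), (12, 5, 13), (24, 7, 25), ...
```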

Keywords: arithmetic progression, hypotenuse leg difference method, natural numbers, Pythagorean triplets, quadratic equation

Procedia PDF Downloads 173
18550 Chemometric Estimation of Inhibitory Activity of Benzimidazole Derivatives by Linear Least Squares and Artificial Neural Networks Modelling

Authors: Sanja O. Podunavac-Kuzmanović, Strahinja Z. Kovačević, Lidija R. Jevrić, Stela Jokić

Abstract:

The subject of this paper is to correlate the antibacterial behavior of benzimidazole derivatives with their molecular characteristics using a chemometric QSAR (Quantitative Structure-Activity Relationships) approach. QSAR analysis has been carried out on the inhibitory activity of benzimidazole derivatives against Staphylococcus aureus. The data were processed by linear least squares (LLS) and artificial neural network (ANN) procedures. The LLS mathematical models have been developed as calibration models for prediction of the inhibitory activity. The quality of the models was validated by the leave-one-out (LOO) technique and by using an external data set. High agreement between experimental and predicted inhibitory activities indicated the good quality of the derived models. These results are part of the CMST COST Action No. CM1306 "Understanding Movement and Mechanism in Molecular Machines".
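
As an illustration of leave-one-out validation of a linear least squares calibration model, the sketch below uses scikit-learn on synthetic descriptors; the data and the cross-validated R² (Q²) computation are stand-ins, not the paper's data set.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

# Hypothetical descriptor matrix X (molecular descriptors of benzimidazole
# derivatives) and observed inhibitory activities y.
rng = np.random.default_rng(2)
X = rng.normal(size=(30, 4))
y = X @ np.array([1.2, -0.5, 0.8, 0.3]) + rng.normal(scale=0.1, size=30)

model = LinearRegression()
y_loo = cross_val_predict(model, X, y, cv=LeaveOneOut())             # leave-one-out predictions
q2 = 1 - np.sum((y - y_loo) ** 2) / np.sum((y - y.mean()) ** 2)      # cross-validated R^2
```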

Keywords: antibacterial, benzimidazoles, chemometric, QSAR

Procedia PDF Downloads 287
18549 Unit Root Tests Based On the Robust Estimator

Authors: Wararit Panichkitkosolkul

Abstract:

Unit root tests based on a robust estimator for the first-order autoregressive process are proposed and compared with unit root tests based on the ordinary least squares (OLS) estimator. The percentiles of the null distributions of the unit root tests are also reported. The empirical probabilities of Type I error and the powers of the unit root tests are estimated via Monte Carlo simulation. Simulation results show that all unit root tests can control the probability of Type I error in all situations. The empirical power of the unit root tests based on the robust estimator is higher than that of the unit root tests based on the OLS estimator.
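
A minimal sketch of the kind of Monte Carlo experiment described, estimating the empirical Type I error and power of an OLS-based unit root t-test on a first-order autoregressive process, is given below; the critical value, sample size, and replication count are illustrative assumptions, and the robust-estimator version is not shown.

```python
import numpy as np

def df_tstat(y):
    """OLS t-statistic for gamma = 0 in dy_t = gamma * y_{t-1} + e_t (no constant)."""
    y_lag, dy = y[:-1], np.diff(y)
    gamma = (y_lag @ dy) / (y_lag @ y_lag)          # OLS slope of dy on lagged y
    resid = dy - gamma * y_lag
    se = np.sqrt(resid @ resid / (len(dy) - 1) / (y_lag @ y_lag))
    return gamma / se

def empirical_rejection_rate(rho, n=100, reps=2000, crit=-1.95, seed=0):
    """Monte Carlo rejection rate of the unit root test for a given AR(1) rho.

    crit is roughly the 5% Dickey-Fuller critical value for the no-constant case.
    """
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(reps):
        e = rng.normal(size=n)
        y = np.zeros(n)
        for t in range(1, n):
            y[t] = rho * y[t - 1] + e[t]
        rejections += df_tstat(y) < crit
    return rejections / reps

print(empirical_rejection_rate(1.0))   # roughly the nominal Type I error under the null
print(empirical_rejection_rate(0.9))   # empirical power against rho = 0.9
```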

Keywords: autoregressive, ordinary least squares, type I error, power of the test, Monte Carlo simulation

Procedia PDF Downloads 262
18548 Sensor Registration in Multi-Static Sonar Fusion Detection

Authors: Longxiang Guo, Haoyan Hao, Xueli Sheng, Hanjun Yu, Jingwei Yin

Abstract:

In order to prevent target splitting and ensure the accuracy of fusion, system error registration is an important step in a multi-static sonar fusion detection system. To eliminate the inherent system errors of each sonar in detection, including distance error and angle error, this paper uses an offline estimation method for error registration. Suppose several sonars from different platforms work together to detect a target; the target position detected by each sonar is expressed in that sonar's own reference coordinate system. Based on the two-dimensional stereo projection method, this paper uses the real-time quality control (RTQC) method and the least squares (LS) method to estimate sensor biases. The RTQC method takes the average value of each sonar's data as the observation value, while the LS method applies least-squares processing to each sonar's data to obtain the observation value. A MATLAB simulation is carried out in an underwater acoustic environment, and the simulation results show that both algorithms can estimate the distance and angle errors of the sonar system. The performance of the two algorithms is also compared through the root mean square error, and the influence of measurement noise on registration accuracy is explored by simulation. The system error convergence of the RTQC method is rapid, but the distribution of targets has a serious impact on its performance. The LS method is not affected by the target distribution, but increasing random noise slows down its convergence rate. The LS method is an improvement on the RTQC method and is widely used in two-dimensional registration. The improved method can be used for underwater multi-target detection registration.

Keywords: data fusion, multi-static sonar detection, offline estimation, sensor registration problem

Procedia PDF Downloads 135
18547 Partial Least Square Regression for High-Dimensional and Highly Correlated Data

Authors: Mohammed Abdullah Alshahrani

Abstract:

The research focuses on investigating the use of partial least squares (PLS) methodology for addressing challenges associated with high-dimensional correlated data. Recent technological advancements have led to experiments producing data characterized by a large number of variables compared to observations, with substantial inter-variable correlations. Such data patterns are common in chemometrics, where near-infrared (NIR) spectrometer calibrations record chemical absorbance levels across hundreds of wavelengths, and in genomics, where thousands of genomic regions' copy number alterations (CNA) are recorded from cancer patients. PLS serves as a widely used method for analyzing high-dimensional data, functioning as a regression tool in chemometrics and a classification method in genomics. It handles data complexity by creating latent variables (components) from original variables. However, applying PLS can present challenges. The study investigates key areas to address these challenges, including unifying interpretations across three main PLS algorithms and exploring unusual negative shrinkage factors encountered during model fitting. The research presents an alternative approach to addressing the interpretation challenge of predictor weights associated with PLS. Sparse estimation of predictor weights is employed using a penalty function combining a lasso penalty for sparsity and a Cauchy distribution-based penalty to account for variable dependencies. The results demonstrate sparse and grouped weight estimates, aiding interpretation and prediction tasks in genomic data analysis. High-dimensional data scenarios, where predictors outnumber observations, are common in regression analysis applications. Ordinary least squares regression (OLS), the standard method, performs inadequately with high-dimensional and highly correlated data. Copy number alterations (CNA) in key genes have been linked to disease phenotypes, highlighting the importance of accurate classification of gene expression data in bioinformatics and biology using regularized methods like PLS for regression and classification.

Keywords: partial least square regression, genetics data, negative filter factors, high dimensional data, high correlated data

Procedia PDF Downloads 6
18546 Development, Optimization, and Validation of a Synchronous Fluorescence Spectroscopic Method with Multivariate Calibration for the Determination of Amlodipine and Olmesartan Implementing: Experimental Design

Authors: Noha Ibrahim, Eman S. Elzanfaly, Said A. Hassan, Ahmed E. El Gendy

Abstract:

Objectives: The purpose of the study is to develop a sensitive synchronous spectrofluorimetric method with multivariate calibration after studying and optimizing the different variables affecting the native fluorescence intensity of amlodipine and olmesartan, implementing an experimental design approach. Method: In the first step, a fractional factorial design was used to screen the independent factors affecting the intensity of both drugs. The objective of the second step was to optimize the method performance using a central composite face-centred (CCF) design. The optimal experimental conditions obtained from this study were a temperature of 15 ± 0.5 °C, a solvent of 0.05 N HCl and methanol in a ratio of 90:10 (v/v), a Δλ of 42, and the addition of 1.48% surfactant, providing a sensitive measurement of amlodipine and olmesartan. The resolution of the binary mixture with a multivariate calibration method has been accomplished mainly by using a partial least squares (PLS) model. Results: The recovery percentages for amlodipine besylate and olmesartan in tablet dosage form were found to be 102 ± 0.24 and 99.56 ± 0.10, respectively. Conclusion: The method is valid according to International Conference on Harmonization (ICH) guidelines, proving to be linear over ranges of 200-300 and 500-1500 ng mL⁻¹ for amlodipine and olmesartan, respectively. The method was successful in estimating amlodipine besylate and olmesartan in bulk powder and pharmaceutical preparations.

Keywords: amlodipine, central composite face-centred design, experimental design, fractional factorial design, multivariate calibration, olmesartan

Procedia PDF Downloads 119
18545 Nonparametric Copula Approximations

Authors: Serge Provost, Yishan Zang

Abstract:

Copulas are currently utilized in finance, reliability theory, machine learning, signal processing, geodesy, hydrology and biostatistics, among several other fields of scientific investigation. It follows from Sklar's theorem that the joint distribution function of a multidimensional random vector can be expressed in terms of its associated copula and marginals. Since marginal distributions can easily be determined by making use of a variety of techniques, we address the problem of securing the distribution of the copula. This will be done by using several approaches. For example, we will obtain bivariate least-squares approximations of the empirical copulas, modify the kernel density estimation technique and propose a criterion for selecting appropriate bandwidths, differentiate linearized empirical copulas, secure Bernstein polynomial approximations of suitable degrees, and apply a corollary to Sklar's result. Illustrative examples involving actual observations will be presented. The proposed methodologies will also be applied to a sample generated from a known copula distribution in order to validate their effectiveness.

Keywords: copulas, Bernstein polynomial approximation, least-squares polynomial approximation, kernel density estimation, density approximation

Procedia PDF Downloads 41
18544 Processing and Modeling of High-Resolution Geophysical Data for Archaeological Prospection, Nuri Area, Northern Sudan

Authors: M. Ibrahim Ali, M. El Dawi, M. A. Mohamed Ali

Abstract:

In this study, the magnetic gradient survey and geoelectrical ground methods were used together to explore archaeological features in the Nuri pyramids area. The survey procedures and methodologies were followed carefully throughout the study. The magnetic survey method was used to search for archaeological features using a Geoscan Fluxgate Gradiometer (FM36). The study area was divided into a number of squares (grids) of exactly equal size (20 × 20 meters). These squares were merged at the end of the study to give a master grid for each region. The grids were also subdivided for sampling using cells of 0.25 × 0.50 meter, in order to resolve more specific archaeological features along with some small bipolar anomalies caused by buildings built from fired bricks. This resolution is important for detecting many archaeological features such as rooms and other structures. The master grid gives us an integrated map that is easy to display and allows all the required operations using Geoscan Geoplot software. Parallel traverses were the main way of taking magnetic survey readings in order to obtain high-quality data. The study area is very rich in old buildings that vary from small to very large. Owing to the sand dunes and the loose soil, most of these buildings are not visible from the surface. Because of the dry sandy soil, there was no electrical contact between the ground surface and the electrodes. We tried to obtain electrical readings by adding salty water to the soil, but unfortunately we failed to confirm the magnetic readings with electrical readings as previously planned.

Keywords: archaeological features, independent grids, magnetic gradient, Nuri pyramid

Procedia PDF Downloads 453
18543 Policy Effectiveness in the Situation of Economic Recession

Authors: S. K. Ashiquer Rahman

Abstract:

Proper policy handling may fail to attain its target during some recessions, e.g., pandemic-led crises and other shocks to economic variables. In such a situation, the central bank implements monetary policy, choosing to increase exogenous expenditure and the level of the money supply in order to boost economic growth; the question is whether monetary policy is relatively more effective than fiscal policy in altering the real output growth of a country, or whether both are comparably effective in driving output growth. The dispute over the relationship between monetary and fiscal policy centres on the inflationary penalty of deficit financing by the fiscal authority. Faced with the latest shocks to economic variables as well as pandemic-led crises, central banks around the world have confronted a general dilemma over whether to increase rates to face inflation or to decrease rates to sustain economic activity. Even where prices have remained essentially unaffected, aggregate demand has been hit significantly by the outbreak of the COVID-19 pandemic. To empirically investigate the effects of the economic shocks associated with the COVID-19 pandemic, the paper considers the effectiveness of monetary policy and fiscal policy as linked to the adjustment mechanism of different economic variables. To examine these effects on the effectiveness of monetary and fiscal policy with respect to the output growth of a country, this paper uses a simultaneous equations model estimated by the Two-Stage Least Squares (2SLS) and Ordinary Least Squares (OLS) methods.

Keywords: IS-LM framework, pandemic, economic variable shocks, simultaneous equations model, output growth

Procedia PDF Downloads 58
18542 TessPy – Spatial Tessellation Made Easy

Authors: Jonas Hamann, Siavash Saki, Tobias Hagen

Abstract:

Discretization of urban areas is a crucial aspect of many spatial analyses. The process of discretizing space into subspaces without overlaps and gaps is called tessellation. It helps in understanding urban space and provides a framework for analyzing geospatial data. Tessellation methods can be divided into two groups: regular tessellations and irregular tessellations. While regular tessellation methods, like square grids or hexagon grids, are suitable for addressing pure geometry problems, they cannot take the unique characteristics of different subareas into account. Irregular tessellation methods, however, allow the borders between the subareas to be defined more realistically based on urban features like a road network or Points of Interest (POI). Even though Python is one of the most used programming languages when it comes to spatial analysis, there is currently no library that combines different tessellation methods to enable users and researchers to compare different techniques. To close this gap, we are proposing TessPy, an open-source Python package, which combines all above-mentioned tessellation methods and makes them easily accessible to everyone. The core functions of TessPy represent the five different tessellation methods: squares, hexagons, adaptive squares, Voronoi polygons, and city blocks. By using regular methods, users can set the resolution of the tessellation, which defines the finesse of the discretization and the desired number of tiles. Irregular tessellation methods allow users to define which spatial data to consider (e.g., amenity, building, office) and how fine the tessellation should be. The spatial data used is open-source and provided by OpenStreetMap. This data can be easily extracted and used for further analyses. Besides the methodology of the different techniques, the state of the art, including examples and future work, will be discussed. All dependencies can be installed using conda or pip; however, the former is recommended.
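
As an illustration of the simplest regular method mentioned above, the sketch below builds a plain square-grid tessellation with Shapely; it is not TessPy's own API, just the underlying idea of discretizing a bounding box without overlaps or gaps.

```python
from shapely.geometry import box

def square_grid(minx, miny, maxx, maxy, cell_size):
    """Regular square tessellation of a bounding box (no overlaps, no gaps)."""
    tiles = []
    y = miny
    while y < maxy:
        x = minx
        while x < maxx:
            # Clip the last row/column so the grid never exceeds the bounds
            tiles.append(box(x, y, min(x + cell_size, maxx), min(y + cell_size, maxy)))
            x += cell_size
        y += cell_size
    return tiles

tiles = square_grid(0.0, 0.0, 1.0, 1.0, 0.25)   # 16 equal tiles
```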

Keywords: geospatial data science, geospatial data analysis, tessellations, urban studies

Procedia PDF Downloads 94
18541 Expounding on the Role of Sustainability Values (SVs) on Consumers’ Switching Intentions Regarding Disruptive 5G Technology in China

Authors: Sayed Kifayat Shah, Tang Zhongjun, Mohammad Ahmad, Sohaib Mostafa

Abstract:

This article investigates consumers' intention to shift to 5G in the light of disruptive technology innovation. Switching from 4G (existing) technology to 5G (disruptive) technology involves not just economic benefits and costs but other values as well, which have not yet been examined in the framework of technology innovation. This study extends the value adaptation model (VAM) by proposing a sustainability values (SVs) construct. The model was examined on data from 361 Chinese consumers using the partial least squares-based structural equation modelling (PLS-SEM) technique. The outcomes show that sustainability values (SVs) significantly influence consumers' switching intentions toward 5G disruptive technology. The findings of this research will be helpful to telecom firms in developing consumer retention strategies. Some limitations and the importance of the research for scholars and managers are also discussed.

Keywords: value adaptation model (VAM), sustainability values (SVs), disruptive 5G technology, switching intentions (SI), partial least squares-based structural equation modelling (PLS-SEM)

Procedia PDF Downloads 117
18540 Recursive Doubly Complementary Filter Design Using Particle Swarm Optimization

Authors: Ju-Hong Lee, Ding-Chen Chung

Abstract:

This paper deals with the optimal design of recursive doubly complementary (DC) digital filters using a metaheuristic-based optimization technique. Based on the theory of DC digital filters using two recursive digital all-pass filters (DAFs), the design problem is formulated so as to result in an objective function which is a weighted sum of the phase response errors of the designed DAFs. To ensure the stability of the recursive DC filters during the design process, we impose the necessary constraints on the phases of the recursive DAFs. Through a frequency sampling and weighted least squares approach, the optimization problem of the objective function can be solved by utilizing a population-based stochastic optimization approach. The resulting DC digital filters can possess a satisfactory frequency response. Simulation results are presented for illustration and comparison.
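
As a generic illustration of minimizing a weighted least-squares objective with a population-based stochastic search (the actual DAF phase-error objective is not reproduced here), a bare-bones particle swarm sketch might look as follows; the inertia and acceleration constants are assumed defaults, not the paper's settings.

```python
import numpy as np

def pso_minimize(objective, dim, n_particles=30, iters=200, bounds=(-1.0, 1.0), seed=0):
    """Bare-bones particle swarm optimisation of a scalar objective."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n_particles, dim))       # particle positions
    v = np.zeros_like(x)                                    # particle velocities
    pbest = x.copy()
    pbest_val = np.array([objective(p) for p in x])
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([objective(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, pbest_val.min()

# Toy weighted least-squares objective: fit a line to a target response
freqs = np.linspace(0, np.pi, 64)
target = np.sin(freqs)
weights = np.ones_like(freqs)
objective = lambda p: np.sum(weights * (p[0] * freqs + p[1] - target) ** 2)
best, err = pso_minimize(objective, dim=2)
```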

Keywords: doubly complementary, digital all-pass filter, weighted least squares algorithm, particle swarm optimization

Procedia PDF Downloads 651
18539 On the Accuracy of Basic Modal Displacement Method Considering Various Earthquakes

Authors: Seyed Sadegh Naseralavi, Sadegh Balaghi, Ehsan Khojastehfar

Abstract:

Time history seismic analysis is considered the most accurate method to predict the seismic demand of structures. On the other hand, its required computational time is its main deficiency. When applied in an optimization process, in which the structure must be analyzed thousands of times, reducing the computational time of seismic analysis makes the optimization algorithms more practical. Naturally, approximate methods produce some error in comparison with exact time history analysis, but methods such as the Complete Quadratic Combination (CQC) and the Square Root of the Sum of Squares (SRSS) drastically reduce the computational time by combining the peak responses of each mode. In the present research, the Basic Modal Displacement (BMD) method is introduced and applied to the estimation of the seismic demand of a main structure. The seismic demand of the sampled structure is estimated by calculating the modal displacements of a basic structure (for which the modal displacements have been computed). Shear steel structures are selected as case studies. The error of the introduced method is calculated by comparing the estimated seismic demands with those from exact time history dynamic analysis. The efficiency of the proposed method is demonstrated by application of three types of earthquakes (distinguished by the time of peak ground acceleration).

Keywords: time history dynamic analysis, basic modal displacement, earthquake-induced demands, shear steel structures

Procedia PDF Downloads 324
18538 Estimation of Coefficients of Ridge and Principal Components Regressions with Multicollinear Data

Authors: Rajeshwar Singh

Abstract:

The presence of multicollinearity is common when handling several explanatory variables simultaneously, owing to a linear relationship among them. A serious problem then arises in understanding the impact of the explanatory variables on the dependent variable, and the method of least squares gives imprecise estimates. In this case, it is advisable to detect its presence before proceeding further. Using ridge regression, the degree of its effect is reduced, while principal components regression gives good estimates in this situation. This paper discusses the well-known techniques of ridge and principal components regression and applies both to obtain estimates of the coefficients. In addition, this paper also discusses the conflicting claims regarding the discovery of the ridge regression method, based on available documents.
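
The two estimators discussed can be sketched directly from their definitions; the snippet below shows the ridge closed form (X'X + kI)⁻¹X'y on standardised predictors and a basic principal components regression, as an illustration rather than the paper's implementation.

```python
import numpy as np

def ridge_coefficients(X, y, k):
    """Ridge estimates: beta = (X'X + k I)^-1 X'y on standardised predictors."""
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)   # standardise so k is comparable across variables
    yc = y - y.mean()
    p = Xs.shape[1]
    return np.linalg.solve(Xs.T @ Xs + k * np.eye(p), Xs.T @ yc)

def pcr_coefficients(X, y, n_components):
    """Principal components regression: regress y on the leading PC scores."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt[:n_components].T
    gamma = np.linalg.lstsq(scores, y - y.mean(), rcond=None)[0]
    return Vt[:n_components].T @ gamma           # back-transform to the original variables
```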

Keywords: conflicting claim on credit of discovery of ridge regression, multicollinearity, principal components and ridge regressions, variance inflation factor

Procedia PDF Downloads 374
18537 Quadrature Mirror Filter Bank Design Using Population Based Stochastic Optimization

Authors: Ju-Hong Lee, Ding-Chen Chung

Abstract:

The paper deals with the optimal design of two-channel linear-phase (LP) quadrature mirror filter (QMF) banks using a metaheuristic based optimization technique. Based on the theory of two-channel QMF banks using two recursive digital all-pass filters (DAFs), the design problem is appropriately formulated to result in an objective function which is a weighted sum of the group delay error of the designed QMF bank and the magnitude response error of the designed low-pass analysis filter. Through a frequency sampling and a weighted least squares approach, the optimization problem of the objective function can be solved by utilizing a particle swarm optimization algorithm. The resulting two-channel QMF banks can possess approximately LP response without magnitude distortion. Simulation results are presented for illustration and comparison.

Keywords: quadrature mirror filter bank, digital all-pass filter, weighted least squares algorithm, particle swarm optimization

Procedia PDF Downloads 487
18536 Numerical Board Game for Low-Income Preschoolers

Authors: Gozde Inal Kiziltepe, Ozgun Uyanik

Abstract:

There is growing evidence that socioeconomic (SES)-related differences in mathematical knowledge start primarily in the early childhood period. Preschoolers from low-income families are likely to perform substantially worse in mathematical knowledge than their counterparts from middle- and higher-income families. The differences are seen on a wide range of tasks: recognizing written numerals, counting, adding and subtracting, and comparing numerical magnitudes. Early differences in numerical knowledge have a lasting effect on children's mathematical knowledge in later grades. In this respect, analyzing the effect of a number board game on the number knowledge of 48-60-month-old children from disadvantaged low-income families constitutes the main objective of the study. Participants were 71 preschoolers from a childcare center serving low-income urban families. Children were randomly assigned to the number board condition or to the color board condition. The number board condition included 35 children and the color board condition included 36 children. Both board games were 50 cm long and 30 cm high; had 'The Great Race' written across the top; and included 11 horizontally arranged, different-colored squares of equal size, with the leftmost square labeled 'Start'. The numerical board had the numbers 1-10 in the rightmost 10 squares; the color board had different colors in those squares. Children selected a rabbit or a bear token and on each trial spun a spinner to determine whether the token would move one or two spaces. The number-condition spinner had a '1' half and a '2' half; the color-condition spinner had colors that matched the colors of the squares on the board. Children met one-on-one with an experimenter for four 15- to 20-minute sessions within a 2-week period. In the first and fourth sessions, children were administered identical pretest and posttest measures of numerical knowledge. All children were presented with three numerical tasks and one subtest, in the following order: counting, numerical magnitude comparison, numerical identification, and the Count Objects - Circle Number Probe subtest of the Early Numeracy Assessment. In addition, the same numerical tasks and subtest were given as a follow-up test four weeks after the post-test administration. Findings showed a meaningful difference between the scores of the two groups, in favor of the children who played the number board game.

Keywords: low income, numerical board game, numerical knowledge, preschool education

Procedia PDF Downloads 321
18535 Efficient Chess Board Representation: A Space-Efficient Protocol

Authors: Raghava Dhanya, Shashank S.

Abstract:

This paper delves into the intersection of chess and computer science, specifically focusing on the efficient representation of chess game states. We propose two methods: the Static Method and the Dynamic Method, each offering unique advantages in terms of space efficiency and computational complexity. The Static Method aims to represent the game state using a fixed-length encoding, allocating 192 bits to capture the positions of all pieces on the board. This method introduces a protocol for ordering and encoding piece positions, ensuring efficient storage and retrieval. However, it faces challenges in representing pieces no longer in play. In contrast, the Dynamic Method adapts to the evolving game state by dynamically adjusting the encoding length based on the number of pieces in play. By incorporating Alive Bits for each piece kind, this method achieves greater flexibility and space efficiency. Additionally, it includes provisions for encoding additional game state information such as castling rights and en passant squares. Our findings demonstrate that the Dynamic Method offers superior space efficiency compared to traditional Forsyth-Edwards Notation (FEN), particularly as the game progresses and pieces are captured. However, it comes with increased complexity in encoding and decoding processes. In conclusion, this study provides insights into optimizing the representation of chess game states, offering potential applications in chess engines, game databases, and artificial intelligence research. The proposed methods offer a balance between space efficiency and computational overhead, paving the way for further advancements in the field.
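
As a toy illustration of the fixed-length idea in the Static Method (32 pieces × 6 bits = 192 bits), the sketch below packs and unpacks piece positions with bit operations; the ordering protocol and the handling of captured pieces described in the paper are not reproduced.

```python
def encode_positions(squares):
    """Pack 32 piece positions (0-63 each, 6 bits) into one 192-bit integer."""
    packed = 0
    for square in squares:            # squares listed in an agreed, fixed piece order
        packed = (packed << 6) | (square & 0x3F)
    return packed

def decode_positions(packed, n_pieces=32):
    """Recover the list of positions from the packed integer."""
    squares = []
    for _ in range(n_pieces):
        squares.append(packed & 0x3F)
        packed >>= 6
    return list(reversed(squares))

positions = list(range(32))           # dummy positions for the 32 starting pieces
assert decode_positions(encode_positions(positions)) == positions
```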

Keywords: chess, optimisation, encoding, bit manipulation

Procedia PDF Downloads 16
18534 Non-Invasive Imaging of Tissue Using Near Infrared Radiations

Authors: Ashwani Kumar Aggarwal

Abstract:

NIR light is non-ionizing and can pass easily through living tissues such as the breast without any harmful effects. Therefore, the use of NIR light to image biological tissue and to quantify its optical properties is a good choice over other, invasive methods. Optical tomography involves two steps. One is the forward problem and the other is the reconstruction problem. The forward problem consists of finding the measurements of transmitted light through the tissue from source to detector, given the spatial distribution of absorption and scattering properties. The second step is the reconstruction problem. In X-ray tomography, there are standard methods for reconstruction, such as the filtered back projection method or algebraic reconstruction methods. These methods cannot be applied as such in optical tomography, due to the highly scattering nature of biological tissue. A hybrid algorithm for reconstruction has been implemented in this work which takes into account the highly scattered paths taken by photons while back-projecting the forward data obtained during Monte Carlo simulation. The reconstructed image suffers from blurring due to the point spread function. This blurred reconstructed image has been enhanced using a digital filter which is optimal in the mean square sense.
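
As an illustration of the final enhancement step, a digital filter that is optimal in the mean square sense, the sketch below applies a Wiener-type frequency-domain restoration with an assumed constant noise-to-signal ratio; it is a generic stand-in, not the filter used in the paper.

```python
import numpy as np

def wiener_like_deblur(blurred, psf, nsr=0.01):
    """Frequency-domain restoration: H* / (|H|^2 + NSR), a Wiener-type filter.

    nsr (noise-to-signal ratio) is an assumed constant rather than an
    estimated noise spectrum; psf is the point spread function.
    """
    H = np.fft.fft2(psf, s=blurred.shape)          # transfer function of the blur
    G = np.fft.fft2(blurred)
    restored = np.conj(H) / (np.abs(H) ** 2 + nsr) * G
    return np.real(np.fft.ifft2(restored))
```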

Keywords: least-squares optimization, filtering, tomography, laser interaction, light scattering

Procedia PDF Downloads 283
18533 Calibration Model of %Titratable Acidity (Citric Acid) for Intact Tomato by Transmittance SW-NIR Spectroscopy

Authors: K. Petcharaporn, S. Kumchoo

Abstract:

The acidity (citric acid) is one of the chemical contents that indicate the internal quality and the maturity index of tomato. The titratable acidity (%TA) can be predicted non-destructively using transmittance short-wavelength near-infrared (SW-NIR) spectroscopy in the wavelength range of 665-955 nm. A set of 167 tomato samples, divided into a training set of 117 samples and a test set of 50 samples, was used to establish a calibration model to predict and measure %TA by the partial least squares regression (PLSR) technique. The spectra were pretreated with MSC, which gave the optimal calibration result (R = 0.92, RMSEC = 0.03%), and this model achieved high accuracy for %TA prediction on the test set (R = 0.81, RMSEP = 0.05%). The prediction results on the test set show that the transmittance SW-NIR spectroscopy technique can be used as a non-destructive method for %TA prediction of tomatoes.
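
As an illustration of the MSC pretreatment mentioned above, the sketch below applies multiplicative scatter correction by regressing each spectrum on the mean spectrum; it is a generic implementation of the standard technique, not the paper's processing pipeline.

```python
import numpy as np

def msc(spectra):
    """Multiplicative scatter correction.

    Each spectrum is regressed on the mean spectrum of the set and corrected
    as (x - intercept) / slope, removing additive and multiplicative scatter.
    """
    reference = spectra.mean(axis=0)
    corrected = np.empty_like(spectra, dtype=float)
    for i, x in enumerate(spectra):
        slope, intercept = np.polyfit(reference, x, 1)
        corrected[i] = (x - intercept) / slope
    return corrected
```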

Keywords: tomato, quality, prediction, transmittance, titratable acidity, citric acid

Procedia PDF Downloads 241
18532 Spatial Characters Adapted to Rainwater Natural Circulation in Residential Landscape

Authors: Yun Zhang

Abstract:

Urban housing in China is typified by residential districts that occupy 25 to 40 percent of the urban land. In residential districts, squares, roads, and building facades, as well as plants, usually form a four-grade spatial structure: district entrances, central landscapes, housing cluster entrances, and green spaces between dwellings. This spatial structure and its elements not only compose the visible residential landscape but also play a major role in carrying rainwater. These elements therefore carry ecological significance for urban fitness. Based upon theories of landscape ecology, the residential landscape can be understood as a pattern typified by minor soft patches of planted area and major hard patches of buildings and squares, as well as hard corridors of roads. Using five residential districts in Hangzhou as examples, this paper finds that the size, shape, and slope direction of the soft patches, the bends of roads, and the form of the four-grade spatial structure are influential for adapting to natural rainwater circulation.

Keywords: Hangzhou China, rainwater, residential landscape, spatial character, urban housing

Procedia PDF Downloads 293
18531 Infrastructural Investment and Economic Growth in Indian States: A Panel Data Analysis

Authors: Jonardan Koner, Basabi Bhattacharya, Avinash Purandare

Abstract:

The study focuses on finding the impact of infrastructural investment on economic development in Indian states. The study uses panel data analysis to measure the impact of infrastructural investment on real Gross Domestic Product in Indian states. The panel data analysis incorporates the unit root test, cointegration test, pooled ordinary least squares, the fixed effect approach, the random effect approach, and the Hausman test. The study analyzes panel data (annual in frequency) ranging from 1991 to 2012 and concludes that infrastructural investment has a desirable impact on economic development in Indian states. Finally, the study reveals that infrastructural investment significantly explains the variation in the economic indicator.
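
As an illustration of the fixed effect approach named above, the sketch below applies the within (entity-demeaning) estimator with hypothetical column names; it is a stand-in for the panel estimation, not the study's actual specification.

```python
import numpy as np
import pandas as pd

def fixed_effects_within(df, y_col, x_cols, entity_col):
    """Fixed-effect (within) estimator: demean y and X by entity, then OLS."""
    cols = [y_col] + x_cols
    demeaned = df[cols] - df.groupby(entity_col)[cols].transform("mean")
    X = demeaned[x_cols].to_numpy()
    y = demeaned[y_col].to_numpy()
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    return dict(zip(x_cols, beta))

# Hypothetical call on a state-year panel:
# fixed_effects_within(panel, "real_gdp", ["infra_investment"], "state")
```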

Keywords: infrastructural investment, real GDP, unit root test, cointegration test, pooled ordinary least squares, fixed effect approach, random effect approach, Hausman test

Procedia PDF Downloads 374