Search results for: Least Squares
Paper Count: 181

31 Least-Squares Support Vector Machine for Characterization of Clusters of Microcalcifications

Authors: Baljit Singh Khehra, Amar Partap Singh Pharwaha

Abstract:

Clusters of Microcalcifications (MCCs) are among the most frequent signs of Ductal Carcinoma in Situ (DCIS) recognized by mammography. The Least-Squares Support Vector Machine (LS-SVM) is a variant of the standard SVM. In this paper, LS-SVM is proposed as a classifier for classifying MCCs as benign or malignant, based on relevant features extracted from enhanced mammograms. To establish the credibility of the LS-SVM classifier for classifying MCCs, a comparative evaluation of its relative performance with different kernel functions is made, using the confusion matrix and ROC analysis. Experiments are performed on data extracted from mammogram images of the DDSM database: a total of 380 suspicious areas (235 malignant and 145 benign) are collected, and a set of 50 features is calculated for each suspicious area. An optimal subset of the 23 most suitable features is then selected from these 50 features by Particle Swarm Optimization (PSO). The results of the proposed study are quite promising.
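
The LS-SVM at the heart of this classifier replaces the standard SVM's inequality constraints with equality constraints, so training reduces to solving a single linear system. A minimal sketch of that formulation, with synthetic stand-in features and an assumed RBF kernel width, is given below; it is not the authors' implementation.

```python
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    # Gaussian (RBF) kernel matrix between the row vectors of A and B
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def lssvm_train(X, y, gamma=10.0, sigma=1.0):
    """Solve the LS-SVM KKT system for labels y in {-1, +1}:
    [[0, y^T], [y, Omega + I/gamma]] [b; alpha] = [0; 1]."""
    n = len(y)
    Omega = (y[:, None] * y[None, :]) * rbf_kernel(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:], A[1:, 0] = y, y
    A[1:, 1:] = Omega + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.r_[0.0, np.ones(n)])
    return sol[0], sol[1:]                      # bias b, multipliers alpha

def lssvm_predict(X, y, alpha, b, Xnew, sigma=1.0):
    return np.sign(rbf_kernel(Xnew, X, sigma) @ (alpha * y) + b)

# Tiny synthetic stand-in for the extracted MCC feature vectors
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 1, (20, 2)), rng.normal(+1, 1, (20, 2))])
y = np.r_[-np.ones(20), np.ones(20)]
b, alpha = lssvm_train(X, y)
print((lssvm_predict(X, y, alpha, b, X) == y).mean())   # training accuracy
```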

Keywords: Clusters of Microcalcifications, Ductal Carcinoma in Situ, Least-Squares Support Vector Machine, Particle Swarm Optimization.

30 An Improved Learning Algorithm based on the Conjugate Gradient Method for Back Propagation Neural Networks

Authors: N. M. Nawi, M. R. Ransing, R. S. Ransing

Abstract:

The conjugate gradient optimization algorithm, usually used for nonlinear least squares problems, is combined with the modified back-propagation algorithm, yielding a new fast training algorithm for multilayer perceptrons (MLPs), denoted CGFR/AG. The approach presented in this paper consists of three steps: (1) modifying the standard back-propagation algorithm by introducing a gain variation term in the activation function; (2) calculating the gradient of the error with respect to the weight and gain values; and (3) determining the new search direction by exploiting the information calculated in step (2) as well as the previous search direction. The proposed method improves the training efficiency of the back-propagation algorithm by adaptively modifying the initial search direction. Its performance is demonstrated by comparison with the conjugate gradient algorithm from a neural network toolbox on the chosen benchmark. The results show that the number of iterations required by the proposed method to converge is less than 20% of what is required by the standard conjugate gradient and neural network toolbox algorithms.
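
Step (3) is the classical conjugate-gradient construction: the new search direction combines the current gradient with the previous direction. A generic sketch using the Fletcher-Reeves coefficient is shown below; the paper's gain-variation term is not reproduced, and the quadratic test function is purely illustrative.

```python
import numpy as np

def cg_direction(grad, prev_grad=None, prev_dir=None):
    """Fletcher-Reeves conjugate-gradient search direction."""
    if prev_dir is None:
        return -grad                                  # first step: steepest descent
    beta = (grad @ grad) / (prev_grad @ prev_grad)    # Fletcher-Reeves coefficient
    return -grad + beta * prev_dir

# Illustrative quadratic error surface E(w) = 0.5 * w^T A w
A = np.array([[3.0, 0.5], [0.5, 1.0]])
w = np.array([2.0, -2.0])
g_prev = d_prev = None
for _ in range(5):
    g = A @ w                                         # gradient of E at w
    if np.linalg.norm(g) < 1e-12:
        break
    d = cg_direction(g, g_prev, d_prev)
    step = -(g @ d) / (d @ A @ d)                     # exact line search on a quadratic
    w = w + step * d
    g_prev, d_prev = g, d
print(w)   # converges to the minimizer [0, 0]
```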

Keywords: Back-propagation, activation function, conjugate gradient, search direction, gain variation.

29 A Mathematical Model Approach Regarding the Children’s Height Development with Fractional Calculus

Authors: Nisa Özge Önal, Kamil Karaçuha, Göksu Hazar Erdinç, Banu Bahar Karaçuha, Ertuğrul Karaçuha

Abstract:

The study uses a mathematical approach based on fractional calculus, developed to continuously analyze the factors related to children's height development. Tracking the development of a child is becoming ever more important and meaningful; knowing the factors related to the child's physical development at any desired time would provide better, more reliable and accurate results for childcare. In this frame, seven height percentile curves (3rd, 10th, 25th, 50th, 75th, 90th, and 97th) for Turkey are used. By using discrete height data of children aged 0-18 years and the least squares method, a continuous curve valid for any time instant is developed, so that at any desired moment it is possible to find the percentile and location of the child in the percentile chart. Here, with the help of fractional calculus theory, a mathematical model is developed. The outcomes of the proposed approach are quite promising compared to the linear and polynomial methods. The approach also allows predicting the expected height values of children.
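
The core step, fitting a continuous curve to discrete percentile data by least squares, can be sketched as follows with an ordinary (integer-order) polynomial; the paper's fractional-order model is not reproduced, and the tabulated heights below are hypothetical stand-ins.

```python
import numpy as np

# Hypothetical 50th-percentile heights (cm) at selected ages (years)
ages = np.array([0.0, 1.0, 2.0, 4.0, 6.0, 9.0, 12.0, 15.0, 18.0])
heights = np.array([50.0, 75.0, 87.0, 102.0, 115.0, 133.0, 150.0, 165.0, 172.0])

# Least-squares fit of a cubic polynomial to the discrete data
curve = np.poly1d(np.polyfit(ages, heights, deg=3))

# The continuous curve can now be evaluated at any age, not just tabulated ones
print(round(float(curve(7.5)), 1))   # estimated height at 7.5 years
```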

Keywords: Children growth percentile, children physical development, fractional calculus, linear and polynomial model.

28 Frequency-Variation Based Method for Parameter Estimation of Transistor Amplifier

Authors: Akash Rathee, Harish Parthasarathy

Abstract:

In this paper, a frequency-variation based method is proposed for transistor parameter estimation in a common-emitter transistor amplifier circuit. We design an algorithm to estimate the transistor parameters based on noisy measurements of the output voltage when the input voltage is a sine wave of variable frequency and constant amplitude. The common-emitter amplifier circuit is modelled using the transistor Ebers-Moll equations, and the perturbation technique is used to separate the linear and nonlinear parts of these equations. This model of the amplifier is used to determine the amplitude of the output sinusoid as a function of the frequency and the parameter vector. Then, applying the proposed method to the frequency components, the transistor parameters are estimated. Compared to the conventional time-domain least squares method, the proposed method requires much less data storage and results in more accurate parameter estimation, as it exploits the information in the time and frequency domains simultaneously. The proposed method can be utilized for parameter estimation of an analog device over its operating range of frequencies, as it uses output data collected at different frequencies.
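
The underlying idea, fitting model parameters to the measured output amplitude as a function of frequency, can be sketched with a nonlinear least-squares solver. The single-pole amplitude model below is a hypothetical stand-in for the expression derived from the Ebers-Moll perturbation analysis.

```python
import numpy as np
from scipy.optimize import least_squares

def amplitude(freq, gain, f_pole):
    """Hypothetical |H(f)| of the amplifier: flat gain with one pole."""
    return gain / np.sqrt(1.0 + (freq / f_pole) ** 2)

# Noisy amplitude measurements at logarithmically spaced input frequencies
rng = np.random.default_rng(0)
freqs = np.logspace(2, 6, 40)                       # 100 Hz .. 1 MHz
meas = amplitude(freqs, 100.0, 1e4) + rng.normal(0.0, 0.5, freqs.size)

fit = least_squares(lambda p: amplitude(freqs, *p) - meas, x0=[50.0, 1e3])
print(fit.x)   # estimated (gain, pole frequency), close to (100, 1e4)
```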

Keywords: Perturbation Technique, Parameter estimation, frequency-variation based method.

27 Investigating the Dynamics of Knowledge Acquisition in Learning Using Differential Equations

Authors: Gilbert Makanda, Roelf Sypkens

Abstract:

A mathematical model for knowledge acquisition in teaching and learning is proposed. In this study, we adapt a mathematical model normally used for disease modelling to teaching and learning, and derive mathematical conditions which facilitate knowledge acquisition. The study compares the effects of dropping out of the course at early stages of learning with dropping out at later stages, and investigates the effect of individual interaction and of learning from other sources. Actual data are fitted to a general mathematical model using Matlab's ODE45 and lsqnonlin to obtain a unique model that can be used to predict knowledge acquisition. The data used in this study were obtained from tutorial test results of Mathematics 2 students in the Department of Mathematical and Physical Sciences at the Central University of Technology, Free State, South Africa. The study confirms the known results that increasing dropout rates and forgetting taught concepts reduce the population of knowledgeable students, while increasing teaching contacts and access to other learning materials facilitates knowledge acquisition. The effect of increasing dropout rates is more pronounced in the later stages of learning than in the earlier stages. The study opens up a new direction for further investigations of teaching and learning using differential equations.
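
The fitting workflow (integrate the ODE system, then minimize the residual between simulated and observed data) carries over directly to SciPy, which stands in here for Matlab's ODE45/lsqnonlin. The two-compartment model and the observations below are hypothetical.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def model(t, y, beta, delta):
    """Hypothetical SI-style learning model: S = not yet knowledgeable,
    K = knowledgeable; beta = learning rate, delta = dropout/forgetting rate."""
    S, K = y
    return [-beta * S * K, beta * S * K - delta * K]

def residuals(params, t_obs, k_obs):
    sol = solve_ivp(model, (t_obs[0], t_obs[-1]), [0.9, 0.1],
                    args=tuple(params), t_eval=t_obs)
    return sol.y[1] - k_obs                      # simulated minus observed

t_obs = np.linspace(0.0, 10.0, 8)
k_obs = np.array([0.10, 0.22, 0.38, 0.52, 0.60, 0.63, 0.62, 0.60])  # synthetic
fit = least_squares(residuals, x0=[0.5, 0.05], args=(t_obs, k_obs),
                    bounds=(0.0, np.inf))
print(fit.x)   # estimated (beta, delta)
```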

Keywords: Differential equations, knowledge acquisition, least squares nonlinear, dynamical systems.

26 The Effect of Mist Cooling on Sexual Behavior and Semen Quality of Sahiwal Bulls

Authors: Khalid Ahmed Elrabie Abdelrasoul

Abstract:

The present study was carried out on Sahiwal cattle bulls maintained at the Artificial Breeding Complex, NDRI, Karnal, Haryana, India, to assess the effect of mist cooling and fanning on Sahiwal bulls in the dry hot summer season. Fourteen Sahiwal bulls were divided into two groups of seven each. The sexual behavior and semen quality traits considered were: reaction time (RT), dismounting time (DMT), total time taken in mounts (TTTM), Flehmen response (FR), erection score (ES), protrusion score (PS), intensity of thrust (ITS), temperament score (TS), libido score (LS), semen volume, physical appearance, mass activity, initial progressive motility, non-eosinophilic spermatozoa count (NESC) and post-thaw motility percent. Data were analyzed by the least squares technique. Group 1 was the control, whereas group 2 (the treatment group) was exposed to mist cooling and fanning (thrice a day, 15 min each) in the dry hot summer season. Group 2 showed significantly (p < 0.01) higher values for DMT (sec), ES, PS, ITS, LS, semen volume (ml), semen color density, mass activity, initial motility, progressive motility and live sperm.

Keywords: Mist cooling, Sahiwal bulls, semen quality, sexual behavior.

25 QSAR Studies of Certain Novel Heterocycles Derived from Bis-1, 2, 4 Triazoles as Anti-Tumor Agents

Authors: Madhusudan Purohit, Stephen Philip, Bharathkumar Inturi

Abstract:

In this paper, we report the quantitative structure-activity relationship of novel bis-triazole derivatives for predicting their activity profile. The full model encompassed a dataset of 46 bis-triazoles. The Tripos Sybyl X 2.0 program was used to conduct CoMSIA QSAR modeling, and the Partial Least-Squares (PLS) analysis method was used for the statistical analysis and to derive a QSAR model based on the field values of the CoMSIA descriptors. The compounds were divided into test and training sets and evaluated with various CoMSIA parameters to find the best QSAR model. An optimum number of components was first determined separately by cross-validated regression for the CoMSIA model and then applied in the final analysis. A series of parameters was examined, and the best-fit model was obtained using the donor, partition coefficient and steric parameters. The CoMSIA model demonstrated good statistical results, with a regression coefficient (r2) and a cross-validated coefficient (q2) of 0.575 and 0.830, respectively. The standard error for the predicted model was 0.16322. In the CoMSIA model, the steric descriptors make a marginally larger contribution than the electrostatic descriptors. The finding that the steric descriptor is the largest contributor to the CoMSIA QSAR model is consistent with the observation that more than half of the binding site area is occupied by steric regions.
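
The two headline statistics, the conventional r2 and the cross-validated q2, can be sketched generically with any PLS implementation; the random matrix below merely stands in for the CoMSIA field descriptors.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(0)
X = rng.normal(size=(46, 120))                        # stand-in CoMSIA field columns
y = X[:, :5].sum(axis=1) + rng.normal(0.0, 0.3, 46)   # stand-in activities

pls = PLSRegression(n_components=5).fit(X, y)
r2 = pls.score(X, y)                                  # conventional r^2 on the fit

y_cv = cross_val_predict(pls, X, y, cv=LeaveOneOut()).ravel()
q2 = 1.0 - ((y - y_cv) ** 2).sum() / ((y - y.mean()) ** 2).sum()
print(r2, q2)                                         # q^2 is the cross-validated analogue
```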

Keywords: 3D QSAR, CoMSIA, Triazoles.

24 Modeling Residential Electricity Consumption Function in Malaysia: Time Series Approach

Authors: L. L. Ivy-Yap, H. A. Bekhet

Abstract:

As Malaysian residential electricity consumption continues to increase rapidly, effective energy policies that address the factors affecting residential electricity consumption are urgently needed. This study investigates the relationship between residential electricity consumption (EC), real disposable income (Y), price of electricity (Pe) and population (Po) in Malaysia for the 1978-2011 period. Unlike previous studies on Malaysia, the current study focuses on the residential sector, a sector that is important for the design of energy policy. The Phillips-Perron (P-P) unit root test is employed to infer the stationarity of each variable, while the bounds test is executed to determine the existence of a co-integration relationship among the variables, modelled in an Autoregressive Distributed Lag (ARDL) framework. The CUSUM and CUSUM of squares tests are applied to ensure the stability of the model. The results suggest the existence of a long-run equilibrium relationship and bidirectional Granger causality between EC and the macroeconomic variables. The empirical findings will help Malaysian policy makers develop new standards for monitoring energy consumption. As electricity consumption is a major contributing factor to economic growth and CO2 emissions, more careful planning is needed in Malaysia to attain future emission-reduction targets.
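
As an illustration of the causality step, statsmodels ships a pairwise Granger test; the two synthetic series below stand in for the study's consumption and income data, and the full ARDL bounds-testing procedure is not reproduced.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

# Synthetic annual series: income drives consumption with a one-year lag
rng = np.random.default_rng(1)
income = np.cumsum(rng.normal(2.0, 1.0, 34))            # 34 years, like 1978-2011
ec = 0.8 * np.roll(income, 1) + rng.normal(0.0, 0.5, 34)

data = pd.DataFrame({"EC": ec, "Y": income}).iloc[1:]   # drop the rolled first year
# Null hypothesis: the second column (Y) does not Granger-cause the first (EC)
grangercausalitytests(data[["EC", "Y"]], maxlag=2)
```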

Keywords: Co-integration, Elasticity, Granger causality, Malaysia, Residential electricity consumption.

23 A Data Driven Approach for the Degradation of a Lithium-Ion Battery Based on Accelerated Life Test

Authors: Alyaa M. Younes, Nermine Harraz, Mohammad H. Elwany

Abstract:

Lithium-ion batteries are currently used in many applications, including satellites, electric vehicles and mobile electronics. Their ability to store a relatively large amount of energy in a limited space makes them most appropriate for critical applications, so evaluating the life and reliability of these batteries is crucial to the systems they support. The reliability of Li-ion batteries has mainly been considered in terms of lifetime; however, another factor that can be considered critical in applications such as electric vehicles is the cycle duration. The present work presents the results of an experimental investigation of the degradation behavior of a laptop Li-ion battery (type TKV2V) and the effect of the applied load on the battery cycle time. The reliability was evaluated using an accelerated life test. Least squares linear regression with median rank estimation was used to estimate the Weibull distribution parameters needed for estimating the reliability functions. The probability density function, failure rate and reliability function under each of the applied loads were evaluated and compared, and an inverse power model is introduced that can predict cycle time at any given stress level.
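
The Weibull fitting step, least-squares regression on median ranks, is a standard recipe and can be sketched directly; the failure times below are hypothetical.

```python
import numpy as np

# Hypothetical cycle times to failure (sorted ascending)
t = np.sort(np.array([312.0, 415.0, 480.0, 552.0, 610.0, 688.0, 750.0, 830.0]))
n = len(t)
i = np.arange(1, n + 1)

# Median rank approximation (Bernard's formula)
F = (i - 0.3) / (n + 0.4)

# Linearize the Weibull CDF: ln(-ln(1-F)) = beta*ln(t) - beta*ln(eta)
x = np.log(t)
y = np.log(-np.log(1.0 - F))
beta, intercept = np.polyfit(x, y, 1)    # least-squares straight line
eta = np.exp(-intercept / beta)

print(f"shape beta = {beta:.2f}, scale eta = {eta:.1f} cycles")
```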

Keywords: Accelerated life test, inverse power law, lithium ion battery, reliability evaluation, Weibull distribution.

22 Complex Condition Monitoring System of Aircraft Gas Turbine Engine

Authors: A. M. Pashayev, D. D. Askerov, C. Ardil, R. A. Sadiqov, P. S. Abdullayev

Abstract:

Research shows that the application of probability-statistical methods, especially at the early stage of diagnosing the technical condition of an aviation Gas Turbine Engine (GTE), is unfounded when the flight information is fuzzy, limited and uncertain. Hence, the efficiency of applying the new Soft Computing technology, using Fuzzy Logic and Neural Networks methods, at these diagnosing stages is considered. For this purpose, fuzzy multiple linear and non-linear models (fuzzy regression equations), obtained on the basis of statistical fuzzy data, are trained with high accuracy. To make a more adequate model of the GTE technical condition, the dynamics of changes in the skewness and kurtosis coefficients are analyzed; these show that the distributions of GTE operating parameters have a fuzzy character. When the information is sufficient, a recurrent algorithm for identifying the GTE technical condition (Hard Computing technology) is offered, based on measurements of the input and output parameters of the multiple linear and non-linear generalized models in the presence of measurement noise (a new recursive Least Squares Method (LSM)). The developed GTE condition monitoring system provides stage-by-stage estimation of engine technical conditions. As an application of the given technique, the technical condition of a new operating aviation engine was estimated.
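
The recursive Least Squares Method cited here updates the parameter estimate one measurement at a time instead of refitting in batch. Below is a generic RLS sketch (not the paper's fuzzy variant) on a synthetic two-parameter model.

```python
import numpy as np

def rls_update(theta, P, x, y, lam=0.99):
    """One step of the recursive least squares (RLS) estimator.
    theta: parameter estimate, P: inverse information matrix,
    x: regressor vector, y: new measurement, lam: forgetting factor."""
    Px = P @ x
    k = Px / (lam + x @ Px)                 # gain vector
    theta = theta + k * (y - x @ theta)     # correct estimate by prediction error
    P = (P - np.outer(k, Px)) / lam
    return theta, P

# Identify y = 2*x1 - 1*x2 from noisy streaming measurements
rng = np.random.default_rng(0)
theta, P = np.zeros(2), 1000.0 * np.eye(2)
for _ in range(200):
    x = rng.normal(size=2)
    y = x @ np.array([2.0, -1.0]) + rng.normal(scale=0.1)
    theta, P = rls_update(theta, P, x, y)
print(theta)   # approaches [2, -1]
```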

Keywords: aviation gas turbine engine, technical condition, fuzzy logic, neural networks, fuzzy statistics

21 Enhancement of Environmental Security by the Application of Wireless Sensor Network in Nigeria

Authors: Ahmadu Girgiri, Lawan Gana Ali, Mamman M. Baba

Abstract:

Environmental security clearly reflects the development and progress of various communities around the world, irrespective of region, culture, religion or social inclination. However, the present state of insecurity has become a serious issue devastating the peace, unity, stability and progress of man and his physical environment, particularly in developing countries. Recently, security and its management in Nigeria have been a bottleneck to the effectiveness and advancement of various sectors, including business, education, social relations, politics and, above all, the economy. Several measures have been considered for mitigating environmental insecurity, such as surveillance, demarcation, empowerment of security personnel and the like, but the issue remains disturbing. In this paper, we present the application of a new technology that contributes to the improvement of security surveillance, the Wireless Sensor Network (WSN). A WSN is a smart, emerging technology that provides monitoring, detection and aggregation of information using sensor nodes and a wireless network. A WSN detects, monitors and stores information on activities in the deployed area, such as schools, business centers, public squares, industries and outskirts, and transmits it to end users. This will reduce the cost of security funding and ease security surveillance, depending on the nature and requirements of the deployment.

Keywords: Wireless sensor network, node, application, monitoring, insecurity, environment.

20 Design and Fabrication of Stent with Negative Poisson’s Ratio

Authors: S. K. Bhullar, J. Ko, F. Ahmed, M. B. G. Jun

Abstract:

Negative Poisson's ratios can be described in terms of models based on the geometry of the system and the way this geometry changes under applied loads. Since the Poisson's ratio does not depend on scale, deformation can take place from the nano to the macro level; the only requirement is the right geometry. Our aim in this paper is to combine knowledge of the tailored, enhanced mechanical properties of materials with negative Poisson's ratio with micromachining and electrospinning technology to develop a novel stent carrying a drug delivery system. The objectives of this paper therefore include: (i) fabrication of a micromachined metal sheet tailored with a structure having negative Poisson's ratio, through the rotating solid squares geometry, using femtosecond laser ablation; (ii) rolling the fabricated structure and welding it to make a tubular structure; (iii) wrapping it with nanofibers of the biocompatible polymer PCL (polycaprolactone) for drug delivery; and (iv) analytical and experimental analysis of the functional and mechanical performance of the fabricated structure. As far as applications are concerned, tubular structures have great potential in biomedicine; for example, hollow tubes called stents are placed inside the body to provide mechanical support to a damaged artery or diseased region and to open a blocked esophagus, thus restoring feeding capacity and improving quality of life.
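
For reference, the rotating-squares geometry invoked in objective (i) is the classic auxetic mechanism: as reported in the auxetics literature (Grima and Evans), rigid squares hinged at their corners give equal unit-cell dimensions in both directions, which forces a Poisson's ratio of -1. A sketch of that argument, in one common parametrization:

```latex
% Rotating rigid squares of side l, hinged at their corners, with
% \theta the angle between adjacent squares. By the symmetry of the
% mechanism, both unit-cell dimensions are equal:
\[
  X_1 = X_2 = 2\,l\cos\!\left(\tfrac{\theta}{2}\right)
\]
% Equal dimensions imply equal strain increments
% d\varepsilon_1 = d\varepsilon_2, and therefore
\[
  \nu_{12} = -\frac{d\varepsilon_2}{d\varepsilon_1} = -1 .
\]
```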

Keywords: Micromachining, electrospinning, auxetic materials, enhanced mechanical properties.

19 Geopotential Models Evaluation in Algeria Using Stochastic Method, GPS/Leveling and Topographic Data

Authors: M. A. Meslem

Abstract:

For precise geoid determination with the remove-compute-restore technique, a reference field is used to subtract the long and medium wavelengths of the gravity field from the observational data; a comparison of candidate models should therefore be made in order to select the optimal reference gravity field. In this context, two recent global geopotential models have been selected for a comparison study over Northern Algeria: the Earth Gravitational Model (EGM2008) and the Global Gravity Model (GECO), the latter conceived as a combination of the former with the anomalous potential derived from a GOCE satellite-only global model. Free-air gravity anomalies in the area under study were used to compute residual data with both gravity field models, and a Digital Terrain Model (DTM) was used to subtract the residual terrain effect from the gravity observations. The residual data were used to generate local empirical covariance functions, which were fitted to a closed form in order to compare their statistical behavior in the two cases. Finally, height anomalies were computed from both geopotential models and compared to a set of GPS-levelled points on benchmarks using a least squares adjustment. The results, described in detail in this paper, point to a slight overall advantage of the GECO model, through comparison of error degree variances and ground-truth evaluation.
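
The final comparison step, a least-squares adjustment of the differences between GPS/levelling and model height anomalies, is often done with a simple parametric corrector surface. Below is a sketch with the common four-parameter model; the coordinates and residuals are hypothetical.

```python
import numpy as np

# Differences at benchmarks: res_i = h_GPS - H_levelled - zeta_model  (metres)
lat = np.radians(np.array([34.8, 35.1, 35.6, 36.0, 36.4, 36.7]))
lon = np.radians(np.array([0.5, 1.2, 2.8, 3.9, 5.1, 6.3]))
res = np.array([0.12, 0.15, 0.09, 0.18, 0.11, 0.14])

# Four-parameter (bias + tilt) corrector-surface design matrix
A = np.column_stack([np.ones_like(lat),
                     np.cos(lat) * np.cos(lon),
                     np.cos(lat) * np.sin(lon),
                     np.sin(lat)])
x, *_ = np.linalg.lstsq(A, res, rcond=None)      # least-squares parameters

rms_before = np.sqrt(np.mean(res ** 2))
rms_after = np.sqrt(np.mean((res - A @ x) ** 2))
print(rms_before, rms_after)   # the fit absorbs datum bias and tilt
```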

Keywords: Quasigeoid, gravity anomalies, covariance, GGM.

18 The Quality of Public Space in Mexico City: Current State and Trends

Authors: Mildred Moreno Villanueva

Abstract:

Public space is essential for strengthening the social and urban fabric and social cohesion; therein lies the importance of its study. The aim of this paper is to analyze the quality of public space in the twenty-first century in both quantitative and qualitative terms; here, the concept of public space includes open spaces such as parks, public squares and walking areas. Mexico City, with a population of nearly 9 million inhabitants and composed of sixteen boroughs, is taken as the case study, considering both the existing public spaces and government intervention in building and improving new and existing ones. The results show that, quantitatively, there is no equitable distribution of public spaces, due both to the growth of the city itself and to the absence of political will to create them. Another factor is the evolution of the city, which has grown merely in a "patched" pattern, where public space has played no role at all, with a total absence of urban design. Qualitatively, even the boroughs with the most public spaces have shown no interest in making these spaces inclusive and open to the general population with integration in mind. Urban projects that privatize public space therefore seem to be the rule, rather than rehabilitation of the existing public spaces. State intervention should reinforce its role as an agent of social change, acting for the benefit of the majority of the inhabitants by promoting more inclusive public spaces.

Keywords: Exclusion, inclusion, Mexico City, public space.

17 Dynamic Fault Diagnosis for Semi-Batch Reactor under Closed-Loop Control via Independent Radial Basis Function Neural Network

Authors: Abdelkarim M. Ertiame, D. W. Yu, D. L. Yu, J. B. Gomm

Abstract:

In this paper, a robust fault detection and isolation (FDI) scheme is developed to monitor a multivariable nonlinear chemical process, the Chylla-Haase polymerization reactor, while it is under cascade PI control. The scheme employs a radial basis function neural network (RBFNN) in independent mode to model the process dynamics, using the weighted sum-squared prediction error as the residual. The Recursive Orthogonal Least Squares (ROLS) algorithm is employed to train the model, overcoming the training difficulty of the independent mode of the network. Another RBFNN is then used as a fault classifier to isolate faults from the different features contained in the residual vector. Several actuator and sensor faults are simulated in a nonlinear Simulink model of the reactor, and the scheme is used to detect and isolate the faults on-line. The simulation results illustrate the effectiveness and robustness of the proposed method, even when the process is subjected to disturbances and uncertainties, including significant changes in the monomer feed rate, fouling factor, impurity factor, ambient temperature, and measurement noise.
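
The residual-generation idea can be sketched compactly: fit an RBF model of the healthy process, then flag samples whose prediction error exceeds a threshold. Batch least squares stands in below for the ROLS training, and the process, fault and threshold are all synthetic assumptions.

```python
import numpy as np

def rbf_design(X, centers, width):
    """Gaussian RBF design matrix (independent-mode regressors)."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * width ** 2))

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(300, 2))                 # process inputs
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + rng.normal(0, 0.01, 300)

centers = rng.uniform(-1, 1, size=(25, 2))
Phi = rbf_design(X, centers, width=0.5)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)           # model of healthy behavior

# Simulate a sensor bias fault appearing at sample 250 and monitor the residual
y_meas = y + np.where(np.arange(300) >= 250, 0.3, 0.0)
residual = y_meas - Phi @ w
threshold = 5.0 * residual[:200].std()                # tuned on healthy data
print(np.nonzero(np.abs(residual) > threshold)[0][:5])  # first flagged samples
```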

Keywords: Robust fault detection, cascade control, independent RBF model, RBF neural networks, Chylla-Haase reactor, FDI under closed-loop control.

16 Thermo-Physical Properties and Solubility of CO2 in Piperazine Activated Aqueous Solutions of β-Alanine

Authors: Ghulam Murshid

Abstract:

Carbon dioxide is one of the major greenhouse gas (GHG) contributors, and it is an obligation of industry to reduce carbon dioxide emissions to acceptable limits. Tremendous research has been reported in the past, yet the quest for suitable and economical solutions still needs to be pursued in order to develop the most plausible absorbents for carbon dioxide removal. Amino acids are potential alternative solvents for carbon dioxide capture from gaseous streams, owing to their resistance to oxidative degradation, low volatility and ionic structure. In addition, the introduction of a promoter such as piperazine to an amino acid helps to further enhance the solubility. In this work, the effect of piperazine on the thermophysical properties and CO2 solubility of aqueous β-alanine solutions was studied for various concentrations. The measured physicochemical property data were correlated as functions of temperature using the least-squares method, and the correlation parameters are reported together with their respective standard deviations. The effect of the activator piperazine on the CO2 loading performance of the selected amino acid under high-pressure conditions (1 bar to 10 bar) in the temperature range of (30 to 60) °C was also studied. The solubility of CO2 decreases with increasing temperature and increases with increasing pressure. A quadratic representation of the solubility using Response Surface Methodology (RSM) shows that the most important parameter for optimizing solubility is the system pressure, and the addition of the promoter increases the solubility effect of the solvent.

Keywords: Amino acids, CO2, Global warming, Solubility.

15 Simulating Dynamics of Thoracolumbar Spine Derived from Life MOD under Haptic Forces

Authors: K. T. Huynh, I. Gibson, W. F. Lu, B. N. Jagdish

Abstract:

In this paper, the construction of a detailed spine model using the LifeMOD Biomechanics Modeler is presented. The detailed model is obtained by refining the spine segments in the cervical, thoracic and lumbar regions into individual vertebra segments, using bushing elements to represent the intervertebral discs, and building the various ligamentous soft tissues between vertebrae. In the sagittal plane of the spine, a constant force is applied from posterior to anterior during simulation to determine the dynamic characteristics of the spine, with the force magnitude gradually increased in subsequent simulations. Based on the recorded dynamic properties, displacement-force relationship graphs are established as polynomial functions using the least-squares method and imported into a haptic-integrated graphic environment, together with a thoracolumbar spine model whose complex vertebral geometry is digitized from a resin spine prototype. Using the haptic technique, surgeons can touch as well as apply forces to the spine model through haptic devices, and observe the locomotion of the spine as computed from the displacement-force relationship graphs. This study provides a preliminary picture of our ongoing work towards building and simulating bio-fidelic scoliotic spine models in a haptic-integrated graphic environment whose dynamic properties are obtained from LifeMOD. Such models can help surgeons examine the kinematic behavior of scoliotic spines and propose possible surgical plans before spine correction operations.

Keywords: Haptic interface, LifeMOD, spine modeling.

14 Reconstitute Information about Discontinued Water Quality Variables in the Nile Delta Monitoring Network Using Two Record Extension Techniques

Authors: Bahaa Khalil, Taha B. M. J. Ouarda, André St-Hilaire

Abstract:

World economic crises and budget constraints have caused authorities, especially those in developing countries, to rationalize water quality monitoring activities. Rationalization consists of reducing the number of monitoring sites, the number of samples, and/or the number of water quality variables measured. The reduction in water quality variables is usually based on correlation: if two variables exhibit high correlation, it is an indication that some of the information produced may be redundant, so one variable can be discontinued while the other continues to be measured. Later, the ordinary least squares (OLS) regression technique is employed to reconstitute information about the discontinued variable, using the continuously measured one as an explanatory variable. In this paper, two record extension techniques, OLS and the Line of Organic Correlation (LOC), are employed to reconstitute information about discontinued water quality variables. An empirical experiment is conducted using water quality records from the Nile Delta water quality monitoring network in Egypt, and the record extension techniques are compared for their ability to predict different statistical parameters of the discontinued variables. Results show that OLS is better at estimating individual water quality records, but underestimates the variance in the extended records. The LOC technique is superior in preserving the characteristics of the entire distribution and avoids the underestimation of variance. It is concluded that OLS can be used for the substitution of missing values, while LOC is preferable for inferring statements about the probability distribution.
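
The contrast between the two estimators comes down to their slopes: OLS uses b = r*sy/sx, which shrinks predictions toward the mean, while LOC uses b = sign(r)*sy/sx, which reproduces the variance. A synthetic sketch of that difference:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(10, 2, 60)                    # continued variable (overlap period)
y = 3 + 0.8 * x + rng.normal(0, 1.0, 60)     # discontinued variable

r = np.corrcoef(x, y)[0, 1]
sx, sy = x.std(ddof=1), y.std(ddof=1)

b_ols = r * sy / sx                  # OLS slope: minimizes prediction error
b_loc = np.sign(r) * sy / sx         # LOC slope: preserves the variance

x_new = rng.normal(10, 2, 1000)      # period with only x measured
y_ols = y.mean() + b_ols * (x_new - x.mean())
y_loc = y.mean() + b_loc * (x_new - x.mean())

# OLS extension underestimates the variance; LOC keeps it close to sy**2
print(y.var(ddof=1), y_ols.var(ddof=1), y_loc.var(ddof=1))
```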

Keywords: Record extension, record augmentation, monitoring networks, water quality indicators.

13 On the Accuracy of Basic Modal Displacement Method Considering Various Earthquakes

Authors: Seyed Sadegh Naseralavi, Sadegh Balaghi, Ehsan Khojastehfar

Abstract:

Time history seismic analysis is considered the most accurate method of predicting the seismic demand of structures; on the other hand, the computational time it requires is its main deficiency. When applied within an optimization process, in which the structure must be analyzed thousands of times, reducing the computational time of seismic analysis makes the optimization algorithms much more practical. Approximate methods inevitably produce some error in comparison with exact time history analysis, but methods such as the Complete Quadratic Combination (CQC) and the Square Root of the Sum of Squares (SRSS) drastically reduce the computational time by combining the peak responses of each mode. In the present research, the Basic Modal Displacement (BMD) method is introduced and applied to estimating the seismic demand of a main structure: the seismic demand of the sampled structure is estimated from the modal displacements of a basic structure for which the modal displacements have been calculated. Sampled steel shear structures are selected as case studies. The error of the introduced method is calculated by comparing the estimated seismic demands with exact time history dynamic analyses, and the efficiency of the proposed method is demonstrated using three types of earthquakes (classified by the time of peak ground acceleration).
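
For orientation, the modal-combination rules mentioned above reduce to a few lines: SRSS combines peak modal responses quadratically, and CQC generalizes it with a cross-modal correlation matrix. A sketch with hypothetical modal peaks:

```python
import numpy as np

# Peak modal responses (e.g., roof displacement, in cm) -- hypothetical values
peaks = np.array([12.0, 4.5, 1.8])

# SRSS: Square Root of the Sum of Squares of the modal peaks
srss = np.sqrt(np.sum(peaks ** 2))

# CQC reduces to SRSS when the cross-modal correlation matrix is the identity
rho = np.eye(len(peaks))            # well-separated modes assumed
cqc = np.sqrt(peaks @ rho @ peaks)

print(srss, cqc)   # identical here; CQC differs for closely spaced modes
```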

Keywords: Time history dynamic analysis, basic modal displacement, earthquake induced demands, shear steel structures.

12 Optimal Image Representation for Linear Canonical Transform Multiplexing

Authors: Navdeep Goel, Salvador Gabarda

Abstract:

Digital images are widely used in computer applications, and storing or transmitting uncompressed images requires considerable storage capacity and transmission bandwidth. Image compression is a means of transmitting or storing visual data in the most economical way. This paper explains how images can be encoded for transmission in a multiplexing time-frequency domain channel, where multiplexing involves packing together signals whose representations are compact in the working domain. In order to optimize transmission resources, each 4 × 4 pixel block of the image is transformed, by a suitable polynomial approximation, into a minimal number of coefficients; using fewer than 4 × 4 coefficients per block spares a significant amount of transmitted information, but some information is lost. Different approximations for the image transformation have been evaluated: polynomial representation (Vandermonde matrix), least squares with gradient descent, 1-D Chebyshev polynomials, 2-D Chebyshev polynomials, and singular value decomposition (SVD). Results have been compared in terms of nominal compression rate (NCR), compression ratio (CR) and peak signal-to-noise ratio (PSNR), in order to minimize the error function defined as the difference between the original pixel gray levels and the approximated polynomial output. The polynomial coefficients are later encoded and used to generate chirps, at a target rate of about two chirps per 4 × 4 pixel block, and then submitted to a transmission multiplexing operation in the time-frequency domain.
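
As one concrete instance of the block-approximation step, the SVD option can be sketched in a few lines: keep the leading singular triplet of each 4 × 4 block and score the loss with PSNR. The block below is random; real images would be tiled into such blocks.

```python
import numpy as np

def approx_block(block, k=1):
    """Rank-k SVD approximation of a pixel block."""
    U, s, Vt = np.linalg.svd(block, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k, :]

rng = np.random.default_rng(3)
block = rng.integers(0, 256, size=(4, 4)).astype(float)
rec = approx_block(block, k=1)           # one triplet: 4+4+1 numbers instead of 16

mse = np.mean((block - rec) ** 2)
psnr = 10 * np.log10(255.0 ** 2 / mse)   # peak signal-to-noise ratio, dB
print(psnr)
```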

Keywords: Chirp signals, Image multiplexing, Image transformation, Linear canonical transform, Polynomial approximation.

11 Optimization Modeling of the Hybrid Antenna Array for the DoA Estimation

Authors: Somayeh Komeylian

Abstract:

Direction of arrival (DoA) estimation is a crucial aspect of radar technologies for detecting and separating several signal sources. In this scenario, modeling the antenna array output involves numerous parameters, including noise samples, signal waveform, signal directions, signal number, and signal-to-noise ratio (SNR), so the methods of DoA estimation rely heavily on generalization from large training data sets. Hence, we have comparatively examined two optimization models for DoA estimation: (1) a decision directed acyclic graph (DDAG) implementation of the multiclass least-squares support vector machine (LS-SVM), and (2) a deep neural network (DNN) with radial basis functions (RBF). We have rigorously verified that the LS-SVM DDAG algorithm is capable of accurately classifying DoAs for three classes. However, the accuracy and robustness of DoA estimation remain highly sensitive to technological imperfections of antenna arrays, such as non-ideal array design and manufacture, array implementation, mutual coupling, and background radiation, so the method may fail to deliver high precision. This work therefore makes a further contribution by developing the DNN-RBF model for DoA estimation, to overcome the limitations of non-parametric and data-driven methods with respect to array imperfection and generalization. The numerical results confirm the better performance of the DNN-RBF model for DoA estimation compared with the LS-SVM algorithm; the two optimization methods are evaluated in terms of the mean squared error (MSE).

Keywords: DoA estimation, adaptive antenna array, Deep Neural Network, LS-SVM optimization model, radial basis function, MSE.

10 FT-NIR Method to Determine Moisture in Gluten Free Rice Based Pasta during Drying

Authors: Navneet Singh Deora, Aastha Deswal, H. N. Mishra

Abstract:

Pasta is one of the most widely consumed food products around the world, and rapid determination of its moisture content will assist food processors in providing on-line quality control during large-scale production. A rapid Fourier transform near-infrared (FT-NIR) method was developed for determining the moisture content in pasta. A calibration set of 150 samples, a validation set of 30 samples and a prediction set of 25 samples were used. The diffuse reflection spectra of different types of pasta were measured by an FT-NIR analyzer in the 4,000-12,000 cm-1 spectral range. The calibration and validation sets were designed for the conception and evaluation of the method's adequacy in the moisture content range of 10 to 15 percent (wet basis). Prediction models based on partial least squares (PLS) regression were developed in the near-infrared. Conventional criteria such as R2, the root mean square error of cross-validation (RMSECV), the root mean square error of estimation (RMSEE) and the number of PLS factors were considered in comparing three pre-processing methods (vector normalization, minimum-maximum normalization and multiplicative scatter correction); the spectra were treated with these mathematical pre-treatments before being used to build models between the spectral information and moisture content. The moisture content in pasta predicted by the FT-NIR method correlated very well with the values determined by traditional methods (R2 = 0.983), which clearly indicates that FT-NIR can be used as an effective tool for the rapid determination of moisture content in pasta. The best calibration model was developed with min-max normalization (MMN) spectral pre-processing (R2 = 0.9775); the MMN method was found most suitable, and a maximum coefficient of determination (R2) of 0.9875 was obtained for the calibration model finally developed.
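
The modeling pipeline (normalize the spectra, fit PLS, score by cross-validation) can be sketched generically. Note that scikit-learn's MinMaxScaler normalizes per wavelength column and is only a stand-in for the paper's per-spectrum min-max normalization; all data below are synthetic.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

# Synthetic stand-ins: 150 spectra x 500 wavenumbers, moisture 10-15% (w.b.)
rng = np.random.default_rng(4)
spectra = rng.normal(size=(150, 500))
moisture = 10 + 5 * rng.random(150)

model = make_pipeline(MinMaxScaler(), PLSRegression(n_components=8))
y_cv = cross_val_predict(model, spectra, moisture, cv=10).ravel()

rmsecv = np.sqrt(np.mean((moisture - y_cv) ** 2))
r2 = 1 - np.sum((moisture - y_cv) ** 2) / np.sum((moisture - moisture.mean()) ** 2)
print(rmsecv, r2)
```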

Keywords: FT-NIR, Pasta, moisture determination.

9 A CT-based Monte Carlo Dose Calculations for Proton Therapy Using a New Interface Program

Authors: A. Esmaili Torshabi, A. Terakawa, K. Ishii, H. Yamazaki, S. Matsuyama, Y. Kikuchi, M. Nakhostin, H. Sabet, A. Ishizaki, W. Yamashita, T. Togashi, J. Arikawa, H. Akiyama, K. Koyata

Abstract:

The purpose of this study is to introduce a new interface program for calculating dose distributions with the Monte Carlo method in complex heterogeneous systems, such as organs or tissues, in proton therapy. The interface program was developed under MATLAB and includes a friendly graphical user interface with several tools, such as image property adjustment and result display. The quadtree decomposition technique was used as the image segmentation algorithm to create optimal geometries from Computed Tomography (CT) images for proton beam dose calculations. The result of this technique is a set of non-overlapping squares of different sizes in each image; in this way, the resolution of the segmentation is high enough in and near heterogeneous areas to preserve the precision of the dose calculations, and low enough in homogeneous areas to directly reduce the number of cells. Furthermore, a cell reduction algorithm can be used to combine neighboring cells of the same material. The method has been validated in two ways: first, by comparison with experimental data obtained with an 80 MeV proton beam at the Cyclotron and Radioisotope Center (CYRIC) of Tohoku University, and second, by comparison with data based on the polybinary tissue calibration method, also performed at CYRIC. These results are presented in this paper. The program can read the output file of the Monte Carlo code while the region of interest is selected manually, and plots the dose distribution of the proton beam superimposed onto the CT images.
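
A bare-bones version of the quadtree idea, split a square until its gray levels are homogeneous within a tolerance, can be sketched as follows; the tolerance, minimum cell size and toy image are assumptions.

```python
import numpy as np

def quadtree(img, x, y, size, tol, min_size, leaves):
    """Recursively split a square region until it is homogeneous enough."""
    region = img[y:y + size, x:x + size]
    if size <= min_size or region.max() - region.min() <= tol:
        leaves.append((x, y, size, region.mean()))   # homogeneous leaf cell
        return
    half = size // 2
    for dx, dy in ((0, 0), (half, 0), (0, half), (half, half)):
        quadtree(img, x + dx, y + dy, half, tol, min_size, leaves)

# Toy "CT slice": uniform background with one dense square insert
img = np.full((64, 64), 40.0)
img[20:36, 24:40] = 200.0
leaves = []
quadtree(img, 0, 0, 64, tol=5.0, min_size=4, leaves=leaves)
print(len(leaves), "cells instead of", 64 * 64, "pixels")
```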

Keywords: Monte Carlo, CT images, Quadtree decomposition, Interface program, Proton beam

8 Urbanization and Income Inequality in Thailand

Authors: Acumsiri Tantiakrnpanit

Abstract:

This paper aims to examine the relationship between urbanization and income inequality in Thailand during the period 2002–2020, using a panel of data for 76 provinces collected from Thailand’s National Statistical Office (Labor Force Survey: LFS), as well as geospatial data from the U.S. Air Force Defense Meteorological Satellite Program (DMSP) and the Visible Infrared Imaging Radiometer Suite Day/Night band (VIIRS-DNB) satellite for 19 selected years. This paper employs two different definitions to identify urban areas: 1) Urban areas defined by Thailand's National Statistical Office (LFS), and 2) Urban areas estimated using nighttime light data from the DMSP and VIIRS-DNB satellite. The second method includes two sub-categories: 2.1) Determining urban areas by calculating nighttime light density with a population density of 300 people per square kilometer, and 2.2) Calculating urban areas based on nighttime light density corresponding to a population density of 1,500 people per square kilometer. The empirical analysis based on Ordinary Least Squares (OLS), fixed effects, and random effects models reveals a consistent U-shaped relationship between income inequality and urbanization. The findings from the econometric analysis demonstrate that urbanization or population density has a significant and negative impact on income inequality. Moreover, the square of urbanization shows a statistically significant positive impact on income inequality. Additionally, there is a negative association between logarithmically transformed income and income inequality. This paper also proposes the inclusion of satellite imagery, geospatial data, and spatial econometric techniques in future studies to conduct quantitative analysis of spatial relationships.
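
The U-shape test described here amounts to regressing inequality on urbanization and its square. Below is a pooled-OLS sketch with statsmodels on synthetic data; the fixed- and random-effects variants and the real 76-province panel are not reproduced.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic province-year observations standing in for the panel
rng = np.random.default_rng(5)
n = 500
urban = rng.uniform(0, 1, n)
income = rng.lognormal(10, 0.4, n)
gini = (0.55 - 0.4 * urban + 0.3 * urban**2
        - 0.01 * np.log(income) + rng.normal(0, 0.02, n))
df = pd.DataFrame({"gini": gini, "urban": urban, "income": income})

# Pooled OLS with urbanization, its square, and log income
fit = smf.ols("gini ~ urban + I(urban**2) + np.log(income)", data=df).fit()
print(fit.params)   # negative urban, positive urban^2 => U-shaped relation
```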

Keywords: Income inequality, nighttime light, population density, Thailand, urbanization.

7 Phosphine Mortality Estimation for Simulation of Controlling Pest of Stored Grain: Lesser Grain Borer (Rhyzopertha dominica)

Authors: Mingren Shi, Michael Renton

Abstract:

There is a world-wide need for the development of sustainable management strategies to control pest infestation and the development of phosphine (PH3) resistance in the lesser grain borer (Rhyzopertha dominica). Computer simulation models can provide a relatively fast, safe and inexpensive way to weigh the merits of various management options; however, their usefulness relies on the accurate estimation of important model parameters, such as mortality. Both concentration and time of exposure are important in determining mortality in response to a toxic agent. Recent research indicated the existence of two resistance phenotypes in R. dominica in Australia, weak and strong, and revealed that the presence of resistance alleles at two loci confers strong resistance, thus motivating the construction of a two-locus model of resistance. Experimental data sets on purified pest strains, each corresponding to a single genotype of our two-locus model, were also available, so it became possible to explicitly include the mortalities of the different genotypes in the model. In this paper we describe how we used two generalized linear models (GLMs), probit and logistic, to fit the available experimental data sets. We used a direct algebraic approach, the generalized inverse matrix technique, rather than the traditional maximum likelihood estimation, to estimate the model parameters. The results show that both the probit and logistic models fit the data sets well, but the former is much better in terms of smaller least squares (numerical) errors. Meanwhile, the generalized inverse matrix technique achieved accuracy similar to that of maximum likelihood estimation, while being less time consuming and computationally demanding.
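
The probit variant of this fit reduces to linear algebra once the observed mortalities are transformed through the inverse normal CDF; the Moore-Penrose pseudoinverse then plays the role of the generalized inverse matrix technique. The dose-time-mortality records below are synthetic, and the two-locus genotype structure is not reproduced.

```python
import numpy as np
from scipy.stats import norm

# Synthetic records: concentration C (mg/L), exposure time t (h), mortality p
C = np.array([0.01, 0.05, 0.01, 0.05, 0.01, 0.05])
t = np.array([24.0, 24.0, 48.0, 48.0, 72.0, 72.0])
p = np.array([0.08, 0.30, 0.22, 0.60, 0.45, 0.85])

# Probit link: Phi^{-1}(p) = a + b*log10(C) + c*log10(t)
y = norm.ppf(p)
X = np.column_stack([np.ones_like(C), np.log10(C), np.log10(t)])

params = np.linalg.pinv(X) @ y        # generalized-inverse least squares fit
pred = norm.cdf(X @ params)           # back-transformed fitted mortalities
print(params, np.round(pred, 2))
```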

Keywords: mortality estimation, probit models, logistic model, generalized inverse matrix approach, pest control simulation

6 Multistage Condition Monitoring System of Aircraft Gas Turbine Engine

Authors: A. M. Pashayev, D. D. Askerov, C. Ardil, R. A. Sadiqov, P. S. Abdullayev

Abstract:

Research shows that the application of probability-statistical methods, especially at the early stage of diagnosing the technical condition of an aviation Gas Turbine Engine (GTE), is unfounded when the flight information is fuzzy, limited and uncertain. Hence, the efficiency of applying the new Soft Computing technology, using Fuzzy Logic and Neural Networks methods, at these diagnosing stages is considered. For this purpose, fuzzy multiple linear and non-linear models (fuzzy regression equations), obtained on the basis of statistical fuzzy data, are trained with high accuracy. To make a more adequate model of the GTE technical condition, the dynamics of changes in the skewness and kurtosis coefficients are analyzed. Studies of the changes in the skewness and kurtosis coefficient values show that the distributions of GTE operating parameters have a fuzzy character; hence, consideration of fuzzy skewness and kurtosis coefficients is expedient. Investigation of the dynamics of changes in the basic characteristics of GTE operating parameters leads to the conclusion that Fuzzy Statistical Analysis is necessary for preliminary identification of the engines' technical condition. Studies of the changes in correlation coefficient values also show their fuzzy character; therefore, the application of Fuzzy Correlation Analysis results is offered for model choice. When the information is sufficient, a recurrent algorithm for identifying the aviation GTE technical condition (Hard Computing technology) is offered, based on measurements of the input and output parameters of the multiple linear and non-linear generalized models in the presence of measurement noise (a new recursive Least Squares Method (LSM)). The developed GTE condition monitoring system provides stage-by-stage estimation of engine technical conditions. As an application of the given technique, the technical condition of a new operating aviation engine was estimated.

Keywords: aviation gas turbine engine, technical condition, fuzzy logic, neural networks, fuzzy statistics

5 Study on Planning of Smart GRID using Landscape Ecology

Authors: Sunglim Lee, Susumu Fujii, Koji Okamura

Abstract:

The smart grid is a new approach to the electric power grid that uses information and communications technology for control, providing real-time control of the direction and timing of power flow. Control devices are installed on the power lines of the grid to implement the smart grid, and their number should be determined in relation to the area one control device covers and the associated cost. One approach to determining this number is to use data on the surplus power generated by home solar generators. In current implementations, the surplus power is sent all the way to the power plant, which may cause power loss; to reduce this loss, the surplus power may instead be sent to a control device and forwarded from there to where it is needed. Under the assumption that the control devices are installed on a lattice of equal-size squares, our goal is to find the optimal spacing between control devices, such that the power-sharing area (the area covered by one control device) is small enough to avoid power loss yet big enough that no surplus power is wasted. To achieve this goal, a simulation using the landscape ecology method is conducted on a sample area. First, an aerial photograph of the land of interest is turned into a mosaic map in which each area is colored according to the ratio of power production to power consumption in that area. The power consumption is estimated from the characteristics of the buildings in the area, and the power production is calculated from the total roof area visible in the aerial photograph, assuming that solar panels are installed on all roofs. The mosaic map is colored in three colors, representing producer, consumer, and neither. We start with a mosaic map of 100 m grid size and grow the grid size until there are no red grid cells, with one control device installed per grid cell, so that the cell is the area the control device covers. As a result of this simulation, we obtained 350 m as the optimal spacing between control devices that makes effective use of the surplus power in the sample area.

Keywords: Landscape ecology, IT, smart grid, aerial photograph, simulation.

4 Aircraft Gas Turbine Engines Technical Condition Identification System

Authors: A. M. Pashayev, C. Ardil, D. D. Askerov, R. A. Sadiqov, P. S. Abdullayev

Abstract:

This paper shows that the application of probability-statistical methods, especially at the early stage of diagnosing the technical condition of an aviation gas turbine engine (GTE), is unfounded when the flight information is fuzzy, limited and uncertain. Hence, the efficiency of applying the new Soft Computing technology, using Fuzzy Logic and Neural Networks methods, at these diagnosing stages is considered. Fuzzy multiple linear and non-linear models (fuzzy regression equations), obtained on the basis of statistical fuzzy data, are trained with high accuracy. Thus, to make a more adequate model of the GTE technical condition, the dynamics of changes in the skewness and kurtosis coefficients are analyzed. Studies of the changes in the skewness and kurtosis coefficient values show that the distributions of GTE operating parameters have a fuzzy character; hence, consideration of fuzzy skewness and kurtosis coefficients is expedient. Investigation of the dynamics of changes in the basic characteristics of GTE operating parameters leads to the conclusion that Fuzzy Statistical Analysis is necessary for preliminary identification of the engines' technical condition. Studies of the changes in correlation coefficient values also show their fuzzy character; therefore, the application of Fuzzy Correlation Analysis results is offered for model choice. To check model adequacy, the Fuzzy Multiple Correlation Coefficient of Fuzzy Multiple Regression is considered. When the information is sufficient, a recurrent algorithm for identifying the aviation GTE technical condition (Hard Computing technology) is offered, based on measurements of the input and output parameters of the multiple linear and non-linear generalized models in the presence of measurement noise (a new recursive Least Squares Method (LSM)). The developed GTE condition monitoring system provides stage-by-stage estimation of engine technical conditions. As an application of the given technique, the temperature condition of a new operating aviation engine was estimated.

Keywords: Gas turbine engines, neural networks, fuzzy logic, fuzzy statistics.

3 Conflation Methodology Applied to Flood Recovery

Authors: E. L. Suarez, D. E. Meeroff, Y. Yong

Abstract:

Current flood risk modeling focuses on resilience, defined as the probability of recovery from a severe flooding event; however, the long-term damage to property and well-being caused by nuisance flooding, and its long-term effects on communities, are not typically included in risk assessments. An approach was developed that combines the probability of recovering from a severe flooding event with the probability of community performance during a nuisance event. The consolidated model, namely the conflation flooding recovery (&FR) model, evaluates risk-coping mitigation strategies for communities based on the recovery time from catastrophic events, such as hurricanes or extreme surges, and from everyday nuisance flooding events. The &FR model assesses the variation contributed by each independent input and generates a weighted output that favors the distribution with minimum variation; this approach is especially useful if the input distributions have dissimilar variances. The &FR is defined as the single distribution resulting from the product of the individual probability density functions. The resulting conflated distribution resides between the parent distributions, and it infers the recovery time required by a community to return to basic functions, such as power, utilities, transportation, and civil order, after a flooding event. The &FR model is more accurate than averaging individual observations before calculating the mean and variance, or than averaging the probabilities evaluated at the input values, which assigns the same weight to each input distribution. The main disadvantage of these traditional methods is that the resulting measure of central tendency is exactly equal to the average of the input distributions' means, without the additional information provided by each individual distribution's variance. When dealing with exponential distributions, such as resilience from severe flooding events and from nuisance flooding events, conflation results are equivalent to the weighted least squares method or best linear unbiased estimation. The combination of severe flooding risk with nuisance flooding improves flood risk management for highly populated coastal communities, such as those in South Florida, USA, and provides a method to estimate community flood recovery time more accurately from two different sources: severe flooding events and nuisance flooding events.
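
Numerically, conflation is just the normalized product of the input densities. A minimal sketch with two assumed normal recovery-time distributions (the study's exponential distributions and real parameters are not reproduced):

```python
import numpy as np
from scipy.stats import norm

# Conflation of two densities: Q(x) proportional to f1(x) * f2(x)
# Synthetic recovery-time distributions (days) for two event types
f1 = norm(loc=30, scale=10)    # severe-event recovery
f2 = norm(loc=12, scale=3)     # nuisance-event recovery

x = np.linspace(0, 80, 2001)
q = f1.pdf(x) * f2.pdf(x)
q /= np.trapz(q, x)            # normalize the product to a density

mean_q = np.trapz(x * q, x)
print(mean_q)   # lies between the parent means, pulled toward the
                # lower-variance distribution (inverse-variance weighting)
```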

Keywords: Community resilience, conflation, flood risk, nuisance flooding.

2 Model-Driven and Data-Driven Approaches for Crop Yield Prediction: Analysis and Comparison

Authors: Xiangtuo Chen, Paul-Henry Cournéde

Abstract:

Crop yield prediction is a paramount issue in agriculture, and the main idea of this paper is to find an efficient way to predict corn yield from meteorological records. The prediction models used here can be classified into model-driven and data-driven approaches, according to their modeling methodologies. Model-driven approaches are based on mechanistic crop modeling: they describe crop growth in interaction with the environment as a dynamical system. However, calibrating such a dynamical system is difficult, because it turns out to be a multidimensional non-convex optimization problem. An original contribution of this paper is to propose a statistical methodology, Multi-Scenarios Parameters Estimation (MSPE), for the parametrization of potentially complex mechanistic models from a new type of dataset (climatic data and final yield in many situations); it is tested with CORNFLO, a crop model for maize growth. The data-driven approach to yield prediction, on the other hand, is free of the complex biophysical process but places strict requirements on the dataset. A second contribution of the paper is the comparison of the model-driven method with classical data-driven methods. For this purpose, we consider two classes of regression methods: methods derived from linear regression (Ridge and Lasso regression, Principal Components Regression, and Partial Least Squares Regression) and machine learning methods (Random Forest, k-Nearest Neighbors, Artificial Neural Networks and SVM regression). The dataset consists of 720 records of corn yield at county scale, provided by the United States Department of Agriculture (USDA), and the associated climatic data. A 5-fold cross-validation process and two accuracy metrics, the root mean square error of prediction (RMSEP) and the mean absolute error of prediction (MAEP), were used to evaluate prediction capacity. The results show that, among the data-driven approaches, Random Forest is the most robust and generally achieves the best prediction error (MAEP 4.27%); it also outperforms our model-driven approach (MAEP 6.11%). However, the ability to calibrate the mechanistic model from easily accessible datasets offers several side perspectives: the mechanistic model can potentially help to underline the stresses suffered by the crop or to identify biological parameters of interest for breeding purposes. An interesting perspective is therefore to combine the two types of approaches.
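
The data-driven comparison protocol, 5-fold cross-validation scored by percentage errors, can be sketched generically; the feature matrix below is synthetic and only two of the paper's regressors are shown.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

# Synthetic stand-in for the 720 county-level records with climate features
rng = np.random.default_rng(6)
X = rng.normal(size=(720, 12))                 # climate descriptors
y = 100 + 10 * np.tanh(X[:, 0]) + 5 * X[:, 1] + rng.normal(0, 2, 720)

def maep(model):
    pred = np.asarray(cross_val_predict(model, X, y, cv=5)).ravel()
    return 100 * np.mean(np.abs(y - pred) / y)  # mean absolute error, % of yield

print("PLS MAEP:", maep(PLSRegression(n_components=4)))
print("RF  MAEP:", maep(RandomForestRegressor(n_estimators=200, random_state=0)))
```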

Keywords: Crop yield prediction, crop model, sensitivity analysis, parameter estimation, particle swarm optimization, random forest.
