Search results for: linear regression estimation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7384

6844 Remote Sensing and GIS Integration for Paddy Production Estimation in Bali Province, Indonesia

Authors: Sarono, Hamim Zaky Hadibasyir, and Ridho Kurniawan

Abstract:

Estimation of paddy production is one of the areas in agriculture that can be examined using the techniques of remote sensing and geographic information systems (GIS). The purpose of this research is to estimate the amount of paddy production and to show how remote sensing and GIS can be used to perform such an estimation in Tegalallang and Payangan Sub-districts, Bali Province, Indonesia. The method used is the land suitability method. This method associates physical parameters, embodied in the smallest mapping unit for a particular field, with that field's productivity. Estimated production is analyzed against the FAO standard land suitability classes using a matching technique. The parameters used to create the land units are slope (FAO), climate classification (Oldeman), landform (Prapto Suharsono), and soil type. The land use map, consisting of paddy and non-paddy field information, was obtained from GeoEye-1 imagery using visual interpretation. A Landsat image was used for the interpretation of the landform; the slope classification was obtained from high-point identification with spline interpolation; and the climate and soil data were secondary data originating from the related institutions. The results of this research indicate that the known wetland suitability in Tegallalang and Payangan Districts consists of S1 (very suitable), covering an area of 2,884.7 ha with a productivity of 5 tons/ha, and S2 (suitable), covering an area of 482.9 ha with a productivity of 3 tons/ha. The resulting total estimated paddy production in both districts is 31,744.3 tons per year.
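
As a rough check of the reported total, the arithmetic implied by the land-suitability figures can be reproduced as below. The two-cropping-seasons-per-year factor is our assumption, inferred only because it makes the single-season total (area times productivity) match the reported annual figure; it is not stated in the abstract.

```python
# Back-of-the-envelope reconstruction of the reported production estimate.
# The two-seasons-per-year factor is an assumption, inferred because it makes
# the single-season total (area x productivity) match the reported 31,744.3 t/yr.
suitability = {
    "S1": {"area_ha": 2884.7, "yield_t_per_ha": 5.0},
    "S2": {"area_ha": 482.9, "yield_t_per_ha": 3.0},
}
seasons_per_year = 2  # assumed cropping intensity

per_season = sum(c["area_ha"] * c["yield_t_per_ha"] for c in suitability.values())
annual = per_season * seasons_per_year
print(f"estimated paddy production: {annual:,.1f} tons/year")  # ~31,744.4
```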

Keywords: production estimation, paddy, remote sensing, geographic information system, land suitability

Procedia PDF Downloads 334
6843 Multistage Adomian Decomposition Method for Solving Linear and Non-Linear Stiff System of Ordinary Differential Equations

Authors: M. S. H. Chowdhury, Ishak Hashim

Abstract:

In this paper, linear and non-linear stiff systems of ordinary differential equations are solved by the classical Adomian decomposition method (ADM) and the multistage Adomian decomposition method (MADM). The MADM adapts the standard ADM by applying it over a sequence of subintervals, converting it into a hybrid numeric-analytic method. The MADM is tested on several examples. Comparisons with an explicit Runge-Kutta-type (RK) method and the classical ADM demonstrate the limitations of ADM and the promising capability of the MADM for solving stiff initial value problems (IVPs).
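
To make the multistage idea concrete, the sketch below applies a truncated Adomian series over many short subintervals of the scalar linear test problem y' = -λy. The test problem, truncation length and number of stages are illustrative choices of ours, not the paper's examples.

```python
from math import factorial

def madm_linear(lam, y0, t_end, n_stages=200, n_terms=8):
    """Illustrative multistage ADM for y' = -lam * y, y(0) = y0.
    On each subinterval the Adomian recursion
        y_0 = y(t_k),  y_{n+1}(t) = -lam * integral of y_n dt
    yields a truncated exponential series, which is restarted from the
    endpoint value (the "multistage" step)."""
    h = t_end / n_stages
    t, y = 0.0, y0
    history = [(t, y)]
    for _ in range(n_stages):
        # partial sum of the Adomian series evaluated at t_k + h
        y = y * sum((-lam * h) ** n / factorial(n) for n in range(n_terms))
        t += h
        history.append((t, y))
    return history

# e.g. madm_linear(lam=1000.0, y0=1.0, t_end=0.1) tracks exp(-1000 t) closely,
# whereas the same truncated series applied over the whole interval blows up.
```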

Keywords: stiff system of ODEs, Runge-Kutta type method, Adomian decomposition method, multistage ADM

Procedia PDF Downloads 430
6842 Solving Single Machine Total Weighted Tardiness Problem Using Gaussian Process Regression

Authors: Wanatchapong Kongkaew

Abstract:

This paper proposes an application of a probabilistic technique, namely Gaussian process regression, for estimating an optimal sequence for the single machine total weighted tardiness (SMTWT) scheduling problem. In this work, the Gaussian process regression (GPR) model is utilized to predict an optimal sequence for the SMTWT problem, and its solution is improved by using an iterated local search based on a simulated annealing scheme, called the GPRISA algorithm. The results show that the proposed GPRISA method achieves very good performance and a reasonable trade-off between solution quality and time consumption. Moreover, in terms of deviation from the best-known solution, the proposed mechanism noticeably outperforms recently existing approaches.
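
A minimal sketch of how a GPR model could score jobs to seed such a local search is given below. The feature encoding, the target definition and all data are assumptions of ours for illustration, not the authors' formulation.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Hypothetical encoding: each job is described by (processing time, weight, due
# date), and the regression target is the job's position in a known optimal
# sequence taken from small solved instances. The trained GPR then scores the
# jobs of a new instance, and sorting by the predicted score gives an initial
# sequence that a local search (e.g. simulated annealing) can refine.
rng = np.random.default_rng(0)
X_train = rng.uniform(size=(200, 3))                     # job features from solved instances
y_train = rng.permutation(np.repeat(np.arange(20), 10))  # placeholder sequence positions

gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gpr.fit(X_train, y_train)

X_new = rng.uniform(size=(20, 3))       # jobs of a new instance
scores = gpr.predict(X_new)
initial_sequence = np.argsort(scores)   # candidate sequence to seed the local search
```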

Keywords: Gaussian process regression, iterated local search, simulated annealing, single machine total weighted tardiness

Procedia PDF Downloads 304
6841 Estimation and Forecasting with a Quantile AR Model for Financial Returns

Authors: Yuzhi Cai

Abstract:

This talk presents a Bayesian approach to quantile autoregressive (QAR) time series model estimation and forecasting. We establish that the joint posterior distribution of the model parameters and future values is well defined. The associated MCMC algorithm for parameter estimation and forecasting converges to the posterior distribution quickly. We also present a forecast-combining technique that produces more accurate out-of-sample forecasts by using a weighted sequence of fitted QAR models. A moving-window method to check the quality of the estimated conditional quantiles is developed. We verify our methodology using simulation studies and then apply it to currency exchange rate data; an application to the USD-to-GBP daily exchange rates will also be discussed. The results obtained show that an unequally weighted combining method performs better than other forecasting methodologies.
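
As a rough illustration of fitting QAR(1) models at several quantile levels and combining their forecasts with unequal weights, the sketch below uses classical quantile regression as a stand-in for the talk's Bayesian MCMC estimation; the data, quantile levels and weights are placeholders.

```python
import numpy as np
import statsmodels.api as sm

# QAR(1) fits at several quantiles via quantile regression (a frequentist
# stand-in for the Bayesian MCMC estimation described in the abstract), then a
# weighted combination of the one-step-ahead forecasts.
rng = np.random.default_rng(1)
r = rng.standard_t(df=5, size=500) * 0.01          # synthetic "returns"
y, y_lag = r[1:], sm.add_constant(r[:-1])

quantiles = [0.05, 0.25, 0.5, 0.75, 0.95]
fits = {q: sm.QuantReg(y, y_lag).fit(q=q) for q in quantiles}

last = np.array([1.0, r[-1]])                      # constant + last observed return
forecasts = {q: fits[q].params @ last for q in quantiles}

weights = np.array([0.1, 0.2, 0.4, 0.2, 0.1])      # illustrative unequal combining weights
combined = weights @ np.array([forecasts[q] for q in quantiles])
```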

Keywords: combining forecasts, MCMC, quantile modelling, quantile forecasting, predictive density functions

Procedia PDF Downloads 342
6840 Estimation of OPC, Fly Ash and Slag Contents in Blended and Composite Cements by Selective Dissolution Method

Authors: Suresh Palla

Abstract:

This research paper presents the results of a study on the estimation of fly ash, slag and cement contents in blended and composite cements by a novel selective dissolution method. The cement samples investigated include OPC with fly ash as performance improver, OPC with slag as performance improver, PPC, PSC and composite cement conforming to the respective Indian Standards. Slag and OPC contents in PSC were estimated by selectively dissolving OPC in stage 1 and selectively dissolving slag in stage 2. In the case of the composite cement sample, the percentages of cement, slag and fly ash were estimated systematically by selective dissolution of cement, slag and fly ash in three stages. In the first stage, the cement was dissolved and separated, leaving a residue of slag and fly ash designated as R1. The second stage involves gravimetric estimation of the OPC fraction and the residue, together with selective dissolution of the fly ash and slag contents. The fly ash content, R2, was estimated through gravimetric analysis. Thereafter, the difference between R1 and R2 is taken as the slag content. The cement, fly ash and slag contents obtained using the selective dissolution method showed a standard deviation within 10% of the corresponding percentages of the respective constituents. The results suggest that this novel selective dissolution method can be successfully used for the estimation of OPC and SCM contents in different types of cements.
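
The mass balance behind the three-stage scheme can be written down directly; the helper below is a hypothetical illustration of that arithmetic (the function name and example masses are ours), not part of the paper's analytical procedure.

```python
def composite_cement_fractions(total_mass, residue_after_stage1, fly_ash_residue):
    """Hypothetical mass balance for the three-stage scheme described above:
    stage 1 dissolves the cement, leaving R1 = slag + fly ash;
    the later stages isolate the fly ash fraction R2 gravimetrically;
    the slag content is then taken as R1 - R2."""
    r1, r2 = residue_after_stage1, fly_ash_residue
    opc = total_mass - r1
    fly_ash = r2
    slag = r1 - r2
    to_pct = lambda m: 100.0 * m / total_mass
    return {"OPC %": to_pct(opc), "fly ash %": to_pct(fly_ash), "slag %": to_pct(slag)}

# e.g. composite_cement_fractions(1.000, 0.45, 0.20) -> OPC 55 %, fly ash 20 %, slag 25 %
```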

Keywords: selective dissolution method, fly ash, GGBFS slag, EDTA

Procedia PDF Downloads 151
6839 A Convolution Neural Network PM-10 Prediction System Based on a Dense Measurement Sensor Network in Poland

Authors: Piotr A. Kowalski, Kasper Sapala, Wiktor Warchalowski

Abstract:

PM10 is suspended dust that primarily has a negative effect on the respiratory system. PM10 is responsible for attacks of coughing and wheezing, asthma, or acute, violent bronchitis. Indirectly, PM10 also negatively affects the rest of the body, including increasing the risk of heart attack and stroke. Unfortunately, Poland is a country that cannot boast of good air quality, in particular due to high PM concentration levels. Therefore, based on the dense network of Airly sensors, it was decided to address the problem of predicting suspended particulate matter concentrations. Due to the very complicated nature of this issue, a machine learning approach was used. For this purpose, convolutional neural networks (CNNs) were adopted, these currently being among the leading information-processing methods in the field of computational intelligence. The aim of this research is to show the influence of particular CNN network parameters on the quality of the obtained forecast. The forecast itself is made on the basis of parameters measured by Airly sensors and is carried out for the subsequent day, hour after hour. The evaluation of the learning process for the investigated models was mostly based upon the mean square error criterion; however, during model validation, a number of other methods of quantitative evaluation were taken into account. The presented pollution prediction model has been verified with real weather and air pollution data taken from the Airly sensor network. The dense and distributed network of Airly measurement devices enables access to current and archival data on air pollution, temperature, suspended particulate matter PM1.0, PM2.5, and PM10, CAQI levels, as well as atmospheric pressure and air humidity. In this investigation, PM2.5 and PM10, temperature and wind measurements, as well as external forecasts of temperature and wind for the next 24 h, served as input data. Due to the specificity of the CNN-type network, these data are transformed into tensors and then processed. The network consists of an input layer, an output layer, and many hidden layers; in the hidden layers, convolutional and pooling operations are performed. The output of the system is a 24-element vector containing the predicted PM10 concentration for the upcoming 24-hour period. Over 1000 models based on the CNN methodology were tested during the study. Several of those giving the best results were selected, and a comparison was then made with models based on linear regression. The numerical tests carried out, using real 'big' data, fully confirmed the positive properties of the presented method. Models based on the CNN technique allow prediction of PM10 dust concentration with a much smaller mean square error than the currently used methods based on linear regression. Moreover, the use of neural networks increased the coefficient of determination (R²) by about 5 percent compared to the linear model. During the simulation, the R² coefficient was 0.92, 0.76, 0.75, 0.73, and 0.73 for the 1st, 6th, 12th, 18th, and 24th hour of prediction, respectively.
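
The general shape of such a network, mapping a tensor of measured channels over a recent history window to a 24-element hourly PM10 forecast, is sketched below. The channel count, history length, depth and layer sizes are assumptions of ours, not the authors' tuned configuration.

```python
import torch
import torch.nn as nn

# Minimal sketch of the kind of architecture described above: input is a tensor
# of C measured channels (PM2.5, PM10, temperature, wind, external forecasts ...)
# over the last T hours; output is a 24-element vector of hourly PM10 predictions.
class PM10Net(nn.Module):
    def __init__(self, in_channels=6, hidden=32, history_hours=48):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(hidden, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.head = nn.Linear(hidden * (history_hours // 4), 24)

    def forward(self, x):               # x: (batch, channels, history_hours)
        z = self.features(x)
        return self.head(z.flatten(1))  # (batch, 24) hourly PM10 forecast

model = PM10Net()
dummy = torch.randn(8, 6, 48)           # batch of 8 synthetic input windows
out = model(dummy)                       # shape (8, 24)
```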

Keywords: air pollution prediction (forecasting), machine learning, regression task, convolution neural networks

Procedia PDF Downloads 139
6838 Minimum Half Power Beam Width and Side Lobe Level Reduction of Linear Antenna Array Using Particle Swarm Optimization

Authors: Saeed Ur Rahman, Naveed Ullah, Muhammad Irshad Khan, Quensheng Cao, Niaz Muhammad Khan

Abstract:

In this paper, the optimization performance of a non-uniform linear antenna array is presented. The particle swarm optimization (PSO) algorithm is applied to minimize the side lobe level (SLL) and half-power beamwidth (HPBW). The purpose of using the PSO algorithm is to obtain the optimum values of inter-element spacing and excitation amplitude of the linear antenna array that provide a radiation pattern with minimum SLL and HPBW. Various design examples are considered, and the results obtained using PSO are confirmed by comparison with results achieved using other nature-inspired metaheuristic algorithms, such as the real-coded genetic algorithm (RGA) and the biogeography-based optimization (BBO) algorithm. The comparative results show that optimizing the linear antenna array using PSO provides considerable enhancement in SLL and HPBW.
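
The sketch below shows the general shape of such a PSO run on a toy 10-element array, optimizing only the excitation amplitudes against a crude peak-side-lobe cost. The swarm settings, the cost definition and the fixed half-wavelength spacing are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

N, k_d = 10, np.pi                      # elements, k*d for d = lambda/2
theta = np.linspace(0.0, np.pi, 721)
psi = k_d * np.cos(theta)

def array_factor_db(amps):
    n = np.arange(N) - (N - 1) / 2.0
    af = np.abs(np.exp(1j * np.outer(psi, n)) @ amps)
    return 20 * np.log10(af / af.max() + 1e-12)

def fitness(amps):
    af_db = array_factor_db(amps)
    main = np.argmax(af_db)
    # crude peak side-lobe level: highest lobe outside a fixed +/-15 deg window
    mask = np.abs(np.arange(len(theta)) - main) > 60
    return af_db[mask].max()

rng = np.random.default_rng(2)
n_particles, iters = 40, 200
x = rng.uniform(0.2, 1.0, size=(n_particles, N))    # particle = amplitude taper
v = np.zeros_like(x)
pbest, pbest_f = x.copy(), np.array([fitness(p) for p in x])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = np.clip(x + v, 0.05, 1.0)
    f = np.array([fitness(p) for p in x])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()

print("optimised peak SLL (dB):", fitness(gbest))
```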

Keywords: linear antenna array, minimum side lobe level, narrow half power beamwidth, particle swarm optimization

Procedia PDF Downloads 543
6837 Nuclear Fuel Safety Threshold Determined by Logistic Regression Plus Uncertainty

Authors: D. S. Gomes, A. T. Silva

Abstract:

Analysis of the uncertainty quantification related to nuclear safety margins applied to the nuclear reactor is an important concept for preventing future radioactive accidents. Nuclear fuel performance codes may rely on tolerance levels determined by traditional deterministic models, which produce acceptable results at burnup cycles under 62 GWd/MTU. The behavior of nuclear fuel can be simulated by applying a series of material properties under irradiation and physics models to calculate the safety limits. In this study, theoretical predictions of nuclear fuel failure under transient conditions are investigated for extended irradiation cycles at 75 GWd/MTU, considering the behavior of fuel rods in light-water reactors under reactivity accident conditions. The fuel pellet can melt due to the rapid increase of reactivity during a transient. Large power excursions in the reactor are the subject of interest, leading to a treatment known as the Fuchs-Hansen model. The point kinetics neutron equations exhibit the characteristics of non-linear differential equations. In this investigation, multivariate logistic regression is employed for a probabilistic forecast of fuel failure. A comparison of the computational simulation and experimental results showed acceptable agreement. The experiments used pre-irradiated fuel rods subjected to a rapid energy pulse, which reproduces the behavior expected during a nuclear accident. The propagation of uncertainty utilizes Wilks' formulation. The variables chosen as essential to failure prediction were the fuel burnup, the applied peak power, the pulse width, the oxidation layer thickness, and the cladding type.
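
A minimal sketch of such a multivariate logistic failure model is shown below. The data are synthetic and the units and coefficient values are placeholders, so it illustrates only the model form, not the paper's fitted results.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Synthetic example: logistic model of fuel-rod failure probability using the
# five predictors named in the abstract (burnup, peak power, pulse width,
# oxidation layer thickness, cladding type).
rng = np.random.default_rng(3)
n = 400
X = np.column_stack([
    rng.uniform(0, 75, n),       # burnup (GWd/MTU)
    rng.uniform(50, 200, n),     # applied peak power (assumed units)
    rng.uniform(5, 80, n),       # pulse width (ms)
    rng.uniform(0, 100, n),      # oxidation layer thickness (um)
    rng.integers(0, 2, n),       # cladding type (0/1 dummy)
])
# synthetic labels: failure more likely at high burnup, thick oxide, narrow pulses
logit = 0.05 * X[:, 0] - 0.03 * X[:, 2] + 0.02 * X[:, 3] - 2.0
y = rng.random(n) < 1 / (1 + np.exp(-logit))

model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X, y)
p_fail = model.predict_proba([[70, 150, 10, 60, 1]])[0, 1]   # predicted failure probability
```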

Keywords: logistic regression, reactivity-initiated accident, safety margins, uncertainty propagation

Procedia PDF Downloads 288
6836 Similar Correlation of Meat and Sugar to Global Obesity Prevalence

Authors: Wenpeng You, Maciej Henneberg

Abstract:

Background: Sugar consumption has been overwhelmingly advocated as a major dietary offender in obesity prevalence. Meat intake has been hypothesized as an obesity contributor in previous publications, but a moderate amount of meat is still suggested for inclusion in our daily diet by many dietary guidelines. Comparable sugar and meat exposure data were obtained to assess the difference in relationships between the two major food groups and obesity prevalence at the population level. Methods: Population-level estimates of obesity and overweight rates, per capita per day exposure to major food groups (meat, sugar, starch crops, fibers, fats and fruits) and total calories, per capita per year GDP, urbanization and physical inactivity prevalence rates were extracted and matched for statistical analysis. Comparisons of correlation coefficients (Pearson and partial) using Fisher's r-to-z transformation, and of β ranges (β ± 2 SE) and their overlap in multiple linear regression (Enter and Stepwise), were used to examine potential differences in the relationships of obesity prevalence with sugar exposure and meat exposure, respectively. Results: Pearson and partial correlation analyses (controlling for total calories, physical inactivity prevalence, GDP and urbanization) revealed that sugar and meat exposure correlated significantly with obesity and overweight prevalence. Fisher's r-to-z transformation did not show a statistically significant difference between the sugar-obesity and meat-obesity Pearson correlation coefficients (z = -0.53, p = 0.5961) or partial correlation coefficients (z = -0.04, p = 0.9681). Both the Enter and Stepwise models in the multiple linear regression analysis showed that sugar and meat exposure were the most significant predictors of obesity prevalence. The large overlap of the β ranges in the Enter (0.289-0.573) and Stepwise (0.294-0.582) models indicated that, statistically, sugar and meat exposure correlated with obesity without a significant difference. Conclusion: Worldwide, sugar and meat exposure correlated with obesity prevalence to the same extent. Like sugar, minimal meat exposure should also be suggested in the dietary guidelines.
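
The Fisher r-to-z comparison used above can be reproduced in a few lines. The helper below uses the standard formula for two independent correlations with placeholder inputs, since the study's own coefficients and sample sizes are not restated here.

```python
import numpy as np
from scipy import stats

def fisher_r_to_z_compare(r1, n1, r2, n2):
    """Compare two independent Pearson correlations via Fisher's r-to-z
    transformation; returns the z statistic and the two-sided p-value."""
    z1, z2 = np.arctanh(r1), np.arctanh(r2)
    se = np.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))
    z = (z1 - z2) / se
    p = 2 * stats.norm.sf(abs(z))
    return z, p

# e.g. comparing a sugar-obesity and a meat-obesity correlation of similar size
print(fisher_r_to_z_compare(0.52, 170, 0.55, 170))
```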

Keywords: meat, sugar, obesity, energy surplus, meat protein, fats, insulin resistance

Procedia PDF Downloads 301
6835 Full-Field Estimation of Cyclic Threshold Shear Strain

Authors: E. E. S. Uy, T. Noda, K. Nakai, J. R. Dungca

Abstract:

Cyclic threshold shear strain is the cyclic shear strain amplitude that serves as the indicator of the development of pore water pressure. The parameter can be obtained by performing either a cyclic triaxial test, a shaking table test, a cyclic simple shear test or a resonant column test. In a cyclic triaxial test, other researchers install measuring devices in close proximity to the soil to measure the parameter. In this study, an attempt was made to estimate the cyclic threshold shear strain parameter using a full-field measurement technique. The technique uses a camera to monitor and measure the movement of the soil. For this study, the technique was incorporated in a strain-controlled consolidated undrained cyclic triaxial test. Calibration of the camera was first performed to ensure that the camera can properly measure the deformation under cyclic loading. Its capacity to measure deformation was also investigated using a cylindrical rubber dummy. Two-dimensional image processing was implemented, and the Lucas-Kanade optical flow algorithm was applied to track the movement of the soil particles. Results from the full-field measurement technique were compared with the results from the linear variable displacement transducer. A range of values was determined from the estimation, owing to the nonhomogeneous deformation of the soil observed during the cyclic loading. The minimum values were on the order of 10⁻²% in some areas of the specimen.
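
A typical way to implement the particle-tracking step with the pyramidal Lucas-Kanade algorithm is sketched below using OpenCV. The file names, feature-detection settings and window size are placeholders, and the pixel-to-strain conversion is omitted because it depends on the camera calibration described in the abstract.

```python
import cv2
import numpy as np

# Sketch of Lucas-Kanade tracking between two consecutive frames of the specimen.
prev = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)   # placeholder file names
curr = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

# seed points on the specimen surface (here: detected corner-like features)
p0 = cv2.goodFeaturesToTrack(prev, maxCorners=500, qualityLevel=0.01, minDistance=5)

p1, status, err = cv2.calcOpticalFlowPyrLK(
    prev, curr, p0, None, winSize=(21, 21), maxLevel=3)

good_old = p0[status.flatten() == 1].reshape(-1, 2)
good_new = p1[status.flatten() == 1].reshape(-1, 2)
displacements = good_new - good_old        # per-point pixel displacements
# converting pixels to shear strain requires the camera calibration and gauge
# length of the actual test setup, which are not reproduced here
```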

Keywords: cyclic loading, cyclic threshold shear strain, full-field measurement, optical flow

Procedia PDF Downloads 230
6834 A Quantitative Study Investigating Whether the Internalisation of Adolescent Femininity Ideologies Predicts Depression and Anxiety in Female Adolescents

Authors: Tondani Mudau, Sherine B. Van Wyk, Zuhayr Kafaar, Janan Dietrich

Abstract:

Female adolescents residing in a patriarchal society such as South Africa are more inclined to embrace feminine ideologies. Internalizing these ideologies may expose female adolescents to mental health challenges such as depression and anxiety. This study explored whether the internalisation of adolescent femininity ideologies, namely an objectified relationship with one's own body (ORB) and an inauthentic self in relationships (ISR), predicted anxiety and depression in late female adolescents at Stellenbosch University. The sample consisted of 1451 female undergraduate and postgraduate students (aged 18-24) enrolled at Stellenbosch University. The mean age of the participants was 20 (SD = 1.46), and most participants (39.7%) were first-year students. The study employed a cross-sectional quantitative research design. Data were collected through an online self-completion survey consisting of three sections. The first section asked biographical questions regarding age, gender, race and family background. The second section measured the internalisation of feminine ideologies using the Adolescent Femininity Ideology Scale (AFIS), which has two subscales, namely inauthentic self in relationship with others (ISR) and objectified relationship with one's own body (ORB). The ISR scale had a Cronbach's alpha of 0.76, and the ORB scale had a Cronbach's alpha of 0.83. The third section measured mental health (depression and anxiety) using the Hopkins Symptoms 25-Checklist, which had a Cronbach's alpha of 0.93. Data were analysed through multiple linear regression in IBM SPSS (Statistical Package for the Social Sciences, Version 24). The overall results of the multiple linear regression showed that the AFIS combination accounted for 14% of the variance in anxiety as measured by the Hopkins Symptoms Checklist, R² = .142, F(2, 682) = 56.431, p < .001. The combination also accounted for 24% of the variance in depression as measured by the Hopkins Symptoms Checklist, R² = .239, F(2, 682) = 106.971, p < .0. The findings in this study affirm the objectification and feminist theory contentions that internalising femininity ideologies (ISR and ORB) predicts negative mental health in female adolescents.

Keywords: adolescents, anxiety, depression, feminine ideologies, inauthentic self, mental health, self-objectification, South Africa

Procedia PDF Downloads 148
6833 Polynomially Adjusted Bivariate Density Estimates Based on the Saddlepoint Approximation

Authors: S. B. Provost, Susan Sheng

Abstract:

An alternative bivariate density estimation methodology is introduced in this presentation. The proposed approach involves estimating the density function associated with the marginal distribution of each of the two variables by means of the saddlepoint approximation technique and applying a bivariate polynomial adjustment to the product of these density estimates. Since the saddlepoint approximation is utilized in the context of density estimation, such estimates are determined from empirical cumulant-generating functions. In the univariate case, the saddlepoint density estimate is itself adjusted by a polynomial. Given a set of observations, the coefficients of the polynomial adjustments are obtained from the sample moments. Several illustrative applications of the proposed methodology shall be presented. Since this approach relies essentially on a determinate number of sample moments, it is particularly well suited for modeling massive data sets.
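
A univariate version of the saddlepoint step, estimated directly from an empirical cumulant-generating function, can be sketched as follows. The gamma sample, the root-finding bracket and the omission of the polynomial adjustment are all simplifications of ours.

```python
import numpy as np
from scipy.optimize import brentq

# Univariate saddlepoint density estimate built from the empirical CGF
# K(s) = log( mean(exp(s*X)) ); the bivariate polynomial adjustment described
# in the abstract is not reproduced here.
rng = np.random.default_rng(4)
sample = rng.gamma(shape=3.0, scale=1.0, size=2000)

def K(s):  return np.log(np.mean(np.exp(s * sample)))
def K1(s): return np.mean(sample * np.exp(s * sample)) / np.mean(np.exp(s * sample))
def K2(s):
    m0 = np.mean(np.exp(s * sample))
    m1 = np.mean(sample * np.exp(s * sample))
    m2 = np.mean(sample**2 * np.exp(s * sample))
    return m2 / m0 - (m1 / m0) ** 2

def saddlepoint_density(x, s_lo=-20.0, s_hi=0.9):
    # bracket chosen for this synthetic sample so that K'(s) = x has a root
    s_hat = brentq(lambda s: K1(s) - x, s_lo, s_hi)
    return np.exp(K(s_hat) - s_hat * x) / np.sqrt(2 * np.pi * K2(s_hat))

xs = np.linspace(1.0, 8.0, 15)
estimate = [saddlepoint_density(x) for x in xs]   # compare with the gamma(3,1) density
```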

Keywords: density estimation, empirical cumulant-generating function, moments, saddlepoint approximation

Procedia PDF Downloads 275
6832 Motion Estimator Architecture with Optimized Number of Processing Elements for High Efficiency Video Coding

Authors: Seongsoo Lee

Abstract:

Motion estimation accounts for the heaviest computation in HEVC (High Efficiency Video Coding). Many fast algorithms, such as TZS (test zone search), have been proposed to reduce this computation. Still, the huge computational load of motion estimation remains a critical issue in the implementation of an HEVC video codec. In this paper, a motion estimator architecture with an optimized number of PEs (processing elements) is presented by exploiting early termination. It also reduces hardware size by exploiting parallel processing. The presented motion estimator architecture has 8 PEs, and it can efficiently perform TZS with very high utilization of the PEs.
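
The early-termination idea that allows the PE count to be reduced can be illustrated in software as below. This toy full-search SAD loop is our own illustration of the principle; it is neither the TZS schedule nor the hardware architecture of the paper.

```python
import numpy as np

def sad_with_early_termination(cur_block, ref_frame, top, left, search_range, best_cost=2**31):
    """Toy block-matching loop: the SAD accumulation for a candidate is abandoned
    as soon as it exceeds the best cost found so far (early termination), which is
    what lets a hardware design keep a small number of processing elements busy."""
    h, w = cur_block.shape
    cur = cur_block.astype(np.int64)
    best_mv = (0, 0)
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            cand = ref_frame[top + dy: top + dy + h, left + dx: left + dx + w].astype(np.int64)
            cost = 0
            for row in range(h):                      # accumulate SAD row by row
                cost += int(np.abs(cur[row] - cand[row]).sum())
                if cost >= best_cost:                 # early termination
                    break
            else:
                best_cost, best_mv = cost, (dy, dx)   # full accumulation finished
    return best_mv, best_cost

# the caller must ensure the search window stays inside the reference frame bounds
```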

Keywords: motion estimation, test zone search, high efficiency video coding, processing element, optimization

Procedia PDF Downloads 356
6831 Count Regression Modelling on Number of Migrants in Households

Authors: Tsedeke Lambore Gemecho, Ayele Taye Goshu

Abstract:

The main objective of this study is to identify the determinants of the number of international migrants in a household and to compare regression models for a count response. The study was carried out by collecting data from a total of 2,288 household heads in 16 randomly sampled districts of the Hadiya and Kembata-Tembaro zones of Southern Ethiopia. Poisson mixed models, as special cases of the generalized linear mixed model, are explored to determine the effects of the predictors: age of household head, farm land size, and household size. Two ethnicities, Hadiya and Kembata, are included in the final model as dummy variables. Stepwise variable selection identified four predictors: age of head, farm land size, family size and the dummy variable ethnic2 (0 = other, 1 = Kembata). These predictors are significant at the 5% significance level for the count response, the number of migrants. The final Poisson mixed model consists of the four predictors with district-level random effects. The area-specific random effects are significant, with a variance of about 0.5105 and a standard deviation of 0.7145. The results show that the number of migrants increases with the head's age, family size, and farm land size. In conclusion, there is a significantly high number of international migrants per household in the area. Age of household head, family size, and farm land size are determinants that increase the number of international migrants in households. Community-based intervention is needed to monitor and regulate international migration for the benefit of the society.
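
The fixed-effects part of such a Poisson model can be sketched with statsmodels as below. The data are synthetic, and the district random intercept of the paper's mixed model is deliberately omitted, since fitting it would require a mixed-model routine rather than a plain GLM.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Fixed-effects Poisson regression with the four predictors named above
# (synthetic placeholder data).
rng = np.random.default_rng(5)
n = 500
df = pd.DataFrame({
    "head_age": rng.integers(20, 80, n),
    "farm_size": rng.uniform(0.1, 5.0, n),
    "family_size": rng.integers(1, 12, n),
    "ethnic2": rng.integers(0, 2, n),        # 1 = Kembata, 0 = other
})
mu = np.exp(-2.0 + 0.01 * df.head_age + 0.1 * df.farm_size + 0.08 * df.family_size)
df["n_migrants"] = rng.poisson(mu)

model = smf.glm("n_migrants ~ head_age + farm_size + family_size + ethnic2",
                data=df, family=sm.families.Poisson()).fit()
print(model.summary())
```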

Keywords: Poisson regression, GLM, number of migrant, Hadiya and Kembata Tembaro zones

Procedia PDF Downloads 280
6830 Factors Influencing Family Resilience and Quality of Life in Pediatric Cancer Patients and Their Caregivers: A Cluster Analysis

Authors: Li Wang, Dan Shu, Shiguang Pang, Lixiu Wang, Bing Xiang Yang, Qian Liu

Abstract:

Background: Cancer is one of the most severe diseases in childhood; long-term treatment and its side effects significantly impact the patient's physical, psychological and social functioning and quality of life, while also placing substantial physical and psychological burdens on caregivers and families. Family resilience is crucial for children with cancer, helping them cope better with the disease and supporting the family in facing challenges together. As a family-level variable, family resilience requires information from multiple family members. However, to the best of our knowledge, there is currently no research investigating family resilience from the perspectives of both pediatric cancer patients and their caregivers. Therefore, this study aims to investigate the family resilience and quality of life of pediatric cancer patients from a patient-caregiver dyadic perspective. Methods: A total of 149 dyads of pediatric cancer patients and their principal caregivers were recruited from the oncology departments of 4 tertiary hospitals in Wuhan and Taiyuan, China. All participants completed questionnaires that identified their demographic and clinical characteristics and assessed family resilience and quality of life for both the patients and their caregivers. K-means cluster analysis was used to identify different clusters of family resilience based on the reports from patients and caregivers. Multivariate logistic regression and linear regression were used to analyze the factors influencing family resilience and quality of life, as well as the relationship between the two. Results: Three clusters of family resilience were identified: a cluster of high family resilience (HR), a cluster of low family resilience (LR), and a cluster of discrepant family resilience (DR). Most (67.1%) families fell into the low-resilience cluster. Characteristics such as the type of caregiver and the patient's perceived social support differed among the three clusters. Compared to the LR group, families where the mother is the caregiver and where the patient has high social support are more likely to be assigned to the HR cluster. The quality of life of caregivers was consistently highest in the HR cluster and lowest in the LR cluster. The patient's quality of life was not related to family resilience. In the linear regression analysis of the patient's quality of life, patients who are the first-born had higher quality of life, while those living with their parents had lower quality of life. The participants' characteristics were not associated with the quality of life of caregivers. Conclusions: In most families, family resilience was low. Families with maternal caregivers and patients receiving high levels of social support are more inclined to show higher levels of family resilience. Family resilience was linked to the quality of life of caregivers of pediatric cancer patients. The clinical implication of these findings is that healthcare and social support organizations should prioritize and support the participation of mothers in caregiving responsibilities. Furthermore, they should assist families in accessing social support to enhance family resilience. This study also emphasizes the importance of promoting family resilience for enhancing family health and happiness, as well as improving the quality of life of caregivers.
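
The clustering step could be carried out roughly as follows with scikit-learn. The resilience scores here are synthetic, and representing each dyad by a patient score and a caregiver score is our assumption about how the two reports were combined.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Each family (dyad) is represented by the patient's and the caregiver's
# family-resilience scores; K-means with k = 3 yields groups analogous to
# high / low / discrepant resilience. Scores below are synthetic placeholders.
rng = np.random.default_rng(6)
patient_score = rng.normal(60, 15, 149)
caregiver_score = patient_score + rng.normal(0, 12, 149)   # partly concordant dyads
X = StandardScaler().fit_transform(np.column_stack([patient_score, caregiver_score]))

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
```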

Keywords: pediatric cancer, cluster analysis, family resilience, quality of life

Procedia PDF Downloads 27
6829 Human Posture Estimation Based on Multiple Viewpoints

Authors: Jiahe Liu, Hongyang Yu, Feng Qian, Miao Luo

Abstract:

This study aimed to address the problem of improving the confidence of key points by fusing multi-view information, thereby estimating human posture more accurately. We first obtained multi-view image information and then used the MvP algorithm to fuse this multi-view information together to obtain a set of high-confidence human key points. We used these as the input for the Spatio-Temporal Graph Convolutional Network (ST-GCN). ST-GCN is a deep learning model used for processing spatio-temporal data, which can effectively capture spatio-temporal relationships in video sequences. By using the MvP algorithm to fuse multi-view information and inputting it into the spatio-temporal graph convolution model, this study provides an effective method to improve the accuracy of human posture estimation and provides strong support for further research and application in related fields.

Keywords: multi-view, pose estimation, ST-GCN, joint fusion

Procedia PDF Downloads 61
6828 Optimization of Tundish Geometry for Minimizing Dead Volume Using OpenFOAM

Authors: Prateek Singh, Dilshad Ahmad

Abstract:

Growing demand for high-quality steel products has inspired researchers to investigate the unit operations involved in the manufacturing of these products (slabs, rods, sheets, etc.). One such operation is the tundish operation, in which a vessel (the tundish) acts as a buffer of molten steel for the solidification operation in the mold. It is observed that the tundish also plays a crucial role in the quality and cleanliness of the steel produced, besides merely acting as a reservoir for the mold. It facilitates the removal of dissolved oxygen (inclusions) from the molten steel, thus improving its cleanliness. Inclusion removal can be enhanced by increasing the residence time of molten steel in the tundish through the incorporation of flow modifiers like dams, weirs, turbo-pads, etc. These flow modifiers also help in reducing the dead or short-circuit zones within the tundish, which is significant for maintaining the thermal and chemical homogeneity of the molten steel. Thus, it becomes important to analyze the flow of molten steel in the tundish for different configurations of flow modifiers. In the present work, the effect of varying positions and heights/depths of the dam and weir on the dead volume in the tundish is studied. Steady-state thermal and flow profiles of the molten steel within the tundish are obtained using OpenFOAM. Subsequently, residence time distribution analysis is performed to obtain the percentage of dead volume in the tundish. The Design of Experiments method is then used to configure different tundish geometries for varying positions and heights/depths of the dam and weir, and the dead volume for each tundish design is obtained. A second-degree polynomial with two-way interactions of the independent variables (positions and heights/depths of the dam and weir) is fitted using a multiple linear regression model to predict the dead volume in the tundish. This polynomial is then used in an optimization framework to obtain the optimal tundish geometry for minimizing dead volume using sequential quadratic programming optimization.
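
The surrogate-plus-optimization step can be sketched as below, with a quadratic response surface fitted to placeholder (geometry, dead volume) data and minimized by SciPy's SLSQP routine, an SQP method. The variable scaling, bounds and random data are assumptions standing in for the CFD results.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from scipy.optimize import minimize

# Quadratic response surface over (dam position, dam height, weir position,
# weir depth) -> % dead volume, then SQP minimization of the surrogate.
rng = np.random.default_rng(7)
X = rng.uniform(0.0, 1.0, size=(30, 4))          # normalized design variables (placeholder DoE)
dead_volume = rng.uniform(5.0, 25.0, size=30)    # % dead volume from RTD analysis (placeholder)

poly = PolynomialFeatures(degree=2, include_bias=False)
surrogate = LinearRegression().fit(poly.fit_transform(X), dead_volume)

def predicted_dead_volume(x):
    return float(surrogate.predict(poly.transform(x.reshape(1, -1)))[0])

res = minimize(predicted_dead_volume, x0=np.full(4, 0.5),
               bounds=[(0.0, 1.0)] * 4, method="SLSQP")
optimal_geometry, minimal_dead_volume = res.x, res.fun
```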

Keywords: design of experiments, multiple linear regression, OpenFOAM, residence time distribution, sequential quadratic programming optimization, steel, tundish

Procedia PDF Downloads 197
6827 Low Complexity Carrier Frequency Offset Estimation for Cooperative Orthogonal Frequency Division Multiplexing Communication Systems without Cyclic Prefix

Authors: Tsui-Tsai Lin

Abstract:

Cooperative orthogonal frequency division multiplexing (OFDM) transmission, which possesses the advantages of better connectivity, expanded coverage, and resistance to frequency-selective fading, has become a powerful solution for the physical layer in wireless communications. However, such a hybrid scheme suffers from the carrier frequency offset (CFO) effects inherited from OFDM-based systems, which lead to a significant degradation in performance. In addition, insertion of a cyclic prefix (CP) at the head of each symbol block to combat inter-symbol interference leads to a reduction in spectral efficiency. The design of CFO estimation for cooperative OFDM systems without a CP remains an open problem. This motivates us to develop a low-complexity CFO estimator for the cooperative OFDM decode-and-forward (DF) communication system without CP over a multipath fading channel. Specifically, using a block-type pilot, the CFO estimate is first derived in accordance with the least-squares criterion. Reliable performance can be obtained through an exhaustive two-dimensional (2D) search, at the cost of heavy computational complexity. As a remedy, an alternative solution realized with an iterative approach is proposed for the CFO estimation. In contrast to the 2D-search estimator, the iterative method enjoys the advantage of substantially reduced implementation complexity without sacrificing estimation performance. Computer simulations are presented to demonstrate the efficacy of the proposed CFO estimator.

Keywords: cooperative transmission, orthogonal frequency division multiplexing (OFDM), carrier frequency offset, iteration

Procedia PDF Downloads 261
6826 New Method for Determining the Distribution of Birefringence and Linear Dichroism in Polymer Materials Based on Polarization-Holographic Grating

Authors: Barbara Kilosanidze, George Kakauridze, Levan Nadareishvili, Yuri Mshvenieradze

Abstract:

A new method for determining the distribution of birefringence and linear dichroism in optical polymer materials is presented. The method is based on the use of a polarization-holographic diffraction grating that forms an orthogonal circular basis in the process of diffraction of the probing laser beam on the grating. The intensity ratio of the diffraction orders of this grating enables the values of birefringence and linear dichroism in the sample to be determined. The distribution of birefringence in the sample is determined by scanning with a circularly polarized beam at a wavelength far from the absorption band of the material. If the scanning is carried out with a probing beam at a wavelength near the maximum of the absorption band of the chromophore, then the distribution of linear dichroism can be determined. An appropriate theoretical model of this method is presented. A laboratory setup was created for the proposed method, and its optical scheme is presented. The results of measurements in polymer films with a two-dimensional gradient distribution of birefringence and linear dichroism are discussed.

Keywords: birefringence, linear dichroism, graded oriented polymers, optical polymers, optical anisotropy, polarization-holographic grating

Procedia PDF Downloads 428
6825 Constructions of Linear and Robust Codes Based on Wavelet Decompositions

Authors: Alla Levina, Sergey Taranov

Abstract:

The classical approach to providing noise immunity and integrity for information processed in computing devices and communication channels is to use linear codes. Linear codes have fast and efficient algorithms for encoding and decoding information, but these codes concentrate their detection and correction abilities on certain error configurations. Robust codes can protect against any configuration of errors with a predetermined probability. This is accomplished by the use of perfect nonlinear and almost perfect nonlinear functions to calculate the code redundancy. The paper presents an error-correcting coding scheme using the biorthogonal wavelet transform. The wavelet transform is applied in various fields of science; some of its applications are denoising of signals, data compression, and spectral analysis of signal components. The article suggests methods for constructing linear codes based on wavelet decomposition. For the developed constructions, we build generator and check matrices that contain the scaling-function coefficients of the wavelet. Based on the linear wavelet codes, we develop robust codes that provide uniform protection against all errors. In this article, we propose two constructions of robust codes. The first class of robust codes is based on the multiplicative inverse in a finite field. In the second robust code construction, the redundancy part is the cube of the information part. This paper also investigates the characteristics of the proposed robust and linear codes.

Keywords: robust code, linear code, wavelet decomposition, scaling function, error masking probability

Procedia PDF Downloads 485
6824 Investigating the Potential of Spectral Bands in the Detection of Heavy Metals in Soil

Authors: Golayeh Yousefi, Mehdi Homaee, Ali Akbar Norouzi

Abstract:

Ongoing monitoring of soil contamination by heavy metals is critical for ecosystem stability, environmental protection, and food security. The conventional methods of determining these soil contaminants are time-consuming and costly. Spectroscopy in the visible near-infrared (VNIR) to shortwave infrared (SWIR) region is a rapid, non-destructive, non-invasive, and cost-effective method for assessing soil heavy-metal concentrations by studying the spectral properties of soil constituents. The aim of this study is to derive spectral bands and important ranges that are sensitive to heavy metals and can be used to estimate the concentrations of these soil contaminants. In other words, the change in the spectral properties of spectrally active constituents of soil can lead to accurate identification and estimation of the concentrations of these compounds in soil. For this purpose, 325 soil samples were collected, and their spectral reflectance curves were evaluated over the range of 350-2500 nm. After spectral preprocessing operations, a partial least-squares regression (PLSR) model was fitted to the spectral data to predict the concentrations of Cu and Ni. Based on the results, the Cu-sensitive spectral ranges were 480, 580-610, 1370, 1425, 1850, 1920, 2145, and 2200 nm, and the Ni-sensitive ranges were 543, 655, 761, 1003, 1271, 1415, 1903, and 2199 nm. Finally, the results of this study indicated that the spectral data contain a great deal of information that can be applied to identify soil properties, such as heavy-metal concentrations, in greater detail.
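
A skeleton of the PLSR step is shown below with scikit-learn. The reflectance matrix and Cu values are random placeholders, and reading off the wavelengths with the largest absolute regression coefficients is only one simple way of flagging sensitive bands; it is not necessarily the criterion used in the study.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

# Rows of `spectra` would hold preprocessed reflectance over 350-2500 nm and
# `cu` the measured Cu concentrations; data here are random placeholders, and
# the component count would normally be chosen by cross-validation.
rng = np.random.default_rng(8)
wavelengths = np.arange(350, 2501)
spectra = rng.uniform(0.05, 0.6, size=(325, wavelengths.size))
cu = rng.uniform(5, 120, size=325)

pls = PLSRegression(n_components=10)
cu_pred = cross_val_predict(pls, spectra, cu, cv=5)   # cross-validated predictions

pls.fit(spectra, cu)
# wavelengths with the largest |regression coefficient| as candidate Cu-sensitive bands
important = wavelengths[np.argsort(np.abs(pls.coef_.ravel()))[-10:]]
```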

Keywords: heavy metals, spectroscopy, spectral bands, PLS regression

Procedia PDF Downloads 77
6823 Optimization of Temperature Difference Formula at Thermoacoustic Cryocooler Stack with Genetic Algorithm

Authors: H. Afsari, H. Shokouhmand

Abstract:

When a stack is placed in the thermoacoustic resonator of a cryocooler, one extremity of the stack heats up while the other cools down due to the thermoacoustic effect. In the present work, a formula for this temperature difference is expressed from linear theory to show which factors it depends on. The computed temperature difference is compared to the one predicted by the formula. The discrepancies cannot be attributed to non-linear effects; rather, they exist because of thermal effects. Two correction factors are introduced to bring the linear-theory results closer to the computed ones, and these correction factors are used to modify the linear theory. This formula is then optimized by a genetic algorithm (GA). Finally, results are shown at different Mach numbers and stack locations in the resonator.

Keywords: heat transfer, thermoacoustic cryocooler, stack, resonator, Mach number, genetic algorithm

Procedia PDF Downloads 372
6822 Particle Filter State Estimation Algorithm Based on Improved Artificial Bee Colony Algorithm

Authors: Guangyuan Zhao, Nan Huang, Xuesong Han, Xu Huang

Abstract:

In order to solve the problem of sample impoverishment in the traditional particle filter algorithm and achieve accurate state estimation in a nonlinear system, a particle filter method based on an improved artificial bee colony (ABC) algorithm is proposed. The algorithm simulates the bee foraging and optimization process and moves particles toward the high-likelihood region of the posterior probability to improve the rationality of the particle distribution. The opposition-based learning (OBL) strategy is introduced to optimize the initial population of the artificial bee colony algorithm. A convergence factor is introduced into the neighborhood search strategy to limit the search range and improve the convergence speed. Finally, the crossover and mutation operations of the genetic algorithm are introduced into the search mechanism of the onlooker (following) bees, which makes the algorithm jump out of local extrema quickly and continue searching for the global extremum, improving its optimization ability. The simulation results show that the improved method can improve the estimation accuracy of the particle filter, ensure the diversity of particles, and improve the rationality of the particle distribution.
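
For reference, a baseline bootstrap particle filter on a standard one-dimensional nonlinear benchmark is sketched below. The ABC-based particle moving, OBL initialization and genetic operators described above are not reproduced; this only shows the vanilla filter that such improvements target.

```python
import numpy as np

# Bootstrap particle filter on the classic 1-D nonlinear growth model.
rng = np.random.default_rng(9)
T, N = 50, 500
x_true, y_obs = np.zeros(T), np.zeros(T)
for t in range(1, T):
    x_true[t] = 0.5 * x_true[t-1] + 25 * x_true[t-1] / (1 + x_true[t-1]**2) \
                + 8 * np.cos(1.2 * t) + rng.normal(0, np.sqrt(10))
    y_obs[t] = x_true[t]**2 / 20 + rng.normal(0, 1)

particles = rng.normal(0, 2, N)
estimates = np.zeros(T)
for t in range(1, T):
    # propagate every particle through the state equation
    particles = 0.5 * particles + 25 * particles / (1 + particles**2) \
                + 8 * np.cos(1.2 * t) + rng.normal(0, np.sqrt(10), N)
    # weight by the observation likelihood, estimate, and resample (multinomial)
    w = np.exp(-0.5 * (y_obs[t] - particles**2 / 20) ** 2)
    w /= w.sum()
    estimates[t] = np.dot(w, particles)
    particles = rng.choice(particles, size=N, p=w)
```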

Keywords: particle filter, impoverishment, state estimation, artificial bee colony algorithm

Procedia PDF Downloads 138
6821 Survival Data with Incomplete Missing Categorical Covariates

Authors: Madaki Umar Yusuf, Mohd Rizam B. Abubakar

Abstract:

Censored survival data with incomplete covariate information are a common occurrence in many studies in which the outcome is survival time. When the missing covariates are categorical, a useful technique for obtaining parameter estimates is the EM algorithm by the method of weights. The survival outcome is modeled within the class of generalized linear models, and this method requires estimating the parameters of the distribution of the covariates. In this paper, we consider clinical trial data with five covariates, four of which have some missing values, clearly showing that the data were fully censored.

Keywords: EM algorithm, incomplete categorical covariates, ignorable missing data, missing at random (MAR), Weibull distribution

Procedia PDF Downloads 397
6820 A Generalisation of Pearson's Curve System and Explicit Representation of the Associated Density Function

Authors: S. B. Provost, Hossein Zareamoghaddam

Abstract:

A univariate density approximation technique is introduced whereby the derivative of the logarithm of a density function is assumed to be expressible as a rational function. This approach, which extends Pearson's curve system, is solely based on the moments of a distribution up to a determinable order. Upon solving a system of linear equations, the coefficients of the polynomial ratio can readily be identified. An explicit solution to the integral representation of the resulting density approximant is then obtained. It will be explained that, when utilised in conjunction with sample moments, this methodology lends itself to the modelling of 'big data'. Applications to sets of univariate and bivariate observations will be presented.

Keywords: density estimation, log-density, moments, Pearson's curve system

Procedia PDF Downloads 269
6819 Electrical Machine Winding Temperature Estimation Using Stateful Long Short-Term Memory Networks (LSTM) and Truncated Backpropagation Through Time (TBPTT)

Authors: Yujiang Wu

Abstract:

As electrical machine (e-machine) power density requirements become more stringent in vehicle electrification, mounting a temperature sensor on the e-machine stator windings becomes increasingly difficult. This can lead to higher manufacturing costs, complicated harnesses, and reduced reliability. In this paper, we propose a deep-learning method for predicting the electric machine winding temperature, which can either replace the sensor entirely or serve as a backup to an existing sensor. We compare the performance of our method, stateful long short-term memory (LSTM) networks with truncated backpropagation through time (TBPTT), with that of linear regression, as well as of stateless LSTM with and without residual connections. Our results demonstrate the strength of combining stateful LSTM and TBPTT in tackling nonlinear time series prediction problems with long sequence lengths. Additionally, in industrial applications, high-temperature region prediction accuracy is more important because winding temperature sensing is typically used for derating machine power when the temperature is high. To evaluate the performance of our algorithm, we developed a temperature-stratified MSE. We also propose a simple but effective data preprocessing trick to improve the high-temperature region prediction accuracy. Our experimental results demonstrate the effectiveness of the proposed method in accurately predicting winding temperature, particularly in high-temperature regions, while also reducing manufacturing costs and improving reliability.
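
A minimal PyTorch sketch of the stateful-LSTM-with-TBPTT training pattern is shown below. The network size, feature count, chunk length and synthetic drive-cycle data are assumptions, so this illustrates the training pattern rather than the paper's model.

```python
import torch
import torch.nn as nn

# Stateful LSTM with TBPTT: the hidden state is carried across consecutive
# chunks of the sequence (stateful), but detached at each chunk boundary so
# that gradients are truncated (TBPTT).
class WindingTempLSTM(nn.Module):
    def __init__(self, n_features=8, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x, state=None):
        out, state = self.lstm(x, state)
        return self.head(out), state

model = WindingTempLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# one long synthetic drive cycle: (batch=1, time=4000, features=8) -> temperature
x_seq = torch.randn(1, 4000, 8)
y_seq = torch.randn(1, 4000, 1)

state, chunk = None, 200                       # TBPTT truncation length
for start in range(0, x_seq.size(1), chunk):
    xb = x_seq[:, start:start + chunk]
    yb = y_seq[:, start:start + chunk]
    pred, state = model(xb, state)
    loss = loss_fn(pred, yb)
    opt.zero_grad()
    loss.backward()
    opt.step()
    state = tuple(s.detach() for s in state)   # keep the state, cut the gradient
```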

Keywords: deep learning, electrical machine, functional safety, long short-term memory networks (LSTM), thermal management, time series prediction

Procedia PDF Downloads 92
6818 A Systematic Review on Development of a Cost Estimation Framework: A Case Study of Nigeria

Authors: Babatunde Dosumu, Obuks Ejohwomu, Akilu Yunusa-Kaltungo

Abstract:

Cost estimation in construction is often difficult, particularly when dealing with risks and uncertainties, which are inevitable and peculiar to developing countries like Nigeria. The direct consequences of these are major deviations in cost, duration, and quality. The fundamental aim of this study is to develop a framework for assessing the impacts of risk on cost estimation, which in turn causes variability between the contract sum and the final account. This is very important, as the initial estimates given to clients should reflect a certain magnitude of consistency and accuracy, upon which the client builds other planning-related activities, and should also enhance the capabilities of construction industry professionals by enabling better prediction of the final account from the contract sum. To achieve this, a systematic literature review was conducted with cost variability and construction projects as search strings within three databases: Scopus, Web of Science, and EBSCO (Business Source Premier); the retrieved studies were further analyzed and gaps in knowledge or research identified. From the extensive review, it was found that the factors causing deviation between final accounts and contract sums ranged between 1 and 45. Besides, it was discovered that a cost estimation framework similar to the Building Cost Information Service (BCIS) is unavailable in Nigeria, which is a major reason why initial estimates are very often inconsistent, leading to project delay, abandonment, or determination at the expense of the huge sums of money invested. It was concluded that the development of a cost estimation framework, adjudged an important tool in risk shedding rather than risk sharing in project risk management, would be a panacea to the cost estimation problems that lead to cost variability in the Nigerian construction industry; such a framework is expected by the time this ongoing Ph.D. research is completed. It was recommended that practitioners in the construction industry should always take risk into account in order to facilitate the rapid development of the construction industry in Nigeria, and that this should give stakeholders in both the private and public sectors a more in-depth understanding of the estimation effectiveness and efficiency to be adopted.

Keywords: cost variability, construction projects, future studies, Nigeria

Procedia PDF Downloads 198
6817 Non-Linear Behavior of Granular Materials in Pavement Design

Authors: Mounir Tichamakdj, Khaled Sandjak, Boualem Tiliouine

Abstract:

The design of flexible pavements is currently carried out using multilayer elastic theory. However, for thin-surfaced pavements subject to light or medium traffic volumes, the importance of the non-linear stress-strain behavior of unbound granular materials requires the use of more sophisticated numerical models for the structural design of these pavements. A simplified analysis of the nonlinear behavior of granular materials in pavement design is developed in this study. To achieve this objective, an equivalent linear model derived from a volumetric shear stress model is used to simulate the nonlinear elastic behavior of two unbound local granular materials often used in pavements. This model is included here to adequately incorporate material non-linearity, due to the stress dependence of the stiffness of the granular layers, in the flexible pavement analysis. The sensitivity of the pavement design criteria to likely variations in asphalt layer thickness and the mineralogical nature of the unbound granular materials commonly used in pavement structures is also evaluated.

Keywords: granular materials, linear equivalent model, non-linear behavior, pavement design, shear volumetric strain model

Procedia PDF Downloads 171
6816 Design and Analysis of a Piezoelectric Linear Motor Based on Rigid Clamping

Authors: Chao Yi, Cunyue Lu, Lingwei Quan

Abstract:

Piezoelectric linear motors have the characteristics of great electromagnetic compatibility, high positioning accuracy, compact structure and no deceleration mechanism, which make them promising for application in micro-miniature precision drive systems. However, most piezoelectric motors employ flexible clamping, which has insufficient rigidity and is difficult to use for rapid positioning. Another problem is that this clamping method seriously affects the vibration efficiency of the vibrating unit. In order to solve these problems, this paper proposes a piezoelectric stack linear motor based on double-end rigid clamping. First, a piezoelectric linear motor with a length of only 35.5 mm is designed. This motor is mainly composed of a motor stator, a driving foot, a ceramic friction strip, a linear guide, a pre-tightening mechanism and a base. This structure is much simpler and smaller than most similar motors, and it is easy to assemble as well as to realize precise control. In addition, the properties of the piezoelectric stack are reviewed, and in order to obtain an elliptical motion trajectory of the driving head, a driving scheme using a longitudinal-shear composite stack is innovatively proposed. Finally, impedance analysis and speed performance testing were performed on the piezoelectric linear motor prototype. The motor can reach a speed of up to 25.5 mm/s under the excitation of a signal voltage of 120 V and a frequency of 390 Hz. The results show that the proposed piezoelectric stack linear motor achieves great performance. It can run smoothly over a large speed range, which makes it suitable for various precision control applications in medical imaging, aerospace, precision machinery and many other fields.

Keywords: piezoelectric stack, linear motor, rigid clamping, elliptical trajectory

Procedia PDF Downloads 149
6815 Construction of QSAR Models to Predict Potency on a Series of substituted Imidazole Derivatives as Anti-fungal Agents

Authors: Sara El Mansouria Beghdadi

Abstract:

Quantitative structure-activity relationship (QSAR) modelling is one of the main computational tools used in medicinal chemistry. Over the past two decades, the incidence of fungal infections has increased due to the development of resistance. In this study, a QSAR analysis was performed on a series of esters of 2-carboxamido-3-(1H-imidazole-1-yl) propanoic acid derivatives. These compounds have shown moderate to very good antifungal activity. Multiple linear regression (MLR) was used to generate the linear 2D-QSAR models. The dataset consists of 115 compounds with their antifungal activity (log MIC) against Candida albicans (ATCC SC5314). Descriptors were calculated, and different models were generated using ChemOffice, Avogadro, and GaussView software. The selected model was validated. The study suggests that the increase in lipophilicity and the reduction in the electronic character of the substituent in R1, as well as the reduction in the steric hindrance of the substituent in R2 and its aromatic character, support the potentiation of the antifungal effect. The results of the QSAR study could help scientists propose new compounds with higher antifungal activities intended for immunocompromised patients susceptible to multi-resistant nosocomial infections.

Keywords: quantitative structure–activity relationship, imidazole, antifungal, candida albicans (ATCC SC5314)

Procedia PDF Downloads 79