Search results for: principal component regression (PCR)
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5980

5710 The Effect of Mineral Addition (Natural Pozzolana) on the Capillary Absorption and Compressive Strength of Environmental Mortar

Authors: W. Deboucha, M. N. Oudjit, A. Bouzid, L. Belagraa, A. Noui

Abstract:

Cement manufacturing is one of the industrial activities that pollutes the atmosphere. A common way to reduce this pollution is to use mineral additions as a partial replacement for Portland cement; in particular, natural pozzolana (NP) is a component that can be used to decrease the rate of pollution. The main objective of this experimental work is to study the effect of a mineral addition (natural pozzolana) on the capillary water absorption and the compressive and flexural strength of cement mortar. The results obtained in the present research show that higher dosages of added natural pozzolana could be the principal parameter behind the decrease in strength at early and medium term. Furthermore, this increase in the incorporated addition is believed to reduce capillary water absorption.

Keywords: Natural pozzolana, mortar, strength, capillary absorption

Procedia PDF Downloads 321
5709 Buffer Allocation and Traffic Shaping Policies Implemented in Routers Based on a New Adaptive Intelligent Multi-Agent Approach

Authors: M. Taheri Tehrani, H. Ajorloo

Abstract:

In this paper, an intelligent multi-agent framework is developed for each router in which agents, positioned at the router ports, have two vital functionalities: traffic shaping and buffer allocation. With the traffic shaping functionality, agents shape the forwarded traffic by dynamic, real-time allocation of the token generation rate in a Token Bucket algorithm; with the buffer allocation functionality, agents share their buffer capacity with each other based on their needs and on the conditions of the network. This dynamic and intelligent framework gives some ports the opportunity to work better under bursty and busier conditions. The agents act intelligently based on a Reinforcement Learning (RL) algorithm and consider the effective parameters in their decision process. Because RL is limited in how many parameters it can consider in its decision process due to the volume of calculations, we utilize our novel method, which invokes Principal Component Analysis (PCA) on the RL and gives the algorithm the ability to consider as many parameters as needed in its decision process. When this implementation is compared to our previous work, in which traffic shaping was done without any sharing or dynamic allocation of buffer size for each port, a lower packet drop rate can be seen in the whole network, specifically at the source routers. Both methods are implemented in our previously proposed intelligent simulation environment so that the performance metrics can be compared more fairly. The results obtained from this simulation environment show efficient and dynamic utilization of resources in terms of bandwidth and the buffer capacity pre-allocated to each port.
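
The abstract gives no implementation details, so the following is only a rough sketch of the core idea it describes: compressing a high-dimensional port-state vector with PCA before feeding it to a simple RL agent. The feature set, dimensions, discretization, and the tabular Q-learning agent below are illustrative assumptions, not the authors' method.

```python
# Sketch: PCA-compressed port state feeding a tabular Q-learning agent (illustrative only).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Hypothetical per-port observations: queue length, arrival rate, drop rate, ... (12 raw features).
history = rng.random((5000, 12))

pca = PCA(n_components=2).fit(history)          # compress the raw state to 2 components
scores = pca.transform(history)
lo, hi = scores.min(axis=0), scores.max(axis=0)

def discretize(obs, bins=10):
    """Map a raw observation to a small discrete state index via its PCA scores."""
    z = pca.transform(np.asarray(obs).reshape(1, -1))[0]
    q = np.clip((z - lo) / (hi - lo + 1e-9), 0, 0.999)
    i, j = (q * bins).astype(int)
    return i * bins + j

n_states, n_actions = 100, 3                    # actions: e.g. decrease / keep / increase token rate
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.1, 0.95

def q_update(obs, action, reward, next_obs):
    """One Q-learning update on the PCA-compressed state."""
    s, s2 = discretize(obs), discretize(next_obs)
    Q[s, action] += alpha * (reward + gamma * Q[s2].max() - Q[s, action])

q_update(history[0], action=2, reward=-0.1, next_obs=history[1])
```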

Keywords: principal component analysis, reinforcement learning, buffer allocation, multi-agent systems

Procedia PDF Downloads 481
5708 Incorporating Anomaly Detection in a Digital Twin Scenario Using Symbolic Regression

Authors: Manuel Alves, Angelica Reis, Armindo Lobo, Valdemar Leiras

Abstract:

In Industry 4.0 settings it is common to have a large amount of sensor data, and in this deluge of data, hints of possible problems are difficult to spot. The digital twin concept aims to help answer this problem, but it is mainly used as a monitoring tool that handles the visualisation of data. Failure detection is of paramount importance in any industry, and it consumes a lot of resources, so any improvement in this regard is of tangible value to the organisation. The aim of this paper is to add the ability to forecast test failures, curtailing detection times. To achieve this, several anomaly detection algorithms were compared with a symbolic regression approach: Isolation Forest, One-Class SVM and an auto-encoder were explored, and the PySR library was used for the symbolic regression. The first results show that this approach is valid and can be added to the tools available in this context as a low-resource anomaly detection method since, after training, the only requirement is the calculation of a polynomial, a useful feature in the digital twin context.
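
As a hedged illustration of the comparison described above, the sketch below runs two of the named detectors (Isolation Forest and One-Class SVM from scikit-learn) on synthetic sensor data, and uses a plain polynomial fit with a residual threshold only as a stand-in for the PySR symbolic-regression route; the data, thresholds and parameters are assumptions, not the paper's setup.

```python
# Sketch: comparing anomaly detectors on synthetic sensor data (stand-in for the paper's study).
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(1)
normal = rng.normal(0.0, 1.0, size=(1000, 4))      # healthy sensor readings (assumed)
faulty = rng.normal(4.0, 1.0, size=(50, 4))        # injected anomalies (assumed)
X = np.vstack([normal, faulty])

iso = IsolationForest(contamination=0.05, random_state=0).fit(normal)
svm = OneClassSVM(nu=0.05, gamma="scale").fit(normal)

print("IsolationForest flags:", (iso.predict(X) == -1).sum())
print("One-Class SVM flags:  ", (svm.predict(X) == -1).sum())

# The symbolic-regression route fits an explicit formula and flags large residuals;
# a cheap polynomial fit stands in for that idea here (PySR itself is not used).
t = np.linspace(0, 10, 200)
signal = np.sin(t) + rng.normal(0, 0.05, t.size)
coeffs = np.polyfit(t, signal, deg=5)              # cheap to evaluate after training
residual = np.abs(signal - np.polyval(coeffs, t))
print("Residual-based flags: ", (residual > 3 * residual.std()).sum())
```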

Keywords: anomaly detection, digital twin, industry 4.0, symbolic regression

Procedia PDF Downloads 96
5707 Impact of Infrastructural Development on Socio-Economic Growth: An Empirical Investigation in India

Authors: Jonardan Koner

Abstract:

The study attempts to find out the impact of infrastructural investment on state economic growth in India. It further tries to determine the magnitude of the impact of infrastructural investment on an economic indicator, i.e., per-capita income (PCI), in Indian states. The study uses the panel regression technique to measure this impact; the technique incorporates both the cross-section and time-series aspects of the dataset. In order to analyze the difference in the impact of the explanatory variables on the explained variable across states, the study uses a Fixed Effect Panel Regression Model. The conclusions of the study are that infrastructural investment has a desirable impact on economic development and that the impact differs across states in India. We analyze time series data (annual frequency) ranging from 1991 to 2010. The study reveals that infrastructural investment significantly explains the variation in the economic indicator.
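
A minimal sketch of a fixed-effect (state dummy) panel regression of the kind described above is shown below; the column names, data and coefficients are synthetic placeholders, not the study's dataset.

```python
# Sketch: fixed-effect dummy variable panel regression of per-capita income on investment.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
states, years = [f"S{i}" for i in range(10)], range(1991, 2011)
df = pd.DataFrame([(s, y) for s in states for y in years], columns=["state", "year"])
df["infra_inv"] = rng.gamma(5, 100, len(df))                       # hypothetical investment
df["pci"] = 500 + 2.5 * df["infra_inv"] + rng.normal(0, 200, len(df))

# C(state) adds one dummy per state, i.e. the fixed-effect dummy variable model.
model = smf.ols("pci ~ infra_inv + C(state)", data=df).fit()
print(model.params["infra_inv"], model.pvalues["infra_inv"])
```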

Keywords: infrastructural investment, multiple regression, panel regression techniques, economic development, fixed effect dummy variable model

Procedia PDF Downloads 350
5706 Simplified Analysis Procedure for Seismic Evaluation of Tall Building at Structure and Component Level

Authors: Tahir Mehmood, Pennung Warnitchai

Abstract:

Simplified static analysis procedures such as the Nonlinear Static Procedure (NSP) are gaining popularity for the seismic evaluation of buildings. However, these simplified procedures account only for the seismic response of the fundamental vibration mode of the structure. Some other procedures can take the higher modes of vibration into account but lack the accuracy needed to determine component responses; hence, such procedures are not suitable for evaluating structures in which many vibration modes may participate significantly or in which component responses need to be evaluated. Moreover, these procedures were found to be either computationally expensive or tedious when individual component responses are required. In this paper, a simplified but accurate procedure, the Uncoupled Modal Response History Analysis (UMRHA) procedure, is studied. In this procedure, the nonlinear response of each vibration mode is first computed, and the modal responses are later combined into the total response of the structure. The responses of four tall buildings are computed by this simplified UMRHA procedure and compared with those obtained from the Nonlinear Response History Analysis (NLRHA) procedure. The comparison shows that the UMRHA procedure is able to accurately compute the global responses, i.e., story shears, story overturning moments, floor accelerations and inter-story drifts, as well as the component-level responses of these tall buildings, whose heights vary from 20 to 44 stories. The required computational effort is also extremely low compared to that of the NLRHA procedure.

Keywords: higher mode effects, seismic evaluation procedure, tall buildings, component responses

Procedia PDF Downloads 322
5705 A Quadratic Model to Early Predict the Blastocyst Stage with a Time Lapse Incubator

Authors: Cecile Edel, Sandrine Giscard D'Estaing, Elsa Labrune, Jacqueline Lornage, Mehdi Benchaib

Abstract:

Introduction: The use of incubators equipped with time-lapse technology in Assisted Reproductive Technology (ART) allows continuous surveillance. With morphokinetic parameters, algorithms are available to predict the potential outcome of an embryo. However, the proposed time-lapse algorithms do not take missing data into account, so some embryos cannot be classified. The aim of this work is to construct a predictive model that works even in the case of missing data. Materials and methods: Patients: A retrospective study was performed in the reproductive biology laboratory of the 'Femme Mère Enfant' hospital (Lyon, France) between 1 May 2013 and 30 April 2015. Embryos (n = 557) obtained from couples (n = 108) were cultured in a time-lapse incubator (Embryoscope®, Vitrolife, Goteborg, Sweden). Time-lapse incubator: The morphokinetic parameters obtained during the first three days of embryo life were used to build the predictive model. Predictive model: A quadratic regression was performed between the number of cells and time: N = a·T² + b·T + c, where N is the number of cells at time T (T in hours). The regression coefficients were calculated with Excel software (Microsoft, Redmond, WA, USA); a program in Visual Basic for Applications (VBA) (Microsoft) was written for this purpose. The quadratic equation was used to derive a value that allows the prediction of blastocyst formation: the synthetize value. The area under the curve (AUC) obtained from the ROC curve was used to assess the performance of the regression coefficients and of the synthetize value. A cut-off value was calculated for each regression coefficient and for the synthetize value so as to obtain two groups between which the difference in blastocyst formation rate was maximal. The data were analyzed with SPSS (IBM, Chicago, IL, USA). Results: Among the 557 embryos, 79.7% reached the blastocyst stage. The synthetize value corresponds to the value calculated with a time value equal to 99, for which the highest AUC was obtained. The AUC was 0.648 (p < 0.001) for regression coefficient 'a', 0.363 (p < 0.001) for regression coefficient 'b', 0.633 (p < 0.001) for regression coefficient 'c', and 0.659 (p < 0.001) for the synthetize value. The results are presented as the blastocyst formation rate below the cut-off value versus the blastocyst formation rate above the cut-off value. For regression coefficient 'a' the optimum cut-off value was -1.14×10⁻³ (61.3% versus 84.3%, p < 0.001), 0.26 for regression coefficient 'b' (83.9% versus 63.1%, p < 0.001), -4.4 for regression coefficient 'c' (62.2% versus 83.1%, p < 0.001) and 8.89 for the synthetize value (58.6% versus 85.0%, p < 0.001). Conclusion: This quadratic regression allows the outcome of an embryo to be predicted even in the case of missing data. The three regression coefficients and the synthetize value could represent the identity card of an embryo: regression coefficient 'a' represents the acceleration of cell division and regression coefficient 'b' the speed of cell division. We could hypothesize that regression coefficient 'c' represents the intrinsic potential of an embryo, a potential that could depend on the oocyte from which the embryo originates. These hypotheses should be confirmed by studies analyzing the relationship between the regression coefficients and ART parameters.
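
The workflow described above (quadratic fit per embryo, then ROC analysis of a coefficient with an optimal cut-off) can be sketched as follows; the cohort data below are simulated placeholders and the Youden-style cut-off is one common choice, not necessarily the one used in the study.

```python
# Sketch: fitting N = a*T^2 + b*T + c per embryo and scoring a coefficient with ROC analysis.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(3)

def fit_quadratic(times_h, cell_counts):
    """Return (a, b, c) from a least-squares quadratic fit of cell count vs. time."""
    a, b, c = np.polyfit(times_h, cell_counts, deg=2)
    return a, b, c

# Hypothetical cohort: coefficient 'a' and blastocyst outcome (1 = reached blastocyst).
a_coef = rng.normal(0.0, 1.5e-3, 500)
outcome = (a_coef + rng.normal(0, 1.5e-3, 500) > -1e-3).astype(int)

auc = roc_auc_score(outcome, a_coef)
fpr, tpr, thresholds = roc_curve(outcome, a_coef)
cutoff = thresholds[np.argmax(tpr - fpr)]          # Youden-style optimum (illustrative choice)
print(f"AUC={auc:.3f}, cut-off for 'a' = {cutoff:.2e}")
```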

Keywords: ART procedure, blastocyst formation, time-lapse incubator, quadratic model

Procedia PDF Downloads 289
5704 The Impact of Purpose as a Principal Leadership Skill on the Performance of Select Township Schools in South Africa

Authors: Pepe Marais, Krishna Govender

Abstract:

This study aimed to investigate the impact of 'purpose' as a principal leadership skill on the performance of two township schools, using a quantitative research design and collecting data from the school principals, teachers and matric learners by means of the 28-scale Servant Leadership Test as well as Gallup's Q12 Employee Engagement survey. The questionnaires addressed the key objectives, namely, the extent to which the principals of the participating schools exhibited servant leadership and their understanding of 'purpose' as one word in leadership, and how teachers and learners perceived the impact of a 'one-word' purpose-driven leader on the performance of the selected schools. Although no relationship could be demonstrated between 'purpose' and the performance of the two township schools, it became evident that a significant increase in servant leadership leads to a significant increase in engagement and performance, as measured by the matric pass rate. It is recommended that workshops be facilitated with principals and teachers in order to entrench 'purpose' more deeply throughout the schools. In addition, servant leadership training should be conducted to increase the leadership ability of school principals. Future research in the area of 'purpose as one word', as well as servant leadership as a principal skill set within South Africa's public school leadership, is recommended.

Keywords: school leadership, servant leadership, one-word purpose, engagement, leadership

Procedia PDF Downloads 95
5703 CompPSA: A Component-Based Pairwise RNA Secondary Structure Alignment Algorithm

Authors: Ghada Badr, Arwa Alturki

Abstract:

The biological function of an RNA molecule depends on its structure, and the objective of alignment is to find the homology between two or more RNA secondary structures. Knowing the common functionalities of two RNA structures allows a better understanding of them and the discovery of further relationships between them. Besides, identifying non-coding RNAs, which are not translated into proteins, is a popular application in which RNA structural alignment is the first step. A few methods for RNA structure-to-structure alignment have been developed, but most of them perform partial structure-to-structure, sequence-to-structure, or structure-to-sequence alignment. Less attention has been given in the literature to the use of efficient RNA structure representations, and structure-to-structure alignment methods are lacking. In this paper, we introduce an O(N²) Component-based Pairwise RNA Structure Alignment (CompPSA) algorithm, where the structures are given in a component-based representation and N is the maximum number of components in the two structures. The proposed algorithm compares the two RNA secondary structures based on their weighted component features rather than on their base-pair details. Extensive experiments are conducted on different real and simulated datasets, illustrating the efficiency of the CompPSA algorithm compared to other approaches. The CompPSA algorithm shows an accurate similarity measure between components and gives the user the flexibility to align the two RNA structures based on their weighted features (position, full length, and/or stem length). Moreover, the algorithm proves scalable and efficient in time and memory performance.
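
CompPSA's exact scoring scheme is not given in the abstract; as a generic illustration only, an O(N²) Needleman-Wunsch-style alignment over weighted component feature vectors (position, full length, stem length) might look like the sketch below. The similarity function, weights and gap penalty are assumptions.

```python
# Sketch: global alignment of two component sequences by weighted feature similarity (O(N*M)).
import numpy as np

def component_similarity(c1, c2, weights):
    """Similarity of two components described by (position, full_length, stem_length)."""
    c1, c2, w = np.asarray(c1, float), np.asarray(c2, float), np.asarray(weights, float)
    return -np.sum(w * np.abs(c1 - c2))            # higher (less negative) = more similar

def align_components(A, B, weights=(1.0, 1.0, 1.0), gap=-2.0):
    """Needleman-Wunsch-style dynamic programming over component features."""
    n, m = len(A), len(B)
    S = np.zeros((n + 1, m + 1))
    S[1:, 0] = gap * np.arange(1, n + 1)
    S[0, 1:] = gap * np.arange(1, m + 1)
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            match = S[i - 1, j - 1] + component_similarity(A[i - 1], B[j - 1], weights)
            S[i, j] = max(match, S[i - 1, j] + gap, S[i, j - 1] + gap)
    return S[n, m]

# Two toy structures, each component = (position, full length, stem length).
A = [(1, 10, 4), (15, 8, 3), (30, 12, 5)]
B = [(2, 10, 4), (31, 11, 5)]
print(align_components(A, B))
```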

Keywords: alignment, RNA secondary structure, pairwise, component-based, data mining

Procedia PDF Downloads 435
5702 A Comprehensive Approach in Calculating the Impact of the Ground on Radiated Electromagnetic Fields Due to Lightning

Authors: Lahcene Boukelkoul

Abstract:

The influence of finite ground conductivity is of great importance in calculating the voltages induced by the electromagnetic fields radiated by lightning. In this paper, we try to give a comprehensive approach to calculating the impact of the ground on the electromagnetic fields radiated by lightning. The vertical component of the lightning electric field is calculated with a reasonable approximation by assuming a perfectly conducting ground, provided the observation point does not exceed a few kilometres from the lightning channel. For distant observation points, however, the radiated vertical component of the lightning electric field is attenuated by the finitely conducting ground; the attenuation is calculated using expressions elaborated for both low and high frequencies. The horizontal component of the electric field is more strongly affected by the finite conductivity of the ground. Moreover, the contribution of the horizontal component of the electric field to the voltages induced on an overhead transmission line is greater than that of the vertical component. Therefore, the calculation of the horizontal electric field is of great concern for the simulation of lightning-induced voltages. For field-to-transmission-line coupling, the ground impedance is calculated for early-time behaviour and for the low-frequency range.

Keywords: power engineering, radiated electromagnetic fields, lightning-induced voltages, lightning electric field

Procedia PDF Downloads 379
5701 Multivariate Statistical Analysis of Heavy Metals Pollution of Dietary Vegetables in Swabi, Khyber Pakhtunkhwa, Pakistan

Authors: Fawad Ali

Abstract:

Toxic heavy metal contamination has a negative impact on soil quality, which ultimately pollutes the agricultural system. In the current work, we analyzed the uptake of various heavy metals by dietary vegetables grown in wastewater-irrigated areas of Swabi city. The samples of soil and vegetables were analyzed for the heavy metals Cd, Cr, Mn, Fe, Ni, Cu, Zn and Pb using an atomic absorption spectrophotometer. High levels of metals were found in the wastewater-irrigated soil and vegetables of the study area; in particular, the concentrations of Pb and Cd in the dietary vegetables exceeded the permissible levels of the World Health Organization. A substantial positive correlation was found between soil and vegetable contamination. The transfer factor for some metals, including Cr, Zn, Mn, Ni, Cd and Cu, was greater than 0.5, which indicates enhanced accumulation of these metals due to contamination by domestic discharges and industrial effluents. Linear regression analysis indicated a significant correlation between the concentrations of the heavy metals Pb, Cr, Cd, Ni, Zn, Cu, Fe and Mn in vegetables and their concentrations in soil (0.964 at P ≤ 0.001). Abelmoschus esculentus showed a Health Risk Index (HRI) for Pb greater than 1 in both adults and children. The source identification analysis carried out by Principal Component Analysis (PCA) and Cluster Analysis (CA) showed that groundwater and soil are being polluted by trace metals originating from industries and domestic wastes. Hierarchical cluster analysis (HCA) divided the metals into two clusters for wastewater and soil but into five clusters for the soil of the control area. PCA extracted two factors for wastewater, contributing 61.086% and 16.229%, respectively, of the total variance of 77.315%. For the soil samples, PCA extracted two factors with a total variance of 79.912%, factor 1 and factor 2 contributing 63.889% and 16.023% of the total variance, respectively. For the subsoil, PCA extracted two factors with a total variance of 76.136%, factor 1 being 61.768% and factor 2 being 14.368% of the total variance. The high pollution load index for vegetables in the study area, due to metal-polluted soil, has opened an area of study for proper legislation to prevent further contamination of vegetables. This work further reveals serious health risks to the human population of the study area.
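
The transfer-factor, PCA and hierarchical-clustering steps mentioned above can be sketched as follows on a hypothetical metal-concentration matrix; the data, factor counts and cluster counts are placeholders, not the study's measurements.

```python
# Sketch: transfer factors, PCA source identification and hierarchical clustering of metals.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from scipy.cluster.hierarchy import linkage, fcluster

metals = ["Cd", "Cr", "Mn", "Fe", "Ni", "Cu", "Zn", "Pb"]
rng = np.random.default_rng(4)
soil = rng.gamma(3, 2, (30, len(metals)))          # mg/kg in soil (assumed)
veg = soil * rng.uniform(0.2, 0.9, (30, len(metals)))

transfer_factor = veg.mean(0) / soil.mean(0)       # TF > 0.5 suggests enhanced accumulation
print(dict(zip(metals, transfer_factor.round(2))))

X = StandardScaler().fit_transform(soil)
pca = PCA(n_components=2).fit(X)
print("explained variance (%):", (100 * pca.explained_variance_ratio_).round(1))

clusters = fcluster(linkage(X.T, method="ward"), t=2, criterion="maxclust")
print(dict(zip(metals, clusters)))                 # metal groupings, analogous to the HCA step
```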

Keywords: health risk, vegetables, wastewater, atomic absorption spectrophotometer

Procedia PDF Downloads 39
5700 A Visual Analytics Tool for the Structural Health Monitoring of an Aircraft Panel

Authors: F. M. Pisano, M. Ciminello

Abstract:

Aerospace, mechanical, and civil engineering infrastructures can take advantage of damage detection and identification strategies in terms of maintenance cost reduction and operational life improvement, as well as for safety purposes. The challenge is to detect so-called "barely visible impact damage" (BVID), due to low/medium energy impacts, that can progressively compromise the structural integrity. The occurrence of any local change in material properties that can degrade the structure's performance is to be monitored using so-called Structural Health Monitoring (SHM) systems, which are in charge of comparing the structure's states before and after damage occurs. SHM looks for any "anomalous" response collected by means of sensor networks and then analyzed using appropriate algorithms. Independently of the specific analysis approach adopted for structural damage detection and localization, textual reports, tables and graphs describing possible outlier coordinates and damage severity are usually provided as artifacts to be elaborated for information extraction about the current health condition of the structure under investigation. Visual Analytics can support the processing of the monitored measurements, offering data navigation and exploration tools that leverage the native human capability of understanding images faster than texts and tables. Herein, the enrichment of an SHM system by the integration of a Visual Analytics component is investigated. Analytical dashboards have been created by combining worksheets, so that a useful Visual Analytics tool is provided to structural analysts for exploring the structural health conditions examined by a Principal Component Analysis based algorithm.

Keywords: interactive dashboards, optical fibers, structural health monitoring, visual analytics

Procedia PDF Downloads 101
5699 Two-Phase Sampling for Estimating a Finite Population Total in Presence of Missing Values

Authors: Daniel Fundi Murithi

Abstract:

Missing data is a real bane in many surveys. To overcome the problems caused by missing data, partial deletion and single imputation methods, among others, have been proposed. However, problems such as discarding usable data and inaccuracy in reproducing known population parameters and standard errors are associated with them. For regression and stochastic imputation, it is assumed that there is a variable with complete cases to be used as a predictor in estimating missing values in the other variable and that the relationship between the two variables is linear, which might not be realistic in practice. In this project, we estimate the population total in the presence of missing values in two-phase sampling. Instead of regression or stochastic models, a nonparametric model-based regression model is used to impute the missing values. An empirical study showed that nonparametric model-based regression imputation is better at reproducing the variance of the population total estimate obtained when there were no missing values than the mean, median, regression, and stochastic imputation methods. Although regression and stochastic imputation were better than nonparametric model-based imputation at reproducing the population total estimates obtained when there were no missing values for one of the sample sizes considered, nonparametric model-based imputation may be preferred when the relationship between the outcome and predictor variables is not linear.
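
As a rough illustration of the idea, the sketch below imputes missing survey values with a nonparametric (k-nearest-neighbour) regression on an auxiliary variable before expanding to a population total; the data, sample design and estimator are simplified assumptions, not the project's method.

```python
# Sketch: nonparametric (k-NN) model-based imputation before estimating a population total.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(5)
N = 10_000                                         # population size (assumed)
x = rng.uniform(0, 100, 800)                       # auxiliary variable, observed for the sample
y = 3 * np.sqrt(x) + rng.normal(0, 2, x.size)      # survey variable, nonlinear in x
missing = rng.random(x.size) < 0.25                # 25% nonresponse

knn = KNeighborsRegressor(n_neighbors=15).fit(x[~missing, None], y[~missing])
y_imputed = y.copy()
y_imputed[missing] = knn.predict(x[missing, None])

total_hat = N * y_imputed.mean()                   # simple expansion estimator after imputation
print(f"Estimated population total: {total_hat:,.0f}")
```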

Keywords: finite population total, missing data, model-based imputation, two-phase sampling

Procedia PDF Downloads 105
5698 A Novel Approach towards Test Case Prioritization Technique

Authors: Kamna Solanki, Yudhvir Singh, Sandeep Dalal

Abstract:

Software testing is a time- and cost-intensive process. Scrutiny of the code and rigorous testing are required to identify and rectify putative bugs. The process of bug identification and its consequent correction is continuous in nature, and often some of the bugs are removed after the software has been launched in the market. This process of validating the altered software during the maintenance phase is termed regression testing. Regression testing is invariably subject to resource constraints; therefore, selecting an appropriate subset of test cases from the entire gamut of test cases is a critical issue for regression test planning. This paper presents a novel method for designing a suitable prioritization process to optimize the fault detection rate and the performance of regression testing under predefined constraints. The proposed test case prioritization method, m-ACO, alters the food source selection criteria of natural ants and is essentially a modified version of Ant Colony Optimization (ACO). The proposed m-ACO approach has been coded in Perl, and the results are validated on three examples by computing the Average Percentage of Faults Detected (APFD) metric.
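
The APFD metric used for validation has a standard closed form, APFD = 1 - (TF1 + ... + TFm)/(n·m) + 1/(2n), where TFi is the position of the first test that detects fault i. A small Python sketch (the test and fault data are hypothetical) is given below; the m-ACO algorithm itself is not reproduced here.

```python
# Sketch: the APFD metric used to compare prioritized test orderings.
def apfd(ordering, fault_matrix):
    """ordering: list of test ids; fault_matrix[t] = set of faults detected by test t."""
    faults = set().union(*fault_matrix.values())
    n, m = len(ordering), len(faults)
    first_positions = []
    for f in faults:
        pos = next(i + 1 for i, t in enumerate(ordering) if f in fault_matrix[t])
        first_positions.append(pos)
    return 1 - sum(first_positions) / (n * m) + 1 / (2 * n)

faults_by_test = {"t1": {1}, "t2": {2, 3}, "t3": set(), "t4": {4}}
print(apfd(["t2", "t1", "t4", "t3"], faults_by_test))   # a prioritized order
print(apfd(["t3", "t1", "t2", "t4"], faults_by_test))   # a worse order scores lower
```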

Keywords: regression testing, software testing, test case prioritization, test suite optimization

Procedia PDF Downloads 309
5697 Ground Motion Modeling Using the Least Absolute Shrinkage and Selection Operator

Authors: Yildiz Stella Dak, Jale Tezcan

Abstract:

Ground motion models that relate a strong motion parameter of interest to a set of predictive seismological variables describing the earthquake source, the propagation path of the seismic wave, and the local site conditions constitute a critical component of seismic hazard analyses. When a sufficient number of strong motion records are available, ground motion relations are developed using statistical analysis of the recorded ground motion data; in regions lacking a sufficient number of recordings, a synthetic database is developed using stochastic, theoretical or hybrid approaches. Regardless of the manner in which the database was developed, ground motion relations are developed using regression analysis. Development of a ground motion relation is a challenging process which inevitably requires the modeler to make subjective decisions regarding the inclusion criteria for the recordings, the functional form of the model and the set of seismological variables to be included in the model. Because these decisions are critically important to the validity and the applicability of the model, there is a continuous interest in procedures that facilitate the development of ground motion models. This paper proposes the use of the Least Absolute Shrinkage and Selection Operator (LASSO) in selecting the set of predictive seismological variables to be used in developing a ground motion relation. The LASSO can be described as a penalized regression technique with a built-in capability for variable selection. Similar to ridge regression, the LASSO is based on the idea of shrinking the regression coefficients to reduce the variance of the model. Unlike ridge regression, where the coefficients are shrunk but never set equal to zero, the LASSO sets some of the coefficients exactly to zero, effectively performing variable selection. Given a set of candidate input variables and the output variable of interest, LASSO allows ranking the input variables in terms of their relative importance, thereby facilitating the selection of the set of variables to be included in the model. Because the risk of overfitting increases as the ratio of the number of predictors to the number of recordings increases, selection of a compact set of variables is important when a small number of recordings is available. In addition, identification of a small set of variables can improve the interpretability of the resulting model, especially when there is a large number of candidate predictors. A practical application of the proposed approach is presented, using more than 600 recordings from the Next Generation Attenuation (NGA) database, in which the effect of a set of seismological predictors on the 5% damped maximum direction spectral acceleration is investigated. The candidate predictors considered are magnitude, Rrup and Vs30. Using LASSO, the relative importance of the candidate predictors has been ranked. Regression models of increasing complexity were then constructed using the one, two, three, and four best predictors, and the models' ability to explain the observed variance in the target variable was compared. The bias-variance trade-off in the context of model selection is discussed.
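
A minimal sketch of LASSO-based predictor ranking on the three candidate variables named above is given below; the data are synthetic stand-ins for the NGA recordings, and the functional form is an assumption for illustration only.

```python
# Sketch: LASSO-based ranking of candidate seismological predictors (synthetic data).
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(6)
n = 600
M = rng.uniform(4, 8, n)                # magnitude
Rrup = rng.uniform(1, 200, n)           # rupture distance, km
Vs30 = rng.uniform(180, 900, n)         # site shear-wave velocity, m/s
ln_sa = 1.2 * M - 1.5 * np.log(Rrup) - 0.4 * np.log(Vs30) + rng.normal(0, 0.5, n)

X = StandardScaler().fit_transform(np.column_stack([M, np.log(Rrup), np.log(Vs30)]))
lasso = LassoCV(cv=5).fit(X, ln_sa)

# Standardized coefficients shrunk toward (or exactly to) zero give the importance ranking.
for name, coef in zip(["M", "ln Rrup", "ln Vs30"], lasso.coef_):
    print(f"{name:8s} {coef:+.3f}")
```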

Keywords: ground motion modeling, least absolute shrinkage and selection operator, penalized regression, variable selection

Procedia PDF Downloads 305
5696 Quantification of Global Cerebrovascular Reactivity in the Principal Feeding Arteries of the Human Brain

Authors: Ravinder Kaur

Abstract:

Introduction: Global cerebrovascular reactivity (CVR) mapping is a promising clinical assessment for stress-testing the brain using physiological challenges, such as CO₂, to elicit changes in perfusion. It enables real-time assessment of cerebrovascular integrity and health. Conventional imaging approaches use only steady-state parameters, such as cerebral blood flow (CBF), to evaluate the integrity of the resting parenchyma and can erroneously show a healthy brain at rest despite underlying pathogenesis in the presence of cerebrovascular disease. Conversely, coupling CO₂ inhalation with phase-contrast MRI neuroimaging interrogates the capacity of the vasculature to respond to changes under stress. It shows promise in providing prognostic value as a novel health marker to measure neurovascular function in disease and to detect early brain vasculature dysfunction. Objective: This exploratory study was established to (a) quantify the CBF response to CO₂ in hypocapnia and hypercapnia, (b) evaluate disparities in CVR between the internal carotid artery (ICA) and the vertebral artery (VA), and (c) assess sex-specific variation in CVR. Methodology: Phase-contrast MRI was employed to measure the cerebrovascular reactivity to CO₂ (±10 mmHg). The respiratory interventions were delivered using the prospective end-tidal targeting RespirAct™ Gen3 system. Post-processing and statistical analysis were conducted. Results: In 9 young, healthy subjects, CBF increased from hypocapnia to hypercapnia in all vessels (4.21±0.76 to 7.20±1.83 mL/sec in the ICA, 1.36±0.55 to 2.33±1.31 mL/sec in the VA, p < 0.05). CVR was quantitatively higher in the ICA than in the VA (slope of linear regression: 0.23 vs. 0.07 mL/sec/mmHg, p < 0.05). No statistically significant difference in CVR was observed between males and females (0.25 vs. 0.20 mL/sec/mmHg in the ICA, 0.09 vs. 0.11 mL/sec/mmHg in the VA, p > 0.05). Conclusions: The principal finding of this investigation validated the modulation of CBF by CO₂. Moreover, it indicated that regional heterogeneity in the hemodynamic response exists in the brain. This study provides scope to standardize the quantification of CVR prior to its clinical translation.

Keywords: cerebrovascular disease, neuroimaging, phase contrast MRI, cerebrovascular reactivity, carbon dioxide

Procedia PDF Downloads 123
5695 Features Dimensionality Reduction and Multi-Dimensional Voice-Processing Program to Parkinson Disease Discrimination

Authors: Djamila Meghraoui, Bachir Boudraa, Thouraya Meksen, M. Boudraa

Abstract:

Parkinson's disease is a pathology that involves characteristic perturbations of patients' voices. This paper describes a proposed method that aims to diagnose persons with Parkinson's (PWP) by analyzing their voice signals online. First, threshold signal alterations are determined by the Multi-Dimensional Voice Program (MDVP). Principal Component Analysis (PCA) is then exploited to select the main voice principal components that are significantly affected in a patient. The decision phase is realized by a Multinomial Naive Bayes (MNB) classifier that categorizes an analyzed voice into one of the two resulting classes: healthy or PWP. The prediction accuracy achieved, reaching 98.8%, is very promising.
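
A minimal sketch of the PCA-plus-Multinomial-Naive-Bayes decision pipeline is given below; the voice features are random placeholders for MDVP measures, and the min-max rescaling of PCA scores is an implementation detail assumed here (MultinomialNB requires non-negative inputs), not something stated in the abstract.

```python
# Sketch: PCA feature reduction followed by a Multinomial Naive Bayes classifier.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.preprocessing import MinMaxScaler
from sklearn.naive_bayes import MultinomialNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
X_healthy = rng.normal(0.0, 1.0, (100, 22))        # 22 MDVP-style voice measures (assumed)
X_pwp = rng.normal(0.8, 1.2, (100, 22))
X = np.vstack([X_healthy, X_pwp])
y = np.array([0] * 100 + [1] * 100)                # 0 = healthy, 1 = PWP

# MinMaxScaler maps the (possibly negative) PCA scores into [0, 1] for MultinomialNB.
clf = make_pipeline(PCA(n_components=5), MinMaxScaler(), MultinomialNB())
print(cross_val_score(clf, X, y, cv=5).mean())
```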

Keywords: Parkinson’s disease recognition, PCA, MDVP, multinomial Naive Bayes

Procedia PDF Downloads 252
5694 Using Heat-Mask in the Thermoforming Machine for Component Positioning in Thermoformed Electronics

Authors: Behnam Madadnia

Abstract:

For several years, 3D-shaped electronics have been on the rise, with many uses in home appliances, automotive, and manufacturing. One of the biggest challenges in the fabrication of 3D-shaped electronics made by thermoforming is repeatable and accurate component positioning; typically, there is no control over the final position of the component. This paper aims to address this issue and presents a reliable approach for guiding electronic components to the desired place during thermoforming. We propose a heat-control mask in the thermoforming machine to control the heating of the polymer so that specific parts are not formable, which can assure the mechanical stability of the conductive traces during thermoforming of the substrate. We verified the accuracy of our approach by applying the method to a real industrial semi-sphere mold for positioning 7 LEDs and one touch sensor, and we measured the positions of the LEDs after thermoforming to prove the repeatability of the process. The experimental results demonstrate that the proposed method is capable of positioning electronic components in thermoformed 3D electronics with high precision.

Keywords: 3D-shaped electronics, electronic components, thermoforming, component positioning

Procedia PDF Downloads 67
5693 Nature of Body Image Distortion in Eating Disorders

Authors: Katri K. Cornelissen, Lise Gulli Brokjob, Kristofor McCarty, Jiri Gumancik, Martin J. Tovee, Piers L. Cornelissen

Abstract:

Recent research has shown that body size estimation by healthy women is driven by independent attitudinal and perceptual components. The attitudinal component represents psychological concerns about the body, coupled with low self-esteem and a tendency towards depressive symptomatology, leading to over-estimation of body size independently of someone's actual Body Mass Index (BMI). The perceptual component is a normal bias known as contraction bias, which, for bodies, depends on actual BMI: women with a BMI below the population norm tend to overestimate their size, women with a BMI above the population norm tend to underestimate their size, and women whose BMI is close to the population mean are most accurate. This is indexed by a regression of estimated BMI on actual BMI with a slope less than one. It is well established that body dissatisfaction, i.e. an attitudinal distortion, leads to body size overestimation in eating-disordered individuals. However, debate persists as to whether women with eating disorders may also suffer a perceptual body distortion. The current study was therefore set up to ask whether women with eating disorders exhibit the normal contraction bias when they estimate their own body size. If they do not, this would suggest differences in the way that women with eating disorders process the perceptual aspects of body shape and size in comparison to healthy controls. 100 healthy controls and 33 women with a history of eating disorders were recruited. Critically, it was ensured that both groups of participants represented comparable and adequate ranges of actual BMI (e.g. ~18 to ~40). Of those with eating disorders, 19 had a history of anorexia nervosa, 6 of bulimia nervosa, and 8 of OSFED. 87.5% of the women with a history of eating disorders self-reported that they were either recovered or recovering, and 89.7% of them self-reported that they had had one or more instances of relapse. The mean time elapsed since first diagnosis was 5 years, and on average participants had experienced two relapses. Participants were asked to complete a number of psychometric measures (EDE-Q, BSQ, RSE, BDI) to establish the attitudinal component of their body image as well as their tendency to internalize socio-cultural body ideals. Additionally, participants completed a method-of-adjustment psychophysical task, using photorealistic avatars calibrated for BMI, in order to provide an estimate of their own body size and shape. The data from the healthy controls replicate previous findings, revealing independent contributions to body size estimation from both attitudinal and perceptual (i.e. contraction bias) body image components, as described above. For the eating disorder group, once the adequacy of their actual BMI range was established, a regression of estimated BMI on actual BMI had a slope greater than 1, significantly different from that of the controls. This suggests that (some) eating-disordered individuals process the perceptual aspects of body image differently from healthy controls. It is therefore necessary to develop interventions that are specific to the perceptual processing of body shape and size for the management of (some) individuals with eating disorders.

Keywords: body image distortion, perception, recovery, relapse, BMI, eating disorders

Procedia PDF Downloads 45
5692 Institutional Capacity and Corruption: Evidence from Brazil

Authors: Dalson Figueiredo, Enivaldo Rocha, Ranulfo Paranhos, José Alexandre

Abstract:

This paper analyzes the effects of institutional capacity on corruption. Methodologically, the research design combines descriptive and multivariate statistics to examine two original datasets based on secondary data. In particular, we employ a principal component model to estimate an indicator of institutional capacity for both state audit institutions and subnational judiciary courts. We then estimate the effect of institutional capacity on two dependent variables: (1) the incidence of administrative irregularities and (2) the time elapsed to judge corruption cases. Preliminary results using ordinary least squares, negative binomial and Tobit models point to the same conclusions: the higher the institutional audit capacity, the higher the probability of detecting a corruption case; on the other hand, the higher the institutional capacity of the state judiciary, the shorter the time taken to judge corruption cases.

Keywords: institutional capacity, corruption, state level institutions, evidence from Brazil

Procedia PDF Downloads 328
5691 Prediction of the Thermodynamic Properties of Hydrocarbons Using Gaussian Process Regression

Authors: N. Alhazmi

Abstract:

Knowing the thermodynamic properties of hydrocarbons is vital when it comes to analyzing the outcomes of related chemical reactions and understanding the reaction process, especially in terms of petrochemical industrial applications, combustion, and catalytic reactions. However, measuring the thermodynamic properties experimentally is time-consuming and costly. In this paper, Gaussian process regression (GPR) is used to directly predict the main thermodynamic properties - standard enthalpy of formation, standard entropy, and heat capacity - for more than 360 cyclic and non-cyclic alkanes, alkenes, and alkynes. A simple workflow is proposed that can be applied to directly predict the main properties of any hydrocarbon from its descriptors and chemical structure, and it can be generalized to predict the main properties of any material. The model was evaluated by calculating the coefficient of determination R², which was more than 0.9794 for all the predicted properties.
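
A minimal sketch of such a descriptor-to-property GPR model is shown below; the descriptors, data and kernel choice are synthetic placeholders, not the paper's dataset or workflow.

```python
# Sketch: Gaussian process regression from simple molecular descriptors to a thermodynamic
# property (here, a synthetic stand-in for standard enthalpy of formation).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(8)
# Hypothetical descriptors: carbon count, hydrogen count, rings, double bonds, triple bonds.
X = rng.integers(0, 12, (360, 5)).astype(float)
dHf = -20.0 * X[:, 0] - 5.0 * X[:, 1] + 30.0 * X[:, 3] + rng.normal(0, 5, 360)

kernel = RBF(length_scale=5.0) + WhiteKernel(noise_level=1.0)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, dHf)

x_new = np.array([[6, 14, 0, 0, 0]])               # e.g. a hexane-like descriptor vector
mean, std = gpr.predict(x_new, return_std=True)    # GPR also gives an uncertainty estimate
print(mean[0], std[0])
```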

Keywords: thermodynamic, Gaussian process regression, hydrocarbons, regression, supervised learning, entropy, enthalpy, heat capacity

Procedia PDF Downloads 194
5690 Solving Single Machine Total Weighted Tardiness Problem Using Gaussian Process Regression

Authors: Wanatchapong Kongkaew

Abstract:

This paper proposes the application of a probabilistic technique, namely Gaussian process regression, to estimating an optimal sequence for the single machine total weighted tardiness (SMTWT) scheduling problem. In this work, the Gaussian process regression (GPR) model is utilized to predict an optimal sequence for the SMTWT problem, and the solution is then improved by an iterated local search based on a simulated annealing scheme, called the GPRISA algorithm. The results show that the proposed GPRISA method achieves very good performance and a reasonable trade-off between solution quality and time consumption. Moreover, in terms of deviation from the best-known solution, the proposed mechanism noticeably outperforms recently existing approaches.
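
For reference, the SMTWT objective and a plain simulated-annealing local search are sketched below; the GPR seeding step described in the abstract is omitted, and all instance data, the swap neighbourhood and the cooling schedule are illustrative choices.

```python
# Sketch: SMTWT objective and a simulated-annealing improvement step (illustrative only).
import math, random

def total_weighted_tardiness(seq, p, d, w):
    """Total weighted tardiness of a job sequence with processing times p, due dates d, weights w."""
    t, twt = 0, 0
    for j in seq:
        t += p[j]
        twt += w[j] * max(0, t - d[j])
    return twt

def sa_improve(seq, p, d, w, iters=20_000, T0=100.0, cooling=0.9995):
    random.seed(0)
    best = cur = list(seq)
    best_cost = cur_cost = total_weighted_tardiness(cur, p, d, w)
    T = T0
    for _ in range(iters):
        i, j = random.sample(range(len(cur)), 2)   # swap-neighbourhood move
        cand = cur[:]
        cand[i], cand[j] = cand[j], cand[i]
        cost = total_weighted_tardiness(cand, p, d, w)
        if cost < cur_cost or random.random() < math.exp((cur_cost - cost) / T):
            cur, cur_cost = cand, cost
            if cost < best_cost:
                best, best_cost = cand, cost
        T *= cooling
    return best, best_cost

p = [4, 2, 6, 3, 5]; d = [5, 7, 10, 6, 12]; w = [3, 1, 4, 2, 2]
print(sa_improve(list(range(5)), p, d, w))
```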

Keywords: Gaussian process regression, iterated local search, simulated annealing, single machine total weighted tardiness

Procedia PDF Downloads 282
5689 The Profit Trend of Cosmetics Products Using Bootstrap Edgeworth Approximation

Authors: Edlira Donefski, Lorenc Ekonomi, Tina Donefski

Abstract:

Edgeworth approximation is one of the most important statistical methods, with a considerable contribution to reducing the sum of the standard deviations of the independent variables' coefficients in a quantile regression model, which estimates the conditional median or other quantiles. In this paper, we apply approximating statistical methods to an economic problem. We created and generated a quantile regression model to see how the profit gained is connected with the realized sales of cosmetic products in real data taken from a local business. The linear regression of the generated profit on the realized sales was not free of autocorrelation and heteroscedasticity, which is why we used the quantile regression model instead of linear regression. Our aim is to analyze in more detail the relation between the variables under study, the profit and the realized sales, and how to minimize the standard errors of the independent variable involved in this study, the level of realized sales. The statistical methods applied in our work are the Edgeworth approximation for independent and identically distributed (IID) cases, the bootstrap version of the model, and the Edgeworth approximation for the bootstrap quantile regression model. The graphics and the results presented here identify the best approximating model for our study.
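
As a hedged illustration only, the sketch below fits a median (quantile) regression of profit on sales and bootstraps the slope; the Edgeworth correction itself is not implemented here, and the data and column names are synthetic placeholders.

```python
# Sketch: median regression of profit on sales with a simple bootstrap of the slope.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(13)
sales = rng.uniform(100, 1000, 200)
profit = 0.3 * sales + rng.standard_t(3, 200) * (0.05 * sales)   # heteroscedastic noise
df = pd.DataFrame({"sales": sales, "profit": profit})

median_fit = smf.quantreg("profit ~ sales", df).fit(q=0.5)
print(median_fit.params)

boot_slopes = []
for _ in range(500):                                             # nonparametric bootstrap
    sample = df.sample(len(df), replace=True)
    boot_slopes.append(smf.quantreg("profit ~ sales", sample).fit(q=0.5).params["sales"])
print("bootstrap SE of the sales coefficient:", np.std(boot_slopes).round(4))
```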

Keywords: bootstrap, edgeworth approximation, IID, quantile

Procedia PDF Downloads 132
5688 Determining a Suitable Maintenance Measure for Gentelligent Components Using Case-Based Reasoning

Authors: Maximilian Winkens, Peter Nyhuis

Abstract:

Components with sensory properties, such as the gentelligent components developed at the Collaborative Research Center 653, offer a new angle on fully utilizing the remaining service life in the case of preventive maintenance. The developed methodology of component-status-driven maintenance analyses the stress data obtained during the component's useful life and, on the basis of this knowledge, assesses the type of maintenance called for in the given case. The procedure is derived from the case-based reasoning method and is elucidated in detail. The method's functionality is demonstrated with real-life data obtained during test runs of a racing car prototype.

Keywords: gentelligent component, preventive maintenance, case-based reasoning, sensory

Procedia PDF Downloads 339
5687 Investigation of New Gait Representations for Improving Gait Recognition

Authors: Chirawat Wattanapanich, Hong Wei

Abstract:

This study presents new gait representations for improving gait recognition accuracy across gait appearances, such as normal walking, wearing a coat and carrying a bag. Based on the Gait Energy Image (GEI), two ideas are implemented to generate new gait representations: one is to append lower-knee regions to the original GEI, and the other is to apply convolutional operations to the GEI and its variants. A set of new gait representations is created and used for training multi-class Support Vector Machines (SVMs). Tests are conducted on CASIA dataset B. Various combinations of the gait representations with different convolutional kernel sizes and different numbers of kernels used in the convolutional processes are examined. Both the entire images used as features and dimensionally reduced features obtained by Principal Component Analysis (PCA) are tested in gait recognition. Interestingly, both new techniques, appending the lower-knee regions to the original GEI and the convolutional GEI, contribute significantly to the performance improvement in gait recognition. The experimental results show that the average recognition rate can be improved from 75.65% to 87.50%.

Keywords: convolutional image, lower knee, gait

Procedia PDF Downloads 180
5686 Image Segmentation of Visual Markers in Robotic Tracking System Based on Differential Evolution Algorithm with Connected-Component Labeling

Authors: Shu-Yu Hsu, Chen-Chien Hsu, Wei-Yen Wang

Abstract:

Color segmentation is a basic and simple way of recognizing the visual markers in a robotic tracking system. In this paper, we propose a new method for color segmentation that incorporates a differential evolution algorithm and connected component labeling to autonomously preset the HSV thresholds of visual markers. To evaluate the effectiveness of the proposed algorithm, a ROBOTIS OP2 humanoid robot is used to conduct the experiment, in which the five colors most commonly used in visual markers, namely red, purple, blue, yellow, and green, are given for comparison.
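
The abstract does not detail the optimization objective; as a rough sketch under assumed choices (a synthetic HSV image, an intersection-over-union fitness, and a four-parameter threshold), differential evolution combined with connected-component labeling might be set up as follows.

```python
# Sketch: differential evolution tuning an HSV threshold so that the largest connected
# component of the thresholded image matches a reference marker mask.
import numpy as np
from scipy.optimize import differential_evolution
from scipy.ndimage import label

rng = np.random.default_rng(9)
H, W = 60, 80
hsv = rng.uniform(0, 1, (H, W, 3))                 # synthetic HSV image in [0, 1]
truth = np.zeros((H, W), bool)
truth[20:40, 30:55] = True                         # the "red marker" region
hsv[truth] = [0.02, 0.9, 0.9] + rng.normal(0, 0.01, (truth.sum(), 3))

def neg_iou(params):
    h_lo, h_hi, s_lo, v_lo = params
    mask = ((hsv[..., 0] >= h_lo) & (hsv[..., 0] <= h_hi)
            & (hsv[..., 1] >= s_lo) & (hsv[..., 2] >= v_lo))
    labels, n = label(mask)                        # connected-component labeling
    if n == 0:
        return 1.0
    largest = labels == np.bincount(labels.ravel())[1:].argmax() + 1
    inter = (largest & truth).sum()
    union = (largest | truth).sum()
    return 1.0 - inter / union                     # minimise 1 - IoU

bounds = [(0, 0.5), (0, 0.5), (0, 1), (0, 1)]      # h_lo, h_hi, s_lo, v_lo search ranges
result = differential_evolution(neg_iou, bounds, seed=0, maxiter=50)
print(result.x, 1 - result.fun)
```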

Keywords: color segmentation, differential evolution, connected component labeling, humanoid robot

Procedia PDF Downloads 578
5685 Heavy Metal Concentrations in Sediments of Sta. Maria River, Laguna

Authors: Francis Angelo A. Sta. Ana

Abstract:

Heavy metal pollutants are a major environmental concern in built-up areas of the Philippines, as they cause negative effects on aquatic organisms and human health. The concentrations of the heavy metals chromium, mercury, lead, copper, arsenic, zinc, cadmium, and nickel were investigated in the Sta. Maria River in Laguna. A total of 16 sediment samples were collected from the river at four stations, and atomic absorption spectroscopy (AAS) was used for element detection. Statistical analysis using principal component analysis (PCA) found that copper is associated with chromium. Comparison with Sediment Quality Guidelines (SQG) revealed that chromium has high toxicity, with values higher than the Sediment Quality Guidelines Probable Effect Level (SQG's PEL). Copper, nickel, and lead fall into the average toxicity range, while the other metals are below the PEL and the effect range low (ERL).

Keywords: heavy metals, pollutants, sediment quality guidelines, atomic absorption spectroscopy

Procedia PDF Downloads 112
5684 Quantitative Elemental Analysis of Cyperus rotundus Medicinal Plant by Particle Induced X-Ray Emission and ICP-MS Techniques

Authors: J. Chandrasekhar Rao, B. G. Naidu, G. J. Naga Raju, P. Sarita

Abstract:

Particle Induced X-ray Emission (PIXE) and Inductively Coupled Plasma Mass Spectroscopy (ICP-MS) techniques have been employed in this work to determine the elements present in the root of Cyperus rotundus medicinal plant used in the treatment of rheumatoid arthritis. The elements V, Cr, Mn, Fe, Co, Ni, Cu, Zn, Rb, and Sr were commonly identified and quantified by both PIXE and ICP-MS whereas the elements Li, Be, Al, As, Se, Ag, Cd, Ba, Tl, Pb and U were determined by ICP-MS and Cl, K, Ca, Ti and Br were determined by PIXE. The regional variation of elemental content has also been studied by analyzing the same plant collected from different geographical locations. Information on the elemental content of the medicinal plant would be helpful in correlating its ability in the treatment of rheumatoid arthritis and also in deciding the dosage of this herbal medicine from the metal toxicity point of view. Principal component analysis and cluster analysis were also applied to the data matrix to understand the correlation among the elements.

Keywords: PIXE, ICP-MS, elements, Cyperus rotundus, rheumatoid arthritis

Procedia PDF Downloads 310
5683 Exploratory Factor Analysis of Natural Disaster Preparedness Awareness of Thai Citizens

Authors: Chaiyaset Promsri

Abstract:

Based on a synthesis of the related literature, this research identified thirteen dimensions involved in the development of natural disaster preparedness awareness, including hazard knowledge, hazard attitude, training for disaster preparedness, rehearsal and practice for disaster preparedness, cultural development for preparedness, public relations and communication, storytelling, disaster awareness games, simulation, past experience of natural disasters, information sharing with family members, and commitment to the community (time of living). A 40-item natural disaster preparedness awareness questionnaire was developed based on these dimensions. Data were collected from 595 participants in the Bangkok metropolitan area and its vicinity. Cronbach's alpha was used to examine the internal consistency of the instrument; the reliability coefficient was .97, which is highly acceptable. Exploratory factor analysis with principal axis factoring was employed. The Kaiser-Meyer-Olkin index of sampling adequacy was .973, indicating that the data represented a homogeneous collection of variables suitable for factor analysis. Bartlett's test of sphericity was significant for the sample (Chi-Square = 23168.657, df = 780, p-value < .0001), which indicated that the set of correlations in the correlation matrix was significantly different from zero and acceptable for EFA. Factor extraction was performed to determine the number of factors using principal component analysis and varimax rotation. The results revealed four factors with eigenvalues greater than 1, together explaining more than 60% of the cumulative variance. Factor #1 had an eigenvalue of 22.270, with factor loadings ranging from 0.626 to 0.760; this factor was named "Knowledge and Attitude of Natural Disaster Preparedness". Factor #2 had an eigenvalue of 2.491, with factor loadings ranging from 0.596 to 0.696; this factor was named "Training and Development". Factor #3 had an eigenvalue of 1.821, with factor loadings ranging from 0.643 to 0.777; this factor was named "Building Experiences about Disaster Preparedness". Factor #4 had an eigenvalue of 1.365, with factor loadings ranging from 0.657 to 0.760; this factor was named "Family and Community". The results of this study provide support for the reliability and construct validity of the natural disaster preparedness awareness instrument for use with populations similar to the sample employed.
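
A rough sketch of this kind of exploratory factor analysis pipeline is given below, assuming the third-party factor_analyzer package for the KMO, Bartlett and varimax-rotated extraction steps; the simulated 40-item responses are placeholders, not the survey data.

```python
# Sketch: KMO, Bartlett's test and varimax-rotated factor extraction on simulated item data.
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

rng = np.random.default_rng(10)
latent = rng.normal(size=(595, 4))                         # 4 hypothetical latent factors
loadings = rng.uniform(0.4, 0.8, size=(40, 4))
items = latent @ loadings.T + rng.normal(0, 1, (595, 40))
df = pd.DataFrame(items, columns=[f"item{i+1}" for i in range(40)])

chi2, p = calculate_bartlett_sphericity(df)                # suitability of the correlation matrix
_, kmo_total = calculate_kmo(df)
print(f"Bartlett chi2={chi2:.1f}, p={p:.4f}, KMO={kmo_total:.3f}")

fa = FactorAnalyzer(n_factors=4, rotation="varimax")       # varimax-rotated extraction
fa.fit(df)
eigenvalues, _ = fa.get_eigenvalues()
print("eigenvalues > 1:", (eigenvalues > 1).sum())
print(pd.DataFrame(fa.loadings_, index=df.columns).round(2).head())
```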

Keywords: natural disaster, disaster preparedness, disaster awareness, Thai citizens

Procedia PDF Downloads 352
5682 Time Series Regression with Meta-Clusters

Authors: Monika Chuchro

Abstract:

This paper presents a preliminary attempt to apply classification of time series using meta-clusters in order to improve the quality of regression models. In this case, clustering was performed to obtain subgroups of time series data with a normal distribution from wastewater treatment plant inflow data, which is composed of several groups differing in mean value. Two simple algorithms, K-means and EM, were chosen as clustering methods, and the Rand index was used to measure their similarity. After this simple meta-clustering, a regression model was built for each subgroup, and the final model was the sum of the subgroup models. The quality of the obtained model was compared with that of a regression model built using the same explanatory variables but without clustering the data. The results were compared using the coefficient of determination (R²), the mean absolute percentage error (MAPE) as a measure of prediction accuracy, and comparison on a linear chart. The preliminary results allow us to foresee the potential of the presented technique.
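
The sketch below illustrates the general recipe (cluster the inflow data with K-means and EM, fit one regression per subgroup, combine the subgroup predictions, and report MAPE); the synthetic inflow series, the clustering features and the use of the adjusted Rand index are assumptions for illustration.

```python
# Sketch: meta-clustering of inflow data followed by one regression model per subgroup.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture
from sklearn.linear_model import LinearRegression
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(11)
t = np.arange(730).reshape(-1, 1)                          # two years of daily inflow (assumed)
group = (rng.random(730) < 0.5).astype(int)                # two regimes differing in mean
inflow = np.where(group == 0, 1000, 1600) + 0.3 * t.ravel() + rng.normal(0, 50, 730)

features = inflow.reshape(-1, 1)
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
em = GaussianMixture(n_components=2, random_state=0).fit_predict(features)
print("Adjusted Rand index (K-means vs EM):", round(adjusted_rand_score(km, em), 3))

pred = np.zeros_like(inflow)
for c in np.unique(km):                                    # one regression model per meta-cluster
    idx = km == c
    pred[idx] = LinearRegression().fit(t[idx], inflow[idx]).predict(t[idx])

mape = np.mean(np.abs((inflow - pred) / inflow)) * 100
print(f"MAPE of the combined model: {mape:.2f}%")
```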

Keywords: clustering, data analysis, data mining, predictive models

Procedia PDF Downloads 441
5681 Economic Analysis of Cowpea (Unguiculata spp) Production in Northern Nigeria: A Case Study of Kano Katsina and Jigawa States

Authors: Yakubu Suleiman, S. A. Musa

Abstract:

Nigeria is the largest cowpea producer in the world, accounting for about 45% of production, followed by Brazil with about 17%. Cowpea is grown in Kano, Bauchi, Katsina and Borno in the north, Oyo in the west, and to a lesser extent in Enugu in the east. This study was conducted to determine the input-output relationship of cowpea production in the Kano, Katsina, and Jigawa states of Nigeria. The data were collected with the aid of 1,000 structured questionnaires that were randomly distributed to cowpea farmers in the three states of the study area. The data collected were analyzed using regression analysis (a Cobb-Douglas production function model). The regression analysis revealed the coefficient of multiple determination, R², to be 72.5% and the F ratio to be 106.20, which was found to be significant (P < 0.01). The regression coefficient of the constant is 0.5382 and is significant (P < 0.01). The regression coefficients with respect to labor and seed are 0.65554 and 0.4336, respectively, and are highly significant (P < 0.01). The regression coefficient with respect to fertilizer is 0.26341, which is significant (P < 0.05). This implies that a unit increase in any one of the variable inputs, holding all other variable inputs constant, will significantly increase total cowpea output by the corresponding coefficient. This indicates that farmers in the study area are operating in stage II of the production function. The results revealed that cowpea farmers in Kano, Jigawa and Katsina states realized profits of N15,997, N34,016 and N19,788 per hectare, respectively. It is hereby recommended that more attention be given to cowpea production by government and research institutions.
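
A Cobb-Douglas production function of the kind estimated above is fitted as a log-log regression, ln Q = ln A + b1 ln Labour + b2 ln Seed + b3 ln Fertiliser, where the slopes are the input elasticities. The sketch below uses synthetic data with coefficients chosen to echo the reported values, purely for illustration.

```python
# Sketch: Cobb-Douglas production function estimated as a log-log OLS regression.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(12)
n = 1000
labour = rng.uniform(10, 100, n)
seed = rng.uniform(5, 50, n)
fert = rng.uniform(20, 200, n)
output = 0.54 * labour**0.65 * seed**0.43 * fert**0.26 * np.exp(rng.normal(0, 0.2, n))

X = sm.add_constant(np.log(np.column_stack([labour, seed, fert])))
model = sm.OLS(np.log(output), X).fit()
print(model.params.round(3))        # elasticities: the regression coefficients of each input
print(f"R^2 = {model.rsquared:.3f}")
```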

Keywords: coefficient, constant, inputs, regression

Procedia PDF Downloads 389