Search results for: logistic principal component analysis


9118 Fault Detection via Stability Analysis for the Hybrid Control Unit of HEVs

Authors: Kyogun Chang, Yoon Bok Lee

Abstract:

Fault detection determines fault existence and detection time. This paper discusses a two-layered fault detection method to enhance reliability and safety. The two layers consist of fault detection in the component-level controllers and in the system-level controllers. Component-level controllers detect faults by limit checking, model-based detection, and data-driven detection, while system-level controllers perform detection through stability analysis, which can detect unknown changes. The system-level controllers compare the detection results obtained via stability analysis with the fault signals from the lower-level controllers. This paper addresses fault detection methods based on stability and suggests fault detection criteria for nonlinear systems. The method is applied to the hybrid control unit of a military hybrid electric vehicle so that the hybrid control unit can detect faults of the traction motor.

Keywords: Two-Layered Fault Detection, Stability Analysis, Fault-Tolerant Control

9117 A Study on the Assessment of Prosthetic Infection after Total Knee Replacement Surgery

Authors: Chang, Chun-Lang, Liu, Chun-Kai

Abstract:

This study uses, as its research subjects, patients who underwent total knee replacement surgery, drawn from the database of the National Health Insurance Administration. Important factors were selected after careful screening through a literature review and interviews with physicians. The weight of each factor was then calculated using the Cross Entropy Method, Genetic Algorithm Logistic Regression, and Particle Swarm Optimization. In addition, Excel VBA and Case Based Reasoning were combined to evaluate the system. Results show no significant difference between Genetic Algorithm Logistic Regression and Particle Swarm Optimization, with over 97% accuracy for both methods and ROC areas above 0.87 in both cases. This study can serve as a critical clinical-assessment reference for medical personnel, helping to enhance the quality and efficiency of medical care, prevent unnecessary waste, and support resource allocation in medical institutions.
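The reported figures (over 97% accuracy, ROC areas above 0.87) correspond to standard binary-classification metrics. A minimal sketch of how such numbers are computed, using synthetic stand-in data rather than the study's National Health Insurance records:

```python
# Sketch only: accuracy and ROC area for an infection-risk classifier.
# X and y are synthetic stand-ins for the study's patient factors/outcomes.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, model.predict(X_te)))
print("ROC area:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```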

Keywords: Total knee replacement, Case Based Reasoning, Cross Entropy Method, Genetic Algorithm Logistic Regression, Particle Swarm Optimization.

9116 A Study on the Differential Diagnostic Model for Newborn Hearing Loss Screening

Authors: Chun-Lang Chang

Abstract:

According to statistics, the prevalence of congenital hearing loss in Taiwan is approximately six per thousand; furthermore, one per thousand of infants have severe hearing impairment. Hearing ability during infancy has a significant impact on the development of children's oral expression, language maturity, cognitive performance, educational ability, and social behavior later in life. Although most children born with hearing impairment have sensorineural hearing loss, almost every child retains at least some residual hearing. If provided with a hearing aid or cochlear implant (a bionic ear) in time, together with hearing and speech training, even severely hearing-impaired children can still learn to talk. On the other hand, those who are not diagnosed, and thus unable to begin hearing and speech rehabilitation in a timely manner, may lose an important opportunity to live a complete and healthy life. Eventually, the lack of hearing and speaking ability will affect the development of both mental and physical functions, intelligence, and social adaptability. Not only does this result in irreparable lifelong regret for the hearing-impaired child, it also creates a heavy burden for the family and society. Therefore, it is necessary to establish a computer-assisted predictive model that can accurately detect and help diagnose newborn hearing loss, so that early interventions can be provided in time and waste of medical resources eliminated. This study uses records from the neonatal database of the case hospital as subjects, adopting two different analysis approaches: using a support vector machine (SVM) directly for model prediction, and using logistic regression for factor screening prior to SVM prediction, and comparing the results. The results indicate that prediction accuracy is as high as 96.43% when the factors are screened and selected through logistic regression. Hence, the model constructed in this study offers real help to physicians in clinical diagnosis and is beneficial to early intervention for newborn hearing impairment.
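As a sketch of the two-stage approach described above (logistic regression for factor screening, then an SVM for prediction), assuming a generic tabular dataset in place of the neonatal database:

```python
# Sketch: L1-penalized logistic regression screens factors, an SVM predicts
# on the retained ones. Synthetic data stands in for the hospital records.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=400, n_features=20, n_informative=6,
                           random_state=1)
screen = SelectFromModel(LogisticRegression(penalty="l1", solver="liblinear"))
pipe = make_pipeline(StandardScaler(), screen, SVC(kernel="rbf"))
print("cross-validated accuracy:", cross_val_score(pipe, X, y, cv=5).mean())
```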

Keywords: Data mining, Hearing impairment, Logistic regression analysis, Support vector machines

9115 Untargeted Small Metabolite Identification from Thermally Treated Tualang Honey

Authors: Lee Suan Chua

Abstract:

This study investigated the effects of thermal treatment on a Tualang honey sample in terms of honey colour and heat-induced small metabolites. The heating process was carried out in a temperature-controlled water bath at 90 °C for 4 hours. The honey samples were put in cylindrical tubes, 1 cm in diameter and 10 cm in length, for homogeneous heat transfer. The results showed that the thermal treatment produced not only hydroxymethylfurfural but also other harmful substances such as phthalic anhydride and radiolytic byproducts. Degradation of honey protein was indicated by the detection of free amino acids such as cysteine and phenylalanine in the heat-treated honey samples. Sugar dehydration also occurred, as fragmented di-galactose was identified from the characteristic ions in the mass fragmentation pattern. The honey colour became darker as the heating duration increased, up to 4 hours. An increment of approximately 60 mm Pfund was observed in the honey colour, corresponding to a colour change rate of 14.8 mm Pfund per hour. Based on principal component analysis, the score plot clearly shows that the chemical profile of Tualang honey was significantly altered after 2 hours of heating at 90 °C.
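The PCA score plot mentioned at the end can be reproduced in outline as follows; the peak-intensity matrix here is simulated, not the measured mass-spectral data:

```python
# Sketch: PCA scores separating unheated vs. heated honey profiles.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
unheated = rng.normal(0.0, 1.0, size=(10, 50))  # 10 samples x 50 metabolite peaks
heated = rng.normal(1.5, 1.0, size=(10, 50))    # profile shifted by heating
X = np.vstack([unheated, heated])

scores = PCA(n_components=2).fit_transform(X)
# Group separation along PC1 indicates an altered chemical profile.
print("PC1 means:", scores[:10, 0].mean(), scores[10:, 0].mean())
```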

Keywords: Honey colour, hydroxymethylfurfural, thermal treatment, Tualang honey.

9114 Application of a Company Financial Crisis Early Warning Model: Use of the "Financial Reference Database"

Authors: Chiung-ying Lee, Chia-hua Chang

Abstract:

On July 1, 2007, the Taiwan Stock Exchange (TWSE) added a new "Financial reference database" to its Market Observation Post System (MOPS) for investors to use as an investment reference. The database serves as a warning based on the public financial information of listed companies and originally comprised eight indicators. This paper applies the indicators provided by this database to a company financial crisis early-warning model, to verify whether these indicators forecast corporate financial crises with the high accuracy rate reported in the positive results of domestic and foreign scholars. A logistic regression model is used for the financial early-warning model: the first model excludes the added conditions and the second model includes them. Companies that experienced a financial crisis were taken as research samples, and sample data from the T-1 and T-2 periods before the crisis occurred were used for the empirical analysis. The results show that, of the variables provided by this database, the debt ratio and net value per share are the best forecast variables.
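For reference, such an early-warning model takes the familiar logit form; the two variables shown are the forecast variables identified above, and the coefficients are placeholders to be estimated from the T-1 and T-2 samples:

\[ P(\text{crisis}) = \frac{1}{1 + e^{-(\beta_0 + \beta_1\,\text{DebtRatio} + \beta_2\,\text{NetValuePerShare})}} \]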

Keywords: Financial reference database, Financial early warning model, Logistic Regression.

9113 Prediction of Reusability of Object Oriented Software Systems using Clustering Approach

Authors: Anju Shri, Parvinder S. Sandhu, Vikas Gupta, Sanyam Anand

Abstract:

In the literature, there are metrics for identifying the quality of reusable components, but a framework that uses these metrics to precisely predict the reusability of software components still needs to be worked out. If identified in the design phase, or even in the coding phase, these reusability metrics can help reduce rework by improving the quality of reuse of software components and hence improve productivity through a probable increase in the reuse level. As the CK metric suite is the most widely used set of metrics for extracting structural features of object-oriented (OO) software, this study uses a tuned CK metric suite, i.e., WMC, DIT, NOC, CBO, and LCOM, to obtain the structural analysis of OO-based software components. An algorithm is proposed in which the tuned metric values of an OO software component are given as inputs to a K-Means clustering system, and a decision tree is formed with 10-fold cross-validation of the data to evaluate the component in terms of a linguistic reusability value. The developed reusability model produced the desired high-precision results.
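A minimal sketch of the proposed flow, with random numbers standing in for the tuned WMC/DIT/NOC/CBO/LCOM values:

```python
# Sketch: K-Means groups components by CK metric values; a decision tree is
# then evaluated with 10-fold cross-validation against the cluster labels.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(42)
X = rng.uniform(0, 1, size=(200, 5))   # WMC, DIT, NOC, CBO, LCOM (scaled)
labels = KMeans(n_clusters=3, n_init=10, random_state=42).fit_predict(X)

tree = DecisionTreeClassifier(random_state=42)
print("10-fold CV accuracy:", cross_val_score(tree, X, labels, cv=10).mean())
```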

Keywords: CK-Metric, Decision Tree, K-Means, Reusability.

9112 Spike Sorting Method Using Exponential Autoregressive Modeling of Action Potentials

Authors: Sajjad Farashi

Abstract:

Neurons in the nervous system communicate with each other by producing electrical signals called spikes. To investigate the physiological function of the nervous system, it is essential to study the activity of neurons by detecting and sorting spikes in the recorded signal. In this paper, a method is proposed for the spike sorting problem based on nonlinear modeling of spikes using an exponential autoregressive model. A genetic algorithm is utilized for model parameter estimation, and selected model coefficients are used as features for sorting. For optimal selection of the model coefficients, a self-organizing feature map is used. The results show that modeling spikes with a nonlinear autoregressive model outperforms its linear counterpart. The features extracted from the coefficients of the exponential autoregressive model are also better than wavelet-based features and yield more compact and well-separated clusters. For spikes that differ only in small-scale structures, where principal component analysis fails to produce separated clouds in the feature space, the proposed method obtains well-separated clusters, removing the need for complex classifiers.
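For reference, one common form of the exponential autoregressive model (Ozaki's ExpAR), in which the AR coefficients depend on the previous amplitude, is

\[ x_t = \sum_{i=1}^{p} \left( \phi_i + \pi_i\, e^{-\gamma x_{t-1}^{2}} \right) x_{t-i} + \varepsilon_t , \]

where φ_i, π_i, and γ are the parameters estimated here by the genetic algorithm; the paper's exact parameterization may differ.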

Keywords: Exponential autoregressive model, Neural data, spike sorting, time series modeling.

9111 Low Dimensional Representation of Dorsal Hand Vein Features Using Principal Component Analysis (PCA)

Authors: M. Heenaye-Mamode Khan, R. K. Subramanian, N. A. Mamode Khan

Abstract:

The quest to provide more secure identification systems has led to a rise in the development of biometric systems. The dorsal hand vein pattern is an emerging biometric that has lately attracted the attention of many researchers. Different approaches have been used to extract vein patterns and match them. In this work, Principal Component Analysis (PCA), a method that has been successfully applied to human faces and hand geometry, is applied to the dorsal hand vein pattern. PCA is used to obtain eigenveins, a low-dimensional representation of vein pattern features. Low-cost CCD cameras were used to obtain the vein images. The vein pattern was extracted by applying morphological operations, and noise reduction filters were applied to enhance the vein patterns. The system was successfully tested on a database of 200 images using a threshold value of 0.9. The results obtained are encouraging.
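A minimal sketch of the eigenvein idea, with random arrays standing in for the preprocessed vein images:

```python
# Sketch: PCA over vectorized vein images gives a low-dimensional basis
# ("eigenveins"); matching uses a similarity threshold (0.9 in the study).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(7)
images = rng.random((200, 64 * 64))     # 200 vectorized 64x64 vein patterns

pca = PCA(n_components=20).fit(images)  # rows of pca.components_ = eigenveins
codes = pca.transform(images)           # low-dimensional feature vectors

def match(a, b, threshold=0.9):
    """Cosine similarity between two projected patterns."""
    sim = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return sim >= threshold

print(match(codes[0], codes[0]))        # a pattern always matches itself
```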

Keywords: Biometric, Dorsal vein pattern, PCA.

9110 Measuring Principal and Teacher Cultural Competency: A Needs Assessment of Three Proximate PreK-5 Schools

Authors: Teresa Caswell

Abstract:

Throughout the United States and within a myriad of demographic contexts, students of color experience the results of systemic inequities as an academic outcome. These disparities continue despite the increased resources provided to students and ongoing instruction-focused professional learning received by teachers. We postulated that lower levels of educator cultural competency are an underlying factor of why resource and instructional interventions are less effective than desired. Before implementing any type of intervention, however, cultural competency needed to be confirmed as a factor in schools demonstrating academic disparities between racial subgroups. A needs assessment was designed to measure levels of individual beliefs, including cultural competency, in both principals and teachers at three neighboring schools verified to have academic disparities. The resulting mixed method study utilized the Optimal Theory Applied to Identity Development (OTAID) model to measure cultural competency quantitatively, through self-identity inventory survey items, with teachers and qualitatively, through one-on-one interviews, with each school’s principal. A joint display was utilized to see combined data within and across school contexts. Each school was confirmed to have misalignments between principal and teacher levels of cultural competency beliefs while also indicating that a number of participants in the self-identity inventory survey may have intentionally skipped items referencing the term oppression. Additional use of the OTAID model and self-identity inventory in future research and across contexts is needed to determine transferability and dependability as cultural competency measures.

Keywords: Cultural competency, identity development, mixed method analysis, needs assessment.

9109 Nuclear Fuel Safety Threshold Determined by Logistic Regression Plus Uncertainty

Authors: D. S. Gomes, A. T. Silva

Abstract:

Analysis of the uncertainty quantification related to nuclear safety margins in nuclear reactors is an important concept for preventing future radioactive accidents. Nuclear fuel performance codes may involve tolerance levels determined by traditional deterministic models that produce acceptable results for burn cycles under 62 GWd/MTU. The behavior of nuclear fuel can be simulated by applying a series of material properties under irradiation and physics models to calculate the safety limits. In this study, theoretical predictions of nuclear fuel failure under transient conditions are investigated for extended irradiation cycles at 75 GWd/MTU, considering the behavior of fuel rods in light-water reactors under reactivity accident conditions. The fuel pellet can melt due to the rapid increase of reactivity during a transient. Large power excursions in the reactor are the subject of interest, leading to a treatment known as the Fuchs-Hansen model. The point-kinetics neutron equations show the characteristics of non-linear differential equations. In this investigation, multivariate logistic regression is employed for a probabilistic forecast of fuel failure, and the comparison between computational simulation and experimental results was acceptable. The experiments used pre-irradiated fuel rods subjected to a rapid energy pulse, which reproduces the behavior during a nuclear accident. The propagation of uncertainty utilizes Wilks' formulation. The variables chosen as essential for failure prediction were the fuel burnup, the applied peak power, the pulse width, the oxide layer thickness, and the cladding type.
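Wilks' formulation fixes the number of code runs n so that, with confidence β, the maximum sampled output bounds at least a fraction γ of the population. For the first-order one-sided tolerance limit,

\[ 1 - \gamma^{\,n} \ge \beta \quad\Longrightarrow\quad n \ge \frac{\ln(1-\beta)}{\ln \gamma}, \]

which for the usual 95/95 criterion (γ = β = 0.95) gives n ≥ 59 runs.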

Keywords: Logistic regression, reactivity-initiated accident, safety margins, uncertainty propagation.

9108 Monitoring Blood Pressure Using Regression Techniques

Authors: Qasem Qananwah, Ahmad Dagamseh, Hiam AlQuran, Khalid Shaker Ibrahim

Abstract:

Blood pressure gives physicians deep insight into the cardiovascular system, and determining an individual's blood pressure is a standard clinical procedure for cardiovascular problems. Conventional techniques to measure blood pressure (e.g., the cuff method) allow only a limited number of readings in a given period (e.g., every 5-10 minutes). Additionally, these systems disturb blood flow, impeding continuous blood pressure monitoring, especially in emergency cases or for critically ill persons. In this paper, the most important statistical features of the photoplethysmogram (PPG) signal were extracted to estimate blood pressure noninvasively. PPG signals from more than 40 subjects were measured and analyzed, and 12 features were extracted. The features were fed to principal component analysis (PCA) to find the most important independent features having the highest correlation with blood pressure. The results show that the mean stiffness index and the standard deviation of the beat-to-beat heart rate were the most important features. A model representing both features for Systolic Blood Pressure (SBP) and Diastolic Blood Pressure (DBP) was obtained using a statistical regression technique. Surface fitting was used to best fit the series of data, and the results show an error of 4.95% in estimating the SBP and 3.99% in estimating the DBP.
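A sketch of the final regression step, modeling SBP from the two features ranked highest by PCA; the values below are simulated, whereas the study fitted a surface to measured PPG data:

```python
# Sketch: SBP regressed on stiffness-index mean and beat-to-beat HR std.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
stiffness_mean = rng.normal(7.0, 1.0, 40)
hr_std = rng.normal(5.0, 1.5, 40)
sbp = 60 + 8 * stiffness_mean + 1.5 * hr_std + rng.normal(0, 3, 40)  # toy truth

X = np.column_stack([stiffness_mean, hr_std])
pred = LinearRegression().fit(X, sbp).predict(X)
print("mean relative error: %.2f%%" % (100 * np.mean(np.abs(pred - sbp) / sbp)))
```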

Keywords: Blood pressure, noninvasive optical system, PCA, continuous monitoring.

9107 Mathematical Modeling of Non-Isothermal Multi-Component Fluid Flow in Pipes Applying to Rapid Gas Decompression in Rich and Base Gases

Authors: Evgeniy Burlutskiy

Abstract:

The paper presents a one-dimensional transient mathematical model of compressible, non-isothermal, multi-component fluid mixture flow in a pipe. The set of mass, momentum, and enthalpy conservation equations for the gas phase is solved in the model. The thermo-physical properties of the multi-component gas mixture are calculated by solving an Equation of State (EOS) model; the Soave-Redlich-Kwong (SRK-EOS) model is chosen. The gas mixture viscosity is calculated on the basis of the Lee-Gonzalez-Eakin (LGE) correlation. A numerical analysis of the rapid gas decompression process in rich and base natural gases is made on the basis of the proposed mathematical model, which is successfully validated against the experimental data [1]. The proposed model shows very good agreement with the experimental data [1] over a wide range of pressure values and predicts the decompression in rich and base gas mixtures much better than the analytical and mathematical models available in the open literature.
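For reference, the SRK equation of state used for the thermo-physical properties has the standard form

\[ P = \frac{RT}{V_m - b} - \frac{a\,\alpha(T)}{V_m\,(V_m + b)}, \qquad a = 0.42748\,\frac{R^2 T_c^2}{P_c}, \quad b = 0.08664\,\frac{R T_c}{P_c}, \]

\[ \alpha(T) = \Bigl[\,1 + \bigl(0.480 + 1.574\,\omega - 0.176\,\omega^2\bigr)\bigl(1 - \sqrt{T/T_c}\bigr)\Bigr]^2 , \]

applied here, presumably with mixing rules, to the multi-component mixture.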

Keywords: Mathematical model, Multi-Component gas mixture flow, Rapid Gas Decompression

9106 Numerical Analysis on Rapid Decompression in Conventional Dry Gases using One-Dimensional Mathematical Modeling

Authors: Evgeniy Burlutskiy

Abstract:

The paper presents a one-dimensional transient mathematical model of compressible, thermal, multi-component gas mixture flow in pipes. The set of mass, momentum, and enthalpy conservation equations for the gas phase is solved. The thermo-physical properties of the multi-component gas mixture are calculated by solving an Equation of State (EOS) model; the Soave-Redlich-Kwong (SRK-EOS) model is chosen. The gas mixture viscosity is calculated on the basis of the Lee-Gonzalez-Eakin (LGE) correlation. A numerical analysis of rapid decompression in conventional dry gases is performed using the proposed mathematical model. The model is validated against measured values of the decompression wave speed in dry natural gas mixtures, and all predictions show excellent agreement with the experimental data at both high and low pressure. The presented model predicts the decompression in dry natural gas mixtures much better than the GASDECOM and OLGA codes, which are the most frequently used codes in oil and gas pipeline transport service.

Keywords: Mathematical model, Rapid Gas Decompression

9105 Primary School Principals in Turkey: Their Working Conditions and Professional Profiles

Authors: Ali I. Gumuseli

Abstract:

In order to achieve effective management, the professional and individual characteristics and qualifications of school principals and their system-oriented perception are very important. It is therefore necessary to conduct regular, comprehensive studies of the profiles of school principals. The purpose of this study is to determine the perceptions of primary school principals about their working conditions and to present their professional profiles. The questionnaire was distributed to 1475 respondents, and 1428 valid questionnaires were evaluated. The results of the research are discussed and compared with other similar studies.

Keywords: education, education management, primary school principal, principals' profiles

9104 Towards a Suitable and Systematic Approach for Component Based Software Development

Authors: Kuljit Kaur, Parminder Kaur, Jaspreet Bedi, Hardeep Singh

Abstract:

Software crisis refers to the situation in which developers are unable to complete projects within time and budget constraints, and moreover these over-schedule, over-budget projects are of low quality as well. Several methodologies have been adopted from time to time to overcome this situation, and the current focus is component-based software engineering, in which the emphasis is on the reuse of already existing software artifacts. But the results cannot be achieved just by preaching the principles; they need to be practiced as well. This paper highlights some of the very basic elements of this approach, which have to be in place to achieve the desired goals of high-quality, low-cost software products with shorter time-to-market.

Keywords: Component Model, Software Components, Software Repository, Process Models.

9103 Material Characterization and Numerical Simulation of a Rubber Bumper

Authors: Tamás Mankovits, Dávid Huri, Imre Kállai, Imre Kocsis, Tamás Szabó

Abstract:

Non-linear FEM calculations are indispensable when important technical information, such as the operating performance of a rubber component, is desired. Rubber bumpers built into air-spring structures may undergo large deformations under load, which in itself shows non-linear behavior. The changing contact range between the parts and the incompressibility of the rubber increase this non-linear behavior further. The material characterization of an elastomeric component is also a demanding engineering task. In this paper, a comprehensive investigation is introduced, including laboratory measurements, mesh density analysis, and complex finite element simulations, to obtain the load-displacement curve of the chosen rubber bumper. Contact and friction effects are also taken into consideration. The aim of this research is to develop an FEM model that is accurate and competitive for a future shape optimization task.
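The Mooney-Rivlin model named in the keywords is conventionally written as the two-parameter strain energy density

\[ W = C_{10}\,(\bar{I}_1 - 3) + C_{01}\,(\bar{I}_2 - 3) + \frac{1}{D_1}\,(J - 1)^2 , \]

where C_10 and C_01 are fitted from the compression test and the volumetric term handles the near-incompressibility of the rubber; the paper's exact formulation may differ.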

Keywords: Rubber bumper, finite element analysis, compression test, Mooney-Rivlin material model.

9102 Seismic Fragility Assessment of Continuous Integral Bridge Frames with Variable Expansion Joint Clearances

Authors: P. Mounnarath, U. Schmitz, Ch. Zhang

Abstract:

Fragility analysis has become an effective tool for the seismic vulnerability assessment of civil structures in the last several years. The design of expansion joints according to various bridge design codes is largely inconsistent, and only a few studies have focused on this problem so far. In this study, the influence of the expansion joint clearances between the girder ends and the abutment backwalls on the seismic fragility assessment of continuous integral bridge frames is investigated. The gaps (60 mm, 150 mm, 250 mm, and 350 mm) are designed by following two different bridge design code specifications, namely Caltrans and Eurocode 8-2. Five bridge models are analyzed and compared. The first bridge model serves as a reference; it uses three-dimensional reinforced concrete fiber beam-column elements with simplified supports at both ends of the girder. The other four models also employ reinforced concrete fiber beam-column elements but include the abutment backfill stiffness and the four different gap values. Nonlinear time history analysis is performed, with artificial ground motion sets having peak ground accelerations (PGAs) ranging from 0.1 g to 1.0 g in increments of 0.05 g taken as input. Soil-structure interaction and P-Δ effects are also included in the analysis. The component fragility curves, in terms of the curvature ductility demand-to-capacity ratio of the piers and the displacement demand-to-capacity ratio of the abutment sliding bearings, are established and compared. The system fragility curves are then obtained by combining the component fragility curves. Our results show that, in the component fragility analysis, the reference bridge model exhibits severe vulnerability compared with the more sophisticated bridge models for all damage states. In the system fragility analysis, the reference curves show a smaller damage probability in the lower PGA ranges for the first three damage states but a higher fragility than the other curves at larger PGA levels; in the fourth damage state, the reference curve has the smallest vulnerability. In both the component and the system fragility analysis, the same trend is found: bridge models with smaller clearances exhibit smaller fragility than those with larger openings. However, the bridge model with the maximum clearance still induces the minimum pounding force effect.
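Fragility curves of this kind are conventionally fitted with a lognormal model,

\[ P(\mathrm{DS} \ge ds \mid \mathrm{PGA} = x) = \Phi\!\left( \frac{\ln(x/\theta)}{\beta} \right), \]

where θ is the median capacity, β the lognormal dispersion, and Φ the standard normal CDF; whether this study uses exactly this form is not stated in the abstract.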

Keywords: Expansion joint clearance, fiber beam-column element, fragility assessment, time history analysis.

9101 A Study of Classification Models to Predict Drill-Bit Breakage Using Degradation Signals

Authors: Bharatendra Rai

Abstract:

Cutting tools are widely used in manufacturing processes, and drilling is the most commonly used machining process. Although the drill-bits used in drilling may not be expensive, their breakage can damage the expensive workpiece being drilled and, at the same time, has a major impact on productivity. Predicting drill-bit breakage is therefore important for reducing cost and improving productivity. This study uses twenty features extracted from two degradation signals, viz., thrust force and torque. The methodology involves developing and comparing decision tree, random forest, and multinomial logistic regression models for classifying and predicting drill-bit breakage from the degradation signals.
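A sketch of the three-model comparison, with synthetic features standing in for the twenty thrust-force and torque features:

```python
# Sketch: decision tree vs. random forest vs. logistic regression on
# degradation-signal features, compared by 5-fold cross-validation.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, n_features=20, random_state=5)
models = {
    "decision tree": DecisionTreeClassifier(random_state=5),
    "random forest": RandomForestClassifier(random_state=5),
    "logistic regression": LogisticRegression(max_iter=1000),
}
for name, m in models.items():
    print(name, cross_val_score(m, X, y, cv=5).mean())
```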

Keywords: Degradation signal, drill-bit breakage, random forest, multinomial logistic regression.

9100 Teachers’ Perceptions of Their Principals’ Interpersonal Emotionally Intelligent Behaviours Affecting Their Job Satisfaction

Authors: Prakash Singh

Abstract:

For schools to be desirable places in which to work, it is necessary for principals to recognise their teachers' emotions and be sensitive to their needs. This requires that principals be capable of correctly identifying the emotionally intelligent behaviours (EIBs) they need to use in order to be successful leaders. They also need to have knowledge of their emotional intelligence and be able to identify the factors and situations that evoke emotion at an interpersonal level. If a principal is able to do this, then the control and understanding of the emotions and behaviours of oneself and others can improve vastly. This study focuses on the interpersonal EIBs of principals affecting the job satisfaction of teachers. The correlation coefficients in this quantitative study strongly indicate statistical significance between the respondents' level of job satisfaction, the rating of their principals' EIBs, and how they believe their principals' EIBs affect their sense of job satisfaction. It can be concluded from the data obtained in this study that there is a significant correlation between teachers' sense of job satisfaction and their principals' interpersonal EIBs: the more satisfied a teacher is at school, the more appropriate and meaningful a principal's EIBs will be, and conversely, the more dissatisfied a teacher is at school, the less appropriate and less meaningful a principal's interpersonal EIBs will be. This implies that leaders' EIBs can be construed as one of the major factors affecting the job satisfaction of employees.

Keywords: Emotional intelligence, teachers’ emotions, teachers’ job satisfaction, principals’ emotionally intelligent behaviours.

9099 A Novel Neighborhood Defined Feature Selection on Phase Congruency Images for Recognition of Faces with Extreme Variations

Authors: Satyanadh Gundimada, Vijayan K Asari

Abstract:

A novel feature selection strategy to improve recognition accuracy on faces affected by non-uniform illumination, partial occlusions, and varying expressions is proposed in this paper. This technique is applicable especially in scenarios where the possibility of obtaining a reliable intra-class probability distribution is minimal due to a small number of training samples. Phase congruency features in an image are defined as the points where the Fourier components of that image are maximally in phase. These features are invariant to the brightness and contrast of the image under consideration, a property that makes lighting-invariant face recognition achievable. Phase congruency maps of the training samples are generated, and a novel modular feature selection strategy is implemented. Smaller sub-regions from a predefined neighborhood within the phase congruency images of the training samples are merged to obtain a large set of features, arranged in order of increasing distance between the sub-regions involved in merging. The assumption behind the proposed region merging and arrangement strategy is that local dependencies among the pixels are more important than global dependencies. The obtained feature sets are then arranged in decreasing order of discriminating capability using a criterion function, namely the ratio of the between-class variance to the within-class variance of the sample set in the PCA domain. The results indicate a large improvement in classification performance compared to baseline algorithms.
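The criterion function referred to above, the ratio of between-class to within-class variance computed in the PCA domain, can be written as

\[ J = \frac{\sum_{c} n_c\,(\mu_c - \mu)^2}{\sum_{c} \sum_{x \in c} (x - \mu_c)^2}, \]

where μ_c and n_c are the mean and size of class c and μ is the overall mean; feature sets are ranked by decreasing J.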

Keywords: Discriminant analysis, intra-class probability distribution, principal component analysis, phase congruency.

9098 A Hypermap for Supply Chain Management

Authors: James K. Ho

Abstract:

We present a prototype interactive (hyper) map of strategic, tactical, and logistic options for Supply Chain Management. The map comprises an anthology of options, broadly classified within the strategic spectrum of efficiency versus responsiveness and according to logistic and cross-functional drivers, exemplified by cases in diverse industries. We seek to organize all this information and these ideas to help supply chain managers identify effective choices for specific business environments. The key and innovative linkage we introduce is the configuration of competitive forces. Instead of going through seemingly endless and isolated cases and wondering how one can borrow from them, we aim to provide a guide by force comparison: the premise is that best practices in a different industry facing similar forces may be a most productive resource in supply chain design and planning. A prototype template is demonstrated.

Keywords: Competitive forces, strategic innovation, supply chain management.

9097 Model Discovery and Validation for the QSAR Problem Using Association Rule Mining

Authors: Luminita Dumitriu, Cristina Segal, Marian Craciun, Adina Cocu, Lucian P. Georgescu

Abstract:

There are several approaches to solving the Quantitative Structure-Activity Relationship (QSAR) problem, based either on statistical methods or on predictive data mining. Among the statistical methods, one should consider regression analysis, pattern recognition (such as cluster analysis, factor analysis, and principal components analysis), or partial least squares. Predictive data mining techniques use neural networks, genetic programming, or neuro-fuzzy knowledge. These approaches have low explanatory capability or none at all. This paper attempts to establish a new approach to solving QSAR problems using descriptive data mining, so that the relationship between the chemical properties and the activity of a substance can be comprehensibly modeled.
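As a tiny illustration of the association-rule measures behind the descriptive approach, here are support and confidence computed for a hypothetical rule "high_logP implies active" over toy substance records:

```python
# Sketch: support/confidence of one association rule on toy records.
records = [
    {"high_logP", "active"}, {"high_logP", "active"},
    {"high_logP"}, {"active"}, {"low_logP"},
]
n = len(records)
support = sum({"high_logP", "active"} <= r for r in records) / n
antecedent = sum("high_logP" in r for r in records) / n
confidence = support / antecedent
print(f"support={support:.2f}, confidence={confidence:.2f}")  # 0.40, 0.67
```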

Keywords: association rules, classification, data mining, Quantitative Structure - Activity Relationship.

9096 Component Criticality Importance Measures in Thermal Power Plants Design

Authors: Smajo Bisanovic, Mensur Hajro, Mersiha Samardzic

Abstract:

This paper presents quantitative component criticality importance indices applicable for identifying and ranking critical components in the design phase of thermal power plants. Identifying components critical to power plant reliability provides an important input to decision-making and guidance throughout the development project. The application of the criticality importance indices to several characteristic structural schemes of a conventional thermal power plant is presented and discussed.
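A standard example of such an index is the criticality importance factor, which scales Birnbaum's measure by the component's contribution to system unreliability:

\[ I_i^{\mathrm{CR}}(t) = \frac{\partial Q_s(t)}{\partial q_i(t)} \cdot \frac{q_i(t)}{Q_s(t)}, \]

where q_i is the unreliability of component i and Q_s that of the system; the specific indices used in the paper may differ.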

Keywords: Component criticality importance measures, discrete event, reliability, thermal power plant.

9095 Finite Element Analysis of Ball-Joint Boots under Environmental and Endurance Tests

Authors: Young-Doo Kwon, Seong-Hwa Jun, Dong-Jin Lee, Hyung-Seok Lee

Abstract:

Ball joints support and guide certain automotive parts that move relative to the frame of the vehicle. Such ball joints are covered and protected from dust, mud, and other interfering materials by ball-joint boots made of rubber, a flexible and near-incompressible material. The boots may experience twisting and bending deformations because of the motion of the joint arm; thus, environmental and endurance tests of ball-joint boots apply both bending and twisting deformations. In this study, environmental and endurance testing was simulated via the finite element method using a commercial software package. The ranges of principal stress and principal strain values, which are known to directly affect the fatigue lives of the parts, were sought. By defining these ranges, the number of iterative tests and modifications of the materials and dimensions of the boot can be decreased. Therefore, instead of performing actual part tests, manufacturers can perform standard fatigue tests in trials of different materials by applying only the defined range of stress or strain values.

Keywords: Boot, endurance tests, rubber, FEA.

9094 Resting-State Functional Connectivity Analysis Using an Independent Component Approach

Authors: Eric Jacob Bacon, Chaoyang Jin, Dianning He, Shuaishuai Hu, Lanbo Wang, Han Li, Shouliang Qi

Abstract:

Refractory epilepsy is a complicated type of epilepsy that can be difficult to diagnose. Recent technological advancements have made resting-state functional magnetic resonance imaging (rsfMRI) a vital technique for studying brain activity, but there is still much to learn about it. Investigating rsfMRI connectivity may aid in the detection of abnormal activities. In this paper, we propose studying the functional connectivity of rsfMRI candidates to diagnose epilepsy. Forty-five rsfMRI candidates, comprising 26 with refractory epilepsy and 19 healthy controls, were enrolled in this study. A data-driven approach known as Independent Component Analysis (ICA) was used to achieve our goal. First, rsfMRI data from both patients and healthy controls were analyzed using group ICA. The components obtained were then spatially sorted to find and select meaningful ones. A two-sample t-test was used to identify abnormal networks in patients versus healthy controls. Finally, based on the fractional amplitude of low-frequency fluctuations (fALFF), a chi-square test was used to distinguish the network properties of the patient and healthy control groups. The two-sample t-test analysis yielded abnormalities in the default mode network, including the left superior temporal lobe and the left supramarginal gyrus. The right precuneus was found to be abnormal in the dorsal attention network. In addition, the frontal cortex showed an abnormal cluster in the medial temporal gyrus, while the temporal cortex showed abnormal clusters in the right middle temporal gyrus and the right fronto-operculum gyrus. Finally, the chi-square test was significant, producing a p-value of 0.001. This study offers evidence that investigating rsfMRI connectivity provides an excellent diagnostic option for refractory epilepsy.
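A sketch of the component extraction and group comparison, with synthetic signals standing in for the preprocessed rsfMRI data:

```python
# Sketch: FastICA extracts components; a two-sample t-test then contrasts
# patient and control loadings, one test per component.
import numpy as np
from scipy.stats import ttest_ind
from sklearn.decomposition import FastICA

rng = np.random.default_rng(11)
data = rng.standard_normal((45, 1000))   # 45 subjects x 1000 voxels (toy)
data[:26, :100] += 0.8                   # inject a group effect for patients

ica = FastICA(n_components=10, random_state=11)
loadings = ica.fit_transform(data)       # per-subject weights on components

patients, controls = loadings[:26], loadings[26:]
t, p = ttest_ind(patients, controls, axis=0)
print("components with p < 0.05:", np.where(p < 0.05)[0])
```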

Keywords: Independent Component Analysis, Resting State Network, refractory epilepsy, rsfMRI.

9093 Analysis of Building Response from Vertical Ground Motions

Authors: George C. Yao, Chao-Yu Tu, Wei-Chung Chen, Fung-Wen Kuo, Yu-Shan Chang

Abstract:

Building structures are subjected to both horizontal and vertical ground motions during earthquakes, but only the horizontal ground motion has been extensively studied and considered in design. Most of the prevailing seismic codes assume the vertical component to be 1/2 to 2/3 of the horizontal one. In order to understand building responses to vertical ground motions, many earthquake records are studied in this paper. System identification methods (the ARX model) are used to analyze the strong motions and to find the characteristics of the vertical amplification factors and the natural frequencies of buildings. Analysis results show that the vertical amplification factors for high-rise buildings and low-rise buildings are 1.78 and 2.52, respectively, and that the average vertical amplification factor of all buildings is about 2. The relationship between the vertical natural frequency and the building height was regressed to a suggested formula in this study. The result points out an important message: the taller the building, the greater the chance of vertical-vibration resonance in the building.
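The ARX model used for system identification relates the recorded building response y to the ground excitation u through

\[ y(t) + a_1\,y(t-1) + \cdots + a_{n_a}\,y(t-n_a) = b_1\,u(t-n_k) + \cdots + b_{n_b}\,u(t-n_k-n_b+1) + e(t), \]

where e(t) is the noise term and the orders n_a, n_b and delay n_k are chosen per record.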

Keywords: Vertical ground motion, vertical amplification factor, natural frequency, component.

9092 Applying the Regression Technique for Prediction of the Acute Heart Attack

Authors: Paria Soleimani, Arezoo Neshati

Abstract:

Myocardial infarction is one of the leading causes of death in the world, and some of these deaths occur even before the patient reaches the hospital. Myocardial infarction occurs as a result of impaired blood supply; because most of these deaths are due to coronary artery disease, awareness of the warning signs of a heart attack is essential. Some heart attacks are sudden and intense, but most start slowly, with mild pain or discomfort, so early detection and successful treatment of these symptoms is vital. The importance and usefulness of a system designed to assist physicians in the early diagnosis of acute heart attacks is therefore obvious. The main purpose of this study is to enable patients to become better informed about their condition and to encourage them to seek professional care at an earlier stage in the appropriate situations. For this purpose, data were collected on 711 heart patients in Iranian hospitals, and 28 clinical attributes that can be reported by patients were studied. Three logistic regression models were built on the basis of the 28 features to predict the risk of heart attacks. The best logistic regression model in terms of performance had a C-index of 0.955 and an accuracy of 94.9%. The variables severe chest pain, back pain, cold sweats, shortness of breath, and nausea and vomiting were selected as the main features.

Keywords: Coronary heart disease, acute heart attacks, prediction, logistic regression.

9091 Structural Analysis of Warehouse Rack Construction for Heavy Loads

Authors: C. Kozkurt, A. Fenercioglu, M. Soyaslan

Abstract:

In this study, rack systems, the structural storage units of warehouses, have been analyzed structurally with the Finite Element Method (FEM). Each cell of the rack system under discussion stores pallets weighing from 800 kg to 1000 kg, with dimensions of 0.80 x 1.15 x 1.50 m. Under this load, the total deformations and equivalent stresses of the structural elements, and the principal stresses, tensile stresses, and shear stresses of the connection elements, have been analyzed. The results of the analyses have been evaluated against the resistance limits of the structural and connection elements and are presented visually and in magnitude.

Keywords: warehouse, structural analysis, AS/RS, FEM, FEA

9090 CompPSA: A Component-Based Pairwise RNA Secondary Structure Alignment Algorithm

Authors: Ghada Badr, Arwa Alturki

Abstract:

The biological function of an RNA molecule depends on its structure, and the objective of alignment is to find the homology between two or more RNA secondary structures. Knowing the common functionalities of two RNA structures allows a better understanding of them and a discovery of other relationships between them. Besides, identifying non-coding RNAs, which are not translated into proteins, is a popular application in which RNA structural alignment is the first step. A few methods for RNA structure-to-structure alignment have been developed, but most of them perform partial structure-to-structure, sequence-to-structure, or structure-to-sequence alignment. Less attention is given in the literature to the use of efficient RNA structure representations, and structure-to-structure alignment methods are lacking. In this paper, we introduce an O(N^2) Component-based Pairwise RNA Structure Alignment (CompPSA) algorithm, where structures are given in a component-based representation and N is the maximum number of components in the two structures. The proposed algorithm compares two RNA secondary structures based on their weighted component features rather than on their base-pair details. Extensive experiments on different real and simulated datasets illustrate the efficiency of the CompPSA algorithm compared with other approaches. The CompPSA algorithm shows an accurate similarity measure between components and gives the user the flexibility to align the two RNA structures based on their weighted features (position, full length, and/or stem length). Moreover, the algorithm proves scalable and efficient in time and memory performance.
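As a hypothetical simplification of the idea (not the published CompPSA algorithm), components can be treated as (position, full length, stem length) tuples and compared pairwise with user-chosen feature weights, which is O(N^2) in the number of components:

```python
# Illustrative sketch only; names and scoring are invented for exposition.
def component_similarity(c1, c2, weights=(1.0, 1.0, 1.0)):
    """Weighted closeness of two components; 1.0 means identical features."""
    score = 0.0
    for f1, f2, w in zip(c1, c2, weights):
        score += w * (1.0 - abs(f1 - f2) / max(f1, f2, 1))
    return score / sum(weights)

def align_score(struct_a, struct_b, weights=(1.0, 1.0, 1.0)):
    """Best-match average over all O(N^2) component pairs."""
    return sum(max(component_similarity(a, b, weights) for b in struct_b)
               for a in struct_a) / len(struct_a)

a = [(3, 12, 5), (20, 8, 4)]   # (position, full_length, stem_length)
b = [(4, 11, 5), (22, 9, 3)]
print(align_score(a, b))
```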

Keywords: Alignment, RNA secondary structure, pairwise, component-based, data mining.

9089 A Comprehensive Approach in Calculating the Impact of the Ground on Radiated Electromagnetic Fields Due to Lightning

Authors: Lahcene Boukelkoul

Abstract:

The influence of finite ground conductivity is of great importance in calculating the voltages induced by the electromagnetic fields radiated by lightning. In this paper, we give a comprehensive approach to calculating the impact of the ground on these radiated fields. The vertical component of the lightning electric field can be calculated with a reasonable approximation by assuming a perfectly conducting ground when the observation point is no more than a few kilometers from the lightning channel; for distant observation points, however, the radiated vertical component is attenuated by the finitely conducting ground. The attenuation is calculated using expressions elaborated for both low and high frequencies. The horizontal component of the electric field is more strongly affected by the finite conductivity of the ground, and its contribution to the voltages induced on an overhead transmission line is greater than that of the vertical component. Therefore, the calculation of the horizontal electric field is of great concern for the simulation of lightning-induced voltages. For field-to-transmission-line coupling, the ground impedance is calculated for early-time behavior and for the low-frequency range.

Keywords: Ground impedance, horizontal electric field, lightning, transient propagation, vertical electric field.
