Search results for: principal component analysis.
9164 Monitoring Blood Pressure Using Regression Techniques
Authors: Qasem Qananwah, Ahmad Dagamseh, Hiam AlQuran, Khalid Shaker Ibrahim
Abstract:
Blood pressure gives physicians deep insight into the cardiovascular system. Measuring an individual's blood pressure is a standard clinical procedure for assessing cardiovascular problems. Conventional techniques for measuring blood pressure (e.g., the cuff method) allow only a limited number of readings over a certain period (e.g., every 5-10 minutes). Additionally, these systems disturb the blood flow, impeding continuous blood pressure monitoring, especially in emergencies or for critically ill patients. In this paper, the most important statistical features of the photoplethysmogram (PPG) signal were extracted to estimate blood pressure noninvasively. PPG signals from more than 40 subjects were measured and analyzed, and 12 features were extracted. The features were fed to principal component analysis (PCA) to find the most important independent features with the highest correlation to blood pressure. The results show that the mean stiffness index and the standard deviation of the beat-to-beat heart rate were the most important features. A model representing both features for Systolic Blood Pressure (SBP) and Diastolic Blood Pressure (DBP) was obtained using a statistical regression technique. Surface fitting was used to best fit the data series, and the results show that the error in estimating the SBP is 4.95% and in estimating the DBP is 3.99%.
Keywords: Blood pressure, noninvasive optical system, PCA, continuous monitoring.
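As a rough, hedged illustration of the kind of pipeline this abstract describes (PCA-based feature reduction followed by regression of SBP and DBP on the retained components), the Python sketch below uses NumPy and scikit-learn on a synthetic feature matrix; the 12 PPG features, the subject data and the surface-fitting step are placeholders, not the authors' actual implementation.

# Hedged sketch: PCA feature reduction + linear regression for SBP/DBP.
# The feature matrix X (40 subjects x 12 PPG features) and the targets are
# synthetic placeholders; the paper's real features and surface fit differ.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 12))                          # 12 PPG-derived features
sbp = 120 + 8 * X[:, 0] + rng.normal(0, 3, size=40)    # synthetic SBP targets
dbp = 80 + 5 * X[:, 1] + rng.normal(0, 2, size=40)     # synthetic DBP targets

pca = PCA(n_components=2)                              # keep two dominant components
Z = pca.fit_transform(X)

sbp_model = LinearRegression().fit(Z, sbp)
dbp_model = LinearRegression().fit(Z, dbp)

print("explained variance ratio:", pca.explained_variance_ratio_)
print("mean relative SBP error (%):",
      100 * np.mean(np.abs(sbp_model.predict(Z) - sbp) / sbp))
print("mean relative DBP error (%):",
      100 * np.mean(np.abs(dbp_model.predict(Z) - dbp) / dbp))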
9163 Mathematical Modeling of Non-Isothermal Multi-Component Fluid Flow in Pipes Applying to Rapid Gas Decompression in Rich and Base Gases
Authors: Evgeniy Burlutskiy
Abstract:
The paper presents a one-dimensional transient mathematical model of compressible non-isothermal multi-component fluid mixture flow in a pipe. The set of mass, momentum and enthalpy conservation equations for the gas phase is solved in the model. Thermo-physical properties of the multi-component gas mixture are calculated by solving the Equation of State (EOS) model; the Soave-Redlich-Kwong (SRK-EOS) model is chosen. Gas mixture viscosity is calculated on the basis of the Lee-Gonzales-Eakin (LGE) correlation. Numerical analysis of the rapid gas decompression process in rich and base natural gases is carried out on the basis of the proposed mathematical model. The model is successfully validated against the experimental data [1] and shows very good agreement with them over a wide range of pressure values, predicting the decompression in rich and base gas mixtures much better than the analytical and mathematical models available in the open literature.
Keywords: Mathematical model, multi-component gas mixture flow, rapid gas decompression.
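For orientation only, the following Python sketch evaluates the Soave-Redlich-Kwong (SRK) equation of state mentioned above for a single pure component; the critical properties are illustrative values for methane, and neither the paper's mixing rules nor its transient pipe-flow solver is reproduced here.

# Hedged sketch of the Soave-Redlich-Kwong (SRK) equation of state for a pure
# component: P = R*T/(v - b) - a*alpha(T)/(v*(v + b)).
# Critical properties below are approximate values for methane; the mixture
# rules and the transient pipe-flow solver of the paper are not included.
import math

R = 8.314  # J/(mol*K)

def srk_pressure(T, v, Tc, Pc, omega):
    a = 0.42748 * R**2 * Tc**2 / Pc
    b = 0.08664 * R * Tc / Pc
    m = 0.480 + 1.574 * omega - 0.176 * omega**2
    alpha = (1.0 + m * (1.0 - math.sqrt(T / Tc)))**2
    return R * T / (v - b) - a * alpha / (v * (v + b))

# Example: methane near ambient conditions (illustrative values only).
print(srk_pressure(T=300.0, v=0.024, Tc=190.6, Pc=4.599e6, omega=0.011))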
9162 Low Dimensional Representation of Dorsal Hand Vein Features Using Principle Component Analysis (PCA)
Authors: M.Heenaye-Mamode Khan, R.K. Subramanian, N. A. Mamode Khan
Abstract:
The quest for more secure identification systems has led to a rise in the development of biometric systems. The dorsal hand vein pattern is an emerging biometric that has lately attracted the attention of many researchers. Different approaches have been used to extract and match vein patterns. In this work, Principal Component Analysis (PCA), a method that has been successfully applied to human faces and hand geometry, is applied to the dorsal hand vein pattern. PCA is used to obtain eigenveins, a low-dimensional representation of vein pattern features. Low-cost CCD cameras were used to obtain the vein images. The vein pattern was extracted by applying morphological operations, and noise reduction filters were applied to enhance the vein patterns. The system has been successfully tested on a database of 200 images using a threshold value of 0.9. The results obtained are encouraging.
Keywords: Biometric, Dorsal vein pattern, PCA.
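A minimal sketch of the eigenvein idea, assuming the vein images have already been segmented, filtered and flattened into vectors; the image size, the random "database" and the cosine-similarity matching with a 0.9 threshold are illustrative assumptions, not the authors' exact pipeline.

# Hedged sketch: "eigenveins" as principal components of flattened vein images.
# Images here are random placeholders; a real system would use segmented,
# noise-filtered dorsal hand vein images.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
images = rng.random(size=(200, 64 * 64))      # 200 flattened 64x64 vein images

pca = PCA(n_components=20)                    # low-dimensional eigenvein space
weights = pca.fit_transform(images)           # each row: projection of one image
eigenveins = pca.components_                  # each row: one eigenvein

def match(query_vec, gallery_weights, threshold=0.9):
    # Cosine-similarity matching in eigenvein space (illustrative only).
    q = pca.transform(query_vec.reshape(1, -1))[0]
    sims = gallery_weights @ q / (np.linalg.norm(gallery_weights, axis=1)
                                  * np.linalg.norm(q) + 1e-12)
    best = int(np.argmax(sims))
    return (best, float(sims[best])) if sims[best] >= threshold else (None, float(sims[best]))

print(match(images[0], weights))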
9161 Component Criticality Importance Measures in Thermal Power Plants Design
Authors: Smajo Bisanovic, Mensur Hajro, Mersiha Samardzic
Abstract:
This paper presents quantitative component criticality importance indices applicable for identifying and ranking critical components in the design phase of thermal power plants. Identifying components that are critical for power plant reliability provides an important input to decision-making and guidance throughout the development project. The application of component criticality importance indices to several characteristic structural schemes of a conventional thermal power plant is presented and discussed.
Keywords: Component criticality importance measures, discrete event, reliability, thermal power plant.
9160 Numerical Analysis on Rapid Decompression in Conventional Dry Gases Using One-Dimensional Mathematical Modeling
Authors: Evgeniy Burlutskiy
Abstract:
The paper presents a one-dimensional transient mathematical model of compressible thermal multi-component gas mixture flow in pipes. The set of mass, momentum and enthalpy conservation equations for the gas phase is solved. Thermo-physical properties of the multi-component gas mixture are calculated by solving the Equation of State (EOS) model; the Soave-Redlich-Kwong (SRK-EOS) model is chosen. Gas mixture viscosity is calculated on the basis of the Lee-Gonzales-Eakin (LGE) correlation. Numerical analysis of rapid decompression in conventional dry gases is performed using the proposed mathematical model. The model is validated against measured values of the decompression wave speed in dry natural gas mixtures. All predictions show excellent agreement with the experimental data at high and low pressure. The presented model predicts the decompression in dry natural gas mixtures much better than the GASDECOM and OLGA codes, which are the most frequently used codes in the oil and gas pipeline transport service.
Keywords: Mathematical model, rapid gas decompression.
9159 Finite Element Analysis of Ball-Joint Boots under Environmental and Endurance Tests
Authors: Young-Doo Kwon, Seong-Hwa Jun, Dong-Jin Lee, Hyung-Seok Lee
Abstract:
Ball joints support and guide certain automotive parts that move relative to the frame of the vehicle. Such ball joints are covered and protected from dust, mud, and other interfering materials by ball-joint boots made of rubber—a flexible and near-incompressible material. The boots may experience twisting and bending deformations because of the motion of the joint arm. Thus, environmental and endurance tests of ball-joint boots apply both bending and twisting deformations. In this study, environmental and endurance testing was simulated via the finite element method performed by using a commercial software package. The ranges of principal stress and principal strain values that are known to directly affect the fatigue lives of the parts were sought. By defining these ranges, the number of iterative tests and modifications of the materials and dimensions of the boot can be decreased. Therefore, instead of performing actual part tests, manufacturers can perform standard fatigue tests in trials of different materials by applying only the defined range of stress or strain values.
Keywords: Boot, endurance tests, rubber, FEA.
9158 A Novel Neighborhood Defined Feature Selection on Phase Congruency Images for Recognition of Faces with Extreme Variations
Authors: Satyanadh Gundimada, Vijayan K Asari
Abstract:
A novel feature selection strategy is proposed in this paper to improve recognition accuracy for faces affected by non-uniform illumination, partial occlusion and varying expression. This technique is especially applicable in scenarios where the possibility of obtaining a reliable intra-class probability distribution is minimal due to a small number of training samples. Phase congruency features in an image are defined as the points where the Fourier components of that image are maximally in phase. These features are invariant to the brightness and contrast of the image under consideration; this property makes lighting-invariant face recognition achievable. Phase congruency maps of the training samples are generated, and a novel modular feature selection strategy is implemented. Smaller sub-regions from a predefined neighborhood within the phase congruency images of the training samples are merged to obtain a large set of features. These features are arranged in order of increasing distance between the sub-regions involved in merging. The assumption behind the proposed region merging and arrangement strategy is that local dependencies among the pixels are more important than global dependencies. The obtained feature sets are then arranged in decreasing order of discriminating capability using a criterion function, namely the ratio of the between-class variance to the within-class variance of the sample set, computed in the PCA domain. The results indicate a large improvement in classification performance compared to baseline algorithms.
Keywords: Discriminant analysis, intra-class probability distribution, principal component analysis, phase congruency.
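The criterion function mentioned in the abstract (ratio of between-class to within-class variance, evaluated in the PCA domain) can be sketched as follows; the data, class labels and PCA dimensionality are placeholders, and the region-merging step of the paper is not reproduced.

# Hedged sketch: rank features in the PCA domain by the ratio of between-class
# variance to within-class variance (a Fisher-style criterion).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
X = rng.normal(size=(60, 100))          # 60 samples of a 100-dim feature set
y = np.repeat(np.arange(3), 20)         # 3 classes, 20 samples each

Z = PCA(n_components=10).fit_transform(X)   # project into the PCA domain

def criterion(z, labels):
    # Between-class variance / within-class variance for one PCA feature.
    overall_mean = z.mean()
    classes = np.unique(labels)
    between = np.mean([(z[labels == c].mean() - overall_mean) ** 2 for c in classes])
    within = np.mean([z[labels == c].var() for c in classes])
    return between / (within + 1e-12)

scores = np.array([criterion(Z[:, j], y) for j in range(Z.shape[1])])
ranking = np.argsort(scores)[::-1]          # decreasing discriminating capability
print("PCA features ranked by criterion:", ranking)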
9157 CompPSA: A Component-Based Pairwise RNA Secondary Structure Alignment Algorithm
Authors: Ghada Badr, Arwa Alturki
Abstract:
The biological function of an RNA molecule depends on its structure. The objective of alignment is to find the homology between two or more RNA secondary structures. Knowing the common functionalities between two RNA structures allows a better understanding of them and the discovery of other relationships between them. Besides, identifying non-coding RNAs, which are not translated into proteins, is a popular application in which RNA structural alignment is the first step. A few methods for RNA structure-to-structure alignment have been developed, but most of them are partial structure-to-structure, sequence-to-structure, or structure-to-sequence alignments. Less attention has been given in the literature to efficient RNA structure representations, and structure-to-structure alignment methods are lacking. In this paper, we introduce an O(N²) Component-based Pairwise RNA Structure Alignment (CompPSA) algorithm, where structures are given in a component-based representation and N is the maximum number of components in the two structures. The proposed algorithm compares the two RNA secondary structures based on their weighted component features rather than on their base-pair details. Extensive experiments on different real and simulated datasets illustrate the efficiency of the CompPSA algorithm compared to other approaches. The CompPSA algorithm provides an accurate similarity measure between components and gives the user the flexibility to align the two RNA structures based on their weighted features (position, full length, and/or stem length). Moreover, the algorithm proves scalable and efficient in time and memory performance.
Keywords: Alignment, RNA secondary structure, pairwise, component-based, data mining.
9156 Material Characterization and Numerical Simulation of a Rubber Bumper
Authors: Tamás Mankovits, Dávid Huri, Imre Kállai, Imre Kocsis, Tamás Szabó
Abstract:
Non-linear FEM calculations are indispensable when important technical information, such as the operating performance of a rubber component, is required. Rubber bumpers built into air-spring structures may undergo large deformations under load, which in itself is non-linear behavior. The changing contact range between the parts and the incompressibility of the rubber increase this non-linearity further. The material characterization of an elastomeric component is also a demanding engineering task. In this paper, a comprehensive investigation is introduced, including laboratory measurements, mesh density analysis and complex finite element simulations, to obtain the load-displacement curve of the chosen rubber bumper. Contact and friction effects are also taken into consideration. The aim of this research is to elaborate an FEM model that is accurate and competitive for a future shape optimization task.
Keywords: Rubber bumper, finite element analysis, compression test, Mooney-Rivlin material model.
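To make the Mooney-Rivlin model named in the keywords concrete, the sketch below evaluates the uniaxial nominal stress of an incompressible two-parameter Mooney-Rivlin material; the constants C10 and C01 are illustrative, not the values identified for the bumper in the paper, and the contact/friction FEM setup is of course not reproduced.

# Hedged sketch: uniaxial nominal stress of an incompressible two-parameter
# Mooney-Rivlin material, W = C10*(I1 - 3) + C01*(I2 - 3).
# For a stretch ratio lam, the nominal (engineering) stress is
#   S = 2*(lam - lam**-2) * (C10 + C01/lam).
# C10 and C01 below are illustrative, not the identified bumper parameters.
import numpy as np

def mooney_rivlin_uniaxial_stress(lam, c10, c01):
    return 2.0 * (lam - lam**-2) * (c10 + c01 / lam)

stretch = np.linspace(1.0, 1.5, 6)                  # 0-50% uniaxial stretch
stress = mooney_rivlin_uniaxial_stress(stretch, c10=0.4e6, c01=0.1e6)  # Pa
for l, s in zip(stretch, stress):
    print(f"stretch {l:.1f} -> nominal stress {s/1e6:.3f} MPa")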
9155 A Comprehensive Approach in Calculating the Impact of the Ground on Radiated Electromagnetic Fields Due to Lightning
Authors: Lahcene Boukelkoul
Abstract:
The influence of finite ground conductivity is of great importance in calculating the voltages induced by the electromagnetic fields radiated by lightning. In this paper, we give a comprehensive approach to calculating the impact of the ground on the electromagnetic fields radiated by lightning. The vertical component of the lightning electric field can be calculated with a reasonable approximation by assuming a perfectly conducting ground, provided the observation point is no more than a few kilometers from the lightning channel. For distant observation points, however, the radiated vertical component of the lightning electric field is attenuated by the finitely conducting ground; the attenuation is calculated using expressions elaborated for both low and high frequencies. The horizontal component of the electric field is more strongly affected by the finite conductivity of the ground. Moreover, the contribution of the horizontal component of the electric field to the voltages induced on an overhead transmission line is greater than that of the vertical component. Therefore, the calculation of the horizontal electric field is of great concern for the simulation of lightning-induced voltages. For field-to-transmission-line coupling, the ground impedance is calculated for the early-time behavior and for the low-frequency range.
Keywords: Ground impedance, horizontal electric field, lightning, transient propagation, vertical electric field.
9154 Seismic Fragility Assessment of Continuous Integral Bridge Frames with Variable Expansion Joint Clearances
Authors: P. Mounnarath, U. Schmitz, Ch. Zhang
Abstract:
Fragility analysis has become an effective tool for the seismic vulnerability assessment of civil structures over the last several years. The design of expansion joints according to various bridge design codes is largely inconsistent, and only a few studies have focused on this problem so far. In this study, the influence of the expansion joint clearances between the girder ends and the abutment backwalls on the seismic fragility assessment of continuous integral bridge frames is investigated. The gaps (60 mm, 150 mm, 250 mm and 350 mm) are designed by following two different bridge design code specifications, namely Caltrans and Eurocode 8-2. Five bridge models are analyzed and compared. The first bridge model serves as a reference; it uses three-dimensional reinforced concrete fiber beam-column elements with simplified supports at both ends of the girder. The other four models also employ reinforced concrete fiber beam-column elements but include the abutment backfill stiffness and the four different gap values. Nonlinear time history analysis is performed. Artificial ground motion sets with peak ground accelerations (PGAs) ranging from 0.1 g to 1.0 g in increments of 0.05 g are taken as input. The soil-structure interaction and the P-Δ effects are also included in the analysis. The component fragility curves in terms of the curvature ductility demand-to-capacity ratio of the piers and the displacement demand-to-capacity ratio of the abutment sliding bearings are established and compared. The system fragility curves are then obtained by combining the component fragility curves. Our results show that in the component fragility analysis, the reference bridge model exhibits severe vulnerability compared to the other, more sophisticated bridge models for all damage states. In the system fragility analysis, the reference curves show a smaller damage probability in the lower PGA ranges for the first three damage states; they then show a higher fragility than the other curves at larger PGA levels. In the fourth damage state, the reference curve has the smallest vulnerability. In both the component and the system fragility analysis, the same trend is found: the bridge models with smaller clearances exhibit a smaller fragility than those with larger openings. However, the bridge model with the maximum clearance still induces the minimum pounding force effect.
Keywords: Expansion joint clearance, fiber beam-column element, fragility assessment, time history analysis.
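A common way to express component or system fragility, offered here only as a rough companion to the abstract above, is a lognormal fragility curve P(damage | PGA) = Φ(ln(PGA/θ)/β) fitted by maximum likelihood to binary damage outcomes; the synthetic outcomes and the fitted parameters below are placeholders, not results from the paper.

# Hedged sketch: fit a lognormal fragility curve
#   P(damage | PGA) = Phi(ln(PGA/theta) / beta)
# to binary damage observations by maximum likelihood. Data are synthetic.
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

rng = np.random.default_rng(3)
pga = np.arange(0.1, 1.01, 0.05)                       # g, as in the abstract
true_theta, true_beta = 0.45, 0.5
p_true = norm.cdf(np.log(pga / true_theta) / true_beta)
damaged = rng.random(pga.size) < p_true                # synthetic damage outcomes

def neg_log_likelihood(params):
    log_theta, log_beta = params
    p = norm.cdf((np.log(pga) - log_theta) / np.exp(log_beta))
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return -np.sum(damaged * np.log(p) + (~damaged) * np.log(1 - p))

res = minimize(neg_log_likelihood, x0=[np.log(0.5), np.log(0.6)], method="Nelder-Mead")
theta_hat, beta_hat = np.exp(res.x)
print(f"fitted median capacity theta = {theta_hat:.2f} g, dispersion beta = {beta_hat:.2f}")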
9153 A Framework for Data Mining Based Multi-Agent: An Application to Spatial Data
Authors: H. Baazaoui Zghal, S. Faiz, H. Ben Ghezala
Abstract:
Data mining is an extraordinarily demanding field concerned with the extraction of implicit knowledge and relationships that are not explicitly stored in databases. A wide variety of data mining methods have been introduced (classification, characterization, generalization, etc.), and each of these methods includes more than one algorithm. A data mining system involves different user categories, which means that the user's behavior must be a component of the system. The problem at this level is to know which algorithm of which method to employ for an exploratory end, which one for a decisional end, and how they can collaborate and communicate. The agent paradigm presents a new way of conceiving and realizing a data mining system. The purpose is to combine different data mining algorithms to prepare elements for decision-makers, benefiting from the possibilities offered by multi-agent systems. In this paper, the agent framework for data mining is introduced, and its overall architecture and functionality are presented. The validation is performed on spatial data, and the principal results are presented.
Keywords: Databases, data mining, multi-agent, spatial datamart.
9152 Different Approaches for the Design of IFIR Compaction Filter
Authors: Sheeba V.S, Elizabeth Elias
Abstract:
Optimization of filter banks based on knowledge of the input statistics has been of interest for a long time. Finite impulse response (FIR) compaction filters are used in the design of optimal signal-adapted orthonormal FIR filter banks. In this paper, we discuss three different approaches for the design of interpolated finite impulse response (IFIR) compaction filters. In the first method, the magnitude-squared response satisfies the Nyquist constraint approximately; in the second and third methods, the Nyquist constraint is exactly satisfied. These methods yield FIR compaction filters whose response is comparable with that of existing methods. At the same time, IFIR filters offer a significant saving in the number of multipliers and can be implemented efficiently. Since the eigenfilter approach is used here, the method is less complex. The design of IFIR filters in the least-squares sense is presented.
Keywords: Principal Component Filter Bank, Interpolated Finite Impulse Response filter, Orthonormal Filter Bank, Eigen Filter.
9151 Combined Feature Based Hyperspectral Image Classification Technique Using Support Vector Machines
Authors: K. Kavitha, S. Arivazhagan
Abstract:
A spatial classification technique incorporating a state-of-the-art feature extraction algorithm is proposed in this paper for classifying the heterogeneous classes present in hyperspectral images. The classification accuracy can be improved only if both the feature extraction and the classifier selection are proper. As the classes in hyperspectral images are assumed to have different textures, textural classification is adopted. Run Length features are extracted along with Principal Components and Independent Components. A hyperspectral image of the Indiana site taken by AVIRIS is used for the experiment. Among the original 220 bands, a subset of 120 bands is selected. The Gray Level Run Length Matrix (GLRLM) is calculated for forty of the selected bands, and from the GLRLMs the Run Length features for individual pixels are calculated. Principal Components are calculated for another forty bands, and Independent Components for the remaining forty bands. As the Principal and Independent Components have the ability to represent the textural content of pixels, they are treated as features. The Run Length features, Principal Components, and Independent Components are combined to form the Combined Features, which are used for classification. An SVM with a Binary Hierarchical Tree is used to classify the hyperspectral image. Results are validated against the ground truth, and accuracies are calculated.
Keywords: Multi-class, Run Length features, PCA, ICA, classification and Support Vector Machines.
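A loose sketch of the combined-feature idea (run-length texture features stacked with principal and independent components, then fed to an SVM) is given below using scikit-learn; the random "pixel" data, the dimensions and the plain one-vs-one SVC stand in for the AVIRIS bands, the GLRLM computation and the binary-hierarchical-tree SVM of the paper.

# Hedged sketch: stack texture-like features with PCA and ICA components and
# classify with an SVM. Random data replace the AVIRIS bands and the GLRLM
# run-length features; the binary hierarchical tree SVM is not reproduced.
import numpy as np
from sklearn.decomposition import PCA, FastICA
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
n_pixels = 600
bands = rng.normal(size=(n_pixels, 120))        # 120 selected spectral bands
labels = rng.integers(0, 4, size=n_pixels)      # 4 placeholder land-cover classes

run_length = rng.normal(size=(n_pixels, 5))                 # placeholder GLRLM features
pca_feat = PCA(n_components=10).fit_transform(bands[:, :40])
ica_feat = FastICA(n_components=10, random_state=0).fit_transform(bands[:, 40:80])

combined = np.hstack([run_length, pca_feat, ica_feat])      # combined feature vector

X_tr, X_te, y_tr, y_te = train_test_split(combined, labels, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print("placeholder accuracy:", clf.score(X_te, y_te))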
9150 Model Discovery and Validation for the QSAR Problem Using Association Rule Mining
Authors: Luminita Dumitriu, Cristina Segal, Marian Craciun, Adina Cocu, Lucian P. Georgescu
Abstract:
There are several approaches to solving the Quantitative Structure-Activity Relationship (QSAR) problem. These approaches are based either on statistical methods or on predictive data mining. Among the statistical methods, one should consider regression analysis, pattern recognition (such as cluster analysis, factor analysis and principal components analysis) or partial least squares. Predictive data mining techniques use neural networks, genetic programming, or neuro-fuzzy knowledge. These approaches have low explanatory capability or none at all. This paper attempts to establish a new approach to solving QSAR problems using descriptive data mining. This way, the relationship between the chemical properties and the activity of a substance can be modeled comprehensibly.
Keywords: Association rules, classification, data mining, Quantitative Structure-Activity Relationship.
9149 Analysis of Building Response from Vertical Ground Motions
Authors: George C. Yao, Chao-Yu Tu, Wei-Chung Chen, Fung-Wen Kuo, Yu-Shan Chang
Abstract:
Building structures are subjected to both horizontal and vertical ground motions during earthquakes, but only the horizontal ground motion has been extensively studied and considered in design. Most of the prevailing seismic codes assume the vertical component to be 1/2 to 2/3 of the horizontal one. In order to understand building responses to vertical ground motions, many earthquake records are studied in this paper. System identification methods (ARX model) are used to analyze the strong motions and to determine the characteristics of the vertical amplification factors and the natural frequencies of buildings. The analysis results show that the vertical amplification factors for high-rise buildings and low-rise buildings are 1.78 and 2.52, respectively, and the average vertical amplification factor over all buildings is about 2. The relationship between the vertical natural frequency and the building height was regressed into a suggested formula in this study. The result conveys an important message: the taller the building, the greater the chance of vertical vibration resonance in the building.
Keywords: Vertical ground motion, vertical amplification factor, natural frequency, component.
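For readers unfamiliar with ARX system identification, the sketch below estimates ARX parameters by least squares from a synthetic ground-acceleration input and response output, and converts the dominant discrete pole into a natural frequency; the model orders, the sampling step and the data are assumptions, not the buildings or records studied in the paper.

# Hedged sketch: least-squares ARX identification
#   y[t] = -a1*y[t-1] - a2*y[t-2] + b1*u[t-1] + b2*u[t-2] + e[t]
# followed by an estimate of the dominant natural frequency from the AR poles.
# Input/output data are synthetic stand-ins for vertical ground motion (u)
# and building response (y).
import numpy as np

dt = 0.01                                    # sampling step in seconds (assumed)
rng = np.random.default_rng(5)
n = 2000
u = rng.normal(size=n)                       # synthetic vertical ground motion
# Synthetic "building": a lightly damped 5 Hz second-order discrete system.
wn, zeta = 2 * np.pi * 5.0, 0.05
a_true = [-2 * np.exp(-zeta * wn * dt) * np.cos(wn * dt * np.sqrt(1 - zeta**2)),
          np.exp(-2 * zeta * wn * dt)]
y = np.zeros(n)
for t in range(2, n):
    y[t] = -a_true[0] * y[t-1] - a_true[1] * y[t-2] + 0.5 * u[t-1] + 0.2 * u[t-2]

# Build the ARX regressor matrix and solve the least-squares problem.
Phi = np.column_stack([-y[1:n-1], -y[0:n-2], u[1:n-1], u[0:n-2]])
theta, *_ = np.linalg.lstsq(Phi, y[2:n], rcond=None)
a1, a2, b1, b2 = theta

poles = np.roots([1.0, a1, a2])              # roots of the AR polynomial A(q)
f_n = np.abs(np.log(poles[0])) / (2 * np.pi * dt)
print(f"estimated natural frequency: {f_n:.2f} Hz")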
9148 Structural Analysis of Warehouse Rack Construction for Heavy Loads
Authors: C. Kozkurt, A. Fenercioglu, M. Soyaslan
Abstract:
In this study, rack systems, which are the structural storage units of warehouses, have been structurally analyzed with the Finite Element Method (FEM). Each cell of the discussed rack system stores pallets weighing from 800 kg to 1000 kg with dimensions of 0.80 x 1.15 x 1.50 m. Under this load, the total deformations and equivalent stresses of the structural elements, and the principal stresses, tensile stresses and shear stresses of the connection elements, have been analyzed. The results of the analyses have been evaluated against the resistance limits of the structural and connection elements. The obtained results are presented both visually and in terms of magnitude.
Keywords: Warehouse, structural analysis, AS/RS, FEM, FEA.
9147 Automatic Image Alignment and Stitching of Medical Images with Seam Blending
Authors: Abhinav Kumar, Raja Sekhar Bandaru, B Madhusudan Rao, Saket Kulkarni, Nilesh Ghatpande
Abstract:
This paper proposes an algorithm which automatically aligns and stitches component medical images (fluoroscopic) with varying degrees of overlap into a single composite image. The alignment method is based on a similarity measure between the component images. As applied here, the technique is intensity-based rather than feature-based; it works well in domains where feature-based methods have difficulty, yet it is more robust than traditional correlation. Component images are stitched together using a new triangular-averaging-based blending algorithm. The quality of the resultant image is tested for photometric inconsistencies and geometric misalignments. This method cannot correct rotational, scale and perspective artifacts.
Keywords: Histogram Matching, Image Alignment, Image Stitching, Medical Imaging.
9146 Resting-State Functional Connectivity Analysis Using an Independent Component Approach
Authors: Eric Jacob Bacon, Chaoyang Jin, Dianning He, Shuaishuai Hu, Lanbo Wang, Han Li, Shouliang Qi
Abstract:
Refractory epilepsy is a complicated type of epilepsy that can be difficult to diagnose. Recent technological advancements have made resting-state functional magnetic resonance imaging (rsfMRI) a vital technique for studying brain activity. However, there is still much to learn about rsfMRI, and investigating rsfMRI connectivity may aid in the detection of abnormal activity. In this paper, we propose studying the functional connectivity of rsfMRI candidates to diagnose epilepsy. Forty-five rsfMRI candidates, comprising 26 with refractory epilepsy and 19 healthy controls, were enrolled in this study. A data-driven approach known as Independent Component Analysis (ICA) was used to achieve our goal. First, rsfMRI data from both patients and healthy controls were analyzed using group ICA. The components that were obtained were then spatially sorted to find and select meaningful ones. A two-sample t-test was also used to identify abnormal networks in patients versus healthy controls. Finally, based on the fractional amplitude of low-frequency fluctuations (fALFF), a chi-square test was used to distinguish the network properties of the patient and healthy control groups. The two-sample t-test analysis revealed abnormalities in the default mode network, including the left superior temporal lobe and the left supramarginal gyrus. The right precuneus was found to be abnormal in the dorsal attention network. In addition, the frontal cortex showed an abnormal cluster in the medial temporal gyrus, while the temporal cortex showed abnormal clusters in the right middle temporal gyrus and the right fronto-operculum gyrus. Finally, the chi-square test was significant, producing a p-value of 0.001 for the analysis. This study offers evidence that investigating rsfMRI connectivity provides an excellent diagnostic option for refractory epilepsy.
Keywords: Independent Component Analysis, Resting State Network, refractory epilepsy, rsfMRI.
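As a schematic companion to the group ICA analysis described above, the following sketch applies FastICA (scikit-learn) to temporally concatenated, synthetic "voxel" time series; real group ICA of rsfMRI involves preprocessing, dimensionality reduction and back-reconstruction steps that are not shown, and the data here are random placeholders.

# Hedged sketch: ICA decomposition of temporally concatenated rsfMRI-like data.
# Data are random placeholders; real group ICA requires preprocessing,
# PCA reduction and subject-level back-reconstruction not shown here.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(6)
n_subjects, n_timepoints, n_voxels = 45, 100, 500
data = rng.normal(size=(n_subjects * n_timepoints, n_voxels))   # stacked time x voxels

ica = FastICA(n_components=20, random_state=0, max_iter=500)
time_courses = ica.fit_transform(data)       # (subjects*time) x components
spatial_maps = ica.components_               # components x voxels

print("time-course matrix:", time_courses.shape)
print("spatial maps:", spatial_maps.shape)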
9145 Simulation of a Multi-Component Transport Model for the Chemical Reaction of a CVD-Process
Abstract:
In this paper, we present discretization and decomposition methods for a multi-component transport model of a chemical vapor deposition (CVD) process. CVD processes are used to manufacture deposition layers or bulk materials; in our transport model, we simulate the deposition of thin layers. The microscopic model is based on the heavy particles, which are derived by approximately solving a linearized multi-component Boltzmann equation. For the drift process of the particles, as well as for the effects of heat conduction, we propose diffusion-reaction equations. We concentrate on solving the diffusion-reaction equation with analytical and numerical methods. For the chemical processes, modelled with reaction equations, we propose decomposition methods and decouple the multi-component models into simpler systems of differential equations. In the numerical experiments, we present the computational results of the proposed models.
Keywords: Chemical reactions, chemical vapor deposition, convection-diffusion-reaction equations, decomposition methods, multi-component transport.
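The decomposition idea sketched in the abstract, splitting transport (diffusion) from chemistry (reaction), can be illustrated with a simple Lie operator-splitting step for a 1D diffusion-reaction equation u_t = D u_xx - k u; the grid, coefficients and first-order decay reaction are illustrative assumptions, not the paper's CVD chemistry.

# Hedged sketch: Lie operator splitting for u_t = D*u_xx - k*u on a 1D grid.
# Each step: (1) explicit diffusion sub-step, (2) exact linear reaction sub-step.
# Coefficients and the single-species decay reaction are illustrative only.
import numpy as np

nx, D, k = 101, 1.0e-4, 0.01
x = np.linspace(0.0, 1.0, nx)
dx = x[1] - x[0]
dt = 0.4 * dx**2 / D                   # respect the explicit diffusion stability limit
u = np.exp(-((x - 0.5) ** 2) / 0.01)   # initial concentration profile

for _ in range(200):
    # 1) diffusion sub-step (explicit finite differences, fixed boundaries)
    lap = np.zeros_like(u)
    lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    u = u + dt * D * lap
    # 2) reaction sub-step (exact solution of du/dt = -k*u over dt)
    u = u * np.exp(-k * dt)

print("remaining total mass:", float(u.sum() * dx))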
9144 Non-Overlapping Hierarchical Index Structure for Similarity Search
Authors: Mounira Taileb, Sid Lamrous, Sami Touati
Abstract:
In order to accelerate similarity search in high-dimensional databases, we propose a new hierarchical indexing method composed of an offline and an online phase; our contribution concerns both phases. In the offline phase, after gathering the whole of the data into clusters and constructing a hierarchical index, the main originality of our contribution is a method for constructing bounding forms of clusters that avoid overlapping. For the online phase, our idea considerably improves the performance of similarity search; for this second phase, we have also developed an adapted search algorithm. Our method, named NOHIS (Non-Overlapping Hierarchical Index Structure), uses Principal Direction Divisive Partitioning (PDDP) as the clustering algorithm. The principle of PDDP is to divide the data recursively into two sub-clusters; the division is done using the hyperplane orthogonal to the principal direction derived from the covariance matrix and passing through the centroid of the cluster to be divided. The data of each of the two resulting sub-clusters are enclosed by a minimum bounding rectangle (MBR). The two MBRs are oriented according to the principal direction; consequently, non-overlapping between the two forms is assured. Experiments use databases containing image descriptors. Results show that the proposed method outperforms sequential scan and the SR-tree in processing k-nearest neighbor queries.
Keywords: K-nearest neighbour search, multi-dimensional indexing, multimedia databases, similarity search.
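A bare-bones sketch of one PDDP split, as used by NOHIS according to the abstract: the cluster is divided by the hyperplane through its centroid orthogonal to the principal direction, and each half is described by a bounding box in that direction's coordinate frame. The data are placeholders; the recursive index construction and the adapted k-NN search are not reproduced.

# Hedged sketch: one Principal Direction Divisive Partitioning (PDDP) split.
# The cluster is split by the sign of the projection onto its principal
# direction (hyperplane through the centroid); each sub-cluster is then
# described by a bounding box in the principal-direction coordinate frame.
import numpy as np

rng = np.random.default_rng(7)
cluster = rng.normal(size=(300, 8))            # placeholder image descriptors

centroid = cluster.mean(axis=0)
centered = cluster - centroid
# Principal direction = leading right singular vector of the centered data.
_, _, vt = np.linalg.svd(centered, full_matrices=False)
principal_direction = vt[0]

projection = centered @ principal_direction
left = cluster[projection < 0.0]               # one side of the hyperplane
right = cluster[projection >= 0.0]             # the other side

def oriented_bounds(points):
    # Min/max coordinates of the points in the principal-direction frame.
    coords = (points - centroid) @ vt.T        # rotate into the PCA frame
    return coords.min(axis=0), coords.max(axis=0)

print("left sub-cluster size:", len(left), "right sub-cluster size:", len(right))
print("left bounds along the principal direction:",
      [float(b[0]) for b in oriented_bounds(left)])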
9143 Wear and Friction Analysis of Sintered Metal Powder Self Lubricating Bush Bearing
Authors: J. K. Khare, Abhay Kumar Sharma, Ajay Tiwari, Amol A. Talankar
Abstract:
Powder metallurgy (P/M) is the only economic way to produce porous parts and products. P/M can produce near-net-shape parts, hence reducing the wastage of raw material and energy and avoiding various machining operations. The most vital use of P/M is in the production of metallic filters, self-lubricating bush bearings and sliding surfaces. The porosity of the part can be controlled by varying the compaction pressure, the sintering temperature and the composition of the metal powder mix. The present work is aimed at the experimental analysis of the friction and wear properties of self-lubricating copper-tin bush bearings. Experimental results confirm that the wear rate of the sintered component is lower for components with 10% tin by weight. The wear rate increases for higher tin percentages (tested at 20% and 30% tin) at the same sintering temperature. The experimental results also confirm that the wear rate of the sintered component depends on the sintering temperature, soaking period, composition of the preform, compacting pressure, and powder particle shape and size. Interfacial friction between the die and punch, between powder particles, and between the die face and powder particles depends on the compaction pressure, the powder particle size and shape, the size and shape of the component (which determines the size and shape of the die and punch), the die and punch material, and the powder particle material.
Keywords: Interfacial friction, porous bronze bearing, sintering temperature, wear rate.
9142 Machine Learning Techniques for COVID-19 Detection: A Comparative Analysis
Authors: Abeer Aljohani
Abstract:
The spread of the COVID-19 virus has been one of the most extreme pandemics across the globe. It is also referred to as coronavirus, a contagious disease that continuously mutates into numerous variants. Currently, the B.1.1.529 variant, labeled Omicron, has been detected in South Africa. The huge spread of COVID-19 has affected many lives and has placed exceptional pressure on healthcare systems worldwide; everyday life and the global economy have also been at stake. Numerous COVID-19 cases have placed a huge burden on hospitals as well as health workers. To reduce this burden, this paper predicts COVID-19 disease based on the symptoms and medical history of the patient. As machine learning is a widely accepted area that gives promising results for healthcare, this research presents an architecture for COVID-19 detection using ML techniques integrated with feature dimensionality reduction. This paper uses a standard University of California Irvine (UCI) dataset for predicting COVID-19 disease, comprising the symptoms of 5434 patients, and compares several supervised ML techniques on the presented architecture. The architecture also utilizes a 10-fold cross-validation process for generalization and the Principal Component Analysis (PCA) technique for feature reduction. Standard metrics are used to evaluate the proposed architecture, including F1-score, precision, accuracy, recall, the Receiver Operating Characteristic (ROC) and the Area Under the Curve (AUC). The results show that Decision Tree, Random Forest and neural networks outperform all other state-of-the-art ML techniques. These results can be used to effectively identify COVID-19 infection cases.
Keywords: Supervised machine learning, COVID-19 prediction, healthcare analytics, Random Forest, Neural Network.
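A compact sketch of the evaluation setup described above (PCA for feature reduction, a supervised classifier, and 10-fold cross-validation) is shown below with scikit-learn; the synthetic symptom matrix stands in for the UCI dataset of 5434 patients, and the reported score is not a result from the paper.

# Hedged sketch: PCA feature reduction + Random Forest, evaluated with 10-fold
# cross-validation. Synthetic data replace the UCI COVID-19 symptom dataset.
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline

X, y = make_classification(n_samples=5434, n_features=20, n_informative=8,
                           random_state=0)            # placeholder symptom data

model = Pipeline([
    ("pca", PCA(n_components=10)),                    # feature dimensionality reduction
    ("clf", RandomForestClassifier(n_estimators=200, random_state=0)),
])

scores = cross_val_score(model, X, y, cv=10, scoring="f1")
print("10-fold F1:", scores.mean().round(3), "+/-", scores.std().round(3))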
9141 Acute Coronary Syndrome Prediction Using Data Mining Techniques: An Application
Authors: Tahseen A. Jilani, Huda Yasin, Madiha Yasin, C. Ardil
Abstract:
In this paper, we use data mining techniques to investigate factors that contribute significantly to enhancing the risk of acute coronary syndrome. We take the dependent variable to be the diagnosis, with dichotomous values indicating the presence or absence of the disease, and apply binary logistic regression to the factors affecting it. The data set has been taken from two different cardiac hospitals in Karachi, Pakistan. There are sixteen variables in total, of which one is the dependent variable and the other fifteen are independent variables. For better performance of the regression model in predicting acute coronary syndrome, data reduction techniques such as principal component analysis are applied. Based on the results of the data reduction, only fourteen of the sixteen factors are retained.
Keywords: Acute coronary syndrome (ACS), binary logistic regression analyses, myocardial ischemia (MI), principal component analysis, unstable angina (U.A.).
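The binary logistic regression step can be pictured with the short scikit-learn sketch below, where PCA first reduces the fifteen independent factors; the synthetic patient records and the number of retained components are assumptions standing in for the Karachi hospital data.

# Hedged sketch: PCA-based data reduction followed by binary logistic regression
# for a dichotomous diagnosis. Patient records here are synthetic placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(8)
X = rng.normal(size=(300, 15))                       # 15 independent factors
logit = 0.8 * X[:, 0] - 0.5 * X[:, 3]                # synthetic risk relationship
y = (rng.random(300) < 1.0 / (1.0 + np.exp(-logit))).astype(int)   # diagnosis 0/1

Z = PCA(n_components=14).fit_transform(X)            # reduced factor set
model = LogisticRegression(max_iter=1000).fit(Z, y)

print("in-sample accuracy:", model.score(Z, y).round(3))
print("odds ratios of the retained components:", np.exp(model.coef_).round(2))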
9140 Improved Pattern Matching Applied to Surface Mounting Devices Components Localization on Automated Optical Inspection
Authors: Pedro M. A. Vitoriano, Tito. G. Amaral
Abstract:
Automated Optical Inspection (AOI) systems are commonly used in Printed Circuit Board (PCB) manufacturing. The use of this technology has proven highly efficient for process improvement and quality achievement. The correct extraction of the component for subsequent analysis is a critical step of the AOI process. Nowadays, the pattern matching algorithm is commonly used, although it requires extensive calculation and is time-consuming. This paper presents an improved algorithm for the component localization process, with the capability of implementation in a parallel execution system.
Keywords: AOI, automated optical inspection, SMD, surface mounting devices, pattern matching, parallel execution.
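As a baseline reference for the pattern matching step, the sketch below locates a component template in a board image with OpenCV's normalized cross-correlation; the images are synthetic arrays, and the paper's improved, parallelizable algorithm is not reproduced here.

# Hedged sketch: baseline normalized cross-correlation pattern matching with
# OpenCV. Synthetic arrays stand in for the PCB image and the SMD template;
# the paper's improved parallel algorithm is not shown.
import numpy as np
import cv2

rng = np.random.default_rng(9)
board = rng.random((240, 320)).astype(np.float32)     # placeholder PCB image
template = board[100:130, 150:190].copy()             # "component" cut from the board

result = cv2.matchTemplate(board, template, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(result)

print("best match score:", round(float(max_val), 3))
print("component located at (x, y):", max_loc)        # expected: (150, 100)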
9139 A Reusability Evaluation Model for OO-Based Software Components
Authors: Parvinder S. Sandhu, Hardeep Singh
Abstract:
The requirement to improve software productivity has promoted research on software metric technology. There are metrics for identifying the quality of reusable components, but the function that uses these metrics to determine the reusability of software components is still not clear. These metrics, if identified in the design phase or even in the coding phase, can help reduce rework by improving the quality of component reuse and hence improve productivity through a probabilistic increase in the reuse level. The CK metric suite is the most widely used set of metrics for object-oriented (OO) software; we critically analyzed the CK metrics, tried to remove the inconsistencies, and devised a framework of metrics to obtain a structural analysis of OO-based software components. A neural network can learn new relationships from new input data and can be used to refine fuzzy rules to create a fuzzy adaptive system. Hence, a neuro-fuzzy inference engine can be used to evaluate the reusability of an OO-based component using its structural attributes as inputs. In this paper, an algorithm is proposed in which tuned WMC, DIT, NOC, CBO and LCOM values of the OO software component are given as inputs to the neuro-fuzzy system, and the output is obtained in terms of reusability. The developed reusability model has produced high-precision results, as expected by the human experts.
Keywords: CK-Metric, ID3, Neuro-fuzzy, Reusability.
9138 Concept, Design and Implementation of Power System Component Simulator Based on Thyristor Controlled Transformer and Power Converter
Authors: B. Kędra, R. Małkowski
Abstract:
This paper presents the Power System Component Simulator, a device designed for the LINTE^2 laboratory owned by Gdansk University of Technology in Poland. We first provide introductory information on the Power System Component Simulator and its capabilities. Then, the concept of the unit is presented: the requirements for the unit are described, and the proposed and implemented functions are listed. Implementation details are given, and the hardware structure is presented and described. Information about the communication interface used, the data maintenance and storage solution, and the Simulink real-time features employed is presented, and a list and description of all measurements is provided. The potential for modifications of the laboratory setup is evaluated. Lastly, the results of experiments performed using the Power System Component Simulator are presented, including the simulation of under-frequency load shedding, the frequency- and voltage-dependent characteristics of groups of load units, and the time characteristics of a group of different load units in a chosen area.
Keywords: Power converter, Simulink real-time, MATLAB, load, tap controller.
9137 Mapping the Digital Landscape: An Analysis of Party Differences between Conventional and Digital Policy Positions
Authors: Daniel Schwarz, Jan Fivaz, Alessia Neuroni
Abstract:
Although digitization is a buzzword in almost every election campaign, political parties leave voters largely in the dark about their specific positions on digital issues. In the run-up to the 2019 elections in Switzerland, the ‘Digitization Monitor’ project (DMP) was launched in order to change this situation. Within the framework of the DMP, all 4,736 candidates were surveyed about their digital policy positions and values. The DMP is designed as a digital policy supplement to the existing ‘smartvote’ voting advice application. This enabled a direct comparison of the digital policy attitudes according to the DMP with the topics of the ‘smartvote’ questionnaire, which are comprehensive in content but mainly related to conventional policy areas. This paper's main research goal is to analyze and visualize possible differences between conventional and digital policy areas in terms of response patterns between and within political parties. The analysis is based on dimensionality reduction methods (multidimensional scaling and principal component analysis) for the visualization of inter-party differences, and on the standard deviation as a measure of variation for the evaluation of intra-party unity. The results reveal that digital issues show a lower degree of inter-party polarization compared to conventional policy areas. Thus, the parties have more common ground on issues of digitization than in conventional policy areas. In contrast, the study reveals a mixed picture regarding intra-party unity: homogeneous parties show a lower degree of unity on digitization issues, whereas parties with heterogeneous positions in conventional areas have more united positions in digital areas. All things considered, the findings are encouraging, as less polarized conditions apply to the debate on digital development compared to conventional politics. For the future, it would be desirable for projects similar to the DMP to emerge in other countries in order to broaden the basis for conclusions.
Keywords: Comparison of political issue dimensions, digital awareness of candidates, digital policy space, party positions on digital issues.
9136 Adaptive Filtering of Heart Rate Signals for an Improved Measure of Cardiac Autonomic Control
Authors: Desmond B. Keenan, Paul Grossman
Abstract:
In order to provide accurate heart rate variability indices of sympathetic and parasympathetic activity, the low-frequency and high-frequency components of an RR heart rate signal must be adequately separated. This is not always possible by applying spectral analysis alone, as power from the high- and low-frequency components often leaks into adjacent bands. Furthermore, without the respiratory spectrum it is not obvious that the low-frequency component is not another respiratory component, which can appear in the lower band. This paper describes an adaptive filter that aids the separation of the low-frequency sympathetic and high-frequency parasympathetic components of an ECG R-R interval signal, enabling more accurate heart rate variability measures. The algorithm is applied to simulated signals and to heart rate and respiratory signals acquired from an ambulatory monitor incorporating single-lead ECG and inductive plethysmography sensors embedded in a garment. The results show an improvement over standard heart rate variability spectral measurements.
Keywords: Heart rate variability, vagal tone, sympathetic, parasympathetic, spectral analysis, adaptive filter.
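A generic least-mean-squares (LMS) adaptive noise-cancellation sketch is given below to illustrate the kind of filtering the abstract refers to: the respiration signal serves as the reference, and the filter output is subtracted from the RR series to separate the respiration-linked (parasympathetic) component from the low-frequency remainder. The signals, filter length and step size are illustrative assumptions, not the authors' algorithm or data.

# Hedged sketch: LMS adaptive filter using respiration as the reference input to
# separate the respiration-linked component of an RR-interval series from the
# low-frequency remainder. All signals and parameters are illustrative.
import numpy as np

fs, n = 4.0, 1200                              # 4 Hz resampled RR series, 5 minutes
t = np.arange(n) / fs
rng = np.random.default_rng(10)

respiration = np.sin(2 * np.pi * 0.25 * t)                     # ~0.25 Hz breathing
rr = (0.05 * np.sin(2 * np.pi * 0.1 * t)                       # low-frequency component
      + 0.03 * respiration                                     # respiratory (HF) component
      + 0.005 * rng.normal(size=n))                            # noise

order, mu = 16, 0.01                           # filter length and LMS step size
w = np.zeros(order)
hf_estimate = np.zeros(n)
for i in range(order, n):
    x = respiration[i - order:i][::-1]         # most recent reference samples
    y = w @ x                                  # filter output (estimated HF part)
    e = rr[i] - y                              # error = RR minus estimated HF part
    w = w + 2 * mu * e * x                     # LMS weight update
    hf_estimate[i] = y

lf_estimate = rr - hf_estimate                 # what remains after removing HF
print("HF estimate power:", np.var(hf_estimate[order:]).round(5))
print("LF + noise power:", np.var(lf_estimate[order:]).round(5))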
9135 Neuron Dynamics of Single-Compartment Traub Model for Hardware Implementations
Authors: J. C. Moctezuma, V. Breña-Medina, Jose Luis Nunez-Yanez, Joseph P. McGeehan
Abstract:
In this work, we perform a bifurcation analysis of a single-compartment representation of the Traub model, one of the most important conductance-based models. The analysis focuses on two principal parameters: the injected current and the leakage conductance. Stable and unstable solutions are explored; the Hopf bifurcation and the frequency interpretation as the current varies are also examined. This study provides control over the neuron dynamics and the neuron response when these parameters change. Analysis like this is particularly important for several applications, such as tuning parameters in the learning process, testing neuron excitability, and measuring the bursting properties of the neuron. Finally, a hardware implementation was developed to corroborate these results.
Keywords: Traub model, Pinsky-Rinzel model, Hopf bifurcation, single-compartment models, bifurcation analysis, neuron modeling.