Search results for: Minor Component Analysis
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 28474


28354 Advancing Epilepsy Diagnosis through EEG Analysis and Independent Component Analysis Algorithms

Authors: Eyad Talal Attar

Abstract:

Epilepsy is a prevalent neurological condition that affects around 50 million individuals globally, rendering it one of the most widespread neurological disorders. The condition is distinguished by recurring seizures, which are abrupt and transient disruptions in cerebral activity that can induce alterations in perception, conduct, and awareness. Seizures are classified as focal or generalized, based on the site and scope of the atypical brain activity. Focal seizures are confined to a particular brain area and can elicit localized manifestations. Generalized seizures involve extensive electrical activity throughout the brain and can appear as various symptoms such as convulsions, muscle rigidity, and loss of consciousness. This study examines seven individuals, selected because each experienced between three and five seizures, and investigates the ability to detect brain seizure activity. EEG recordings from the Siena Scalp EEG Database, available through PhysioNet, were used. EEGLAB, a robust tool for processing and analyzing electroencephalogram (EEG) data, was used to analyze the raw data. The efficacy of Independent Component Analysis (ICA) algorithms has been demonstrated in the separation of artifactual EEG sources from neuronally generated EEG sources.
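
The keywords below mention power spectral density (PSD) analysis. As an illustration only (the study itself uses EEGLAB and ICA), a minimal periodogram-style PSD estimate of a synthetic EEG-like signal might look like this; the sampling rate, duration, and 10 Hz "alpha" component are invented for the example:

```python
import numpy as np

def periodogram_psd(x, fs):
    """Single-sided periodogram PSD estimate of a real-valued signal."""
    n = len(x)
    window = np.hanning(n)
    spectrum = np.fft.rfft(x * window)
    # Density scaling: normalize by window power and sampling rate.
    psd = (np.abs(spectrum) ** 2) / (fs * np.sum(window ** 2))
    psd[1:-1] *= 2  # fold negative frequencies into the positive side
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return freqs, psd

if __name__ == "__main__":
    fs = 256.0                    # a common EEG sampling rate
    t = np.arange(0, 4, 1 / fs)   # 4 s of data
    rng = np.random.default_rng(0)
    # 10 Hz "alpha" oscillation buried in noise
    x = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
    freqs, psd = periodogram_psd(x, fs)
    print(f"dominant frequency: {freqs[np.argmax(psd)]:.2f} Hz")
```

In practice one would use an averaged estimator (e.g. Welch's method) on real EEG channels; the single periodogram here only shows the basic computation.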

Keywords: EEG, MATLAB software, power spectral density, PSD, signal analysis, attention, alpha, beta, gamma

Procedia PDF Downloads 25
28353 Fuzzy-Machine Learning Models for the Prediction of Fire Outbreak: A Comparative Analysis

Authors: Uduak Umoh, Imo Eyoh, Emmauel Nyoho

Abstract:

This paper compares fuzzy-machine learning algorithms, namely Support Vector Machine (SVM) and K-Nearest Neighbor (KNN), for predicting cases of fire outbreak. The paper uses a fire outbreak dataset with three features (temperature, smoke, and flame). The data are pre-processed using the Interval Type-2 Fuzzy Logic (IT2FL) algorithm, Min-Max normalization, and Principal Component Analysis (PCA), which are used to predict feature labels, normalize the dataset, and select relevant features, respectively. The output of the pre-processing is a dataset with two principal components (PC1 and PC2). The pre-processed dataset is then used to train the aforementioned machine learning models. The K-fold (K=10) cross-validation method is used to evaluate the performance of the models using the metrics Receiver Operating Characteristic (ROC) curve, specificity, and sensitivity. The models are also tested on 20% of the dataset. The validation results show KNN to be the better model for fire outbreak detection, with an ROC value of 0.99878, followed by SVM with an ROC value of 0.99753.
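
As an illustrative sketch of the final classification step described above (not the authors' IT2FL pipeline), a k-nearest-neighbor vote over two hypothetical principal components could look like this; the sample coordinates and labels are made up:

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest training points.
    `train` is a list of ((pc1, pc2), label) pairs."""
    nearest = sorted(train, key=lambda p: math.dist(p[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

if __name__ == "__main__":
    # Hypothetical pre-processed samples: (PC1, PC2) -> fire / no_fire
    train = [
        ((0.9, 0.8), "fire"), ((0.8, 0.9), "fire"), ((0.7, 0.7), "fire"),
        ((0.1, 0.2), "no_fire"), ((0.2, 0.1), "no_fire"), ((0.15, 0.3), "no_fire"),
    ]
    print(knn_predict(train, (0.85, 0.75)))  # query near the fire cluster
```

A real evaluation would add the 10-fold cross-validation and ROC computation mentioned in the abstract.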

Keywords: machine learning algorithms, interval type-2 fuzzy logic, fire outbreak, support vector machine, k-nearest neighbour, principal component analysis

Procedia PDF Downloads 137
28352 Site Analysis’ Importance as a Valid Factor in Building Design

Authors: Mekwa Eme, Anya Chukwuma

Abstract:

The act of evaluating a particular site physically and socially in order to create a good design solution that addresses the physical and interior environment of the location is known as architectural site analysis. This essay describes site analysis as a useful design component. As the introduction and supporting research indicate, site evaluation and analysis are crucial to good design with respect to topography, orientation, site size, accessibility, rainfall, wind direction, and times of sunrise and sunset. Methodology: Both quantitative and qualitative analyses are used in this paper, drawing on primary and secondary data collection. The information was gathered through the case study approach, published literature, journals, the internet, a local poll, oral inquiries, and in-person interviews. The purpose is to clarify the benefits of site analysis for the design process and its implications for the working or building stage. Results: Each site is unique in terms of criteria such as soil, plants, trees, accessibility, topography, and security. This makes it easier for the architect and environmentalist to decide on the concept, shape, and supporting structures of the design. Site analysis is crucial because, before any design work is done, the nature of the target location is determined through site visits and research, covering the location, contours, site features, and accessibility, among other topics. Site analysis is likewise a key component of architectural education, enabling students and practicing architects to understand the nature of the site they will be working on. The building's orientation, the site's circulation, and the sustainability of the site can all be determined through thorough study of the site's features.

Keywords: analysis, climate, statistics, design

Procedia PDF Downloads 216
28351 The Connection between De Minimis Rule and the Effect on Trade

Authors: Pedro Mario Gonzalez Jimenez

Abstract:

The novelties introduced by the last Notice on agreements of minor importance tighten the application of the 'de minimis' safe harbour in the European Union. At the same time, the undetermined legal concept of effect on trade between the Member States becomes important. Therefore, the analysis that a jurist must currently carry out in the European Union to determine whether an agreement appreciably restricts competition under Article 101 of the Treaty on the Functioning of the European Union is twofold. Hence, it is necessary to know how to balance significance for competition against significance for the effect on trade between the Member States. This is a crucial issue because the negative delimitation of the restriction of competition affects the positive one. The methodology of this research is rather simple. Beginning with a historical approach to the 'de minimis' rule, its main problems and uncertainties are identified. Then, after analysis of normative documents and the jurisprudence of the Court of Justice of the European Union, some 'lege ferenda' proposals are offered. These proposals try to overcome the contradictions and questions that currently exist in the European Union as a consequence of the current legal regime for agreements of minor importance. The main findings of this research are the following: Firstly, the effect on trade is a way of analyzing the importance of an agreement distinct from the 'de minimis' rule. In fact, this concept is singularly adapted to agreements that have as their object the prevention, restriction or distortion of competition, as observed in the best-known European Union case law. Thanks to the effect-on-trade criterion, as long as the proper requirements are met there is no restriction of competition under Article 101 of the Treaty on the Functioning of the European Union, even if the agreement has an anti-competitive object.
These requirements are an aggregate market share lower than 5% on any of the relevant markets affected by the agreement and a turnover lower than 40 million euros. Secondly, as the Notice itself says, 'it is also intended to give guidance to the courts and competition authorities of the Member States in their application of Article 101 of the Treaty, but it has no binding force for them'. This makes possible divergent positions among the Member States and a confused perception of what a restriction of competition is. Ultimately, damage to trade between the Member States could result for this reason. The main conclusion is that a significant effect on trade between Member States is irrelevant for agreements that restrict competition by their effects but crucial for agreements that restrict competition by their object. Thus, the Member States should incorporate a similar concept into their legal orders in order to apply the content of the Notice. Otherwise, the significance for competition of a restrictive agreement cannot be properly assessed.

Keywords: De minimis rule, effect on trade, minor importance agreements, safe harbour

Procedia PDF Downloads 149
28350 Material Characterization and Numerical Simulation of a Rubber Bumper

Authors: Tamás Mankovits, Dávid Huri, Imre Kállai, Imre Kocsis, Tamás Szabó

Abstract:

Non-linear FEM calculations are indispensable when important technical information, such as the operating performance of a rubber component, is required. Rubber bumpers built into air-spring structures may undergo large deformations under load, which in itself produces non-linear behavior. The changing contact range between the parts and the incompressibility of the rubber increase this non-linearity further. The material characterization of an elastomeric component is also a demanding engineering task. In this paper, a comprehensive investigation is presented, including laboratory measurements, mesh density analysis and complex finite element simulations, to obtain the load-displacement curve of the chosen rubber bumper. Contact and friction effects are also taken into consideration. The aim of this research is to elaborate an FEM model that is accurate and competitive for a future shape optimization task.
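
As a minimal sketch of the hyperelastic material law named in the keywords: for an incompressible two-parameter Mooney-Rivlin solid under uniaxial loading at stretch lam, the nominal (engineering) stress is P = 2*(lam - lam**-2)*(C10 + C01/lam). The material constants below are illustrative placeholders, not the paper's fitted values:

```python
def mooney_rivlin_uniaxial(stretch, c10, c01):
    """Nominal stress for an incompressible two-parameter Mooney-Rivlin solid
    under uniaxial loading. Compression corresponds to stretch < 1, so the
    returned stress is negative there."""
    lam = stretch
    return 2.0 * (lam - lam ** -2) * (c10 + c01 / lam)

if __name__ == "__main__":
    # Illustrative constants (MPa); real values come from fitting compression tests.
    c10, c01 = 0.4, 0.1
    for lam in (0.7, 0.8, 0.9, 1.0):
        stress = mooney_rivlin_uniaxial(lam, c10, c01)
        print(f"stretch {lam:.1f}: nominal stress {stress:.3f} MPa")
```

The FEM simulations in the paper solve the full boundary-value problem with contact and friction; this closed form only covers the homogeneous uniaxial case used in material characterization.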

Keywords: rubber bumper, finite element analysis, compression test, Mooney-Rivlin material model

Procedia PDF Downloads 483
28349 Association of Depression with Physical Inactivity and Time Watching Television: A Cross-Sectional Study with the Brazilian Population PNS, 2013

Authors: Margareth Guimaraes Lima, Marilisa Berti A. Barros, Deborah Carvalho Malta

Abstract:

The relationship between physical activity (PA) and depression has been investigated in both observational and clinical studies: PA can be part of treatments for depression; physical inactivity (PI) may contribute to worsening depression symptoms; and, conversely, emotional problems can decrease PA. The main aim of this study was to analyze the association of leisure-time and transportation PI and time watching television (TV) with depression (minor and major), evaluated with the Patient Health Questionnaire (PHQ-9). The association was also analyzed by gender. This is a cross-sectional study. Data were obtained from the National Health Survey 2013 (PNS), performed with a representative sample of the Brazilian adult population in 2013. The PNS collected information from 60,202 individuals aged 18 years or more. The dependent variables were: leisure-time physical inactivity (LTPI), classifying as inactive or insufficiently active (categories combined for analysis) those who did not perform a minimum of 150 minutes of moderate or 75 minutes of vigorous leisure-time PA per week; transportation physical inactivity (TPI), for individuals who did not reach 150 minutes per week travelling by bicycle or on foot to work or other activities; and daily time watching TV > 5 hours. The principal independent variable was depression, identified by the PHQ-9. Individuals were classified as having major depression if they presented more than five symptoms, on more than seven days, with one of the symptoms being "depressive mood" or "lack of interest or pleasure"; the others were classified with minor depression. The variables used for adjustment were gender, age, schooling and chronic disease. The prevalences of LTPI, TPI and long TV time were estimated according to depression, and differences were tested with the chi-square test. Adjusted prevalence ratios were estimated using multiple Poisson regression models. The analyses were also stratified by gender.
The mean age of the studied population was 42.9 years (95% CI: 42.6-43.2) and 52.9% were women. 77.5% and 68.1% were inactive or insufficiently active in leisure and transportation, respectively, and 13.3% spent > 5 hours watching TV. 6% and 4.1% of the Brazilian population had minor or major depression, respectively. LTPI prevalence was 5% and 9% higher among individuals with minor and major depression, respectively, compared with those without depression. The prevalence of TPI was 7% higher in those with major depression. For longer TV time, the prevalence was 45% and 74% higher among those with minor and major depression, respectively. In the gender-stratified analyses, the associations were stronger in men than in women, and TPI was not associated with depression in women. The study detected a higher prevalence of leisure-time physical inactivity and, especially, time spent watching TV among individuals with major and minor depression, after adjusting for a number of potential confounding factors. TPI was associated only with major depression and only among men. Given the cross-sectional design of the research, these associations may point to the importance of addressing the population's mental health problems in order to increase PA and decrease sedentary lifestyles; on the other hand, the study highlights the need for interventions that encourage people with depression to practice PA, including for transportation.
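
As a toy illustration of the prevalence-ratio measure reported above (the study itself estimates adjusted ratios with multiple Poisson regression), a crude prevalence ratio can be computed directly from counts; all counts below are hypothetical:

```python
def prevalence_ratio(exposed_cases, exposed_total, unexposed_cases, unexposed_total):
    """Crude prevalence ratio: prevalence in the exposed group divided by
    prevalence in the unexposed group."""
    p_exposed = exposed_cases / exposed_total
    p_unexposed = unexposed_cases / unexposed_total
    return p_exposed / p_unexposed

if __name__ == "__main__":
    # Hypothetical counts: leisure-time physical inactivity among people
    # with vs without minor depression.
    pr = prevalence_ratio(2930, 3600, 43700, 56600)
    print(f"crude prevalence ratio: {pr:.2f}")  # ~1.05 => 5% higher prevalence
```

Adjustment for gender, age, schooling and chronic disease, as done in the paper, requires a regression model rather than this crude ratio.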

Keywords: depression, physical activity, PHQ-9, sedentary lifestyle

Procedia PDF Downloads 129
28348 Seismic Fragility Assessment of Continuous Integral Bridge Frames with Variable Expansion Joint Clearances

Authors: P. Mounnarath, U. Schmitz, Ch. Zhang

Abstract:

Fragility analysis has become an effective tool for the seismic vulnerability assessment of civil structures over the last several years. The design of expansion joints according to various bridge design codes is largely inconsistent, and only a few studies have focused on this problem so far. In this study, the influence of the expansion joint clearances between the girder ends and the abutment backwalls on the seismic fragility assessment of continuous integral bridge frames is investigated. The gaps (60 mm, 150 mm, 250 mm and 350 mm) are designed by following two different bridge design code specifications, namely Caltrans and Eurocode 8-2. Five bridge models are analyzed and compared. The first bridge model serves as a reference. This model uses three-dimensional reinforced concrete fiber beam-column elements with simplified supports at both ends of the girder. The other four models also employ reinforced concrete fiber beam-column elements but include the abutment backfill stiffness and the four different gap values. Nonlinear time history analyses are performed. Artificial ground motion sets with peak ground accelerations (PGAs) ranging from 0.1 g to 1.0 g, in increments of 0.05 g, are taken as input. Soil-structure interaction and P-Δ effects are also included in the analysis. The component fragility curves, in terms of the curvature ductility demand-to-capacity ratio of the piers and the displacement demand-to-capacity ratio of the abutment sliding bearings, are established and compared. The system fragility curves are then obtained by combining the component fragility curves. Our results show that in the component fragility analysis, the reference bridge model exhibits a severe vulnerability compared with the more sophisticated bridge models for all damage states.
In the system fragility analysis, the reference curves show a smaller damage probability in the lower PGA ranges for the first three damage states, but a higher fragility than the other curves at larger PGA levels. In the fourth damage state, the reference curve has the smallest vulnerability. In both the component and the system fragility analyses, the same trend is found: the bridge models with smaller clearances exhibit a smaller fragility than those with larger openings. However, the bridge model with the maximum clearance still induces the minimum pounding force effect.
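
As a hedged sketch of the general fragility-curve form common in such studies (the paper derives its curves from nonlinear time history analysis, not from this closed form), a lognormal fragility function maps PGA to the probability of reaching a damage state; the median capacity and dispersion below are illustrative:

```python
import math

def fragility(pga, theta, beta):
    """Lognormal fragility: probability of reaching a damage state at a given
    PGA (g), with median capacity `theta` (g) and logarithmic dispersion `beta`.
    Uses the standard normal CDF expressed via erf."""
    z = math.log(pga / theta) / beta
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

if __name__ == "__main__":
    # Illustrative parameters, not the paper's fitted values.
    theta, beta = 0.45, 0.5
    for pga in (0.1, 0.3, 0.45, 0.7, 1.0):
        print(f"PGA {pga:.2f} g -> P(damage) = {fragility(pga, theta, beta):.3f}")
```

At the median capacity (PGA = theta) the probability is exactly 0.5, which is a quick sanity check on any fitted curve.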

Keywords: expansion joint clearance, fiber beam-column element, fragility assessment, time history analysis

Procedia PDF Downloads 411
28347 Genetic Variability and Principal Component Analysis in Eggplant (Solanum melongena)

Authors: M. R. Naroui Rad, A. Ghalandarzehi, J. A. Koohpayegani

Abstract:

Nine advanced cultivars and lines were planted in transplant trays in March 2013. In mid-April 2014, the nine cultivars and lines were taken from the seedling trays and were evaluated and compared in an experiment laid out as a completely randomized block design with three replications at the Agricultural Research Station, Zahak. The analysis of variance showed significant differences between the studied cultivars in terms of average fruit weight, fruit length, fruit diameter, ratio of fruit length to diameter, relative number of seeds per fruit, and per-plant yield. Sohrab and line Y6, with averages of 41.9 and 36.7 t/ha, respectively, produced the highest total yields. Simple correlations among the analyzed traits showed that the final yield was affected by average fruit weight, through the direct and indirect effects of fruit weight and per-plant yield on the final yield. Genotypic variance and heritability values were high for fruit weight, fruit length and number of seeds per fruit. The first two principal components accounted for 81.6% of the total variation among the characters describing the genotypes.

Keywords: eggplant, principal component, variation, path analysis

Procedia PDF Downloads 201
28346 Emotion Recognition with Occlusions Based on Facial Expression Reconstruction and Weber Local Descriptor

Authors: Jadisha Cornejo, Helio Pedrini

Abstract:

Recognition of emotions based on facial expressions has received increasing attention from the scientific community in recent years. Several fields of application can benefit from facial emotion recognition, such as behavior prediction, interpersonal relations, human-computer interaction, and recommendation systems. In this work, we develop and analyze an emotion recognition framework based on facial expressions that is robust to occlusions through the Weber Local Descriptor (WLD). Initially, the occluded facial expressions are reconstructed following an extension of Robust Principal Component Analysis (RPCA). Then, WLD features are extracted from the facial expression representation, as well as Local Binary Patterns (LBP) and Histograms of Oriented Gradients (HOG). The feature vector space is reduced using Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA). Finally, K-Nearest Neighbor (K-NN) and Support Vector Machine (SVM) classifiers are used to recognize the expressions. Experimental results on three public datasets demonstrate that the WLD representation achieved competitive accuracy rates for occluded and non-occluded facial expressions compared to other approaches available in the literature.

Keywords: emotion recognition, facial expression, occlusion, fiducial landmarks

Procedia PDF Downloads 154
28345 An Efficient Machine Learning Model to Detect Metastatic Cancer in Pathology Scans Using Principal Component Analysis Algorithm, Genetic Algorithm, and Classification Algorithms

Authors: Bliss Singhal

Abstract:

Machine learning (ML) is a branch of Artificial Intelligence (AI) in which computers analyze data and find patterns in it. This study focuses on the detection of metastatic cancer using ML. Metastatic cancer is the stage at which cancer has spread to other parts of the body and is the cause of approximately 90% of cancer-related deaths. Normally, pathologists spend hours each day manually classifying tumors as benign or malignant. This tedious task contributes to metastases being mislabeled over 60% of the time and emphasizes the importance of accounting for human error and other inefficiencies. ML is a good candidate to improve the correct identification of metastatic cancer, potentially saving thousands of lives, and can also improve the speed and efficiency of the process, thereby requiring fewer resources and less time. So far, deep learning methods have been used in research to detect cancer. This study takes a novel approach by examining the potential of combining preprocessing algorithms with classification algorithms to detect metastatic cancer. The study used two preprocessing algorithms, principal component analysis (PCA) and a genetic algorithm, to reduce the dimensionality of the dataset, and then three classification algorithms, logistic regression, decision tree classifier, and k-nearest neighbors, to detect metastatic cancer in the pathology scans. The highest accuracy of 71.14% was produced by the ML pipeline comprising PCA, the genetic algorithm, and the k-nearest neighbors algorithm, suggesting that preprocessing and classification algorithms have great potential for detecting metastatic cancer.
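
A minimal sketch of the PCA-plus-KNN portion of the pipeline described above, using a synthetic two-class dataset in place of real pathology-scan features (the genetic-algorithm step is omitted, and all data here are invented):

```python
import numpy as np

def pca_fit_transform(X, n_components):
    """Project X onto its top principal components via SVD of the centered data."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T, Vt[:n_components]

def knn_classify(X_train, y_train, X_test, k=3):
    """Majority-vote k-nearest-neighbor classifier (Euclidean distance)."""
    preds = []
    for x in X_test:
        idx = np.argsort(np.linalg.norm(X_train - x, axis=1))[:k]
        vals, counts = np.unique(y_train[idx], return_counts=True)
        preds.append(vals[np.argmax(counts)])
    return np.array(preds)

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    # Toy stand-in for high-dimensional scan features: two noisy classes.
    X = np.vstack([rng.normal(0.0, 1.0, size=(50, 20)),
                   rng.normal(1.5, 1.0, size=(50, 20))])
    y = np.array([0] * 50 + [1] * 50)
    Z, _ = pca_fit_transform(X, n_components=2)        # dimensionality reduction
    preds = knn_classify(Z[::2], y[::2], Z[1::2], k=3)  # crude even/odd split
    print(f"accuracy: {(preds == y[1::2]).mean():.2f}")
```

A real pipeline would use proper train/test splits and cross-validation, and the genetic algorithm would further prune the feature set before classification.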

Keywords: breast cancer, principal component analysis, genetic algorithm, k-nearest neighbors, decision tree classifier, logistic regression

Procedia PDF Downloads 50
28344 Research Attitude: Its Factor Structure and Determinants in the Graduate Level

Authors: Janet Lynn S. Montemayor

Abstract:

Declining survival rates and a rising drop-out rate in graduate school are attributed to the demands that come with research-related requirements. Graduate students tend to withdraw from their studies when confronted with such requirements; this act of succumbing to the challenge is primarily due to a negative mindset. An understanding of students' views of research is essential for teachers facilitating research activities in graduate school. This study aimed to develop a tool that accurately measures attitude towards research. The psychometric properties of the Research Attitude Inventory (RAIn) were assessed. A pool of items (k=50) was initially constructed and administered to a development sample composed of Masters and Doctorate degree students (n=159). Results show that the RAIn is a reliable measure of research attitude (k=41, αmax = 0.894). Principal component analysis using orthogonal rotation with Kaiser normalization identified four underlying factors of research attitude, namely predisposition, purpose, perspective, and preparation. Research attitude among the respondents was analyzed using this measure.
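
The reliability coefficient reported above (αmax = 0.894) is a Cronbach's alpha; a minimal sketch of how such an internal-consistency coefficient is computed, with hypothetical Likert-scale responses, might look like this:

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a scale given one list of scores per item:
        alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))"""
    k = len(items)
    n = len(items[0])

    def var(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(items[i][j] for i in range(k)) for j in range(n)]
    return k / (k - 1) * (1 - sum(var(col) for col in items) / var(totals))

if __name__ == "__main__":
    # Hypothetical 5-point Likert responses: three items, six respondents.
    items = [
        [4, 5, 3, 4, 2, 5],
        [4, 4, 3, 5, 2, 4],
        [5, 4, 2, 4, 3, 5],
    ]
    print(f"alpha = {cronbach_alpha(items):.3f}")
```

Perfectly correlated items give alpha = 1; values around 0.9, as in the RAIn, indicate high internal consistency.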

Keywords: graduate education, principal component analysis, research attitude, scale development

Procedia PDF Downloads 163
28343 The Effect of Incorporation of Inulin as a Fat Replacer on the Quality of Milk Products Vis-À-Vis Ice Cream

Authors: Harish Kumar Sharma

Abstract:

The influence of different levels of inulin as a fat replacer on the quality of ice cream was investigated. The physicochemical, rheological and textural properties of a control ice cream and ice creams prepared with inulin in different proportions were determined and correlated with the different parameters using Pearson correlation and Principal Component Analysis (PCA). Based on overall acceptability, ice cream with 4% inulin was found best and was selected for the preparation of ice cream with inulin:SPI in different proportions. Compared with the control ice cream, the inulin:SPI formulations showed different rheological properties, resulting in significantly higher apparent viscosities and consistency coefficients and greater deviations from Newtonian flow. In addition, both hardness and melting resistance increased significantly with increasing SPI content in the ice cream prepared with inulin:SPI. The hardness value also increased for inulin-based ice cream compared with the control, but it melted significantly faster than the latter. Colour values decreased significantly in both cases compared with the control sample. The discussion focuses on the effect of the incorporation of inulin on the quality of ice cream.

Keywords: fat replacer, inulin, ice cream, viscosity, principal component analysis

Procedia PDF Downloads 352
28342 Analysis of Rural Roads in Developing Countries Using Principal Component Analysis and Simple Average Technique in the Development of a Road Safety Performance Index

Authors: Muhammad Tufail, Jawad Hussain, Hammad Hussain, Imran Hafeez, Naveed Ahmad

Abstract:

A road safety performance index is a composite index that combines various indicators of road safety into a single number. Development of a road safety performance index using appropriate safety performance indicators is essential to enhance road safety. However, road safety performance indices in developing countries have not been given as much priority as needed. The primary objective of this research is to develop a general Road Safety Performance Index (RSPI) for developing countries based on road facilities as well as road-user behavior. The secondary objectives include identifying the critical inputs to the RSPI and finding the better method of constructing the index. In this study, the RSPI is developed by selecting four main safety performance indicators: protective systems (seat belt, helmet, etc.), the road (road width, signalized intersections, number of lanes, speed limit), number of pedestrians, and number of vehicles. Data on these four safety performance indicators were collected using an observation survey on a 20 km road section of the National Highway N-125 in Taxila, Pakistan. For the development of this composite index, two methods are used: a) Principal Component Analysis (PCA) and b) the Equal Weighting (EW) method. PCA is used for extraction, weighting, and linear aggregation of indicators to obtain a single value. An individual index score was calculated for each road section by multiplying the weights by the standardized values of each safety performance indicator. The Simple Average technique, in turn, was used for weighting and linear aggregation of indicators to develop an RSPI. The road sections are ranked according to RSPI scores using both methods. The two weighting methods are compared, and the PCA method is found to be much more reliable than the Simple Average technique.
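
A minimal sketch of the two aggregation schemes compared above: indicators are standardized and combined either with equal weights (simple average) or with externally supplied weights such as PCA loadings. All indicator values and weights below are invented for illustration:

```python
def z_scores(values):
    """Standardize an indicator so road sections are comparable across units."""
    m = sum(values) / len(values)
    sd = (sum((v - m) ** 2 for v in values) / (len(values) - 1)) ** 0.5
    return [(v - m) / sd for v in values]

def composite_index(indicators, weights):
    """Weighted linear aggregation of standardized indicators per road section."""
    std = [z_scores(ind) for ind in indicators]
    n_sections = len(indicators[0])
    return [sum(w * std[i][j] for i, w in enumerate(weights))
            for j in range(n_sections)]

if __name__ == "__main__":
    # Hypothetical "higher is safer" indicators for four road sections.
    helmet_use = [0.55, 0.70, 0.40, 0.65]     # protective-system compliance
    signal_density = [1.2, 2.5, 0.8, 2.0]     # signalized intersections / km
    lane_count = [2, 4, 2, 3]                 # road features
    inds = [helmet_use, signal_density, lane_count]
    equal = composite_index(inds, [1 / 3] * 3)             # simple average
    pca_like = composite_index(inds, [0.45, 0.35, 0.20])   # e.g. PCA-derived
    print("equal weights:     ", [round(v, 2) for v in equal])
    print("PCA-style weights: ", [round(v, 2) for v in pca_like])
```

In the actual PCA method, the weights come from the variance explained by each component rather than being chosen by hand as here.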

Keywords: indicators, aggregation, principal component analysis, weighting, index score

Procedia PDF Downloads 121
28341 A Stable Method for Determination of the Number of Independent Components

Authors: Yuyan Yi, Jingyi Zheng, Nedret Billor

Abstract:

Independent component analysis (ICA) is one of the most commonly used blind source separation (BSS) techniques for signal pre-processing, such as noise reduction and feature extraction. The main parameter in the ICA method is the number of independent components (ICs). Although there have been several methods for determining the number of ICs, sufficient attention has not been given to this important parameter. In this study, we review the most used methods for determining the number of ICs and discuss their advantages and disadvantages. Further, we propose an improved version of the column-wise ICAbyBlock method for determining the number of ICs. To assess the performance of the proposed method, we compare column-wise ICAbyBlock with several existing methods across different ICA methods, using simulated and real signal data. Results show that the proposed column-wise ICAbyBlock is an effective and stable method for determining the optimal number of components in ICA. The method is simple, and its results can be demonstrated intuitively with good visualizations.
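
The abstract does not spell out the ICAbyBlock algorithm itself; as a hedged illustration of the general problem it addresses, a common (and much simpler) heuristic estimates the number of components from the explained variance of a PCA performed before ICA. The mixing setup below is synthetic:

```python
import numpy as np

def estimate_n_components(X, var_threshold=0.95):
    """Common pre-ICA heuristic: keep enough principal components of the
    centered data matrix X (samples x channels) to explain `var_threshold`
    of the total variance."""
    Xc = X - X.mean(axis=0)
    eigvals = np.linalg.svd(Xc, compute_uv=False) ** 2
    ratio = np.cumsum(eigvals) / eigvals.sum()
    return int(np.searchsorted(ratio, var_threshold) + 1)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    t = np.linspace(0, 1, 500)
    # Two sources (a sinusoid and a square wave) observed on 8 noisy channels.
    sources = np.stack([np.sin(2 * np.pi * 5 * t),
                        np.sign(np.sin(2 * np.pi * 3 * t))])
    mixing = rng.normal(size=(8, 2))
    X = (mixing @ sources).T + 1e-3 * rng.standard_normal((500, 8))
    print("estimated number of components:", estimate_n_components(X))
```

Such a variance cutoff is only a heuristic; methods like the paper's ICAbyBlock instead assess the stability of the extracted components.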

Keywords: independent component analysis, optimal number, column-wise, correlation coefficient, cross-validation, ICAbyBlock

Procedia PDF Downloads 74
28340 Assessment of Soil Quality Indicators in Rice Soil of Tamil Nadu

Authors: Kaleeswari R. K., Seevagan L .

Abstract:

Soil quality in an agroecosystem is influenced by the cropping system and by water and soil fertility management. A valid soil quality index helps to assess the soil and crop management practices needed for the desired productivity and soil health. Soil quality indices also provide an early indication of soil degradation and of needed remedial and rehabilitation measures. Imbalanced fertilization and inadequate organic carbon dynamics deteriorate soil quality in an intensive cropping system. The rice soil ecosystem is different from other arable systems, since rice is grown under submergence, which requires a different set of key soil attributes for enhancing soil quality and productivity. Assessment of a soil quality index involves indicator selection, indicator scoring, and combination into one comprehensive score. The most appropriate indicators for evaluating soil quality can be selected by establishing a minimum data set, which can be screened by linear and multiple regression, factor analysis and score functions. This investigation was carried out in the intensive rice-cultivating regions (each having >1.0 lakh hectares) of Tamil Nadu, viz., the Thanjavur, Thiruvarur, Nagapattinam, Villupuram, Thiruvannamalai, Cuddalore and Ramanathapuram districts. In each district, an intensive rice-growing block was identified, and in each block two sampling grids (10 x 10 sq. km) were used with a sampling depth of 10-15 cm. Using GIS coordinates, soil sampling was carried out at various locations in the study area. The numbers of soil sampling points were 41, 28, 28, 32, 37, 29 and 29 in the Thanjavur, Thiruvarur, Nagapattinam, Cuddalore, Villupuram, Thiruvannamalai and Ramanathapuram districts, respectively. Principal Component Analysis (PCA) is a data reduction tool used to select some of the potential indicators; a principal component is a linear combination of different variables that represents the maximum variance of the dataset.
Principal components with eigenvalues equal to or higher than 1.0 were taken as the minimum data set. PCA was used to select the representative soil quality indicators in rice soils based on factor loading values and contribution percentages. Variables having significant differences within the production system were used for the preparation of the minimum data set. Each principal component explained a certain amount of the variation (%) in the total dataset, and this percentage provided the weight for its variables. The final PCA-based soil quality equation is SQI = Σᵢ (Wᵢ × Sᵢ), where Sᵢ is the score for the subscripted variable and Wᵢ is the weighting factor derived from PCA. Higher index scores mean better soil quality. Soil respiration, soil available nitrogen and potentially mineralizable nitrogen were identified as soil quality indicators in the rice soils of the Cauvery Delta zone, covering the Thanjavur, Thiruvarur and Nagapattinam districts. Soil available phosphorus could be used as a soil quality indicator for the rice soils of the Cuddalore district. In rain-fed rice ecosystems on coastal sandy soil, DTPA-Zn could be used as an effective soil quality indicator. Among the soil parameters selected by PCA, microbial biomass nitrogen could be used as a quality indicator for the rice soils of the Villupuram district. The Cauvery Delta zone has a better SQI than the other intensive rice-growing zones of Tamil Nadu.
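
The SQI equation above is a simple weighted sum of indicator scores; a minimal sketch with hypothetical indicator scores and PCA-derived weights:

```python
def soil_quality_index(scores, weights):
    """PCA-weighted soil quality index: SQI = sum(W_i * S_i), where S_i is the
    0-1 score of indicator i and W_i its variance-based weight (summing to 1)."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights should sum to 1"
    return sum(w * s for w, s in zip(weights, scores))

if __name__ == "__main__":
    # Hypothetical indicator scores for one sampling grid (0 = worst, 1 = best)
    # and weights proportional to the variance each principal component explains.
    scores = {"soil respiration": 0.8, "available N": 0.6, "PMN": 0.7}
    weights = [0.5, 0.3, 0.2]
    sqi = soil_quality_index(list(scores.values()), weights)
    print(f"SQI = {sqi:.2f}")  # 0.5*0.8 + 0.3*0.6 + 0.2*0.7 = 0.72
```

Higher index scores mean better soil quality, so grids can be ranked directly by SQI.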

Keywords: soil quality index, soil attributes, soil mapping, and rice soil

Procedia PDF Downloads 52
28339 Parametric Appraisal of Robotic Arc Welding of Mild Steel Material by Principal Component Analysis-Fuzzy with Taguchi Technique

Authors: Amruta Rout, Golak Bihari Mahanta, Gunji Bala Murali, Bibhuti Bhusan Biswal, B. B. V. L. Deepak

Abstract:

The use of industrial robots to perform welding operations is one of the hallmarks of contemporary welding. Modeling the weld joint parameters and weld process parameters is one of the most crucial aspects of robotic welding. As the weld process parameters affect the weld joint parameters differently, a multi-objective optimization technique has to be utilized to obtain an optimal setting of the weld process parameters. In this paper, a hybrid optimization technique, i.e., Principal Component Analysis (PCA) combined with fuzzy logic, is proposed to find optimal settings of weld process parameters such as wire feed rate, welding current, gas flow rate, welding speed and nozzle tip-to-plate distance. The weld joint parameters considered for optimization are the depth of penetration, yield strength, and ultimate strength. PCA is a very efficient multi-objective technique for converting correlated and dependent parameters, such as the weld joint parameters, into uncorrelated and independent variables. In this approach, there is also no need to check the correlation among responses, as no individual weights are assigned to the responses. A fuzzy inference engine can efficiently incorporate these aspects into its internal hierarchy, thereby overcoming various limitations of existing optimization approaches. Finally, the Taguchi method is used to obtain the optimal setting of the weld process parameters. It is therefore concluded that the hybrid technique has advantages of its own that can be used for quality improvement in industrial applications.
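
Taguchi optimization ranks parameter settings by their signal-to-noise (S/N) ratio; for larger-the-better responses such as yield or ultimate strength, S/N = -10*log10(mean(1/y**2)). A minimal sketch with invented strength replicates:

```python
import math

def sn_larger_is_better(ys):
    """Taguchi signal-to-noise ratio for larger-the-better responses
    (e.g. ultimate strength): S/N = -10 * log10(mean(1 / y_i**2)) in dB."""
    return -10.0 * math.log10(sum(1.0 / y ** 2 for y in ys) / len(ys))

if __name__ == "__main__":
    # Hypothetical ultimate-strength replicates (MPa) at two parameter settings.
    setting_a = [455.0, 462.0, 458.0]
    setting_b = [470.0, 473.0, 471.0]
    for name, ys in [("A", setting_a), ("B", setting_b)]:
        print(f"setting {name}: S/N = {sn_larger_is_better(ys):.2f} dB")
```

The setting with the higher S/N ratio is preferred; in the hybrid method of the paper, this step follows the PCA-fuzzy aggregation of the multiple weld joint responses.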

Keywords: robotic arc welding, weld process parameters, weld joint parameters, principal component analysis, fuzzy logic, Taguchi method

Procedia PDF Downloads 154
28338 Toxicity Analysis of Metal Coating Industry Wastewaters by Phytotoxicity Method

Authors: Sukru Dursun, Zeynep Cansu Ayturan, Mostafa Maroof

Abstract:

Metal coating is an important method used for protecting metals against oxidation and corrosion, decreasing friction, protecting metals from chemicals, and easing cleaning of the metals. Several methods are used for metal coating, such as hot-dip galvanizing, thermal spraying, electroplating, and sherardizing; the method chosen depends on the type of metal. The materials most often used for coating are zinc, nickel, brass, chrome, gold, cadmium, copper, and silver. Among these materials, the chromium ion has significant negative impacts on humans, other living organisms, and the environment. In humans in particular, chromium may cause lung cancer, stomach ulcers, kidney and liver function disorders, and death. Therefore, wastewaters of the metal coating industry that contain chromium should be treated very carefully. In this study, chromium-containing wastewater produced by the metal coating industry was analysed with the phytotoxicity method, which is based on measuring the reaction of some plant species to different concentrations of chromium solution. The main plants used for phytotoxicity tests are Lepidium sativum and Lemna minor. Phytotoxicity testing makes it possible to assess the negative effects of chromium on plants and to propose more suitable treatment techniques for chromium-bearing wastewater. Furthermore, the results of the phytotoxicity tests were subjected to analysis of variance, and their significance with respect to the different concentrations of chromium solution was determined.

Keywords: metal coating wastewater, chrome, phytotoxicity, Lepidium sativum, Lemna minor

Procedia PDF Downloads 274
28337 Analysis of Building Response from Vertical Ground Motions

Authors: George C. Yao, Chao-Yu Tu, Wei-Chung Chen, Fung-Wen Kuo, Yu-Shan Chang

Abstract:

Building structures are subjected to both horizontal and vertical ground motions during earthquakes, but only the horizontal ground motion has been extensively studied and considered in design. Most prevailing seismic codes assume the vertical component to be 1/2 to 2/3 of the horizontal one. In order to understand building responses to vertical ground motions, many earthquake records are studied in this paper. System identification methods (ARX model) are used to analyze the strong motions and to determine the vertical amplification factors and natural frequencies of buildings. Analysis results show that the vertical amplification factors for high-rise and low-rise buildings are 1.78 and 2.52 respectively, and that the average vertical amplification factor over all buildings is about 2. The relationship between the vertical natural frequency and building height was regressed to a suggested formula in this study. The result conveys an important message: the taller the building, the greater the chance of resonant vertical vibration in the building.
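The ARX identification step can be sketched as an ordinary least-squares fit of the output on lagged outputs and inputs. The second-order model and the synthetic record below are illustrative only, not the actual strong-motion data used in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a 2nd-order ARX system: y[k] = a1*y[k-1] + a2*y[k-2] + b1*u[k-1] + noise
a1, a2, b1 = 1.5, -0.7, 0.5
u = rng.standard_normal(2000)          # stand-in for the ground-motion input
y = np.zeros_like(u)
for k in range(2, len(u)):
    y[k] = a1 * y[k-1] + a2 * y[k-2] + b1 * u[k-1] + 0.01 * rng.standard_normal()

# Stack lagged outputs/inputs into a regression matrix and solve by least squares
Phi = np.column_stack([y[1:-1], y[:-2], u[1:-1]])
theta, *_ = np.linalg.lstsq(Phi, y[2:], rcond=None)
print(np.round(theta, 2))   # estimates of [a1, a2, b1]
```

The natural frequency of the identified system then follows from the poles of the estimated model, i.e. the roots of z² − a1·z − a2.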

Keywords: vertical ground motion, vertical amplification factor, natural frequency, component

Procedia PDF Downloads 290
28336 CompPSA: A Component-Based Pairwise RNA Secondary Structure Alignment Algorithm

Authors: Ghada Badr, Arwa Alturki

Abstract:

The biological function of an RNA molecule depends on its structure. The objective of alignment is to find the homology between two or more RNA secondary structures. Knowing the common functionalities between two RNA structures allows a better understanding of them and the discovery of further relationships between them. In addition, identifying non-coding RNAs -those not translated into proteins- is a popular application in which RNA structural alignment is the first step. A few methods for RNA structure-to-structure alignment have been developed, but most of them perform partial structure-to-structure, sequence-to-structure, or structure-to-sequence alignment. Less attention has been given in the literature to efficient RNA structure representations, and structure-to-structure alignment methods are lacking. In this paper, we introduce an O(N²) Component-based Pairwise RNA Structure Alignment (CompPSA) algorithm, where structures are given in a component-based representation and N is the maximum number of components in the two structures. The proposed algorithm compares two RNA secondary structures based on their weighted component features rather than on their base-pair details. Extensive experiments on different real and simulated datasets illustrate the efficiency of the CompPSA algorithm compared to other approaches. The CompPSA algorithm shows an accurate similarity measure between components. The algorithm gives the user the flexibility to align the two RNA structures based on their weighted features (position, full length, and/or stem length). Moreover, the algorithm proves scalable and efficient in time and memory performance.
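The abstract does not give the scoring details of CompPSA, but an O(N²) component-based pairwise alignment of the kind described can be sketched as a Needleman-Wunsch-style dynamic program over component feature vectors. The feature triples, weights, and gap penalty below are assumptions for illustration, not the authors' actual scheme:

```python
# Each component is a feature vector: (position, full length, stem length)
def comp_dist(c1, c2, w=(1.0, 1.0, 1.0)):
    """Weighted distance between two components (lower means more similar)."""
    return sum(wi * abs(a - b) for wi, a, b in zip(w, c1, c2))

def align(S1, S2, gap=5.0):
    """O(N^2) global alignment cost between two lists of components."""
    n, m = len(S1), len(S2)
    D = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        D[i][0] = i * gap
    for j in range(1, m + 1):
        D[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i][j] = min(D[i-1][j-1] + comp_dist(S1[i-1], S2[j-1]),  # match
                          D[i-1][j] + gap,                            # delete
                          D[i][j-1] + gap)                            # insert
    return D[n][m]

s1 = [(0, 12, 4), (15, 8, 3), (30, 10, 5)]
s2 = [(1, 12, 4), (16, 7, 3), (31, 10, 5)]
print(align(s1, s2))   # small cost -> highly similar structures
```

Adjusting the weight tuple `w` mirrors the user-selectable weighting of position, full length, and stem length mentioned in the abstract.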

Keywords: alignment, RNA secondary structure, pairwise, component-based, data mining

Procedia PDF Downloads 429
28335 Impact on the Results of Sub-Group Analysis on Performance of Recommender Systems

Authors: Ho Yeon Park, Kyoung-Jae Kim

Abstract:

The purpose of this study is to investigate whether friendship in social media can be an important factor in recommender systems, through a social-scientific analysis of friendship in popular social media such as Facebook and Twitter. For this purpose, this study analyzes data on friendship in real social media using component analysis and clique analysis, two sub-group analysis techniques in social network analysis. We propose an algorithm that reflects the results of the sub-group analysis in the recommender system. The key to this algorithm is to ensure that the preferences of users within the same friendship sub-group are more strongly reflected in the recommendations generated for a user. As a result of this study, outcomes of various sub-group analyses were derived, and it was confirmed that they differ from the results of the existing recommender system. Therefore, the results of the sub-group analysis are considered to affect the recommendation performance of the system. Future research will attempt to generalize these results through further analysis of various social data.
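A minimal sketch of the weighting idea, assuming the authors' actual scheme differs in its details: enumerate maximal cliques (friendship sub-groups) with a Bron-Kerbosch recursion, then weight ratings from members of the target user's clique more heavily. The graph, ratings, and boost factor are hypothetical:

```python
def bron_kerbosch(R, P, X, adj, cliques):
    """Enumerate maximal cliques of the friendship graph."""
    if not P and not X:
        cliques.append(R)
        return
    for v in list(P):
        bron_kerbosch(R | {v}, P & adj[v], X & adj[v], adj, cliques)
        P = P - {v}
        X = X | {v}

# Hypothetical friendship graph (adjacency sets) and item ratings
adj = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b"}, "d": {"e"}, "e": {"d"}}
ratings = {"a": {"item1": 5}, "b": {"item1": 4}, "d": {"item1": 5}}

cliques = []
bron_kerbosch(set(), set(adj), set(), adj, cliques)

def score(user, item, boost=2.0):
    """Average rating for an item, weighting raters in the user's clique more."""
    group = next((c for c in cliques if user in c), {user})
    num = den = 0.0
    for rater, items in ratings.items():
        if item in items:
            w = boost if rater in group else 1.0
            num += w * items[item]
            den += w
    return num / den if den else 0.0

print(round(score("c", "item1"), 2))  # clique-mates' ratings dominate
```

Here user "c" belongs to the clique {a, b, c}, so the ratings of "a" and "b" pull the score toward their values rather than toward the outsider "d".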

Keywords: sub-group analysis, social media, social network analysis, recommender systems

Procedia PDF Downloads 324
28334 Bi-Component Particle Segregation Studies in a Spiral Concentrator Using Experimental and CFD Techniques

Authors: Prudhvinath Reddy Ankireddy, Narasimha Mangadoddy

Abstract:

Spiral concentrators are commonly used in various industries, including mineral and coal processing, to efficiently separate materials based on their density and size. In these concentrators, a mixture of solid particles and fluid (usually water) is introduced as feed at the top of a spiral channel. As the mixture flows down the spiral, centrifugal and gravitational forces act on the particles, causing them to stratify by density and size. Spiral flows exhibit complex fluid dynamics, and the interactions involve multiple phases and components. Understanding the behavior of these phases within the spiral concentrator is crucial for achieving efficient separation. In this work, an experimental bi-component particle interaction study is conducted using magnetite (higher density) and silica (lower density) in different proportions processed in the spiral concentrator. Observation of the separation reveals that denser particles accumulate towards the inner region of the spiral trough, while a significant concentration of lighter particles is found close to the outer edge. The 5th turn of the spiral trough is partitioned into five zones to achieve a comprehensive distribution analysis of bi-component particle segregation. Samples are gathered from these individual streams using an in-house sample collector, and subsequent analysis is conducted to assess component segregation. Along the trough, there was a decline in the concentration of coarser particles, accompanied by an increase in the concentration of lighter particles. The segregation pattern indicates that the heavier coarse component accumulates in the inner zone, whereas the lighter fine component collects in the outer zone. The middle zone primarily consists of heavier fine particles and lighter coarse particles. The zone-wise results reveal that a significant fraction of the segregation occurs in the inner and middle zones, while finer magnetite and silica particles predominantly accumulate in the outer zones with the smallest fraction of segregation. Additionally, numerical simulations are carried out using a computational fluid dynamics (CFD) model based on the volume of fluid (VOF) approach incorporating the RSM turbulence model. The discrete phase model (DPM) is employed for particle tracking, thereby capturing the particle segregation of magnetite and silica along the spiral trough.

Keywords: spiral concentrator, bi-component particle segregation, computational fluid dynamics, discrete phase model

Procedia PDF Downloads 39
28333 A Comprehensive Approach in Calculating the Impact of the Ground on Radiated Electromagnetic Fields Due to Lightning

Authors: Lahcene Boukelkoul

Abstract:

The finite conductivity of the ground is of great importance when calculating the voltages induced by the electromagnetic fields radiated by lightning. In this paper, we give a comprehensive approach to calculating the impact of the ground on the electromagnetic fields radiated by lightning. The vertical component of the lightning electric field can be calculated with a reasonable approximation by assuming a perfectly conducting ground, provided the observation point is no more than a few kilometres from the lightning channel. For distant observation points, however, the radiated vertical component of the lightning electric field is attenuated by the finitely conducting ground. The attenuation is calculated using expressions elaborated for both low and high frequencies. The horizontal component of the electric field is even more affected by the finite conductivity of the ground. Moreover, the contribution of the horizontal component of the electric field to the voltages induced on an overhead transmission line is greater than that of the vertical component. Therefore, the calculation of the horizontal electric field is of great concern for the simulation of lightning-induced voltages. For field-to-transmission-line coupling, the ground impedance is calculated for the early-time behaviour and for the low-frequency range.
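One common way to fold finite ground conductivity into the horizontal-field calculation (not necessarily the exact formulation used by the author) is the Cooray-Rubinstein correction, in which the perfectly conducting ground field is corrected by a ground surface-impedance term proportional to 1/√(εᵣ + σ/(jωε₀)). A sketch of that impedance factor, with illustrative soil parameters:

```python
import math, cmath

EPS0 = 8.854e-12   # vacuum permittivity, F/m

def surface_impedance_factor(f_hz, sigma, eps_r):
    """Normalized ground factor 1/sqrt(eps_r + sigma/(j*w*eps0)) appearing in
    the Cooray-Rubinstein correction of the horizontal electric field."""
    w = 2 * math.pi * f_hz
    return 1.0 / cmath.sqrt(eps_r + sigma / (1j * w * EPS0))

# Poorly conducting ground (0.001 S/m) vs better-conducting ground (0.01 S/m)
# at 100 kHz, both with relative permittivity 10 (illustrative values)
poor = abs(surface_impedance_factor(1e5, 0.001, 10))
good = abs(surface_impedance_factor(1e5, 0.01, 10))
print(poor > good)   # lower conductivity -> larger horizontal-field correction
```

The factor vanishes as σ → ∞, recovering the perfectly conducting ground case in which the horizontal field at the surface is unchanged.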

Keywords: power engineering, radiated electromagnetic fields, lightning-induced voltages, lightning electric field

Procedia PDF Downloads 376
28332 Combining Chiller and Variable Frequency Drives

Authors: Nasir Khalid, S. Thirumalaichelvam

Abstract:

In most buildings, according to the US Department of Energy Data Book, the electrical consumption attributable to the centralized heating, ventilation and air-conditioning (HVAC) component can be as high as 40-60% of the total electricity consumption of the entire building. To provide efficient energy management for today's market, researchers are finding new ways to develop systems that save even more of a building's electrical consumption. In this concept paper, a system known as the Intelligent Chiller Energy Efficiency (iCEE) System is developed that is capable of saving up to 25% of a chiller's existing electrical energy consumption. With variable frequency drives (VFDs), research has found significant savings of up to 30% of electrical energy consumption. Together with VFDs at specific Air Handling Units (AHUs) of the HVAC system, this system will save even more electrical energy. The iCEE System is compatible with any make, model, or age of electrically driven centrifugal, rotary, or reciprocating chiller air-conditioning system. The iCEE system uses the engineering principles of efficiency analysis, enthalpy analysis, heat transfer, mathematical prediction, a modified genetic algorithm, psychrometric analysis, and optimization formulation to achieve true and tangible energy savings for consumers.
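The fan-affinity laws behind VFD savings — flow scales linearly with speed while shaft power scales with the cube of speed — can be sketched as follows; the baseline fan rating is illustrative, not from the paper:

```python
def vfd_power(rated_kw, speed_fraction):
    """Affinity law: shaft power scales with the cube of fan/pump speed."""
    return rated_kw * speed_fraction ** 3

rated = 30.0                       # hypothetical AHU fan rating, kW at full speed
at_80pct = vfd_power(rated, 0.8)   # 30 * 0.8^3 = 15.36 kW
savings = 1 - at_80pct / rated
print(f"{savings:.0%}")            # ~49% power reduction at 80% speed
```

This cubic relationship is why even a modest speed reduction at part load yields the large percentage savings quoted for VFDs.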

Keywords: variable frequency drives, adjustable speed drives, ac drives, chiller energy system

Procedia PDF Downloads 530
28331 Using Heat-Mask in the Thermoforming Machine for Component Positioning in Thermoformed Electronics

Authors: Behnam Madadnia

Abstract:

For several years, 3D-shaped electronics have been on the rise, with many uses in home appliances, automotive, and manufacturing. One of the biggest challenges in the fabrication of 3D-shaped electronics made by thermoforming is repeatable and accurate component positioning; typically, there is no control over the final position of a component. This paper aims to address this issue and presents a reliable approach for guiding electronic components to the desired place during thermoforming. We propose a heat-control mask in the thermoforming machine that controls the heating of the polymer, preventing specific parts from becoming formable, which ensures the mechanical stability of the conductive traces during thermoforming of the substrate. We verified the accuracy of our approach by applying the method to a real industrial semi-sphere mold, positioning 7 LEDs and one touch sensor. We measured the positions of the LEDs after thermoforming to prove the repeatability of the process. The experimental results demonstrate that the proposed method can position electronic components in thermoformed 3D electronics with high precision.

Keywords: 3D-shaped electronics, electronic components, thermoforming, component positioning

Procedia PDF Downloads 64
28330 Evaluation of Real-Time Background Subtraction Technique for Moving Object Detection Using Fast-Independent Component Analysis

Authors: Naoum Abderrahmane, Boumehed Meriem, Alshaqaqi Belal

Abstract:

Background subtraction is a widely used technique for detecting moving objects in video surveillance by extracting the foreground objects from a reference background image. A good background subtraction algorithm faces many challenges, such as changes in illumination, dynamic backgrounds (swinging leaves, rain, snow), and changes to the background itself, for example vehicles moving and stopping. In this paper, we propose an efficient and accurate background subtraction method for moving object detection in video surveillance. The main idea is to use a refined fast independent component analysis (fast-ICA) algorithm to separate background, noise, and foreground masks from an image sequence in practical environments. The fast-ICA algorithm is adapted and adjusted through matrix calculation and a search for an optimal non-quadratic function, making it faster and more robust. Moreover, in order to estimate the parameters of the de-mixing matrix and the denoising de-mixing matrix, we propose converting all images to the YCrCb color space, where the luma component Y (the brightness of the color) gives suitable results. The proposed technique has been verified on the publicly available CDnet 2012 and CDnet 2014 datasets, and experimental results show that our algorithm detects moving objects competently and accurately in challenging conditions, compared to other methods in the literature, in terms of quantitative and qualitative evaluations at a real-time frame rate.
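The core fixed-point iteration of fast-ICA — whiten the data, then iterate w ← E[z·g(wᵀz)] − E[g′(wᵀz)]·w with deflation — can be sketched with the tanh contrast (the paper searches for an optimal non-quadratic function; this is a generic sketch, not the authors' adapted version, and the two mixed signals stand in for the background/foreground sources):

```python
import numpy as np

def fast_ica(X, n_iter=200, seed=0):
    """Deflation FastICA with the tanh contrast; X has one mixed signal per row."""
    n, m = X.shape
    X = X - X.mean(axis=1, keepdims=True)
    # Whitening: rotate and rescale so the rows are uncorrelated, unit variance
    d, E = np.linalg.eigh(np.cov(X))
    Z = E @ np.diag(d ** -0.5) @ E.T @ X
    rng = np.random.default_rng(seed)
    W = np.zeros((n, n))
    for i in range(n):
        w = rng.standard_normal(n)
        for _ in range(n_iter):
            wx = w @ Z
            g, g_prime = np.tanh(wx), 1.0 - np.tanh(wx) ** 2
            w = (Z @ g) / m - g_prime.mean() * w   # fixed-point update
            w -= W[:i].T @ (W[:i] @ w)             # deflation: stay orthogonal
            w /= np.linalg.norm(w)
        W[i] = w
    return W @ Z                                   # estimated source signals

# Two synthetic sources (stand-ins for "background" and "foreground" activity)
t = np.linspace(0, 1, 1000)
S = np.vstack([np.sin(40 * t), np.sign(np.sin(7 * t))])
A = np.array([[1.0, 0.6], [0.4, 1.0]])             # unknown mixing matrix
recovered = fast_ica(A @ S)
```

Each recovered row matches one original source up to sign and scale, which is the usual ICA indeterminacy.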

Keywords: background subtraction, moving object detection, fast-ICA, de-mixing matrix

Procedia PDF Downloads 68
28329 Clinical Impact of Ultra-Deep Versus Sanger Sequencing Detection of Minority Mutations on the HIV-1 Drug Resistance Genotype Interpretations after Virological Failure

Authors: S. Mohamed, D. Gonzalez, C. Sayada, P. Halfon

Abstract:

Drug resistance mutations are routinely detected using standard Sanger sequencing, which does not detect minor variants with a frequency below 20%. The impact of detecting minor variants generated by ultra-deep sequencing (UDS) on HIV drug-resistance (DR) interpretations has not yet been studied. Fifty HIV-1 patients who experienced virological failure were included in this retrospective study. The HIV-1 UDS protocol allowed the detection and quantification of HIV-1 protease and reverse transcriptase variants related to genotypes A, B, C, E, F, and G. DeepChek®-HIV simplified DR interpretation software was used to compare Sanger sequencing and UDS. The total time required for the UDS protocol was found to be approximately three times longer than Sanger sequencing with equivalent reagent costs. UDS detected all of the mutations found by population sequencing and identified additional resistance variants in all patients. An analysis of DR revealed a total of 643 and 224 clinically relevant mutations by UDS and Sanger sequencing, respectively. Three resistance mutations with > 20% prevalence were detected solely by UDS: A98S (23%), E138A (21%) and V179I (25%). A significant difference in the DR interpretations for 19 antiretroviral drugs was observed between the UDS and Sanger sequencing methods. Y181C and T215Y were the most frequent mutations associated with interpretation differences. A combination of UDS and DeepChek® software for the interpretation of DR results would help clinicians provide suitable treatments. A cut-off of 1% allowed a better characterisation of the viral population by identifying additional resistance mutations and improving the DR interpretation.

Keywords: HIV-1, ultra-deep sequencing, Sanger sequencing, drug resistance

Procedia PDF Downloads 304
28328 Network Analysis and Sex Prediction based on a full Human Brain Connectome

Authors: Oleg Vlasovets, Fabian Schaipp, Christian L. Mueller

Abstract:

We conduct a network analysis and predict the sex of 1000 participants based on their "connectome" - the pairwise Pearson correlations across 436 brain parcels. We solve the non-smooth convex optimization problem known as the Graphical Lasso, where the solution includes a low-rank component. Using this solution in a machine learning model for sex prediction, we explain the brain-parcel connectivity patterns associated with sex.
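The Graphical Lasso step — estimating a sparse inverse covariance (precision) matrix whose nonzero entries mark direct, conditional dependencies between parcels — can be sketched with scikit-learn. This is a generic sketch on synthetic data, not the authors' low-rank variant, and the dimensions are scaled down from the paper's 436 parcels:

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)

# Synthetic "parcel" data: 200 subjects x 10 brain parcels
n_subjects, n_parcels = 200, 10
X = rng.standard_normal((n_subjects, n_parcels))
X[:, 1] += 0.8 * X[:, 0]          # inject one strong conditional dependency

model = GraphicalLasso(alpha=0.1).fit(X)
P = model.precision_              # sparse estimate of the inverse covariance

# Nonzero off-diagonal entries correspond to edges of the conditional graph
print(abs(P[0, 1]) > 1e-6)        # the injected edge survives the L1 penalty
```

The rows of such a precision matrix (or features derived from it) can then feed a downstream classifier for the sex-prediction task.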

Keywords: network analysis, neuroscience, machine learning, optimization

Procedia PDF Downloads 113
28327 Principal Component Analysis Combined Machine Learning Techniques on Pharmaceutical Samples by Laser Induced Breakdown Spectroscopy

Authors: Kemal Efe Eseller, Göktuğ Yazici

Abstract:

Laser-induced breakdown spectroscopy (LIBS) is a rapid optical atomic emission spectroscopy technique used for material identification and analysis, with the advantages of in-situ analysis, elimination of intensive sample preparation, and micro-destructive testing of the material. LIBS delivers short laser pulses onto the material to create a plasma by exciting the material beyond a certain threshold. The plasma characteristics, consisting of wavelength values and intensity amplitudes, depend on the material and the experimental environment. In the present work, the spectral profiles of medicine samples were obtained via LIBS. The dataset includes two different concentrations for each of two paracetamol-based medicines, namely Aferin and Parafon. The spectral data were preprocessed by filling outliers based on quartiles, smoothing the spectra to eliminate noise, and normalizing both the wavelength and intensity axes. Statistical information was obtained, and principal component analysis (PCA) was applied to both the preprocessed and raw datasets. The machine learning models were built with two different train-test splits: 70% training - 30% test and 80% training - 20% test. Cross-validation was preferred to protect the models against overfitting, because the sample amount is small. The machine learning results on the preprocessed and raw datasets were compared for both splits. This is the first time that all the supervised machine learning classification algorithms (decision trees, discriminant analysis, naïve Bayes, support vector machines (SVM), k-nearest neighbors (k-NN), ensemble learning, and neural networks) have been applied to LIBS data of paracetamol-based pharmaceutical samples at different concentrations, on both preprocessed and raw datasets, in order to observe the effect of preprocessing.
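The preprocessing chain described — quartile-based outlier filling, smoothing, normalization, then PCA — can be sketched as below. The synthetic Gaussian "emission line" and all numeric choices (IQR fences, window size) are illustrative assumptions, not the actual LIBS data or settings:

```python
import numpy as np

def preprocess(intensity, window=5):
    """Quartile outlier clipping, moving-average smoothing, min-max scaling."""
    q1, q3 = np.percentile(intensity, [25, 75])
    iqr = q3 - q1
    clipped = np.clip(intensity, q1 - 1.5 * iqr, q3 + 1.5 * iqr)  # fill outliers
    kernel = np.ones(window) / window
    smoothed = np.convolve(clipped, kernel, mode="same")          # denoise
    lo, hi = smoothed.min(), smoothed.max()
    return (smoothed - lo) / (hi - lo)                            # scale to [0, 1]

def pca_scores(spectra, n_components=2):
    """Project mean-centered spectra (one per row) onto the top PCs via SVD."""
    centered = spectra - spectra.mean(axis=0)
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ Vt[:n_components].T

rng = np.random.default_rng(1)
wl = np.linspace(200, 900, 500)                       # wavelength axis, nm
spectra = np.exp(-((wl - 589) / 5) ** 2) + 0.05 * rng.standard_normal((8, 500))
clean = np.array([preprocess(s) for s in spectra])
print(pca_scores(clean).shape)                        # (8, 2)
```

The two-column score matrix is the kind of low-dimensional input a classifier would receive after PCA-based feature reduction.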

Keywords: machine learning, laser-induced breakdown spectroscopy, medicines, principal component analysis, preprocessing

Procedia PDF Downloads 65
28326 Physical, Chemical and Mineralogical Characterization of Construction and Demolition Waste Produced in Greece

Authors: C. Alexandridou, G. N. Angelopoulos, F. A. Coutelieris

Abstract:

The construction industry in Greece annually consumes more than 25 million tons of natural aggregates, originating mainly from quarries. At the same time, more than 2 million tons of construction and demolition waste are deposited every year, usually without control, thereby increasing the environmental impact of this sector. A potential alternative for saving natural resources and minimizing landfilling could be the recycling and re-use of Construction and Demolition Waste (CDW) in concrete production. Moreover, in order to conform to European legislation, Greece is obliged to recycle a minimum of 70% of non-hazardous construction and demolition waste by 2020. In this paper, a characterization of recycled materials - commercially and laboratory produced, coarse and fine, Recycled Concrete Aggregates (RCA) - has been performed. X-Ray Fluorescence and X-ray diffraction (XRD) analyses were used for chemical and mineralogical analysis, respectively. Physical properties such as particle density, water absorption, sand equivalent, and resistance to fragmentation were also determined. This study, performed for the first time in Greece, aims at outlining the differences between RCA and natural aggregates and evaluating their possible influence on concrete performance. Results indicate that the chemical composition of RCA is enriched in Si, Al, and alkali oxides compared to natural aggregates. XRD analysis indicated the presence of calcite and quartz, with minor peaks of mica and feldspars. Of the evaluated physical properties of coarse RCA, only water absorption and resistance to fragmentation seem to have a direct influence on the properties of concrete. Low sand equivalent and significantly high water absorption values indicate that the fine fractions of RCA cannot be used for concrete production unless further processed. The chemical properties of RCA in terms of water-soluble ions are similar to those of natural aggregates. Four different concrete mixtures were produced and examined, replacing natural coarse aggregates with RCA at ratios of 0%, 25%, 50%, and 75% respectively. Results indicate that concrete mixtures containing recycled concrete aggregates show a minor deterioration of their properties (3-9% lower compressive strength at 28 days) compared to conventional concrete containing the same cement quantity.

Keywords: chemical and physical characterization, compressive strength, mineralogical analysis, recycled concrete aggregates, waste management

Procedia PDF Downloads 204
28325 Data Management System for Environmental Remediation

Authors: Elizaveta Petelina, Anton Sizo

Abstract:

Environmental remediation projects deal with a wide spectrum of data, including data collected during site assessment, execution of remediation activities, and environmental monitoring. Appropriate data management is therefore required as a key factor for well-grounded decision making. The Environmental Data Management System (EDMS) was developed to address all necessary data management aspects, including efficient data handling and data interoperability, access to historical and current data, spatial and temporal analysis, 2D and 3D data visualization, mapping, and data sharing. The system focuses on supporting well-grounded decision making in relation to required mitigation measures and assessment of remediation success. The EDMS is a combination of enterprise- and desktop-level data management and Geographic Information System (GIS) tools assembled to assist environmental remediation, project planning and evaluation, and environmental monitoring of mine sites. The EDMS consists of seven main components: a Geodatabase containing a spatial database to store and query spatially distributed data; a GIS and Web GIS component combining desktop and server-based GIS solutions; a Field Data Collection component containing tools for field work; a Quality Assurance (QA)/Quality Control (QC) component combining operational procedures for QA and measures for QC; a Data Import and Export component including tools and templates to support project data flow; a Lab Data component providing a connection between the EDMS and laboratory information management systems; and a Reporting component including server-based services for real-time report generation. The EDMS has been successfully implemented for Project CLEANS (Clean-up of Abandoned Northern Mines), a multi-year, multimillion-dollar project aimed at assessing and reclaiming 37 uranium mine sites in northern Saskatchewan, Canada. The EDMS has effectively facilitated integrated decision-making for CLEANS project managers and transparency amongst stakeholders.

Keywords: data management, environmental remediation, geographic information system, GIS, decision making

Procedia PDF Downloads 125