Search results for: DC Component
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 921

771 A Neural Network Approach in Predicting the Blood Glucose Level for Diabetic Patients

Authors: Zarita Zainuddin, Ong Pauline, C. Ardil

Abstract:

Diabetes Mellitus is a chronic metabolic disorder in which improper management of blood glucose levels exposes diabetic patients to the risk of heart attack, kidney disease and renal failure. This paper attempts to enhance the accuracy of predicting the advancing blood glucose levels of diabetic patients by combining principal component analysis and a wavelet neural network. The proposed system makes separate blood glucose predictions for the morning, afternoon, evening and night intervals, using a dataset from one patient covering a period of 77 days. Comparisons of prediction accuracy are made with other neural network models that use the same dataset. The comparison results show an overall improvement in accuracy, which indicates the effectiveness of the proposed system.

Keywords: Diabetes Mellitus, principal component analysis, time-series, wavelet neural network.
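
As an illustration of the kind of pipeline this abstract describes (not the authors' exact wavelet network), a minimal sketch using scikit-learn, with an ordinary multilayer perceptron standing in for the wavelet neural network and synthetic data in place of the 77-day patient dataset:

# Minimal sketch: PCA for feature reduction followed by a neural-network
# regressor, as a stand-in for the paper's wavelet neural network.
# The data here are synthetic placeholders, not the patient dataset.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 10))          # hypothetical per-interval features
y = X[:, 0] * 2.0 + X[:, 1] + rng.normal(scale=0.1, size=300)  # glucose level

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = make_pipeline(PCA(n_components=4),
                      MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                                   random_state=0))
model.fit(X_train, y_train)
print("held-out R^2:", model.score(X_test, y_test))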

770 Effects of Different Meteorological Variables on Reference Evapotranspiration Modeling: Application of Principal Component Analysis

Authors: Akinola Ikudayisi, Josiah Adeyemo

Abstract:

The correct estimation of reference evapotranspiration (ETₒ) is required for effective irrigation water resources planning and management. However, there are several variables that must be considered while estimating and modeling ETₒ. This study therefore carries out a multivariate analysis of the correlated variables involved in the estimation and modeling of ETₒ at the Vaalharts irrigation scheme (VIS) in South Africa using the Principal Component Analysis (PCA) technique. Weather and meteorological data between 1994 and 2014 were obtained from the South African Weather Service (SAWS) and the Agricultural Research Council (ARC) in South Africa for this study. Average monthly data of minimum and maximum temperature (°C), rainfall (mm), relative humidity (%), and wind speed (m/s) were the inputs to the PCA-based model, while ETₒ is the output. The PCA technique was adopted to extract the most important information from the dataset and to analyze the relationship between the five variables and ETₒ, in order to determine the most significant variables affecting ETₒ estimation at VIS. From the model performance, two principal components accounting for 82.7% of the variance were retained after the eigenvector extraction. The results of the two principal components were compared, and the model output shows that minimum temperature, maximum temperature and wind speed are the most important variables in ETₒ estimation and modeling at VIS. In other words, ETₒ increases with temperature and wind speed. Other variables such as rainfall and relative humidity are less important and cannot provide enough information about ETₒ estimation at VIS. The outcome of this study has helped to reduce the input dimensionality from five variables to the three most significant variables in ETₒ modelling at VIS, South Africa.

Keywords: Irrigation, principal component analysis, reference evapotranspiration, Vaalharts.
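
A minimal sketch of the variance-retention step described above, assuming standardized monthly means for the five meteorological inputs (synthetic values here, not the SAWS/ARC records):

# Sketch: standardize the five inputs, run PCA, and keep the components
# whose cumulative explained variance first exceeds ~80% (the paper retains
# two components at 82.7%). Data below are synthetic placeholders.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
# columns: Tmin, Tmax, rainfall, relative humidity, wind speed
X = rng.normal(size=(240, 5))
Z = StandardScaler().fit_transform(X)

pca = PCA().fit(Z)
cum = np.cumsum(pca.explained_variance_ratio_)
k = int(np.searchsorted(cum, 0.80)) + 1
print("components retained:", k, "cumulative variance:", cum[k - 1])
print("loadings of retained components:\n", pca.components_[:k])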

769 Numerical Analysis on Rapid Decompression in Conventional Dry Gases using One-Dimensional Mathematical Modeling

Authors: Evgeniy Burlutskiy

Abstract:

The paper presents a one-dimensional transient mathematical model of compressible thermal multi-component gas mixture flows in pipes. The set of the mass, momentum and enthalpy conservation equations for gas phase is solved. Thermo-physical properties of multi-component gas mixture are calculated by solving the Equation of State (EOS) model. The Soave-Redlich-Kwong (SRK-EOS) model is chosen. Gas mixture viscosity is calculated on the basis of the Lee-Gonzales-Eakin (LGE) correlation. Numerical analysis on rapid decompression in conventional dry gases is performed by using the proposed mathematical model. The model is validated on measured values of the decompression wave speed in dry natural gas mixtures. All predictions show excellent agreement with the experimental data at high and low pressure. The presented model predicts the decompression in dry natural gas mixtures much better than GASDECOM and OLGA codes, which are the most frequently-used codes in oil and gas pipeline transport service.

Keywords: Mathematical model, Rapid Gas Decompression
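
For readers unfamiliar with the Soave-Redlich-Kwong equation of state named in this abstract, a minimal sketch of the standard single-component compressibility-factor calculation; the critical constants below are approximate methane values used only as an illustration, whereas the paper applies the EOS inside a multi-component transient flow model:

# Sketch: SRK compressibility factor Z for one component by solving the
# standard cubic  Z^3 - Z^2 + (A - B - B^2) Z - A B = 0.
# Critical constants are approximate values for methane (illustrative only).
import numpy as np

R = 8.314                               # J/(mol K)
Tc, Pc, omega = 190.6, 4.599e6, 0.011   # methane (approximate)
T, P = 280.0, 5.0e6                     # example state

m = 0.480 + 1.574 * omega - 0.176 * omega**2
alpha = (1.0 + m * (1.0 - np.sqrt(T / Tc)))**2
a = 0.42748 * R**2 * Tc**2 / Pc
b = 0.08664 * R * Tc / Pc

A = a * alpha * P / (R * T)**2
B = b * P / (R * T)
roots = np.roots([1.0, -1.0, A - B - B**2, -A * B])
Z = max(r.real for r in roots if abs(r.imag) < 1e-9)   # gas-phase root
print("SRK compressibility factor Z =", Z)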

768 Identification of Reusable Software Modules in Function Oriented Software Systems using Neural Network Based Technique

Authors: Sonia Manhas, Parvinder S. Sandhu, Vinay Chopra, Nirvair Neeru

Abstract:

The cost of developing software from scratch can be saved by identifying and extracting reusable components from already developed and existing software systems or legacy systems [6]. However, the issue of how to identify reusable components from existing systems has remained relatively unexplored. We have used a metric-based approach for characterizing a software module. In the present work, the values of McCabe's Cyclomatic Complexity Measure for complexity measurement, the Regularity Metric, the Halstead Software Science Indicator for volume indication, the Reuse Frequency metric and the Coupling Metric of a software component are used as input attributes to different types of neural network systems, and the reusability of the software component is calculated. The results are recorded in terms of Accuracy, Mean Absolute Error (MAE) and Root Mean Squared Error (RMSE).

Keywords: Software reusability, Neural Networks, MAE, RMSE, Accuracy.
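
A minimal sketch, on assumed synthetic data, of the evaluation loop described above: five module metrics fed to a neural-network regressor, with accuracy reported as MAE and RMSE:

# Sketch: five software metrics (complexity, regularity, volume, reuse
# frequency, coupling) -> neural-network estimate of reusability.
# Metric values are random placeholders, not measurements of real modules.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error, mean_squared_error

rng = np.random.default_rng(2)
X = rng.uniform(size=(200, 5))                 # metric values per module
y = 0.4 * X[:, 1] + 0.3 * X[:, 3] - 0.2 * X[:, 0] + rng.normal(0, 0.05, 200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
nn = MLPRegressor(hidden_layer_sizes=(8,), max_iter=3000, random_state=0)
nn.fit(X_tr, y_tr)
pred = nn.predict(X_te)
print("MAE :", mean_absolute_error(y_te, pred))
print("RMSE:", np.sqrt(mean_squared_error(y_te, pred)))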

767 An Exploration on Competency-Based Curricula in Integrated Circuit Design

Authors: Chih Chin Yang, Chung Shan Sun

Abstract:

In this paper, the relationships between professional competences and school curricula in the IC design industry are explored. A semi-structured questionnaire survey and focus group interviews were the research methods. Study participants are graduates of microelectronics engineering departments who are currently employed in the IC industry. The IC industries are defined as the electronic component manufacturing industry and the optical-electronic component manufacturing industry, covering semiconductor and optical-electronic material devices, respectively. Study participants selected from the IC design industry include IC engineering and electronic and semiconductor engineering. The training of people with IC design professional competence in microelectronics engineering departments is explored in this research. The IC professional competences of human resources in the IC design industry include general intelligence and professional intelligence.

Keywords: IC design, curricula, competence, task, duty.

766 Issues in Spectral Source Separation Techniques for Plant-wide Oscillation Detection and Diagnosis

Authors: A.K. Tangirala, S. Babji

Abstract:

In the last few years, three multivariate spectral analysis techniques, namely Principal Component Analysis (PCA), Independent Component Analysis (ICA) and Non-negative Matrix Factorization (NMF), have emerged as effective tools for oscillation detection and isolation. While the first method is used in determining the number of oscillatory sources, the latter two methods are used to identify source signatures by formulating the detection problem as a source identification problem in the spectral domain. In this paper, we present a critical drawback of the underlying linear (mixing) model which strongly limits the ability of the associated source separation methods to determine the number of sources and/or identify the physical source signatures. It is shown that the assumed mixing model is only valid if each unit of the process gives equal weighting (all-pass filter) to all oscillatory components in its inputs. This is in contrast to the fact that each unit, in general, acts as a filter with a non-uniform frequency response. Thus, the model can only facilitate correct identification of a source with a single frequency component, which is again unrealistic. To overcome this deficiency, an iterative post-processing algorithm that correctly identifies the physical source(s) is developed. An additional issue with the existing methods is that they lack a procedure to pre-screen non-oscillatory or noisy measurements, which obscure the identification of oscillatory sources. In this regard, a pre-screening procedure based on the notion of a sparseness index is prescribed to eliminate the noisy and non-oscillatory measurements from the data set used for analysis.

Keywords: non-negative matrix factorization, PCA, source separation, plant-wide diagnosis
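
A minimal sketch of spectral NMF with a sparseness pre-screen, assuming Hoyer's sparseness measure for the index mentioned above (the abstract does not give its exact definition) and synthetic oscillatory signals:

# Sketch: compute power spectra of several measurements, drop the ones whose
# spectra are not sparse (noisy / non-oscillatory), then factor the remaining
# spectra with NMF to estimate oscillatory source signatures.
# Hoyer's sparseness measure is an assumption; signals are synthetic.
import numpy as np
from scipy.signal import periodogram
from sklearn.decomposition import NMF

def hoyer_sparseness(x):
    n = x.size
    return (np.sqrt(n) - np.linalg.norm(x, 1) / np.linalg.norm(x, 2)) / (np.sqrt(n) - 1)

rng = np.random.default_rng(3)
t = np.arange(2000) / 100.0
signals = np.vstack([np.sin(2 * np.pi * 0.5 * t) + 0.1 * rng.normal(size=t.size),
                     np.sin(2 * np.pi * 1.2 * t) + 0.1 * rng.normal(size=t.size),
                     rng.normal(size=t.size)])            # last one is pure noise

spectra = np.vstack([periodogram(s, fs=100.0)[1] for s in signals])
keep = [i for i, s in enumerate(spectra) if hoyer_sparseness(s) > 0.6]
print("measurements kept after pre-screen:", keep)

model = NMF(n_components=2, init="nndsvda", max_iter=1000, random_state=0)
W = model.fit_transform(spectra[keep])     # mixing weights
H = model.components_                      # estimated source spectra
print("estimated source-spectrum peaks (Hz):",
      [np.argmax(h) * 100.0 / 2000.0 for h in H])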

765 Automatic Removal of Ocular Artifacts using JADE Algorithm and Neural Network

Authors: V Krishnaveni, S Jayaraman, A Gunasekaran, K Ramadoss

Abstract:

The ElectroEncephaloGram (EEG) is useful for clinical diagnosis and biomedical research. EEG signals often contain strong ElectroOculoGram (EOG) artifacts produced by eye movements and eye blinks, especially in EEG recorded from frontal channels. These artifacts obscure the underlying brain activity, making its visual or automated inspection difficult. The goal of ocular artifact removal is to remove ocular artifacts from the recorded EEG, leaving the underlying background signals due to brain activity. In recent times, Independent Component Analysis (ICA) algorithms have demonstrated superior potential in obtaining the least dependent source components. In this paper, the independent components are obtained by using the JADE algorithm (the best separating algorithm) and are classified as either artifact components or neural components. A neural network is used for the classification of the obtained independent components. The neural network requires input features that represent the true character of the input signals, so that it can classify the signals based on the key characteristics that differentiate various signals. In this work, Auto Regressive (AR) coefficients are used as the input features for classification. Two neural network approaches are used to learn classification rules from EEG data: first, a Polynomial Neural Network (PNN) trained by the GMDH (Group Method of Data Handling) algorithm, and secondly, a feed-forward neural network classifier trained by a standard back-propagation algorithm. The results show that JADE-FNN performs better than JADE-PNN.

Keywords: Auto Regressive (AR) Coefficients, Feed Forward Neural Network (FNN), Joint Approximate Diagonalisation of Eigen-matrices (JADE) Algorithm, Polynomial Neural Network (PNN).
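
A minimal sketch of the component-classification step, with scikit-learn's FastICA standing in for JADE (which the paper uses) and a simple least-squares AR fit for feature extraction; the EEG/EOG data and labels are simulated assumptions:

# Sketch: separate mixed signals with ICA, describe each independent
# component by its AR coefficients, and classify components with a
# feed-forward network. FastICA is a stand-in for JADE; data are simulated.
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.neural_network import MLPClassifier

def ar_coeffs(x, order=4):
    # least-squares fit of an AR(order) model; returns the coefficients
    X = np.column_stack([x[order - 1 - k: len(x) - 1 - k] for k in range(order)])
    return np.linalg.lstsq(X, x[order:], rcond=None)[0]

rng = np.random.default_rng(4)
t = np.arange(5000) / 250.0
brain = np.sin(2 * np.pi * 10 * t)                            # "neural" rhythm
blink = (np.abs(np.sin(2 * np.pi * 0.3 * t)) > 0.99) * 5.0    # "ocular" spikes
mixed = np.c_[brain + 0.5 * blink, 0.3 * brain + blink] + 0.05 * rng.normal(size=(t.size, 2))

sources = FastICA(n_components=2, random_state=0).fit_transform(mixed)
features = np.vstack([ar_coeffs(s) for s in sources.T])
labels = np.array([0, 1])      # hypothetical expert labels: neural vs. ocular

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
clf.fit(features, labels)      # in practice, trained on many labeled components
print("predicted classes:", clf.predict(features))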

764 Material Characterization and Numerical Simulation of a Rubber Bumper

Authors: Tamás Mankovits, Dávid Huri, Imre Kállai, Imre Kocsis, Tamás Szabó

Abstract:

Non-linear FEM calculations are indispensable when important technical information such as the operating performance of a rubber component is desired. Rubber bumpers built into air-spring structures may undergo large deformations under load, which in itself is non-linear behavior. The changing contact range between the parts and the incompressibility of the rubber increase this non-linearity further. The material characterization of an elastomeric component is also a demanding engineering task. In this paper a comprehensive investigation is introduced, including laboratory measurements, mesh density analysis and complex finite element simulations, to obtain the load-displacement curve of the chosen rubber bumper. Contact and friction effects are also taken into consideration. The aim of this research is to elaborate an FEM model which is accurate and competitive for a future shape optimization task.

Keywords: Rubber bumper, finite element analysis, compression test, Mooney-Rivlin material model.
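
As background on the Mooney-Rivlin model named in the keywords, a small sketch of the closed-form uniaxial response of an incompressible two-parameter Mooney-Rivlin material; the constants and specimen height are illustrative placeholders, not the fitted values from the paper's compression tests:

# Sketch: nominal (engineering) stress of an incompressible Mooney-Rivlin
# solid under uniaxial loading,  P = 2 (lam - lam^-2) (C10 + C01 / lam).
# C10, C01 and the specimen height are hypothetical values.
import numpy as np

C10, C01 = 0.4e6, 0.1e6      # Pa, hypothetical material constants
height = 20.0e-3             # m, hypothetical bumper height

lam = np.linspace(0.7, 1.0, 31)                 # compression: stretch < 1
P = 2.0 * (lam - lam**-2) * (C10 + C01 / lam)   # nominal stress (Pa)
displacement = (1.0 - lam) * height             # m

for d, p in zip(displacement[::10], P[::10]):
    print(f"displacement {d*1e3:5.1f} mm -> nominal stress {p/1e6:6.3f} MPa")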

763 Novel Anti-leukemia Calanone Compounds by Quantitative Structure-Activity Relationship AM1 Semiempirical Method

Authors: Ponco Iswanto, Mochammad Chasani, Muhammad Hanafi, Iqmal Tahir, Eva Vaulina YD, Harjono, Lestari Solikhati, Winkanda S. Putra, Yayuk Yuliantini

Abstract:

A Quantitative Structure-Activity Relationship (QSAR) approach for discovering novel, more active Calanone derivatives as anti-leukemia compounds has been conducted. The experimental activities of six Calanone compounds against the leukemia cell line L1210 are used as the material of the research. Calculation of theoretical predictors (independent variables) was performed by the AM1 semiempirical method. The QSAR equation is determined by Principal Component Regression (PCR) analysis, with Log IC50 as the dependent variable; the independent variables are the atomic net charges, the dipole moment (μ), and the n-octanol/water partition coefficient (Log P). Three novel Calanone derivatives obtained in this research have higher activity against the leukemia cell line L1210 than pure Calanone.

Keywords: AM1 semiempirical calculation, Calanone, Principal Component Regression, QSAR approach.
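
A minimal sketch of the Principal Component Regression step described above, with random placeholder descriptors (atomic charges, dipole moment, Log P) standing in for the AM1-computed values:

# Sketch: Principal Component Regression -- PCA on the quantum-chemical
# descriptors followed by linear regression of Log IC50 on the scores.
# Descriptor values below are random placeholders, not AM1 results.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(5)
X = rng.normal(size=(6, 8))        # 6 calanones x 8 descriptors (charges, mu, Log P)
log_ic50 = rng.normal(size=6)      # placeholder activities

pcr = make_pipeline(StandardScaler(), PCA(n_components=2), LinearRegression())
pcr.fit(X, log_ic50)
print("fitted Log IC50:", pcr.predict(X))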

762 Implementation of TinyHash based on Hash Algorithm for Sensor Network

Authors: HangRok Lee, YongJe Choi, HoWon Kim

Abstract:

In recent years, security architectures for sensor networks have been proposed [2][4]. One of these, TinySec by Chris Karlof, Naveen Sastry and David Wagner, proposed a link-layer security architecture that considers the constraints of sensor networks (i.e., energy, bandwidth, computation capability, etc.). TinySec employs CBC-mode encryption and CBC-MAC authentication based on the Skipjack block cipher. Currently, TinySec is incorporated in TinyOS for sensor network security. This paper introduces TinyHash, based on a general hash algorithm. TinyHash is a module intended to replace the authentication and integrity parts of TinySec; that is, it applies a hash algorithm to the TinySec architecture. For compatibility with TinySec, the components of TinyHash are constructed with a structure similar to that of TinySec. TinyHash implements the HMAC component for authentication and the Digest component for message integrity. Additionally, we define some interfaces for the services associated with the hash algorithm.

Keywords: sensor network security, nesC, TinySec, TinyOS, Hash, HMAC, integrity
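
For reference, the generic HMAC construction that the TinyHash HMAC component provides can be illustrated with Python's standard library; this is the general algorithm, not the nesC module itself, and the key, message and digest choice are placeholders:

# Sketch: HMAC-based message authentication, the generic construction that
# the TinyHash HMAC component implements on the mote. Key and message are
# placeholders; SHA-1 is chosen only as an example digest.
import hmac
import hashlib

key = b"shared-link-layer-key"          # hypothetical pairwise key
message = b"sensor reading: 23.5 C"

tag = hmac.new(key, message, hashlib.sha1).digest()

# Receiver side: recompute and compare in constant time.
ok = hmac.compare_digest(tag, hmac.new(key, message, hashlib.sha1).digest())
print("authenticated:", ok, "tag:", tag.hex())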

761 The Laser Line Detection for Autonomous Mapping Based on Color Segmentation

Authors: Pavel Chmelar, Martin Dobrovolny

Abstract:

Laser projection or laser footprint detection is today widely used in many fields of robotics, measurement and electronics. The system accuracy strictly depends on precise laser footprint detection on target objects. This article deals with laser line detection based on RGB segmentation and component labeling. The developed optical rangefinder was used as the measurement device. The optical rangefinder is equipped with vertical sweeping of the laser beam and a high-quality camera. This system was developed mainly for automatic exploration and mapping of unknown spaces. The first section presents the new detection algorithm. The second section presents the measurement results. The measurements were performed under variable light conditions in interiors. The last part of the article presents the achieved results and the differences between day and night measurements.

Keywords: Automatic mapping, color segmentation, component labeling, distance measurement, laser line detection, vector map.
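
A minimal sketch of the detection idea described above, red-dominant pixel segmentation followed by connected-component labeling, on a synthetic image; the thresholds and image are assumptions, not the article's calibrated values:

# Sketch: segment red-dominant pixels (RGB rule), label connected components,
# keep the largest one as the laser line, and extract its per-row position.
# The image and thresholds are synthetic assumptions.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(6)
img = rng.integers(0, 40, size=(120, 160, 3)).astype(float)   # dark background
cols = (60 + 20 * np.sin(np.linspace(0, np.pi, 120))).astype(int)
img[np.arange(120), cols, 0] = 255.0                          # red laser stripe

r, g, b = img[..., 0], img[..., 1], img[..., 2]
mask = (r > 150) & (r > g + 50) & (r > b + 50)                # red-dominant rule

labels, n = ndimage.label(mask, structure=np.ones((3, 3)))    # 8-connectivity
sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
line = labels == (np.argmax(sizes) + 1)                       # largest component

rows, cols_found = np.nonzero(line)
centres = [cols_found[rows == rr].mean() for rr in np.unique(rows)]
print("laser column in first/last detected row:", centres[0], centres[-1])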

760 Fault Detection of Drinking Water Treatment Process Using PCA and Hotelling's T2 Chart

Authors: Joval P George, Dr. Zheng Chen, Philip Shaw

Abstract:

This paper deals with the application of Principal Component Analysis (PCA) and Hotelling's T2 chart, using data collected from a drinking water treatment process. PCA is applied primarily for the dimensional reduction of the collected data. The Hotelling's T2 control chart was used for fault detection in the process. The data were taken from a United Utilities Multistage Water Treatment Works, downloaded from an Integrated Program Management (IPM) dashboard system. The analysis of the results shows that Multivariate Statistical Process Control (MSPC) techniques such as PCA, and control charts such as Hotelling's T2, can be effectively applied for the early fault detection of continuous multivariable processes such as drinking water treatment. The software package SIMCA-P was used to develop the MSPC models and the Hotelling's T2 chart from the collected data.

Keywords: Principal component analysis, hotelling's t2 chart, multivariate statistical process control, drinking water treatment.
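
A minimal sketch of PCA plus a Hotelling's T2 statistic with an F-distribution control limit, using the generic MSPC equations on simulated data rather than the SIMCA-P model built from the IPM records:

# Sketch: fit PCA on in-control data, compute Hotelling's T^2 on new samples,
# and flag points above the F-distribution-based upper control limit.
# The process data here are simulated.
import numpy as np
from scipy import stats
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

rng = np.random.default_rng(7)
train = rng.normal(size=(200, 6))                 # in-control reference data
test = rng.normal(size=(50, 6))
test[-5:] += 4.0                                  # injected fault

scaler = StandardScaler().fit(train)
pca = PCA(n_components=2).fit(scaler.transform(train))

def t2(X):
    scores = pca.transform(scaler.transform(X))
    return np.sum(scores**2 / pca.explained_variance_, axis=1)

n, a = train.shape[0], pca.n_components_
ucl = a * (n - 1) * (n + 1) / (n * (n - a)) * stats.f.ppf(0.99, a, n - a)
print("UCL:", ucl)
print("flagged samples:", np.nonzero(t2(test) > ucl)[0])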

759 Analysis of Building Response from Vertical Ground Motions

Authors: George C. Yao, Chao-Yu Tu, Wei-Chung Chen, Fung-Wen Kuo, Yu-Shan Chang

Abstract:

Building structures are subjected to both horizontal and vertical ground motions during earthquakes, but only the horizontal ground motion has been extensively studied and considered in design. Most of the prevailing seismic codes assume the vertical component to be 1/2 to 2/3 of the horizontal one. In order to understand building responses to vertical ground motions, many earthquake records are studied in this paper. System identification methods (ARX model) are used to analyze the strong motions and to find out the characteristics of the vertical amplification factors and the natural frequencies of buildings. Analysis results show that the vertical amplification factors for high-rise buildings and low-rise buildings are 1.78 and 2.52 respectively, and the average vertical amplification factor of all buildings is about 2. The relationship between the vertical natural frequency and building height was regressed to a suggested formula in this study. The result points out an important message: the taller the building, the greater the chance of resonance of vertical vibration in the building.

Keywords: Vertical ground motion, vertical amplification factor, natural frequency, component.
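
A minimal sketch of the ARX least-squares identification step, applied to a simulated single-degree-of-freedom response to vertical ground acceleration; the model order, sampling rate and records are assumptions, not the recorded strong motions used in the paper:

# Sketch: fit an ARX(2,2) model y[t] = a1 y[t-1] + a2 y[t-2] + b1 u[t-1] + b2 u[t-2]
# by least squares, then read a natural frequency off the AR poles.
# Input/output records are simulated.
import numpy as np

dt = 0.01
rng = np.random.default_rng(8)
u = rng.normal(size=3000)                     # "vertical ground acceleration"

# simulate a lightly damped 5 Hz oscillator driven by u (central differences)
wn, zeta = 2 * np.pi * 5.0, 0.05
y = np.zeros_like(u)
for t in range(2, len(u)):
    y[t] = ((2 - (wn * dt)**2) * y[t - 1]
            - (1 - zeta * wn * dt) * y[t - 2]
            + dt**2 * u[t - 1]) / (1 + zeta * wn * dt)

# ARX regression
Phi = np.column_stack([y[1:-1], y[:-2], u[1:-1], u[:-2]])
theta = np.linalg.lstsq(Phi, y[2:], rcond=None)[0]
a1, a2 = theta[0], theta[1]

poles = np.roots([1.0, -a1, -a2]).astype(complex)
freq = np.abs(np.log(poles[0])) / (2 * np.pi * dt)   # identified natural frequency
print("identified natural frequency (Hz):", freq)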

758 The Implicit Methods for the Study of Tolerance

Authors: M. Bambulyaka

Abstract:

Tolerance is a tool for achieving social cohesion, particularly among individuals and groups with different values. The aim is to study the characteristics of the ethnic tolerance of the inhabitants of Latvia. Ethnic tolerance is treated as a set of conscious and unconscious orientations of the individual in social interaction and inter-ethnic communication. The study uses the tools of empirical research on ethnic tolerance, which allow the explicit and implicit levels of the emotional component of Latvia's residents to be identified. Explicit measurements were made using self-report techniques, which revealed the index of ethnic tolerance and the ethnic identity of the participants. The implicit component was studied using methods based on the effect of emotional priming. During the processing of the results, indicators of positive and negative implicit attitudes towards members of one's own and other ethnicities were calculated, as well as the explicit parameters of the ethnic tolerance and ethnic identity of Latvia's residents. The implicit measurements of the attitudes of neighboring ethnic groups towards each other showed a mutual negative attitude, whereas the explicit measurements indicate a neutral attitude. The data obtained contribute to a further study of the ethnic tolerance of Latvia's residents.

Keywords: ethnic tolerance, implicit measure, priming, ethnic attitudes

757 Efficient Real-time Remote Data Propagation Mechanism for a Component-Based Approach to Distributed Manufacturing

Authors: V. Barot, S. McLeod, R. Harrison, A. A. West

Abstract:

Manufacturing industries face a crucial change as products and processes are required to be easily and efficiently reconfigurable and reusable. In order to stay competitive and flexible, situations also demand the global distribution of enterprises, which requires the implementation of efficient communication strategies. A prototype system called the "Broadcaster" has been developed under the assumption that the control environment description has been engineered using the component-based system paradigm. This prototype distributes information to a number of globally distributed partners via an adoption of a circular-based data processing mechanism. The work highlighted in this paper includes the implementation of this mechanism in the domain of the manufacturing industry. The proposed solution enables real-time remote propagation of machine information to a number of distributed supply chain client resources, such as an HMI, VRML-based 3D views and remote client instances, regardless of their distribution nature and/or their mechanisms. This approach is presented together with a set of evaluation results. The authors' main concentration surrounds the reliability and the performance metrics of the adopted approach. Performance evaluation is carried out in terms of the response times taken to process the data in this domain, compared with an alternative data processing implementation such as the linear queue mechanism. Based on the evaluation results obtained, the authors justify the benefits achieved from this proposed implementation and highlight any further research work that is to be carried out.

Keywords: Broadcaster, circular buffer, Component-based, distributed manufacturing, remote data propagation.
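
A minimal sketch of the circular-buffer idea behind the Broadcaster's data processing; this is a generic fixed-size ring buffer in Python, not the prototype's actual implementation:

# Sketch: fixed-size circular (ring) buffer -- new machine updates overwrite
# the oldest entries, so propagation never blocks on a full queue, unlike a
# plain linear queue. Generic illustration only.
class RingBuffer:
    def __init__(self, capacity):
        self.data = [None] * capacity
        self.capacity = capacity
        self.head = 0          # next write position
        self.count = 0

    def push(self, item):
        self.data[self.head] = item
        self.head = (self.head + 1) % self.capacity
        self.count = min(self.count + 1, self.capacity)

    def snapshot(self):
        # return items oldest-first, e.g. for broadcast to remote clients
        start = (self.head - self.count) % self.capacity
        return [self.data[(start + i) % self.capacity] for i in range(self.count)]

buf = RingBuffer(4)
for update in ["state=1", "state=2", "state=3", "state=4", "state=5"]:
    buf.push(update)
print(buf.snapshot())   # oldest entry "state=1" has been overwritten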

756 Acute Coronary Syndrome Prediction Using Data Mining Techniques: An Application

Authors: Tahseen A. Jilani, Huda Yasin, Madiha Yasin, C. Ardil

Abstract:

In this paper we use data mining techniques to investigate factors that contribute significantly to enhancing the risk of acute coronary syndrome. We assume that the dependent variable is the diagnosis, with dichotomous values showing the presence or absence of disease. We have applied binary regression to the factors affecting the dependent variable. The data set has been taken from two different cardiac hospitals in Karachi, Pakistan. We have a total of sixteen variables, of which one is assumed dependent and the other fifteen are independent variables. For better performance of the regression model in predicting acute coronary syndrome, data reduction techniques such as principal component analysis were applied. Based on the results of data reduction, we have considered only fourteen of the sixteen factors.

Keywords: Acute coronary syndrome (ACS), binary logistic regression analyses, myocardial ischemia (MI), principal component analysis, unstable angina (U.A.).
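
A minimal sketch of the modelling chain described above, PCA-based reduction of the independent variables followed by binary logistic regression on a dichotomous diagnosis, with simulated records instead of the hospital data and an illustrative number of retained components:

# Sketch: reduce 15 correlated risk factors with PCA, then fit binary
# logistic regression for presence/absence of acute coronary syndrome.
# Patient records and the component count are placeholders.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(9)
X = rng.normal(size=(400, 15))                          # 15 independent variables
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(size=400) > 0).astype(int)  # diagnosis

model = make_pipeline(StandardScaler(), PCA(n_components=10),
                      LogisticRegression(max_iter=1000))
model.fit(X, y)
print("in-sample accuracy:", model.score(X, y))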

755 Normalizing Flow to Augmented Posterior: Conditional Density Estimation with Interpretable Dimension Reduction for High Dimensional Data

Authors: Cheng Zeng, George Michailidis, Hitoshi Iyatomi, Leo L Duan

Abstract:

The conditional density characterizes the distribution of a response variable y given other predictors x, and plays a key role in many statistical tasks, including classification and outlier detection. Although there has been abundant work on the problem of Conditional Density Estimation (CDE) for a low-dimensional response in the presence of a high-dimensional predictor, little work has been done for a high-dimensional response such as images. The promising performance of normalizing flow (NF) neural networks in unconditional density estimation acts as a motivating starting point. In this work, we extend NF neural networks to the case where an external x is present. Specifically, we use the NF to parameterize a one-to-one transform between a high-dimensional y and a latent z that comprises two components [zP, zN]. The zP component is a low-dimensional subvector obtained from the posterior distribution of an elementary predictive model for x, such as logistic/linear regression. The zN component is a high-dimensional independent Gaussian vector, which explains the variations in y not or less related to x. Unlike existing CDE methods, the proposed approach, coined Augmented Posterior CDE (AP-CDE), only requires a simple modification of the common normalizing flow framework, while significantly improving the interpretation of the latent component, since zP represents a supervised dimension reduction. In image analytics applications, AP-CDE shows good separation of x-related variations, due to factors such as lighting condition and subject id, from the other random variations. Further, the experiments show that an unconditional NF neural network, based on an unsupervised model of z, such as a Gaussian mixture, fails to generate interpretable results.

Keywords: Conditional density estimation, image generation, normalizing flow, supervised dimension reduction.

754 Human Action Recognition Based on Ridgelet Transform and SVM

Authors: A. Ouanane, A. Serir

Abstract:

In this paper, a novel algorithm based on the Ridgelet transform and support vector machines is proposed for human action recognition. The Ridgelet transform is a directional multi-resolution transform and is more suitable for describing human actions, as its directional information is used to form spatial feature vectors. The dynamic transition between the spatial features is captured using both Principal Component Analysis and the K-means clustering algorithm. First, Principal Component Analysis is used to reduce the dimensionality of the obtained vectors. Then, the K-means algorithm is used to quantize the obtained vectors into the spatio-temporal pattern, called a set-of-labels, according to the given periodicity of the human action. Finally, a Support Vector Machine classifier is used to discriminate between the different human actions. Tests are conducted on popular datasets such as Weizmann and KTH. The obtained results show that the proposed method provides a significant accuracy rate and is more robust in very challenging situations such as lighting changes, scaling and dynamic environments.

Keywords: Human action, Ridgelet Transform, PCA, K-means, SVM.
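
A minimal sketch of the recognition chain after feature extraction: PCA reduction of per-frame descriptors, K-means to turn each sequence into a "set-of-labels" histogram, and an SVM on the histograms; random descriptors stand in for the Ridgelet features, and the dimensions are illustrative:

# Sketch: per-frame descriptors -> PCA -> k-means labels -> per-sequence
# label histogram -> SVM classifier. Descriptors are random placeholders
# standing in for the Ridgelet-based feature vectors.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(10)
n_seq, frames, dim, k = 20, 30, 50, 8
descriptors = rng.normal(size=(n_seq, frames, dim))
actions = rng.integers(0, 2, size=n_seq)               # e.g. walk vs. wave

flat = descriptors.reshape(-1, dim)
reduced = PCA(n_components=10).fit_transform(flat)
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(reduced)
labels = labels.reshape(n_seq, frames)

# "set-of-labels": histogram of cluster labels over each sequence
hists = np.vstack([np.bincount(seq, minlength=k) for seq in labels])

clf = SVC(kernel="rbf").fit(hists, actions)
print("training accuracy:", clf.score(hists, actions))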

753 Fault Detection via Stability Analysis for the Hybrid Control Unit of HEVs

Authors: Kyogun Chang, Yoon Bok Lee

Abstract:

Fault detection determines fault existence and detection time. This paper discusses two-layered fault detection methods to enhance reliability and safety. The two-layered fault detection methods consist of fault detection methods at the component-level controllers and at the system-level controllers. Component-level controllers detect faults by using limit checking, model-based detection, and data-driven detection, while system-level controllers perform detection by stability analysis, which can detect unknown changes. System-level controllers compare the detection results obtained via stability analysis with the fault signals from the lower-level controllers. This paper addresses fault detection methods via stability analysis and suggests fault detection criteria for nonlinear systems. The fault detection method is applied to the hybrid control unit of a military hybrid electric vehicle so that the hybrid control unit can detect faults of the traction motor.

Keywords: Two Layered Fault Detection, Stability Analysis, Fault-Tolerant Control

752 Simultaneous Optimization of Design and Maintenance through a Hybrid Process Using Genetic Algorithms

Authors: O. Adjoul, A. Feugier, K. Benfriha, A. Aoussat

Abstract:

In general, issues related to design and maintenance are considered in an independent manner. However, the decisions made in these two sets influence each other. The design for maintenance is considered an opportunity to optimize the life cycle cost of a product, particularly in the nuclear or aeronautical field, where maintenance expenses represent more than 60% of life cycle costs. The design of large-scale systems starts with product architecture, a choice of components in terms of cost, reliability, weight and other attributes, corresponding to the specifications. On the other hand, the design must take into account maintenance by improving, in particular, real-time monitoring of equipment through the integration of new technologies such as connected sensors and intelligent actuators. We noticed that different approaches used in the Design For Maintenance (DFM) methods are limited to the simultaneous characterization of the reliability and maintainability of a multi-component system. This article proposes a method of DFM that assists designers to propose dynamic maintenance for multi-component industrial systems. The term "dynamic" refers to the ability to integrate available monitoring data to adapt the maintenance decision in real time. The goal is to maximize the availability of the system at a given life cycle cost. This paper presents an approach for simultaneous optimization of the design and maintenance of multi-component systems. Here the design is characterized by four decision variables for each component (reliability level, maintainability level, redundancy level, and level of monitoring data). The maintenance is characterized by two decision variables (the dates of the maintenance stops and the maintenance operations to be performed on the system during these stops). The DFM model helps the designers choose technical solutions for the large-scale industrial products. Large-scale refers to the complex multi-component industrial systems and long life-cycle, such as trains, aircraft, etc. The method is based on a two-level hybrid algorithm for simultaneous optimization of design and maintenance, using genetic algorithms. The first level is to select a design solution for a given system that considers the life cycle cost and the reliability. The second level consists of determining a dynamic and optimal maintenance plan to be deployed for a design solution. This level is based on the Maintenance Free Operating Period (MFOP) concept, which takes into account the decision criteria such as, total reliability, maintenance cost and maintenance time. Depending on the life cycle duration, the desired availability, and the desired business model (sales or rental), this tool provides visibility of overall costs and optimal product architecture.

Keywords: Availability, design for maintenance, DFM, dynamic maintenance, life cycle cost, LCC, maintenance free operating period, MFOP, simultaneous optimization.

751 A Generic Approach to Reuse Unified Modeling Language Components Following an Agile Process

Authors: Rim Bouhaouel, Naoufel Kraïem, Zuhoor Al Khanjari

Abstract:

The Unified Modeling Language (UML) is considered one of the widespread modeling languages standardized by the Object Management Group (OMG). Therefore, the model-driven engineering (MDE) community attempts to provide reuse of UML diagrams rather than constructing them from scratch. A UML model appears according to a specific software development process. Existing model generation methods have focused on different transformation techniques without considering the development process. Our work aims to construct a UML component from fragments of UML diagrams based on an agile method. We define a UML fragment as a portion of a UML diagram which expresses a business target. To guide the generation of fragments of UML models using an agile process, we need a flexible approach which adapts to the agile changes and covers all of its activities. We use the software product line (SPL) to derive a fragment of the agile process method. This paper explains our approach, named RECUP, to generate UML fragments following an agile process, and gives an overview of the different aspects. In this paper, we present the approach and define its different phases and artifacts.

Keywords: UML, component, fragment, agile, SPL.

750 The Influence of Physical-Mechanical and Thermal Properties of Hemp Filling Materials by the Addition of Energy Byproducts

Authors: Sarka Keprdova, Jiri Bydzovsky

Abstract:

This article describes to what extent the addition of energy by-products into the structures of technical hemp filling materials influences their properties. The article focuses on the changes in the physical-mechanical and thermal technical properties of the materials after the addition of ash, FBC ash or slag to the binding component of the material. Technical hemp filling materials are made of technical hemp shives bonded by a mixture of cement and dry hydrated lime. They are applicable as fillers of vertical or horizontal structures or roofs. The research used eight types of energy by-products from power or heating plants in the Czech Republic. The secondary energy products were dispensed in three different percentage ratios as a replacement of cement in the binding component. Density, compressive strength and the coefficient of thermal conductivity were determined on the produced specimens after 28, 60 and 90 days of curing in a laboratory environment and subsequently evaluated.

Keywords: Ash, binder, cement, energy by-product, FBC ash (fluidized bed combustion ash), filling materials, shives, slag, technical hemp.

749 Multiple-Points Fault Signature's Dynamics Modeling for Bearing Defect Frequencies

Authors: Muhammad F. Yaqub, Iqbal Gondal, Joarder Kamruzzaman

Abstract:

The occurrence of a multiple-point fault in machine operations can result in complex fault signatures, which can lower fault diagnosis accuracy. In this study, a multiple-points defect model (MPDM) is proposed which can simulate the fault signature's dynamics for n-point bearing faults. Furthermore, this study identifies that in the case of a multiple-points fault in a rotary machine, the location of the dominant component of the defect frequency shifts depending upon the relative location of the fault points, which could mislead the fault diagnostic model into inaccurate detections. Analytical and experimental results are presented to characterize and validate the variation in the dominant component of the defect frequency. Based on envelope detection analysis, a modification is recommended in the existing fault diagnostic models to consider the multiples of the defect frequency, rather than only the frequency spectrum at the defect frequency, in order to incorporate the impact of multiple-point faults.

Keywords: Envelope detection, machine defect frequency, multiple faults, machine health monitoring.
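
A minimal sketch of the envelope-detection analysis referred to above: a Hilbert-transform envelope of a simulated vibration signal, followed by inspection of the envelope spectrum at the defect frequency and its multiples; the defect frequency (90 Hz) and the signal are assumptions, not measured machine data:

# Sketch: envelope detection for bearing diagnostics. The envelope spectrum
# is examined at the defect frequency and its harmonics, since with
# multiple-point faults the dominant component can shift to a multiple.
import numpy as np
from scipy.signal import hilbert

fs, f_defect = 10_000, 90.0
t = np.arange(0, 2.0, 1 / fs)
carrier = np.sin(2 * np.pi * 3000 * t)                     # resonance excited by impacts
impacts = (np.sin(2 * np.pi * f_defect * t) > 0.995).astype(float)
signal = impacts * carrier + 0.05 * np.random.default_rng(11).normal(size=t.size)

envelope = np.abs(hilbert(signal))
spectrum = np.abs(np.fft.rfft(envelope - envelope.mean()))
freqs = np.fft.rfftfreq(envelope.size, 1 / fs)

for m in (1, 2, 3):                                        # defect frequency and multiples
    idx = np.argmin(np.abs(freqs - m * f_defect))
    print(f"{m} x defect frequency ({m * f_defect:5.0f} Hz): amplitude {spectrum[idx]:.1f}")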

748 A Multivariate Statistical Approach for Water Quality Assessment of River Hindon, India

Authors: Nida Rizvi, Deeksha Katyal, Varun Joshi

Abstract:

River Hindon is an important river catering to the demands of the highly populated rural and industrial clusters of western Uttar Pradesh, India. The water quality of river Hindon is deteriorating at an alarming rate due to various industrial, municipal and agricultural activities. The present study aimed at identifying the pollution sources and quantifying the degree to which these sources are responsible for the deteriorating water quality of the river. Various water quality parameters, like pH, temperature, electrical conductivity, total dissolved solids, total hardness, calcium, chloride, nitrate, sulphate, biological oxygen demand, chemical oxygen demand, and total alkalinity, were assessed. Water quality data obtained from eight study sites for one year have been subjected to two multivariate techniques, namely, principal component analysis and cluster analysis. Principal component analysis was applied with the aim of finding out the spatial variability and identifying the sources responsible for the water quality of the river. Three varifactors were obtained after varimax rotation of the initial principal components. Cluster analysis was carried out to classify sampling stations of certain similarity, which grouped the eight sites into two clusters. The study reveals that anthropogenic influence (municipal, industrial, waste water and agricultural runoff) was the major source of river water pollution. Thus, this study illustrates the utility of multivariate statistical techniques for the analysis and elucidation of multifaceted data sets, the recognition of pollution sources/factors and the understanding of temporal/spatial variations in water quality for effective river water quality management.

Keywords: Cluster analysis, multivariate statistical technique, river Hindon, water Quality.

747 Adaptive Filtering of Heart Rate Signals for an Improved Measure of Cardiac Autonomic Control

Authors: Desmond B. Keenan, Paul Grossman

Abstract:

In order to provide accurate heart rate variability indices of sympathetic and parasympathetic activity, the low frequency and high frequency components of an RR heart rate signal must be adequately separated. This is not always possible by just applying spectral analysis, as power from the high and low frequency components often leaks into the adjacent bands. Furthermore, without the respiratory spectrum it is not obvious that the low frequency component is not another respiratory component, which can appear in the lower band. This paper describes an adaptive filter which aids the separation of the low frequency sympathetic and high frequency parasympathetic components from an ECG R-R interval signal, enabling the attainment of more accurate heart rate variability measures. The algorithm is applied to simulated signals and to heart rate and respiratory signals acquired from an ambulatory monitor incorporating single-lead ECG and inductive plethysmography sensors embedded in a garment. The results show an improvement over standard heart rate variability spectral measurements.

Keywords: Heart rate variability, vagal tone, sympathetic, parasympathetic, spectral analysis, adaptive filter.
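
A minimal sketch of an adaptive cancellation scheme of the kind this abstract describes, here a basic LMS filter using a simulated respiration signal as the reference to estimate the respiratory part of an RR series; the filter order, step size and signals are assumptions, not the authors' exact algorithm:

# Sketch: LMS adaptive filter. The respiration signal is the reference input;
# the filter output estimates the respiration-driven (parasympathetic) part of
# the RR series, and the residual keeps the low-frequency (sympathetic) part.
import numpy as np

fs = 4.0                                   # resampled RR series, Hz (assumed)
t = np.arange(0, 300, 1 / fs)
rng = np.random.default_rng(12)
resp = np.sin(2 * np.pi * 0.25 * t)                        # respiration reference
rr = (0.05 * np.sin(2 * np.pi * 0.08 * t)                  # LF (sympathetic) part
      + 0.03 * np.sin(2 * np.pi * 0.25 * t + 0.4)          # respiratory part
      + 0.005 * rng.normal(size=t.size))

order, mu = 8, 0.01
w = np.zeros(order)
hf_est = np.zeros_like(rr)
for n in range(order, len(rr)):
    x = resp[n - order:n][::-1]            # reference tap vector
    hf_est[n] = w @ x
    e = rr[n] - hf_est[n]                  # residual = LF component estimate
    w += 2 * mu * e * x                    # LMS weight update

lf_est = rr - hf_est
print("std of HF estimate:", hf_est[order:].std(), "std of LF estimate:", lf_est[order:].std())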

746 Textile Dyeing with Natural Dye from Sappan Tree (Caesalpinia sappan Linn.) Extract

Authors: Ploysai Ohama, Nattida Tumpat

Abstract:

Natural dye extracted from Caesalpinia sappan Linn. was applied to a cotton fabric and silk yarn by dyeing process. The dyestuff component of Caesalpinia sappan Linn. was extracted using water and ethanol. Analytical studies such as UV–VIS spectrophotometry and gravimetric analysis were performed on the extracts. Brazilein, the major dyestuff component of Caesalpinia sappan Linn. was confirmed in both aqueous and ethanolic extracts by UV–VIS spectrum. The color of each dyed material was investigated in terms of the CIELAB (L*, a* and b*) and K/S values. Cotton fabric dyed without mordant had a shade of reddish-brown, while those post-mordanted with aluminum potassium sulfate, ferrous sulfate and copper sulfate produced a variety of wine red to dark purple color shades. Cotton fabric and silk yarn dyeing was studied using aluminum potassium sulfate as a mordant. The observed color strength was enhanced with increase in mordant concentration.

Keywords: Natural dyes, Plant materials, Dyeing, Mordant.

745 Optical Properties of WO3-NiO Complementary Electrochromic Devices

Authors: Chih-Ming Wang, Chih-Yu Wen, Ying-Chung Chen, Chun-Chieh Wang, Chien-Chung Hsu, Jui-Yang Chang, Jyun-Min Lin

Abstract:

In this study, we developed a complementary electrochromic device consisting of WO3 and NiO films fabricated by RF magnetron sputtering. The electrochromic properties of the WO3 and NiO films were investigated using cyclic voltammograms (CV), performed on WO3 and NiO films immersed in an electrolyte of 1 M LiClO4 in propylene carbonate (PC). The optical and electrochemical properties of the films, as a function of the coloration–bleaching cycle, were characterized using a UV-Vis-NIR spectrophotometer and cyclic voltammetry (CV). After investigating the properties of the WO3 film, the NiO film, and the complementary electrochromic devices, we concluded that this device provides good reversibility, low power consumption at -2.5 V in the colored state, a high transmittance variation of 58.96%, a change in optical density of 0.81 and a good memory effect under open-circuit conditions. In addition, the transmittance of the electrochromic component can be retained below 20% within 24 h, showing preferred memory features; however, the coloring and bleaching response times of the component are about 33 s.

Keywords: Complementary electrochromic device, Rf-magnetron sputtered, Transmittance, Memory effect, Optical density change

744 Constructing a Suitable Model of Distance Training for Community Leader in the Upper Northeastern Region

Authors: Teerawach Khamkorn, Laongtip Mathurasa, Savittree Rochanasmita Arnold, Witthaya Mekhum

Abstract:

This research intends to create a suitable model of distance training for community leaders in the upper northeastern region of Thailand. The research process is divided into four steps: the first step is to analyze relevant documents; the second step deals with in-depth interviews with experts; the third step is concerned with constructing the model; and the fourth step aims at model validation by expert assessment. The findings reveal two important components for constructing an appropriate model of distance training for community leaders in the upper northeastern region. The first component consists of the context of technology management, e.g., principles, policy and goals. The second component can be viewed in two ways: firstly, there are elements comprising input, process, output and feedback; secondly, the sub-components include the steps and process in training. The results of the expert assessments show that the researcher's constructed model is consistent, suitable and, overall, the most appropriate.

Keywords: Constructing, Distance Training, Management, Technology.

743 Effect of Plant Nutrients on Anthocyanin Content and Yield Component of Black Glutinous Rice Plants

Authors: Chonlada Bennett, Phumon Sookwong, Sakul Moolkam, Sivapong Naruebal, Sugunya Mahatheeranont

Abstract:

The cultivation of black glutinous rice rich in anthocyanins can provide great benefits to both farmers and consumers. The total anthocyanin content and yield component data of a black glutinous rice cultivar (KHHK) grown with the addition of mineral elements (Ca, Mg, Cu, Cr, Fe and Se) under soilless conditions were studied. Ca application increased the seed anthocyanin content threefold compared to controls. Cu application to the rice plants gave the highest number of grains per panicle, the greatest panicle length and, subsequently, a high panicle weight. Se application had the largest effect on leaf anthocyanin content, the number of tillers, the number of panicles and the 100-grain weight. These findings showed that the addition of mineral elements had a positive effect on increasing the anthocyanin content in black rice plants and seeds, as well as enhancing the growth of the black glutinous rice plants.

Keywords: Anthocyanins, black glutinous rice, mineral elements, soilless culture.

742 Analysis of Rural Roads in Developing Countries Using Principal Component Analysis and Simple Average Technique in the Development of a Road Safety Performance Index

Authors: Muhammad Tufail, Jawad Hussain, Hammad Hussain, Imran Hafeez, Naveed Ahmad

Abstract:

A road safety performance index is a composite index which combines various indicators of road safety into a single number. Development of a road safety performance index using appropriate safety performance indicators is essential to enhance road safety. However, road safety performance indices in developing countries have not been given as much priority as needed. The primary objective of this research is to develop a general Road Safety Performance Index (RSPI) for developing countries based on the road facility as well as the behavior of road users. The secondary objectives include finding the critical inputs to the RSPI and finding the better method of constructing the index. In this study, the RSPI is developed by selecting four main safety performance indicators, i.e., the protective system (seat belt, helmet, etc.), the road (road width, signalized intersections, number of lanes, speed limit), the number of pedestrians, and the number of vehicles. Data on these four safety performance indicators were collected using an observational survey on a 20 km section of the National Highway N-125, Taxila, Pakistan. For the development of this composite index, two methods are used: a) Principal Component Analysis (PCA) and b) the Equal Weighting (EW) method. PCA is used for the extraction, weighting, and linear aggregation of indicators to obtain a single value. An individual index score was calculated for each road section by multiplication of the weights and the standardized values of each safety performance indicator. In the Equal Weighting method, the Simple Average technique was used for the weighting and linear aggregation of the indicators to develop the RSPI. The road sections are ranked according to their RSPI scores using both methods. The two weighting methods are compared, and the PCA method is found to be much more reliable than the Simple Average technique.

Keywords: Aggregation, index score, indicators, principal component analysis, weighting.
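
A minimal sketch of the two aggregation routes compared above, a PCA-weighted aggregation versus an equal-weight average of standardized indicators, on made-up road-section data; the indicator values are placeholders (not the N-125 survey), and the variance-weighted absolute-loading scheme is one common PCA weighting choice, not necessarily the paper's exact one:

# Sketch: build a composite road-safety index two ways -- PCA-weighted
# aggregation and a simple (equal-weight) average of standardized indicators --
# and rank the road sections under each. Indicator values are placeholders.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

rng = np.random.default_rng(13)
# columns: protective-system use, road score, pedestrian count, vehicle count
X = rng.uniform(size=(10, 4))              # 10 road sections, 4 indicators
Z = StandardScaler().fit_transform(X)

pca = PCA().fit(Z)
w = pca.explained_variance_ratio_ @ np.abs(pca.components_)   # indicator weights
rspi_pca = Z @ (w / w.sum())
rspi_avg = Z.mean(axis=1)                                     # equal weighting

print("PCA-based ranking   :", np.argsort(-rspi_pca))
print("equal-weight ranking:", np.argsort(-rspi_avg))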
