Search results for: minimum root mean square (RMS) error matching algorithm
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5669

179 Decision Tree Based Scheduling for Flexible Job Shops with Multiple Process Plans

Authors: H.-H. Doh, J.-M. Yu, Y.-J. Kwon, J.-H. Shin, H.-W. Kim, S.-H. Nam, D.-H. Lee

Abstract:

This paper suggests a decision tree based approach for flexible job shop scheduling with multiple process plans, i.e., each job can be processed through alternative operations, each of which can be processed on alternative machines. The main decision variables are: (a) selecting an operation/machine pair; and (b) sequencing the jobs assigned to each machine. As an extension of the priority scheduling approach that selects the best priority rule combination after many simulation runs, this study suggests a decision tree based approach in which a decision tree is used to select a priority rule combination suited to a specific system state, so the burden of developing simulation models and carrying out simulation runs can be eliminated. The decision tree based scheduling approach consists of construction and scheduling modules. In the construction module, a decision tree is constructed using a four-stage algorithm, and in the scheduling module, a priority rule combination is selected using the decision tree. To show the performance of the decision tree based approach suggested in this study, a case study was done on a flexible job shop with reconfigurable manufacturing cells and a conventional job shop, and the results are reported by comparing the approach with individual priority rule combinations for the objectives of minimizing total flow time and total tardiness.
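
To make the scheduling-module idea concrete, here is a minimal Python sketch (not the authors' four-stage construction algorithm): a decision tree is trained offline to map system-state features to the priority rule combination that performed best in simulation, and is then queried for the current shop state. The feature names and rule labels are illustrative assumptions.

from sklearn.tree import DecisionTreeClassifier

# Each row: [mean machine utilization, queue length, due-date tightness, routing flexibility]
# (hypothetical state features).
system_states = [
    [0.85, 12, 0.3, 2],
    [0.40,  3, 0.8, 4],
    [0.70,  8, 0.5, 3],
    [0.95, 20, 0.2, 1],
]
# Rule combination that performed best for each state in offline simulation (hypothetical labels).
best_rules = ["SPT+EDD", "FIFO+SPT", "SPT+EDD", "EDD+SPT"]

tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(system_states, best_rules)

current_state = [[0.78, 10, 0.4, 2]]
print(tree.predict(current_state))   # priority rule combination to apply to the current shop state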

Keywords: Flexible job shop scheduling, Decision tree, Priority rules, Case study.

178 Arduino Pressure Sensor Cushion for Tracking and Improving Sitting Posture

Authors: Andrew Hwang

Abstract:

The average American worker sits for thirteen hours a day, often with poor posture and infrequent breaks, which can lead to health issues and back problems. The Smart Cushion was created to alert individuals to their poor posture, and may potentially alleviate back problems and correct poor posture. The Smart Cushion is a portable, rectangular, foam cushion with five strategically placed pressure sensors that utilizes an Arduino Uno circuit board and specifically designed software, allowing it to collect data from the five pressure sensors and store the data on an SD card. The data is then compiled into graphs and compared to controlled postures. Before volunteers sat on the cushion, their levels of back pain were recorded on a scale of 1 to 10. Data was recorded for an hour during sitting, and then a new, corrected posture was suggested. After using the suggested posture for an hour, the volunteers described their level of discomfort on a scale of 1 to 10. Different patterns of sitting postures were generated that were able to serve as early warnings of potential back problems. By using the Smart Cushion, the areas where different volunteers were applying the most pressure while sitting could be identified, and the sitting postures could be corrected. Further studies regarding the relationships between posture and specific regions of the body are necessary to better understand the origins of back pain; however, the Smart Cushion is sufficient for correcting sitting posture and preventing the development of additional back pain.

Keywords: Arduino Sketch Algorithm, biomedical technology, pressure sensors, Smart Cushion.

177 Solving Part Type Selection and Loading Problem in Flexible Manufacturing System Using Real Coded Genetic Algorithms – Part II: Optimization

Authors: Wayan F. Mahmudy, Romeo M. Marian, Lee H. S. Luong

Abstract:

This paper presents the modeling and optimization of two NP-hard problems in flexible manufacturing systems (FMS), the part type selection problem and the loading problem. Due to the complexity and extent of the problems, the paper was split into two parts. The first part of the paper discussed the modeling of the problems and showed how real-coded genetic algorithms (RCGA) can be applied to solve them. This second part discusses the effectiveness of the RCGA, which uses an array of real numbers as its chromosome representation. The novel proposed chromosome representation produces only feasible solutions, which minimizes the computational time the GA would otherwise need to push its population toward the feasible search space or to repair infeasible chromosomes. The proposed RCGA improves the FMS performance by considering two objectives, maximizing system throughput and maintaining the balance of the system (minimizing system unbalance). The resulting objective values are compared to the optimum values produced by a branch-and-bound method. The experiments show that the proposed RCGA could reach near-optimum solutions in a reasonable amount of time.

Keywords: Flexible manufacturing system, production planning, part type selection problem, loading problem, real-coded genetic algorithm

176 Effective Traffic Lights Recognition Method for Real Time Driving Assistance System in the Daytime

Authors: Hyun-Koo Kim, Ju H. Park, Ho-Youl Jung

Abstract:

This paper presents an effective traffic light recognition method for the daytime. First, the Potential Traffic Lights Detector (PTLD) uses the full color information of the YCbCr channel image and produces binary images of green and red traffic lights. After the PTLD step, a Shape Filter (SF) is used to remove noise such as traffic signs, street trees, vehicles, and buildings. The noise removal criteria are based on properties of the blobs in the binary image: length, area, area of the bounding box, etc. Finally, after an intermediate association step whose goal is to define relevant candidate regions from the previously detected traffic lights, an Adaptive Multi-class Classifier (AMC) is executed. The classification method uses Haar-like features and the AdaBoost algorithm. For simulation, the method was implemented on an Intel Core CPU with 2.80 GHz and 4 GB RAM and tested on urban and rural roads. In the tests, our method was compared with standard object-recognition learning processes and reached a detection rate of up to 94%, which is better than the results achieved with cascade classifiers. The computation time of the proposed method is 15 ms.
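
The color-segmentation and shape-filter stages can be illustrated with a short Python/OpenCV sketch; the YCbCr thresholds, blob criteria and input file are illustrative assumptions rather than the paper's tuned values.

import cv2
import numpy as np

frame = cv2.imread("road_scene.jpg")                       # hypothetical input image
ycrcb = cv2.cvtColor(frame, cv2.COLOR_BGR2YCrCb)

# Rough masks for red-ish and green-ish lights in the Cr/Cb channels (assumed thresholds).
red_mask = cv2.inRange(ycrcb, (60, 160, 80), (255, 255, 130))
green_mask = cv2.inRange(ycrcb, (60, 80, 90), (255, 120, 140))

def shape_filter(mask, min_area=30, max_area=2000, max_aspect=1.5):
    # Keep only small, compact blobs: a crude stand-in for the paper's Shape Filter step.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    kept = np.zeros_like(mask)
    for i in range(1, n):
        x, y, w, h, area = stats[i]
        if min_area <= area <= max_area and max(w, h) / max(min(w, h), 1) <= max_aspect:
            kept[labels == i] = 255
    return kept

candidates = shape_filter(red_mask) | shape_filter(green_mask)
# A trained classifier (e.g. Haar-like features + AdaBoost) would then verify each candidate region.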

Keywords: Traffic Light Detection, Multi-class Classification, Driving Assistance System, Haar-like Feature, Color Segmentation Method, Shape Filter

175 Combined Feature Based Hyperspectral Image Classification Technique Using Support Vector Machines

Authors: Mrs. K. Kavitha, S. Arivazhagan

Abstract:

A spatial classification technique incorporating a state-of-the-art feature extraction algorithm is proposed in this paper for classifying the heterogeneous classes present in hyperspectral images. The classification accuracy can be improved only if both the feature extraction and the classifier selection are proper. As the classes in hyperspectral images are assumed to have different textures, textural classification is adopted. Run Length feature extraction is employed along with Principal Components and Independent Components. A hyperspectral image of the Indiana site taken by AVIRIS is used for the experiment. Among the original 220 bands, a subset of 120 bands is selected. The Gray Level Run Length Matrix (GLRLM) is calculated for forty of the selected bands, and from the GLRLMs the Run Length features for individual pixels are calculated. The Principal Components are calculated for another forty bands, and the Independent Components are calculated for the remaining forty bands. As Principal and Independent Components have the ability to represent the textural content of pixels, they are treated as features. The combination of the Run Length features, Principal Components, and Independent Components forms the Combined Features, which are used for classification. An SVM with a Binary Hierarchical Tree is used to classify the hyperspectral image. Results are validated with ground truth, and accuracies are calculated.
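
As a rough Python illustration of the feature-combination and classification step, the sketch below applies PCA and ICA to disjoint band subsets, stacks them with placeholder texture features, and trains an SVM; the random data, band splits, and single multi-class SVM are stand-ins for the paper's GLRLM features and binary hierarchical tree of SVMs.

import numpy as np
from sklearn.decomposition import PCA, FastICA
from sklearn.svm import SVC

n_pixels, n_bands = 1000, 120
cube = np.random.rand(n_pixels, n_bands)          # stand-in for the 120 selected hyperspectral bands
labels = np.random.randint(0, 4, n_pixels)        # stand-in ground-truth classes

run_length_feats = cube[:, :40]                   # placeholder for GLRLM run-length features
pca_feats = PCA(n_components=5).fit_transform(cube[:, 40:80])
ica_feats = FastICA(n_components=5, random_state=0).fit_transform(cube[:, 80:])

combined = np.hstack([run_length_feats, pca_feats, ica_feats])   # the 'Combined Features'
clf = SVC(kernel="rbf").fit(combined, labels)     # the paper uses a binary hierarchical tree of SVMs
print(clf.score(combined, labels))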

Keywords: Multi-class, Run Length features, PCA, ICA, classification and Support Vector Machines.

174 A 10 Giga VPN Accelerator Board for Trust Channel Security System

Authors: Ki Hyun Kim, Jang-Hee Yoo, Kyo Il Chung

Abstract:

This paper proposes a VPN Accelerator Board (VPN-AB) for a virtual private network (VPN) designed for a trust channel security system (TCSS). TCSS provides a secure communication channel between security nodes on the Internet. It furnishes authentication, confidentiality, integrity, and access control to security nodes that transmit data packets with the IPsec protocol. TCSS consists of an Internet key exchange block, a security association block, and an IPsec engine block. The Internet key exchange block negotiates the crypto algorithm and key used in the IPsec engine block. The security association block sets up and manages security association information. The IPsec engine block processes IPsec packets and contains the networking functions for communication. The IPsec engine block should be implemented in hardware with in-line mode transactions for high-speed IPsec processing. Our VPN-AB is implemented with a high-speed security processor that supports many cryptographic algorithms and in-line mode. We set up a small TCSS communication environment and measure the performance of the VPN-AB in that environment. The experimental results show that the VPN-AB achieves a maximum throughput of 15.645 Gbps when the IPsec protocol is configured with 3DES-HMAC-MD5 in tunnel mode.

Keywords: TCSS (Trust Channel Security System), VPN (Virtual Private Network), IPsec, SSL, Security Processor, Security communication.

173 Traffic Forecasting for Open Radio Access Networks Virtualized Network Functions in 5G Networks

Authors: Khalid Ali, Manar Jammal

Abstract:

In order to meet the stringent latency and reliability requirements of the upcoming 5G networks, Open Radio Access Networks (O-RAN) have been proposed. The virtualization of O-RAN has allowed it to be treated as a Network Function Virtualization (NFV) architecture, while its components are considered Virtualized Network Functions (VNFs). Hence, intelligent Machine Learning (ML) based solutions can be utilized to apply different resource management and allocation techniques on O-RAN. However, intelligently allocating resources for O-RAN VNFs can prove challenging due to the dynamicity of traffic in mobile networks. Network providers need to dynamically scale the allocated resources in response to the incoming traffic. Elastically allocating resources can provide a higher level of flexibility in the network, in addition to reducing the OPerational EXpenditure (OPEX) and increasing resource utilization. Most of the existing elastic solutions are reactive in nature, despite the fact that proactive approaches are more agile since they scale instances ahead of time by predicting the incoming traffic. In this work, we propose and evaluate traffic forecasting models based on ML algorithms. The models aim at predicting future O-RAN traffic by using previous traffic data. A detailed analysis of the traffic data was carried out to validate the quality and applicability of the traffic dataset. Two ML models were then proposed and evaluated based on their prediction capabilities.
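
A minimal Python sketch of the proactive-forecasting idea is given below using an ARIMA model from statsmodels; the synthetic hourly series and model order are illustrative assumptions, not the paper's dataset or its tuned ARIMA/LSTM models.

import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Synthetic hourly traffic series standing in for historical O-RAN VNF traffic.
rng = np.random.default_rng(0)
hours = np.arange(240)
traffic = 100 + 30 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 5, hours.size)

model = ARIMA(traffic, order=(2, 0, 1)).fit()
forecast = model.forecast(steps=24)               # predicted traffic for the next 24 hours
print(forecast[:5])
# A proactive scaler would provision VNF resources from this forecast instead of reacting
# to load after it arrives; an LSTM could be trained on the same series for comparison.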

Keywords: O-RAN, traffic forecasting, NFV, ARIMA, LSTM, elasticity.

172 Detecting Geographically Dispersed Overlay Communities Using Community Networks

Authors: Madhushi Bandara, Dharshana Kasthurirathna, Danaja Maldeniya, Mahendra Piraveenan

Abstract:

Community detection is an extremely useful technique in understanding the structure and function of a social network. The Louvain algorithm, which is based on the Newman-Girvan modularity optimization technique, is extensively used as a computationally efficient method to extract the communities in social networks. It has been suggested that nodes that are in close geographical proximity have a higher tendency of forming communities. Variants of the Newman-Girvan modularity measure, such as dist-modularity, try to normalize the effect of geographical proximity to extract geographically dispersed communities, at the expense of losing the information about the geographically proximate communities. In this work, we propose a method to extract geographically dispersed communities while preserving the information about the geographically proximate communities, by analyzing the ‘community network’, where the centroids of communities are considered as network nodes. We suggest that the inter-community link strengths, which are normalized over the community sizes, may be used to identify and extract the ‘overlay communities’. The overlay communities would have relatively higher link strengths, despite being relatively far apart in their spatial distribution. We apply this method to the Gowalla online social network, which contains the geographical signatures of its users, and identify the overlay communities within it.
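
A minimal Python/NetworkX sketch of the community-network construction follows; it uses NetworkX's Louvain implementation (available from NetworkX 2.8) on a toy graph, and the size-based normalization of inter-community link strength is an assumption meant only to illustrate the idea.

import itertools
import networkx as nx
from networkx.algorithms.community import louvain_communities

G = nx.karate_club_graph()                         # stand-in for a geo-tagged social network
communities = louvain_communities(G, seed=0)

# Community network: one node per community, edges weighted by the number of inter-community
# links normalized over the product of community sizes (assumed normalization).
C = nx.Graph()
C.add_nodes_from(range(len(communities)))
for i, j in itertools.combinations(range(len(communities)), 2):
    links = sum(1 for u in communities[i] for v in communities[j] if G.has_edge(u, v))
    if links:
        C.add_edge(i, j, weight=links / (len(communities[i]) * len(communities[j])))

# Pairs with unusually high normalized weight are candidate 'overlay communities'.
print(sorted(C.edges(data="weight"), key=lambda e: -e[2])[:3])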

Keywords: Social networks, community detection, modularity optimization, geographically dispersed communities.

171 Modeling and Optimization of Part Type Selection and Loading Problem in Flexible Manufacturing System Using Real Coded Genetic Algorithms

Authors: Wayan F. Mahmudy, Romeo M. Marian, Lee H. S. Luong

Abstract:

This paper deals with the modeling and optimization of two NP-hard problems in the production planning of flexible manufacturing systems (FMS), the part type selection problem and the loading problem. The part type selection problem and the loading problem are strongly related and heavily influence the system's efficiency and productivity. These problems have been modeled and solved simultaneously by using a real-coded genetic algorithm (RCGA), which uses an array of real numbers as its chromosome representation. The novel proposed chromosome representation produces only feasible solutions, which minimizes the computational time the GA would otherwise need to push its population toward the feasible search space or to repair infeasible chromosomes. The proposed RCGA improves the FMS performance by considering two objectives, maximizing system throughput and maintaining the balance of the system (minimizing system unbalance). The resulting objective values are compared to the optimum values produced by a branch-and-bound method. The experiments show that the proposed RCGA could reach near-optimum solutions in a reasonable amount of time.
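
The real-coded chromosome idea can be illustrated with a small Python sketch: each chromosome is an array of reals whose ranking decodes into a machine assignment, and the fitness combines a throughput term with a load-unbalance penalty. The decoding rule, toy data, and weighting are illustrative assumptions, not the paper's FMS model.

import numpy as np

rng = np.random.default_rng(1)
n_parts, pop_size, n_gen, n_machines = 8, 30, 50, 3
processing_time = rng.uniform(1, 5, n_parts)         # toy data: time each selected part type needs

def decode(chrom):
    # Rank-based decoding: part i goes to machine rank(chrom[i]) mod n_machines,
    # so every real-valued chromosome maps to a feasible assignment.
    return np.argsort(np.argsort(chrom)) % n_machines

def fitness(chrom):
    load = np.zeros(n_machines)
    for part, machine in enumerate(decode(chrom)):
        load[machine] += processing_time[part]
    throughput = processing_time.sum() / load.max()   # larger when the bottleneck machine is light
    unbalance = load.max() - load.min()                # smaller is better
    return throughput - 0.5 * unbalance                # weighted combination of the two objectives

pop = rng.random((pop_size, n_parts))
for _ in range(n_gen):
    scores = np.array([fitness(c) for c in pop])
    parents = pop[np.argsort(scores)[-pop_size // 2:]]                    # keep the better half
    children = (parents[rng.integers(0, len(parents), pop_size)]
                + rng.normal(0, 0.1, (pop_size, n_parts)))                # mutation-style variation
    pop = np.clip(children, 0, 1)

best = pop[np.argmax([fitness(c) for c in pop])]
print(decode(best), round(fitness(best), 3))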

Keywords: Flexible manufacturing system, production planning, part type selection problem, loading problem, real-coded genetic algorithm.

170 Application of Gamma Frailty Model in Survival of Liver Cirrhosis Patients

Authors: Elnaz Saeedi, Jamileh Abolaghasemi, Mohsen Nasiri Tousi, Saeedeh Khosravi

Abstract:

Goals and Objectives: A typical analysis of survival data involves the modeling of time-to-event data, such as the time until death. A frailty model is a random effect model for time-to-event data, where the random effect has a multiplicative influence on the baseline hazard function. This article aims to investigate the use of a gamma frailty model with concomitant variables in order to identify the prognostic factors that influence the survival times of liver cirrhosis patients. Methods: During the one-year study period (May 2008-May 2009), data were taken from the records of patients with liver cirrhosis who were scheduled for liver transplantation and were followed up for at least seven years in Imam Khomeini Hospital in Iran. In order to determine the effective factors for cirrhotic patients' survival in the presence of latent variables, the gamma frailty distribution was applied. Parametric models, such as the exponential and Weibull distributions, were considered for the survival time. Data analysis was performed using R software, and a significance level of 0.05 was used for all tests. Results: 305 patients with liver cirrhosis, including 180 (59%) men and 125 (41%) women, were studied. The average age of the patients was 39.8 years. At the end of the study, 82 (26%) patients had died, among them 48 (58%) men and 34 (42%) women. The main cause of liver cirrhosis was hepatitis B (23%), followed by cryptogenic cirrhosis (22.6%) as the second factor. Overall, the mean seven-year survival was 28.44 months; for deceased patients it was 19.33 months and for censored patients 31.79 months. Exponential and Weibull survival models incorporating the gamma frailty distribution were fitted to the cirrhosis data. In both models, factors including age, serum bilirubin, serum albumin, and encephalopathy had a significant effect on the survival time of cirrhotic patients. Conclusion: To investigate the factors affecting the time of death of patients with liver cirrhosis in the presence of latent variables, a gamma frailty model with parametric distributions seems desirable.

Keywords: Frailty model, latent variables, liver cirrhosis, parametric distribution.

169 Customer Segmentation Model in E-commerce Using Clustering Techniques and LRFM Model: The Case of Online Stores in Morocco

Authors: Rachid Ait daoud, Abdellah Amine, Belaid Bouikhalene, Rachid Lbibb

Abstract:

Given the increase in the number of e-commerce sites, the number of competitors has become very important. This means that companies have to take appropriate decisions in order to meet the expectations of their customers and satisfy their needs. In this paper, we present a case study of applying the LRFM (length, recency, frequency and monetary) model and clustering techniques in the sector of electronic commerce, with a view to evaluating customers' value for Moroccan e-commerce websites and then developing effective marketing strategies. To achieve these objectives, we adopt the LRFM model by applying a two-stage clustering method. In the first stage, the self-organizing maps method is used to determine the best number of clusters and the initial centroids. In the second stage, the k-means method is applied to segment 730 customers into nine clusters according to their L, R, F and M values. The results show that cluster 6 is the most important cluster because its average values of L, R, F and M are higher than the overall average values. In addition, this study has considered another variable that describes the payment method used by customers, to improve and strengthen the cluster analysis. The cluster analysis demonstrates that the payment method is one of the key indicators of a new index which makes it possible to assess the level of customers' confidence in the company's website.
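
The second-stage segmentation can be sketched in a few lines of Python with scikit-learn; the synthetic LRFM table below is an illustrative assumption, and in the paper the number of clusters and initial centroids come from the SOM stage rather than from k-means defaults.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Columns: length (days as a customer), recency (days since last order), frequency, monetary.
lrfm = np.column_stack([
    rng.integers(30, 1000, 730),
    rng.integers(1, 365, 730),
    rng.integers(1, 40, 730),
    rng.uniform(10, 2000, 730),
])

X = StandardScaler().fit_transform(lrfm)
labels = KMeans(n_clusters=9, n_init=10, random_state=0).fit_predict(X)

# Segments whose mean L, F and M exceed the overall means (and whose R is below it) are the
# most valuable; the payment-method variable would be profiled per cluster in the same way.
for k in range(9):
    print(k, lrfm[labels == k].mean(axis=0).round(1))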

Keywords: Customer value, LRFM model, Cluster analysis, Self-Organizing Maps method (SOM), K-means algorithm, loyalty.

168 Application of GA Optimization in Analysis of Variable Stiffness Composites

Authors: Nasim Fallahi, Erasmo Carrera, Alfonso Pagani

Abstract:

Variable angle tow (VAT) describes fibres that are curvilinearly steered in a composite lamina, which significantly enlarges the stiffness-tailoring freedom of the composite laminate. Composite structures with curvilinear fibres have been shown to improve the buckling load carrying capability compared with straight-fibre laminates. However, the optimal design and analysis of VAT are faced with high computational effort due to the increasing number of variables. In this article, an efficient optimization procedure has been used in combination with the 1D Carrera's Unified Formulation (CUF) to investigate the optimum fibre orientation angles for buckling analysis. Particular emphasis is placed on the LE-based CUF models, which use Lagrange expansions to provide a layerwise description of the problem unknowns. The first critical buckling load has been considered under simply supported boundary conditions. Special attention is paid to the sensitivity of the buckling load to the fibre orientation angle, in comparison with the results obtained through the Genetic Algorithm (GA) optimization framework; an Artificial Neural Network (ANN) is then applied to investigate the accuracy of the optimized model. As a result, the numerical CUF approach combined with the optimal solution demonstrates the robustness and computational efficiency of the proposed optimization methodology.

Keywords: Beam structures, layerwise, optimization, variable angle tow, neural network

167 A System for Analyzing and Eliciting Public Grievances Using Cache Enabled Big Data

Authors: P. Kaladevi, N. Giridharan

Abstract:

The system for analyzing and eliciting public grievances serves its main purpose of receiving and processing all sorts of complaints from the public and responding to users. Due to the large number of complaints, the data become big data, which is difficult to store and process. The proposed system uses HDFS to store the big data and MapReduce to process it. The concept of a cache was applied in the system to provide immediate responses and timely action using big data analytics; cache-enabled big data improves the response time of the system. The unstructured data provided by the users are efficiently handled through the MapReduce algorithm. The processing of complaints takes place in the order of the hierarchy of authority. The drawbacks of the traditional database system used in the existing system are addressed by our system through the use of a cache-enabled Hadoop Distributed File System. MapReduce framework code has the potential to leak sensitive data through the computation process, so we propose a system that adds noise to the output of the reduce phase to avoid signaling the presence of sensitive data. If a complaint is not processed within ample time, it is automatically forwarded to the higher authority, which ensures accountability in processing. A copy of the filed complaint is sent as a digitally signed PDF document to the user's e-mail ID, which serves as proof. The system's reports serve as essential data when making important decisions based on legislation.
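
A pure-Python sketch of the map/shuffle/reduce flow with noise added to the reduce output is shown below; the grievance records, category counts, and Gaussian noise are illustrative assumptions standing in for the Hadoop/HDFS implementation and whatever noise calibration the real system uses.

import random
from collections import defaultdict

complaints = [
    {"dept": "water", "text": "no supply for 3 days"},
    {"dept": "roads", "text": "pothole near school"},
    {"dept": "water", "text": "contaminated supply"},
]

def mapper(record):
    yield record["dept"], 1                  # emit (department, count)

def reducer(key, values):
    return key, sum(values)

# Shuffle phase: group mapper output by key.
groups = defaultdict(list)
for record in complaints:
    for key, value in mapper(record):
        groups[key].append(value)

# Reduce phase with noise added to the aggregate so exact sensitive counts are not revealed.
random.seed(0)
for key, values in groups.items():
    _, total = reducer(key, values)
    noisy_total = total + random.gauss(0, 1)     # Gaussian noise as a stand-in for calibrated noise
    print(key, round(noisy_total, 2))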

Keywords: Big Data, Hadoop, HDFS, Caching, MapReduce, web personalization, e-governance.

166 DTC-SVM Scheme for Induction Motors Fed with a Three-level Inverter

Authors: Ehsan Hassankhan, Davood A. Khaburi

Abstract:

Direct Torque Control is a control technique used in AC drive systems to obtain high performance torque control. The conventional DTC drive contains a pair of hysteresis comparators, and DTC drives utilizing hysteresis comparators suffer from high torque ripple and variable switching frequency. The most common solution to these problems is to use space vector modulation, which depends on the reference torque and flux. In this paper, the space vector pulse width modulation (SVPWM) technique is applied to two-level inverter control in the proposed DTC-based induction motor drive system, thereby dramatically reducing the torque ripple. A controller based on space vector modulation is then designed to be applied in the control of an induction motor (IM) with a three-level inverter. This type of inverter has several advantages over the standard two-level VSI, such as a greater number of levels in the output voltage waveforms, lower dV/dt, less harmonic distortion in voltage and current waveforms, and lower switching frequencies. This paper proposes a general SVPWM algorithm for three-level inverters based on the standard two-level SVPWM. The proposed scheme is described clearly, and simulation results are reported to demonstrate its effectiveness. The entire control scheme is implemented in Matlab/Simulink.

Keywords: Direct torque control, space vector pulse width modulation (SVPWM), neutral point clamped (NPC), two-level inverter.

165 Objects Extraction by Cooperating Optical Flow, Edge Detection and Region Growing Procedures

Authors: C. Lodato, S. Lopes

Abstract:

The image segmentation method described in this paper has been developed as a pre-processing stage to be used in methodologies and tools for video/image indexing and retrieval by content. This method solves the problem of extracting whole objects from the background, and it produces images of single complete objects from videos or photos. The extracted images are used for calculating the object visual features necessary for both indexing and retrieval processes. The segmentation algorithm is based on the cooperation among an optical flow evaluation method, edge detection and region growing procedures. The optical flow estimator belongs to the class of differential methods. It can detect motions ranging from a fraction of a pixel to a few pixels per frame, achieving good results in the presence of noise without the need for a filtering pre-processing stage, and it includes a specialised model for moving object detection. The first task of the presented method exploits the cues from motion analysis for moving area detection. Objects and background are then refined using edge detection and seeded region growing procedures, respectively. All the tasks are iteratively performed until objects and background are completely resolved. The method has been applied to a variety of indoor and outdoor scenes where objects of different types and shapes are represented on variously textured backgrounds.
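
The motion-cue stage can be illustrated with a short Python/OpenCV sketch; OpenCV's Farneback dense optical flow is used here as a stand-in for the paper's differential estimator, and the synthetic frames and magnitude threshold are illustrative assumptions.

import cv2
import numpy as np

# Two synthetic grayscale frames: the second is a shifted copy, standing in for consecutive video frames.
rng = np.random.default_rng(0)
prev_gray = (rng.random((120, 160)) * 255).astype(np.uint8)
gray = np.roll(prev_gray, shift=3, axis=1)           # simulated horizontal motion

flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                    pyr_scale=0.5, levels=3, winsize=15,
                                    iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
magnitude = np.linalg.norm(flow, axis=2)
moving_mask = (magnitude > 0.5).astype(np.uint8) * 255   # rough moving-area detection

# Edge detection (e.g. cv2.Canny) and seeded region growing would then refine the object
# boundaries, using seeds taken from moving_mask, and the steps would be iterated.
print(moving_mask.mean())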

Keywords: Image Segmentation, Motion Detection, Object Extraction, Optical Flow

164 Optimization of Reaction Rate Parameters in Modeling of Heavy Paraffins Dehydrogenation

Authors: Leila Vafajoo, Farhad Khorasheh, Mehrnoosh Hamzezadeh Nakhjavani, Moslem Fattahi

Abstract:

In the present study, a procedure was developed to determine the optimum reaction rate constants in the generalized Arrhenius form, optimized through the Nelder-Mead method. For this purpose, a comprehensive mathematical model of a fixed bed reactor for the dehydrogenation of heavy paraffins over a Pt–Sn/Al2O3 catalyst was developed. Utilizing appropriate kinetic rate expressions for the main dehydrogenation reaction as well as the side reactions and catalyst deactivation, a detailed model for the radial flow reactor was obtained. The reactor model is composed of a set of partial differential equations (PDE), ordinary differential equations (ODE) and algebraic equations, all of which were solved numerically to determine variations in component concentrations, in terms of mole percent, as a function of time and reactor radius. It was demonstrated that the most significant variations were observed at the entrance of the bed, and the initial olefin production was rather high. The aforementioned method utilized a direct-search optimization algorithm along with the numerical solution of the governing differential equations. The usefulness and validity of the method were demonstrated by comparing the predicted values of the kinetic constants using the proposed method with a series of experimental values reported in the literature for different systems.
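
A minimal Python sketch of the estimation idea follows: Arrhenius parameters are fitted by minimizing a least-squares misfit with SciPy's Nelder-Mead optimizer. The single first-order reaction and synthetic rate data are illustrative assumptions, not the paper's radial-flow reactor model.

import numpy as np
from scipy.optimize import minimize

R = 8.314
T = np.array([700.0, 750.0, 800.0, 850.0])                 # temperatures in K
# Synthetic "observed" rate constants with a little noise, standing in for experimental data.
k_obs = 2.0e5 * np.exp(-80_000.0 / (R * T)) * (1 + 0.02 * np.random.default_rng(0).normal(size=4))

def objective(params):
    log_A, Ea = params
    k_pred = np.exp(log_A) * np.exp(-Ea / (R * T))          # generalized Arrhenius form
    return np.sum((k_pred - k_obs) ** 2)                    # least-squares misfit

result = minimize(objective, x0=[np.log(1.0e5), 60_000.0], method="Nelder-Mead")
print(np.exp(result.x[0]), result.x[1])                      # fitted pre-exponential factor and Ea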

Keywords: Dehydrogenation, Pt-Sn/Al2O3 Catalyst, Modeling, Nelder-Mead, Optimization

163 Distances over Incomplete Diabetes and Breast Cancer Data Based on Bhattacharyya Distance

Authors: Loai AbdAllah, Mahmoud Kaiyal

Abstract:

Missing values in real-world datasets are a common problem. Many algorithms have been developed to deal with this problem; most of them replace the missing values with a fixed value computed from the observed values. In our work, we used a distance function based on the Bhattacharyya distance, which measures the similarity of two probability distributions, to measure the distance between objects with missing values. The proposed distance distinguishes between known and unknown values: the distance between two known values is the Mahalanobis distance, while, when one of them is missing, the distance is computed based on the distribution of the known values for the coordinate that contains the missing value. This method was integrated with Wikaya, a digital health company developing a platform that helps to improve the prevention of chronic diseases such as diabetes and cancer. In order for Wikaya's recommendation system to work, distances between users need to be measured. Since there are missing values in the collected data, there is a need for a distance function between incomplete user profiles. To evaluate the accuracy of the proposed distance function in reflecting the actual similarity between different objects when some of them contain missing values, we integrated it within the framework of the k-nearest neighbors (kNN) classifier, since its computation is based only on the similarity between objects. To validate this, we ran the algorithm over the diabetes and breast cancer datasets, standard benchmark datasets from the UCI repository. Our experiments show that the kNN classifier using our proposed distance function outperforms kNN using other existing methods.
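
A small Python sketch of the overall idea (a kNN classifier driven by a distance that treats missing coordinates specially) is given below; the fixed per-coordinate penalty is a simplified stand-in for the paper's Bhattacharyya/Mahalanobis construction, and the synthetic data replaces the UCI datasets.

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(0, 1, (200, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
X[rng.random(X.shape) < 0.1] = np.nan                  # inject 10% missing values
col_std = np.nanstd(X, axis=0)

def missing_aware_distance(a, b):
    d2 = 0.0
    for j in range(a.size):
        if np.isnan(a[j]) or np.isnan(b[j]):
            d2 += 1.0                                   # penalty when either value is unknown
        else:
            d2 += ((a[j] - b[j]) / col_std[j]) ** 2     # scaled distance between known values
    return np.sqrt(d2)

def knn_predict(x, k=5):
    dists = np.array([missing_aware_distance(x, row) for row in X])
    neighbors = np.argsort(dists)[:k]
    return np.bincount(y[neighbors]).argmax()

correct = sum(knn_predict(X[i]) == y[i] for i in range(len(X)))
print(correct / len(X))                                 # resubstitution accuracy of the sketch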

Keywords: Missing values, distance metric, Bhattacharyya distance.

162 A Sensorless Robust Tracking Control of an Implantable Rotary Blood Pump for Heart Failure Patients

Authors: Mohsen A. Bakouri, Andrey V. Savkin, Abdul-Hakeem H. Alomari, Robert F. Salamonsen, Einly Lim, Nigel H. Lovell

Abstract:

Physiological control of a left ventricular assist device (LVAD) is generally a complicated task due to diverse operating environments and patient variability. In this work, a tracking control algorithm based on sliding mode and feed-forward control for a class of discrete-time single input single output (SISO) nonlinear uncertain systems is presented. The controller was developed to track the reference trajectory to a set operating point without inducing suction in the ventricle. The controller regulates the estimated mean pulsatile flow Qp and the mean pulsatility index of pump rotational speed PIω, which were generated from a model of the assist device. We recall the principle of sliding mode control theory and then combine the feed-forward control design with the sliding mode control technique to follow the reference trajectory. The uncertainty is replaced by its upper and lower boundaries. The controller was tested in a computer simulation covering two scenarios (preload and ventricular contractility). The simulation results prove the effectiveness and robustness of the proposed controller.

Keywords: Robust control system, discrete sliding mode, left ventricular assist device, pulsatility index.

161 Intelligent Recognition of Diabetes Disease via FCM Based Attribute Weighting

Authors: Kemal Polat

Abstract:

In this paper, an attribute weighting method called fuzzy C-means clustering based attribute weighting (FCMAW) has been used for the classification of a diabetes disease dataset. The aims of this study are to reduce the variance within the attributes of the diabetes dataset and to improve the classification accuracy of classifier algorithms by transforming non-linearly separable datasets into linearly separable ones. The Pima Indians Diabetes dataset has two classes, comprising normal subjects (500 instances) and diabetes subjects (268 instances). Fuzzy C-means clustering is an improved version of the K-means clustering method and is one of the most used clustering methods in data mining and machine learning applications. In this study, in the first stage, the fuzzy C-means clustering process was used for finding the centers of the attributes in the Pima Indians diabetes dataset, and the dataset was then weighted according to the ratios of the means of the attributes to their centers. Secondly, after the weighting process, classifier algorithms including the support vector machine (SVM) and k-NN (k-nearest neighbor) classifiers were used for classifying the weighted Pima Indians diabetes dataset. Experimental results show that the proposed attribute weighting method (FCMAW) obtained very promising results in the classification of the Pima Indians diabetes dataset.
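
The weighting idea can be sketched briefly in Python; here k-means is used as a crude stand-in for fuzzy C-means, each attribute is scaled by the ratio of its overall mean to the nearest cluster center, and an SVM is then evaluated on the weighted data. The synthetic table, stand-in clustering, and exact weighting rule are assumptions based on a simplified reading of the abstract.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.uniform(1, 10, (300, 8))                       # stand-in for the eight Pima attributes
y = (X[:, 0] - X[:, 3] + rng.normal(0, 1.0, 300) > 0).astype(int)

X_weighted = X.copy()
for j in range(X.shape[1]):
    col = X[:, [j]]
    centers = KMeans(n_clusters=2, n_init=10, random_state=0).fit(col).cluster_centers_
    nearest = centers[np.argmin(np.abs(col - centers.ravel()), axis=1)].ravel()
    X_weighted[:, j] = col.ravel() * (col.mean() / nearest)   # scale by mean-to-center ratio

print(cross_val_score(SVC(), X_weighted, y, cv=5).mean())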

Keywords: Fuzzy C-means clustering, Fuzzy C-means clustering based attribute weighting, Pima Indians diabetes dataset, SVM.

160 Optimized Facial Features-based Age Classification

Authors: Md. Zahangir Alom, Mei-Lan Piao, Md. Shariful Islam, Nam Kim, Jae-Hyeung Park

Abstract:

The evaluation and measurement of human body dimensions are achieved by physical anthropometry. This research was conducted in view of the importance of anthropometric indices of the face in forensic medicine, surgery, and medical imaging. The main goal of this research is the optimization of facial feature points by establishing a mathematical relationship among facial features and using the optimized feature points for age classification. Since the selected facial feature points are located in the areas of the mouth, nose, eyes and eyebrows on facial images, all desired facial feature points are extracted accurately. According to the proposed method, sixteen Euclidean distances are calculated from the eighteen selected facial feature points, vertically as well as horizontally. The mathematical relationships among the horizontal and vertical distances are established. Moreover, it is also observed that the distances of the facial features follow a constant ratio during age progression: the distances between the specified feature points increase with the age progression of a human from childhood, but the ratio of the distances does not change (d = 1.618). Finally, according to the proposed mathematical relationship, four independent feature distances related to eight feature points are selected from the sixteen distances and eighteen feature points, respectively. These four feature distances are used for age classification using the Support Vector Machine (SVM) with the Sequential Minimal Optimization (SMO) algorithm, achieving around 96% accuracy. The experimental results show that the proposed system is effective and accurate for age classification.

Keywords: 3D Face Model, Face Anthropometrics, Facial Features Extraction, Feature distances, SVM-SMO

159 Optimal Simultaneous Sizing and Siting of DGs and Smart Meters Considering Voltage Profile Improvement in Active Distribution Networks

Authors: T. Sattarpour, D. Nazarpour

Abstract:

This paper investigates the effect of the simultaneous placement of DGs and smart meters (SMs) on voltage profile improvement in active distribution networks (ADNs). Responsive loads have recently received substantial attention in power system studies, alongside distributed generations (DGs). The existence of responsive loads in active distribution networks (ADNs) would have an undeniable effect on the sizing and siting of DGs. For this reason, an optimal framework is proposed for the sizing and siting of DGs and SMs in ADNs. SMs are taken into consideration for the sake of successfully implementing demand response programs (DRPs), such as direct load control (DLC), with end-side consumers. Aiming at voltage profile improvement, the optimization procedure is solved by a genetic algorithm (GA) and tested on the IEEE 33-bus distribution test system. Different scenarios with variations in the number of DG units, individual or simultaneous placement of DGs and SMs, and an adaptive power factor (APF) mode for DGs to support reactive power have been established. The obtained results confirm the significant effect of DRPs and the APF mode in determining the optimal size and site of DGs to be connected in the ADN, resulting in the improvement of the voltage profile as well.

Keywords: Active distribution network (ADN), distributed generations (DGs), smart meters (SMs), demand response programs (DRPs), adaptive power factor (APF).

158 Unstructured-Data Content Search Based on Optimized EEG Signal Processing and Multi-Objective Feature Extraction

Authors: Qais M. Yousef, Yasmeen A. Alshaer

Abstract:

Over the last few years, the amount of data available around the globe has increased rapidly. This came with the emergence of recent concepts, such as big data and the Internet of Things, which have furnished a suitable solution for the availability of data all over the world. However, managing this massive amount of data remains a challenge due to its large variety of types and its distribution. Therefore, locating the required file, particularly on the first trial, has turned out to be a difficult task, due to the large similarity of names for different files distributed on the web. Consequently, the accuracy and speed of search have been negatively affected. This work presents a method using electroencephalography (EEG) signals to locate files based on their contents. Building on the concept of natural mind-wave processing, this work analyses the mind-wave signals of different people, extracting their most appropriate features using a multi-objective metaheuristic algorithm, and then classifying them using an artificial neural network to distinguish among files with similar names. The aim of this work is to provide the ability to find files based on their contents using human thoughts only. Implementing this approach and testing it on real people proved its ability to find the desired files accurately within a noticeably shorter time and to retrieve them as a first choice for the user.

Keywords: Artificial intelligence, data contents search, human active memory, mind wave, multi-objective optimization.

157 Comparison between Experimental and Numerical Studies of Fully Encased Composite Columns

Authors: Md. Soebur Rahman, Mahbuba Begum, Raquib Ahsan

Abstract:

A composite column is a structural member that uses a combination of structural steel shapes, pipes or tubes, with or without reinforcing steel bars, and reinforced concrete to provide adequate load carrying capacity to sustain either axial compressive loads alone or a combination of axial loads and bending moments. Composite construction takes advantage of the speed of construction, the light weight and strength of steel, and the higher mass, stiffness, damping properties and economy of reinforced concrete. The most usual types of composite columns are concrete filled steel tubes and partially or fully encased steel profiles. The fully encased composite (FEC) column provides compressive strength, stability, stiffness, improved fire proofing and better corrosion protection. This paper reports experimental and numerical investigations of the behaviour of concrete encased steel composite columns subjected to short-term axial load. In this study, eleven short FEC columns with square cross sections were constructed and tested to examine the load-deflection behavior. The main variables in the tests were the concrete compressive strength, the cross sectional size and the percentage of structural steel. A nonlinear 3-D finite element (FE) model has been developed to analyse the inelastic behaviour of steel, concrete, and longitudinal reinforcement, as well as the effect of concrete confinement of the FEC columns. The FE models have been validated against the current experimental study conducted in the laboratory and against published experimental results under concentric load. It has been observed that the FE model is able to predict the experimental behaviour of FEC columns under concentric gravity loads with good accuracy. Good agreement has been achieved between the complete experimental and numerical load-deflection behaviour in this study. The capacities of each constituent of the FEC columns, such as structural steel, concrete and rebars, were also determined from the numerical study. Concrete is observed to provide around 57% of the total axial capacity of the column, whereas the steel I-sections contribute the rest of the capacity as well as the ductility of the overall system. The nonlinear FE model developed in this study is also used to explore the effect of concrete strength and percentage of structural steel on the behaviour of FEC columns under concentric loads. The axial capacity of FEC columns has been found to increase significantly with increasing concrete strength.

Keywords: Composite, columns, experimental, finite element, fully encased, strength.

156 On the Early Development of Dispersion in Flow through a Tube with Wall Reactions

Authors: M. W. Lau, C. O. Ng

Abstract:

This is a study of the numerical simulation of the convection-diffusion transport of a chemical species in steady flow through a small-diameter tube, which is lined with a very thin layer made up of retentive and absorptive materials. The species may be subject to a first-order kinetic reversible phase exchange with the wall material and irreversible absorption into the tube wall. Owing to the velocity shear across the tube section, the chemical species may spread out axially along the tube at a rate much larger than that given by molecular diffusion; this process is known as dispersion. While the long-time dispersion behavior, well described by the Taylor model, has been extensively studied in the literature, the early development of the dispersion process is by contrast much less investigated. By early development, we mean a span of time, after the release of the chemical into the flow, that is shorter than or comparable to the diffusion time scale across the tube section. To understand the early development of the dispersion, the governing equations along with the reactive boundary conditions are solved numerically using the Flux Corrected Transport Algorithm (FCTA). The computation has enabled us to investigate the combined effects of the reversible and irreversible wall reactions on the early development of the dispersion coefficient. One of the results shows that the dispersion coefficient may approach its steady-state limit in a short time under the following conditions: (i) a high value of the Damkohler number (say Da ≥ 10); (ii) a small but non-zero value of the absorption rate (say Γ* ≤ 0.5).

Keywords: Dispersion coefficient, early development of dispersion, FCTA, wall reactions.

155 Conflation Methodology Applied to Flood Recovery

Authors: E. L. Suarez, D. E. Meeroff, Y. Yong

Abstract:

Current flooding risk modeling focuses on resilience, defined as the probability of recovery from a severe flooding event. However, the long-term damage to property and well-being caused by nuisance flooding and its long-term effects on communities are not typically included in risk assessments. An approach was developed to address the probability of recovering from a severe flooding event combined with the probability of community performance during a nuisance event. A consolidated model, namely the conflation flooding recovery (CFR) model, evaluates risk-coping mitigation strategies for communities based on the recovery time from catastrophic events, such as hurricanes or extreme surges, and from everyday nuisance flooding events. The CFR model assesses the variation contribution of each independent input and generates a weighted output that favors the distribution with minimum variation. This approach is especially useful if the input distributions have dissimilar variances. The CFR is defined as a single distribution resulting from the product of the individual probability density functions. The resulting conflated distribution resides between the parent distributions, and it infers the recovery time required by a community to return to basic functions, such as power, utilities, transportation, and civil order, after a flooding event. The CFR model is more accurate than averaging individual observations before calculating the mean and variance, or than averaging the probabilities evaluated at the input values, which assigns the same weighted variation to each input distribution. The main disadvantage of these traditional methods is that the resulting measure of central tendency is exactly equal to the average of the input distributions' means, without the additional information provided by each individual distribution's variance. When dealing with exponential distributions, such as resilience from severe flooding events and from nuisance flooding events, the conflation result is equivalent to the weighted least squares method or best linear unbiased estimation. The combination of severe flooding risk with nuisance flooding improves flood risk management for highly populated coastal communities, such as in South Florida, USA, and provides a method to estimate community flood recovery time more accurately from two different sources, severe flooding events and nuisance flooding events.
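
A short numerical sketch of the conflation operation is given below in Python: the conflated density is the normalized product of the input densities. The two exponential recovery-time distributions are illustrative assumptions, not calibrated flood-recovery data.

import numpy as np
from scipy import stats

t = np.linspace(0.01, 60, 2000)                      # recovery time in days
severe = stats.expon(scale=20).pdf(t)                # recovery from a severe event
nuisance = stats.expon(scale=5).pdf(t)               # recovery from nuisance flooding

product = severe * nuisance
conflated = product / np.trapz(product, t)           # normalize so it integrates to 1

mean_recovery = np.trapz(t * conflated, t)
print(round(mean_recovery, 2))                       # conflated estimate of recovery time
# The conflated density down-weights the input with larger variance, which is the property
# the abstract highlights over simple averaging of means or probabilities.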

Keywords: Community resilience, conflation, flood risk, nuisance flooding.

154 In vitro Study of Laser Diode Radiation Effect on the Photo-Damage of MCF-7 and MCF-10A Cell Clusters

Authors: A. Dashti, M. Eskandari, L. Farahmand, P. Parvin, A. Jafargholi

Abstract:

Breast cancer is one of the most considerable diseases in the United States and other countries and is the second leading cause of death in women. Common breast cancer treatments can lead to adverse side effects such as loss of hair, nausea, and weakness. These complications arise because the treatments damage some healthy cells while eliminating the cancer cells. In an effort to address these complications, laser radiation was utilized and tested as a targeted cancer treatment for breast cancer. In this regard, tissue engineering approaches are employed by using an electrospun scaffold in order to facilitate the growth of breast cancer cells. Polycaprolactone (PCL) was used as the scaffold material because of its biocompatibility, biodegradability, and support of cell growth. The specific breast cancer cells have the ability to create a three-dimensional cell cluster due to the spontaneous accumulation of cells in the porosity of the scaffold under some specific conditions; therefore, a higher porosity density and a larger pore size are sought. The fibers showed a uniform diameter distribution, and the final scaffold had optimum characteristics with approximately 40% porosity. The images were taken by SEM, and the density and size of the porosity were determined from the images. After preparation, the scaffold was cross-linked with glutaraldehyde and then washed with glycine and phosphate-buffered saline (PBS) in order to neutralize the residual glutaraldehyde. 3-(4,5-Dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) results showed approximately 91.13% viability of the scaffolds for cancer cells. In order to create a cluster, Michigan Cancer Foundation-7 (MCF-7, breast cancer cell line) and Michigan Cancer Foundation-10A (MCF-10A, human mammary epithelial cell line) cells were cultured on the scaffold in a 24-well plate for five days. The cluster was then exposed to 808 nm laser diode radiation to investigate the effect of the laser on the tumor at different powers and exposure times. Under the same conditions, the cancer cells lost their viability more than the healthy ones. In conclusion, laser therapy is a viable method to destroy the target cells with minimal effect on healthy tissues and cells, and it can mitigate the limitations of other cancer treatment methods.

Keywords: Breast cancer, electrospun scaffold, polycaprolactone, laser diode, cancer treatment.

153 Design and Development of iLON Smart Server Based Remote Monitoring System for Induction Motors

Authors: G. S. Ayyappan, M. Raja Raghavan, R. Poonthalir, Kota Srinivas, B. Ramesh Babu

Abstract:

Electrical energy demand in the world, and particularly in India, is increasing drastically, outpacing production over time. In order to reduce the demand-supply gap, conserving energy becomes mandatory. Induction motors are the main driving force in industry and contribute about half of the total plant energy consumption. By effective monitoring and control of induction motors, a huge amount of electricity can be saved. This paper deals with the design and development of such a system, which employs an iLON Smart Server and motor performance monitoring nodes. These nodes monitor the performance of induction motors on-line, on-site and in-situ in industries. A node monitors the performance of a motor by simply measuring the electrical power input and the motor shaft speed, coupled with a genetic algorithm to estimate motor efficiency. The nodes are connected to the iLON server through an RS485 network. The web server collects the motor performance data from the nodes, displays it online, logs it periodically, analyzes it, raises alerts, and generates reports. The system could be effectively used to operate motors around their Best Operating Point (BOP) as well as to perform Life Cycle Assessment of induction motors used in industries in continuous operation.

Keywords: Best operating point, iLON smart server, motor asset management, LONWORKS, Modbus RTU, motor performance.

152 Classification of Potential Biomarkers in Breast Cancer Using Artificial Intelligence Algorithms and Anthropometric Datasets

Authors: Aref Aasi, Sahar Ebrahimi Bajgani, Erfan Aasi

Abstract:

Breast cancer (BC) continues to be the most frequent cancer in females and causes the highest number of cancer-related deaths in women worldwide. Inspired by recent advances in studying the relationship between different patient attributes and the disease, in this paper we investigate different classification methods for better diagnosis of BC in the early stages. In this regard, a dataset from the University Hospital Centre of Coimbra was chosen, and different machine learning (ML)-based and neural network (NN) classifiers have been studied. For this purpose, we selected favorable features among the nine provided attributes of the clinical dataset by using a random forest algorithm. This dataset consists of both healthy controls and BC patients, and it was noted that glucose, BMI, resistin, and age have the most importance, respectively. Moreover, we analyzed these features with various ML-based classifier methods, including Decision Tree (DT), K-Nearest Neighbors (KNN), eXtreme Gradient Boosting (XGBoost), Logistic Regression (LR), Naive Bayes (NB), and Support Vector Machine (SVM), along with the NN-based Multi-Layer Perceptron (MLP) classifier. The results revealed that among the different techniques, the SVM and MLP classifiers have the highest accuracy, at 96% and 92%, respectively. These results show that the adopted procedure could be used effectively for the classification of cancer cells, and they encourage further experimental investigation with more collected data for other types of cancer.
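
A compact Python sketch of the described pipeline follows: a random forest ranks the attributes, the top four are kept, and SVM and MLP classifiers are compared by cross-validation. The synthetic data stands in for the Coimbra dataset; only the attribute names echo those reported in the abstract, and the scores it prints are not the paper's results.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
feature_names = ["glucose", "BMI", "resistin", "age", "insulin", "HOMA", "leptin", "adiponectin", "MCP-1"]
X = rng.normal(0, 1, (116, len(feature_names)))          # synthetic stand-in for the clinical attributes
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, 116) > 0).astype(int)

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
top = np.argsort(rf.feature_importances_)[::-1][:4]       # keep the four most important attributes
print([feature_names[i] for i in top])

for name, clf in [("SVM", SVC()), ("MLP", MLPClassifier(max_iter=2000, random_state=0))]:
    score = cross_val_score(make_pipeline(StandardScaler(), clf), X[:, top], y, cv=5).mean()
    print(name, round(score, 3))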

Keywords: Breast cancer, health diagnosis, Machine Learning, biomarker classification, Neural Network.

151 Solving an Extended Resource Leveling Problem with Multiobjective Evolutionary Algorithms

Authors: Javier Roca, Etienne Pugnaghi, Gaëtan Libert

Abstract:

We introduce an extended resource leveling model that abstracts real-life projects that consider specific work ranges for each resource. Contrary to traditional resource leveling problems, this model considers scarce resources and multiple objectives: the minimization of the project makespan and the leveling of each resource usage over time. We formulate this model as a multiobjective optimization problem and propose a multiobjective genetic algorithm-based solver to optimize it. This solver consists of a two-stage process: a main stage where we obtain non-dominated solutions for all the objectives, and a postprocessing stage where we seek to specifically improve the resource leveling of these solutions. We propose an intelligent encoding for the solver that allows including domain-specific knowledge in the solving mechanism. The chosen encoding proves to be effective in solving leveling problems with scarce resources and multiple objectives. The outcomes of the proposed solver represent optimized trade-offs (alternatives) that can later be evaluated by a decision maker; this multi-solution approach represents an advantage over the traditional single-solution approach. We compare the proposed solver with state-of-the-art resource leveling methods and report competitive results.

Keywords: Intelligent problem encoding, multiobjective decision making, evolutionary computing, RCPSP, resource leveling.

150 Customer Need Type Classification Model using Data Mining Techniques for Recommender Systems

Authors: Kyoung-jae Kim

Abstract:

Recommender systems are usually regarded as an important marketing tool in e-commerce. They use important information about users to facilitate accurate recommendation. This information includes user context, such as location, time and interest, for the personalization of mobile users. We can easily collect information about location and time because mobile devices communicate with the base station of the service provider. However, information about user interest can't be easily collected because user interest cannot be captured automatically without the user's approval. User interest is usually represented as a need. In this study, we classify needs into two types according to prior research, and we investigate the usefulness of data mining techniques for classifying user need type for recommender systems. We employ several data mining techniques, including artificial neural networks, decision trees, case-based reasoning, and multivariate discriminant analysis. Experimental results show that the CHAID algorithm outperforms the other models for classifying user need type. This study performs McNemar's test to examine the statistical significance of the differences in classification results. The results of McNemar's test also show that CHAID performs better than the other models with statistical significance.

Keywords: Customer need type, Data mining techniques, Recommender system, Personalization, Mobile user.
