Search results for: Hunting Search Algorithm and Firefly Algorithm.
468 Hybrid Adaptive Modeling to Enhance Robustness of Real-Time Optimization
Authors: Hussain Syed Asad, Richard Kwok Kit Yuen, Gongsheng Huang
Abstract:
Real-time optimization has been considered an effective approach for improving the energy-efficient operation of heating, ventilation, and air-conditioning (HVAC) systems. In model-based real-time optimization, model mismatches cannot be avoided. When model mismatches are significant, the performance of the real-time optimization will be impaired and hence the expected energy saving will be reduced. In this paper, the model mismatches affecting the real-time optimization of a chiller plant are considered. In the real-time optimization of the chiller plant, a simplified semi-physical or grey-box chiller model is typically used, which must be identified from available operation data. To overcome the model mismatches associated with the chiller model, a hybrid Genetic Algorithm (HGA) method is used for online real-time training of the chiller model. The HGA combines a Genetic Algorithm (GA) for global search with a traditional optimization method, which is faster and more efficient for local search, to avoid the conventional trial-and-error process of GAs. The identification of model parameters is formulated as an optimization problem whose objective function is the least-square error between the model output and the actual output of the chiller plant. A case study is used to illustrate the implementation of the proposed method. It is shown that the proposed approach provides reliability in decision making, enhances the robustness of the real-time optimization strategy, and improves energy performance.
Keywords: Energy performance, hybrid adaptive modeling, hybrid genetic algorithms, real-time optimization, heating, ventilation, and air-conditioning.
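As a rough illustration of the hybrid identification idea described in this abstract (a population-based global search refined by a conventional local optimizer, with a least-square-error objective), the following Python sketch fits a hypothetical grey-box chiller power model to synthetic operation data. The model form, parameter bounds, and data are illustrative assumptions, not those used in the paper.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical grey-box chiller model: electrical power = a + b*load + c*load**2
def chiller_power(params, load):
    a, b, c = params
    return a + b * load + c * load ** 2

def lse(params, load, measured):
    # Least-square error between model output and measured plant output
    return np.sum((chiller_power(params, load) - measured) ** 2)

rng = np.random.default_rng(0)
load = np.linspace(0.2, 1.0, 50)                      # part-load ratios (synthetic)
measured = chiller_power([12.0, 3.5, 8.0], load) + rng.normal(0, 0.3, load.size)

bounds = [(0, 50), (0, 20), (0, 20)]

# Global stage: crude GA-like random population search over the bounded space
population = rng.uniform([b[0] for b in bounds], [b[1] for b in bounds], size=(200, 3))
best = min(population, key=lambda p: lse(p, load, measured))

# Local stage: gradient-based refinement of the best candidate (stands in for the
# "traditional optimization method" of the hybrid scheme)
result = minimize(lse, best, args=(load, measured), bounds=bounds, method="L-BFGS-B")
print("identified parameters:", result.x)
```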
467 Optimal Sliding Mode Controller for Knee Flexion During Walking
Authors: Gabriel Sitler, Yousef Sardahi, Asad Salem
Abstract:
This paper presents an optimal and robust sliding mode controller (SMC) to regulate the position of the knee joint angle for patients suffering from knee injuries. The controller imitates the role of active orthoses that produce the joint torques required to overcome gravity and loading forces and regain natural human movements. To this end, a mathematical model of the shank, the lower part of the leg, is derived first and then used for the control system design and computer simulations. The design of the controller is carried out in optimal and multi-objective settings. Four objectives are considered: minimization of the control effort and tracking error; and maximization of the control signal smoothness and closed-loop system’s speed of response. Optimal solutions in terms of the Pareto set and its image, the Pareto front, are obtained. The results show that there are trade-offs among the design objectives and many optimal solutions from which the decision-maker can choose to implement. Also, computer simulations conducted at different points from the Pareto set and assuming knee squat movement demonstrate competing relationships among the design goals. In addition, the proposed control algorithm shows robustness in tracking a standard gait signal when accounting for uncertainty in the shank’s parameters.
Keywords: Optimal control, multi-objective optimization, sliding mode control, wearable knee exoskeletons.
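A minimal sketch of the sliding mode control idea behind this abstract, applied to a single-link shank model tracking a knee-angle reference; the dynamics parameters, sliding-surface slope, and gains below are illustrative assumptions rather than the paper's values.

```python
import numpy as np

# Assumed single-link shank dynamics: J*th'' + b*th' + m*g*l*sin(th) = u
J, b, m, g, l = 0.35, 0.05, 4.0, 9.81, 0.25    # illustrative inertia, damping, mass, length
lam, K = 8.0, 12.0                              # sliding-surface slope and switching gain

def smc_torque(th, dth, th_ref, dth_ref, ddth_ref):
    e, de = th - th_ref, dth - dth_ref
    s = de + lam * e                            # sliding surface
    u_eq = J * (ddth_ref - lam * de) + b * dth + m * g * l * np.sin(th)
    return u_eq - K * np.tanh(s / 0.05)         # smoothed switching term to limit chattering

dt = 1e-3
t = np.arange(0.0, 3.0, dt)
th_ref = 0.5 * np.sin(2 * np.pi * 0.5 * t)      # knee flexion reference (rad)
dth_ref = np.gradient(th_ref, dt)
ddth_ref = np.gradient(dth_ref, dt)

th, dth = 0.0, 0.0
for k in range(t.size):
    u = smc_torque(th, dth, th_ref[k], dth_ref[k], ddth_ref[k])
    ddth = (u - b * dth - m * g * l * np.sin(th)) / J
    dth += ddth * dt
    th += dth * dt
print("final tracking error (rad):", th - th_ref[-1])
```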
466 On the Early Development of Dispersion in Flow through a Tube with Wall Reactions
Abstract:
This is a study on numerical simulation of the convection-diffusion transport of a chemical species in steady flow through a small-diameter tube, which is lined with a very thin layer made up of retentive and absorptive materials. The species may be subject to a first-order kinetic reversible phase exchange with the wall material and irreversible absorption into the tube wall. Owing to the velocity shear across the tube section, the chemical species may spread out axially along the tube at a rate much larger than that given by molecular diffusion alone; this process is known as dispersion. While the long-time dispersion behavior, well described by the Taylor model, has been extensively studied in the literature, the early development of the dispersion process is by contrast much less investigated. By early development, we mean a span of time, after the release of the chemical into the flow, that is shorter than or comparable to the diffusion time scale across the tube section. To understand the early development of the dispersion, the governing equations along with the reactive boundary conditions are solved numerically using the Flux Corrected Transport Algorithm (FCTA). The computation has enabled us to investigate the combined effects of the reversible and irreversible wall reactions on the early development of the dispersion coefficient. One of the results shows that the dispersion coefficient may approach its steady-state limit in a short time under the following conditions: (i) a high value of the Damkohler number (say Da ≥ 10); (ii) a small but non-zero value of the absorption rate (say Γ* ≤ 0.5).
Keywords: Dispersion coefficient, early development of dispersion, FCTA, wall reactions.
465 Profit Optimization for Solar Plant Electricity Production
Authors: Fl. Loury, P. Sablonière
Abstract:
In this paper, a stochastic scenario-based model predictive control scheme applied to molten salt storage systems in a concentrated solar tower power plant is presented. The main goal of this study is to build a tool to analyze current and expected future resources for evaluating the weekly power to be advertised on the electricity secondary market. This tool will allow the plant operator to maximize profits while hedging the impact on the system of stochastic variables such as resource or sunlight shortage.
Solving the problem first requires a mixed logic dynamic model of the plant. The two stochastic variables, respectively the incoming solar energy and the electricity demand from the secondary market, are modeled by least-squares regression. Robustness is achieved by drawing a number of random variable realizations and applying the most restrictive one to the system. This scenario-based control technique provides the plant operator with a confidence interval containing a given percentage of possible stochastic variable realizations, in such a way that robust control is always achieved within its bounds. The results obtained from many trajectory simulations show the existence of a 'reliable' interval, which experimentally confirms the robustness of the algorithm.
Keywords: Molten Salt Storage System, Concentrated Solar Tower Power Plant, Robust Stochastic Model Predictive Control.
464 Deep Learning Application for Object Image Recognition and Robot Automatic Grasping
Authors: Shiuh-Jer Huang, Chen-Zon Yan, C. K. Huang, Chun-Chien Ting
Abstract:
Since vision systems are in intense demand for autonomous applications in industrial environments, image recognition has become an important research topic. Here, a deep learning algorithm is employed in the vision system to recognize industrial objects, and it is integrated with a 7A6 Series Manipulator for automatic gripping tasks. A PC and a Graphics Processing Unit (GPU) are chosen to construct the 3D vision recognition system. A depth camera (Intel RealSense SR300) is employed to extract the images for object recognition and coordinate derivation. The YOLOv2 scheme is adopted as the convolutional neural network (CNN) structure for object classification and center point prediction. Additionally, an image processing strategy is used to find the object contour for calculating the object orientation angle. Then, the specified object location and orientation information are sent to the robot controller. Finally, a six-axis manipulator can grasp the specified object in a random environment based on the user command and the extracted image information. The experimental results show that YOLOv2 has been successfully employed to detect the object location and category with confidence near 0.9 and 3D position error less than 0.4 mm. This is useful for future intelligent robotic applications in Industry 4.0 environments.
Keywords: Deep learning, image processing, convolutional neural network, YOLOv2, 7A6 series manipulator.
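The orientation-angle step mentioned in the abstract (finding the object contour and deriving an orientation for the gripper) can be illustrated with OpenCV as below; the binary mask is synthetic, standing in for a segmented object from the depth camera, so the routine is a sketch rather than the authors' actual pipeline.

```python
import cv2
import numpy as np

# Synthetic binary mask standing in for a segmented object image
mask = np.zeros((240, 320), dtype=np.uint8)
box = cv2.boxPoints(((160, 120), (120, 40), 30.0)).astype(np.int32)   # rectangle rotated by 30 deg
cv2.fillPoly(mask, [box], 255)

# Find the object contour and fit a rotated bounding box to recover the orientation angle
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
largest = max(contours, key=cv2.contourArea)
(cx, cy), (w, h), angle = cv2.minAreaRect(largest)

print(f"object centre: ({cx:.1f}, {cy:.1f}) px, orientation: {angle:.1f} deg")
```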
463 Design and Development of iLON Smart Server Based Remote Monitoring System for Induction Motors
Authors: G. S. Ayyappan, M. Raja Raghavan, R. Poonthalir, Kota Srinivas, B. Ramesh Babu
Abstract:
Electrical energy demand in the world, and particularly in India, has been increasing much faster than its production over time. In order to reduce the demand-supply gap, conserving energy becomes mandatory. Induction motors are the main driving force in industry and contribute about half of the total plant energy consumption. By effective monitoring and control of induction motors, substantial amounts of electricity can be saved. This paper deals with the design and development of such a system, which employs an iLON Smart Server and motor performance monitoring nodes. These nodes monitor the performance of induction motors on-line, on-site, and in-situ in industry. Each node monitors motor performance by simply measuring the electrical power input and motor shaft speed, coupled to a genetic algorithm that estimates motor efficiency. The nodes are connected to the iLON Server through an RS485 network. The web server collects the motor performance data from the nodes, displays it online, logs it periodically, analyzes it, raises alerts, and generates reports. The system can be used effectively to operate a motor around its Best Operating Point (BOP) as well as to perform life cycle assessment of induction motors in continuous industrial operation.
Keywords: Best operating point, iLON smart server, motor asset management, LONWORKS, Modbus RTU, motor performance.
462 Classification of Potential Biomarkers in Breast Cancer Using Artificial Intelligence Algorithms and Anthropometric Datasets
Authors: Aref Aasi, Sahar Ebrahimi Bajgani, Erfan Aasi
Abstract:
Breast cancer (BC) continues to be the most frequent cancer in females and causes the highest number of cancer-related deaths in women worldwide. Inspired by recent advances in studying the relationship between different patient attributes and the disease, in this paper we investigate different classification methods for better diagnosis of BC in the early stages. In this regard, a dataset from the University Hospital Centre of Coimbra was chosen, and different machine learning (ML)-based and neural network (NN) classifiers have been studied. For this purpose, we selected favorable features among the nine provided attributes of the clinical dataset by using a random forest algorithm. This dataset consists of both healthy controls and BC patients, and it was noted that glucose, BMI, resistin, and age have the most importance, in that order. Moreover, we analyzed these features with various ML-based classifier methods, including Decision Tree (DT), K-Nearest Neighbors (KNN), eXtreme Gradient Boosting (XGBoost), Logistic Regression (LR), Naive Bayes (NB), and Support Vector Machine (SVM), along with an NN-based Multi-Layer Perceptron (MLP) classifier. The results revealed that among the different techniques, the SVM and MLP classifiers have the highest accuracy, at 96% and 92%, respectively. These results indicate that the adopted procedure can be used effectively for the classification of cancer cells, and they encourage further experimental investigations with more collected data for other types of cancer.
Keywords: Breast cancer, health diagnosis, Machine Learning, biomarker classification, Neural Network.
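A compact sketch of the workflow described above (random forest feature ranking followed by SVM and MLP classification) using scikit-learn; the data here are a synthetic stand-in for the Coimbra clinical dataset, and the hyperparameters are illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
# Synthetic stand-in for the Coimbra dataset: nine attributes, binary label (healthy vs. BC)
X = rng.normal(size=(116, 9))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.5, 116) > 0).astype(int)

# Rank attributes with a random forest and keep the four most important ones
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
top4 = np.argsort(rf.feature_importances_)[-4:]

for name, clf in [("SVM", SVC(kernel="rbf")),
                  ("MLP", MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0))]:
    acc = cross_val_score(clf, X[:, top4], y, cv=5).mean()
    print(f"{name} cross-validated accuracy: {acc:.2f}")
```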
461 Solving an Extended Resource Leveling Problem with Multiobjective Evolutionary Algorithms
Authors: Javier Roca, Etienne Pugnaghi, Gaëtan Libert
Abstract:
We introduce an extended resource leveling model that abstracts real-life projects by considering specific work ranges for each resource. Contrary to traditional resource leveling problems, this model considers scarce resources and multiple objectives: the minimization of the project makespan and the leveling of each resource usage over time. We formulate this model as a multiobjective optimization problem and propose a multiobjective genetic-algorithm-based solver to optimize it. This solver consists of a two-stage process: a main stage in which we obtain non-dominated solutions for all the objectives, and a postprocessing stage in which we seek to specifically improve the resource leveling of these solutions. We propose an intelligent encoding for the solver that allows domain-specific knowledge to be included in the solving mechanism. The chosen encoding proves to be effective for solving leveling problems with scarce resources and multiple objectives. The outcomes of the proposed solver represent optimized trade-offs (alternatives) that can later be evaluated by a decision maker; this multi-solution approach is an advantage over the traditional single-solution approach. We compare the proposed solver with state-of-the-art resource leveling methods and report competitive results.
Keywords: Intelligent problem encoding, multiobjective decision making, evolutionary computing, RCPSP, resource leveling.
460 From Electroencephalogram to Epileptic Seizures Detection by Using Artificial Neural Networks
Authors: Gaetano Zazzaro, Angelo Martone, Roberto V. Montaquila, Luigi Pavone
Abstract:
Seizures are the main factor affecting the quality of life of epileptic patients. The diagnosis of epilepsy, and hence the identification of the epileptogenic zone, is commonly made by continuous Electroencephalogram (EEG) signal monitoring. Seizure identification on EEG signals is performed manually by epileptologists, and this process is usually very long and error-prone. The aim of this paper is to describe an automated method able to detect seizures in EEG signals, using the knowledge discovery in databases process and data mining methods and algorithms, which can support physicians during the seizure detection process. Our detection method is based on an Artificial Neural Network classifier, trained by applying the multilayer perceptron algorithm, and on a software application, called Training Builder, that has been developed for the massive extraction of features from EEG signals. This tool covers all the data preparation steps, ranging from signal processing to data analysis techniques, including the sliding window paradigm, dimensionality reduction algorithms, information theory, and feature selection measures. The final model shows excellent performance, reaching an accuracy of over 99% during tests on data of a single patient retrieved from a publicly available EEG dataset.
Keywords: Artificial Neural Network, Data Mining, Electroencephalogram, Epilepsy, Feature Extraction, Seizure Detection, Signal Processing.
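To make the sliding-window feature extraction and MLP classification described above concrete, the sketch below runs the same kind of pipeline on a synthetic EEG trace; the window length, features (variance, mean absolute value, line length), and network size are assumptions for illustration only.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(2)
fs, win = 256, 256                              # 1-second sliding windows at 256 Hz
signal = rng.normal(size=fs * 120)              # synthetic 2-minute EEG trace
labels = np.zeros(120, dtype=int)
labels[60:75] = 1                               # pretend a seizure spans seconds 60-75
signal[60 * fs:75 * fs] *= 4                    # seizure segment gets a larger amplitude

def window_features(x):
    # Simple per-window features: variance, mean absolute value, line length
    return [x.var(), np.abs(x).mean(), np.abs(np.diff(x)).sum()]

X = np.array([window_features(signal[i * win:(i + 1) * win]) for i in range(120)])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3,
                                          stratify=labels, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=3000, random_state=0).fit(X_tr, y_tr)
print("window-level accuracy:", clf.score(X_te, y_te))
```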
459 Customer Need Type Classification Model using Data Mining Techniques for Recommender Systems
Authors: Kyoung-jae Kim
Abstract:
Recommender systems are usually regarded as an important marketing tool in e-commerce. They use important information about users to facilitate accurate recommendation. This information includes user context such as location, time, and interest for the personalization of mobile users. We can easily collect information about location and time because mobile devices communicate with the base station of the service provider. However, information about user interest cannot be easily collected because user interest cannot be captured automatically without the user's approval. User interest is usually represented as a need. In this study, we classify needs into two types according to prior research. This study investigates the usefulness of data mining techniques for classifying user need type for recommender systems. We employ several data mining techniques, including artificial neural networks, decision trees, case-based reasoning, and multivariate discriminant analysis. Experimental results show that the CHAID algorithm outperforms the other models for classifying user need type. This study performs the McNemar test to examine the statistical significance of the differences between the classification results. The results of the McNemar test also show that CHAID performs better than the other models with statistical significance.
Keywords: Customer need type, data mining techniques, recommender system, personalization, mobile user.
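The McNemar test used above to compare CHAID with the other classifiers can be reproduced with statsmodels as sketched below; the contingency counts are hypothetical, not taken from the study.

```python
from statsmodels.stats.contingency_tables import mcnemar

# Hypothetical counts of test cases on which the two classifiers agree or disagree:
# rows = CHAID correct / wrong, columns = competing model correct / wrong
table = [[220, 35],
         [12, 33]]

result = mcnemar(table, exact=True)
print(f"McNemar statistic = {result.statistic}, p-value = {result.pvalue:.4f}")
# A small p-value suggests the two classifiers' error rates differ significantly.
```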
458 Selecting Negative Examples for Protein-Protein Interaction
Authors: Mohammad Shoyaib, M. Abdullah-Al-Wadud, Oksam Chae
Abstract:
Proteomics is one of the largest areas of research for bioinformatics and medical science. An ambitious goal of proteomics is to elucidate the structure, interactions, and functions of all proteins within cells and organisms. Predicting Protein-Protein Interaction (PPI) is one of the crucial and decisive problems in current research. Genomic data offer a great opportunity and, at the same time, a lot of challenges for the identification of these interactions. Many methods have already been proposed in this regard. In the case of in-silico identification, most methods require both positive and negative examples of protein interaction, and the quality of these examples is crucial for the final prediction accuracy. Positive examples are relatively easy to obtain from well-known databases, but the generation of negative examples is not a trivial task. Current PPI identification methods generate negative examples based on assumptions that are likely to affect their prediction accuracy. Hence, if more reliable negative examples are used, PPI prediction methods may achieve even higher accuracy. Focusing on this issue, a graph-based negative example generation method is proposed, which is simple and more accurate than the existing approaches. An interaction graph of the protein sequences is created. The basic assumption is that the longer the shortest path between two protein sequences in the interaction graph, the lower the possibility of their interaction. A well-established PPI detection algorithm is employed with our negative examples, and in most cases it increases the accuracy by more than 10% in comparison with the original negative pair selection method.
Keywords: Interaction graph, negative training data, protein-protein interaction, support vector machine.
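The shortest-path criterion for negative example selection described above can be sketched with networkx as follows; the toy interaction graph and the distance threshold are illustrative assumptions, not the paper's dataset or tuning.

```python
import itertools
import networkx as nx

# Toy interaction graph: nodes are protein IDs, edges are known interactions
G = nx.Graph([("P1", "P2"), ("P2", "P3"), ("P3", "P4"),
              ("P4", "P5"), ("P1", "P6"), ("P6", "P7")])

def negative_candidates(graph, min_distance=4):
    """Pairs whose shortest path is long (or absent) are taken as likely non-interacting."""
    lengths = dict(nx.all_pairs_shortest_path_length(graph))
    for u, v in itertools.combinations(graph.nodes, 2):
        d = lengths[u].get(v, float("inf"))
        if d >= min_distance:
            yield u, v, d

for u, v, d in negative_candidates(G):
    print(f"negative pair ({u}, {v}), shortest-path distance = {d}")
```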
457 Finite Volume Method for Flow Prediction Using Unstructured Meshes
Authors: Juhee Lee, Yongjun Lee
Abstract:
In designing low-energy-consuming buildings, the heat transfer through large glass or wall areas becomes critical. Multiple layers of window glass and wall are employed for high insulation. The gravity-driven air flow between window glasses or wall layers is a natural convection phenomenon that is a key part of the heat transfer. As the first step of the natural convection analysis, this study presents the development and application of a finite volume method for the numerical computation of viscous incompressible flows. It will become part of a natural convection analysis with a high-order scheme, a multi-grid method, and dual time stepping in the future. A fully implicit, second-order finite volume method is used to discretize and solve the fluid flow on unstructured grids composed of arbitrary-shaped cells. The governing equations are integrated and discretized in the finite volume manner using a collocated arrangement of variables. The convergence of the SIMPLE segregated algorithm for the solution of the coupled nonlinear algebraic equations is accelerated by using a sparse matrix solver such as BiCGSTAB. The method used in the present study is verified by applying it to several flows for which either the numerical solution is known or the solution can be obtained using another numerical technique available in other studies. The accuracy of the method is assessed through grid refinement.
Keywords: Finite volume method, fluid flow, laminar flow, unstructured grid.
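As a small illustration of the sparse linear solve that accelerates the SIMPLE segregated algorithm mentioned above, the snippet below applies SciPy's BiCGSTAB to a 1-D Poisson-type system; the matrix is a synthetic stand-in for the linearized equations arising on an unstructured mesh.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import bicgstab

# Sparse tridiagonal system standing in for a linearized pressure-correction equation
n = 100
A = diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

x, info = bicgstab(A, b)
print("converged" if info == 0 else f"bicgstab info = {info}",
      "| residual norm:", np.linalg.norm(A @ x - b))
```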
456 Low-Cost Mechatronic Design of an Omnidirectional Mobile Robot
Authors: S. Cobos-Guzman
Abstract:
This paper presents the results of a mechatronic design based on a 4-wheel omnidirectional mobile robot that can be used in indoor logistic applications. The low-level control is implemented on two open-source hardware platforms (Raspberry Pi 3 Model B+ and Arduino Mega 2560) that control four industrial motors, four ultrasound sensors, four optical encoders, a vision system of two cameras, and a Hokuyo URG-04LX-UG01 laser scanner. Moreover, the system is powered with a lithium battery that can supply 24 V DC with a maximum capacity of 20 Ah. The Robot Operating System (ROS) has been implemented on the Raspberry Pi, and the performance is evaluated for the selected sensors and hardware. The mechatronic system is evaluated, and safe modes of power distribution for controlling all the electronic devices are proposed based on different tests. Therefore, based on the performance results, some recommendations are given for using the Raspberry Pi and Arduino in terms of power, communication, and distribution of control for different devices. According to these recommendations, the sensors are distributed between both real-time controllers (Arduino and Raspberry Pi). On the other hand, the camera drivers have been implemented in Linux, and a Python program has been written to access the cameras. These cameras will be used for implementing a deep learning algorithm to recognize people and objects. In this way, the level of intelligence can be increased in combination with the maps obtained from the laser scanner.
Keywords: Autonomous, indoor robot, mechatronic, omnidirectional robot.
455 Automatic Tuning for a Systemic Model of Banking Originated Losses (SYMBOL) Tool on Multicore
Authors: Ronal Muresano, Andrea Pagano
Abstract:
Nowadays, mathematical and statistical applications are developed with greater complexity and accuracy. However, this precision and complexity means that applications need more computational power in order to be executed faster. In this sense, multicore environments play an important role in improving and optimizing the execution time of these applications. These environments allow the inclusion of more parallelism inside the node. However, taking advantage of this parallelism is not an easy task, because we have to deal with problems such as core communication, data locality, memory sizes (cache and RAM), synchronization, data dependencies in the model, etc. These issues become more important when we wish to improve the application's performance and scalability. Hence, this paper describes an optimization method developed for the Systemic Model of Banking Originated Losses (SYMBOL) tool developed by the European Commission, which is based on analyzing the application's weaknesses in order to exploit the advantages of the multicore architecture. All these improvements are done in an automatic and transparent manner, with the aim of improving the performance metrics of the tool. Finally, experimental evaluations show the effectiveness of our new optimized version, in which we have achieved a considerable improvement in the execution time: the time has been reduced by around 96% for the best case tested, between the original serial version and the automatic parallel version.
Keywords: Algorithm optimization, Bank Failures, OpenMP, Parallel Techniques, Statistical tool.
454 Research on the Optimization of the Facility Layout of Efficient Cafeterias for Troops
Authors: Qing Zhang, Jiachen Nie, Yujia Wen, Guanyuan Kou, Peng Yu, Kun Xia, Qin Yang, Li Ding
Abstract:
Background: A facility layout problem (FLP) is an NP-complete (non-deterministic polynomial) problem for which it is hard to obtain an exact optimal solution. FLP has been widely studied in various limited spaces and workflows. For example, troop cafeterias with many types of equipment suffer from chaotic processes during dining. Objective: This article aims to optimize the layout of a troop cafeteria and to improve the overall efficiency of the dining process. Methods: First, the original cafeteria layout design scheme was analyzed from an ergonomic perspective and two new design schemes were generated. Next, three facility layout models were designed, and simulation was applied to compare the total time and troop density among the schemes. Last, an experiment on the dining process with video observation and analysis verified the simulation results. Results: In simulation, the dining time under the second new layout is shortened by 2.25% and 1.89% (p<0.0001, p=0.0001) compared with the other two layouts, while troop-flow density and interference are both greatly reduced in the two new layouts. In the experiment, process completion time and the number of interferences were reduced as well, which verified the corresponding simulation results. Conclusion: Our two new layout schemes are shown to be superior by a series of simulations and field experiments. In future research, similar approaches could be applied while taking layout-design algorithm calculation into consideration.
Keywords: Troops’ cafeteria, layout optimization, dining efficiency, AnyLogic simulation, field experiment.
453 Breast Cancer Survivability Prediction via Classifier Ensemble
Authors: Mohamed Al-Badrashiny, Abdelghani Bellaachia
Abstract:
This paper presents a classifier ensemble approach for predicting the survivability of breast cancer patients using the latest database version of the Surveillance, Epidemiology, and End Results (SEER) Program of the National Cancer Institute. The system consists of two main components: a feature selection component and a classifier ensemble component. The feature selection component divides the features in the SEER database into four groups. It then tries to find the most important features among the four groups that maximize the weighted average F-score of a certain classification algorithm. The ensemble component uses three different classifiers, each of which models a different set of features from SEER through the feature selection module. On top of them, another classifier is used to give the final decision based on the output decisions and confidence scores of each of the underlying classifiers. Different classification algorithms have been examined; the best setup found uses the decision tree, Bayesian network, and Naïve Bayes algorithms for the underlying classifiers and Naïve Bayes for the classifier ensemble step. The system outperforms all published systems to date when evaluated against the exact same SEER data (period of 1973-2002). It gives an 87.39% weighted average F-score compared to 85.82% and 81.34% for the other published systems. By increasing the data size to cover the whole database (period of 1973-2014), the overall weighted average F-score jumps to 92.4% on the held-out unseen test set.
Keywords: Classifier ensemble, breast cancer survivability, data mining, SEER.
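A minimal scikit-learn sketch of the stacking idea described above: base learners feed a meta-classifier that makes the final decision. The data are synthetic (the SEER records require registration), and since a Bayesian network learner is not available in scikit-learn, a second decision tree stands in for it; in the paper, each base classifier would also see a different feature group.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

# Synthetic, imbalanced stand-in for survivability records
X, y = make_classification(n_samples=2000, n_features=20, weights=[0.7], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

ensemble = StackingClassifier(
    estimators=[("dt", DecisionTreeClassifier(max_depth=6)),
                ("dt2", DecisionTreeClassifier(max_depth=3)),   # stand-in for the Bayesian network
                ("nb", GaussianNB())],
    final_estimator=GaussianNB())                               # Naive Bayes as the ensemble step
ensemble.fit(X_tr, y_tr)
print("weighted average F-score:", f1_score(y_te, ensemble.predict(X_te), average="weighted"))
```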
452 Pipelined Control-Path Effects on Area and Performance of a Wormhole-Switched Network-on-Chip
Authors: Faizal A. Samman, Thomas Hollstein, Manfred Glesner
Abstract:
This paper presents the design trade-offs and performance impact of the number of pipeline stages of the control path signals in a wormhole-switched network-on-chip (NoC). The number of pipeline stages of the control path varies between one and two cycles. The control paths consist of the routing request paths for output selection and the arbitration paths for input selection. Data communications between on-chip routers are implemented synchronously, and for quality of service, the inter-router data transports are controlled by using link-level congestion control to avoid loss of data due to overflow. The trade-off between the area (logic cell area) and the performance (bandwidth gain) of two proposed NoC router microarchitectures is presented in this paper. The performance evaluation is made using a traffic scenario with different numbers of workloads under a 2D mesh NoC topology with a static routing algorithm. Using a 130-nm CMOS standard-cell technology, our NoC routers can be clocked at 1 GHz, resulting in a high-speed network link and a high router bandwidth capacity of about 320 Gbit/s. Based on our experiments, the number of control path pipeline stages has a more significant impact on the NoC performance than on the logic area of the NoC router.
Keywords: Network-on-Chip, synchronous parallel pipeline, router architecture, wormhole switching.
451 Multi-Level Air Quality Classification in China Using Information Gain and Support Vector Machine
Authors: Bingchun Liu, Pei-Chann Chang, Natasha Huang, Dun Li
Abstract:
Machine learning and data mining are two important tools for extracting useful information and knowledge from large datasets. In machine learning, classification is a widely used technique to predict qualitative variables and is generally preferred over regression from an operational point of view. Due to the enormous increase in air pollution in various countries, especially China, air quality classification has become one of the most important topics in air quality research and modelling. This study introduces a hybrid classification model based on information theory and the Support Vector Machine (SVM), using the air quality data of four cities in China, namely Beijing, Guangzhou, Shanghai, and Tianjin, from January 1, 2014 to April 30, 2016. China's Ministry of Environmental Protection has classified daily air quality into six levels, namely Serious Pollution, Severe Pollution, Moderate Pollution, Light Pollution, Good, and Excellent, based on the respective Air Quality Index (AQI) values. Using information theory, the information gain (IG) is calculated and feature selection is performed for both categorical features and continuous numeric features. Then the SVM machine learning algorithm is applied to the selected features with cross-validation. The final evaluation reveals that the IG and SVM hybrid model performs better than the SVM (alone), Artificial Neural Network (ANN), and K-Nearest Neighbours (KNN) models in terms of accuracy as well as complexity.
Keywords: Machine learning, air quality classification, air quality index, information gain, support vector machine, cross-validation.
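A rough sketch of the information-gain-plus-SVM pipeline described above; the pollutant readings are synthetic, and information gain is approximated here with scikit-learn's mutual information estimator rather than the exact discretized computation used in the study.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(3)
# Synthetic pollutant/meteorological features standing in for the four-city dataset
X = rng.normal(size=(800, 8))
y = np.digitize(X[:, 0] + 0.5 * X[:, 1], [-1.0, 0.0, 1.0])   # four synthetic AQI levels

# Rank features by (approximate) information gain and keep the most informative ones
ig = mutual_info_classif(X, y, random_state=0)
selected = np.argsort(ig)[-4:]

acc = cross_val_score(SVC(kernel="rbf"), X[:, selected], y, cv=5).mean()
print("cross-validated accuracy:", round(acc, 3))
```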
450 A Novel VLSI Architecture for Image Compression Model Using Low Power Discrete Cosine Transform
Authors: Vijaya Prakash A. M., K. S. Gurumurthy
Abstract:
In image processing, image compression can improve the performance of digital systems by reducing the cost and time of image storage and transmission without significant reduction of image quality. This paper describes a low-complexity Discrete Cosine Transform (DCT) hardware architecture for image compression [6]. In this DCT architecture, common computations are identified and shared to remove redundant computations in the DCT matrix operation. Vector processing is used for the implementation of the DCT. This reduction in the computational complexity of the 2D DCT reduces power consumption. The 2D DCT is performed on an 8x8 matrix using two 1-dimensional DCT blocks and a transposition memory [7]. The inverse discrete cosine transform (IDCT) is performed to obtain the image matrix and reconstruct the original image. The proposed image compression algorithm is implemented in MATLAB code. The VLSI design of the architecture is implemented using Verilog HDL. The proposed hardware architecture for image compression employing the DCT was synthesized using an RTL compiler and mapped using 180 nm standard cells. The simulation is done using ModelSim. The simulation results from MATLAB and Verilog HDL are compared. A detailed analysis of power and area was done using the RTL compiler from CADENCE. The power consumption of the DCT core is reduced to 1.027 mW with minimum area [1].
Keywords: Discrete Cosine Transform (DCT), Inverse Discrete Cosine Transform (IDCT), Joint Photographic Expert Group (JPEG), low power design, Very Large Scale Integration (VLSI).
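The separable structure mentioned above (an 8x8 2D DCT computed with two 1-D DCT passes and a transposition in between) can be checked numerically in a few lines; this is a floating-point reference model, not the fixed-point hardware design of the paper.

```python
import numpy as np
from scipy.fft import dct, idct

def dct2(block):
    # 1-D DCT along one axis, transpose, 1-D DCT again: mirrors the
    # two-1-D-blocks-plus-transposition-memory structure of the architecture.
    return dct(dct(block, axis=0, norm="ortho").T, axis=0, norm="ortho").T

def idct2(coeffs):
    return idct(idct(coeffs, axis=0, norm="ortho").T, axis=0, norm="ortho").T

block = np.arange(64, dtype=float).reshape(8, 8)    # an 8x8 pixel block
coeffs = dct2(block)
reconstructed = idct2(coeffs)
print("max reconstruction error:", np.abs(reconstructed - block).max())
```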
449 Earth Station Neural Network Control Methodology and Simulation
Authors: Hanaa T. El-Madany, Faten H. Fahmy, Ninet M. A. El-Rahman, Hassen T. Dorrah
Abstract:
Renewable energy resources are inexhaustible and clean compared with conventional resources. They are also used to supply regions with no grid and no telephone lines, often with difficult accessibility by common transport. Satellite earth stations located in remote areas are an important application of renewable energy. Neural control is a branch of the general field of intelligent control, which is based on the concept of artificial intelligence. This paper presents the mathematical modeling of a satellite earth station power system, which is required for simulating the system. Aswan is selected as the site under consideration because it is a region rich in solar energy. The complete power system is simulated using MATLAB–SIMULINK. An artificial neural network (ANN) based model has been developed for the optimum operation of the earth station power system. The ANN is trained using backpropagation with the Levenberg–Marquardt algorithm. The best validation performance is obtained for the minimum mean square error. The regression between the network output and the corresponding target is 96%, which indicates high accuracy. The neural network controller architecture gives satisfactory results with a small number of neurons, and is hence better in terms of the memory and time required for implementation. The results indicate that the proposed control unit using an ANN can be successfully used for controlling the satellite earth station power system.
Keywords: Satellite, neural network, MATLAB, power system.
448 A Control Model for Improving Safety and Efficiency of Navigation System Based on Reinforcement Learning
Authors: Almutasim Billa A. Alanazi, Hal S. Tharp
Abstract:
Artificial Intelligence (AI), and specifically Reinforcement Learning (RL), has proven helpful in many control and path planning technologies, such as navigation systems, by enhancing their performance. RL learns from experience by interacting with the environment to determine the optimal policy, which takes the best action in a particular state while accounting for long-term rewards. Most navigation systems focus primarily on "arriving faster," overlooking safety and efficiency while estimating the optimum path, even though safety and efficiency are essential factors when planning a long-distance journey. This paper presents an RL control model that proposes a control mechanism for improving navigation systems. The model could also be applied to other control path planning applications because it is adjustable and can accept different properties and parameters; here, the navigation system application is taken as a case study for evaluating the proposed model. The model uses a Q-learning algorithm for training and updating the policy. It allows the agent to analyze the quality of an action taken in the environment in order to maximize rewards. The model can update rewards regularly based on safety and efficiency assessments, allowing the policy to consider the desired safety and efficiency benefits while making decisions, which improves the quality of the path planning decisions compared to conventional RL approaches.
Keywords: Artificial intelligence, control system, navigation systems, reinforcement learning.
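A toy Q-learning loop in the spirit of the model described above, with the reward shaped by per-step (efficiency) costs, a safety penalty on a hazardous cell, and a goal bonus; the grid world, penalties, and learning parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
n_states, n_actions = 16, 4             # 4x4 grid world; actions: up, down, left, right
goal, hazard = 15, 9                    # the hazard cell models an unsafe road segment
alpha, gamma, eps = 0.1, 0.95, 0.1      # learning rate, discount factor, exploration rate

def step(s, a):
    r, c = divmod(s, 4)
    dr, dc = [(-1, 0), (1, 0), (0, -1), (0, 1)][a]
    s2 = min(max(r + dr, 0), 3) * 4 + min(max(c + dc, 0), 3)
    # Reward shaping: efficiency (per-step cost), safety (hazard penalty), goal bonus
    reward = -1.0 + (-10.0 if s2 == hazard else 0.0) + (20.0 if s2 == goal else 0.0)
    return s2, reward, s2 == goal

Q = np.zeros((n_states, n_actions))
for _ in range(2000):
    s, done = 0, False
    while not done:
        a = rng.integers(n_actions) if rng.random() < eps else int(Q[s].argmax())
        s2, r, done = step(s, a)
        Q[s, a] += alpha * (r + gamma * Q[s2].max() * (not done) - Q[s, a])
        s = s2
print("greedy action per state:\n", Q.argmax(axis=1).reshape(4, 4))
```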
447 Preliminary Roadway Alignment Design: A Spatial-Data Optimization Approach
Authors: Y. Abdelrazig, R. Moses
Abstract:
Roadway planning and design is a very complex process involving five key phases before a project is completed: planning, project development, final design, right-of-way, and construction. The planning phase for a new roadway transportation project is critical, as it greatly affects all later phases of the project. A location study is usually performed during the preliminary planning phase of a new roadway project. The objective of the location study is to develop alignment alternatives that are cost-efficient with respect to land acquisition and construction costs. This paper describes a methodology to develop optimal preliminary roadway alignments utilizing spatial data. Four optimization criteria are taken into consideration: roadway length, land cost, land slope, and environmental impacts. The basic concept of the methodology is to convert the proposed project area into a grid, which represents the search space for an optimal alignment. The aforementioned optimization criteria are represented in each of the grid's cells. A spatial-data optimization technique is utilized to find the optimal alignment in the search space based on the four optimization criteria. Two case studies for new roadway projects in Duval County in the State of Florida are presented to illustrate the methodology. The optimized output alignments are compared to the alignments proposed by the Florida Department of Transportation (FDOT). The comparison is based on right-of-way costs for the alignments. For both case studies, the right-of-way costs for the developed optimal alignments were found to be significantly lower than those of the FDOT alignments.
Keywords: Optimization, planning, roadway alignment, FDOT.
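The grid-based search described above can be illustrated by assigning each cell a composite cost and extracting a minimum-cost path across the grid; the cost layers below are synthetic, and the single weighted-shortest-path solve is only a stand-in for the paper's spatial-data optimization technique.

```python
import networkx as nx
import numpy as np

rng = np.random.default_rng(5)
rows, cols = 20, 30
# Synthetic per-cell cost layers for land cost, slope, and environmental impact
land = rng.uniform(1, 5, (rows, cols))
slope = rng.uniform(0, 2, (rows, cols))
environment = rng.choice([0.0, 10.0], (rows, cols), p=[0.9, 0.1])   # protected cells
cell_cost = land + slope + environment

G = nx.grid_2d_graph(rows, cols)
for u, v in G.edges:
    # Edge weight: unit length plus the average cost of entering the two cells
    G[u][v]["weight"] = 1.0 + 0.5 * (cell_cost[u] + cell_cost[v])

alignment = nx.shortest_path(G, source=(0, 0), target=(rows - 1, cols - 1), weight="weight")
print("alignment length (cells):", len(alignment))
```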
446 Scale Time Offset Robust Modulation (STORM) in a Code Division Multiaccess Environment
Authors: David M. Jenkins Jr.
Abstract:
Scale Time Offset Robust Modulation (STORM) [1]-[3] is a high-bandwidth waveform design that adds time-scale to embedded reference modulations using only time-delay [4]. In an environment where each user has a specific delay and scale, identification of the user with the highest signal power and of that user's phase is facilitated by the STORM processor. Both of these parameters are required in an efficient multiuser detection algorithm. In this paper, the STORM modulation approach is evaluated with a direct sequence spread quadrature phase shift keying (DS-QPSK) system. A misconception about STORM time-scale modulation is that a fine temporal resolution is required at the receiver. STORM is applied to a QPSK code division multiaccess (CDMA) system by modifying the spreading codes. Specifically, the in-phase code uses a typical spreading code, and the quadrature code uses a time-delayed and time-scaled version of the in-phase code. Consequently, the same temporal resolution is required in the receiver before and after the application of STORM. In this paper, the bit error performance of STORM in a synchronous CDMA system is evaluated and compared to theory, and the bit error performance of STORM incorporated in a single-user WCDMA downlink is presented to demonstrate the applicability of STORM in a modern communication system.
Keywords: Pseudonoise coded communication, cyclic codes, code division multiaccess.
445 Noise Reduction in Web Data: A Learning Approach Based on Dynamic User Interests
Authors: Julius Onyancha, Valentina Plekhanova
Abstract:
One of the significant issues facing web users is the amount of noise in web data, which hinders the process of finding useful information related to their dynamic interests. Current research works consider noise as any data that do not form part of the main web page and propose noise web data reduction tools that mainly focus on eliminating noise related to the content and layout of web data. This paper argues that not all data that form part of the main web page are of user interest, and not all noise data are actually noise to a given user. Therefore, learning the noise web data allocated to user requests ensures not only a reduction of the noisiness level in a web user profile, but also a decrease in the loss of useful information, and hence improves the quality of the web user profile. A Noise Web Data Learning (NWDL) tool/algorithm capable of learning noise web data in a web user profile is proposed. The proposed work considers the elimination of noise data in relation to dynamic user interest. In order to validate the performance of the proposed work, an experimental design setup is presented. The results obtained are compared with current algorithms applied in the noise web data reduction process. The experimental results show that the proposed work considers the dynamic change of user interest prior to the elimination of noise data. The proposed work contributes towards improving the quality of a web user profile by reducing the amount of useful information eliminated as noise.
Keywords: Web log data, web user profile, user interest, noise web data learning, machine learning.
444 Comparison of Two Maintenance Policies for a Two-Unit Series System Considering General Repair
Authors: Seyedvahid Najafi, Viliam Makis
Abstract:
In recent years, maintenance optimization has attracted special attention due to the growing complexity of industrial systems. Maintenance costs are high for many systems, and preventive maintenance is effective when it increases operational reliability and safety at a reduced cost. The novelty of this research is to consider general repair in the modeling of multi-unit series systems and to solve the maintenance problem for such systems using the semi-Markov decision process (SMDP) framework. We propose an opportunistic maintenance policy for a series system composed of two main units. Unit 1, which is more expensive than unit 2, is subject to condition monitoring, and its deterioration is modeled using a gamma process. The unit 1 hazard rate is estimated by the proportional hazards model (PHM), and two hazard rate control limits are considered as the thresholds of maintenance interventions for unit 1. Maintenance is performed on unit 2 considering an age control limit. The objective is to find the optimal control limits and minimize the long-run expected average cost per unit time. The proposed algorithm is applied to a numerical example to compare the effectiveness of the proposed policy (policy Ⅰ) with policy Ⅱ, which is similar to policy Ⅰ but performs replacement instead of general repair. Results show that policy Ⅰ leads to a lower average cost compared with policy Ⅱ.
Keywords: Condition-based maintenance, proportional hazards model, semi-Markov decision process, two-unit series systems.
443 Multi-Objective Evolutionary Computation Based Feature Selection Applied to Behaviour Assessment of Children
Authors: F. Jiménez, R. Jódar, M. Martín, G. Sánchez, G. Sciavicco
Abstract:
Attribute or feature selection is one of the basic strategies to improve the performance of data classification tasks and, at the same time, to reduce the complexity of classifiers, and it is particularly fundamental when the number of attributes is relatively high. Its application to unsupervised classification is restricted to a limited number of experiments in the literature. Evolutionary computation has already proven itself to be a very effective choice for consistently reducing the number of attributes towards a better classification rate and a simpler semantic interpretation of the inferred classifiers. We present a feature selection wrapper model composed of a multi-objective evolutionary algorithm, the clustering method Expectation-Maximization (EM), and the classifier C4.5, for the unsupervised classification of data extracted from a psychological test named BASC-II (Behavior Assessment System for Children - II ed.), with two objectives: maximizing the likelihood of the clustering model and maximizing the accuracy of the obtained classifier. We present a methodology to integrate feature selection for unsupervised classification, model evaluation, decision making (to choose the most satisfactory model according to an a posteriori process in a multi-objective context), and testing. We compare the performance of the classifiers obtained by the multi-objective evolutionary algorithms ENORA and NSGA-II, and the best solution is then validated by the psychologists who collected the data.
Keywords: Feature selection, multi-objective evolutionary computation, unsupervised classification, behavior assessment system for children.
442 Multi-Objective Optimization of a Solar-Powered Triple-Effect Absorption Chiller for Air-Conditioning Applications
Authors: Ali Shirazi, Robert A. Taylor, Stephen D. White, Graham L. Morrison
Abstract:
In this paper, a detailed simulation model of a solar-powered triple-effect LiBr–H2O absorption chiller is developed to supply both the cooling and heating demand of a large-scale building, aiming to reduce fossil fuel consumption and greenhouse gas emissions in the building sector. TRNSYS 17 is used to simulate the performance of the system over a typical year. A combined energetic-economic-environmental analysis is conducted to determine the system's annual primary energy consumption and total cost, which are considered as two conflicting objectives. A multi-objective optimization of the system is performed using a genetic algorithm to minimize these objectives simultaneously. The optimization results show that the final optimal design of the proposed plant has a solar fraction of 72% and leads to an annual primary energy saving of 0.69 GWh and an annual CO2 emissions reduction of ~166 tonnes, compared to a conventional HVAC system. The economics of this design, however, are not appealing without public funding, which is often the case for many renewable energy systems. The results show that a good funding policy is required in order for these technologies to achieve satisfactory payback periods within the lifetime of the plant.
Keywords: Economic, environmental, multi-objective optimization, solar air-conditioning, triple-effect absorption chiller.
441 Recent Advances in Pulse Width Modulation Techniques and Multilevel Inverters
Authors: Satish Kumar Peddapelli
Abstract:
This paper presents advances in pulse width modulation (PWM) techniques, which refer to a method of carrying information on a train of pulses, with the information encoded in the width of the pulses. Pulse width modulation is used to control the inverter output voltage. This is done by exercising the control within the inverter itself, by adjusting the ON and OFF periods of the inverter. With a fixed DC input voltage, a controllable AC output voltage is obtained. In variable-speed AC motor drives, the AC output voltage from a constant DC voltage is obtained by using an inverter. Recent developments in power electronics and semiconductor technology have led to improvements in power electronic systems. Hence, different circuit configurations, namely multilevel inverters, have become popular and have attracted considerable interest from researchers. A fast space-vector pulse width modulation (SVPWM) method for a five-level inverter is also discussed. In this method, the space vector diagram of the five-level inverter is decomposed into six space vector diagrams of three-level inverters. In turn, each of these six space vector diagrams of a three-level inverter is decomposed into six space vector diagrams of two-level inverters. After decomposition, all the remaining procedures for the three-level SVPWM are carried out as in a conventional two-level inverter. The proposed method reduces the algorithm complexity and the execution time, and it can also be applied to multilevel inverters above five levels. An experimental setup for a three-level diode-clamped inverter is developed using a TMS320LF2407 DSP controller, and the experimental results are analyzed.
Keywords: Five-level inverter, space vector pulse width modulation, diode-clamped inverter.
440 A Spanning Tree for Enhanced Cluster Based Routing in Wireless Sensor Network
Authors: M. Saravanan, M. Madheswaran
Abstract:
A Wireless Sensor Network (WSN) clustering architecture enables features such as network scalability, communication overhead reduction, and fault tolerance. After clustering, aggregated data are transferred to the data sink, reducing unnecessary, redundant data transfer. This reduces node transmissions and so saves energy. It also allows scalability to many nodes, reduces communication overhead, and allows efficient use of WSN resources. Clustering-based routing methods manage network energy consumption efficiently. Building spanning trees for data collection rooted at a sink node is a fundamental data aggregation method in sensor networks. The problem of determining the optimal number of Cluster Heads (CHs) is NP-hard. In this paper, we combine cluster-based routing features for cluster formation and CH selection, and use a Minimum Spanning Tree (MST) for intra-cluster communication. The proposed method is based on optimizing the MST using Simulated Annealing (SA). In this work, normalized values of mobility, delay, and remaining energy are considered for finding the optimal MST. Simulation results demonstrate the effectiveness of the proposed method in improving the packet delivery ratio and reducing the end-to-end delay.
Keywords: Wireless sensor network, clustering, minimum spanning tree, genetic algorithm, low energy adaptive clustering hierarchy, simulated annealing.
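A small networkx sketch of the intra-cluster MST construction described above; the node positions, energy values, and the way the normalized criteria are mixed into an edge weight are illustrative assumptions, and the simulated annealing refinement of the paper is omitted.

```python
import networkx as nx
import numpy as np

rng = np.random.default_rng(6)
n = 10
positions = rng.uniform(0, 100, (n, 2))        # sensor node coordinates within a cluster
energy = rng.uniform(0.2, 1.0, n)              # normalized remaining energy per node

# Complete intra-cluster graph; edge weight mixes distance (a delay proxy)
# with the inverse of the endpoints' remaining energy.
G = nx.complete_graph(n)
for u, v in G.edges:
    dist = np.linalg.norm(positions[u] - positions[v])
    G[u][v]["weight"] = dist / 100.0 + 0.25 * (1.0 / energy[u] + 1.0 / energy[v])

mst = nx.minimum_spanning_tree(G, weight="weight")
print("MST edges used for intra-cluster communication:", sorted(mst.edges))
```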
439 Turbo-Coded Mobile Terrestrial Communication Systems in Urban and Suburban Areas for Wireless Multimedia Applications
Authors: F. Mehran
Abstract:
With the rapid popularization of internet services, it is apparent that next-generation terrestrial communication systems must be capable of supporting various applications such as voice, video, and data. This paper presents the performance evaluation of turbo-coded mobile terrestrial communication systems, which are capable of providing high-quality services for delay-sensitive (voice or video) and delay-tolerant (text transmission) multimedia applications in urban and suburban areas. Different types of multimedia information require different service qualities, which are generally expressed in terms of a maximum acceptable bit-error-rate (BER) and a maximum tolerable latency. The breakthrough discovery of turbo codes allows us to significantly reduce the probability of bit errors with feasible latency. In a turbo-coded system, a trade-off between latency and BER results from the choice of convolutional component codes, interleaver type and size, decoding algorithm, and the number of decoding iterations. This trade-off can be exploited for multimedia applications by using optimal and suboptimal combinations of performance parameters to achieve different service qualities. The results therefore suggest an adaptive framework for turbo-coded wireless multimedia communications, which incorporates a set of performance parameters that achieve an appropriate set of service qualities, depending on the application's requirements.
Keywords: Mobile communications, Turbo codes, wireless multimedia communication systems.