Search results for: gravity gradient inversion algorithm
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3812

152 Research on the Optimization of the Facility Layout of Efficient Cafeterias for Troops

Authors: Qing Zhang, Jiachen Nie, Yujia Wen, Guanyuan Kou, Peng Yu, Kun Xia, Qin Yang, Li Ding

Abstract:

Background: A facility layout problem (FLP) is an NP-complete (non-deterministic polynomial) problem for which it is hard to obtain an exact optimal solution. FLP has been widely studied in various limited spaces and workflows; for example, troop cafeterias with many types of equipment suffer from chaotic processes during dining. Objective: This article aims to optimize the layout of a troops’ cafeteria and to improve the overall efficiency of the dining process. Methods: First, the original cafeteria layout design scheme was analyzed from an ergonomic perspective and two new design schemes were generated. Next, three facility layout models were designed, and simulation was applied to compare the total dining time and troop density across the schemes. Last, an experiment on the dining process with video observation and analysis verified the simulation results. Results: In simulation, the dining time under the second new layout is shortened by 2.25% and 1.89% (p<0.0001, p=0.0001) compared with the other two layouts, while troop-flow density and interference are both greatly reduced in the two new layouts. In the experiment, process completion time and the number of interferences were also reduced, which verified the corresponding simulation results. Conclusion: The two new layout schemes are verified to be optimal by a series of simulations and space experiments. In future research, similar approaches could be applied while also taking layout-design algorithm calculations into consideration.

Keywords: Troops’ cafeteria, layout optimization, dining efficiency, AnyLogic simulation, field experiment

151 Breast Cancer Survivability Prediction via Classifier Ensemble

Authors: Mohamed Al-Badrashiny, Abdelghani Bellaachia

Abstract:

This paper presents a classifier ensemble approach for predicting the survivability of breast cancer patients using the latest database version of the Surveillance, Epidemiology, and End Results (SEER) Program of the National Cancer Institute. The system consists of two main components: a feature selection component and a classifier ensemble component. The feature selection component divides the features in the SEER database into four groups and then searches for the most important features among the four groups, i.e., those that maximize the weighted average F-score of a given classification algorithm. The ensemble component uses three different classifiers, each of which models a different set of features from SEER selected through the feature selection module. On top of them, another classifier gives the final decision based on the output decisions and confidence scores of the underlying classifiers. Different classification algorithms have been examined; the best setup found uses the decision tree, Bayesian network, and Naïve Bayes algorithms for the underlying classifiers and Naïve Bayes for the classifier ensemble step. The system outperforms all published systems to date when evaluated against the exact same SEER data (period of 1973-2002), giving an 87.39% weighted average F-score compared to 85.82% and 81.34% for the other published systems. When the data size is increased to cover the whole database (period of 1973-2014), the overall weighted average F-score rises to 92.4% on a held-out unseen test set.
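
As a rough illustration of the stacking idea described above, the following Python sketch trains three base learners and a Naïve Bayes meta-classifier with scikit-learn. It uses synthetic data in place of SEER, all base learners see the same features rather than separate feature groups, and BernoulliNB stands in for the paper's Bayesian network, so this is only a minimal sketch of the ensemble structure, not the authors' system.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB, BernoulliNB
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the SEER survivability data.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

# Base learners: the paper uses a decision tree, a Bayesian network and Naive Bayes,
# each on its own SEER feature group; here all see the same synthetic features and
# BernoulliNB stands in for the Bayesian network.
base = [
    ("dt", DecisionTreeClassifier(max_depth=8, random_state=0)),
    ("bn", BernoulliNB()),
    ("nb", GaussianNB()),
]

# Meta-classifier (ensemble step) is Naive Bayes, as in the best reported setup.
ensemble = StackingClassifier(estimators=base, final_estimator=GaussianNB(), cv=5)
score = cross_val_score(ensemble, X, y, scoring="f1_weighted", cv=5).mean()
print(f"weighted F1 on synthetic data ~ {score:.3f}")
```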

Keywords: Classifier ensemble, breast cancer survivability, data mining, SEER.

150 Pipelined Control-Path Effects on Area and Performance of a Wormhole-Switched Network-on-Chip

Authors: Faizal A. Samman, Thomas Hollstein, Manfred Glesner

Abstract:

This paper presents the design trade-offs and performance impact of the number of pipeline stages in the control-path signals of a wormhole-switched network-on-chip (NoC). The control path is pipelined with either one or two cycles, and consists of the routing request paths for output selection and the arbitration paths for input selection. Data communication between on-chip routers is implemented synchronously and, for quality of service, inter-router data transport is governed by link-level congestion control to avoid loss of data due to overflow. The trade-off between the area (logic cell area) and the performance (bandwidth gain) of the two proposed NoC router microarchitectures is presented in this paper. The performance evaluation uses traffic scenarios with different numbers of workloads on a 2D mesh NoC topology with a static routing algorithm. Using a 130-nm CMOS standard-cell technology, our NoC routers can be clocked at 1 GHz, resulting in a high-speed network link and a router bandwidth capacity of about 320 Gbit/s. Based on our experiments, the number of control-path pipeline stages has a more significant impact on NoC performance than on the logic area of the NoC router.

Keywords: Network-on-Chip, Synchronous Parallel Pipeline, Router Architecture, Wormhole Switching

149 Multi-Level Air Quality Classification in China Using Information Gain and Support Vector Machine

Authors: Bingchun Liu, Pei-Chann Chang, Natasha Huang, Dun Li

Abstract:

Machine Learning and Data Mining are two important tools for extracting useful information and knowledge from large datasets. In machine learning, classification is a widely used technique to predict qualitative variables and is generally preferred over regression from an operational point of view. Due to the enormous increase in air pollution in various countries, especially China, air quality classification has become one of the most important topics in air quality research and modelling. This study introduces a hybrid classification model based on information theory and the Support Vector Machine (SVM) using the air quality data of four cities in China, namely Beijing, Guangzhou, Shanghai and Tianjin, from January 1, 2014 to April 30, 2016. China's Ministry of Environmental Protection classifies daily air quality into six levels, namely Serious Pollution, Severe Pollution, Moderate Pollution, Light Pollution, Good and Excellent, based on the respective Air Quality Index (AQI) values. Using information theory, information gain (IG) is calculated and feature selection is performed for both categorical and continuous numeric features. The SVM algorithm is then applied to the selected features with cross-validation. The final evaluation reveals that the IG-SVM hybrid model performs better than the standalone SVM, Artificial Neural Network (ANN) and K-Nearest Neighbours (KNN) models in terms of both accuracy and complexity.
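
A minimal sketch of the information-gain-plus-SVM pipeline can be put together with scikit-learn, where mutual information plays the role of information gain and synthetic data replaces the Chinese AQI records; the feature count and SVM settings below are arbitrary assumptions, not the paper's configuration.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

# Synthetic 6-class stand-in for the six air-quality levels.
X, y = make_classification(n_samples=1500, n_features=30, n_informative=8,
                           n_classes=6, n_clusters_per_class=1, random_state=1)

model = make_pipeline(
    StandardScaler(),
    SelectKBest(mutual_info_classif, k=10),   # information-gain-style feature ranking
    SVC(kernel="rbf", C=10, gamma="scale"),   # SVM on the selected features
)
print(cross_val_score(model, X, y, cv=5).mean())   # cross-validated accuracy
```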

Keywords: Machine learning, air quality classification, air quality index, information gain, support vector machine, cross-validation.

148 A Novel VLSI Architecture for Image Compression Model Using Low Power Discrete Cosine Transform

Authors: Vijaya Prakash.A.M, K.S.Gurumurthy

Abstract:

In image processing, image compression can improve the performance of digital systems by reducing the cost and time of image storage and transmission without a significant reduction in image quality. This paper describes a low-complexity Discrete Cosine Transform (DCT) hardware architecture for image compression [6]. In this DCT architecture, common computations are identified and shared to remove redundant computations in the DCT matrix operation. Vector processing is used to implement the DCT, and the resulting reduction in the computational complexity of the 2D DCT reduces power consumption. The 2D DCT is performed on an 8x8 matrix using two 1-dimensional DCT blocks and a transposition memory [7]. The inverse discrete cosine transform (IDCT) is performed to recover the image matrix and reconstruct the original image. The proposed image compression algorithm is first implemented in MATLAB, and the VLSI design of the architecture is implemented using Verilog HDL. The proposed hardware architecture for DCT-based image compression was synthesized using RTL Compiler and mapped to 180 nm standard cells. Simulation is done using ModelSim, and the simulation results from MATLAB and Verilog HDL are compared. Detailed power and area analysis was done using RTL Compiler from Cadence. The power consumption of the DCT core is reduced to 1.027 mW with minimum area [1].
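
The row-column decomposition mentioned above (two 1-D DCT stages plus a transposition memory) can be sketched in a few lines of Python with SciPy; this is only a floating-point reference model of the separable 2D DCT/IDCT, not the fixed-point Verilog architecture.

```python
import numpy as np
from scipy.fft import dct, idct

def dct2_rowcol(block):
    # Apply the 1-D DCT along each column, then along each row: the separable
    # equivalent of the two 1-D DCT units plus transposition memory in hardware.
    return dct(dct(block, norm="ortho", axis=0), norm="ortho", axis=1)

def idct2_rowcol(coeffs):
    # Inverse transform, again as two 1-D passes.
    return idct(idct(coeffs, norm="ortho", axis=0), norm="ortho", axis=1)

block = np.random.default_rng(0).integers(0, 256, (8, 8)).astype(float)
coeffs = dct2_rowcol(block)
# Lossless round trip before any quantization step.
assert np.allclose(idct2_rowcol(coeffs), block)
print(coeffs[0, 0])   # DC coefficient of the 8x8 block
```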

Keywords: Discrete Cosine Transform (DCT), Inverse Discrete Cosine Transform (IDCT), Joint Photographic Expert Group (JPEG), Low Power Design, Very Large Scale Integration (VLSI).

147 Comparative Dynamic Performance of Load Frequency Control of Nonlinear Interconnected Hydro-Thermal System Using Intelligent Techniques

Authors: Banaja Mohanty, Prakash Kumar Hota

Abstract:

This paper presents a dynamic performance evaluation of load frequency control (LFC) with different intelligent techniques. All non-linearities and physical constraints, such as governor dead band (GDB), generation rate constraint (GRC) and boiler dynamics, have been considered in the simulation studies. The conventional integral time absolute error has been considered as the objective function. The design problem is formulated as an optimisation problem, and particle swarm optimisation (PSO), the bacterial foraging optimisation algorithm (BFOA) and differential evolution (DE) are employed to search for optimal controller parameters. The superiority of the proposed approach has been shown by comparing the results with published fuzzy logic control (FLC) for the same interconnected power system. The comparison is done using various performance measures like overshoot, undershoot, settling time and standard error criteria of the frequency and tie-line power deviation following a step load perturbation (SLP). It is noticed that the dynamic performance of the proposed controller is better than that of FLC. Further, a robustness analysis is carried out by varying the time constants of the speed governor, turbine and tie-line power in the range of +40% to -40% to demonstrate the robustness of the proposed DE-optimized PID controller.
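
A minimal sketch of DE-based PID tuning against an ITAE objective is shown below using SciPy's differential_evolution; the first-order toy plant, gain bounds and simulation settings are illustrative assumptions and bear no relation to the hydro-thermal LFC model of the paper.

```python
import numpy as np
from scipy.optimize import differential_evolution

dt, T = 0.01, 10.0
t = np.arange(0.0, T, dt)

def itae(gains):
    """Integral of time-weighted absolute error for a PID loop around a toy
    first-order plant (tau = 1 s), driven by a unit step reference."""
    kp, ki, kd = gains
    y = integ = prev_err = 0.0
    cost = 0.0
    for tk in t:
        err = 1.0 - y
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integ + kd * deriv      # PID control law
        y += dt * (-y + u)                          # explicit Euler plant update
        prev_err = err
        cost += tk * abs(err) * dt                  # ITAE accumulation
    return cost

# DE searches the (Kp, Ki, Kd) space; bounds are arbitrary illustrative ranges.
result = differential_evolution(itae, bounds=[(0, 10), (0, 10), (0, 1)],
                                seed=0, maxiter=50)
print(result.x, result.fun)
```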

Keywords: Automatic generation control, governor dead band, generation rate constraint, differential evolution.

146 Earth Station Neural Network Control Methodology and Simulation

Authors: Hanaa T. El-Madany, Faten H. Fahmy, Ninet M. A. El-Rahman, Hassen T. Dorrah

Abstract:

Renewable energy resources are inexhaustible and clean compared with conventional resources. They are also used to supply regions with no grid, no telephone lines, and often difficult access by common transport. Satellite earth stations located in remote areas are among the most important applications of renewable energy. Neural control is a branch of the general field of intelligent control, which is based on the concept of artificial intelligence. This paper presents the mathematical modeling of a satellite earth station power system, which is required for simulating the system. Aswan is selected as the site under consideration because it is a region rich in solar energy. The complete power system is simulated using MATLAB–SIMULINK. An artificial neural network (ANN) based model has been developed for the optimum operation of the earth station power system. The ANN is trained using backpropagation with the Levenberg–Marquardt algorithm. The best validation performance is obtained for the minimum mean square error. The regression between the network output and the corresponding target is equal to 96%, which indicates high accuracy. The neural network controller (NNC) architecture gives satisfactory results with a small number of neurons, and hence requires less memory and time for implementation. The results indicate that the proposed control unit using an ANN can be successfully used for controlling the satellite earth station power system.
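
The surrogate-model idea (train an ANN on simulated operating data, then use it for prediction) can be sketched as follows; note that the paper trains with Levenberg–Marquardt backpropagation in MATLAB, whereas this scikit-learn sketch uses the LBFGS solver, and the inputs, target function and network size are hypothetical.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
# Hypothetical inputs: solar irradiance, temperature, battery state of charge, load power.
X = rng.uniform(size=(2000, 4))
# Made-up control target standing in for the simulated plant output.
y = 0.6 * X[:, 0] - 0.2 * X[:, 1] + 0.3 * X[:, 2] * X[:, 3]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
# scikit-learn has no Levenberg-Marquardt solver; LBFGS is used here instead.
net = MLPRegressor(hidden_layer_sizes=(10,), solver="lbfgs",
                   max_iter=2000, random_state=0)
net.fit(X_tr, y_tr)
print(r2_score(y_te, net.predict(X_te)))   # regression between output and target
```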

Keywords: Satellite, neural network, MATLAB, power system.

145 A Control Model for Improving Safety and Efficiency of Navigation System Based on Reinforcement Learning

Authors: Almutasim Billa A. Alanazi, Hal S. Tharp

Abstract:

Artificial Intelligence (AI), specifically Reinforcement Learning (RL), has proven helpful in many control and path-planning technologies, such as navigation systems, by enhancing their performance. RL learns from experience by interacting with the environment to determine the optimal policy, which takes the best action in a particular state while accounting for long-term rewards. Most navigation systems focus primarily on "arriving faster," overlooking safety and efficiency while estimating the optimum path, even though safety and efficiency are essential factors when planning a long-distance journey. This paper presents an RL control model that proposes a control mechanism for improving navigation systems. The model could also be applied to other control and path-planning applications because it is adjustable and can accept different properties and parameters; here, the navigation system is taken as a case and evaluation study. The model uses a Q-learning algorithm for training and updating the policy, which allows the agent to assess the quality of an action taken in the environment so as to maximize rewards. The model updates rewards regularly based on safety and efficiency assessments, allowing the policy to consider the desired safety and efficiency benefits when making decisions, which improves the quality of path-planning decisions compared to conventional RL approaches.
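
The Q-learning update with a safety/efficiency-weighted reward, as described above, can be sketched in a few lines of Python; the toy transition table, travel times, risk scores and weighting below are hypothetical and stand in for the paper's navigation environment.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 6, 3
# Toy road network: deterministic next state plus per-edge travel time and risk.
P = rng.integers(0, n_states, size=(n_states, n_actions))
travel_time = rng.uniform(1, 5, size=(n_states, n_actions))
risk = rng.uniform(0, 1, size=(n_states, n_actions))

Q = np.zeros((n_states, n_actions))
alpha, gamma, eps, w_safety = 0.1, 0.9, 0.2, 2.0

for episode in range(2000):
    s = int(rng.integers(n_states))
    for _ in range(20):
        # Epsilon-greedy action selection.
        a = int(rng.integers(n_actions)) if rng.random() < eps else int(Q[s].argmax())
        s_next = int(P[s, a])
        # Reward balances speed against safety, mirroring the idea of folding
        # safety/efficiency assessments into the reward signal.
        r = -travel_time[s, a] - w_safety * risk[s, a]
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

print(Q.argmax(axis=1))   # greedy action per state under the learned policy
```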

Keywords: Artificial intelligence, control system, navigation systems, reinforcement learning.

144 Visualization and Indexing of Spectral Databases

Authors: Tibor Kulcsar, Gabor Sarossy, Gabor Bereznai, Robert Auer, Janos Abonyi

Abstract:

On-line (near infrared) spectroscopy is widely used to support the operation of complex process systems. Information extracted from a spectral database can be used to estimate unmeasured product properties and monitor the operation of the process. These techniques are based on looking for similar spectra using nearest-neighbor algorithms and distance-based search methods. Searching for nearest neighbors in the spectral space is an NP-hard problem; the computational complexity increases with the number of points in the discrete spectrum and the number of samples in the database. To reduce the calculation time, some kind of indexing can be used. The main idea presented in this paper is to combine indexing and visualization techniques to reduce the computational requirement of estimation algorithms by providing a two-dimensional indexing that can also be used to visualize the structure of the spectral database. This 2D visualization of the spectral database not only supports the application of distance- and similarity-based techniques but also enables advanced clustering and prediction algorithms based on the Delaunay tessellation of the mapped spectral space. This means the prediction does not have to operate in the high-dimensional space but can be based on the mapped space as well. The results illustrate that the proposed method is able to segment (cluster) spectral databases and detect outliers that are not suitable for instance-based learning algorithms.
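
A minimal sketch of the two-dimensional mapping plus Delaunay-based indexing is given below; plain PCA is used here as a stand-in for the paper's mapping, and the random spectra are synthetic.

```python
import numpy as np
from sklearn.decomposition import PCA
from scipy.spatial import Delaunay

rng = np.random.default_rng(0)
spectra = rng.normal(size=(500, 200))      # 500 spectra, 200 wavelength channels

# Map the high-dimensional spectra to 2-D and index the mapped points with a
# Delaunay tessellation, so lookups only touch a local neighbourhood.
xy = PCA(n_components=2).fit_transform(spectra)
tri = Delaunay(xy)

query = xy[42] + 0.01                      # a query point near a known spectrum
simplex = int(tri.find_simplex(query))
if simplex >= 0:                           # -1 would mean the query is outside the hull
    print(tri.simplices[simplex])          # vertices of the enclosing triangle = candidates
```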

Keywords: Indexing high-dimensional databases, dimensionality reduction, clustering, similarity, k-NN algorithm.

143 Scale Time Offset Robust Modulation (STORM) in a Code Division Multiaccess Environment

Authors: David M. Jenkins Jr.

Abstract:

Scale Time Offset Robust Modulation (STORM) [1]–[3] is a high-bandwidth waveform design that adds time-scale to embedded reference modulations using only time-delay [4]. In an environment where each user has a specific delay and scale, identification of the user with the highest signal power and that user's phase is facilitated by the STORM processor. Both of these parameters are required in an efficient multiuser detection algorithm. In this paper, the STORM modulation approach is evaluated with a direct sequence spread quadrature phase shift keying (DS-QPSK) system. A misconception of the STORM time-scale modulation is that a fine temporal resolution is required at the receiver. STORM is applied to a QPSK code division multiaccess (CDMA) system by modifying the spreading codes. Specifically, the in-phase code uses a typical spreading code, and the quadrature code uses a time-delayed and time-scaled version of the in-phase code. Consequently, the same temporal resolution is required in the receiver before and after the application of STORM. In this paper, the bit error performance of STORM in a synchronous CDMA system is evaluated and compared to theory, and the bit error performance of STORM incorporated in a single-user WCDMA downlink is presented to demonstrate the applicability of STORM in a modern communication system.

Keywords: Pseudonoise coded communication, Cyclic codes, Code division multiaccess

142 Noise Reduction in Web Data: A Learning Approach Based on Dynamic User Interests

Authors: Julius Onyancha, Valentina Plekhanova

Abstract:

One of the significant issues facing web users is the amount of noise in web data, which hinders the process of finding useful information in relation to their dynamic interests. Current research considers noise to be any data that does not form part of the main web page and proposes noise web data reduction tools which mainly focus on eliminating noise in relation to the content and layout of web data. This paper argues that not all data that forms part of the main web page is of interest to a user, and not all noise data is actually noise to a given user. Therefore, learning the noise web data associated with user requests not only reduces the noise level in a web user profile but also decreases the loss of useful information, and hence improves the quality of the profile. A Noise Web Data Learning (NWDL) tool/algorithm capable of learning noise web data in a web user profile is proposed. The proposed work considers the elimination of noise data in relation to dynamic user interests. In order to validate the performance of the proposed work, an experimental design setup is presented, and the results obtained are compared with current algorithms applied in the noise web data reduction process. The experimental results show that the proposed work considers the dynamic change of user interest prior to the elimination of noise data. The proposed work contributes towards improving the quality of a web user profile by reducing the amount of useful information eliminated as noise.

Keywords: Web log data, web user profile, user interest, noise web data learning, machine learning.

141 Comparison of Two Maintenance Policies for a Two-Unit Series System Considering General Repair

Authors: Seyedvahid Najafi, Viliam Makis

Abstract:

In recent years, maintenance optimization has attracted special attention due to the growing complexity of industrial systems. Maintenance costs are high for many systems, and preventive maintenance is effective when it increases the reliability and safety of operations at a reduced cost. The novelty of this research is to consider general repair in the modeling of multi-unit series systems and to solve the maintenance problem for such systems using the semi-Markov decision process (SMDP) framework. We propose an opportunistic maintenance policy for a series system composed of two main units. Unit 1, which is more expensive than unit 2, is subjected to condition monitoring, and its deterioration is modeled using a gamma process. The hazard rate of unit 1 is estimated by the proportional hazards model (PHM), and two hazard rate control limits are considered as the thresholds of maintenance interventions for unit 1. Maintenance is performed on unit 2 considering an age control limit. The objective is to find the optimal control limits and minimize the long-run expected average cost per unit time. The proposed algorithm is applied to a numerical example to compare the effectiveness of the proposed policy (policy Ⅰ) with policy Ⅱ, which is similar to policy Ⅰ but performs replacement instead of general repair. Results show that policy Ⅰ leads to a lower average cost compared with policy Ⅱ.

Keywords: Condition-based maintenance, proportional hazards model, semi-Markov decision process, two-unit series systems.

140 Multi-Objective Evolutionary Computation Based Feature Selection Applied to Behaviour Assessment of Children

Authors: F. Jiménez, R. Jódar, M. Martín, G. Sánchez, G. Sciavicco

Abstract:

Attribute or feature selection is one of the basic strategies to improve the performance of data classification tasks and, at the same time, to reduce the complexity of classifiers, and it is particularly fundamental when the number of attributes is relatively high. Its application to unsupervised classification is restricted to a limited number of experiments in the literature. Evolutionary computation has already proven itself to be a very effective choice to consistently reduce the number of attributes towards a better classification rate and a simpler semantic interpretation of the inferred classifiers. We present a feature selection wrapper model composed of a multi-objective evolutionary algorithm, the clustering method Expectation-Maximization (EM), and the classifier C4.5, for the unsupervised classification of data extracted from a psychological test named BASC-II (Behavior Assessment System for Children - II ed.), with two objectives: maximizing the likelihood of the clustering model and maximizing the accuracy of the obtained classifier. We present a methodology to integrate feature selection for unsupervised classification, model evaluation, decision making (to choose the most satisfactory model according to an a posteriori process in a multi-objective context), and testing. We compare the performance of the classifiers obtained by the multi-objective evolutionary algorithms ENORA and NSGA-II, and the best solution is then validated by the psychologists who collected the data.

Keywords: Feature selection, multi-objective evolutionary computation, unsupervised classification, behavior assessment system for children.

139 Multi-Objective Optimization of a Solar-Powered Triple-Effect Absorption Chiller for Air-Conditioning Applications

Authors: Ali Shirazi, Robert A. Taylor, Stephen D. White, Graham L. Morrison

Abstract:

In this paper, a detailed simulation model of a solar-powered triple-effect LiBr–H2O absorption chiller is developed to supply both the cooling and heating demand of a large-scale building, aiming to reduce fossil fuel consumption and greenhouse gas emissions in the building sector. TRNSYS 17 is used to simulate the performance of the system over a typical year. A combined energetic-economic-environmental analysis is conducted to determine the system's annual primary energy consumption and total cost, which are considered as two conflicting objectives. A multi-objective optimization of the system is performed using a genetic algorithm to minimize these objectives simultaneously. The optimization results show that the final optimal design of the proposed plant has a solar fraction of 72% and leads to an annual primary energy saving of 0.69 GWh and an annual CO2 emissions reduction of ~166 tonnes, compared to a conventional HVAC system. The economics of this design, however, are not appealing without public funding, which is often the case for many renewable energy systems. The results show that a good funding policy is required in order for these technologies to achieve satisfactory payback periods within the lifetime of the plant.

Keywords: Economic, environmental, multi-objective optimization, solar air-conditioning, triple-effect absorption chiller.

138 Recent Advances in Pulse Width Modulation Techniques and Multilevel Inverters

Authors: Satish Kumar Peddapelli

Abstract:

This paper presents advances in pulse width modulation (PWM) techniques, which refer to methods of carrying information on a train of pulses, with the information encoded in the width of the pulses. Pulse width modulation is used to control the inverter output voltage. This is done by exercising the control within the inverter itself by adjusting its ON and OFF periods. By fixing the DC input voltage, an AC output voltage is obtained; in variable-speed AC motors, the AC output voltage is derived from a constant DC voltage by using an inverter. Recent developments in power electronics and semiconductor technology have led to improvements in power electronic systems. Hence, circuit configurations known as multilevel inverters have become popular and have attracted considerable interest from researchers. A fast space-vector pulse width modulation (SVPWM) method for a five-level inverter is also discussed. In this method, the space vector diagram of the five-level inverter is decomposed into six space vector diagrams of three-level inverters, and each of these six diagrams is in turn decomposed into six space vector diagrams of two-level inverters. After decomposition, all the remaining procedures for the three-level SVPWM are carried out as in a conventional two-level inverter. The proposed method reduces the algorithm complexity and the execution time, and it can also be applied to multilevel inverters above five levels. The experimental setup for a three-level diode-clamped inverter is developed using a TMS320LF2407 DSP controller, and the experimental results are analyzed.

Keywords: Five-level inverter, Space vector pulse width modulation, Diode-clamped inverter.

137 A Spanning Tree for Enhanced Cluster Based Routing in Wireless Sensor Network

Authors: M. Saravanan, M. Madheswaran

Abstract:

Wireless Sensor Network (WSN) clustering architectures enable features like network scalability, communication overhead reduction, and fault tolerance. After clustering, aggregated data is transferred to the data sink, reducing unnecessary, redundant data transfer. This reduces the number of transmitting nodes and so saves energy; it also allows scalability to many nodes, reduces communication overhead, and enables efficient use of WSN resources. Clustering-based routing methods manage network energy consumption efficiently. Building spanning trees for data collection rooted at a sink node is a fundamental data aggregation method in sensor networks. Determining the optimal number of Cluster Heads (CHs) is an NP-hard problem. In this paper, we combine cluster-based routing features for cluster formation and CH selection and use a Minimum Spanning Tree (MST) for intra-cluster communication. The proposed method is based on optimizing the MST using Simulated Annealing (SA). In this work, normalized values of mobility, delay, and remaining energy are considered for finding the optimal MST. Simulation results demonstrate the effectiveness of the proposed method in improving the packet delivery ratio and reducing the end-to-end delay.
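
The combined-metric MST for intra-cluster routing can be sketched with SciPy as follows; the per-link metrics and weighting are made up, and the simulated-annealing refinement used in the paper is omitted.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

rng = np.random.default_rng(0)
n = 10                                        # sensor nodes in one cluster
# Hypothetical per-link metrics, each already normalised to [0, 1].
mobility = rng.random((n, n))
delay = rng.random((n, n))
energy = rng.random((n, n))                   # remaining energy at the receiving node

# Combined edge cost over normalised mobility, delay and remaining energy
# (the weights here are illustrative, not the paper's values).
cost = 0.4 * mobility + 0.4 * delay + 0.2 * (1.0 - energy)
cost = np.triu(cost, 1)                       # undirected graph: keep upper triangle only

mst = minimum_spanning_tree(cost)             # intra-cluster routing tree
edges = np.transpose(mst.nonzero())           # (parent, child) pairs of the tree
print(edges, mst.sum())
```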

Keywords: Wireless sensor network, clustering, minimum spanning tree, genetic algorithm, low energy adaptive clustering hierarchy, simulated annealing.

136 Turbo-Coded Mobile Terrestrial Communication Systems in Urban and Suburban Areas for Wireless Multimedia Applications

Authors: F. Mehran

Abstract:

With the rapid popularization of internet services, it is apparent that next-generation terrestrial communication systems must be capable of supporting various applications like voice, video, and data. This paper presents the performance evaluation of turbo-coded mobile terrestrial communication systems, which are capable of providing high quality services for delay-sensitive (voice or video) and delay-tolerant (text transmission) multimedia applications in urban and suburban areas. Different types of multimedia information require different service qualities, which are generally expressed in terms of a maximum acceptable bit-error-rate (BER) and a maximum tolerable latency. The breakthrough discovery of turbo codes allows us to significantly reduce the probability of bit errors with feasible latency. In a turbo-coded system, a trade-off between latency and BER results from the choice of convolutional component codes, interleaver type and size, decoding algorithm, and the number of decoding iterations. This trade-off can be exploited for multimedia applications by using optimal and suboptimal combinations of performance parameters to achieve different service qualities. The results therefore suggest an adaptive framework for turbo-coded wireless multimedia communications which incorporates a set of performance parameters that achieve an appropriate set of service qualities, depending on the application's requirements.

Keywords: Mobile communications, Turbo codes, wireless multimedia communication systems.

135 Enhanced GA-Fuzzy OPF under both Normal and Contingent Operation States

Authors: Ashish Saini, A.K. Saxena

Abstract:

Genetic algorithm (GA) based solution techniques are well suited to optimization because of their ability to perform simultaneous multidimensional search. Many GA variants have been tried in the past to solve optimal power flow (OPF), one of the nonlinear problems of electric power systems. Issues such as convergence speed, the accuracy of the optimal solution obtained after a number of generations, and the handling of system constraints in OPF are subjects of discussion. The results obtained for GA-Fuzzy OPF on various power systems have shown faster convergence and lower generation costs compared to other approaches. This paper presents an enhanced GA-Fuzzy OPF (EGA-OPF) using penalty factors to handle line flow constraints and load bus voltage limits for both the normal network and a contingency case with congestion. In addition to a crossover and mutation rate adaptation scheme that adapts crossover and mutation probabilities for each generation based on the fitness values of previous generations, a block swap operator is also incorporated in the proposed EGA-OPF. The line flow limits and load bus voltage magnitude limits are handled by incorporating line overflow and load voltage penalty factors, respectively, in each chromosome's fitness function. The effects of different penalty factor settings are also analyzed under the contingent state.
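
The penalty-factor handling of line flow and load-bus voltage limits can be illustrated with a small Python sketch of a penalised fitness (cost) function; the limits, penalty factors and fuel-cost curves below are made-up numbers, not the paper's test-system data.

```python
import numpy as np

# Illustrative limits and penalty factors for a toy 3-generator system.
line_flow_limit = np.array([1.0, 0.8, 0.9])          # p.u.
v_min, v_max = 0.95, 1.05
k_flow, k_volt = 1e3, 1e3                             # line-overflow / load-voltage penalty factors

def penalised_cost(pg, line_flow, bus_voltage):
    """Generation cost plus penalty terms for line overflow and load-bus voltage
    violations, in the spirit of the chromosome fitness used by the EGA-OPF."""
    cost = np.sum(100 * pg + 50 * pg ** 2)            # toy quadratic fuel-cost curves
    flow_viol = np.maximum(np.abs(line_flow) - line_flow_limit, 0.0)
    volt_viol = np.maximum(bus_voltage - v_max, 0.0) + np.maximum(v_min - bus_voltage, 0.0)
    return cost + k_flow * np.sum(flow_viol ** 2) + k_volt * np.sum(volt_viol ** 2)

# One candidate dispatch (chromosome) evaluated with made-up power-flow results.
print(penalised_cost(pg=np.array([0.6, 0.3, 0.2]),
                     line_flow=np.array([0.95, 0.85, 0.4]),
                     bus_voltage=np.array([1.00, 0.97, 0.93])))
```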

Keywords: Contingent operation state, Fuzzy rule base, Genetic Algorithms, Optimal Power Flow.

134 Optimization of Three-dimensional Electrical Performance in a Solid Oxide Fuel Cell Stack by a Neural Network

Authors: Shih-Bin Wang, Ping Yuan, Syu-Fang Liu, Ming-Jun Kuo

Abstract:

By applying an improved back-propagation neural network (BPNN), a model of the current densities for a solid oxide fuel cell (SOFC) stack with 10 layers is established in this study. To build the learning data for the BPNN, a Taguchi orthogonal array is applied to arrange the operating-parameter conditions, in which a total of 7 factors act as the inputs of the BPNN. The average current densities obtained by a numerical method act as the outputs of the BPNN. Compared with the direct solution, the learning errors for all learning data are smaller than 0.117%, and the prediction errors for 27 forecasting cases are less than 0.231%. The results show that the presented model effectively builds a mathematical algorithm to predict the performance of a SOFC stack in real time. The calculating algorithms are also applied to the optimization of the average current density of a SOFC stack, and the operating performance window of the stack is found to be between 41137.11 and 53907.89. Furthermore, an inverse predicting model of the operating parameters of a SOFC stack is developed here using the calculating algorithms of the improved BPNN, which is shown to effectively predict the operating parameters needed to achieve a desired performance output of a SOFC stack.

Keywords: a SOFC stack, BPNN, inverse predicting model of operating parameters, optimization of the average current density

133 Dynamic Fault Diagnosis for Semi-Batch Reactor under Closed-Loop Control via Independent Radial Basis Function Neural Network

Authors: Abdelkarim M. Ertiame, D. W. Yu, D. L. Yu, J. B. Gomm

Abstract:

In this paper, a robust fault detection and isolation (FDI) scheme is developed to monitor a multivariable nonlinear chemical process, the Chylla-Haase polymerization reactor, when it is under cascade PI control. The scheme employs a radial basis function neural network (RBFNN) in an independent mode to model the process dynamics, using the weighted sum-squared prediction error as the residual. The Recursive Orthogonal Least Squares (ROLS) algorithm is employed to train the model, to overcome the training difficulty of the independent mode of the network. Another RBFNN is then used as a fault classifier to isolate faults from the different features involved in the residual vector. Several actuator and sensor faults are simulated in a nonlinear simulation of the reactor in Simulink, and the scheme is used to detect and isolate the faults on-line. The simulation results show the effectiveness and robustness of the proposed method, even when the process is subjected to disturbances and uncertainties, including significant changes in the monomer feed rate, fouling factor, impurity factor, ambient temperature, and measurement noise.

Keywords: Robust fault detection, cascade control, independent RBF model, RBF neural networks, Chylla-Haase reactor, FDI under closed-loop control.

132 Understanding and Designing Situation-Aware Mobile and Ubiquitous Computing Systems

Authors: Kai Häussermann, Christoph Hubig, Paul Levi, Frank Leymann, Oliver Siemoneit, Matthias Wieland, Oliver Zweigle

Abstract:

Using spatial models as a shared common basis of information about the environment for different kinds of context-aware systems has been a heavily researched topic in recent years. The research has focused on how to create, update, and merge spatial models so as to enable highly dynamic, consistent and coherent spatial models at large scale. In this paper, however, we concentrate on how context-aware applications can use this information to adapt their behavior according to the situation they are in. The main idea is to provide the spatial model infrastructure with a situation recognition component based on generic situation templates. A situation template is – as part of a much larger situation template library – an abstract, machine-readable description of a certain basic situation type, which can be used by different applications to evaluate their situation. Different theoretical and practical issues – technical, ethical and philosophical ones – that are important for understanding and developing situation-dependent systems based on situation templates are discussed. A basic system design is presented which allows for reasoning with uncertain data using an improved version of a learning algorithm for the automatic adaptation of situation templates. Finally, to support the development of adaptive applications, we present a new situation-aware adaptation concept based on workflows.

Keywords: context-awareness, ethics, facilitation of system use through workflows, situation recognition and learning based on situation templates and situation ontologies, theory of situation-aware systems

131 Development of Precise Ephemeris Generation Module for Thaichote Satellite Operations

Authors: Manop Aorpimai, Ponthep Navakitkanok

Abstract:

In this paper, the development of the ephemeris generation module used for the Thaichote satellite operations is presented. It is a vital part of the flight dynamics system, which comprises the orbit determination, orbit propagation, event prediction and station-keeping manoeuvre modules. In the generation of the spacecraft ephemeris data, the estimated orbital state vector from the orbit determination module is used as the initial condition, and the equations of motion are then integrated forward in time to predict the satellite states. The higher geopotential harmonics, as well as other disturbing forces, are taken into account to resemble the environment in low-Earth orbit. Using a highly accurate numerical integrator based on the Bulirsch-Stoer algorithm, the ephemeris data can be generated for long-term predictions with a relatively small computational burden and short calculation time. Events occurring during the prediction span that are related to mission operations, such as the satellite's rise/set as viewed from the ground station, Earth and Moon eclipses, ground-track drift, and the drift in the local solar time of the orbital plane, are all detected and reported. When combined with other modules to form a flight dynamics system, this application is intended for the Thaichote satellite and Thailand's successive Earth-observation missions.
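
A stripped-down propagation step of this kind can be sketched with SciPy: the state is integrated under two-body gravity plus the J2 harmonic, with SciPy's DOP853 integrator standing in for the Bulirsch-Stoer scheme and an illustrative, roughly sun-synchronous initial state.

```python
import numpy as np
from scipy.integrate import solve_ivp

MU = 398600.4418                  # Earth's gravitational parameter, km^3/s^2
J2, RE = 1.08263e-3, 6378.137     # J2 zonal coefficient and Earth radius, km

def accel(t, s):
    """Two-body acceleration plus the J2 term; higher harmonics, drag and
    luni-solar perturbations from the paper are omitted in this sketch."""
    r, v = s[:3], s[3:]
    rn = np.linalg.norm(r)
    a_kep = -MU * r / rn**3
    z2 = (r[2] / rn) ** 2
    k = 1.5 * J2 * MU * RE**2 / rn**5
    a_j2 = k * np.array([r[0] * (5 * z2 - 1), r[1] * (5 * z2 - 1), r[2] * (5 * z2 - 3)])
    return np.concatenate([v, a_kep + a_j2])

# Illustrative low-Earth-orbit initial state vector [x, y, z, vx, vy, vz] (km, km/s).
s0 = np.array([-7050.0, 0.0, 0.0, 0.0, -1.0, 7.4])
sol = solve_ivp(accel, (0.0, 86400.0), s0, method="DOP853",
                rtol=1e-10, atol=1e-12, dense_output=True)
print(sol.sol(5400.0)[:3])        # predicted position 90 minutes after the epoch
```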

Keywords: Flight Dynamics System, Orbit Propagation, Satellite Ephemeris, Thailand’s Earth Observation Satellite.

130 Rapid Finite-Element Based Airport Pavement Moduli Solutions using Neural Networks

Authors: Kasthurirangan Gopalakrishnan, Marshall R. Thompson, Anshu Manik

Abstract:

This paper describes the use of artificial neural networks (ANN) for predicting the non-linear layer moduli of flexible airfield pavements subjected to new generation aircraft (NGA) loading, based on the deflection profiles obtained from Heavy Weight Deflectometer (HWD) test data. The HWD test is one of the most widely used tests for routinely assessing the structural integrity of airport pavements in a non-destructive manner. The elastic moduli of the individual pavement layers backcalculated from the HWD deflection profiles are effective indicators of layer condition and are used for estimating the remaining pavement life. HWD tests were periodically conducted at the Federal Aviation Administration's (FAA's) National Airport Pavement Test Facility (NAPTF) to monitor the effect of Boeing 777 (B777) and Boeing 747 (B747) test gear trafficking on the structural condition of flexible pavement sections. In this study, a multi-layer, feed-forward network which uses an error-backpropagation algorithm was trained to approximate the HWD backcalculation function. A synthetic database generated using an advanced non-linear pavement finite-element program was used to train the ANN to overcome the limitations associated with conventional pavement moduli backcalculation. The changes in the ANN-based backcalculated pavement moduli with trafficking were used to compare the relative severity effects of the aircraft landing gears on the NAPTF test pavements.

Keywords: Airfield pavements, ANN, backcalculation, new-generation aircraft

129 Web Proxy Detection via Bipartite Graphs and One-Mode Projections

Authors: Zhipeng Chen, Peng Zhang, Qingyun Liu, Li Guo

Abstract:

With the Internet becoming the dominant channel for business and life, many IPs are increasingly masked using web proxies for illegal purposes such as propagating malware, impersonating phishing pages to steal sensitive data, or redirecting victims to other malicious targets. Moreover, as Internet traffic continues to grow in size and complexity, it has become an increasingly challenging task to detect proxy services due to their dynamic updates and high anonymity. In this paper, we present an approach based on behavioral graph analysis to study the behavior similarity of web proxy users. Specifically, we use bipartite graphs to model host communications from network traffic and build one-mode projections of the bipartite graphs to discover the social-behavior similarity of web proxy users. Based on the similarity matrices of end-users from the derived one-mode projection graphs, we apply a simple yet effective spectral clustering algorithm to discover the inherent behavior clusters of web proxy users. A web proxy URL may vary from time to time, but the underlying interest does not. Based on this intuition, using our private tools implemented with WebDriver, we examine whether the top URLs visited by the web proxy users are web proxies. Our experimental results on real datasets show that the behavior clusters not only reduce the number of URLs to analyze but also provide an effective way to detect web proxies, especially unknown ones.
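
The bipartite-graph modelling, one-mode projection and spectral clustering steps can be sketched with NetworkX and scikit-learn as below; the toy IP-to-URL access log is fabricated for illustration and the clustering settings are arbitrary.

```python
import networkx as nx
import numpy as np
from networkx.algorithms import bipartite
from sklearn.cluster import SpectralClustering

# Toy host-to-URL access log: (client IP, visited URL) pairs (hypothetical data).
edges = [("ip1", "u1"), ("ip1", "u2"), ("ip2", "u1"), ("ip2", "u2"), ("ip2", "u3"),
         ("ip3", "u3"), ("ip4", "u3"), ("ip4", "u4"), ("ip5", "u4")]

ips = sorted({e[0] for e in edges})
urls = sorted({e[1] for e in edges})
B = nx.Graph()
B.add_nodes_from(ips, bipartite=0)
B.add_nodes_from(urls, bipartite=1)
B.add_edges_from(edges)

# One-mode projection onto the client side: clients are linked by shared URLs,
# with edge weight = number of URLs they have in common.
P = bipartite.weighted_projected_graph(B, ips)
A = nx.to_numpy_array(P, nodelist=ips)        # end-user similarity matrix

labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                            random_state=0).fit_predict(A)
print(dict(zip(ips, labels)))                 # behavior clusters of the clients
```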

Keywords: Bipartite graph, clustering, one-mode projection, web proxy detection.

128 Pilot-Assisted Direct-Current Biased Optical Orthogonal Frequency Division Multiplexing Visible Light Communication System

Authors: Ayad A. Abdulkafi, Shahir F. Nawaf, Mohammed K. Hussein, Ibrahim K. Sileh, Fouad A. Abdulkafi

Abstract:

Visible light communication (VLC) is a new approach to optical wireless communication proposed to supplement the congested radio frequency (RF) spectrum. VLC systems are combined with orthogonal frequency division multiplexing (OFDM) to achieve high-rate transmission and high spectral efficiency. In this paper, we investigate pilot-assisted channel estimation for DC-biased optical OFDM (PACE-DCO-OFDM) systems to reduce the effects of distortion on the transmitted signal. Least-squares (LS) and linear minimum mean-squared error (LMMSE) estimators are implemented in MATLAB/Simulink to improve the bit-error-rate (BER) performance of PACE-DCO-OFDM. Results show that the DCO-OFDM system based on the PACE scheme achieves better BER performance than the conventional system without pilot-assisted channel estimation. Simulation results show that the proposed PACE-DCO-OFDM based on the LMMSE algorithm estimates the channel more accurately and achieves better BER performance than the LS-based PACE-DCO-OFDM and the traditional system without PACE. At the same signal-to-noise ratio (SNR) of 25 dB, the achieved BER is about 5×10⁻⁴ for LMMSE-PACE and 4.2×10⁻³ for LS-PACE, while it is about 2×10⁻¹ for the system without the PACE scheme.
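
The LS and LMMSE pilot estimators compared above can be sketched in NumPy for a single OFDM symbol; the flat Rayleigh channel, unit BPSK pilots and identity channel-correlation matrix are simplifying assumptions, not the paper's Simulink model.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pilots = 64
snr_db = 25
snr = 10 ** (snr_db / 10)

# Hypothetical frequency-domain channel at the pilot subcarriers and known pilots.
h = (rng.normal(size=n_pilots) + 1j * rng.normal(size=n_pilots)) / np.sqrt(2)
x = np.ones(n_pilots)                              # unit-power BPSK pilots
noise = (rng.normal(size=n_pilots) + 1j * rng.normal(size=n_pilots)) / np.sqrt(2 * snr)
y = h * x + noise                                  # received pilot subcarriers

# LS: divide the received pilots by the transmitted pilots.
h_ls = y / x

# LMMSE: shrink the LS estimate using the channel correlation and noise variance
# (identity correlation assumed here for simplicity).
R_hh = np.eye(n_pilots, dtype=complex)
W = R_hh @ np.linalg.inv(R_hh + (1 / snr) * np.eye(n_pilots))
h_lmmse = W @ h_ls

for name, est in (("LS", h_ls), ("LMMSE", h_lmmse)):
    print(name, np.mean(np.abs(est - h) ** 2))     # channel-estimation MSE
```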

Keywords: Channel estimation, OFDM, pilot-assist, VLC.

127 Star-Hexagon Transformer Supported UPQC

Authors: Yash Pal, A.Swarup, Bhim Singh

Abstract:

A new topology of unified power quality conditioner (UPQC) is proposed for different power quality (PQ) improvements in a three-phase four-wire (3P-4W) distribution system. For neutral current mitigation, a star-hexagon transformer is connected in shunt near the load along with a three-leg voltage source inverter (VSI) based UPQC. For the mitigation of the source neutral current, the use of passive elements is advantageous over active compensation due to its ruggedness and lower control complexity. In addition, connecting a star-hexagon transformer for neutral current mitigation reduces the overall rating of the UPQC. The performance of the proposed 3P-4W UPQC topology is evaluated for power-factor correction, load balancing, neutral current mitigation, and mitigation of voltage and current harmonics. A simple control algorithm based on the Unit Vector Template (UVT) technique is used as the control strategy of the UPQC for mitigating different PQ problems. In this control scheme, the current/voltage control is applied to the fundamental supply currents/voltages instead of the fast-changing APF currents/voltages, thereby reducing the computational delay. Moreover, no extra control is required for neutral source current compensation; hence the number of current sensors is reduced. The performance of the proposed UPQC topology is analyzed through simulation results using MATLAB with its Simulink and Power System Blockset toolboxes.

Keywords: Power-factor correction, Load balancing, UPQC, Voltage and current harmonics, Neutral current mitigation, Star-hexagon transformer.

126 A Case Study on Optimization of Contractor’s Financing through Allocation of Subcontractors

Authors: Helen S. Ghali, Engy Serag, A. Samer Ezeldin

Abstract:

In many countries, the construction industry relies heavily on outsourcing models in executing projects and expanding businesses to fit the diverse market. Such extensive integration of subcontractors is becoming an influential factor in the contractor's cash flow management. Accordingly, subcontractors' financial terms are important phenomena and pivotal components for the well-being of the contractor's cash flow. The aim of this research is to study the contractor's cash flow with respect to the owner's and subcontractors' payment management plans, considering variable advance payment, payment frequency, and lag and retention policies. The model is developed to provide contractors with a decision support tool that can assist in selecting the optimum subcontracting plan to minimize the contractor's financing limits and optimize the profit values. The model is built using Microsoft Excel VBA coding, and a genetic algorithm is utilized as the optimization tool. Three objective functions are investigated: minimizing the highest negative overdraft value, minimizing the net present worth of the overdraft, and maximizing the project net profit. The model is validated on a full-scale project which includes both self-performed and subcontracted work packages. The results show the model's potential in optimizing the contractor's negative cash flow values while assisting contractors in selecting suitable subcontractors to achieve the objective function.

Keywords: Cash flow optimization, payment plan, procurement management, subcontracting plan.

125 Graph-based High Level Motion Segmentation using Normalized Cuts

Authors: Sungju Yun, Anjin Park, Keechul Jung

Abstract:

Motion capture devices have been utilized in producing various content, such as movies and video games. However, since motion capture devices are expensive and inconvenient to use, motions segmented from captured data are recycled and synthesized for use in other content, but the motions have generally been segmented manually by content producers. Automatic motion segmentation has therefore been getting a lot of attention recently. Previous approaches are divided into on-line and off-line methods, where on-line approaches segment motions based on similarities between neighboring frames and off-line approaches segment motions by capturing the global characteristics in feature space. In this paper, we propose a graph-based high-level motion segmentation method. Since high-level motions consist of several repeated frames within temporal distances, we consider all similarities among all frames within the temporal distance. This is achieved by constructing a graph where each vertex represents a frame and the edges between frames are weighted by their similarity. The normalized cuts algorithm is then used to partition the constructed graph into several sub-graphs by globally finding minimum cuts. In the experiments, the proposed method showed better performance than a PCA-based method (on-line) and a GMM-based method (off-line), as it globally segments motions from a graph constructed from similarities between neighboring frames as well as similarities among all frames within the temporal distance.
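
A minimal sketch of the graph construction and normalized-cuts partitioning is given below; scikit-learn's SpectralClustering (a normalized-cut-style spectral method) is applied to a synthetic frame-similarity matrix built within a temporal window, with made-up pose features in place of real motion-capture data.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(0)
# Hypothetical pose features for 300 frames drawn from three underlying motions.
frames = np.vstack([rng.normal(loc, 0.3, size=(100, 12)) for loc in (0.0, 2.0, 4.0)])

# Frame-to-frame affinity within a temporal window, weighted by pose similarity.
n, window = len(frames), 40
A = np.zeros((n, n))
for i in range(n):
    lo, hi = max(0, i - window), min(n, i + window + 1)
    d = np.linalg.norm(frames[lo:hi] - frames[i], axis=1)
    A[i, lo:hi] = np.exp(-d ** 2)

A = 0.5 * (A + A.T)                           # symmetric affinity matrix for the graph
labels = SpectralClustering(n_clusters=3, affinity="precomputed",
                            random_state=0).fit_predict(A)
print(np.bincount(labels))                    # frames assigned to each motion segment
```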

Keywords: Capture Devices, High-Level Motion, Motion Segmentation, Normalized Cuts

124 Using Time-Series NDVI to Model Land Cover Change: A Case Study in the Berg River Catchment Area, Western Cape, South Africa

Authors: A. S. Adesuyi, Z. Munch

Abstract:

This study investigates the use of a time series of MODIS NDVI data to identify agricultural land cover change on an annual time step (2007-2012) and characterize the trend. Following an ISODATA classification of the MODIS imagery to selectively mask areas that are neither agricultural nor semi-natural, NDVI signatures were created to identify areas of cereals and vineyards with the aid of ancillary, pictometry and field sample data for 2010. The NDVI signature curves and training samples were used to create a decision tree model in WEKA 3.6.9 using the decision tree classifier (J48) algorithm; Model 1 included the ISODATA classification and Model 2 did not. These two models were then used to classify all data for the study area for 2010, producing land cover maps with classification accuracies of 77% and 80% for Models 1 and 2, respectively. Model 2 was subsequently used to create land cover classification and change detection maps for all other years. Subtle changes and areas of consistency (unchanged) were observed in the agricultural classes and crop practices over the years, as predicted by the land cover classification. Forty-one percent of the catchment comprised cereals, with 35% possibly following a crop rotation system. Vineyards largely remained constant, with only one percent converted to vineyard from other land cover classes.
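
The decision-tree step can be sketched with scikit-learn's CART classifier standing in for WEKA's J48; the synthetic annual NDVI profiles for cereals and vineyards below are invented for illustration only.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_pixels, n_epochs = 1000, 23                 # e.g. 23 MODIS NDVI composites per year

# Hypothetical annual NDVI profiles: cereals show a strong seasonal peak,
# vineyards stay moderate and flat.
t = np.linspace(0, 2 * np.pi, n_epochs)
cereals = 0.3 + 0.4 * np.maximum(np.sin(t), 0) + rng.normal(0, 0.05, (n_pixels // 2, n_epochs))
vines = 0.45 + 0.05 * np.sin(t) + rng.normal(0, 0.05, (n_pixels // 2, n_epochs))
X = np.vstack([cereals, vines])
y = np.array([0] * (n_pixels // 2) + [1] * (n_pixels // 2))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
# CART decision tree on the NDVI time series, standing in for J48.
tree = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_tr, y_tr)
print(accuracy_score(y_te, tree.predict(X_te)))
```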

Keywords: Change detection, Land cover, NDVI, time-series.

123 Effect of Scene Changing on Image Sequences Compression Using Zero Tree Coding

Authors: Mbainaibeye Jérôme, Noureddine Ellouze

Abstract:

In this paper, we study the effect of scene changes on an image sequence coding system using the Embedded Zerotree Wavelet (EZW). The scene change considered here is the full-motion change that may occur, and a special image sequence is generated in which scene changes occur randomly. Two scenarios are considered. In the first scenario, the system must provide the best possible reconstruction quality by managing the bit rate (BR) while the scene change occurs. In the second scenario, the system must keep the bit rate as constant as possible by managing the reconstruction quality. The first scenario may be motivated by the availability of a wide-bandwidth transmission channel, where an increase of the bit rate is possible to keep the reconstruction quality above a given threshold. The second scenario concerns a narrow-bandwidth transmission channel where an increase of the bit rate is not possible; in this case, applications for which the reconstruction quality is not a constraint may be considered. The simulations are performed with a five-scale wavelet decomposition using the 9/7-tap biorthogonal filter bank. The entropy coding is performed using a specifically defined binary codebook and the EZW algorithm. Experimental results are presented and compared to LEAD H263 EVAL. It is shown that if the reconstruction quality is the constraint, the system increases the bit rate to obtain the required quality. In the case where the bit rate must be constant, the system is unable to provide the required quality if a scene change occurs; however, the system is able to improve the quality once the scene change has passed.

Keywords: Image Sequence Compression, Wavelet Transform, Scene Changing, Zero Tree, Bit Rate, Quality.
