Search results for: controlled elitism non-dominated sorting genetic algorithm
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4262


272 State Estimation of a Biotechnological Process Using Extended Kalman Filter and Particle Filter

Authors: R. Simutis, V. Galvanauskas, D. Levisauskas, J. Repsyte, V. Grincas

Abstract:

This paper deals with advanced state estimation algorithms for estimating biomass concentration and specific growth rate in a typical fed-batch biotechnological process. The biotechnological process was represented by a nonlinear mass-balance based process model. An Extended Kalman Filter (EKF) and a Particle Filter (PF) were used to estimate the unmeasured state variables from oxygen uptake rate (OUR) and base consumption (BC) measurements. To obtain more general results, a simplified process model was used in both the EKF and PF estimation algorithms. This model does not require special growth kinetic equations and can be applied for state estimation in various bioprocesses. The focus of this investigation was the comparison of the estimation quality of the EKF and PF estimators under different measurement noise levels. The simulation results show that the Particle Filter algorithm requires significantly more computation time for state estimation but gives lower estimation errors for both biomass concentration and specific growth rate. In addition, the tuning procedure for the Particle Filter is simpler than for the EKF. Consequently, the Particle Filter should be preferred in real applications, especially for monitoring of industrial bioprocesses, where simple implementation procedures are always desirable.
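To make the comparison concrete, the following is a minimal sketch of a bootstrap particle filter of the kind compared above. The process and measurement models, noise levels, and particle count are illustrative placeholders, not the mass-balance model used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def process_model(x, dt=0.1, q=0.05):
    # placeholder growth dynamics with additive process noise
    mu = 0.2                                   # assumed specific growth rate
    return x + mu * x * dt + q * rng.standard_normal(x.shape)

def measurement_model(x):
    # placeholder: measurement (e.g., OUR-like signal) proportional to biomass
    return 0.5 * x

def particle_filter_step(particles, weights, y, r=0.1):
    particles = process_model(particles)                       # predict
    residual = y - measurement_model(particles)                 # innovation
    weights = weights * np.exp(-0.5 * (residual / r) ** 2)      # likelihood weighting
    weights = weights + 1e-300                                  # guard against degeneracy
    weights = weights / weights.sum()
    idx = rng.choice(len(particles), size=len(particles), p=weights)  # resample
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

particles = rng.normal(1.0, 0.2, size=500)        # initial biomass guesses
weights = np.full(500, 1.0 / 500)
for y in [0.55, 0.61, 0.66]:                      # synthetic measurements
    particles, weights = particle_filter_step(particles, weights, y)
    print("estimated biomass:", particles.mean())
```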

Keywords: Biomass concentration, Extended Kalman Filter, Particle Filter, State estimation, Specific growth rate.

271 A Finite Precision Block Floating Point Treatment to Direct Form, Cascaded and Parallel FIR Digital Filters

Authors: Abhijit Mitra

Abstract:

This paper proposes an efficient finite precision block floating point (BFP) treatment of the fixed-coefficient finite impulse response (FIR) digital filter. The treatment includes effective implementation of all three forms of the conventional FIR filter, namely direct form, cascaded and parallel, and a roundoff error analysis of them in the BFP format. An effective block formatting algorithm together with an adaptive scaling factor is proposed to make the realizations simpler from a hardware viewpoint. To this end, a generic relation between the tap weight vector length and the input block length is deduced. The implementation scheme also emphasizes a simple block exponent update technique to prevent overflow, even during the block-to-block transition phase. The roundoff noise is also investigated along analogous lines, taking these implementation issues into consideration. The simulation results show that the BFP roundoff errors depend on the signal level in almost the same way as floating point roundoff noise, resulting in an approximately constant signal-to-noise ratio over a relatively large dynamic range.
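A small sketch of the block-formatting idea follows: each block of input samples shares one exponent and the mantissas are quantized to a fixed word length before direct-form filtering. The block length, word length and test signal are illustrative choices, not the paper's design parameters.

```python
import numpy as np

def bfp_format(block, mantissa_bits=12):
    """Return (quantized mantissas, shared block exponent) for one input block."""
    peak = np.max(np.abs(block))
    if peak == 0:
        return np.zeros_like(block), 0
    block_exp = int(np.ceil(np.log2(peak)))             # one exponent for the whole block
    scale = 2.0 ** (mantissa_bits - 1)
    mantissas = np.round(block / 2.0 ** block_exp * scale) / scale
    return mantissas, block_exp

def bfp_fir_direct_form(x, h, block_len=32, mantissa_bits=12):
    """Direct-form FIR filtering applied block by block to BFP-formatted input."""
    y = np.zeros(len(x))
    state = np.zeros(len(h) - 1)                         # samples carried across block boundaries
    for start in range(0, len(x), block_len):
        m, e = bfp_format(x[start:start + block_len], mantissa_bits)
        segment = np.concatenate([state, m * 2.0 ** e])  # reconstruct, then filter
        y[start:start + len(m)] = np.convolve(segment, h, mode="valid")
        state = segment[-(len(h) - 1):]
    return y

x = np.sin(2 * np.pi * 0.05 * np.arange(256))
h = np.ones(8) / 8.0                                     # simple low-pass FIR
error = bfp_fir_direct_form(x, h) - np.convolve(x, h)[:len(x)]
print("max BFP roundoff error:", np.max(np.abs(error)))
```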

Keywords: Finite impulse response digital filters, Cascade structure, Parallel structure, Block floating point arithmetic, Roundoff error.

270 Lane Changing and Merging Maneuvers of Carlike Robots

Authors: Bibhya Sharma, Jito Vanualailai, Ravindra Rai

Abstract:

This research paper designs a unique motion planner for multiple platoons of nonholonomic car-like robots as a feasible solution to lane changing/merging maneuvers. The decentralized planner, with a leaderless approach and a path-guidance principle derived from the Lyapunov-based control scheme, generates collision-free and safe merging maneuvers from multiple lanes to a single lane by deploying a split/merge strategy. The fixed obstacles are the markings and boundaries of the road lanes, while the moving obstacles are the robots themselves. Real and virtual road lane markings and the boundaries of road lanes are incorporated into a workspace to achieve the desired formation and configuration of the robots. Convergence of the robots to goal configurations and the repulsion of the robots from specified obstacles are achieved by suitable attractive and repulsive potential field functions, respectively. The results can be viewed as a significant contribution to avoidance algorithms for intelligent vehicle systems (IVS). Computer simulations highlight the effectiveness of the split/merge strategy and the acceleration-based controllers.
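A minimal sketch of the attractive/repulsive potential-field idea that underlies such Lyapunov-based controllers is shown below: the robot is driven by the negative gradient of a goal-attraction term plus obstacle-repulsion terms. The gains, obstacle geometry and integration step are illustrative, not the paper's parameters.

```python
import numpy as np

def velocity_command(pos, goal, obstacles, k_att=1.0, k_rep=0.5, influence=2.0):
    v = -k_att * (pos - goal)                        # attractive component towards the goal
    for obs_center, obs_radius in obstacles:
        d_vec = pos - obs_center
        d = np.linalg.norm(d_vec) - obs_radius       # clearance to the obstacle surface
        if 0 < d < influence:
            # repulsion grows as the robot approaches the obstacle boundary
            v += k_rep * (1.0 / d - 1.0 / influence) / d ** 2 * (d_vec / np.linalg.norm(d_vec))
    return v

pos = np.array([0.0, 0.0])
goal = np.array([10.0, 2.0])                         # target lane position
obstacles = [(np.array([5.0, 2.0]), 0.8)]            # another car-like robot on the path
for _ in range(200):
    pos = pos + 0.05 * velocity_command(pos, goal, obstacles)   # simple Euler step
print("final position:", pos)
```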

Keywords: Lane merging, Lyapunov-based control scheme, path-guidance principle, split/merge strategy.

269 Optimal Green Facility Planning - Implementation of Organic Rankine Cycle System for Factory Waste Heat Recovery

Authors: Chun-Wei Lin, Yu-Lin Chen

Abstract:

As global industry develops rapidly, energy demand rises with it. A large amount of energy is consumed in production processes, much of it in generating process heat. Of the total energy consumption, 40% was used for process heat, mechanical work, chemical energy and electricity, while the remaining 50% was released into the environment. This causes energy waste and environmental pollution. There are many ways to recover waste heat in a factory. An Organic Rankine Cycle (ORC) system can produce electricity and reduce energy costs by recovering low-temperature waste heat in the factory. In addition, ORC is the technology with the highest power-generation efficiency in low-temperature heat recycling. However, many factory executives still hesitate because of the high implementation cost of the ORC system, even though a lot of heat is wasted. Therefore, this study constructs a nonlinear mathematical model of waste heat recovery equipment configuration to maximize profits. A particle swarm optimization algorithm is developed to generate the optimal facility installation plan for the ORC system.
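The following is a compact particle swarm optimization (PSO) loop of the kind used to search for a profitable installation plan. The two decision variables and the profit function are hypothetical stand-ins for the paper's equipment-configuration model.

```python
import numpy as np

rng = np.random.default_rng(1)

def profit(x):
    # hypothetical profit: revenue from recovered heat minus installation cost
    capacity, units = x
    return 120.0 * np.sqrt(capacity) * units - 35.0 * capacity * units

n, dim, iters = 30, 2, 100
lo, hi = np.array([1.0, 1.0]), np.array([50.0, 5.0])     # bounds on (capacity, units)
pos = rng.uniform(lo, hi, size=(n, dim))
vel = np.zeros((n, dim))
pbest, pbest_val = pos.copy(), np.array([profit(p) for p in pos])
gbest = pbest[np.argmax(pbest_val)]

for _ in range(iters):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)                     # keep plans inside the feasible box
    vals = np.array([profit(p) for p in pos])
    improved = vals > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[np.argmax(pbest_val)]

print("best plan (capacity, units):", gbest, "profit:", profit(gbest))
```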

Keywords: Green facility planning, organic rankine cycle, particle swarm optimization, waste heat recovery.

268 Efficient Boosting-Based Active Learning for Specific Object Detection Problems

Authors: Thuy Thi Nguyen, Nguyen Dang Binh, Horst Bischof

Abstract:

In this work, we present a novel active learning approach for learning a visual object detection system. Our system is composed of an active learning mechanism as a wrapper around a sub-algorithm which implements an online boosting-based object detector. At the core is a combination of a bootstrap procedure and a semi-automatic learning process based on online boosting. The idea is to exploit the availability of the classifier during learning to automatically label training samples and incrementally improve the classifier. This reduces the labeling effort while obtaining better performance. In addition, we propose a verification process for further improvement of the classifier. The idea is to allow re-updating on previously seen data during learning to stabilize the detector. The main contribution of this empirical study is a demonstration that active learning based on an online boosting approach trained in this manner can achieve results comparable to, or even better than, a framework trained in the conventional manner with much more labeling effort. Experiments on challenging data sets for specific object detection problems show the effectiveness of our approach.

Keywords: Computer vision, object detection, online boosting, active learning, labeling complexity.

267 Introducing Sequence-Order Constraint into Prediction of Protein Binding Sites with Automatically Extracted Templates

Authors: Yi-Zhong Weng, Chien-Kang Huang, Yu-Feng Huang, Chi-Yuan Yu, Darby Tien-Hao Chang

Abstract:

Searching for a tertiary substructure that geometrically matches the 3D pattern of the binding site of a well-studied protein provides a way to predict protein functions. In our previous work, a web server was built to predict protein-ligand binding sites based on automatically extracted templates. However, a drawback of such templates is that the web server was prone to produce many false positive matches. In this study, we present a sequence-order constraint to reduce the false positive matches that arise when automatically extracted templates are used to predict protein-ligand binding sites. The binding site predictor comprises i) an automatically constructed template library and ii) a local structure alignment algorithm for querying the library. The sequence-order constraint is employed to identify inconsistencies between the local regions of the query protein and the templates. Experimental results reveal that the sequence-order constraint can largely reduce the false positive matches and is effective for template-based binding site prediction.
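A minimal illustration of what a sequence-order constraint checks is sketched below: a structural match between query and template residues is kept only if the matched residue indices appear in the same order along both sequences. The matched pairs are synthetic examples, not data from the paper.

```python
def satisfies_sequence_order(matched_pairs):
    """matched_pairs: list of (query_residue_index, template_residue_index)."""
    pairs = sorted(matched_pairs)                        # order by query residue index
    template_indices = [t for _, t in pairs]
    # the template indices must also be strictly increasing
    return all(a < b for a, b in zip(template_indices, template_indices[1:]))

print(satisfies_sequence_order([(12, 40), (25, 55), (31, 60)]))  # True: order-consistent match
print(satisfies_sequence_order([(12, 55), (25, 40), (31, 60)]))  # False: crossed match, rejected
```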

Keywords: Protein structure, binding site, functional prediction

266 Using HMM-based Classifier Adapted to Background Noises with Improved Sounds Features for Audio Surveillance Application

Authors: Asma Rabaoui, Zied Lachiri, Noureddine Ellouze

Abstract:

Discrimination between different classes of environmental sounds is the goal of our work. The use of a sound recognition system can offer concrete potential for surveillance and security applications. The paper's first contribution to this research field is a thorough investigation of the applicability of state-of-the-art audio features in the domain of environmental sound recognition. Additionally, a set of novel features obtained by combining the basic parameters is introduced. The quality of the investigated features is evaluated by an HMM-based classifier, which receives particular attention. In fact, we propose to use a multi-style training system based on HMMs: one recognizer is trained on a database including different levels of background noise and is used as a universal recognizer for every environment. In order to enhance the system robustness by reducing the environmental variability, we explore different adaptation algorithms, including Maximum Likelihood Linear Regression (MLLR), Maximum A Posteriori (MAP) and the MAP/MLLR algorithm that combines MAP and MLLR. Experimental evaluation shows that a rather good recognition rate can be reached, even under severe noise degradation, when the system is fed with a suitable set of features.

Keywords: Sounds recognition, HMM classifier, Multi-style training, Environmental Adaptation, Feature combinations.

265 MONPAR - A Page Replacement Algorithm for a Spatiotemporal Database

Authors: U. Kalay, O. Kalıpsız

Abstract:

For a spatiotemporal database management system, the I/O cost of queries and other operations is an important performance criterion. In order to optimize this cost, intense research on designing robust index structures has been carried out over the past decade. Beyond these major considerations, there are still other design issues that deserve attention due to their direct impact on the I/O cost. In particular, an efficient buffer management strategy plays a key role in reducing redundant disk access. In this paper, we propose an efficient buffer strategy for a spatiotemporal database index structure, specifically one indexing objects moving over a network of roads. The proposed strategy, namely MONPAR, is based on the data type (i.e. spatiotemporal data) and the structure of the index. For the purpose of an experimental evaluation, we set up a simulation environment that counts the number of disk accesses while executing a number of spatiotemporal range queries over the index. We repeated the simulations with query sets of different distributions, such as uniform and skewed query distributions. Based on the comparison of our strategy with well-known page-replacement techniques, such as LRU-based and Priority-based buffers, we conclude that MONPAR behaves better than its competitors for small and medium-size buffers under all tested query distributions.

Keywords: Buffer Management, Spatiotemporal databases.

264 Lead and Cadmium Spatial Pattern and Risk Assessment around Coal Mine in Hyrcanian Forest, North Iran

Authors: Mahsa Tavakoli, Seyed Mohammad Hojjati, Yahya Kooch

Abstract:

In this study, the effect of coal mining activities on lead and cadmium concentrations and distribution in soil was investigated in the Hyrcanian forest, North Iran. Sixteen plots (20×20 m²) were established systematic-randomly (on a 60×60 m² grid) in an area of 4 ha (200×200 m², with the mine entrance at the center). An area adjacent to the mine that was not affected by the mining activity was considered as the control area. In order to investigate soil lead and cadmium concentrations, one sample was taken from the 0-10 cm layer in each plot. To study the spatial pattern of soil properties and lead and cadmium concentrations in the mining area, an area of 80×80 m² (with the mine at the center) was considered and 80 soil samples were taken systematic-randomly (at 10 m intervals). Geostatistical analysis was performed via the kriging method and GS+ software (version 5.1). In order to estimate the impact of coal mining activities on soil quality, the pollution index was calculated. Lead and cadmium concentrations were significantly higher in the mine area (Pb: 10.97±0.30, Cd: 184.47±6.26 mg kg⁻¹) in comparison to the control area (Pb: 9.42±0.17, Cd: 131.71±15.77 mg kg⁻¹). The mean values of the PI index indicate slight pollution for Pb (1.16) and Cd (1.77). Results of the NIPI index indicated slight pollution for Pb (1.44) and moderate pollution for Cd (2.52). Results of variography and kriging showed that it is possible to prepare interpolation maps of lead and cadmium around the mining areas in the Hyrcanian forest. According to the results of the pollution and risk assessments, the forest soil was contaminated by heavy metals (lead and cadmium); therefore, using reclamation and remediation techniques in these areas is necessary.

Keywords: Traditional coal mining, heavy metals, pollution indicators, geostatistics, Caspian forest.

263 Kinetics and Thermodynamics Adsorption of Phenolic Compounds on Organic-Inorganic Hybrid Mesoporous Material

Authors: Makhlouf Mourad, Messabih Sidi Mohamed, Bouchher Omar, Houali Farida, Benrachedi Khaled

Abstract:

Mesoporous materials are very commonly used as adsorbents for removing phenolic compounds. However, the adsorption mechanism of these compounds is still poorly controlled, and understanding the interactions between mesoporous materials and adsorbed molecules is very important in order to optimize liquid-phase adsorption processes. The difficulty of the synthesis is to keep an ordered cubic pore structure and achieve a homogeneous surface modification. The grafting of Si(CH3)3 was chosen to transform hydrophilic surfaces into hydrophobic surfaces. The aim of this work is to study the adsorption kinetics and thermodynamics of two volatile organic compounds (VOCs), phenol (PhOH) and p-hydroxybenzoic acid (4AHB), on a mesoporous material of the MCM-48 type grafted with an organosilane of the trimethylchlorosilane (TMCS) type; the grafted or functionalized material is hereinafter referred to as MCM-48-G. In a first step, the kinetic and thermodynamic study of the adsorption isotherms of each of the VOCs in mono-solution was carried out. In a second step, a similar study was carried out on a mixture of these two compounds. Kinetic models (pseudo-first order, pseudo-second order) were used to determine the kinetic adsorption parameters. The thermodynamic parameters of the adsorption isotherms were determined by the adsorption models (Langmuir, Freundlich). The comparative study of the adsorption of PhOH and 4AHB showed that MCM-48-G had a high adsorption capacity for both compounds; this may be related to the hydrophobicity created by the organic function of TMCS in MCM-48-G. The adsorption results for the two compounds using the Freundlich and Langmuir models show that the adsorption of 4AHB was higher than that of PhOH. The values obtained from the adsorption thermodynamics show that the interactions of our sample with phenol and 4AHB are of a physical nature. The adsorption of our VOCs on MCM-48-G is a spontaneous and exothermic process.
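A short sketch of fitting the Langmuir and Freundlich isotherms with non-linear least squares is given below. The equilibrium concentration and uptake values are synthetic placeholders, not the measured PhOH or 4AHB data.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(c, q_max, k_l):
    # monolayer adsorption isotherm: q = q_max * K_L * C / (1 + K_L * C)
    return q_max * k_l * c / (1.0 + k_l * c)

def freundlich(c, k_f, n):
    # empirical power-law isotherm: q = K_F * C^(1/n)
    return k_f * c ** (1.0 / n)

c_eq = np.array([5.0, 10.0, 25.0, 50.0, 100.0, 200.0])    # mg/L (synthetic)
q_eq = np.array([12.0, 21.0, 38.0, 52.0, 63.0, 70.0])     # mg/g (synthetic)

(qm, kl), _ = curve_fit(langmuir, c_eq, q_eq, p0=[80.0, 0.02])
(kf, n), _ = curve_fit(freundlich, c_eq, q_eq, p0=[5.0, 2.0])
print(f"Langmuir:   q_max = {qm:.1f} mg/g, K_L = {kl:.3f} L/mg")
print(f"Freundlich: K_F = {kf:.2f}, n = {n:.2f}")
```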

Keywords: Adsorption, kinetics, isotherm, mesoporous materials, TMCS, phenol, P-hydroxy benzoic acid.

262 The Design of Axisymmetric Ducts for Incompressible Flow with a Parabolic Axial Velocity Inlet Profile

Authors: V. Pavlika

Abstract:

In this paper a numerical algorithm is described for solving the boundary value problem associated with axisymmetric, inviscid, incompressible, rotational (and irrotational) flow in order to obtain duct wall shapes from prescribed wall velocity distributions. The governing equations are formulated in terms of the stream function ψ(x,y) and the function φ(x,y) as independent variables, where for irrotational flow φ(x,y) can be recognized as the velocity potential function; for rotational flow φ(x,y) ceases to be the velocity potential function but does remain orthogonal to the streamlines. A numerical method based on a finite difference scheme on a uniform mesh is employed. The technique described is capable of tackling the so-called inverse problem, where the wall velocity distributions are prescribed and the duct wall shape is calculated, as well as the direct problem, where the velocity distribution on the duct walls is calculated from prescribed duct geometries. The two different cases outlined in this paper are in fact boundary value problems with Neumann and Dirichlet boundary conditions, respectively. Even though both approaches are discussed, only numerical results for the case of the Dirichlet boundary conditions are given. A downstream condition is prescribed such that cylindrical flow, that is, flow which is independent of the axial coordinate, exists.
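As a small illustration of the direct (Dirichlet) problem on a uniform mesh, the sketch below iterates a finite-difference Laplace-type equation with prescribed boundary values. It ignores the rotational source term and the axisymmetric metric terms of the paper's formulation; grid size and boundary data are illustrative.

```python
import numpy as np

nx, ny = 41, 21
psi = np.zeros((ny, nx))                  # stream-function-like unknown on a uniform mesh
psi[0, :] = 0.0                           # lower boundary (duct axis)
psi[-1, :] = 1.0                          # upper boundary (duct wall, prescribed value)
psi[:, 0] = np.linspace(0.0, 1.0, ny)     # inlet profile
psi[:, -1] = np.linspace(0.0, 1.0, ny)    # downstream (cylindrical-flow) condition

for _ in range(2000):                     # Gauss-Seidel sweeps of the 5-point stencil
    for j in range(1, ny - 1):
        for i in range(1, nx - 1):
            psi[j, i] = 0.25 * (psi[j + 1, i] + psi[j - 1, i]
                                + psi[j, i + 1] + psi[j, i - 1])

print("mid-duct value:", psi[ny // 2, nx // 2])
```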

Keywords: Inverse problem, irrotational incompressible flow, Boundary value problem.

261 Numerical Simulation of Natural Gas Dispersion from Low Pressure Pipelines

Authors: Omid Adibi, Nategheh Najafpour, Bijan Farhanieh, Hossein Afshin

Abstract:

Gas release from pipelines is one of the main factors in gas industry accidents. Released gas ejects from the pipeline as a free jet and, as the jet grows, the fuel mixes with the ambient air. Accordingly, an accidental spark will release the chemical energy of the mixture in an explosion. A gas explosion damages equipment and endangers the lives of staff. Given the importance of safety in the gas industry, anticipating such accidents can reduce the number of casualties. In this paper, natural gas leakages from low-pressure pipelines are studied in two steps: 1) the simulation of the mixing process and identification of flammable zones, and 2) the simulation of wind effects on the mixing process. The numerical simulations were performed using the finite volume method and a pressure-based algorithm. Also, structured grid generation was used. The results show that, in just 6.4 s after the accident, the released natural gas could penetrate 40 m in the vertical and 20 m in the horizontal direction. Moreover, the results show that the wind speed is a key factor in the dispersion process. In fact, the wind transports the flammable zones downstream. Hence, to improve the safety of people and property, it is preferable to construct gas facilities and buildings on the side opposite the prevailing wind direction.

Keywords: Flammable zones, gas pipelines, numerical simulation, wind effects.

260 Stackelberg Security Game for Optimizing Security of Federated Internet of Things Platform Instances

Authors: Violeta Damjanovic-Behrendt

Abstract:

This paper presents an approach for optimal cyber security decisions to protect instances of a federated Internet of Things (IoT) platform in the cloud. The presented solution implements the repeated Stackelberg Security Game (SSG) and a model called Stochastic Human behaviour model with AttRactiveness and Probability weighting (SHARP). SHARP employs the Subjective Utility Quantal Response (SUQR) for formulating a subjective utility function, which is based on the evaluations of alternative solutions during decision-making. We augment the repeated SSG (including SHARP and SUQR) with a reinforcement learning algorithm called Naïve Q-Learning. Naïve Q-Learning belongs to the category of active and model-free Machine Learning (ML) techniques in which the agent (either the defender or the attacker) attempts to find an optimal security solution. In this way, we combine game theory (GT) and ML algorithms for discovering optimal cyber security policies. The proposed security optimization components will be validated in a collaborative cloud platform that is based on the Industrial Internet Reference Architecture (IIRA) and its recently published security model.
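To give a feel for the SUQR component, the sketch below computes quantal-response attack probabilities as a softmax over a weighted sum of defender coverage, attacker reward and attacker penalty. The weights and payoff values are illustrative, not the paper's calibrated model.

```python
import numpy as np

def suqr_attack_probabilities(coverage, reward, penalty, w=(-9.0, 0.8, 0.6)):
    """Attack probability per target under a Subjective Utility Quantal Response model."""
    w_cov, w_rew, w_pen = w
    subjective_utility = w_cov * coverage + w_rew * reward + w_pen * penalty
    exp_u = np.exp(subjective_utility - subjective_utility.max())   # numerically stable softmax
    return exp_u / exp_u.sum()

coverage = np.array([0.6, 0.2, 0.2])     # defender coverage of three IoT platform instances
reward = np.array([8.0, 5.0, 3.0])       # attacker reward if the attack succeeds
penalty = np.array([-4.0, -2.0, -1.0])   # attacker penalty if caught
print(suqr_attack_probabilities(coverage, reward, penalty))
```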

Keywords: Security, internet of things, cloud computing, Stackelberg security game, machine learning, Naïve Q-learning.

259 Discrete Polyphase Matched Filtering-based Soft Timing Estimation for Mobile Wireless Systems

Authors: Thomas O. Olwal, Michael A. van Wyk, Barend J. van Wyk

Abstract:

In this paper we present a soft timing phase estimation (STPE) method for wireless mobile receivers operating at low signal-to-noise ratios (SNRs). Discrete Polyphase Matched (DPM) filters, a log-maximum a posteriori probability (MAP) algorithm and/or a Soft-Output Viterbi Algorithm (SOVA) are combined to derive a new timing recovery (TR) scheme. We apply this scheme to a wireless cellular communication system model that comprises a raised cosine filter (RCF) and a bit-interleaved turbo-coded multi-level modulation (BITMM) scheme; the channel is assumed to be memoryless. Furthermore, no clock signals are transmitted to the receiver, contrary to the classical data-aided (DA) models. This new model ensures that both the bandwidth and the power of the communication system are conserved. However, the computational complexity of ideal turbo synchronization is increased by 50%. Several simulation tests on bit error rate (BER) and block error rate (BLER) versus low SNR reveal that the proposed iterative soft timing recovery (ISTR) scheme outperforms the conventional schemes.

Keywords: discrete polyphase matched filters, maximum likelihood estimators, soft timing phase estimation, wireless mobile systems.

258 An Efficient Architecture for Interleaved Modular Multiplication

Authors: Ahmad M. Abdel Fattah, Ayman M. Bahaa El-Din, Hossam M.A. Fahmy

Abstract:

Modular multiplication is the basic operation in most public key cryptosystems, such as RSA, DSA, ECC, and DH key exchange. Unfortunately, very large operands (in the order of 1024 or 2048 bits) must be used to provide sufficient security strength. The use of such big numbers dramatically slows down the whole cipher system, especially when running on embedded processors. So far, customized hardware accelerators, developed on FPGAs or ASICs, have been the best choice for accelerating modular multiplication in embedded environments. On the other hand, many algorithms have been developed to speed up such operations. Examples are the Montgomery modular multiplication and the interleaved modular multiplication algorithms. Combining customized hardware with an efficient algorithm is expected to provide a much faster cipher system. This paper introduces an enhanced architecture for computing the modular multiplication of two large numbers X and Y modulo a given modulus M. The proposed design is compared with three previous architectures based on carry-save adders and look-up tables. Look-up tables must be loaded with a set of pre-computed values. Our proposed architecture uses the same carry-save addition, but replaces both the look-up tables and the pre-computations with an enhanced version of sign detection techniques. The proposed architecture supports higher frequencies than the other architectures. It also has a better overall absolute time for a single operation.
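For reference, a plain-Python version of the interleaved (shift-and-add) modular multiplication that such hardware accelerates is sketched below; the hardware replaces the explicit comparisons and subtractions with carry-save addition and sign detection. The operand values are arbitrary test numbers.

```python
def interleaved_modmul(x: int, y: int, m: int) -> int:
    """Compute (x * y) mod m by scanning x from its most significant bit."""
    result = 0
    for bit in bin(x)[2:]:
        result <<= 1                  # shift: double the partial result
        if bit == "1":
            result += y               # conditionally add the multiplicand
        while result >= m:            # at most two subtractions are needed per step
            result -= m
    return result

x, y, m = 0x1234_5678_9ABC_DEF0, 0x0FED_CBA9_8765_4321, (1 << 61) - 1
assert interleaved_modmul(x, y, m) == (x * y) % m
print(hex(interleaved_modmul(x, y, m)))
```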

Keywords: Montgomery multiplication, modular multiplication, efficient architecture, FPGA, RSA

257 Effect of Atmospheric Turbulence on Hybrid FSO/RF Link Availability under Qatar Harsh Climate

Authors: Abir Touati, Syed Jawad Hussain, Farid Touati, Ammar Bouallegue

Abstract:

Although there has been growing interest in hybrid free-space optical/radio frequency (FSO/RF) communication systems, the current literature is limited to results obtained in moderate or cold environments. In this paper, using a soft-switching approach, we investigate the effect of weather inhomogeneities on the strength of turbulence, and hence on the channel refractive index, under Qatar's harsh environment, and their influence on the hybrid FSO/RF availability. In this approach, either the FSO link, the RF link, both simultaneously, or neither can be active. Based on the soft-switching approach and a finite state Markov chain (FSMC) process, we model the channel fading for the two links and derive a mathematical expression for the outage probability of the hybrid system. Then, we evaluate the behavior of the hybrid FSO/RF link under hazy and harsh weather. Results show that FSO/RF soft switching renders the system outage probability lower than that of each link individually. A soft-switching algorithm is being implemented on FPGAs using Raptor codes, interfaced to the two terminals of a 1 Gbps/100 Mbps FSO/RF hybrid system, the first to be implemented in the region. Experimental results are compared to the above simulation results.

Keywords: Atmospheric turbulence, haze, soft switching, Raptor codes, refractive index.

256 Network-Constrained AC Unit Commitment under Uncertainty Using a Bender’s Decomposition Approach

Authors: B. Janani, S. Thiruvenkadam

Abstract:

In this work, we evaluate the impact of considering a stochastic approach on day-ahead Unit Commitment. Comparisons between stochastic and deterministic Unit Commitment solutions are provided. The Unit Commitment model consists of minimizing the total operation costs while considering the units' technical constraints, such as ramping rates and minimum up and down times. Load shedding and wind power spilling are acceptable, but at inflated operational costs. The evaluation process consists of calculating the optimal unit commitment and verifying the fulfillment of the considered constraints. For the calculation of the optimal unit commitment, an algorithm based on Benders decomposition, namely on dual dynamic programming, was developed. Two approaches were considered for the construction of the stochastic solutions. Data related to wind power outputs from two different operational days are considered in the analysis. The stochastic and deterministic solutions are compared based on the actual measured wind power output on the operational day. Through a technique capable of finding representative wind power scenarios and their probabilities, the system can analyze the expected final operational cost in more detail.

Keywords: Benders’ decomposition, network constrained AC unit commitment, stochastic programming, wind power uncertainty.

255 Reduced Dynamic Time Warping for Handwriting Recognition Based on Multidimensional Time Series of a Novel Pen Device

Authors: Muzaffar Bashir, Jürgen Kempf

Abstract:

The purpose of this paper is to present a Dynamic Time Warping technique which significantly reduces the data processing time and memory size for multi-dimensional time series sampled by the biometric smart pen device BiSP. The acquisition device is a novel ballpoint pen equipped with a diversity of sensors for monitoring the kinematics and dynamics of handwriting movement. The DTW algorithm has been applied for time series analysis of five different sensor channels providing pressure, acceleration and tilt data of the pen generated during handwriting on a paper pad. However, the standard DTW has processing time and memory space problems which limit its practical use for online handwriting recognition. To address this problem, the DTW has been applied to the sum of the five sensor signals after an adequate down-sampling of the data. Preliminary results have shown that processing time and memory size could be significantly reduced without deterioration of performance in single character and word recognition. Furthermore, excellent recognition accuracy was achieved, which is mainly due to the reduced dynamic time warping (RDTW) technique and the novel pen device BiSP.
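A compact version of the reduction idea is sketched below: the sensor channels are summed into a single series, down-sampled, and then compared with a standard O(n·m) DTW distance. The signals are synthetic stand-ins for BiSP pen data, and the down-sampling factor is illustrative.

```python
import numpy as np

def reduce_series(channels, factor=4):
    summed = np.sum(channels, axis=0)            # combine the five sensor channels
    return summed[::factor]                      # simple down-sampling

def dtw_distance(a, b):
    n, m = len(a), len(b)
    d = np.full((n + 1, m + 1), np.inf)          # cumulative-cost matrix
    d[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i, j] = cost + min(d[i - 1, j], d[i, j - 1], d[i - 1, j - 1])
    return d[n, m]

rng = np.random.default_rng(2)
t = np.linspace(0, 1, 400)
sample = [np.sin(2 * np.pi * (3 + k) * t) + 0.05 * rng.standard_normal(400) for k in range(5)]
template = [np.sin(2 * np.pi * (3 + k) * t * 1.1) for k in range(5)]
print("reduced DTW distance:", dtw_distance(reduce_series(sample), reduce_series(template)))
```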

Keywords: Biometric character recognition, biometric person authentication, biometric smart pen BiSP, dynamic time warping DTW, online-handwriting recognition, multidimensional time series.

254 Fast Wavelet Image Denoising Based on Local Variance and Edge Analysis

Authors: Gaoyong Luo

Abstract:

The approach based on the wavelet transform has been widely used for image denoising due to its multi-resolution nature, its ability to produce high levels of noise reduction and the low level of distortion introduced. However, by removing noise, high frequency components belonging to edges are also removed, which leads to blurring of the signal features. This paper proposes a new method of image noise reduction based on local variance and edge analysis. The analysis is performed by dividing an image into 32 x 32 pixel blocks and transforming the data into the wavelet domain. A fast lifting wavelet spatial-frequency decomposition and reconstruction is developed with the advantages of being computationally efficient and minimizing boundary effects. The adaptive thresholding by local variance estimation and edge strength measurement can effectively reduce image noise while preserving the features of the original image corresponding to the boundaries of the objects. Experimental results demonstrate that the method performs well for images contaminated by natural and artificial noise, and is suitable to be adapted for different classes of images and types of noise. The proposed algorithm provides a potential solution with parallel computation for real-time or embedded system applications.
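A toy version of the block-wise idea is sketched below: a one-level Haar lifting transform per block, followed by soft thresholding whose strength is scaled by the local detail variance, so that blocks with strong edges are thresholded less. The threshold rule and block contents are illustrative stand-ins for the paper's edge-strength measure; the inverse lifting step needed for reconstruction is omitted.

```python
import numpy as np

def haar_lifting_2d(block):
    even, odd = block[:, 0::2], block[:, 1::2]
    detail_h = odd - even                        # predict step (horizontal)
    approx_h = even + 0.5 * detail_h             # update step
    even, odd = approx_h[0::2, :], approx_h[1::2, :]
    detail_v = odd - even                        # predict step (vertical)
    approx = even + 0.5 * detail_v               # update step
    return approx, detail_h, detail_v

def threshold_block(block, noise_sigma=10.0):
    approx, dh, dv = haar_lifting_2d(block)
    local_var = max(dh.var() + dv.var() - noise_sigma ** 2, 1e-6)
    t = noise_sigma ** 2 / np.sqrt(local_var)    # smaller threshold where edges dominate
    dh = np.sign(dh) * np.maximum(np.abs(dh) - t, 0.0)     # soft thresholding
    dv = np.sign(dv) * np.maximum(np.abs(dv) - t, 0.0)
    return approx, dh, dv

rng = np.random.default_rng(3)
edge = 128 + 40 * np.sign(np.linspace(-1, 1, 32))[None, :]       # vertical edge pattern
noisy = edge + 10 * rng.standard_normal((32, 32))
print([c.shape for c in threshold_block(noisy)])
```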

Keywords: Edge strength, Fast lifting wavelet, Image denoising, Local variance.

253 Hydrogen and Diesel Combustion on a Single Cylinder Four Stroke Diesel Engine in Dual Fuel mode with Varying Injection Strategies

Authors: Probir Kumar Bose, Rahul Banerjee, Madhujit Deb

Abstract:

The present energy situation and concerns about global warming have stimulated active research interest in non-petroleum, carbon-free compounds and non-polluting fuels, particularly for the transportation, power generation, and agricultural sectors. Environmental concerns and the limited amount of petroleum fuels have caused interest in the development of alternative fuels for internal combustion (IC) engines. The petroleum crude reserves, however, are declining, and the consumption of transport fuels, particularly in developing countries, is increasing at high rates. A severe shortage of liquid fuels derived from petroleum may be faced in the second half of this century. Recently, more and more stringent environmental regulations enacted in the USA and Europe have led to research and development activities on clean alternative fuels. Among the gaseous fuels, hydrogen is considered to be one of the clean alternative fuels. Hydrogen is an interesting candidate for future internal combustion engine based power trains. In this experimental investigation, performance and combustion analyses were carried out on a direct injection (DI) diesel engine using hydrogen with diesel following the TMI (Time Manifold Injection) technique at injection timings of 10, 45 and 80 degrees ATDC using an electronic control unit (ECU), with the injection durations controlled. Further, the tests were carried out at a constant speed of 1500 rpm at different load conditions, and it was observed that the brake thermal efficiency increases with load, with a maximum gain of 15% at full load for all hydrogen injection strategies. It was also observed that, with the increase in hydrogen energy share, the BSEC started reducing, and it reduced by a maximum of 9% compared to baseline diesel at 10 degrees ATDC injection during maximum injection, proving the exceptional combustion properties of hydrogen.

Keywords: Hydrogen, performance, combustion, alternative fuels.

252 A Critical Study of Neural Networks Applied to Ion-Exchange Process

Authors: John Kabuba, Antoine Mulaba-Bafubiandi, Kim Battle

Abstract:

This paper presents a critical study of the application of Neural Networks to the ion-exchange process. Ion-exchange is a complex non-linear process involving many factors that influence the ion uptake mechanisms from the pregnant solution. The following step includes the elution. Published data present empirical isotherm equations with definite shortcomings, resulting in unreliable predictions. Although the Neural Network simulation technique has a number of disadvantages, including its "black box" nature and a limited ability to explicitly identify possible causal relationships, it has the advantage of implicitly handling complex nonlinear relationships between dependent and independent variables. In the present paper, a Neural Network model based on the Levenberg-Marquardt back-propagation algorithm was developed using a three-layer approach with a tangent sigmoid transfer function (tansig) at the hidden layer with 11 neurons and a linear transfer function (purelin) at the output layer. The above-mentioned approach was used to test the effectiveness in simulating ion-exchange processes. The modeling results showed excellent agreement between the experimental data and the predicted values of copper ions removed from aqueous solutions.

Keywords: Copper, ion-exchange process, neural networks, simulation

251 Development of Numerical Model to Compute Water Hammer Transients in Pipe Flow

Authors: Jae-Young Lee, Woo-Young Jung, Myeong-Jun Nam

Abstract:

Water hammer is a hydraulic transient problem which is commonly encountered in the penstocks of hydropower plants. A numerical model was developed to estimate the transient behavior of pressure waves in pipe systems. A computational algorithm is proposed to model the water hammer phenomenon in a pipe system with a pump shutdown at midstream and sudden valve closure downstream. To predict the pressure head and flow velocity as functions of time resulting from rapid valve closure and pump shutdown, the two boundary conditions at the ends, accounting for pump operation and valve control, can be implemented as specified equations for the pressure head and flow velocity based on the characteristics method. It was shown that the effects of transient flow make it possible to determine the need for protection devices, such as surge tanks, surge relief valves, or air valves, at various points in the system against overpressure and low pressure. The proposed transient model produced reasonably good performance for pipeline systems. The proposed numerical model can be used as an efficient tool for the safety assessment of hydropower plants with respect to water hammer.
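A minimal method-of-characteristics (MOC) update for a single frictionless pipe is sketched below: interior nodes combine the C+ and C- characteristics, the upstream boundary holds a reservoir head, and the downstream valve closes instantly. The boundary scenario and all parameters are illustrative and simpler than the paper's pump-plus-valve configuration.

```python
import numpy as np

a, g, L, D = 1000.0, 9.81, 600.0, 0.5        # wave speed (m/s), gravity, pipe length, diameter
A = np.pi * D ** 2 / 4.0
B = a / (g * A)                              # characteristic impedance of the pipe
n = 21                                       # computational sections along the pipe
dt = (L / (n - 1)) / a                       # Courant condition: dx = a * dt

H = np.full(n, 100.0)                        # initial head (m)
Q = np.full(n, 0.2)                          # initial steady discharge (m^3/s)
peak_head = H[-1]

for _ in range(200):                         # march 200 time steps (about 6 s)
    Hn, Qn = H.copy(), Q.copy()
    for i in range(1, n - 1):                # interior nodes: combine C+ and C- characteristics
        cp = H[i - 1] + B * Q[i - 1]
        cm = H[i + 1] - B * Q[i + 1]
        Hn[i] = 0.5 * (cp + cm)
        Qn[i] = (cp - Hn[i]) / B
    Hn[0] = 100.0                            # upstream reservoir: fixed head
    Qn[0] = (Hn[0] - (H[1] - B * Q[1])) / B  # from the C- characteristic
    Qn[-1] = 0.0                             # downstream valve closed instantaneously
    Hn[-1] = H[-2] + B * Q[-2]               # from the C+ characteristic
    H, Q = Hn, Qn
    peak_head = max(peak_head, H[-1])

print("peak head at the valve (m):", round(peak_head, 1))
```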

Keywords: Water hammer, hydraulic transient, pipe systems, characteristics method.

250 Gaits Stability Analysis for a Pneumatic Quadruped Robot Using Reinforcement Learning

Authors: Soofiyan Atar, Adil Shaikh, Sahil Rajpurkar, Pragnesh Bhalala, Aniket Desai, Irfan Siddavatam

Abstract:

Deep reinforcement learning (deep RL) algorithms harness the power of complex controllers by automating their design, mapping sensory inputs to low-level actions. Deep RL eliminates the need to model complex robot dynamics, with minimal engineering. However, deep RL carries high risk when implemented directly in real-world scenarios and is highly sensitive to hyperparameters. Tuning hyperparameters on a pneumatic quadruped robot becomes very expensive through trial-and-error learning. This paper presents an automated learning controller for a pneumatic quadruped robot using sample-efficient deep Q-learning, enabling minimal tuning and very few trials to learn the neural network. Long training hours may degrade the pneumatic cylinders due to jerky actions originating from stochastic weights. We applied this method to the pneumatic quadruped robot, which resulted in a hopping gait. In our process, we eliminated the use of a simulator and acquired a stable gait. This approach evolves so that the resultant gait becomes more robust to stochastic changes in the environment. We further show that our algorithm performed very well compared to a programmed gait based on robot dynamics.
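The sketch below shows the tabular Q-learning update that sample-efficient deep Q-learning generalizes with a neural network. The discrete states, actions, environment and reward are toy stand-ins for the quadruped's sensed gait phase and valve commands, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(5)
n_states, n_actions = 8, 4
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.95, 0.1

def step(state, action):
    # toy environment: the "right" action for a state advances the gait phase
    reward = 1.0 if action == state % n_actions else -0.1
    next_state = (state + 1) % n_states
    return next_state, reward

state = 0
for _ in range(5000):
    if rng.random() < epsilon:                        # epsilon-greedy exploration
        action = int(rng.integers(n_actions))
    else:
        action = int(np.argmax(Q[state]))
    next_state, reward = step(state, action)
    td_target = reward + gamma * np.max(Q[next_state])
    Q[state, action] += alpha * (td_target - Q[state, action])    # Q-learning update
    state = next_state

print("greedy action per state:", np.argmax(Q, axis=1))
```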

Keywords: model-based reinforcement learning, gait stability, supervised learning, pneumatic quadruped

249 Image Compression with Back-Propagation Neural Network using Cumulative Distribution Function

Authors: S. Anna Durai, E. Anna Saro

Abstract:

Image compression using Artificial Neural Networks is a topic where research is being carried out in various directions towards achieving a generalized and economical network. Feedforward networks using the back-propagation algorithm, adopting the method of steepest descent for error minimization, are popular, widely adopted and directly applied to image compression. Various research works are directed towards achieving quick convergence of the network without loss of quality of the restored image. In general, the images used for compression are of different types, such as dark images, high-intensity images, etc. When these images are compressed using a back-propagation network, the network takes a long time to converge. The reason is that the given image may contain a number of distinct gray levels with only a narrow difference from their neighborhood pixels. If the gray levels of the pixels in an image and their neighbors are mapped in such a way that the difference in the gray levels of the neighbors with respect to the pixel is minimal, then the compression ratio as well as the convergence of the network can be improved. To achieve this, a cumulative distribution function is estimated for the image and used to map the image pixels. When the mapped image pixels are used, the back-propagation neural network yields a high compression ratio and converges quickly.
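The pre-processing step can be sketched as a standard CDF-based gray-level remapping (essentially histogram equalization); the exact mapping used in the paper may differ, and the dark test image below is synthetic.

```python
import numpy as np

def cdf_remap(image):
    """Remap 8-bit gray levels through the image's empirical CDF."""
    hist = np.bincount(image.ravel(), minlength=256)
    cdf = np.cumsum(hist).astype(np.float64)
    cdf /= cdf[-1]                                   # normalize to [0, 1]
    mapping = np.round(255 * cdf).astype(np.uint8)   # old gray level -> new gray level
    return mapping[image]

rng = np.random.default_rng(4)
dark_image = rng.integers(0, 60, size=(64, 64), dtype=np.uint8)   # low-intensity image
remapped = cdf_remap(dark_image)
print("original range:", dark_image.min(), dark_image.max(),
      "| remapped range:", remapped.min(), remapped.max())
```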

Keywords: Back-propagation Neural Network, Cumulative Distribution Function, Correlation, Convergence.

248 Selection of Designs in Ordinal Regression Models under Linear Predictor Misspecification

Authors: Ishapathik Das

Abstract:

The purpose of this article is to find a method of comparing designs for ordinal regression models using quantile dispersion graphs in the presence of linear predictor misspecification. The true relationship between the response variable and the corresponding control variables is usually unknown. The experimenter assumes a certain form of the linear predictor of the ordinal regression model. The assumed form of the linear predictor may not always be correct. Thus, the maximum likelihood estimates (MLE) of the unknown parameters of the model may be biased due to misspecification of the linear predictor. In this article, the uncertainty in the linear predictor is represented by an unknown function. An algorithm is provided to estimate the unknown function at the design points where observations are available. The unknown function is then estimated at all points in the design region using multivariate parametric kriging. The comparison of the designs is based on a scalar-valued function of the mean squared error of prediction (MSEP) matrix, which incorporates both the variance and the bias of the prediction caused by the misspecification in the linear predictor. The designs are compared using the quantile dispersion graphs approach. The graphs also visually depict the robustness of the designs to changes in the parameter values. Numerical examples are presented to illustrate the proposed methodology.

Keywords: Model misspecification, multivariate kriging, multivariate logistic link, ordinal response models, quantile dispersion graphs.

247 Revised PLWAP Tree with Non-frequent Items for Mining Sequential Pattern

Authors: R. Vishnu Priya, A. Vadivel

Abstract:

Sequential pattern mining is a challenging task in the data mining area with many applications. One of those applications is mining patterns from weblogs. In recent times, weblogs are highly dynamic and some of their contents may become obsolete over time. In addition, users may frequently change the threshold value during the data mining process until acquiring the required output or mining interesting rules. Some of the recently proposed algorithms for mining weblogs build the tree with two scans and always consume a large amount of time and space. In this paper, we build a Revised PLWAP with Non-frequent Items (RePLNI-tree) with a single scan for all items. While mining sequential patterns, the links related to the non-frequent items are not considered. Hence, it is not required to delete or maintain the information of nodes while revising the tree for mining updated transactions. The algorithm supports both incremental and interactive mining. It is not required to re-compute the patterns each time the weblog is updated or the minimum support is changed. The performance of the proposed tree is better even when the size of the incremental database is more than 50% of the existing one. For evaluation purposes, we have used benchmark weblog datasets and found that the performance of the proposed tree is encouraging compared to some of the recently proposed approaches.

Keywords: Sequential pattern mining, weblog, frequent and non-frequent items, incremental and interactive mining.

246 Performance Based Seismic Retrofit of Masonry Infilled Reinforced Concrete Frames Using Passive Energy Dissipation Devices

Authors: Alok Madan, Arshad K. Hashmi

Abstract:

The paper presents a plastic analysis procedure based on the energy balance concept for performance-based seismic retrofit of multi-story, multi-bay masonry-infilled reinforced concrete (R/C) frames with a ‘soft’ ground story using passive energy dissipation (PED) devices, with the objective of achieving a target performance level of the retrofitted R/C frame for a given seismic hazard level at the building site. The proposed energy-based plastic analysis procedure was employed to develop performance-based design (PBD) formulations for PED devices for a simulated application in the seismic retrofit of existing frame structures designed in compliance with the prevalent standard codes of practice. The PBD formulations developed for PED devices were implemented for the simulated seismic retrofit of a representative code-compliant masonry-infilled R/C frame with a ‘soft’ ground story using friction dampers as the PED device. Non-linear dynamic analyses of the retrofitted masonry-infilled R/C frames are performed to investigate the efficacy and accuracy of the proposed energy-based plastic analysis procedure in achieving the target performance level under design-level earthquakes. Results of the non-linear dynamic analyses demonstrate that the maximum inter-story drifts in the masonry-infilled R/C frames with a ‘soft’ ground story retrofitted with friction dampers designed using the proposed PBD formulations are controlled within the target drifts under near-field as well as far-field earthquakes.

Keywords: Energy Methods, Masonry Infilled Frame, Near-field Earthquakes, Seismic Protection, Supplemental damping devices.

245 An Approach to Polynomial Curve Comparison in Geometric Object Database

Authors: Chanon Aphirukmatakun, Natasha Dejdumrong

Abstract:

In image processing and visualization, comparing two bitmapped images requires matching them pixel by pixel. Consequently, it takes a lot of computational time, while the comparison of two vector-based images is significantly faster. Sometimes raster graphics images can be approximately converted into vector-based images by various techniques. After conversion, the problem of comparing two raster graphics images can be reduced to the problem of comparing vector graphics images. Hence, the problem of comparing pixel by pixel can be reduced to the problem of polynomial comparison. In computer aided geometric design (CAGD), vector graphics images are compositions of curves and surfaces. Curves are defined by a sequence of control points and their polynomials. In this paper, the control points are used to compare curves. Curves that have been relocated or rotated are treated as equivalent, while curves that differ only in scale are considered similar. This paper proposes an algorithm for comparing polynomial curves by using the control points for equivalence and similarity. In addition, a geometric object-oriented database used to keep the curve information has also been defined in XML format for further use in curve comparisons.
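The sketch below illustrates one way to test the control-point-based notions of equivalence and similarity: two control polygons are aligned by removing translation and fitting a rotation with an SVD-based (Kabsch-style) step, and a uniform scale factor is removed before the similarity check. This is an illustration under these assumptions, not the paper's exact algorithm; reflections are ignored for brevity.

```python
import numpy as np

def align_error(p, q):
    """RMS distance between control polygons after optimal translation and rotation."""
    p0, q0 = p - p.mean(axis=0), q - q.mean(axis=0)      # remove translation
    u, _, vt = np.linalg.svd(q0.T @ p0)
    r = u @ vt                                           # best-fit rotation
    return np.sqrt(np.mean(np.sum((q0 - p0 @ r.T) ** 2, axis=1)))

p = np.array([[0.0, 0.0], [1.0, 2.0], [3.0, 2.5], [4.0, 0.0]])    # control points of a curve
theta = np.deg2rad(30)
rot = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
q = p @ rot.T + np.array([5.0, -1.0])                    # same curve, relocated and rotated
s = 2.0 * p                                              # same shape, different scale

print("equivalent (error ~ 0):", align_error(p, q))
scale = np.linalg.norm(s - s.mean(axis=0)) / np.linalg.norm(p - p.mean(axis=0))
print("similar, scale factor:", scale, "error after rescaling:", align_error(p, s / scale))
```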

Keywords: Bezier curve, Said-Ball curve, Wang-Ball curve, DP curve, CAGD, comparison, geometric object database.

244 Impact of Fluid Flow Patterns on Metastable Zone Width of Borax in Dual Radial Impeller Crystallizer at Different Impeller Spacings

Authors: A. Čelan, M. Ćosić, D. Rušić, N. Kuzmanić

Abstract:

Conducting crystallization in an agitated vessel requires a proper selection of mixing parameters that results in the production of crystals with specific properties. In dual impeller systems, which are characterized by more complex hydrodynamics due to possible fluid flow interactions, revealing a clear link between mixing parameters and crystallization kinetics is still an open issue. The aim of this work is to establish this connection by investigating how fluid flow patterns, generated by two impellers mounted on the same shaft, are reflected in the metastable zone width of borax decahydrate, one of the most important parameters of the crystallization process. The investigation was carried out in a 15 dm³ bench-scale batch cooling crystallizer with an aspect ratio (H/T) equal to 1.3. For this reason, two radial straight blade turbines (4-SBT) were used for agitation. Experiments were conducted at different impeller spacings at the state of complete suspension. During the process of unseeded batch cooling crystallization, the solution temperature and supersaturation were continuously monitored, which enabled determination of the metastable zone width. The hydrodynamic conditions in the vessel achieved at the different impeller spacings investigated were analyzed in detail. This was done firstly by measuring the mixing time required to attain the desired level of homogeneity. Secondly, the fluid flow patterns generated in the described dual impeller system were both photographed and simulated with the VisiMix Turbulent software, and a comparison of these two visualization methods was performed. The experimentally obtained results showed that the metastable zone width is definitely affected by the hydrodynamics in the crystallizer. This means that this crystallization parameter can be controlled not only by adjusting the saturation temperature or cooling rate, as is usually done, but also by choosing a suitable impeller spacing that results in the formation of crystals with the desired size distribution.

Keywords: Dual impeller crystallizer, fluid flow pattern, metastable zone width, mixing time, radial impeller.

243 An Advanced Nelder Mead Simplex Method for Clustering of Gene Expression Data

Authors: M. Pandi, K. Premalatha

Abstract:

DNA microarray technology concurrently monitors the expression levels of thousands of genes during significant biological processes and across related samples. A better understanding of functional genomics is obtained by extracting the patterns hidden in gene expression data. This is handled by clustering, which reveals natural structures and identifies interesting patterns in the underlying data. In the proposed work, clustering of gene expression data is done through an Advanced Nelder Mead (ANM) algorithm. The Nelder Mead (NM) method is designed for optimization. In the Nelder Mead method, the vertices of a triangle (simplex) are considered as the candidate solutions, and many operations are performed on this triangle to obtain a better result. In the proposed work, the reflection and expansion operations are eliminated and a new operation called spread-out is introduced. The spread-out operation increases the global search area and thus provides a better result in optimization. The spread-out operation gives three points, and the best among these three points is used to replace the worst point. The experimental results are analyzed with optimization benchmark test functions and gene expression benchmark datasets. The results show that ANM outperforms NM on both benchmarks.
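Below is a hedged sketch of one plausible reading of the ANM idea: the worst simplex vertex is replaced by the best of three candidate points generated through the centroid at increasing step sizes (our interpretation of "spread-out"), with a shrink step as the fallback. This is not the authors' exact operator, and a standard benchmark function is used instead of gene-expression data.

```python
import numpy as np

def sphere(x):                                # benchmark objective to minimize
    return float(np.sum(x ** 2))

def anm_like_minimize(f, x0, iters=200, steps=(1.0, 2.0, 3.0)):
    dim = len(x0)
    simplex = np.vstack([x0] + [x0 + np.eye(dim)[i] for i in range(dim)])
    for _ in range(iters):
        order = np.argsort([f(v) for v in simplex])
        simplex = simplex[order]                           # best vertex first, worst last
        centroid = simplex[:-1].mean(axis=0)               # centroid excluding the worst vertex
        # "spread-out": three candidates through the centroid at growing distances
        candidates = [centroid + s * (centroid - simplex[-1]) for s in steps]
        best = min(candidates, key=f)
        if f(best) < f(simplex[-1]):
            simplex[-1] = best                             # replace the worst vertex
        else:                                              # fallback: shrink towards the best vertex
            simplex[1:] = simplex[0] + 0.5 * (simplex[1:] - simplex[0])
    return min(simplex, key=f)

print("approximate minimizer:", anm_like_minimize(sphere, np.array([3.0, -2.5, 4.0])))
```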

Keywords: Spread out, simplex, multi-minima, fitness function, optimization, search area, monocyte, solution, genomes.
