Search results for: Particle Swarm Optimization algorithm
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5068

268 Multi-Layer Multi-Feature Background Subtraction Using Codebook Model Framework

Authors: Yun-Tao Zhang, Jong-Yeop Bae, Whoi-Yul Kim

Abstract:

Background modeling and subtraction in video analysis has been widely used as an effective method for moving object detection in many computer vision applications. Recently, a large number of approaches have been developed to tackle different types of challenges in this field. However, dynamic backgrounds and illumination variations are the most frequently occurring problems in practical situations. This paper presents a two-layer model based on the codebook algorithm incorporating a local binary pattern (LBP) texture measure, targeted at handling dynamic background and illumination variation problems. More specifically, the first layer is a block-based codebook combining an LBP histogram with the mean value of each RGB color channel. Because of the invariance of the LBP features with respect to monotonic gray-scale changes, this layer can produce block-wise detection results with considerable tolerance to illumination variations. A pixel-based codebook is then employed to refine the output of the first layer and to further eliminate false positives. As a result, the proposed approach greatly improves accuracy under dynamic background and illumination changes. Experimental results on several popular background subtraction datasets demonstrate very competitive performance compared to previous models.
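
As a rough illustration of the texture measure used in the first layer, the sketch below computes the basic 8-neighbor LBP code for every interior pixel of a grayscale frame and builds a per-block LBP histogram with NumPy; the block size and image values are placeholder assumptions, not details taken from the paper.

```python
import numpy as np

def lbp_codes(gray):
    """Basic 8-neighbour LBP: each neighbour >= centre contributes one bit."""
    c = gray[1:-1, 1:-1]
    # Offsets of the 8 neighbours, ordered so each gets a fixed bit position.
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        neigh = gray[1 + dy:gray.shape[0] - 1 + dy,
                     1 + dx:gray.shape[1] - 1 + dx]
        codes |= ((neigh >= c).astype(np.uint8) << bit)
    return codes

def block_lbp_histograms(gray, block=16):
    """Split the LBP map into non-overlapping blocks and histogram each one."""
    codes = lbp_codes(gray.astype(np.int16))
    h, w = codes.shape
    hists = {}
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            patch = codes[y:y + block, x:x + block]
            hists[(y, x)] = np.bincount(patch.ravel(), minlength=256)
    return hists

# Toy frame: random values stand in for a real video frame.
frame = np.random.randint(0, 256, (64, 64))
print(len(block_lbp_histograms(frame)))  # number of blocks described
```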

Keywords: Background subtraction, codebook model, local binary pattern, dynamic background, illumination changes.

Downloads: 1958
267 Speaker Identification using Neural Networks

Authors: R. V. Pawar, P. P. Kajave, S. N. Mali

Abstract:

The speech signal conveys information about the identity of the speaker. The area of speaker identification is concerned with extracting the identity of the person speaking an utterance. As speech interaction with computers becomes more pervasive in activities such as telephone use, financial transactions and information retrieval from speech databases, the utility of automatically identifying a speaker based solely on vocal characteristics increases. This paper focuses on text-dependent speaker identification, which deals with detecting a particular speaker from a known population. The system prompts the user to provide a speech utterance. It identifies the user by comparing the codebook of the utterance with those stored in the database and lists the most likely speakers who could have produced that utterance. The speech signal is recorded for N speakers and the features are then extracted. Feature extraction is done by means of LPC coefficients, the AMDF, and the DFT. The neural network is trained by applying these features as input parameters. The features are stored in templates for further comparison. The features of the speaker to be identified are extracted and compared with the stored templates using the backpropagation algorithm. Here, the trained network corresponds to the output, and the input is the extracted features of the speaker to be identified. The network adjusts its weights and the best match is found to identify the speaker. The number of epochs required to reach the target decides the network performance.
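
For reference, the average magnitude difference function (AMDF) mentioned above can be computed directly from a windowed signal; the short NumPy sketch below is a generic illustration with a synthetic sine wave, not the paper's actual feature pipeline.

```python
import numpy as np

def amdf(frame, max_lag):
    """Average magnitude difference function D(k) = mean(|x[n] - x[n+k]|)."""
    n = len(frame)
    d = np.empty(max_lag)
    for k in range(max_lag):
        d[k] = np.mean(np.abs(frame[: n - k] - frame[k:]))
    return d

# Synthetic voiced-like frame: 200 Hz tone sampled at 8 kHz.
fs = 8000
t = np.arange(400) / fs
frame = np.sin(2 * np.pi * 200 * t)

d = amdf(frame, max_lag=200)
# The deep valley of the AMDF (ignoring small lags) sits near the pitch period.
pitch_lag = np.argmin(d[20:]) + 20
print("estimated pitch period in samples:", pitch_lag)  # ~40 samples = 200 Hz
```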

Keywords: Average Mean Distance Function, Backpropagation, Linear Predictive Coding, Multilayered Perceptron.

Downloads: 1890
266 A Numerical Study on Semi-Active Control of a Bridge Deck under Seismic Excitation

Authors: A. Yanik, U. Aldemir

Abstract:

This study investigates the benefits of implementing semi-active devices in relation to passive viscous damping in the context of seismically isolated bridge structures. Since the intrinsically nonlinear nature of semi-active devices prevents the direct evaluation of Laplace transforms, frequency response functions are compiled from the computed time history response to sinusoidal and pulse-like seismic excitation. A simple semi-active control policy is compared with passive linear viscous damping and an optimal non-causal semi-active control strategy. The control strategy requires optimization; Euler-Lagrange equations are solved numerically during this procedure. The optimal closed-loop performance is evaluated for an idealized controllable dash-pot. A simplified single-degree-of-freedom model of an isolated bridge is used as a numerical example. Two bridge cases are investigated: the bridge deck without the isolation bearing and the bridge deck with the isolation bearing. To compare the performances of the passive and semi-active control cases, frequency-dependent acceleration, velocity and displacement response transmissibility ratios Ta(w), Tv(w), and Td(w) are defined. To fully investigate the behavior of the structure subjected to the sinusoidal and pulse-type excitations, different damping levels are considered. Numerical results showed that, under external excitation, the bridge deck with semi-active control exhibits better structural performance than the passive bridge deck case.
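
To make the transmissibility ratios concrete, the sketch below evaluates the classical displacement transmissibility of a linear single-degree-of-freedom isolator with passive viscous damping over a range of frequency ratios and damping levels; it covers only the passive linear baseline, not the semi-active policy studied in the paper, and the damping values are illustrative.

```python
import numpy as np

def displacement_transmissibility(r, zeta):
    """|X/Xg| for a linear SDOF isolator under harmonic base excitation.

    r    : frequency ratio w / wn
    zeta : viscous damping ratio
    """
    num = 1.0 + (2.0 * zeta * r) ** 2
    den = (1.0 - r ** 2) ** 2 + (2.0 * zeta * r) ** 2
    return np.sqrt(num / den)

r = np.linspace(0.0, 3.0, 301)          # frequency ratios
for zeta in (0.05, 0.20, 0.50):         # illustrative damping levels
    Td = displacement_transmissibility(r, zeta)
    print(f"zeta={zeta:.2f}: peak Td = {Td.max():.2f} at r = {r[np.argmax(Td)]:.2f}")
```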

Keywords: Bridge structures, passive control, seismic, semi-active control, viscous damping.

Downloads: 759
265 Flexible Wormhole-Switched Network-on-chip with Two-Level Priority Data Delivery Service

Authors: Faizal A. Samman, Thomas Hollstein, Manfred Glesner

Abstract:

A synchronous network-on-chip using wormhole packet switching and supporting a guaranteed-completion best-effort service with low-priority (LP) and high-priority (HP) wormhole packet delivery is presented in this paper. Both the proposed LP and HP message services deliver a good quality of service in terms of lossless packet completion and in-order message data delivery. However, the LP message service does not guarantee a minimal completion bound. The HP packets will use 100% of the bandwidth of their reserved links if they are injected from the source node at the maximum injection rate. Hence, the service is suitable for small messages (less than a hundred bytes); otherwise, other HP and LP messages that also require the links will experience relatively high latency depending on the size of the HP message. The LP packets are routed using a minimal adaptive routing algorithm, while the HP packets are routed using a non-minimal adaptive routing algorithm. Therefore, an additional 3-bit field identifying the packet type is introduced in the packet headers to classify and determine the type of service committed to the packet. Our NoC prototypes have also been synthesized using a 180-nm CMOS standard-cell technology to evaluate the cost of implementing the combination of both services.
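
As an illustration of how such a packet-type field might be packed into a header word, the snippet below encodes and decodes a 3-bit type field alongside hypothetical destination coordinates; the exact header layout is an assumption for demonstration, not the format defined in the paper.

```python
# Hypothetical 16-bit header layout: [type:3][dest_x:5][dest_y:5][vc:3]
def encode_header(ptype, dest_x, dest_y, vc):
    assert 0 <= ptype < 8 and 0 <= dest_x < 32 and 0 <= dest_y < 32 and 0 <= vc < 8
    return (ptype << 13) | (dest_x << 8) | (dest_y << 3) | vc

def decode_header(word):
    return {
        "type": (word >> 13) & 0x7,
        "dest_x": (word >> 8) & 0x1F,
        "dest_y": (word >> 3) & 0x1F,
        "vc": word & 0x7,
    }

hdr = encode_header(ptype=0b101, dest_x=7, dest_y=12, vc=2)
print(hex(hdr), decode_header(hdr))
```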

Keywords: Network-on-Chip, Parallel Pipeline Router Architecture, Wormhole Switching, Two-Level Priority Service.

Downloads: 1763
264 A Survey of Field Programmable Gate Array-Based Convolutional Neural Network Accelerators

Authors: Wei Zhang

Abstract:

With the rapid development of deep learning, neural networks and deep learning algorithms play a significant role in various practical applications. Owing to their high accuracy and good performance, Convolutional Neural Networks (CNNs) in particular have become a research hotspot in the past few years. However, the size of the networks is becoming increasingly large to meet the demands of practical applications, which poses a significant challenge for constructing high-performance implementations of deep learning neural networks. Meanwhile, many of these application scenarios also place strict requirements on the performance and power consumption of the hardware devices. Therefore, it is particularly critical to choose a suitable computing platform for hardware acceleration of CNNs. This article surveys recent advances in Field Programmable Gate Array (FPGA)-based acceleration of CNNs. Various designs and implementations of FPGA-based accelerators under different devices and network models are reviewed and compared against Graphics Processing Unit (GPU), Application Specific Integrated Circuit (ASIC) and Digital Signal Processor (DSP) implementations to present our own critical analysis and comments. Finally, we discuss different perspectives on these acceleration and optimization methods on FPGA platforms to further explore the opportunities and challenges for future research, and we give an outlook on the future development of FPGA-based accelerators.

Keywords: Deep learning, field programmable gate array, FPGA, hardware acceleration, convolutional neural networks, CNN.

Downloads: 885
263 Evaluating Machine Learning Techniques for Activity Classification in Smart Home Environments

Authors: Talal Alshammari, Nasser Alshammari, Mohamed Sedky, Chris Howard

Abstract:

With the widespread adoption of Internet-connected devices, and with the prevalence of Internet of Things (IoT) applications, there is an increased interest in machine learning techniques that can provide useful and interesting services in the smart home domain. The areas that machine learning techniques can help advance are varied and ever-evolving. Classifying smart home inhabitants' Activities of Daily Living (ADLs) is one prominent example. The ability of a machine learning technique to find meaningful spatio-temporal relations in high-dimensional data is an important requirement as well. This paper presents a comparative evaluation of state-of-the-art machine learning techniques for classifying ADLs in the smart home domain. Forty-two synthetic datasets and two real-world datasets with multiple inhabitants are used to evaluate and compare the performance of the identified machine learning techniques: AdaBoost, the Cortical Learning Algorithm (CLA), Decision Trees, the Hidden Markov Model (HMM), the Multi-layer Perceptron (MLP), the Structured Perceptron and Support Vector Machines (SVM). Our results show significant performance differences between the evaluated techniques. Overall, neural-network-based techniques show superiority over the other tested techniques.
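
A minimal scikit-learn harness in the style of such a comparison is sketched below; it swaps in a synthetic dataset from make_classification as a stand-in for the ADL data and covers only the classifiers available in scikit-learn (AdaBoost, decision tree, MLP and SVM), so the dataset, features and scores are purely illustrative.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Stand-in for sensor-derived ADL feature vectors and activity labels.
X, y = make_classification(n_samples=1000, n_features=20, n_classes=5,
                           n_informative=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=0)

classifiers = {
    "AdaBoost": AdaBoostClassifier(random_state=0),
    "Decision Tree": DecisionTreeClassifier(random_state=0),
    "MLP": MLPClassifier(max_iter=1000, random_state=0),
    "SVM": SVC(kernel="rbf", random_state=0),
}

for name, clf in classifiers.items():
    clf.fit(X_train, y_train)
    acc = accuracy_score(y_test, clf.predict(X_test))
    print(f"{name:15s} accuracy: {acc:.3f}")
```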

Keywords: Activities of daily living, classification, internet of things, machine learning, smart home.

Downloads: 1763
262 Numerical Studies on Thrust Vectoring Using Shock-Induced Self Impinging Secondary Jets

Authors: S. Vignesh, N. Vishnu, S. Vigneshwaran, M. Vishnu Anand, Dinesh Kumar Babu, V. R. Sanal Kumar

Abstract:

Numerical studies have been carried out using a validated two-dimensional standard k-omega turbulence model for the design optimization of a thrust vector control system using a shock-induced self-impinging supersonic secondary double jet. Parametric analytical studies have been carried out at different secondary injection locations to identify the most unsymmetrical distribution of the main gas flow due to shock waves, which produces the desired side force more effectively for vectoring. The results of the parametric studies reveal that the shock-induced self-impinging supersonic secondary double jet is more efficient at certain locations in the divergent region of a CD nozzle than a supersonic single jet with the same mass flow rate. We observed that the best axial location of the self-impinging supersonic secondary double jet nozzle with a given jet interaction angle, built into a CD nozzle with an area ratio of 1.797, is 0.991 times the primary nozzle throat diameter from the throat location. We also observed that flexible steering is possible after invoking an ON/OFF facility for the secondary nozzles to meet onboard mission requirements. Through our case studies we concluded that the supersonic self-impinging secondary double jet at a predesigned jet interaction angle and location can provide more flexible steering options, with 8.81% higher thrust vectoring efficiency than the conventional supersonic single secondary jet, without compromising the payload capability of any supersonic aerospace vehicle.

Keywords: Fluidic thrust vectoring, rocket steering, self-impinging secondary supersonic jet, TVC in aerospace vehicles.

Downloads: 2675
261 River Stage-Discharge Forecasting Based on Multiple-Gauge Strategy Using EEMD-DWT-LSSVM Approach

Authors: Farhad Alizadeh, Alireza Faregh Gharamaleki, Mojtaba Jalilzadeh, Houshang Gholami, Ali Akhoundzadeh

Abstract:

This study presents a hybrid pre-processing approach along with a conceptual model to enhance the accuracy of river discharge prediction. To achieve this goal, the Ensemble Empirical Mode Decomposition algorithm (EEMD), the Discrete Wavelet Transform (DWT) and Mutual Information (MI) were employed as a hybrid pre-processing approach coupled with a Least Squares Support Vector Machine (LSSVM). A conceptual strategy, namely a multi-station model, was developed to forecast the Souris River discharge more accurately. The strategy used herein is capable of covering the uncertainties and complexities of river discharge modeling. DWT and EEMD were coupled, and feature selection was performed on the decomposed sub-series using MI for use in the multi-station model. In the proposed feature selection method, some uninformative sub-series were omitted to achieve better performance. The results confirmed the efficiency of the proposed DWT-EEMD-MI approach in improving the accuracy of multi-station modeling strategies.

Keywords: River stage-discharge process, LSSVM, discrete wavelet transform (DWT), ensemble empirical mode decomposition (EEMD), multi-station modeling.

Downloads: 657
260 Solar Radiation Time Series Prediction

Authors: Cameron Hamilton, Walter Potter, Gerrit Hoogenboom, Ronald McClendon, Will Hobbs

Abstract:

A model was constructed to predict the amount of solar radiation that will reach the surface of the earth at a given location an hour into the future. This project was supported by the Southern Company to determine at what specific times during a given day of the year solar panels could be relied upon to produce energy in sufficient quantities. Due to its ability as a universal function approximator, an artificial neural network was used to estimate the nonlinear pattern of solar radiation, with measurements of weather conditions collected at the Griffin, Georgia weather station as inputs. A number of network configurations and training strategies were evaluated, though a multilayer perceptron with a variety of hidden nodes trained with the resilient propagation algorithm consistently yielded the most accurate predictions. In addition, a modeled direct normal irradiance field and adjacent weather station data were used to bolster prediction accuracy. In later trials, the solar radiation field was preprocessed with a discrete wavelet transform with the aim of removing noise from the measurements. The current model provides predictions of solar radiation with a mean square error of 0.0042, though ongoing efforts are being made to further improve the model’s accuracy.
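
The resilient propagation (Rprop) update rule named above adapts a separate step size for each weight from the sign of successive gradients; the sketch below shows the iRprop- variant on a toy quadratic objective, with all step-size constants being the usual textbook defaults rather than values taken from this work.

```python
import numpy as np

def irprop_minus(grad_fn, w, steps=None, n_iter=100,
                 eta_plus=1.2, eta_minus=0.5, step_max=50.0, step_min=1e-6):
    """iRprop-: adapt per-weight step sizes from gradient sign changes."""
    if steps is None:
        steps = np.full_like(w, 0.1)
    prev_grad = np.zeros_like(w)
    for _ in range(n_iter):
        grad = grad_fn(w)
        same_sign = prev_grad * grad > 0
        sign_flip = prev_grad * grad < 0
        steps[same_sign] = np.minimum(steps[same_sign] * eta_plus, step_max)
        steps[sign_flip] = np.maximum(steps[sign_flip] * eta_minus, step_min)
        grad[sign_flip] = 0.0          # iRprop-: forget the gradient after a flip
        w = w - np.sign(grad) * steps
        prev_grad = grad
    return w

# Toy objective: f(w) = sum((w - 3)^2), gradient 2 * (w - 3).
grad_fn = lambda w: 2.0 * (w - 3.0)
w0 = np.array([10.0, -4.0, 0.5])
print(irprop_minus(grad_fn, w0))   # converges towards [3, 3, 3]
```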

Keywords: Artificial Neural Networks, Resilient Propagation, Solar Radiation, Time Series Forecasting.

Downloads: 2756
259 Modeling Parametric Vibration of Multistage Gear Systems as a Tool for Design Optimization

Authors: James Kuria, John Kihiu

Abstract:

This work presents a numerical model developed to simulate the dynamics and vibrations of a multistage tractor gearbox. The effects of time-varying mesh stiffness, time-varying frictional torque on the gear teeth, lateral and torsional flexibility of the shafts and flexibility of the bearings were included in the model. The model was developed using the Lagrangian method, and it was applied to study the effect of three design variables on the vibration and stress levels in the gears. The first design variable, the module, had little effect on the vibration levels, but a higher module resulted in higher bending stress levels. The second design variable, the pressure angle, had little effect on the vibration levels, but had a strong effect on the stress levels on the pinion of a high-reduction-ratio gear pair. A pressure angle of 25° resulted in lower stress levels for a pinion with 14 teeth than a pressure angle of 20°. The third design variable, the contact ratio, had a very strong effect on both the vibration levels and the bending stress levels. Increasing the contact ratio to 2.0 reduced both the vibration levels and the bending stress levels significantly. For the gear train design used in this study, a module of 2.5 and a contact ratio of 2.0 for the various meshes were found to yield the best combination of low vibration levels and low bending stresses. The model can therefore be used as a tool for obtaining the optimum gear design parameters for a given multistage spur gear train.

Keywords: Bending stress levels, frictional torque, gear design parameters, mesh stiffness, multistage gear train, vibration levels.

Downloads: 2564
258 Model Order Reduction of Linear Time Variant High Speed VLSI Interconnects using Frequency Shift Technique

Authors: J. V. R. Ravindra, M. B. Srinivas

Abstract:

Accurate modeling of high-speed RLC interconnects has become a necessity to address signal integrity issues in current VLSI design. To accurately model a dispersive system of interconnects at higher frequencies, a full-wave analysis is required. However, conventional circuit simulation of interconnects with full-wave models is extremely CPU expensive. We present an algorithm for reducing large VLSI circuits to much smaller ones with similar input-output behavior. A key feature of our method, called the Frequency Shift Technique, is that it is capable of reducing linear time-varying systems. This enables it to capture frequency-translation and sampling behavior, important in communication subsystems such as mixers, RF components and switched-capacitor filters. Reduction is obtained by projecting the original system, described by linear differential equations, into a lower dimension. Experiments have been carried out using the Cadence design simulator, which indicate that the proposed technique achieves a greater percentage reduction with less CPU time than other model order reduction techniques in the literature. We also present applications to RF circuit subsystems, obtaining size reductions and evaluation speedups of orders of magnitude with insignificant loss of accuracy.
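
As context for projection-based model order reduction in general (not the Frequency Shift Technique itself, whose details are not given here), the sketch below builds an orthonormal basis from a block-Krylov-style subspace of a small linear time-invariant system and projects the state-space matrices onto it; the system matrices are random placeholders.

```python
import numpy as np

def project_lti(A, B, C, r):
    """Galerkin projection of x' = A x + B u, y = C x onto an r-dim basis."""
    # Krylov-style subspace spanned by [B, A B, A^2 B, ...], then orthonormalised.
    blocks = [B]
    for _ in range(r - 1):
        blocks.append(A @ blocks[-1])
    V, _ = np.linalg.qr(np.hstack(blocks))
    V = V[:, :r]
    A_r = V.T @ A @ V
    B_r = V.T @ B
    C_r = C @ V
    return A_r, B_r, C_r, V

rng = np.random.default_rng(0)
n = 50                                   # full order (placeholder system)
A = -np.eye(n) + 0.1 * rng.standard_normal((n, n))
B = rng.standard_normal((n, 1))
C = rng.standard_normal((1, n))

A_r, B_r, C_r, V = project_lti(A, B, C, r=8)
# Compare transfer functions H(s) = C (sI - A)^-1 B at one frequency point.
s = 1j * 2.0
H_full = C @ np.linalg.solve(s * np.eye(n) - A, B)
H_red = C_r @ np.linalg.solve(s * np.eye(8) - A_r, B_r)
print(abs(H_full[0, 0]), abs(H_red[0, 0]))
```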

Keywords: Model order reduction, RLC, crosstalk.

Downloads: 1646
257 Maximizing Nitrate Absorption of Agricultural Waste Water in a Tubular Microalgae Reactor by Adapting the Illumination Spectrum

Authors: J. Martin, A. Dannenberg, G. Detrell, R. Ewald, S. Fasoulas

Abstract:

Microalgae-based photobioreactors (PBR) for Life Support Systems (LSS) are currently being investigated for future space missions such as a crewed base on planets or moons. Biological components may help reduce resupply masses by closing material mass flows with the help of regenerative components. Via photosynthesis, the microalgae use CO2, water, light and nutrients to provide oxygen and biomass for the astronauts. These capabilities could have synergies with Earth applications that tackle current problems, and the developed technologies can be transferred. For example, a currently widely discussed issue is the increased nitrate and phosphate pollution of ground water from agricultural waste waters. To investigate the potential use of a biological system, based on the ability of the microalgae to extract and use nitrate and phosphate, for the treatment of polluted ground water from agricultural applications, a scalable test stand is being developed. This test stand investigates the maximization of the uptake rate of nitrate and quantifies the produced biomass and oxygen. To minimize the required energy for the uptake of nitrate from artificial waste water (AWW), the Flashing Light Effect (FLE) and the adaptation of the illumination spectrum were realized. This paper describes the composition of the AWW, the development of the illumination unit and the possibility of non-invasive process optimization and control via the adaptation of the illumination spectrum and illumination cycles. The findings show a doubling of the energy-related growth rate by adapting the illumination settings.

Keywords: Microalgae, illumination, nitrate uptake, flashing light effect.

Downloads: 638
256 Noninvasive Brain-Machine Interface to Control Both Mecha TE Robotic Hands Using Emotiv EEG Neuroheadset

Authors: Adrienne Kline, Jaydip Desai

Abstract:

Electroencephalography (EEG) is a noninvasive technique that registers signals originating from the firing of neurons in the brain. The Emotiv EEG Neuroheadset is a consumer product comprising 14 EEG channels and was used to record the reactions of the neurons within the brain to two forms of stimuli in 10 participants. These stimuli consisted of auditory and visual formats that provided directions of ‘right’ or ‘left.’ Participants were instructed to raise their right or left arm in accordance with the instruction given. A scenario in OpenViBE was generated to stimulate the participants while recording their data. In OpenViBE, the Graz Motor BCI Stimulator algorithm was configured to govern the duration and number of visual stimuli. Utilizing EEGLAB under cross-platform MATLAB®, the electrodes most stimulated during the study were identified. Data outputs from EEGLAB were analyzed using IBM SPSS Statistics® Version 20. This aided in determining the electrodes to use in the development of a brain-machine interface (BMI) using real-time EEG signals from the Emotiv EEG Neuroheadset. Signal processing and feature extraction were accomplished via the Simulink® signal processing toolbox. An Arduino™ Duemilanove microcontroller was used to link the Emotiv EEG Neuroheadset and the right and left Mecha TE™ Hands.

Keywords: Brain-machine interface, EEGLAB, Emotiv EEG Neuroheadset, OpenViBE, Simulink.

Downloads: 2798
255 An Efficient Tool for Mitigating Voltage Unbalance with Reactive Power Control of Distributed Grid-Connected Photovoltaic Systems

Authors: Malinwo Estone Ayikpa

Abstract:

With the rapid increase in grid-connected PV systems over the last decades, genuine challenges have arisen for engineers and professionals in the energy field in the planning and operation of existing distribution networks with the integration of new generation sources. However, the conventional distribution network was not designed to receive generation other than that of the main power supply. The tools generally used to analyze such networks become inefficient and cannot take into account all the constraints related to the operation of grid-connected PV systems. Some of these constraints are voltage control difficulty, reverse power flow, and especially voltage unbalance, which may be due to the poor distribution of single-phase PV systems in the network. In order to analyze the impact of the connection of small and large numbers of PV systems to distribution networks, this paper presents an efficient optimization tool that minimizes voltage unbalance in three-phase distribution networks with active and reactive power injections from the allocation of single-phase and three-phase PV plants. Reactive power can be generated or absorbed using the available capacity and the adjustable power factor of the inverter. A good reduction of voltage unbalance can be achieved by reactive power control of the PV systems. The presented tool is based on the three-phase current injection method, and the PV systems are modeled via an equivalent circuit. The primal-dual interior point method is used to obtain the optimal operating points for the systems.
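
For reference, voltage unbalance in a three-phase system is commonly quantified through symmetrical components as the ratio of the negative- to positive-sequence voltage magnitudes; the NumPy sketch below computes that factor for a set of placeholder phasors and is only a background illustration, not part of the paper's optimization tool.

```python
import numpy as np

def voltage_unbalance_factor(va, vb, vc):
    """VUF (%) = |V2| / |V1| * 100 from the Fortescue transform of phase phasors."""
    a = np.exp(2j * np.pi / 3)           # 120-degree rotation operator
    v1 = (va + a * vb + a**2 * vc) / 3   # positive-sequence component
    v2 = (va + a**2 * vb + a * vc) / 3   # negative-sequence component
    return 100.0 * abs(v2) / abs(v1)

# Placeholder unbalanced phasors (magnitude in volts, angle in degrees).
def phasor(mag, deg):
    return mag * np.exp(1j * np.deg2rad(deg))

va = phasor(230.0, 0.0)
vb = phasor(225.0, -118.0)
vc = phasor(236.0, 121.0)
print(f"VUF = {voltage_unbalance_factor(va, vb, vc):.2f} %")
```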

Keywords: Photovoltaic generation, primal-dual interior point method, three-phase optimal power flow, unbalanced system.

Downloads: 1084
254 Vision-Based Collision Avoidance for Unmanned Aerial Vehicles by Recurrent Neural Networks

Authors: Yao-Hong Tsai

Abstract:

Thanks to advances in sensor technology, video surveillance has become the main means of security control in every big city in the world. Surveillance is usually used by governments for intelligence gathering, the prevention of crime, the protection of a process, person, group or object, or the investigation of crime. Many surveillance systems based on computer vision technology have been developed in recent years. Moving target tracking is the most common task for an Unmanned Aerial Vehicle (UAV), which must find and track objects of interest in mobile aerial surveillance for civilian applications. This paper focuses on vision-based collision avoidance for UAVs using recurrent neural networks. First, images from cameras on the UAV are fused by a deep convolutional neural network. Then, a recurrent neural network is constructed to obtain high-level image features for object tracking and to extract low-level image features for noise reduction. The system distributes the computation of the whole system between local and cloud platforms to efficiently perform object detection, tracking and collision avoidance based on multiple UAVs. Experiments on several challenging datasets showed that the proposed algorithm outperforms state-of-the-art methods.

Keywords: Unmanned aerial vehicle, object tracking, deep learning, collision avoidance.

Downloads: 947
253 An Intelligent Transportation System for Safety and Integrated Management of Railway Crossings

Authors: M. Magrini, D. Moroni, G. Palazzese, G. Pieri, D. Azzarelli, A. Spada, L. Fanucci, O. Salvetti

Abstract:

Railway crossings are complex entities whose optimal management cannot be addressed without the help of an intelligent transportation system integrating information on both train and vehicular flows. In this paper, we propose an integrated system named SIMPLE (Railway Safety and Infrastructure for Mobility applied at level crossings) that, while providing unparalleled safety in railway level crossings, collects data on rail and road traffic and provides value-added services to citizens and commuters. Such services include, for example, alerts via variable message signs to drivers and suggestions for alternative routes, towards a more sustainable, eco-friendly and efficient urban mobility. To achieve these goals, SIMPLE is organized as a System of Systems (SoS), with a modular architecture whose components range from specially designed radar sensors for obstacle detection to smart ETSI M2M-compliant camera networks for urban traffic monitoring. Computational units for performing forecasts according to adaptive models of train and vehicular traffic are also included. The proposed system has been tested and validated during an extensive trial held in the mid-sized Italian town of Montecatini, a paradigmatic case where the rail network is inextricably linked with the fabric of the city. Results of the tests are reported and discussed.

Keywords: Intelligent Transportation Systems (ITS), railway, railroad crossing, smart camera networks, radar obstacle detection, real-time traffic optimization, IoT, ETSI M2M, transport safety.

Downloads: 1408
252 Effect of Supplementary Premium on the Optimal Portfolio Policy in a Defined Contribution Pension Scheme with Refund of Premium Clauses

Authors: Edikan E. Akpanibah, Obinichi C. Mandah, Imoleayo S. Asiwaju

Abstract:

In this paper, we study the effect of a supplementary premium on the optimal portfolio policy in a defined contribution (DC) pension scheme with refund of premium clauses. Such a refund clause allows the next of kin of deceased members to withdraw their relative's accumulated wealth during the accumulation period. The supplementary premium is intended to help sustain the scheme and is assumed to be stochastic. We consider cases where the remaining wealth is distributed equally and where it is not distributed equally among the remaining members. Next, we consider investments in cash and equity to help increase the remaining accumulated funds to meet the retirement needs of the remaining members, and we pose the problem as a continuous-time mean-variance stochastic optimal control problem using actuarial notation, establishing an optimization problem from the extended Hamilton-Jacobi-Bellman equations. The optimal portfolio policy, the corresponding optimal fund sizes for the two assets, and the efficient frontier of the pension members for the two cases are obtained. Furthermore, numerical simulations of the optimal portfolio policies over time are presented, and the effect of the supplementary premium on the optimal portfolio policy is discussed; we observe that the supplementary premium decreases the optimal allocation to the risky asset (equity). Secondly, we observe a disparity between the optimal policies for the two cases.

Keywords: Defined contribution pension scheme, extended Hamilton Jacobi Bellman equations, optimal portfolio policies, refund of premium clauses, supplementary premium.

Downloads: 657
251 Estimation and Removal of Chlorophenolic Compounds from Paper Mill Waste Water by Electrochemical Treatment

Authors: R. Sharma, S. Kumar, C. Sharma

Abstract:

A number of toxic chlorophenolic compounds are formed during pulp bleaching. The nature and concentration of these chlorophenolic compounds largely depend upon the amount and nature of the bleaching chemicals used. These compounds are highly recalcitrant and difficult to remove, and are only partially removed by the biochemical treatment processes adopted by the paper industry. Identification and estimation of these chlorophenolic compounds have been carried out in the primary and secondary clarified effluents from the paper mill by GC-MS. Twenty-six chlorophenolic compounds have been identified and estimated in paper mill waste waters. Electrochemical treatment is an efficient method for the oxidation of pollutants and has successfully been used to treat textile and oil waste water. Electrochemical treatment using a less expensive anode material, stainless steel electrodes, has been tried to study their removal. The electrochemical assembly comprised a DC power supply, a magnetic stirrer and stainless steel (316L) electrodes. The operating conditions were optimized and treatment was performed under the optimized conditions. Results indicate that 68.7% and 83.8% of chlorophenolic compounds are removed during 2 h of electrochemical treatment from the primary and secondary clarified effluents, respectively. Further, there is a reduction of 65.1%, 60% and 92.6% in COD, AOX and color, respectively, for the primary clarified effluent, and of 83.8%, 75.9% and 96.8% in COD, AOX and color, respectively, for the secondary clarified effluent. Electrochemical treatment has also been found to significantly increase the biodegradability index of the wastewater because of the conversion of the non-biodegradable fraction into a biodegradable fraction. Thus, electrochemical treatment is an efficient method for the degradation of chlorophenolic compounds and the removal of color, AOX and other recalcitrant organic matter present in paper mill waste water.

Keywords: Chlorophenolics, effluent, electrochemical treatment, wastewater.

Downloads: 1891
250 Time/Temperature-Dependent Finite Element Model of Laminated Glass Beams

Authors: Alena Zemanová, Jan Zeman, Michal Šejnoha

Abstract:

The polymer foil used in the manufacture of laminated glass members behaves in a viscoelastic manner with temperature dependence. This contribution aims at incorporating the time/temperature-dependent behavior of the interlayer into our earlier elastic finite element model for laminated glass beams. The model is based on a refined beam theory: each layer behaves according to the finite-strain shear-deformable formulation by Reissner, and the adjacent layers are connected via Lagrange multipliers ensuring the inter-layer compatibility of a laminated unit. The time/temperature-dependent behavior of the interlayer is accounted for by the generalized Maxwell model and by the time-temperature superposition principle due to Williams, Landel, and Ferry. The resulting system is solved by the Newton method with consistent linearization, and the viscoelastic response is determined incrementally by the exponential algorithm. By comparing the model predictions against available experimental data, we demonstrate that the proposed formulation is reliable and accurately reproduces the behavior of laminated glass units.
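
As a small illustration of the Williams-Landel-Ferry (WLF) time-temperature superposition mentioned above, the function below evaluates the horizontal shift factor aT for a few temperatures; the constants C1 = 17.44 and C2 = 51.6 K are the commonly quoted "universal" values at the glass transition temperature, used here purely as placeholders rather than the interlayer parameters identified in the paper.

```python
def wlf_shift_factor(T, T_ref, C1=17.44, C2=51.6):
    """Williams-Landel-Ferry shift: log10(aT) = -C1 (T - T_ref) / (C2 + T - T_ref)."""
    dT = T - T_ref
    log_aT = -C1 * dT / (C2 + dT)
    return 10.0 ** log_aT

T_ref = 20.0  # reference temperature in degrees C (placeholder)
for T in (0.0, 20.0, 40.0, 60.0):
    print(f"T = {T:5.1f} C  ->  aT = {wlf_shift_factor(T, T_ref):.3e}")
```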

Keywords: Laminated glass, finite element method, finite-strain Reissner model, Lagrange multipliers, generalized Maxwell model, Williams-Landel-Ferry equation, Newton method.

Downloads: 1681
249 Tensile and Fracture Properties of Cast and Forged Composite Synthesized by Addition of in-situ Generated Al3Ti-Al2O3 Particles to Magnesium

Authors: H. M. Nanjundaswamy, S. K. Nath, S. Ray

Abstract:

TiO2 particles have been added to molten aluminium to produce an aluminium-based cast Al/Al3Ti-Al2O3 composite, which has then been added to molten magnesium to synthesize a magnesium-based cast Mg-Al/Al3Ti-Al2O3 composite. The nominal compositions in terms of Mg, Al, and TiO2 content in the magnesium-based composites are Mg-9Al-0.6TiO2, Mg-9Al-0.8TiO2, Mg-9Al-1.0TiO2 and Mg-9Al-1.2TiO2, designated respectively as MA6T, MA8T, MA10T and MA12T. The microstructure of the cast magnesium-based composite shows grayish rods of the intermetallic Al3Ti, inherited from the aluminium-based composite, but on hot forging these rods break into smaller lengths, decreasing the average aspect ratio (length to diameter) from 7.5 to 3.0. There are also cavities between the broken segments of the rods. The β-phase in the cast microstructure, Mg17Al12, dissolves during heating prior to forging and re-precipitates as relatively finer particles on cooling. The amount of β-phase also decreases on forging as segregation is removed. In both the cast and forged composites, the Brinell hardness increases rapidly with increasing addition of TiO2, but the hardness is higher in the forged composites by about 80 BHN. With higher levels of TiO2 addition in the magnesium-based cast composite, the yield strength decreases progressively, but there is a marginal increase in yield strength over that of the cast Mg-9 wt. pct. Al alloy, designated as the MA alloy. However, the ultimate tensile strength (UTS) of the cast composites decreases with increasing particle content, indicating possibly an early initiation of cracks in the brittle inter-dendritic region and their easy propagation through the interfaces of the particles. In the forged composites, there is a significant improvement in both yield strength and UTS with increasing TiO2 addition, and also over those observed in their cast counterparts, but at higher additions they decrease. It may also be noted that, as in the forged MA alloy, incomplete recovery of the forging strain increases the strength of the matrix in the composites, and the ductility decreases both in the forged alloy and in the composites. The initiation fracture toughness, JIC, decreases drastically in the cast composites compared to that in the MA alloy due to the presence of the intermetallic Al3Ti and Al2O3 particles in the composite. There is a drastic reduction of JIC on forging in both the alloy and the composites, possibly due to incomplete recovery of the forging strain in both, as well as the breaking of the Al3Ti rods and the voids between the broken segments of the Al3Ti rods in the composites. The ratio of tearing modulus to elastic modulus is higher in the cast composites and increases with increasing TiO2 addition. The ratio decreases comparatively more on forging of the cast MA alloy than in the forged composites.

Keywords: Composite, fracture toughness, forging, tensile properties.

Downloads: 1362
248 Equilibrium, Kinetic and Thermodynamic Studies of the Biosorption of Textile Dye (Yellow Bemacid) onto Brahea edulis

Authors: G. Henini, Y. Laidani, F. Souahi, A. Labbaci, S. Hanini

Abstract:

Environmental contamination is a major problem facing society today. Industrial, agricultural, and domestic wastes, due to the rapid development of technology, are discharged into several receivers. Generally, this discharge is directed to the nearest water sources such as rivers, lakes, and seas. While the rates of development and waste production are not likely to diminish, efforts to control and dispose of wastes are appropriately rising. Wastewaters from textile industries represent a serious problem all over the world. They contain different types of synthetic dyes which are known to be a major source of environmental pollution in terms of both the volume of dye discharged and the effluent composition. From an environmental point of view, the removal of synthetic dyes is of great concern. Among several chemical and physical methods, adsorption is a promising technique due to its ease of use and low cost compared to other decolorization processes, especially if the adsorbent is inexpensive and readily available. The focus of the present study was to assess the potential of Brahea edulis (BE) for the removal of the synthetic dye Yellow Bemacid (YB) from aqueous solutions. The results obtained here may transfer to other dyes with a similar chemical structure. Biosorption studies were carried out under various parameters such as adsorbent mass, pH, contact time, initial dye concentration, and temperature. The biosorption kinetic data of the material (BE) were tested with the pseudo-first-order and the pseudo-second-order kinetic models. Thermodynamic parameters including the Gibbs free energy ΔG, enthalpy ΔH, and entropy ΔS revealed that the adsorption of YB on BE is feasible, spontaneous, and endothermic. The equilibrium data were analyzed using the Langmuir, Freundlich, Elovich, and Temkin isotherm models. The experimental results show that the extent of biosorption increases with an increase in the biosorbent mass (0.25 g: 12 mg/g; 1.5 g: 47.44 mg/g). The maximum biosorption occurred at a pH value of around 2 for YB. The equilibrium uptake increased with an increase in the initial dye concentration in solution (Co = 120 mg/l; q = 35.97 mg/g). The biosorption kinetic data were properly fitted by the pseudo-second-order kinetic model. The best fit was obtained by the Langmuir model with a high correlation coefficient (R2 > 0.998) and a maximum monolayer adsorption capacity of 35.97 mg/g for YB.
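
The Langmuir isotherm named above relates the equilibrium uptake qe to the equilibrium concentration Ce as qe = qmax·KL·Ce / (1 + KL·Ce); the sketch below fits that form with SciPy's curve_fit on made-up (Ce, qe) pairs that merely mimic the shape of such data and are not the measurements reported for Yellow Bemacid on Brahea edulis.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(ce, qmax, kl):
    """Langmuir isotherm: qe = qmax * KL * Ce / (1 + KL * Ce)."""
    return qmax * kl * ce / (1.0 + kl * ce)

# Illustrative equilibrium data (mg/L, mg/g) -- placeholders, not the paper's values.
ce = np.array([5.0, 10.0, 20.0, 40.0, 60.0, 80.0, 120.0])
qe = np.array([8.1, 13.9, 21.5, 28.3, 31.2, 32.9, 34.6])

popt, pcov = curve_fit(langmuir, ce, qe, p0=[30.0, 0.05])
qmax_fit, kl_fit = popt
residuals = qe - langmuir(ce, *popt)
r_squared = 1.0 - np.sum(residuals**2) / np.sum((qe - qe.mean())**2)

print(f"qmax = {qmax_fit:.2f} mg/g, KL = {kl_fit:.4f} L/mg, R^2 = {r_squared:.4f}")
```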

Keywords: Adsorption, Brahea edulis, isotherm, yellow bemacid.

Downloads: 1269
247 Pectoral Muscles Suppression in Digital Mammograms Using Hybridization of Soft Computing Methods

Authors: I. Laurence Aroquiaraj, K. Thangavel

Abstract:

Breast region segmentation is an essential prerequisite in the computerized analysis of mammograms. It aims at separating the breast tissue from the background of the mammogram and includes two independent segmentations. The first segments the background region, which usually contains annotations, labels and frames, from the whole breast region, while the second removes the pectoral muscle portion (present in Medio-Lateral Oblique (MLO) views) from the rest of the breast tissue. In this paper we propose a hybridization of Connected Component Labeling (CCL), fuzzy, and straight line methods. Our proposed methods worked well for separating the pectoral region. After removal of the pectoral muscle from the mammogram, further processing is confined to the breast region alone. To demonstrate the validity of our segmentation algorithm, it is extensively tested using the 322 mammographic images from the Mammographic Image Analysis Society (MIAS) database. The segmentation results were evaluated using the Mean Absolute Error (MAE), Hausdorff Distance (HD), Probabilistic Rand Index (PRI), Local Consistency Error (LCE) and Tanimoto Coefficient (TC). The hybridization of the fuzzy and straight line methods gives more than 96% of the curve segmentations as adequate or better. In addition, a comparison with similar approaches from the state of the art has been given, obtaining slightly improved results. Experimental results demonstrate the effectiveness of the proposed approach.

Keywords: X-ray Mammography, CCL, Fuzzy, Straight line.

Downloads: 1750
246 EEG Analysis of Brain Dynamics in Children with Language Disorders

Authors: Hamed Alizadeh Dashagholi, Hossein Yousefi-Banaem, Mina Naeimi

Abstract:

The current study was established for EEG signal analysis in patients with language disorders. A language disorder can be defined as a meaningful delay in the use or understanding of spoken or written language. The disorder can involve the content or meaning of language, its form, or its use. Here we applied Z-score, power spectrum, and coherence methods to discriminate the language disorder data from those of healthy controls. The power spectrum of each channel in the alpha, beta, gamma, delta, and theta frequency bands was measured. In addition, intra-hemispheric Z-scores were obtained by a scoring algorithm. The obtained results showed high Z-scores and power spectra in posterior regions. Therefore, we can conclude that people with language disorders have high brain activity in the frontal region of the brain in comparison with healthy people. The results showed that high coherence correlates with irregularities in the ERP and is often found during complex tasks, whereas low coherence is often found in pathological conditions. The results of the Z-score analysis of the brain dynamics showed a higher Z-score peak frequency in the delta, theta and beta sub-bands of language disorder patients. In this analysis there were signs of activity in both hemispheres, and the left-dominant hemisphere was more active than the right.
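
To make the band power and Z-score steps concrete, the sketch below estimates the power of synthetic EEG channels in the standard frequency bands with Welch's method and standardizes the band powers across channels; the sampling rate, band edges and random signals are illustrative assumptions, not the study's recordings.

```python
import numpy as np
from scipy.signal import welch

BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}

def band_powers(x, fs):
    """Approximate band power by summing the Welch PSD over each EEG band."""
    f, pxx = welch(x, fs=fs, nperseg=fs * 2)
    df = f[1] - f[0]
    return np.array([pxx[(f >= lo) & (f < hi)].sum() * df
                     for lo, hi in BANDS.values()])

fs = 256                                   # assumed sampling rate (Hz)
rng = np.random.default_rng(0)
eeg = rng.standard_normal((8, fs * 30))    # 8 synthetic channels, 30 s each

powers = np.vstack([band_powers(ch, fs) for ch in eeg])      # (channels, bands)
z_scores = (powers - powers.mean(axis=0)) / powers.std(axis=0)
for name, z in zip(BANDS, z_scores[0]):
    print(f"channel 0, {name:5s}: z = {z:+.2f}")
```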

Keywords: EEG, electroencephalography, coherence methods, language disorder, power spectrum, z-score.

Downloads: 2545
245 Incorporating Lexical-Semantic Knowledge into Convolutional Neural Network Framework for Pediatric Disease Diagnosis

Authors: Xiaocong Liu, Huazhen Wang, Ting He, Xiaozheng Li, Weihan Zhang, Jian Chen

Abstract:

The utilization of electronic medical record (EMR) data to establish disease diagnosis models has become an important research topic in biomedical informatics. Deep learning can automatically extract features from massive data, which has brought about breakthroughs in the study of EMR data. The challenge is that deep learning lacks semantic knowledge, which limits its practicality in medical science. This research proposes a method of incorporating lexical-semantic knowledge from abundant entities into a convolutional neural network (CNN) framework for pediatric disease diagnosis. Firstly, medical terms are vectorized into Lexical Semantic Vectors (LSV), which are concatenated with the embedded word vectors of word2vec to enrich the feature representation. Secondly, the semantic distribution of medical terms serves as a Semantic Decision Guide (SDG) for the optimization of deep learning models. The study evaluates the performance of the LSV-SDG-CNN model on four kinds of Chinese EMR datasets. Additionally, CNN, LSV-CNN, and SDG-CNN are designed as baseline models for comparison. The experimental results show that the LSV-SDG-CNN model outperforms the baseline models on the four kinds of Chinese EMR datasets. The best configuration of the model yielded an F1 score of 86.20%. The results clearly demonstrate that the CNN has been effectively guided and optimized by lexical-semantic knowledge, and the LSV-SDG-CNN model improves the disease classification accuracy by a clear margin.

Keywords: Lexical semantics, feature representation, semantic decision, convolutional neural network, electronic medical record.

Downloads: 586
244 Face Recognition Using Double Dimension Reduction

Authors: M. A. Anjum, M. Y. Javed, A. Basit

Abstract:

In this paper a new approach to face recognition is presented that achieves double dimension reduction, making the system computationally efficient with better recognition results. In pattern recognition techniques, the discriminative information of an image increases with resolution up to a certain extent; consequently, face recognition results improve with increasing face image resolution and level off beyond a certain resolution level. In the proposed model of face recognition, an image decimation algorithm is first applied to the face image for dimension reduction to the resolution level that provides the best recognition results. Due to its computational speed and feature extraction potential, the Discrete Cosine Transform (DCT) is then applied to the face image. A subset of DCT coefficients from low to mid frequencies that represents the face adequately and provides the best recognition results is retained. A trade-off between the decimation factor, the number of DCT coefficients retained and the recognition rate with minimum computation is obtained. Preprocessing of the image is carried out to increase its robustness against variations in pose and illumination level. This new model has been tested on different databases, which include the ORL database, the Yale database and a color database. The proposed technique has performed much better compared to other techniques. The significance of the model is twofold: (1) dimension reduction to an effective and suitable face image resolution, and (2) appropriate DCT coefficients are retained to achieve the best recognition results with varying image poses, intensity and illumination levels.
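
In the spirit of the two reduction steps described above, the sketch below decimates a face-sized array and keeps a low-to-mid-frequency block of its 2-D DCT coefficients as a feature vector; the decimation factor, block size and random "image" are placeholder choices, not the values tuned in the paper.

```python
import numpy as np
from scipy.fft import dctn

def dct_features(image, decimation=2, keep=8):
    """Decimate the image, take its 2-D DCT, keep the top-left keep x keep block."""
    small = image[::decimation, ::decimation]          # simple image decimation
    coeffs = dctn(small, norm="ortho")                 # 2-D DCT (type II)
    return coeffs[:keep, :keep].ravel()                # low-to-mid frequency subset

# Placeholder "face" image of size 112 x 92 (the ORL image size).
rng = np.random.default_rng(0)
face = rng.random((112, 92))

features = dct_features(face, decimation=2, keep=8)
print(features.shape)   # (64,) -> compact feature vector for the classifier stage
```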

Keywords: Biometrics, DCT, Face Recognition, Feature extraction.

Downloads: 1488
243 Seamless Handover in Urban 5G-UAV Systems Using Entropy Weighted Method

Authors: Anirudh Sunil Warrier, Saba Al-Rubaye, Dimitrios Panagiotakopoulos, Gokhan Inalhan, Antonios Tsourdos

Abstract:

The demand for increased data transfer rates and network traffic capacity has given rise to the concept of heterogeneous networks. Heterogeneous networks are wireless networks consisting of devices using different underlying radio access technologies (RAT). For Unmanned Aerial Vehicles (UAVs), this enhanced data rate and network capacity are even more critical, especially in applications such as medicine, delivery missions and the military. In an urban heterogeneous network environment, the UAVs must be able to switch seamlessly from one base station (BS) to another to maintain a reliable link. Therefore, seamless handover in such urban environments has become a major challenge. In this paper, a scheme to achieve seamless handover is developed: an algorithm based on the Received Signal Strength (RSS) criterion is used for network selection, and the Entropy Weighted Method (EWM) is implemented for decision making. Seamless handover using EWM decision-making is demonstrated successfully for a UAV moving across fifth generation (5G) and long-term evolution (LTE) networks via a simulation-level analysis. Thus, a solution for UAV-5G communication, specifically the mobility challenge in heterogeneous networks, is provided, and this work could act as a step forward in making UAV-5G architecture integration a possibility.
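
As background on the entropy weighted method itself, the sketch below derives criterion weights from the information entropy of a small decision matrix and ranks candidate networks by a weighted score; the matrix of RSS, load and latency values is entirely made up for illustration and is unrelated to the paper's simulation.

```python
import numpy as np

def entropy_weights(matrix):
    """Entropy Weighted Method: criteria with more dispersion get larger weights."""
    m, _ = matrix.shape
    p = matrix / matrix.sum(axis=0)                  # column-wise proportions
    p = np.where(p == 0, 1e-12, p)                   # avoid log(0)
    e = -(p * np.log(p)).sum(axis=0) / np.log(m)     # entropy per criterion
    d = 1.0 - e                                      # degree of diversification
    return d / d.sum()

# Hypothetical candidate networks scored on benefit-type criteria
# (higher is better): normalized RSS, free capacity, inverse latency.
candidates = ["5G-BS-1", "5G-BS-2", "LTE-BS-1"]
X = np.array([[0.82, 0.40, 0.70],
              [0.65, 0.75, 0.55],
              [0.90, 0.20, 0.35]])

w = entropy_weights(X)
scores = X @ w
best = candidates[int(np.argmax(scores))]
print("weights:", np.round(w, 3), "-> handover target:", best)
```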

Keywords: Air to ground, A2G, fifth generation, 5G, handover, mobility, unmanned aerial vehicle, UAV, urban environments.

Downloads: 417
242 In Search of a Suitable Neural Network Capable of Fast Monitoring of Congestion Level in Electric Power Systems

Authors: Pradyumna Kumar Sahoo, Prasanta Kumar Satpathy

Abstract:

This paper aims at finding a suitable neural network for monitoring the congestion level in electrical power systems. In this paper, the input data have been framed properly to meet the target objective through a supervised learning mechanism by defining normal and abnormal operating conditions for the system under study. The congestion level, expressed as a line congestion index (LCI), is evaluated for each operating condition and is presented to the NN along with the bus voltages to represent the input and target data. Once the training is successful, the NN learns how to deal with newly presented data through a validation and testing mechanism. The crux of the results presented in this paper rests on a performance comparison of a multi-layered feed-forward neural network with eleven types of backpropagation techniques so as to evolve the best training criterion. The proposed methodology has been tested on the standard IEEE 14-bus test system with the support of the MATLAB-based NN toolbox. The results presented in this paper signify that the Levenberg-Marquardt backpropagation algorithm gives the best training performance of all the eleven cases considered in this paper, thus validating the proposed methodology.

Keywords: Line congestion index, critical bus, contingency, neural network.

Downloads: 1783
241 Automatic Detection of Defects in Ornamental Limestone Using Wavelets

Authors: Maria C. Proença, Marco Aniceto, Pedro N. Santos, José C. Freitas

Abstract:

A methodology based on wavelets is proposed for the automatic location and delimitation of defects in limestone plates. Natural defects include dark colored spots, crystal zones trapped in the stone, areas of abnormal contrast colors, cracks or fracture lines, and fossil patterns. Although some of these may or may not be considered defects according to the intended use of the plate, the goal is to pair each stone with a map of defects that can be overlaid on a computer display. These layers of defects constitute a database that will allow the preliminary selection of matching tiles of a particular variety, with specific dimensions, for a requirement of N square meters, to be done on a desktop computer rather than by a two-hour search in the storage park, with human operators manipulating stone plates as large as 3 m x 2 m, weighing about one ton. Accident risks and work times are reduced, with a consequent increase in productivity. The basis of the algorithm is a wavelet decomposition executed on two instances of the original image to detect both hypotheses – dark and clear defects. The existence and/or size of these defects are the gauge used to classify the quality grade of the stone products. The tuning of the parameters available in the wavelet framework corresponds to different levels of accuracy in the drawing of the contours and in the selection of the defect size, which allows the map of defects to be used to cut a selected stone into tiles with minimum waste, according to the dimension of defects allowed.
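
A toy version of wavelet-based defect highlighting is sketched below using PyWavelets: a single-level 2-D decomposition of a synthetic plate image, a threshold on the detail-coefficient magnitude, and a coarse defect mask; the wavelet family, threshold and synthetic "dark spot" are illustrative choices, not the tuned parameters of the paper.

```python
import numpy as np
import pywt

def defect_mask(image, wavelet="haar", k=3.0):
    """Flag locations whose detail-coefficient magnitude is unusually large."""
    _, (ch, cv, cd) = pywt.dwt2(image, wavelet)      # single-level 2-D DWT
    detail = np.sqrt(ch**2 + cv**2 + cd**2)          # combined detail magnitude
    threshold = detail.mean() + k * detail.std()
    return detail > threshold                        # mask at half resolution

# Synthetic limestone plate: smooth background plus one dark spot "defect".
rng = np.random.default_rng(0)
plate = 200.0 + rng.normal(0.0, 2.0, (128, 128))
plate[60:70, 80:90] -= 80.0                          # injected dark defect

mask = defect_mask(plate)
print("flagged coefficients:", int(mask.sum()))      # concentrated around the spot
```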

Keywords: Automatic detection, wavelets, defects, fracture lines.

Downloads: 1161
240 A Multi-layer Artificial Neural Network Architecture Design for Load Forecasting in Power Systems

Authors: Axay J. Mehta, Hema A. Mehta, T. C. Manjunath, C. Ardil

Abstract:

In this paper, the modelling and design of an artificial neural network architecture for load forecasting purposes is investigated. The primary prerequisite for power system planning is to arrive at realistic estimates of the future demand for power, which is known as load forecasting. Short Term Load Forecasting (STLF) helps in determining economic, reliable and secure operating strategies for the power system. The dependence of the load on several factors makes load forecasting a very challenging job. An overestimation of the load may cause premature investment and unnecessary blocking of capital, whereas an underestimation of the load may result in a shortage of equipment and circuits. It is always better to plan the system for a load slightly higher than the expected one so that no exigency may arise. In this paper, a load-forecasting model is proposed using a multilayer neural network with an appropriately modified backpropagation learning algorithm. Once the neural network model is designed and trained, it can forecast the load of the power system 24 hours ahead on a daily basis and can also forecast the cumulative load on a daily basis. The real load data used for training the artificial neural network were taken from the LDC, Gujarat Electricity Board, Jambuva, Gujarat, India. The results show that the load forecasting of the ANN model follows the actual load pattern accurately throughout the forecasted period.

Keywords: Power system, Load forecasting, Neural Network, Neuron, Stabilization, Network structure, Load.

Downloads: 3416
239 Trajectory Guided Recognition of Hand Gestures having only Global Motions

Authors: M. K. Bhuyan, P. K. Bora, D. Ghosh

Abstract:

One very interesting field of research in pattern recognition that has gained much attention in recent times is gesture recognition. In this paper, we consider a form of dynamic hand gestures that is characterized by the total movement of the hand (arm) in space. For these types of gestures, the shape of the hand (palm) during gesturing does not bear any significance. In our work, we propose a model-based method for tracking hand motion in space, thereby estimating the hand motion trajectory. We employ the dynamic time warping (DTW) algorithm for time alignment and normalization of the spatio-temporal variations that exist among samples belonging to the same gesture class. During training, one template trajectory and one prototype feature vector are generated for every gesture class. The features used in our work include several static and dynamic motion trajectory features. Recognition is accomplished in two stages. In the first stage, all unlikely gesture classes are eliminated by comparing the input gesture trajectory to all the template trajectories. In the next stage, the feature vector extracted from the input gesture is compared to all the class prototype feature vectors using a distance classifier. Experimental results demonstrate that our proposed trajectory estimator and classifier are suitable for a Human Computer Interaction (HCI) platform.
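
For readers unfamiliar with dynamic time warping, the sketch below implements the standard DTW distance between two 2-D trajectories with a simple dynamic-programming table; the sample trajectories are synthetic circles sampled at different rates, not gesture data from the paper.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic DTW between two sequences of points (rows are time steps)."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])        # local distance
            cost[i, j] = d + min(cost[i - 1, j],           # insertion
                                 cost[i, j - 1],           # deletion
                                 cost[i - 1, j - 1])       # match
    return cost[n, m]

# Two synthetic circular trajectories traced at different speeds.
t1 = np.linspace(0, 2 * np.pi, 40)
t2 = np.linspace(0, 2 * np.pi, 60)
traj_a = np.column_stack([np.cos(t1), np.sin(t1)])
traj_b = np.column_stack([np.cos(t2), np.sin(t2)])

print(f"DTW distance: {dtw_distance(traj_a, traj_b):.3f}")   # small despite length mismatch
```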

Keywords: Hand gesture, human computer interaction, key video object plane, dynamic time warping.

Downloads: 2738