Search results for: computational fluid dynamics.
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2358

168 Face Recognition Using Double Dimension Reduction

Authors: M. A Anjum, M. Y. Javed, A. Basit

Abstract:

In this paper, a new approach to face recognition is presented that achieves double dimension reduction, making the system computationally efficient while improving recognition results. In pattern recognition techniques, the discriminative information of an image increases with resolution only up to a point; consequently, face recognition results improve as face image resolution increases and level off once a certain resolution level is reached. In the proposed model, an image decimation algorithm is first applied to the face image, reducing its dimension to the resolution level that provides the best recognition results. The Discrete Cosine Transform (DCT), chosen for its computational speed and feature extraction potential, is then applied to the face image. A subset of low- to mid-frequency DCT coefficients that represents the face adequately and yields the best recognition results is retained. A trade-off among the decimation factor, the number of retained DCT coefficients, and the recognition rate is obtained at minimum computational cost. Preprocessing of the image is carried out to increase robustness against variations in pose and illumination level. The model has been tested on different databases, including the ORL database, the Yale database and a color database, and has performed considerably better than competing techniques. The significance of the model is twofold: (1) dimension reduction to an effective and suitable face image resolution, and (2) retention of appropriate DCT coefficients to achieve the best recognition results under varying poses, intensities and illumination levels.
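
A minimal sketch of the double dimension reduction idea described above: decimate the face image to a working resolution, apply a 2-D DCT, and keep a low-to-mid-frequency block of coefficients as the feature vector. The decimation factor and coefficient count below are illustrative assumptions, not the paper's tuned values.

import numpy as np
from scipy.fft import dctn

def extract_features(face, decimation=2, n_coeffs=8):
    face = face[::decimation, ::decimation]      # first reduction: decimation
    coeffs = dctn(face, type=2, norm="ortho")    # 2-D DCT of the small image
    return coeffs[:n_coeffs, :n_coeffs].ravel()  # second reduction: low/mid block

# Illustrative usage: nearest-neighbour matching on the DCT feature vectors.
# gallery = {label: extract_features(img) for label, img in training_faces}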

Keywords: Biometrics, DCT, Face Recognition, Feature extraction.

Downloads: 1449
167 FSM-based Recognition of Dynamic Hand Gestures via Gesture Summarization Using Key Video Object Planes

Authors: M. K. Bhuyan

Abstract:

The use of the human hand as a natural interface for human-computer interaction (HCI) serves as the motivation for research in hand gesture recognition. Vision-based hand gesture recognition involves visual analysis of hand shape, position and/or movement. In this paper, we use the concept of object-based video abstraction for segmenting the frames into video object planes (VOPs), as used in MPEG-4, with each VOP corresponding to one semantically meaningful hand position. Next, the key VOPs are selected on the basis of the amount of change in hand shape: for a given key frame in the sequence, the next key frame is the one in which the hand changes its shape significantly. Thus, an entire video clip is transformed into a small number of representative frames that are sufficient to represent a gesture sequence. Subsequently, we model a particular gesture as a sequence of key frames, each bearing information about its duration; these constitute a finite state machine (FSM). For recognition, the states of the incoming gesture sequence are matched with the states of all the FSMs contained in the database of the gesture vocabulary. The core idea of our proposed representation is that redundant frames of the gesture video sequence bear only the temporal information of a gesture and hence are discarded for computational efficiency. Experimental results demonstrate the effectiveness of our proposed scheme for key frame extraction, subsequent gesture summarization and finally gesture recognition.
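
A minimal sketch of FSM-based gesture matching under the assumption that each gesture in the vocabulary is stored as a sequence of key states with (min, max) duration bounds. Shape comparison by Hausdorff distance is abstracted away: the states here are pre-matched identifiers, a simplification of the paper's pipeline.

def accepts(fsm, observed):
    """fsm: list of (state_id, min_dur, max_dur); observed: list of (state_id, dur)."""
    if len(observed) != len(fsm):
        return False
    for (state, lo, hi), (obs_state, dur) in zip(fsm, observed):
        if obs_state != state or not (lo <= dur <= hi):
            return False   # wrong hand shape or implausible duration
    return True

def recognize(gesture_db, observed):
    # Return the first gesture whose FSM accepts the observed key-state sequence.
    return next((name for name, fsm in gesture_db.items() if accepts(fsm, observed)), None)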

Keywords: Hand gesture, MPEG-4, Hausdorff distance, finite state machine.

Downloads: 1990
166 Fuzzy Wavelet Packet based Feature Extraction Method for Multifunction Myoelectric Control

Authors: Rami N. Khushaba, Adel Al-Jumaily

Abstract:

The myoelectric signal (MES) is one of the biosignals utilized to help humans control equipment. Recent pattern recognition approaches to MES classification for controlling prosthetic devices have revealed two problems: first, the classification performance of the system starts degrading as the number of motion classes to be classified increases; second, the additional, more complicated methods used to solve the first problem increase the computational cost of a multifunction myoelectric control system. In an effort to solve these problems and to achieve a feasible design for real-time implementation with high overall accuracy, this paper presents a new method for feature extraction in MES recognition systems. The method extracts features by applying the Wavelet Packet Transform (WPT) to the MES from multiple channels, and then employs the fuzzy c-means (FCM) algorithm to generate a measure of each feature's suitability for classification. Finally, Principal Component Analysis (PCA) is utilized to reduce the size of the data before computing the classification accuracy with a multilayer perceptron neural network. The proposed system produces powerful classification results (99% accuracy) using only a small portion of the original feature set.
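
A minimal sketch of the feature-extraction pipeline: wavelet packet energies per channel, then PCA to shrink the feature vector before classification. The FCM-based suitability scoring step is omitted, and the wavelet and level choices are illustrative assumptions rather than the paper's settings.

import numpy as np
import pywt
from sklearn.decomposition import PCA

def wpt_energies(channel, wavelet="db4", level=3):
    wp = pywt.WaveletPacket(channel, wavelet=wavelet, maxlevel=level)
    return np.array([np.sum(node.data ** 2) for node in wp.get_level(level)])

def extract(mes_window):                 # mes_window: (n_channels, n_samples)
    return np.concatenate([wpt_energies(ch) for ch in mes_window])

# X = np.vstack([extract(w) for w in windows])        # one row per analysis window
# X_small = PCA(n_components=10).fit_transform(X)     # feed to an MLP classifier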

Keywords: Biomedical Signal Processing, Data Mining and Information Extraction, Machine Learning, Rehabilitation.

Downloads: 1697
165 Sleep Scheduling Schemes Based on Location of Mobile User in Sensor-Cloud

Authors: N. Mahendran, R. Priya

Abstract:

Mobile cloud computing (MCC) combined with wireless sensor network (WSN) technology has attracted increasing attention from researchers because it combines the data-gathering ability of sensors with the data-processing capacity of the cloud. This approach overcomes the limited data storage capacity and computational ability of sensor nodes, and the stored data are sent to mobile users on request. Most integrated sensor-cloud schemes fail to account for two observations: 1) mobile users request specific data from the cloud based on their present location, and 2) power consumption is critical, since most sensor nodes are equipped with non-rechargeable batteries and are mostly deployed in hazardous and remote areas. This paper focuses on these observations and introduces an approach known as the collaborative location-based sleep scheduling (CLSS) scheme. The awake or asleep status of each sensor node is dynamically determined by schedulers purely on the basis of the mobile users' current location; in this manner, a large amount of energy consumption in the WSN is avoided. CLSS comprises two methods: CLSS1 provides lower energy consumption, while CLSS2 provides scalability and robustness for the integrated WSN.
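
A minimal sketch of the location-based sleep decision: keep a node awake only if some mobile user is within its sensing radius plus a guard margin for user movement. The planar geometry and thresholds are illustrative assumptions, not the CLSS1/CLSS2 rules themselves.

from math import hypot

def schedule(nodes, users, sensing_radius=30.0, guard=10.0):
    """nodes/users: dicts of id -> (x, y). Returns id -> 'awake' | 'asleep'."""
    def near_user(pos):
        return any(hypot(pos[0] - u[0], pos[1] - u[1]) <= sensing_radius + guard
                   for u in users.values())
    return {nid: ("awake" if near_user(p) else "asleep") for nid, p in nodes.items()}

# Re-running schedule() whenever user locations update lets distant nodes sleep,
# which is where the WSN energy saving comes from.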

Keywords: Sleep scheduling, mobile cloud computing, wireless sensor network, integration, location, network lifetime.

Downloads: 939
164 Solving Part Type Selection and Loading Problem in Flexible Manufacturing System Using Real Coded Genetic Algorithms – Part II: Optimization

Authors: Wayan F. Mahmudy, Romeo M. Marian, Lee H. S. Luong

Abstract:

This paper presents the modeling and optimization of two NP-hard problems in flexible manufacturing systems (FMS), the part type selection problem and the loading problem. Due to the complexity and extent of the problems, the paper was split into two parts. The first part discussed the modeling of the problems and showed how real-coded genetic algorithms (RCGA) can be applied to solve them. This second part discusses the effectiveness of the RCGA, which uses an array of real numbers as its chromosome representation. The novel proposed chromosome representation produces only feasible solutions, eliminating the computational time the GA would otherwise need to push its population toward the feasible search space or to repair infeasible chromosomes. The proposed RCGA improves FMS performance by considering two objectives: maximizing system throughput and maintaining the balance of the system (minimizing system unbalance). The resulting objective values are compared to the optimum values produced by the branch-and-bound method. The experiments show that the proposed RCGA reaches near-optimum solutions in a reasonable amount of time.
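
A minimal sketch of a real-coded chromosome that always decodes to a feasible assignment: one real gene per part type, whose value selects a machine from that part's eligible machine list, so no repair step is ever needed. This is an illustrative decoding in the spirit of the abstract, not necessarily the paper's exact scheme.

import random

def decode(chromosome, eligible):
    """chromosome: floats in [0, 1); eligible: per-part lists of allowed machines."""
    return [machines[int(g * len(machines))]        # gene picks an index
            for g, machines in zip(chromosome, eligible)]

eligible = [[0, 2], [1], [0, 1, 2]]                 # machines allowed per part
chromosome = [random.random() for _ in eligible]
print(decode(chromosome, eligible))                 # always a feasible loading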

Keywords: Flexible manufacturing system, production planning, part type selection problem, loading problem, real-coded genetic algorithm.

Downloads: 1925
163 Markov Game Controller Design Algorithms

Authors: Rajneesh Sharma, M. Gopal

Abstract:

Markov games are a generalization of the Markov decision process to a multi-agent setting. The two-player zero-sum Markov game framework offers an effective platform for designing robust controllers. This paper presents two novel controller design algorithms that use ideas from the game-theory literature to produce reliable controllers able to maintain performance in the presence of noise and parameter variations. A more widely used approach for robust controller design is H∞ optimal control, which suffers from high computational demand and may at times be infeasible. Our approach generates an optimal control policy for the agent (controller) via a simple linear program, enabling the controller to learn about the unknown environment; in our formulation, this environment corresponds to the behavior rules of the noise, modeled as the opponent. The proposed controller architectures attempt to improve controller reliability by gradually mixing algorithmic approaches drawn from the game-theory literature with the minimax-Q Markov game solution approach, in a reinforcement-learning framework. We test the proposed algorithms on a simulated inverted pendulum swing-up task and compare their performance against standard Q-learning.
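
A minimal sketch of the minimax step at the heart of minimax-Q: given the Q-matrix of a zero-sum stage game (rows = controller actions, columns = opponent/noise actions), a linear program yields the controller's security strategy and the game value. This is illustrative of the "simple linear program" mentioned above, not the paper's full algorithm.

import numpy as np
from scipy.optimize import linprog

def minimax_policy(Q):
    n_a, n_o = Q.shape
    # Variables [pi_1..pi_na, v]; maximize v s.t. pi^T Q[:,o] >= v for every
    # opponent action o, with pi a probability distribution.
    c = np.zeros(n_a + 1); c[-1] = -1.0             # linprog minimizes -v
    A_ub = np.hstack([-Q.T, np.ones((n_o, 1))])     # v - pi^T Q[:,o] <= 0
    A_eq = np.ones((1, n_a + 1)); A_eq[0, -1] = 0.0
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n_o), A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * n_a + [(None, None)])
    return res.x[:n_a], res.x[-1]                   # mixed policy, game value

pi, value = minimax_policy(np.array([[1.0, -1.0], [-0.5, 0.5]]))  # pi ~ [1/3, 2/3]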

Keywords: Reinforcement learning, Markov Decision Process, Matrix Games, Markov Games, Smooth Fictitious play, Controller, Inverted Pendulum.

Downloads: 1477
162 Numerical Modelling of Shear Zone and Its Implications on Slope Instability at Letšeng Diamond Open Pit Mine, Lesotho

Authors: M. Ntšolo, D. Kalumba, N. Lefu, G. Letlatsa

Abstract:

Rock mass damage due to shear tectonic activity has been investigated largely in geoscience, where fluid transport is of major interest. However, little has been studied about the effect of shear zones on rock mass behavior and their impact on the stability of rock slopes. At Letšeng Diamonds open pit mine in Lesotho, a shear zone composed of sheared kimberlite material, calcite and altered basalt forms part of the haul ramp into the main pit cut 3. The alarming rate at which the shear zone is deteriorating has triggered concerns about both the local and global stability of the pit walls. This study presents the numerical modelling of the open pit slope affected by the shear zone at Letšeng Diamond Mine (LDM). Analysis of the slope involved development of the slope model using the two-dimensional finite element code RS2. Interfaces between the shear zone and the host rock were represented by special joint elements incorporated in the finite element code. Analysis of structural geological mapping data provided a good platform for understanding the joint network, and major joints including the shear zone were incorporated into the model for simulation. This approach proved successful by demonstrating that continuum modelling can be used to evaluate the evolution of stresses, strains, plastic yielding and failure mechanisms consistent with field observations. The location, size and orientation of the geological shear zone proved to exert important structural control. Furthermore, the model analyzed slope deformation and the possibility of sliding along the shear zone interfaces. This type of approach can predict shear zone deformation and failure mechanisms, so that mitigation strategies can be deployed for the safety of human lives and property within mine pits.

Keywords: Numerical modeling, open pit mine, shear zone, slope stability.

Downloads: 1712
161 Reinforcement of Calcium Phosphate Cement with E-Glass Fibre

Authors: Sudip Dasgupta, Debosmita Pani, Kanchan Maji

Abstract:

Calcium phosphate cement (CPC), owing to its high bioactivity and optimum bioresorbability, shows excellent bone regeneration capability. Nevertheless, its applications as a bone implant are limited because its macro-porous microstructure gives it poor mechanical strength. The reinforcement of apatitic CPCs with a biocompatible glass fibre phase is an attractive area of research for improving their mechanical strength. Here, we study the setting behaviour of Si-doped and un-doped α-tricalcium phosphate (α-TCP) based CPC and its reinforcement with the addition of E-glass fibre. Alpha-tricalcium phosphate powders were prepared by solid-state sintering of CaCO3 and CaHPO4, and tetraethyl orthosilicate (TEOS) was used as the silicon source to synthesize the Si-doped α-TCP powders. Both the initial and final setting times of the developed cement were delayed by Si addition. Crystalline phases of HA (JCPDS 9-432), α-TCP (JCPDS 29-359) and β-TCP (JCPDS 9-169) were detected in the X-ray diffraction (XRD) pattern after immersion of the CPC in simulated body fluid (SBF) for 0 hours to 10 days. As Si incorporation in the crystal lattice stabilized the TCP phase, the Si-doped CPC showed a slightly slower rate of conversion into the HA phase compared to the un-doped CPC. The SEM image of the microstructure of the hardened CPC showed a lower HA grain size in the un-doped CPC because of its premature setting and faster hydrolysis in SBF compared to the Si-doped CPC. Premature setting generated micro- and macro-porosity in the un-doped CPC structure, which resulted in lower mechanical strength compared to the Si-doped CPC. It was found that the addition of 10 wt% E-glass fibre into Si-doped α-TCP increased the average DTS of the CPC from 8 MPa to 15 MPa, as the fibres resist crack propagation by deflecting the crack tip. Our study shows that biocompatible E-glass fibre, in optimum proportion in the CPC matrix, can enhance the mechanical strength of CPC without affecting its biocompatibility.

Keywords: Calcium phosphate cement, biocompatibility, e-glass fibre, diametral tensile strength.

Downloads: 2165
160 A New Fast Intra Prediction Mode Decision Algorithm for H.264/AVC Encoders

Authors: A. Elyousfi, A. Tamtaoui, E. Bouyakhf

Abstract:

The H.264/AVC video coding standard contains a number of advanced features. One of the new features introduced in this standard is multiple intra-mode prediction, which exploits directional spatial correlation with adjacent blocks for intra prediction. With this new feature, intra coding in H.264/AVC offers considerably higher coding efficiency than other compression standards, but computational complexity increases significantly when the brute-force rate distortion optimization (RDO) algorithm is used. In this paper, we propose a new fast intra prediction mode decision method to reduce the complexity of H.264 video coding. For luma intra prediction, the proposed method consists of two steps. In the first step, RDO is performed for only four modes of the intra 4x4 block; based on the distribution of the RDO costs of those modes and the strong correlation with adjacent modes, the best mode of the intra 4x4 block is selected. In the second step, based on the fact that the dominating direction of a smaller block is similar to that of a bigger block, the candidate modes of 8x8 blocks and 16x16 macroblocks are determined. For chroma intra prediction, since the variance of the chroma pixel values is much smaller than that of the luma ones, the proposed method uses only the DC mode. Experimental results show that the new fast intra mode decision algorithm increases the speed of intra coding significantly with negligible loss of PSNR.
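
A minimal sketch of the two-step fast luma decision described above. The rd_cost callable stands in for the encoder's rate-distortion evaluation, and the mode names and reduced candidate set are illustrative assumptions.

INITIAL_MODES_4X4 = ("DC", "vertical", "horizontal", "diag_down_left")

def fast_intra_decision(block4, block16, rd_cost):
    # Step 1: full RDO only on a reduced set of 4x4 modes.
    costs = {m: rd_cost(block4, m) for m in INITIAL_MODES_4X4}
    best4 = min(costs, key=costs.get)
    # Step 2: a smaller block's dominant direction predicts the bigger block's,
    # so the 16x16 decision tries only the inherited candidate plus DC.
    best16 = min({best4, "DC"}, key=lambda m: rd_cost(block16, m))
    return best4, best16

# Chroma prediction skips the search entirely and always uses the DC mode.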

Keywords: Intra prediction, H.264/AVC, video coding, encoder complexity.

Downloads: 2442
159 Surrogate based Evolutionary Algorithm for Design Optimization

Authors: Maumita Bhattacharya

Abstract:

Optimization is often a critical issue for most system design problems. Evolutionary algorithms are population-based, stochastic search techniques, widely used as efficient global optimizers. However, finding optimal solutions to complex, high-dimensional, multimodal problems often requires highly computationally expensive function evaluations and hence can be practically prohibitive. The Dynamic Approximate Fitness based Hybrid EA (DAFHEA) model presented in our earlier work [14] reduced computation time by the controlled use of meta-models to partially replace actual function evaluations with approximate ones. However, the underlying assumption in DAFHEA is that the training samples for the meta-model are generated from a single uniform model. Situations involving variable input dimensions and noisy data certainly cannot be covered by this assumption. In this paper we present an enhanced version of DAFHEA that incorporates a multiple-model based learning approach for the SVM approximator. DAFHEA-II (the enhanced version of the DAFHEA framework) also avoids the high computational expense of the additional clustering required by the original DAFHEA framework. The proposed framework has been tested on several benchmark functions, and the empirical results illustrate the advantages of the proposed technique.
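
A minimal sketch of surrogate-assisted evolutionary search in the spirit of the abstract: an SVM regressor is trained on exactly evaluated individuals and screens offspring, so only the most promising few receive a true (expensive) evaluation. Population sizes, mutation scale and screening rate are illustrative assumptions, not DAFHEA's actual mechanics.

import numpy as np
from sklearn.svm import SVR

def surrogate_ea(f, dim, pop=40, gens=50, true_evals=5):
    X = np.random.uniform(-5, 5, (pop, dim))
    y = np.array([f(x) for x in X])                 # exact fitness (expensive)
    for _ in range(gens):
        model = SVR().fit(X, y)                     # refit the meta-model
        parents = X[np.argsort(y)[:pop]]            # best individuals so far
        children = parents[np.random.randint(len(parents), size=pop)] \
                   + np.random.normal(0.0, 0.3, (pop, dim))   # Gaussian mutation
        # Surrogate screening: only the most promising children get a true eval.
        promising = children[np.argsort(model.predict(children))[:true_evals]]
        X = np.vstack([X, promising])
        y = np.concatenate([y, [f(x) for x in promising]])
    return X[np.argmin(y)], y.min()

best_x, best_f = surrogate_ea(lambda x: np.sum(x ** 2), dim=5)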

Keywords: Evolutionary algorithm, Fitness function, Optimization, Meta-model, Stochastic method.

Downloads: 1530
158 A Microcontroller Implementation of Constrained Model Predictive Control

Authors: Amira Kheriji Abbes, Faouzi Bouani, Mekki Ksouri

Abstract:

Model Predictive Control (MPC) is an established control technique in a wide range of process industries. The reason for this success is its ability to handle multivariable systems and systems having input, output or state constraints. Nevertheless, compared to the PID controller, implementation of MPC in miniaturized devices like Field Programmable Gate Arrays (FPGAs) and microcontrollers has historically been very limited due to its implementation complexity and computation time requirements. At the same time, such embedded technologies have become an enabler for future manufacturing enterprises as well as a transformer of organizations and markets. In this work, we take advantage of these recent advances in the deployment of one of the most studied and applied control techniques in industrial engineering. We propose efficient firmware for the implementation of constrained MPC on the STM32 microcontroller using the interior point method. A performance study shows good execution speed and low computational burden. These results encourage the development of predictive control algorithms programmed for standard industrial processes. A PID anti-windup controller was also implemented on the STM32 in order to make a performance comparison with the MPC. The main features of the proposed constrained MPC framework are illustrated through two examples.
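
A minimal sketch of one constrained-MPC step as a small optimization problem: predict over a horizon with x_{k+1} = A x_k + B u_k, penalize state and input deviations, and bound the inputs. SciPy's SLSQP is used here purely for illustration in place of the paper's interior-point solver, and the matrices are toy values.

import numpy as np
from scipy.optimize import minimize

A, B = np.array([[1.0, 0.1], [0.0, 1.0]]), np.array([[0.005], [0.1]])
Q, R, N, u_max = np.diag([1.0, 0.1]), 0.01, 10, 1.0

def mpc_step(x0):
    def cost(u):
        x, J = x0.copy(), 0.0
        for uk in u:                                # simulate the prediction horizon
            x = A @ x + B.flatten() * uk
            J += x @ Q @ x + R * uk ** 2
        return J
    res = minimize(cost, np.zeros(N), method="SLSQP",
                   bounds=[(-u_max, u_max)] * N)    # input constraints
    return res.x[0]                                 # apply only the first move

u0 = mpc_step(np.array([1.0, 0.0]))                 # receding-horizon usage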

Keywords: Embedded software, microcontroller, constrained Model Predictive Control, interior point method, PID anti-windup, Keil tool, C/Cµ language.

Downloads: 2745
157 Modeling and Optimization of Part Type Selection and Loading Problem in Flexible Manufacturing System Using Real Coded Genetic Algorithms

Authors: Wayan F. Mahmudy, Romeo M. Marian, Lee H. S. Luong

Abstract:

This paper deals with the modeling and optimization of two NP-hard problems in the production planning of flexible manufacturing systems (FMS): the part type selection problem and the loading problem. The two problems are strongly related and heavily influence the system's efficiency and productivity. They have been modeled and solved simultaneously using a real-coded genetic algorithm (RCGA) that uses an array of real numbers as its chromosome representation. The novel proposed chromosome representation produces only feasible solutions, eliminating the computational time the GA would otherwise need to push its population toward the feasible search space or to repair infeasible chromosomes. The proposed RCGA improves FMS performance by considering two objectives: maximizing system throughput and maintaining the balance of the system (minimizing system unbalance). The resulting objective values are compared to the optimum values produced by the branch-and-bound method. The experiments show that the proposed RCGA reaches near-optimum solutions in a reasonable amount of time.
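
A minimal sketch of a fitness function combining the two stated objectives, maximizing throughput while minimizing system unbalance (the spread of workload across machines). The scalarization and weighting below are illustrative assumptions; the abstract does not publish the paper's exact formulation.

import numpy as np

def fitness(throughput, machine_workloads, w_unbalance=0.5):
    workloads = np.asarray(machine_workloads, dtype=float)
    unbalance = np.sum(np.abs(workloads - workloads.mean()))  # deviation from balance
    return throughput - w_unbalance * unbalance               # higher is better

print(fitness(throughput=120.0, machine_workloads=[40.0, 38.0, 42.0]))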

Keywords: Flexible manufacturing system, production planning, part type selection problem, loading problem, real-coded genetic algorithm.

Downloads: 2596
156 Motion Prediction and Motion Vector Cost Reduction during Fast Block Motion Estimation in MCTF

Authors: Karunakar A K, Manohara Pai M M

Abstract:

In the 3D-wavelet video coding framework, temporal filtering is done along the motion trajectory using Motion Compensated Temporal Filtering (MCTF). Hence, a computationally efficient motion estimation technique is essential for MCTF. In this paper, a predictive technique is proposed to reduce the computational complexity of the MCTF framework by exploiting the high correlation among the frames in a Group of Pictures (GOP). The proposed technique applies the coarse and fine searches of any fast block-based motion estimation only to the first pair of frames in a GOP. The generated motion vectors are supplied to the subsequent frames, even at subsequent temporal levels, and only a fine search is carried out around those predicted motion vectors. Hence, the coarse search is skipped for all motion estimation in a GOP except for the first pair of frames. The technique has been tested with different fast block-based motion estimation algorithms on different standard test sequences using MC-EZBC, a state-of-the-art scalable video coder. The simulation results reveal a substantial reduction (20.75% to 38.24%) in the number of search points during motion estimation, without compromising the quality of the reconstructed video compared to non-predictive techniques. Since the motion vectors of every frame pair in a GOP except the first lie within ±1 of the motion vectors of the previous pair, the number of bits required for motion vectors is also reduced by 50%.
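
A minimal sketch of the predictive refinement idea: only the GOP's first frame pair runs a full coarse-plus-fine search; later pairs inherit the previous motion vector and refine it within a ±1 window. The sad helper is an illustrative block-matching cost, and boundary handling is omitted.

import numpy as np

def sad(cur, ref, bx, by, dx, dy, size=16):
    a = cur[by:by+size, bx:bx+size]
    b = ref[by+dy:by+dy+size, bx+dx:bx+dx+size]
    return np.abs(a.astype(int) - b.astype(int)).sum()

def fine_refine(cur, ref, bx, by, mv_pred):
    # Search only the 3x3 neighbourhood around the predicted vector,
    # skipping the coarse search entirely.
    candidates = [(mv_pred[0] + dx, mv_pred[1] + dy)
                  for dx in (-1, 0, 1) for dy in (-1, 0, 1)]
    return min(candidates, key=lambda mv: sad(cur, ref, bx, by, *mv))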

Keywords: Motion Compensated Temporal Filtering, predictive motion estimation, lifted wavelet transform, motion vector.

Downloads: 1572
155 Reduction of Plants Biodiversity in Hyrcanian Forest by Coal Mining Activities

Authors: Mahsa Tavakoli, Seyed Mohammad Hojjati, Yahya Kooch

Abstract:

Coal mining is an important industrial activity, but it may cause damage to the environment. To the authors' best knowledge, the effect of traditional coal mining activities on plant biodiversity has not been investigated in the Hyrcanian forests. Therefore, in this study, the effect of coal mining activities on vegetation and tree diversity was investigated in the Hyrcanian forest, northern Iran. After field visits and selection of the mine, 16 plots (20×20 m) were established systematically at random on a 60×60 m grid in an area of 4 ha (200×200 m, with the mine entrance at the center). An adjacent area unaffected by the mining activity was considered as the control. In each plot, tree data such as the number and type of species were recorded. The biodiversity of the vegetation cover was assessed in five 1 m² sub-plots within each plot. PAST software and Ecological Methodology were used to calculate biodiversity indices. The values of the Shannon-Wiener and Simpson diversity indices for tree cover in the control area (1.04±0.34 and 0.62±0.20) were significantly higher than in the mining area (0.78±0.27 and 0.45±0.14). The value of the evenness index for tree cover in the mining area was significantly lower than in the control area. The values of the Shannon-Wiener and Simpson diversity indices for vegetation cover in the control area (1.37±0.06 and 0.69±0.02) were significantly higher than in the mining area (1.02±0.13 and 0.50±0.07). The value of the evenness index in the control area was significantly higher than in the mining area. Plant communities are a good indicator of changes in a site. Studying changes in vegetation biodiversity and plant dynamics in degraded land can provide the information necessary for forest management and reforestation of these areas.
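
A minimal sketch of the diversity indices reported above, computed from species abundance counts: Shannon-Wiener H' = -Σ p_i ln p_i, Simpson diversity taken here as 1 - Σ p_i², and Pielou evenness J = H' / ln(S). The exact index variants used by the paper are an assumption.

import numpy as np

def diversity(counts):
    p = np.asarray(counts, dtype=float)
    p = p[p > 0] / p.sum()                      # relative abundances
    shannon = -np.sum(p * np.log(p))
    simpson = 1.0 - np.sum(p ** 2)
    evenness = shannon / np.log(len(p)) if len(p) > 1 else 0.0
    return shannon, simpson, evenness

print(diversity([12, 7, 3, 1]))   # e.g. tree counts of four species in one plot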

Keywords: Vegetation biodiversity, species composition, traditional coal mining, Caspian forest.

Downloads: 831
154 A Novel Approach for Coin Identification using Eigenvalues of Covariance Matrix, Hough Transform and Raster Scan Algorithms

Authors: J. Prakash, K. Rajesh

Abstract:

In this paper we present a new method for coin identification. The proposed method adopts a hybrid scheme using the eigenvalues of the covariance matrix, the Circular Hough Transform (CHT) and Bresenham's circle algorithm. The statistical and geometrical properties of the small and large eigenvalues of the covariance matrix of a set of edge pixels over a connected region of support are explored for the purpose of circular object detection. A sparse matrix technique is used to perform the CHT. Since sparse matrices squeeze out zero elements and contain only a small number of non-zero elements, they provide savings in matrix storage space and computational time. A neighborhood suppression scheme is used to find the valid Hough peaks. The accurate position of the circumference pixels is identified using the raster scan algorithm, which exploits geometrical symmetry. After finding circular objects, the proposed method uses the texture on the surface of the coins, called textons, which refer to the fundamental micro-structures in generic natural images and are unique properties of coins. The method has been tested on several real-world images, including coin and non-coin images. The performance is also evaluated based on the noise-withstanding capability.
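
A minimal sketch of the eigenvalue test: for the edge pixels of a connected region, the 2x2 covariance matrix of pixel coordinates has nearly equal eigenvalues when the region is circular and skewed eigenvalues when it is elongated. The acceptance ratio is an illustrative threshold, not the paper's calibrated value.

import numpy as np

def is_circular(edge_pixels, ratio_threshold=0.8):
    """edge_pixels: (N, 2) array of (x, y) coordinates of one connected region."""
    cov = np.cov(np.asarray(edge_pixels, dtype=float).T)   # 2x2 covariance
    small, large = np.sort(np.linalg.eigvalsh(cov))
    return small / large >= ratio_threshold                # ratio ~1 for circles

theta = np.linspace(0, 2 * np.pi, 200)
circle = np.c_[50 + 20 * np.cos(theta), 50 + 20 * np.sin(theta)]
print(is_circular(circle))   # True: a circle's eigenvalues are equal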

Keywords: Circular Hough Transform, Coin detection, Covariance matrix, Eigenvalues, Raster scan Algorithm, Texton.

Downloads: 1843
153 Variable vs. Fixed Window Width Code Correlation Reference Waveform Receivers for Multipath Mitigation in Global Navigation Satellite Systems with Binary Offset Carrier and Multiplexed Binary Offset Carrier Signals

Authors: Fahad Alhussein, Huaping Liu

Abstract:

This paper compares the multipath mitigation performance of code correlation reference waveform receivers with variable and fixed window width, for binary offset carrier and multiplexed binary offset carrier signals typically used in global navigation satellite systems. In the variable window width method, the width is iteratively reduced until the distortion on the discriminator caused by multipath is eliminated. This distortion is measured as the Euclidean distance between the actual discriminator (obtained with the incoming signal) and the local discriminator (generated with a local copy of the signal). The variable window width has shown better performance than the fixed window width. In particular, the former yields zero error for all delays for the BOC and MBOC signals considered, while the latter gives rather large nonzero errors for small delays in all cases. Due to its computational simplicity, the variable window width method is perfectly suitable for implementation in low-cost receivers.
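
A minimal sketch of the variable-window iteration described above: shrink the code-correlation reference window until the Euclidean distance between the actual (multipath-affected) discriminator and the clean local discriminator drops below a tolerance. The discriminator callable, shrink factor and tolerance are hypothetical stand-ins for the receiver's correlator model.

import numpy as np

def choose_window(incoming, local, discriminator, w0=1.0, shrink=0.9, tol=1e-3):
    w = w0
    while w > 1e-3:
        actual = discriminator(incoming, width=w)     # with multipath
        clean = discriminator(local, width=w)         # local signal copy
        if np.linalg.norm(actual - clean) < tol:      # distortion eliminated
            return w
        w *= shrink                                   # iteratively reduce width
    return w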

Keywords: Correlation reference waveform receivers, binary offset carrier, multiplexed binary offset carrier, global navigation satellite systems.

Downloads: 423
152 Brain Image Segmentation Using Conditional Random Field Based On Modified Artificial Bee Colony Optimization Algorithm

Authors: B. Thiagarajan, R. Bremananth

Abstract:

A tumor is an uncontrolled growth of tissue in any part of the body. Tumors are of different types and have different characteristics and treatments. A brain tumor is inherently serious and life-threatening because of its behavior in the limited space of the intracranial cavity (the space formed inside the skull). Locating the tumor within an MR (magnetic resonance) image of the brain is an integral part of brain tumor treatment. This segmentation task requires classifying each voxel as either tumor or non-tumor, based on the description of the voxel under consideration. Many studies in the medical field use Markov Random Fields (MRFs) for the segmentation of MR images. Even though the segmentation quality is good, computing the probabilities and estimating the parameters are difficult. To overcome these issues, a Conditional Random Field (CRF) is used in this paper for segmentation, along with a modified artificial bee colony optimization and a modified fuzzy possibility c-means (MFPCM) algorithm. This work is mainly focused on reducing the computational complexity found in existing methods and on achieving higher accuracy. The efficiency of this work is evaluated using parameters such as region non-uniformity, correlation and computation time. The experimental results are compared with existing methods such as MRF with an improved Genetic Algorithm (GA) and the MRF-Artificial Bee Colony (MRF-ABC) algorithm.

Keywords: Conditional random field, Magnetic resonance, Markov random field, Modified artificial bee colony.

Downloads: 2899
151 Sedimentary Response to Coastal Defense Works in São Vicente Bay, São Paulo

Authors: L. C. Ansanelli, P. Alfredini

Abstract:

The article presents an evaluation of the effectiveness of two groins located at Gonzaguinha and Milionários Beaches, situated on the southeast coast of Brazil. The effectiveness of these coastal defense structures is evaluated in terms of sedimentary dynamics, which is one of the most important environmental processes to be assessed in coastal engineering studies. The applied method is based on the Delft3D numerical model system: the Delft3D-WAVE module was used for wave modelling, Delft3D-FLOW for hydrodynamic modelling and Delft3D-SED for sediment transport modelling. The models were calibrated so that the simulations adequately represent the studied region, with improvements to the model elements evaluated through statistical comparisons of similarity between the results and the wave, current and tide data recorded in the study area. Analysis of the maximum wave heights was carried out to select the months with the highest accumulated energy, and these conditions were implemented in the engineering scenarios. The engineering studies were performed for two scenarios: 1) numerical simulation of the area considering only the two existing groins; 2) conception of breakwaters coupled to the ends of the existing groins, resulting in two “T”-shaped structures. The sediment model showed that, for the simulated period, the area is affected by erosive processes and that the existing groins have little effectiveness in defending the coast in question. The implemented T structures showed some effectiveness in protecting the beaches against erosion and provided recovery of the portion of Milionários Beach directly sheltered by them. To complement this study, further engineering scenarios are suggested that might recover other areas of the studied region.

Keywords: Coastal engineering, coastal erosion, Sao Vicente Bay, Delft3D, coastal engineering works.

Downloads: 679
150 Structural Performance Evaluation of Electronic Road Sign Panels Reflecting Damage Scenarios

Authors: Junwon Seo, Bipin Adhikari, Euiseok Jeong

Abstract:

This paper evaluates the structural performance of welded electronic road signs under various damage scenarios (DSs) using a finite element (FE) model calibrated with full-scale ultimate load testing results. The tested electronic road sign specimen was built with a back skin made of 5052 aluminum and two channels and a frame made of 6061 aluminum, where the back skin was connected to the frame by welding. The tested specimen was 1.52 m long, 1.43 m wide, and 0.28 m deep. An actuator applied vertical loads at the center of the back skin of the specimen, resulting in a displacement of 158.7 mm and an ultimate load of 153.46 kN. Using these testing data, an FE model of the tested specimen was generated and calibrated in ABAQUS; the difference in ultimate load between the calibrated model simulation and the full-scale testing was only 3.32%. Six different DSs were then simulated by diminishing the areas of the welded connection in the calibrated model. It was found that the corners at the back skin-frame joint were prone to connection failure for all the DSs, and that failure of the back skin-frame connection occurred markedly from the distant edges.

Keywords: Computational analysis, damage scenarios, electronic road signs, finite element, welded connections.

Downloads: 319
149 Using Emotional Learning in Rescue Simulation Environment

Authors: Maziar Ahmad Sharbafi, Caro Lucas, Abolfazel Toroghi Haghighat, Omid AmirGhiasvand, Omid Aghazade

Abstract:

RoboCup Rescue simulation, as a large-scale multi-agent system (MAS), is one of the most challenging environments for maintaining coordination between agents to achieve objectives despite sensing and communication limitations. The dynamic nature of the environment and the intense dependency between the actions of different kinds of agents make the problem more complex. This encouraged us to use learning-based methods to adapt our decision making to different situations; our approach utilizes reinforcement learning. Using learning in rescue simulation has been the subject of several research efforts in recent years. In this paper we present an innovative learning method implemented for the Police Force (PF) agent. This method can cope with the main difficulties that exist in other learning approaches. Different methods used in the literature have been examined, and their drawbacks and possible improvements have led us to the method proposed in this paper, which is fast and accurate. The Brain Emotional Learning Based Intelligent Controller (BELBIC) is our solution for learning in this environment. BELBIC is a physiologically motivated approach based on a computational model of the amygdala and the limbic system. The paper presents the results obtained by the proposed approach, showing the power of BELBIC as a decision-making tool in complex and dynamic situations.

Keywords: Emotional learning, rescue, simulation environment, RoboCup, multi-agent system.

Downloads: 1568
148 Computer Models of the Vestibular Head Tilt Response, and Their Relationship to EVestG and Meniere's Disease

Authors: Daniel Heibert, Brian Lithgow, Kerry Hourigan

Abstract:

This paper attempts to explain the response components of Electrovestibulography (EVestG) using a computer simulation of a three-canal model of the vestibular system. EVestG is a potentially new diagnostic method for Meniere's disease. EVestG is a variant of Electrocochleography (ECOG), which has been used as a standard method for diagnosing Meniere's disease: it can be used to measure the SP/AP ratio, where an SP/AP ratio greater than 0.4-0.5 is indicative of Meniere's disease. In EVestG, an applied head tilt replaces the acoustic stimulus of ECOG. The EVestG output is also an SP/AP-type plot, where SP is the summing potential and AP is the action potential amplitude. AP is thought of as being proportional to the size of a population of afferents in an excitatory neural firing state. A simulation of the fluid volume displacement in the vestibular labyrinth in response to various types of head tilts (ipsilateral, backwards and horizontal rotation) was performed, and a simple neural model based on these simulations was developed. The simple neural model shows that the change in firing rate of the utricle is much larger in magnitude than the change in firing rates of all three semicircular canals following a head tilt (except in a horizontal rotation). The data suggest that the change in utricular firing rate is at least 2-3 orders of magnitude larger than the changes in firing rates of the canals during ipsilateral/backward tilts. Based on these results, the neural response recorded by the electrode in our EVestG recordings is expected to be dominated by the utricle in ipsilateral/backward tilts (note that the effects of the saccule and efferent signals were not taken into account in this model). If the utricle response dominates the EVestG recordings as the modeling results suggest, then EVestG has the potential to diagnose utricular hair cell damage due to a viral infection (which has been cited as one possible cause of Meniere's disease).

Keywords: Diagnostic, endolymph hydrops, Meniere's disease, modeling.

Downloads: 1471
147 Movie Genre Preference Prediction Using Machine Learning for Customer-Based Information

Authors: Haifeng Wang, Haili Zhang

Abstract:

Most movie recommendation systems have been developed to help customers find items of interest. This work introduces a predictive model usable by small and medium-sized enterprises (SMEs) that need a data-based, analytical approach to stock movies appropriate for local audiences and to retain more customers. We used classification models to extract features from thousands of customers' demographic, behavioral and social information to predict their movie genre preferences. In the implementation, a Gaussian kernel support vector machine (SVM) classification model and a logistic regression model were established to extract features from the sample data, and their in-sample test errors were compared. Out-of-sample errors were also compared under different Vapnik-Chervonenkis (VC) dimensions in the machine learning algorithm to detect and prevent overfitting. The Gaussian kernel SVM prediction model correctly predicts movie genre preferences in 85% of positive cases. The accuracy of the algorithm increased to 93% with a smaller VC dimension and less overfitting. These findings advance our understanding of how to use a machine learning approach to predict customers' preferences with a small data set and to design prediction tools for such enterprises.
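
A minimal sketch of the two classifiers compared above, fitted on customer feature vectors with binary genre-preference labels. The feature semantics and synthetic data are illustrative placeholders, not the paper's dataset.

import numpy as np
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))                 # e.g. age, visit rate, ratings...
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)  # synthetic "prefers genre" label
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

svm = SVC(kernel="rbf", gamma="scale").fit(X_tr, y_tr)     # Gaussian kernel SVM
logreg = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(svm.score(X_te, y_te), logreg.score(X_te, y_te))     # out-of-sample accuracy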

Keywords: Computational social science, movie preference, machine learning, SVM.

Downloads: 1598
146 Iterative Process to Improve Simple Adaptive Subdivision Surfaces Method with Butterfly Scheme

Authors: Noor Asma Husain, Mohd Shafry Mohd Rahim, Abdullah Bade

Abstract:

Subdivision surfaces are applied to entire meshes in order to produce smooth surface refinements from a coarse mesh. Several schemes have been introduced in this area, each providing a set of rules that converge to smooth surfaces. However, computing and rendering all the vertices is costly in terms of memory consumption and runtime during the subdivision process, leading to a heavy computational load, especially at higher subdivision levels. Adaptive subdivision is a method that subdivides only certain areas of the mesh while the rest is kept at a lower polygon count. Although subdivision occurs only in the selected areas, the smoothness of the produced surfaces can be preserved, comparable to regular subdivision. Nevertheless, the adaptive subdivision process is burdened by two overheads: calculations are needed to define the areas that require subdivision, and cracks created by the subdivision depth difference between selected and unselected areas must be removed. Unfortunately, at higher subdivision levels, adaptive subdivision still suffers from memory consumption problems. This research introduces an iterative adaptive subdivision process that improves the previous adaptive method and reduces memory consumption on triangular meshes. The iterative process achieved better memory usage and acceptable appearance, producing fewer polygons while preserving smooth surfaces.

Keywords: Subdivision surfaces, adaptive subdivision, selection criteria, handle cracks, smooth surface.

Downloads: 1576
145 Off-Policy Q-learning Technique for Intrusion Response in Network Security

Authors: Zheni S. Stefanova, Kandethody M. Ramachandran

Abstract:

With our increasing dependency on computer devices, we face the necessity of adequate, efficient and effective mechanisms for protecting our networks. There are two main problems that Intrusion Detection Systems (IDS) attempt to solve: 1) to detect an attack by analyzing the incoming traffic and inspecting the network (intrusion detection), and 2) to produce a prompt response when the attack occurs (intrusion prevention). It is critical to create an intrusion detection model that detects a breach in the system in time, and it is challenging to make it provide an automatic response with acceptable delay at every stage of the monitoring process. We cannot afford to adopt security measures that demand high computational power, nor can we accept a mechanism that reacts with a delay. In this paper, we propose an intrusion response mechanism based on artificial intelligence, and more precisely, reinforcement learning techniques (RLT). The RLT helps us create a decision agent that controls the process of interacting with the undetermined environment. The goal is to find an optimal policy representing the intrusion response; we solve this reinforcement learning problem using a Q-learning approach. Our agent produces an optimal immediate response while evaluating the network traffic. This Q-learning approach establishes the balance between exploration and exploitation and provides a unique, self-learning and strategic artificial intelligence response mechanism for IDS.
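
A minimal sketch of the off-policy Q-learning update behind such a response agent: states are abstracted traffic observations, actions are candidate responses, and an epsilon-greedy rule balances exploration and exploitation. The state space, action names and rewards are illustrative placeholders, not the paper's design.

import random
from collections import defaultdict

ACTIONS = ["allow", "rate_limit", "block_ip", "raise_alert"]
Q = defaultdict(float)                         # Q[(state, action)]
alpha, gamma, epsilon = 0.1, 0.9, 0.1

def choose(state):
    if random.random() < epsilon:              # exploration
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])   # exploitation

def update(state, action, reward, next_state):
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    # Off-policy target: the greedy value of the next state, regardless of the
    # action actually taken there.
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])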

Keywords: Intrusion prevention, network security, optimal policy, Q-learning.

Downloads: 965
144 Liquidity Risk of Banks in Light of a Dominant Share of Foreign Capital in the Polish Banking Sector

Authors: Karolina Patora

Abstract:

This article investigates liquidity risk management by banks, which has gained significant importance since the global financial crisis of 2008. The issue is of particular interest for countries like Poland, in which foreign capital plays a dominant role. Such an ownership structure poses certain risks to the local banking sector, which faces an increased probability of funding withdrawals or asset transfers abroad in case of a crisis. Both factors can have a detrimental influence on the liquidity position of foreign-owned banks and hence negatively affect the financial stability of the whole banking sector. The aim of this study is to evaluate the impact of the dominant share of foreign investors in the Polish banking sector on the liquidity position of commercial banks. The study hypothesizes that the ownership structure of the Polish banking sector, in which there are banks predominantly controlled by foreign investors, does not pose a threat to the liquidity position of Polish banks. A supplementary research hypothesis is that the liquidity risk profile of foreign-owned banks differs from that of domestic banks. The sample consists of 14 foreign-owned banks and 5 domestic banks owned by local investors, which together constitute approximately 87% of the banking sector's assets. The data cover the period 2004–2014. The results of the regression models show no evidence of significant differences in the dynamics of liquidity buffer changes between the foreign-owned and domestic banks, although the signs of the coefficients might suggest that the foreign-owned banks were decreasing their holdings of liquid assets at a slower pace over the examined period compared to the domestic banks. However, no proof of the statistical significance of these findings has been found. The supplementary research hypothesis that the liquidity risk profile of foreign-controlled banks differs from that of domestic banks was therefore rejected.

Keywords: Financial stability, foreign-owned banks, liquidity position, liquidity risk.

Downloads: 1052
143 The Determination of Cellulose Spiral Angle by Small-Angle X-Ray Scattering from Structurally Characterized Acacia mangium Cell Wall

Authors: Tamer A. Tabet, Fauziah Abdul Aziz, Shahidan Radiman

Abstract:

The spiral angle of the elementary cellulose fibril in the wood cell wall is often called the microfibril angle (MFA). The microfibril angle in hardwood is one of the key determinants of solid timber performance due to its strong influence on the stiffness, strength, shrinkage, swelling, dynamic mechanical and thermal properties, and dimensional stability of wood. The variation of MFA (degrees) in the S2 layer of the cell walls among Acacia mangium trees was determined using small-angle X-ray scattering (SAXS). The length and orientation of the microfibrils of the cell walls in the irradiated volume of the thin samples were measured using SAXS and an optical microscope for 3D surface measurement. The undetermined parameters in the analysis are the MFA (M) and the standard deviation (σΦ) of the intensity distribution arising from the wandering of the fibril orientation about the mean value. Nine separate pairs of values were determined for nine different values of the angle of incidence of the X-ray beam relative to the normal to the radial direction in the sample, and the results show good agreement. The distribution of scattered intensity for the real cell wall structure is compared with that calculated for an assembly of rectangular cells with the same ratio of transverse to radial cell wall length. It is demonstrated that for β = 45°, the peaks in the intensity distribution for the real and rectangular cells coincide. If this peak position is Φ45, then the MFA can be determined from the relation M = tan⁻¹(tan Φ45 / cos 45°), which is exact for rectangular cells. It was found that 92.93% of the variation of MFA can be attributed to the distance from pith to bark. Here we present our results for the MFA in the cell wall with respect to its shape, structure and distance from pith to bark, as a fast yet accurate check of the quality of wood, its uses and its applications.
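
A minimal worked example of the stated relation M = tan⁻¹(tan Φ45 / cos 45°), which recovers the microfibril angle from the peak position Φ45 measured at a 45° beam incidence. The Φ45 value below is illustrative, not taken from the paper's data.

import math

def mfa_from_phi45(phi45_deg):
    phi45 = math.radians(phi45_deg)
    return math.degrees(math.atan(math.tan(phi45) / math.cos(math.radians(45.0))))

print(round(mfa_from_phi45(15.0), 2))  # a 15° peak maps to an MFA of about 20.75°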

Keywords: Small-Angle X-Ray Scattering, Microfibril Angle, MFA, rectangular cell wall and real cell wall, Acacia mangium.

Downloads: 1764
142 Port Positions on the Mixing Efficiency of a Rotor-Type Mixer – A Numerical Study

Authors: Y. C. Liou, J. M. Miao, T. L. Liu, M. H. Ho

Abstract:

The purpose of this study was to explore the complex flow structure of a novel active-type micromixer based on the concept of the Wankel-type rotor. The characteristics of this micromixer are twofold: rapid mixing of reagents in a limited space due to the generation of multiple vortices, and a gradual increase in dynamic pressure as the mixed reagents are delivered to the output ports. The present micromixer consists of a rotor shaped as a triangular column, a blending chamber, and several inlet and outlet ports. The geometry of the blending chamber is designed so that the rotor can rotate freely inside it with a constant eccentricity ratio. With the shapes of the blending chamber and the rotor fixed, the effects of the rotor's rotating speed and the relative locations of the ports on the mixing efficiency were studied numerically. The governing equations are the unsteady two-dimensional incompressible Navier-Stokes equations, and the working fluid is water. The species concentration equation is also solved to reveal the mass transfer process of the reagents in various regions and thereby evaluate the mixing efficiency. The dynamic mesh technique was implemented to model the volume shrinkage and expansion of the three individual sub-regions of the blending chamber as the rotor completes a full rotation cycle. Six port configurations are considered over a Reynolds number range of 10 to 300. The rapid mixing process is accomplished by the multiple vortex structures within a tiny space due to the equilibrium of shear, viscous and inertial forces. Results showed that the highest mixing efficiency was attained with the two-inlet, two-outlet configuration having an included angle of 60 degrees between the two inlets and an included angle of 120 degrees between the inlet and outlet ports, at Re = 10.
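
A minimal sketch of a common mixing-efficiency measure usable with the solved species-concentration field: 1 - σ/σ_max, where σ is the standard deviation of concentration over outlet samples. The abstract does not state its exact metric, so this definition is an assumption.

import numpy as np

def mixing_efficiency(c, c_mean=0.5):
    c = np.asarray(c, dtype=float)
    sigma = np.sqrt(np.mean((c - c_mean) ** 2))        # deviation from perfect mix
    sigma_max = np.sqrt(c_mean * (1.0 - c_mean))       # fully segregated case
    return 1.0 - sigma / sigma_max

print(mixing_efficiency([0.48, 0.52, 0.50, 0.47]))     # ~1 means well mixed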

Keywords: Active micromixer, CFD, mixing efficiency, port configuration, Reynolds number, Wankel-type rotor.

Downloads: 1630
141 Automatic Tuning for a Systemic Model of Banking Originated Losses (SYMBOL) Tool on Multicore

Authors: Ronal Muresano, Andrea Pagano

Abstract:

Nowadays, mathematical/statistical applications are developed with ever more complexity and accuracy. However, this precision and complexity mean that applications need more computational power to execute quickly. In this sense, multicore environments play an important role in improving and optimizing the execution time of these applications. These environments allow the inclusion of more parallelism inside the node. However, taking advantage of this parallelism is not an easy task, because we have to deal with problems such as core communication, data locality, memory sizes (cache and RAM), synchronization, data dependencies in the model, etc. These issues become more important when we wish to improve the application's performance and scalability. Hence, this paper describes an optimization method developed for the Systemic Model of Banking Originated Losses (SYMBOL) tool developed by the European Commission, which is based on analyzing the application's weaknesses in order to exploit the advantages of the multicore. All these improvements are done in an automatic and transparent manner with the aim of improving the performance metrics of the tool. Finally, experimental evaluations show the effectiveness of the new optimized version, in which we achieved a considerable improvement in execution time: the time was reduced by around 96% in the best case tested, comparing the original serial version with the automatic parallel version.

Keywords: Algorithm optimization, Bank Failures, OpenMP, Parallel Techniques, Statistical tool.

Downloads: 1849
140 Simulation of Solar Assisted Absorption Cooling and Electricity Generation along with Thermal Storage

Authors: Faezeh Mosallat, Eric L. Bibeau, Tarek El Mekkawy

Abstract:

Parabolic solar trough systems have seen limited deployment in cold northern climates, as they are more suitable for electricity production in southern latitudes. A numerical dynamic model was developed to simulate troughs installed in cold climates and validated using a parabolic solar trough facility in Winnipeg. The model is developed in Simulink and is utilized to simulate a trigeneration system for heating, cooling and electricity generation in remote northern communities. The main objective of this simulation is to obtain operational data for solar troughs in cold climates and to use the model to determine ways to improve the economics and address cold weather issues. In this paper, the validated Simulink model is applied to simulate a solar-assisted absorption cooling system along with electricity generation using an Organic Rankine Cycle (ORC) and thermal storage. A control strategy is employed to distribute the heated oil from the solar collectors among the three systems above, considering their temperature requirements. The modelling provides dynamic performance results using meteorological data recorded every minute at the solar facility location, with the aim of accurately predicting system performance at each time step in the presence of solar radiation fluctuations due to passing clouds. Another goal of the simulation is optimization of the controller in cold temperatures, for example to minimize heat losses in winter when energy demand is high and solar resources are low. The solar absorption cooling system is modeled to use the heat generated by the solar trough system to provide summer cooling for a greenhouse located next to the solar field. The results of the simulation are presented for a summer day in Winnipeg and include a comparison of the performance parameters of the absorption cooling and ORC systems at different heat transfer fluid (HTF) temperatures.

Keywords: Absorption cooling, parabolic solar trough, remote community, organic Rankine cycle.

Downloads: 3071
139 Estimation of the Road Traffic Emissions and Dispersion in the Developing Countries Conditions

Authors: Hicham Gourgue, Ahmed Aharoune, Ahmed Ihlal

Abstract:

In this work we present our model of road traffic emissions (line sources) and of the dispersion of these emissions, named DISPOLSPEM (Dispersion of Poly Sources and Pollutants Emission Model). In its emission part, this model was designed to keep the bottom-up and top-down approaches consistent. It also allows emission inventories to be generated from a reduced set of input parameters, adapted to the existing conditions in Morocco and other developing countries. While several simplifications are made, the full performance of the model results is preserved. A further important advantage of the model is that it allows the uncertainty of the emission rate to be calculated with respect to each of the input parameters. In the dispersion part of the model, an improved line source model has been developed, implemented and tested against a reference solution. It improves on the accuracy of previous line source Gaussian plume formulas without being too demanding in terms of computational resources. In the case study presented here, the biggest errors were associated with the ends of line source sections; these errors will be canceled out by adjacent line source sections during the simulation of a road network. In cases where the wind is parallel to the source line, the combined use of a discretized source and analytical line source formulas markedly reduces the error. Because this combination is applied only for a small number of wind directions, it should not excessively increase the calculation time.
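
A minimal sketch of the discretized fallback mentioned above: approximate a line source as a row of point sources and sum standard Gaussian plume contributions at a ground-level receptor. The power-law dispersion coefficients are a crude illustrative assumption, not the model's calibrated values.

import numpy as np

def gaussian_point(q, x, y, u, sy_a=0.08, sz_a=0.06):
    if x <= 0:                         # receptor upwind of this point source
        return 0.0
    sy, sz = sy_a * x, sz_a * x        # illustrative sigma_y(x), sigma_z(x)
    # Ground-level source and receptor, with ground reflection folded in.
    return (q / (np.pi * u * sy * sz)) * np.exp(-0.5 * (y / sy) ** 2)

def line_source(q_per_m, start, end, receptor, u, n=200):
    # Split the road segment into n point sources of strength q_per_m * dl.
    pts = np.linspace(np.asarray(start, float), np.asarray(end, float), n)
    dl = np.linalg.norm(np.asarray(end, float) - np.asarray(start, float)) / (n - 1)
    return sum(gaussian_point(q_per_m * dl, receptor[0] - p[0],
                              receptor[1] - p[1], u) for p in pts)

# Concentration 100 m downwind of a 1 km road (wind along +x):
print(line_source(1e-3, (0, -500), (0, 500), (100.0, 0.0), u=3.0))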

Keywords: Air pollution, dispersion, emissions, line sources, road traffic, urban transport.

Downloads: 1881