Search results for: Simulated Annealing Algorithm.
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4284

534 Design of an Efficient Retimed CIC Compensation Filter

Authors: Vishal Awasthi, Krishna Raj

Abstract:

Unwanted side effects caused by spectral aliasing and spectral imaging during signal processing are the major concern in sampling-rate alteration. A multirate, multistage implementation of a digital filter suitable for sample-rate conversion can bring about large computational savings compared with a single-rate filter. This implementation can be improved further through high-level architectural transformations at the circuit level. Retiming, which reallocates registers and relocates flip-flops across logic gates, is a prominent sequential transformation technique that optimizes hardware circuits to achieve faster clock speeds without affecting functionality. In this paper, we propose an efficient compensated cascaded integrator-comb (CIC) decimation filter structure with a retimed FIR filter as the compensator, analyze the consequences of filter-order variation under the cutset retiming technique, and achieve an improvement in passband droop of 14% to 39%, improvements in computation time of 38.04%, 25.78%, 12.21%, 6.69%, and 4.44%, and reductions in path delay of 62.27%, 72%, 86.63%, 91.56%, and 94.42% for filters of order 3, 6, 8, 12, and 24, respectively, over the non-retimed CIC compensation filter.
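
To make the signal chain concrete, here is a minimal sketch of a CIC decimator with an FIR droop compensator running at the low rate. The decimation ratio, stage count, tap count, and passband edge are illustrative assumptions; the sketch shows the compensation idea only, not the paper's retimed architecture.

```python
import numpy as np
from scipy import signal

def cic_decimate(x, R=8, N=3):
    """N-stage CIC decimator: N integrators, decimate by R, N combs (M = 1)."""
    for _ in range(N):                        # integrator cascade at the high rate
        x = np.cumsum(x)
    x = x[::R]                                # sample-rate reduction by R
    for _ in range(N):                        # comb cascade at the low rate
        x = np.diff(x, prepend=x[0])
    return x / float(R) ** N                  # remove the DC gain of R^N

# FIR compensator: least-squares fit to the inverse CIC magnitude response
# over the passband (hypothetical parameters: R = 8, N = 3, 31 taps,
# passband edge at half the output Nyquist).
R, N, numtaps = 8, 3, 31
grid = np.linspace(0.0, 0.5, 65)              # firwin2 axis, 1.0 = output Nyquist
fo = grid * 0.5                               # cycles/sample at the output rate
H = np.ones_like(fo)
nz = fo > 0
H[nz] = np.abs(np.sin(np.pi * fo[nz]) / (R * np.sin(np.pi * fo[nz] / R))) ** N
freq = np.concatenate([grid, [0.6, 1.0]])
gain = np.concatenate([1.0 / H, [0.0, 0.0]])  # boost the droop, then roll off
comp = signal.firwin2(numtaps, freq, gain)

x = np.random.default_rng(0).normal(size=4096)
y = signal.lfilter(comp, 1.0, cic_decimate(x, R, N))
```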

Keywords: Multirate Filtering, CIC decimation filter, Compensation theory, Retiming, Retiming algorithm, Filter order, Synchronous dataflow graph.

533 Cost Effective Real-Time Image Processing Based Optical Mark Reader

Authors: Amit Kumar, Himanshu Singal, Arnav Bhavsar

Abstract:

In this modern era of automation, most academic and competitive exams use multiple-choice questions (MCQ). The responses to these MCQ-based exams are recorded on Optical Mark Reader (OMR) sheets. Evaluation of OMR sheets requires separate, specialized machines for scanning and marking. The sheets used by these machines are special and cost more than normal sheets, so the existing process is uneconomical and depends on paper thickness, scanning quality, paper orientation, special hardware, and customized software. This study tackles the problem of evaluating OMR sheets without any special hardware, making the whole process economical. We propose an image-processing-based algorithm that can read and evaluate scanned OMR sheets with no special hardware, eliminating the need for special OMR sheets: responses recorded on a normal sheet are sufficient for evaluation. The proposed system handles variations in color, brightness, and rotation, as well as small imperfections in the OMR sheet images.
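
As a rough illustration of the pipeline the abstract describes (thresholding, Hough circle transform, mark detection), here is a minimal OpenCV sketch. The radii, Hough parameters, and fill threshold are assumptions for a typical 300-dpi scan; a real sheet would also need deskewing and answer-grid registration.

```python
import cv2
import numpy as np

def read_marks(path, fill_threshold=0.45):
    """Locate answer bubbles and decide which ones are filled in."""
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    gray = cv2.medianBlur(gray, 5)                       # suppress scan noise
    # Binarize with Otsu so ink becomes white (255) for pixel counting.
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    # Hough circle transform finds the printed bubble outlines.
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1, minDist=18,
                               param1=60, param2=25, minRadius=8, maxRadius=16)
    marked = []
    if circles is not None:
        for x, y, r in np.round(circles[0]).astype(int):
            mask = np.zeros_like(binary)
            cv2.circle(mask, (x, y), r, 255, thickness=-1)
            # Fraction of inked pixels inside the bubble.
            fill = cv2.countNonZero(cv2.bitwise_and(binary, mask)) / (np.pi * r * r)
            if fill > fill_threshold:
                marked.append((x, y))
    return marked
```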

Keywords: OMR, image processing, Hough circle transform, interpolation, detection, binary thresholding.

532 Exploring Management of the Fuzzy Front End of Innovation in a Product Driven Startup Company

Authors: Dmitry K. Shaytan, Georgy D. Laptev

Abstract:

In our research, we aimed to test a managerial approach for the fuzzy front end (FFE) of innovation by creating a controlled experiment/business case in breakthrough innovation development. The experiment took place in the sport industry and covered all aspects of the customer discovery stage, from ideation to prototyping, followed by a patent application. In the paper, we describe and analyze the milestones, tasks, management challenges, and decisions made to create the breakthrough innovation, and we evaluate the overall managerial efficiency at the considered FFE stage. We define the managerial outcome of the FFE stage as a valid product concept in hand. We introduce the hypothetical construct "Q-factor", which helps us distinguish the quality of FFE outcomes in the experiment. The experiment simulated the FFE of innovation for an entrepreneur and placed on his shoulders the responsibility for delivering a valid product concept. While developing a managerial approach to reach this outcome, we decided to look at the product concept from the cognitive psychology and cognitive science point of view. This view helped us develop the profile of a person whose mental representation of a new product could optimize FFE activities for a manager or entrepreneur. In the experiment, this profile was tested by developing a breakthrough innovation for swimmers. Following the managerial approach, a product concept was created to help swimmers feel/sense the water, and a working prototype was developed to estimate the validity of the product concept and its value-added effect for customers. Feedback from coaches and swimmers showed a strong positive effect that gave high value to customers and, for the experiment, a valid product concept developed by the proposed managerial approach for the FFE. In the conclusions, we suggest a managerial approach derived from the experiment.

Keywords: Concept development, concept testing, customer discovery, entrepreneurship, entrepreneurial management, idea generation, idea screening, startup management.

531 A Novel Approach to Improve Users Search Goal in Web Usage Mining

Authors: R. Lokeshkumar, P. Sengottuvelan

Abstract:

Web mining aims to discover and extract useful information. Different users may have different search goals when they submit queries to a search engine, and inferring and analyzing user search goals can be very useful for improving the results returned for a search query. In this project, we propose a novel approach to infer user search goals by analyzing search engine query logs. First, feedback sessions are constructed from user click-through logs; these efficiently reflect users' information needs. Second, we propose a preprocessing technique to clean unnecessary data from the web log file (the feedback sessions). Third, we propose a technique to generate pseudo-documents that represent the feedback sessions for clustering. Finally, we apply the k-medoids clustering algorithm to discover different user search goals and to provide better results for a search query based on the user's feedback sessions.
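
Since the final step is k-medoids over pseudo-documents, here is a minimal sketch of that step: TF-IDF vectors for toy pseudo-documents, cosine distances, and a plain PAM-style k-medoids loop. The documents and k are illustrative, not drawn from the study's logs.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import pairwise_distances

def k_medoids(dist, k, max_iter=100, seed=0):
    """PAM-style alternation on a precomputed distance matrix."""
    rng = np.random.default_rng(seed)
    medoids = rng.choice(dist.shape[0], size=k, replace=False)
    for _ in range(max_iter):
        labels = np.argmin(dist[:, medoids], axis=1)       # assignment step
        new_medoids = medoids.copy()
        for c in range(k):                                 # medoid update step
            members = np.where(labels == c)[0]
            if len(members):
                within = dist[np.ix_(members, members)].sum(axis=0)
                new_medoids[c] = members[np.argmin(within)]
        if np.array_equal(new_medoids, medoids):
            break
        medoids = new_medoids
    return medoids, labels

# Toy pseudo-documents built from feedback sessions for the query "java".
docs = ["java jdk download install", "java coffee beans roast",
        "jdk install windows path", "arabica coffee brew"]
D = pairwise_distances(TfidfVectorizer().fit_transform(docs), metric="cosine")
medoids, labels = k_medoids(D, k=2)
print(labels)      # two clusters, i.e. two different search goals
```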

Keywords: Data Preprocessing, Session Identification, Web log mining, Web Personalization.

530 Simulation of Internal Flow Field of Pitot-Tube Jet Pump

Authors: Iqra Noor, Ihtzaz Qamar

Abstract:

The pitot-tube jet pump, a single-stage pump with low flow rate and high head, consists of a radial impeller that feeds water into a rotating cavity. The water then enters a stationary pitot-tube collector (diffuser), which discharges it to the outside. Using ANSYS Fluent 15.0, the internal flow characteristics of pitot-tube jet pumps with a standard pitot and with a curved pitot are studied. Under design conditions, the realizable k-ε turbulence model and the SIMPLEC algorithm are used to calculate the 3D flow field inside both pumps. The simulation results reveal that energy is imparted to the flow by the impeller, and a forced-vortex type of flow is observed inside the rotor. Total pressure decreases inside the pitot tube, whereas static pressure increases. Changing the pitot tube from the standard to the curved shape minimizes flow circulation inside the pitot tube and leads to higher pump performance.

Keywords: CFD, flow circulation, high pressure pump, impeller, internal flow, pickup tube pump, rectangular channels, rotating casing, turbulence.

529 On the Reduction of Side Effects in Tomography

Authors: V. Masilamani, C. Vanniarajan, Kamala Krithivasan

Abstract:

Since computed tomography (CT) normally requires hundreds of projections to reconstruct an image, patients are exposed to more X-ray energy, which may cause side effects such as cancer. Even when the variability of the particles in the object is very low, CT requires many projections for a good-quality reconstruction. In this paper, the low variability of the particles in an object is exploited to obtain a good-quality reconstruction. Although the reconstructed image and the original image have the same projections, in general they need not be the same; if a priori information about the image is known in addition to the projections, a good-quality reconstructed image can be obtained. The paper shows by experimental results why conventional algorithms fail to reconstruct from a few projections, and gives an efficient polynomial-time algorithm to reconstruct a bi-level image from its projections along rows and columns, together with a known sub-image of the unknown image and smoothness constraints, by reducing the reconstruction problem to the integral max-flow problem. The paper also discusses necessary and sufficient conditions for uniqueness, and the extension of 2D bi-level image reconstruction to 3D.
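
The max-flow reduction at the core of the abstract can be sketched directly: rows and columns become nodes, each pixel becomes a unit-capacity edge, and a saturating integral flow yields a binary image with the prescribed projections. The sketch below (using networkx) omits the paper's smoothness and known-sub-image constraints.

```python
import networkx as nx
import numpy as np

def binary_from_projections(row_sums, col_sums):
    """Reconstruct one binary matrix consistent with the given row/column
    projections via integral max flow. Returns None if none exists."""
    m, n = len(row_sums), len(col_sums)
    if sum(row_sums) != sum(col_sums):
        return None
    G = nx.DiGraph()
    for i in range(m):
        G.add_edge("s", f"r{i}", capacity=row_sums[i])
        for j in range(n):
            G.add_edge(f"r{i}", f"c{j}", capacity=1)   # pixel (i, j) on/off
    for j in range(n):
        G.add_edge(f"c{j}", "t", capacity=col_sums[j])
    value, flow = nx.maximum_flow(G, "s", "t")
    if value != sum(row_sums):                          # projections infeasible
        return None
    img = np.zeros((m, n), dtype=int)
    for i in range(m):
        for j in range(n):
            img[i, j] = flow[f"r{i}"][f"c{j}"]          # integral 0/1 flow
    return img

print(binary_from_projections([2, 1, 2], [2, 2, 1]))
```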

Keywords: Discrete Tomography, Image Reconstruction, Projection, Computed Tomography, Integral Max Flow Problem, Smooth Binary Image.

528 Morphing Human Faces: Automatic Control Points Selection and Color Transition

Authors: Stephen Karungaru, Minoru Fukumi, Norio Akamatsu

Abstract:

In this paper, we propose a morphing method by which face color images can be freely transformed. The main focus of this work is the transformation of one face image into another. The method is fully automatic in that it can morph two face images by automatically detecting all the control points necessary to perform the morph. A face-detection neural network, edge detection, and median filters are employed to detect the face position and features. Five control points, for both the source and target images, are then extracted based on the facial features. A triangulation method is then used to match and warp the source image to the target image using the control points. Finally, color interpolation is performed using a color Gaussian model that calculates the color of each frame depending on the number of frames used. A real-coded genetic algorithm is used in both the image warping and color blending steps to assist in step-size decisions and to speed up the morphing. This method produces very smooth morphs and is fast to process.
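
For the warp-and-blend core (triangulated warping toward interpolated control points, then color cross-dissolve), here is a minimal scikit-image sketch. Control points are (col, row) pairs and should include the image corners so the mesh covers the frame; the paper's GA-assisted step-size control and Gaussian color model are not reproduced, and a plain linear blend stands in for them.

```python
import numpy as np
from skimage.transform import PiecewiseAffineTransform, warp

def morph_frame(src_img, dst_img, src_pts, dst_pts, t):
    """One in-between frame at t in [0, 1]: warp both faces onto the
    interpolated control-point geometry, then cross-dissolve the colors."""
    mid_pts = (1.0 - t) * src_pts + t * dst_pts     # interpolated landmarks
    to_src = PiecewiseAffineTransform()
    to_src.estimate(mid_pts, src_pts)               # frame coords -> source coords
    to_dst = PiecewiseAffineTransform()
    to_dst.estimate(mid_pts, dst_pts)               # frame coords -> target coords
    warped_src = warp(src_img, to_src)              # warp() expects the inverse map
    warped_dst = warp(dst_img, to_dst)
    return (1.0 - t) * warped_src + t * warped_dst  # linear color blend

# 30-frame morph sequence (images scaled to [0, 1], landmark arrays (K, 2)):
# frames = [morph_frame(src, dst, src_pts, dst_pts, t)
#           for t in np.linspace(0.0, 1.0, 30)]
```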

Keywords: color transition, genetic algorithms, morphing, warping

527 Optimal Maintenance Policy for a Partially Observable Two-Unit System

Authors: Leila Jafari, Viliam Makis, Akram Khaleghei G.B.

Abstract:

In this paper, we present a maintenance model of a two-unit series system with economic dependence. Unit 1, which is considered more expensive and more important, is subject to condition monitoring (CM) at equidistant, discrete time epochs, while unit 2, which is not subject to CM, has a general lifetime distribution. The multivariate observation vectors obtained through condition monitoring carry partial information about the hidden state of unit 1, which can be in a healthy or a warning state while operating. Only the failure state is assumed to be observable for both units. The objective is to find an optimal opportunistic maintenance policy minimizing the long-run expected average cost per unit time. The problem is formulated and solved in the partially observable semi-Markov decision process framework. An effective computational algorithm for finding the optimal policy and the minimum average cost is developed and illustrated by a numerical example.

Keywords: Condition-Based Maintenance, Semi-Markov Decision Process, Multivariate Bayesian Control Chart, Partially Observable System, Two-unit System.

526 Implementing an Intuitive Reasoner with a Large Weather Database

Authors: Yung-Chien Sun, O. Grant Clark

Abstract:

In this paper, the implementation of a rule-based intuitive reasoner is presented. The implementation comprises two parts: a rule-induction module and the intuitive reasoner itself. A large weather database was acquired as the data source, and twelve weather variables from those data were chosen as the "target variables" whose values were to be predicted by the intuitive reasoner. A "complex" situation was simulated by making only subsets of the data available to the rule-induction module. As a result, the induced rules were based on incomplete information with variable levels of certainty. The certainty level was modeled by a metric called "Strength of Belief", which was assigned to each rule or datum as ancillary information about the confidence in its accuracy. Two techniques were employed to induce rules from the data subsets: decision trees for the discrete target variables and multi-polynomial regression for the continuous ones. The intuitive reasoner was tested for its ability to use the induced rules to predict the classes of the discrete target variables and the values of the continuous target variables. It implemented two types of reasoning, fast and broad, where, by analogy to human thought, the former corresponds to quick decision making and the latter to deeper contemplation. For reference, a weather data analysis approach that had been applied to similar tasks was adopted to analyze the complete database and create predictive models for the same 12 target variables. The values predicted by the intuitive reasoner and by the reference approach were compared with actual data. The intuitive reasoner reached near-100% accuracy for two continuous target variables, and for the discrete target variables it predicted at least 70% as accurately as the reference reasoner. Since the intuitive reasoner operated on rules derived from only about 10% of the total data, it demonstrated potential advantages over conventional methods in dealing with sparse data sets.
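
A minimal sketch of the rule-induction side for discrete targets: decision trees trained on partial data subsets, each carrying a "Strength of Belief" weight used in a weighted vote. The abstract does not define the metric, so the weight below (subset coverage times mean leaf purity) is purely an assumption for illustration, as are the placeholder data.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

class BelievedTree:
    """Decision tree induced from a partial data subset, tagged with a
    'Strength of Belief'. The weight (subset coverage x mean leaf purity)
    is a hypothetical stand-in for the paper's metric."""
    def __init__(self, X, y, total_n, seed=0):
        self.tree = DecisionTreeClassifier(max_depth=4, random_state=seed).fit(X, y)
        purity = self.tree.predict_proba(X).max(axis=1).mean()
        self.belief = (len(y) / total_n) * purity

def predict_voting(trees, X):
    """Belief-weighted vote across trees trained on different subsets."""
    scores = [dict() for _ in range(len(X))]
    for t in trees:
        for i, c in enumerate(t.tree.predict(X)):
            scores[i][c] = scores[i].get(c, 0.0) + t.belief
    return [max(s, key=s.get) for s in scores]

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))                  # placeholder weather features
y = (X[:, 0] + X[:, 3] > 0).astype(int)         # placeholder discrete target
subsets = [rng.choice(1000, size=100, replace=False) for _ in range(5)]  # ~10% each
trees = [BelievedTree(X[s], y[s], total_n=1000, seed=i)
         for i, s in enumerate(subsets)]
print(np.mean(predict_voting(trees, X) == y))   # resubstitution accuracy
```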

Keywords: Artificial intelligence, intuition, knowledge acquisition, limited certainty.

525 A Novel Method to Evaluate Line Loadability for Distribution Systems with Realistic Loads

Authors: K. Nagaraju, S. Sivanagaraju, T. Ramana, V. Ganesh

Abstract:

This paper presents a simple method for estimating the additional load, as a factor of the existing load, that may be drawn before reaching the point of maximum line loadability of a radial distribution system (RDS) with different realistic load models at different substation voltages. The proposed method involves a simple line loadability index (LLI) that measures how close the present state of a line in the distribution system is to its maximum loadability. The LLI can be used to assess voltage instability and the line loading margin. The proposed method is also compared with an existing maximum loadability index method [10]. The simulation results show that the LLI can identify not only the weakest line/branch causing system instability but also the system voltage collapse point, as the index approaches unity. This feature enables us to set an index threshold to monitor and predict system stability online so that proper action can be taken to prevent the system from collapsing. To demonstrate the validity of the proposed algorithm, computer simulations are carried out on two-bus and 69-bus RDSs.
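
The abstract does not give the LLI formula, so here is an illustrative two-bus loadability computation instead: for a line of impedance R + jX serving a load λ(P₀ + jQ₀) from voltage Vs, the receiving-end voltage satisfies a quadratic in Vr², and the largest λ keeping its discriminant non-negative marks the collapse point (the index reaching unity, in the abstract's terms). All parameter values are hypothetical.

```python
import numpy as np

def max_load_factor(Vs, R, X, P0, Q0):
    """Largest lambda for which the two-bus power-flow equation
        Vr^4 + (2*lam*(P0*R + Q0*X) - Vs^2)*Vr^2
            + lam^2*(P0^2 + Q0^2)*(R^2 + X^2) = 0
    still has a real solution for Vr (discriminant >= 0). Per-unit quantities."""
    p = P0 * R + Q0 * X
    d = P0 * X - Q0 * R
    # Discriminant as a polynomial in lam: -4*d^2*lam^2 - 4*p*Vs^2*lam + Vs^4
    roots = np.roots([-4.0 * d * d, -4.0 * p * Vs**2, Vs**4])
    return max(r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 0)

# Loading margin before voltage collapse on one radial line (per unit).
lam_max = max_load_factor(Vs=1.0, R=0.05, X=0.15, P0=0.8, Q0=0.4)
print(f"additional load factor before collapse: {lam_max - 1:.2f}")
```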

Keywords: line loadability index, line loading margin, maximum line loadability, system stability, radial distribution system

524 Practical Applications and Connectivity Algorithms in Future Wireless Sensor Networks

Authors: Mohamed K. Watfa

Abstract:

Like any sentient organism, a smart environment relies first and foremost on sensory data captured from the real world. The sensory data come from sensor nodes of different modalities deployed at different locations, forming a Wireless Sensor Network (WSN). Embedding smart sensors in humans has been a research challenge due to the limitations these sensors impose, from computational capabilities to limited power. In this paper, we first propose a practical WSN application that would enable blind people to see what their neighboring partners can see; the challenge is that the actual mapping between input images and brain patterns is too complex and not well understood. We also study the connectivity problem in 3D/2D wireless sensor networks and propose efficient distributed algorithms to accomplish the required connectivity of the system. We provide a new connectivity algorithm, CDCA, which connects disconnected parts of a network using cooperative diversity. Through simulations, we analyze the connectivity gains and energy savings provided by this novel form of cooperative diversity in WSNs.

Keywords: Wireless Sensor Networks, Pervasive Computing, Eye Vision Application, 3D Connectivity, Clusters, Energy Efficient, Cooperative diversity.

523 A Local Decisional Algorithm Using Agent- Based Management in Constrained Energy Environment

Authors: C. Adam, G. Henri, T. Levent, J-B Mauro, A-L Mayet

Abstract:

Energy efficiency management lies at the heart of a worldwide problem. The capability of multi-agent systems, as a technology, to manage micro-grid operation has already been proven. This paper deals with the implementation of a decisional pattern applied to a multi-agent system that provides intelligence to a distributed local energy network considered at the local consumer level. Development of a multi-agent application involves agent specification, analysis, design, and realization, and it can be implemented by following several decisional patterns. The purpose of the present article is to suggest a new approach for a decisional pattern involving a multi-agent system to control a distributed local energy network in a decentralized competitive system. The proposed solution is the result of a dichotomous approach based on environment observation. It uses an iterative process to solve automatic learning problems and converges monotonically, very quickly, to the system's attracting operation point.

Keywords: Energy Efficiency Management, Distributed Smart Grid, Multi-Agent System, Decisional Decentralized Competitive System.

522 Study on Two Way Reinforced Concrete Slab Using ANSYS with Different Boundary Conditions and Loading

Authors: A. Gherbi, L. Dahmani, A. Boudjemia

Abstract:

This paper presents the finite element method (FEM) for analyzing the failure pattern of a rectangular slab with various edge conditions. Nonlinear static analysis is carried out using the ANSYS 15 software. With SOLID65 solid elements, the compressive crushing of concrete is handled by a plasticity algorithm, while concrete cracking in the tension zone is accommodated by the nonlinear material model. Smeared reinforcement is used and introduced as a percentage of steel embedded in the concrete slab. The behavior of the analyzed concrete slab has been observed in terms of crack pattern and displacement for various loading and boundary conditions, and the finite element results are compared with experimental data. Another objective of the present study is to show how similar the crack paths found by the ANSYS program are to those observed in yield-line analysis. The smeared reinforcement method is found to be more practical, especially for layered elements like concrete slabs; its value is that it does not require explicit modeling of the rebar, so a much coarser mesh can be defined.

Keywords: ANSYS, cracking pattern, displacements, RC Slab, smeared reinforcement.

521 Controlling 6R Robot by Visionary System

Authors: Azamossadat Nourbakhsh, Moharram Habibnezhad Korayem

Abstract:

In visual servoing systems, the data obtained by vision are used for controlling robots. In this project, the simulator previously proposed for simulating the performance of a 6R robot was first examined in terms of software and testing, and existing defects in it were remedied. In the first version of the simulation, the robot was directed toward the target object only by a position-based method, using two cameras in the environment. In the new version of the software, three cameras are used simultaneously. The camera installed as eye-in-hand on the end-effector of the robot is used for visual servoing by a feature-based method: the target object is recognized according to its characteristics, and the robot is directed toward the object by an algorithm similar to the function of the human eye. Then, the function and accuracy of the robot's operation are examined through position-based visual servoing using the two cameras installed as eye-to-hand in the environment. Finally, the obtained results are tested under the ANSI/RIA R15.05-2 standard.

Keywords: 6R Robot , camera, visual servoing, Feature-based visual servoing, Position-based visual servoing, Performance tests.

520 Contaminant Transport in Soil from a Point Source

Authors: S. A. Nta, M. J. Ayotamuno, A. H. Igoni, R. N. Okparanma

Abstract:

This work sought to understand the pattern of movement of a contaminant from a continuous point source through soil. The soil used was sandy loam in texture. The contaminant used was municipal solid waste landfill leachate, introduced as a point source through an entry point located at the center of the top layer of the soil tank. Analyses were conducted after maturity periods of 50 and 80 days. The maximum change in chemical concentration was observed in soil samples at a radial distance of 0.25 m. A finite-element-approximation-based model was used to assess future prediction, management, and remediation in the polluted area. The actual field data collected for the case study were used to calibrate the model and thus simulate the flow pattern of the pollutants through the soil; MATLAB R2015a was used to visualize the flow of pollutant through the soil. The dispersion coefficients at 0.25 m and 0.50 m radial distance from the point of leachate application give a measure of the spreading of the flowing leachate due to the nature of the soil medium, with its interconnected channels distributed at random in all directions. Surface plots of metals in the soil after a maturity period of 80 days show a functional relationship between a designated dependent variable (Y) and two independent variables (X and Z). Comparison of the measured and predicted transport profiles along the depth after 50 and 80 days of leachate application, and at the end of the experiment, shows little difference between the predicted and measured concentrations, which lie close to each other. For the analysis of contaminant transport, the finite-difference-approximation-based model was very effective in assessing future prediction, management, and remediation in the polluted area. The experiment gave insight into the most likely pattern of movement of the contaminant resulting from continuous percolation of the leachate through the soil, which is important for predicting contaminant movement and subsequently remediating such soils.
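
As a sketch of the kind of finite-difference transport model the abstract refers to, here is an explicit scheme for the 1D advection-dispersion equation ∂C/∂t = D ∂²C/∂x² − v ∂C/∂x with a continuous source at the surface. The coefficients, domain, and boundary conditions are illustrative assumptions, not the paper's calibrated values.

```python
import numpy as np

# Illustrative parameters: dispersion coefficient (m^2/s), seepage velocity (m/s).
D, v = 1e-6, 5e-7
L, nx = 1.0, 101                 # domain depth (m), grid points
dx = L / (nx - 1)
dt = 0.4 * dx * dx / D           # within the explicit stability limit dx^2 / (2D)
C = np.zeros(nx)
C0 = 1.0                         # normalized source concentration

t, t_end = 0.0, 50 * 86400.0     # simulate 50 days
while t < t_end:
    C[0] = C0                                        # continuous point source
    lap = (C[2:] - 2 * C[1:-1] + C[:-2]) / dx**2     # second derivative
    adv = (C[2:] - C[:-2]) / (2 * dx)                # central first derivative
    C[1:-1] += dt * (D * lap - v * adv)
    C[-1] = C[-2]                                    # zero-gradient outlet
    t += dt

print("concentration at 0.25 m depth:", C[int(0.25 / dx)])
```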

Keywords: Contaminant, dispersion, point or leaky source, surface plot, soil.

519 Data Collection with Bounded-Sized Messages in Wireless Sensor Networks

Authors: Min Kyung An

Abstract:

In this paper, we study the data collection problem in Wireless Sensor Networks (WSNs) under two interference models: the graph model and the more realistic physical interference model known as Signal-to-Interference-plus-Noise Ratio (SINR). The main issue is to compute schedules with the minimum number of timeslots, that is, minimum-latency schedules, such that data from every node can be collected without any collision or interference at a sink node. While existing works studied the problem with unit-sized and unbounded-sized message models, we investigate the problem with the bounded-sized message model and introduce a constant-factor approximation algorithm. To the best of our knowledge, this is the first result for the data collection problem with the bounded-sized message model in both interference models.
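
To make the graph-model side concrete, here is a toy sketch: build a BFS collection tree toward the sink and greedily give every tree link the earliest timeslot that conflicts with none already assigned, under a simple protocol-model conflict rule. It schedules each link once and ignores the paper's bounded-size message accounting and SINR analysis; it only illustrates interference-free slot assignment.

```python
import networkx as nx

def schedule_links(G, tree_links):
    """Greedy interference-free link scheduling under the graph model.
    Two links conflict if they share a node (half-duplex) or if the sender
    of one is a neighbour of the receiver of the other."""
    def conflicts(e1, e2):
        (u1, v1), (u2, v2) = e1, e2
        if {u1, v1} & {u2, v2}:
            return True
        return G.has_edge(u1, v2) or G.has_edge(u2, v1)

    slots = {}
    for e in tree_links:
        used = {slots[f] for f in slots if conflicts(e, f)}
        slots[e] = min(t for t in range(len(tree_links) + 1) if t not in used)
    return slots

G = nx.random_geometric_graph(30, 0.3, seed=1)
sink = 0
tree = nx.bfs_tree(G, sink)                    # data flows toward the sink
links = [(v, u) for u, v in tree.edges()]      # child -> parent direction
print(max(schedule_links(G, links).values()) + 1, "timeslots")
```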

Keywords: Data collection, collision-free, interference-free, physical interference model, SINR, approximation, bounded-sized message model, wireless sensor networks, WSN.

518 Lowering Error Floors by Concatenation of Low-Density Parity-Check and Array Code

Authors: Cinna Soltanpur, Mohammad Ghamari, Behzad Momahed Heravi, Fatemeh Zare

Abstract:

Low-density parity-check (LDPC) codes have been shown to deliver capacity-approaching performance; however, problematic graphical structures (e.g., trapping sets) in the Tanner graphs of some LDPC codes can cause high error floors in bit-error-ratio (BER) performance under the conventional sum-product algorithm (SPA). This paper presents a serial concatenation scheme to avoid the trapping sets and thereby lower the error floors of the LDPC code. The outer code in the proposed concatenation is the LDPC code, and the inner code is a high-rate array code. The approach applies an iterative hybrid process between BCJR decoding for the array code and the SPA for the LDPC code, together with bit-pinning and bit-flipping techniques. The Margulis code of size (2640, 1320) has been used for the simulation, and it has been shown that the proposed concatenation and decoding scheme can considerably improve the error-floor performance with minimal rate loss.

Keywords: Concatenated coding, low-density parity-check codes, array code, error floors.

517 Simulation of Soil-Pile Interaction of Steel Batter Piles Penetrated in Sandy Soil Subjected to Pull-Out Loads

Authors: Ameer A. Jebur, William Atherton, Rafid M. Alkhaddar, Edward Loffill

Abstract:

Superstructures like offshore platforms, tall buildings, transition towers, skyscrapers, and bridges are normally designed to resist compression, uplift, and lateral forces from wind, waves, negative skin friction, ship impact, and other applied loads. Better understanding and precise simulation of the response of batter piles under independent uplift loads is a vital topic and an area of active research in geotechnical engineering. This paper investigates the use of a finite element code (FEC) to examine, by numerical modelling, the behaviour of model batter piles penetrated in dense sand and subjected to pull-out pressure. The concept of the Winkler model (beam on elastic foundation) is used, in which the interaction between the embedded pile and the adjacent soil in the bearing zone is simulated by nonlinear p-y curves. The analysis was conducted for pile slenderness ratios (lc/d) of 7.5, 15.22, and 30. In addition, the optimum batter angle for a model steel pile penetrated in dense sand was chosen as 20°, the best angle for this simulation as demonstrated by other researchers in the literature. In this numerical analysis, the soil response is idealized as elasto-plastic, and the model piles are described as elastic materials for the purpose of simulation. The results reveal that the applied loads affect the pull-out pile capacity as well as the lateral pile response in dense sand, together with varying shear-strength parameters linked to the pile critical depth. Furthermore, the pile pull-out capacity increases with increasing pile aspect ratio.

Keywords: Slenderness ratio, soil-pile interaction, Winkler model (beam on elastic foundation), pull-out capacity.

516 Electromyography Pattern Classification with Laplacian Eigenmaps in Human Running

Authors: Elnaz Lashgari, Emel Demircan

Abstract:

Electromyography (EMG) is one of the most important interfaces between humans and robots for rehabilitation. Decoding this signal helps to recognize muscle activation and convert it into smooth motion for robots. Detecting each muscle's pattern during walking and running is vital for improving a patient's quality of life. In this study, EMG data from 10 muscles in 10 subjects at 4 different speeds were analyzed. EMG signals are nonlinear and high-dimensional. To deal with this challenge, we extracted features in the time-frequency domain and used manifold learning with the Laplacian Eigenmaps algorithm to find the intrinsic features that represent the data in a low-dimensional space. We then used a Bayesian classifier to identify various patterns of EMG signals for different muscles across a range of running speeds. The best result, for the vastus medialis muscle, was 97.87±0.69 sensitivity and 88.37±0.79 specificity, with 97.07±0.29 accuracy, using the Bayesian classifier. The results of this study provide important insight into human movement and its application to robotics research.
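
A compact sketch of this pipeline, assuming time-frequency features are already extracted: Laplacian Eigenmaps (scikit-learn's SpectralEmbedding) for the low-dimensional intrinsic features, then a Gaussian naive Bayes classifier. The data below are placeholders, and note that spectral embedding is transductive, so this sketch scores on the embedded training set rather than on truly held-out windows.

```python
import numpy as np
from sklearn.manifold import SpectralEmbedding
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 60))     # placeholder time-frequency EMG features
y = rng.integers(0, 4, size=400)   # placeholder labels (e.g., 4 running speeds)

# Laplacian Eigenmaps: k-NN graph, then the bottom nontrivial eigenvectors
# of the graph Laplacian give the low-dimensional coordinates.
Z = SpectralEmbedding(n_components=5, n_neighbors=10,
                      random_state=0).fit_transform(X)

print(cross_val_score(GaussianNB(), Z, y, cv=5).mean())
```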

Keywords: Electromyography, manifold learning, Laplacian Eigenmaps, running pattern.

515 Mutation Rate for Evolvable Hardware

Authors: Emanuele Stomeo, Tatiana Kalganova, Cyrille Lambert

Abstract:

Evolvable hardware (EHW) refers to a self-reconfigurable hardware design in which the configuration is under the control of an evolutionary algorithm (EA). A lot of research has been done in this area, and several different EAs have been introduced. Whenever a specific EA is chosen for solving a particular problem, all its components, such as population size, initialization, selection mechanism, mutation rate, and genetic operators, should be selected so as to achieve the best results. Over the last three decades, much research has been carried out to identify the best parameters for the EA's components on different test problems, but different researchers propose different solutions. In this paper, the behaviour of the mutation rate in a (1+λ) evolution strategy (ES) for designing logic circuits, which had not been analyzed before, is studied in depth. The mutation rate in an EHW system modifies the values of the logic cell inputs, the cell type (for example, from AND to NOR), and the circuit output. The behaviour of the mutation has been analyzed with respect to the number of generations, genotype redundancy, and the number of logic gates used in the evolved circuits. The experimental results indicate the mutation rates to be used during evolution for the design and optimization of logic circuits. Research on the best mutation rate over the last 40 years is also summarized.
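
A minimal sketch of the setup the abstract studies: a (1+λ) ES whose mutation operator touches exactly the three gene types named there, namely cell inputs, cell function, and the circuit output. The gate set, array size, λ, the 3-input parity target, and the mutation rate of 0.1 are all illustrative assumptions.

```python
import numpy as np

GATES = [lambda a, b: a & b, lambda a, b: a | b,
         lambda a, b: a ^ b, lambda a, b: 1 - (a & b)]   # AND, OR, XOR, NAND
N_IN, N_CELLS, LAM, MUT = 3, 8, 4, 0.1

def evaluate(genome, inputs):
    vals = list(inputs)
    for g, a, b in genome["cells"]:          # feedforward gate array
        vals.append(GATES[g](vals[a], vals[b]))
    return vals[genome["out"]]

def fitness(genome):
    cases = [((i >> 2) & 1, (i >> 1) & 1, i & 1) for i in range(8)]
    return sum(evaluate(genome, c) == sum(c) % 2 for c in cases)  # parity target

def mutate(parent, rng):
    child = {"cells": [list(c) for c in parent["cells"]], "out": parent["out"]}
    for i, cell in enumerate(child["cells"]):
        if rng.random() < MUT:
            cell[0] = int(rng.integers(len(GATES)))       # cell-function gene
        for k in (1, 2):
            if rng.random() < MUT:
                cell[k] = int(rng.integers(N_IN + i))     # cell-input gene
    if rng.random() < MUT:
        child["out"] = int(rng.integers(N_IN, N_IN + N_CELLS))  # output gene
    return child

rng = np.random.default_rng(0)
parent = {"cells": [[int(rng.integers(len(GATES))),
                     int(rng.integers(N_IN + i)), int(rng.integers(N_IN + i))]
                    for i in range(N_CELLS)],
          "out": N_IN + N_CELLS - 1}
for gen in range(2000):                      # (1 + lambda) selection loop
    parent = max([mutate(parent, rng) for _ in range(LAM)] + [parent], key=fitness)
    if fitness(parent) == 8:
        break
print("generations:", gen, "fitness:", fitness(parent))
```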

Keywords: Evolvable hardware, mutation rate, evolutionary computation, design of logic circuits.

514 Pre-Deflection Routing with Control Packet Signal Scheme in Optical Burst Switch Networks

Authors: Jaipal Bisht, Aditya Goel

Abstract:

Optical Burst Switching (OBS) is a promising technology for the future-generation Internet. Control architecture and contention resolution are the main issues faced by optical burst switching networks. In this paper, we address only the contention problem, and to overcome it we propose pre-deflection routing with a control packet signal scheme for contention resolution in optical burst switch networks. In the proposed pre-deflection routing approach, routing is carried out in two ways, Shortest Path First (SPF) and Least Hop First (LHF) routing, to forward the clusters and canoes respectively. A burst offset time control algorithm is then proposed, in which a forward control packet (FCP) collects the congestion price and contention price along its path; a reverse-direction control packet (RCP), sent by the destination node, delivers the FCP's information to the source node, which uses this information to revise its offset time and burst length.

Keywords: Contention Resolution, FCP, OBS, Offset Time, PST, RCP.

513 Detection and Correction of Ectopic Beats for HRV Analysis Applying Discrete Wavelet Transforms

Authors: Desmond B. Keenan

Abstract:

The clinical usefulness of heart rate variability (HRV) is limited by the range of Holter monitoring software available. These software algorithms require a normal sinus rhythm to accurately acquire HRV measures in the frequency domain. Premature ventricular contractions (PVCs), more commonly referred to as ectopic beats and frequent in heart failure, hinder this analysis and introduce ambiguity. This investigation demonstrates an algorithm that automatically detects ectopic beats by analyzing discrete wavelet transform coefficients. Two techniques for filtering the ectopic beats out of the RR signal and replacing them are compared: one applies wavelet hard-thresholding techniques, and the other applies linear interpolation to replace the ectopic cycles. The results demonstrate, through simulation and through signals acquired from a 24-hour ambulatory recorder, that these techniques can accurately detect PVCs and remove the noise and leakage effects produced by ectopic cycles, retaining smooth spectra with minimal error.
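
A toy sketch of the detect-and-interpolate variant, assuming PyWavelets: flag RR intervals whose first-level detail coefficients are robust outliers, then bridge them by linear interpolation. The db4 wavelet, the MAD-based threshold, and the coefficient-to-beat mapping are illustrative assumptions, not the paper's exact detector.

```python
import numpy as np
import pywt

def filter_ectopics(rr, wavelet="db4", k=4.0):
    """Detect ectopic RR intervals via outlying detail coefficients and
    replace them by linear interpolation across neighbouring beats."""
    cA, cD = pywt.dwt(rr, wavelet)                    # single-level DWT
    med = np.median(cD)
    mad = np.median(np.abs(cD - med)) + 1e-12
    bad_cD = np.abs(cD - med) > k * 1.4826 * mad      # robust outlier test
    # Map each flagged detail coefficient back to the beats it spans (rough
    # 2:1 downsampling correspondence).
    bad = np.zeros(len(rr), dtype=bool)
    for i in np.where(bad_cD)[0]:
        bad[min(2 * i, len(rr) - 1)] = True
        bad[min(2 * i + 1, len(rr) - 1)] = True
    good = ~bad
    rr_clean = rr.copy()
    rr_clean[bad] = np.interp(np.where(bad)[0], np.where(good)[0], rr[good])
    return rr_clean, bad

rng = np.random.default_rng(1)
rr = 0.80 + 0.01 * rng.normal(size=64)       # normal sinus RR intervals (s)
rr[30], rr[31] = 0.45, 1.15                  # premature beat + compensatory pause
clean, flags = filter_ectopics(rr)
print(np.where(flags)[0])                    # beats flagged around index 30
```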

Keywords: Heart rate variability, vagal tone, sympathetic, parasympathetic, wavelets, ectopic beats, spectral analysis.

512 Design Transformation to Reduce Cost in Irrigation Using Value Engineering

Authors: F. S. Al-Anzi, M. Sarfraz, A. Elmi, A. R. Khan

Abstract:

Researchers are responding to the environmental challenges of Kuwait in localized, innovative, effective, and economic ways. One vital and significant example of these natural challenges is the lack of water and desertification. In this research, the project team focuses on redesigning a prototype, using value engineering methodology, that provides functionality similar to the well-known Waterboxx kit technology while reducing capital and operational costs and simplifying manufacturing and use by ordinary farmers. The design employs used tires and recycled plastic sheets as raw materials. Hence, this approach helps not only to fight desertification but also to get rid of the ever-growing tire dumps in Kuwait and to avoid the hazards of tire fires, yielding a safer and friendlier environment. Several alternatives for implementing the prototype were considered, and the best alternative in terms of value was selected after a thorough Function Analysis System Technique (FAST) exercise. A prototype was fabricated and tested in a controlled, simulated lab environment, to be followed by field testing in a real environment. Water and soil analyses were conducted on the experiment site to compare the composition of the soil before and after the experiment, to ensure that the tested prototype is environmentally safe. Experimentation shows that the new design was at least as effective as the original design, and may exceed it, with significant cost savings: an estimated total cost reduction of 43.84% over the original design using the VE approach. This figure does not include the intangible environmental benefit of waste recycling, which may further increase the total savings of the alternative VE design. This case study shows that value engineering methodology can be an important tool for innovating new designs that reduce costs.

Keywords: Desertification, functional analysis, scrap tires, value engineering, waste recycling, water irrigation rationing.

511 A Comparison of SVM-based Criteria in Evolutionary Method for Gene Selection and Classification of Microarray Data

Authors: Rameswar Debnath, Haruhisa Takahashi

Abstract:

An evolutionary method whose selection and recombination operations are based on generalization error bounds of the support vector machine (SVM) can very efficiently select a subset of potentially informative genes for an SVM classifier [7]. In this paper, we use the derivative of the error bound (a first-order criterion) to select and recombine gene features in the evolutionary process, and we compare the performance of the derivative of the error bound with that of the error bound itself (the zero-order criterion). We also investigate several error bounds and their derivatives to compare their performance and find the best criterion for gene selection and classification. We use seven cancer-related human gene expression datasets to evaluate the performance of the zero-order and first-order criteria. Though the two criteria follow the same strategy in theory, the experimental results identify the best criterion for microarray gene expression data.

Keywords: support vector machine, generalization error-bound, feature selection, evolutionary algorithm, microarray data

510 Modified Fuzzy ARTMAP and Supervised Fuzzy ART: Comparative Study with Multispectral Classification

Authors: F. Alilat, S. Loumi, H. Merrad, B. Sansal

Abstract:

In this article, a modification of the fuzzy ART network algorithm, aimed at making it supervised, is carried out. It consists of searching for the comparison, training, and vigilance parameters that give the minimum quadratic distance between the outputs of the training base and those obtained by the network. The same process is applied to determine the parameters of the fuzzy ARTMAP giving the most powerful network. The modification consists in having the fuzzy ARTMAP learn a base of examples not just once, as is usual, but as many times as its architecture keeps evolving or until the objective error is reached. In this way, we need not worry about the values to impose on the eight parameters of the network. To evaluate each of these modified networks, a comparison of their performance is carried out. As an application, we carried out a classification of an image of the Bay of Algiers taken by SPOT XS. The evaluation criteria are training duration, mean square error (MSE) on the control step, and the rate of correct classification per class. The results of this study, presented as curves, tables, and images, show that the modified fuzzy ARTMAP offers the best quality/computing-time compromise.
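
For readers unfamiliar with the underlying network, here is a minimal fuzzy ART category step: complement coding, the choice function T_j = |I ∧ w_j| / (α + |w_j|), the vigilance test |I ∧ w_j| / |I| ≥ ρ, and the learning rule w' = β(I ∧ w) + (1 − β)w, i.e., the parameters (α, β, ρ) the article searches over. The parameter values in the example are arbitrary.

```python
import numpy as np

class FuzzyART:
    """Minimal fuzzy ART with complement coding and fast/slow learning."""
    def __init__(self, alpha=0.001, beta=1.0, rho=0.75):
        self.alpha, self.beta, self.rho = alpha, beta, rho
        self.w = []                                    # category weight vectors

    def _complement_code(self, x):
        return np.concatenate([x, 1.0 - x])            # keeps |I| constant

    def train_sample(self, x):
        I = self._complement_code(np.asarray(x, float))
        # Rank categories by the choice function T_j.
        order = sorted(range(len(self.w)),
                       key=lambda j: -np.minimum(I, self.w[j]).sum()
                                      / (self.alpha + self.w[j].sum()))
        for j in order:
            match = np.minimum(I, self.w[j]).sum() / I.sum()
            if match >= self.rho:                      # vigilance passed: resonance
                self.w[j] = (self.beta * np.minimum(I, self.w[j])
                             + (1 - self.beta) * self.w[j])
                return j
        self.w.append(I.copy())                        # no resonance: new category
        return len(self.w) - 1

art = FuzzyART(rho=0.8)
for x in [[0.10, 0.90], [0.12, 0.88], [0.85, 0.20]]:
    print(art.train_sample(x))                         # -> 0, 0, 1
```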

Keywords: Neural Networks, fuzzy ART, fuzzy ARTMAP, Remote sensing, multispectral Classification.

509 Adaptive Kernel Principal Analysis for Online Feature Extraction

Authors: Mingtao Ding, Zheng Tian, Haixia Xu

Abstract:

Their batch nature limits standard kernel principal component analysis (KPCA) methods in numerous applications, especially for dynamic or large-scale data. In this paper, an efficient adaptive approach is presented for online extraction of the kernel principal components (KPC). The contribution of this paper is twofold. First, the kernel covariance matrix is correctly updated to adapt to the changing characteristics of the data. Second, the KPC are recursively formulated to overcome the batch nature of standard KPCA. This formulation is derived from the recursive eigendecomposition of the kernel covariance matrix and indicates the KPC variation caused by the new data. The proposed method not only alleviates the sub-optimality of the KPCA method for non-stationary data, but also maintains constant update speed and memory usage as the data size increases. Experiments on simulated data and real applications demonstrate that our approach yields improvements in terms of both computational speed and approximation accuracy.

Keywords: adaptive method, kernel principal component analysis, online extraction, recursive algorithm

508 Robust Numerical Scheme for Pricing American Options under Jump Diffusion Models

Authors: Salah Alrabeei, Mohammad Yousuf

Abstract:

The goal of option pricing theory is to help investors manage their money, enhance returns, and control their financial future by theoretically valuing their options. However, most option pricing models have no analytical solution, and not all numerical methods are efficient for solving these models because the payoffs are non-smooth or have discontinuous derivatives at the exercise price. In this paper, we price the American option under jump-diffusion models using efficient time-dependent numerical methods. Several techniques are integrated to reduce the computational complexity. The Fast Fourier Transform (FFT) algorithm is used as a matrix-vector multiplication solver, which reduces the complexity from O(M²) to O(M log M). The partial fraction decomposition technique is applied to rational approximation schemes to overcome the complexity of inverting polynomials of matrices. The proposed method is easy to implement in serial or parallel versions. Numerical results are presented to demonstrate the accuracy and efficiency of the proposed method.
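
The FFT device mentioned in the abstract can be shown in a few lines: the dense matrix arising from a discretized jump integral is Toeplitz, so its product with a vector can be computed in O(M log M) by embedding it in a circulant matrix and applying the FFT. The sketch below demonstrates only that building block, not the full pricing scheme.

```python
import numpy as np
from scipy.linalg import toeplitz

def toeplitz_matvec(first_col, first_row, x):
    """Toeplitz matrix-vector product in O(M log M) via circulant embedding."""
    m = len(x)
    # First column of the size-2m circulant that embeds the Toeplitz matrix.
    c = np.concatenate([first_col, [0.0], first_row[:0:-1]])
    z = np.concatenate([x, np.zeros(m)])
    y = np.fft.ifft(np.fft.fft(c) * np.fft.fft(z))   # circulant product by FFT
    return y[:m].real

# Check against the dense product.
rng = np.random.default_rng(0)
col, row = rng.normal(size=64), rng.normal(size=64)
row[0] = col[0]
x = rng.normal(size=64)
assert np.allclose(toeplitz_matvec(col, row, x), toeplitz(col, row) @ x)
```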

Keywords: Integral differential equations, American options, jump-diffusion model, rational approximation.

507 Application of Exact String Matching Algorithms towards SMILES Representation of Chemical Structure

Authors: Ahmad Fadel Klaib, Zurinahni Zainol, Nurul Hashimah Ahamed, Rosma Ahmad, Wahidah Hussin

Abstract:

Bioinformatics and cheminformatics are computer-based disciplines providing tools for the acquisition, storage, processing, analysis, and integration of biological and chemical data, and for the development of potential applications based on such data. A chemical database is a database designed exclusively to store chemical information. NMRShiftDB is one of the main databases used to represent chemical structures in 2D or 3D. The SMILES format is one of many ways to write a chemical structure in a linear format. In this study, we extracted antimicrobial structures in SMILES format from NMRShiftDB and stored them, with their corresponding information, in our local data warehouse. Additionally, we developed a search tool that responds to a user's query using the JME Editor tool, which allows the user to draw or edit molecules and converts the drawn structure into SMILES format. We applied the Quick Search algorithm to search for antimicrobial structures in our local data warehouse.
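
For reference, here is the Quick Search (Sunday) algorithm itself, applied to SMILES strings treated as plain character data; the fragment and molecule in the example are toy inputs, not entries from the study's warehouse.

```python
def quick_search(pattern, text):
    """Quick Search (Sunday): on each attempt, shift by the bad-character
    rule applied to the text character just PAST the current window."""
    m, n = len(pattern), len(text)
    if m == 0 or m > n:
        return []
    # shift[c]: distance so the rightmost 'c' in the pattern lines up with
    # the character one past the window; absent characters give m + 1.
    shift = {c: m - i for i, c in enumerate(pattern)}
    hits, pos = [], 0
    while pos + m <= n:
        if text[pos:pos + m] == pattern:
            hits.append(pos)
        if pos + m == n:
            break
        pos += shift.get(text[pos + m], m + 1)
    return hits

smiles = "CC(=O)Oc1ccccc1C(=O)O"            # aspirin, as a SMILES string
print(quick_search("c1ccccc1", smiles))     # benzene-ring fragment -> [7]
```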

Keywords: Exact String-matching Algorithms, NMRShiftDB, SMILES Format, Antimicrobial Structures.

506 Designing a Football Team of Robots from Beginning to End

Authors: Maziar A. Sharbafi, Caro Lucas, Aida Mohammadinejad, Mostafa Yaghobi

Abstract:

The combination of path planning and path following is the main purpose of this paper, which describes a practical approach developed for motion control of the MRL small-size robots. An intelligent controller is applied to control omni-directional robot motion in both simulated and real environments. The Brain Emotional Learning Based Intelligent Controller (BELBIC), built on LQR control, is adopted for the omni-directional robots. The contribution of BELBIC to improving control-system performance is shown as an application of emotional learning to a real-world problem; the method also allows the control effort to be optimized. Next, an implicit communication method is used to determine the high-level strategies and coordination of the robots. A few simple rules, together with use of the environment as a memory, make up the robots' decision-making system and improve coordination between agents. With this simple algorithm, our team exhibits desirable cooperation.

Keywords: multi-agent systems (MAS), Emotional learning, MIMO system, BELBIC, LQR, Communication via environment

505 Automated Thickness Measurement of Retinal Blood Vessels for Implementation of Clinical Decision Support Systems in Diagnostic Diabetic Retinopathy

Authors: S. Jerald Jeba Kumar, M. Madheswaran

Abstract:

The structure of the retinal vessels is a prominent feature that reveals information on the state of disease, reflected in the form of measurable abnormalities in thickness and colour. This paper presents the extraction of the vascular structure of the retina for the implementation of a clinical diabetic retinopathy decision-making system. The retinal vascular structure consists of thin blood vessels, so measurement accuracy is highly dependent on vessel segmentation. In this paper, blood vessel thickness is detected automatically using preprocessing techniques and a vessel segmentation algorithm. First, the captured image is binarized to bring out the blood vessel structure clearly; it is then skeletonized to obtain the overall structure, with all the terminal and branching nodes of the blood vessels. By identifying the terminal nodes and branching points automatically, the thickness of the main and branching blood vessels is estimated. Results are presented and compared with clinical classifications of 50 vessels collected from Bejan Singh Eye Hospital.
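
A compact sketch of the measurement step, assuming a binary vessel map has already been segmented: skeletonize it, read vessel width off the Euclidean distance transform along the skeleton (about 2d − 1 pixels for a centreline at distance d from the background), and mark branch points as skeleton pixels with three or more skeleton neighbours. The toy bar image stands in for a real fundus segmentation.

```python
import numpy as np
from scipy import ndimage
from skimage.morphology import skeletonize

def vessel_thickness(binary_vessels):
    """Per-skeleton-pixel vessel widths plus a branch-point mask."""
    skel = skeletonize(binary_vessels.astype(bool))
    dist = ndimage.distance_transform_edt(binary_vessels)
    widths = 2.0 * dist[skel] - 1.0          # approximate width along the centreline
    neighbours = ndimage.convolve(skel.astype(int), np.ones((3, 3), int),
                                  mode="constant") - skel
    branch_points = skel & (neighbours >= 3)
    return widths, branch_points

img = np.zeros((40, 40), dtype=bool)         # toy "vessel": a 5-pixel-wide bar
img[18:23, 5:35] = True
widths, bp = vessel_thickness(img)
print(widths.mean())                         # close to 5
```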

Keywords: Diabetic retinopathy, binarization, segmentation, clinical decision support systems.
