Search results for: Particle Swarm Optimization algorithm
2286 PIL Theory
Authors: A. Peveri
Abstract:
The curvature of space-time caused by the presence of matter must present a pattern of deformation, not a random one. Space is uniform and elastic, and any modification that occurs in one part causes a change in another.
Since this deformation exists, it must have a constant value that is independent of the observer, and it relates the amount of matter, the force caused by the curvature of space, and the surface of space. This unit of space is defined in this study as the PIL and represents a constant area of space, deformable in the direction and sense of the center of mass of the body. The PIL is curved and connected to the center of mass of the Earth; to reach that point it passes through all matter, thus forming part of any place between particles at the atomic and subatomic levels. At these levels the space between each particle is flat, unlike at the macro scale, where space curves.
Keywords: Space flat, Space curved, Unit of space, Deformation.
2285 On the Accuracy of Basic Modal Displacement Method Considering Various Earthquakes
Authors: Seyed Sadegh Naseralavi, Sadegh Balaghi, Ehsan Khojastehfar
Abstract:
Time history seismic analysis is supposed to be the most accurate method to predict the seismic demand of structures. On the other hand, the computational time this method requires to achieve a result is its main deficiency. When applied in an optimization process, in which the structure must be analyzed thousands of times, reducing the computational time of seismic analysis makes the optimization algorithms more practical. The approximate methods invented so far produce some amount of error in comparison with exact time history analysis, but methods such as the Complete Quadratic Combination (CQC) and the Square Root of the Sum of Squares (SRSS) drastically reduce the computational time by combining the peak responses of each mode. In the present research, the Basic Modal Displacement (BMD) method is introduced and applied to the estimation of the seismic demand of a main structure. The seismic demand of the sampled structure is estimated from the modal displacements of a basic structure (for which the modal displacements have been calculated). Shear steel structures are selected as case studies. The error of the introduced method is calculated by comparing the estimated seismic demands with exact time history dynamic analysis. The efficiency of the proposed method is demonstrated by application of three types of earthquakes (classified by the time of peak ground acceleration).
Keywords: Time history dynamic analysis, basic modal displacement, earthquake induced demands, shear steel structures.
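As an aside on the combination rules mentioned above, a minimal sketch of SRSS and CQC combination of modal peak responses (Python/NumPy); the modal peaks, frequencies and 5% damping are invented for illustration and this is not the BMD implementation:

```python
import numpy as np

def srss(peaks):
    """SRSS combination of modal peak responses."""
    peaks = np.asarray(peaks, dtype=float)
    return np.sqrt(np.sum(peaks ** 2))

def cqc(peaks, omegas, zeta=0.05):
    """CQC combination with equal modal damping zeta (Der Kiureghian correlation)."""
    peaks = np.asarray(peaks, dtype=float)
    omegas = np.asarray(omegas, dtype=float)
    r = omegas[None, :] / omegas[:, None]            # frequency ratios w_j / w_i
    rho = (8 * zeta**2 * (1 + r) * r**1.5
           / ((1 - r**2)**2 + 4 * zeta**2 * r * (1 + r)**2))
    return np.sqrt(peaks @ rho @ peaks)

# hypothetical modal peak displacements (m) and circular frequencies (rad/s)
peaks = [0.032, 0.011, 0.004]
omegas = [6.0, 18.0, 31.0]
print("SRSS:", srss(peaks), "CQC:", cqc(peaks, omegas))
```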
2284 Behavior of Cu-WC-Ti Metal Composite After Using Planetary Ball Milling
Authors: A.T.Z. Mahamat, A.M. A Rani, Patthi Husain
Abstract:
Copper-based composites reinforced with WC and Ti particles were prepared using a planetary ball mill. The experiment was designed using the Taguchi technique, and milling was carried out in air for several hours. The powder was characterized before and after milling using SEM, TEM and X-ray diffraction for microstructure and for possible new phases. Microstructures show that the milled particle size and the reduction in particle size depend on many parameters. The distance d between planes of atoms was estimated from X-ray powder diffraction data and TEM images. X-ray diffraction patterns of the milled powder did not clearly show any new peak or energy shift, but the TEM images show a significant change in crystalline structure due to the incorporation of titanium in the composites.
Keywords: ball milling, microstructures, titanium, tungsten carbides, X-ray
2283 Investigation of the Effect of Milling Time on the Mechanochemical Synthesis of Fe3Al/Al2O3 Nanocomposite
Authors: B. Ghasemi, A. A. Najafzadeh Khoee
Abstract:
In this study, the effect of mechanical activation on the synthesis of Fe3Al/Al2O3 nanocomposite has been investigated using the mechanochemical method. For this purpose, aluminum powder and hematite were utilized as precursors in stoichiometric ratio, and the other effective parameters in the milling process were kept constant. Phase formation analysis, crystallite size measurement and lattice strain were studied by X-ray diffraction (XRD) using the Williamson-Hall method, and the microstructure and morphology were explored by scanning electron microscopy (SEM). Also, energy-dispersive X-ray spectroscopy (EDX) analysis was used in order to probe the particle distribution. The results showed that after 30 hours of milling the reaction was initiated, proceeded in a combustive manner and was completed.
Keywords: hematite, mechanochemical, milling, nanocomposite
2282 Feature-Based Segmentation of Color Textured Images Using GLCM and Markov Random Field Model
Authors: Dipti Patra, Mridula J
Abstract:
In this paper, we propose a new image segmentation approach for colour textured images. The proposed method for image segmentation consists of two stages. In the first stage, textural features using the gray level co-occurrence matrix (GLCM) are computed for the regions of interest (ROI) considered for each class; the ROI acts as ground truth for the classes. The Ohta model (I1, I2, I3) is the colour model used for segmentation. The statistical mean feature at a certain inter-pixel distance (IPD) of the I2 component was found to be the optimal textural feature for further segmentation. In the second stage, the feature matrix obtained is assumed to be the degraded version of the image labels, and a Markov Random Field (MRF) model is used to model the unknown image labels. The labels are estimated through the maximum a posteriori (MAP) estimation criterion using the ICM algorithm. The performance of the proposed approach is compared with that of existing schemes, JSEG and another scheme which uses GLCM and MRF in the RGB colour space. The proposed method is found to outperform the existing ones in terms of segmentation accuracy with an acceptable rate of convergence. The results are validated with synthetic and real textured images.
Keywords: Texture Image Segmentation, Gray Level Co-occurrence Matrix, Markov Random Field Model, Ohta colour space, ICM algorithm.
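To make the first-stage feature concrete, here is a minimal NumPy sketch of a horizontal-offset GLCM and its mean feature; the 8-level quantisation, the single direction and the random image standing in for the Ohta I2 component are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def glcm(image, distance=1, levels=8):
    """Gray-level co-occurrence matrix for horizontal pixel pairs at the given inter-pixel distance."""
    img = np.asarray(image)
    P = np.zeros((levels, levels), dtype=float)
    left, right = img[:, :-distance], img[:, distance:]
    for i, j in zip(left.ravel(), right.ravel()):
        P[i, j] += 1
    return P / P.sum()                               # normalise to joint probabilities

def glcm_mean(P):
    """Mean textural feature: sum_i sum_j i * P(i, j)."""
    i = np.arange(P.shape[0])[:, None]
    return float(np.sum(i * P))

# toy 8-level image standing in for the I2 component of the Ohta colour space
img = np.random.randint(0, 8, size=(32, 32))
print(glcm_mean(glcm(img, distance=2)))
```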
2281 A New Multi-Target, Multi-Agent Search-and-Rescue Path Planning Approach
Authors: Jean Berger, Nassirou Lo, Martin Noel
Abstract:
Perfectly suited for natural or man-made emergency and disaster management situations such as floods, earthquakes, tornadoes, or tsunamis, multi-target search path planning for a team of rescue agents is known to be computationally hard, and most techniques developed so far fall short of successfully estimating the optimality gap. A novel mixed-integer linear programming (MIP) formulation is proposed to optimally solve the multi-target multi-agent discrete search and rescue (SAR) path planning problem. Aimed at maximizing the cumulative probability of successful target detection, it captures anticipated feedback information associated with possible observation outcomes resulting from projected path execution, while modeling agent discrete actions over all possible moving directions. Problem modeling further takes advantage of a network representation to encompass decision variables, expedite compact constraint specification, and lead to substantial problem-solving speed-up. The proposed MIP approach uses the CPLEX optimization machinery, efficiently computing near-optimal solutions for practical-size problems, while giving a robust upper bound obtained from Lagrangian relaxation of the integrality constraints. Should a target eventually be positively detected during plan execution, a new problem instance would simply be reformulated from the current state and then solved over the next decision cycle. A computational experiment shows the feasibility and the value of the proposed approach.
Keywords: Search path planning, search and rescue, multi-agent, mixed-integer linear programming, optimization.
2280 Enhancing Hand Efficiency of Smart Glass Cleaning Robot through Generative Design Module
Authors: Pankaj Gupta, Amit Kumar Srivastava, Nitesh Pandey
Abstract:
This article explores the domain of generative design in order to enhance the development of robot designs for innovative and efficient maintenance approaches for tall buildings. This study aims to optimize the design of robotic hands by focusing on minimizing mass and volume while ensuring they can withstand the specified pressure with equal strength. The research procedure is structured and systematic. The purpose of the optimization is to enhance the efficiency of the robot and reduce manufacturing expenses, and the project investigates the application of generative design to product optimization. Autodesk Fusion 360 offers the capability to apply the generative design functionality directly to a solid model. The effort involved creating a solid model of the Smart Glass Cleaning Robot and optimizing one of its components, the hand, using generative techniques. The article thoroughly examines the designs, outcomes, and procedure. The applied loads serve as a benchmark for creating designs that can endure the necessary level of pressure and preserve their structural integrity. The efficacy of the generative design process is contingent upon the selection of materials, as different materials possess distinct physical attributes. The study utilizes five different materials, namely steel, stainless steel, titanium, aluminum, and CFRP (Carbon Fiber Reinforced Polymer), in order to investigate a range of design possibilities.
Keywords: Generative design, mass and volume optimization, material strength analysis, smart glass cleaning robot.
2279 The Gravitational Impact of the Sun and the Moon on Heavy Mineral Deposits and Dust Particles in Low Gravity Regions of the Earth
Authors: T. B. Karu Jayasundara
Abstract:
The Earth’s gravity is not uniform. Satellite imagery of the Earth’s surface from NASA reveals a number of different gravity anomaly regions all over the globe. As the Moon rotates around the Earth, its gravity has a major physical influence on a number of regions on the Earth. This physical change can be seen in the tides, which make sea levels high and low in coastal regions. During high tide, the gravitational force of the Moon pulls against the Earth’s gravity so that the total gravitational intensity of the Earth is reduced; it is further reduced in the low gravity regions of the Earth. This reduction in gravity helps keep suspended particles, such as dust in the atmosphere and sand grains in sea water, in suspension for longer. Dramatic differences can be seen in the floating dust in the low gravity regions when compared with other regions. The above phenomena can be demonstrated by experiments performed in high and low gravity regions of the Earth during high and low tide, which assists in comparing the final results. One such experiment uses a water-filled cylinder about 80 cm tall, a few particles of the same density and diameter (about 1 mm), and a stopwatch. The selected particles are dropped from the surface of the water in the cylinder, and the time taken for the particles to reach the bottom of the cylinder is measured with the stopwatch. The times of high and low tide can be obtained from tide charts published by the regional government authorities. The result of the experiment shows that the particle settlement time is shorter at low tide and longer at high tide. For dust particles in air, samples can be collected on cellulose ester membrane filters using a vacuum pump, and slides can be prepared from the filters according to the NOHSC method. The dust particles on the slides are counted using a phase contrast microscope. The results show that the concentration of dust is high at high tide and low at low tide. As a result of the high tides, a high concentration of heavy minerals accumulates in placer deposits, and dust particles remain in the atmosphere for longer in low gravity regions. These conditions are remarkably exhibited in the lowest low gravity region of the Earth, mainly in the regions of India, Sri Lanka and the middle part of the Indian Ocean. The biggest heavy mineral placer deposits are found in the coastal regions of India and Sri Lanka, and heavy dust loads are found in the atmosphere of India, particularly in the Delhi region.
Keywords: Dust particles, high and low tides, heavy minerals, low gravity.
2278 A Comprehensive Survey on RAT Selection Algorithms for Heterogeneous Networks
Authors: Abdallah AL Sabbagh, Robin Braun, Mehran Abolhasan
Abstract:
Due to the coexistence of different Radio Access Technologies (RATs), Next Generation Wireless Networks (NGWN) are predicted to be heterogeneous in nature. The coexistence of different RATs requires Common Radio Resource Management (CRRM) to support the provision of Quality of Service (QoS) and the efficient utilization of radio resources. RAT selection algorithms are part of the CRRM algorithms. Simply put, their role is to verify whether an incoming call can be accommodated by the heterogeneous wireless network, to decide which of the available RATs is most suitable for the needs of the incoming call, and to admit it. The goal of a RAT selection algorithm is to guarantee the QoS requirements of all accepted calls while providing the most efficient utilization of the available radio resources. Conventional call admission control algorithms are designed for homogeneous wireless networks and do not provide a solution that fits the heterogeneous wireless networks that will constitute the NGWN. Therefore, there is a need to develop RAT selection algorithms for heterogeneous wireless networks. In this paper, we propose an approach for RAT selection which includes receiving different criteria, assessing them and making decisions, and then selecting the most suitable RAT for incoming calls. A comprehensive survey of different RAT selection algorithms for heterogeneous wireless networks is also presented.
Keywords: Heterogeneous Wireless Network, RAT selection algorithms, Next Generation Wireless Network (NGWN), Beyond 3G Network, Common Radio Resource Management (CRRM).
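To illustrate the "receive criteria, assess, decide" flow described above, a minimal Python sketch of a criteria-weighted RAT selection step; the RAT attributes, weights and admission rule are invented for illustration and do not come from any of the surveyed algorithms:

```python
from dataclasses import dataclass

@dataclass
class RAT:
    name: str
    free_bandwidth: float   # Mb/s currently available
    load: float             # 0..1 utilisation
    signal_quality: float   # 0..1 normalised

def select_rat(rats, required_bandwidth, weights=(0.5, 0.3, 0.2)):
    """Score every RAT that can admit the call and return the best one (or None to block)."""
    w_bw, w_load, w_sig = weights
    candidates = [r for r in rats if r.free_bandwidth >= required_bandwidth]
    if not candidates:
        return None                                  # call blocked: no RAT satisfies the QoS need
    score = lambda r: (w_bw * r.free_bandwidth / max(x.free_bandwidth for x in rats)
                       + w_load * (1.0 - r.load)
                       + w_sig * r.signal_quality)
    return max(candidates, key=score)

rats = [RAT("UMTS", 2.0, 0.7, 0.8), RAT("WLAN", 10.0, 0.4, 0.6), RAT("LTE", 6.0, 0.5, 0.9)]
print(select_rat(rats, required_bandwidth=1.5).name)
```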
2277 Development of a Neural Network Based Algorithm for Multi-Scale Roughness Parameters and Soil Moisture Retrieval
Authors: L. Bennaceur Farah, I. R. Farah, R. Bennaceur, Z. Belhadj, M. R. Boussema
Abstract:
The overall objective of this paper is to retrieve soil surface parameters, namely roughness and the soil moisture related to the dielectric constant, by inverting the radar signal backscattered from natural soil surfaces. Because the classical description of roughness using statistical parameters like the correlation length does not lead to satisfactory predictions of radar backscattering, we used a multi-scale roughness description based on the wavelet transform and the Mallat algorithm. In this description, the surface is considered as a superposition of a finite number of one-dimensional Gaussian processes, each having a spatial scale. A second step in this study consisted of adapting a direct model simulating radar backscattering, namely the small perturbation model, to this multi-scale surface description. We investigated the impact of this description on radar backscattering through a sensitivity analysis of the backscattering coefficient to the multi-scale roughness parameters. To perform the inversion of the multi-scale small perturbation scattering model (MLS SPM), we used a multi-layer neural network architecture trained by the backpropagation learning rule. The inversion leads to satisfactory results with a relative uncertainty of 8%.
Keywords: Remote sensing, rough surfaces, inverse problems, SAR, radar scattering, Neural networks, Fractals.
2276 Evolutionary Training of Hybrid Systems of Recurrent Neural Networks and Hidden Markov Models
Authors: Rohitash Chandra, Christian W. Omlin
Abstract:
We present a hybrid architecture of recurrent neural networks (RNNs) inspired by hidden Markov models (HMMs). We train the hybrid architecture using genetic algorithms to learn and represent dynamical systems. We train the hybrid architecture on a set of deterministic finite-state automata strings and observe its generalization performance when presented with a new set of strings which were not present in the training data set. In this way, we show that the hybrid system of HMM and RNN can learn and represent deterministic finite-state automata. We ran experiments with different population sizes in the genetic algorithm; we also ran experiments to find out which weight initializations were best for training the hybrid architecture. The results show that the hybrid architecture of recurrent neural networks inspired by hidden Markov models can learn and represent dynamical systems. The best training and generalization performance is achieved when the hybrid architecture is initialized with random real weight values in the range -15 to 15.
Keywords: Deterministic finite-state automata, genetic algorithm, hidden Markov models, hybrid systems, recurrent neural networks.
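As an illustration of evolving network weights with a genetic algorithm on a finite-state-automaton task, a minimal NumPy sketch; the tiny two-unit recurrent cell, the parity-of-a-bit-string DFA and the GA settings are simplifications and not the paper's RNN-HMM hybrid, although the [-15, 15] weight initialisation mirrors the range mentioned above:

```python
import numpy as np

rng = np.random.default_rng(0)

def rnn_predict(w, seq):
    """Two-unit recurrent cell driven over a binary string; returns a 0/1 class label."""
    Wx, Wh, b, v, c = w[0:2], w[2:6].reshape(2, 2), w[6:8], w[8:10], w[10]
    h = np.zeros(2)
    for x in seq:
        h = np.tanh(Wx * x + Wh @ h + b)
    return int(v @ h + c > 0)

def fitness(w, data):
    """Fraction of strings classified correctly."""
    return np.mean([rnn_predict(w, s) == y for s, y in data])

# toy deterministic finite-state automaton task: parity of a binary string (a 2-state DFA)
data = [(s, int(s.sum() % 2))
        for s in (rng.integers(0, 2, rng.integers(2, 12)) for _ in range(120))]

pop = rng.uniform(-15, 15, size=(40, 11))            # random real weights in [-15, 15]
for generation in range(100):
    scores = np.array([fitness(w, data) for w in pop])
    parents = pop[np.argsort(scores)[-10:]]          # keep the fittest individuals (elitism)
    children = parents[rng.integers(0, 10, 30)] + rng.normal(0, 1.0, (30, 11))  # mutation
    pop = np.vstack([parents, children])

print("best training accuracy:", max(fitness(w, data) for w in pop))
```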
2275 Reversible Binary Arithmetic for Integrated Circuit Design
Authors: D. Krishnaveni, M. Geetha Priya
Abstract:
Application of reversible logic in integrated circuits results in improved optimization of power consumption. This technology can be put to use in a variety of low power applications such as quantum computing, optical computing, nano-technology, and Complementary Metal Oxide Semiconductor (CMOS) Very Large Scale Integrated (VLSI) design. Logic gates are the basic building blocks in the design of any logic network and thus of integrated circuits. In this paper, the reversible Dual Key Gate (DKG) and Dual Key Gate Pair (DKGP) gates, which singly work as a full adder/full subtractor, are used to realize the basic building blocks of logic circuits. A reversible full adder/subtractor and a parallel adder/subtractor are designed using other reversible gates available in the literature and compared with those based on the DKG and DKGP gates. Efficient performance of reversible logic circuits relies on the optimization of the key parameters, viz. the number of constant inputs, garbage outputs and reversible gates. The full adder/subtractor and parallel adder/subtractor designs with the reversible DKGP and DKG gates result in the lowest number of constant inputs, garbage outputs, and reversible gates compared to the other designs. Thus, this paper provides a threshold to build more complex arithmetic systems using these reversible logic gates, leading to enhanced performance of computing systems.
Keywords: Low power CMOS, quantum computing, reversible logic gates, full adder, full subtractor, parallel adder/subtractor, basic gates, universal gates.
2274 Efficient Design Optimization of Multi-State Flow Network for Multiple Commodities
Authors: Yu-Cheng Chou, Po Ting Lin
Abstract:
The network for delivering commodities has been an important design problem in our daily lives and many transportation applications. The delivery performance is evaluated based on the system reliability of delivering commodities from a source node to a sink node in the network. The system reliability is thus maximized to find the optimal routing. However, the design problem is not simple because (1) each path segment has randomly distributed attributes; (2) there are multiple commodities that consume various path capacities; (3) the optimal routing must successfully complete the delivery process within the allowable time constraints. In this paper, we focus on the design optimization of the Multi-State Flow Network (MSFN) for multiple commodities. We propose an efficient approach to evaluate the system reliability in the MSFN with respect to randomly distributed path attributes and find the optimal routing subject to the allowable time constraints. The delivery rates, also known as delivery currents, of the path segments are evaluated and the minimal-current arcs are eliminated to reduce the complexity of the MSFN. Accordingly, the correct optimal routing is found and the worst-case reliability is evaluated. It has been shown that the reliability of the optimal routing is at least as high as the worst-case measure. Two benchmark examples are utilized to demonstrate the proposed method. The comparisons between the original and the reduced networks show that the proposed method is very efficient.
Keywords: Multiple Commodities, Multi-State Flow Network (MSFN), Time Constraints, Worst-Case Reliability (WCR)
2273 The Wavelet-Based DFT: A New Interpretation, Extensions and Applications
Authors: Abdulnasir Hossen, Ulrich Heute
Abstract:
In 1990 [1] the subband-DFT (SB-DFT) technique was proposed. This technique used the Hadamard filters in the decomposition step to split the input sequence into low- and highpass sequences. In the next step, either two DFTs are needed on both bands to compute the full-band DFT or one DFT on one of the two bands to compute an approximate DFT. A combination network with correction factors was to be applied after the DFTs. Another approach was proposed in 1997 [2] for using a special discrete wavelet transform (DWT) to compute the discrete Fourier transform (DFT). In the first step of the algorithm, the input sequence is decomposed in a similar manner to the SB-DFT into two sequences using wavelet decomposition with Haar filters. The second step is to perform DFTs on both bands to obtain the full-band DFT or to obtain a fast approximate DFT by implementing pruning at both input and output sides. In this paper, the wavelet-based DFT (W-DFT) with Haar filters is interpreted as SB-DFT with Hadamard filters. The only difference is in a constant factor in the combination network. This result is very important to complete the analysis of the W-DFT, since all the results concerning the accuracy and approximation errors in the SB-DFT are applicable. An application example in spectral analysis is given for both SB-DFT and W-DFT (with different filters). The adaptive capability of the SB-DFT is included in the W-DFT algorithm to select the band of most energy as the band to be computed. Finally, the W-DFT is extended to the two-dimensional case. An application in image transformation is given using two different types of wavelet filters.
Keywords: Image Transform, Spectral Analysis, Sub-Band DFT, Wavelet DFT.
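As an illustration of the kind of decomposition-plus-combination network described above, a minimal NumPy sketch of a Haar/Hadamard-style subband DFT; this is a textbook even/odd-pair split written for clarity, not the authors' pruned or adaptive algorithm, and the derivation of the combination factors is a standard identity:

```python
import numpy as np

def subband_dft(x):
    """Full-band DFT built from the DFTs of the Haar sum and difference subbands."""
    x = np.asarray(x, dtype=complex)
    N = x.size                        # assumed even
    a = x[0::2] + x[1::2]             # lowpass band: sums of adjacent pairs
    b = x[0::2] - x[1::2]             # highpass band: differences of adjacent pairs
    A = np.fft.fft(a)                 # (N/2)-point DFT of the low band
    B = np.fft.fft(b)                 # (N/2)-point DFT of the high band
    k = np.arange(N)
    W = np.exp(-2j * np.pi * k / N)   # twiddle factors
    # combination network: X[k] = (1+W^k)/2 * A[k mod N/2] + (1-W^k)/2 * B[k mod N/2]
    return 0.5 * (1 + W) * A[k % (N // 2)] + 0.5 * (1 - W) * B[k % (N // 2)]

x = np.random.randn(16)
assert np.allclose(subband_dft(x), np.fft.fft(x))   # agrees with the direct full-band DFT
```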
2272 Novel Hybrid Method for Gene Selection and Cancer Prediction
Authors: Liping Jing, Michael K. Ng, Tieyong Zeng
Abstract:
Microarray data profile gene expression on a whole-genome scale and therefore provide a good way to study associations between gene expression and the occurrence or progression of cancer. More and more researchers have realized that microarray data are helpful for predicting cancer samples. However, the dimension of the gene expression data is much larger than the sample size, which makes this task very difficult. Therefore, identifying the significant genes causing cancer has become an urgent and challenging research topic. Many feature selection algorithms have been proposed in the past focusing on improving cancer predictive accuracy at the expense of ignoring the correlations between the features. In this work, a novel framework (named SGS) is presented for stable gene selection and efficient cancer prediction. The proposed framework first performs a clustering algorithm to find the gene groups in which the genes in each group have higher correlation coefficients, then selects the significant genes in each group with the Bayesian Lasso and the important gene groups with the group Lasso, and finally builds a prediction model on the shrunken gene space with an efficient classification algorithm (such as SVM, 1NN, regression, etc.). Experimental results on real-world data show that the proposed framework often outperforms existing feature selection and prediction methods, such as SAM, IG and Lasso-type prediction models.
Keywords: Gene Selection, Cancer Prediction, Lasso, Clustering, Classification.
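To make the cluster-then-select-then-classify pipeline concrete, a minimal scikit-learn sketch; the synthetic data merely stand in for microarray profiles, and k-means plus plain Lasso stand in for the clustering, Bayesian Lasso and group Lasso steps of the SGS framework:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import Lasso
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 500))                   # 60 samples, 500 genes (synthetic stand-in)
y = rng.integers(0, 2, 60)                       # cancer / normal labels (made up)

# 1) group correlated genes: cluster the genes (columns) rather than the samples
groups = KMeans(n_clusters=20, n_init=10, random_state=0).fit_predict(X.T)

# 2) within each group, keep the genes with the largest Lasso coefficient magnitudes
selected = []
for g in range(20):
    idx = np.where(groups == g)[0]
    coef = Lasso(alpha=0.05).fit(X[:, idx], y).coef_
    selected.extend(idx[np.argsort(np.abs(coef))[-2:]])    # two genes per group

# 3) build the classifier on the shrunken gene space
print(cross_val_score(SVC(), X[:, selected], y, cv=5).mean())
```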
2271 A β-Mannanase from Fusarium oxysporum SS-25 via Solid State Fermentation on Brewer’s Spent Grain: Medium Optimization by Statistical Tools, Kinetic Characterization and Its Applications
Authors: S. S. Rana, C. Janveja, S. K. Soni
Abstract:
This study is concerned with the optimization of fermentation parameters for the hyper-production of mannanase from Fusarium oxysporum SS-25 employing a two-step statistical strategy, and with the kinetic characterization of the crude enzyme preparation. The Plackett-Burman design used to screen out the important factors in the culture medium revealed 20% (w/w) wheat bran, 2% (w/w) each of potato peels, soyabean meal and malt extract, 1% tryptone, 0.14% NH4SO4, 0.2% KH2PO4, 0.0002% ZnSO4, 0.0005% FeSO4, 0.01% MnSO4, 0.012% SDS, 0.03% NH4Cl, 0.1% NaNO3 in brewer’s spent grain based medium with 50% moisture content, inoculated with 2.8×107 spores and incubated at 30oC for 6 days, to be the main parameters influencing the enzyme production. Of these factors, four variables including soyabean meal, FeSO4, MnSO4 and NaNO3 were chosen to study the interactive effects and their optimum levels in a central composite design of response surface methodology, with a final mannanase yield of 193 IU/gds. The kinetic characterization revealed the crude enzyme to be active over a broad temperature and pH range. This could result in a 26.6% reduction in kappa number with a 4.93% higher tear index and a 1% increase in brightness when used to treat wheat straw based kraft pulp. The hydrolytic potential of the enzyme was also demonstrated on both locust bean gum and guar gum.
Keywords: Brewer’s Spent Grain, Fusarium oxysporum, Mannanase, Response Surface Methodology.
2270 A Family Cars’ Life Cycle Cost (LCC)-Oriented Hybrid Modelling Approach Combining ANN and CBR
Authors: Xiaochuan Chen, Jianguo Yang, Beizhi Li
Abstract:
Design for cost (DFC) is a method that reduces life cycle cost (LCC) from the designer's point of view. A multiple domain features mapping (MDFM) methodology is used in DFC; using MDFM, design features can be used to estimate the LCC. From the DFC perspective, the design features of family cars were obtained, such as the overall dimensions, engine power and emission volume. At the conceptual design stage, the cars’ LCC was estimated using the back propagation (BP) artificial neural network (ANN) method and case-based reasoning (CBR). Hamming space was used to measure the similarity among cases in the CBR method, while the Levenberg-Marquardt (LM) algorithm and a genetic algorithm (GA) were used in the ANN. The differences between the CBR and ANN LCC estimation models are discussed. Used separately, each method has its shortcomings; by combining ANN and CBR, improved accuracy was obtained. First, the ANN was used to select the design features that affect LCC. Then, the LCC estimation results of the ANN were used to raise the accuracy of LCC estimation in the CBR method. Third, the ANN was used to estimate LCC errors and correct the errors in the CBR estimation results when the accuracy was insufficient. Finally, economy family cars and a sport utility vehicle (SUV) were given as LCC estimation cases using this hybrid approach combining ANN and CBR.
Keywords: case-based reasoning, life cycle cost (LCC), artificial neural networks (ANN), family cars
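As an illustration of the CBR half of the hybrid, a minimal sketch of Hamming-space case retrieval for LCC estimation; the discretised design features, the case base values and the choice of k are entirely hypothetical:

```python
import numpy as np

def hamming(a, b):
    """Hamming distance between two discretised feature vectors."""
    return int(np.sum(np.asarray(a) != np.asarray(b)))

def cbr_estimate(query, case_base, k=3):
    """Retrieve the k nearest past cases in Hamming space and average their known LCC."""
    ranked = sorted(case_base, key=lambda case: hamming(query, case["features"]))
    nearest = ranked[:k]
    return sum(case["lcc"] for case in nearest) / k, nearest

# hypothetical case base: discretised design features (size class, engine-power class,
# emission class, body type) and a known life cycle cost in arbitrary currency units
case_base = [
    {"features": (2, 1, 1, 0), "lcc": 41_000},
    {"features": (3, 2, 2, 0), "lcc": 55_000},
    {"features": (1, 1, 0, 0), "lcc": 33_000},
    {"features": (3, 3, 2, 1), "lcc": 72_000},   # an SUV-like case
]
estimate, used = cbr_estimate(query=(2, 2, 1, 0), case_base=case_base)
print(estimate, [c["lcc"] for c in used])
```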
2269 Tree Based Data Fusion Clustering Routing Algorithm for Illimitable Network Administration in Wireless Sensor Network
Authors: Y. Harold Robinson, M. Rajaram, E. Golden Julie, S. Balaji
Abstract:
In wireless sensor networks, locality and positioning information can be captured using the Global Positioning System (GPS). This information can initially be gathered from each spot to identify the system. Users can retrieve information of interest from a wireless sensor network (WSN) by injecting queries and gathering results from the mobile sink nodes. Routing is the process of choosing an optimal path in a mobile network. An intermediate node partitions the device nodes into teams and generates cluster heads that gather the data from each cluster’s nodes and forward the collective data to the base station. WSNs are widely used for gathering data. Since sensors are power-constrained devices, it is quite vital for them to reduce power utilization. A tree-based data fusion clustering routing algorithm (TBDFC) is used to reduce energy consumption in wireless sensor networks. Here, the nodes in a tree use cluster formation, whereas the height of the tree is decided based on the distance of the member nodes from the cluster head. Network simulation shows that this scheme improves the power utilization of the nodes and thus considerably improves the network lifetime.
Keywords: WSN, TBDFC, LEACH, PEGASIS, TREEPSI.
2268 Solar Tracking System: More Efficient Use of Solar Panels
Abstract:
This paper shows the potential system benefits of a simple solar tracking system using a stepper motor and a light sensor. The method increases power collection efficiency by means of a device that tracks the sun so as to keep the panel at a right angle to its rays. A solar tracking system is designed, implemented and experimentally tested. The design details and the experimental results are presented.
Keywords: Renewable Energy, Power Optimization.
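A minimal sketch of the "step toward the brighter sensor" control loop, with the light sensors and stepper motor replaced by a simple simulation; the step angle, deadband and sensor model are assumptions, not the hardware used in the paper:

```python
import math

STEP_DEG = 1.8                      # assumed stepper step angle
panel_angle, sun_angle = 0.0, 40.0  # degrees, simulated

def read_ldrs():
    """Simulated east/west light-dependent resistors: readings are equal when the panel faces the sun."""
    err = math.radians(sun_angle - panel_angle)
    return math.cos(err) - 0.5 * math.sin(err), math.cos(err) + 0.5 * math.sin(err)

def step_motor(direction):
    """Stand-in for pulsing the stepper driver; rotates the simulated panel by one step."""
    global panel_angle
    panel_angle += direction * STEP_DEG

for _ in range(100):                # control loop: step toward the brighter sensor
    east, west = read_ldrs()
    if abs(west - east) > 0.01:     # deadband so the panel does not hunt around the balance point
        step_motor(+1 if west > east else -1)

print(f"panel now at {panel_angle:.1f} deg, sun at {sun_angle} deg")
```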
2267 Stability Analysis of a Tricore
Authors: C. M. De Marco Muscat-Fenech, A.M. Grech La Rosa
Abstract:
The application of stability theory has led to detailed studies of different types of vessels; however, the shortage of information relating to multihull vessels demanded further investigation. This study shows that the position of the hulls has a very influential effect on both the transverse and longitudinal stability of the tricore. The HSC stability code is applied for the optimisation of the hull configurations. Such optimisation criteria would undoubtedly aid the performance of the vessel for both commercial and leisure purposes.
Keywords: Stability, Multihull, Tricore
2266 Isolation and Classification of Red Blood Cells in Anemic Microscopic Images
Authors: Jameela Ali Alkrimi, Loay E. George, Azizah Suliman, Abdul Rahim Ahmad, Karim Al-Jashamy
Abstract:
Red blood cells (RBCs) are among the most commonly and intensively studied types of blood cells in cell biology. Anemia, a lack of RBCs, is characterized by the hemoglobin level compared with the normal level. In this study, an image-processing-based methodology was developed to localize and extract RBCs from microscopic images, and a machine learning approach was adopted to classify the localized anemic RBC images. Several textural and geometrical features were calculated for each extracted RBC, and the training set of features was analyzed using principal component analysis (PCA). With the proposed method, RBCs were isolated in 4.3 seconds from an image containing 18 to 27 cells. The reasons behind using PCA are its low computational complexity and its suitability for finding the most discriminating features, which can lead to accurate classification decisions. Our classifiers yielded accuracy rates of 100%, 99.99%, and 96.50% for the K-nearest neighbor (K-NN) algorithm, the support vector machine (SVM), and the RBFNN neural network, respectively. Classification was evaluated in terms of sensitivity, specificity, and kappa statistical parameters. In conclusion, the classification results were obtained within a short time period, and the results improved when PCA was used.
Keywords: Red blood cells, pre-processing image algorithms, classification algorithms, principal component analysis (PCA), confusion matrix, kappa statistical parameters, ROC.
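As an illustration of the PCA-plus-classifier stage, a minimal scikit-learn sketch; the synthetic feature matrix and labels merely stand in for the textural/geometrical RBC features, so the printed scores are not the paper's results:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, cohen_kappa_score

# synthetic stand-in for the textural/geometrical features of extracted RBCs
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 20))                      # 300 cells, 20 features each
y = rng.integers(0, 2, 300)                         # 0 = normal, 1 = anemic (made-up labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

pca = PCA(n_components=5).fit(X_tr)                 # keep the most discriminating directions
knn = KNeighborsClassifier(n_neighbors=3).fit(pca.transform(X_tr), y_tr)

pred = knn.predict(pca.transform(X_te))
print("accuracy:", accuracy_score(y_te, pred), "kappa:", cohen_kappa_score(y_te, pred))
```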
2265 Efficient Dimensionality Reduction of Directional Overcurrent Relays Optimal Coordination Problem
Authors: Fouad Salha, X. Guillaud
Abstract:
Directional overcurrent relays (DOCR) are commonly used in power system protection as primary protection in distribution and sub-transmission electrical systems and as secondary protection in transmission systems. Coordination of protective relays is necessary to obtain selective tripping. In this paper, an approach for efficient dimensionality reduction of the nonlinear DOCR optimum coordination (OC) problem is proposed. This was achieved by modifying the objective function and relaxing several constraints according to a four-class constraint classification: non-valid, redundant, pre-obtained and valid constraints. Based on this classification, the effect of far-end faults on the objective function and constraints, and consequently on relay operating times, was studied. The study was carried out, firstly, by taking into account both near-end and far-end faults in the DOCR coordination problem formulation, and then only faults very close to the primary relays (near-end faults). The optimal coordination (OC) was achieved by simultaneously optimizing all variables (TDS and Ip) in a nonlinear environment using genetic algorithm nonlinear programming techniques. The results of applying the above two approaches to 6-bus and 26-bus systems verify that the treatment of far-end faults in the OC problem formulation does not affect optimality.
Keywords: Backup/Primary relay, Coordination time interval (CTI), directional over current relays, Genetic algorithm, time dial setting (TDS), pickup current setting (Ip), nonlinear programming.
2264 A Model for Estimation of Efforts in Development of Software Systems
Authors: Parvinder S. Sandhu, Manisha Prashar, Pourush Bassi, Atul Bisht
Abstract:
Software effort estimation is the process of predicting the most realistic amount of effort required to develop or maintain software based on incomplete, uncertain and/or noisy input. Effort estimates may be used as input to project plans, iteration plans and budgets. Various models, such as the Halstead, Walston-Felix, Bailey-Basili, Doty and GA-based models, have already been used to estimate the software effort for projects. In this study, statistical models, a Fuzzy-GA inference system and a Neuro-Fuzzy (NF) inference system are experimented with to estimate the software effort for projects. The performance of the developed models was tested on NASA software project datasets, and the results are compared with the Halstead, Walston-Felix, Bailey-Basili, Doty and genetic-algorithm-based models mentioned in the literature. The results show that the NF model has the lowest MMRE and RMSE values. The NF model gives the best results compared with the Fuzzy-GA based hybrid inference system and the other existing models used for effort prediction.
Keywords: Neuro-Fuzzy Model, Halstead Model, Walston-Felix Model, Bailey-Basili Model, Doty Model, GA Based Model, Genetic Algorithm.
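For reference, a minimal sketch of the kind of size-based effort models and error measures compared above; the model constants are the commonly cited ones and the project data are hypothetical, so this is not the paper's calibration or its NASA dataset:

```python
import numpy as np

# commonly cited KLOC-based effort models (person-months); the constants are the usually
# quoted ones and may differ from the calibrations used in the paper
def halstead(kloc):      return 0.7 * kloc ** 1.50
def walston_felix(kloc): return 5.2 * kloc ** 0.91
def bailey_basili(kloc): return 5.5 + 0.73 * kloc ** 1.16
def doty(kloc):          return 5.288 * kloc ** 1.047   # variant quoted for KLOC > 9

def mmre(actual, predicted):
    """Mean magnitude of relative error."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return np.mean(np.abs(actual - predicted) / actual)

def rmse(actual, predicted):
    """Root mean squared error."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return np.sqrt(np.mean((actual - predicted) ** 2))

kloc = np.array([10.0, 46.2, 21.5])          # hypothetical project sizes
actual = np.array([24.0, 96.0, 79.0])        # hypothetical measured efforts
pred = walston_felix(kloc)
print("MMRE:", mmre(actual, pred), "RMSE:", rmse(actual, pred))
```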
2263 Holistic Face Recognition Using Multivariate Approximation, Genetic Algorithms and AdaBoost Classifier: Preliminary Results
Authors: C. Villegas-Quezada, J. Climent
Abstract:
Several works regarding facial recognition have dealt with methods which identify isolated characteristics of the face or with templates which encompass several regions of it. In this paper, a new technique which approaches the problem holistically, dispensing with the need to identify geometrical characteristics or regions of the face, is introduced. The characterization of a face is achieved by randomly sampling selected attributes of the pixels of its image. From this information we construct a data set corresponding to the values of low frequencies, gradient, entropy and several other characteristics of the pixels of the image, generating a set of "p" variables. The multivariate data set is approximated with polynomials minimizing the data fitness error in the minimax sense (L∞ norm). With the use of a Genetic Algorithm (GA), the problem of dimensionality inherent to higher-degree polynomial approximations is circumvented. The GA yields the degree and the values of the coefficients of the polynomials approximating the image of a face. The system is trained by finding a family of characteristic polynomials in several variables (pixel characteristics) for each face (say Fi) in the database through a resampling process. A face (say F) is recognized by finding its characteristic polynomials and using an AdaBoost classifier to compare F's polynomials with each of the Fi's polynomials. The winner is the polynomial family closest to F's, corresponding to the target face in the database.
Keywords: AdaBoost Classifier, Holistic Face Recognition, Minimax Multivariate Approximation, Genetic Algorithm.
2262 Detection and Classification of Faults on Parallel Transmission Lines Using Wavelet Transform and Neural Network
Authors: V. S. Kale, S. R. Bhide, P. P. Bedekar, G. V. K. Mohan
Abstract:
The protection of parallel transmission lines has been a challenging task due to the mutual coupling between the adjacent circuits of the line. This paper presents a novel scheme for the detection and classification of faults on parallel transmission lines. The proposed approach uses a combination of wavelet transform and neural network to solve the problem. While the wavelet transform is a powerful mathematical tool which can be employed as a fast and very effective means of analyzing power system transient signals, an artificial neural network has the ability to classify non-linear relationships between measured signals by identifying different patterns of the associated signals. The proposed algorithm consists of time-frequency analysis of fault-generated transients using the wavelet transform, followed by pattern recognition using an artificial neural network to identify the type of the fault. MATLAB/Simulink is used to generate fault signals and verify the correctness of the algorithm. The adaptive discrimination scheme is tested by simulating different types of faults and varying the fault resistance, fault location and fault inception time on a given power system model. The simulation results show that the proposed scheme for fault diagnosis is able to classify all the faults on the parallel transmission line rapidly and correctly.
Keywords: Artificial neural network, fault detection and classification, parallel transmission lines, wavelet transform.
2261 Eigenvalues of Particle Bound in Single and Double Delta Function Potentials through Numerical Analysis
Authors: Edward Aris D. Fajardo, Hamdi Muhyuddin D. Barra
Abstract:
This study employs the fourth-order Numerov scheme to determine the eigenstates and eigenvalues of particles, electrons in particular, in single and double delta function potentials. For the single delta potential, it is found that the eigenstates can only be attained by using specific potential depths. The depth of the delta potential well has a value that varies depending on the delta strength. These depths are used for each well of the double delta function potential and the eigenvalues are determined. Two bound states are found in the computation, one with a symmetric eigenstate and another which is antisymmetric.
Keywords: Double Delta Potential, Eigenstates, Eigenvalue, Numerov Method, Single Delta Potential
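A minimal sketch of the Numerov scheme combined with energy bisection for the single delta potential (in units hbar = m = 1, where the analytic bound state is E = -alpha^2/2); the grid, the one-site representation of the delta spike and the energy bracket are illustrative choices, not the authors' implementation:

```python
import numpy as np

def numerov(E, V, x):
    """Integrate y'' = 2*(V(x) - E)*y across the grid with the fourth-order Numerov scheme."""
    h = x[1] - x[0]
    f = 2.0 * (V - E)
    y = np.zeros_like(x)
    y[0], y[1] = 0.0, 1e-6                      # tiny decaying tail entering from the left
    for n in range(1, len(x) - 1):
        y[n + 1] = (2 * y[n] * (1 + 5 * h**2 * f[n] / 12)
                    - y[n - 1] * (1 - h**2 * f[n - 1] / 12)) / (1 - h**2 * f[n + 1] / 12)
    return y

def shoot(E, V, x):
    return numerov(E, V, x)[-1]                 # vanishes at the right boundary at an eigenvalue

# single attractive delta potential V(x) = -alpha*delta(x), put on the grid as one deep site
alpha = 1.0
x = np.linspace(-15.0, 15.0, 6001)
h = x[1] - x[0]
V = np.zeros_like(x)
V[len(x) // 2] = -alpha / h                     # discrete representation of the delta spike

lo, hi = -1.0, -0.1                             # bracket assumed to contain the single bound state
for _ in range(60):                             # bisection on the energy
    mid = 0.5 * (lo + hi)
    if shoot(lo, V, x) * shoot(mid, V, x) < 0:
        hi = mid
    else:
        lo = mid
print("numerical E0 ≈", 0.5 * (lo + hi), " analytic -alpha^2/2 =", -alpha**2 / 2)
```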
2260 Open-Loop Vector Control of Induction Motor with Space Vector Pulse Width Modulation Technique
Authors: Karchung, S. Ruangsinchaiwanich
Abstract:
This paper presents an open-loop vector control method for an induction motor with the space vector pulse width modulation (SVPWM) technique. Normally, closed-loop speed control is preferred and is believed to be more accurate. However, it requires a position sensor to track the rotor position, which is not desirable in certain workspace applications. This paper exhibits the performance of a three-phase induction motor with the simplest control algorithm, without the use of a position sensor or an estimation block to estimate the rotor position for sensorless control. The motor stator currents are measured and transformed to the synchronously rotating (d-q-axis) frame by the Clarke and Park transformations. The actual control happens in this frame, where the measured currents are compared with the reference currents. The error signal is fed to a conventional PI controller, and the corrected d-q voltage is generated. The controller outputs are transformed back to three-phase voltages and fed to the SVPWM block, which generates the PWM signal for the voltage source inverter. The open-loop vector control model along with the SVPWM algorithm is modeled in MATLAB/Simulink and is experimented with and validated on a TMS320F28335 DSP board.
Keywords: Electric drive, induction motor, open-loop vector control, space vector pulse width modulation technique.
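As an illustration of the Clarke and Park transformations used to move the measured stator currents into the rotating d-q frame, a minimal NumPy sketch with a balanced three-phase test signal (amplitude-invariant scaling assumed; the PI controller and SVPWM stages are not shown):

```python
import numpy as np

def clarke(ia, ib, ic):
    """Amplitude-invariant Clarke transform: three-phase currents -> stationary alpha-beta frame."""
    i_alpha = (2.0 / 3.0) * (ia - 0.5 * ib - 0.5 * ic)
    i_beta = (ib - ic) / np.sqrt(3.0)
    return i_alpha, i_beta

def park(i_alpha, i_beta, theta):
    """Park transform: stationary alpha-beta frame -> synchronously rotating d-q frame."""
    i_d = i_alpha * np.cos(theta) + i_beta * np.sin(theta)
    i_q = -i_alpha * np.sin(theta) + i_beta * np.cos(theta)
    return i_d, i_q

# balanced three-phase currents rotating with theta: d stays constant and q stays near zero
theta = np.linspace(0, 2 * np.pi, 5)
ia = np.cos(theta)
ib = np.cos(theta - 2 * np.pi / 3)
ic = np.cos(theta + 2 * np.pi / 3)
print(park(*clarke(ia, ib, ic), theta))          # ~ (1, 0) at every sample
```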
2259 An Approach of Quantum Steganography through Special SSCE Code
Authors: Indradip Banerjee, Souvik Bhattacharyya, Gautam Sanyal
Abstract:
Frequent sending of encrypted messages draws the attention of third parties, perhaps causing attempts to break and reveal the original messages. Steganography is introduced to hide the existence of the communication by concealing a secret message in an appropriate carrier like text, image, audio or video. In quantum steganography, the sender (Alice) embeds her steganographic information into the cover and sends it to the receiver (Bob) over a communication channel. Alice and Bob share an algorithm and hide quantum information in the cover. An eavesdropper (Eve) without access to the algorithm cannot find out the existence of the quantum message. In this paper, a text quantum steganography technique based on the use of the indefinite articles (a) or (an) in conjunction with nonspecific or non-particular nouns in the English language and a quantum gate truth table is proposed. The authors also introduce a new code representation technique (SSCE - Secret Steganography Code for Embedding) at both ends in order to achieve a high level of security. Before the embedding operation, each character of the secret message is converted to its SSCE value and then embedded into the cover text. Finally, the stego text is formed and transmitted to the receiver side. At the receiver side, the reverse operations are carried out to recover the original information.
Keywords: Quantum Steganography, SSCE (Secret Steganography Code for Embedding), Security, Cover Text, Stego Text.
2258 Development of Integrated GIS Interface for Characteristics of Regional Daily Flow
Authors: Ju Young Lee, Jung-Seok Yang, Jaeyoung Choi
Abstract:
This paper primarily intends to develop a GIS interface for estimating sequences of stream-flows at ungauged stations based on known flows at gauged stations. The integrated GIS interface is composed of three major steps. The first step, analysis of precipitation characteristics using statistical analysis, is the procedure for building a multiple linear regression equation to obtain the long-term mean daily flow at ungauged stations; the independent variables in the regression equation are mean daily flow and drainage area. Traditionally, mean flow data are generated using the Thiessen polygon method. However, the method for obtaining mean flow data can be selected by the user, such as Kriging, IDW (Inverse Distance Weighted) and spline methods, as well as the traditional methods. In the second step, the flow duration curve (FDC) at an ungauged station is computed from the FDCs at gauged stations, and the mean annual daily flow is computed by a spatial interpolation algorithm. The third step is to obtain watershed/topographic characteristics, which are the most important factors governing stream-flows. In summary, the simulated daily flow time series are compared with the observed time series. The results obtained using the integrated GIS interface closely match the observations and fit them well. Also, the relationship between the topographic/watershed characteristics and the stream flow time series is highly correlated.
Keywords: Integrated GIS interface, spatial interpolation algorithm, FDC.
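To make the spatial interpolation option concrete, a minimal sketch of IDW estimation of mean daily flow at an ungauged point; the station coordinates, flow values and power parameter are invented for illustration and are not from the paper:

```python
import numpy as np

def idw(xy_known, values, xy_query, power=2.0):
    """Inverse Distance Weighted estimate at query points from gauged-station values."""
    xy_known, xy_query = np.atleast_2d(xy_known), np.atleast_2d(xy_query)
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    d = np.maximum(d, 1e-12)                    # avoid division by zero at a gauged location
    w = 1.0 / d ** power
    return (w @ np.asarray(values)) / w.sum(axis=1)

# hypothetical gauged stations (x, y in km) with long-term mean daily flow (m^3/s)
stations = [(2.0, 3.0), (10.0, 1.0), (6.0, 8.0)]
mean_flow = [12.4, 7.9, 15.2]
print(idw(stations, mean_flow, [(5.0, 4.0)]))   # estimate at an ungauged point
```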
2257 Obtaining High-Dimensional Configuration Space for Robotic Systems Operating in a Common Environment
Authors: U. Yerlikaya, R. T. Balkan
Abstract:
In this research, a method is developed to obtain a high-dimensional configuration space for path planning problems. In typical cases, path planning problems are solved directly in the 3-dimensional (D) workspace. However, this method is inefficient in handling robots with various geometrical and mechanical restrictions. To overcome these difficulties, path planning may be formalized and solved in a new space called the configuration space. The number of dimensions of the configuration space comes from the degrees of freedom of the system of interest. The method can be applied in two ways. In the first way, the point clouds of all the bodies of the system and their interactions are used. The second way is performed via the clearance function of simulation software, where the minimum distances between the surfaces of bodies are simultaneously measured. A double-turret system is considered in the scope of this study, and its 4-D configuration space is obtained in these two ways. As a result, the difference between the two methods is around 1%, depending on the density of the point cloud, and the disparity steadily decreases as the point cloud density increases. At the end of the study, in order to verify the 4-D configuration space obtained, the 4-D path planning problem was treated as 2-D + 2-D and a sample path planning was carried out using the A* algorithm. Then, the accuracy of the configuration space was proved using the obtained paths on the simulation model of the double-turret system.
Keywords: A* Algorithm, autonomous turrets, high-dimensional C-Space, manifold C-Space, point clouds.
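As an illustration of building a configuration-space slice from point clouds and minimum distances, a minimal SciPy sketch for two rotating bodies; the bar-shaped point clouds, mounting offset and clearance threshold are invented, so this is only a 2-D analogue of the 4-D C-space described above:

```python
import numpy as np
from scipy.spatial import cKDTree

def rotate(points, angle_deg):
    """Rotate a 2-D point cloud about the origin."""
    a = np.radians(angle_deg)
    R = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
    return points @ R.T

# crude point clouds standing in for two turret bodies (local frames, arbitrary shapes)
turret_a = np.array([[x, y] for x in np.linspace(0, 2, 15) for y in (-0.1, 0.0, 0.1)])
turret_b = turret_a + np.array([1.5, 0.0])       # second turret mounted with an offset

clearance_min = 0.05                             # collision threshold on the minimum distance
angles = np.arange(0, 360, 5)
cspace = np.zeros((len(angles), len(angles)), dtype=bool)   # True = free configuration

for i, a in enumerate(angles):                   # sweep a 2-D slice of the configuration space
    tree = cKDTree(rotate(turret_a, a))
    for j, b in enumerate(angles):
        d, _ = tree.query(rotate(turret_b - [1.5, 0.0], b) + [1.5, 0.0])
        cspace[i, j] = d.min() > clearance_min   # free if the closest point pair is far enough apart

print("free configurations:", cspace.sum(), "of", cspace.size)
```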