Search results for: Genetic algorithm optimization
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4796

2456 Development of a Methodology for Processing of Drilling Operations

Authors: Majid Tolouei-Rad, Ankit Shah

Abstract:

Drilling is the most common machining operation and it forms the highest machining cost in many manufacturing activities including automotive engine production. The outcome of this operation depends upon many factors including utilization of proper cutting tool geometry, cutting tool material and the type of coating used to improve hardness and resistance to wear, and also cutting parameters. With the availability of a large array of tool geometries, materials and coatings, it has become a challenging task to select the best tool and cutting parameters that would result in the lowest machining cost or highest profit rate. This paper describes an algorithm developed to help achieve good performance in drilling operations by automatic determination of proper cutting tools and cutting parameters. It also helps determine machining sequences resulting in minimum tool changes that would eventually reduce machining time and cost where multiple tools are used.

Keywords: Cutting tool, drilling, machining, algorithm.

2455 Value Index, a Novel Decision Making Approach for Waste Load Allocation

Authors: E. Feizi Ashtiani, S. Jamshidi, M.H Niksokhan, A. Feizi Ashtiani

Abstract:

Waste load allocation (WLA) policies may use multiobjective optimization methods to find the most appropriate and sustainable solutions. These usually intend to simultaneously minimize two criteria, total abatement costs (TC) and environmental violations (EV). If other criteria, such as inequity, need to be minimized as well, more binary optimizations must be introduced through different scenarios. In order to reduce the calculation steps, this study presents the value index as an innovative decision making approach. Since the value index contains both the environmental violations and the treatment costs, it can be maximized simultaneously with the equity index. This implies that the definition of different scenarios for environmental violations is no longer required. Furthermore, the solution is not necessarily the point with minimized total costs or environmental violations. This idea is tested for the Haraz River, in the north of Iran. Here, the dissolved oxygen (DO) level of the river is simulated by the Streeter-Phelps equation in MATLAB software. The WLA is determined for fish farms using multi-objective particle swarm optimization (MOPSO) in two scenarios. In the first, the trade-off curves of TC-EV and TC-Inequity are plotted separately as the conventional approach. In the second, the Value-Equity curve is derived. The comparative results show that the solutions are in a similar range of inequity with lower total costs. This is due to the freedom in environmental violation attained by the value index. As a result, the conventional approach can well be replaced by the value index, particularly for problems optimizing these objectives. This reduces the process of reaching the best solutions and may find a better classification for scenario definition. It is also concluded that decision makers should focus on the value index and on weighting its contents to find the most sustainable alternatives based on their requirements.
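
As an illustration of the simulation step described above, here is a minimal Python sketch of the Streeter-Phelps dissolved-oxygen sag model (not the authors' MATLAB implementation); all rate constants, loads and the DO standard below are hypothetical placeholders.

```python
import numpy as np

def streeter_phelps_do(L0, D0, kd, ka, DO_sat, t):
    """Dissolved-oxygen profile along a river reach via the Streeter-Phelps equation.

    L0     : initial BOD load (mg/L) after a waste discharge
    D0     : initial DO deficit (mg/L)
    kd, ka : deoxygenation and reaeration rate constants (1/day)
    DO_sat : saturation DO concentration (mg/L)
    t      : travel times (days) downstream of the discharge
    """
    deficit = (kd * L0 / (ka - kd)) * (np.exp(-kd * t) - np.exp(-ka * t)) \
              + D0 * np.exp(-ka * t)
    return DO_sat - deficit

# Hypothetical reach: check whether DO stays above a 5 mg/L standard.
t = np.linspace(0.0, 5.0, 101)
do = streeter_phelps_do(L0=20.0, D0=1.0, kd=0.35, ka=0.7, DO_sat=9.0, t=t)
print("minimum DO along the reach: %.2f mg/L" % do.min())
```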

Keywords: Waste load allocation (WLA), Value index, Multi objective particle swarm optimization (MOPSO), Haraz River, Equity.

2454 Developments for "Virtual" Monitoring and Process Simulation of the Cryogenic Pilot Plant

Authors: Carmen Maria Moraru, Iuliana Stefan, Ovidiu Balteanu, Ciprian Bucur, Liviu Stefan, Anisia Bornea, Ioan Stefanescu

Abstract:

The implementation of new software and hardware technologies for tritium processing nuclear plants, and especially for those with an experimental character or based on new technology developments, shows a degree of complexity due to issues raised by the integration of high-performance instrumentation and equipment into a unitary monitoring system of the nuclear technological process of tritium removal. Keeping the system's flexibility is a demand of nuclear experimental plants, for which changes of configuration, process and parameters are usual. The large amount of data that needs to be processed, stored and accessed for real time simulation and optimization demands the achievement of a virtual technological platform where the data acquisition, control and analysis systems of the technological process can be integrated with a developed technological monitoring system. Thus, the integrated computing and monitoring systems needed for supervising the technological process will be executed, to be continued with the execution of the optimization system, by choosing new and high-performing methods corresponding to the technological processes within tritium removal processing nuclear plants. The software applications are developed with the support of program packages dedicated to industrial processes and include acquisition and monitoring sub-modules, called "virtual", as well as a storage sub-module for the process data later required by the software for optimization and simulation of the technological process for tritium removal. The system plays an important role in environmental protection and sustainable development through new technologies, that is, the reduction of and fight against industrial accidents in tritium processing nuclear plants. Research on monitoring optimisation of nuclear processes is also a major driving force for economic and social development.

Keywords: Monitoring system, process simulation.

2453 Global Chaos Synchronization of Identical and Nonidentical Chaotic Systems Using Only Two Nonlinear Controllers

Authors: Azizan Bin Saaban, Adyda Binti Ibrahim, Mohammad Shehzad, Israr Ahmad

Abstract:

In chaos synchronization, the main goal is to design controller(s) that synchronize the states of the master and slave systems asymptotically and globally. This paper studies the synchronization problem of two identical Chen systems, two identical Tigan chaotic systems, and two non-identical Chen and Tigan chaotic systems using a nonlinear active control algorithm. Based on Lyapunov stability theory and using the nonlinear active control algorithm, it is shown that the proposed schemes achieve excellent transient performance using only two nonlinear controllers, and it is demonstrated both analytically and graphically that the synchronization is globally asymptotically stable.
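
For readers unfamiliar with active-control synchronization, the sketch below simulates two Chen systems coupled by a generic nonlinear active controller that cancels the slave dynamics and forces linear error dynamics. It is a minimal Python illustration, not the paper's two-controller scheme; the gains and initial conditions are assumptions.

```python
import numpy as np
from scipy.integrate import odeint

# Chen system parameters (standard chaotic values)
a, b, c = 35.0, 3.0, 28.0

def chen(s):
    x, y, z = s
    return np.array([a * (y - x), (c - a) * x - x * z + c * y, x * y - b * z])

def coupled(state, t, K):
    m, s = state[:3], state[3:]          # master and slave states
    e = s - m                            # synchronization error
    u = chen(m) - chen(s) - K * e        # active control: cancel dynamics, force e' = -K e
    return np.concatenate([chen(m), chen(s) + u])

t = np.linspace(0.0, 10.0, 2001)
x0 = np.array([1.0, 2.0, 3.0, -4.0, 5.0, -6.0])   # different initial conditions
sol = odeint(coupled, x0, t, args=(10.0,))
err = np.linalg.norm(sol[:, 3:] - sol[:, :3], axis=1)
print("final synchronization error: %.2e" % err[-1])
```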

Keywords: Nonlinear Active Control, Chen and Tigan Chaotic systems, Lyapunov Stability theory, Synchronization.

2452 Grouping-Based Job Scheduling Model In Grid Computing

Authors: Vishnu Kant Soni, Raksha Sharma, Manoj Kumar Mishra

Abstract:

Grid computing is a high performance computing environment for solving large scale computational applications. Grid computing involves resource management, job scheduling, security problems, information management and so on. Job scheduling is a fundamental and important issue in achieving high performance in grid computing systems. However, designing and implementing an efficient scheduler is a big challenge. In grid computing, there is a need for further improvement of job scheduling algorithms by grouping light-weight or small jobs into coarse-grained jobs, which reduces communication time and processing time and enhances resource utilization. The proposed grouping strategy considers the processing power, memory size and bandwidth requirements of each job to reflect a real grid system. The experimental results demonstrate that the proposed scheduling algorithm efficiently reduces the processing time of jobs in comparison to others.
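
A minimal sketch of the grouping idea, assuming only the processing-power criterion (the paper also considers memory and bandwidth); the MIPS rating, granularity and job lengths below are hypothetical.

```python
def group_jobs(job_lengths_mi, resource_mips, granularity_s):
    """Group light-weight jobs into coarse-grained jobs for one grid resource.

    job_lengths_mi : list of job lengths in million instructions (MI)
    resource_mips  : processing power of the target resource (MIPS)
    granularity_s  : desired processing time per grouped job (seconds)
    """
    capacity = resource_mips * granularity_s   # MI a grouped job should contain
    groups, current, used = [], [], 0.0
    for length in job_lengths_mi:
        if current and used + length > capacity:
            groups.append(current)
            current, used = [], 0.0
        current.append(length)
        used += length
    if current:
        groups.append(current)
    return groups

# Hypothetical workload: 12 small jobs grouped for a 500 MIPS resource, 4 s granularity.
jobs = [300, 250, 800, 120, 450, 640, 90, 700, 310, 220, 150, 400]
for i, g in enumerate(group_jobs(jobs, resource_mips=500, granularity_s=4)):
    print("group %d: %s (total %d MI)" % (i, g, sum(g)))
```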

Keywords: Grid computing, Job grouping, Job scheduling.

2451 Computing the Loop Bound in Iterative Data Flow Graphs Using Natural Token Flow

Authors: Ali Shatnawi

Abstract:

Signal processing applications which are iterative in nature are best represented by data flow graphs (DFG). In these applications, the maximum sampling frequency is dependent on the topology of the DFG, the cyclic dependencies in particular. The determination of the iteration bound, which is the reciprocal of the maximum sampling frequency, is critical in the process of hardware implementation of signal processing applications. In this paper, a novel technique to compute the iteration bound is proposed. This technique differs from all previously proposed techniques in that it is based on the natural flow of tokens in the DFG rather than the topology of the graph. The proposed algorithm has lower run-time complexity than all known algorithms. The performance of the proposed algorithm is illustrated through analysis of the time complexity as well as through simulation of some benchmark problems.
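
For context, the sketch below computes the iteration bound by the classical cycle-enumeration definition (maximum over loops of loop computation time divided by loop delays), not by the paper's token-flow technique; the example DFG is hypothetical.

```python
import networkx as nx

def iteration_bound(edges, node_time):
    """Classical loop-bound computation: T_inf = max over cycles of (total node time / total delays).

    edges     : list of (u, v, delays) arcs of the data flow graph
    node_time : dict mapping node -> computation time
    """
    g = nx.DiGraph()
    for u, v, d in edges:
        g.add_edge(u, v, delay=d)
    bound = 0.0
    for cycle in nx.simple_cycles(g):          # enumerate directed cycles
        time = sum(node_time[n] for n in cycle)
        delays = sum(g[u][v]["delay"]
                     for u, v in zip(cycle, cycle[1:] + cycle[:1]))
        if delays > 0:
            bound = max(bound, time / delays)
    return bound

# Hypothetical recursive DFG: two loops sharing node A.
edges = [("A", "B", 1), ("B", "A", 0), ("A", "C", 2), ("C", "A", 0)]
node_time = {"A": 1.0, "B": 2.0, "C": 4.0}
print("iteration bound:", iteration_bound(edges, node_time))
```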

Keywords: Data flow graph, Iteration period bound, Rate optimal scheduling, Recursive DSP algorithms.

2450 Automatic LV Segmentation with K-means Clustering and Graph Searching on Cardiac MRI

Authors: Hae-Yeoun Lee

Abstract:

Quantification of cardiac function is performed by calculating blood volume and ejection fraction in routine clinical practice. However, this work has been performed by manual contouring, which is time-consuming and varies with the observer. In this paper, an automatic left ventricle segmentation algorithm for cardiac magnetic resonance images (MRI) is presented. Using knowledge of cardiac MRI, a K-means clustering technique is applied to segment the blood region on a coil-sensitivity corrected image. Then, a graph searching technique is used to correct segmentation errors from coil distortion and noise. Finally, blood volume and ejection fraction are calculated. Using cardiac MRI from 15 subjects, the presented algorithm is tested and compared with manual contouring by experts and shows outstanding performance.
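
A toy sketch of the K-means intensity-clustering step, assuming the blood pool is the brightest cluster on a coil-corrected slice; the synthetic image and cluster count are placeholders, and the graph-searching correction step is omitted.

```python
import numpy as np
from sklearn.cluster import KMeans

def segment_blood_pool(image, n_clusters=3):
    """Cluster pixel intensities and return a mask of the brightest cluster.

    A stand-in for the K-means step: on coil-corrected cine MRI the blood pool
    is typically the brightest intensity cluster. `image` is a 2D array.
    """
    intensities = image.reshape(-1, 1).astype(float)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(intensities)
    brightest = np.argmax(km.cluster_centers_.ravel())
    return (km.labels_ == brightest).reshape(image.shape)

# Hypothetical synthetic slice: dark background, mid-gray myocardium, bright blood disc.
yy, xx = np.mgrid[0:128, 0:128]
image = 40 + 10 * np.random.rand(128, 128)
image[((yy - 64) ** 2 + (xx - 64) ** 2) < 30 ** 2] = 120     # "myocardium + cavity"
image[((yy - 64) ** 2 + (xx - 64) ** 2) < 18 ** 2] = 220     # "blood pool"
mask = segment_blood_pool(image)
print("blood-pool pixels:", int(mask.sum()))
```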

Keywords: Cardiac MRI, Graph searching, Left ventricle segmentation, K-means clustering.

2449 Virtual Machines Cooperation for Impatient Jobs under Cloud Paradigm

Authors: Nawfal A. Mehdi, Ali Mamat, Hamidah Ibrahim, Shamala K. Syrmabn

Abstract:

The increase in demand for IT resources diverts enterprises to use the cloud as a cheap and scalable solution. Cloud computing promises are achieved by using the virtual machine as a basic unit of computation. However, the virtual machine's pre-defined settings might not be enough to handle the QoS requirements of jobs. This paper addresses the problem of mapping jobs that have critical start deadlines to virtual machines with predefined specifications. These virtual machines are hosted by physical machines and share a fixed amount of bandwidth. This paper proposes an algorithm that uses the idle virtual machines' bandwidth to increase the quota of other virtual machines nominated as executors of urgent jobs. An algorithm together with an empirical study is given to evaluate the impact of the proposed model on impatient jobs. The results show the importance of dynamic bandwidth allocation in a virtualized environment and its effect on the throughput metric.

Keywords: Insufficient bandwidth, virtual machine, cloud provider, impatient jobs.

2448 Reduction of Linear Time-Invariant Systems Using Routh-Approximation and PSO

Authors: S. Panda, S. K. Tomar, R. Prasad, C. Ardil

Abstract:

Order reduction of linear time-invariant systems employing two methods, one using the advantages of Routh approximation and the other an evolutionary technique, is presented in this paper. In the Routh approximation method the denominator of the reduced order model is obtained using Routh approximation, while the numerator of the reduced order model is determined using the indirect approach of retaining the time moments and/or Markov parameters of the original system. By this method the reduced order model is guaranteed to be stable if the original high order model is stable. In the second method Particle Swarm Optimization (PSO) is employed to reduce the higher order model. The PSO method is based on the minimization of the Integral Squared Error (ISE) between the transient responses of the original higher order model and the reduced order model pertaining to a unit step input. Both methods are illustrated through numerical examples.
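
A minimal sketch of the PSO-based reduction step: a plain global-best PSO searches second-order model coefficients that minimize the ISE between step responses. The original fourth-order system and the PSO settings are hypothetical, and this is not the authors' tuned implementation.

```python
import numpy as np
from scipy import signal

# Original higher-order system (hypothetical 4th-order example, not from the paper).
num_hi = [1, 7, 24, 24]
den_hi = [1, 10, 35, 50, 24]
t = np.linspace(0.0, 10.0, 500)
_, y_hi = signal.step((num_hi, den_hi), T=t)

def ise(params):
    """Integral squared error between step responses of original and 2nd-order model."""
    b1, b0, a1, a0 = params
    if a1 <= 0 or a0 <= 0:                      # keep the reduced model stable
        return 1e9
    _, y_lo = signal.step(([b1, b0], [1.0, a1, a0]), T=t)
    return np.trapz((y_hi - y_lo) ** 2, t)

# Plain global-best PSO (a generic sketch, not the authors' variant).
rng = np.random.default_rng(0)
n, dim, w, c1, c2 = 30, 4, 0.7, 1.5, 1.5
pos = rng.uniform(0.01, 5.0, (n, dim))
vel = np.zeros((n, dim))
pbest, pbest_f = pos.copy(), np.array([ise(p) for p in pos])
gbest = pbest[np.argmin(pbest_f)]

for _ in range(100):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    f = np.array([ise(p) for p in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[np.argmin(pbest_f)]

print("reduced model [b1, b0, a1, a0]:", np.round(gbest, 3), "ISE:", ise(gbest))
```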

Keywords: Model Order Reduction, Markov Parameters, Routh Approximation, Particle Swarm Optimization, Integral Squared Error, Steady State Stability.

2447 Forecasting Foreign Direct Investment with Modified Diffusion Model

Authors: Bi-Huei Tsai

Abstract:

Prior research has not effectively investigated how the profitability of Chinese branches affects FDIs in China [1, 2], so this study for the first time incorporates realistic earnings information to systematically investigate the effects of innovation, imitation, and profit factors on FDI diffusion from Taiwan to China. Our nonlinear least squares (NLS) model, which incorporates earnings factors, forms a nonlinear ordinary differential equation (ODE) in numerical simulation programs. The model parameters are obtained through a genetic algorithm (GA) technique and then optimized with the collected data for the best accuracy. In particular, Taiwanese regulatory FDI restrictions are also considered in our modified model to meet realistic conditions. To validate the model's effectiveness, this investigation compares the prediction accuracy of the modified model with the conventional diffusion model, which does not take account of the profitability factors. The results clearly demonstrate the internal influence to be positive, as early FDI adopters' consistent praise of FDI attracts potential firms to make the same move. The former establishes a behavior model for the latter to imitate in their foreign investment decisions. In particular, the results of the modified diffusion models show that the earnings from Chinese branches are positively related to the internal influence. In general, the imitating tendency of potential firms is substantially hindered by losses in the Chinese branches, and these firms would invest less in China. The FDI inflow extension depends on the earnings of Chinese branches, and companies will adjust their FDI strategies based on the returns. Since this research has proved that earnings are an influential factor in FDI dynamics, our revised model performs demonstrably better in prediction ability than the conventional diffusion model.

Keywords: diffusion model, genetic algorithms, nonlinear least squares (NLS) model, prediction error.

2446 Refitting Equations for Peak Ground Acceleration in Light of the PF-L Database

Authors: M. Breška, I. Peruš, V. Stankovski

Abstract:

The number of Ground Motion Prediction Equations (GMPEs) used for predicting peak ground acceleration (PGA) and the number of earthquake recordings that have been used for fitting these equations have increased in the past decades. The current PF-L database contains 3550 recordings. Since the GMPEs frequently model the peak ground acceleration, the goal of the present study was to refit a selection of 44 of the existing equation models for PGA in light of the latest data. The Levenberg-Marquardt algorithm was used for fitting the coefficients of the equations, and the results are evaluated both quantitatively, by presenting the root mean squared error (RMSE), and qualitatively, by drawing graphs of the five best fitted equations. The RMSE was found to be as low as 0.08 for the best equation models. The newly estimated coefficients vary from the values published in the original works.
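
A small sketch of the refitting step using SciPy's Levenberg-Marquardt solver on a simple, commonly used GMPE functional form; the functional form, coefficients and synthetic records below are assumptions standing in for the 44 refitted equations and the PF-L data.

```python
import numpy as np
from scipy.optimize import least_squares

def predicted_log_pga(coef, magnitude, distance):
    """Illustrative GMPE functional form: ln(PGA) = c1 + c2*M - c3*ln(R + c4)."""
    c1, c2, c3, c4 = coef
    return c1 + c2 * magnitude - c3 * np.log(distance + c4)

def refit_gmpe(magnitude, distance, log_pga, x0=(1.0, 1.0, 1.0, 10.0)):
    """Refit coefficients to observed records with the Levenberg-Marquardt algorithm."""
    residuals = lambda c: predicted_log_pga(c, magnitude, distance) - log_pga
    fit = least_squares(residuals, x0, method="lm")
    rmse = np.sqrt(np.mean(fit.fun ** 2))
    return fit.x, rmse

# Hypothetical synthetic records standing in for the PF-L database.
rng = np.random.default_rng(1)
mag = rng.uniform(4.0, 7.5, 300)
dist = rng.uniform(5.0, 200.0, 300)
obs = predicted_log_pga((0.8, 1.1, 1.4, 15.0), mag, dist) + rng.normal(0, 0.1, 300)
coef, rmse = refit_gmpe(mag, dist, obs)
print("coefficients:", np.round(coef, 3), "RMSE:", round(rmse, 3))
```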

Keywords: Ground Motion Prediction Equations, Levenberg-Marquardt algorithm, refitting, PF-L database.

2445 Studies on Properties of Knowledge Dependency and Reduction Algorithm in Tolerance Rough Set Model

Authors: Chen Wu, Lijuan Wang

Abstract:

The relation between tolerance classes, indispensable attributes, and knowledge dependency in the rough set model with a tolerance relation is explored. After giving definitions and concepts of knowledge dependency and knowledge dependency degree for incomplete information systems in the tolerance rough set model, distinguishing whether the decision attribute contains missing attribute values or not, it is proved that complete knowledge dependency maintains reflexivity, transitivity, the augmentation law, the decomposition law and the merge law. Knowledge dependency degrees (not complete knowledge dependency degrees) only satisfy some of these laws under transitivity, augmentation and decomposition operations. An algorithm to solve attribute reduction in an incomplete decision table is designed. Its correctness is checked by an example.

Keywords: Incomplete information system, rough set, tolerance relation, knowledge dependence, attribute reduction.

2444 A Bi-Objective Stochastic Mathematical Model for Agricultural Supply Chain Network

Authors: Mohammad Mahdi Paydar, Armin Cheraghalipour, Mostafa Hajiaghaei-Keshteli

Abstract:

Nowadays, in advanced countries, agriculture, as one of the most significant sectors of the economy, plays an important role in political and economic independence. Due to farmers' lack of information about product demand and the lack of proper planning for harvest time, a considerable amount of products is spoiled annually. In this paper, we attempt to improve these unfavorable conditions by designing an effective supply chain network that tries to minimize the total costs of agricultural products along with minimizing shortages at demand points. To validate the proposed model, a stochastic optimization approach using a branch and bound solver of the LINGO software is utilized. Furthermore, to gather the data for the parameters, a case study in Mazandaran province, located in the north of Iran, has been applied. Finally, using the ɛ-constraint approach, a Pareto front is obtained and one of its Pareto solutions is selected as the best solution; the related results of this solution are then explained. Conclusions and suggestions for future research are presented.

Keywords: Perishable products, stochastic optimization, agricultural supply chain, ɛ-constraint.

2443 Traceable Watermarking System using SoC for Digital Cinema Delivery

Authors: Sadi Vural, Hiromi Tomii, Hironori Yamauchi

Abstract:

As digital technology develops, digital cinema is becoming more widespread. However, content copying and attacks against digital cinema have become a serious problem. To solve this security problem, we propose "Additional Watermarking" for a digital cinema delivery system. With the proposed "Additional Watermarking" method, we protect content copyrights at the encoder and user side information at the decoder. It realizes the traceability of the watermark embedded at the encoder. The watermark is embedded into randomly selected frames using a hash function. The embedding position is distributed by the hash function so that third parties cannot break the watermarking algorithm. Finally, our experimental results show that the proposed method is much better than conventional watermarking techniques in terms of robustness, image quality and its simple but unbreakable algorithm.
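
A minimal sketch of hash-driven frame selection for the additional watermark; the identifiers and the choice of SHA-256 are assumptions, since the abstract does not specify the hash function.

```python
import hashlib

def watermark_frame_positions(content_id, user_key, total_frames, n_marks):
    """Derive watermark-embedding frame indices from a hash so the positions are
    spread pseudo-randomly and cannot be guessed without the key.

    content_id, user_key : identifiers bound into the hash (hypothetical names)
    total_frames         : number of frames in the digital-cinema sequence
    n_marks              : how many frames receive an additional watermark
    """
    positions, counter = set(), 0
    while len(positions) < n_marks:
        digest = hashlib.sha256(f"{content_id}:{user_key}:{counter}".encode()).digest()
        positions.add(int.from_bytes(digest[:4], "big") % total_frames)
        counter += 1
    return sorted(positions)

# Example: pick 8 of 24 000 frames for one theatre's decoder key.
print(watermark_frame_positions("movie-42", "theatre-key-007", 24000, 8))
```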

Keywords: Decoder, Digital content, JPEG2000 Frame, System-On-Chip and additional watermark.

2442 Optimal and Critical Path Analysis of State Transportation Network Using Neo4J

Authors: Pallavi Bhogaram, Xiaolong Wu, Min He, Onyedikachi Okenwa

Abstract:

A transportation network is a realization of a spatial network, describing a structure which permits either vehicular movement or flow of some commodity. Examples include road networks, railways, air routes, pipelines, and many more. The transportation network plays a vital role in maintaining the vigor of the nation's economy. Hence, ensuring the network stays resilient all the time, especially in the face of challenges such as heavy traffic loads and large scale natural disasters, is of utmost importance. In this paper, we used the Neo4j application to develop the graph. Neo4j is a leading open-source NoSQL native graph database that implements an ACID-compliant transactional backend for applications. The Southern California network model is developed using the Neo4j application, and the most critical and optimal nodes and paths in the network are obtained using centrality algorithms. The edge betweenness centrality algorithm calculates the critical or optimal paths using Yen's k-shortest paths algorithm, and the node betweenness centrality algorithm calculates the amount of influence a node has over the network. The preliminary study results confirm that the Neo4j application can be a suitable tool to study the important nodes and the critical paths for a major congested metropolitan area.
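
For illustration, the same centrality and k-shortest-path analysis can be sketched with NetworkX as a stand-in for the Neo4j graph algorithms; the toy road graph below is hypothetical, not the Southern California model.

```python
import networkx as nx
from itertools import islice

# Hypothetical toy freeway graph (edge weights are travel distances).
g = nx.DiGraph()
g.add_weighted_edges_from([
    ("LA", "Anaheim", 26), ("LA", "Pasadena", 11), ("Pasadena", "Riverside", 48),
    ("Anaheim", "Riverside", 35), ("Anaheim", "Irvine", 14), ("Irvine", "Riverside", 40),
    ("Riverside", "SanBernardino", 12), ("Pasadena", "SanBernardino", 55),
])

# Influence of each road segment / junction on shortest paths across the network.
edge_bc = nx.edge_betweenness_centrality(g, weight="weight")
node_bc = nx.betweenness_centrality(g, weight="weight")
print("most critical segment:", max(edge_bc, key=edge_bc.get))
print("most critical junction:", max(node_bc, key=node_bc.get))

# Yen's k-shortest paths (NetworkX exposes it as shortest_simple_paths).
k_paths = islice(nx.shortest_simple_paths(g, "LA", "SanBernardino", weight="weight"), 3)
for path in k_paths:
    print(path, "length:", nx.path_weight(g, path, weight="weight"))
```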

Keywords: Transportation network, critical path, connectivity reliability, network model, Neo4J application, optimal path, edge betweenness centrality index, node betweenness centrality index, Yen’s k-shortest paths.

2441 Optimal Efficiency Control of Pulse Width Modulation - Inverter Fed Motor Pump Drive Using Neural Network

Authors: O. S. Ebrahim, M. A. Badr, A. S. Elgendy, K. O. Shawky, P. K. Jain

Abstract:

This paper demonstrates an improved Loss Model Control (LMC) for a 3-phase induction motor (IM) driving a pump load. Compared with other power loss reduction algorithms for IMs, the presented one has the advantages of fast and smooth flux adaptation, high accuracy, and versatile implementation. The performance of LMC depends mainly on the accuracy of modeling the motor drive and losses. A loss model for the IM drive that considers the surplus power loss caused by inverter voltage harmonics using closed-form equations and also includes magnetic saturation has been developed. Further, an Artificial Neural Network (ANN) controller is synthesized and trained offline to determine the optimal flux level that achieves maximum drive efficiency. The drive's voltage and speed control loops are connected via the stator frequency to avoid the possibility of excessive magnetization. Besides, the resistance change due to temperature is considered by a first-order thermal model. The obtained thermal information enhances motor protection and control. Together, these have the potential of making the proposed algorithm reliable. Simulation and experimental studies are performed on a 5.5 kW test motor using the proposed control method. The test results are provided and compared with fixed flux operation to validate the effectiveness.

Keywords: Artificial neural network, ANN, efficiency optimization, induction motor, IM, Pulse Width Modulated, PWM, harmonic losses.

2440 A Hybrid Ontology Based Approach for Ranking Documents

Authors: Sarah Motiee, Azadeh Nematzadeh, Mehrnoush Shamsfard

Abstract:

The increasing growth of information volume on the internet creates an increasing need to develop new (semi)automatic methods for retrieving documents and ranking them according to their relevance to the user query. In this paper, after a brief review of ranking models, a new ontology based approach for ranking HTML documents is proposed and evaluated in various circumstances. Our approach is a combination of conceptual, statistical and linguistic methods. This combination preserves the precision of ranking without losing speed. Our approach exploits natural language processing techniques to extract phrases from documents and the query and to stem words. Then an ontology based conceptual method is used to annotate documents and expand the query. To expand a query, the spread activation algorithm is improved so that the expansion can be done flexibly and in various aspects. The annotated documents and the expanded query are processed to compute the relevance degree using statistical methods. The outstanding features of our approach are (1) combining conceptual, statistical and linguistic features of documents, (2) expanding the query with its related concepts before comparing it to documents, (3) extracting and using both words and phrases to compute the relevance degree, (4) improving the spread activation algorithm to perform the expansion based on a weighted combination of different conceptual relationships and (5) allowing variable document vector dimensions. A ranking system called ORank is developed to implement and test the proposed model. The test results are included at the end of the paper.
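
A minimal sketch of a weighted spread-activation expansion over a concept graph; the decay, threshold and mini-ontology are hypothetical, and the sketch omits the linguistic and statistical components.

```python
def spread_activation(graph, seed_weights, decay=0.5, threshold=0.05, max_hops=3):
    """Simple weighted spread activation over a concept graph.

    graph        : dict concept -> list of (neighbour, relation_weight)
    seed_weights : initial activation of the query concepts
    decay        : attenuation applied at every hop
    The per-relation weights let the expansion favour some conceptual
    relationships over others, as in a weighted-combination expansion.
    """
    activation = dict(seed_weights)
    frontier = dict(seed_weights)
    for _ in range(max_hops):
        next_frontier = {}
        for concept, energy in frontier.items():
            for neighbour, rel_w in graph.get(concept, []):
                pulse = energy * rel_w * decay
                if pulse < threshold:
                    continue
                activation[neighbour] = activation.get(neighbour, 0.0) + pulse
                next_frontier[neighbour] = next_frontier.get(neighbour, 0.0) + pulse
        frontier = next_frontier
    return activation

# Hypothetical mini-ontology: expand the query concept "car".
onto = {
    "car": [("vehicle", 0.9), ("engine", 0.7), ("driver", 0.4)],
    "vehicle": [("transport", 0.8)],
    "engine": [("fuel", 0.6)],
}
print(spread_activation(onto, {"car": 1.0}))
```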

Keywords: Document ranking, Ontology, Spread activation algorithm, Annotation.

2439 Optimization of Fiber Rich Gluten-Free Cookie Formulation by Response Surface Methodology

Authors: Bahadur Singh Hathan, B. L. Prassana

Abstract:

Most commercial gluten free products are nutritionally inferior to their gluten containing counterparts, as manufacturers most often use refined flours and starches. So it is possible that people on a gluten free diet have a low intake of fibre. Foxtail millet flour and copra meal are gluten free and have high fibre and protein contents. The formulation of fibre rich gluten free cookies was optimized by response surface methodology, considering as independent process variables the proportion of foxtail millet (Setaria italica) flour in the mixed flour, the fat content and the guar gum. Sugar, sodium chloride, sodium bicarbonate and water were added in fixed proportions of 60, 1.0, 0.4 and 20% of the mixed flour weight, respectively. The optimum formulation obtained for maximum spread ratio, fibre content, surface L-value and overall acceptability and minimum breaking strength was 80% foxtail millet flour in the mixed flour, 42.8% fat content and 0.05% guar gum.

Keywords: Copra meal flour, Fiber rich gluten-free cookies, Foxtail millet flour, Optimization

2438 A New Heuristic Statistical Methodology for Optimizing Queuing Networks Using Discrete Event Simulation

Authors: Mohamad Mahdavi

Abstract:

Most real queuing systems include special properties and constraints which cannot be analyzed directly using the results of solved classical queuing models. Lack of Markov chain features, non-exponential patterns and service constraints are such conditions. This paper presents an applied general algorithm for analyzing and optimizing queuing systems. The algorithm stages are described through a real case study. It consists of an almost complete non-Markov system with a limited number of customers and capacities, as well as many common exceptions of real queuing networks. Simulation is used for optimizing this system. The stages introduced in this article include primary modeling, determining the kind of queuing system, index definition, statistical analysis and goodness of fit tests, validation of the model, and methods of optimizing the system with simulation.
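
As a minimal illustration of simulating a non-Markov queue, the sketch below uses the Lindley recursion for a single-server FIFO queue with non-exponential interarrival and service times; the distributions are hypothetical and the sketch does not reproduce the paper's case study.

```python
import numpy as np

def gg1_waiting_times(interarrivals, services):
    """Lindley recursion for a single-server FIFO queue with general
    (not necessarily exponential) interarrival and service times."""
    w = np.zeros(len(services))
    for i in range(1, len(services)):
        w[i] = max(0.0, w[i - 1] + services[i - 1] - interarrivals[i])
    return w

# Hypothetical non-Markov example: uniform arrivals, lognormal service times.
rng = np.random.default_rng(3)
n = 20000
inter = rng.uniform(0.5, 1.5, n)                     # mean interarrival 1.0
serv = rng.lognormal(mean=-0.35, sigma=0.5, size=n)  # mean service ~ 0.8
w = gg1_waiting_times(inter, serv)
print("mean wait: %.3f, P(wait > 2): %.3f" % (w.mean(), (w > 2).mean()))
```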

Keywords: Estimation, queuing system, simulation model, probability distribution, non-Markov chain.

2437 Simulation and Optimization of Mechanisms made of Micro-molded Components

Authors: Albert Albers, Pablo Enrique Leslabay

Abstract:

The Institute of Product Development deals with the development, design and dimensioning of micro components and systems as a member of the Collaborative Research Centre 499 "Design, Production and Quality Assurance of Molded Micro Components made of Metallic and Ceramic Materials". Because of technological restrictions in the miniaturization of conventional manufacturing techniques, shape and material deviations cannot be scaled down in the same proportion as the micro parts, rendering components with relatively wide tolerance fields. Systems that include such components should be designed with this particularity in mind, often requiring large clearances. In the end, the output of such systems is variable and prone to dynamic instability. To save production time and resources, every study of these effects should happen early in the product development process and be based on computer simulation to avoid costly prototypes. A suitable method is proposed here and applied, as an example, to a micro technology demonstrator developed by the CRC 499. It consists of a one-stage planetary gear train in a sun-planet-ring configuration, with input through the sun gear and output through the carrier. The simulation procedure relies on ordinary Multi Body Simulation methods and subsequently adds other techniques to further investigate details of the system's behavior and to predict its response. The selection of the relevant parameters and output functions followed the engineering standards for regular sized gear trains. The first step is to quantify the variability and to reveal the most critical points of the system, performed through a whole-mechanism Sensitivity Analysis. Due to the lack of previous knowledge about the system's behavior, different DOE methods involving small and large numbers of experiments were selected to perform the SA. In this particular case the parameter space can be divided into two well defined groups, one of them containing the gears' profile information and the other the components' spatial location. This has been exploited to explore the different DOE techniques more promptly. A reduced set of parameters is derived for further investigation and to feed the final optimization process, whether as optimization parameters or as an external perturbation collective. The 10 most relevant perturbation factors and 4 to 6 prospective variable parameters are considered in a new, simplified model. All of the parameters are affected by the mentioned production variability. The objective functions of interest are based on variability measures of scalar outputs, so the problem becomes an optimization under robustness and reliability constraints. The study is an initial step on the development path of a method to design and optimize complex micro mechanisms composed of widely tolerated elements, accounting for the robustness and reliability of the systems' output.

Keywords: Micro molded components, Optimization, Robustness and Reliability, Simulation.

2436 Using Cooperation Approaches at Different Levels of Artificial Bee Colony Method

Authors: Vahid Zeighami, Mohsen Ghasemi, Reza Akbari

Abstract:

In this work, a Multi-Level Artificial Bee Colony (called MLABC) algorithm for optimizing numerical test functions is presented. In MLABC, two species are used. The first species employs n colonies, each of which optimizes the complete solution vector. The cooperation between these colonies is carried out by exchanging information through a leader colony, which contains a set of elite bees. The second species uses a cooperative approach in which the complete solution vector is divided into k sub-vectors, and each of these sub-vectors is optimized by a colony. The cooperation between these colonies is carried out by compiling the sub-vectors into the complete solution vector. Finally, the cooperation between the two species is obtained by exchanging information. The proposed algorithm is tested on a set of well-known test functions. The results show that the MLABC algorithm provides efficiency and robustness in solving numerical functions.

Keywords: Artificial bee colony, cooperative artificial bee colony, multilevel cooperation.

2435 A Framework for Data Mining Based Multi-Agent: An Application to Spatial Data

Authors: H. Baazaoui Zghal, S. Faiz, H. Ben Ghezala

Abstract:

Data mining is an extraordinarily demanding field referring to the extraction of implicit knowledge and relationships which are not explicitly stored in databases. A wide variety of data mining methods have been introduced (classification, characterization, generalization, ...). Each of these methods includes more than one algorithm. A data mining system implies different user categories, which means that the user's behavior must be a component of the system. The problem at this level is to know which algorithm of which method to employ for an exploratory end, which one for a decisional end, and how they can collaborate and communicate. The agent paradigm presents a new way of conceiving and realizing data mining systems. The purpose is to combine different data mining algorithms to prepare elements for decision-makers, benefiting from the possibilities offered by multi-agent systems. In this paper the agent framework for data mining is introduced, and its overall architecture and functionality are presented. The validation is made on spatial data. Principal results are presented.

Keywords: Databases, data mining, multi-agent, spatial data mart.

2434 Robot Vision Application based on Complex 3D Pose Computation

Authors: F. Rotaru, S. Bejinariu, C. D. Niţâ, R. Luca, I. Pâvâloi, C. Lazâr

Abstract:

The paper presents a technique suitable for robot vision applications where it is not possible to establish the object position from one view. Usually, one-view pose calculation methods are based on the correspondence of image features established at a training step with exactly the same image features extracted at the execution step, for a different object pose. When such a correspondence is not feasible because of the lack of specific features, a new method is proposed. In the first step the method computes the 3D pose of feature points from two views. Subsequently, using a registration algorithm, the set of 3D feature points extracted at the execution phase is aligned with the set of 3D feature points extracted at the training phase. The result is a Euclidean transform which has to be used by the robot head for reorientation at the execution step.
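
A minimal sketch of the registration step, assuming matched 3D feature points and using the SVD-based Kabsch method to recover the Euclidean transform; the point sets and transform below are synthetic.

```python
import numpy as np

def rigid_register(source, target):
    """Least-squares rigid (Euclidean) alignment of two matched 3D point sets
    using the SVD-based Kabsch method: returns R, t with target ~= R @ source + t."""
    src_c, tgt_c = source.mean(axis=0), target.mean(axis=0)
    H = (source - src_c).T @ (target - tgt_c)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # avoid reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = tgt_c - R @ src_c
    return R, t

# Hypothetical check: recover a known rotation/translation of 3D feature points.
rng = np.random.default_rng(2)
train_pts = rng.uniform(-1, 1, (6, 3))                         # training-phase 3D features
angle = np.deg2rad(30)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
exec_pts = train_pts @ R_true.T + np.array([0.2, -0.1, 0.5])   # execution-phase 3D features
R, t = rigid_register(train_pts, exec_pts)
print("rotation error:", np.abs(R - R_true).max(), "translation:", np.round(t, 3))
```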

Keywords: features correspondence, registration algorithm, robot vision, triangulation method.

2433 Fluid Differential Agitators

Authors: Saeed Asiri

Abstract:

This research designs and implements a new kind of agitator called the differential agitator. The differential agitator is an electro-mechanical set consisting of two shafts. The first shaft is the bearing axis, while the second shaft is the axis of the quartet upper bearing impellers group and the triple lower group, which are called the agitating group. The agitating group is located inside a cylindrical container equipped especially to contain square directors for the liquid entrance and square directors, called the fixing group, for the liquid exit. The fixing group is installed containing the agitating group inside any tank, whether from the upper or lower position. The agitating process occurs through the agitating group bearing, causing a lower pressure over the upper group and leading to withdrawing the liquid from the square directors of the liquid entrance; consequently the liquid moves to the denser place under the quartet upper group. Then, the liquid moves to the high pressure area under the agitating group, causing the liquid to exit from the square directors in the bottom of the container. For improving efficiency, a parametric study and shape optimization have been carried out. A numerical analysis, manufacturing and laboratory experiments were conducted to design and implement the differential agitator. Knowing the material properties and the loading conditions, the FEM using ANSYS11 was used to get the optimum design of the geometrical parameters of the differential agitator elements, while the experimental tests were performed to validate the advantages of the differential agitator in giving a high agitation performance of lime in water as an example. In addition, the experimental work examined the effect of the internal container shape on the agitation efficiency. The study ended with conclusions to maximize agitator performance and optimize the geometrical parameters to be used for manufacturing the differential agitator.

Keywords: Differential Agitators, Parametric Optimization, Shape Optimization, Agitation, FEM, ANSYS11.

2432 Performance Evaluation of Wavelet Based Coders on Brain MRI Volumetric Medical Datasets for Storage and Wireless Transmission

Authors: D. Dhouib, A. Naït-Ali, C. Olivier, M. S. Naceur

Abstract:

In this paper, we evaluate the performance of some wavelet based coding algorithms, namely 3D QT-L, 3D SPIHT and JPEG2K. In the first step we perform an objective comparison between the three coders, namely 3D SPIHT, 3D QT-L and JPEG2K. For this purpose, eight MRI head scan test sets of 256 x 256 x 124 voxels have been used. Results show the superior performance of the 3D SPIHT algorithm, whereas 3D QT-L outperforms JPEG2K. The second step consists of evaluating the robustness of the 3D SPIHT and JPEG2K coding algorithms over wireless transmission. Compressed dataset images are then transmitted over an AWGN wireless channel or over a Rayleigh wireless channel. Results show the superiority of JPEG2K under these two channel models. In fact, it was found that JPEG2K is more robust to coding errors. Thus we may conclude the necessity of using error correction codes in order to protect the transmitted medical information.

Keywords: Image coding, medical imaging, wavelet based coder, wireless transmission.

2431 Optimization of Surface Roughness and Vibration in Turning of Aluminum Alloy AA2024 Using Taguchi Technique

Authors: Vladimir Aleksandrovich Rogov, Ghorbani Siamak

Abstract:

Determination of optimal machining parameters is important to reduce the production cost and achieve the desired surface quality. This paper investigates the influence of cutting parameters on surface roughness and natural frequency in turning of aluminum alloy AA2024. The experiments were performed on a lathe machine using two different cutting tools, one made of AISI 5140 and one a carbide cutting insert coated with TiC. Turning experiments were planned by the Taguchi method with an L9 orthogonal array. Three levels of spindle speed, feed rate, depth of cut and tool overhang were chosen as cutting variables. The obtained experimental data have been analyzed using the signal to noise ratio and analysis of variance. The main effects have been discussed, the percentage contributions of the various parameters affecting surface roughness and natural frequency determined, and the optimal cutting conditions identified. Finally, the optimization of the cutting parameters using the Taguchi method was verified by confirmation experiments.
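
A small sketch of the Taguchi analysis: computing the smaller-is-better S/N ratio and level-wise main effects from an L9 layout; the factor assignment and roughness values are illustrative, not the paper's measurements.

```python
import numpy as np

# Hypothetical L9 fragment: three 3-level factors and measured surface roughness Ra
# (smaller-is-better); values are illustrative only.
L9 = np.array([  # columns: speed level, feed level, depth level
    [0, 0, 0], [0, 1, 1], [0, 2, 2],
    [1, 0, 1], [1, 1, 2], [1, 2, 0],
    [2, 0, 2], [2, 1, 0], [2, 2, 1],
])
ra = np.array([1.9, 2.4, 3.1, 1.6, 2.8, 2.2, 2.0, 1.7, 2.5])

# Smaller-is-better signal-to-noise ratio: S/N = -10 log10(mean(y^2)).
sn = -10.0 * np.log10(ra ** 2)         # one replicate per run, so mean(y^2) = y^2

factors = ["spindle speed", "feed rate", "depth of cut"]
for j, name in enumerate(factors):
    means = [sn[L9[:, j] == level].mean() for level in range(3)]
    best = int(np.argmax(means))       # higher S/N is better
    print(f"{name}: mean S/N per level {np.round(means, 2)} -> best level {best}")
```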

Keywords: Turning, Cutting conditions, Surface roughness, Natural frequency, Taguchi method, ANOVA, S/N ratio.

2430 Upgraded Rough Clustering and Outlier Detection Method on Yeast Dataset by Entropy Rough K-Means Method

Authors: P. Ashok, G. M. Kadhar Nawaz

Abstract:

Rough set theory is used to handle uncertainty and incomplete information by applying two exact sets, the lower approximation and the upper approximation. In this paper, rough clustering algorithms are improved by adopting similarity, dissimilarity-similarity and entropy based initial centroid selection methods in three different clustering algorithms, namely Entropy based Rough K-Means (ERKM), Similarity based Rough K-Means (SRKM) and Dissimilarity-Similarity based Rough K-Means (DSRKM), which were developed and executed on the yeast dataset. The rough clustering algorithms are validated by the cluster validity indexes, namely the Rand and Adjusted Rand indexes. Experimental results show that the ERKM clustering algorithm performs effectively and delivers better results than the other clustering methods. Outlier detection is an important task in data mining; outliers are very different from the rest of the objects in the clusters. The Entropy based Rough Outlier Factor (EROF) method appears to detect outliers effectively for the yeast dataset. In the rough K-Means method, tuning the epsilon (ε) value from 0.8 to 1.08 can detect outliers in the boundary region, and the RKM algorithm delivers better results when the value of epsilon (ε) is chosen in the specified range. Experimental results show that the EROF method applied to the clustering algorithms performed very well and is suitable for detecting outliers effectively for all datasets. Further, experimental readings show that the ERKM clustering method outperformed the other methods.

Keywords: Clustering, Entropy, Outlier, Rough K-Means, validity index.

2429 Entropy Minimization Applied to Rotary Dryers to Reduce Energy Consumption

Authors: I. O. Nascimento, J. T. Manzi

Abstract:

The drying process is an important operation in the chemical industry and is widely used in the food, grain and fertilizer industries. However, since it demands a considerable consumption of energy, such a process requires a deep energetic analysis in order to reduce operating costs. This paper deals with thermodynamic optimization applied to rotary dryers based on entropy production minimization, aiming to reduce energy consumption. To do this, the mass, energy and entropy balances were used to develop a relationship that represents the rate of entropy production. The use of the Second Law of Thermodynamics is essential because it takes into account the constraints of nature. Once the entropy production rate is minimized, optimal operating conditions can be established and the process can obtain a substantial gain in energy saving. The minimization strategy was carried out using classical methods such as Lagrange multipliers and implemented on the MATLAB platform. As expected, the preliminary results reveal a significant energy saving by the application of the optimal parameters found by the entropy minimization procedure. It is important to note that this method has shown easy implementation and low cost.

Keywords: Drying, entropy minimization, modeling dryers, thermodynamic optimization.

2428 Blind Impulse Response Identification of Frequency Radio Channels: Application to Bran A Channel

Authors: S. Safi, M. Frikel, M. M'Saad, A. Zeroual

Abstract:

This paper describes a blind algorithm for estimating a time varying and frequency selective fading channel. In order to identify the impulse response of these channels blindly, we have used Higher Order Statistics (HOS) to build our algorithm. In this paper, we have selected two theoretical frequency selective channels, the Proakis 'B' channel and the Macchi channel, and one practical frequency selective fading channel called Broadband Radio Access Network (BRAN A). The simulation results, in a noisy environment and for different channel input data, demonstrate that the proposed method can estimate the phase and magnitude of these channels blindly and without any information about the input, except that the input excitation is i.i.d. (independent and identically distributed) and non-Gaussian.

Keywords: Frequency response, system identification, higher order statistics, communication channels, phase estimation.

2427 A Flexible Flowshop Scheduling Problem with Machine Eligibility Constraint and Two Criteria Objective Function

Authors: Bita Tadayon, Nasser Salmasi

Abstract:

This research deals with a flexible flowshop scheduling problem in which jobs arrive and are delivered in groups but are processed individually. Due to the special characteristics of each job, only a subset of machines in each stage is eligible to process that job. The objective function deals with minimization of the sum of the completion times of the groups on one hand, and minimization of the sum of the differences between the completion time of each job and the delivery time of the group containing that job (waiting period) on the other hand. The problem can be stated as FFc / rj, Mj / irreg, which has many applications in production and service industries. A mathematical model is proposed, the problem is proved to be NP-complete, and an effective heuristic method is presented to schedule the jobs efficiently. This algorithm can then be used within the body of any metaheuristic algorithm for solving the problem.

Keywords: flexible flowshop scheduling, group processing, machine eligibility constraint, mathematical modeling.
