Search results for: whale optimization algorithm
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6087

4617 A Case Study for User Rating Prediction on Automobile Recommendation System Using Mapreduce

Authors: Jiao Sun, Li Pan, Shijun Liu

Abstract:

Recommender systems are widely used in contemporary industry, and plenty of work has been done in this field to help users identify items of interest. The Collaborative Filtering (CF) algorithm is an important technique in recommender systems. However, comparatively little work has been done on automobile recommendation, despite the sharp increase in the number of automobiles, and computational speed is a major weakness of collaborative filtering. Using the MapReduce framework to optimize the CF algorithm is therefore a viable solution to this performance problem. In this paper, based on real-world industrial datasets of user comments on automobiles, we predict users' ratings of industrial automobiles with various properties and provide recommendations that help automobile providers anticipate users' ratings of automobiles with new properties. First, we address the sparseness of the rating matrix through prior construction of a score matrix. Second, we address data normalization by removing dimensional effects from the raw automobile data, since the differing dimensions of automobile properties introduce large errors into the CF calculation. Finally, we use the MapReduce framework to optimize the CF algorithm, which significantly improves the computational speed. The UV decomposition used in this paper is a commonly used matrix factorization technique in CF that does not require calculating the interpolation weights of neighbors, which makes it more convenient for industrial use.
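
As a rough illustration of the UV decomposition step, the sketch below factors a small rating matrix into U and V by stochastic gradient descent on the observed entries. It is a minimal single-machine version, not the paper's MapReduce implementation, and the rank, learning rate, regularization and toy matrix are assumptions made for the example.

```python
import numpy as np

def uv_decompose(R, k=2, lr=0.05, reg=0.05, epochs=500):
    """Factor a rating matrix R ~ U @ V by stochastic gradient descent.
    Zero entries of R are treated as missing (unrated) cells."""
    n_users, n_items = R.shape
    rng = np.random.default_rng(0)
    U = rng.normal(scale=0.1, size=(n_users, k))
    V = rng.normal(scale=0.1, size=(k, n_items))
    rows, cols = np.nonzero(R)
    for _ in range(epochs):
        for i, j in zip(rows, cols):
            err = R[i, j] - U[i] @ V[:, j]
            u_i = U[i].copy()
            U[i] += lr * (err * V[:, j] - reg * U[i])
            V[:, j] += lr * (err * u_i - reg * V[:, j])
    return U, V

# Toy user-automobile score matrix (0 = not rated)
R = np.array([[5, 3, 0],
              [4, 0, 1],
              [0, 2, 5]], dtype=float)
U, V = uv_decompose(R)
print(np.round(U @ V, 2))   # predictions, including the previously unrated cells
```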

Keywords: collaborative filtering, recommendation, data normalization, mapreduce

Procedia PDF Downloads 217
4616 Developing a Simulation-Based Optimization Framework to Perform Energy Simulation for Indian Buildings

Authors: Sujoy Anirudha Das, Albert Thomas

Abstract:

The building sector is a major consumer of energy globally, with corresponding effects on the environment through carbon emissions. Given that India is expected to add 40 billion square meters of new buildings by 2050, we need frameworks that help in reducing the overall energy consumption of the building sector. Even though several simulation-based frameworks for analyzing building energy consumption have been developed globally, in the Indian context, to the best of our knowledge, there is no comprehensive yet user-friendly framework to simulate and optimize the effects of the various energy-influencing factors specifically for Indian buildings. Therefore, this study aims at developing a simulation-based optimization framework to model the energy interactions in different types of Indian buildings by considering the dynamic nature of the various energy-influencing factors. This comprehensive framework can be used by building stakeholders to test the energy effects of different factors such as, but not limited to, the building materials, the orientation, weather fluctuations, occupancy changes, and the type of building (e.g., office, residential). The results from the case study involving several building types would provide insights for building new energy-efficient buildings as well as retrofitting existing structures to consume less energy, specifically for the Indian scenario.

Keywords: building energy consumption, building energy simulations, energy efficient buildings, optimization framework

Procedia PDF Downloads 177
4615 Simulation and Experimental Research on Pocketing Operation for Toolpath Optimization in CNC Milling

Authors: Rakesh Prajapati, Purvik Patel, Avadhoot Rajurkar

Abstract:

Nowadays, manufacturing industries augment their production lines with modern machining centers backed by CAM software, and several attempts are being made to cut down the programming time for machining complex geometries. Special programs/software have been developed to generate the digital numerical data and to prepare NC programs by using suitable post-processors for different machines. After selecting the tools and the manufacturing process, the toolpaths are applied and the NC program is generated. More and more complex mechanical parts that were earlier cast and assembled/manufactured by other processes are now being machined. The majority of these parts require many pocketing operations and find their applications in die and mold making, turbomachinery, aircraft, nuclear, defense, etc. Pocketing operations involve the removal of a large quantity of material from the metal surface. In this work, a warm-cast food-processing part and its clamping were modeled using Pro-E and MasterCAM® software. The pocketing operation has been specifically chosen for toolpath optimization. After applying the pocketing toolpath, multi-tool selection, and air-time reduction, the results are presented in terms of software simulation time and experimental machining time.

Keywords: toolpath, part program, optimization, pocket

Procedia PDF Downloads 288
4614 Spatial Data Mining by Decision Trees

Authors: Sihem Oujdi, Hafida Belbachir

Abstract:

Existing data mining methods cannot be applied directly to spatial data because spatial specificities, such as spatial relationships, must be taken into account. This paper focuses on classification with decision trees, one of the main data mining techniques. We propose an extension of the C4.5 algorithm for spatial data based on two different approaches: join materialization and querying the different tables on the fly. Similar works have been done on these two main approaches; the first, join materialization, favors processing time at the expense of memory space, whereas the second, querying the different tables on the fly, saves memory space at the expense of processing time. The modified C4.5 algorithm requires three input tables: a target table, a neighbor table, and a spatial join index that contains the possible spatial relationships among the objects in the target table and those in the neighbor table. The proposed algorithms are applied to spatial data from the accidentology domain. A comparative study of our approach against other works on classification by spatial decision trees is also detailed.
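
To make the role of the three input tables concrete, the following sketch derives a spatial attribute for each target object from a toy spatial join index and scores it with an information-gain criterion, as a C4.5-style split evaluation would (C4.5 itself uses the gain ratio). The tables, relations and labels are invented purely for illustration.

```python
import math
from collections import Counter, defaultdict

def entropy(labels):
    total = len(labels)
    return -sum((c / total) * math.log2(c / total) for c in Counter(labels).values())

# Hypothetical toy tables: accidents (target), road segments (neighbor),
# and a precomputed spatial join index of (accident_id, road_id, relation).
target = {1: "severe", 2: "light", 3: "severe", 4: "light"}
neighbor = {10: "highway", 11: "urban_street"}
spatial_join = [(1, 10, "on"), (2, 11, "on"), (3, 10, "near"), (4, 11, "near")]

# Derive a spatial attribute for each target object via the join index.
attr = {tid: (relation, neighbor[nid]) for tid, nid, relation in spatial_join}

# Information gain of splitting the target table on that spatial attribute.
labels = list(target.values())
base = entropy(labels)
groups = defaultdict(list)
for tid, value in attr.items():
    groups[value].append(target[tid])
cond = sum(len(g) / len(labels) * entropy(g) for g in groups.values())
print("information gain of the spatial split:", round(base - cond, 3))
```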

Keywords: C4.5 algorithm, decision trees, S-CART, spatial data mining

Procedia PDF Downloads 612
4613 Cluster Based Ant Colony Routing Algorithm for Mobile Ad-Hoc Networks

Authors: Alaa Eddien Abdallah, Bajes Yousef Alskarnah

Abstract:

Ant colony based routing algorithms are known to guarantee packet delivery, but they suffer from the huge overhead of the control messages needed to discover routes. In this paper, we utilize the positions of the network nodes to group the nodes into connected clusters, and only the cluster heads forward the route discovery control messages. Our simulations show that the new algorithm decreases the overhead dramatically without affecting the delivery rate.

Keywords: ad-hoc network, MANET, ant colony routing, position based routing

Procedia PDF Downloads 425
4612 Maximum Likelihood Estimation Methods on a Two-Parameter Rayleigh Distribution under Progressive Type-II Censoring

Authors: Daniel Fundi Murithi

Abstract:

Data from economic, social, clinical, and industrial studies are often incomplete or incorrect due to censoring, and such data may have adverse effects if used in an estimation problem. We propose the use of Maximum Likelihood Estimation (MLE) under a progressive type-II censoring scheme to remedy this problem. In particular, maximum likelihood estimates (MLEs) for the location (µ) and scale (λ) parameters of the two-parameter Rayleigh distribution are obtained under a progressive type-II censoring scheme using the Expectation-Maximization (EM) and Newton-Raphson (NR) algorithms. These algorithms are compared because both iteratively produce satisfactory results in the estimation problem. The progressive type-II censoring scheme is used because it allows the removal of test units before the termination of the experiment. Approximate asymptotic variances and confidence intervals for the location and scale parameters are derived and constructed. The efficiency of the EM and NR algorithms is compared in terms of root mean squared error (RMSE), bias, and coverage rate. The simulation study showed that, in most simulation cases, the estimates obtained using the Expectation-Maximization algorithm had smaller biases, smaller variances, narrower confidence intervals, and smaller root mean squared errors than those generated via the Newton-Raphson (NR) algorithm. Further, the analysis of a real-life data set (data from simple experimental trials) showed that the Expectation-Maximization (EM) algorithm performs better than the Newton-Raphson (NR) algorithm in all simulation cases under the progressive type-II censoring scheme.
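
For orientation, the sketch below runs a Newton-Raphson iteration on the complete-sample log-likelihood of the two-parameter Rayleigh distribution with density f(x; µ, λ) = 2λ(x − µ)exp(−λ(x − µ)²) for x > µ. The progressively censored likelihood used in the paper adds survival-function terms for the removed units; the starting values and the step-damping rule here are assumptions.

```python
import numpy as np

def rayleigh_score_hessian(mu, lam, x):
    """Score vector and Hessian of the two-parameter Rayleigh log-likelihood
    l(mu, lam) = n*log(2*lam) + sum(log(x - mu)) - lam*sum((x - mu)**2)
    for a complete sample."""
    d = x - mu
    n = len(x)
    score = np.array([-np.sum(1.0 / d) + 2.0 * lam * np.sum(d),
                      n / lam - np.sum(d**2)])
    hess = np.array([[-np.sum(1.0 / d**2) - 2.0 * lam * n, 2.0 * np.sum(d)],
                     [2.0 * np.sum(d),                     -n / lam**2]])
    return score, hess

def newton_raphson_mle(x, mu0, lam0, tol=1e-8, max_iter=100):
    mu, lam = mu0, lam0
    for _ in range(max_iter):
        score, hess = rayleigh_score_hessian(mu, lam, x)
        step = np.linalg.solve(hess, score)
        # damp the step if it would leave the feasible region mu < min(x), lam > 0
        t = 1.0
        while (mu - t * step[0] >= x.min()) or (lam - t * step[1] <= 0):
            t *= 0.5
        mu, lam = mu - t * step[0], lam - t * step[1]
        if np.linalg.norm(t * step) < tol:
            break
    return mu, lam

rng = np.random.default_rng(1)
x = 2.0 + rng.rayleigh(scale=1.5, size=300)   # true mu = 2, lam = 1/(2*1.5**2) ~ 0.222
print(newton_raphson_mle(x, mu0=x.min() - 1.0, lam0=0.1))
```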

Keywords: expectation-maximization algorithm, maximum likelihood estimation, Newton-Raphson method, two-parameter Rayleigh distribution, progressive type-II censoring

Procedia PDF Downloads 163
4611 Optimal Tuning of Linear Quadratic Regulator Controller Using a Particle Swarm Optimization for Two-Rotor Aerodynamical System

Authors: Ayad Al-Mahturi, Herman Wahid

Abstract:

This paper presents an optimal state feedback controller based on the Linear Quadratic Regulator (LQR) for a two-rotor aerodynamical system (TRAS). The TRAS is a highly nonlinear multi-input multi-output (MIMO) system with two degrees of freedom and cross coupling. The behavior of the LQR controller is defined by two parameters, the state weighting matrix and the control weighting matrix, both of which strongly influence its performance. Particle Swarm Optimization (PSO) is proposed to optimally tune the weighting matrices of the LQR. The main goal of the LQR controller is to stabilize the TRAS by making the beam move quickly and accurately to track a trajectory or to reach a desired altitude. The simulations were carried out in MATLAB/Simulink, with the system decoupled into two single-input single-output (SISO) systems. Compared with the optimized proportional-integral-derivative (PID) controller provided by INTECO, the results show that the PSO-tuned LQR controller gives better performance in terms of both transient and steady-state response.
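
A minimal sketch of the tuning loop is given below: a particle swarm searches the diagonal entries of Q and the scalar R, and each candidate is scored by solving the continuous algebraic Riccati equation and simulating the closed loop. The plant is a toy second-order model, not the TRAS dynamics, and the swarm parameters, search bounds and cost weights are assumptions.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Toy second-order stand-in for one decoupled channel (the real TRAS model
# is nonlinear and MIMO; this only demonstrates the PSO tuning loop).
A = np.array([[0.0, 1.0], [0.0, -0.5]])
B = np.array([[0.0], [1.0]])

def lqr_gain(q1, q2, r):
    Q, R = np.diag([q1, q2]), np.array([[r]])
    P = solve_continuous_are(A, B, Q, R)
    return np.linalg.solve(R, B.T @ P)            # K = R^-1 B^T P

def fitness(params, dt=0.01, T=5.0):
    """Cost of recovering from an initial beam offset: state error + effort."""
    K = lqr_gain(*np.maximum(params, 1e-3))       # keep weights positive
    x, cost = np.array([1.0, 0.0]), 0.0
    for _ in range(int(T / dt)):
        u = -K @ x
        x = x + dt * (A @ x + B @ u).ravel()
        cost += dt * (x @ x + 0.01 * float(u @ u))
    return cost

# Minimal particle swarm over (q1, q2, r)
rng = np.random.default_rng(0)
n, dim, w, c1, c2 = 20, 3, 0.7, 1.5, 1.5
pos = rng.uniform(0.1, 100.0, (n, dim))
vel = np.zeros((n, dim))
pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()
for _ in range(30):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.1, 100.0)
    f = np.array([fitness(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()
print("tuned (q1, q2, r):", np.round(gbest, 2))
```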

Keywords: LQR controller, optimal control, particle swarm optimization (PSO), two rotor aero-dynamical system (TRAS)

Procedia PDF Downloads 323
4610 PID Sliding Mode Control with Sliding Surface Dynamics based Continuous Control Action for Robotic Systems

Authors: Wael M. Elawady, Mohamed F. Asar, Amany M. Sarhan

Abstract:

This paper adopts a continuous sliding mode control scheme for trajectory tracking control of robot manipulators with structured and unstructured uncertain dynamics and external disturbances. In this algorithm, the equivalent control in the conventional sliding mode control is replaced by a PID control action. Moreover, the discontinuous switching control signal is replaced by a continuous proportional-integral (PI) control term such that the implementation of the proposed control algorithm does not require the prior knowledge of the bounds of unknown uncertainties and external disturbances and completely eliminates the chattering phenomenon of the conventional sliding mode control approach. The closed-loop system with the adopted control algorithm has been proved to be globally stable by using Lyapunov stability theory. Numerical simulations using the dynamical model of robot manipulators with modeling uncertainties demonstrate the superiority and effectiveness of the proposed approach in high speed trajectory tracking problems.
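
The structure of such a controller can be sketched for a one-link toy manipulator as follows: a PID-type term plays the role of the equivalent control, and a continuous PI term on the sliding variable replaces the discontinuous switching term, so no sign(s) appears in the control law. The plant model, gains and disturbance are invented for illustration and are not the manipulator model or gains used in the paper.

```python
import numpy as np

# One-link toy manipulator with uncertain friction/disturbance.
dt, T = 0.001, 3.0
m, l, g = 1.0, 0.5, 9.81
q, dq = 0.0, 0.0                     # joint angle and velocity
q_des = 1.0                          # step reference
int_e, int_s = 0.0, 0.0              # integrals of tracking error and of s
kp, ki, kd = 25.0, 10.0, 10.0        # PID-type gains (assumed values)
k_ps, k_is = 20.0, 50.0              # continuous PI reaching gains (assumed values)

for n in range(int(T / dt)):
    e = q_des - q
    de = -dq
    int_e += e * dt
    s = de + kp * e + ki * int_e     # PID-type sliding surface
    int_s += s * dt
    # continuous control: PID action + PI reaching term (no sign(s), no chattering)
    u = kp * e + ki * int_e + kd * de + k_ps * s + k_is * int_s
    disturbance = 2.0 * np.sin(5.0 * n * dt)          # unmodelled dynamics
    ddq = (u - m * g * l * np.sin(q) - 0.5 * dq + disturbance) / (m * l**2)
    dq += ddq * dt
    q += dq * dt

print("final tracking error:", abs(q_des - q))
```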

Keywords: PID, robot, sliding mode control, uncertainties

Procedia PDF Downloads 508
4609 Sparse-View CT Reconstruction Based on Nonconvex L1 − L2 Regularizations

Authors: Ali Pour Yazdanpanah, Farideh Foroozandeh Shahraki, Emma Regentova

Abstract:

Reconstruction from sparse-view projections is one of the important problems in computed tomography (CT), arising when obtaining a large number of projections is unavailable or infeasible. Traditionally, convex regularizers have been exploited to improve the reconstruction quality in sparse-view CT, and the convex constraint in those problems leads to an easy optimization process. However, convex regularizers often result in a biased approximation and inaccurate reconstruction in CT problems. Here, we present a nonconvex, Lipschitz continuous and non-smooth regularization model. The CT reconstruction is formulated as a nonconvex constrained L1 − L2 minimization problem and solved through a difference-of-convex algorithm and the alternating direction method of multipliers, which generates better results than L0 or L1 regularizers in CT reconstruction. We compare our method with previously reported high-performance methods that use convex regularizers such as TV, wavelet, curvelet, and curvelet+TV (CTV) on test phantom images. The results show that there are benefits to using the nonconvex regularizer in sparse-view CT reconstruction.

Keywords: computed tomography, non-convex, sparse-view reconstruction, L1-L2 minimization, difference of convex functions

Procedia PDF Downloads 316
4608 FlexPoints: Efficient Algorithm for Detection of Electrocardiogram Characteristic Points

Authors: Daniel Bulanda, Janusz A. Starzyk, Adrian Horzyk

Abstract:

The electrocardiogram (ECG) is one of the most commonly used medical tests, essential for the correct diagnosis and treatment of the patient. While ECG devices generate a huge amount of data, only a small part of it carries valuable medical information. To deal with this problem, many compression algorithms and filters have been developed over the past years. However, the rapid development of new machine learning techniques poses new challenges. To address this class of problems, we created the FlexPoints algorithm, which searches for characteristic points in the ECG signal and ignores all other points that do not carry relevant medical information. The conducted experiments proved that the presented algorithm can significantly reduce the number of data points representing the ECG signal without losing valuable medical information. These sparse but essential characteristic points (flex points) can be a perfect input for modern machine learning models, which work much better using flex points as input than raw data or data compressed by many popular algorithms.

Keywords: characteristic points, electrocardiogram, ECG, machine learning, signal compression

Procedia PDF Downloads 162
4607 Inferential Reasoning for Heterogeneous Multi-Agent Mission

Authors: Sagir M. Yusuf, Chris Baber

Abstract:

We describe issues bedeviling the coordination of heterogeneous multi-agent missions (agents carrying different sensors), such as belief conflict and situation reasoning. We apply Bayesian inference and reasoning over the agents' presumptions to address these issues of belief variation and situation-based reasoning in heterogeneous multi-agent teams. A Bayesian Belief Network (BBN) is used to model the agents' belief conflicts due to sensor variations. Simulation experiments were designed, and cases from the agents' missions were used to train the BBN using gradient descent and expectation-maximization algorithms. The output network is a well-trained BBN for making inferences for both agents and human experts. We show that the prediction capacity of the Bayesian learning algorithm improves with the amount of training data and argue that it enhances multi-agent robustness and resolves the agents' sensor conflicts.

Keywords: distributed constraint optimization problem, multi-agent system, multi-robot coordination, autonomous system, swarm intelligence

Procedia PDF Downloads 154
4606 Localization of Buried People Using Received Signal Strength Indication Measurement of Wireless Sensor

Authors: Feng Tao, Han Ye, Shaoyi Liao

Abstract:

Buildings collapse after earthquakes, and people are buried under the ruins; search and rescue should be conducted as soon as possible to save them. Therefore, considering the complicated environment, the irregular aftershocks, and the fact that rescue allows no delay, a target localization method based on RSSI (Received Signal Strength Indication) is proposed in this article. RSSI-based target localization, with its low cost and low complexity, has been widely applied to node localization in WSNs (Wireless Sensor Networks). Based on the theory of RSSI transmission and the influence of the environment on RSSI, experiments are conducted in five scenes, and multiple filtering algorithms are applied to the raw RSSI values in order to establish the signal propagation model with the minimum test error in each scene. The target location is then calculated with an improved centroid algorithm from the distances estimated by the signal propagation model. Results show that RSSI-based localization is suitable for large-scale node localization. Among the filtering algorithms, the mixed filtering algorithm (the average of mean, median, and Gaussian filtering) performs better than any single filtering algorithm, and, using the signal propagation model, the minimum error of the distance between the known nodes and the target node in the five scenes is about 3.06 m.
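
The main steps can be sketched as follows: repeated RSSI readings are smoothed with the mixed filter, converted to distances with a log-distance propagation model, and combined with a distance-weighted centroid. The model constants (RSSI at 1 m, path-loss exponent), the Gaussian weighting and the anchor layout are placeholders, not the values fitted in the five test scenes.

```python
import numpy as np

def mixed_filter(rssi_samples):
    """Average of mean, median and a Gaussian-weighted mean of repeated readings
    (the 'mixed filtering' idea; the weighting is an assumption)."""
    r = np.asarray(rssi_samples, dtype=float)
    mean, median = r.mean(), np.median(r)
    w = np.exp(-0.5 * ((r - mean) / (r.std() + 1e-9))**2)
    return (mean + median + np.sum(w * r) / np.sum(w)) / 3.0

def rssi_to_distance(rssi, A=-45.0, n=2.5):
    """Log-distance propagation model d = 10**((A - rssi)/(10 n)); A and n
    would come from fitting the per-scene propagation model."""
    return 10 ** ((A - rssi) / (10.0 * n))

def weighted_centroid(anchors, distances):
    """Improved centroid: anchors estimated to be closer get larger weights."""
    w = 1.0 / (np.asarray(distances) + 1e-9)
    return (w[:, None] * np.asarray(anchors)).sum(axis=0) / w.sum()

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
raw = [[-60, -62, -58, -80], [-70, -69, -72, -71],
       [-66, -65, -90, -67], [-75, -74, -76, -73]]
filtered = [mixed_filter(s) for s in raw]
d = [rssi_to_distance(r) for r in filtered]
print("estimated position:", weighted_centroid(anchors, d))
```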

Keywords: signal propagation model, centroid algorithm, localization, mixed filtering, RSSI

Procedia PDF Downloads 300
4605 Modeling Average Paths Traveled by Ferry Vessels Using AIS Data

Authors: Devin Simmons

Abstract:

At the USDOT’s Bureau of Transportation Statistics, a biannual census of ferry operators in the U.S. is conducted, with results such as route mileage used to determine federal funding levels for operators. AIS data allows for the possibility of using GIS software and geographical methods to confirm operator-reported mileage for individual ferry routes. As part of the USDOT’s work on the ferry census, an algorithm was developed that uses AIS data for ferry vessels in conjunction with known ferry terminal locations to model the average route travelled for use as both a cartographic product and confirmation of operator-reported mileage. AIS data from each vessel is first analyzed to determine individual journeys based on the vessel’s velocity, and changes in velocity over time. These trips are then converted to geographic linestring objects. Using the terminal locations, the algorithm then determines whether the trip represented a known ferry route. Given a large enough dataset, routes will be represented by multiple trip linestrings, which are then filtered by DBSCAN spatial clustering to remove outliers. Finally, these remaining trips are ready to be averaged into one route. The algorithm interpolates the point on each trip linestring that represents the start point. From these start points, a centroid is calculated, and the first point of the average route is determined. Each trip is interpolated again to find the point that represents one percent of the journey’s completion, and the centroid of those points is used as the next point in the average route, and so on until 100 points have been calculated. Routes created using this algorithm have shown demonstrable improvement over previous methods, which included the implementation of a LOESS model. Additionally, the algorithm greatly reduces the amount of manual digitizing needed to visualize ferry activity.
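
The percentage-interpolation step can be sketched as follows: each trip is resampled at the same fractions of its length, and the per-fraction centroids form the average route. The sketch treats coordinates as planar and omits the velocity-based trip splitting and DBSCAN outlier filtering described above; the toy trips are invented.

```python
import numpy as np

def interpolate_fractions(trip, fractions):
    """Points at the given fractions (0-1) of a trip's length.
    trip: (n, 2) array of coordinates, treated as planar for brevity."""
    trip = np.asarray(trip, dtype=float)
    seg = np.diff(trip, axis=0)
    cum = np.concatenate([[0.0], np.cumsum(np.hypot(seg[:, 0], seg[:, 1]))])
    targets = np.asarray(fractions) * cum[-1]
    return np.column_stack([np.interp(targets, cum, trip[:, 0]),
                            np.interp(targets, cum, trip[:, 1])])

def average_route(trips, n_points=100):
    """Average several trip linestrings: sample each at the same percentage
    marks and take the centroid of every mark."""
    fracs = np.linspace(0.0, 1.0, n_points)
    sampled = np.stack([interpolate_fractions(t, fracs) for t in trips])
    return sampled.mean(axis=0)

# Two toy crossings between the same pair of terminals
trip_a = [(0, 0), (2, 1.2), (4, 2.1), (6, 3.0)]
trip_b = [(0.1, -0.1), (3, 1.4), (6, 2.9)]
print(np.round(average_route([trip_a, trip_b], n_points=5), 2))
```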

Keywords: ferry vessels, transportation, modeling, AIS data

Procedia PDF Downloads 176
4604 Battery Replacement Strategy for Electric AGVs in an Automated Container Terminal

Authors: Jiheon Park, Taekwang Kim, Kwang Ryel Ryu

Abstract:

Electric automated guided vehicles (AGVs) are becoming popular in many automated container terminals nowadays because they are pollution-free and environmentally friendly vehicles for transporting containers within the terminal. Since the efficient operation of AGVs is critical for the productivity of the container terminal, the replacement of the AGVs' batteries must be conducted strategically to minimize undesirable transportation interruptions. While too frequent replacement may lead to a loss of terminal productivity by delaying container deliveries, missing the right timing of battery replacement can result in a dead AGV that causes a more severe productivity loss due to the extra effort required for post-treatment. In this paper, we propose a strategy for battery replacement based on a scoring function over multiple criteria, taking into account the current battery level, the distances to the different battery stations, and the progress of the terminal job operations. The strategy is optimized using a genetic algorithm with the objectives of minimizing the total time spent for battery replacement as well as maximizing the terminal productivity.
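
The shape of such a multi-criteria scoring function might look like the sketch below. The three criteria are taken from the abstract, but the linear form, the normalization and the weights are assumptions; in the paper the strategy parameters are tuned by a genetic algorithm.

```python
def replacement_score(battery_level, dist_to_station, job_progress,
                      w_batt=0.5, w_dist=0.3, w_job=0.2):
    """Score how urgent a battery swap is for one AGV (higher = swap sooner).

    battery_level   : remaining charge in [0, 1]
    dist_to_station : distance to the nearest battery station, normalised to [0, 1]
    job_progress    : fraction of the AGV's current delivery job completed
    The weights are illustrative placeholders, not tuned values.
    """
    return (w_batt * (1.0 - battery_level)      # low charge -> more urgent
            + w_dist * (1.0 - dist_to_station)  # station nearby -> cheap to swap now
            + w_job * job_progress)             # job nearly done -> convenient moment

# Example: 30% battery, station nearby, delivery almost finished
print(round(replacement_score(0.30, 0.10, 0.90), 3))
```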

Keywords: AGV operation, automated container terminal, battery replacement, electric AGV, strategy optimization

Procedia PDF Downloads 388
4603 Energy and Exergy Performance Optimization on a Real Gas Turbine Power Plant

Authors: Farhat Hajer, Khir Tahar, Cherni Rafik, Dakhli Radhouen, Ammar Ben Brahim

Abstract:

This paper presents the energy and exergy optimization of a real 100 MW gas turbine power plant installed in the southeast of Tunisia. A simulation code is established using the EES (Engineering Equation Solver) software. The parameters considered are those of the actual operating conditions of the gas turbine thermal power station under study. The results show that the thermal and exergetic efficiencies decrease as the ambient temperature increases, and that excess air has an important effect on the thermal efficiency. NOx emissions rise in summer and decrease in winter, and the obtained NOx rates are compared with measurement results.

Keywords: efficiency, exergy, gas turbine, temperature

Procedia PDF Downloads 284
4602 Adaptation of Hough Transform Algorithm for Text Document Skew Angle Detection

Authors: Kayode A. Olaniyi, Olabanji F. Omotoye, Adeola A. Ogunleye

Abstract:

Skew detection and correction form an important part of digital document analysis, because uncompensated skew can deteriorate document features and complicate further document image processing steps. Efficient text document analysis and digitization can rarely be achieved when a document is skewed, even at a small angle. Once the documents have been digitized through the scanning system and binarization has been achieved, document skew correction is required before further image analysis. Research efforts have been devoted to this area, and algorithms have been developed to eliminate document skew. Skew angle correction algorithms can be compared based on performance criteria; the most important are the accuracy of skew angle detection, the range of detectable skew angles, the processing speed, the computational complexity, and consequently the memory space used. The standard Hough Transform has successfully been implemented for text document skew angle estimation. However, the accuracy of the standard Hough Transform depends largely on how fine the angular step size is, and increasing the accuracy consequently consumes more time and memory space, especially when the number of pixels is considerably large. Whenever the Hough transform is used, there is always a tradeoff between accuracy and speed, so a more efficient solution is needed that optimizes space as well as time. In this paper, an improved Hough Transform (HT) technique that optimizes space as well as time to robustly detect document skew is presented. The modified Hough Transform algorithm resolves the conflict between memory space, running time, and accuracy. Our algorithm first estimates the angle to zero decimal places using the standard Hough Transform, achieving minimal running time and memory space but limited accuracy. Then, to increase the accuracy, if the angle estimated by the basic Hough algorithm is x degrees, we rerun the basic algorithm over a narrow range around x degrees with an accuracy of one decimal place, and the same process is iterated until the desired level of accuracy is achieved. The skew estimation and correction procedure for text images is implemented in MATLAB, and the memory space and processing time are tabulated under the assumption of skew angles between 0° and 45°. The simulation results in MATLAB show the high performance of our algorithm, with less computation time and memory space used in detecting document skew for a variety of documents with different levels of complexity.
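
The coarse-to-fine idea can be sketched as follows, in Python as a stand-in for the MATLAB implementation: a Hough-style accumulator is evaluated over integer degrees first, and the sweep is then repeated around the current estimate with a tenfold finer step. The accumulator form, bin count, refinement window and synthetic test image are assumptions made for the illustration.

```python
import numpy as np

def hough_peak_angle(points, angles):
    """Candidate angle whose accumulator has the sharpest peak: foreground
    pixels are projected onto the normal of each candidate text direction."""
    best_angle, best_peak = angles[0], -1
    ys, xs = points[:, 0], points[:, 1]
    for theta in angles:
        rad = np.deg2rad(theta)
        rho = ys * np.cos(rad) - xs * np.sin(rad)
        hist, _ = np.histogram(rho, bins=200)
        if hist.max() > best_peak:
            best_peak, best_angle = hist.max(), theta
    return best_angle

def detect_skew(binary_image, max_angle=45.0, levels=3):
    """Coarse-to-fine skew estimation: integer-degree sweep first, then
    repeated refinement around the current estimate with a 10x finer step."""
    points = np.argwhere(binary_image > 0)
    estimate, step = 0.0, 1.0
    lo, hi = -max_angle, max_angle
    for _ in range(levels):
        angles = np.arange(lo, hi + step, step)
        estimate = hough_peak_angle(points, angles)
        lo, hi, step = estimate - step, estimate + step, step / 10.0
    return estimate

# Synthetic "document": a few text baselines rotated by ~7.3 degrees
img = np.zeros((200, 200), dtype=np.uint8)
true_skew = 7.3
for row in (60, 100, 140):
    for x in range(20, 180):
        y = int(row + np.tan(np.deg2rad(true_skew)) * x)
        if 0 <= y < 200:
            img[y, x] = 1
print("estimated skew:", detect_skew(img))
```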

Keywords: hough-transform, skew-detection, skew-angle, skew-correction, text-document

Procedia PDF Downloads 159
4601 Seat Assignment Model for Student Admissions Process at Saudi Higher Education Institutions

Authors: Mohammed Salem Alzahrani

Abstract:

In this paper, the student admission process is studied to optimize the assignment of vacant seats with three main objectives: utilizing all vacant seats, satisfying all programme-of-study admission requirements, and maintaining fairness among all candidates. The Seat Assignment Method (SAM) is used to build the model and solve the optimization problem with the help of the Northwest Corner Method and the Least Cost Method. A closed-form formula is derived for the priority of assigning a seat to a candidate based on SAM.
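
For reference, the Northwest Corner Method mentioned above builds an initial feasible allocation of a transportation-type problem by filling cells from the top-left corner; the sketch below shows that construction on invented seat and candidate counts and is not the full SAM admission model.

```python
def northwest_corner(supply, demand):
    """Initial feasible assignment for a transportation-type problem.

    supply: seats available per programme; demand: candidates per group.
    Returns a dict {(programme, group): assigned seats}."""
    supply, demand = supply[:], demand[:]     # copies, because we consume them
    i = j = 0
    allocation = {}
    while i < len(supply) and j < len(demand):
        qty = min(supply[i], demand[j])
        allocation[(i, j)] = qty
        supply[i] -= qty
        demand[j] -= qty
        if supply[i] == 0:
            i += 1
        if demand[j] == 0:
            j += 1
    return allocation

seats_per_programme = [30, 50, 20]     # vacant seats in three programmes (toy data)
candidates_per_group = [40, 35, 25]    # candidate groups ordered by priority
print(northwest_corner(seats_per_programme, candidates_per_group))
```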

Keywords: admission process model, assignment problem, Hungarian Method, Least Cost Method, Northwest Corner Method, SAM

Procedia PDF Downloads 500
4600 A Protein-Wave Alignment Tool for Frequency Related Homologies Identification in Polypeptide Sequences

Authors: Victor Prevost, Solene Landerneau, Michel Duhamel, Joel Sternheimer, Olivier Gallet, Pedro Ferrandiz, Marwa Mokni

Abstract:

The search for homologous proteins is one of the ongoing challenges in biology and bioinformatics. Traditionally, a pair of proteins is thought to be homologous when they originate from the same ancestral protein; in such a case their sequences share similarities, and considerable research effort is spent investigating this question. On this basis, we propose the Protein-Wave Alignment Tool ("P-WAT"), developed within the framework of the France Relance 2030 plan. Our work takes into consideration the mass-related wave aspect of protein biosynthesis by associating a specific frequency with each amino acid according to its mass. Amino acids are then regrouped within their mass category. This way, our algorithm produces specific alignments in addition to those obtained with a common amino acid coding system. For this purpose, we develop the original "P-WAT" algorithm, able to address large protein databases with different attributes such as species, protein names, etc., which allows us to align users' requests with a set of specific protein sequences. The primary intent of this algorithm is to achieve efficient alignments, in this specific conceptual frame, by minimizing execution costs and information loss. Our algorithm identifies sequence similarities by searching for matches of sub-sequences of different sizes, referred to as primers. It relies on Boolean operations upon a dot-plot matrix to identify primer amino acids common to both proteins that are likely to be part of a significant peptide alignment. From those primers, dynamic programming-like traceback operations generate alignments and alignment scores based on an adjusted PAM250 matrix.

Keywords: protein, alignment, homologous, Genodic

Procedia PDF Downloads 113
4599 An AI-Based Dynamical Resource Allocation Calculation Algorithm for Unmanned Aerial Vehicle

Authors: Zhou Luchen, Wu Yubing, Burra Venkata Durga Kumar

Abstract:

As networks become larger and more complex, the density of user devices also increases. Unmanned Aerial Vehicle (UAV) networks can collect and transfer data efficiently by using software-defined networking (SDN) technology. This paper proposes a three-layer distributed and dynamic cluster architecture that manages UAVs with an AI-based resource allocation algorithm to address the network overloading problem. By separating the services of each UAV, the hierarchical UAV cluster system reduces the network load and transfers user requests, with three sub-tasks: data collection, communication channel organization, and data relaying. In each cluster, a head node and a vice head node UAV are selected considering the devices' Central Processing Unit (CPU), operational memory (RAM), permanent memory (ROM), battery charge, and capacity. The vice head node acts as a backup that stores all the data held by the head node. The k-means clustering algorithm is used to detect high-load regions and form the layered UAV clusters. The whole process of detecting high-load areas, forming and selecting UAV clusters, and moving the selected UAV cluster to that area constitutes the proposed traffic offloading algorithm.
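
Two of the building blocks, detecting high-load regions with k-means and ranking candidate head/vice-head UAVs by their resources, can be sketched as below. The scoring weights, resource profiles and device positions are invented for the illustration and are not the criteria weights used in the paper.

```python
import numpy as np

def kmeans(points, k, iters=50, seed=0):
    """Plain k-means to find high-load regions from user-device positions."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(np.linalg.norm(points[:, None] - centers[None], axis=2), axis=1)
        centers = np.array([points[labels == c].mean(axis=0) if np.any(labels == c)
                            else centers[c] for c in range(k)])
    return centers, labels

def select_heads(uavs):
    """Rank UAVs for (head, vice-head) by a weighted score of CPU, RAM, ROM,
    battery and capacity; the weights are illustrative."""
    w = np.array([0.25, 0.2, 0.1, 0.3, 0.15])   # cpu, ram, rom, battery, capacity
    scores = {name: float(w @ np.array(res)) for name, res in uavs.items()}
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked[0], ranked[1]                  # head, vice-head (backup)

# Toy user-device positions (two hotspots) and UAV resource profiles in [0, 1]
devices = np.vstack([np.random.default_rng(1).normal(c, 0.5, (100, 2))
                     for c in ((0, 0), (5, 5))])
centers, _ = kmeans(devices, k=2)
uavs = {"uav_a": (0.9, 0.8, 0.7, 0.6, 0.9),
        "uav_b": (0.7, 0.9, 0.9, 0.9, 0.8),
        "uav_c": (0.5, 0.6, 0.8, 0.7, 0.6)}
print("cluster centres (high-load regions):", np.round(centers, 2))
print("head, vice-head:", select_heads(uavs))
```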

Keywords: k-means, resource allocation, SDN, UAV network, unmanned aerial vehicles

Procedia PDF Downloads 111
4598 Subband Coding and Glottal Closure Instant (GCI) Using SEDREAMS Algorithm

Authors: Harisudha Kuresan, Dhanalakshmi Samiappan, T. Rama Rao

Abstract:

In modern telecommunication applications, locating Glottal Closure Instants (GCIs) is important, and they are evaluated directly from the speech waveform. Here, we study GCI detection using the Speech Event Detection using Residual Excitation and the Mean-based Signal (SEDREAMS) algorithm. Speech coding uses parameter estimation with audio signal processing techniques to model the speech signal, combined with generic data compression algorithms to represent the resulting model in a compact bit stream. This paper proposes a sub-band coder (SBC), a type of transform coding, and evaluates its performance for GCI detection using SEDREAMS. In SBC, the speech signal is divided into two or more frequency bands, and each sub-band signal is coded individually. After processing, the sub-bands are recombined to form the output signal, whose bandwidth covers the whole frequency spectrum. The signal is decomposed into low- and high-frequency components, and decimation and interpolation are performed in the frequency domain. The proposed structure significantly reduces error, and precise locations of the Glottal Closure Instants (GCIs) are found using the SEDREAMS algorithm.

Keywords: SEDREAMS, GCI, SBC, GOI

Procedia PDF Downloads 356
4597 Application of Simulated Annealing to Threshold Optimization in Distributed OS-CFAR System

Authors: L. Abdou, O. Taibaoui, A. Moumen, A. Talib Ahmed

Abstract:

This paper proposes an application of simulated annealing to optimize the detection threshold in an ordered statistics constant false alarm rate (OS-CFAR) system. Conventional optimization methods, such as conjugate gradient, can get trapped in a local optimum and miss the global optimum. Moreover, for a system with three or more sensors, it is difficult or impossible to find this optimum; hence the need for other methods, such as meta-heuristics. Among the variety of meta-heuristic techniques is the simulated annealing (SA) method, inspired by a process used in metallurgy. This technique is based on selecting an initial solution and randomly generating a neighboring solution in order to improve the criterion being optimized. In this work, two parameters are subject to this optimization: the statistical order (k) and the scaling factor (T). Two fusion rules, "AND" and "OR", are considered in the case where the signals are independent from sensor to sensor. The results show that the proposed method is efficient for solving this optimization problem in a distributed system. The advantage of this method is that it browses the entire solution space and, in theory, avoids the optimization process stagnating at a local minimum.
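
A generic version of the annealing loop over the pair (k, T) is sketched below; the neighbourhood moves, cooling schedule and the stand-in objective (a smooth function with a single maximum) are assumptions, since the paper's actual criterion is the detection performance of the distributed OS-CFAR system under the chosen fusion rule.

```python
import math
import random

def simulated_annealing(objective, k_range, t_range, iters=2000, temp0=1.0, cooling=0.995):
    """SA over the OS-CFAR design pair: integer order statistic k and real
    scaling factor T. `objective` is whatever detection criterion is maximised."""
    random.seed(0)
    k = random.randint(*k_range)
    T = random.uniform(*t_range)
    best = current = objective(k, T)
    best_kT, temp = (k, T), temp0
    for _ in range(iters):
        # neighbour: small random move in k and T
        k_new = min(max(k + random.choice((-1, 0, 1)), k_range[0]), k_range[1])
        T_new = min(max(T + random.gauss(0.0, 0.1), t_range[0]), t_range[1])
        cand = objective(k_new, T_new)
        # accept better moves always, worse moves with Boltzmann probability
        if cand > current or random.random() < math.exp((cand - current) / temp):
            k, T, current = k_new, T_new, cand
            if current > best:
                best, best_kT = current, (k, T)
        temp *= cooling
    return best_kT, best

# Hypothetical smooth criterion peaking near k = 18, T = 2.5, standing in for
# the distributed OS-CFAR detection probability.
toy_objective = lambda k, T: -((k - 18) ** 2) / 50.0 - (T - 2.5) ** 2
print(simulated_annealing(toy_objective, k_range=(1, 24), t_range=(0.5, 10.0)))
```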

Keywords: distributed system, OS-CFAR system, independent sensors, simulating annealing

Procedia PDF Downloads 497
4596 A Classical Method of Optimizing Manufacturing Systems Using a Number of Industrial Engineering Techniques

Authors: John M. Ikome, Martha E. Ikome, Therese Van Wyk

Abstract:

Productivity optimization can significantly increase a company's output through corrective actions on ineffective activities, process simplification, reduction of variation, improved responsiveness, and reduction of set-up time, all of which address waste within the manufacturing environment. Deriving a means to eliminate these issues is of key importance for a manufacturing organization. This paper focuses on a number of industrial engineering techniques, including the cause-and-effect diagram, to identify and optimize the methods and systems being used. Our results show that there are a number of variations within the production processes that can significantly disrupt the expected output.

Keywords: optimization, fishbone, diagram, productivity

Procedia PDF Downloads 312
4595 2D-Modeling with Lego Mindstorms

Authors: Miroslav Popelka, Jakub Nozicka

Abstract:

This work is based on the possibility of using Lego Mindstorms robotics systems to reduce costs. Lego Mindstorms consists of a wide variety of hardware components necessary to simulate, programme, and test robotics systems in practice. The algorithm, which maps the space using the ultrasonic sensor, was programmed in the development environment supplied with the kit, and Matlab was used to render the values measured by the ultrasonic sensor. The algorithm created for this paper uses theoretical knowledge from the area of signal processing. The data processed by the algorithm are collected by an ultrasonic sensor that scans the 2D space in front of it. The ultrasonic sensor is placed on the robot's moving arm, which provides the sensor's horizontal movement; vertical movement is provided by the wheel drive. The robot follows a map in order to position the measured data correctly. Based on these findings, Lego Mindstorms can be considered a low-cost and capable kit for real-time modelling.

Keywords: LEGO Mindstorms, ultrasonic sensor, real-time modeling, 2D object, low-cost robotics systems, sensors, Matlab, EV3 Home Edition Software

Procedia PDF Downloads 473
4594 Structural Analysis and Detail Design of APV Module Structure Using Topology Optimization Design

Authors: Hyun Kyu Cho, Jun Soo Kim, Young Hoon Lee, Sang Hoon Kang, Young Chul Park

Abstract:

In this study, the structure of an APV (Air Pressure Vessel) module of an offshore drilling system was designed using topology optimization, and a structural safety evaluation was performed according to DNV rules. A 3D model was created based on the design and non-design areas separated by topology optimization under the environmental loads. The model was divided into 17 load cases for wind and dynamic loads, and a structural analysis evaluation was performed for each case. As a result, the maximum stress obtained was 181.25 MPa.

Keywords: APV, topology optimum design, DNV, structural analysis, stress

Procedia PDF Downloads 426
4593 A 5G Architecture Based on Dynamic Vehicular Clustering Enhancing VoD Services Over Vehicular Ad Hoc Networks

Authors: Lamaa Sellami, Bechir Alaya

Abstract:

Nowadays, video-on-demand (VoD) applications are becoming one of the main trends driving vehicular network usage. In this paper, considering the unpredictable vehicle density, the unexpected acceleration or deceleration of the different cars in the vehicular traffic load, and the limited radio range of the employed communication scheme, we introduce the Dynamic Vehicular Clustering (DVC) algorithm as a new scheme for video streaming systems over VANETs. The proposed algorithm takes advantage of the concept of small cells and the introduction of wireless backhauls, inspired by the features and performance of the Long Term Evolution (LTE)-Advanced network. The clustering algorithm considers multiple characteristics, such as the vehicle's position and acceleration, to reduce latency and packet loss. Each cluster is therefore treated as a small cell containing vehicular nodes and an access point that is elected according to particular specifications.

Keywords: video-on-demand, vehicular ad-hoc network, mobility, vehicular traffic load, small cell, wireless backhaul, LTE-advanced, latency, packet loss

Procedia PDF Downloads 141
4592 An Approach to the Assembly Line Balancing Problem with Uncertain Operation Time

Authors: Zhongmin Wang, Lin Wei, Hengshan Zhang, Tianhua Chen, Yimin Zhou

Abstract:

Assembly line balancing problems are significant in mass production systems. In order to deal with the uncertainties that exist in practice but are barely mentioned in the literature, this paper develops a mathematical model with an optimization algorithm to solve the assembly line balancing problem with uncertain operation times. The developed model is able to work with a variable number of workstations under the uncertain environment, aiming to obtain the minimal number of workstations and minimal idle time for each workstation. In particular, the proposed approach first introduces the concept of protection time, which closely relates to the uncertain operation time. Four dominance rules and a mechanism for determining upper and lower bounds are subsequently put forward, which serve as the basis for the proposed branch-and-bound algorithm. Experimental results on a benchmark data set show that the proposed approach handles the uncertainties efficiently.

Keywords: assembly lines, SALBP-UOT, uncertain operation time, branch and bound algorithm

Procedia PDF Downloads 171
4591 An Algorithm for Preventing the Irregular Operation Modes of the Drive Synchronous Motor Providing the Ore Grinding

Authors: Baghdasaryan Marinka

Abstract:

The current scientific and engineering interest in preventing emergency behaviour of the drive synchronous motors that support the ore-grinding process is justified. An analysis of known works on the abnormal operating modes of synchronous motors and the possibilities of protection against them shows that those approaches are unsuitable for preventing the impermissible conditions arising in the electrical drive synchronous motors that ensure the ore-grinding process. The main energy and technological factors affecting the technical condition of synchronous motors are evaluated. An algorithm for preventing the irregular operating modes of the electrical drive synchronous motor used in the ore-grinding technological process has been developed and is proposed for further application; it enables smart solutions that ensure the safe operation of the drive synchronous motor by comprehensively considering the energy and technological factors.

Keywords: synchronous motor, abnormal operating mode, electric drive, algorithm, energy factor, technological factor

Procedia PDF Downloads 136
4590 Exergetic Optimization on Solid Oxide Fuel Cell Systems

Authors: George N. Prodromidis, Frank A. Coutelieris

Abstract:

Biogas can currently be considered an alternative option for electricity production, mainly due to its high energy content (a hydrocarbon-rich source), its renewable status, and its relatively low utilization cost. Solid Oxide Fuel Cell (SOFC) stacks convert the fuel's chemical energy to electricity with high efficiency and offer significant advantages in fuel flexibility combined with lower emission rates, especially when utilizing biogas. Electricity production from biogas is a composite problem that requires an extensive parametric analysis of numerous dynamic variables. The main scope of the present study is to propose a detailed thermodynamic model for optimizing the operation of SOFC-based power plants, based on fundamental thermodynamics and energy and exergy balances. This model, named THERMAS (THERmodynamic MAthematical Simulation model), mathematically simulates each individual process during electricity production for different case studies that represent real-life operating conditions. THERMAS also offers the opportunity to choose a great variety of values for each operational parameter individually, thus allowing studies within unexplored and experimentally impossible operational ranges. Finally, THERMAS incorporates a specific criterion, derived from the extensive energy analysis, to identify the optimal scenario for each simulated system in exergy terms. Several dynamic parameters as well as several biogas mixture compositions have been taken into account to cover all possible cases. Through the optimization process, expressed in terms of an innovative OPtimization Factor (OPF) presented here, this study reveals that systems supplied with low-methane fuels can be comparable to those supplied with pure methane. In conclusion, such a simulation model offers a perspective on the optimal design of SOFC stack-based systems, towards the commercialization of systems utilizing biogas.

Keywords: biogas, exergy, efficiency, optimization

Procedia PDF Downloads 370
4589 Optimal Maintenance Policy for a Three-Unit System

Authors: A. Abbou, V. Makis, N. Salari

Abstract:

We study the condition-based maintenance (CBM) problem of a system subject to stochastic deterioration. The system is composed of three units (or modules): (i) Module 1 deterioration follows a Markov process with two operational states and one failure state. The operational states are partially observable through periodic condition monitoring. (ii) Module 2 deterioration follows a Gamma process with a known failure threshold. The deterioration level of this module is fully observable through periodic inspections. (iii) Only the operating age information is available of Module 3. The lifetime of this module has a general distribution. A CBM policy prescribes when to initiate a maintenance intervention and which modules to repair during intervention. Our objective is to determine the optimal CBM policy minimizing the long-run expected average cost of operating the system. This is achieved by formulating a Markov decision process (MDP) and developing the value iteration algorithm for solving the MDP. We provide numerical examples illustrating the cost-effectiveness of the optimal CBM policy through a comparison with heuristic policies commonly found in the literature.
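
Since the optimal policy is computed by value iteration on an MDP, the sketch below shows relative value iteration for the long-run average-cost criterion on a deliberately tiny single-machine maintenance MDP; the states, actions, transition probabilities and costs are invented and are far simpler than the paper's three-module model with partially observable and continuous deterioration.

```python
import numpy as np

# Tiny maintenance MDP: states (good, degraded, failed), actions (do nothing,
# preventive maintenance, full repair). All numbers are illustrative.
states, actions = 3, 3
P = np.zeros((actions, states, states))                               # P[a, s, s']
P[0] = [[0.85, 0.12, 0.03], [0.0, 0.80, 0.20], [0.0, 0.0, 1.0]]       # do nothing
P[1] = [[0.95, 0.04, 0.01], [0.70, 0.25, 0.05], [0.0, 0.0, 1.0]]      # preventive
P[2] = [[1.0, 0.0, 0.0]] * 3                                          # repair -> good
C = np.array([[0.0,   20.0,  80.0],     # good: nothing / preventive / repair
              [0.0,   20.0,  80.0],     # degraded
              [200.0, 220.0, 150.0]])   # failed: heavy downtime cost unless repaired

def relative_value_iteration(P, C, tol=1e-9, max_iter=10000):
    """Relative value iteration for the long-run average-cost criterion."""
    h = np.zeros(C.shape[0])
    for _ in range(max_iter):
        Q = C + (P @ h).T            # Q[s, a] = c(s, a) + sum_s' P(s'|s, a) h(s')
        Tv = Q.min(axis=1)
        g = Tv[0]                    # gain estimate, anchored at reference state 0
        h_new = Tv - g
        if np.max(np.abs(h_new - h)) < tol:
            break
        h = h_new
    return g, Q.argmin(axis=1)

gain, policy = relative_value_iteration(P, C)
print("average cost per period:", round(float(gain), 2))
print("action per state (0=nothing, 1=preventive, 2=repair):", policy)
```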

Keywords: reliability, maintenance optimization, Markov decision process, heuristics

Procedia PDF Downloads 219
4588 A Novel Meta-Heuristic Algorithm Based on Cloud Theory for Redundancy Allocation Problem under Realistic Condition

Authors: H. Mousavi, M. Sharifi, H. Pourvaziri

Abstract:

The Redundancy Allocation Problem (RAP) is a well-known mathematical problem for modeling series-parallel systems. It is a combinatorial optimization problem that focuses on determining an optimal assignment of components in a system design. In this paper, to be more practical, we consider the redundancy allocation problem of a series system with interval-valued component reliabilities. During the search process, the reliability of each component is therefore treated as a stochastic variable with lower and upper bounds. To solve the problem, we propose a simulated annealing algorithm based on cloud theory (CBSAA), with Monte Carlo simulation (MCS) embedded in the CBSAA to handle the random component reliabilities. This approach has been investigated on numerical examples, and the experimental results show that CBSAA combined with MCS is an efficient tool for solving the RAP of systems with interval-valued component reliabilities.
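
The Monte Carlo handling of interval-valued reliabilities can be sketched as follows: each component's reliability is sampled inside its interval and the series-parallel system reliability is evaluated, giving the mean and spread that an annealing search could use when scoring a candidate design. The uniform sampling distribution, the toy design and the intervals are assumptions made for the illustration.

```python
import numpy as np

def mc_system_reliability(subsystems, n_samples=5000, seed=0):
    """Monte Carlo estimate of a series-parallel system's reliability when each
    component reliability is only known as an interval [lo, hi].

    subsystems: list of subsystems; each subsystem is a list of (lo, hi)
    intervals for its redundant (parallel) components; subsystems are in series."""
    rng = np.random.default_rng(seed)
    estimates = np.empty(n_samples)
    for s in range(n_samples):
        system_r = 1.0
        for comps in subsystems:
            # sample each component's reliability inside its interval
            r = np.array([rng.uniform(lo, hi) for lo, hi in comps])
            subsystem_r = 1.0 - np.prod(1.0 - r)    # parallel redundancy
            system_r *= subsystem_r                 # series of subsystems
        estimates[s] = system_r
    return estimates.mean(), estimates.std()

# Two subsystems in series with interval-valued component reliabilities
design = [[(0.90, 0.95), (0.90, 0.95)],
          [(0.80, 0.88), (0.80, 0.88), (0.80, 0.88)]]
print(mc_system_reliability(design))
```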

Keywords: redundancy allocation problem, simulated annealing, cloud theory, monte carlo simulation

Procedia PDF Downloads 412