Search results for: BAT algorithm
2905 Application of Heuristic Integration Ant Colony Optimization in Path Planning
Authors: Zeyu Zhang, Guisheng Yin, Ziying Zhang, Liguo Zhang
Abstract:
This paper studies path planning based on ant colony optimization (ACO) and proposes heuristic integration ant colony optimization (HIACO). The paper not only analyzes and optimizes the underlying principle, but also simulates and analyzes the parameters relevant to applying HIACO to path planning. Compared with the original algorithm, the improved algorithm optimizes the probability formula, the tabu-table mechanism, and the updating mechanism, and introduces more reasonable heuristic factors. The optimized HIACO not only draws on the excellent ideas of the original algorithm, but also mitigates, to some extent, premature convergence, convergence to suboptimal solutions, and improper exploration. HIACO achieves better simulation results and the desired optimization. Combined with the probability formula and update formula, several parameters of HIACO are tested. This paper validates the principle of HIACO and gives the best parameter ranges for path planning.
Keywords: ant colony optimization, heuristic integration, path planning, probability formula
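The probability formula referred to above is not reproduced in the abstract. For reference, a minimal sketch of the standard ACO transition rule that HIACO modifies is given below; the pheromone table `tau`, heuristic table `eta`, and parameter values are illustrative assumptions, not the paper's notation.

```python
import random

def aco_transition(current, candidates, tau, eta, alpha=1.0, beta=2.0, tabu=frozenset()):
    """Standard ACO transition rule: p_ij is proportional to tau_ij**alpha * eta_ij**beta.
    HIACO modifies this formula, the tabu table, and the update rule; this sketch
    only shows the baseline being improved upon."""
    feasible = [j for j in candidates if j not in tabu]
    weights = [(tau[(current, j)] ** alpha) * (eta[(current, j)] ** beta) for j in feasible]
    total = sum(weights)
    r, acc = random.random() * total, 0.0
    for j, w in zip(feasible, weights):
        acc += w
        if acc >= r:
            return j          # roulette-wheel selection of the next grid cell
    return feasible[-1]
```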
Procedia PDF Downloads 251
2904 Inverse Scattering of Two-Dimensional Objects Using an Enhancement Method
Authors: A.R. Eskandari, M.R. Eskandari
Abstract:
A complete 2D identification algorithm for multiple dielectric objects immersed in air is presented. The employed technique consists of initially retrieving the shape and position of the scattering object using a linear sampling method and then determining the electric permittivity and conductivity of the scatterer using adjoint sensitivity analysis. This inversion algorithm results in high computational speed and efficiency, and it can be generalized to any scatterer structure. The method is also robust with respect to noise. The numerical results clearly show that this hybrid approach provides accurate reconstructions of various objects.
Keywords: inverse scattering, microwave imaging, two-dimensional objects, Linear Sampling Method (LSM)
Procedia PDF Downloads 387
2903 A Versatile Algorithm to Propose Optimized Solutions to the Dengue Disease Problem
Authors: Fernando L. P. Santos, Luiz G. Lyra, Helenice O. Florentino, Daniela R. Cantane
Abstract:
Dengue is a febrile infectious disease caused by a virus of the family Flaviviridae. It is transmitted by the bite of mosquitoes, usually Aedes aegypti, and occurs in tropical and subtropical areas of the world. The disease has been a major public health problem worldwide, especially in tropical countries such as Brazil, and its incidence has increased in recent years. Dengue is a subject of intense research, and efficient forms of mosquito control must be considered. In this work, the mono-objective optimal control problem was solved to analyse the dengue disease problem. Chemical and biological controls were considered in the mathematical model, which describes the dynamics of mosquitoes in the water and winged phases. We applied a genetic algorithm (GA) to obtain optimal strategies for the control of dengue. Numerical simulations have been performed to verify the versatility and applicability of this algorithm. On the basis of the present results, we recommend the GA for solving optimal control problems with a large feasible region.
Keywords: genetic algorithm, dengue, Aedes aegypti, biological control, chemical control
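The abstract does not detail the GA itself; below is a minimal, hypothetical sketch of a real-coded GA searching for a control schedule against a placeholder cost function. The objective, bounds, and parameter values are assumptions for illustration only, not the paper's dengue model.

```python
import random

def cost(u):
    # Placeholder objective: penalize residual mosquito "pressure" plus control effort.
    # The paper's real objective comes from the water/winged mosquito dynamics.
    return sum((1.0 - ui) ** 2 + 0.1 * ui for ui in u)

def genetic_algorithm(horizon=30, pop_size=40, generations=200, pm=0.1):
    pop = [[random.random() for _ in range(horizon)] for _ in range(pop_size)]
    for _ in range(generations):
        parents = [min(random.sample(pop, 3), key=cost) for _ in range(pop_size)]  # tournament
        children = []
        for a, b in zip(parents[::2], parents[1::2]):
            cut = random.randrange(1, horizon)            # one-point crossover
            c1, c2 = a[:cut] + b[cut:], b[:cut] + a[cut:]
            for c in (c1, c2):                            # bounded Gaussian mutation
                for i in range(horizon):
                    if random.random() < pm:
                        c[i] = min(1.0, max(0.0, c[i] + random.gauss(0, 0.1)))
                children.append(c)
        pop = sorted(pop + children, key=cost)[:pop_size]  # elitist survival
    return pop[0]

best_control_schedule = genetic_algorithm()
```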
Procedia PDF Downloads 349
2902 A Genetic Algorithm Based Ensemble Method with Pairwise Consensus Score on Malware Cacophonous Labels
Authors: Shih-Yu Wang, Shun-Wen Hsiao
Abstract:
In the field of cybersecurity, many vendors provide classification results for malware samples in the form of labels that carry important information; these are called AV labels. Many researchers rely on AV labels in their work. Unfortunately, AV labels are cluttered: they have no fixed format or naming rules, because each vendor names samples from its own classifier's viewpoint. One way to fix the problem is to take a majority vote; however, voting can introduce bias. Thus, we create a novel ensemble approach that does not rely on the cacophonous naming results but depends on group identification to aggregate everyone's opinion. To achieve this, we develop a scoring system called the Pairwise Consensus Score (PCS) to calculate result similarity. The overall method combines a genetic algorithm and PCS to find the maximum consensus in the group. Experimental results revealed that our method outperformed majority voting by 10% in terms of score.
Keywords: genetic algorithm, ensemble learning, malware family, malware labeling, AV labels
Procedia PDF Downloads 86
2901 Performance Analysis of Bluetooth Low Energy Mesh Routing Algorithm in Case of Disaster Prediction
Authors: Asmir Gogic, Aljo Mujcic, Sandra Ibric, Nermin Suljanovic
Abstract:
The ubiquity of natural disasters during the last few decades has raised serious questions about the prediction of such events and human safety. Every disaster, regardless of its magnitude, has a precursor manifested as a disruption of some environmental parameter such as temperature, humidity, pressure, or vibration. In order to anticipate and monitor those changes, in this paper we propose an overall system for disaster prediction and monitoring based on a wireless sensor network (WSN). Furthermore, we introduce a modified and simplified WSN routing protocol built on top of the trickle routing algorithm. The routing algorithm was deployed over the Bluetooth Low Energy protocol in order to achieve low power consumption. Performance of the WSN was analyzed using a real-life system implementation, and estimates of WSN parameters such as battery lifetime, network size, and packet delay were determined. Based on this performance, the proposed system can be utilized for disaster monitoring and prediction due to its low-power profile and mesh routing feature.
Keywords: bluetooth low energy, disaster prediction, mesh routing protocols, wireless sensor networks
Procedia PDF Downloads 385
2900 An Improved Tracking Approach Using Particle Filter and Background Subtraction
Authors: Amir Mukhtar, Dr. Likun Xia
Abstract:
An improved, robust, and efficient visual target tracking algorithm using particle filtering is proposed. Particle filtering has proven very successful for non-Gaussian and non-linear estimation problems. In this paper, the particle filter is used with a color feature to estimate the target state over time. Color distributions are applied because this feature is scale- and rotation-invariant, robust to partial occlusion, and computationally efficient. The performance is made more robust by choosing the YIQ color scheme. Tracking is performed by comparing chrominance histograms of the target and candidate positions (particles); color-based particle filter tracking often leads to inaccurate results when light intensity changes during a video stream. Furthermore, a background subtraction technique is used for size estimation of the target. The qualitative evaluation of the proposed algorithm is performed on several real-world videos. The experimental results demonstrate that the improved algorithm can track moving objects very well under illumination changes, occlusion, and moving background.
Keywords: tracking, particle filter, histogram, corner points, occlusion, illumination
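A minimal sketch of the histogram-weighted particle update described above, assuming the frame has already been converted to YIQ (with RGB scaled to [0, 1]) and each particle is an (x, y) position; the window size, bin count, motion model, and value ranges are illustrative assumptions.

```python
import numpy as np

def chrominance_histogram(patch_yiq, bins=16):
    """Joint histogram over the I and Q chrominance channels of a (H, W, 3) YIQ patch.
    Assumes RGB scaled to [0, 1] before conversion, so I and Q lie roughly in [-0.6, 0.6]."""
    if patch_yiq.size == 0:
        return np.zeros(bins * bins)
    hist, _ = np.histogramdd(patch_yiq[..., 1:].reshape(-1, 2),
                             bins=bins, range=[(-0.6, 0.6), (-0.6, 0.6)])
    return hist.ravel() / (hist.sum() + 1e-12)

def update_weights(particles, frame_yiq, target_hist, win=20):
    """Weight each particle (x, y) by the Bhattacharyya coefficient between the
    candidate window's chrominance histogram and the target's histogram."""
    weights = []
    for x, y in particles.astype(int):
        patch = frame_yiq[max(0, y - win):y + win, max(0, x - win):x + win]
        weights.append(np.sum(np.sqrt(chrominance_histogram(patch) * target_hist)))
    w = np.asarray(weights)
    return w / (w.sum() + 1e-12)

def resample_and_propagate(particles, weights, motion_std=5.0):
    """Multinomial resampling followed by a random-walk motion model."""
    idx = np.random.choice(len(particles), size=len(particles), p=weights)
    return particles[idx] + np.random.normal(0.0, motion_std, particles.shape)
```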
Procedia PDF Downloads 381
2899 Battery Control with Moving Average Algorithm to Smoothen the Intermittent Output Power of Photovoltaic Solar Power Plants in Off-Grid Configuration
Authors: Muhammad Gillfran Samual, Rinaldy Dalimi, Fauzan Hanif Jufri, Budi Sudiarto, Ismi Rosyiana Fitri
Abstract:
Solar energy is increasingly recognized as an important future energy source due to its abundant availability and renewable nature. However, the intermittent nature of solar energy can cause fluctuations in the electricity produced, making it difficult to guarantee a stable and reliable electricity supply. One solution is to use batteries in a photovoltaic solar power plant system with a moving average control algorithm, which can help smooth and reduce fluctuations in the solar power output. The parameter that can be adjusted in the moving average algorithm is the window size, i.e., the width of the arithmetic average of the photovoltaic output power over time. This research evaluates the effect of changing the window size parameter on the resulting smoothed photovoltaic output power and on the technical effects on the battery, i.e., power and energy usage. Based on the evaluation, it is found that increasing the window size slows the response of the photovoltaic output power to changes in irradiation and increases the smoothing quality of the intermittent photovoltaic output power. In addition, increasing the window size reduces the maximum power received on the load side and increases the amount of energy used by the battery during the power smoothing process, which, in turn, increases the required battery capacity.
Keywords: battery, intermittent, moving average, photovoltaic, power smoothing
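A minimal sketch of the smoothing scheme described above, assuming the battery absorbs or supplies the difference between the raw PV output and its moving average; the function names and the sign convention are assumptions.

```python
from collections import deque

def smooth_pv_with_battery(pv_power, window_size):
    """Moving-average smoothing of PV output: the load receives the smoothed value and
    the battery covers the difference. Positive battery_power = charging, negative =
    discharging (sign convention assumed for this sketch)."""
    buffer = deque(maxlen=window_size)
    smoothed, battery_power = [], []
    for p in pv_power:
        buffer.append(p)
        avg = sum(buffer) / len(buffer)   # arithmetic mean over the window
        smoothed.append(avg)
        battery_power.append(p - avg)     # surplus charges, deficit discharges
    battery_energy = sum(abs(b) for b in battery_power)  # proxy for energy throughput
    return smoothed, battery_power, battery_energy

# A larger window_size yields a smoother profile but more battery energy throughput.
```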
Procedia PDF Downloads 62
2898 Spatial Object-Oriented Template Matching Algorithm Using Normalized Cross-Correlation Criterion for Tracking Aerial Image Scene
Authors: Jigg Pelayo, Ricardo Villar
Abstract:
Following the development of aerial laser scanning in the Philippine geospatial industry, research on remote sensing and machine vision technology has become a trend. Object detection via template matching is one such application, characterized as fast and real-time. This paper applies a robust pattern matching algorithm based on the normalized cross-correlation (NCC) criterion function within object-based image analysis (OBIA), utilizing high-resolution aerial imagery and low-density LiDAR data. The height information from laser scanning provides an effective partitioning order, thus improving the hierarchical class feature pattern and allowing unnecessary calculations to be skipped. Since detection is executed in the object-oriented platform, mathematical morphology and multi-level filter algorithms were established to effectively avoid the influence of noise, small distortions, and fluctuating image saturation that affect the recognition rate of features. Furthermore, the scheme is evaluated to assess its performance in different situations and to inspect the computational complexity of the algorithms. Its effectiveness is demonstrated in areas of Misamis Oriental province, achieving an overall accuracy above 91%. The results also portray the potential and efficiency of the implemented algorithm under different lighting conditions.
Keywords: algorithm, LiDAR, object recognition, OBIA
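The NCC criterion itself is standard; a minimal (unoptimized) sketch over grayscale numpy arrays is shown below for reference. Real implementations typically use FFT- or integral-image-based formulations for speed; the function and variable names here are assumptions.

```python
import numpy as np

def normalized_cross_correlation(image, template):
    """Slide `template` over `image` and return the NCC score map:
    NCC = sum((f - mean_f) * (t - mean_t)) / (||f - mean_f|| * ||t - mean_t||)."""
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.linalg.norm(t)
    h, w = image.shape[0] - th + 1, image.shape[1] - tw + 1
    scores = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            f = image[i:i + th, j:j + tw]
            f = f - f.mean()
            denom = np.linalg.norm(f) * t_norm
            scores[i, j] = (f * t).sum() / denom if denom > 0 else 0.0
    return scores  # np.unravel_index(scores.argmax(), scores.shape) gives the best match
```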
Procedia PDF Downloads 244
2897 Genetic Algorithm Methods for Determination Over Flow Coefficient of Medium Throat Length Morning Glory Spillway Equipped Crest Vortex Breakers
Authors: Roozbeh Aghamajidi
Abstract:
Shaft spillways are circular spillways generally used to release unexpected floods at earth and concrete dams. There are different types of shaft spillways: stepped and smooth. Stepped spillways pass larger discharges than smooth spillways; therefore, understanding the flow behavior of these spillways helps to use them better and more efficiently. Moreover, using a vortex breaker has a great effect on the flow passing through a shaft spillway. For efficient use, the risk of the flow pressure dropping below the fluid vapor pressure, which causes cavitation, should be prevented as far as possible. This research studies the behavior of the flow over spillways with different vortex breaker shapes on the spillway crest. The effects of changes in flow regime, step dimensions, and discharge type on the spillway are studied. Two spillway models with three different vortex breakers and three arrangements have been used to assess the hydraulic characteristics of the flow. With regard to the inlet discharge to the spillway, pressure and flow velocity on the spillway surface have been measured at several points after each run. This information leads to better design criteria for the spillway profile. To achieve these purposes, optimization plays an important role, and a genetic algorithm is utilized to study the emptying discharge. As a result, the best spillway type, with the maximum discharge coefficient, turned out to be the smooth spillway with ogee-shaped vortex breakers in the three-breaker arrangement. It is also concluded that the genetic algorithm can be used to optimize the results.
Keywords: shaft spillway, vortex breaker, flow, genetic algorithm
Procedia PDF Downloads 371
2896 Light-Weight Network for Real-Time Pose Estimation
Authors: Jianghao Hu, Hongyu Wang
Abstract:
An effective and efficient human pose estimation algorithm is an important requirement for real-time human pose estimation on mobile devices. This paper proposes a light-weight human key-point detection algorithm, Light-Weight Network for Real-Time Pose Estimation (LWPE). LWPE uses a light-weight backbone network and depthwise separable convolutions to reduce parameters and lower latency. LWPE uses a feature pyramid network (FPN) to fuse the high-resolution, semantically weak features with the low-resolution, semantically strong features. In the meantime, with multi-scale prediction, the result predicted from the low-resolution feature map is stacked onto the adjacent higher-resolution feature map to intermediately supervise the network and continuously refine the results. In the last step, the key-point coordinates predicted at the highest resolution are used as the final output of the network. For key points that are difficult to predict, LWPE adopts an online hard key-point mining strategy to focus on them. The proposed algorithm achieves excellent performance on the single-person dataset selected from the AI (artificial intelligence) challenge dataset. The algorithm maintains high-precision performance even though the model contains only 3.9M parameters, and it can run at 225 frames per second (FPS) on a generic graphics processing unit (GPU).
Keywords: depthwise separable convolutions, feature pyramid network, human pose estimation, light-weight backbone
Procedia PDF Downloads 154
2895 Space Time Adaptive Algorithm in Bi-Static Passive Radar Systems for Clutter Mitigation
Authors: D. Venu, N. V. Koteswara Rao
Abstract:
Space-time adaptive processing (STAP) is an effective tool for detecting a moving target in spaceborne or airborne radar systems. Airborne passive radar systems utilize broadcast, navigation, and communication signals to perform various surveillance tasks and have attracted significant interest in the recent past; hence the need of the hour is cost-effective systems compared with conventional active radar systems. Moreover, the requirement of only a small number of secondary samples for effective clutter suppression in bi-static passive radar offers abundant illuminator resources for passive surveillance radar systems. This paper presents a framework for incorporating knowledge sources directly into the space-time beamformer of airborne adaptive radars. The STAP algorithm for clutter mitigation in passive bi-static radar better quantifies the reduction in sample size by amalgamating the earlier data bank with existing radar data sets. We also propose a novel method to estimate the clutter covariance matrix and perform STAP for efficient clutter suppression based on a small sample size. Furthermore, the effectiveness of the proposed algorithm is verified using MATLAB simulations in order to validate the STAP algorithm for passive bi-static radar. In conclusion, this study highlights applications that augment traditional active radars using cost-effective measures.
Keywords: bistatic radar, clutter, covariance matrix, passive radar, STAP
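The paper's covariance estimator is not reproduced in the abstract; for orientation, a minimal sketch of the classic sample-matrix-inversion STAP weight computation is shown, with diagonal loading as one common small-sample remedy. Array sizes and the loading factor are assumptions, and this is not the authors' knowledge-aided estimator.

```python
import numpy as np

def stap_weights(secondary_snapshots, steering_vector, loading=1e-2):
    """Sample-matrix-inversion STAP: w is proportional to R^{-1} s, with diagonal
    loading to stabilize the estimate when few secondary samples are available.
    `secondary_snapshots` is an (M, K) complex array of K space-time training snapshots."""
    M, K = secondary_snapshots.shape
    R = secondary_snapshots @ secondary_snapshots.conj().T / K      # sample covariance
    R += loading * np.trace(R).real / M * np.eye(M)                 # diagonal loading
    w = np.linalg.solve(R, steering_vector)
    return w / (steering_vector.conj().T @ w)                       # unit gain on target
```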
Procedia PDF Downloads 295
2894 Optimal Design of Composite Patch for a Cracked Pipe by Utilizing Genetic Algorithm and Finite Element Method
Authors: Mahdi Fakoor, Seyed Mohammad Navid Ghoreishi
Abstract:
Composite patching is a common way of reinforcing cracked pipes and cylinders. The effects of composite patch reinforcement on the fracture parameters of a cracked pipe depend on a variety of parameters such as the number of layers and the angle, thickness, and material of each layer. Therefore, stacking sequence optimization of the composite patch becomes crucial for applications involving cracked pipes. In this study, in order to obtain the optimal stacking sequence for a composite patch with minimum weight and maximum resistance to crack propagation, a coupled Multi-Objective Genetic Algorithm (MOGA) and Finite Element Method (FEM) process is proposed. This optimization is performed for longitudinal and transverse semi-elliptical cracks, and the optimal stacking sequences and Pareto fronts for each kind of crack are presented. The proposed algorithm is validated against results collected from the existing literature.
Keywords: multi-objective optimization, Pareto front, composite patch, cracked pipe
Procedia PDF Downloads 312
2893 Wireless Transmission of Big Data Using Novel Secure Algorithm
Authors: K. Thiagarajan, K. Saranya, A. Veeraiah, B. Sudha
Abstract:
This paper presents a novel algorithm for secure, reliable, and flexible transmission of big data in two-hop wireless networks using a cooperative jamming scheme. Two-hop wireless networks consist of source, relay, and destination nodes. Big data has to be transmitted from source to relay and from relay to destination with security deployed at the physical layer. The cooperative jamming scheme makes the transmission of big data more secure by protecting it from eavesdroppers and malicious nodes of unknown location. The novel algorithm, which ensures secure and energy-balanced transmission of big data, includes selecting the data transmitting region, segmenting the selected region, determining the probability ratio for each node (capture, non-capture, and eavesdropper node) in every segment, and evaluating the probability using binary-based evaluation. If the transmission is secure, the two-hop transmission of big data resumes; otherwise, the attackers are countered by the cooperative jamming scheme and the data is transmitted in two hops.
Keywords: big data, two-hop transmission, physical layer wireless security, cooperative jamming, energy balance
Procedia PDF Downloads 490
2892 A Novel RLS Based Adaptive Filtering Method for Speech Enhancement
Authors: Pogula Rakesh, T. Kishore Kumar
Abstract:
Speech enhancement is a long-standing problem with numerous applications such as teleconferencing, VoIP, hearing aids, and speech recognition. The motivation behind this research work is to obtain a clean speech signal of higher quality by applying an optimal noise cancellation technique. Real-time adaptive filtering algorithms seem to be the best candidates among all categories of speech enhancement methods. In this paper, we propose a speech enhancement method based on a Recursive Least Squares (RLS) adaptive filter for speech signals. Experiments were performed on noisy data prepared by adding AWGN, babble, and pink noise to clean speech samples at -5 dB, 0 dB, 5 dB, and 10 dB SNR levels. We then compare the noise cancellation performance of the proposed RLS algorithm with the existing NLMS algorithm in terms of Mean Squared Error (MSE), Signal-to-Noise Ratio (SNR), and SNR loss. Based on the performance evaluation, the proposed RLS algorithm was found to be a better optimal noise cancellation technique for speech signals.
Keywords: adaptive filter, adaptive noise canceller, mean squared error, noise reduction, NLMS, RLS, SNR, SNR loss
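A minimal sketch of the standard RLS update in an adaptive-noise-canceller configuration (primary channel = speech plus noise, reference channel = correlated noise); the filter order and forgetting factor are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def rls_noise_canceller(primary, reference, order=8, lam=0.999, delta=0.01):
    """Adaptive noise canceller: the RLS filter predicts the noise component of
    `primary` from the correlated `reference`; the error signal is the enhanced speech."""
    w = np.zeros(order)
    P = np.eye(order) / delta                      # inverse correlation matrix estimate
    enhanced = np.zeros(len(primary))
    for n in range(order, len(primary)):
        x = reference[n - order:n][::-1]           # regressor, most recent sample first
        k = P @ x / (lam + x @ P @ x)              # gain vector
        e = primary[n] - w @ x                     # a priori error = speech estimate
        w = w + k * e                              # coefficient update
        P = (P - np.outer(k, x @ P)) / lam         # Riccati (inverse correlation) update
        enhanced[n] = e
    return enhanced
```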
Procedia PDF Downloads 481
2891 Spectrum Allocation in Cognitive Radio Using Monarch Butterfly Optimization
Authors: Avantika Vats, Kushal Thakur
Abstract:
This paper presents the problem, development, and utilization of Monarch Butterfly Optimization (MBO), rather than a Genetic Algorithm (GA), for channel allocation in cognitive radio. This approach offers a satisfactory way to obtain the accessible spectrum for both kinds of users, i.e., primary users (PUs) and secondary users (SUs). The proposed optimization procedure is based on a nature-inspired metaheuristic algorithm. In MBO, all monarch butterfly individuals are located in two distinct lands, viz. southern Canada and the northern USA (Land 1), and Mexico (Land 2). The positions of the monarch butterflies are updated in two ways. First, offspring are generated (position updating) by the migration operator, which can be adjusted by the migration ratio. This is followed by tuning the positions of other butterflies by means of the butterfly adjusting operator. To keep the population size unaltered and minimize fitness evaluations, the total number of butterflies newly produced in these two ways remains equal to the original population. The outcomes clearly display the capability of the MBO technique to find improved objective values compared with the genetic algorithm.
Keywords: cognitive radio, channel allocation, monarch butterfly optimization, evolutionary computation
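A heavily hedged sketch of the migration operator described above, using the commonly cited MBO defaults (migration period peri = 1.2, migration ratio p = 5/12) as assumptions; the channel-allocation encoding and the butterfly adjusting operator are not reproduced here.

```python
import random

def migration_operator(land1, land2, peri=1.2, p=5.0 / 12.0):
    """Generate offspring for Land 1: each dimension of a child is copied from a
    randomly chosen butterfly in Land 1 or Land 2, depending on the migration ratio p.
    `land1` / `land2` are lists of position vectors (e.g., candidate channel assignments)."""
    dims = len(land1[0])
    offspring = []
    for _ in land1:
        child = []
        for d in range(dims):
            r = random.random() * peri
            donor = random.choice(land1) if r <= p else random.choice(land2)
            child.append(donor[d])
        offspring.append(child)
    return offspring
```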
Procedia PDF Downloads 73
2890 A New Heuristic Algorithm for Maximization Total Demands of Nodes and Number of Covered Nodes Simultaneously
Authors: Ehsan Saghehei, Mahdi Eghbali
Abstract:
The maximal covering location problem (MCLP) was originally developed to determine a set of facility locations that maximizes the total customer demand served by the facilities within a predetermined critical service criterion. However, in problems where the differences between the demands of the covered nodes, or the number of nodes covered by each node, are large, standard methods of solving the MCLP may ignore these differences. In this paper, a heuristic solution is proposed based on ranking the demand of each node and the number of nodes covered by each node according to a predetermined critical value. The output of this method maximizes the total demand of the nodes and the number of covered nodes simultaneously. Furthermore, the solution algorithm is described through an example, and its results are compared with the Greedy and Lagrangian algorithms. Results of the algorithm on larger problem sizes, compared with other methods, are also provided. A summary and future work conclude the paper.
Keywords: heuristic solution, maximal covering location problem, ranking, set covering
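For context, a minimal sketch of the Greedy baseline referenced above: repeatedly open the candidate site that covers the most not-yet-covered demand. The data-structure names are assumptions.

```python
def greedy_mclp(coverage, demand, num_facilities):
    """`coverage[j]` is the set of demand nodes a facility at site j can serve within
    the critical service distance; `demand[i]` is node i's demand. Opens
    `num_facilities` sites, each time maximizing the newly covered demand."""
    open_sites, covered = [], set()
    for _ in range(num_facilities):
        best = max(
            (j for j in coverage if j not in open_sites),
            key=lambda j: sum(demand[i] for i in coverage[j] - covered),
        )
        open_sites.append(best)
        covered |= coverage[best]
    total_demand = sum(demand[i] for i in covered)
    return open_sites, covered, total_demand
```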
Procedia PDF Downloads 573
2889 A Generalized Space-Efficient Algorithm for Quantum Bit String Comparators
Authors: Khuram Shahzad, Omar Usman Khan
Abstract:
Quantum bit string comparators (QBSC) operate on two sequences of n-qubits, enabling the determination of their relationships, such as equality, greater than, or less than. This is analogous to the way conditional statements are used in programming languages. Consequently, QBSCs play a crucial role in various algorithms that can be executed or adapted for quantum computers. The development of efficient and generalized comparators for any n-qubit length has long posed a challenge, as they have a high-cost footprint and lead to quantum delays. Comparators that are efficient are associated with inputs of fixed length. As a result, comparators without a generalized circuit cannot be employed at a higher level, though they are well-suited for problems with limited size requirements. In this paper, we introduce a generalized design for the comparison of two n-qubit logic states using just two ancillary bits. The design is examined on the basis of qubit requirements, ancillary bit usage, quantum cost, quantum delay, gate operations, and circuit complexity and is tested comprehensively on various input lengths. The work allows for sufficient flexibility in the design of quantum algorithms, which can accelerate quantum algorithm development.
Keywords: quantum comparator, quantum algorithm, space-efficient comparator, comparator
Procedia PDF Downloads 15
2888 On Dynamic Chaotic S-BOX Based Advanced Encryption Standard Algorithm for Image Encryption
Authors: Ajish Sreedharan
Abstract:
Security in the transmission and storage of digital images is important in today's image communications and confidential video conferencing. Due to the increasing use of images in industrial processes, it is essential to protect confidential image data from unauthorized access. The Advanced Encryption Standard (AES) is a well-known block cipher that has several advantages in data encryption; however, it is not ideally suited for real-time applications. This paper presents modifications to the Advanced Encryption Standard to provide a high level of security and better image encryption. The modifications are made by adjusting the ShiftRows transformation and using a dynamic chaotic S-box. In AES, the SubBytes, ShiftRows, and MixColumns steps by themselves would provide no security because they do not use the key. In the dynamic chaotic S-box based AES, the SubBytes step provides security because the S-box is constructed from the key. Experimental results verify that the proposed modification to the image cryptosystem is highly secure from the cryptographic viewpoint. The results also show that, compared with the original AES encryption algorithm, the modified algorithm gives better encryption results in terms of security against statistical attacks.
Keywords: advanced encryption standard (AES), dynamic chaotic S-box, image encryption, security analysis, ShiftRows transformation
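The paper's S-box construction is not given in the abstract; below is a hedged sketch of one common way to derive a key-dependent chaotic S-box (a logistic map seeded from a hash of the key, with the orbit ranked to form a byte permutation). The map, seeding, and parameters are illustrative assumptions, not the authors' design.

```python
import hashlib

def chaotic_sbox(key: bytes, r=3.99):
    """Build a key-dependent 8-bit S-box: iterate a logistic map seeded from the key
    hash, then rank the chaotic orbit to obtain a permutation of 0..255."""
    digest = hashlib.sha256(key).digest()
    x = (int.from_bytes(digest[:8], "big") % (10 ** 6) + 1) / (10 ** 6 + 2)  # x0 in (0, 1)
    for _ in range(100):                 # discard the transient
        x = r * x * (1 - x)
    orbit = []
    for _ in range(256):
        x = r * x * (1 - x)
        orbit.append(x)
    # Sorting the orbit values yields a key-dependent permutation of byte values.
    sbox = [i for i, _ in sorted(enumerate(orbit), key=lambda t: t[1])]
    inverse = [0] * 256
    for idx, val in enumerate(sbox):
        inverse[val] = idx               # inverse S-box for decryption
    return sbox, inverse
```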
Procedia PDF Downloads 437
2887 DWT-SATS Based Detection of Image Region Cloning
Authors: Michael Zimba
Abstract:
A duplicated image region may be subjected to a number of attacks, such as noise addition, compression, reflection, rotation, and scaling, with the intention of either merely matching it to its targeted neighborhood or preventing its detection. In this paper, we present an effective and robust method of detecting duplicated regions, including those affected by the various attacks. In order to reduce the dimension of the image, the proposed algorithm first performs a discrete wavelet transform (DWT) of a suspicious image. However, unlike most existing copy-move image forgery (CMIF) detection algorithms operating in the DWT domain, which extract only the low-frequency sub-band of the DWT of the suspicious image and thereby leave valuable information in the other three sub-bands, the proposed algorithm simultaneously extracts features from all four sub-bands. The extracted features are not only a more accurate representation of image regions but are also robust to additive noise, JPEG compression, and affine transformation. Furthermore, principal component analysis with eigenvalue decomposition (PCA-EVD) is applied to reduce the dimension of the features. The extracted features are then sorted using the more computationally efficient radix sort algorithm. Finally, same affine transformation selection (SATS), a duplication verification method, is applied to detect duplicated regions. The proposed algorithm is not only fast but also more robust to attacks compared with related CMIF detection algorithms. The experimental results show high detection rates.
Keywords: affine transformation, discrete wavelet transform, radix sort, SATS
Procedia PDF Downloads 230
2886 Analysis of Different Classification Techniques Using WEKA for Diabetic Disease
Authors: Usama Ahmed
Abstract:
Data mining is the process of analyzing data in order to predict helpful information. It is a field of research that addresses various types of problems. In data mining, classification is an important technique for classifying different kinds of data. Diabetes is a very common disease. This paper applies different classification techniques to a diabetes dataset using the Waikato Environment for Knowledge Analysis (WEKA) and finds which algorithm is most suitable. The best classification algorithm for the diabetic data is Naïve Bayes, with an accuracy of 76.31% and 0.06 seconds to build the model.
Keywords: data mining, classification, diabetes, WEKA
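The experiment above is performed in WEKA; as a rough, hedged equivalent in Python, a Gaussian Naive Bayes sketch with scikit-learn is shown below. The file name, column layout, and train/test split are assumptions, and the results will not exactly match WEKA's NaiveBayes.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score

# Assumed CSV layout: feature columns followed by a binary "Outcome" label.
data = pd.read_csv("diabetes.csv")
X, y = data.drop(columns=["Outcome"]), data["Outcome"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

model = GaussianNB()          # Gaussian Naive Bayes, analogous to WEKA's NaiveBayes
model.fit(X_train, y_train)

print("Accuracy:", accuracy_score(y_test, model.predict(X_test)))
```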
Procedia PDF Downloads 147
2885 A New Learning Automata-Based Algorithm to the Priority-Based Target Coverage Problem in Directional Sensor Networks
Authors: Shaharuddin Salleh, Sara Marouf, Hosein Mohammadi
Abstract:
Directional sensor networks (DSNs) have recently attracted a great deal of attention due to their extensive applications in a wide range of situations. One of the most important problems associated with DSNs is covering a set of targets in a given area while, at the same time, maximizing the network lifetime; this is difficult due to the limited sensing angle and battery power of the directional sensors. The problem is further complicated by the possibility that targets may have different coverage requirements. In the present study, this problem is referred to as priority-based target coverage (PTC). As sensors are often densely deployed, organizing the sensors into several cover sets and then activating these cover sets successively is a promising solution to this problem. In this paper, we propose a learning automata-based algorithm to organize the directional sensors into several cover sets in such a way that each cover set satisfies the coverage requirements of all the targets. Several experiments were conducted to evaluate the performance of the proposed algorithm. The results demonstrate that the algorithm is able to contribute to solving the problem.
Keywords: directional sensor networks, target coverage problem, cover set formation, learning automata
Procedia PDF Downloads 414
2884 Improvements in OpenCV's Viola Jones Algorithm in Face Detection–Skin Detection
Authors: Jyoti Bharti, M. K. Gupta, Astha Jain
Abstract:
This paper proposes a new, improved approach for filtering false positives among faces detected by OpenCV's Viola-Jones algorithm. In this approach, false positives are filtered using skin detection in two colour spaces, i.e., HSV (hue, saturation, and value) and YCrCb (Y is the luma component, Cr the red difference, and Cb the blue difference). As a result, false detections are reduced. The proposed method reaches an accuracy of about 98.7%; thus, a better recognition rate is achieved.
Keywords: face detection, Viola Jones, false positives, OpenCV
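A minimal OpenCV sketch of the pipeline described above: detect face candidates with the Haar cascade, then reject candidates whose skin-pixel ratio (intersection of HSV and YCrCb thresholds) is too low. The threshold ranges and the minimum ratio are typical values assumed for illustration, not the paper's values.

```python
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def skin_ratio(bgr_roi):
    """Fraction of pixels classified as skin in both HSV and YCrCb spaces."""
    hsv = cv2.cvtColor(bgr_roi, cv2.COLOR_BGR2HSV)
    ycrcb = cv2.cvtColor(bgr_roi, cv2.COLOR_BGR2YCrCb)
    hsv_mask = cv2.inRange(hsv, (0, 40, 60), (25, 255, 255))        # assumed H/S/V range
    ycrcb_mask = cv2.inRange(ycrcb, (0, 135, 85), (255, 180, 135))  # assumed Cr/Cb range
    mask = cv2.bitwise_and(hsv_mask, ycrcb_mask)
    return cv2.countNonZero(mask) / float(mask.size)

def detect_faces_filtered(frame, min_skin=0.3):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    candidates = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    # Keep only candidates with enough skin-colored pixels (false-positive filtering).
    return [(x, y, w, h) for (x, y, w, h) in candidates
            if skin_ratio(frame[y:y + h, x:x + w]) >= min_skin]
```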
Procedia PDF Downloads 406
2883 Determining of the Performance of Data Mining Algorithm Determining the Influential Factors and Prediction of Ischemic Stroke: A Comparative Study in the Southeast of Iran
Authors: Y. Mehdipour, S. Ebrahimi, A. Jahanpour, F. Seyedzaei, B. Sabayan, A. Karimi, H. Amirifard
Abstract:
Ischemic stroke is one of the most common causes of disability and mortality: it is the fourth leading cause of death in the world, and the third according to some sources. Only one third of patients with ischemic stroke fully recover; one third are left with permanent disability, and one third die. Thus, the use of predictive models for stroke has a vital role in reducing the complications and costs related to this disease. The aim of this study was therefore to identify the influential factors and predict ischemic stroke with the help of DM methods. The present study was a descriptive-analytic study. The population consisted of 213 cases from among patients referred to Ali ibn Abi Talib (AS) Hospital in Zahedan. The data collection tool was a checklist with confirmed validity and reliability. This study used decision tree DM algorithms for modeling. Data analysis was performed using SPSS-19 and SPSS Modeler 14.2. The comparison of algorithms showed that the CHAID algorithm, with 95.7% accuracy, has the best performance. Moreover, based on the model created, factors such as anemia, diabetes mellitus, hyperlipidemia, transient ischemic attacks, coronary artery disease, and atherosclerosis are the most influential factors in stroke. Decision tree algorithms, especially the CHAID algorithm, have acceptable precision and predictive ability for determining the factors affecting ischemic stroke. Thus, predictive models created with this algorithm can play a significant role in decreasing the mortality and disability caused by ischemic stroke.
Keywords: data mining, ischemic stroke, decision tree, Bayesian network
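CHAID is not available in common open-source libraries such as scikit-learn; purely as a hedged illustration of building a decision-tree predictor on checklist-style risk-factor data, a CART-based sketch is shown below. This is a stand-in for, not a reproduction of, the SPSS Modeler CHAID model, and the file and column names are assumptions.

```python
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Assumed checklist-derived dataset: binary risk-factor columns and a "stroke" label.
data = pd.read_csv("stroke_checklist.csv")
features = ["anemia", "diabetes_mellitus", "hyperlipidemia",
            "transient_ischemic_attack", "coronary_artery_disease", "atherosclerosis"]
X, y = data[features], data["stroke"]

tree = DecisionTreeClassifier(max_depth=4, min_samples_leaf=10, random_state=0)  # CART, not CHAID
scores = cross_val_score(tree, X, y, cv=10)
print("Mean 10-fold accuracy:", scores.mean())

tree.fit(X, y)
print(dict(zip(features, tree.feature_importances_)))  # relative influence of risk factors
```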
Procedia PDF Downloads 174
2882 Performance Analysis and Multi-Objective Optimization of a Kalina Cycle for Low-Temperature Applications
Authors: Sadegh Sadeghi, Negar Shabani
Abstract:
From a thermal point of view, zeotropic mixtures are likely to be more efficient than azeotropic fluids in low-temperature thermodynamic cycles due to their suitable boiling characteristics. In this study, the performance of a low-temperature Kalina cycle with an R717/water working fluid, as used in several existing power plants, is mathematically investigated. To analyze the behavior of the cycle, mass conservation, energy conservation, and exergy balance equations are presented. Given the similar molar masses of R717 (17.03 g/mol) and water (18.01 g/mol), there is no need to alter the size of Kalina system components such as the turbine and pump. To optimize the cycle's energy and exergy efficiencies simultaneously, a constrained multi-objective optimization is carried out applying an Artificial Bee Colony algorithm. The main motivation behind using this algorithm lies in its robustness, reliability, remarkable precision, and high-speed convergence rate in dealing with complicated constrained multi-objective problems. Convergence rates of the algorithm for calculating the optimal energy and exergy efficiencies are presented. Subsequently, due to the importance of the exergy concept in Kalina cycles, the exergy destruction occurring in the components is computed. Finally, the impacts of pressure, temperature, mass fraction, and mass flow rate on the energy and exergy efficiencies are studied in detail.
Keywords: artificial bee colony algorithm, binary zeotropic mixture, constrained multi-objective optimization, energy efficiency, exergy efficiency, Kalina cycle
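The core neighbor-search move of the Artificial Bee Colony algorithm mentioned above is sketched below (employed-bee phase only, for a scalar minimization objective); the constraint handling and the scalarization of the two efficiencies used in the paper are not reproduced.

```python
import random

def employed_bee_phase(foods, fitness, objective, bounds, trials):
    """ABC neighbor search: v_ij = x_ij + phi * (x_ij - x_kj) on one random dimension,
    with greedy replacement and a trial counter used later to trigger scout bees."""
    n = len(foods)
    for i in range(n):
        k = random.choice([idx for idx in range(n) if idx != i])
        j = random.randrange(len(foods[i]))
        phi = random.uniform(-1.0, 1.0)
        candidate = list(foods[i])
        candidate[j] = foods[i][j] + phi * (foods[i][j] - foods[k][j])
        lo, hi = bounds[j]
        candidate[j] = min(hi, max(lo, candidate[j]))   # clip to decision-variable bounds
        f = objective(candidate)
        if f < fitness[i]:                              # greedy selection (minimization)
            foods[i], fitness[i], trials[i] = candidate, f, 0
        else:
            trials[i] += 1
    return foods, fitness, trials
```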
Procedia PDF Downloads 153
2881 A Greedy Alignment Algorithm Supporting Medication Reconciliation
Authors: David Tresner-Kirsch
Abstract:
Reconciling patient medication lists from multiple sources is a critical task supporting the safe delivery of patient care. Manual reconciliation is a time-consuming and error-prone process, and recently attempts have been made to develop efficiency- and safety-oriented automated support for professionals performing the task. An important capability of any such support system is automated alignment – finding which medications from a list correspond to which medications from a different source, regardless of misspellings, naming differences (e.g. brand name vs. generic), or changes in treatment (e.g. switching a patient from one antidepressant class to another). This work describes a new algorithmic solution to this alignment task, using a greedy matching approach based on string similarity, edit distances, concept extraction and normalization, and synonym search derived from the RxNorm nomenclature. The accuracy of this algorithm was evaluated against a gold-standard corpus of 681 medication records; this evaluation found that the algorithm predicted alignments with 99% precision and 91% recall. This performance is sufficient to support decision support applications for medication reconciliation.
Keywords: clinical decision support, medication reconciliation, natural language processing, RxNorm
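A minimal, hedged sketch of the greedy alignment idea: score every cross-list pair by string similarity and repeatedly accept the best unused pair above a threshold. The RxNorm normalization, concept extraction, and synonym search steps of the described system are omitted, and the threshold is an assumption.

```python
from difflib import SequenceMatcher

def greedy_align(list_a, list_b, threshold=0.6):
    """Greedy one-to-one alignment of two medication lists by string similarity.
    A fuller system would normalize to RxNorm concepts and search synonyms before scoring."""
    scored = sorted(
        ((SequenceMatcher(None, a.lower(), b.lower()).ratio(), i, j)
         for i, a in enumerate(list_a) for j, b in enumerate(list_b)),
        reverse=True,
    )
    used_a, used_b, alignments = set(), set(), []
    for score, i, j in scored:
        if score < threshold:
            break                                  # remaining pairs are too dissimilar
        if i not in used_a and j not in used_b:
            alignments.append((list_a[i], list_b[j], round(score, 2)))
            used_a.add(i)
            used_b.add(j)
    return alignments

print(greedy_align(["Lisinopril 10 mg", "Metformin 500mg"],
                   ["metformin 500 mg tablet", "lisinopril 10mg"]))
```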
Procedia PDF Downloads 285
2880 An Efficient Robot Navigation Model in a Multi-Target Domain amidst Static and Dynamic Obstacles
Authors: Michael Ayomoh, Adriaan Roux, Oyindamola Omotuyi
Abstract:
This paper presents an efficient robot navigation model in a multi-target domain amidst static and dynamic workspace obstacles. The problem is that of developing an optimal algorithm to minimize the total travel time of a robot as it visits all target points within its task domain amidst unknown workspace obstacles and finally returns to its initial position. In solving this problem, a classical algorithm was first developed to compute the optimal number of paths to be travelled by the robot amidst the network of paths. The principle of shortest distance between the robot and the targets was used to compute the target point visitation order amidst workspace obstacles. An algorithm premised on the standard polar coordinate system was developed to determine the length of obstacles encountered by the robot, giving room for a geometrical estimation of the total surface area occupied by an obstacle, especially when classified as a relevant obstacle, i.e., an obstacle that lies between the robot and its potential visitation point. A stochastic model was developed and used to estimate the likelihood of a dynamic obstacle bumping into the robot's navigation path, and finally, the navigation/obstacle avoidance algorithm was hinged on the hybrid virtual force field (HVFF) method. Significant modelling constraints herein include the choice of navigation path to selected target points, the possible presence of static obstacles along a desired navigation path, the likelihood of encountering a dynamic obstacle along the robot's path, and the chance of it remaining at that position as a static obstacle, resulting in a case of re-routing after routing. The proposed algorithm demonstrated a high potential for optimal solutions in terms of efficiency and effectiveness.
Keywords: multi-target, mobile robot, optimal path, static obstacles, dynamic obstacles
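A minimal sketch of the shortest-distance visitation-order rule described above (nearest-neighbor ordering with a return to the start); the obstacle handling and the HVFF avoidance layer are not included, and the function names are assumptions.

```python
import math

def visitation_order(start, targets):
    """Nearest-neighbor ordering: repeatedly visit the closest unvisited target,
    then return to the start position; returns the order and total path length."""
    order, position, remaining = [], start, list(targets)
    total = 0.0
    while remaining:
        nxt = min(remaining, key=lambda t: math.dist(position, t))
        total += math.dist(position, nxt)
        order.append(nxt)
        remaining.remove(nxt)
        position = nxt
    total += math.dist(position, start)      # return leg to the initial position
    return order, total

order, length = visitation_order((0, 0), [(4, 1), (1, 5), (6, 6)])
```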
Procedia PDF Downloads 281
2879 Application of Chinese Remainder Theorem to Find The Messages Sent in Broadcast
Authors: Ayubi Wirara, Ardya Suryadinata
Abstract:
Improper application of the RSA algorithm can create vulnerability to attacks. The attack described here utilizes the relationship between broadcast messages sent to the users and fixed polynomial functions that belong to each user. The attack is carried out by applying the Chinese Remainder Theorem to obtain a general polynomial equation with a common modulus. Forming this general polynomial is the first step towards recovering the original message; the resulting equations can then be solved using Coppersmith's theorem.
Keywords: RSA algorithm, broadcast message, Chinese Remainder Theorem, Coppersmith's theorem
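For the simplest case (the same message broadcast to e recipients with public exponent e and pairwise coprime moduli), the CRT step alone already recovers the message, as sketched below; the polynomial/Coppersmith generalization for related messages described in the abstract is not shown.

```python
from math import prod

def crt(residues, moduli):
    """Chinese Remainder Theorem: find x with x ≡ residues[i] (mod moduli[i]) for coprime moduli."""
    N = prod(moduli)
    x = 0
    for r, n in zip(residues, moduli):
        m = N // n
        x += r * m * pow(m, -1, n)         # pow(m, -1, n) = modular inverse (Python 3.8+)
    return x % N

def integer_root(x, e):
    """Floor of the integer e-th root of x, by binary search."""
    lo, hi = 0, 1 << ((x.bit_length() + e - 1) // e + 1)
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if mid ** e <= x:
            lo = mid
        else:
            hi = mid - 1
    return lo

def hastad_broadcast(ciphertexts, moduli, e=3):
    """Recover m when the same message is broadcast to e users with public exponent e."""
    m_e = crt(ciphertexts, moduli)          # equals m**e, since m**e < n1 * n2 * ... * ne
    return integer_root(m_e, e)
```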
Procedia PDF Downloads 341
2878 Battery Grading Algorithm in 2nd-Life Repurposing LI-Ion Battery System
Authors: Ya L. V., Benjamin Ong Wei Lin, Wanli Niu, Benjamin Seah Chin Tat
Abstract:
This article introduces a methodology that improves the reliability and cyclability of 2nd-life Li-ion battery systems repurposed as energy storage systems (ESS). Most of the 2nd-life retired battery systems on the market have only a module/pack-level state-of-health (SOH) indicator, which is used to guide the appropriate depth-of-discharge (DOD) when the system is applied as an ESS. Due to the lack of cell-level SOH indication, the different degradation behaviors among cells cannot be identified upon reaching retired status; in the end, considering end-of-life (EOL) loss and pack-level DOD, the repurposed ESS has to be oversized by more than 1.5 times to meet the application requirements of reliability and cyclability. The proposed battery grading algorithm, using a non-invasive methodology, is able to detect outlier cells based on historical voltage data and to calculate cell-level historical maximum temperature using a semi-analytic method. In this way, each individual cell in the 2nd-life battery system can be graded in terms of SOH on the basis of its historical voltage fluctuation and estimated historical maximum temperature variation. These grades map to corresponding DOD grades in the application of the repurposed ESS to enhance system reliability and cyclability. In all, the introduced battery grading algorithm is non-invasive, compatible with all kinds of retired Li-ion battery systems that lack cell-level SOH indication, and can potentially be embedded into battery management software for preventive maintenance and real-time cyclability optimization.
Keywords: battery grading algorithm, 2nd-life repurposing battery system, semi-analytic methodology, reliability and cyclability
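A hedged sketch of the outlier-cell detection step based on historical voltage spread; the semi-analytic temperature estimation and the exact grade-to-DOD mapping are not reproduced, and the thresholds and grade labels are assumptions.

```python
import numpy as np

def grade_cells(voltage_history, z_threshold=3.0):
    """`voltage_history` is a (num_samples, num_cells) array of historical cell voltages.
    Cells whose voltage fluctuates persistently away from the pack average are flagged
    as outliers and given a lower grade (i.e., a more conservative DOD in the ESS)."""
    deviation = voltage_history - voltage_history.mean(axis=1, keepdims=True)
    spread = deviation.std(axis=0)                     # per-cell fluctuation vs. the pack
    z = (spread - spread.mean()) / (spread.std() + 1e-12)
    grades = np.where(z > z_threshold, "C",            # outlier: restricted DOD
             np.where(z > 1.0, "B", "A"))              # mild / normal fluctuation
    return dict(enumerate(grades))
```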
Procedia PDF Downloads 203
2877 Partially Knowing of Least Support Orthogonal Matching Pursuit (PKLS-OMP) for Recovering Signal
Authors: Israa Sh. Tawfic, Sema Koc Kayhan
Abstract:
Given a large sparse signal, the goal is to reconstruct the signal precisely and accurately from as few measurements as possible. Although this seems possible in theory, the difficulty lies in building an algorithm that performs the reconstruction accurately and efficiently. This paper proposes a new, proven method to reconstruct sparse signals, obtained by merging a method called Least Support Orthogonal Matching Pursuit (LS-OMP) with the theory of Partial Knowing Support (PKS), giving a new method called Partially Knowing of Least Support Orthogonal Matching Pursuit (PKLS-OMP). The new method depends on a greedy algorithm to compute the support, which depends on the number of iterations; to make it faster, PKLS-OMP adds the idea of partially known support to the algorithm. The method shows efficiency, simplicity, and accuracy in recovering the original signal when the sampling matrix satisfies the Restricted Isometry Property (RIP). Simulation results also show that it outperforms many algorithms, especially for compressible signals.
Keywords: compressed sensing, least support orthogonal matching pursuit, partial knowing support, restricted isometry property, signal reconstruction
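For reference, the baseline OMP iteration that LS-OMP/PKLS-OMP refine, with the support optionally seeded by known indices as one simple reading of the "partially known support" idea; this is a sketch under that assumption, not the authors' algorithm.

```python
import numpy as np

def omp(A, y, sparsity, known_support=()):
    """Orthogonal Matching Pursuit: greedily grow the support by the column of A most
    correlated with the residual, then re-fit by least squares on the chosen columns.
    `known_support` seeds the support set with indices assumed to be known in advance."""
    support = list(known_support)
    x = np.zeros(A.shape[1])
    coef = np.zeros(0)
    residual = y.copy()
    if support:                                        # fit the known part first
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    while len(support) < sparsity:
        correlations = np.abs(A.T @ residual)
        correlations[support] = -np.inf                # do not reselect chosen atoms
        support.append(int(np.argmax(correlations)))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x
```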
Procedia PDF Downloads 241
2876 A Fast Algorithm for Electromagnetic Compatibility Estimation for Radio Communication Network Equipment in a Complex Electromagnetic Environment
Authors: C. Temaneh-Nyah
Abstract:
Electromagnetic compatibility (EMC) is the ability of Radio Communication Equipment (RCE) to operate with a desired quality of service in a given Electromagnetic Environment (EME) without creating harmful interference with other RCE. This paper presents an algorithm which improves the simulation speed of estimating the EMC of RCE in a complex EME, based on a stage-by-stage frequency-energy filtering criterion. The algorithm considers different interference types, including blocking and intermodulation, and consists of the following steps: a simplified energy criterion, where filtering is based on comparing the free-space interference level to the industrial noise; a frequency criterion, which checks whether the interfering emission characteristics overlap with the receiver's channel characteristics; and lastly a detailed energy criterion, where the real channel interference level is compared to the noise level. In each of these stages, some interference cases are filtered out by the relevant criterion. This reduces the total number of pairwise combinations of different RCE involved in the tedious detailed energy analysis and thus improves the simulation speed.
Keywords: electromagnetic compatibility, electromagnetic environment, simulation of communication network
Procedia PDF Downloads 218