Search results for: SOFM algorithm
2757 Sparse Principal Component Analysis: A Least Squares Approximation Approach
Authors: Giovanni Merola
Abstract:
Sparse Principal Components Analysis aims to find principal components with few non-zero loadings. We derive such sparse solutions by adding a genuine sparsity requirement to the original Principal Components Analysis (PCA) objective function. This approach differs from others because it preserves PCA's original optimality: uncorrelatedness of the components and least squares approximation of the data. To identify the best subset of non-zero loadings we propose a branch-and-bound search and an iterative elimination algorithm. This last algorithm finds sparse solutions with large loadings and can be run without specifying the cardinality of the loadings and the number of components to compute in advance. We give thorough comparisons with the existing sparse PCA methods and several examples on real datasets.
Keywords: SPCA, uncorrelated components, branch-and-bound, backward elimination
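As a rough illustration of the backward (iterative) elimination idea mentioned above, the sketch below repeatedly refits the leading principal component on a shrinking set of variables and drops the smallest-magnitude loading. It is a minimal greedy loop, not the authors' exact least-squares SPCA criterion, and the data, stopping rule and variance measure are placeholders.

```python
import numpy as np

def backward_elimination_sparse_pc(X, min_card=1):
    """Greedy backward elimination for one sparse principal component.

    Simplified illustration only: refit the leading PC on the active variable
    subset and drop the variable whose loading has the smallest magnitude.
    This is NOT the paper's exact LS SPCA criterion."""
    Xc = X - X.mean(axis=0)              # centre the data
    active = list(range(Xc.shape[1]))    # indices of non-zero loadings
    path = []                            # (active set, fit quality) per step
    while len(active) >= min_card:
        # leading right singular vector of the active columns = PC loadings
        _, s, Vt = np.linalg.svd(Xc[:, active], full_matrices=False)
        v = Vt[0]
        fit = s[0] ** 2 / np.sum(Xc ** 2)   # fraction of total sum of squares captured
        path.append((list(active), fit))
        if len(active) == min_card:
            break
        active.pop(int(np.argmin(np.abs(v))))   # eliminate the smallest-loading variable
    return path

# usage: path = backward_elimination_sparse_pc(np.random.randn(100, 8), min_card=3)
```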
Procedia PDF Downloads 381
2756 A Distributed Cryptographically Generated Address Computing Algorithm for Secure Neighbor Discovery Protocol in IPv6
Authors: M. Moslehpour, S. Khorsandi
Abstract:
Due to the shortage of IPv4 addresses, the transition to IPv6 has gained significant momentum in recent years. Like the Address Resolution Protocol (ARP) in IPv4, the Neighbor Discovery Protocol (NDP) provides functions such as address resolution in IPv6. Besides its functionality, NDP is vulnerable to several attacks. To mitigate these attacks, Internet Protocol Security (IPsec) was introduced, but it was not efficient due to its limitations. Therefore, the SEND protocol was proposed for automatic protection of the auto-configuration process; it secures the neighbor discovery and address resolution process. To defend against threats to NDP’s integrity and identity, Cryptographically Generated Addresses (CGA) and asymmetric cryptography are used by SEND. Besides the advantages of SEND, its disadvantages, such as the computational cost of the CGA algorithm and the sequential nature of CGA generation, are considerable. In this paper, we parallelize this process across network resources in order to improve it. In addition, we compare the CGA generation time between self-computing and distributed-computing processes. We focus on the impact of malicious nodes on the CGA generation time in the network. According to the results, although malicious nodes participate in the generation process, the CGA generation time is less than when it is computed on a single node. With a Trust Management System, detecting and isolating malicious nodes becomes easier.
Keywords: NDP, IPsec, SEND, CGA, modifier, malicious node, self-computing, distributed-computing
Procedia PDF Downloads 278
2755 Considering Effect of Wind Turbines in the Distribution System
Authors: Majed Ahmadi
Abstract:
In recent years, the high penetration of different types of renewable energy sources (RESs) has affected most of the available strategies. The main motivations behind the high penetration of RESs are clean energy, modular systems, and easy installation. Among different types of RESs, the wind turbine (WT) is an interesting choice owing to the availability of wind in almost any area. New WT technologies can provide energy from residential applications to wide grid-connected applications. Regarding the WT, advantages such as reducing the dependence on fossil fuels and enhancing the independence and flexibility of the large power grid are the most prominent. Nevertheless, the highly volatile nature of wind speed injects much uncertainty into the grid that, if not managed optimally, can put the analyses far from reality. The aim of this project is to scrutinize and offer proper ways of renewing distribution networks while considering the effects of wind power plants and the uncertainties related to distribution systems, including the output rate of wind power generating plants and consumers' consumption rates, and also to decrease overall network losses, the amount of pollution, voltage deviation, and cost. To solve this problem we use the dual point estimate method, and the algorithm used in this paper is a reformed bat algorithm, which is examined in detail together with the results.
Keywords: order renewal, wind turbines, bat algorithm, outspread production, uncertainty
Procedia PDF Downloads 284
2754 Fuzzy-Genetic Algorithm Multi-Objective Optimization Methodology for Cylindrical Stiffened Tanks Conceptual Design
Authors: H. Naseh, M. Mirshams, M. Mirdamadian, H. R. Fazeley
Abstract:
This paper presents an extension of a fuzzy-genetic algorithm multi-objective optimization methodology that can effectively be used to find the overall satisfaction of the objective functions (selecting the design variables) in the early stages of the design process. The coupling of objective functions through design variables in an engineering design process results in difficulties in design optimization problems. In many cases, decision making on design variables conflicts with more than one discipline in system design. In space launch system conceptual design, decision making on some design variables (e.g., oxidizer to fuel mass flow rate O/F) in the early stages of the design process is related to the objectives of the liquid propellant engine (specific impulse) and the tanks (structure weight). The primary application of this methodology is therefore the design of a liquid propellant engine with maximum specific impulse and a cylindrical stiffened tank with minimum weight. To this end, the design problem is formulated as a fuzzy rule set based on the designer's expert knowledge with a holistic approach. The independent design variables in this model are oxidizer to fuel mass flow rate, thickness of stringers, thickness of rings, and shell thickness. To handle the mentioned problems, a fuzzy-genetic algorithm multi-objective optimization methodology is developed based on the Pareto optimal set. Finally, this methodology is applied to one stage of a space launch system to illustrate the accuracy and efficiency of the proposed methodology.
Keywords: cylindrical stiffened tanks, multi-objective, genetic algorithm, fuzzy approach
Procedia PDF Downloads 655
2753 A Mini Radar System for Low Altitude Targets Detection
Authors: Kangkang Wu, Kaizhi Wang, Zhijun Yuan
Abstract:
This paper deals with a mini radar system aimed at detecting small targets at low altitude. The radar operates at Ku-band in the frequency modulated continuous wave (FMCW) mode with two receiving channels. The radar system has the characteristics of compactness, mobility, and low power consumption. This paper focuses on the implementation of the radar system, and the block least mean square (Block LMS) algorithm is applied to minimize the fortuitous distortion. A series of experiments validates that the track of an unmanned aerial vehicle (UAV) can be easily distinguished with the radar system.
Keywords: unmanned aerial vehicle (UAV), interference, Block Least Mean Square (Block LMS) Algorithm, Frequency Modulated Continuous Wave (FMCW)
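For readers unfamiliar with the block LMS update mentioned above, the following NumPy sketch shows the core idea of accumulating the error gradient over a block of samples and updating the filter weights once per block. The tap count, block size and step size are arbitrary illustrative values, and the routine is not tied to the radar hardware described in the abstract.

```python
import numpy as np

def block_lms(x, d, n_taps=16, block=8, mu=0.01):
    """Block LMS adaptive filter: weights are updated once per block using the
    accumulated gradient of that block. x (reference) and d (desired) must have
    the same length. Returns (filter output y, error e, final weights w)."""
    w = np.zeros(n_taps)
    y = np.zeros(len(d))
    e = np.zeros(len(d))
    for start in range(n_taps - 1, len(x), block):
        grad = np.zeros(n_taps)
        stop = min(start + block, len(x))
        for n in range(start, stop):
            u = x[n - n_taps + 1:n + 1][::-1]   # newest sample first
            y[n] = w @ u
            e[n] = d[n] - y[n]
            grad += e[n] * u                    # accumulate gradient over the block
        w += mu * grad / block                  # single update per block
    return y, e, w

# usage with toy data: identify a 2-tap channel h = [0.5, -0.3]
# x = np.random.randn(4096); d = 0.5 * x - 0.3 * np.r_[0, x[:-1]]
# y, e, w = block_lms(x, d)
```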
Procedia PDF Downloads 320
2752 Monte Carlo Pathwise Sensitivities for Barrier Options with Application to Coco-Bond Calibration
Authors: Thomas Gerstner, Bastian von Harrach, Daniel Roth
Abstract:
The Monte Carlo pathwise sensitivities approach is well established for smooth payoff functions. In this work, we present a new Monte Carlo algorithm that is able to calculate the pathwise sensitivities for discontinuous payoff functions. Our main tool is the one-step survival idea of Glasserman and Staum. Although this technique yields new terms per observation when differentiating, the algorithm is still efficient. As an application, we use the results for a two-dimensional calibration of a Coco-Bond, which we model with different types of discretely monitored barrier options.
Keywords: Monte Carlo, discretely monitored barrier options, pathwise sensitivities, Coco-Bond
Procedia PDF Downloads 358
2751 Paddy/Rice Singulation for Determination of Husking Efficiency and Damage Using Machine Vision
Authors: M. Shaker, S. Minaei, M. H. Khoshtaghaza, A. Banakar, A. Jafari
Abstract:
In this study a system of machine vision and singulation was developed to separate paddy from rice and determine paddy husking and rice breakage percentages. The machine vision system consists of three main components: an imaging chamber, a digital camera, and a computer equipped with image processing software. The singulation device consists of a kernel holding surface, a motor with vacuum fan, and a dimmer. For separation of paddy from rice (in the image), it was necessary to set a threshold. Therefore, some images of paddy and rice were sampled and the RGB values of the images were extracted using MATLAB software. Then the mean and standard deviation of the data were determined. An image processing algorithm was developed using MATLAB to determine paddy/rice separation and rice breakage and paddy husking percentages, using the blue to red ratio. Tests showed that a threshold of 0.75 is suitable for separating paddy from rice kernels. Results from the evaluation of the image processing algorithm showed that the accuracies obtained with the algorithm were 98.36% and 91.81% for paddy husking and rice breakage percentage, respectively. Analysis also showed that a suction of 45 mmHg to 50 mmHg, yielding 81.3% separation efficiency, is appropriate for operation of the kernel singulation system.
Keywords: breakage, computer vision, husking, rice kernel
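A minimal sketch of the blue-to-red ratio test described above is given below. The 0.75 threshold is the value reported in the abstract, while the input format, the mean-ratio rule and the assignment of classes to either side of the threshold are illustrative assumptions rather than the authors' MATLAB implementation.

```python
import numpy as np

def classify_kernels(rgb_image, masks, threshold=0.75):
    """Label each segmented kernel as 'paddy' or 'rice' from its blue-to-red ratio.

    rgb_image : H x W x 3 array (R, G, B channels)
    masks     : list of boolean H x W arrays, one per segmented kernel
    Only the 0.75 threshold comes from the abstract; which class falls on which
    side of the threshold is an assumption made here for illustration."""
    r = rgb_image[..., 0].astype(float) + 1e-6   # avoid division by zero
    b = rgb_image[..., 2].astype(float)
    labels = []
    for m in masks:
        ratio = np.mean(b[m] / r[m])             # mean blue-to-red ratio inside the kernel
        labels.append("rice" if ratio > threshold else "paddy")
    return labels
```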
Procedia PDF Downloads 381
2750 Parallel Gripper Modelling and Design Optimization Using Multi-Objective Grey Wolf Optimizer
Authors: Golak Bihari Mahanta, Bibhuti Bhusan Biswal, B. B. V. L. Deepak, Amruta Rout, Gunji Balamurali
Abstract:
Robots are widely used in the manufacturing industry for rapid production with higher accuracy and precision. With the help of End-of-Arm Tools (EOATs), robots interact with the environment. Robotic grippers are such EOATs, which help to grasp the object in an automation system for improving efficiency. As the robotic gripper directly influences the quality of the product due to the contact between the gripper surface and the object to be grasped, it is necessary to design and optimize the gripper mechanism configuration. In this study, geometric and kinematic modeling of the parallel gripper is proposed. The grey wolf optimizer algorithm is introduced for solving the proposed multi-objective gripper optimization problem. Two objective functions developed from the geometric and kinematic modeling, along with several nonlinear constraints of the proposed gripper mechanism, are used to optimize the design variables of the system. Finally, the proposed methodology is compared with previously proposed methods such as the Teaching Learning Based Optimization (TLBO) algorithm, NSGA-II, and MODE, and it is seen that the proposed method is more efficient than the earlier methodologies.
Keywords: gripper optimization, metaheuristics, teaching learning based algorithm, multi-objective optimization, optimal gripper design
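As background for the grey wolf optimizer mentioned above, the sketch below implements the standard single-objective GWO update, in which each wolf is pulled toward the alpha, beta and delta leaders with a linearly decreasing coefficient. The paper's multi-objective, constrained gripper formulation is more involved, and the population size, iteration count and test function here are placeholders.

```python
import numpy as np

def grey_wolf_optimizer(f, lb, ub, n_wolves=20, n_iter=200, seed=0):
    """Plain single-objective Grey Wolf Optimizer (minimization) over box bounds.
    This is only the standard GWO update rule for illustration, not the paper's
    multi-objective variant with gripper-specific constraints."""
    rng = np.random.default_rng(seed)
    dim = len(lb)
    X = rng.uniform(lb, ub, size=(n_wolves, dim))
    for t in range(n_iter):
        fitness = np.array([f(x) for x in X])
        order = np.argsort(fitness)
        alpha, beta, delta = X[order[0]], X[order[1]], X[order[2]]
        a = 2 * (1 - t / n_iter)                      # linearly decreases from 2 to 0
        for i in range(n_wolves):
            new = np.zeros(dim)
            for leader in (alpha, beta, delta):
                A = 2 * a * rng.random(dim) - a
                C = 2 * rng.random(dim)
                D = np.abs(C * leader - X[i])
                new += leader - A * D                 # pull towards each leader
            X[i] = np.clip(new / 3.0, lb, ub)         # average of the three pulls
    fitness = np.array([f(x) for x in X])
    return X[np.argmin(fitness)], fitness.min()

# usage: x_best, f_best = grey_wolf_optimizer(lambda x: np.sum(x**2),
#                                             np.array([-5.0] * 4), np.array([5.0] * 4))
```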
Procedia PDF Downloads 188
2749 An Enhanced Hybrid Backoff Technique for Minimizing the Occurrence of Collision in Mobile Ad Hoc Networks
Authors: N. Sabiyath Fatima, R. K. Shanmugasundaram
Abstract:
In Mobile Ad-hoc Networks (MANETs), every node acts as both transmitter and receiver. The existing backoff models do not exactly forecast the performance of the wireless network, and they also experience elevated packet collisions. Every time a collision happens, the station’s contention window (CW) is doubled until it reaches the maximum value. The main objective of this paper is to diminish collisions by means of a contention window Multiplicative Increase Decrease Backoff (CWMIDB) scheme. The intention of increasing the CW is to reduce the collision probability by spreading the traffic over a larger time interval. Within wireless ad hoc networks, the CWMIDB algorithm dynamically controls the contention window of the nodes experiencing collisions. During packet communication, the backoff counter is uniformly selected from the range [0, CW-1]. Here, CW denotes the contention window, and its value depends on the number of unsuccessful transmissions of the packet. On the initial transmission attempt, CW is set to the minimum value (CWmin); if the transmission attempt fails, the value is doubled, and it is reset to the minimum value on a successful transmission. CWMIDB is simulated in the NS2 environment and its performance is compared with the Binary Exponential Backoff algorithm. The simulation results show an improvement in transmission probability compared to that of the existing backoff algorithm.
Keywords: backoff, contention window, CWMIDB, MANET
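To make the contention-window behaviour concrete, here is a small sketch of a multiplicative increase/decrease backoff state machine. The abstract does not state CWMIDB's exact factors or CW bounds, so the increase factor, decrease factor and CWmin/CWmax values below are illustrative assumptions rather than the authors' parameters.

```python
import random

class MIDBackoff:
    """Multiplicative increase / multiplicative decrease contention window.

    The 2.0 increase and 0.5 decrease factors and the CW bounds are illustrative;
    classic BEB would instead reset CW to CW_MIN after every success."""
    CW_MIN, CW_MAX = 32, 1024

    def __init__(self):
        self.cw = self.CW_MIN

    def draw_counter(self):
        # backoff counter uniformly chosen from [0, CW-1]
        return random.randint(0, self.cw - 1)

    def on_collision(self):
        self.cw = min(int(self.cw * 2.0), self.CW_MAX)   # multiplicative increase

    def on_success(self):
        self.cw = max(int(self.cw * 0.5), self.CW_MIN)   # multiplicative decrease
```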
Procedia PDF Downloads 277
2748 Writing a Parametric Design Algorithm Based on Recreation and Structural Analysis of Patkane Model: The Case Study of Oshtorjan Mosque
Authors: Behnoush Moghiminia, Jesus Anaya Diaz
Abstract:
The current study attempts to present the relationship between structural development, Patkaneh as one of the Iranian geometric patterns, and parametric algorithms by introducing two practical methods. While having a structural function, Patkaneh is also used as an ornamental element, so this work can be helpful in the scientific and practical review of Patkaneh. The current study aims to use Patkaneh as a parametric form generator based on the algorithm. The paper attempts to express how a more complete algorithm of this covering can be obtained based on the parametric study and analysis of a sample of Patkaneh, and also investigates the relationship between the development of the geometrical pattern of Patkaneh as a structural-decorative element of Iranian architecture and digital design. In this regard, to achieve the research purposes, the researchers investigated the oldest type of Patkaneh in the architectural history of Iran, the Northern Entrance Patkaneh of Oshtorjan Jame’ Mosque. A careful investigation of the historical background was carried out to answer the research questions. Then, by investigating the structural behavior of Patkaneh, its decorative or structural-decorative role was examined to eliminate the ambiguity. Next, the geometrical structure of Patkaneh was analyzed by introducing two practical methods. The first method is based on the constituent units of Patkaneh (square and diamond) and investigates the interactive relationships between them in 2D and 3D; it is appropriate for cases where there are rational and regular geometrical relationships. The second method is based on the separation of the floors and the investigation of their interrelation; it is practical when the constituent units are not geometrically regular and are highly diverse. Finally, the parametric form algorithm of these methods was codified.
Keywords: geometric properties, parametric design, Patkaneh, structural analysis
Procedia PDF Downloads 151
2747 Insider Theft Detection in Organizations Using Keylogger and Machine Learning
Authors: Shamatha Shetty, Sakshi Dhabadi, Prerana M., Indushree B.
Abstract:
About 66% of firms claim that insider attacks are more likely to happen. The frequency of insider incidents has increased by 47% in the last two years. The goal of this work is to prevent dangerous employee behavior by using keyloggers and a Machine Learning (ML) model. Every keystroke that the user enters is recorded by the keylogging program, also known as keystroke logging. Keyloggers are used to stop improper use of the system. This enables us to collect all textual data, save it in a CSV file, and analyze it using an ML algorithm and the VirusTotal API. Many large companies use this approach to methodically monitor how their employees use computers, the internet, and email. We utilize the SVM algorithm and the VirusTotal API to improve the overall efficiency and accuracy of identifying specific patterns and words, and to automate report generation for improved monitoring.
Keywords: cyber security, machine learning, cyclic process, email notification
Procedia PDF Downloads 57
2746 An Automatic Method for Building Learners’ Groups in Virtual Environment
Authors: O. Bourkoukou, Essaid El Bachari
Abstract:
Group composition is one of the key issues in collaborative learning for achieving a positive educational experience. The goal of this work is to propose for teachers and tutors a method to create effective collaborative learning groups in an e-learning environment based on the learner profile. For this purpose, a new function was defined to implicitly rate learning objects used by the learner during his learning experience. This paper describes the proposed algorithm to build an adequate collaborative learning group. In order to verify the performance of the proposed algorithm, several experiments were conducted on a real data set in a virtual environment. Results show the effectiveness of the method and suggest that the proposed approach is promising for producing better outcomes.
Keywords: building groups, collaborative learning, e-learning, learning objects
Procedia PDF Downloads 297
2745 Artificial Bee Colony Optimization for SNR Maximization through Relay Selection in Underlay Cognitive Radio Networks
Authors: Babar Sultan, Kiran Sultan, Waseem Khan, Ijaz Mansoor Qureshi
Abstract:
In this paper, a novel idea for the performance enhancement of the secondary network is proposed for Underlay Cognitive Radio Networks (CRNs). In underlay CRNs, primary users (PUs) impose strict interference constraints on the secondary users (SUs). The proposed scheme is based on Artificial Bee Colony (ABC) optimization for relay selection and power allocation to handle the highlighted primary challenge of underlay CRNs. ABC is a simple, population-based optimization algorithm which attains a global optimum solution by combining local search methods (employed and onlooker bees) and global search methods (scout bees). The proposed two-phase relay selection and power allocation algorithm aims to maximize the signal-to-noise ratio (SNR) at the destination while operating in underlay mode. The proposed algorithm has low computational complexity, and its performance is verified through simulation results for different numbers of potential relays, different interference threshold levels, and different transmit power thresholds for the selected relays.
Keywords: artificial bee colony, underlay spectrum sharing, cognitive radio networks, amplify-and-forward
Procedia PDF Downloads 581
2744 Relevance Feedback within CBIR Systems
Authors: Mawloud Mosbah, Bachir Boucheham
Abstract:
We present here the results of a comparative study of some techniques, available in the literature, related to the relevance feedback mechanism in the case of short-term learning. Only one method among those considered here belongs to the data mining field, namely the K-Nearest Neighbours algorithm (KNN), while the rest of the methods relate purely to the information retrieval field and fall under the purview of the following three major axes: shifting the query, feature weighting, and optimization of the parameters of the similarity metric. As a contribution, and in addition to the comparative purpose, we propose a new version of the KNN algorithm referred to as incremental KNN, which is distinct from the original version in the sense that, besides the influence of the seeds, the rating of the current target image is also influenced by the images already rated. The results presented here have been obtained after experiments conducted on the Wang database for one iteration and utilizing colour moments on the RGB space. This compact descriptor, colour moments, is adequate for the efficiency needed in the case of interactive systems. The results obtained allow us to claim that the proposed algorithm yields good results; it even outperforms a wide range of techniques available in the literature.
Keywords: CBIR, category search, relevance feedback, query point movement, standard Rocchio’s formula, adaptive shifting query, feature weighting, original KNN, incremental KNN
Procedia PDF Downloads 280
2743 Normalizing Scientometric Indicators of Individual Publications Using Local Cluster Detection Methods on Citation Networks
Authors: Levente Varga, Dávid Deritei, Mária Ercsey-Ravasz, Răzvan Florian, Zsolt I. Lázár, István Papp, Ferenc Járai-Szabó
Abstract:
One of the major shortcomings of widely used scientometric indicators is that different disciplines cannot be compared with each other. The issue of cross-disciplinary normalization has long been discussed, but even the classification of publications into scientific domains poses problems. Structural properties of citation networks offer new possibilities; however, the large size and constant growth of these networks call for caution. Here we present a new tool that relies on the structural properties of citation networks in order to perform cross-field normalization of scientometric indicators of individual publications. Due to the large size of the networks, a systematic procedure for identifying scientific domains based on a local community detection algorithm is proposed. The algorithm is tested with different benchmark and real-world networks. Then, by the use of this algorithm, the mechanism of the scientometric indicator normalization process is shown for a few indicators such as the citation number, the P-index, and a local version of the PageRank indicator. The fat-tail trend of the article indicator distribution enables us to successfully perform the indicator normalization process.
Keywords: citation networks, cross-field normalization, local cluster detection, scientometric indicators
Procedia PDF Downloads 203
2742 Novel GPU Approach in Predicting the Directional Trend of the S&P500
Authors: A. J. Regan, F. J. Lidgey, M. Betteridge, P. Georgiou, C. Toumazou, K. Hayatleh, J. R. Dibble
Abstract:
Our goal is the development of an algorithm capable of predicting the directional trend of the Standard and Poor’s 500 index (S&P 500). Extensive research has been published attempting to predict different financial markets using historical data, testing on an in-sample and trend basis, with many authors employing excessively complex mathematical techniques. In reviewing and evaluating these in-sample methodologies, it became evident that this approach was unable to achieve sufficiently reliable prediction performance for commercial exploitation. For these reasons, we moved to an out-of-sample strategy based on linear regression analysis of an extensive set of financial data correlated with historical closing prices of the S&P 500. We are pleased to report a directional trend accuracy of greater than 55% for tomorrow (t+1) in predicting the S&P 500.
Keywords: financial algorithm, GPU, S&P 500, stock market prediction
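The out-of-sample, regression-based workflow described above can be illustrated with a few lines of NumPy. The feature matrix, the 70/30 train/test split and the plain least-squares fit below are illustrative assumptions, not the authors' GPU implementation or data set.

```python
import numpy as np

def directional_accuracy(features, close, train_frac=0.7):
    """Out-of-sample sketch: fit a linear regression of next-day return on
    today's features, then score the sign (direction) of the prediction.

    `features` (T x k) and `close` (T,) stand in for the financial series the
    authors correlate with the S&P 500; the split fraction and the ordinary
    least-squares fit are illustrative assumptions."""
    ret = np.diff(close) / close[:-1]                  # next-day returns
    X = np.column_stack([np.ones(len(ret)), features[:-1]])
    split = int(train_frac * len(ret))
    beta, *_ = np.linalg.lstsq(X[:split], ret[:split], rcond=None)
    pred = X[split:] @ beta                            # out-of-sample predictions
    hits = np.sign(pred) == np.sign(ret[split:])
    return hits.mean()                                 # fraction of correct directions
```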
Procedia PDF Downloads 350
2741 Phasor Measurement Unit Based on Particle Filtering
Authors: Rithvik Reddy Adapa, Xin Wang
Abstract:
Phasor Measurement Units (PMUs) are sophisticated measuring devices that find the amplitude, phase, and frequency of various voltages and currents in a power system. The particle filter is a state estimation technique that uses Bayesian inference. Particle filters are widely used in pose estimation and indoor navigation and are very reliable. This paper studies and compares four different particle filters as PMUs, namely the generic particle filter (GPF), genetic algorithm particle filter (GAPF), particle swarm optimization particle filter (PSOPF), and adaptive particle filter (APF). Two different test signals are used to test the performance of the filters in terms of responsiveness and correctness of the estimates.
Keywords: phasor measurement unit, particle filter, genetic algorithm, particle swarm optimisation, state estimation
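As a rough illustration of a generic (bootstrap) particle filter applied to phasor estimation, the sketch below tracks the amplitude, phase and frequency of a noisy sinusoid. The state model, priors, process-noise levels and resampling scheme are all illustrative assumptions and not the paper's tuning.

```python
import numpy as np

def generic_pf_pmu(y, t, n_particles=2000, meas_std=0.05, seed=0):
    """Bootstrap particle filter estimating [A, phi, f] of a noisy sinusoid
    y(t) = A*cos(2*pi*f*t + phi) + noise. All tuning values are illustrative."""
    rng = np.random.default_rng(seed)
    # prior: amplitude ~ U(0.5, 1.5), phase ~ U(-pi, pi), frequency ~ U(45, 55) Hz
    particles = np.column_stack([
        rng.uniform(0.5, 1.5, n_particles),
        rng.uniform(-np.pi, np.pi, n_particles),
        rng.uniform(45.0, 55.0, n_particles),
    ])
    weights = np.full(n_particles, 1.0 / n_particles)
    estimates = []
    for k, tk in enumerate(t):
        # propagate with a small random walk (process noise)
        particles += rng.normal(0, [0.002, 0.01, 0.05], particles.shape)
        # weight by Gaussian measurement likelihood
        pred = particles[:, 0] * np.cos(2 * np.pi * particles[:, 2] * tk + particles[:, 1])
        weights *= np.exp(-0.5 * ((y[k] - pred) / meas_std) ** 2)
        weights += 1e-300                        # avoid an all-zero weight vector
        weights /= weights.sum()
        estimates.append(weights @ particles)    # posterior mean of [A, phi, f]
        # multinomial resampling back to uniform weights
        idx = rng.choice(n_particles, n_particles, p=weights)
        particles = particles[idx]
        weights = np.full(n_particles, 1.0 / n_particles)
    return np.array(estimates)
```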
Procedia PDF Downloads 9
2740 Transmit Power Optimization for Cooperative Beamforming in Reverse-Link MIMO Ad-Hoc Networks
Authors: Younghyun Jeon, Seungjoo Maeng
Abstract:
In ad-hoc networks, the great interest in MIMO schemes has led to their combination, which is also utilized in the applicable network. We cast the problem in the setting of a Reverse-link MIMO Ad-hoc Network (RMAN) and propose a methodology to maximize the data rate with respect to its power consumption using a node-cooperative beamforming technique. Based on the result of the mathematical optimization formulation, we design an algorithm that constructs the optimal orthogonal weight vector according to channel feedback and controls its transmission power according to the QoS-pricing value level. Simulation results show the validity of the proposed mathematical optimization result and algorithm, in that the sum-rate of each link converges to a fixed point.
Keywords: ad-hoc network, MIMO, cooperative beamforming, transmit power
Procedia PDF Downloads 398
2739 Distributed Optical Fiber Vibration Sensing Using Phase Generated Carrier Demodulation Algorithm
Authors: Zhihua Yu, Qi Zhang, Mingyu Zhang, Haolong Dai
Abstract:
Distributed fiber-optic vibration sensors are gaining extensive attention for the advantages of high sensitivity, accurate location, light weight, large-scale monitoring, and good concealment. In this paper, a novel optical fiber distributed vibration sensing system is proposed, which is based on self-interference of Rayleigh backscattering with a phase generated carrier (PGC) demodulation algorithm. Pulsed light is sent into the sensing fiber, and the Rayleigh backscattering light from a certain position along the sensing fiber interferes through an unbalanced Michelson interferometer (MI) to generate the interference light. An improved PGC demodulation algorithm is carried out to recover the phase information of the interference signal, which carries the sensing information. Three vibration events were applied simultaneously to different positions over 2000 m of sensing fiber and demodulated correctly. Experiments show that the spatial resolution is 10 m, the noise level of the Φ-OTDR system is about 10⁻³ rad/√Hz, and the signal-to-noise ratio (SNR) is about 30.34 dB. This vibration measurement scheme can be applied at the surface, on the seabed, or downhole for vibration measurements or distributed acoustic sensing (DAS).
Keywords: fiber optics sensors, Michelson interferometry, MI, phase-sensitive optical time domain reflectometry, Φ-OTDR, phase generated carrier, PGC
Procedia PDF Downloads 189
2738 Design and Development of an Algorithm to Predict Fluctuations of Currency Rates
Authors: Nuwan Kuruwitaarachchi, M. K. M. Peiris, C. N. Madawala, K. M. A. R. Perera, V. U. N Perera
Abstract:
Dealing with foreign-market business has always taken a special place in a country’s economy. Political and social factors come into play, making currency rates fluctuate rapidly. Currency rate prediction has become an important factor for larger international businesses, since large amounts of money are exchanged between countries. This research focuses on comparing the accuracy of mainly three models: Autoregressive Integrated Moving Average (ARIMA), Artificial Neural Networks (ANN), and Support Vector Machines (SVM). A data series of imports, exports, and the USD exchange rate with respect to the LKR was selected for training using the above-mentioned algorithms. After training on the data set and comparing the algorithms, it was observed that SVM predictions performed better than the other models; performance was further improved by combining the SVM and SVR models.
Keywords: ARIMA, ANN, FFNN, RMSE, SVM, SVR
Procedia PDF Downloads 212
2737 Using the Smith-Waterman Algorithm to Extract Features in the Classification of Obesity Status
Authors: Rosa Figueroa, Christopher Flores
Abstract:
Text categorization is the problem of assigning a new document to a set of predetermined categories on the basis of a training set of free-text data that contains documents whose category membership is known. To train a classification model, it is necessary to extract characteristics in the form of tokens that facilitate the learning and classification process. In text categorization, the feature extraction process involves the use of word sequences, also known as N-grams. In general, it is expected that documents belonging to the same category share similar features. The Smith-Waterman (SW) algorithm is a dynamic programming algorithm that performs a local sequence alignment in order to determine similar regions between two strings or protein sequences. This work explores the use of the SW algorithm as an alternative to feature extraction in text categorization. The dataset used for this purpose contains 2,610 annotated documents with the classes Obese/Non-Obese. This dataset was represented in matrix form using the Bag of Words approach. The score selected to represent the occurrence of the tokens in each document was the term frequency-inverse document frequency (TF-IDF). In order to extract features for classification, four experiments were conducted: the first experiment used SW to extract features, the second used unigrams (single words), the third used bigrams (two-word sequences), and the last used a combination of unigrams and bigrams. To test the effectiveness of the extracted feature sets, a Support Vector Machine (SVM) classifier was tuned using 20% of the dataset. The remaining 80% of the dataset, together with 5-fold cross validation, was used to evaluate and compare the performance of the four feature extraction experiments. Results from the tuning process suggest that SW performs better than the N-gram based feature extraction. These results were confirmed by using the remaining 80% of the dataset, where SW performed best (accuracy = 97.10%, weighted average F-measure = 97.07%). The second best was obtained by the combination of unigrams-bigrams (accuracy = 96.04%, weighted average F-measure = 95.97%), closely followed by the bigrams (accuracy = 94.56%, weighted average F-measure = 94.46%) and finally unigrams (accuracy = 92.96%, weighted average F-measure = 92.90%).
Keywords: comorbidities, machine learning, obesity, Smith-Waterman algorithm
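For reference, a compact sketch of the Smith-Waterman local alignment score over word-token sequences (the setting used above) follows. The match, mismatch and gap scores are illustrative and not the values used in the paper.

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    """Smith-Waterman local alignment score between two token sequences.

    Here the sequences are lists of word tokens, as in text categorization;
    the scoring parameters are illustrative assumptions."""
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])   # best local alignment may end anywhere
    return best

# usage: similarity between two documents' token lists
# smith_waterman("patient is morbidly obese".split(), "the patient is obese".split())
```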
Procedia PDF Downloads 297
2736 Using ε Value in Describe Regular Languages by Using Finite Automata, Operation on Languages and the Changing Algorithm Implementation
Authors: Abdulmajid Mukhtar Afat
Abstract:
This paper aims at introducing nondeterministic finite automata with ε-moves, which are used to perform some operations on languages. A program is created to implement the algorithm that converts a nondeterministic finite automaton with ε-moves (ε-NFA) to a deterministic finite automaton (DFA). The program is written in the C++ programming language. The program reads an FA 5-tuple from a text file and then classifies it as either a DFA, an NFA, or an ε-NFA. For a DFA, the program takes a string w and decides whether it is accepted or rejected; the tracking path for an accepted string is saved by the program. In the case of an NFA or ε-NFA, the program converts the automaton to a DFA to enable tracking and to decide whether the string w belongs to the regular language or not.
Keywords: DFA, NFA, ε-NFA, eclose, finite automata, operations on languages
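The conversion relies on the ε-closure and the subset construction; the Python sketch below shows both steps over dictionary-based transition tables. It is the textbook algorithm, not the data layout of the paper's C++ program, and the input conventions are assumptions.

```python
def eclose(states, eps_moves):
    """ε-closure: all states reachable from `states` via ε-transitions only."""
    stack, closure = list(states), set(states)
    while stack:
        q = stack.pop()
        for r in eps_moves.get(q, ()):
            if r not in closure:
                closure.add(r)
                stack.append(r)
    return frozenset(closure)

def nfa_eps_to_dfa(start, accept, delta, eps_moves, alphabet):
    """Subset construction from an ε-NFA to a DFA.

    delta[(q, a)] and eps_moves[q] are sets of next NFA states; the resulting
    DFA states are frozensets of NFA states."""
    start_state = eclose({start}, eps_moves)
    dfa_delta, todo, seen = {}, [start_state], {start_state}
    while todo:
        S = todo.pop()
        for a in alphabet:
            moved = set()
            for q in S:
                moved |= delta.get((q, a), set())
            T = eclose(moved, eps_moves)    # ε-closure after reading symbol a
            dfa_delta[(S, a)] = T
            if T not in seen:
                seen.add(T)
                todo.append(T)
    dfa_accept = {S for S in seen if S & accept}   # contains an accepting NFA state
    return start_state, dfa_accept, dfa_delta
```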
Procedia PDF Downloads 489
2735 Probability-Based Damage Detection of Structures Using Kriging Surrogates and Enhanced Ideal Gas Molecular Movement Algorithm
Authors: M. R. Ghasemi, R. Ghiasi, H. Varaee
Abstract:
Surrogate models have received increasing attention for use in detecting damage of structures based on vibration modal parameters. However, uncertainties existing in the measured vibration data may lead to false or unreliable output from such a model. In this study, an efficient approach based on Monte Carlo simulation is proposed to take into account the effect of uncertainties in developing a surrogate model. The probability of damage existence (PDE) is calculated based on the probability density functions of the undamaged and damaged states. The kriging technique allows one to genuinely quantify the surrogate error; therefore, it is chosen as the metamodeling technique. An enhanced version of the ideal gas molecular movement (EIGMM) algorithm is used as the main algorithm for model updating. The developed approach is applied to detect simulated damage in numerical models of a 72-bar space truss and a 120-bar dome truss. The simulation results show that the proposed method performs well in probability-based damage detection of structures with less computational effort compared to a direct finite element model.
Keywords: probability-based damage detection (PBDD), Kriging, surrogate modeling, uncertainty quantification, artificial intelligence, enhanced ideal gas molecular movement (EIGMM)
Procedia PDF Downloads 239
2734 Optimization of Water Pipeline Routes Using a GIS-Based Multi-Criteria Decision Analysis and a Geometric Search Algorithm
Authors: Leon Mortari
Abstract:
The Metropolitan East region of Rio de Janeiro state, Brazil, faces a historic water scarcity. Among the alternatives studied to solve this situation, the possibility of adduction of the water available in the Lagoa de Juturnaíba reservoir to supply the region's municipalities stands out. The allocation of a linear engineering project must occur through an evaluation of different aspects, such as altitude, slope, proximity to roads, distance from watercourses, land use and occupation, and physical and chemical features of the soil. This work aims to apply a multi-criteria model that combines geoprocessing techniques, decision making, and a geometric search algorithm to optimize a hypothetical adductor system in the scenario of expanding the water supply system that serves this region, known as Imunana-Laranjal, using the Lagoa de Juturnaíba as the source. This study proposes the construction of a spatial database related to the presented evaluation criteria, the treatment and rasterization of these data, and the standardization and reclassification of this information in a Geographic Information System (GIS) platform. The methodology involves the integrated analysis of these criteria, using their relative importance defined by weighting them based on expert consultations and the Analytic Hierarchy Process (AHP) method. Three approaches are defined for weighting the criteria by AHP: the first treats all criteria as equally important, the second considers weighting based on a pairwise comparison matrix, and the third establishes a hierarchy based on the priority of the criteria. For each approach, a distinct group of weightings is defined. In the next step, map algebra tools are used to overlay the layers and generate cost surfaces, which indicate the resistance to the passage of the adductor route, using the three groups of weightings. The Dijkstra algorithm, a geometric search algorithm, is then applied to these cost surfaces to find an optimized path within the geographical space, aiming to minimize resources, time, investment, maintenance, and environmental and social impacts.
Keywords: geometric search algorithm, GIS, pipeline, route optimization, spatial multi-criteria analysis model
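To make the last step concrete, the sketch below runs Dijkstra's algorithm over a raster cost surface such as the one produced by the weighted overlay. The 8-connected neighbourhood, the diagonal distance factor and the cost accumulation rule are common GIS conventions assumed here, not details taken from the paper.

```python
import heapq
import numpy as np

def least_cost_path(cost, start, goal):
    """Dijkstra's algorithm over a raster cost surface (8-connected grid).

    `cost[r, c]` is the resistance of entering cell (r, c); start and goal are
    (row, col) tuples. Returns the least-cost path and its accumulated cost."""
    rows, cols = cost.shape
    dist = np.full((rows, cols), np.inf)
    prev = {}
    dist[start] = 0.0
    pq = [(0.0, start)]
    moves = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            break
        if d > dist[r, c]:
            continue                      # stale queue entry
        for dr, dc in moves:
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                step = cost[nr, nc] * (1.4142 if dr and dc else 1.0)  # diagonal factor
                nd = d + step
                if nd < dist[nr, nc]:
                    dist[nr, nc] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(pq, (nd, (nr, nc)))
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[goal]
```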
Procedia PDF Downloads 31
2733 Parametric Analysis and Optimal Design of Functionally Graded Plates Using Particle Swarm Optimization Algorithm and a Hybrid Meshless Method
Authors: Foad Nazari, Seyed Mahmood Hosseini, Mohammad Hossein Abolbashari, Mohammad Hassan Abolbashari
Abstract:
The present study is concerned with the optimal design of functionally graded plates using the particle swarm optimization (PSO) algorithm. In this study, the meshless local Petrov-Galerkin (MLPG) method is employed to obtain the functionally graded (FG) plate’s natural frequencies. The effects of two parameters, the thickness-to-height ratio and the volume fraction index, on the natural frequencies and total mass of the plate are studied using the MLPG results. Then the first natural frequency of the plate, for conditions where MLPG data are not available, is predicted by an artificial neural network (ANN) approach trained with the back-error propagation (BEP) technique. The ANN results show that the predicted data are in good agreement with the actual data. To maximize the first natural frequency and minimize the mass of the FG plate simultaneously, the weighted sum optimization approach and the PSO algorithm are used. The proposed optimization process of this study can provide the designers of FG plates with useful data.
Keywords: optimal design, natural frequency, FG plate, hybrid meshless method, MLPG method, ANN approach, particle swarm optimization
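The weighted-sum PSO step described above can be sketched as follows. The two objective functions stand in for the (negated) first natural frequency and the plate mass, and the weight, swarm size and PSO coefficients are illustrative assumptions rather than the study's settings.

```python
import numpy as np

def pso_weighted_sum(f1, f2, lb, ub, w=0.5, n_particles=30, n_iter=200, seed=0):
    """Weighted-sum PSO: minimize w*f1(x) + (1-w)*f2(x) over box bounds.
    All coefficients and the box bounds are placeholders for illustration."""
    rng = np.random.default_rng(seed)
    dim = len(lb)
    x = rng.uniform(lb, ub, (n_particles, dim))
    v = np.zeros_like(x)
    cost = lambda p: w * f1(p) + (1 - w) * f2(p)
    pbest, pbest_val = x.copy(), np.array([cost(p) for p in x])
    gbest = pbest[np.argmin(pbest_val)]
    for _ in range(n_iter):
        r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
        # inertia + cognitive pull to personal best + social pull to global best
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = np.clip(x + v, lb, ub)
        vals = np.array([cost(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)]
    return gbest, pbest_val.min()

# usage: pso_weighted_sum(lambda p: -p.sum(), lambda p: (p**2).sum(),
#                         np.zeros(3), np.ones(3), w=0.6)
```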
Procedia PDF Downloads 367
2732 Prediction of Bariatric Surgery Publications by Using Different Machine Learning Algorithms
Authors: Senol Dogan, Gunay Karli
Abstract:
Identification of relevant publications based on a Medline query is time-consuming and error-prone. An automated process has the potential to solve this problem without any manual work. To the best of our knowledge, our study is the first to investigate the ability of machine learning to identify relevant articles accurately. Five different machine learning algorithms were tested using 23 predictors based on several metadata fields attached to publications. We find that the boosted model is the best-performing algorithm, with an overall accuracy of 96%. In addition, the specificity and sensitivity of the algorithm are 97% and 93%, respectively. As a result of this work, we understood that the same procedure can be applied to cancer gene expression big data.
Keywords: prediction of publications, machine learning, algorithms, bariatric surgery, comparison of algorithms, boosted, tree, logistic regression, ANN model
Procedia PDF Downloads 209
2731 Implications of Optimisation Algorithm on the Forecast Performance of Artificial Neural Network for Streamflow Modelling
Authors: Martins Y. Otache, John J. Musa, Abayomi I. Kuti, Mustapha Mohammed
Abstract:
The performance of an artificial neural network (ANN) is contingent on a host of factors, for instance, the network optimisation scheme. In view of this, the study examined the general implications of the ANN training optimisation algorithm for its forecast performance. To this end, the Bayesian regularisation (Br), Levenberg-Marquardt (LM), and adaptive learning gradient descent with momentum (GDM) algorithms were employed under different ANN structural configurations: (1) single-hidden-layer and (2) double-hidden-layer feedforward back propagation networks. Results obtained revealed generally that the GDM optimisation algorithm, with its adaptive learning capability, used a relatively shorter time in both training and validation phases compared to the LM and Br algorithms, though learning may not be consummated; this held in all instances, including the prediction of extreme flow conditions 1 day and 5 days ahead using the ANN model. In specific statistical terms, average model performance efficiency using the coefficient of efficiency (CE) statistic was Br: 98%, 94%; LM: 98%, 95%; and GDM: 96%, 96%, respectively, for the training and validation phases. However, on the basis of relative error distribution statistics (MAE, MAPE, and MSRE), GDM performed better than the others overall. Based on the findings, it is imperative to state that the adoption of ANN for real-time forecasting should employ training algorithms that do not have the computational overhead of LM, which requires computation of the Hessian matrix, protracted time, and is sensitive to initial conditions; to this end, Br and other forms of gradient descent with momentum should be adopted, considering overall time expenditure and quality of the forecast as well as mitigation of network overfitting. On the whole, it is recommended that evaluation should consider the implications of (i) data quality and quantity and (ii) transfer functions for the overall network forecast performance.
Keywords: streamflow, neural network, optimisation, algorithm
Procedia PDF Downloads 152
2730 Visualization Tool for EEG Signal Segmentation
Authors: Sweeti, Anoop Kant Godiyal, Neha Singh, Sneh Anand, B. K. Panigrahi, Jayasree Santhosh
Abstract:
This work is about developing a tool for visualization and segmentation of electroencephalograph (EEG) signals based on frequency domain features. Changes in the frequency domain characteristics are correlated with changes in the mental state of the subject under study. The proposed algorithm provides a way to represent the change in mental states using the powers of the different frequency bands in the form of a segmented EEG signal. Many segmentation algorithms with applications in brain-computer interfaces, epilepsy, and cognition studies have been suggested in the literature and used for data classification, but the proposed method focuses mainly on better presentation of the signal, which makes it a useful tool for clinicians. The algorithm performs basic filtering using band pass and notch filters in the range of 0.1-45 Hz. Advanced filtering is then performed by principal component analysis and a wavelet transform based de-noising method. Frequency domain features are used for segmentation, considering the fact that the spectrum power of different frequency bands describes the mental state of the subject. Two sliding windows are further used for segmentation; one provides the time scale and the other assigns the segmentation rule. The segmented data are displayed second by second successively with different color codes. The segment length can be selected as per the needs of the objective. The proposed algorithm has been tested on the EEG data set obtained from the University of California, San Diego's online data repository. The proposed tool gives a better visualization of the signal in the form of segmented epochs of desired length representing the power spectrum variation in the data. The algorithm is designed in such a way that it takes the data points with respect to the sampling frequency for each time frame, so it can be extended for use in real-time visualization with the desired epoch length.
Keywords: de-noising, multi-channel data, PCA, power spectra, segmentation
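The frequency-band feature step can be illustrated with a short NumPy sketch that computes relative band powers per window, which could then drive the colour-coded display described above. The band edges are the conventional EEG bands, while the window length and plain FFT periodogram are illustrative choices rather than the paper's exact processing chain.

```python
import numpy as np

BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(eeg, fs, win_sec=1.0):
    """Relative EEG band powers per non-overlapping window.

    eeg : 1-D samples of one channel; fs : sampling rate in Hz.
    Returns one dict of relative band powers per window (one colour label each)."""
    win = int(win_sec * fs)
    out = []
    for start in range(0, len(eeg) - win + 1, win):
        seg = eeg[start:start + win] * np.hanning(win)      # taper the window
        psd = np.abs(np.fft.rfft(seg)) ** 2
        freqs = np.fft.rfftfreq(win, d=1.0 / fs)
        total = psd[(freqs >= 0.5) & (freqs <= 45)].sum()   # total power, 0.5-45 Hz
        powers = {name: psd[(freqs >= lo) & (freqs < hi)].sum() / total
                  for name, (lo, hi) in BANDS.items()}
        out.append(powers)
    return out
```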
Procedia PDF Downloads 397
2729 Equalization Algorithm for the Optical OFDM System Based on the Fractional Fourier Transform
Authors: A. Cherifi, B. Bouazza, A. O. Dahmane, B. Yagoubi
Abstract:
Transmission over optical channels introduces inter-symbol interference (ISI) as well as inter-channel (or inter-carrier) interference (ICI). To decrease the effects of ICI, this paper proposes an equalizer for the optical OFDM system based on the fractional Fourier transform (FrFT). In this FrFT-OFDM system, the traditional Fourier transform is replaced by the fractional Fourier transform to modulate and demodulate the data symbols. The proposed equalizer consists of sampling the received signal at different times within each symbol period. Theoretical analysis and numerical simulation are discussed.
Keywords: OFDM, fractional Fourier transform (FrFT), optical OFDM, equalization algorithm
Procedia PDF Downloads 430
2728 Genetic Algorithms for Parameter Identification of DC Motor ARMAX Model and Optimal Control
Authors: A. Mansouri, F. Krim
Abstract:
This paper presents two techniques for DC motor parameter identification. We propose a numerical method using the adaptive extensive recursive least squares (AERLS) algorithm for real-time parameter estimation. This algorithm, based on the minimization of a quadratic criterion, is realized in simulation for parameter identification of a DC motor autoregressive moving average with exogenous inputs (ARMAX) model. As an advanced technique, we use genetic algorithm (GA) identification with biased estimation for high dynamic performance speed regulation. DC motors are extensively used in variable speed drives and for robot and solar panel trajectory control. GA effectiveness is demonstrated through a comparison of the two approaches.
Keywords: ARMAX model, DC motor, AERLS, GA, optimization, parameter identification, PID speed regulation
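Since the abstract centres on recursive least squares estimation, the sketch below shows a plain RLS loop with a forgetting factor on a simplified ARX structure. It is not the authors' adaptive extensive RLS on a full ARMAX model (which would also feed back residuals as regressors), and the model orders, forgetting factor and the simulated system in the usage comment are illustrative.

```python
import numpy as np

def rls_identify(u, y, na=2, nb=2, lam=0.98):
    """Recursive least squares with forgetting factor for an ARX model
    y[k] = -a1*y[k-1] - ... - a_na*y[k-na] + b1*u[k-1] + ... + b_nb*u[k-nb].

    Simplified sketch: plain ARX/RLS rather than the paper's AERLS on a full
    ARMAX model. lam is the forgetting factor."""
    n = na + nb
    theta = np.zeros(n)                  # parameter estimates [a1..a_na, b1..b_nb]
    P = np.eye(n) * 1e4                  # large initial covariance
    for k in range(max(na, nb), len(y)):
        phi = np.concatenate([-y[k - na:k][::-1], u[k - nb:k][::-1]])  # regressor
        err = y[k] - phi @ theta         # one-step prediction error
        gain = P @ phi / (lam + phi @ P @ phi)
        theta = theta + gain * err
        P = (P - np.outer(gain, phi) @ P) / lam
    return theta

# usage with simulated data (illustrative 2nd-order motor-like ARX system):
# u = np.random.randn(2000); y = np.zeros(2000)
# for k in range(2, 2000):
#     y[k] = 1.5*y[k-1] - 0.7*y[k-2] + 0.5*u[k-1] + 0.2*u[k-2] + 0.01*np.random.randn()
# print(rls_identify(u, y))
```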
Procedia PDF Downloads 379