Search results for: small baseline subset algorithm
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9017

8987 A Speeded up Robust Scale-Invariant Feature Transform Currency Recognition Algorithm

Authors: Daliyah S. Aljutaili, Redna A. Almutlaq, Suha A. Alharbi, Dina M. Ibrahim

Abstract:

All currencies around the world look very different from each other. For instance, the size, color, and pattern of the paper are different. With the development of modern banking services, automatic methods for paper currency recognition have become important in many applications, such as vending machines. One of the phases of a currency recognition architecture is feature detection and description. Many algorithms are used for this phase, but they still have some disadvantages. This paper proposes a feature detection algorithm that merges the advantages of the current SIFT and SURF algorithms, which we call the Speeded up Robust Scale-Invariant Feature Transform (SR-SIFT) algorithm. The proposed SR-SIFT algorithm overcomes the problems of both the SIFT and SURF algorithms; it aims to speed up the SIFT feature detection algorithm while keeping it robust. Simulation results demonstrate that the proposed SR-SIFT algorithm decreases the average response time, especially for small and minimum numbers of best key points, and increases the distribution of the number of best key points on the surface of the currency. Furthermore, the proposed algorithm increases the accuracy of the true best point distribution inside the currency edge compared with the other two algorithms.
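The abstract does not give the SR-SIFT construction itself, but the feature detection and description phase it builds on can be illustrated with a minimal OpenCV sketch using plain SIFT. The image file names and the Lowe ratio threshold below are illustrative assumptions, not details from the paper.

```python
# Minimal sketch of the feature detection/description phase of a currency
# recognizer, using plain SIFT from OpenCV (not the proposed SR-SIFT).
# File names and the ratio threshold are illustrative assumptions.
import cv2

def match_banknote(query_path: str, reference_path: str, ratio: float = 0.75):
    query = cv2.imread(query_path, cv2.IMREAD_GRAYSCALE)
    reference = cv2.imread(reference_path, cv2.IMREAD_GRAYSCALE)

    sift = cv2.SIFT_create()                      # keypoint detector + descriptor
    kp_q, desc_q = sift.detectAndCompute(query, None)
    kp_r, desc_r = sift.detectAndCompute(reference, None)

    # Brute-force matching with Lowe's ratio test to keep distinctive matches
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    knn_matches = matcher.knnMatch(desc_q, desc_r, k=2)
    good = [pair[0] for pair in knn_matches
            if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance]
    return len(good), len(kp_q), len(kp_r)

if __name__ == "__main__":
    good, n_q, n_r = match_banknote("note_query.jpg", "note_reference.jpg")
    print(f"{good} good matches out of {n_q} / {n_r} keypoints")
```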

Keywords: currency recognition, feature detection and description, SIFT algorithm, SURF algorithm, speeded up and robust features

Procedia PDF Downloads 209
8986 Non-Local Simultaneous Sparse Unmixing for Hyperspectral Data

Authors: Fanqiang Kong, Chending Bian

Abstract:

Sparse unmixing is a promising semisupervised approach that assumes the observed pixels of a hyperspectral image can be expressed as linear combinations of only a few pure spectral signatures (endmembers) from an available spectral library. However, sparse unmixing remains challenging: finding the optimal subset of endmembers for the observed data from a large standard spectral library, without considering spatial information, is difficult. Under such circumstances, a sparse unmixing algorithm termed non-local simultaneous sparse unmixing (NLSSU) is presented. In NLSSU, a non-local simultaneous sparse representation method for endmember selection is used to find the optimal subset of endmembers for each set of similar image patches in the hyperspectral image. Then, the non-local means method, as a regularizer for abundance estimation, is used to exploit the non-local self-similarity of the abundance image. Experimental results on both simulated and real data demonstrate that NLSSU outperforms the other algorithms, with better spectral unmixing accuracy.
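As a rough illustration of the sparse-regression core only (not the NLSSU algorithm itself), a single pixel can be unmixed against a spectral library with a non-negative LASSO; the library size, band count, and regularization weight below are arbitrary assumptions.

```python
# Toy sketch of library-based sparse unmixing for one pixel: the pixel spectrum
# is approximated by a sparse, non-negative combination of library endmembers.
# This is the plain sparse-regression step, not NLSSU itself.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n_bands, n_library = 50, 120                  # arbitrary sizes for the demo
A = rng.random((n_bands, n_library))          # spectral library (columns = endmembers)

# Synthesize a pixel from 3 endmembers plus noise
true_abund = np.zeros(n_library)
true_abund[[5, 40, 77]] = [0.5, 0.3, 0.2]
y = A @ true_abund + 0.01 * rng.standard_normal(n_bands)

# Sparse, non-negative abundance estimate (alpha controls sparsity)
model = Lasso(alpha=1e-3, positive=True, fit_intercept=False, max_iter=10000)
model.fit(A, y)
abund = model.coef_

print("selected endmembers:", np.flatnonzero(abund > 1e-3))
```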

Keywords: hyperspectral unmixing, simultaneous sparse representation, sparse regression, non-local means

Procedia PDF Downloads 211
8985 Generation of Photo-Mosaic Images through Block Matching and Color Adjustment

Authors: Hae-Yeoun Lee

Abstract:

Mosaic refers to a technique that composes an image by assembling many small pieces of material in various colours. This paper presents an automatic algorithm that generates photomosaic images from a photo database. The algorithm is composed of four steps: partition and feature extraction, block matching, redundancy removal, and colour adjustment. The input image is partitioned into small blocks from which features are extracted. Each block is matched against the database to find a similar photo, using the Euclidean distance between blocks as the similarity measure. The intensity of the block is adjusted to enhance the similarity of the image by replacing its light and dark values with those of the matched photo. Further, the quality of the image is improved by minimizing the redundancy of tiles in adjacent blocks. Experimental results show that the proposed algorithm performs well in both quantitative and qualitative analyses.
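A minimal sketch of the block-matching and intensity-adjustment steps is given below; the tile size, the synthetic image, and the mean-shift style of colour adjustment are illustrative assumptions rather than the paper's exact procedure.

```python
# Sketch of photomosaic block matching: each block of the input image is replaced
# by the database tile with the smallest Euclidean distance, shifted to match the
# block's mean intensity. Tile size and the synthetic data are assumptions.
import numpy as np

def build_mosaic(image: np.ndarray, tiles: np.ndarray, block: int = 16) -> np.ndarray:
    h, w = (image.shape[0] // block) * block, (image.shape[1] // block) * block
    out = np.empty((h, w), dtype=np.float64)
    for i in range(0, h, block):
        for j in range(0, w, block):
            patch = image[i:i + block, j:j + block].astype(np.float64)
            # Euclidean distance between the block and every candidate tile
            dists = np.linalg.norm((tiles - patch).reshape(len(tiles), -1), axis=1)
            best = tiles[np.argmin(dists)].astype(np.float64)
            # simple intensity adjustment: shift the tile to the block's mean value
            out[i:i + block, j:j + block] = best + (patch.mean() - best.mean())
    return np.clip(out, 0, 255).astype(np.uint8)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    img = rng.integers(0, 256, (128, 128)).astype(np.uint8)        # stand-in input image
    tile_db = rng.integers(0, 256, (200, 16, 16)).astype(np.uint8) # stand-in tile database
    print(build_mosaic(img, tile_db).shape)
```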

Keywords: photomosaic, Euclidean distance, block matching, intensity adjustment

Procedia PDF Downloads 250
8984 Maximum Likelihood Estimation Methods on a Two-Parameter Rayleigh Distribution under Progressive Type-II Censoring

Authors: Daniel Fundi Murithi

Abstract:

Data from economic, social, clinical, and industrial studies are often in some way incomplete or inaccurate due to censoring. Such data may have adverse effects if used directly in an estimation problem. We propose the use of Maximum Likelihood Estimation (MLE) under a progressive type-II censoring scheme to remedy this problem. In particular, maximum likelihood estimates (MLEs) for the location (µ) and scale (λ) parameters of the two-parameter Rayleigh distribution are obtained under a progressive type-II censoring scheme using the Expectation-Maximization (EM) and the Newton-Raphson (NR) algorithms. These algorithms are compared because both iteratively produce satisfactory results for the estimation problem. The progressive type-II censoring scheme is used because it allows the removal of test units before the termination of the experiment. Approximate asymptotic variances and confidence intervals for the location and scale parameters are derived and constructed. The efficiency of the EM and NR algorithms is compared in terms of root mean squared error (RMSE), bias, and coverage rate. The simulation study showed that in most simulation cases, the estimates obtained using the Expectation-Maximization algorithm had smaller biases, smaller variances, narrower confidence intervals, and smaller RMSE than those generated via the Newton-Raphson (NR) algorithm. Further, the analysis of a real-life data set (data from simple experimental trials) showed that the Expectation-Maximization (EM) algorithm performs better than the Newton-Raphson (NR) algorithm in all cases under the progressive type-II censoring scheme.
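The censored likelihood and the EM/NR derivations are not given in the abstract, so the sketch below only illustrates numerical MLE for the two-parameter Rayleigh distribution on a complete (uncensored) sample, with a derivative-free optimizer standing in for the EM and Newton-Raphson schemes. The assumed density f(x) = ((x - µ)/λ²) exp(-(x - µ)²/(2λ²)) for x > µ and the simulation settings are assumptions.

```python
# Hedged sketch: numerical MLE for a two-parameter (location-scale) Rayleigh
# distribution on a complete sample. The progressive type-II censored likelihood
# and the EM/NR algorithms from the paper are not reproduced here.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import rayleigh

rng = np.random.default_rng(42)
mu_true, lam_true = 2.0, 1.5
x = rayleigh.rvs(loc=mu_true, scale=lam_true, size=500, random_state=rng)

def neg_log_lik(theta):
    mu, lam = theta
    if lam <= 0 or np.any(x <= mu):
        return np.inf                      # outside the parameter space
    z = x - mu
    return -np.sum(np.log(z) - 2 * np.log(lam) - z**2 / (2 * lam**2))

res = minimize(neg_log_lik, x0=[x.min() - 0.5, x.std()], method="Nelder-Mead")
print("MLE (mu, lambda):", res.x)
```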

Keywords: expectation-maximization algorithm, maximum likelihood estimation, Newton-Raphson method, two-parameter Rayleigh distribution, progressive type-II censoring

Procedia PDF Downloads 129
8983 Co-Evolutionary Fruit Fly Optimization Algorithm and Firefly Algorithm for Solving Unconstrained Optimization Problems

Authors: R. M. Rizk-Allah

Abstract:

This paper presents a co-evolutionary fruit fly optimization algorithm based on the firefly algorithm (CFOA-FA) for solving unconstrained optimization problems. The proposed algorithm integrates the merits of the fruit fly optimization algorithm (FOA), the firefly algorithm (FA), and an elite strategy to refine the performance of the classical FOA. Moreover, a co-evolutionary mechanism is realized by applying FA procedures to ensure the diversity of the swarm. Finally, the proposed CFOA-FA algorithm is tested on several benchmark problems from the literature, and the numerical results demonstrate its superiority in finding the global optimal solution.

Keywords: firefly algorithm, fruit fly optimization algorithm, unconstrained optimization problems

Procedia PDF Downloads 505
8982 Sparse Principal Component Analysis: A Least Squares Approximation Approach

Authors: Giovanni Merola

Abstract:

Sparse Principal Components Analysis aims to find principal components with few non-zero loadings. We derive such sparse solutions by adding a genuine sparsity requirement to the original Principal Components Analysis (PCA) objective function. This approach differs from others because it preserves PCA's original optimality: uncorrelatedness of the components and least squares approximation of the data. To identify the best subset of non-zero loadings, we propose a branch-and-bound search and an iterative elimination algorithm. The latter algorithm finds sparse solutions with large loadings and can be run without specifying in advance the cardinality of the loadings or the number of components to compute. We give thorough comparisons with existing sparse PCA methods and several examples on real datasets.
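A rough sketch of a backward-elimination loop for a single sparse component is shown below: the variable with the smallest absolute loading is repeatedly removed and the leading direction recomputed on the remaining variables. This illustrates only the elimination idea, not the authors' least-squares SPCA criterion, and the data and cardinality are arbitrary.

```python
# Backward-elimination sketch for one sparse component: repeatedly zero out the
# variable with the smallest absolute loading and recompute the leading principal
# direction on the remaining variables. Not the authors' LS-SPCA criterion.
import numpy as np

def sparse_first_component(X: np.ndarray, n_nonzero: int) -> np.ndarray:
    X = X - X.mean(axis=0)
    active = list(range(X.shape[1]))
    loadings = np.zeros(X.shape[1])
    while True:
        # leading eigenvector of the covariance restricted to the active variables
        cov = np.cov(X[:, active], rowvar=False)
        vals, vecs = np.linalg.eigh(cov)
        v = vecs[:, -1]
        if len(active) <= n_nonzero:
            loadings[active] = v
            return loadings
        drop = active[int(np.argmin(np.abs(v)))]   # eliminate the weakest loading
        active.remove(drop)

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 10))
print(np.round(sparse_first_component(X, n_nonzero=4), 3))
```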

Keywords: SPCA, uncorrelated components, branch-and-bound, backward elimination

Procedia PDF Downloads 342
8981 Investigation of Glacier Activity Using Optical and Radar Data in Zardkooh

Authors: Mehrnoosh Ghadimi, Golnoush Ghadimi

Abstract:

Precise monitoring of glacier velocity is critical in determining glacier-related hazards. Zardkooh Mountain, in the Zagros mountainous region of Iran, was studied in terms of its glacial activity rate. In this study, we assessed the ability of optical and radar imagery to derive glacier-surface velocities in mountainous terrain. We processed Landsat 8 imagery for optical data and Sentinel-1A imagery for radar data. We used methods that are commonly applied to measure glacier surface movements, such as cross-correlation of optical and radar satellite images, SAR tracking techniques, and multiple aperture InSAR (MAI). We also assessed the time series of glacier surface displacement using our modified method, the Enhanced Small Baseline Subset (ESBAS). The ESBAS has been implemented in the StaMPS software, with several aspects of the processing chain modified, including filtering prior to phase unwrapping, topographic correction within three-dimensional phase unwrapping, reduction of atmospheric noise, and removal of the ramp caused by ionospheric turbulence and/or orbit errors. Our findings indicate an average surface velocity rate of 32 mm/yr in the Zardkooh mountainous areas.

Keywords: active rock glaciers, Landsat 8, Sentinel-1A, Zagros mountainous region

Procedia PDF Downloads 53
8980 A Novel Heuristic for Analysis of Large Datasets by Selecting Wrapper-Based Features

Authors: Bushra Zafar, Usman Qamar

Abstract:

Large sample sizes and high dimensionality undermine the effectiveness of conventional data mining methodologies. Data mining techniques are important tools for extracting useful knowledge from a variety of databases; classification, a form of supervised learning, builds models that describe important data classes, with the structure of the classifier based on the class attribute. Classification efficiency and accuracy are often influenced to a great extent by noisy and undesirable features in real application data sets. The inherent nature of a data set can mask its quality and leave few practical approaches for analysis. To our knowledge, we present for the first time an approach for investigating the structure and quality of datasets through a targeted analysis that localizes noisy and irrelevant features. Feature selection is a key pre-processing step in machine learning: it selects a small subset from the full set of features, reducing the search space according to a given evaluation criterion. The primary objective of this study is to trim down the scope of the given data sample by searching for a small set of important features that may yield good classification performance. For this purpose, a heuristic for wrapper-based feature selection using a genetic algorithm is employed, with an external classifier used for discriminative feature selection. Features are selected based on their number of occurrences in the chosen chromosomes. Sample datasets are used to demonstrate the proposed idea. The proposed method achieves an average accuracy of about 95% across different datasets, and the experimental results illustrate that it increases the prediction accuracy for different diseases.
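A minimal wrapper-style sketch of this idea follows: chromosomes are feature masks, fitness is the cross-validated accuracy of a KNN classifier on the selected features, and the final subset keeps features by how often they occur in the surviving chromosomes. The dataset, population size, number of generations, and mutation rate are arbitrary assumptions, not the paper's settings.

```python
# Wrapper-based feature selection sketch: GA over binary feature masks with a
# KNN classifier as the fitness evaluator; final features chosen by occurrence.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X, y = load_breast_cancer(return_X_y=True)
n_feat, pop_size, generations = X.shape[1], 20, 10

def fitness(mask):
    if mask.sum() == 0:
        return 0.0
    clf = KNeighborsClassifier(n_neighbors=5)
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()

pop = rng.integers(0, 2, (pop_size, n_feat))
for _ in range(generations):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-pop_size // 2:]]          # selection
    children = []
    for _ in range(pop_size - len(parents)):
        a, b = parents[rng.integers(len(parents), size=2)]
        cut = rng.integers(1, n_feat)                            # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        flip = rng.random(n_feat) < 0.05                         # mutation
        child[flip] ^= 1
        children.append(child)
    pop = np.vstack([parents, children])

occurrence = pop.sum(axis=0)                                     # occurrence count per feature
selected = np.argsort(occurrence)[-10:]
print("features selected by occurrence:", sorted(selected.tolist()))
```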

Keywords: data mining, genetic algorithm, KNN algorithm, wrapper-based feature selection

Procedia PDF Downloads 293
8979 A Bayesian Model with Improved Prior in Extreme Value Problems

Authors: Eva L. Sanjuán, Jacinto Martín, M. Isabel Parra, Mario M. Pizarro

Abstract:

In Extreme Value Theory, inference for the parameters of the distribution is made using only a small part of the observed values. When block maxima are taken, many data are discarded. We developed a new Bayesian inference model that exploits all the information provided by the data, introducing informative priors and using the relations between the baseline and limit parameters. First, we studied the accuracy of the new model for three baseline distributions that lead to a Gumbel extreme distribution: Exponential, Normal, and Gumbel. Second, we considered mixtures of Normal variables to simulate practical situations in which the data do not fit pure distributions because of perturbations (noise).

Keywords: Bayesian inference, extreme value theory, Gumbel distribution, highly informative prior

Procedia PDF Downloads 166
8978 Genetic Algorithm Optimization of Microcantilever Based Resonator

Authors: Manjula Sutagundar, B. G. Sheeparamatti, D. S. Jangamshetti

Abstract:

Micro Electro Mechanical Systems (MEMS) resonators have shown the potential to replace quartz crystal technology for sensing and high-frequency signal processing applications because of inherent advantages such as small size, high quality factor, low cost, and compatibility with integrated circuit chips. This paper presents the optimization, modelling, and simulation of a microcantilever resonator. The objective of the work is to optimize the dimensions of a microcantilever resonator for a specified resonant frequency range and quality factor. Optimization is carried out using a genetic algorithm implemented in MATLAB. The microcantilever resonator is modelled in CoventorWare using the optimized dimensions obtained from the genetic algorithm, and the modelled cantilever is analysed for its resonant frequency.
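A hedged sketch of the optimization idea follows: a simple GA searches cantilever dimensions so that the first-mode resonant frequency of a silicon beam, estimated with the standard Euler-Bernoulli formula f₁ = (1.875²/(2πL²))·sqrt(EI/(ρA)), hits a target. The material constants, dimension bounds, target frequency, and GA settings are illustrative assumptions, not the paper's setup (which uses MATLAB and CoventorWare).

```python
# GA sketch for cantilever dimensions targeting a first-mode resonant frequency.
# Analytical model and all numeric settings are assumptions for illustration.
import numpy as np

E, RHO = 169e9, 2330.0                  # silicon Young's modulus (Pa), density (kg/m^3)
TARGET = 50e3                           # assumed target resonant frequency (Hz)
BOUNDS = np.array([[100e-6, 500e-6],    # length (m)
                   [10e-6, 50e-6],      # width  (m)
                   [1e-6, 5e-6]])       # thickness (m)

def resonant_frequency(dims):
    L, w, t = dims
    I, A = w * t**3 / 12.0, w * t       # second moment of area, cross-section area
    return (1.875**2 / (2 * np.pi * L**2)) * np.sqrt(E * I / (RHO * A))

def fitness(dims):
    return -abs(resonant_frequency(dims) - TARGET)   # closer to target is better

rng = np.random.default_rng(0)
pop = rng.uniform(BOUNDS[:, 0], BOUNDS[:, 1], size=(40, 3))
sigma = 0.05 * (BOUNDS[:, 1] - BOUNDS[:, 0])          # per-dimension mutation scale
for _ in range(200):
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)[-20:]]
    children = parents[rng.integers(20, size=20)] + rng.normal(0.0, sigma, size=(20, 3))
    pop = np.clip(np.vstack([parents, children]), BOUNDS[:, 0], BOUNDS[:, 1])

best = pop[np.argmax([fitness(p) for p in pop])]
print("dimensions (m):", best, " f1 (Hz):", resonant_frequency(best))
```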

Keywords: MEMS resonator, genetic algorithm, modelling and simulation, optimization

Procedia PDF Downloads 522
8977 A Hybrid Multi-Objective Firefly-Sine Cosine Algorithm for Multi-Objective Optimization Problem

Authors: Gaohuizi Guo, Ning Zhang

Abstract:

The firefly algorithm (FA) and the sine cosine algorithm (SCA) are two very popular and advanced metaheuristic algorithms. However, when applied to multi-objective optimization problems, these algorithms have shortcomings such as premature convergence and limited exploration capability, respectively. Combining the advantages of FA and SCA while avoiding their deficiencies may improve the accuracy and efficiency of the algorithm. This paper proposes a hybridization of the FA and SCA algorithms, named the multi-objective firefly-sine cosine algorithm (MFA-SCA), to develop a more efficient metaheuristic algorithm than FA and SCA.

Keywords: firefly algorithm, hybrid algorithm, multi-objective optimization, sine cosine algorithm

Procedia PDF Downloads 135
8976 Reducing Total Harmonic Content of 9-Level Inverter by Use of Cuckoo Algorithm

Authors: Mahmoud Enayati, Sirous Mohammadi

Abstract:

This paper presents a novel procedure for finding the firing angles of a multilevel inverter's supply voltage and, consequently, reducing the total harmonic distortion (THD). To eliminate more harmonics in a multilevel inverter, the number of levels can be increased or a pulse width modulation waveform, in which more than one switching occurs per level, can be used. Both cases complicate the resulting non-algebraic equations, which cannot be solved by conventional methods for the numerical solution of nonlinear equations, such as the Newton-Raphson method. In this paper, the Cuckoo algorithm is used to compute the optimal firing angles of the pulse width modulation voltage waveform in the multilevel inverter. These angles should be calculated so that the desired fundamental-frequency voltage amplitude is generated while the total harmonic distortion of the output voltage remains small. The simulation and theoretical results for the 9-level inverter demonstrate the applicability of the proposed algorithm for identifying suitable firing angles that suppress the low-order harmonics and generate a nearly sinusoidal waveform with very small total harmonic distortion.
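For a 9-level cascaded staircase waveform with four firing angles, the standard Fourier model gives the harmonic amplitudes V_k = (4V_dc/(kπ)) Σᵢ cos(kθᵢ) for odd k, and the firing angles are chosen to hit the desired fundamental while driving selected low-order harmonics toward zero. The sketch below sets up that objective; SciPy's differential evolution stands in for the Cuckoo search used in the paper, and the modulation index and targeted harmonics are assumptions.

```python
# Selective-harmonic-elimination objective for a 9-level inverter (4 angles).
# Differential evolution is used here only as a stand-in for the Cuckoo search.
import numpy as np
from scipy.optimize import differential_evolution

M = 0.8                      # desired per-unit fundamental (modulation index), assumed
HARMONICS = (5, 7, 11)       # low-order harmonics to suppress, assumed

def objective(theta):
    theta = np.sort(theta)                       # enforce theta1 < ... < theta4
    fundamental = np.sum(np.cos(theta)) / 4.0    # normalized fundamental amplitude
    err = (fundamental - M) ** 2
    for k in HARMONICS:
        err += (np.sum(np.cos(k * theta)) / 4.0) ** 2
    return err

result = differential_evolution(objective, bounds=[(0, np.pi / 2)] * 4, seed=1)
angles = np.degrees(np.sort(result.x))
print("firing angles (deg):", np.round(angles, 2), " residual:", result.fun)
```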

Keywords: evolutionary algorithms, multilevel inverters, total harmonic content, Cuckoo Algorithm

Procedia PDF Downloads 505
8975 Gene Prediction in DNA Sequences Using an Ensemble Algorithm Based on Goertzel Algorithm and Anti-Notch Filter

Authors: Hamidreza Saberkari, Mousa Shamsi, Hossein Ahmadi, Saeed Vaali, MohammadHossein Sedaaghi

Abstract:

In recent years, using signal processing tools for accurate identification of protein coding regions has become a challenge in bioinformatics. Most genomic signal processing methods are based on the period-3 characteristic of the nucleotides in DNA strands; consequently, spectral analysis is applied to the numerical sequences of DNA to find the location of periodic components. In this paper, a novel ensemble algorithm for gene selection in DNA sequences is presented, based on the combination of the Goertzel algorithm and an anti-notch filter (ANF). The proposed algorithm has several advantages compared to other conventional methods. First, it identifies the protein coding regions more accurately because the Goertzel algorithm is tuned to the desired frequency. Second, a faster detection time is achieved. The proposed algorithm is applied to several genes, including genes available in the BG570 and HMR195 databases, and the results are compared to other methods based on nucleotide-level evaluation criteria. Implementation results show the excellent performance of the proposed algorithm in identifying protein coding regions, specifically in the identification of small-scale gene areas.
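The period-3 measure that the Goertzel stage computes can be sketched as follows: binary indicator sequences of the DNA string are evaluated by the Goertzel recursion at ω = 2π/3 over a sliding window. The window length, the random test sequence, and the simple A/C/G/T indicators are illustrative assumptions, and the anti-notch filter stage is not shown.

```python
# Period-3 spectral profile of a DNA string via the Goertzel algorithm applied to
# the four binary indicator sequences over a sliding window (sketch only).
import numpy as np

def goertzel_power(x: np.ndarray, omega: float) -> float:
    coeff = 2.0 * np.cos(omega)
    s_prev, s_prev2 = 0.0, 0.0
    for sample in x:
        s = sample + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev**2 + s_prev2**2 - coeff * s_prev * s_prev2

def period3_profile(seq: str, window: int = 351) -> np.ndarray:
    omega = 2.0 * np.pi / 3.0                 # period-3 frequency
    indicators = np.array([[float(b == n) for b in seq] for n in "ACGT"])
    scores = []
    for start in range(0, len(seq) - window + 1):
        # total period-3 power across the four binary indicator sequences
        scores.append(sum(goertzel_power(ind[start:start + window], omega)
                          for ind in indicators))
    return np.array(scores)

rng = np.random.default_rng(0)
dna = "".join(rng.choice(list("ACGT"), size=1200))
print(period3_profile(dna)[:5])
```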

Keywords: protein coding regions, period-3, anti-notch filter, Goertzel algorithm

Procedia PDF Downloads 365
8974 Approximating Fixed Points by a Two-Step Iterative Algorithm

Authors: Safeer Hussain Khan

Abstract:

In this paper, we introduce a two-step iterative algorithm to prove a strong convergence result for approximating common fixed points of three contractive-like operators. Our algorithm generalizes an existing algorithm and also contains two famous iterative algorithms as special cases: the Mann iterative algorithm and the Ishikawa iterative algorithm. Thus our result generalizes the corresponding results proved for the above three iterative algorithms to a class of more general operators. At the end, we remark that nothing prevents us from extending our result to the case of an iterative algorithm with error terms.
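The classical two-step (Ishikawa-type) scheme that such algorithms contain as a special case is y_n = (1-β)x_n + βT(x_n), x_{n+1} = (1-α)x_n + αT(y_n). The sketch below runs this scheme on a simple contractive map; the map T(x) = cos(x), the step sizes, and the tolerance are illustrative assumptions, not the authors' specific algorithm.

```python
# Two-step (Ishikawa-type) fixed-point iteration on a simple contractive map.
import math

def two_step_fixed_point(T, x0, a=0.5, b=0.5, tol=1e-10, max_iter=10_000):
    x = x0
    for n in range(max_iter):
        y = (1 - b) * x + b * T(x)          # inner (Ishikawa) step
        x_next = (1 - a) * x + a * T(y)     # outer (Mann-like) step
        if abs(x_next - x) < tol:
            return x_next, n
        x = x_next
    return x, max_iter

fixed_point, iters = two_step_fixed_point(math.cos, x0=1.0)
print(f"x* = {fixed_point:.10f} after {iters} iterations")  # Dottie number, about 0.7390851332
```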

Keywords: contractive-like operator, iterative algorithm, fixed point, strong convergence

Procedia PDF Downloads 515
8973 Investigation of Adaptable Winglets for Improved UAV Control and Performance

Authors: E. Kaygan, A. Gatto

Abstract:

An investigation of adaptable winglets for morphing aircraft control and performance is described in this paper. The concepts investigated consist of various winglet configurations fundamentally centred on a baseline swept wing. The impetus for the work was to identify and optimize winglets to enhance the controllability and aerodynamic efficiency of a small unmanned aerial vehicle. All computations were performed with Athena Vortex Lattice modelling, with varying degrees of twist, sweep, and dihedral angle considered. The results from this work indicate that if adaptable winglets were employed on small-scale UAVs, improvements in both aircraft control and performance could be achieved.

Keywords: aircraft, rolling, wing, winglet

Procedia PDF Downloads 437
8972 Minimum Vertices Dominating Set Algorithm for Secret Sharing Scheme

Authors: N. M. G. Al-Saidi, K. A. Kadhim, N. A. Rajab

Abstract:

Over the past decades, computer networks and data communication systems have developed rapidly, so the need to protect transmitted data has become a challenging issue and data security a serious problem. A secret sharing scheme is a method by which a master key is distributed among a finite set of participants in such a way that only certain authorized subsets of participants can reconstruct the original master key. To create a secret sharing scheme, many mathematical structures have been used; the most widely used structure is the one based on graph theory (graph access structure). Subsequently, many researchers have tried to find efficient schemes based on graph access structures. In this paper, we propose a novel efficient construction of a perfect secret sharing scheme for a uniform access structure. The dominating set of vertices in a regular graph is used for this construction in the following way: each vertex represents a participant, and each minimum independent dominating subset represents a minimal qualified subset. Some relations between the dominating set, graph order, and regularity are established and can be used to demonstrate the possibility of using dominating sets to construct a secret sharing scheme. The information rate, which is used as a measure of the efficiency of such systems, is calculated to show that the proposed method yields improved values.
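The graph-theoretic object underlying the construction can be illustrated with NetworkX: any maximal independent set of a graph is also a dominating set, so a greedy maximal independent set gives one independent dominating set that can play the role of a minimal qualified subset. The greedy choice is not guaranteed to be minimum, and the 3-regular example graph and seed below are assumptions.

```python
# Independent dominating set of a regular graph as a candidate qualified subset.
import networkx as nx

G = nx.random_regular_graph(d=3, n=12, seed=7)      # example regular graph
dom = set(nx.maximal_independent_set(G, seed=7))    # independent dominating set

# verify domination: every vertex is in the set or adjacent to a member
assert all(v in dom or any(u in dom for u in G[v]) for v in G)
print("participants forming a qualified subset:", sorted(dom))
```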

Keywords: secret sharing scheme, dominating set, information rate, access structure, rank

Procedia PDF Downloads 364
8971 The Importance of Including All Data in a Linear Model for the Analysis of RNAseq Data

Authors: Roxane A. Legaie, Kjiana E. Schwab, Caroline E. Gargett

Abstract:

Studies looking at changes in gene expression from RNAseq data often make use of linear models. It is also common practice to focus on a subset of data for a comparison of interest, leaving aside the samples not involved in that particular comparison. This work shows the importance of including all observations in the modeling process to better estimate variance parameters, even when the included samples are not directly used in the comparison under test. The human endometrium is a dynamic tissue, which undergoes cycles of growth and regression with each menstrual cycle. The mesenchymal stem cells (MSCs) present in the endometrium are likely responsible for this remarkable regenerative capacity. However, recent studies suggest that MSCs also play a role in the pathogenesis of endometriosis, one of the most common medical conditions affecting the lower abdomen in women, in which endometrial tissue grows outside the womb. In this study we compared gene expression profiles between MSCs and non-stem cell counterparts ('non-MSC') obtained from women with ('E') or without ('noE') endometriosis using RNAseq. Raw read counts were used for differential expression analysis in a linear model with the limma-voom R package, including either all samples in the study or only the samples belonging to the subset of interest (e.g., for the comparison 'E vs noE in MSC cells', including only MSC samples from E and noE patients but not the non-MSC ones). Using the full dataset we identified about 100 differentially expressed (DE) genes between E and noE samples in MSC samples (adj. p-val < 0.05 and |logFC| > 1), while only 9 DE genes were identified when using only the subset of data (MSC samples only). Important genes known to be involved in endometriosis, such as KLF9 and RND3, were missed in the latter case. When looking at the MSC vs non-MSC comparison, the linear model including all samples identified 260 genes for noE samples (including the stem cell marker SUSD2), while the subset analysis did not identify any DE genes. When looking at E samples, 12 genes were identified with the first approach and only 1 with the subset approach. Although the stem cell marker RGS5 was found in both cases, the subset test missed important genes involved in stem cell differentiation, such as NOTCH3, and other potentially related genes to be used for further investigation and pathway analysis.
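The statistical point, that fitting the model on all groups stabilizes the residual-variance estimate used for the comparison, can be illustrated with a toy simulation: pooling residuals over four groups gives more degrees of freedom than pooling over only the two compared groups. The group sizes, variance, and number of replications below are assumptions, and limma-voom's empirical Bayes moderation is not reproduced.

```python
# Toy simulation: variability of the residual-variance estimate when pooling over
# all groups (full model) versus only the two compared groups (subset model).
import numpy as np

rng = np.random.default_rng(0)
n_per_group, sigma, reps = 3, 1.0, 5000
groups = 4                                   # e.g. MSC-E, MSC-noE, nonMSC-E, nonMSC-noE

subset_vars, full_vars = [], []
for _ in range(reps):
    data = rng.normal(0, sigma, size=(groups, n_per_group))
    resid = data - data.mean(axis=1, keepdims=True)      # remove each group mean
    subset_vars.append((resid[:2] ** 2).sum() / (2 * (n_per_group - 1)))
    full_vars.append((resid ** 2).sum() / (groups * (n_per_group - 1)))

print("sd of subset-based variance estimate:", np.std(subset_vars).round(3))
print("sd of full-model variance estimate:  ", np.std(full_vars).round(3))
```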

Keywords: differential expression, endometriosis, linear model, RNAseq

Procedia PDF Downloads 404
8970 Biologically Inspired Small Infrared Target Detection Using Local Contrast Mechanisms

Authors: Tian Xia, Yuan Yan Tang

Abstract:

In order to obtain higher small target detection accuracy, this paper presents an effective algorithm inspired by the local contrast mechanism. The proposed method can enhance the target signal and suppress background clutter simultaneously. In the first stage, an enhanced image is obtained using the proposed Weighted Laplacian of Gaussian. In the second stage, an adaptive threshold is adopted to segment the target. Experimental results on two challenging image sequences show that the proposed method can detect bright and dark targets simultaneously and is not sensitive to the sea-sky line of the infrared image, so it is well suited to small infrared target detection.
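The enhance-then-threshold pipeline can be sketched with a plain (unweighted) Laplacian-of-Gaussian followed by a simple adaptive global threshold; the paper's weighting scheme and threshold constant are not given in the abstract, so k = 4 and the synthetic frame below are assumptions.

```python
# Enhance small bright blobs with a negated Laplacian-of-Gaussian, then segment
# with an adaptive threshold (mean + k*std of the enhanced image).
import numpy as np
from scipy.ndimage import gaussian_laplace

rng = np.random.default_rng(0)
frame = rng.normal(20, 3, size=(128, 128))         # synthetic cluttered background
frame[60:63, 80:83] += 25                          # small bright target

enhanced = -gaussian_laplace(frame, sigma=1.5)     # bright blobs become positive peaks
threshold = enhanced.mean() + 4.0 * enhanced.std() # adaptive global threshold
mask = enhanced > threshold

ys, xs = np.nonzero(mask)
print("detected target pixels near:", list(zip(ys.tolist(), xs.tolist()))[:5])
```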

Keywords: small target detection, local contrast, human vision system, Laplacian of Gaussian

Procedia PDF Downloads 436
8969 Long-Baseline Single-epoch RTK Positioning Method Based on BDS-3 and Galileo Penta-Frequency Ionosphere-Reduced Combinations

Authors: Liwei Liu, Shuguo Pan, Wang Gao

Abstract:

In order to take full advantage of the BDS-3 penta-frequency signals in long-baseline RTK positioning, a long-baseline RTK positioning method based on BDS-3 penta-frequency ionospheric-reduced (IR) combinations is proposed. First, the low-noise and weak-ionospheric-delay characteristics of the multi-frequency combined observations of BDS-3 are analyzed. Second, multi-frequency extra-wide-lane (EWL) and wide-lane (WL) combinations with long wavelengths are constructed. Third, the fixed IR EWL combinations are used to constrain the IR WL ambiguities, which in turn constrain the narrow-lane (NL) ambiguities before multi-epoch filtering starts. There is no need to consider the influence of ionospheric parameters in the third step. Compared with the estimated ionospheric model, the proposed method reduces the number of parameters by half, so it is suitable for multi-frequency and multi-system real-time RTK. The results using real data show that the stepwise fixing model of the IR EWL/WL/NL combinations can realize long-baseline instantaneous centimeter-level positioning.
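The long wavelengths that make EWL/WL ambiguities easy to fix come directly from differencing carrier frequencies, λ = c/(fᵢ - fⱼ). The sketch below computes a few such wavelengths from the published BDS-3 carrier frequencies; the specific combinations used in the paper's stepwise scheme are not detailed in the abstract, so the pairs shown are only examples.

```python
# Wide-lane / extra-wide-lane wavelengths from BDS-3 carrier frequency differences.
C = 299_792_458.0                      # speed of light (m/s)
FREQ = {                               # BDS-3 carrier frequencies (Hz)
    "B1C": 1575.42e6,
    "B1I": 1561.098e6,
    "B3I": 1268.52e6,
    "B2b": 1207.14e6,
    "B2a": 1176.45e6,
}

def combo_wavelength(f_high: str, f_low: str) -> float:
    return C / (FREQ[f_high] - FREQ[f_low])

for pair in [("B1C", "B1I"), ("B2b", "B2a"), ("B1C", "B3I")]:
    print(f"{pair[0]}-{pair[1]} wavelength: {combo_wavelength(*pair):.2f} m")
```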

Keywords: penta-frequency, ionospheric-reduced (IR), RTK positioning, long-baseline

Procedia PDF Downloads 129
8968 An Approach to Maximize the Influence Spread in the Social Networks

Authors: Gaye Ibrahima, Mendy Gervais, Seck Diaraf, Ouya Samuel

Abstract:

In this paper, we consider influence maximization in social networks. Here we give importance to the initial diffusers, called the seeds. The goal is to efficiently find a subset of k elements in the social network that will begin and maximize the information diffusion process. A new approach, which pre-processes the social network before determining the seeds, is proposed. This treatment eliminates information feedback toward an element considered as a seed by extracting an acyclic spanning social network. First, we propose two algorithm versions called the SCG-algorithm (v1 and v2) (Spanning Connected Graph algorithm). This algorithm takes as input a connected social network, directed or not. Finally, a generalization of the SCG-algorithm, called the SG-algorithm (Spanning Graph algorithm), is proposed; it takes any graph as input. These algorithms are effective and each has polynomial complexity. To show the pertinence of our approach, two seed sets are determined, and the one given by our approach yields better results. The performance of this approach is clearly visible in simulations carried out with the R software and the igraph package.

Keywords: acyclic spanning graph, centrality measures, information feedback, influence maximization, social network

Procedia PDF Downloads 214
8967 An Algorithm to Compute the State Estimation of Bilinear Dynamical Systems

Authors: Abdullah Eqal Al Mazrooei

Abstract:

In this paper, we introduce a mathematical algorithm for estimating the states of bilinear systems. This algorithm uses a special linearization of the second-order term based on the best available information about the state of the system. This technique makes our algorithm a generalization of the well-known Kalman estimators. The system used here belongs to the bilinear class: the evolution of the model is linear-bilinear in the state of the system. Our algorithm can be used with both linear and bilinear systems. We also introduce a real application of the new algorithm to demonstrate its feasibility and efficiency.

Keywords: estimation algorithm, bilinear systems, Kalman filter, second order linearization

Procedia PDF Downloads 449
8966 Intrusion Detection in Computer Networks Using a Hybrid Model of Firefly and Differential Evolution Algorithms

Authors: Mohammad Besharatloo

Abstract:

Intrusion detection is an important research topic in network security because of the increasing growth in the use of computer network services. Intrusion detection aims to detect unauthorized use or abuse of networks and systems by intruders. Therefore, an intrusion detection system is an efficient tool for controlling users' access through predefined regulations. Since the data used in intrusion detection systems are high-dimensional, a proper representation is required to reveal their basic structure, so it is necessary to eliminate redundant features to create the best representation subset. In the proposed method, a hybrid model of the differential evolution and firefly algorithms is employed to choose the best subset of features. In addition, a decision tree and a support vector machine (SVM) are adopted to determine the quality of the selected features. First, the sorted population is divided into two sub-populations, and the two optimization algorithms are applied to these sub-populations, respectively. Then, the sub-populations are merged to create the population for the next iteration. The performance evaluation of the proposed method is based on KDD Cup99. The simulation results show that the proposed method performs better than the other methods in this context.

Keywords: intrusion detection system, differential evolution, firefly algorithm, support vector machine, decision tree

Procedia PDF Downloads 55
8965 Triangulations via Iterated Largest Angle Bisection

Authors: Yeonjune Kang

Abstract:

A triangulation of a planar region is a partition of that region into triangles. In the finite element method, triangulations are often used as the grid underlying a computation. In order to be suitable as a finite element mesh, a triangulation must have well-shaped triangles, according to criteria that depend on the details of the particular problem. For instance, most methods require that all triangles be small and as close to the equilateral shape as possible. Stated differently, one wants to avoid having either thin or flat triangles in the triangulation. There are many triangulation procedures, a particular one being the longest edge bisection algorithm described below. Starting with a given triangle, locate the midpoint of the longest edge and join it to the opposite vertex of the triangle. Two smaller triangles are formed; apply the same bisection procedure to each of these triangles. Continuing in this manner, after n steps one obtains a triangulation of the initial triangle into 2^n smaller triangles. The longest edge algorithm was first considered in the late 1970s. It was shown by various authors that this triangulation has the desirable properties for the finite element method: independently of the number of iterations, the angles of these triangles cannot get too small; moreover, the size of the triangles decays exponentially. In the present paper we consider a related triangulation algorithm, which we refer to as the largest angle bisection procedure. As the name suggests, rather than bisecting the longest edge, at each step we bisect the largest angle. We study the properties of the resulting triangulation and prove that, while the general behavior resembles that of the longest edge bisection algorithm, there are several notable differences as well.
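A single step of the largest-angle bisection can be sketched as follows: find the vertex with the largest interior angle and split the triangle along that angle's bisector, which by the angle bisector theorem meets the opposite edge BC at the point dividing it in the ratio |AB| : |AC|. The sample triangle is an illustrative choice.

```python
# One step of largest-angle bisection: split a triangle along the bisector of its
# largest interior angle (foot located via the angle bisector theorem).
import numpy as np

def bisect_largest_angle(tri):
    """tri: (3, 2) array of vertices; returns the two sub-triangles."""
    tri = np.asarray(tri, dtype=float)
    angles = []
    for i in range(3):
        a, b, c = tri[i], tri[(i + 1) % 3], tri[(i + 2) % 3]
        u, v = b - a, c - a
        cos_ang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
        angles.append(np.arccos(np.clip(cos_ang, -1.0, 1.0)))
    i = int(np.argmax(angles))                      # vertex with the largest angle
    a, b, c = tri[i], tri[(i + 1) % 3], tri[(i + 2) % 3]
    ab, ac = np.linalg.norm(b - a), np.linalg.norm(c - a)
    p = b + (ab / (ab + ac)) * (c - b)              # bisector foot on edge BC
    return np.array([a, b, p]), np.array([a, p, c])

t1, t2 = bisect_largest_angle([(0, 0), (4, 0), (1, 2)])
print(t1, t2, sep="\n")
```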

Keywords: angle bisectors, geometry, triangulation, applied mathematics

Procedia PDF Downloads 367
8964 A Robust and Adaptive Unscented Kalman Filter for the Air Fine Alignment of the Strapdown Inertial Navigation System/GPS

Authors: Jian Shi, Baoguo Yu, Haonan Jia, Meng Liu, Ping Huang

Abstract:

To adapt to the flexibility of modern warfare, a large number of guided weapons are launched from aircraft, so the inertial navigation system loaded in the weapon needs to undergo an alignment process in the air. This article addresses the problems of inaccurate system modeling under large misalignment angles, reduced filtering accuracy caused by outliers, and noise changes in GPS signals with the following methods: first, considering the large misalignment errors of the Strapdown Inertial Navigation System (SINS)/GPS, a more accurate model is built rather than making a small-angle approximation, and the Unscented Kalman Filter (UKF) algorithm is used to estimate the state; then, taking into account the impact of GPS noise changes on the fine alignment algorithm, an innovation-based adaptive filtering algorithm is introduced to estimate the GPS noise in real time; at the same time, in order to improve the anti-interference ability of the air fine alignment algorithm, a robust filtering algorithm based on outlier detection is combined with the air fine alignment algorithm to improve its robustness. The algorithm improves the alignment accuracy and robustness under interference conditions, which is verified by simulation.

Keywords: air alignment, fine alignment, inertial navigation system, integrated navigation system, UKF

Procedia PDF Downloads 128
8963 Adaptive Envelope Protection Control for the below and above Rated Regions of Wind Turbines

Authors: Mustafa Sahin, İlkay Yavrucuk

Abstract:

This paper presents a wind turbine envelope protection control algorithm that protects Variable Speed Variable Pitch (VSVP) wind turbines from damage during operation throughout their below and above rated regions, i.e., from cut-in to cut-out wind speed. The proposed approach uses a neural network that can adapt to turbines and their operating points. An algorithm monitors instantaneous wind and turbine states, predicts a wind speed that would push the turbine to a pre-defined envelope limit and, when necessary, realizes an avoidance action. Simulations are realized using the MS Bladed Wind Turbine Simulation Model for the NREL 5 MW wind turbine equipped with baseline controllers. In all simulations, through the proposed algorithm, it is observed that the turbine operates safely within the allowable limit throughout the below and above rated regions. Two example cases, adaptation to turbine operating points in the below and above rated regions and the corresponding protections, are investigated in simulations to show the capability of the proposed envelope protection system (EPS) algorithm, which reduces excessive wind turbine loads and is expected to increase the turbine's service life.

Keywords: adaptive envelope protection control, limit detection and avoidance, neural networks, ultimate load reduction, wind turbine power control

Procedia PDF Downloads 106
8962 A Matheuristic Algorithm for the School Bus Routing Problem

Authors: Cagri Memis, Muzaffer Kapanoglu

Abstract:

The school bus routing problem (SBRP) is a variant of the Vehicle Routing Problem (VRP) classified as a location-allocation-routing problem. In this study, the SBRP is decomposed into two sub-problems: (1) bus route generation and (2) bus stop selection, in order to solve large instances of the SBRP in reasonable computational times. To solve the first sub-problem, we propose a genetic algorithm to generate bus routes. Once the routes have been fixed, a sub-problem remains of allocating students to stops, considering the capacity of the buses and the walkability constraints of the students. While the exact method solves small-scale problems, treating large-scale problems with it becomes complex due to computational issues, a deficiency that the genetic algorithm can overcome. Results obtained from the proposed approach on 150 instances with up to 250 stops show that the matheuristic algorithm provides better solutions in reasonable computational times with respect to benchmark algorithms.

Keywords: genetic algorithm, matheuristic, school bus routing problem, vehicle routing problem

Procedia PDF Downloads 40
8961 The Mechanical Properties of a Small-Size Seismic Isolation Rubber Bearing for Bridges

Authors: Yi F. Wu, Ai Q. Li, Hao Wang

Abstract:

Taking a novel type of bridge bearing with a diameter of 100 mm as an example, a theoretical analysis, experimental research, and numerical simulation of the bearing were conducted. Since normal compression-shear machines cannot be applied to the small-size bearing, an improved device to test the properties of the bearing was proposed and fabricated. In addition, the bearing was simulated with the explicit finite element software ANSYS/LS-DYNA, and some parameters of the bearing were modified in the finite element model to effectively reduce the computational cost. Results show that all the research methods are capable of revealing the fundamental properties of small-size bearings, and a combined use of these methods can better capture both the overall properties and the detailed internal mechanical behavior of the bearing.

Keywords: ANSYS/LS-DYNA, compression shear, contact analysis, explicit algorithm, small-size

Procedia PDF Downloads 154
8960 Handshake Algorithm for Minimum Spanning Tree Construction

Authors: Nassiri Khalid, El Hibaoui Abdelaaziz, Hajar Moha

Abstract:

In this paper, we introduce and analyse a probabilistic distributed algorithm for the construction of a minimum spanning tree on a network. This algorithm is based on the handshake concept. Initially, each network node is considered a sub-spanning tree, and at each round of the execution of our algorithm, sub-spanning trees are merged. The execution continues until all sub-spanning trees are merged into one. We analyze this algorithm via a stochastic process.
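A sequential sketch of the repeated merging of sub-spanning trees, in the style of Boruvka's algorithm, is given below: each component picks its cheapest outgoing edge every round and components are merged until one spanning tree remains. The probabilistic handshake (matching) mechanism and the distributed message passing of the paper are not reproduced, and the example graph and weights are assumptions.

```python
# Boruvka-style merging of sub-spanning trees (sequential sketch).
import networkx as nx

def boruvka_mst(G: nx.Graph) -> nx.Graph:
    uf = nx.utils.UnionFind(G.nodes)
    mst = nx.Graph()
    mst.add_nodes_from(G.nodes)
    while mst.number_of_edges() < G.number_of_nodes() - 1:
        cheapest = {}                                    # component root -> best edge
        for u, v, w in G.edges(data="weight"):
            ru, rv = uf[u], uf[v]
            if ru == rv:
                continue
            for root in (ru, rv):
                if root not in cheapest or w < cheapest[root][2]:
                    cheapest[root] = (u, v, w)
        if not cheapest:                                 # graph not connected
            break
        for u, v, w in cheapest.values():                # merge sub-trees
            if uf[u] != uf[v]:
                uf.union(u, v)
                mst.add_edge(u, v, weight=w)
    return mst

G = nx.gnm_random_graph(10, 20, seed=3)
for u, v in G.edges:
    G[u][v]["weight"] = (u * 7 + v * 13) % 10 + 1        # arbitrary edge weights
print(sorted(boruvka_mst(G).edges(data="weight")))
```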

Keywords: spanning tree, distributed algorithm, handshake algorithm, matching, probabilistic analysis

Procedia PDF Downloads 631
8959 Enhancing Technical Trading Strategy on the Bitcoin Market using News Headlines and Language Models

Authors: Mohammad Hosein Panahi, Naser Yazdani

Abstract:

We present a technical trading strategy that leverages the FinBERT language model and financial news analysis, with a focus on news related to a subset of Nasdaq 100 stocks. Our approach surpasses the baseline Range Break-out strategy in the Bitcoin market, yielding a remarkable 24.8% increase in the win ratio for all Friday trades and an impressive 48.9% surge in short trades specifically on Fridays. Moreover, we conduct rigorous hypothesis testing to establish the statistical significance of these improvements. Our findings underscore the considerable potential of our NLP-driven approach in enhancing trading strategies and achieving greater profitability within financial markets.
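A minimal sketch of scoring headlines with a FinBERT sentiment model and using the aggregate score to gate a technical signal is shown below. The model id "ProsusAI/finbert", the example headlines, and the gating threshold are assumptions; the paper's exact pipeline is not specified in the abstract.

```python
# Score financial news headlines with FinBERT and gate a break-out entry signal.
from transformers import pipeline

classifier = pipeline("text-classification", model="ProsusAI/finbert")

headlines = [
    "Tech stocks rally as chipmaker beats earnings expectations",
    "Regulators open probe into major exchange outage",
]

def sentiment_score(results):
    signs = {"positive": 1.0, "negative": -1.0, "neutral": 0.0}
    return sum(signs[r["label"]] * r["score"] for r in results) / len(results)

score = sentiment_score(classifier(headlines))
take_breakout_trade = score > 0.2          # gate the range break-out entry (assumed rule)
print(f"news sentiment = {score:+.2f}, trade allowed: {take_breakout_trade}")
```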

Keywords: quantitative finance, technical analysis, bitcoin market, NLP, language models, FinBERT, technical trading

Procedia PDF Downloads 33
8958 A Clustering Algorithm for Massive Texts

Authors: Ming Liu, Chong Wu, Bingquan Liu, Lei Chen

Abstract:

Internet users face a massive amount of textual data every day. Organizing texts into categories can help users dig useful information out of large-scale text collections. Clustering, in fact, is one of the most promising tools for categorizing texts due to its unsupervised characteristic. Unfortunately, most traditional clustering algorithms lose their effectiveness on large-scale text collections. This situation is mainly attributable to the high-dimensional vectors generated from texts. To cluster large-scale text collections effectively and efficiently, this paper proposes a vector-reconstruction-based clustering algorithm. Only the features that can represent the cluster are preserved in the cluster's representative vector. The algorithm alternately repeats two sub-processes until it converges. One is the partial tuning sub-process, where feature weights are fine-tuned iteratively; to accelerate clustering, an intersection-based similarity measure and its corresponding neuron adjustment function are proposed and implemented in this sub-process. The other is the overall tuning sub-process, where the features are reallocated among different clusters; in this sub-process, features that are useless for representing a cluster are removed from its representative vector. Experimental results on three text collections (two small-scale and one large-scale) demonstrate that our algorithm obtains high quality on both small-scale and large-scale text collections.

Keywords: vector reconstruction, large-scale text clustering, partial tuning sub-process, overall tuning sub-process

Procedia PDF Downloads 404