Search results for: Feature Selection
1665 Bio-inspired Audio Content-Based Retrieval Framework (B-ACRF)
Authors: Noor A. Draman, Campbell Wilson, Sea Ling
Abstract:
Content-based music retrieval generally involves analyzing, searching, and retrieving music based on the low- or high-level features of a song, which are normally used to represent artists, songs, or music genres. Identifying them normally involves feature extraction and classification tasks. In theory, the more features analyzed, the better the classification accuracy that can be achieved, but at the cost of longer execution time. A technique for selecting significant features is therefore important, as it reduces the dimensionality of the features used in classification and contributes to accuracy. The Artificial Immune System (AIS) approach will be investigated and applied in the classification task. A bio-inspired audio content-based retrieval framework (B-ACRF) is proposed at the end of this paper, embracing issues that need further consideration regarding music retrieval performance.
Keywords: Bio-inspired audio content-based retrieval framework, feature selection technique, low/high level features, artificial immune system
1664 Enhanced Bidirectional Selection Sort
Authors: Jyoti Dua
Abstract:
An algorithm is a well-defined procedure that takes some input in the form of values, processes them, and gives the desired output. Sorting forms the basis of many other algorithms such as searching, pattern matching, and digital filters, and has found applications in database systems, data statistics and processing, data communications, and pattern matching. This paper introduces the "Enhanced Bidirectional Selection" sort, which is bidirectional and stable. It is said to be bidirectional as it selects two values per pass, the smallest from the front and the largest from the rear, and assigns them to their appropriate locations, thus reducing the number of passes to half the total number of elements as compared to selection sort.
Keywords: Bubble sort, cocktail sort, selection sort, heap sort.
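A minimal sketch of the pass structure described above (not the author's exact implementation, which is reported as stable; this plain swap-based version is not): each pass scans the unsorted window once, placing the minimum at the front and the maximum at the rear.

```python
def bidirectional_selection_sort(a):
    """Each pass places the smallest element at the front of the unsorted
    window and the largest at the rear, halving the number of passes."""
    left, right = 0, len(a) - 1
    while left < right:
        lo, hi = left, left
        for i in range(left, right + 1):   # one scan finds both extremes
            if a[i] < a[lo]:
                lo = i
            if a[i] > a[hi]:
                hi = i
        a[left], a[lo] = a[lo], a[left]    # place minimum at the front
        if hi == left:                     # the max was moved by the first swap
            hi = lo
        a[right], a[hi] = a[hi], a[right]  # place maximum at the rear
        left += 1
        right -= 1
    return a

print(bidirectional_selection_sort([5, 1, 4, 2, 3]))  # [1, 2, 3, 4, 5]
```

Each pass shrinks the window from both ends, so an n-element array needs about n/2 passes instead of n.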
1663 A Spatial Hypergraph Based Semi-Supervised Band Selection Method for Hyperspectral Imagery Semantic Interpretation
Authors: Akrem Sellami, Imed Riadh Farah
Abstract:
Hyperspectral imagery (HSI) typically provides a wealth of information captured across a wide range of the electromagnetic spectrum for each pixel in the image. Hence, a pixel in HSI is a high-dimensional vector of intensities with a large spectral range and a high spectral resolution, and semantic interpretation is a challenging task in HSI analysis. In this paper, we focus on object classification as HSI semantic interpretation. However, HSI classification still faces some issues, among which are the spatial variability of spectral signatures, the high number of spectral bands, and the high cost of true sample labeling. The high number of spectral bands combined with the low number of training samples poses the problem of the curse of dimensionality. To resolve this problem, we introduce a dimensionality reduction process that aims to improve the classification of HSI. The presented approach is a semi-supervised band selection method based on a spatial hypergraph embedding model that represents higher-order relationships, with different weights for the spatial neighbors corresponding to the centroid pixel. This semi-supervised band selection is developed to select useful bands for object classification. The approach is evaluated on AVIRIS and ROSIS HSIs and compared to other dimensionality reduction methods. The experimental results demonstrate the efficacy of our approach compared to many existing dimensionality reduction methods for HSI classification.
Keywords: Hyperspectral image, spatial hypergraph, dimensionality reduction, semantic interpretation, band selection, feature extraction.
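The paper's hypergraph embedding is not reproduced here; as a minimal stand-in for the general band-selection idea, the sketch below scores each spectral band against the few available labels and keeps the top k (mutual information is a swapped-in scoring function, not the paper's method).

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def select_bands(cube, labels, k=10):
    """cube: (rows, cols, n_bands) HSI; labels: (rows, cols), -1 = unlabeled.
    Scores each band on the labeled pixels only and keeps the k best."""
    X = cube.reshape(-1, cube.shape[-1])
    y = labels.ravel()
    mask = y >= 0                          # semi-supervised: few labeled samples
    scores = mutual_info_classif(X[mask], y[mask])
    keep = np.sort(np.argsort(scores)[::-1][:k])
    return cube[:, :, keep], keep

rng = np.random.default_rng(0)
cube = rng.normal(size=(20, 20, 50))       # stand-in HSI cube
labels = np.full((20, 20), -1)
labels[:2] = rng.integers(0, 3, (2, 20))   # a small labeled strip
reduced, kept = select_bands(cube, labels)
print(reduced.shape, kept)
```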
1662 A Model for Test Case Selection in the Software-Development Life Cycle
Authors: Adtha Lawanna
Abstract:
Software maintenance is one of the essential processes of the Software-Development Life Cycle. The main concerns of maintaining software are correcting errors, revising code, preventing future errors, and improving performance and capacity. While modifications are being applied, the software has to be retested to increase the level of assurance that it will behave according to the requirements. Accordingly, test cases must be selected for testing the revised modules and the whole software. This problem is commonly addressed by regression test selection, such as retest-all selection, random/ad-hoc selection, and safe regression test selection. In particular, the traditional techniques rely on a mapping between the test cases in a test suite and the lines of code each executes. However, lines of code are not the only requirement that can affect the size of a test suite; the number of functions and of faulty versions matter as well. Therefore, a model for test case selection is developed to cover those three requirements through an integrated technique that can produce a smaller set of test cases when compared with the traditional regression selection techniques.
Keywords: Software maintenance, regression test selection, test case.
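The model itself is not detailed in the abstract; a common baseline it is compared against is coverage-based selection. A minimal greedy sketch, assuming each test is described by the set of code lines it executes:

```python
def greedy_select(tests, changed_lines):
    """tests: {test_name: set of executed line numbers}.
    Greedily picks tests until every changed line is covered (or unreachable)."""
    remaining = set(changed_lines)
    selected = []
    while remaining:
        # pick the test covering the most still-uncovered changed lines
        best = max(tests, key=lambda t: len(tests[t] & remaining))
        if not tests[best] & remaining:
            break                      # no test reaches the remaining changes
        selected.append(best)
        remaining -= tests[best]
    return selected

suite = {"t1": {1, 2, 3}, "t2": {3, 4}, "t3": {5}}
print(greedy_select(suite, changed_lines={2, 4, 5}))  # ['t1', 't2', 't3']
```

The paper's integrated model would additionally weigh the number of functions and faulty versions; the greedy loop above covers only the line-coverage requirement.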
1660 Towards a Systematic, Cost-Effective Approach for ERP Selection
Authors: Hassan Haghighi, Omid Mafi
Abstract:
Existing experience indicates that one of the most prominent reasons some ERP implementations fail is the selection of an improper ERP package. Among the factors resulting in inappropriate ERP selections, one is ignoring the preliminary activities that should be done before the evaluation of ERP packages. Another factor yielding unsuitable selections is that organizations usually employ prolonged and costly selection processes, to the extent that sometimes the process is never finalized, or the evaluation team performs many key final activities in an incomplete or inaccurate way due to exhaustion, lack of interest, or out-of-date data. In this paper, a systematic approach for choosing an ERP package is introduced that recommends activities to be done before and after the main selection phase. The proposed approach also utilizes ideas that accelerate the selection process while reducing the probability of an erroneous final selection.
Keywords: Enterprise resource planning, evaluation and selection of ERP packages, organizational readiness for employing ERP, evaluation lists.
1659 Development of an Ensemble Classification Model Based on Hybrid Filter-Wrapper Feature Selection for Email Phishing Detection
Authors: R. B. Ibrahim, M. S. Argungu, I. M. Mungadi
Abstract:
It is obvious that, at the present time, the internet has become an indispensable part of human life. The internet has provided diverse opportunities to make life easy for human beings through the adoption of various channels, among them email, internet banking, and video conferencing. Email is one of the easiest means of communication, hugely accepted among individuals and organizations globally. But over the decades, the security integrity of this platform has been challenged by malicious activities such as phishing. Email phishing is designed by phishers to fool the recipient into handing over sensitive personal information such as passwords, credit card numbers, account credentials, and social security numbers. This activity has caused a great deal of financial damage to email users globally, resulting in bankruptcy, sudden death of victims, and other health-related conditions. Although many methods have been proposed to detect email phishing, in this research the results of multiple machine-learning methods for predicting email phishing are compared, with the use of filter-wrapper feature selection. It is worth noting that all three models performed substantially well, but one outperformed the others. The dataset used for these models is obtained from the Kaggle online data repository, while three classifiers, decision tree, naïve Bayes, and logistic regression, are each ensembled with bagging. Results from the study show that the decision tree (CART) bagging ensemble recorded the highest accuracy of 98.13% using PEF (Phishing Essential Features). This result further demonstrates the dependability of the proposed model.
Keywords: Ensemble, hybrid, filter-wrapper, phishing.
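A minimal scikit-learn sketch of the bagging comparison the abstract describes; the feature matrix and labels below are random stand-ins for the Kaggle phishing data after filter-wrapper selection.

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))               # stand-in for selected phishing features
y = (X[:, 0] + X[:, 1] > 0).astype(int)      # stand-in phishing/legitimate labels

models = {
    "CART bagging": BaggingClassifier(DecisionTreeClassifier(), n_estimators=50),
    "NB bagging": BaggingClassifier(GaussianNB(), n_estimators=50),
    "LR bagging": BaggingClassifier(LogisticRegression(max_iter=1000), n_estimators=50),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()
    print(f"{name}: {acc:.4f}")
```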
1658 Relay Node Selection Algorithm for Cooperative Communications in Wireless Networks
Authors: Sunmyeng Kim
Abstract:
IEEE 802.11a/b/g standards support multiple transmission rates. Even though the use of multiple transmission rates increases WLAN capacity, this feature leads to the performance anomaly problem. Cooperative communication was introduced to relieve the performance anomaly problem: data packets are delivered to the destination much faster through a high-rate relay node than through direct transmission to the destination at a low rate. In the legacy cooperative protocols, a source node chooses a relay node based only on the transmission rate. Therefore, they are not feasible in multi-flow environments, since they do not consider the effect of other flows. To alleviate this effect, we propose a new relay node selection algorithm based on both the transmission rate and the channel contention level. Performance evaluation is conducted using simulation and shows that the proposed protocol significantly outperforms the previous protocol in terms of throughput and delay.
Keywords: Cooperative communications, MAC protocol, Relay node, WLAN.
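The exact selection metric is not given in the abstract; the sketch below is a hedged illustration of the idea, scoring each candidate relay by its two-hop effective rate discounted by a hypothetical contention factor, and falling back to direct transmission.

```python
def effective_rate(r_src_relay, r_relay_dst):
    # two-hop transmission time ~ 1/r1 + 1/r2, so the effective rate is harmonic
    return 1.0 / (1.0 / r_src_relay + 1.0 / r_relay_dst)

def select_relay(candidates, r_direct):
    """candidates: list of (name, rate_to_relay, rate_to_dst, contention in [0,1]).
    Picks the relay whose contention-discounted rate beats direct transmission."""
    best_name, best_rate = None, r_direct     # None means: transmit directly
    for name, r1, r2, contention in candidates:
        rate = effective_rate(r1, r2) * (1.0 - contention)  # hypothetical penalty
        if rate > best_rate:
            best_name, best_rate = name, rate
    return best_name, best_rate

relays = [("A", 54.0, 54.0, 0.2), ("B", 48.0, 36.0, 0.0)]
print(select_relay(relays, r_direct=11.0))    # ('A', 21.6)
```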
1657 Dataset Analysis Using Membership-Deviation Graph
Authors: Itgel Bayarsaikhan, Jimin Lee, Sejong Oh
Abstract:
Classification is one of the primary themes in computational biology. The accuracy of classification strongly depends on the quality of a dataset, and we need a method to evaluate this quality. In this paper, we propose a new graphical analysis method, the Membership-Deviation Graph (MDG), for analyzing the quality of a dataset. The MDG represents the degree of membership and the deviations for the instances of a class in the dataset. The result of an MDG analysis is used for understanding specific features and for selecting the best feature for classification.
Keywords: Feature, classification, machine learning algorithm.
1656 3D CAD Models and its Feature Similarity
Authors: Elmi Abu Bakar, Tetsuo Miyake, Zhong Zhang, Takashi Imamura
Abstract:
Knowing the geometrical pose of product objects in a manufacturing line before robot manipulation is required and makes overall shape measurement less time consuming. To achieve this, information on the shape representation and matching of objects is needed. Objects are compared via descriptors that are conceptually subtracted from each other to form a scalar metric; when the metric value is smaller, the objects are considered closer to each other. Rotating an object from its static pose in some direction changes the scalar metric value computed on the boundary information after feature extraction of the related object. In this paper, an indexing technique for the retrieval of 3D geometrical models based on the similarity between boundary shapes is proposed, in order to measure 3D CAD object pose using object shape feature matching for a Computer Aided Testing (CAT) system in a production line. Experimental results show the effectiveness of the proposed method.
Keywords: CAD, rendering, feature extraction, feature classification.
1655 Automatic Moment-Based Texture Segmentation
Authors: Tudor Barbu
Abstract:
An automatic moment-based texture segmentation approach is proposed in this paper. First, we describe the related work in this computer vision domain. Our texture feature extraction, the first part of the texture recognition process, produces a set of moment-based feature vectors: for each image pixel, a texture feature vector is computed as a sequence of area moments. Then, an automatic pixel classification approach is proposed. The feature vectors are clustered using an unsupervised classification algorithm, the optimal number of clusters being determined using a measure based on validity indexes. From the resulting pixel classes, the desired texture regions of the image are easily determined.
Keywords: Image segmentation, moment-based texture analysis, automatic classification, validity indexes.
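A minimal sketch of the pipeline described above, assuming grayscale input: a small vector of local moments per pixel (the local mean standing in for higher-order area moments' exact form), clustered with k-means. The paper determines the cluster count automatically via validity indexes; here it is fixed for brevity.

```python
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.cluster import KMeans

def moment_features(img, win=9):
    """Per-pixel feature vector of local moments (mean plus three central moments,
    approximated with the local mean)."""
    img = img.astype(float)
    m1 = uniform_filter(img, win)                 # local mean
    feats = [m1]
    for p in (2, 3, 4):                           # approximate central moments
        feats.append(uniform_filter((img - m1) ** p, win))
    return np.stack(feats, axis=-1)

def segment(img, n_clusters=2, win=9):
    f = moment_features(img, win).reshape(-1, 4)
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(f)
    return labels.reshape(img.shape)

rng = np.random.default_rng(0)
tex = np.concatenate([rng.normal(0, 1, (32, 64)), rng.normal(0, 4, (32, 64))])
print(np.bincount(segment(tex).ravel()))          # two roughly equal regions
```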
1654 Feature Based Dense Stereo Matching using Dynamic Programming and Color
Authors: Hajar Sadeghi, Payman Moallem, S. Amirhassn Monadjemi
Abstract:
This paper presents a new feature-based dense stereo matching algorithm that obtains the dense disparity map via dynamic programming. After extraction of some proper features, we use matching constraints such as the epipolar line, the disparity limit, ordering, and a limit on the directional derivative of disparity. A coarse-to-fine multiresolution strategy is also used to decrease the search space and therefore increase the accuracy and processing speed. The proposed method links the detected feature points into chains and compares some of the feature points from different chains to increase the matching speed. We also employ color stereo matching to increase the accuracy of the algorithm. After feature matching, we use dynamic programming to obtain the dense disparity map. It differs from the classical DP methods in stereo vision, since it employs the sparse disparity map obtained from the feature-based matching stage, and the DP is performed on a scan line only between any two matched feature points on that scan line. Thus, our algorithm is truly an optimization method and offers a good trade-off between accuracy and computational efficiency. Regarding the results of our experiments, the proposed algorithm increases the accuracy by 20 to 70% and reduces the running time of the algorithm by almost 70%.
Keywords: Chain correspondence, color stereo matching, dynamic programming, epipolar line, stereo vision.
1653 Selection Initial modes for Belief K-modes Method
Authors: Sarra Ben Hariz, Zied Elouedi, Khaled Mellouli
Abstract:
The belief K-modes method (BKM) is a clustering technique that handles uncertainty in the attribute values of objects in both the cluster construction task and the classification task. Like the standard version of this method, the BKM results depend on the chosen initial modes. In this paper, a selection method for initial modes is developed, aiming at improving the performance of the BKM approach. Experiments with several real datasets show that, using the developed initial-mode selection method, the clustering algorithm produces more accurate results.
Keywords: Clustering, uncertainty, belief function theory, belief K-modes method, initial modes selection.
1652 TOPSIS Method for Supplier Selection Problem
Authors: Omid Jadidi, Fatemeh Firouzi, Enzo Bagliery
Abstract:
Supplier selection, in real situations, is affected by several qualitative and quantitative factors and is one of the most important activities of a purchasing department. Since, at the time of evaluating suppliers against the criteria, decision makers (DMs) do not have precise, exact, and complete information, supplier selection becomes more difficult. In this case, grey theory helps us to deal with this problem of uncertainty. Here, we apply the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) to evaluate and select the best supplier using interval fuzzy numbers. Throughout this article, we compare TOPSIS with some other approaches and afterward demonstrate that the concept of TOPSIS is very important for ranking and selecting the right supplier.
Keywords: TOPSIS, fuzzy number, MADM, supplier selection.
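A minimal sketch of crisp TOPSIS (the paper's interval-fuzzy extension follows the same steps on interval numbers): normalize the decision matrix, weight it, measure each alternative's distance to the ideal and anti-ideal solutions, and rank by closeness coefficient.

```python
import numpy as np

def topsis(X, w, benefit):
    """X: (alternatives x criteria) decision matrix; w: criterion weights;
    benefit: True for benefit criteria, False for cost criteria."""
    R = X / np.linalg.norm(X, axis=0)            # vector normalization
    V = R * w                                    # weighted normalized matrix
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)    # distance to ideal solution
    d_neg = np.linalg.norm(V - anti, axis=1)     # distance to anti-ideal
    return d_neg / (d_pos + d_neg)               # closeness coefficient

# three suppliers, three criteria (quality, delivery, price); weights sum to 1
X = np.array([[7.0, 9.0, 9.0], [8.0, 7.0, 8.0], [9.0, 6.0, 8.0]])
w = np.array([0.5, 0.3, 0.2])
print(topsis(X, w, benefit=np.array([True, True, False])))  # higher is better
```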
1651 Hospital Facility Location Selection Using Permanent Analytics Process
Authors: C. Ardil
Abstract:
In this paper, a new MCDMA approach, the permanent analytics process, is proposed to assess the immovable valuation criteria and their significance in the placement of a healthcare facility. Five decision factors are considered for the valuation and selection of immovables. In multiple-factor selection problems, the priority vector of the criteria used to compare several immovables is first determined using the permanent analytics method, a mathematical model for the multiple criteria decision-making process. Then, to demonstrate the viability and efficacy of the suggested approach, twenty potential candidate locations were evaluated using the hospital site selection problem's decision criteria. The ranking accuracy of the estimation was evaluated using composite programming, which took into account both the permanent analytics process and the weighted multiplicative model.
Keywords: Hospital Facility Location Selection, Permanent Analytics Process, Multiple Criteria Decision Making (MCDM)
1650 Efficient and Effective Gabor Feature Representation for Face Detection
Authors: Yasuomi D. Sato, Yasutaka Kuriya
Abstract:
We here propose an improved version of elastic graph matching (EGM) as a face detector, called the multi-scale EGM (MS-EGM). In this improvement, a Gabor wavelet-based pyramid reduces the computational complexity of the feature representation often used in conventional EGM, while preserving a critical amount of information about an image. The MS-EGM gives higher detection performance than the Viola-Jones object detection algorithm with its AdaBoost Haar-like feature cascade. We also show rapid detection speeds for the MS-EGM, comparable to the Viola-Jones method. We find fruitful benefits in the MS-EGM in terms of topological feature representation for a face.
Keywords: Face detection, Gabor wavelet based pyramid, elastic graph matching, topological preservation, redundancy of computational complexity.
1649 Applying Fuzzy Decision Making Approach to IT Outsourcing Supplier Selection
Authors: Gülcin Büyüközkan, Mehmet Sakir Ersoy
Abstract:
The decision of information technology (IT) outsourcing requires close attention to the evaluation of the supplier selection process, because the selection decision involves conflicting multiple criteria and is replete with complex decision-making problems. Selecting the most appropriate supplier is considered an important strategic decision that may impact the performance of the outsourcing engagement. The objective of this paper is to aid decision makers in evaluating and assessing possible IT outsourcing suppliers. An axiomatic-design-based fuzzy group decision-making approach is adopted to evaluate supplier alternatives. Finally, a case study is given to demonstrate the potential of the methodology.
Keywords: IT outsourcing, supplier selection, multi-criteria decision making, axiomatic design, fuzzy logic.
1648 Prediction of Cardiovascular Disease by Applying Feature Extraction
Authors: Nebi Gedik
Abstract:
Heart disease threatens the lives of a great number of people every year around the world. Heart problems cause a large share of all deaths; therefore, early diagnosis and treatment are critical. The diagnosis of heart disease is complicated by several factors affecting health, such as high blood pressure, raised cholesterol, an irregular pulse rhythm, and more. Artificial intelligence has the potential to assist in the early detection and treatment of diseases, and improving heart failure prediction is one of the primary goals of research on heart disease risk assessment. This study aims to determine the features that provide the most successful classification prediction in detecting cardiovascular disease. The performance of each feature is compared using the k-nearest neighbor machine learning method, and the feature that gives the most successful performance is identified.
Keywords: Cardiovascular disease, feature extraction, supervised learning, k-NN.
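A minimal sketch of the evaluation loop the abstract describes, on stand-in data: each feature is scored on its own with a k-NN classifier, and the best single feature is reported. The feature names are hypothetical placeholders, not the study's actual attributes.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))                    # stand-in clinical features
y = (X[:, 2] + 0.3 * rng.normal(size=300) > 0).astype(int)  # stand-in labels
names = ["age", "blood_pressure", "cholesterol", "pulse", "glucose"]  # hypothetical

scores = {}
for j, name in enumerate(names):
    clf = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))
    scores[name] = cross_val_score(clf, X[:, [j]], y, cv=5).mean()

best = max(scores, key=scores.get)
print(scores, "-> best single feature:", best)   # cholesterol, by construction
```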
1647 Selection of Solid Waste Landfill Site Using Geographical Information System (GIS)
Abstract:
Rapid population growth, urbanization, and industrialization are among the most important causes of environmental problems. The elimination and management of solid wastes are also among the most important environmental problems, and one of the main problems in solid waste management is the selection of the best site for the disposal of solid wastes. Lately, Geographical Information Systems (GIS) have been used to ease the selection of landfill areas. GIS has the ability to model the necessary economic, environmental, and political constraints, and plays an important role as a decision support tool in the site selection of landfill areas. In this study, map layers are analyzed to minimize the effect of environmental, social, and cultural factors and maximize the effect of engineering/economic factors in the site selection of landfill areas, and the use of GIS as a decision support mechanism for solid waste landfill site selection is presented in a practical application in the Güzelyurt district of Aksaray, Turkey.
Keywords: GIS, landfill, solid waste, spatial analysis.
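A minimal sketch of the weighted-overlay idea behind GIS-based site selection, with hypothetical criterion rasters and weights: each layer is scored on a common suitability scale, the layers are combined with weights, and excluded cells are masked out.

```python
import numpy as np

rng = np.random.default_rng(0)
shape = (100, 100)                      # hypothetical raster grid over the district
slope = rng.uniform(0, 1, shape)        # suitability layers, already scaled to [0, 1]
dist_water = rng.uniform(0, 1, shape)   # (higher = more suitable)
dist_roads = rng.uniform(0, 1, shape)
protected = rng.random(shape) < 0.05    # exclusion mask: protected areas

weights = {"slope": 0.4, "water": 0.35, "roads": 0.25}   # hypothetical weights
suitability = (weights["slope"] * slope
               + weights["water"] * dist_water
               + weights["roads"] * dist_roads)
suitability[protected] = -np.inf        # hard constraint: never build here

best = np.unravel_index(np.argmax(suitability), shape)
print("best candidate cell:", best, "score:", round(suitability[best], 3))
```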
1646 Improved Feature Processing for Iris Biometric Authentication System
Authors: Somnath Dey, Debasis Samanta
Abstract:
Iris-based biometric authentication is gaining importance in recent times. Iris biometric processing, however, is a complex process and computationally very expensive. In the overall processing of iris biometrics in an iris-based biometric authentication system, feature processing is an important task: we extract iris features, which are ultimately used in matching. Since there is a large number of iris features, and computational time increases as the number of features increases, it is a challenge to develop an iris processing system with as few features as possible without compromising correctness. In this paper, we address this issue and present an approach to the feature extraction and feature matching process. We apply the Daubechies D4 wavelet with 4 levels to extract features from iris images. These features are encoded with 2 bits each by quantizing them into 4 quantization levels. With our proposed approach it is possible to represent an iris template with only 304 bits, whereas existing approaches require as many as 1024 bits. In addition, we assign different weights to different iris regions when comparing two iris templates, which significantly increases the accuracy; the templates are matched based on a weighted similarity measure. Experimental results on several iris databases substantiate the efficacy of our approach.
Keywords: Iris recognition, biometric, feature processing, pattern recognition, pattern matching.
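A minimal sketch of the encoding step, using PyWavelets on a stand-in normalized iris strip: a 4-level Daubechies-4 decomposition, with coefficients quantized into 4 levels (2 bits each). The paper's exact 304-bit layout and region weighting are not reproduced; only the quantization mechanics are shown.

```python
import numpy as np
import pywt

rng = np.random.default_rng(0)
iris_strip = rng.normal(size=(32, 256))       # stand-in for a normalized iris image

# 4-level Daubechies D4 decomposition; keep the coarsest approximation band
coeffs = pywt.wavedec2(iris_strip, "db4", level=4)
band = coeffs[0].ravel()

# quantize into 4 levels (2 bits per coefficient) using quartile thresholds
edges = np.quantile(band, [0.25, 0.5, 0.75])
codes = np.digitize(band, edges)              # values in {0, 1, 2, 3}
template_bits = np.unpackbits(codes.astype(np.uint8)[:, None], axis=1)[:, -2:]
print(len(band), "coefficients ->", template_bits.size, "bits")
```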
1645 Feature Point Reduction for Video Stabilization
Authors: Theerawat Songyot, Tham Manjing, Bunyarit Uyyanonvara, Chanjira Sinthanayothin
Abstract:
Corner detection and optical flow are common techniques for feature-based video stabilization. However, these algorithms are computationally expensive and should be performed at a reasonable rate. This paper presents an algorithm for discarding irrelevant feature points and maintaining the rest for future use, so as to improve the computational cost. The algorithm starts by initializing a maintained set. The feature points in the maintained set are examined for their accuracy for modeling; corner detection is required only when the feature points are insufficiently accurate for future modeling. Then, optical flows are computed from the maintained feature points toward the consecutive frame. After that, a motion model is estimated based on the simplified affine motion model and the least squares method, with outliers belonging to moving objects present. Studentized residuals are used to eliminate such outliers, and the model estimation and elimination processes repeat until no more outliers are identified. Finally, the entire algorithm repeats along the video sequence, with the points remaining from the previous iteration used as the maintained set. As a practical application, efficient video stabilization can be achieved by exploiting the computed motion models. Our study shows that the number of times corner detection needs to be performed is greatly reduced, thus significantly improving the computational cost. Moreover, optical flow vectors are computed only for the maintained feature points, not for outliers, further reducing the computational cost. In addition, the feature points after reduction can sufficiently be used for background object tracking, as demonstrated in a simple video stabilizer based on our proposed algorithm.
Keywords: Background object tracking, feature point reduction, low cost tracking, video stabilization.
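A minimal OpenCV sketch of the maintenance loop described above: corners are redetected only when the maintained set becomes too small, flows are computed only for maintained points, and outliers are dropped by plain residual thresholding (standing in for the paper's iterative studentized-residual test).

```python
import cv2
import numpy as np

def stabilize_points(frames, min_pts=50, resid_thresh=3.0):
    """frames: iterable of grayscale images. Yields one affine model per frame pair."""
    frames = iter(frames)
    prev = next(frames)
    pts = cv2.goodFeaturesToTrack(prev, 500, 0.01, 10)       # initial maintained set
    for cur in frames:
        if pts is None or len(pts) < min_pts:                # redetect only when needed
            pts = cv2.goodFeaturesToTrack(prev, 500, 0.01, 10)
        nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev, cur, pts, None)
        good = status.ravel() == 1
        p0, p1 = pts[good], nxt[good]
        M, _ = cv2.estimateAffinePartial2D(p0, p1)           # simplified motion model
        if M is not None:
            pred = cv2.transform(p0, M)
            resid = np.linalg.norm((p1 - pred).reshape(-1, 2), axis=1)
            p1 = p1[resid < resid_thresh]                    # drop moving-object outliers
        pts, prev = p1.reshape(-1, 1, 2).astype(np.float32), cur
        yield M

rng = np.random.default_rng(0)
base = (rng.random((120, 160)) * 255).astype(np.uint8)
for M in stabilize_points([base, np.roll(base, 2, axis=1)]):  # 2 px horizontal shift
    print(np.round(M, 2))
```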
1644 Exploiting Global Self Similarity for Head-Shoulder Detection
Authors: Lae-Jeong Park, Jung-Ho Moon
Abstract:
People detection from images has a variety of applications such as video surveillance and driver assistance systems, but it is still a challenging task, and it is more difficult in crowded environments such as shopping malls, in which occlusion of the lower parts of the human body often occurs. The lack of full-body information requires more effective features than common features such as HOG. In this paper, new features are introduced that exploit the global self-symmetry (GSS) characteristic of head-shoulder patterns. The features encode the similarity or difference of color histograms and oriented gradient histograms between two vertically symmetric blocks. The domain-specific features are rapid to compute from integral images in the Viola-Jones cascade-of-rejecters framework. The proposed features are evaluated on our own head-shoulder dataset that, in part, consists of the well-known INRIA pedestrian dataset. Experimental results show that the GSS features are marginally effective in the reduction of false alarms, and that the gradient GSS features are preferred more often than the color GSS ones in feature selection.
Keywords: Pedestrian detection, cascade of rejecters, feature extraction, self-symmetry, HOG.
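A minimal sketch of the symmetry idea: compare the histogram of a block with that of its mirrored, vertically symmetric counterpart. Histogram intersection stands in here for the paper's exact similarity measure, and only grayscale is used for brevity.

```python
import numpy as np

def gss_similarity(window, bins=16):
    """window: grayscale head-shoulder candidate (H x W array).
    Splits it into left/right halves and compares their histograms."""
    h, w = window.shape
    left, right = window[:, : w // 2], window[:, w - w // 2 :]
    right = np.fliplr(right)                          # mirror the right half
    h1, _ = np.histogram(left, bins=bins, range=(0, 256))
    h2, _ = np.histogram(right, bins=bins, range=(0, 256))
    h1, h2 = h1 / h1.sum(), h2 / h2.sum()
    return np.minimum(h1, h2).sum()                   # intersection in [0, 1]

# a mirror-symmetric toy pattern scores 1.0
sym = np.tile(np.concatenate([np.arange(32), np.arange(32)[::-1]]), (64, 1)) * 4
print(round(gss_similarity(sym.astype(float)), 3))    # 1.0
```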
1643 Regression Test Selection Technique for Multi-Programming Language
Authors: Walid S. Abd El-hamid, Sherif S. El-Etriby, Mohiy M. Hadhoud
Abstract:
Regression testing is a maintenance activity applied to modified software to provide confidence that the changed parts are correct and that the unchanged parts have not been adversely affected by the modifications. Regression test selection techniques reduce the cost of regression testing by selecting a subset of an existing test suite to use in retesting modified programs. This paper presents a general regression-test-selection technique which is based on code and allows selecting test cases for programs written in any programming language; it also handles incomplete programs. We also describe RTSDiff, a regression-test-selection system that implements the proposed technique. The results of the empirical studies performed in four programming languages (Java, C#, C++, and Visual Basic) show that the technique is efficient and effective in reducing the size of the test suite.
Keywords: Regression testing, testing, test selection, software evolution, software maintenance.
1642 A Speeded up Robust Scale-Invariant Feature Transform Currency Recognition Algorithm
Authors: Daliyah S. Aljutaili, Redna A. Almutlaq, Suha A. Alharbi, Dina M. Ibrahim
Abstract:
All currencies around the world look very different from each other; for instance, the size, color, and pattern of the paper differ. With the development of modern banking services, automatic methods for paper currency recognition have become important in many applications, such as vending machines. One of the phases of a currency recognition architecture is feature detection and description. Many algorithms are used for this phase, but they still have some disadvantages. This paper proposes a feature detection algorithm that merges the advantages of the current SIFT and SURF algorithms, which we call the Speeded-up Robust Scale-Invariant Feature Transform (SR-SIFT) algorithm. Our proposed SR-SIFT algorithm overcomes the problems of both the SIFT and SURF algorithms; it aims to speed up SIFT feature detection while keeping it robust. Simulation results demonstrate that the proposed SR-SIFT algorithm decreases the average response time, especially for small and minimum numbers of best key points, and improves the distribution of the best key points over the surface of the currency. Furthermore, the proposed algorithm increases the accuracy of the true best point distribution inside the currency edge compared to the other two algorithms.
Keywords: Currency recognition, feature detection and description, SIFT algorithm, SURF algorithm, speeded up and robust features.
1641 Geometric Operators in the Selection of Human Resources
Authors: José M. Merigó, Anna M. Gil-Lafuente
Abstract:
We study the possibility of using geometric operators in the selection of human resources. We develop three new methods that use the ordered weighted geometric (OWG) operator in different indexes used for the selection of human resources. The objective of these models is to manipulate the neutrality of the old methods so that the decision maker is able to select human resources according to his or her particular attitude. In order to develop these models, a short revision of the OWG operator is first given. Second, we briefly explain the general process for the selection of human resources. Then, we develop the three new indexes, which use the OWG operator in the Hamming distance, in the adequacy coefficient, and in the index of maximum and minimum level. Finally, an illustrative example of the new approach is given.
Keywords: OWG operator, decision making, human resources, Hamming distance.
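A minimal sketch of the OWG operator itself: the arguments are reordered in descending order and aggregated by a weighted geometric mean, so the weights act on positions rather than on particular criteria, which is what lets the decision maker's attitude shift the result away from neutrality.

```python
import numpy as np

def owg(values, weights):
    """Ordered weighted geometric operator: weights apply to the sorted
    positions of the arguments, not to the arguments themselves."""
    v = np.sort(np.asarray(values, dtype=float))[::-1]   # descending reorder
    w = np.asarray(weights, dtype=float)
    assert np.isclose(w.sum(), 1.0) and np.all(v > 0)
    return float(np.prod(v ** w))

scores = [0.7, 0.9, 0.6]              # a candidate's scores on three criteria
print(owg(scores, [1/3, 1/3, 1/3]))   # neutral attitude: plain geometric mean
print(owg(scores, [0.6, 0.3, 0.1]))   # optimistic: emphasizes the best scores
```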
1640 3D Model Retrieval based on Normal Vector Interpolation Method
Authors: Ami Kim, Oubong Gwun, Juwhan Song
Abstract:
In this paper, we propose the distribution of mesh normal vector directions as a feature descriptor of a 3D model, since a normal vector shows the entire shape of a model well. The distribution of normal vectors is sampled in proportion to each polygon's area, so that surfaces with less area contribute less to the feature descriptor, in order to enhance retrieval performance. The ANMRR analysis indicates an enhancement of approximately 12.4% to 34.7% compared to the existing method.
Keywords: Interpolated normal vector, feature descriptor, 3D model retrieval.
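A minimal sketch of an area-weighted normal-direction descriptor for a triangle mesh (the paper's interpolation step is omitted, and only the polar angle of each normal is binned for brevity): face normals and areas come from a cross product, and the direction distribution is histogrammed with area weights.

```python
import numpy as np

def normal_descriptor(vertices, faces, bins=8):
    """vertices: (V, 3); faces: (F, 3) vertex indices.
    Returns an area-weighted histogram over normal directions."""
    tri = vertices[faces]                              # (F, 3, 3)
    n = np.cross(tri[:, 1] - tri[:, 0], tri[:, 2] - tri[:, 0])
    area = 0.5 * np.linalg.norm(n, axis=1)             # triangle areas
    n = n / np.linalg.norm(n, axis=1, keepdims=True)   # unit normals
    theta = np.arccos(np.clip(n[:, 2], -1, 1))         # polar angle of each normal
    hist, _ = np.histogram(theta, bins=bins, range=(0, np.pi), weights=area)
    return hist / hist.sum()                           # area-normalized distribution

# toy example: a unit tetrahedron
V = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
F = np.array([[0, 1, 2], [0, 1, 3], [0, 2, 3], [1, 2, 3]])
print(normal_descriptor(V, F))
```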
1639 Project Selection by Using Fuzzy AHP and TOPSIS Technique
Authors: S. Mahmoodzadeh, J. Shahrabi, M. Pariazar, M. S. Zaeri
Abstract:
In this article, using the fuzzy AHP and TOPSIS techniques, we propose a new method for the project selection problem. After reviewing four common methods of comparing investment alternatives (net present value, rate of return, benefit-cost analysis, and payback period), we use them as criteria in an AHP tree. In this methodology, by utilizing the Analytical Hierarchy Process improved with fuzzy set theory, we first calculate the weight of each criterion. Then, by implementing the TOPSIS algorithm, the assessment of projects is done. The obtained results are tested in a numerical example.
Keywords: Fuzzy AHP, project selection, TOPSIS technique.
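TOPSIS itself was sketched under entry 1652; the complementary step here is deriving criterion weights with fuzzy AHP. A minimal sketch using Buckley's geometric-mean method on triangular fuzzy numbers (one common fuzzy-AHP variant; the paper may use a different one):

```python
import numpy as np

def fuzzy_ahp_weights(M):
    """M: (n, n, 3) pairwise comparison matrix of triangular fuzzy numbers (l, m, u).
    Buckley's method: fuzzy geometric mean per row, normalize, defuzzify by centroid."""
    n = M.shape[0]
    g = np.prod(M, axis=1) ** (1.0 / n)    # row-wise fuzzy geometric means (l, m, u)
    total = g.sum(axis=0)                  # fuzzy sum of the row means
    # fuzzy division: (l, m, u) / (L, M, U) = (l/U, m/M, u/L)
    fuzzy_w = np.stack([g[:, 0] / total[2], g[:, 1] / total[1], g[:, 2] / total[0]],
                       axis=1)
    crisp = fuzzy_w.mean(axis=1)           # centroid defuzzification
    return crisp / crisp.sum()

# 2 criteria: criterion 0 judged "about 3 times" as important as criterion 1
one = (1, 1, 1)
M = np.array([[one, (2, 3, 4)],
              [(1 / 4, 1 / 3, 1 / 2), one]])
print(fuzzy_ahp_weights(M))                # roughly [0.74, 0.26]
```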
1638 Vague Multiple Criteria Decision Making Analysis Method for Fighter Aircraft Selection
Authors: C. Ardil
Abstract:
Fighter aircraft selection is one of the most critical strategic decisions in defense multiple criteria decision-making analysis, as it increases the decisive power of air defense and its superiority in the defense strategy. Vague set theory is an adequate approach for modeling vagueness, uncertainty, and imprecision in decision-making problems. This study integrates vague set theory and the technique for order of preference by similarity to ideal solution (TOPSIS) to support fighter aircraft selection, and the proposed method is applied to the selection of fighter aircraft for the Air Force. In the proposed approach, the ratings of alternatives and the importance weights of the criteria for fighter aircraft selection are represented by vague set theory. An illustrative example of fighter aircraft selection is given to demonstrate the applicability and effectiveness of the proposed approach. The fighter aircraft candidates were evaluated under six criteria: costability, payloadability, maneuverability, speedability, stealthility, and survivability. Analysis results show that the best fighter aircraft is the one with the highest closeness coefficient value. The proposed method can also be applied to solve other multiple criteria decision analysis problems.
Keywords: Fighter aircraft selection, vague set theory, fuzzy set theory, neutrosophic set theory, multiple criteria decision making analysis, MCDMA, TOPSIS
1637 Voice Command Recognition System Based on MFCC and VQ Algorithms
Authors: Mahdi Shaneh, Azizollah Taheri
Abstract:
The goal of this project is to design a system to recognize voice commands. Most voice recognition systems contain two main modules, "feature extraction" and "feature matching". In this project, the MFCC algorithm is used for the feature extraction module; with this algorithm, the cepstral coefficients are calculated on the mel frequency scale. VQ (vector quantization) is used to reduce the amount of data and decrease computation time. In the feature matching stage, Euclidean distance is applied as the similarity criterion. Because of the high accuracy of the algorithms used, the accuracy of this voice command system is high. Using these algorithms, with at least five repetitions of each command in a single training session and then two in each testing session, a zero error rate in the recognition of commands is achieved.
Keywords: MFCC, vector quantization, vocal tract, voice command.
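A minimal sketch of the two-module pipeline described above, assuming librosa for MFCC extraction: each command is represented by a VQ codebook of its MFCC frames, and a test utterance is matched to the codebook with the lowest average Euclidean distortion. The synthetic tone stands in for recorded command audio.

```python
import numpy as np
import librosa
from sklearn.cluster import KMeans

def mfcc_frames(y, sr):
    # MFCC features per frame, transposed to (frames, coefficients)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).T

def train_codebook(y, sr, size=16):
    # VQ: compress the frame cloud into `size` codewords
    return KMeans(n_clusters=size, n_init=10).fit(mfcc_frames(y, sr)).cluster_centers_

def match(y, sr, codebooks):
    """codebooks: {command: codewords}. Returns the command whose codebook
    yields the lowest average nearest-codeword distance for the utterance."""
    f = mfcc_frames(y, sr)
    def distortion(cb):
        d = np.linalg.norm(f[:, None, :] - cb[None, :, :], axis=2)
        return d.min(axis=1).mean()
    return min(codebooks, key=lambda c: distortion(codebooks[c]))

sr = 16000
tone = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)   # stand-in recorded command
books = {"on": train_codebook(tone, sr)}
print(match(tone, sr, books))                         # 'on'
```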
1636 Optimized Preprocessing for Accurate and Efficient Bioassay Prediction with Machine Learning Algorithms
Authors: Jeff Clarine, Chang-Shyh Peng, Daisy Sang
Abstract:
A bioassay is the measurement of the potency of a chemical substance by its effect on living animal or plant tissue. Bioassay data and chemical structures from pharmacokinetic and drug-metabolism screening are mined from, and housed in, multiple databases, and bioassay predictions are calculated accordingly to determine further advancement. This paper proposes a four-step preprocessing of datasets for improving bioassay predictions. The first step is instance selection, in which the dataset is categorized into training, testing, and validation sets. The second step is discretization, which partitions the data in consideration of accuracy vs. precision. The third step is normalization, where data are normalized between 0 and 1 for subsequent machine learning processing. The fourth step is feature selection, where key chemical properties and attributes are generated. The streamlined results are then analyzed for the prediction of effectiveness by various machine learning algorithms, including Pipeline Pilot, R, Weka, and Excel. Experiments and evaluations reveal the effectiveness of various combinations of preprocessing steps and machine learning algorithms in more consistent and accurate prediction.
Keywords: Bioassay, machine learning, preprocessing, virtual screen.
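A minimal scikit-learn sketch of the four preprocessing steps listed above (instance selection, discretization, normalization, feature selection) on stand-in data; the actual bioassay descriptors and tool chain (Pipeline Pilot, R, Weka, Excel) are not reproduced.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import KBinsDiscretizer, MinMaxScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import Pipeline
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 30))                       # stand-in chemical descriptors
y = (X[:, :3].sum(axis=1) > 0).astype(int)           # stand-in activity labels

# Step 1: instance selection -- split into training and held-out sets
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

pipe = Pipeline([
    ("discretize", KBinsDiscretizer(n_bins=8, encode="ordinal")),  # step 2
    ("normalize", MinMaxScaler()),                                 # step 3: [0, 1]
    ("select", SelectKBest(f_classif, k=10)),                      # step 4
    ("model", LogisticRegression(max_iter=1000)),
])
print("held-out accuracy:", pipe.fit(X_tr, y_tr).score(X_te, y_te))
```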