Search results for: Best subset technique.
3127 A Novel Prediction Method for Tag SNP Selection using Genetic Algorithm based on KNN
Authors: Li-Yeh Chuang, Yu-Jen Hou, Jr., Cheng-Hong Yang
Abstract:
Single nucleotide polymorphisms (SNPs) hold much promise as a basis for disease-gene association. However, research is limited by the cost of genotyping the tremendous number of SNPs. Therefore, it is important to identify a small subset of informative SNPs, the so-called tag SNPs. This subset consists of selected SNPs of the genotypes and accurately represents the rest of the SNPs. Furthermore, an effective evaluation method is needed to evaluate the prediction accuracy of a set of tag SNPs. In this paper, a genetic algorithm (GA) is applied to the tag SNP problem, and the K-nearest neighbor (K-NN) serves as the prediction method for tag SNP selection. The experimental data were taken from the HapMap project and consist of genotype data rather than haplotype data. The proposed method consistently identified tag SNPs with considerably better prediction accuracy than methods from the literature, while the number of tag SNPs identified was smaller than the number selected by the other methods. The run time of the proposed method was also much shorter than that of the SVM/STSA method at the same accuracy.
Keywords: Genetic Algorithm (GA), Genotype, Single nucleotide polymorphism (SNP), tag SNPs.
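A minimal sketch of the GA-with-K-NN idea from this abstract: candidate tag-SNP subsets are encoded as bit masks, and each mask's fitness is the mean K-NN accuracy with which the tag SNPs predict the remaining SNPs. The toy data, population size, operators, and the 0.01 size penalty are illustrative assumptions, not the authors' configuration.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = rng.integers(0, 3, size=(90, 40))   # 90 individuals x 40 SNPs (toy genotypes)

def fitness(mask):
    """Accuracy with which the tag SNPs (mask==True) predict the remaining SNPs via K-NN."""
    tags, rest = np.flatnonzero(mask), np.flatnonzero(~mask)
    if len(tags) == 0 or len(rest) == 0:
        return 0.0
    train, test = np.arange(0, 60), np.arange(60, 90)
    acc = []
    for j in rest:                        # predict each non-tag SNP from the tags
        knn = KNeighborsClassifier(n_neighbors=5).fit(X[train][:, tags], X[train, j])
        acc.append(knn.score(X[test][:, tags], X[test, j]))
    # penalize large tag sets so the GA also minimizes the number of tags
    return np.mean(acc) - 0.01 * len(tags)

pop = rng.random((20, X.shape[1])) < 0.2          # initial population of masks
for gen in range(30):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-10:]]       # truncation selection
    cut = rng.integers(1, X.shape[1], size=10)    # one-point crossover
    children = np.array([np.r_[parents[i, :c], parents[(i + 1) % 10, c:]]
                         for i, c in enumerate(cut)])
    children ^= rng.random(children.shape) < 0.01  # bit-flip mutation
    pop = np.vstack([parents, children])
best = pop[np.argmax([fitness(ind) for ind in pop])]
print("tag SNPs:", np.flatnonzero(best))
```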
3126 Integrated Subset Split for Balancing Network Utilization and Quality of Routing
Authors: S. V. Kasmir Raja, P. Herbert Raj
Abstract:
The overlay approach has been widely used by many service providers for Traffic Engineering (TE) in large Internet backbones. In the overlay approach, logical connections are set up between edge nodes to form a full-mesh virtual network on top of the physical topology. IP routing is then run over the virtual network, and traffic engineering objectives are achieved by carefully routing logical connections over the physical links. Although the overlay approach has been implemented in many operational networks, it has a number of well-known scaling issues. This paper proposes a new approach that achieves traffic engineering without full-mesh overlaying, using an integrated approach and an equal subset split method. Traffic engineering needs to determine the optimal routing of traffic over the existing network infrastructure by efficiently allocating resources in order to optimize traffic performance on an IP network. Although constraint-based routing [1] in Multi-Protocol Label Switching (MPLS) was developed to address this need, it is not yet widely tested or debugged, so Internet Service Providers (ISPs) resort to TE methods under Open Shortest Path First (OSPF), the most commonly used intra-domain routing protocol. Determining OSPF link weights for optimal network performance is an NP-hard problem. Since solving this problem exactly is not practical, we present a subset split method that improves efficiency and performance by minimizing the maximum link utilization in the network via a small number of link-weight modifications. The results of this method are compared against the results of the MPLS architecture [9] and other heuristic methods.
Keywords: Constraint based routing, Link Utilization, Subset split method and Traffic Engineering.
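The abstract's core loop, minimizing the maximum link utilization through a small number of link-weight changes, can be illustrated with a generic greedy weight-tuning sketch; the paper's integrated equal-subset-split method itself is not reproduced. The topology, capacities, and demand matrix below are toy assumptions, and traffic is routed on a single shortest path (no ECMP splitting).

```python
import networkx as nx

# Toy topology: edges with capacity; the OSPF costs ("weight") are what we tune.
G = nx.DiGraph()
links = [("A", "B", 10), ("B", "C", 10), ("A", "C", 5), ("A", "D", 10), ("D", "C", 10)]
for u, v, cap in links:
    G.add_edge(u, v, weight=1, cap=cap)
demands = {("A", "C"): 8}                 # traffic matrix: (src, dst) -> rate

def max_utilization(G):
    load = {e: 0.0 for e in G.edges}
    for (s, t), rate in demands.items():
        path = nx.shortest_path(G, s, t, weight="weight")
        for e in zip(path, path[1:]):
            load[e] += rate
    return max(load[e] / G.edges[e]["cap"] for e in load), load

# Greedy local search: raise the weight of the hottest link to push traffic off it.
best, _ = max_utilization(G)
for _ in range(10):
    _, load = max_utilization(G)
    worst = max(load, key=lambda e: load[e] / G.edges[e]["cap"])
    G.edges[worst]["weight"] += 1
    best = min(best, max_utilization(G)[0])
print("max link utilization:", best)
```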
3125 Combined Feature Based Hyperspectral Image Classification Technique Using Support Vector Machines
Authors: K. Kavitha, S. Arivazhagan
Abstract:
A spatial classification technique incorporating a state-of-the-art feature extraction algorithm is proposed in this paper for classifying the heterogeneous classes present in hyperspectral images. Classification accuracy can be improved only if both the feature extraction and the classifier selection are appropriate. As the classes in hyperspectral images are assumed to have different textures, textural classification is adopted. Run length feature extraction is employed along with principal components and independent components. A hyperspectral image of the Indiana site taken by AVIRIS is used for the experiment. Among the original 220 bands, a subset of 120 bands is selected. The Gray Level Run Length Matrix (GLRLM) is calculated for the first forty bands, and from the GLRLMs the run length features for individual pixels are calculated. Principal components are calculated for the next forty bands, and independent components for the remaining forty bands. As principal and independent components have the ability to represent the textural content of pixels, they are treated as features. The summation of the run length features, principal components, and independent components forms the combined features, which are used for classification. An SVM with a binary hierarchical tree is used to classify the hyperspectral image. Results are validated against ground truth, and accuracies are calculated.
Keywords: Multi-class, Run Length features, PCA, ICA, classification and Support Vector Machines.
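A sketch of the combined-feature pipeline under stated assumptions: since GLRLM run-length features need the spatial image, a stand-in per-pixel texture descriptor of the same shape is used for the first band group, and the number of components (5) is arbitrary rather than the paper's setting.

```python
import numpy as np
from sklearn.decomposition import PCA, FastICA
from sklearn.svm import SVC

rng = np.random.default_rng(1)
cube = rng.random((100, 120))            # 100 labeled pixels x 120 selected bands
labels = rng.integers(0, 4, size=100)    # toy ground truth with 4 classes

# Split the 120 bands into three groups of forty, as in the paper.
g1, g2, g3 = cube[:, :40], cube[:, 40:80], cube[:, 80:]

pca = PCA(n_components=5).fit_transform(g2)                       # principal components
ica = FastICA(n_components=5, random_state=1).fit_transform(g3)   # independent components

# Stand-in for the GLRLM run-length features of the first forty bands:
# any per-pixel texture descriptor of the same shape would slot in here.
rl = np.column_stack([g1.mean(axis=1), g1.std(axis=1),
                      g1.max(axis=1) - g1.min(axis=1),
                      np.abs(np.diff(g1, axis=1)).mean(axis=1), g1.sum(axis=1)])

combined = rl + pca + ica                # "summation" forms the combined features
clf = SVC(kernel="rbf").fit(combined[:80], labels[:80])
print("toy accuracy:", clf.score(combined[80:], labels[80:]))
```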
3124 Study of Efficiency and Capability LZW++ Technique in Data Compression
Authors: Mohd Kamir Yusof, Mohd Sufian Mat Deris, Ahmad Faisal Amri Abidin
Abstract:
The purpose of this paper is to show the efficiency and capability of LZW++ in data compression. The LZW++ technique is an enhancement of the existing LZW technique; the existing LZW must be modified to produce LZW++. LZW reads one character at a time, whereas LZW++ reads three characters at a time. This paper focuses on data compression, and the efficiency and capability of LZW++ were tested on different data formats such as doc, pdf, and text files. Several experiments were conducted with these different data formats. The results show that the LZW++ technique is better than the existing LZW technique in terms of file size.
Keywords: Data Compression, Huffman Encoding, LZW, LZW++, RLL, Size.
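For reference, a minimal classic LZW compressor; the LZW++ modification of consuming three characters per step is described only loosely in the abstract, so the exact LZW++ dictionary update is not reproduced here.

```python
def lzw_compress(data: bytes) -> list[int]:
    """Classic LZW: grow a dictionary of byte strings, emit integer codes."""
    table = {bytes([i]): i for i in range(256)}
    w, out = b"", []
    for c in data:
        wc = w + bytes([c])
        if wc in table:
            w = wc                      # keep extending the current match
        else:
            out.append(table[w])
            table[wc] = len(table)      # register the new string
            w = bytes([c])
    if w:
        out.append(table[w])
    return out

print(lzw_compress(b"TOBEORNOTTOBEORTOBEORNOT"))
```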
3123 The Effect of Increment in Simulation Samples on a Combined Selection Procedure
Authors: Mohammad H. Almomani, Rosmanjawati Abdul Rahman
Abstract:
Statistical selection procedures are used to select the best simulated system from a finite set of alternatives. In this paper, we present a procedure that can be used to select the best system when the number of alternatives is large. The proposed procedure consists of a combination of Ranking and Selection and Ordinal Optimization procedures. In order to improve the performance of Ordinal Optimization, the Optimal Computing Budget Allocation technique is used to determine the best simulation lengths for all simulated systems and to reduce the total computation time. We also examine the effect of increasing the number of simulation samples on the combined procedure. The results of a numerical illustration clearly show the effect of this increase on the proposed combined selection procedure.
Keywords: Indifference-Zone, Optimal Computing Budget Allocation, Ordinal Optimization, Ranking and Selection, Subset Selection.
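A sketch of the Optimal Computing Budget Allocation rule referenced in the abstract, assuming the best system is the one with the smallest mean; the allocation ratios follow the standard OCBA formulas from the literature, and the input means, standard deviations, and budget are toy values.

```python
import numpy as np

def ocba(means, stds, total_budget):
    """OCBA allocation ratios for selecting the system with the smallest mean:
    N_i/N_j = ((s_i/d_i)/(s_j/d_j))^2 for non-best i, j, and
    N_b = s_b * sqrt(sum over i != b of N_i^2 / s_i^2)."""
    means, stds = np.asarray(means, float), np.asarray(stds, float)
    b = int(np.argmin(means))                   # current best system
    delta = means - means[b]                    # optimality gaps
    ratio = np.ones_like(means)
    ref = None
    for i in range(len(means)):
        if i == b:
            continue
        r = (stds[i] / delta[i]) ** 2
        if ref is None:
            ref, ratio[i] = r, 1.0              # first non-best system as reference
        else:
            ratio[i] = r / ref
    others = np.arange(len(means)) != b
    ratio[b] = stds[b] * np.sqrt(np.sum((ratio[others] / stds[others]) ** 2))
    return np.rint(total_budget * ratio / ratio.sum()).astype(int)

print(ocba(means=[1.0, 1.2, 1.5, 2.0], stds=[0.3, 0.4, 0.2, 0.5], total_budget=1000))
```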
3122 Entropy Based Spatial Design: A Genetic Algorithm Approach (Case Study)
Authors: Abbas Siefi, Mohammad Javad Karimifar
Abstract:
We study the spatial design of experiments, where the goal is to select a most informative subset of prespecified size from a set of correlated random variables. The problem arises in many applied domains, such as meteorology, environmental statistics, and statistical geology. In these applications, observations can be collected at different locations and possibly at different times. In spatial design, when the design region and the set of interest are discrete, the covariance matrix completely describes any objective function, and our goal is to choose a feasible design that minimizes the resulting uncertainty. The problem is recast as that of maximizing the determinant of the covariance matrix of the chosen subset, which is NP-hard. When such designs are used in computer experiments, the design space is often very large and it is not possible to calculate the exact optimal solution. Heuristic optimization methods can discover efficient experimental designs in situations where traditional designs cannot be applied, exchange methods are ineffective, and an exact solution is not possible. We developed a genetic algorithm (GA) to take advantage of its exploratory power, and we demonstrate its successful application on a real design-of-experiments case with a very large design space.
Keywords: Spatial design of experiments, maximum entropy sampling, computer experiments, genetic algorithm.
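A compact sketch of the entropy-based design search: the score of a subset of sites is the log-determinant of the corresponding covariance submatrix, and a mutation-only GA (no crossover, for brevity) searches over fixed-size subsets. The covariance matrix and GA settings are synthetic assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 30, 8                              # 30 candidate sites, choose 8
A = rng.random((n, n))
cov = A @ A.T + n * np.eye(n)             # synthetic positive-definite covariance

def entropy_score(idx):
    """Maximum-entropy design criterion: log det of the chosen submatrix."""
    return np.linalg.slogdet(cov[np.ix_(idx, idx)])[1]

pop = [rng.choice(n, size=k, replace=False) for _ in range(30)]
for gen in range(50):
    pop.sort(key=entropy_score, reverse=True)
    pop = pop[:15]                        # keep the fitter half
    children = []
    for parent in pop:
        child = parent.copy()
        # mutation: swap one selected site for an unused one
        unused = rng.choice([i for i in range(n) if i not in child])
        child[rng.integers(k)] = unused
        children.append(child)
    pop += children
best = max(pop, key=entropy_score)
print("chosen sites:", np.sort(best), "log det:", round(entropy_score(best), 3))
```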
3121 Timescape-Based Panoramic View for Historic Landmarks
Authors: H. Ali, A. Whitehead
Abstract:
Providing a panoramic view of famous landmarks around the world offers artistic and historic value for historians, tourists, and researchers. Exploring the history of famous landmarks by presenting a comprehensive view of a temporal panorama merged with geographical and historical information presents a unique challenge: dealing with images that span a long period, from the 1800s up to the present. This work presents the concept of a temporal panorama through a timeline display of aligned historic and modern images for many famous landmarks. Building such a panorama requires a collection of hundreds of thousands of landmark images from the Internet, comprising both historic images and modern images of the digital age. These images have to be classified for subset selection in order to keep the images that best document a landmark's history chronologically. Processing historic images, captured with older analog technology under widely varying conditions, represents a big challenge when they have to be used alongside modern digital images. Successfully preparing historic images for the subsequent steps of temporal panorama creation is an active contribution to cultural heritage preservation and to the fulfillment of one of UNESCO's goals: preserving and displaying famous worldwide landmarks.
Keywords: Cultural heritage, image registration, image subset selection, registered image similarity, temporal panorama, timescapes.
3120 A Study on the Average Information Ratio of Perfect Secret-Sharing Schemes for Access Structures Based On Bipartite Graphs
Authors: Hui-Chuan Lu
Abstract:
A perfect secret-sharing scheme is a method to distribute a secret among a set of participants in such a way that only qualified subsets of participants can recover the secret, while the joint share of the participants in any unqualified subset is statistically independent of the secret. The collection of all qualified subsets is called the access structure of the perfect secret-sharing scheme. In a graph-based access structure, each vertex of a graph G represents a participant and each edge of G represents a minimal qualified subset. The average information ratio of a perfect secret-sharing scheme realizing the access structure based on G is defined as AR = (Σ_{v∈V(G)} H(v)) / (|V(G)| · H(s)), where s is the secret and v also denotes the share of participant v, both regarded as random variables, and H is the Shannon entropy. The infimum of the average information ratio over all possible perfect secret-sharing schemes realizing a given access structure is called the optimal average information ratio of that access structure. Most known results about the optimal average information ratio give upper or lower bounds on it. In this paper, we study access structures based on bipartite graphs and determine the exact values of the optimal average information ratio for some infinite classes of them.
Keywords: secret-sharing scheme, average information ratio, star covering, core sequence.
3119 Experimental Study of Eccentrically Loaded Columns Strengthened Using a Steel Jacketing Technique
Authors: Mohamed K. Elsamny, Adel A. Hussein, Amr M. Nafie, Mohamed K. Abd-Elhamed
Abstract:
An experimental study of reinforced concrete (RC) columns strengthened using a steel jacketing technique was conducted. The jacketing technique consisted of four vertical steel angles installed at the corners of the column and joined by horizontal steel straps confining the column externally. The effectiveness of the technique was evaluated by testing the RC column specimens under eccentric monotonic loading until failure occurred. Strain gauges were installed to monitor the strains in the internal reinforcement as well as in the external jacketing system. The effectiveness of the jacketing technique was demonstrated, and the parameters affecting the technique were studied.
Keywords: Reinforced Concrete Columns, Steel Jacketing, Strengthening, Eccentric Load.
3118 A New Bound on the Average Information Ratio of Perfect Secret-Sharing Schemes for Access Structures Based On Bipartite Graphs of Larger Girth
Authors: Hui-Chuan Lu
Abstract:
In a perfect secret-sharing scheme, a dealer distributes a secret among a set of participants in such a way that only qualified subsets of participants can recover the secret, while the joint share of the participants in any unqualified subset is statistically independent of the secret. The access structure of the scheme refers to the collection of all qualified subsets. In graph-based access structures, each vertex of a graph G represents a participant and each edge of G represents a minimal qualified subset. The average information ratio of a perfect secret-sharing scheme realizing a given access structure is the ratio of the average length of the shares given to the participants to the length of the secret. The infimum of the average information ratio over all possible perfect secret-sharing schemes realizing an access structure is called the optimal average information ratio of that access structure. We study the optimal average information ratio of access structures based on bipartite graphs. Building on some previous results, we give a bound on the optimal average information ratio for all bipartite graphs of girth at least six. This bound is the best possible obtainable with our approach for some classes of bipartite graphs.
Keywords: Secret-sharing scheme, average information ratio, star covering, deduction, core cluster.
3117 Enhancement of Low Contrast Satellite Images using Discrete Cosine Transform and Singular Value Decomposition
Authors: A. K. Bhandari, A. Kumar, P. K. Padhy
Abstract:
In this paper, a novel technique for the contrast enhancement of low-contrast satellite images is proposed, based on singular value decomposition (SVD) and the discrete cosine transform (DCT). The singular value matrix represents the intensity information of the given image, and any change in the singular values changes the intensity of the input image. The proposed technique converts the image into the SVD-DCT domain and, after normalizing the singular value matrix, reconstructs the enhanced image using the inverse DCT. The visual and quantitative results show the increased efficiency and flexibility of the proposed SVD-DCT method over existing methods such as linear contrast stretching, GHE, DWT-SVD, DWT, decorrelation stretching, and gamma-correction-based techniques.
Keywords: Singular Value Decomposition (SVD), discrete cosine transform (DCT), image equalization, satellite image contrast enhancement.
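A sketch following the general SVD-DCT scheme in the literature: the singular values of the image's DCT coefficients are scaled by a factor derived from an equalized copy, then the inverse DCT reconstructs the result. The rank-based equalization and the exact normalization here are assumptions and may differ from the paper's.

```python
import numpy as np
from scipy.fft import dctn, idctn

def svd_dct_enhance(img):
    """Contrast enhancement sketch: scale the singular values of the image's
    DCT coefficients by a factor derived from an equalized copy."""
    img = img.astype(float)
    # global histogram equalization as the reference (a simple rank-based version)
    flat = img.ravel()
    eq = (np.argsort(np.argsort(flat)) / (flat.size - 1) * 255).reshape(img.shape)

    D, De = dctn(img, norm="ortho"), dctn(eq, norm="ortho")
    U, s, Vt = np.linalg.svd(D, full_matrices=False)
    se = np.linalg.svd(De, compute_uv=False)
    xi = se.max() / s.max()               # correction factor for the singular values
    D_new = U @ np.diag(xi * s) @ Vt
    return np.clip(idctn(D_new, norm="ortho"), 0, 255)

low = np.clip(np.random.default_rng(3).normal(100, 15, (64, 64)), 0, 255)
print("std before/after:", low.std().round(1), svd_dct_enhance(low).std().round(1))
```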
3116 The Feedback Control for Distributed Systems
Authors: Kamil Aida-zade, C. Ardil
Abstract:
We study the synthesis of lumped-source controls for objects with distributed parameters on the basis of continuous observation of the phase state at given points of the object. In the proposed approach, the phase state space (phase space) is partitioned beforehand at the observable points into given subsets (zones). The synthesizing control actions are taken from the class of piecewise constant functions, and their current values are determined by the subset of the phase space that contains the aggregate of the current states of the object at the observable points (in these states the control actions take constant values). In this paper, such synthesized control actions are called zone control actions. A technique to obtain optimal values of zone control actions with the use of smooth optimization methods is given. To this end, formulas for the gradient of the objective functional in the space of zone control actions are derived.
Keywords: Feedback control, distributed systems, smooth optimization methods, lumped control synthesis.
3115 Development of Energy Management System Based on Internet of Things Technique
Authors: Wen-Jye Shyr, Chia-Ming Lin, Hung-Yun Feng
Abstract:
The purpose of this study was to develop an energy management system for university campuses based on the Internet of Things (IoT) technique. The proposed IoT technique, based on WebAccess, is accessed through the Internet Explorer web browser and uses the TCP/IP protocol. A case study of an IoT-based lighting energy usage management system is presented. The structure of the proposed IoT technique comprises a perception layer, an equipment layer, a control layer, an application layer, and a network layer.
Keywords: Energy management, IoT technique, Sensor, WebAccess
3114 FCNN-MR: A Parallel Instance Selection Method Based on Fast Condensed Nearest Neighbor Rule
Authors: Lu Si, Jie Yu, Shasha Li, Jun Ma, Lei Luo, Qingbo Wu, Yongqi Ma, Zhengji Liu
Abstract:
The instance selection (IS) technique is used to reduce data size and thus improve the performance of data mining methods. Recently, to process very large data sets, several proposed methods divide the training set into disjoint subsets and apply IS algorithms independently to each subset. In this paper, we analyze the limitations of these methods and give our viewpoint on how to divide and conquer in the IS procedure. Then, based on the fast condensed nearest neighbor (FCNN) rule, we propose an instance selection method for large data sets built on the MapReduce framework. Besides preserving prediction accuracy and reduction rate, it has two desirable properties: first, it reduces the workload in the aggregation node; second, and most important, it produces the same result as the sequential version, which other parallel methods cannot achieve. We evaluate the performance of FCNN-MR on one small data set and two large data sets. The experimental results show that it is effective and practical.
Keywords: Instance selection, data reduction, MapReduce, kNN.
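FCNN is a fast variant of condensed nearest neighbor; the sketch below shows the classic condensation step it accelerates (keep only instances needed for 1-NN to classify the training set correctly), on toy Gaussian data. The MapReduce partitioning and the FCNN-specific speed-ups are not shown.

```python
import numpy as np

def condense(X, y, seed=0):
    """Condensed nearest neighbour sketch: retain only the instances needed
    for a 1-NN classifier to label the rest of the training set correctly."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(X))
    keep = [order[0]]
    changed = True
    while changed:                       # repeat until a full clean pass
        changed = False
        for i in order:
            d = np.linalg.norm(X[keep] - X[i], axis=1)
            if y[keep][np.argmin(d)] != y[i] and i not in keep:
                keep.append(i)           # misclassified: add it to the subset
                changed = True
    return np.array(keep)

rng = np.random.default_rng(4)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(4, 1, (100, 2))])
y = np.r_[np.zeros(100), np.ones(100)]
S = condense(X, y)
print(f"kept {len(S)} of {len(X)} instances")
```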
3113 Object Allocation with Replication in Distributed Systems
Authors: H. T. Barney, G. C. Low
Abstract:
The design of distributed systems involves dividing the system into partitions (or components) and then allocating these partitions to physical nodes. Several techniques have been proposed for both the partitioning and allocation processes. These existing techniques suffer from a number of limitations, including lack of support for replication. Replication is difficult to use effectively but has the potential to greatly improve the performance of a distributed system. This paper presents a new technique for allocating objects in order to improve performance in a distributed system that supports replication. The performance of the proposed technique is demonstrated and tested on an example system, and compared with the performance of an existing technique in order to demonstrate both the validity and the superiority of the new technique when developing a distributed system that can utilise object replication.
Keywords: Allocation, Distributed Systems, Replication.
3112 Blind Channel Estimation Based on URV Decomposition Technique for Uplink of MC-CDMA
Authors: Pradya Pornnimitkul, Suwich Kunaruttanapruk, Bamrung Tau Sieskul, Somchai Jitapunkul
Abstract:
In this paper, we investigate a blind channel estimation method for multi-carrier CDMA systems that uses a subspace decomposition technique. This technique exploits the orthogonality between the noise subspace and the received user codes to obtain the channel of each user. In the past, the singular value decomposition (SVD) was used, but the SVD has high computational complexity. In this paper, we instead use the URV decomposition, which serves as an intermediary between the QR decomposition and the SVD, to track the noise subspace of the received data. The URV decomposition achieves almost the same estimation performance as the SVD, but with lower computational complexity.
Keywords: Channel estimation, MC-CDMA, SVD, URV.
3111 Robot Movement Using the Trust Region Policy Optimization
Authors: Romisaa Ali
Abstract:
The policy gradient approach is a subset of deep reinforcement learning (DRL), which combines deep neural networks (DNNs) with reinforcement learning (RL). This approach finds the optimal policy for robot movement based on the experience the robot gains from interaction with its environment. Unlike previous policy gradient algorithms, which could not handle the variance and bias errors introduced by the DNN model through over- or underestimation, this algorithm is capable of handling both types of error. This article discusses the state-of-the-art (SOTA) policy gradient technique, trust region policy optimization (TRPO), applying it in various environments and comparing it with another policy gradient method, Proximal Policy Optimization (PPO), to explain their robust optimization. Experience data is gathered during various training phases, and the impact of hyperparameters on neural network performance is observed.
Keywords: Deep neural networks, deep reinforcement learning, Proximal Policy Optimization, state-of-the-art, trust region policy optimization.
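For contrast with TRPO's constrained update, here is the clipped surrogate objective that PPO optimizes, evaluated on toy numbers; TRPO instead maximizes the unclipped ratio-weighted advantage subject to a KL-divergence trust-region constraint solved with conjugate gradient, which this snippet does not implement.

```python
import numpy as np

def clipped_surrogate(ratio, advantage, eps=0.2):
    """PPO's clipped objective: take the pessimistic (minimum) of the
    unclipped and clipped policy-ratio terms, averaged over the batch."""
    return np.minimum(ratio * advantage,
                      np.clip(ratio, 1 - eps, 1 + eps) * advantage).mean()

# ratio = pi_new(a|s) / pi_old(a|s) for a batch of sampled actions
ratio = np.array([0.5, 0.9, 1.1, 1.6])
advantage = np.array([1.0, -1.0, 2.0, 1.0])
print(clipped_surrogate(ratio, advantage))   # large ratios gain no extra credit
```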
3110 Project Selection by Using a Fuzzy TOPSIS Technique
Authors: M. Salehi, R. Tavakkoli-Moghaddam
Abstract:
Selecting a project from a set of possible alternatives is a difficult task that the decision maker (DM) has to face. In this paper, we propose a new method for the project selection problem using a fuzzy TOPSIS technique. After reviewing four common methods of comparing investment alternatives (net present value, rate of return, benefit-cost analysis, and payback period), we use them as criteria in a TOPSIS technique. First, we calculate the weight of each criterion by pairwise comparison, and then we utilize the improved TOPSIS assessment for the project selection.
Keywords: Fuzzy Theory, Pairwise Comparison, Project Selection, TOPSIS Technique.
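A sketch of the crisp TOPSIS ranking over the four criteria named in the abstract; the fuzzy variant would replace the crisp ratings and weights with fuzzy numbers. The project matrix and weights are toy values standing in for the pairwise-comparison results.

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """TOPSIS ranking: closeness to the ideal and anti-ideal solutions.
    benefit[j] is True when larger values of criterion j are better."""
    M = np.asarray(matrix, float)
    R = M / np.linalg.norm(M, axis=0)            # vector-normalize each criterion
    V = R * weights                              # weighted normalized matrix
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    return d_neg / (d_pos + d_neg)               # higher closeness = better project

# criteria: NPV, rate of return, benefit/cost ratio, payback period (cost-type)
projects = [[120, 0.15, 1.4, 5], [90, 0.22, 1.2, 3], [150, 0.10, 1.6, 7]]
weights = [0.4, 0.3, 0.2, 0.1]                   # e.g. from pairwise comparison
print(topsis(projects, weights, benefit=[True, True, True, False]).round(3))
```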
3109 Thread Lift: Classification, Technique, and How to Approach to the Patient
Authors: Panprapa Yongtrakul, Punyaphat Sirithanabadeekul, Pakjira Siriphan
Abstract:
Background: The thread lift technique has become popular because it is less invasive, requires a shorter operation, involves less downtime, and results in fewer postoperative complications. The advantage of the technique is that the thread can be inserted under the skin without the need for long incisions. Currently, there are many thread lift techniques, each using specific types of thread on specific areas, such as the mid-face, lower face, or neck. Objective: To review thread lift techniques for specific areas according to the type of thread and patient selection, and to determine how to match the most appropriate technique to the patient. Materials and Methods: A literature review was conducted by searching PubMed and MEDLINE; the findings were then compiled and summarized. Result: We have divided our protocols into two sections: protocols for short-suture techniques and protocols for long-suture techniques. We also created 3D pictures for each technique to enhance understanding and application in a clinical setting. Conclusion: There are advantages and disadvantages to both short-suture and long-suture techniques. The best outcome for each patient depends on appropriate patient selection and on determining the most suitable technique for the defect and the area of patient concern.
Keywords: Thread lift, thread lift method, thread lift technique, thread lift procedure, threading.
3108 A Novel Digital Watermarking Technique Based on ISB (Intermediate Significant Bit)
Authors: Akram M. Zeki, Azizah A. Manaf
Abstract:
The least significant bit (LSB) technique is the earliest developed technique in watermarking, and it is also the simplest, most direct, and most common one. It essentially involves embedding the watermark by replacing the least significant bit of the image data with a bit of the watermark data. The disadvantage of LSB is that it is not robust against attacks. In this study, the intermediate significant bit (ISB) is used in order to improve the robustness of the watermarking system. The aim of this model is to replace the watermarked image pixels with new pixels that can protect the watermark data against attacks while keeping the new pixels very close to the original pixels in order to preserve the quality of the watermarked image. The technique is based on testing the value of the watermark pixel according to the range of each bit-plane.
Keywords: Watermarking, LSB, ISB, Robustness.
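A sketch contrasting LSB embedding with embedding in an intermediate bit-plane; the paper's ISB method additionally adjusts each new pixel toward the original within the bit-plane's range, which this naive bit replacement omits. The choice of plane 3 as the "intermediate" plane is an assumption.

```python
import numpy as np

def embed_bitplane(pixels, bits, plane):
    """Embed watermark bits into bit-plane `plane` (0 = LSB, 3 = an ISB)."""
    out = pixels.copy()
    mask = np.uint8(1 << plane)
    out &= ~mask                                 # clear the target bit
    out |= (bits.astype(np.uint8) << plane)      # write the watermark bit
    return out

rng = np.random.default_rng(5)
img = rng.integers(0, 256, size=64, dtype=np.uint8)
wm = rng.integers(0, 2, size=64)

lsb = embed_bitplane(img, wm, plane=0)           # classic LSB: distortion <= 1
isb = embed_bitplane(img, wm, plane=3)           # ISB plane: more robust, larger error
print("max error LSB:", np.abs(lsb.astype(int) - img.astype(int)).max())
print("max error ISB:", np.abs(isb.astype(int) - img.astype(int)).max())
```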
3107 The Balanced Hamiltonian Cycle on the Toroidal Mesh Graphs
Authors: Wen-Fang Peng, Justie Su-Tzu Juan
Abstract:
The balanced Hamiltonian cycle problem is a fairly new topic in graph theory. Given a graph G = (V, E) whose edge set can be partitioned into k dimensions, for a positive integer k, and a Hamiltonian cycle C on G, the set of all i-dimensional edges of C, which is a subset of E(C), is denoted by Ei(C).
Keywords: Hamiltonian cycle, balanced, Cartesian product.
3106 Application of l1-Norm Minimization Technique to Image Retrieval
Authors: C. S. Sastry, Saurabh Jain, Ashish Mishra
Abstract:
Image retrieval is a topic of currently high scientific interest. The important steps in an image retrieval system are the extraction of discriminative features and a feasible similarity metric for retrieving the database images that are similar in content to the search image. Gabor filtering is a widely adopted technique for feature extraction from texture images. The recently proposed sparsity-promoting l1-norm minimization technique finds the sparsest solution of an under-determined system of linear equations. In the present paper, the l1-norm minimization technique is used as a similarity metric in image retrieval. It is demonstrated through simulation results that the l1-norm minimization technique provides a promising alternative to existing similarity metrics. In particular, the cases where the l1-norm minimization technique works better than the Euclidean distance metric are singled out.
Keywords: l1-norm minimization, content based retrieval, modified Gabor function.
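A sketch of the l1-norm minimization step as a linear program: min ||x||_1 subject to Ax = b, rewritten with auxiliary variables t. In retrieval, the sparsity or residual of such representations would serve as the similarity score; the matrix sizes and sparse signal below are toy values.

```python
import numpy as np
from scipy.optimize import linprog

def l1_min(A, b):
    """Solve min ||x||_1 subject to Ax = b as a linear program in (x, t)."""
    m, n = A.shape
    c = np.r_[np.zeros(n), np.ones(n)]           # minimize the sum of t
    I = np.eye(n)
    A_ub = np.block([[I, -I], [-I, -I]])         #  x - t <= 0  and  -x - t <= 0
    b_ub = np.zeros(2 * n)
    A_eq = np.hstack([A, np.zeros((m, n))])      # Ax = b; t does not enter the equality
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b,
                  bounds=[(None, None)] * n + [(0, None)] * n)
    return res.x[:n]

rng = np.random.default_rng(6)
A = rng.normal(size=(10, 30))                    # under-determined system
x_true = np.zeros(30); x_true[[3, 17]] = [1.0, -2.0]
x_hat = l1_min(A, A @ x_true)
print("recovered support:", np.flatnonzero(np.abs(x_hat) > 1e-6))
```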
3105 Merging and Comparing Ontologies Generically
Authors: Xiuzhan Guo, Arthur Berrill, Ajinkya Kulkarni, Kostya Belezko, Min Luo
Abstract:
Ontology operations, e.g., aligning and merging, have been studied and implemented extensively in different settings, such as categorical operations, relation algebras, and typed graph grammars, with different concerns. However, aligning and merging operations in these settings share some generic properties, e.g., idempotence, commutativity, associativity, and representativity, which are defined on an ontology merging system: a nonempty set of the ontologies concerned, a binary relation on the set modeling ontology aligning, and a partial binary operation on the set modeling ontology merging. Given an ontology repository, a finite subset of the set of ontologies, its merging closure is the smallest subset of the set of ontologies that contains the repository and is closed with respect to merging. If the idempotence, commutativity, associativity, and representativity properties are satisfied, then both the set of ontologies and the merging closure of the ontology repository are naturally partially ordered by merging, and the merging closure of the ontology repository is finite and can be computed, compared, and sorted efficiently, including sorting, selecting, and querying specific elements, e.g., maximal and minimal ontologies. An ontology V-alignment pair is a pair of ontology homomorphisms with a common domain. We also show that the ontology merging system given by ontology V-alignment pairs and pushouts satisfies the idempotence, commutativity, associativity, and representativity properties, so that the merging system is partially ordered and the merging closure of a given repository with respect to pushouts can be computed efficiently.
Keywords: Ontology aligning, ontology merging, merging system, poset, merging closure, ontology V-alignment pair, ontology homomorphism, ontology V-alignment pair homomorphism, pushout.
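A minimal sketch of computing a merging closure by fixpoint iteration, assuming a partial merge operation that returns None for non-mergeable pairs; the toy merge (union of overlapping axiom sets) is an illustrative stand-in, not the paper's pushout construction.

```python
def merging_closure(repository, merge):
    """Smallest superset of `repository` closed under the partial binary
    operation `merge` (which returns None when a pair is not mergeable)."""
    closure = set(repository)
    frontier = list(closure)
    while frontier:
        o = frontier.pop()
        for p in list(closure):
            for merged in (merge(o, p), merge(p, o)):
                if merged is not None and merged not in closure:
                    closure.add(merged)
                    frontier.append(merged)
    return closure

# Toy ontologies as frozensets of axioms; merge = union when they overlap.
merge = lambda a, b: a | b if a & b else None
repo = [frozenset({"cat", "animal"}), frozenset({"animal", "dog"}),
        frozenset({"dog", "pet"})]
print(sorted(map(sorted, merging_closure(repo, merge))))
```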
3104 Face Recognition using a Kernelization of Graph Embedding
Authors: Pang Ying Han, Hiew Fu San, Ooi Shih Yin
Abstract:
Linearization of graph embedding has emerged as an effective dimensionality reduction technique in pattern recognition. However, it may not be optimal for nonlinearly distributed real-world data, such as faces, due to its linear nature. Therefore, a kernelization of graph embedding is proposed as a dimensionality reduction technique for face recognition. In order to further boost the recognition capability of the proposed technique, Fisher's criterion is adopted in the objective function for better data discrimination. The proposed technique is able to characterize the underlying intra-class structure as well as the inter-class separability. Experimental results on the FRGC database validate the effectiveness of the proposed technique as a feature descriptor.
Keywords: Face recognition, Fisher discriminant, graph embedding, kernelization.
3103 Characterization of Adhesive Layers in Sandwich Composites by Nondestructive Technique
Authors: E. Barkanov, E. Skukis, M. Wesolowski, A. Chate
Abstract:
A new nondestructive technique, namely an inverse technique based on vibration tests, is developed to characterize the nonlinear mechanical properties of adhesive layers in sandwich composites. An adhesive layer is described as a viscoelastic isotropic material with storage and loss moduli, both of which are frequency-dependent over a wide frequency range. An optimization based on the planning of experiments and the response surface technique is applied to minimize the error functional, considerably decreasing the computational expense. The developed identification technique has been tested on aluminum panels and successfully applied to characterize the viscoelastic material properties of the 3M damping polymer ISD-112 used as a core material in sandwich panels.
Keywords: Adhesive layer, finite element method, inverse technique, sandwich panel, vibration test, viscoelastic material properties.
3102 Image Ranking to Assist Object Labeling for Training Detection Models
Authors: Tonislav Ivanov, Oleksii Nedashkivskyi, Denis Babeshko, Vadim Pinskiy, Matthew Putman
Abstract:
Training a machine learning model for object detection that generalizes well is known to benefit from a training dataset with diverse examples. However, training datasets usually contain many repeats of common examples of a class and lack rarely seen examples. This is due to the process commonly used during human annotation, where a person proceeds sequentially through a list of images, labeling a sufficiently high total number of examples. Instead, the method presented here involves an active process where, after the initial labeling of several images is completed, the next subset of images for labeling is selected by an algorithm. This process of algorithmic image selection and manual labeling continues iteratively. The algorithm used for image selection is a deep learning algorithm based on a U-shaped architecture, which quantifies the presence of unseen data in each image in order to find the images that contain the most novel examples. Moreover, the location of the unseen data in each image is highlighted, aiding the labeler in spotting these examples. Experiments performed on semiconductor wafer data show that labeling a subset of the data curated by this algorithm resulted in a model with better performance than a model produced by sequentially labeling the same amount of data, and in performance similar to that of a model trained on exhaustive labeling of the whole dataset. Overall, the proposed approach yields a dataset with diverse examples per class and more balanced classes, which proves beneficial when training a deep learning model.
Keywords: Computer vision, deep learning, object detection, semiconductor.
3101 A Sufficient Condition for Graphs to Have Hamiltonian [a, b]-Factors
Authors: Sizhong Zhou
Abstract:
Let a and b be nonnegative integers with 2 ≤ a < b, and let G be a Hamiltonian graph of order n with n ≥ (a+b−4)(a+b−2)/(b−2). An [a, b]-factor F of G is called a Hamiltonian [a, b]-factor if F contains a Hamiltonian cycle. In this paper, it is proved that G has a Hamiltonian [a, b]-factor if |NG(X)| > ((a−1)n + |X| − 1)/(a+b−3) for every nonempty independent subset X of V(G) and δ(G) > ((a−1)n + a + b − 4)/(a+b−3).
Keywords: graph, minimum degree, neighborhood, [a, b]-factor, Hamiltonian [a, b]-factor.
3100 An Efficient Energy Adaptive Hybrid Error Correction Technique for Underwater Wireless Sensor Networks
Authors: Ammar Elyas babiker, M.Nordin B. Zakaria, Hassan Yosif, Samir B. Ibrahim
Abstract:
Variable channel conditions in underwater networks, together with variable distances between sensors due to water currents, lead to a variable bit error rate (BER). This variability in BER greatly affects the energy efficiency of the error correction techniques used. In this paper, an efficient energy adaptive hybrid error correction technique (AHECT) is proposed. AHECT adaptively switches the error correction technique from pure retransmission (ARQ) in low-BER cases to a hybrid technique with variable encoding rates (ARQ & FEC) in high-BER cases. An adaptation algorithm is proposed that depends on a precalculated packet acceptance rate (PAR) look-up table, the current BER, the packet size, and the error correction technique in use. Based on this adaptation algorithm, a periodic 3-bit feedback is added to the acknowledgment packet to state which error correction technique is suitable for the current channel conditions and distance. Comparative studies against other techniques show that AHECT is more energy efficient and has a higher probability of success.
Keywords: Underwater communication, wireless sensor networks, error correction technique, energy efficiency.
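A sketch of the adaptation step, assuming a PAR look-up table keyed by technique, BER band, and packet size; every table value, the BER threshold, and the technique names below are hypothetical placeholders, since the abstract does not give them.

```python
def choose_technique(ber, packet_size, par_table, threshold=0.9):
    """Pick plain ARQ or a hybrid ARQ+FEC entry from a precomputed
    packet-acceptance-rate (PAR) table; all table contents are hypothetical."""
    band = "low" if ber < 1e-4 else "high"          # assumed BER banding
    candidates = [k for k in par_table if k[1] == band and k[2] == packet_size]
    # prefer plain ARQ while its acceptance rate is good enough; otherwise
    # take the hybrid entry with the highest PAR
    arq = [k for k in candidates if k[0] == "ARQ"]
    if arq and par_table[arq[0]] >= threshold:
        return arq[0]
    return max(candidates, key=par_table.get)

par_table = {                                        # hypothetical values
    ("ARQ",       "low",  256): 0.97, ("ARQ",       "high", 256): 0.41,
    ("ARQ+FEC12", "low",  256): 0.98, ("ARQ+FEC12", "high", 256): 0.88,
    ("ARQ+FEC34", "low",  256): 0.98, ("ARQ+FEC34", "high", 256): 0.79,
}
print(choose_technique(1e-5, 256, par_table))   # -> plain ARQ
print(choose_technique(1e-3, 256, par_table))   # -> a hybrid ARQ+FEC entry
```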
3099 Relative Radiometric Correction of Cloudy Multitemporal Satellite Imagery
Authors: Seema Biday, Udhav Bhosle
Abstract:
Repeated observation of a given area over time yields potential for many forms of change detection analysis. These repeated observations are confounded in terms of radiometric consistency due to changes in sensor calibration over time, differences in illumination, observation angles, and variation in atmospheric effects. This paper demonstrates the applicability of an empirical relative radiometric normalization method to a set of multitemporal cloudy images acquired by the Resourcesat-1 LISS III sensor. The objective of this study is to detect and remove cloud cover and to normalize the images radiometrically. Cloud detection is achieved using the Average Brightness Threshold (ABT) algorithm; the detected cloud is removed and replaced with data from other images of the same area. After cloud removal, the proposed normalization method is applied to reduce the radiometric influence of non-surface factors. This process identifies landscape elements whose reflectance values are nearly constant over time, i.e., the subset of non-changing pixels, using a frequency-based correlation technique. The quality of the radiometric normalization is statistically assessed by the R2 value and the mean square error (MSE) between each pair of analogous bands.
Keywords: Correlation, Frequency domain, Multitemporal, Relative Radiometric Correction.
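A sketch of empirical relative normalization under simplified assumptions: pseudo-invariant pixels are chosen by smallest change between dates (the paper uses a frequency-based correlation technique instead), and a linear gain/offset fitted on them is applied to the subject image.

```python
import numpy as np

def relative_normalize(subject, reference, keep=0.3):
    """Normalize `subject` to `reference` using pseudo-invariant pixels:
    the fraction of pixels whose values changed least between the dates."""
    s, r = subject.ravel().astype(float), reference.ravel().astype(float)
    stable = np.argsort(np.abs(s - r))[: int(keep * s.size)]   # least-changed pixels
    gain, offset = np.polyfit(s[stable], r[stable], deg=1)     # linear mapping
    return gain * subject + offset

rng = np.random.default_rng(7)
ref = rng.uniform(50, 200, (64, 64))
subj = 0.8 * ref + 20 + rng.normal(0, 2, ref.shape)    # radiometric drift + noise
norm = relative_normalize(subj, ref)
print("MSE before:", ((subj - ref) ** 2).mean().round(1),
      "after:", ((norm - ref) ** 2).mean().round(1))
```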
3098 A New Approach to Face Recognition Using Dual Dimension Reduction
Authors: M. Almas Anjum, M. Younus Javed, A. Basit
Abstract:
In this paper, a new approach to face recognition is presented that achieves double dimension reduction, making the system computationally efficient with better recognition results and outperforming the common DCT technique of face recognition. In pattern recognition techniques, the discriminative information of an image increases with resolution up to a certain extent; consequently, face recognition results change with face image resolution and are optimal at a certain resolution level. In the proposed model, an image decimation algorithm is first applied to the face image for dimension reduction, down to the resolution level that provides the best recognition results. Owing to its computational speed and feature extraction potential, the Discrete Cosine Transform (DCT) is then applied to the face image, and a subset of DCT coefficients from low to mid frequencies that represents the face adequately and provides the best recognition results is retained. A tradeoff between the decimation factor, the number of DCT coefficients retained, and the recognition rate with minimum computation is obtained. Preprocessing of the image is carried out to increase its robustness against variations in pose and illumination level. This new model has been tested on different databases, including the ORL, Yale, and EME color databases.
Keywords: Biometrics, DCT, Face Recognition, Illumination, Computation, Feature extraction.
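A sketch of the dual dimension reduction: naive subsampling stands in for the paper's decimation algorithm, followed by a 2-D DCT and retention of a low-to-mid-frequency coefficient subset in zigzag order; the decimation factor and coefficient count are illustrative, whereas the paper tunes them empirically.

```python
import numpy as np
from scipy.fft import dctn

def face_features(img, factor=2, n_coeff=64):
    """Dual dimension reduction sketch: decimate the image, take its 2-D DCT,
    and keep a low-to-mid frequency subset of coefficients in zigzag order."""
    small = img[::factor, ::factor]               # simple decimation (stand-in)
    D = dctn(small.astype(float), norm="ortho")
    h, w = D.shape
    # zigzag order = coefficients sorted by antidiagonal (frequency) index
    order = sorted(((i, j) for i in range(h) for j in range(w)),
                   key=lambda ij: (ij[0] + ij[1], ij[0]))
    return np.array([D[ij] for ij in order[:n_coeff]])

img = np.random.default_rng(8).integers(0, 256, (64, 64))
print(face_features(img).shape)                   # -> (64,) feature vector per face
```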