Search results for: smartphone applications; photo mosaic; similarity search; data mining; large-scale image databases.
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 11205

9585 Modeling Uncertainty in Multiple Criteria Decision Making Using the Technique for Order Preference by Similarity to Ideal Solution for the Selection of Stealth Combat Aircraft

Authors: C. Ardil

Abstract:

Uncertainty set theory is a generalization of fuzzy set theory and intuitionistic fuzzy set theory. It serves as an effective tool for dealing with inconsistent, imprecise, and vague information. The technique for order preference by similarity to ideal solution (TOPSIS) is a multiple-attribute method used to identify solutions from a finite set of alternatives. It simultaneously minimizes the distance from an ideal point and maximizes the distance from a nadir point. In this paper, an extension of the TOPSIS method for multiple attribute group decision-making (MAGDM) based on uncertainty sets is presented. In uncertainty decision analysis, decision-makers express information about attribute values and weights using uncertainty numbers to select the best stealth combat aircraft.
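
For reference, the ranking step of classical crisp TOPSIS can be sketched as follows; this is a minimal illustration of the standard method rather than the uncertainty-set extension proposed in the paper, and the decision matrix, weights, and criterion directions are made-up examples.

import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives with classical TOPSIS.
    matrix: alternatives x criteria, weights sum to 1,
    benefit[j] is True if criterion j is to be maximized."""
    m = np.asarray(matrix, dtype=float)
    # Vector-normalize each criterion column, then apply the weights.
    norm = m / np.linalg.norm(m, axis=0)
    v = norm * np.asarray(weights, dtype=float)
    # Ideal (best) and nadir (worst) points per criterion.
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    nadir = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_ideal = np.linalg.norm(v - ideal, axis=1)
    d_nadir = np.linalg.norm(v - nadir, axis=1)
    # Closeness coefficient: larger is better.
    return d_nadir / (d_ideal + d_nadir)

# Hypothetical aircraft scored on cost (minimize), range (maximize), stealth (maximize).
scores = topsis([[80, 1200, 7], [95, 1500, 9], [70, 1000, 6]],
                weights=[0.3, 0.3, 0.4], benefit=[False, True, True])
print(scores.argsort()[::-1])  # alternatives ranked best-first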

Keywords: Uncertainty set, stealth combat aircraft selection, multiple criteria decision-making analysis, MCDM, uncertainty decision analysis, TOPSIS

9584 Automatic Text Summarization

Authors: Mohamed Abdel Fattah, Fuji Ren

Abstract:

This work proposes an approach to automatic text summarization. The approach is a trainable summarizer that takes into account several features of each sentence, including sentence position, positive keywords, negative keywords, sentence centrality, sentence resemblance to the title, inclusion of named entities, inclusion of numerical data, relative sentence length, the bushy path of the sentence, and aggregated similarity, to generate summaries. First, we investigate the effect of each sentence feature on the summarization task. Then we use the score function over all features to train genetic algorithm (GA) and mathematical regression (MR) models and obtain a suitable combination of feature weights. The performance of the proposed approach is measured at several compression rates on a data corpus composed of 100 English religious articles. The results of the proposed approach are promising.
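
The trainable summarizer combines per-sentence feature scores through learned weights; the sketch below shows only the weighted-sum scoring and sentence selection step, with hypothetical feature values and weights (the GA/MR weight training itself is not shown).

import numpy as np

def summarize(feature_matrix, weights, compression=0.3):
    """Score each sentence as a weighted sum of its features and keep the
    top fraction, preserving original sentence order.
    feature_matrix: sentences x features (position, keywords, centrality, ...)."""
    scores = np.asarray(feature_matrix) @ np.asarray(weights)
    n_keep = max(1, int(len(scores) * compression))
    keep = np.argsort(scores)[::-1][:n_keep]
    return sorted(keep.tolist())

# Hypothetical 5 sentences x 4 features, weights as found by GA or regression.
features = [[1.0, 0.2, 0.5, 0.1],
            [0.8, 0.9, 0.4, 0.0],
            [0.2, 0.1, 0.9, 0.7],
            [0.1, 0.0, 0.2, 0.3],
            [0.5, 0.6, 0.7, 0.2]]
print(summarize(features, weights=[0.4, 0.3, 0.2, 0.1]))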

Keywords: Automatic Summarization, Genetic Algorithm, Mathematical Regression, Text Features.

9583 Codebook Generation for Vector Quantization on Orthogonal Polynomials based Transform Coding

Authors: R. Krishnamoorthi, N. Kannan

Abstract:

In this paper, a new codebook generation algorithm is proposed for vector quantization (VQ) in image coding. The significant features of the training image vectors are extracted using the proposed Orthogonal Polynomials based transformation. We propose to generate the codebook by partitioning these feature vectors into a binary tree. Each feature vector at a non-terminal node of the binary tree is directed to one of the two descendants by comparing a single feature associated with that node to a threshold. The binary tree codebook is used for encoding and decoding the feature vectors. In the decoding process, the feature vectors are subjected to inverse transformation with the help of the basis functions of the proposed Orthogonal Polynomials based transformation to recover the approximated input image training vectors. The results of the proposed coding are compared with VQ using the Discrete Cosine Transform (DCT) and the Pairwise Nearest Neighbor (PNN) algorithm. The new algorithm results in a considerable reduction in computation time and provides better reconstructed picture quality.
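
The tree-structured codebook can be traversed with a single threshold test per internal node; the sketch below is a generic illustration of that lookup under assumed node fields (feature index, threshold, children, codeword), not the authors' exact data structure.

class Node:
    """Internal node: test one feature against a threshold.
    Leaf node: holds the codeword (an index into the codebook)."""
    def __init__(self, feature=None, threshold=None,
                 left=None, right=None, codeword=None):
        self.feature, self.threshold = feature, threshold
        self.left, self.right, self.codeword = left, right, codeword

def encode(vector, node):
    """Walk the binary tree until a leaf is reached and return its codeword."""
    while node.codeword is None:
        node = node.left if vector[node.feature] <= node.threshold else node.right
    return node.codeword

# Tiny hypothetical tree over 2-dimensional feature vectors.
tree = Node(feature=0, threshold=0.5,
            left=Node(codeword=0),
            right=Node(feature=1, threshold=0.2,
                       left=Node(codeword=1), right=Node(codeword=2)))
print(encode([0.7, 0.9], tree))  # -> 2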

Keywords: Orthogonal Polynomials, Image Coding, Vector Quantization, TSVQ, Binary Tree Classifier

9582 Neural Network Based Approach for Face Detection cum Face Recognition

Authors: Kesari Verma, Aniruddha S. Thoke, Pritam Singh

Abstract:

Automatic face detection is a complex problem in image processing. Many methods exist to solve it, such as template matching, Fisher Linear Discriminant, neural networks, SVM, and MRC, each achieving success to varying degrees and with varying complexity. The proposed algorithm uses upright, frontal faces in single grayscale images with decent resolution under good lighting conditions. In face recognition, a single face is matched against single faces from the training dataset. The authors propose a neural network based face detection algorithm that works on photographs and, when new test data appears, checks it against the online scanned training dataset. Experimental results show that the algorithm achieves detection accuracy of up to 95%.

Keywords: Face Detection, Face Recognition, NN Approach, PCA Algorithm.

9581 A Hybrid Metaheuristic Framework for Evolving the PROAFTN Classifier

Authors: Feras Al-Obeidat, Nabil Belacel, Juan A. Carretero, Prabhat Mahanti

Abstract:

In this paper, a new learning algorithm based on a hybrid metaheuristic integrating Differential Evolution (DE) and Reduced Variable Neighborhood Search (RVNS) is introduced to train the classification method PROAFTN. To apply PROAFTN, the values of several parameters need to be determined prior to classification. These parameters include the boundaries of intervals and the relative weights of each attribute. Based on these requirements, the hybrid approach, named DEPRO-RVNS, is presented in this study. A major problem when applying DE to some classification problems is the premature convergence of some individuals to local optima. To eliminate this shortcoming and to improve the exploration and exploitation capabilities of DE, such individuals are iteratively re-explored using RVNS. Based on the generated results on both training and testing data, it is shown that the performance of PROAFTN is significantly improved. Furthermore, the experimental study shows that DEPRO-RVNS outperforms well-known machine learning classifiers on a variety of problems.
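
The hybrid idea of re-exploring stagnant DE individuals with a neighborhood search can be sketched roughly as below; the objective function, neighborhood radii, and stagnation test are illustrative assumptions, and the sketch is not the DEPRO-RVNS implementation.

import numpy as np
rng = np.random.default_rng(0)

def sphere(x):                       # placeholder objective (minimize)
    return float(np.sum(x ** 2))

def rvns(x, fx, f, radii=(0.5, 0.2, 0.05), tries=20):
    """Reduced Variable Neighborhood Search: try progressively smaller neighborhoods."""
    for r in radii:
        for _ in range(tries):
            cand = x + rng.normal(0.0, r, size=x.shape)
            if f(cand) < fx:
                return cand, f(cand)
    return x, fx

def depro_like(f, dim=5, pop_size=20, gens=100, F=0.8, CR=0.9, stall=10):
    pop = rng.uniform(-5, 5, size=(pop_size, dim))
    fit = np.array([f(p) for p in pop])
    since_improved = np.zeros(pop_size, dtype=int)
    for _ in range(gens):
        for i in range(pop_size):
            idxs = [j for j in range(pop_size) if j != i]
            a, b, c = pop[rng.choice(idxs, 3, replace=False)]
            mutant = a + F * (b - c)                        # DE/rand/1 mutation
            cross = rng.random(dim) < CR
            trial = np.where(cross, mutant, pop[i])
            ft = f(trial)
            if ft < fit[i]:
                pop[i], fit[i], since_improved[i] = trial, ft, 0
            else:
                since_improved[i] += 1
            if since_improved[i] >= stall:                  # stagnant -> re-explore with RVNS
                pop[i], fit[i] = rvns(pop[i], fit[i], f)
                since_improved[i] = 0
    return pop[fit.argmin()], fit.min()

print(depro_like(sphere))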

Keywords: Knowledge Discovery, Differential Evolution, Reduced Variable Neighborhood Search, Multiple criteria classification, PROAFTN, Supervised Learning.

9580 Feature Subset Selection Using Ant Colony Optimization

Authors: Ahmed Al-Ani

Abstract:

Feature selection is an important step in many pattern classification problems. It is applied to select a subset of features, from a much larger set, such that the selected subset is sufficient to perform the classification task. Due to its importance, the problem of feature selection has been investigated by many researchers. In this paper, a novel feature subset search procedure that utilizes the Ant Colony Optimization (ACO) is presented. The ACO is a metaheuristic inspired by the behavior of real ants in their search for the shortest paths to food sources. It looks for optimal solutions by considering both local heuristics and previous knowledge. When applied to two different classification problems, the proposed algorithm achieved very promising results.
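
A common way to realize the ACO construction step is to pick the next feature with probability proportional to pheromone^alpha times heuristic^beta, with evaporation and reinforcement of the best subset; the sketch below uses that standard rule with a toy evaluation function, so it illustrates the mechanism rather than the paper's exact procedure.

import numpy as np
rng = np.random.default_rng(1)

def aco_feature_selection(n_features, subset_size, evaluate,
                          n_ants=10, n_iters=30, alpha=1.0, beta=1.0,
                          rho=0.1, heuristic=None):
    pheromone = np.ones(n_features)
    heuristic = np.ones(n_features) if heuristic is None else heuristic
    best_subset, best_score = None, -np.inf
    for _ in range(n_iters):
        for _ant in range(n_ants):
            chosen = []
            for _ in range(subset_size):
                # Desirability combines previous knowledge (pheromone) and local heuristic.
                desir = (pheromone ** alpha) * (heuristic ** beta)
                desir[chosen] = 0.0                       # never pick a feature twice
                probs = desir / desir.sum()
                chosen.append(int(rng.choice(n_features, p=probs)))
            score = evaluate(chosen)
            if score > best_score:
                best_subset, best_score = chosen, score
        pheromone *= (1.0 - rho)                          # evaporation
        pheromone[best_subset] += rho * best_score        # reinforce best subset found
    return best_subset, best_score

# Toy evaluation: positive score that rewards subsets of low-index features.
print(aco_feature_selection(20, 5, lambda s: 1.0 / (1.0 + sum(s))))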

Keywords: Ant Colony Optimization, ant systems, feature selection, pattern recognition.

9579 Application of Computer Aided Engineering Tools in Performance Prediction and Fault Detection of Mechanical Equipment of Mining Process Line

Authors: K. Jahani, J. Razavi

Abstract:

Nowadays, to decrease the number of downtimes in industries such as metal mining, petroleum, and chemicals, predictive maintenance is crucial. Efficient predictive maintenance requires knowing the performance of critical production line equipment, such as pumps and hydro-cyclones, under variable operating parameters, selecting the best indicators of equipment health, choosing the best locations for instrumentation, and measuring these indicators. In this paper, computer aided engineering (CAE) tools are implemented to study some important elements of a copper process line, namely slurry pumps and a cyclone, to predict the performance of these components under different working conditions. The modeling and simulations can be used to predict, for example, the damage tolerance of the main shaft of the slurry pump, or the wear rate and location on the cyclone wall or the pump case and impeller. The simulations can also suggest the best measuring parameters, measuring intervals, and their locations.

Keywords: Computer aided engineering, predictive maintenance, fault detection, mining process line, slurry pump, hydrocyclone.

9578 Adaptive Motion Estimator Based on Variable Block Size Scheme

Authors: S. Dhahri, A. Zitouni, H. Chaouch, R. Tourki

Abstract:

This paper presents an adaptive motion estimator that can be dynamically reconfigured with the best algorithm depending on how the nature of the video varies during the runtime of an application. The 4 Step Search (4SS) and the Gradient Search (GS) algorithms are integrated in the estimator for use with rapid and slow video sequences, respectively. The Full Search Block Matching (FSBM) algorithm has also been integrated for video sequences that are not real-time oriented. In order to efficiently reduce the computational cost while achieving better visual quality with low power cost, the proposed motion estimator is based on a Variable Block Size (VBS) scheme that uses only the 16x16, 16x8, 8x16 and 8x8 modes. Experimental results show that the adaptive motion estimator gives better results in terms of Peak Signal to Noise Ratio (PSNR), computational cost, FPGA occupied area, and dissipated power relative to the most popular variable block size schemes presented in the literature.
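
The FSBM criterion at the heart of such estimators can be illustrated with a plain full-search SAD matcher for a single 16x16 block; the search range, frame contents, and helper names below are illustrative assumptions, and the variable block size and algorithm-switching logic of the paper are not reproduced.

import numpy as np

def full_search(cur, ref, bx, by, block=16, search=8):
    """Full-search block matching: return the motion vector minimizing the
    sum of absolute differences (SAD) for the block at (bx, by)."""
    cur_blk = cur[by:by + block, bx:bx + block].astype(np.int32)
    best, best_mv = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + block > ref.shape[0] or x + block > ref.shape[1]:
                continue
            sad = np.abs(cur_blk - ref[y:y + block, x:x + block].astype(np.int32)).sum()
            if best is None or sad < best:
                best, best_mv = sad, (dx, dy)
    return best_mv, best

rng = np.random.default_rng(2)
ref = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
cur = np.roll(ref, shift=(2, 3), axis=(0, 1))      # current frame shifted by (dy=2, dx=3)
print(full_search(cur, ref, bx=24, by=24))         # expect mv = (-3, -2), SAD = 0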

Keywords: H264, Configurable Motion Estimator, Variable Block Size, PSNR, Dissipated power.

9577 Improvement in Power Transformer Intelligent Dissolved Gas Analysis Method

Authors: S. Qaedi, S. Seyedtabaii

Abstract:

Non-destructive evaluation of in-service power transformer condition is necessary for avoiding catastrophic failures. Dissolved Gas Analysis (DGA) is one of the important methods. Traditional, statistical, and intelligent DGA approaches have been adopted for accurate classification of incipient fault sources. Unfortunately, there are often not enough faulty patterns for sufficient training of intelligent systems. Bootstrapping is expected to alleviate this shortcoming and to yield algorithms with better classification success rates. In this paper, the performance of artificial neural network (ANN), K-Nearest Neighbour (KNN), and support vector machine (SVM) methods using bootstrapped data is detailed, and it is shown that while the success rate of the ANN algorithms improves remarkably, the others do not benefit as much from the enlarged data space. For assessment, two databases are employed: IEC TC10 and a dataset collected from data reported in papers. The high average test success rate exhibits the remarkable outcome.
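
Bootstrapping the scarce faulty patterns amounts to resampling each fault class with replacement to enlarge the training set; the sketch below shows only that resampling step, with hypothetical array shapes (classifier training is left to any standard library).

import numpy as np
rng = np.random.default_rng(3)

def bootstrap_class(X, y, target_per_class):
    """Resample each class with replacement until it has target_per_class samples."""
    Xb, yb = [], []
    for label in np.unique(y):
        idx = np.flatnonzero(y == label)
        picked = rng.choice(idx, size=target_per_class, replace=True)
        Xb.append(X[picked])
        yb.append(np.full(target_per_class, label))
    return np.vstack(Xb), np.concatenate(yb)

# Hypothetical DGA data: 5 gas concentrations per sample, 3 fault classes.
X = rng.random((40, 5))
y = rng.integers(0, 3, size=40)
Xb, yb = bootstrap_class(X, y, target_per_class=100)
print(Xb.shape, np.bincount(yb))   # (300, 5) [100 100 100]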

Keywords: Dissolved gas analysis, Transformer incipient fault, Artificial Neural Network, Support Vector Machine (SVM), K-Nearest Neighbor (KNN)

9576 A New Spectral-based Approach to Query-by-Humming for MP3 Songs Database

Authors: Leon Fu, Xiangyang Xue

Abstract:

In this paper, we propose a new approach to query-by-humming, focusing on an MP3 songs database. Since melody representation is much more difficult for MP3 songs than for symbolic performance data, we extract feature descriptors from the vocal part of the songs. Our approach is based on signal filtering, sub-band spectral processing, MDCT coefficient analysis, and peak energy detection, ignoring the background music as much as possible. Finally, we apply a dual dynamic programming algorithm for feature similarity matching. Experiments show its online performance in precision and efficiency.
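
The feature similarity matching step can be pictured with a standard dynamic-programming (DTW-style) alignment between a hummed pitch contour and a candidate contour; this generic single-table DP is only a stand-in for the dual dynamic programming algorithm used in the paper, and the contours are made up.

import numpy as np

def dp_distance(query, candidate):
    """Dynamic-programming (DTW-style) distance between two pitch contours."""
    q, c = np.asarray(query, float), np.asarray(candidate, float)
    n, m = len(q), len(c)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(q[i - 1] - c[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m] / (n + m)        # length-normalized distance, smaller = more similar

# Hypothetical semitone contours: the hum roughly follows the first song.
hum = [0, 2, 4, 5, 4, 2, 0]
songs = {"song_a": [0, 2, 4, 5, 5, 4, 2, 0], "song_b": [0, -1, -3, -5, -3, -1]}
print(min(songs, key=lambda k: dp_distance(hum, songs[k])))   # -> song_a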

Keywords: DP, MDCT, MP3, QBH.

9575 Investigating Polynomial Interpolation Functions for Zooming Low Resolution Digital Medical Images

Authors: Maninder Pal

Abstract:

Medical digital images usually have low resolution because of the nature of their acquisition. Therefore, this paper focuses on zooming these images to obtain the better level of information required for the purpose of medical diagnosis. For this purpose, a strategy for selecting pixels in the zooming operation is proposed. It is based on the principle of an analog clock and utilizes a combination of point and neighborhood image processing. In this approach, the hour hand of the clock covers the portion of the image to be processed. For alignment, the center of the clock points at the middle pixel of the selected portion of the image. The minute hand is longer and is used to gain information about pixels of the surrounding area, called the neighborhood pixels region. This information is used to zoom the selected portion of the image. The proposed algorithm is implemented and its performance is evaluated on many medical images obtained from various sources such as X-ray, Computerized Tomography (CT) scan and Magnetic Resonance Imaging (MRI). However, for illustration and simplicity, the results obtained from a CT scanned image of the head are presented. The performance of the algorithm is evaluated in comparison to various traditional algorithms in terms of peak signal-to-noise ratio (PSNR), maximum error, SSIM index, mutual information and processing time. From the results, the proposed algorithm is found to give better performance than the traditional algorithms.
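
For the traditional polynomial interpolation baselines that such a method is compared against, a spline-based zoom and a PSNR check can be sketched as below; this uses scipy's generic spline interpolation on a synthetic image and does not reproduce the clock-based pixel selection strategy.

import numpy as np
from scipy import ndimage

def psnr(ref, test):
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

# Smooth synthetic stand-in for a CT slice, downsampled to simulate low resolution.
yy, xx = np.mgrid[0:256, 0:256]
high = 128 + 100 * np.sin(xx / 20.0) * np.cos(yy / 25.0)
low = high[::4, ::4]

for order in (1, 3):   # 1 = bilinear-like, 3 = bicubic-like spline interpolation
    restored = ndimage.zoom(low, 4, order=order)
    print(order, restored.shape, round(psnr(high, restored), 2))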

Keywords: Zooming, interpolation, medical images, resolution.

9574 A Family of Minimal Residual Based Algorithm for Adaptive Filtering

Authors: Noor Atinah Ahmad

Abstract:

The Minimal Residual (MR) method is modified for adaptive filtering applications. Three forms of MR based algorithms are presented: i) the low complexity SPCG, ii) MREDSI, and iii) MREDSII. The low complexity SPCG is a reduced-complexity version of a previously proposed SPCG algorithm; the approximations introduced reduce it to an LMS-type algorithm while maintaining the superior convergence of the SPCG algorithm. Both MREDSI and MREDSII are MR based methods with Euclidean directions of search. The choice of Euclidean directions is shown via simulation to give better misadjustment compared to their gradient search counterparts.

Keywords: Adaptive filtering, Adaptive least square, Minimal residual method.

9573 Evolutionary Query Optimization for Heterogeneous Distributed Database Systems

Authors: Reza Ghaemi, Amin Milani Fard, Hamid Tabatabaee, Mahdi Sadeghizadeh

Abstract:

Due to new distributed database applications such as huge deductive database systems, search complexity is constantly increasing and better algorithms are needed to speed up traditional relational database queries. An optimal dynamic programming method for such high dimensional queries has the big disadvantage of its exponential order, and thus we are interested in semi-optimal but faster approaches. In this work we present a multi-agent based mechanism to meet this demand and also compare the result with some commonly used query optimization algorithms.

Keywords: Information retrieval systems, list fusion methods, document score, multi-agent systems.

9572 Performance Evaluation of Neural Network Prediction for Data Prefetching in Embedded Applications

Authors: Sofien Chtourou, Mohamed Chtourou, Omar Hammami

Abstract:

Embedded systems need to respect stringent real time constraints. Various hardware components included in such systems, such as cache memories, exhibit variability and therefore affect execution time. Indeed, a cache memory access from an embedded microprocessor might result in a cache hit, where the data is available, or a cache miss, where the data needs to be fetched from an external memory with an additional delay. It is therefore highly desirable to predict future memory accesses during execution in order to appropriately prefetch data without incurring delays. In this paper, we evaluate the potential of several artificial neural networks for the prediction of instruction memory addresses. Neural networks have the potential to tackle the nonlinear behavior observed in memory accesses during program execution, and their numerous demonstrated hardware implementations favor this choice over traditional forecasting techniques for inclusion in embedded systems. However, embedded applications execute millions of instructions and therefore produce millions of addresses to be predicted. This very challenging problem of neural network based prediction of large time series is approached in this paper by evaluating various neural network architectures based on the recurrent neural network paradigm, with pre-processing based on the Self Organizing Map (SOM) classification technique.

Keywords: Address, data set, memory, prediction, recurrent neural network.

9571 Choosing R-tree or Quadtree Spatial Data Indexing in One Oracle Spatial Database System to Make Faster Showing Geographical Map in Mobile Geographical Information System Technology

Authors: Maruto Masserie Sardadi, Mohd Shafry bin Mohd Rahim, Zahabidin Jupri, Daut bin Daman

Abstract:

The latest Geographic Information System (GIS) technology makes it possible to administer the spatial components of daily "business objects" in the corporate database and to apply suitable geographic analysis efficiently in a desktop-focused application. Wireless internet technology can be used to transfer spatial data between server and client. However, the problem with the wireless Internet is system bottlenecks that make data transfer inefficient, because of the large amount of spatial data. Optimizing the process of transferring and retrieving data is therefore an essential issue, and an appropriate choice between the R-tree and Quadtree spatial data indexing methods can optimize that process. With the rapid proliferation of these databases in the past decade, extensive research has been conducted on the design of efficient data structures to enable fast spatial searching. Commercial database vendors like Oracle have also started implementing these spatial indexes to cater to large and diverse GIS applications. This paper focuses on the decision to choose between R-tree and Quadtree spatial indexing using an Oracle spatial database in a mobile GIS application. Under our research conditions, the appropriate choice between Quadtree and R-tree spatial data indexing in a single spatial database can save up to 42.5% of the time.

Keywords: Indexing, Mobile GIS, MapViewer, Oracle Spatial Database.

9570 Genetic Programming: Principles, Applications and Opportunities for Hydrological Modelling

Authors: Oluwaseun K. Oyebode, Josiah A. Adeyemo

Abstract:

Hydrological modelling plays a crucial role in the planning and management of water resources, most especially in water stressed regions where the need to effectively manage the available water resources is of critical importance. However, due to the complex, nonlinear and dynamic behaviour of hydro-climatic interactions, achieving reliable modelling of water resource systems and accurate projection of hydrological parameters are extremely challenging. Although a significant number of modelling techniques (process-based and data-driven) have been developed and adopted in that regard, the field of hydrological modelling is still considered as one that has sluggishly progressed over the past decades. This is largely a result of the degree of uncertainty identified in the methodologies and results of the techniques adopted. In recent times, evolutionary computation (EC) techniques have been developed and introduced in response to the search for efficient and reliable means of providing accurate solutions to hydrological problems. This paper presents a comprehensive review of the underlying principles, methodological needs and applications of a promising evolutionary computation modelling technique – genetic programming (GP). It examines the specific characteristics of the technique which make it suitable for solving hydrological modelling problems. It discusses the opportunities inherent in the application of GP in water-related studies such as rainfall estimation, rainfall-runoff modelling, streamflow forecasting, sediment transport modelling, water quality modelling and groundwater modelling, among others. Furthermore, the means by which such opportunities could be harnessed in the near future are discussed. In all, a case is made for the full embrace of GP and its variants in hydrological modelling studies, so as to put in place strategies that would translate into meaningful progress in the modelling of water resource systems and positively influence decision-making by relevant stakeholders.

Keywords: Computational modelling, evolutionary algorithms, genetic programming, hydrological modelling.

9569 Optimization of the Dental Direct Digital Imaging by Applying the Self-Recognition Technology

Authors: Mina Dabirinezhad, Mohsen Bayat Pour, Amin Dabirinejad

Abstract:

This paper introduces a technology to solve some of the deficiencies of direct digital radiology. Nowadays, digital radiology is the latest progression in dental imaging and has become an essential part of dentistry. Direct digital radiology has two main parts: an intraoral X-ray machine and a sensor (digital image receptor). Dentists and dental nurses experience difficulties during the image-taking process with the direct digital X-ray machine. For instance, they sometimes need to readjust the sensor in the mouth of the patient and take the X-ray image again because of its low quality. Another problem is that the position of the sensor may move in the mouth of the patient, which produces an inappropriate image for the dentist. This makes it a time-consuming process for dentists and dental nurses. On the other hand, taking several X-ray images causes problems for the patient, such as harm to their health and pain in the mouth due to the pressure of the sensor on the jaw. The authors provide a technology to solve the above-mentioned issues, called "Self-Recognition Direct Digital Radiology" (SDDR). This technology is based on the principle that the intraoral X-ray machine is capable of detecting the location of the sensor in the mouth of the patient automatically. In addition to solving the aforementioned problems, SDDR technology has a smaller environmental impact than the previous version.

Keywords: Dental direct digital imaging, digital image receptor, digital X-ray machine, environmental impacts.

9568 Optimized Preprocessing for Accurate and Efficient Bioassay Prediction with Machine Learning Algorithms

Authors: Jeff Clarine, Chang-Shyh Peng, Daisy Sang

Abstract:

Bioassay is the measurement of the potency of a chemical substance by its effect on a living animal or plant tissue. Bioassay data and chemical structures from pharmacokinetic and drug metabolism screening are mined from and housed in multiple databases. Bioassay prediction is calculated accordingly to determine further advancement. This paper proposes a four-step preprocessing of datasets for improving bioassay predictions. The first step is instance selection, in which the dataset is categorized into training, testing, and validation sets. The second step is discretization, which partitions the data in consideration of accuracy vs. precision. The third step is normalization, where data are normalized between 0 and 1 for subsequent machine learning processing. The fourth step is feature selection, where key chemical properties and attributes are generated. The streamlined results are then analyzed for the prediction of effectiveness by various machine learning algorithms, including Pipeline Pilot, R, Weka, and Excel. Experiments and evaluations reveal the effectiveness of various combinations of preprocessing steps and machine learning algorithms in producing more consistent and accurate predictions.
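
The four preprocessing steps can be pictured as a small pipeline; the sketch below uses simple stand-ins (a random split, equal-width discretization, min-max scaling, and variance-based feature selection) with hypothetical data shapes, and is not the exact procedure or tooling used in the study.

import numpy as np
rng = np.random.default_rng(5)

def split(X, y, frac=(0.7, 0.15, 0.15)):
    """Step 1: instance selection into training / testing / validation sets."""
    idx = rng.permutation(len(X))
    n_tr, n_te = int(frac[0] * len(X)), int(frac[1] * len(X))
    parts = idx[:n_tr], idx[n_tr:n_tr + n_te], idx[n_tr + n_te:]
    return [(X[p], y[p]) for p in parts]

def discretize(X, bins=10):
    """Step 2: equal-width discretization of each attribute."""
    edges = np.linspace(X.min(axis=0), X.max(axis=0), bins + 1)
    return np.stack([np.digitize(X[:, j], edges[1:-1, j]) for j in range(X.shape[1])], axis=1)

def normalize(X):
    """Step 3: scale attributes to the [0, 1] range."""
    lo, hi = X.min(axis=0), X.max(axis=0)
    return (X - lo) / np.where(hi > lo, hi - lo, 1)

def select_features(X, k=5):
    """Step 4: keep the k highest-variance attributes."""
    return np.argsort(X.var(axis=0))[::-1][:k]

# Hypothetical bioassay descriptors: 200 compounds x 12 attributes, binary outcome.
X = rng.random((200, 12))
y = rng.integers(0, 2, size=200)
train, test, valid = split(X, y)
Xn = normalize(discretize(train[0]).astype(float))
print(Xn.shape, select_features(Xn))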

Keywords: Bioassay, machine learning, preprocessing, virtual screen.

9567 A Comparative Study of GTC and PSP Algorithms for Mining Sequential Patterns Embedded in Database with Time Constraints

Authors: Safa Adi

Abstract:

This paper considers the problem of mining sequential patterns embedded in a database while handling the time constraints defined in the GSP algorithm (a level-wise algorithm). We compare two previous approaches, GTC and PSP, which resume the general principles of GSP. Furthermore, this paper discusses the PG-hybrid algorithm, which uses both PSP and GTC. The results show that PSP and GTC are more efficient than GSP. On the other hand, the GTC algorithm performs better than PSP. The PG-hybrid algorithm uses the PSP algorithm for the first two passes over the database and the GTC approach for the following scans. Experiments show that the hybrid approach is very efficient for short, frequent sequences.

Keywords: Database, GTC algorithm, PSP algorithm, sequential patterns, time constraints.

9566 Real-Time Data Stream Partitioning over a Sliding Window in Real-Time Spatial Big Data

Authors: Sana Hamdi, Emna Bouazizi, Sami Faiz

Abstract:

In recent years, real-time spatial applications, like location-aware services and traffic monitoring, have become more and more important. Such applications result in dynamic environments where data as well as queries are continuously moving. As a result, there is a tremendous amount of real-time spatial data generated every day. The growth of the data volume seems to outspeed the advance of our computing infrastructure. For instance, in real-time spatial Big Data, users expect to receive the results of each query within a short time period regardless of the load on the system. But with a huge amount of real-time spatial data generated, the system performance degrades rapidly, especially in overload situations. To solve this problem, we propose the use of data partitioning as an optimization technique. Traditional horizontal and vertical partitioning can increase the performance of the system and simplify data management, but they remain insufficient for real-time spatial Big Data because they cannot deal with real-time and stream queries efficiently. Thus, in this paper, we propose a novel data partitioning approach for real-time spatial Big Data named VPA-RTSBD (Vertical Partitioning Approach for Real-Time Spatial Big Data). This contribution is an implementation of the Matching algorithm for traditional vertical partitioning. We first find the optimal attribute sequence by using the Matching algorithm. Then, we propose a new cost model used for database partitioning, for keeping the data amount of each partition within a more balanced limit and for providing parallel execution guarantees for the most frequent queries. VPA-RTSBD aims to obtain a real-time partitioning scheme and deals with stream data. It improves the performance of query execution by maximizing the degree of parallel execution. This improves QoS (Quality of Service) in real-time spatial Big Data, especially with a huge volume of stream data. The performance of our contribution is evaluated via simulation experiments. The results show that the proposed algorithm is both efficient and scalable, and that it outperforms comparable algorithms.
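
In vertical partitioning, attribute affinity is commonly derived from binary attribute-usage vectors over the query workload, which is where the Hamming distance in the keywords comes in; the sketch below computes that distance and greedily groups close attributes, as a rough illustration of the matching idea rather than the VPA-RTSBD algorithm itself.

import numpy as np

def hamming(u, v):
    """Hamming distance between two binary attribute-usage vectors."""
    return int(np.sum(u != v))

def group_attributes(usage, max_dist=1):
    """Greedily group attributes whose usage patterns differ in at most max_dist queries.
    usage: queries x attributes binary matrix (1 = the query touches the attribute)."""
    n_attrs = usage.shape[1]
    groups, assigned = [], set()
    for a in range(n_attrs):
        if a in assigned:
            continue
        group = [a]
        assigned.add(a)
        for b in range(a + 1, n_attrs):
            if b not in assigned and hamming(usage[:, a], usage[:, b]) <= max_dist:
                group.append(b)
                assigned.add(b)
        groups.append(group)
    return groups

# Hypothetical workload: 5 queries over 6 attributes of a spatial relation.
usage = np.array([[1, 1, 0, 0, 1, 0],
                  [1, 1, 0, 0, 1, 0],
                  [0, 0, 1, 1, 0, 0],
                  [0, 0, 1, 1, 0, 1],
                  [1, 1, 0, 0, 0, 0]])
print(group_attributes(usage))   # -> [[0, 1, 4], [2, 3, 5]]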

Keywords: Real-Time Spatial Big Data, Quality Of Service, Vertical partitioning, Horizontal partitioning, Matching algorithm, Hamming distance, Stream query.

9565 Response of the Residential Building Structure on Load Technical Seismicity due to Mining Activities

Authors: V. Salajka, Z. Kaláb, J. Kala, P. Hradil

Abstract:

In territories where high-intensity earthquakes are frequent, attention is paid to solving seismic problems. The paper describes two computational model variants of the structure, based on the finite element method, which use different subsoil simulations (rigid or elastic subsoil). The finite element program system ANSYS was used for the simulations and calculations. Seismic response calculations of the residential building structure were carried out for loading characterized by an accelerogram, for comparison with the response spectra method.

Keywords: Accelerogram, ANSYS, mining induced seismic, residential building structure, spectra, subsoil.

9564 Measurement of Steady Streaming from an Oscillating Bubble Using Particle Image Velocimetry

Authors: Yongseok Kwon, Woowon Jeong, Eunjin Cho, Sangkug Chung, Kyehan Rhee

Abstract:

Steady streaming flow fields induced by a 500 µm bubble oscillating at 12 kHz were measured using microscopic particle image velocimetry (PIV). The accuracy of velocity measurement using a micro PIV system was checked by comparing the measured velocity fields with the theoretical velocity profiles in fully developed laminar flow. The steady streaming flow velocities were measured in the sagittal plane of the bubble attached on the wall. Measured velocity fields showed upward jet flow with two symmetric counter-rotating vortices, and the maximum streaming velocity was about 12 mm/s, which was within the velocity ranges measured by other researchers. The measured streamlines were compared with the analytical solution, and they also showed a reasonable agreement.

Keywords: Oscillating bubble, Particle Image Velocimetry, microstreaming.

9563 A Survey of Model Comparison Strategies and Techniques in Model Driven Engineering

Authors: Junaid Rashid, Waqar Mehmood, Muhammad Wasif Nisar

Abstract:

This survey paper shows the recent state of model comparison as it applies to Model Driven Engineering. In Model Driven Engineering, calculating the difference between models is a very important and challenging task. A number of tasks are involved in model differencing, starting with identifying and matching the elements of the models. In this paper, we discuss how model matching is accomplished, the strategies, the techniques, and the types of models. We also discuss future directions. We found that many of the latest model comparison strategies are geared toward enabling metamodel and similarity based matching, and model versioning is therefore the most dominant application of model comparison. Recently, work on comparison for versioning has begun to decline, giving way to different applications. Ultimately, the tools vary widely in the amount of user effort needed to perform model comparisons, as some require more effort to achieve greater generality and expressive power.

Keywords: Model comparison, model clone detection, model versioning, EMF Model, model diff.

9562 Increasing the Capacity of Plant Bottlenecks by Using of Improving the Ratio of Mean Time between Failures to Mean Time to Repair

Authors: Jalal Soleimannejad, Mohammad Asadizeidabadi, Mahmoud Koorki, Mojtaba Azarpira

Abstract:

A significant percentage of production costs is maintenance costs, and analysis of maintenance costs can lead to greater productivity and competitiveness. With this in mind, the maintenance of machines and installations is considered an essential part of organizational functions, and applying effective strategies creates significant added value in manufacturing activities. Organizations are trying to achieve world-class performance levels, with emphasis on creating competitive advantage, through methods such as RCM (Reliability-Centered Maintenance) and TPM (Total Productive Maintenance). In this study, increasing the capacity of the Concentration Plant of Golgohar Iron Ore Mining & Industrial Company (GEG) was examined using reliability and maintainability analyses. The results of this research showed that, instead of increasing the number of machines (in order to solve the bottleneck problems), improving reliability and maintainability would solve the bottleneck problems in the best way. It should be mentioned that in the above-mentioned study, the data set of the Concentration Plant of GEG, as a case study, was applied and analyzed.
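
The relation between the MTBF/MTTR ratio and bottleneck capacity can be illustrated with the standard steady-state availability formula A = MTBF / (MTBF + MTTR); the throughput figures below are made-up examples, not GEG plant data.

def availability(mtbf, mttr):
    """Steady-state availability from mean time between failures and mean time to repair."""
    return mtbf / (mtbf + mttr)

def effective_capacity(nominal_tph, mtbf, mttr):
    """Throughput of a bottleneck machine after accounting for downtime."""
    return nominal_tph * availability(mtbf, mttr)

# Hypothetical bottleneck machine with a 500 t/h nominal rate.
before = effective_capacity(500, mtbf=80, mttr=20)    # A = 0.80  -> 400 t/h
after = effective_capacity(500, mtbf=120, mttr=10)    # A ~ 0.92  -> ~462 t/h
print(round(before, 1), round(after, 1))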

Keywords: Bottleneck, Golgohar Iron Ore Mining and Industrial Company, maintainability, maintenance costs, reliability.

9561 A Structural Support Vector Machine Approach for Biometric Recognition

Authors: Vishal Awasthi, Atul Kumar Agnihotri

Abstract:

Face is a strong, non-intrusive biometric for distinguishing original faces from dummy faces produced by different artificial means. Face recognition is extremely important in the contexts of computer vision, psychology, surveillance, pattern recognition, neural networks, and content based video processing. The availability of a widespread face database is crucial to test the performance of face recognition algorithms. The openly available face databases include face images with a wide range of poses, illumination, gestures, and face occlusions, but there is no dummy face database accessible in the public domain. This paper presents a face detection algorithm based on image segmentation in terms of distance from a fixed point and on template matching methods. The proposed work uses the most appropriate number of nodal points, resulting in the most appropriate outcomes in terms of face recognition and detection. The time taken to identify and extract distinctive facial features is improved to the range of 90 to 110 s, with an efficiency increment of 3%.
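
The PCA step named in the keywords can be sketched as the usual eigenface-style pipeline: centre the training face vectors, keep the leading principal components, and recognise a probe by nearest neighbour in the reduced space. The data shapes and noise level below are hypothetical, and the paper's segmentation and template matching stages are not reproduced.

import numpy as np
rng = np.random.default_rng(6)

def fit_pca(X, n_components):
    """Return the mean face and leading principal components of X (faces x pixels)."""
    mean = X.mean(axis=0)
    # Economy-size SVD of the centred data gives the eigenvectors of the covariance.
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:n_components]

def project(X, mean, components):
    return (X - mean) @ components.T

def recognize(probe, gallery_proj, labels, mean, components):
    """Nearest-neighbour match of a probe face in the PCA subspace."""
    p = project(probe[None, :], mean, components)
    dists = np.linalg.norm(gallery_proj - p, axis=1)
    return labels[int(dists.argmin())]

# Hypothetical gallery: 30 face vectors of 32x32 = 1024 pixels, 10 subjects.
faces = rng.random((30, 1024))
labels = np.repeat(np.arange(10), 3)
mean, comps = fit_pca(faces, n_components=15)
gallery = project(faces, mean, comps)
print(recognize(faces[7] + 0.01 * rng.random(1024), gallery, labels, mean, comps))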

Keywords: Face recognition, Principal Component Analysis, PCA, Linear Discriminant Analysis, LDA, Improved Support Vector Machine, iSVM, elastic bunch mapping technique.

9560 Capturing an Unknown Moving Target in Unknown Territory using Vision and Coordination

Authors: Kiran Ijaz, Umar Manzoor, Arshad Ali Shahid

Abstract:

In this paper we present an extension to Vision Based LRTA* (VLRTA*), known as Vision Based Moving Target Search (VMTS), for capturing an unknown moving target in unknown territory with randomly generated obstacles. The target position is unknown to the agents, and they cannot predict its position using any probability method. Agents have omnidirectional vision but can see in only one direction at any point in time. An agent's vision is blocked by the obstacles in the search space, so the agent cannot see through obstacles. The proposed algorithm is evaluated on a large number of scenarios. Scenarios include grids of sizes from 10x10 to 100x100. Grids had obstacles randomly placed, occupying 0% to 50% of the search space in increments of 10%. Experiments used 2 to 9 agents for each randomly generated maze with the same obstacle ratio. The observed results suggest that VMTS is effective in terms of target locating time, solution quality, and virtual targets. In addition, VMTS becomes more efficient if the number of agents is increased in proportion to the obstacle ratio.

Keywords: Vision, MTS, Unknown Target, Coordination, VMTS, Multi-Agent.

9559 Semi-Supervised Outlier Detection Using a Generative and Adversary Framework

Authors: Jindong Gu, Matthias Schubert, Volker Tresp

Abstract:

In many outlier detection tasks, only training data belonging to one class, i.e., the positive class, is available. The task is then to predict a new data point as belonging either to the positive class or to the negative class, in which case the data point is considered an outlier. For this task, we propose a novel corrupted Generative Adversarial Network (CorGAN). In the adversarial process of training CorGAN, the Generator generates outlier samples for the negative class, and the Discriminator is trained to distinguish the positive training data from the generated negative data. The proposed framework is evaluated using an image dataset and a real-world network intrusion dataset. Our outlier-detection method achieves state-of-the-art performance on both tasks.

Keywords: Outlier detection, generative adversary networks, semi-supervised learning.

9558 Biosignal Measurement System Based On Ultra-Wide Band Human Body Communication

Authors: Jonghoon Kim, Gilwon Yoon

Abstract:

A wrist-band type biosignal measurement system and its data transfer through human body communication (HBC) were investigated. An HBC method based on ultra-wide band pulses, instead of frequency or amplitude modulation, was studied and implemented, since the resulting system is very compact and better suited for personal or mobile health monitoring. Our system measured photo-plethysmogram (PPG) signals, and the measured PPG signals were transmitted through a finger to a monitoring PC system. The device is compact and consumes little power. HBC has very strong security since it does not use a wireless network. Furthermore, the biosignal monitoring system becomes handy because it does not need wire connections.

Keywords: Biosignal, human body communication, mobile health, PPG, ultra-wide band.

9557 Ensuring Consistency under the Snapshot Isolation

Authors: Carlos Roberto Valêncio, Fábio Renato de Almeida, Thatiane Kawabata, Leandro Alves Neves, Julio Cesar Momente, Mario Luiz Tronco, Angelo Cesar Colombini

Abstract:

By running transactions under the SNAPSHOT isolation level we can achieve a good level of concurrency, especially in databases with read-intensive workloads. However, SNAPSHOT is not immune to all the problems that arise from competing transactions, and therefore no serializability guarantee exists. We propose in this paper a technique to obtain data consistency with SNAPSHOT by using special triggers that we named DAEMON TRIGGERS. Besides keeping the benefits of the SNAPSHOT isolation, the technique is especially useful for those database systems that do not have an isolation level that ensures serializability, like Firebird and Oracle. We describe the anomalies that might arise when using the SNAPSHOT isolation and show how to preclude them with DAEMON TRIGGERS. Based on the methodology presented here, the creation of a new isolation level, DAEMON SNAPSHOT, is also proposed.

Keywords: Data consistency, serialization, snapshot.

9556 Comparison of FAHP and TOPSIS for Evacuation Capability Assessment of High-rise Buildings

Authors: Peng Mei, Yan-Jun Qi, Yu Cui, Song Lu, He-Ping Zhang

Abstract:

Many computer-based methods have been developed to assess the evacuation capability (EC) of high-rise buildings. Because software tools are time-consuming and not suitable for on-scene applications, we adopted two methods, the fuzzy analytic hierarchy process (FAHP) and the technique for order preference by similarity to an ideal solution (TOPSIS), for EC assessment of a high-rise building in Jinan. The EC scores obtained with the two methods and the evacuation time acquired with Pathfinder 2009 for floors 47-60 of the building were compared with each other. The results show that FAHP performs better than TOPSIS for EC assessment of high-rise buildings, especially in dealing with the effect of occupant type and distance to exit on EC, in tackling complex problems with a multi-level structure of criteria, and in requiring a smaller amount of computation. However, both FAHP and TOPSIS failed to appropriately handle the situation where the exit width changes while occupants are few.

Keywords: Evacuation capability assessment, FAHP, high-rise buildings, TOPSIS.
