Search results for: Fast Pattern Detection
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3051

1971 A Hybrid Method for Eyes Detection in Facial Images

Authors: Muhammad Shafi, Paul W. H. Chung

Abstract:

This paper proposes a hybrid method for eye localization in facial images. The novelty lies in combining techniques that utilise colour, edge and illumination cues to improve accuracy. The method is based on the observation that eye regions have dark colour, a high density of edges and low illumination compared to other parts of the face. The first step in the method is to extract connected regions from facial images using colour, edge density and illumination cues separately. Some of the regions are then removed by applying rules based on the general geometry and shape of eyes. The remaining connected regions obtained through these three cues are then combined in a systematic way to enhance the identification of candidate regions for the eyes. The geometry and shape based rules are then applied again to further remove false eye regions. The proposed method was tested using images from the PICS facial image database and achieves accuracies of 93.7% for initial blob extraction and 87% for final eye detection.
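
As a rough illustration of the edge-density and illumination cues described above, the following Python sketch (using OpenCV) builds a binary mask of candidate eye regions; the thresholds, window size and morphological kernel are illustrative assumptions, not the authors' parameters.

import cv2
import numpy as np

def eye_candidate_mask(face_bgr, edge_win=15, edge_thresh=0.15, dark_thresh=70):
    """Combine edge-density and low-illumination cues into a binary candidate mask."""
    gray = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2GRAY)

    # Edge-density cue: fraction of edge pixels in a local window.
    edges = cv2.Canny(gray, 50, 150)
    density = cv2.boxFilter(edges.astype(np.float32) / 255.0, -1, (edge_win, edge_win))
    edge_cue = density > edge_thresh

    # Illumination cue: eye regions are darker than the surrounding skin.
    dark_cue = gray < dark_thresh

    # Combine cues and clean up with morphological opening (erosion + dilation).
    mask = (edge_cue & dark_cue).astype(np.uint8) * 255
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    # Connected regions (blobs) can then be extracted, e.g. with
    # cv2.connectedComponentsWithStats, and filtered by geometric rules.
    return mask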

Keywords: Erosion, dilation, edge density.

1970 Determination and Comparison of Fabric Pills Distribution Using Image Processing and Spatial Data Analysis Tools

Authors: Lenka Techniková, Maroš Tunák, Jiří Janáček

Abstract:

This work deals with the determination and comparison of pill patterns in two sets of fabric samples that differ in the way the pills were created. The first set contains fabric samples with pills created by simulation on a Martindale abrasion machine, while the pills in the second set originated during normal wear and maintenance. The goal of the study is to determine whether the pattern of fabric pills created by simulation is the same as the pattern of naturally occurring pills. The system for determining and comparing the pills is based on image processing and spatial data analysis tools. First, a 3D reconstruction of the fabric surfaces with the pills is obtained using a gradient fields method, which creates a 3D fabric surface from a set of four images. Thereafter, the pills are detected in the 3D fabric surfaces using image-processing tools in MATLAB. The pill patterns of the two sets of fabric samples are then determined and compared using spatial data analysis tools in R.

Keywords: 3D reconstruction of the surface, image analysis tools, distribution of the pills, spatial data analysis tools.

1969 A Subtractive Clustering Based Approach for Early Prediction of Fault Proneness in Software Modules

Authors: Ramandeep S. Sidhu, Sunil Khullar, Parvinder S. Sandhu, R. P. S. Bedi, Kiranbir Kaur

Abstract:

In this paper, a subtractive clustering based fuzzy inference system approach is used for early detection of faults in function oriented software systems. The approach has been tested with real-time defect datasets of the NASA software projects PC1 and CM1. Both the code based model and the joined model (a combination of requirement and code based metrics) of the datasets are used for training and testing the proposed approach. The performance of the models is recorded in terms of accuracy, MAE and RMSE values, and is better in the case of the joined model. As evidenced by the results obtained, it can be concluded that clustering and fuzzy logic together provide a simple yet powerful means to model the early detection of faults in function oriented software systems.
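
For readers unfamiliar with the clustering step, the following is a minimal Python sketch of Chiu-style subtractive clustering, the standard formulation behind such fuzzy inference systems; the radii and stopping ratio are illustrative defaults rather than values taken from the paper.

import numpy as np

def subtractive_clustering(X, ra=0.5, rb=None, eps=0.15):
    """Chiu-style subtractive clustering; returns the selected cluster centres.

    X is assumed to be normalised to [0, 1] per feature (a common convention,
    not something stated in the abstract)."""
    if rb is None:
        rb = 1.5 * ra
    alpha, beta = 4.0 / ra**2, 4.0 / rb**2
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    potential = np.exp(-alpha * d2).sum(axis=1)            # density-like potential per point

    centres = []
    first_peak = potential.max()
    while True:
        k = int(np.argmax(potential))
        peak = potential[k]
        if peak < eps * first_peak:
            break
        centres.append(X[k])
        # Revise potentials: penalise points near the newly accepted centre.
        potential -= peak * np.exp(-beta * d2[k])
    return np.array(centres)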

Keywords: Subtractive clustering, fuzzy inference system, fault proneness.

1968 Resonant DC Link in PWM AC Chopper

Authors: Apinan Aurasopon

Abstract:

This paper proposes a resonant dc link in a PWM ac chopper. The resonant link can solve the spike problems and also reduce the switching loss. The configuration and PWM pattern of the proposed technique are presented, and simulation results are used to confirm the theory.

Keywords: PWM ac chopper, resonant dc link.

1967 Oil Debris Signal Detection Based on Integral Transform and Empirical Mode Decomposition

Authors: Chuan Li, Ming Liang

Abstract:

Oil debris signal generated from the inductive oil debris monitor (ODM) is useful information for machine condition monitoring but is often spoiled by background noise. To improve the reliability in machine condition monitoring, the high-fidelity signal has to be recovered from the noisy raw data. Considering that the noise components with large amplitude often have higher frequency than that of the oil debris signal, the integral transform is proposed to enhance the detectability of the oil debris signal. To cancel out the baseline wander resulting from the integral transform, the empirical mode decomposition (EMD) method is employed to identify the trend components. An optimal reconstruction strategy including both de-trending and de-noising is presented to detect the oil debris signal with less distortion. The proposed approach is applied to detect the oil debris signal in the raw data collected from an experimental setup. The result demonstrates that this approach is able to detect the weak oil debris signal with acceptable distortion from noisy raw data.
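
The two signal-processing steps described above (integration followed by de-trending) can be sketched as follows; note that the paper uses EMD to estimate the trend, whereas this dependency-free Python sketch substitutes a low-order polynomial fit for that step.

import numpy as np

def recover_debris_signal(signal, fs, trend_order=3):
    """Integrate the raw ODM signal to suppress high-frequency noise,
    then remove the slow baseline wander introduced by the integration.

    The paper identifies the trend with EMD; a low-order polynomial fit is
    used here as a simple stand-in for that de-trending step."""
    t = np.arange(len(signal)) / fs
    integrated = np.cumsum(signal) / fs          # numerical integral (rectangle rule)
    trend = np.polyval(np.polyfit(t, integrated, trend_order), t)
    return integrated - trend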

Keywords: Integral transform, empirical mode decomposition, oil debris, signal processing, detection.

1966 An Approach to Consumption of Exhaustible Resources Based on Islamic Justice and Hartwick Criteria

Authors: Hamed Najafi, Ghasem Nikjou

Abstract:

Nowadays, there is increasing attention to resource scarcity issues. Because of the failure of present patterns for allocating exhaustible resources between generations, and the challenges related to providing economic justice, this essay proposes a pattern from the Islamic perspective. Using content analysis of religious texts, we conclude that governments should remove the gap that exists between the per capita income of the poor and their minimum (necessary) consumption. In order to preserve exhaustible resources for poor people (not for all) across all generations, the government should invest the exhaustible resources in endless resources according to Hartwick's criteria and should spend the resulting benefits on poor people. If, however, these benefits do not cover the gap between the minimum consumption and the per capita income of the poor in one generation, the government is responsible for covering this gap through the direct consumption of exhaustible resources. To give an exact answer to the question 'how much of the exhaustible resources should be spent to maintain justice between generations?', theoretical and mathematical modeling has been used and a proper function has been provided. The resulting consumption pattern is presented for economic policy makers in Muslim countries, and it can be useful for non-Muslim countries as well.

Keywords: Exhaustible resources, Islamic justice, intergenerational justice, distribution of resources, Hartwick Criteria.

1965 Actionable Rules: Issues and New Directions

Authors: Harleen Kaur

Abstract:

Knowledge Discovery in Databases (KDD) is the process of extracting previously unknown, hidden and interesting patterns from a huge amount of data stored in databases. Data mining is the stage of the KDD process that aims at selecting and applying a particular data mining algorithm to extract interesting and useful knowledge. It is highly expected that data mining methods will find, according to some measures, interesting patterns in databases. It is of vital importance to define good measures of interestingness that would allow the system to discover only the useful patterns. Measures of interestingness are divided into objective and subjective measures. Objective measures are those that depend only on the structure of a pattern and can be quantified by using statistical methods, while subjective measures depend on the subjectivity and understandability of the user who examines the patterns. These subjective measures are further divided into actionable, unexpected and novel. A key issue facing the data mining community is how to take action on the basis of discovered knowledge. For a pattern to be actionable, the user's subjectivity is captured by providing his or her background knowledge about the domain. Here, we consider the actionability of the discovered knowledge as a measure of interestingness and raise important issues which need to be addressed to discover actionable knowledge.

Keywords: Data mining community, Knowledge Discovery in Databases (KDD), interestingness, subjective measures, actionability.

1964 Fast Approximate Bayesian Contextual Cold Start Learning (FAB-COST)

Authors: Jack R. McKenzie, Peter A. Appleby, Thomas House, Neil Walton

Abstract:

Cold-start is a notoriously difficult problem which can occur in recommendation systems, and arises when there is insufficient information to draw inferences for users or items. To address this challenge, a contextual bandit algorithm – the Fast Approximate Bayesian Contextual Cold Start Learning algorithm (FAB-COST) – is proposed, which is designed to provide improved accuracy compared to the traditionally used Laplace approximation in the logistic contextual bandit, while controlling both algorithmic complexity and computational cost. To this end, FAB-COST uses a combination of two moment projection variational methods: Expectation Propagation (EP), which performs well at the cold start, but becomes slow as the amount of data increases; and Assumed Density Filtering (ADF), which has slower growth of computational cost with data size but requires more data to obtain an acceptable level of accuracy. By switching from EP to ADF when the dataset becomes large, FAB-COST is able to exploit their complementary strengths. The empirical justification for FAB-COST is presented, and it is systematically compared to other approaches on simulated data. In a benchmark against the Laplace approximation on real data consisting of over 670,000 impressions from autotrader.co.uk, FAB-COST demonstrates at one point an increase of over 16% in user clicks. On the basis of these results, it is argued that FAB-COST is likely to be an attractive approach to cold-start recommendation systems in a variety of contexts.

Keywords: Cold-start, expectation propagation, multi-armed bandits, Thompson sampling, variational inference.

1963 An Advanced Time-Frequency Domain Method for PD Extraction with Non-Intrusive Measurement

Authors: Guomin Luo, Daming Zhang, Yong Kwee Koh, Kim Teck Ng, Helmi Kurniawan, Weng Hoe Leong

Abstract:

Partial discharge (PD) detection is an important method to evaluate the insulation condition of metal-clad apparatus. Non-intrusive sensors, which are easy to install and do not interrupt operation, are preferred in on-site PD detection. However, this approach often lacks accuracy due to the interference in PD signals. In this paper, a novel PD extraction method that uses frequency analysis and entropy based time-frequency (TF) analysis is introduced. The repetitive pulses from the converter are first removed via frequency analysis. Then, the relative entropy and relative peak frequency of each pulse (i.e. its time-indexed vector TF spectrum) are calculated and all pulses with similar parameters are grouped. According to the characteristics of the non-intrusive sensor and the frequency distribution of PDs, the pulses of PD and interferences are separated. Finally, the PD signal and the interferences are recovered via the inverse TF transform. The de-noised result on noisy PD data demonstrates that the combination of frequency and time-frequency techniques can discriminate PDs from interferences with various frequency distributions.
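
A minimal Python sketch of the per-pulse features (spectral entropy and peak frequency) might look as follows; the spectrogram window settings are illustrative and not taken from the paper.

import numpy as np
from scipy.signal import spectrogram

def pulse_features(pulse, fs):
    """Relative (Shannon) entropy and peak frequency of a pulse's TF spectrum."""
    f, t, S = spectrogram(pulse, fs=fs, nperseg=64, noverlap=32)
    p = S / S.sum()                              # normalise to a probability mass
    entropy = -np.sum(p * np.log2(p + 1e-12))    # spectral entropy of the pulse
    peak_freq = f[np.unravel_index(np.argmax(S), S.shape)[0]]
    return entropy, peak_freq

# Pulses with similar (entropy, peak_freq) pairs would then be grouped, and the
# groups attributed to PD or interference based on their frequency distribution.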

Keywords: Entropy, Fourier analysis, non-intrusive measurement, time-frequency analysis, partial discharge

1962 Fuzzy based Security Threshold Determining for the Statistical En-Route Filtering in Sensor Networks

Authors: Hae Young Lee, Tae Ho Cho

Abstract:

In many sensor network applications, sensor nodes are deployed in open environments and hence are vulnerable to physical attacks, potentially compromising the nodes' cryptographic keys. False sensing reports can be injected through compromised nodes, which can lead not only to false alarms but also to the depletion of the limited energy resources in battery powered networks. Ye et al. proposed a statistical en-route filtering scheme (SEF) to detect such false reports during the forwarding process. In this scheme, the choice of the security threshold value is important since it trades off detection power against overhead. In this paper, we propose a fuzzy logic approach for determining the security threshold value in SEF based sensor networks. The fuzzy logic determines the security threshold by considering the number of partitions in a global key pool, the number of compromised partitions, and the energy level of the nodes. The fuzzy based threshold value can conserve energy while providing sufficient detection power.

Keywords: Fuzzy logic, security, sensor network.

1961 Land Use Change Detection Using Remote Sensing and GIS

Authors: Naser Ahmadi Sani, Karim Solaimani, Lida Razaghnia, Jalal Zandi

Abstract:

In recent decades, rapid and inappropriate changes in land use have been associated with consequences such as natural resource degradation and environmental pollution. Detecting changes in land use is one of the tools for natural resource management and for assessing changes in ecosystems. The aim of this research is to study the land use changes in the Haraz basin, with an area of 677,000 hectares, over a 15-year period (1996 to 2011) using LANDSAT data. The quality of the images was therefore first evaluated, and various enhancement methods for creating synthetic bands were used in the analysis. Separate training sites were selected for each image. The images of each period were then classified into 9 classes using a supervised classification method and the maximum likelihood algorithm. Finally, the changes were extracted in a GIS environment. The results showed that these changes are a warning for the future status of the Haraz basin, since 27% of the area has changed; this is related to the conversion of rangelands to bare land and dry farming, and of dense forest to sparse forest, horticulture, farmland and residential areas.

Keywords: HARAZ Basin, Change Detection, Land-use, Satellite Data.

1960 High Performance Communication Protocol for Wireless Ad-Hoc Sensor Networks

Authors: Toshihiko Sasama, Takahide Yanaka, Kazunori Sugahara, Hiroshi Masuyama

Abstract:

In order to monitor traffic traversal, sensors can be deployed to perform collaborative target detection. Such a sensor network achieves a certain level of detection performance with associated costs of deployment and routing protocol. This paper addresses these two points, sensor deployment and routing algorithm, in the situation where the absolute number of sensors or the total energy becomes insufficient. The discussion on the best deployment scheme concludes that two kinds of deployments, normal and power law distributions, provide coverage durations 6 and 3 times longer, respectively, than a random distribution. The paper also discusses routing algorithms that achieve good performance in each deployment scheme, and concludes that, in place of the traditional algorithm, a new algorithm can extend the coverage duration by 4 times in a normal distribution, in the circumstance where every deployed sensor operates as a binary model.

Keywords: binary sensor, coverage rate, power energy consumption, routing algorithm, sensor deployment

1959 Determining Cluster Boundaries Using Particle Swarm Optimization

Authors: Anurag Sharma, Christian W. Omlin

Abstract:

The self-organizing map (SOM) is a well known data reduction technique used in data mining. Data visualization can reveal structure in data sets that is otherwise hard to detect from raw data alone. However, interpretation through visual inspection is prone to errors and can be very tedious. There are several techniques for the automatic detection of clusters of code vectors found by SOMs, but they generally do not take into account the distribution of code vectors; this may lead to unsatisfactory clustering and poor definition of cluster boundaries, particularly where the density of data points is low. In this paper, we propose the use of a generic particle swarm optimization (PSO) algorithm for finding cluster boundaries directly from the code vectors obtained from SOMs. The application of our method to unlabeled call data for a mobile phone operator demonstrates its feasibility. The PSO algorithm utilizes the U-matrix of the SOM to determine cluster boundaries; the results of this novel automatic method correspond well to boundary detection through visual inspection of code vectors and to the k-means algorithm.
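
A generic PSO loop of the kind referred to above can be sketched in Python as follows; the boundary-fitness function built from the SOM U-matrix is problem specific and is therefore left as an input, and all hyperparameters are illustrative.

import numpy as np

def pso_minimise(fitness, dim, n_particles=30, iters=200, bounds=(0.0, 1.0),
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Generic particle swarm optimiser returning the best position found."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))          # particle positions
    v = np.zeros_like(x)                                  # particle velocities
    pbest, pbest_val = x.copy(), np.array([fitness(p) for p in x])
    g = pbest[np.argmin(pbest_val)].copy()                # global best

    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([fitness(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[np.argmin(pbest_val)].copy()
    return g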

Keywords: Particle swarm optimization, self-organizing maps, clustering, data mining.

1958 UDCA: An Energy Efficient Clustering Algorithm for Wireless Sensor Network

Authors: Boregowda S. B., Hemanth Kumar A. R., Babu N. V., Puttamadappa C., and H. S. Mruthyunjaya

Abstract:

In the past few years, the use of wireless sensor networks (WSNs) has increased considerably in applications such as intrusion detection, forest fire detection, disaster management and the battlefield. Sensor nodes are generally battery operated, low cost devices. The key challenge in the design and operation of WSNs is to prolong the network lifetime by reducing the energy consumption among sensor nodes. Node clustering is one of the most promising techniques for energy conservation. This paper presents a novel clustering algorithm which maximizes the network lifetime by reducing the amount of communication among sensor nodes. The approach also includes a new distributed cluster formation technique that enables the self-organization of a large number of nodes, and an algorithm for maintaining a constant number of clusters by prior selection of cluster heads and by rotating the cluster head role to distribute the energy load evenly among all sensor nodes.

Keywords: Clustering algorithms, cluster head, energy consumption, sensor nodes, wireless sensor networks.

1957 Aspects Concerning Flame Propagation of Various Fuels in Combustion Chamber of Four Valve Engines

Authors: Zoran Jovanovic, Zoran Masonicic, S. Dragutinovic, Z. Sakota

Abstract:

In this paper, results concerning flame propagation of various fuels in a particular combustion chamber with four tilted valves were elucidated. Flame propagation was represented by the evolution of the spatial distribution of temperature in various cut-planes within the combustion chamber, while the flame front location was determined by means of the zones with maximum temperature gradient. The results presented are only a small part of a broader on-going research activity in the field of multidimensional modeling of reactive flows in combustion chambers with complicated geometries, encompassing various turbulence models, different fuels and combustion models. In the case of turbulence, two different models were applied, i.e. the standard k-ε model and the k-ξ-f model. Flame propagation results were analyzed and presented for two different hydrocarbon fuels, CH4 and C8H18. In the case of combustion, all differences ensuing from the different turbulence models, which are obvious for non-reactive flows, are annihilated entirely: the interplay between the fluid flow pattern and flame propagation is invariant with respect to the turbulence models and fuels applied, indicating that the flame propagation through the unburned mixture of CH4 and C8H18 fuels is not chemically controlled.

Keywords: Automotive flows, flame propagation, combustion modelling.

1956 Assessing the Global Water Productivity of Some Irrigation Command Areas in Iran

Authors: A. Montazar

Abstract:

The great challenge of the agricultural sector is to produce more crop from less water, which can be achieved by increasing crop water productivity. The modernization of irrigation systems offers a number of possibilities to expand the economic productivity of water and improve the virtual water status. The objective of the present study is to assess the global water productivity (GWP) within the major irrigation command areas of I.R. Iran. For this purpose, fourteen irrigation command areas located in different parts of Iran were selected. In order to calculate the global water productivity of the irrigation command areas, data on the water delivered to the cropping pattern, the cultivated area, the crop water requirements, and the yield production rate during 2002-2006 were gathered. In each of the command areas, the cultivated crops appear to have a higher amount of virtual water and thus could be replaced by crops with less virtual water. This is suggested merely on the basis of crop water consumption; when replacing crops, economic value as well as cultural and political factors must also be considered. The results indicated that the lowest GWP belongs to the Mahyar and Borkhar irrigation areas, 0.24 kg m-3, and the highest to the Dez irrigation area, 0.81 kg m-3. The findings demonstrate that water management in these two irrigation areas is just efficient. The differences in the GWP of the irrigation areas are due to variations in the cropping pattern and the amount of crop production, in addition to the factors affecting water use efficiency in the irrigation areas.

Keywords: Iran, Irrigation command area, Water productivity, Virtual water.

1955 A Multilanguage Source Code Retrieval System Using Structural-Semantic Fingerprints

Authors: Mohamed Amine Ouddan, Hassane Essafi

Abstract:

Source code retrieval is of immense importance in the software engineering field. The complex tasks of retrieving and extracting information from source code documents are vital in the development cycle of large software systems. The two main subtasks which result from these activities are code duplication prevention and plagiarism detection. In this paper, we propose a source code retrieval system based on a two-level fingerprint representation capturing, respectively, the structural and the semantic information within a source code. A sequence alignment technique is applied to these fingerprints in order to quantify the similarity between source code portions. The specific purpose of the system is to detect plagiarism and duplicated code between programs written in different programming languages belonging to the same class, such as C, C++, Java and C#. These four languages are supported by the current version of the system, which is designed so that it may be easily adapted to any programming language.
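
As a simple stand-in for the sequence-alignment step, a similarity score between two token-level fingerprints can be computed as follows in Python; difflib's matcher is used purely for illustration and is not the alignment algorithm described in the paper.

from difflib import SequenceMatcher

def fingerprint_similarity(fp_a, fp_b):
    """Similarity in [0, 1] between two fingerprints given as lists of
    structural/semantic tokens (hypothetical token streams)."""
    return SequenceMatcher(None, fp_a, fp_b).ratio()

# Example: fingerprint_similarity(["for", "assign", "call"], ["for", "assign", "ret"])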

Keywords: Source code retrieval, plagiarism detection, clone detection, sequence alignment.

1954 Grain Size Characteristics and Sediments Distribution in the Eastern Part of Lekki Lagoon

Authors: Mayowa Philips Ibitola, Abe Oluwaseun Banji, Olorunfemi Akinade-Solomon

Abstract:

A total of 20 bottom sediment samples were collected from the Lekki Lagoon during the wet and dry seasons. The study was carried out to determine the textural characteristics, sediment distribution pattern and energy of transportation within the lagoon system. The sediment grain sizes and depth profiles were analyzed using the dry sieving method and a MATLAB algorithm for processing. The granulometric analysis reveals fine grained sand for both the wet and dry seasons, with average mean values of 2.03 ϕ and -2.88 ϕ, respectively. Sediments were moderately sorted, with average inclusive standard deviations of 0.77 ϕ and -0.82 ϕ. Skewness varied from strongly coarse to near symmetrical, 0.34 ϕ and 0.09 ϕ. The average kurtosis values were 0.87 ϕ and -1.4 ϕ (platykurtic and leptokurtic). Overall, the bathymetry shows an average depth of 4.0 m; the deepest and shallowest areas have depths of 11.2 m and 0.5 m, respectively. A high concentration of fine sand was observed in the deep areas compared to the shallow areas during both the wet and dry seasons. Statistical parameter results show that the sediments are overall sorted and deposited under low energy conditions over a long distance. However, the sediment distribution and sediment transport pattern of the Lekki Lagoon are controlled by a low energy current, and the down-slope configuration of the bathymetry enhances the sorting and the deposition rate in the Lekki Lagoon.
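
The textural statistics quoted above (graphic mean, inclusive standard deviation, skewness and kurtosis) are conventionally computed with the Folk and Ward (1957) graphic formulas; the Python sketch below shows that computation from sieve data, under the assumption that this is the convention followed (the paper's exact MATLAB processing is not reproduced).

import numpy as np

def folk_ward(phi, weight_pct):
    """Folk and Ward graphic measures from sieve data.

    phi: sieve sizes in phi units, ordered coarse to fine (ascending phi);
    weight_pct: weight percent retained on each sieve."""
    cum = np.cumsum(weight_pct) / np.sum(weight_pct) * 100.0  # cumulative curve

    def pct(q):
        # phi value at cumulative percentile q, by linear interpolation
        return np.interp(q, cum, phi)

    p5, p16, p25, p50, p75, p84, p95 = map(pct, (5, 16, 25, 50, 75, 84, 95))
    mean = (p16 + p50 + p84) / 3.0
    sorting = (p84 - p16) / 4.0 + (p95 - p5) / 6.6
    skew = ((p16 + p84 - 2 * p50) / (2 * (p84 - p16))
            + (p5 + p95 - 2 * p50) / (2 * (p95 - p5)))
    kurt = (p95 - p5) / (2.44 * (p75 - p25))
    return mean, sorting, skew, kurt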

Keywords: Lekki Lagoon, marine sediment, bathymetry, grain size distribution.

1953 Simulation Based VLSI Implementation of Fast Efficient Lossless Image Compression System Using Adjusted Binary Code & Golomb Rice Code

Authors: N. Muthukumaran, R. Ravi

Abstract:

A simulation based VLSI implementation of the FELICS (Fast Efficient Lossless Image Compression System) algorithm is proposed to provide lossless image compression, realized in a simulation oriented VLSI (Very Large Scale Integration) design. The goal is to analyze the performance of lossless image compression, to reduce the image size without losing image quality, and then to implement the FELICS algorithm in VLSI. The FELICS algorithm uses a simplified adjusted binary code for image compression; the compressed image is processed pixel by pixel and then implemented in the VLSI domain. This is used to achieve a high processing speed and to minimize area and power. The simplified adjusted binary code reduces the number of arithmetic operations and achieves a high processing speed. A colour difference preprocessing step is also proposed to improve coding efficiency with simple arithmetic operations. The VLSI based FELICS algorithm provides an effective solution for hardware architecture design, with a regular pipelined data flow and four-stage parallelism. With two-level parallelism, consecutive pixels can be classified into even and odd samples, and an individual hardware engine is dedicated to each. This method can be further enhanced by multilevel parallelism.

Keywords: Image compression, pixel, compression ratio, adjusted binary code, Golomb-Rice code, high definition display, VLSI implementation.

1952 Route Training in Mobile Robotics through System Identification

Authors: Roberto Iglesias, Theocharis Kyriacou, Ulrich Nehmzow, Steve Billings

Abstract:

Fundamental sensor-motor couplings form the backbone of most mobile robot control tasks, and often need to be implemented fast, efficiently and nevertheless reliably. Machine learning techniques are therefore often used to obtain the desired sensor-motor competences. In this paper we present an alternative to established machine learning methods such as artificial neural networks, that is very fast, easy to implement, and has the distinct advantage that it generates transparent, analysable sensor-motor couplings: system identification through nonlinear polynomial mapping. This work, which is part of the RobotMODIC project at the universities of Essex and Sheffield, aims to develop a theoretical understanding of the interaction between the robot and its environment. One of the purposes of this research is to enable the principled design of robot control programs. As a first step towards this aim we model the behaviour of the robot, as this emerges from its interaction with the environment, with the NARMAX modelling method (Nonlinear, Auto-Regressive, Moving Average models with eXogenous inputs). This method produces explicit polynomial functions that can be subsequently analysed using established mathematical methods. In this paper we demonstrate the fidelity of the obtained NARMAX models in the challenging task of robot route learning; we present a set of experiments in which a Magellan Pro mobile robot was taught to follow four different routes, always using the same mechanism to obtain the required control law.
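
As a simplified illustration of polynomial system identification (without the moving-average noise terms and the orthogonal forward-regression estimator used for full NARMAX models), a polynomial ARX model can be fitted by least squares as in the Python sketch below; the lag and degree are illustrative choices.

import numpy as np
from itertools import combinations_with_replacement

def fit_polynomial_arx(u, y, lag=2, degree=2):
    """Least-squares fit of a polynomial ARX model: regressors are monomials
    up to 'degree' built from lagged inputs u and lagged outputs y."""
    u, y = np.asarray(u, float), np.asarray(y, float)
    rows, targets = [], []
    for k in range(lag, len(y)):
        base = np.concatenate([y[k - lag:k], u[k - lag:k]])
        feats = [1.0]                                     # constant term
        for d in range(1, degree + 1):
            for combo in combinations_with_replacement(range(len(base)), d):
                feats.append(np.prod(base[list(combo)])) # monomial regressor
        rows.append(feats)
        targets.append(y[k])
    theta, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
    return theta   # polynomial coefficients, analysable term by term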

Keywords: Mobile robotics, system identification, non-linear modelling, NARMAX.

1951 Weakened Vortex Shedding from a Rotating Cylinder

Authors: Sharul S. Dol

Abstract:

An experimental study of the turbulent near wake of a rotating circular cylinder was made at a Reynolds number of 2000 for velocity ratios λ between 0 and 2.7. Particle image velocimetry data are analyzed to study the effects of rotation on the flow structures behind the cylinder. The results indicate that the rotation of the cylinder causes significant changes in the vortex formation. The Kármán vortex shedding pattern of alternating vortices gives rise to strong periodic fluctuations of a vortex street for λ < 2.0. Alternate vortex shedding is weak and close to being suppressed at λ = 2.0, resulting in a distorted street with vortices of alternating sense subsequently being found on opposite sides. Only part of the circulation is shed due to the interference in the separation point, mixing in the base region, re-attachment, and the vortex cut-off phenomenon. The alternating vortex shedding pattern diminishes and completely disappears when the velocity ratio is 2.7. The shed vortices are insignificant in size and form a single line of vortices. It is clear that flow asymmetries will deteriorate vortex shedding, and when the asymmetries are large enough, total inhibition of a periodic street occurs.
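
For reference, the velocity ratio, Reynolds number and shedding frequency discussed above follow the usual definitions (conventional symbols, not notation taken from the paper):

\lambda = \frac{\omega D / 2}{U_\infty}, \qquad Re = \frac{U_\infty D}{\nu}, \qquad St = \frac{f_s D}{U_\infty}

where ω is the cylinder's angular velocity, D its diameter, U_∞ the free-stream velocity, ν the kinematic viscosity and f_s the vortex shedding frequency.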

Keywords: Circulation, particle image velocimetry, rotating circular cylinder, smoke-wire flow visualization, Strouhal number, vortex shedding, vortex street.

1950 Object Detection Based on Plane Segmentation and Features Matching for a Service Robot

Authors: António J. R. Neves, Rui Garcia, Paulo Dias, Alina Trifan

Abstract:

With the aging of the world population and the continuous growth in technology, service robots are nowadays increasingly explored as alternatives to healthcare givers or personal assistants for elderly or disabled people. Any service robot should be capable of interacting with the human companion, receiving commands, navigating through the environment, either known or unknown, and recognizing objects. This paper proposes an approach to object recognition based on the use of depth information and color images for a service robot. We present a study of two of the most used methods for object detection, in which 3D data are used to detect the position of the objects to be classified that lie on horizontal surfaces. Since most of the objects of interest accessible to service robots are on these surfaces, the proposed 3D segmentation reduces the processing time and simplifies the scene for object recognition. The first approach to object recognition is based on color histograms, while the second is based on the SIFT and SURF feature descriptors. We present comparative experimental results obtained with a real service robot.
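
A minimal sketch of the colour-histogram branch is shown below (using OpenCV, with HSV hue-saturation histograms and histogram correlation as illustrative choices; the paper does not specify these exact settings).

import cv2
import numpy as np

def histogram_match_score(candidate_bgr, model_bgr, bins=32):
    """Compare an object candidate against a stored model by colour histogram;
    returns a correlation score in [-1, 1], higher meaning more similar."""
    def hs_hist(img):
        hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([hsv], [0, 1], None, [bins, bins], [0, 180, 0, 256])
        return cv2.normalize(hist, hist).flatten()
    return cv2.compareHist(hs_hist(candidate_bgr), hs_hist(model_bgr),
                           cv2.HISTCMP_CORREL)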

Keywords: Service Robot, Object Recognition, 3D Sensors, Plane Segmentation.

1949 An Intelligent System for Phish Detection, using Dynamic Analysis and Template Matching

Authors: Chinmay Soman, Hrishikesh Pathak, Vishal Shah, Aniket Padhye, Amey Inamdar

Abstract:

Phishing, or the stealing of sensitive information on the web, has dealt a major blow to Internet security in recent times. Most of the existing anti-phishing solutions fail to handle the fuzziness involved in phish detection, thus leading to a large number of false positives. This fuzziness is attributed to the use of the highly flexible and, at the same time, highly ambiguous HTML language. We introduce a new perspective against phishing that tries to systematically prove whether a given page is phished or not, using the corresponding original page as the basis of the comparison. It analyzes the layout of the pages under consideration to determine the percentage distortion between them, indicative of any form of malicious alteration. The system design represents an intelligent system employing dynamic assessment, which accurately identifies brand new phishing attacks and will prove effective in reducing the number of false positives. This framework could potentially be used as a knowledge base in educating internet users against phishing.

Keywords: World Wide Web, Phishing, Internet security, data mining.

1948 Degraded Document Analysis and Extraction of Original Text Document: An Approach without Optical Character Recognition

Authors: L. Hamsaveni, Navya Prakash, Suresha

Abstract:

Document image analysis recognizes text and graphics in documents acquired as images. An approach without Optical Character Recognition (OCR) for degraded document image analysis has been adopted in this paper. The technique involves document imaging methods such as image fusing and Speeded Up Robust Features (SURF) detection to identify and extract the degraded regions from a set of document images and to obtain an original document with complete information. In case the captured degraded document image is skewed, it has to be straightened (deskewed) before further processing. A special image storage format known as YCbCr is used as a tool to convert the grayscale image to the RGB image format. The presented algorithm is tested on various types of degraded documents such as printed documents, handwritten documents, old script documents and handwritten image sketches in documents. The purpose of this research is to obtain an original document for a given set of degraded documents from the same source.

Keywords: Grayscale image format, image fusing, SURF detection, YCbCr image format.

1947 Performance Analysis of Expert Systems Incorporating Neural Network for Fault Detection of an Electric Motor

Authors: M. Khatami Rad, N. Jamali, M. Torabizadeh, A. Noshadi

Abstract:

In this paper, an artificial neural network simulator is employed to carry out diagnosis and prognosis on an electric motor, as rotating machinery, within a predictive maintenance framework. Vibration data of the failed motor, including unbalance, misalignment and bearing faults, were collected for training the neural network. Neural network training was performed for a variety of inputs, and the motor condition was used as the expert training information. The main purpose of applying the neural network as an expert system was to detect the type of failure and to apply preventive maintenance. The benefit of this study for machinery industries lies in providing appropriate maintenance, which is an essential activity to keep the production process going across all processes in the machinery industry. Proper maintenance is pivotal in order to prevent possible failures in the operating system and to increase the availability and effectiveness of a system through vibration monitoring analysis and the development of an expert system.

Keywords: Condition based monitoring, expert system, neural network, fault detection, vibration monitoring.

1946 Spatial-Temporal Awareness Approach for Extensive Re-Identification

Authors: Tyng-Rong Roan, Fuji Foo, Wenwey Hseush

Abstract:

Recent developments in AI and edge computing play a critical role in capturing meaningful events such as the detection of an unattended bag. One of the core problems is re-identification across multiple CCTVs. Immediately following the detection of a meaningful event, the objects related to the event must be tracked and traced. In an extensive environment, the challenge becomes severe when the number of CCTVs increases substantially, imposing difficulties in achieving high accuracy while maintaining real-time performance. The algorithm that re-identifies cross-boundary objects for extensive tracking is referred to as Extensive Re-Identification, which emphasizes the issues related to the complexity behind a great number of CCTVs. The Spatial-Temporal Awareness approach challenges the conventional thinking and concept of operations, which is labor-intensive and time-consuming. The ability to perform Extensive Re-Identification through a multi-sensory network provides next-level insights, creating value beyond traditional risk management.

Keywords: Long-short-term memory, re-identification, security critical application, spatial-temporal awareness.

1945 An Effective Islanding Detection and Classification Method Using Neuro-Phase Space Technique

Authors: Aziah Khamis, H. Shareef

Abstract:

The purpose of planned islanding is to construct a power island during system disturbances; such islands are commonly formed for maintenance purposes. However, in most cases island mode operation is not allowed, and therefore distributed generators (DGs) must sense an unplanned disconnection from the main grid. Passive techniques are the most commonly used methods for this purpose, but they need improvement in order to reliably identify the islanding condition. In this paper, an effective method for identification of the islanding condition based on phase space and neural network techniques has been developed. The voltage waveforms captured at the coupling points of the DGs are processed to extract the required features using the phase space technique. Based on the extracted features, two neural network configurations, namely radial basis function and probabilistic neural networks, are trained to recognize the waveform class. According to the test results, the investigated technique can provide satisfactory identification of the islanding condition in the distribution system.
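
The phase space step is essentially a time-delay reconstruction of the captured voltage waveform; a minimal Python sketch is shown below, with the embedding dimension and delay chosen for illustration only (scalar features such as trajectory length or centroid would then be fed to the RBF or probabilistic neural network).

import numpy as np

def delay_embed(x, dim=3, tau=5):
    """Time-delay (phase space) reconstruction of a sampled waveform x.
    Returns an (n, dim) array whose rows are points of the trajectory."""
    x = np.asarray(x, float)
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau:i * tau + n] for i in range(dim)])

def trajectory_length(points):
    """Example scalar feature: total length of the phase space trajectory."""
    return np.linalg.norm(np.diff(points, axis=0), axis=1).sum()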

Keywords: Classification, Islanding detection, Neural network, Phase space.

1944 Multiple-Points Fault Signature's Dynamics Modeling for Bearing Defect Frequencies

Authors: Muhammad F. Yaqub, Iqbal Gondal, Joarder Kamruzzaman

Abstract:

The occurrence of a multiple-points fault in machine operation can produce complex fault signatures, which can lower fault diagnosis accuracy. In this study, a multiple-points defect model (MPDM) is proposed which can simulate the fault signature dynamics for n-point bearing faults. Furthermore, this study identifies that in the case of a multiple-points fault in a rotary machine, the location of the dominant component of the defect frequency shifts depending upon the relative location of the fault points, which could mislead the fault diagnostic model into inaccurate detections. Analytical and experimental results are presented to characterize and validate the variation in the dominant component of the defect frequency. Based on envelope detection analysis, a modification is recommended to the existing fault diagnostic models to consider the multiples of the defect frequency, rather than only the frequency spectrum at the defect frequency itself, in order to incorporate the impact of multiple-points faults.
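
A standard envelope-spectrum computation of the kind referred to above can be sketched as follows in Python (Hilbert-transform envelope followed by an FFT); the diagnostic check would then inspect the spectrum at k·f_defect for k = 1, 2, ..., rather than only at f_defect, as the abstract recommends.

import numpy as np
from scipy.signal import hilbert

def envelope_spectrum(vibration, fs):
    """Envelope spectrum of a vibration signal; bearing defects appear as
    peaks at the defect frequency and its multiples."""
    envelope = np.abs(hilbert(vibration))        # amplitude envelope
    envelope -= envelope.mean()                  # remove the DC component
    spectrum = np.abs(np.fft.rfft(envelope)) / len(envelope)
    freqs = np.fft.rfftfreq(len(envelope), d=1.0 / fs)
    return freqs, spectrum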

Keywords: Envelope detection, machine defect frequency, multiple faults, machine health monitoring.

1943 Fast Factored DCT-LMS Speech Enhancement for Performance Enhancement of Digital Hearing Aid

Authors: Sunitha S. L., V. Udayashankara

Abstract:

Background noise is particularly damaging to speech intelligibility for people with hearing loss, especially for sensorineural loss patients. Several investigations of speech intelligibility have demonstrated that sensorineural loss patients need a 5-15 dB higher SNR than normal hearing subjects. This paper describes a Discrete Cosine Transform power normalized Least Mean Square (DCT-LMS) algorithm to improve the SNR and the convergence rate of the LMS for sensorineural loss patients. Since it requires only real arithmetic, it establishes a faster convergence rate compared to the time domain LMS, and the transformation improves the eigenvalue distribution of the input autocorrelation matrix of the LMS filter. The DCT has good orthonormal, separable, and energy compaction properties. Although the DCT does not separate frequencies, it is a powerful signal decorrelator; it is a real valued function and thus can be used effectively in real-time operation. The advantages of DCT-LMS compared to the standard LMS algorithm are shown via SNR and eigenvalue ratio computations. Exploiting the symmetry of the basis functions, the DCT transform matrix [AN] can be factored into a series of ±1 butterflies and rotation angles. This factorization results in one of the fastest DCT implementations. There are different ways to obtain such factorizations; this work uses the fast factored DCT algorithm developed by Chen and co-workers. The computer simulation results show the superior convergence characteristics of the proposed algorithm, improving the SNR by at least 10 dB for input SNRs less than or equal to 0 dB, with faster convergence speed and better time and frequency characteristics.
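
A minimal Python sketch of a power-normalized transform-domain (DCT) LMS filter is given below; the step size, filter order and power-smoothing factor are illustrative, and the fast factored DCT of Chen and co-workers is replaced here by a library DCT call.

import numpy as np
from scipy.fft import dct

def dct_lms(x, d, order=16, mu=0.05, beta=0.99, eps=1e-6):
    """Adaptive filtering of input x toward desired signal d in the DCT domain,
    with per-bin power normalisation of the coefficient update."""
    w = np.zeros(order)                  # transform-domain filter weights
    power = np.full(order, eps)          # running power estimate per DCT bin
    buf = np.zeros(order)                # tap-delay line of recent input samples
    y_out, e_out = np.zeros(len(x)), np.zeros(len(x))

    for n in range(len(x)):
        buf = np.roll(buf, 1)
        buf[0] = x[n]
        z = dct(buf, norm='ortho')                 # decorrelating transform
        power = beta * power + (1 - beta) * z**2   # update per-bin power
        y = w @ z                                  # filter output
        e = d[n] - y                               # error signal
        w += mu * e * z / (power + eps)            # normalised LMS update
        y_out[n], e_out[n] = y, e
    return y_out, e_out, w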

Keywords: Hearing Impairment, DCT Adaptive filter, Sensorineural loss patients, Convergence rate.

1942 Dimensionality Reduction in Modal Analysis for Structural Health Monitoring

Authors: Elia Favarelli, Enrico Testi, Andrea Giorgetti

Abstract:

Autonomous structural health monitoring (SHM) of many structures and bridges has become a topic of paramount importance for maintenance purposes and safety reasons. This paper proposes a set of machine learning (ML) tools to perform automatic feature selection and detection of anomalies in a bridge from vibrational data, and compares different feature extraction schemes to increase the accuracy and reduce the amount of data collected. As a case study, the Z-24 bridge is considered because of the extensive database of accelerometric data in both standard and damaged conditions. The proposed framework starts from the first four fundamental frequencies extracted through operational modal analysis (OMA) and clustering, followed by time-domain filtering (tracking). The fundamental frequencies extracted are then fed to a dimensionality reduction block implemented through two different approaches: feature selection (intelligent multiplexer), which tries to estimate the most reliable frequencies based on the evaluation of some statistical features (i.e., entropy, variance, kurtosis), and feature extraction (auto-associative neural network, ANN), which combines the fundamental frequencies to extract new damage sensitive features in a low dimensional feature space. Finally, one-class classification (OCC) algorithms perform anomaly detection, trained with standard-condition points and tested with both normal and anomalous ones. In particular, principal component analysis (PCA), kernel principal component analysis (KPCA), and the auto-associative neural network (ANN) are presented and their performances are compared. It is also shown that, by evaluating the correct features, the anomaly can be detected with an accuracy and an F1 score greater than 95%.
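
A minimal Python sketch of a PCA-based one-class detector of the kind described above (reconstruction error on features from the healthy condition, with an illustrative threshold choice) might look like this:

import numpy as np
from sklearn.decomposition import PCA

def fit_pca_detector(train_feats, n_components=2, quantile=0.99):
    """Fit PCA on healthy-condition feature vectors and return a detector
    that flags samples with unusually large reconstruction error."""
    pca = PCA(n_components=n_components).fit(train_feats)
    recon = pca.inverse_transform(pca.transform(train_feats))
    errors = np.linalg.norm(train_feats - recon, axis=1)
    threshold = np.quantile(errors, quantile)      # illustrative threshold

    def is_anomaly(feats):
        r = pca.inverse_transform(pca.transform(feats))
        return np.linalg.norm(feats - r, axis=1) > threshold

    return is_anomaly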

Keywords: Anomaly detection, dimensionality reduction, frequencies selection, modal analysis, neural network, structural health monitoring, vibration measurement.
