Search results for: curve detection
1345 Automated Service Scene Detection for Badminton Game Analysis Using CHLAC and MRA
Authors: Fumito Yoshikawa, Takumi Kobayashi, Kenji Watanabe, Nobuyuki Otsu
Abstract:
Extracting in-play scenes from sport videos is essential for quantitative analysis and effective video browsing of sport activities. Game analysis of badminton, as with other racket sports, requires detecting the start and end of each rally period in an automated manner. This paper describes an automatic serve scene detection method employing cubic higher-order local auto-correlation (CHLAC) and multiple regression analysis (MRA). CHLAC can extract features of the postures and motions of multiple persons without segmenting and tracking each person, by virtue of shift-invariance and additivity, and requires no prior knowledge. Specific scenes, such as the serve, are then detected by linear regression (MRA) on the CHLAC features. To demonstrate the effectiveness of the method, experiments were conducted on video sequences of five badminton matches captured by a single ceiling camera. The average precision and recall rates for serve scene detection were 95.1% and 96.3%, respectively.
Keywords: Badminton, CHLAC, MRA, video-based motion detection.
1344 Improved of Elliptic Curves Cryptography over a Ring
Authors: A. Chillali, A. Tadmori, M. Ziane
Abstract:
In this article we study the elliptic curve defined over the ring An and define the mathematical operations of ECC, which provide high security and an advantage for wireless applications compared to other asymmetric-key cryptosystems.
Keywords: Elliptic Curves, Finite Ring, Cryptography.
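The group law underlying ECC can be sketched in a few lines. The snippet below implements affine point addition and scalar multiplication on a toy curve over a prime field GF(p); the ring An construction studied in the paper is not reproduced, and the curve parameters are illustrative assumptions.

```python
# Minimal sketch of the elliptic-curve group law over a prime field GF(p).
# The curve parameters below are illustrative, not the ring An from the paper.
p = 97                      # small prime modulus (toy example)
a, b = 2, 3                 # curve y^2 = x^3 + a*x + b (mod p)
O = None                    # point at infinity (group identity)

def inv_mod(x, p):
    """Modular inverse via Fermat's little theorem (p prime)."""
    return pow(x, p - 2, p)

def ec_add(P, Q):
    """Add two points on the curve; the point at infinity is represented by None."""
    if P is O:
        return Q
    if Q is O:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return O                                          # P + (-P) = O
    if P == Q:
        s = (3 * x1 * x1 + a) * inv_mod(2 * y1, p) % p    # tangent slope
    else:
        s = (y2 - y1) * inv_mod(x2 - x1, p) % p           # chord slope
    x3 = (s * s - x1 - x2) % p
    y3 = (s * (x1 - x3) - y1) % p
    return (x3, y3)

def ec_mul(k, P):
    """Scalar multiplication k*P by double-and-add."""
    R = O
    while k:
        if k & 1:
            R = ec_add(R, P)
        P = ec_add(P, P)
        k >>= 1
    return R

# Example: (3, 6) lies on y^2 = x^3 + 2x + 3 (mod 97); compute a few multiples.
G = (3, 6)
print(ec_add(G, G), ec_mul(5, G))
```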
1343 Performance Evaluation of Iris Region Detection and Localization for Biometric Identification System
Authors: Chit Su Htwe, Win Htay
Abstract:
Iris recognition technology is more accurate, faster and less invasive than other biometric techniques using, for example, fingerprints, face, retina, hand geometry, voice or signature patterns. The system developed in this study has the potential to play a key role in high-risk security areas and can give organizations a fast and secure way of granting access only to authorized personnel. The aim of the paper is to perform iris region detection and localization of the inner and outer iris boundaries. The system was implemented on the Windows platform using the Visual C# programming language, an easy and efficient tool for image processing with good performance accuracy. In particular, the system includes two main parts; the first preprocesses the iris images using Canny edge detection, segments the iris region from the rest of the image and determines the location of the iris boundaries by applying the Hough transform. The proposed system was tested on 756 iris images from 60 eyes of the CASIA iris database.
Keywords: Canny, C#, Hough transform, image preprocessing.
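A minimal sketch of the preprocessing pipeline described in the abstract (Canny edge detection followed by a circular Hough transform), written with OpenCV in Python rather than the paper's C# implementation; the file name and all thresholds and radius limits are illustrative assumptions.

```python
# Sketch: Canny edges + circular Hough transform to localize iris boundaries.
import cv2
import numpy as np

img = cv2.imread("iris.png", cv2.IMREAD_GRAYSCALE)   # hypothetical input image
blurred = cv2.GaussianBlur(img, (5, 5), 0)           # suppress noise before edge detection
edges = cv2.Canny(blurred, 50, 150)                  # Canny edge map (kept for inspection)

# Circular Hough transform for the inner (pupil) and outer (limbus) boundaries;
# the parameters would need tuning for the CASIA images.
circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1, minDist=40,
                           param1=150, param2=30, minRadius=20, maxRadius=120)
if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        cv2.circle(img, (x, y), r, 255, 1)           # draw detected boundary
cv2.imwrite("iris_boundaries.png", img)
```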
1342 Air Handling Units Power Consumption Using Generalized Additive Model for Anomaly Detection: A Case Study in a Singapore Campus
Authors: Ju Peng Poh, Jun Yu Charles Lee, Jonathan Chew Hoe Khoo
Abstract:
The emergence of digital twin technology, a digital replica of the physical world, has improved real-time access to sensor data about the performance of buildings. This digital transformation has opened up many opportunities to improve building management by using the collected data to monitor consumption patterns and energy leakages. One example is the integration of predictive models for anomaly detection. In this paper, we use the Generalised Additive Model (GAM) for anomaly detection of Air Handling Unit (AHU) power consumption patterns. There is ample research on the use of GAMs for predicting power consumption at the office-building and nation-wide level. However, there is limited illustration of their anomaly detection capabilities, of prescriptive analytics case studies, and of their integration with the latest developments in digital twin technology. In this paper, we applied the general GAM modelling framework to historical data on the AHU power consumption and cooling load of a building from Jan 2018 to Aug 2019 at an education campus in Singapore to train prediction models that, in turn, yield predicted values and ranges. The historical data are seamlessly extracted from the digital twin for modelling purposes. We enhanced the utility of the GAM model by using it to power a real-time anomaly detection system based on the forward-predicted ranges. The magnitude of deviation from the upper and lower bounds of the uncertainty intervals is used to identify anomalous data points, all based on historical data, without explicit intervention from domain experts. Notwithstanding, the domain expert fits in through an optional feedback loop through which iterative data cleansing is performed. After an anomalously high or low level of power consumption is detected, a set of rule-based conditions is evaluated in real time to help determine the next course of action for the facilities manager. The performance of the GAM is then compared with other approaches to evaluate its effectiveness. Lastly, we discuss the successful deployment of this approach for the detection of anomalous power consumption patterns and illustrate it with real-world use cases.
Keywords: Anomaly detection, digital twin, Generalised Additive Model, Power Consumption Model.
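The band-based anomaly rule described above can be illustrated independently of the GAM fitting step. A minimal Python sketch, assuming the fitted model already supplies lower and upper prediction-interval bounds; the AHU readings and bounds below are invented for illustration.

```python
# Minimal sketch of the band-based anomaly rule. Fitting the GAM itself is not
# reproduced; we assume the model already provides per-step prediction bounds.
import numpy as np

def flag_anomalies(actual, lower, upper):
    """Return (is_anomaly, deviation) for each observation.

    deviation is 0 inside the predicted band, otherwise the distance to the
    nearest band edge, which can feed the rule-based follow-up conditions.
    """
    actual, lower, upper = map(np.asarray, (actual, lower, upper))
    deviation = np.where(actual > upper, actual - upper,
                         np.where(actual < lower, lower - actual, 0.0))
    return deviation > 0, deviation

# Illustrative AHU power readings (kW) against a hypothetical predicted band.
actual = np.array([52.0, 55.5, 80.2, 51.0, 30.1])
lower  = np.array([48.0, 50.0, 49.0, 47.5, 46.0])
upper  = np.array([58.0, 60.0, 59.0, 57.0, 56.0])
is_anom, dev = flag_anomalies(actual, lower, upper)
print(is_anom)   # [False False  True False  True]
print(dev)       # magnitude of deviation from the band
```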
1341 Unified Method to Block Pornographic Images in Websites
Authors: Sakthi Priya Balaji R., Vijayendar G.
Abstract:
This paper proposes a technique to block adult images displayed in websites. The filter is designed to perform even in exceptional cases, such as when face detection is not possible or face visibility is poor. This is achieved by using an alternative phase to extract the Most Frequent Color (MFC) from the human body regions estimated using a biometric of anthropometric distances between fixed, rigidly connected body locations. The logical results generated can be protected from being overridden by a firewall or intrusion by encrypting the result in an SSH data packet.
Keywords: Face detection, characteristics extraction and classification, component-based shape analysis and classification, open source SSH V2 protocol.
1340 Financial Statement Fraud: The Need for a Paradigm Shift to Forensic Accounting
Authors: Ifedapo Francis Awolowo
Abstract:
The unrelenting series of embarrassing audit failures should stimulate a paradigm shift in accounting. In this age of information revolution, there is a need to constantly improve the products or services one offers to the market in order to remain relevant. This study explores the perceptions of external auditors, forensic accountants and accounting academics on whether a paradigm shift to forensic accounting can reduce financial statement fraud. Through a neo-empiricist/inductive analytical approach, the findings reveal that a paradigm shift to forensic accounting might be the right step toward increasing the chances of fraud prevention and detection in financial statements. This research has implications for accounting education regarding the need to incorporate forensic accounting into the present-day accounting curriculum. Accounting professional bodies, accounting standard setters and accounting firms all have roles to play in incorporating forensic accounting education into the accounting curriculum. In particular, there is a need to amend ISA 240 to make the prevention and detection of fraud the responsibility of both those charged with the management and governance of companies and statutory auditors.
Keywords: Financial statement fraud, forensic accounting, fraud prevention and detection, auditing, audit expectation gap, corporate governance.
1339 Half Model Testing for Canard of a Hybrid Buoyant Aircraft
Authors: A. U. Haque, W. Asrar, A. A. Omar, E. Sulaeman, J. S. Mohamed Ali
Abstract:
Due to interference effects, the intrinsic aerodynamic parameters obtained from individual component testing are always fundamentally different from those obtained for complete model testing. The considerations and limitations of such testing need to be taken into account in any design work related to the component build-up method. In this paper, a scaled model of a straight rectangular canard of a hybrid buoyant aircraft is tested at 50 m/s in the IIUM Low Speed Wind Tunnel (LSWT). The model and its attachment to the balance are kept rigid so that the results are free from aeroelastic distortion. Based on the velocity profile of the test section's floor, the height of the model is kept equal to the corresponding boundary layer displacement. Balance measurements provide valuable but limited information on the overall aerodynamic behavior of the model. The zero-lift angle is obtained at -2.2° and the corresponding drag coefficient was found to be less than that at zero angle of attack. As part of the validation of a low-fidelity tool, the lift coefficient plot was verified against the experimental data; except for the zero-lift value, the overall trend under-predicted the lift coefficient. Based on this comparative study, a correction factor of 1.36 is proposed for the lift curve slope obtained from the panel method.
Keywords: Wind tunnel testing, boundary layer displacement, lift curve slope, canard, aerodynamics.
1338 Learning to Order Terms: Supervised Interestingness Measures in Terminology Extraction
Authors: Jérôme Azé, Mathieu Roche, Yves Kodratoff, Michèle Sebag
Abstract:
Term extraction, a key data preparation step in text mining, extracts the terms, i.e. relevant collocations of words, attached to specific concepts (e.g. genetic-algorithms and decision-trees are terms associated with the concept "Machine Learning"). In this paper, the task of extracting interesting collocations is achieved through a supervised learning algorithm, exploiting a few collocations manually labelled as interesting/not interesting. From these examples, the ROGER algorithm learns a numerical function inducing a ranking on the collocations. This ranking is optimized using genetic algorithms, maximizing the trade-off between the false positive and true positive rates (Area Under the ROC Curve). The approach uses a particular representation for the word collocations, namely the vector of values of the standard statistical interestingness measures attached to each collocation. As this representation is general (across corpora and natural languages), generality tests were performed by applying the ranking function learned from an English corpus in biology to a French corpus of curricula vitae, and vice versa, showing good robustness of the approach compared to a state-of-the-art Support Vector Machine (SVM).
Keywords: Text mining, terminology extraction, evolutionary algorithm, ROC curve.
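The fitness criterion optimized by ROGER, the Area Under the ROC Curve of a ranking, can be computed directly from scores and labels via the Mann-Whitney statistic. A minimal sketch with invented scores and labels:

```python
# Minimal sketch of AUC computed from a ranking via the Mann-Whitney U statistic.
import numpy as np

def auc_from_scores(scores, labels):
    """AUC = P(score of a random positive > score of a random negative),
    with ties counted as 1/2."""
    scores, labels = np.asarray(scores, float), np.asarray(labels, bool)
    pos, neg = scores[labels], scores[~labels]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Toy example: scores produced by a learned ranking function for 6 collocations
# labelled interesting (1) / not interesting (0).
scores = [0.9, 0.8, 0.35, 0.7, 0.2, 0.1]
labels = [1,   1,   1,    0,   0,   0]
print(auc_from_scores(scores, labels))   # 0.888... (8 of 9 positive-negative pairs ranked correctly)
```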
1337 A Comparison of Inverse Simulation-Based Fault Detection in a Simple Robotic Rover with a Traditional Model-Based Method
Authors: Murray L. Ireland, Kevin J. Worrall, Rebecca Mackenzie, Thaleia Flessa, Euan McGookin, Douglas Thomson
Abstract:
Robotic rovers which are designed to work in extra-terrestrial environments present a unique challenge in terms of the reliability and availability of systems throughout the mission. Should some fault occur, with the nearest human potentially millions of kilometres away, detection and identification of the fault must be performed solely by the robot and its subsystems. Faults in the system sensors are relatively straightforward to detect, through the residuals produced by comparing the system output with that of a simple model. However, faults in the input, that is, in the actuators of the system, are harder to detect. A step change in the input signal, caused potentially by the loss of an actuator, can propagate through the system, resulting in complex residuals in multiple outputs. These residuals can be difficult to isolate or distinguish from residuals caused by environmental disturbances. While a more complex fault detection method or additional sensors could be used to solve these issues, an alternative is presented here. Using inverse simulation (InvSim), the inputs and outputs of the mathematical model of the rover system are reversed. Thus, for a desired trajectory, the corresponding actuator inputs are obtained. A step fault near the input then manifests itself as a step change in the residual between the system inputs and the input trajectory obtained through inverse simulation. This approach avoids the need for additional hardware on a mass- and power-critical system such as the rover. The InvSim fault detection method is applied to a simple four-wheeled rover in simulation. Additive system faults and an external disturbance force are applied to the vehicle in turn, such that the dynamic response and sensor output of the rover are affected. Basic model-based fault detection is then employed to provide output residuals which may be analysed to provide information on the fault/disturbance. InvSim-based fault detection is then employed, similarly providing input residuals which give further information on the fault/disturbance. The input residuals are shown to provide clearer information on the location and magnitude of an input fault than the output residuals. Additionally, they allow faults to be more clearly discriminated from environmental disturbances.
Keywords: Fault detection, inverse simulation, rover, ground robot.
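The input-residual idea can be illustrated on a one-state toy model: invert a simple first-order dynamics so that the input implied by the measured trajectory is recovered and compared with the commanded input. The model, fault size and parameters below are illustrative assumptions, not the rover model from the paper.

```python
# Minimal 1-D sketch of the InvSim idea: recover the applied input from the
# measured velocity trajectory and compare it with the commanded input.
import numpy as np

dt, c, m = 0.05, 2.0, 10.0             # time step, drag coefficient, mass (illustrative)
steps = 400
u_cmd = np.ones(steps)                 # commanded drive force
u_act = u_cmd.copy()
u_act[200:] += 0.5                     # actuator step fault at t = 10 s

# Forward simulation of v[k+1] = v[k] + dt*(u[k] - c*v[k])/m with the faulty input
v = np.zeros(steps + 1)
for k in range(steps):
    v[k + 1] = v[k] + dt * (u_act[k] - c * v[k]) / m

# Inverse simulation: the input implied by the measured trajectory
u_inv = m * (v[1:] - v[:-1]) / dt + c * v[:-1]

residual = u_inv - u_cmd               # input residual; steps cleanly at the fault
print(residual[195:205].round(3))
```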
1336 Automatic Detection of Syllable Repetition in Read Speech for Objective Assessment of Stuttered Disfluencies
Authors: K. M. Ravikumar, Balakrishna Reddy, R. Rajagopal, H. C. Nagaraj
Abstract:
Automatic detection of syllable repetition is one of the important parameters in assessing stuttered speech objectively. The existing method, which uses an artificial neural network (ANN), requires a high level of agreement as a prerequisite before attempting to train and test ANNs to separate fluent and non-fluent speech. We propose an automatic detection method for syllable repetition in read speech for objective assessment of stuttered disfluencies which uses a novel approach with four stages: segmentation, feature extraction, score matching and decision logic. Feature extraction is implemented using the well-known Mel-frequency cepstral coefficients (MFCC). Score matching is done using Dynamic Time Warping (DTW) between the syllables. The decision logic is implemented by a perceptron based on the score given by the score-matching stage. Although many methods are available for segmentation, in this paper it is done manually. The read speech of 10 adults who stutter is assessed using the proposed method and compared with human judges, and the result was 83%.
Keywords: Assessment, DTW, MFCC, objective, perceptron, stuttering.
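A minimal sketch of the score-matching stage, assuming MFCC sequences have already been extracted for two syllables; the Dynamic Time Warping cost below uses Euclidean frame distances and the standard step pattern, and the sequences are synthetic.

```python
# Minimal sketch of DTW between the MFCC sequences of two syllables.
import numpy as np

def dtw_distance(seq_a, seq_b):
    """DTW alignment cost between two feature sequences of shape (T, n_mfcc)."""
    seq_a, seq_b = np.asarray(seq_a, float), np.asarray(seq_b, float)
    n, m = len(seq_a), len(seq_b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])   # frame distance
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Two hypothetical 13-dimensional MFCC sequences; a repeated syllable should
# yield a small DTW score, which the perceptron stage then thresholds.
rng = np.random.default_rng(0)
syllable_1 = rng.normal(size=(40, 13))
syllable_2 = syllable_1[::2] + 0.05 * rng.normal(size=(20, 13))  # time-warped copy
print(dtw_distance(syllable_1, syllable_2))
```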
1335 Finding Pareto Optimal Front for the Multi-Mode Time, Cost Quality Trade-off in Project Scheduling
Authors: H. Iranmanesh, M. R. Skandari, M. Allahverdiloo
Abstract:
Project managers are ultimately responsible for the overall characteristics of a project, i.e. they should deliver the project on time with minimum cost and maximum quality. It is vital for any manager to decide on a trade-off between these conflicting objectives, and they would benefit from any scientific decision support tool. Our work tries to determine a set of optimal solutions (rather than a single optimal solution) from which the project manager can select the desired choice to run the project. In this paper, the project scheduling problem notated as (1,T|cpm,disc,mu|curve:quality,time,cost) is studied. The problem is multi-objective and the purpose is finding the Pareto optimal front of time, cost and quality of a project (curve:quality,time,cost), whose activities belong to a start-to-finish activity relationship network (cpm) and can be carried out in different possible modes (mu) which are non-continuous or discrete (disc), each mode having a different cost, time and quality. The project is constrained by a non-renewable resource, i.e. money (1,T). Because the problem is NP-hard, a meta-heuristic is developed to solve it, based on a version of the genetic algorithm specially adapted to multi-objective problems, namely FastPGA. A sample project with 30 activities is generated and then solved by the proposed method.
Keywords: FastPGA, multi-execution activity mode, Pareto optimality, project scheduling, time-cost-quality trade-off.
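The Pareto-dominance test at the heart of reporting the non-dominated front can be sketched as follows; FastPGA itself is not reproduced, and the candidate schedules are invented (time, cost, 1 - quality) triples with all objectives minimized.

```python
# Minimal sketch of Pareto-front extraction over (time, cost, 1 - quality) vectors.
import numpy as np

def pareto_front(objectives):
    """Return indices of non-dominated points (all objectives minimized)."""
    objectives = np.asarray(objectives, float)
    keep = np.ones(len(objectives), dtype=bool)
    for i in range(len(objectives)):
        if not keep[i]:
            continue
        # j dominates i if it is no worse in every objective and better in at least one
        dominated = np.all(objectives <= objectives[i], axis=1) & \
                    np.any(objectives < objectives[i], axis=1)
        if dominated.any():
            keep[i] = False
    return np.flatnonzero(keep)

# Candidate schedules as (time, cost, 1 - quality); smaller is better everywhere.
candidates = [(120, 900, 0.10), (100, 1100, 0.12), (130, 850, 0.08),
              (125, 950, 0.15), (100, 1100, 0.20)]
print(pareto_front(candidates))   # -> [0 1 2], the Pareto-optimal schedules
```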
1334 Performance of Subcarrier-OCDMA System with Complementary Subtraction Detection Technique
Authors: R. K. Z. Sahbudin, M. K. Abdullah, M. Mokhtar, S. B. A. Anas, S. Hitam
Abstract:
A subcarrier spectral amplitude coding optical code division multiple access system using the Khazani-Syed code with the complementary subtraction detection technique is proposed. The proposed system has been analyzed by taking into account the effects of phase-induced intensity noise, shot noise, thermal noise and intermodulation distortion noise. The performance of the system has been compared with spectral amplitude coding optical code division multiple access systems using the Hadamard code and the Modified Quadratic Congruence code. The analysis shows that the proposed system can eliminate the multiple access interference using the complementary subtraction detection technique, and hence improve the overall system performance.
Keywords: Complementary subtraction, Khazani-Syed code, multiple access interference, phase-induced intensity noise.
1333 FleGSens – Secure Area Monitoring Using Wireless Sensor Networks
Authors: Peter Rothenpieler, Daniela Kruger, Dennis Pfisterer, Stefan Fischer, Denise Dudek, Christian Haas, Martina Zitterbart
Abstract:
In the FleGSens project, a wireless sensor network (WSN) for the surveillance of critical areas and properties is currently being developed which incorporates mechanisms to ensure information security. The intended prototype consists of 200 sensor nodes for monitoring a 500 m long land strip. The system focuses on ensuring the integrity and authenticity of generated alarms and availability in the presence of an attacker who may even compromise a limited number of sensor nodes. In this paper, two of the main protocols developed in the project are presented: a tracking protocol to provide secure detection of trespasses within the monitored area and a protocol for secure detection of node failures. Simulation results of networks containing 200 and 2000 nodes as well as the results of the first prototype comprising a network of 16 nodes are presented. The focus of the simulations and the prototype is functional testing of the protocols and, in particular, demonstrating the impact and cost of several attacks.
Keywords: Wireless sensor network, security, trespass detection, testbed.
1332 Acoustic Detection of the Red Date Palm Weevil
Authors: Mohammed A. Al-Manie, Mohammed I. Alkanhal
Abstract:
In this paper, acoustic techniques are used to detect hidden insect infestations of date palm trees (Phoenix dactylifera L.). In particular, we use an acoustic instrument for early discovery of the presence of a destructive insect pest commonly known as the Red Date Palm Weevil (RDPW) and scientifically as Rhynchophorus ferrugineus (Olivier). This type of insect attacks date palm trees and causes irreversible damage at late stages; as a result, the infected trees must be destroyed. Therefore, early detection of its presence is a major part of controlling the spread and economic damage caused by this type of infestation. Furthermore, monitoring and early detection of the disease can assist in taking appropriate measures such as isolating or treating the infected trees. The acoustic system is evaluated in terms of its ability to discover hidden pests inside the tested tree at an early stage. Once signal acquisition is completed for a number of date palms, a signal processing technique known as time-frequency analysis is evaluated in terms of providing an estimate that can be visually used to recognize the acoustic signature of the RDPW. The instrument was tested in the laboratory first; then it was used on suspected or infested trees in the field. The final results indicate that the acoustic monitoring approach, along with signal processing techniques, is very promising for the early detection of the presence of the larvae as well as the adult pest in date palms.
Keywords: Acoustic emissions, acoustic sensors, nondestructive tests, Red Date Palm Weevil, signal processing.
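A minimal sketch of the time-frequency analysis step, assuming a hypothetical WAV recording from a trunk-mounted sensor and an illustrative frequency band of interest; this is not the instrument's actual processing chain.

```python
# Sketch: spectrogram of an acoustic recording and energy in a band of interest.
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram

fs, x = wavfile.read("palm_trunk_recording.wav")      # hypothetical recording
x = x.astype(float)
if x.ndim > 1:
    x = x[:, 0]                                       # keep one channel if stereo
f, t, Sxx = spectrogram(x, fs=fs, nperseg=1024, noverlap=512)

# Energy in an illustrative 1-4 kHz band; sustained peaks in this trace would be
# inspected visually as possible larval feeding activity.
band = (f >= 1000) & (f <= 4000)
band_energy = Sxx[band].sum(axis=0)
print(t[np.argmax(band_energy)], band_energy.max())
```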
1331 Dynamic Background Updating for Lightweight Moving Object Detection
Authors: Kelemewerk Destalem, Jungjae Cho, Jaeseong Lee, Ju H. Park, Joonhyuk Yoo
Abstract:
Background subtraction and temporal difference are often used for moving object detection in video. Both approaches are computationally simple and easy to deploy in real-time image processing. However, while background subtraction is highly sensitive to dynamic backgrounds and illumination changes, the temporal difference approach is poor at extracting the relevant pixels of a moving object and at detecting stopped or slowly moving objects in the scene. In this paper, we propose a simple moving object detection scheme based on adaptive background subtraction and temporal difference exploiting dynamic background updates. The proposed technique consists of histogram equalization and a linear combination of background and temporal difference, followed by novel frame-based and pixel-based background updating techniques. Finally, morphological operations are applied to the output images. Experimental results show that the proposed algorithm can overcome the drawbacks of both the background subtraction and temporal difference methods and can provide better performance than either method alone.
Keywords: Background subtraction, background updating, real-time and lightweight algorithm, temporal difference.
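A minimal sketch of the detection core described above (histogram equalization, a weighted combination of background subtraction and temporal difference, morphological clean-up and a conditional background update), using OpenCV in Python; the video file, weights and thresholds are illustrative assumptions, and the paper's separate frame-based and pixel-based update rules are collapsed here into a single running average.

```python
# Sketch: combined background subtraction + temporal difference with a
# conditional running-average background update.
import cv2
import numpy as np

cap = cv2.VideoCapture("scene.mp4")                   # hypothetical input video
ok, frame = cap.read()
prev = cv2.equalizeHist(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
background = prev.astype(np.float32)
alpha, w_bg, w_td, thresh = 0.02, 0.6, 0.4, 25        # illustrative parameters

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.equalizeHist(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
    diff_bg = cv2.absdiff(gray.astype(np.float32), background)   # background subtraction
    diff_td = cv2.absdiff(gray, prev).astype(np.float32)         # temporal difference
    combined = w_bg * diff_bg + w_td * diff_td                   # linear combination
    mask = (combined > thresh).astype(np.uint8) * 255
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    # update the background only where no motion was detected
    background = np.where(mask == 0,
                          (1 - alpha) * background + alpha * gray,
                          background).astype(np.float32)
    prev = gray
cap.release()
```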
1330 Detection of Diabetic Symptoms in Retina Images Using Analog Algorithms
Authors: Daniela Matei, Radu Matei
Abstract:
In this paper, a class of analog algorithms based on the concept of the Cellular Neural Network (CNN) is applied to processing operations on an important class of medical images, namely retina images, for detecting various symptoms connected with diabetic retinopathy. Specific processing tasks such as morphological operations, linear filtering and thresholding are proposed, the corresponding template values are given, and simulations on real retina images are provided.
Keywords: Diabetic retinopathy, pathology detection, cellular neural networks, analog algorithms.
1328 Skew Detection Technique for Binary Document Images based on Hough Transform
Authors: Manjunath Aradhya V N, Hemantha Kumar G, Shivakumara P
Abstract:
Document image processing has become an increasingly important technology in the automation of office documentation tasks. During document scanning, skew is inevitably introduced into the incoming document image. Since the algorithms for layout analysis and character recognition are generally very sensitive to page skew, skew detection and correction in document images are critical steps before layout analysis. In this paper, a novel skew detection method is presented for binary document images. The method considers selected characters of the text, which are subjected to thinning and the Hough transform to estimate the skew angle accurately. Several experiments have been conducted on various types of documents, such as English documents, journals, textbooks, documents in different languages and with different fonts, and documents with different resolutions, to demonstrate the robustness of the proposed method. The experimental results show that the proposed method is accurate compared to well-known existing methods.
Keywords: Optical character recognition, skew angle, thinning, Hough transform, document processing.
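A minimal sketch of Hough-based skew estimation and correction on a binarized document image; the character-selection and thinning steps from the paper are omitted, and the file name, thresholds and rotation sign convention are illustrative assumptions.

```python
# Sketch: estimate the dominant text-line angle with the Hough transform, then deskew.
import cv2
import numpy as np

img = cv2.imread("scanned_page.png", cv2.IMREAD_GRAYSCALE)   # hypothetical scan
binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]
edges = cv2.Canny(binary, 50, 150)

lines = cv2.HoughLines(edges, 1, np.pi / 180.0, threshold=200)
angles = []
if lines is not None:
    for rho, theta in lines[:, 0]:
        deg = np.degrees(theta) - 90.0         # 0 deg corresponds to a horizontal text line
        if abs(deg) < 45:                      # ignore near-vertical strokes
            angles.append(deg)

skew = float(np.median(angles)) if angles else 0.0
print("estimated skew angle:", skew)

# Deskew by rotating about the image centre.
h, w = img.shape
M = cv2.getRotationMatrix2D((w / 2, h / 2), skew, 1.0)
deskewed = cv2.warpAffine(img, M, (w, h), flags=cv2.INTER_LINEAR, borderValue=255)
cv2.imwrite("deskewed.png", deskewed)
```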
1327 Hierarchical PSO-Adaboost Based Classifiers for Fast and Robust Face Detection
Authors: Hong Pan, Yaping Zhu, Liang Zheng Xia
Abstract:
We propose a fast and robust hierarchical face detection system which finds and localizes face images with a cascade of classifiers. Three modules contribute to the efficiency of our detector. First, heterogeneous feature descriptors are exploited to enrich the feature types and feature numbers for face representation. Second, a PSO-Adaboost algorithm is proposed to efficiently select discriminative features from a large pool of available features and reinforce them into the final ensemble classifier. Compared with the standard exhaustive Adaboost for feature selection, the new PSO-Adaboost algorithm reduces the training time by up to 20 times. Finally, a three-stage hierarchical classifier framework is developed for rapid background removal. In particular, candidate face regions are detected more quickly by using a large window size in the first stage. Nonlinear SVM classifiers are used instead of decision stump functions in the last stage to remove those remaining complex non-face patterns that cannot be rejected in the previous two stages. Experimental results show our detector achieves superior performance on the CMU+MIT frontal face dataset.
Keywords: Adaboost, Face detection, Feature selection, PSO
1326 Word Base Line Detection in Handwritten Text Recognition Systems
Authors: Kamil R. Aida-zade, Jamaladdin Z. Hasanov
Abstract:
An approach is offered for more precise definition of baseline borders in handwritten cursive text, and general problems of handwritten text segmentation are also analyzed. The offered method tries to solve problems arising in handwriting recognition with a specific slant, in other words, where the letters of a word are not on the same vertical line. As informative features, some recognition systems use the ascending and descending parts of the letters, found after detection of the word's baseline. In such recognition systems, problems in baseline detection affect the quality of the recognition and decrease the recognition rate. Unlike other methods, here the borders are found from small pieces containing segmentation elements and are defined as a set of linear functions. In this method, separate borders are found for the top and bottom baselines. At the end of the paper, as a result, Azerbaijani cursive handwritten texts written in the Latin alphabet by different authors are analyzed.
Keywords: Azeri, Azerbaijani, Latin, segmentation, cursive, HWR, handwritten, recognition, baseline, ascender, descender, symbols.
1325 A Valley Detection for Path Planning
Authors: In-Geun Lim, Jin-Soo Kim, Chirl-Hwa Lee
Abstract:
This paper presents a constrained valley detection algorithm. The intent is to find valleys in a map for path planning that enables a robot or a vehicle to move safely. The constraints on a valley are a desired width and a desired depth, to ensure space for movement when a vehicle passes through the valley. We propose an algorithm to find valleys satisfying these two-dimensional constraints. The merit of our algorithm is that no pre-processing or post-processing is necessary to eliminate undesired small valleys. The algorithm is validated through simulation using digitized elevation data.
Keywords: Valley, width, depth, path planning.
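A one-dimensional sketch of constrained valley detection on an elevation profile: local minima are accepted only if they meet a desired depth (prominence) and width, mirroring the two constraints above; the profile and constraint values are invented.

```python
# Minimal 1-D sketch of constrained valley detection on an elevation profile.
import numpy as np
from scipy.signal import find_peaks

x = np.linspace(0, 10, 1000)
elevation = 50 + 5 * np.sin(1.5 * x) - 12 * np.exp(-((x - 6) ** 2) / 0.5)  # synthetic terrain

desired_depth = 4.0          # metres below the surrounding terrain
desired_width = 30           # samples across the valley opening

# Valleys of the elevation are peaks of its negation; prominence enforces the
# depth constraint and the width argument enforces the minimum opening.
valleys, props = find_peaks(-elevation, prominence=desired_depth, width=desired_width)
for i, v in enumerate(valleys):
    print(f"valley at x={x[v]:.2f}, depth={props['prominences'][i]:.1f}, "
          f"width={props['widths'][i]:.0f} samples")
```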
1324 Improved Feature Extraction Technique for Handling Occlusion in Automatic Facial Expression Recognition
Authors: Khadijat T. Bamigbade, Olufade F. W. Onifade
Abstract:
The field of automatic facial expression analysis has been an active research area in the last two decades. Its vast applicability in various domains has drawn much attention to developing techniques and datasets that mirror real-life scenarios. Many techniques, such as Local Binary Patterns and their variants (CLBP, LBP-TOP) and, lately, deep learning techniques, have been used for facial expression recognition. However, the problem of occlusion has not been sufficiently handled, making their results inapplicable in real-life situations. This paper develops a simple yet highly efficient method, tagged Local Binary Pattern-Histogram of Gradient (LBP-HOG), with occlusion detection in the face image, using a multi-class SVM for Action Unit and, in turn, expression recognition. Our method was evaluated on three publicly available datasets: JAFFE, CK and SFEW. Experimental results showed that our approach performed considerably well when compared with state-of-the-art algorithms and gave insight into occlusion detection as a key step in handling expressions in the wild.
Keywords: Automatic facial expression analysis, local binary pattern, LBP-HOG, occlusion detection.
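A minimal sketch of the basic 3x3 Local Binary Pattern operator on which LBP-HOG builds; the HOG component, occlusion detection and SVM stages are omitted, and the input image is random for illustration.

```python
# Minimal sketch of the 3x3 LBP operator and a code histogram as descriptor.
import numpy as np

def lbp_3x3(gray):
    """Return the 8-bit LBP code image for the interior of a 2-D grayscale array."""
    gray = np.asarray(gray, dtype=np.int32)
    c = gray[1:-1, 1:-1]                       # centre pixels
    # 8 neighbours in a fixed clockwise order, each contributing one bit
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = gray[1 + dy:gray.shape[0] - 1 + dy, 1 + dx:gray.shape[1] - 1 + dx]
        code |= ((neighbour >= c).astype(np.int32) << bit)
    return code

# The expression descriptor is then a (possibly block-wise) histogram of LBP codes.
face = (np.random.default_rng(1).random((64, 64)) * 255).astype(np.uint8)
codes = lbp_3x3(face)
histogram, _ = np.histogram(codes, bins=256, range=(0, 256))
print(histogram[:8])
```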
1323 Performance Degradation for the GLR Test-Statistics for Spatial Signal Detection
Authors: Olesya Bolkhovskaya, Alexander Maltsev
Abstract:
Antenna arrays are widely used in modern radio systems, in sonar and in communications. Detection of a useful signal against a noise background is based on the GLRT method. There is a large number of such problems, which depend on the a priori information that is known. In this work, in contrast to the majority of problems already solved, only differences in the spatial properties of the signal and noise are used for detection. We analyze the influence of the degree of non-coherence of the signal and of noise inhomogeneity on the performance characteristics of different GLRT statistics. The signal and noise are described by means of spatial covariance matrices C for different amounts of known information. The partially coherent signal is simulated as a plane wave with a random angle of incidence with respect to the normal. Background noise is simulated as a random process with a uniform distribution function in each element. The results of the investigation of the degradation of performance characteristics for the different cases are presented in this work.
Keywords: GLRT, Neyman-Pearson criterion, test statistics, degradation, spatial processing, multielement antenna array.
1322 A Static Android Malware Detection Based on Actual Used Permissions Combination and API Calls
Authors: Xiaoqing Wang, Junfeng Wang, Xiaolan Zhu
Abstract:
The Android operating system has been recognized by most application developers because of its good open-source nature and compatibility, which greatly enriches the categories of applications. However, it has become a target of malware attackers due to the lack of strict security supervision mechanisms, which leads to the rapid growth of malware and brings serious safety hazards to users. Therefore, it is critical to detect Android malware effectively. Generally, the permissions declared in AndroidManifest.xml can reflect the function and behavior of an application to a large extent. Since the current Android system places no restrictions on the number of permissions an application can request, developers tend to request more permissions than are actually needed in order to ensure the successful running of the application, which results in the abuse of permissions. However, some traditional detection methods only consider the requested permissions and ignore whether they are actually used, which leads to incorrect identification of some malware. Therefore, a machine learning detection method based on the actually used permission combinations and API calls is put forward in this paper. Several experiments are conducted to evaluate our methodology. The results show that it can detect unknown malware effectively with a higher true positive rate and accuracy while maintaining a low false positive rate. The AdaboostM1 (J48) classification algorithm combined with information-gain feature selection gave the best detection result, achieving an accuracy of 99.8%, a true positive rate of 99.6% and a lowest false positive rate of 0.
Keywords: Android, permissions combination, API calls, machine learning.
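A sketch of the classification stage with scikit-learn: mutual-information feature selection (standing in for information gain) over binary permission/API-call features, followed by an AdaBoost ensemble of shallow decision trees standing in for the paper's AdaboostM1 with J48; the feature matrix and labels are synthetic.

```python
# Sketch: feature selection + boosted decision trees on binary permission/API features.
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(42)
X = rng.integers(0, 2, size=(500, 120))        # 120 binary permission/API-call features
y = (X[:, 0] & X[:, 3] | X[:, 7]).astype(int)  # synthetic "malicious" label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

selector = SelectKBest(mutual_info_classif, k=20).fit(X_tr, y_tr)   # keep 20 informative features
clf = AdaBoostClassifier(DecisionTreeClassifier(max_depth=3), n_estimators=100)
clf.fit(selector.transform(X_tr), y_tr)
print(classification_report(y_te, clf.predict(selector.transform(X_te))))
```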
1321 Sequential Straightforward Clustering for Local Image Block Matching
Authors: Mohammad Akbarpour Sekeh, Mohd. Aizaini Maarof, Mohd. Foad Rohani, Malihe Motiei
Abstract:
Duplicated region detection is a technique to expose copy-paste forgeries in digital images. Copy-paste is one of the common types of forgery, in which a portion of an image is cloned in order to conceal or duplicate a particular object. In this type of forgery detection, extracting a robust block feature and the high time complexity of the matching step are the two main open problems. This paper concentrates on computational time and proposes a local block matching algorithm based on block clustering to improve the time complexity. The time complexity of the proposed algorithm is formulated, and the effects of two parameters, block size and number of clusters, on the efficiency of the algorithm are considered. The experimental results and mathematical analysis demonstrate that this algorithm is more cost-effective than lexicographic algorithms in terms of time complexity when the image is complex.
Keywords: Copy-paste forgery detection, duplicated region, time complexity, local block matching, sequential block clustering.
1320 Implementation of an Innovative Simplified Sliding Mode Observer-Based Robust Fault Detection in a Drum Boiler System
Authors: L. Khoshnevisan, H. R. Momeni, A. Ashraf-Modarres
Abstract:
One of the methods for designing a robust fault detection filter (RFDF) is based on sliding-mode theory. The main purpose of our study is to introduce an innovative simplified reference residual model generator that formulates the RFDF as a sliding-mode observer without any manipulation package or transformation matrix, through which the generated residual signals can be evaluated. The proposed design is therefore more explicit and requires fewer design parameters in comparison with approaches that require changing coordinates. To the best of the authors' knowledge, this is the first time that the sliding-mode technique is applied to detect actuator and sensor faults in a real boiler. The design procedure is applied to a drum boiler in the Synvendska Kraft AB plant in Malmo, Sweden, as a multivariable and strongly coupled system. It is demonstrated that both sensor and actuator faults can be robustly detected, and that sensor faults can be diagnosed and isolated through this method.
Keywords: Boiler, fault detection, robustness, simplified sliding-mode observer.
1319 Reduction of False Positives in Head-Shoulder Detection Based on Multi-Part Color Segmentation
Authors: Lae-Jeong Park
Abstract:
The paper presents a method that utilizes figure-ground color segmentation to extract an effective global feature for false positive reduction in head-shoulder detection. Conventional detectors that rely on local features such as HOG, chosen for real-time operation, suffer from false positives. The color cue in an input image provides salient information on a global characteristic, which is necessary to alleviate the false positives of local-feature-based detectors. An effective approach that uses figure-ground color segmentation has previously been presented in an effort to reduce false positives in object detection. In this paper, an extended version of the approach is presented that adopts separate multi-part foregrounds instead of a single prior foreground and performs figure-ground color segmentation with each of the foregrounds. The multi-part foregrounds include the parts of the head-shoulder shape and additional auxiliary foregrounds optimized by a search algorithm. A classifier is constructed with a feature that consists of the set of resulting segmentations. Experimental results show that the presented method can discriminate more false positives than the single prior shape-based classifier as well as detectors based on local features. The improvement is possible because the presented approach can reduce the false positives that have the same colors in the head and shoulder foregrounds.
Keywords: Pedestrian detection, color segmentation, false positives, feature extraction.
1318 Developing Optical Sensors with Application of Cancer Detection by Elastic Light Scattering Spectroscopy
Authors: May Fadheel Estephan, Richard Perks
Abstract:
Cancer is a serious health concern that affects millions of people worldwide. Early detection and treatment are essential for improving patient outcomes. However, current methods for cancer detection have limitations, such as low sensitivity and specificity. The aim of this study was to develop an optical sensor for cancer detection using elastic light scattering spectroscopy (ELSS). ELSS is a non-invasive optical technique that can be used to characterize the size and concentration of particles in a solution. An optical probe was fabricated with a 100-μm-diameter core and a 132-μm centre-to-centre separation. The probe was used to measure the ELSS spectra of polystyrene spheres with diameters of 2 μm, 0.8 μm and 0.413 μm. The spectra were collected using a spectrometer and a computer, and were analysed with a software program that fits the measured spectra to a theoretical model in order to determine the size and concentration of the spheres. The results showed that the optical probe was able to differentiate between the three different sizes of polystyrene spheres and to detect polystyrene spheres in suspension at concentrations as low as 0.01%. The question addressed by this study was whether ELSS could be used to detect cancer cells; the ability to differentiate between different particle sizes suggests that it could. Because ELSS can non-invasively characterize the number and size of cells in a tissue sample, this information can be employed to identify cancer cells and assess the stage of the disease. Further research is needed to evaluate the clinical performance of ELSS for cancer detection.
Keywords: Elastic Light Scattering Spectroscopy, Polystyrene spheres in suspension, optical probe, fibre optics.
1317 Anomaly Detection using Neuro Fuzzy system
Authors: Fatemeh Amiri, Caro Lucas, Nasser Yazdani
Abstract:
As network-based technologies become omnipresent, demands to secure networks and systems against threats increase. One of the effective ways to achieve higher security is through the use of intrusion detection systems (IDS), software tools that detect anomalies in a computer or network. In this paper, an IDS has been developed using an improved machine-learning-based algorithm, the Locally Linear Neuro-Fuzzy model (LLNF), for classification, whereas this model was originally used for system identification. A key technical challenge in IDS and LLNF learning is the curse of high dimensionality. Therefore, a feature selection phase is proposed which is applicable to any IDS. While investigating the use of three feature selection algorithms in this model, it is shown that adding the feature selection phase reduces the computational complexity of our model. Feature selection algorithms require the use of a feature goodness measure; the use of both a linear and a non-linear measure, the linear correlation coefficient and mutual information, is investigated.
Keywords: Anomaly detection, feature selection, Locally Linear Neuro-Fuzzy (LLNF), Mutual Information (MI), linear correlation coefficient.
1316 Design and Performance Improvement of Three-Dimensional Optical Code Division Multiple Access Networks with NAND Detection Technique
Authors: Satyasen Panda, Urmila Bhanja
Abstract:
In this paper, we present and analyze three-dimensional (3-D) wavelength/time/space code matrices for optical code division multiple access (OCDMA) networks with the NAND subtraction detection technique. The 3-D codes are constructed by integrating a two-dimensional modified quadratic congruence (MQC) code with a one-dimensional modified prime (MP) code. The respective encoders and decoders were designed using fiber Bragg gratings and optical delay lines to minimize the bit error rate (BER). The performance analysis of the 3-D OCDMA system is based on measurements of the signal-to-noise ratio (SNR), BER and eye diagram for different numbers of simultaneous users. Various types of noise and multiple access interference (MAI) effects were also considered in the analysis. The results obtained with the NAND detection technique were compared with those obtained with the OR and AND subtraction techniques. The comparison proved that the NAND detection technique with the 3-D MQC/MP code can accommodate a larger number of simultaneous users over longer fiber distances with minimum BER compared to the OR and AND subtraction techniques. The received optical power is also measured at various levels of BER to analyze the effect of attenuation.
Keywords: Cross correlation, three-dimensional optical code division multiple access, spectral amplitude coding optical code division multiple access, multiple access interference, phase-induced intensity noise, three-dimensional modified quadratic congruence/modified prime code.