Search results for: Barr features
1227 Day/Night Detector for Vehicle Tracking in Traffic Monitoring Systems
Authors: M. Taha, Hala H. Zayed, T. Nazmy, M. Khalifa
Abstract:
Recently, traffic monitoring has attracted the attention of computer vision researchers. Many algorithms have been developed to detect and track moving vehicles. In fact, vehicle tracking in daytime and in nighttime cannot be approached with the same techniques, due to the extremely different illumination conditions. Consequently, traffic-monitoring systems need a component that differentiates between daytime and nighttime scenes. In this paper, an HSV-based day/night detector is proposed for traffic monitoring scenes. The detector employs the hue histogram and the value histogram of the top half of the image frame. Experimental results show that extracting the brightness features along with the color features within the top region of the image is effective for classifying traffic scenes. In addition, the detector achieves high precision and recall rates and is feasible for real-time applications.
Keywords: Day/night detector, daytime/nighttime classification, image classification, vehicle tracking, traffic monitoring.
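As an illustration only (not the authors' implementation), the sketch below classifies a frame from the brightness cue alone: it builds a value histogram over the top half of the image, as described above, and thresholds the fraction of bright pixels. The function name and the 0.5 threshold are assumptions.

```python
# Illustrative sketch (not the authors' exact implementation): classify a traffic
# frame as day or night from a value histogram of the top half of the image.
import cv2
import numpy as np

def classify_day_night(bgr_frame, value_threshold=0.5):
    """Return 'day' or 'night' for a BGR frame; the threshold is a hypothetical value."""
    h = bgr_frame.shape[0]
    top_half = bgr_frame[: h // 2]                      # the sky region carries most cues
    hsv = cv2.cvtColor(top_half, cv2.COLOR_BGR2HSV)

    # Value (brightness) histogram, normalized to a probability distribution.
    v_hist = cv2.calcHist([hsv], [2], None, [256], [0, 256]).ravel()
    v_hist /= v_hist.sum()

    # Fraction of bright pixels in the top half; daytime skies are dominated by them.
    bright_fraction = v_hist[128:].sum()
    return "day" if bright_fraction > value_threshold else "night"
```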
1226 Function of Fractals: Application of Non-linear Geometry in Continental Architecture
Authors: Mohammadsadegh Zanganehfar
Abstract:
Since the introduction of fractal geometry in 1970, numerous efforts have been made by architects and researchers to transfer this area of mathematical knowledge into the discipline of architecture and postmodernist discourse. The discourse of complexity and architecture is one of the most significant ongoing discourses in the discipline of architecture from the 1970s until today and has generated significant styles such as deconstructivism and parametricism. During these years, several projects were designed and presented by designers and architects using fractal geometry, but due to the lack of sufficient knowledge and appropriate comprehension of the features and characteristics of this nonlinear geometry, none of the fractal-based designs has been successful and satisfying. Fractal geometry as a geometric technology has a long presence in the history of architecture. The current research attempts to identify and discover the characteristics, features, potentials and functionality of fractals beyond their aesthetic aspect by examining case studies of pre-modern architecture in Asia and investigating the function of fractals.
Keywords: Asian architecture, fractal geometry, fractal technique, geometric properties
1225 Quantum Dot Cellular Automata Based Effective Design of Combinational and Sequential Logical Structures
Authors: Hema Sandhya Jagarlamudi, Mousumi Saha, Pavan Kumar Jagarlamudi
Abstract:
The use of quantum dots is a promising emerging technology for implementing digital systems at the nano level. It offers attractive features such as faster speed, smaller size, and lower power consumption than transistor technology. In this paper, various combinational and sequential logical structures (a half adder, an SR latch and flip-flop, and a D flip-flop, preceded by NAND, NOR, XOR, and XNOR gates) are discussed based on QCA design, with a comparatively smaller number of cells and less area. By applying these layouts, the hardware requirements for a QCA design can be reduced. These structures are designed and simulated using the QCA Designer tool. By taking full advantage of the unique features of this technology, we are able to create complete circuits on a single layer of QCA. Such devices are expected to function with ultra-low power consumption and very high speeds.
Keywords: QCA, QCA Designer, Clock, Majority Gate
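QCA logic is conventionally built from three-input majority gates and inverters. As a purely illustrative Boolean-level sketch (my own, not the paper's cell-level layouts in QCA Designer), the snippet below shows how AND, OR, and a half adder follow from the majority function M(A, B, C) = AB + BC + CA.

```python
# Boolean-level sketch of the majority-gate logic underlying QCA designs
# (illustrative only; the paper's contribution is the cell-level layouts).
def majority(a, b, c):
    """Three-input majority function M(A, B, C) = AB + BC + CA."""
    return (a & b) | (b & c) | (c & a)

def and_gate(a, b):
    return majority(a, b, 0)      # fixing one input to 0 yields AND

def or_gate(a, b):
    return majority(a, b, 1)      # fixing one input to 1 yields OR

def half_adder(a, b):
    """Sum/carry expressed with majority gates and inverters:
    A XOR B = M(M(A, ~B, 0), M(~A, B, 0), 1)."""
    carry = and_gate(a, b)
    total = or_gate(and_gate(a, 1 - b), and_gate(1 - a, b))
    return total, carry

if __name__ == "__main__":
    for a in (0, 1):
        for b in (0, 1):
            print(a, b, half_adder(a, b))
```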
1224 The Impact of Scientific Content of National Geographic Channel on Drawing Style of Kindergarten Children
Authors: Ahmed Amin Mousa, Mona Yacoub
Abstract:
This study tracks children's drawing style through what they have drawn after being introduced to 16 visual contents from National Geographic Abu Dhabi Channel programs, and studies the features that changed in their drawings compared with before the visual content was applied. The researchers used the Goodenough-Harris Test to analyse the children's drawings and to extract the features which changed in their drawings before and after the visual content. The results showed a positive change, especially in the shapes of animals and their properties; children became more aware of animals' shapes. The study sample was 220 kindergarten children, divided into 130 girls and 90 boys, at the Orman Experimental Language School in Dokki, Giza, Egypt. The study results showed an improvement of 85% in the children's drawing compared with before watching the videos.
Keywords: National Geographic, children drawing, kindergarten, Goodenough-Harris Test.
1223 Low Dimensional Representation of Dorsal Hand Vein Features Using Principle Component Analysis (PCA)
Authors: M. Heenaye-Mamode Khan, R. K. Subramanian, N. A. Mamode Khan
Abstract:
The quest to provide a more secure identification system has led to a rise in the development of biometric systems. The dorsal hand vein pattern is an emerging biometric which has attracted the attention of many researchers of late. Different approaches have been used to extract the vein pattern and match them. In this work, Principal Component Analysis (PCA), a method that has been successfully applied to human faces and hand geometry, is applied to the dorsal hand vein pattern. PCA has been used to obtain eigenveins, which are a low-dimensional representation of vein pattern features. Low-cost CCD cameras were used to obtain the vein images. The vein pattern was extracted by applying morphological operations, and noise reduction filters were applied to enhance the vein patterns. The system has been successfully tested on a database of 200 images using a threshold value of 0.9. The results obtained are encouraging.
Keywords: Biometric, Dorsal vein pattern, PCA.
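A minimal sketch of the eigenvein idea is given below, using scikit-learn PCA on flattened, preprocessed vein images and cosine matching against enrolled templates. The preprocessing, the interpretation of the 0.9 threshold, and the component count are assumptions, not details taken from the paper.

```python
# Hedged sketch of the eigenvein idea: PCA on flattened, preprocessed vein images,
# then matching in the low-dimensional space. Threshold meaning and component count
# are assumptions.
import numpy as np
from sklearn.decomposition import PCA

def train_eigenveins(vein_images, n_components=20):
    """vein_images: array of shape (n_samples, height, width) of preprocessed vein patterns."""
    X = vein_images.reshape(len(vein_images), -1).astype(float)
    pca = PCA(n_components=n_components)
    projections = pca.fit_transform(X)          # low-dimensional "eigenvein" coefficients
    return pca, projections

def match(pca, enrolled_projections, probe_image, threshold=0.9):
    probe = pca.transform(probe_image.reshape(1, -1).astype(float))[0]
    # Cosine similarity against every enrolled template; accept if the best score >= threshold.
    sims = [np.dot(probe, t) / (np.linalg.norm(probe) * np.linalg.norm(t) + 1e-12)
            for t in enrolled_projections]
    best = int(np.argmax(sims))
    return (best, sims[best]) if sims[best] >= threshold else (None, max(sims))
```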
1222 Investigation of New Gait Representations for Improving Gait Recognition
Authors: Chirawat Wattanapanich, Hong Wei
Abstract:
This study presents new gait representations for improving gait recognition accuracy across gait appearances, such as normal walking, wearing a coat, and carrying a bag. Based on the Gait Energy Image (GEI), two ideas are implemented to generate new gait representations. One is to append lower knee regions to the original GEI, and the other is to apply convolutional operations to the GEI and its variants. A set of new gait representations are created and used for training multi-class Support Vector Machines (SVMs). Tests are conducted on the CASIA dataset B. Various combinations of the gait representations with different convolutional kernel sizes and different numbers of kernels used in the convolutional processes are examined. Both the entire images as features and reduced dimensional features by Principal Component Analysis (PCA) are tested in gait recognition. Interestingly, both new techniques, appending the lower knee regions to the original GEI and the convolutional GEI, can significantly contribute to the performance improvement in gait recognition. The experimental results have shown that the average recognition rate can be improved from 75.65% to 87.50%.
Keywords: Convolutional image, lower knee, gait.
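The sketch below illustrates the overall pipeline described above: convolve a GEI with a kernel, optionally compress the result with PCA, and train a multi-class SVM. The averaging kernel, kernel size, and PCA variance cutoff are placeholders rather than the combinations evaluated in the paper.

```python
# Sketch of the described pipeline: convolve a Gait Energy Image (GEI) with a kernel,
# optionally reduce dimensionality with PCA, and train a multi-class SVM.
# Kernel values/sizes and the PCA cutoff are placeholders.
import numpy as np
from scipy.ndimage import convolve
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def convolutional_gei(gei, kernel_size=3):
    kernel = np.ones((kernel_size, kernel_size)) / kernel_size**2   # simple averaging kernel
    return convolve(gei, kernel, mode="nearest")

def train_gait_classifier(geis, labels, use_pca=True):
    """geis: (n_samples, H, W) gait energy images; labels: subject IDs."""
    X = np.stack([convolutional_gei(g).ravel() for g in geis])
    steps = ([PCA(n_components=0.95)] if use_pca else []) + [SVC(kernel="linear")]
    model = make_pipeline(*steps)
    model.fit(X, labels)
    return model
```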
1221 Elastic Lateral Features of a New Glass Fiber Reinforced Gypsum Wall
Authors: Zhengyong Liu, Huiqing Ying
Abstract:
The GFRG (Glass Fiber Reinforced Gypsum) wall is a green product which allows a building to be erected quickly using a prefabricated method, but its application to high-rise residential buildings is limited by its poor lateral stiffness. This paper proposes a modification to the GFRG wall structure to increase its lateral stiffness, aiming to use it as a load-bearing wall in small high-rise residential buildings. An elastic finite element analysis shows the lateral deformation behaviour and the distributions of the axial force and the shear force. The analysis results show that the new GFRG reinforced concrete wall can be used for small high-rise residential buildings.
Keywords: GFRG wall, lateral features, elastic analysis, residential building.
1220 Performance Analysis of Brain Tumor Detection Based On Image Fusion
Authors: S. Anbumozhi, P. S. Manoharan
Abstract:
Medical image fusion plays a vital role in the medical field in diagnosing brain tumors, which can be classified as benign or malignant. It is the process of integrating multiple images of the same scene into a single fused image to reduce uncertainty and minimize redundancy while extracting all the useful information from the source images. Fuzzy logic is used to fuse two brain MRI images with different views. The fused image is more informative than the source images. Texture and wavelet features are extracted from the fused image, and a multilevel Adaptive Neuro-Fuzzy Classifier classifies the brain tumors based on the trained and tested features. The proposed method achieved 80.48% sensitivity, 99.9% specificity and 99.69% accuracy. Experimental results obtained from the fusion process show that the proposed image fusion approach performs better than conventional fusion methodologies.
Keywords: Image fusion, Fuzzy rules, Neuro-fuzzy classifier.
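As a hedged sketch of the feature-extraction stage only, the snippet below computes first-order statistics and wavelet-subband energy features from a fused slice; the fuzzy fusion rules and the adaptive neuro-fuzzy classifier of the paper are not reproduced, and the wavelet and decomposition level are assumptions.

```python
# Sketch of the feature-extraction stage only: first-order and wavelet-subband
# statistics from a fused MRI slice. Wavelet choice and level are placeholders.
import numpy as np
import pywt

def texture_wavelet_features(fused_image, wavelet="db2", level=2):
    img = fused_image.astype(float)
    # First-order texture statistics: mean, spread, and skewness of intensities.
    feats = [img.mean(), img.std(),
             ((img - img.mean()) ** 3).mean() / (img.std() ** 3 + 1e-12)]
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    for detail in coeffs[1:]:
        for band in detail:                      # horizontal, vertical, diagonal subbands
            feats += [np.abs(band).mean(), band.std(), (band ** 2).sum()]  # energy stats
    return np.array(feats)
```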
1219 Single-Camera Basketball Tracker through Pose and Semantic Feature Fusion
Authors: Adrià Arbués-Sangüesa, Coloma Ballester, Gloria Haro
Abstract:
Tracking sports players is a challenging scenario, especially in single-feed videos recorded in tight courts, where cluttering and occlusions cannot be avoided. This paper presents an analysis of several geometric and semantic visual features to detect and track basketball players. An ablation study is carried out and then used to show that a robust tracker can be built with Deep Learning features, without the need to extract contextual ones, such as proximity or color similarity, nor to apply camera stabilization techniques. The presented tracker consists of: (1) a detection step, which uses a pretrained deep learning model to estimate the players' pose, followed by (2) a tracking step, which leverages pose and semantic information from the output of a convolutional layer in a VGG network. Its performance is analyzed in terms of MOTA over a basketball dataset with more than 10k instances.
Keywords: Basketball, deep learning, feature extraction, single-camera, tracking.
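The sketch below is an illustrative frame-to-frame association step for a pose-plus-appearance tracker of this kind: keypoint distance and deep-feature dissimilarity are mixed into one cost matrix and matched with the Hungarian algorithm. The equal weighting and the assumption of L2-normalized descriptors are mine, not details from the paper.

```python
# Illustrative association step for a pose + appearance tracker: combine keypoint
# distance and deep-feature cosine dissimilarity into one cost matrix and solve the
# assignment with the Hungarian algorithm. The 0.5 weighting is an assumption.
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(prev_poses, prev_feats, cur_poses, cur_feats, w=0.5):
    """poses: (N, K, 2) keypoint arrays; feats: (N, D) L2-normalized descriptors."""
    n_prev, n_cur = len(prev_poses), len(cur_poses)
    cost = np.zeros((n_prev, n_cur))
    for i in range(n_prev):
        for j in range(n_cur):
            pose_dist = np.linalg.norm(prev_poses[i] - cur_poses[j], axis=1).mean()
            feat_dissim = 1.0 - float(prev_feats[i] @ cur_feats[j])
            cost[i, j] = w * pose_dist + (1 - w) * feat_dissim
    rows, cols = linear_sum_assignment(cost)     # minimum-cost matching of identities
    return list(zip(rows.tolist(), cols.tolist()))
```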
1218 Texture Characterization Based on a Chandrasekhar Fast Adaptive Filter
Authors: Mounir Sayadi, Farhat Fnaiech
Abstract:
In the framework of adaptive parametric modelling of images, we propose in this paper a new technique based on the Chandrasekhar fast adaptive filter for texture characterization. An Auto-Regressive (AR) linear model of texture is obtained by scanning the image row by row and modelling this data with an adaptive Chandrasekhar linear filter. The characterization efficiency of the obtained model is compared with the model adapted with the Least Mean Square (LMS) 2-D adaptive algorithm and with the co-occurrence method features. The comparison criterion is based on the computation of a characterization degree using the ratio of "between-class" variances to "within-class" variances of the estimated coefficients. Extensive experiments show that the coefficients estimated by the Chandrasekhar adaptive filter give better results in texture discrimination than those estimated by other algorithms, even in a noisy context.
Keywords: Texture analysis, statistical features, adaptive filters, Chandrasekhar algorithm.
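Read as a Fisher-type criterion (my formulation of the abstract's description, not an equation quoted from the paper), the characterization degree can be written as

\[
D \;=\; \frac{\sum_{k=1}^{K} N_k \,\lVert \bar{\theta}_k - \bar{\theta} \rVert^{2}}
             {\sum_{k=1}^{K} \sum_{i \in \mathcal{C}_k} \lVert \theta_i - \bar{\theta}_k \rVert^{2}},
\]

where \(\theta_i\) is the AR coefficient vector estimated for texture sample \(i\), \(\bar{\theta}_k\) is the mean coefficient vector of class \(\mathcal{C}_k\) containing \(N_k\) samples, and \(\bar{\theta}\) is the global mean; a larger \(D\) indicates better texture discrimination.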
1217 Comparative Study Using Weka for Red Blood Cells Classification
Authors: Jameela Ali Alkrimi, Hamid A. Jalab, Loay E. George, Abdul Rahim Ahmad, Azizah Suliman, Karim Al-Jashamy
Abstract:
Red blood cells (RBC) are the most common type of blood cell and are the most intensively studied in cell biology. A lack of RBCs is a condition in which the hemoglobin level is lower than normal and is referred to as "anemia". Abnormalities in RBCs will affect the exchange of oxygen. This paper presents a comparative study of various techniques for classifying RBCs as normal or abnormal (anemic) using WEKA, an open-source suite of machine learning algorithms for data mining applications. The algorithms tested are the Radial Basis Function neural network, the Support Vector Machine, and the K-Nearest Neighbors algorithm. Two sets of combined features were utilized for the classification of blood cell images. The first set, consisting exclusively of geometrical features, was used to identify whether the tested blood cell is spherical or non-spherical, while the second set, consisting mainly of textural features, was used to recognize the types of the spherical cells. We have provided an evaluation based on applying these classification methods to our RBC image dataset, which was obtained from Serdang Hospital, Malaysia, and measuring the accuracy of the test results. The best achieved classification rates are 97%, 98%, and 79% for Support Vector Machines, the Radial Basis Function neural network, and the K-Nearest Neighbors algorithm, respectively.
Keywords: K-Nearest Neighbors, Neural Network, Radial Basis Function, Red blood cells, Support vector machine.
1216 Automatic Staging and Subtype Determination for Non-Small Cell Lung Carcinoma Using PET Image Texture Analysis
Authors: Seyhan Karaçavuş, Bülent Yılmaz, Ömer Kayaaltı, Semra İçer, Arzu Taşdemir, Oğuzhan Ayyıldız, Kübra Eset, Eser Kaya
Abstract:
In this study, our goal was to perform tumor staging and subtype determination automatically using different texture analysis approaches for a very common cancer type, i.e., non-small cell lung carcinoma (NSCLC). In particular, we introduced a texture analysis approach, called the Laws' texture filter, to be used in this context for the first time. The 18F-FDG PET images of 42 patients with NSCLC were evaluated. The number of patients for each tumor stage, i.e., I-II, III or IV, was 14. Approximately 45% of the patients had adenocarcinoma (ADC) and approximately 55% had squamous cell carcinoma (SqCC). The MATLAB technical computing language was employed in the extraction of 51 features using first order statistics (FOS), the gray-level co-occurrence matrix (GLCM), the gray-level run-length matrix (GLRLM), and Laws' texture filters. The feature selection method employed was sequential forward selection (SFS). Selected textural features were used in automatic classification by k-nearest neighbors (k-NN) and support vector machines (SVM). In the automatic classification of tumor stage, the accuracy was approximately 59.5% with the k-NN classifier (k=3) and 69% with SVM (one-versus-one paradigm), using 5 features. In the automatic classification of tumor subtype, the accuracy was around 92.7% with the one-versus-one SVM. Texture analysis of FDG-PET images might be used, in addition to metabolic parameters, as an objective tool to assess tumor histopathological characteristics and in the automatic classification of tumor stage and subtype.
Keywords: Cancer stage, cancer cell type, non-small cell lung carcinoma, PET, texture analysis.
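The snippet below is a hedged sketch of Laws' texture-energy features on a 2-D slice (the paper works in MATLAB; this is a Python illustration). The four classical 1-D kernels are standard; the mask subset and the 15x15 energy window are illustrative choices.

```python
# Sketch of Laws' texture-energy features on a 2-D image (e.g., a PET slice).
# The subset of masks and the 15x15 energy window are illustrative choices.
import numpy as np
from scipy.ndimage import convolve, uniform_filter

L5 = np.array([1, 4, 6, 4, 1], float)        # level
E5 = np.array([-1, -2, 0, 2, 1], float)      # edge
S5 = np.array([-1, 0, 2, 0, -1], float)      # spot
R5 = np.array([1, -4, 6, -4, 1], float)      # ripple

def laws_features(image, window=15):
    img = image.astype(float)
    img -= uniform_filter(img, size=window)   # remove the local mean (illumination)
    feats = []
    for a in (L5, E5, S5, R5):
        for b in (L5, E5, S5, R5):
            mask = np.outer(a, b)             # 5x5 2-D Laws mask
            filtered = convolve(img, mask, mode="nearest")
            energy = uniform_filter(np.abs(filtered), size=window)
            feats.append(energy.mean())       # one scalar texture-energy feature per mask
    return np.array(feats)                    # 16 features in this sketch
```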
1215 Analysis of Endovascular Graft Features Affecting Endotension Following Endovascular Aneurysm Repair
Authors: Zeinab Hooshyar, Alireza Mehdizadeh
Abstract:
Endovascular aneurysm repair is a new and minimally invasive repair for patients with abdominal aortic aneurysm (AAA). This method has potential advantages over other repair methods. However, enlargement of the aneurysm in the absence of endoleak, which is known as endotension, may occur as one of the post-operative complications of this method. Typically, endotension mainly results from pressure transmitted to the aneurysm sac by the installed endovascular graft. After installation of the graft, the pressure in the aneurysm sac reduces significantly but remains non-zero. Several factors affect this transmitted pressure. In this study, the geometric features of the installed vascular graft have been considered. It is inferred that the graft neck angle and the iliac bifurcation angle are two factors which can affect the drag force on the graft and consequently the pressure transmitted to the aneurysm.
Keywords: Endovascular graft, transmitted pressure, Drag force, Finite Element Modeling, neck angle, iliac bifurcation angle.
1214 What Have Banks Done Wrong?
Authors: F. May Liou, Y. C. Edwin Tang
Abstract:
This paper aims to provide a conceptual framework to examine the competitive disadvantage of banks that suffer from poor performance. Banks generate revenues mainly from the interest rate spread on taking deposits and making loans while collecting fees in the process. To maximize firm value, banks seek loan growth and expense control while managing the risk associated with loans with respect to non-performing borrowers or a narrowing interest spread between assets and liabilities. Competitive disadvantage refers to the failure to access imitable resources and to build managing capabilities to gain sustainable return given appropriate risk management. A four-quadrant framework of organizational typology is subsequently proposed to examine the features of competitive disadvantage in the banking sector, together with a resource configuration model, extracted from CAMEL indicators, to examine the underlying features of bank failures.
Keywords: Bank failure, CAMEL, competitive disadvantage, resource configuration.
1213 Analysis of Electrocardiograph (ECG) Signal for the Detection of Abnormalities Using MATLAB
Authors: Durgesh Kumar Ojha, Monica Subashini
Abstract:
The proposed method studies and analyzes the electrocardiograph (ECG) waveform to detect abnormalities with reference to the P, Q, R and S peaks. The first phase includes the acquisition of real-time ECG data. In the next phase, signals are generated and then pre-processed. Thirdly, the procured ECG signal is subjected to feature extraction. The extracted features detect abnormal peaks present in the waveform; thus, normal and abnormal ECG signals can be differentiated based on the extracted features. The work is implemented in the most familiar multipurpose tool, MATLAB. This software efficiently uses algorithms and techniques for the detection of any abnormalities present in the ECG signal. Proper utilization of MATLAB functions (both built-in and user-defined) allows ECG signals to be processed and analyzed in real-time applications. The simulation helps in improving the accuracy, and the hardware could be built conveniently.
Keywords: ECG Waveform, Peak Detection, Arrhythmia, Matlab.
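As a hedged illustration of the peak-detection step (the paper itself works in MATLAB), the Python sketch below band-passes the signal around the QRS complex and picks R peaks with an amplitude threshold and a refractory distance; the sampling rate, filter band, and thresholds are placeholders.

```python
# Minimal R-peak detection sketch in Python (the paper itself works in MATLAB).
# Sampling rate, filter band, and thresholds are placeholder values.
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def detect_r_peaks(ecg, fs=360.0):
    # Band-pass filter to emphasize the QRS complex.
    b, a = butter(3, [5 / (fs / 2), 15 / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, ecg)
    # Peaks must exceed an adaptive amplitude threshold and be at least 200 ms apart.
    threshold = 0.5 * np.max(np.abs(filtered))
    peaks, _ = find_peaks(np.abs(filtered), height=threshold, distance=int(0.2 * fs))
    return peaks

# Heart rate (bpm) then follows from the R-R intervals:
# hr = 60.0 * fs / np.diff(peaks)
```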
1212 Eukaryotic Gene Prediction by an Investigation of Nonlinear Dynamical Modeling Techniques on EIIP Coded Sequences
Authors: Mai S. Mabrouk, Nahed H. Solouma, Abou-Bakr M. Youssef, Yasser M. Kadah
Abstract:
Many digital signal processing techniques have been used to automatically distinguish protein coding regions (exons) from non-coding regions (introns) in DNA sequences. In this work, we have characterized these sequences according to their nonlinear dynamical features, such as moment invariants, correlation dimension, and largest Lyapunov exponent estimates. We have applied our model to a number of real sequences encoded into a time series using EIIP sequence indicators. In order to discriminate between coding and non-coding DNA regions, the phase space trajectory was first reconstructed for coding and non-coding regions. Nonlinear dynamical features were extracted from those regions and used to investigate the differences between them. Our results indicate that the nonlinear dynamical characteristics yield significant differences between coding regions (CR) and non-coding regions (NCR) in DNA sequences. Finally, the classifier is tested on real genes where coding and non-coding regions are well known.
Keywords: Gene prediction, nonlinear dynamics, correlation dimension, Lyapunov exponent.
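The sketch below shows the two preparatory steps named above: mapping a DNA string to a numerical series with the commonly cited EIIP nucleotide values and reconstructing a phase-space trajectory by time-delay embedding. The embedding dimension and delay are placeholders; the correlation dimension and Lyapunov exponent estimators are not reproduced.

```python
# Sketch: EIIP encoding of a DNA sequence and a time-delay phase-space reconstruction.
# The EIIP values are the commonly cited nucleotide indicators; delay/dimension are placeholders.
import numpy as np

EIIP = {"A": 0.1260, "G": 0.0806, "T": 0.1335, "C": 0.1340}

def eiip_encode(sequence):
    """Map a DNA string to its EIIP numerical series, skipping unknown symbols."""
    return np.array([EIIP[base] for base in sequence.upper() if base in EIIP])

def delay_embed(x, dim=3, tau=1):
    """Reconstruct the phase-space trajectory by time-delay embedding."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])

# Nonlinear features (correlation dimension, largest Lyapunov exponent) would then be
# estimated from the embedded trajectory for coding vs. non-coding regions.
```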
1211 Orthosis and Finite Elements: A Study for Development of New Designs through Additive Manufacturing
Authors: M. Volpini, D. Alves, A. Horta, M. Borges, P. Reis
Abstract:
The gait pattern of people with motor limitations creates demand for auxiliary locomotion devices. These movement-assistance devices vary in shape, size and functional features, according to the desired clinical application. Among lower-limb orthoses, the ankle-foot orthosis aims to improve the ability to walk in people with different neuromuscular limitations, although it does not always meet patients' expectations regarding aesthetic and functional characteristics. The purpose of this study is to explore the possibility of using new designs in additive manufacturing to reproduce the shape and functional features of an ankle-foot orthosis in an efficient and modern way. Therefore, this work presents a study of the mechanical forces through finite element analysis of an ankle-foot orthosis, demonstrating the stress distribution on the orthopedic device in orthostatism and during the movement of the patient's walk.
Keywords: Additive manufacture, new designs, orthoses, finite elements.
1210 Time-Frequency Modeling and Analysis of Faulty Rotor
Authors: B. X. Tchomeni, A. A. Alugongo, T. B. Tengen
Abstract:
In this paper, a de Laval rotor system has been characterized by a hinge model and its transient response numerically treated for a dynamic solution. The effect of the ensuing non-linear disturbances, namely rub and a breathing crack, is numerically simulated. Subsequently, three analysis methods, Orbit Analysis, the Fast Fourier Transform (FFT), and the Wavelet Transform (WT), are employed to extract features of the vibration signal of the faulty system. An analysis of the system response orbits clearly indicates the perturbations due to the rotor-to-stator contact. The sensitivity of the WT to the variation in system speed has been investigated by the Continuous Wavelet Transform (CWT). The analysis reveals that features of crack, rub and unbalance in the vibration response can be useful for condition monitoring. The WT reveals its ability to detect nonlinear signals, and the obtained results provide a useful method for detecting machinery faults.
Keywords: Continuous wavelet, crack, discrete wavelet, high acceleration, low acceleration, nonlinear, rotor-stator, rub.
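A hedged sketch of two of the analysis steps named above, the FFT spectrum and the continuous wavelet transform of a vibration signal, is given below; the Morlet wavelet and the scale range are placeholder choices, and orbit analysis is not reproduced.

```python
# Sketch of two of the analysis steps named above: FFT spectrum and continuous
# wavelet transform (CWT) of a rotor vibration signal. Wavelet and scales are placeholders.
import numpy as np
import pywt

def fft_spectrum(signal, fs):
    spectrum = np.abs(np.fft.rfft(signal)) / len(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return freqs, spectrum          # peaks at 1x, 2x, ... running speed reveal faults

def cwt_scalogram(signal, fs, scales=np.arange(1, 128)):
    coeffs, freqs = pywt.cwt(signal, scales, "morl", sampling_period=1.0 / fs)
    return np.abs(coeffs), freqs    # time-frequency energy map for rub/crack signatures
```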
1209 An Experimental Study of a Self-Supervised Classifier Ensemble
Authors: Neamat El Gayar
Abstract:
Learning using labeled and unlabelled data has received a considerable amount of attention in the machine learning community due to its potential in reducing the need for expensive labeled data. In this work we present a new method for combining labeled and unlabeled data based on classifier ensembles. The model we propose assumes each classifier in the ensemble observes the input using a different set of features. Classifiers are initially trained using some labeled samples. The trained classifiers learn further by labeling the unknown patterns using a teaching signal that is generated from the decision of the classifier ensemble, i.e. the classifiers self-supervise each other. Experiments on a set of object images are presented. Our experiments investigate different classifier models, different fusing techniques, different training sizes and different input features. Experimental results reveal that the proposed self-supervised ensemble learning approach reduces the classification error over the single classifier and the traditional ensemble classifier approaches.
Keywords: Multiple Classifier Systems, classifier ensembles, learning using labeled and unlabelled data, K-nearest neighbor classifier, Bayes classifier.
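A minimal sketch of the self-labeling loop is given below, assuming k-NN base classifiers on different feature views and a unanimity rule as the teaching signal; the actual classifier models, fusing techniques, and stopping criteria studied in the paper may differ.

```python
# Sketch of the self-supervised ensemble idea: classifiers on different feature views
# label the unlabeled pool by agreement and are retrained on the enlarged labeled set.
# Classifier types and the unanimity rule are illustrative assumptions.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def self_supervised_ensemble(views_labeled, y, views_unlabeled, rounds=3):
    """views_*: list of arrays, one feature matrix per view (same samples, different features)."""
    clfs = [KNeighborsClassifier(n_neighbors=3).fit(Xv, y) for Xv in views_labeled]
    X_lab = [Xv.copy() for Xv in views_labeled]
    y_lab = y.copy()
    remaining = np.ones(len(views_unlabeled[0]), dtype=bool)
    for _ in range(rounds):
        idx = np.where(remaining)[0]
        if idx.size == 0:
            break
        votes = np.stack([clf.predict(Xu[idx]) for clf, Xu in zip(clfs, views_unlabeled)])
        agree = np.all(votes == votes[0], axis=0)      # teaching signal: full agreement
        if not agree.any():
            break
        chosen = idx[agree]
        for i, Xu in enumerate(views_unlabeled):
            X_lab[i] = np.vstack([X_lab[i], Xu[chosen]])
        y_lab = np.concatenate([y_lab, votes[0][agree]])
        remaining[chosen] = False
        clfs = [KNeighborsClassifier(n_neighbors=3).fit(Xv, y_lab) for Xv in X_lab]
    return clfs
```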
1208 A Development of English Pronunciation Using Principles of Phonetics for English Major Students at Loei Rajabhat University
Authors: Pongthep Bunrueng
Abstract:
This action research accentuates the outcome of a development in English pronunciation, using principles of phonetics, for English major students at Loei Rajabhat University. The research is split into 5 separate modules: 1) Organs of Speech and How to Produce Sounds, 2) Monophthongs, 3) Diphthongs, 4) Consonant Sounds, and 5) Suprasegmental Features. Each module followed a 4-step action research process: 1) Planning, 2) Acting, 3) Observing, and 4) Reflecting. The research targeted 2nd year students who were majoring in English Education at Loei Rajabhat University during the academic year of 2011. A mixed methodology employing both quantitative and qualitative research was used, which put theory into action, taking segmental features up to suprasegmental features. Multiple tools were employed, which included the following documents: pre-test and post-test papers, evaluation and assessment papers, group work assessment forms, a presentation grading form, an observation of participants form and a participant self-reflection form.
All 5 modules for the target group showed that results from the post-tests were higher than those of the pre-tests, at the 0.01 level of statistical significance. All target groups attained results ranging from low to moderate and from moderate to high performance. The participants who attained low to moderate results had to re-sit the second round. During the first development stage, participants attended classes with group participation, in which they addressed planning through mutual co-operation and sharing of responsibility. Analytic induction of the strong points of this operation illustrated that learner cognition, comprehension, application, and group practices were all present, whereas the weak results of some participants could be attributed to biological differences, differences in life and learning, or individual differences in responsiveness and self-discipline.
Participants who were required to be re-treated in Spiral 2 received the same treatment again. Test results from the 5 modules after the second treatment showed that the participants attained higher scores than in the pre-test. Their assessment and development stages also showed improved results. They showed greater confidence in participating in activities, produced higher quality work, and correctly followed instructions for each activity. Analytic induction of strong and weak points for this operation remained the same as for Spiral 1, though there were improvements to problems which existed prior to undertaking the second treatment.
Keywords: Action research, English pronunciation, phonetics, segmental features, suprasegmental features.
1207 A Context-Sensitive Algorithm for Media Similarity Search
Authors: Guang-Ho Cha
Abstract:
This paper presents a context-sensitive media similarity search algorithm. One of the central problems regarding media search is the semantic gap between the low-level features computed automatically from media data and the human interpretation of them. This is because the notion of similarity is usually based on high-level abstraction, but the low-level features do not always reflect human perception. Many media search algorithms have used the Minkowski metric to measure the similarity between image pairs. However, those functions cannot adequately capture the characteristics of the human visual system or the nonlinear relationships in the contextual information given by the images in a collection. Our search algorithm tackles this problem by employing a similarity measure and a ranking strategy that reflect the nonlinearity of human perception and the contextual information in a dataset. Similarity search in an image database based on this contextual information shows encouraging experimental results.
Keywords: Context-sensitive search, image search, media search, similarity ranking, similarity search.
1206 Opponent Color and Curvelet Transform Based Image Retrieval System Using Genetic Algorithm
Authors: Yesubai Rubavathi Charles, Ravi Ramraj
Abstract:
In order to retrieve images efficiently from a large database, a unique method integrating color and texture features using genetic programming has been proposed. An opponent color histogram, which gives shadow, shade, and light intensity invariance, is employed in the proposed framework for extracting color features. For texture feature extraction, the fast discrete curvelet transform, which captures more orientation information at different scales, is incorporated to represent curve-like edges. A recent concern in image retrieval is to reduce the semantic gap between the user's preference and low-level features. To address this concern, a genetic algorithm combined with relevance feedback is embedded to reduce the semantic gap and retrieve the user's preferred images. Extensive and comparative experiments have been conducted to evaluate the proposed framework for content-based image retrieval on two databases, i.e., COIL-100 and Corel-1000. Experimental results clearly show that the proposed system surpassed other existing systems in terms of precision and recall. The proposed work achieves the highest performance, with an average precision of 88.2% on COIL-100 and 76.3% on Corel, and an average recall of 69.9% on COIL and 76.3% on Corel. Thus, the experimental results confirm that the proposed content-based image retrieval system architecture attains a better solution for image retrieval.
Keywords: Content based image retrieval, Curvelet transform, Genetic algorithm, Opponent color histogram, Relevance feedback.
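The sketch below illustrates only the opponent-color histogram descriptor; the opponent axes are the standard transform, the bin count is a placeholder, and the curvelet, genetic-algorithm, and relevance-feedback stages of the system are not reproduced.

```python
# Sketch of the opponent-color histogram used as the color descriptor. The opponent
# axes are the standard transform; bin counts are placeholders, and the curvelet and
# GA / relevance-feedback stages of the system are not reproduced here.
import numpy as np

def opponent_color_histogram(rgb_image, bins=16):
    rgb = rgb_image.astype(float)
    R, G, B = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    O1 = (R - G) / np.sqrt(2)                 # insensitive to equal intensity offsets
    O2 = (R + G - 2 * B) / np.sqrt(6)
    O3 = (R + G + B) / np.sqrt(3)             # pure intensity channel
    hist = []
    for channel in (O1, O2, O3):
        h, _ = np.histogram(channel, bins=bins)
        hist.append(h / (h.sum() + 1e-12))
    return np.concatenate(hist)               # 3 * bins dimensional color feature
```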
1205 Reducing CO2 Emission Using EDA and Weighted Sum Model in Smart Parking System
Authors: Rahman Ali, Muhammad Sajjad, Farkhund Iqbal, Muhammad Sadiq Hassan Zada, Mohammed Hussain
Abstract:
Emission of carbon dioxide (CO2) has adversely affected the environment, and one of its major sources is transportation. In the last few decades, the increase in the mobility of people using vehicles has enormously increased the emission of CO2 into the environment. To reduce CO2 emission, a sustainable transportation system is required, in which smart parking is one of the important measures that need to be established. To contribute to the issue of reducing the amount of CO2 emission, this research proposes a smart parking system. A cloud-based solution is provided to drivers which automatically searches for and recommends the most preferred parking slots. To determine the preferences of the parking areas, the methodology exploits a number of unique parking features, which ultimately results in the selection of a parking area that leads to a minimum level of CO2 emission from the current position of the vehicle. To realize the methodology, a scenario-based implementation is considered. During the implementation, a mobile application with GPS signals, vehicles with a number of vehicle features, and a list of parking areas with parking features are used, applying sorting, multi-level filtering, exploratory data analysis (EDA), the Analytic Hierarchy Process (AHP), and the weighted sum model (WSM) to rank the parking areas and recommend the drivers the top-k most preferred parking areas. In the EDA process, "2020testcar-2020-03-03", a freely available dataset, is used to estimate the CO2 emission of a particular vehicle. To evaluate the system, the results of the proposed system are compared with the conventional approach, which reveals that the proposed methodology outperforms the conventional one in reducing the emission of CO2 into the atmosphere.
Keywords: CO2 emission, IoT, EDA, Weighted Sum Model, WSM, regression, smart parking system.
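As an illustration of the WSM ranking step described above, the sketch below ranks candidate parking areas from a criteria matrix; the criteria names, weights, and min-max normalization are assumptions rather than the exact scheme of the paper.

```python
# Sketch of the Weighted Sum Model (WSM) ranking step for candidate parking areas.
# Criteria, weights, and the normalization scheme are illustrative assumptions.
import numpy as np

def wsm_rank(parking_features, weights, minimize):
    """
    parking_features: (n_parkings, n_criteria) matrix, e.g. distance, expected CO2,
                      availability. weights: criteria weights summing to 1.
    minimize: booleans, True where a smaller criterion value is better.
    """
    X = np.asarray(parking_features, dtype=float)
    span = X.max(axis=0) - X.min(axis=0) + 1e-12
    norm = (X - X.min(axis=0)) / span          # min-max normalize each criterion
    for j, is_min in enumerate(minimize):
        if is_min:
            norm[:, j] = 1.0 - norm[:, j]      # flip criteria that should be minimized
    scores = norm @ np.asarray(weights, dtype=float)
    order = np.argsort(scores)[::-1]           # best (highest weighted score) first
    return order, scores
```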
1204 Evaluation of Easy-to-Use Energy Building Design Tools for Solar Access Analysis in Urban Contexts: Comparison of Friendly Simulation Design Tools for Architectural Practice in the Early Design Stage
Abstract:
The current building sector is focused on the reduction of energy requirements, on renewable energy generation and on the regeneration of existing urban areas. These targets need to be addressed with a systemic approach, considering several aspects simultaneously such as climate conditions, lighting conditions, solar radiation, PV potential, etc. Solar access analysis is an established method to analyze solar potential, but in recent years, simulation tools have provided more effective opportunities to perform this type of analysis, in particular in the early design stage. Nowadays, the study of solar access depends on how rapidly and easily simulation tools can be used during the design process. This study presents a comparison of three simulation tools, from the point of view of the user, with the aim of highlighting differences in their ease of use. Using a real urban context as a case study, three tools, Ecotect, Townscope and Heliodon, are tested, performing models and simulations and examining the capabilities and output results of solar access analysis. The evaluation of the ease of use of these tools is based on detected parameters and features, such as the types of simulation, requirements of input data, types of results, etc. As a result, a framework is provided in which the features and capabilities of each tool are shown. This framework shows the differences among these tools in terms of functions, features and capabilities. The aim of this study is to support users and to improve the integration of solar access simulation tools into the design process.
Keywords: Solar access analysis, energy building design tools, urban planning, solar potential.
1203 Automatic Classification of the Stand-to-Sit Phase in the TUG Test Using Machine Learning
Authors: Y. A. Adla, R. Soubra, M. Kasab, M. O. Diab, A. Chkeir
Abstract:
Over the past several years, researchers have shown great interest in assessing the mobility of elderly people to measure their functional status. Usually, such an assessment is done by conducting tests that require the subject to walk a certain distance, turn around, and finally sit back down. Consequently, this study aims to provide an at-home monitoring system to assess the patient's status continuously. Thus, we propose a technique to automatically detect when a subject sits down while walking at home. In this study, we utilized a Doppler radar system to capture the motion of the subjects. More than 20 features were extracted from the radar signals, out of which 11 were chosen based on their Intraclass Correlation Coefficient (ICC > 0.75). Accordingly, the sequential floating forward selection wrapper was applied to further narrow down the final feature vector. Finally, five features were introduced to the Linear Discriminant Analysis classifier, and an accuracy of 93.75% was achieved, as well as a precision and recall of 95% and 90%, respectively.
Keywords: Doppler radar system, stand-to-sit phase, TUG test, machine learning, classification
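The sketch below covers only the selection-and-classification stage: plain forward sequential feature selection (rather than the floating variant used in the paper) wrapped around Linear Discriminant Analysis, assuming the ICC > 0.75 pre-filtering has already been applied.

```python
# Sketch of the selection-and-classification stage: forward sequential feature selection
# followed by Linear Discriminant Analysis. The ICC-based pre-filtering (ICC > 0.75) is
# assumed to have been applied already, and plain (non-floating) selection is used.
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import cross_val_score

def train_stand_to_sit_detector(X, y, n_features=5):
    """X: (n_windows, n_radar_features) after ICC filtering; y: 1 = stand-to-sit, 0 = other."""
    lda = LinearDiscriminantAnalysis()
    sfs = SequentialFeatureSelector(lda, n_features_to_select=n_features, direction="forward")
    sfs.fit(X, y)
    X_sel = sfs.transform(X)
    scores = cross_val_score(lda, X_sel, y, cv=5, scoring="accuracy")
    return sfs, lda.fit(X_sel, y), scores.mean()
```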
1202 Differential Protection for Power Transformer Using Wavelet Transform and PNN
Authors: S. Sendilkumar, B. L. Mathur, Joseph Henry
Abstract:
A new approach for the protection of power transformers is presented using a time-frequency transform known as the wavelet transform. Different operating conditions, such as inrush, normal, load, external fault and internal fault currents, are sampled and processed to obtain wavelet coefficients. Different operating conditions produce variations in the wavelet coefficients. Features such as energy and standard deviation are calculated using Parseval's theorem. These features are used as inputs to a PNN (Probabilistic Neural Network) for fault classification. The proposed algorithm provides more accurate results even in the presence of noisy inputs and accurately identifies inrush and fault currents. The overall classification accuracy of the proposed method is found to be 96.45%. Simulation of the faults (with and without noise) was done using MATLAB and Simulink software, taking a 2-cycle data window (40 ms) containing 800 samples. The algorithm was evaluated using 10% Gaussian white noise.
Keywords: Power transformer, differential protection, internal fault, inrush current, wavelet energy, Db9.
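A hedged sketch of the feature-extraction step is given below: a current window is decomposed with the 'db9' wavelet, and the energy and standard deviation of each sub-band are collected as features; the decomposition level is a placeholder, and the PNN classification stage is not reproduced.

```python
# Sketch of the feature-extraction step: decompose a two-cycle current window with the
# 'db9' wavelet and compute per-level energy and standard deviation. The PNN classifier
# stage is not reproduced here, and the decomposition level is a placeholder.
import numpy as np
import pywt

def wavelet_features(current_window, wavelet="db9", level=4):
    """current_window: e.g. 800 samples covering two power-frequency cycles (40 ms)."""
    coeffs = pywt.wavedec(current_window, wavelet, level=level)
    features = []
    for c in coeffs:                              # [approximation, detail_level, ..., detail_1]
        features.append(np.sum(np.square(c)))     # sub-band energy (Parseval-style)
        features.append(np.std(c))                # spread of the coefficients
    return np.array(features)
```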
1201 Apoptosis Inspired Intrusion Detection System
Authors: R. Sridevi, G. Jagajothi
Abstract:
Artificial Immune Systems (AIS), inspired by the human immune system, are self-adaptive and self-learning classifiers capable of recognizing and classifying by learning, long-term memory and association. Unlike other human-system-inspired techniques, such as genetic algorithms and neural networks, AIS includes a range of algorithms modeled on different immune mechanisms of the body. In this paper, a mechanism of the human immune system based on apoptosis is adopted to build an Intrusion Detection System (IDS) to protect computer networks. Features are selected from network traffic using the Fisher score. Based on the selected features, each record/connection is classified as either an attack or normal traffic by the proposed methodology. Simulation results demonstrate that the proposed apoptosis-based AIS performs better than existing AIS for intrusion detection.
Keywords: Apoptosis, Artificial Immune System (AIS), Fisher Score, KDD dataset, Network intrusion detection.
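The sketch below shows the Fisher-score feature ranking step on labeled traffic records; the number of features to keep is a placeholder, and the apoptosis-inspired AIS classifier itself is not reproduced.

```python
# Sketch of Fisher-score feature ranking for labeled network-traffic records. The number
# of features to keep is a placeholder; the apoptosis-inspired AIS classifier itself is
# not reproduced here.
import numpy as np

def fisher_scores(X, y):
    """X: (n_records, n_features); y: class labels (e.g. attack vs. normal)."""
    classes = np.unique(y)
    overall_mean = X.mean(axis=0)
    numerator = np.zeros(X.shape[1])
    denominator = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        numerator += len(Xc) * (Xc.mean(axis=0) - overall_mean) ** 2   # between-class scatter
        denominator += len(Xc) * Xc.var(axis=0)                         # within-class scatter
    return numerator / (denominator + 1e-12)      # higher score = more discriminative feature

def select_top_features(X, y, k=10):
    order = np.argsort(fisher_scores(X, y))[::-1]
    return order[:k]
```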
1200 An Improved Face Recognition Algorithm Using Histogram-Based Features in Spatial and Frequency Domains
Authors: Qiu Chen, Koji Kotani, Feifei Lee, Tadahiro Ohmi
Abstract:
In this paper, we propose an improved face recognition algorithm using histogram-based features in the spatial and frequency domains. To add spatial information of the face and improve recognition performance, a region-division (RD) method is utilized. The facial area is first divided into several regions, then feature vectors of each facial part are generated by a Binary Vector Quantization (BVQ) histogram using DCT coefficients in the low-frequency domain, as well as a Local Binary Pattern (LBP) histogram in the spatial domain. Recognition results with different regions are first obtained separately and then fused by weighted averaging. The publicly available ORL database, which consists of 40 subjects with 10 images per subject containing variations in lighting, pose, and expression, is used for the evaluation of our proposed algorithm. It is demonstrated that face recognition using the RD method can achieve a much higher recognition rate.
Keywords: Face recognition, Binary vector quantization (BVQ), Local Binary Patterns (LBP), DCT coefficients.
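As a hedged sketch of the per-region features only, the snippet below computes an LBP histogram and a low-frequency DCT block for each facial region; the region grid, LBP parameters, and DCT block size are assumptions, and the BVQ codebook step and the weighted fusion of region-wise results are not reproduced.

```python
# Sketch of per-region histogram features: an LBP histogram in the spatial domain plus
# low-frequency DCT coefficients. The BVQ codebook step and the weighted fusion of
# region-wise results used in the paper are not reproduced here.
import numpy as np
from scipy.fft import dctn
from skimage.feature import local_binary_pattern

def region_features(face, rows=4, cols=2, lbp_points=8, lbp_radius=1, dct_block=8):
    """face: 2-D grayscale face image (e.g. uint8)."""
    H, W = face.shape
    feats = []
    for r in range(rows):
        for c in range(cols):
            region = face[r * H // rows:(r + 1) * H // rows,
                          c * W // cols:(c + 1) * W // cols]
            # Spatial-domain texture: uniform LBP histogram of the region.
            lbp = local_binary_pattern(region, lbp_points, lbp_radius, method="uniform")
            hist, _ = np.histogram(lbp, bins=lbp_points + 2, range=(0, lbp_points + 2))
            # Frequency-domain cue: top-left (low-frequency) block of the 2-D DCT.
            low_freq = dctn(region.astype(float), norm="ortho")[:dct_block, :dct_block].ravel()
            feats.append(np.concatenate([hist / (hist.sum() + 1e-12), low_freq]))
    return np.concatenate(feats)
```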
1199 Signature Recognition and Verification using Hybrid Features and Clustered Artificial Neural Network(ANN)s
Authors: Manasjyoti Bhuyan, Kandarpa Kumar Sarma, Hirendra Das
Abstract:
A signature represents an individual characteristic of a person which can be used for his/her validation. For such an application, proper modeling is essential. Here we propose an offline signature recognition and verification scheme which is based on extracting several features, including one hybrid set, from the input signature and comparing them with the already trained forms. Feature points are classified using statistical parameters like mean and variance. The scanned signature is normalized in slant using a very simple algorithm, with the intention of making the system robust, which is found to be very helpful. The slant correction is further aided by the use of an Artificial Neural Network (ANN). The suggested scheme discriminates between original and forged signatures for both simple and random forgeries. The primary objective is to reduce the two crucial parameters, the False Acceptance Rate (FAR) and the False Rejection Rate (FRR), with less training time, with the intention of making the system dynamic using a cluster of ANNs forming a multiple classifier system.
Keywords: offline, algorithm, FAR, FRR, ANN.
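The two error rates named above have standard definitions; the small helper below (my own illustration, assuming similarity scores where higher means more genuine) makes the threshold trade-off explicit.

```python
# Small illustration of the two error rates named above. 'scores' are similarity scores
# from the verifier; 'genuine' marks genuine (1) vs. forged (0) attempts. The score
# convention and threshold sweep are illustrative assumptions.
import numpy as np

def far_frr(scores, genuine, threshold):
    scores = np.asarray(scores, dtype=float)
    genuine = np.asarray(genuine).astype(bool)
    accepted = scores >= threshold
    far = np.mean(accepted[~genuine]) if (~genuine).any() else 0.0   # forgeries accepted
    frr = np.mean(~accepted[genuine]) if genuine.any() else 0.0      # genuines rejected
    return far, frr
```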
1198 Validation of an EEG Classification Procedure Aimed at Physiological Interpretation
Authors: M. Guillard, M. Philippe, F. Laurent, J. Martinerie, J. P. Lachaux, G. Florence
Abstract:
One approach to assessing the neural networks underlying cognitive processes is to study electroencephalography (EEG). It is relevant to detect various mental states and characterize the physiological changes that help to discriminate two situations. That is why an EEG (amplitude, synchrony) classification procedure is described and validated. The two situations are "eyes closed" and "eyes opened", in order to study the "alpha blocking response" phenomenon in the occipital area. The correct classification rate between the two situations is 92.1% (SD = 3.5%). Part of the amplitude features that help to discriminate the two situations are located in the occipital regions, which validates the localization method. Moreover, amplitude features in frontal areas, "short-distance" synchrony in frontal areas, and "long-distance" synchrony between frontal and occipital areas also help to discriminate between the two situations. This procedure will be used for mental fatigue detection.
Keywords: Classification, EEG Synchrony, alpha, resting situation.
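A hedged sketch of one amplitude feature relevant to the alpha blocking response is given below: the relative alpha-band (8-12 Hz) power of an occipital channel estimated with Welch's method; the band edges, epoch length, and sampling rate are assumptions.

```python
# Sketch of one amplitude feature relevant to the alpha blocking response: relative
# alpha-band (8-12 Hz) power of an occipital EEG channel via Welch's method.
# Band edges, epoch length, and sampling rate are illustrative assumptions.
import numpy as np
from scipy.signal import welch

def relative_alpha_power(eeg_epoch, fs=256.0):
    freqs, psd = welch(eeg_epoch, fs=fs, nperseg=int(2 * fs))
    alpha = (freqs >= 8) & (freqs <= 12)
    broad = (freqs >= 1) & (freqs <= 40)
    return psd[alpha].sum() / psd[broad].sum()   # rises when the eyes are closed
```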