Search results for: binary images
1097 Study of Remote Sensing and Satellite Images Ability in Preparing Agricultural Land Use Map (ALUM)
Authors: Ali Gholami
Abstract:
In this research, the preparation of a land use map from LISS III scanner data of the IRS satellite is studied for the Aghche region in Isfahan province. IRS satellite images from August 2008 were used, and the land uses in the region, including rangelands, irrigated farming, dry farming, gardens, and urban areas, were separated and identified. GPS data and the Erdas Imagine software were used, and three classification methods, Maximum Likelihood, Mahalanobis Distance, and Minimum Distance, were analyzed. For each method, the error matrix and Kappa index were calculated, yielding accuracies of 53.13, 56.64, and 48.44 percent, respectively. Given the low accuracy of these methods in separating land uses, visual interpretation of the imagery was used instead. Finally, 150 randomly selected points were visited in the field and no errors were observed, showing that the map prepared by visual interpretation has high accuracy. Although errors due to visual interpretation and geometric correction may occur, the map meets the desired accuracy of more than 85 percent.
Keywords: Land use map, Aghche Region, Erdas Imagine, satellite images.
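The accuracy assessment described here (error matrix and Kappa index per classifier) can be reproduced with a short script. The sketch below, using NumPy and a hypothetical 5x5 confusion matrix for the five land use classes, shows how overall accuracy and Cohen's Kappa are typically computed; the numbers are illustrative, not the study's data.

```python
import numpy as np

def accuracy_and_kappa(cm):
    """Overall accuracy and Cohen's Kappa from a confusion matrix (rows = reference, cols = predicted)."""
    cm = np.asarray(cm, dtype=float)
    total = cm.sum()
    observed = np.trace(cm) / total                                  # overall accuracy p_o
    expected = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / total**2    # chance agreement p_e
    kappa = (observed - expected) / (1.0 - expected)
    return observed, kappa

# Hypothetical error matrix for rangeland, irrigated farming, dry farming, gardens, urban
cm = [[30, 5, 8, 2, 1],
      [6, 25, 7, 3, 2],
      [9, 6, 22, 4, 1],
      [2, 3, 5, 18, 3],
      [1, 2, 2, 3, 20]]
acc, kappa = accuracy_and_kappa(cm)
print(f"overall accuracy = {acc:.4f}, kappa = {kappa:.4f}")
```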
1096 Real-Time Specific Weed Recognition System Using Histogram Analysis
Authors: Irshad Ahmad, Abdul Muhamin Naeem, Muhammad Islam
Abstract:
Information on weed distribution within the field is necessary to implement spatially variable herbicide application. Since hand labor is costly, an automated weed control system could be feasible. This paper deals with the development of an algorithm for a real-time specific weed recognition system based on histogram analysis of an image, which is used for weed classification. The algorithm is specifically developed to classify images into broad and narrow classes for real-time selective herbicide application. The developed system has been tested on weeds in the lab, and the tests have shown the system to be very effective in weed identification. Furthermore, the results show very reliable performance on images of weeds taken under varying field conditions. The analysis of the results shows over 95 percent classification accuracy over 140 sample images (broad and narrow) with 70 samples from each category of weeds.
Keywords: Image processing, real-time recognition, weed detection.
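As a rough illustration of histogram-based broad/narrow classification (not the authors' exact algorithm), the sketch below segments green vegetation, computes a column-wise histogram of vegetation pixels, and uses the spread of that histogram as a simple broad-versus-narrow cue; the vegetation and spread thresholds are placeholders.

```python
import cv2
import numpy as np

def classify_weed(image_bgr, spread_threshold=0.35):
    """Toy broad/narrow classifier: wide coverage of vegetation columns suggests broad leaves."""
    b, g, r = cv2.split(image_bgr.astype(np.float32))
    excess_green = 2 * g - r - b                        # common vegetation index
    veg_mask = (excess_green > 20).astype(np.uint8)
    column_hist = veg_mask.sum(axis=0)                  # vegetation pixels per image column
    if column_hist.max() == 0:
        return "narrow"                                 # no vegetation found
    occupied = column_hist > 0.1 * column_hist.max()
    spread = occupied.mean()                            # fraction of columns covered by vegetation
    return "broad" if spread > spread_threshold else "narrow"

# Example: img = cv2.imread("weed.jpg"); print(classify_weed(img))
```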
1095 Multiplayer RC-Car Driving System in a Collaborative Augmented Reality Environment
Authors: Kikuo Asai, Yuji Sugimoto
Abstract:
We developed a prototype system for multiplayer RC-car driving in a collaborative augmented reality (AR) environment. The tele-existence environment is constructed by superimposing digital data onto images captured by a camera on an RC-car, enabling players to experience an augmented coexistence of the digital content and the real world. Marker-based tracking was used for estimating the position and orientation of the camera. Several RC-cars can be operated in a field where square markers are arranged. The video images captured by the camera are transmitted to a PC for visual tracking. The RC-cars are also tracked by an infrared camera attached to the ceiling, which reduces the instability of the visual tracking. Multimedia data such as text and graphics are overlaid onto the video images in a geometrically correct manner. The prototype system allows a tele-existence sensation to be augmented in a collaborative AR environment.
Keywords: Multiplayer, RC-car, Collaborative Environment, Augmented Reality.
1094 Dempster-Shafer Evidence Theory for Image Segmentation: Application in Cells Images
Authors: S. Ben Chaabane, M. Sayadi, F. Fnaiech, E. Brassart
Abstract:
In this paper we propose a new knowledge model using Dempster-Shafer evidence theory for image segmentation and fusion. The proposed method is composed essentially of two steps. First, the mass distributions of Dempster-Shafer theory are obtained from the membership degrees of each pixel over the three image components (R, G and B). Each membership degree is determined by applying Fuzzy C-Means (FCM) clustering to the gray levels of the three images. Second, the fusion process consists of defining three frames of discernment associated with the three images to be fused, and then combining them to form a new frame of discernment. The strategy used to define mass distributions in the combined framework is discussed in detail. The proposed fusion method is illustrated in the context of image segmentation. Experimental investigations and comparative studies with previous methods are carried out, showing the robustness and superiority of the proposed method in terms of image segmentation.
Keywords: Fuzzy C-means, color image, data fusion, Dempster-Shafer's evidence theory.
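To make the fusion step concrete, the sketch below combines two mass functions with Dempster's rule over a small frame of discernment; obtaining the masses from FCM membership degrees, as described above, is assumed to have been done already.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts mapping frozenset hypotheses to masses) with Dempster's rule."""
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb            # mass assigned to incompatible hypotheses
    if conflict >= 1.0:
        raise ValueError("total conflict, masses cannot be combined")
    return {h: v / (1.0 - conflict) for h, v in combined.items()}

# Two sources giving evidence about pixel classes {c1, c2}
m_red   = {frozenset({"c1"}): 0.6, frozenset({"c2"}): 0.1, frozenset({"c1", "c2"}): 0.3}
m_green = {frozenset({"c1"}): 0.5, frozenset({"c2"}): 0.2, frozenset({"c1", "c2"}): 0.3}
print(dempster_combine(m_red, m_green))
```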
1093 Piezoelectric Polarization Effect on Debye Frequency and Temperature in Nitride Wurtzites
Authors: Bijay Kumar Sahoo, Ashok Kumar Srivastav
Abstract:
We have investigated the effect of the piezoelectric (PZ) polarization property in binary as well as ternary wurtzite nitrides. It is found that, in the presence of the PZ polarization property, the phonon group velocity is modified. The change in phonon group velocity due to the PZ polarization effect depends directly on the piezoelectric tensor value. Using the different piezoelectric tensor values recommended by different workers in the literature, the percent change in phonon group velocities has been estimated. The Debye temperatures and frequencies of the binary nitrides GaN, AlN and InN are also calculated using the modified group velocities. For the ternary nitrides AlxGa(1-x)N, InxGa(1-x)N and InxAl(1-x)N, the phonon group velocities have been calculated as a function of composition. A small positive bowing is observed in the phonon group velocities of the ternary alloys. Percent variations in phonon group velocities are also calculated for a straightforward comparison among the ternary nitrides. The results are expected to show a change in phonon relaxation rates and thermal conductivity of III-nitrides when the piezoelectric polarization property is taken into consideration.
Keywords: Wurtzite nitrides, piezoelectric polarization, phonon group velocity, Debye frequency and Debye temperature.
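For reference, the Debye frequency and temperature follow from an averaged phonon (sound) velocity and the atomic number density through the standard Debye relations, which are assumed here; the sketch below evaluates them for illustrative velocity and density values, not the quantities derived in the paper.

```python
import math

HBAR = 1.054571817e-34   # J*s
KB   = 1.380649e-23      # J/K

def debye_frequency_and_temperature(v_sound, n_density):
    """Debye cutoff frequency w_D = v*(6*pi^2*n)^(1/3) and temperature theta_D = hbar*w_D/kB."""
    omega_d = v_sound * (6.0 * math.pi**2 * n_density) ** (1.0 / 3.0)
    theta_d = HBAR * omega_d / KB
    return omega_d, theta_d

# Placeholder values with orders of magnitude typical for nitride semiconductors (assumed)
v = 8.0e3        # averaged phonon group velocity, m/s
n = 8.8e28       # atomic number density, atoms/m^3
omega_d, theta_d = debye_frequency_and_temperature(v, n)
print(f"Debye frequency ~ {omega_d:.3e} rad/s, Debye temperature ~ {theta_d:.0f} K")
```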
1092 Better Perception of Low Resolution Images Using Wavelet Interpolation Techniques
Authors: Tarun Gulati, Kapil Gupta, Dushyant Gupta
Abstract:
High resolution images are always desired as they contain more information and can better represent the original data. Interpolation is therefore used to convert a low resolution image into a high resolution one. The quality of such a high resolution image depends on the interpolation function and is assessed in terms of the sharpness of the image. This paper focuses on wavelet-based interpolation techniques, in which an input image is divided into subbands. Each subband is processed separately, and the processed subbands are finally combined to obtain the super resolution image.
Keywords: SWT, DWTSR, DWTSWT, DWCWT.
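A minimal sketch of the wavelet-domain interpolation idea is given below using PyWavelets and OpenCV: the low resolution image is treated as the approximation subband, detail subbands are estimated from a single-level DWT of the input and resized to match, and the inverse transform yields an image at roughly twice the resolution. This is a generic DWT-based scheme under stated assumptions, not the exact SWT/DWT variants compared in the paper.

```python
import cv2
import numpy as np
import pywt

def dwt_super_resolution(lr_image, wavelet="db4"):
    """Estimate detail subbands from the LR image, then reconstruct at roughly 2x size (generic DWT-SR sketch)."""
    lr = lr_image.astype(np.float32)
    # Detail subbands of a single-level DWT are about half the LR size
    _, (lh, hl, hh) = pywt.dwt2(lr, wavelet)
    target = (lr.shape[1], lr.shape[0])            # cv2.resize takes (width, height)
    lh, hl, hh = (cv2.resize(b, target, interpolation=cv2.INTER_LINEAR) for b in (lh, hl, hh))
    # Use the LR image itself as the approximation subband and invert the transform
    return pywt.idwt2((lr, (lh, hl, hh)), wavelet)

# Example: hr_estimate = dwt_super_resolution(low_res_gray_array)
```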
1091 Algorithm for Bleeding Determination Based On Object Recognition and Local Color Features in Capsule Endoscopy
Authors: Yong-Gyu Lee, Jin Hee Park, Youngdae Seo, Gilwon Yoon
Abstract:
Automatic determination of blood in dim or noisy capsule endoscopic images is difficult due to the low S/N ratio, and analysis of such images may be inaccurate because of external disturbances. Therefore, we propose detection methods that do not depend only on color bands. In locating bleeding regions, the identification of object outlines in the frame and the features of their local colors are taken into consideration. The results showed that the capability of detecting bleeding was much improved.
Keywords: Endoscopy, object recognition, bleeding, image processing, RGB.
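A simplified sketch of combining object outlines with local color features is shown below (OpenCV-based, not the authors' exact pipeline): candidate regions are obtained from edges and contours, and each region's mean color is tested with a redness ratio rather than a fixed red-channel threshold; all thresholds are placeholders.

```python
import cv2
import numpy as np

def find_bleeding_candidates(image_bgr, redness_ratio=1.8, min_area=50):
    """Locate contour regions whose local color is strongly red relative to green/blue (toy criterion)."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 40, 120)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    candidates = []
    for c in contours:
        if cv2.contourArea(c) < min_area:
            continue
        mask = np.zeros(gray.shape, np.uint8)
        cv2.drawContours(mask, [c], -1, 255, thickness=-1)   # filled region mask
        b, g, r = cv2.mean(image_bgr, mask=mask)[:3]
        if r > redness_ratio * max(g, b, 1.0):               # local color cue
            candidates.append(c)
    return candidates
```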
1090 GPU Based High Speed Error Protection for Watermarked Medical Image Transmission
Authors: Md Shohidul Islam, Jongmyon Kim, Ui-pil Chong
Abstract:
Medical images are an integral part of e-health care and e-diagnosis systems. Medical image watermarking is widely used to protect patients’ information from malicious alteration and manipulation. The watermarked medical images are transmitted over the internet among patients and primary and referring physicians. The images are highly prone to corruption in the wireless transmission medium due to noise, deflection, and refraction. Distortion in the received images leads to faulty watermark detection and inappropriate disease diagnosis. To address the issue, this paper incorporates an error correction code (ECC), an (8, 4) Hamming code, into an existing watermarking system. In addition, we implement the computationally complex ECC on a graphics processing unit (GPU) to accelerate it and support real-time requirements. Experimental results show that the GPU achieves considerable speedup over the sequential CPU implementation, while maintaining 100% ECC efficiency.
Keywords: Medical Image Watermarking (MIW), e-health system, error correction, Hamming code, GPU.
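A plain-Python sketch of (8, 4) extended Hamming encoding and syndrome checking is given below (the GPU parallelization itself is not shown); bit-ordering conventions vary, so treat this layout as one common choice rather than the paper's exact mapping.

```python
def hamming84_encode(nibble):
    """Encode 4 data bits [d1, d2, d3, d4] into an 8-bit extended Hamming codeword."""
    d1, d2, d3, d4 = nibble
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    code7 = [p1, p2, d1, p3, d2, d3, d4]        # classic (7,4) layout
    p0 = 0
    for bit in code7:
        p0 ^= bit                               # overall parity bit extends it to (8,4)
    return [p0] + code7

def hamming84_syndrome(code8):
    """Return the (7,4) syndrome (non-zero value = 1-based flipped position) and overall-parity check."""
    s = 0
    for pos, bit in enumerate(code8[1:], start=1):   # syndrome = XOR of positions of set bits
        if bit:
            s ^= pos
    overall_ok = (sum(code8) % 2 == 0)
    return s, overall_ok

word = hamming84_encode([1, 0, 1, 1])
word[3] ^= 1                                    # flip one bit in the channel
print(hamming84_syndrome(word))                 # syndrome points at position 3 of the (7,4) part
```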
1089 The Implementation of the Javanese Lettered-Manuscript Image Preprocessing Stage Model on the Batak Lettered-Manuscript Image
Authors: Anastasia Rita Widiarti, Agus Harjoko, Marsono, Sri Hartati
Abstract:
This paper presents the results of a study to test whether the preprocessing model widely applied to Javanese-character manuscript images can also be applied to segment Batak-character manuscripts. The process begins by converting the input image into a binary image. After the binary image is cleaned of noise, line segmentation using a projection profile is conducted. If the projection histogram is unclear, a smoothing step is performed before the line segment indexes are produced. For each line image produced, character segmentation within the line is then applied, taking into account the connectivity between the pixels making up the letters so that no characters are truncated. Prototype testing of the preprocessing system on pieces of the Pustaka Batak Podani Ma AjiMamisinon manuscript gave accuracy values ranging from 65% to 87.68% with a confidence level of 95%. These values indicate that the preprocessing model for Javanese-character manuscript images can also be applied to images of Batak-character manuscripts.
Keywords: Connected component, preprocessing manuscript image, projection profiles.
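The binarisation and projection-profile steps can be illustrated with a few lines of OpenCV/NumPy code; the smoothing window and the ink threshold on the row profile below are placeholders, not the values used in the prototype.

```python
import cv2
import numpy as np

def segment_lines(image_gray, smooth_window=9, min_ink_fraction=0.01):
    """Binarise, compute a horizontal projection profile, and return (start, end) rows of text lines."""
    _, binary = cv2.threshold(image_gray, 0, 1, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    profile = binary.sum(axis=1).astype(float)               # ink pixels per row
    kernel = np.ones(smooth_window) / smooth_window
    profile = np.convolve(profile, kernel, mode="same")      # smooth an unclear histogram
    is_text = profile > min_ink_fraction * binary.shape[1]
    lines, start = [], None
    for row, flag in enumerate(is_text):
        if flag and start is None:
            start = row
        elif not flag and start is not None:
            lines.append((start, row))
            start = None
    if start is not None:
        lines.append((start, len(is_text)))
    return lines
```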
1088 Mixed Micellization Study of Adiphenine Hydrochloride with 1-Decyl-3-Methylimidazolium Chloride
Authors: Abbul B. Khan, Neeraj Dohare, Rajan Patel
Abstract:
The mixed micellization of adiphenine hydrochloride (ADP) with 1-decyl-3-methylimidazolium chloride (C10mim.Cl) was investigated at different mole fractions and temperatures by surface tension measurements. The synergistic behavior (i.e., non-ideal behavior) for binary mixtures was explained by the deviation of the critical micelle concentration (cmc) from the ideal critical micelle concentration (cmc*), of the micellar mole fraction (Xim) from the ideal micellar mole fraction (Xiideal), and by the values of the interaction parameter (β) and activity coefficients (fi) (for both mixed micelles and the mixed monolayer). The excess free energy (ΔGex) for the ADP-C10mim.Cl binary mixtures explains the stability of the mixed micelles in comparison to micelles of pure ADP and C10mim.Cl. Interfacial parameters, i.e., Gibbs surface excess (Γmax), minimum head group area at the air/water interface (Amin), and free energy of micellization (ΔG0m), were also evaluated for the systems.
Keywords: Adiphenine hydrochloride, Critical micelle concentration, Interaction parameter, Activity coefficient.
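For readers unfamiliar with the regular solution treatment behind these quantities, the sketch below evaluates the ideal mixed cmc (Clint equation), the micellar mole fraction X1 (Rubingh's implicit relation), and the interaction parameter β and activity coefficients; the equations are the standard Clint/Rubingh forms assumed here rather than reproduced from the paper, and the input cmc values are placeholders.

```python
import numpy as np
from scipy.optimize import brentq

def clint_ideal_cmc(alpha1, cmc1, cmc2):
    """Ideal mixed cmc from the Clint equation: 1/cmc* = a1/cmc1 + (1 - a1)/cmc2."""
    return 1.0 / (alpha1 / cmc1 + (1.0 - alpha1) / cmc2)

def rubingh(alpha1, cmc1, cmc2, cmc_mix):
    """Micellar mole fraction X1, interaction parameter beta, activity coefficients (regular solution theory)."""
    def g(x1):
        return (x1**2 * np.log(alpha1 * cmc_mix / (x1 * cmc1))
                - (1 - x1)**2 * np.log((1 - alpha1) * cmc_mix / ((1 - x1) * cmc2)))
    x1 = brentq(g, 1e-6, 1 - 1e-6)                       # solve the implicit Rubingh relation
    beta = np.log(alpha1 * cmc_mix / (x1 * cmc1)) / (1 - x1)**2
    f1, f2 = np.exp(beta * (1 - x1)**2), np.exp(beta * x1**2)
    return x1, beta, (f1, f2)

# Placeholder cmc values (mol/L) for an equimolar bulk composition
alpha1, cmc1, cmc2, cmc_mix = 0.5, 4.0e-3, 40.0e-3, 5.0e-3
print(clint_ideal_cmc(alpha1, cmc1, cmc2))
print(rubingh(alpha1, cmc1, cmc2, cmc_mix))
```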
1087 3D Human Reconstruction over Cloud Based Image Data via AI and Machine Learning
Authors: Kaushik Sathupadi, Sandesh Achar
Abstract:
Human action recognition (HAR) modeling is a critical task in machine learning. These systems require better techniques for recognizing body parts and selecting optimal features based on vision sensors to identify complex action patterns efficiently. There are still considerable challenges in moving from images to videos, such as brightness changes, motion variation, and random clutter. This paper proposes a robust approach for classifying human actions over cloud-based image data. First, we apply pre-processing followed by human and outer shape detection techniques. Next, we extract valuable information in terms of cues. We extract two distinct features: fuzzy local binary patterns and a sequence representation. Then, we apply a greedy randomized adaptive search procedure for data optimization and dimension reduction, and for classification we use a random forest. We tested our model on two benchmark datasets, AAMAZ and the KTH Multi-view Football dataset. Our HAR framework significantly outperforms the other state-of-the-art approaches and achieves better recognition rates of 91% and 89.6% over the AAMAZ and KTH Multi-view Football datasets, respectively.
Keywords: Computer vision, human motion analysis, random forest, machine learning.
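The feature extraction and classification stages can be prototyped along the following lines with scikit-image and scikit-learn; plain (rather than fuzzy) local binary patterns and a histogram feature are used here as stand-ins for the paper's fuzzy LBP and sequence representation.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.ensemble import RandomForestClassifier

def lbp_histogram(gray_image, radius=2, n_points=16):
    """Uniform LBP histogram as a compact texture descriptor for one frame."""
    lbp = local_binary_pattern(gray_image, n_points, radius, method="uniform")
    hist, _ = np.histogram(lbp, bins=n_points + 2, range=(0, n_points + 2), density=True)
    return hist

def train_action_classifier(frames, labels):
    """frames: list of grayscale 2-D arrays, labels: action classes."""
    features = np.array([lbp_histogram(f) for f in frames])
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(features, labels)
    return clf
```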
1086 A New Ridge Orientation based Method of Computation for Feature Extraction from Fingerprint Images
Authors: Jayadevan R., Jayant V. Kulkarni, Suresh N. Mali, Hemant K. Abhyankar
Abstract:
An important step in studying the statistics of fingerprint minutia features is to reliably extract minutia features from fingerprint images. A new, reliable method of computation for minutiae feature extraction from fingerprint images is presented. A fingerprint image is treated as a textured image, and an orientation flow field of the ridges is computed for it. To accurately locate ridges, a new ridge-orientation-based computation method is proposed. After ridge segmentation, a new method of computation is proposed for smoothing the ridges. The ridge skeleton image is obtained and then smoothed using morphological operators to detect the features. A post-processing stage eliminates a large number of false features from the detected set of minutiae features. The detected features are observed to be reliable and accurate.
Keywords: Minutia, orientation field, ridge segmentation, textured image.
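A common gradient-based way to compute the ridge orientation flow field, used here as a generic illustration rather than the authors' exact formulation, averages doubled gradient angles over blocks:

```python
import cv2
import numpy as np

def ridge_orientation_field(gray, block=16):
    """Block-wise ridge orientation (radians) from averaged, doubled gradient angles."""
    gx = cv2.Sobel(gray.astype(np.float32), cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray.astype(np.float32), cv2.CV_32F, 0, 1, ksize=3)
    h, w = gray.shape
    orientations = np.zeros((h // block, w // block), np.float32)
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            bx, by = gx[i:i + block, j:j + block], gy[i:i + block, j:j + block]
            gxy = 2.0 * np.sum(bx * by)
            gxx_yy = np.sum(bx**2 - by**2)
            # Ridges run perpendicular to the dominant gradient direction
            orientations[i // block, j // block] = 0.5 * np.arctan2(gxy, gxx_yy) + np.pi / 2
    return orientations
```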
1085 Metal Streak Analysis with different Acquisition Settings in Postoperative Spine Imaging: A Phantom Study
Authors: N. D. Osman, M. S. Salikin, M. I. Saripan
Abstract:
CT assessment of the postoperative spine is challenging in the presence of metal streak artifacts that can deteriorate the quality of CT images. In this paper, we studied the influence of different acquisition parameters on the magnitude of metal streaking. A water-bath phantom was constructed with a metal insertion similar to a postoperative spine assessment. The phantom was scanned with different acquisition settings, and the acquired data were reconstructed using various reconstruction settings. Standardized ROIs were defined within the streaking region for image analysis. The results show that increased kVp and mAs enhanced SNR values by reducing image noise. A sharper kernel enhanced image quality compared to a smooth kernel, but produced more noise and higher CT number fluctuation in the images. The noise of the two kernels differed significantly (P < 0.05), with increased noise in the bone kernel images (mean difference = 54.78). The technical settings should be selected appropriately to attain acceptable image quality with the best diagnostic value.
Keywords: Computed tomography, metal streak, noise, CT fluctuation.
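The ROI-based image analysis reduces to simple statistics over the pixel values inside each region; a minimal sketch with a hypothetical rectangular ROI is:

```python
import numpy as np

def roi_noise_and_snr(ct_slice, roi):
    """Mean CT number, noise (standard deviation) and SNR inside a rectangular ROI (r0, r1, c0, c1)."""
    r0, r1, c0, c1 = roi
    values = np.asarray(ct_slice, dtype=float)[r0:r1, c0:c1]
    mean_hu = values.mean()
    noise = values.std(ddof=1)            # image noise within the streaking region
    return mean_hu, noise, mean_hu / noise

# Example: mean, sd, snr = roi_noise_and_snr(slice_array, (120, 140, 200, 220))
```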
1084 Mining Correlated Bicluster from Web Usage Data Using Discrete Firefly Algorithm Based Biclustering Approach
Authors: K. Thangavel, R. Rathipriya
Abstract:
For the past decade, biclustering has become a popular data mining technique, not only in the field of biological data analysis but also in other applications such as text mining and market data analysis with high-dimensional two-way datasets. Biclustering clusters both rows and columns of a dataset simultaneously, as opposed to traditional clustering, which clusters either rows or columns of a dataset. It retrieves subgroups of objects that are similar in one subgroup of variables and different in the remaining variables. The Firefly Algorithm (FA) is a recently proposed metaheuristic inspired by the collective behavior of fireflies. This paper provides a preliminary assessment of a discrete version of FA (DFA) on the task of mining coherent, large-volume biclusters from web usage data. The experiments were conducted on two web usage datasets from a public dataset repository, and the performance of FA was compared with that of another population-based metaheuristic, binary Particle Swarm Optimization (PSO). The results achieved demonstrate the usefulness of DFA in tackling the biclustering problem.
Keywords: Biclustering, binary Particle Swarm Optimization, Discrete Firefly Algorithm, Firefly Algorithm, usage profile, web usage mining.
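Bicluster coherence is often scored with Cheng and Church's mean squared residue; that measure is assumed here as an illustration of what a population-based search such as DFA or binary PSO would optimize. The sketch evaluates it for a candidate bicluster encoded by row and column selection bit vectors.

```python
import numpy as np

def mean_squared_residue(matrix, row_bits, col_bits):
    """Cheng-Church mean squared residue of the bicluster selected by binary row/column vectors."""
    rows = np.flatnonzero(row_bits)
    cols = np.flatnonzero(col_bits)
    sub = matrix[np.ix_(rows, cols)]
    row_mean = sub.mean(axis=1, keepdims=True)
    col_mean = sub.mean(axis=0, keepdims=True)
    residue = sub - row_mean - col_mean + sub.mean()
    return np.mean(residue ** 2)           # lower values mean a more coherent bicluster

# Example with a random usage matrix and random selection bits
data = np.random.rand(50, 20)
rows = np.random.randint(0, 2, 50)
cols = np.random.randint(0, 2, 20)
print(mean_squared_residue(data, rows, cols))
```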
1083 Effectiveness of Contourlet vs Wavelet Transform on Medical Image Compression: a Comparative Study
Authors: Negar Riazifar, Mehran Yazdi
Abstract:
The Discrete Wavelet Transform (DWT) has been shown to be far superior to the earlier Discrete Cosine Transform (DCT) and standard JPEG for natural as well as medical image compression. Due to its localization properties in both the spatial and transform domains, the quantization error introduced in DWT does not propagate globally as in DCT. Moreover, DWT is applied globally and thus avoids block artifacts such as those of JPEG. However, recent reports on natural image compression have shown the superior performance of the contourlet transform, an extension of the wavelet transform to two dimensions using nonseparable and directional filter banks, compared to DWT. This is mostly due to the optimality of the contourlet in representing edges when they are smooth curves. In this work, we investigate this for medical images, especially CT images, which has not been reported yet. To do so, we propose a compression scheme in the transform domain and compare the performance of DWT and the contourlet transform in terms of PSNR for different compression ratios (CR) using this scheme. The results obtained using different types of computed tomography images show that DWT still performs well at lower CR, but the contourlet transform performs better at higher CR.
Keywords: Computed Tomography (CT), DWT, Discrete Contourlet Transform, Image Compression.
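The PSNR figure of merit used for the comparison is computed as below; an 8-bit image with peak value 255 is assumed.

```python
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB between an original and a decompressed image."""
    err = np.asarray(original, dtype=float) - np.asarray(reconstructed, dtype=float)
    mse = np.mean(err ** 2)
    if mse == 0:
        return float("inf")     # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```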
1082 Dynamic Web-Based 2D Medical Image Visualization and Processing Software
Authors: Abdelhalim N. Mohammed, Mohammed Y. Esmail
Abstract:
In recent decades, medical imaging has been dominated by the use of costly film media for review and archival of medical investigations; however, due to developments in network technologies and the common acceptance of the Digital Imaging and Communications in Medicine (DICOM) standard, another approach based on the World Wide Web has emerged. Web technologies have been used successfully in telemedicine applications, and here the combination of web technologies with DICOM is used to design a web-based, open source DICOM viewer. The web server allows query and retrieval of images, and the images are viewed and manipulated inside a web browser without the need to preinstall any software. The dynamic page for medical image visualization and processing was created using JavaScript and HTML5. The XAMPP Apache server is used to create a local web server for testing and deployment of the dynamic site. The web-based viewer is connected to multiple devices through a local area network (LAN) to distribute the images inside healthcare facilities. The system offers several advantages over ordinary picture archiving and communication systems (PACS): it is easy to install and maintain, platform independent, allows images to be displayed and manipulated efficiently, and is user-friendly and easy to integrate with existing systems that already make use of web technologies. A wavelet-based image compression technique is applied, in which the 2-D discrete wavelet transform decomposes the image and the thresholded wavelet coefficients are transmitted after entropy encoding to decrease transmission time and storage cost. Compression performance was estimated using image quality metrics such as mean square error (MSE), peak signal to noise ratio (PSNR) and compression ratio (CR), which reached 83.86% when the 'coif3' wavelet filter was used.
Keywords: DICOM, discrete wavelet transform, PACS, HIS, LAN.
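The compression stage can be sketched with PyWavelets: decompose with a 2-D DWT ('coif3' as in the paper), zero the small coefficients, and measure the fraction discarded as a simple compression-ratio proxy. The percentile-based threshold rule below is an assumption, not the paper's exact scheme.

```python
import numpy as np
import pywt

def wavelet_compress(image, wavelet="coif3", level=3, keep_percent=10.0):
    """Threshold 2-D DWT coefficients, report the fraction zeroed, and reconstruct the image."""
    coeffs = pywt.wavedec2(np.asarray(image, dtype=float), wavelet, level=level)
    arr, slices = pywt.coeffs_to_array(coeffs)
    threshold = np.percentile(np.abs(arr), 100.0 - keep_percent)
    compressed = pywt.threshold(arr, threshold, mode="hard")        # zero small coefficients
    zero_fraction = np.mean(compressed == 0) * 100.0                # rough CR proxy, in percent
    reconstructed = pywt.waverec2(
        pywt.array_to_coeffs(compressed, slices, output_format="wavedec2"), wavelet)
    return reconstructed, zero_fraction
```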
1081 Emotions Triggered by Children’s Literature Images
Abstract:
The role of images/illustrations in communicating meanings and triggering emotions assumes an increasingly relevant role in contemporary texts, regardless of the age group for which they are intended or the nature of the texts that host them. It is no coincidence that children's books are full of illustrations and that the image/text ratio decreases as the age group grows. The vast majority of children's books can be considered as multimodal texts containing text and images/illustrations, interacting with each other, to provide the young reader with a broader and more creative understanding of the book's narrative. This interaction is very diverse, ranging from images/illustrations that are not essential for understanding the storytelling to those that contribute significantly to the meaning of the story. Usually, these books are also read by adults, namely by parents, educators, and teachers who act as mediators between the book and the children, explaining aspects that are or seem to be too complex for the child's context. It should be noted that there are books labeled as children's books, that are clearly intended for both children and adults. In this work, following a qualitative and interpretative methodology based on written productions, participant observation, and field notes, we will describe the perceptions of future teachers of the 1st cycle of basic education, attending a master’s degree at a Portuguese university, about the role of the image in literary and non-literary texts, namely in mathematical texts, and how these can constitute precious resources for emotional regulation and for the design of creative didactic situations. The analysis of the collected data allowed us to obtain evidence regarding the evolution of the participants' perception regarding the crucial role of images in children's literature, not only as an emotional regulator for young readers but also as a creative source for the design of meaningful didactical situations, crossing other scientific areas, other than the mother tongue, namely mathematics.
Keywords: Children’s literature, emotions, multimodal texts, soft skills.
1080 Quality Evaluation of Compressed MRI Medical Images for Telemedicine Applications
Authors: Seddeq E. Ghrare, Salahaddin M. Shreef
Abstract:
Medical image modalities such as computed tomography (CT), magnetic resonance imaging (MRI), ultrasound (US), and X-ray are used to diagnose disease. These modalities provide flexible means of reviewing anatomical cross-sections and physiological state in different parts of the human body. Raw medical images have huge file sizes and large storage requirements, so their size must be reduced for telemedicine applications. Image compression is thus a key factor in reducing the bit rate for transmission or storage while maintaining acceptable reproduction quality, but it is natural to ask how much an image can be compressed while still preserving sufficient information for a given clinical application. Many techniques for achieving data compression have been introduced. In this study, images from three different MRI modalities, brain, spine and knee, were compressed and reconstructed using the wavelet transform. Subjective and objective evaluations were carried out to investigate the clinical information quality of the compressed images. For the objective evaluation, the results show that the PSNR, which indicates the quality of the reconstructed image, ranges from 21.95 dB to 30.80 dB, 27.25 dB to 35.75 dB, and 26.93 dB to 34.93 dB for brain, spine, and knee images, respectively. For the subjective evaluation test, the results show that a compression ratio of 40:1 was acceptable for brain images, whereas for spine and knee images 50:1 was acceptable.
Keywords: Medical image, magnetic resonance imaging, image compression, discrete wavelet transform, telemedicine.
1079 Weed Classification using Histogram Maxima with Threshold for Selective Herbicide Applications
Authors: Irshad Ahmad, Abdul Muhamin Naeem, Muhammad Islam, Shahid Nawaz
Abstract:
Information on weed distribution within the field is necessary to implement spatially variable herbicide application. Since hand labor is costly, an automated weed control system could be feasible. This paper deals with the development of an algorithm for a real-time specific weed recognition system based on the histogram maxima with thresholding of an image, which is used for weed classification. The algorithm is specifically developed to classify images into broad and narrow classes for real-time selective herbicide application. The developed system has been tested on weeds in the lab, and the tests have shown the system to be very effective in weed identification. Furthermore, the results show very reliable performance on images of weeds taken under varying field conditions. The analysis of the results shows over 95 percent classification accuracy over 140 sample images (broad and narrow) with 70 samples from each category of weeds.
Keywords: Image processing, real-time recognition, weed detection.
1078 An Approach for Reducing the Computational Complexity of LAMSTAR Intrusion Detection System using Principal Component Analysis
Authors: V. Venkatachalam, S. Selvan
Abstract:
The security of computer networks plays a strategic role in modern computer systems. Intrusion Detection Systems (IDS) act as the 'second line of defense' placed inside a protected network, looking for known or potential threats in network traffic and/or audit data recorded by hosts. We developed an Intrusion Detection System using the LAMSTAR neural network to learn patterns of normal and intrusive activities and to classify observed system activities, and compared the performance of the LAMSTAR IDS with other classification techniques using 5 classes of KDDCup99 data. The LAMSTAR IDS gives better performance at the cost of high computational complexity and long training and testing times when compared to other classification techniques (Binary Tree classifier, RBF classifier, Gaussian Mixture classifier). We further reduced the computational complexity of the LAMSTAR IDS by reducing the dimension of the data using principal component analysis, which in turn reduces the training and testing time with almost the same performance.
Keywords: Binary Tree classifier, Gaussian Mixture, Intrusion Detection System, LAMSTAR, Radial Basis Function.
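The dimensionality-reduction step can be reproduced with scikit-learn as shown below; a generic classifier stands in for the LAMSTAR network, and the number of retained components is a placeholder to be tuned.

```python
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier

def build_reduced_ids(n_components=10):
    """PCA in front of the classifier shrinks the input dimension and hence training/testing time."""
    return make_pipeline(
        StandardScaler(),                  # scale features before PCA
        PCA(n_components=n_components),    # keep the leading principal components
        MLPClassifier(hidden_layer_sizes=(40,), max_iter=300),  # stand-in for the LAMSTAR network
    )

# X_train: KDDCup99-style feature vectors, y_train: the 5 class labels
# model = build_reduced_ids(); model.fit(X_train, y_train); model.score(X_test, y_test)
```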
1077 A Novel Approach to Asynchronous State Machine Modeling on Multisim for Avoiding Function Hazards
Authors: L. Parisi, D. Hamili, N. Azlan
Abstract:
The aim of this study was to design and simulate a particular type of Asynchronous State Machine (ASM), namely a ‘traffic light controller’ (TLC), operated at a frequency of 0.5 Hz. The design task involved two main stages: firstly, designing a 4-bit binary counter using J-K flip flops as the timing signal and, subsequently, attaining the digital logic by deploying the ASM design process. The TLC was designed such that it showed a sequence of three different colours, i.e. red, yellow and green, corresponding to set thresholds, using the least number of AND, OR and NOT gates possible. The software Multisim was used to design the circuit and simulate it for troubleshooting so that it displayed the output sequence of the three colours on the traffic light in the correct order. A clock signal, an asynchronous 4-bit binary counter designed with J-K flip flops, and an ASM were used to complete this sequence, which was programmed to repeat indefinitely. Eventually, the circuit was debugged and optimized, displaying the correct waveforms of the three outputs on the logic analyser. However, hazards occurred when the frequency was increased to 10 MHz, which was attributed to delays in the feedback being too high.
Keywords: Asynchronous State Machine, Traffic Light Controller, Circuit Design, Digital Electronics.
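The timing chain can also be checked in software before simulation in Multisim. The sketch below models J-K flip flops in toggle mode (J = K = 1) as a 4-bit ripple (asynchronous) counter, which is one simple reading of the counter described here, not the paper's exact circuit.

```python
def ripple_counter(clock_cycles, bits=4):
    """J-K flip flops in toggle mode: each stage toggles on the falling edge of the previous stage's output."""
    q = [0] * bits
    history = []
    for _ in range(clock_cycles):
        carry = True                           # the external clock edge reaches stage 0
        for i in range(bits):
            if not carry:
                break
            old = q[i]
            q[i] ^= 1                          # toggle this stage
            carry = (old == 1 and q[i] == 0)   # ripple onward only when this output falls
        history.append(int("".join(str(b) for b in reversed(q)), 2))
    return history

print(ripple_counter(16))   # counts 1..15 then wraps back to 0
```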
1076 Scalable Systolic Multiplier over Binary Extension Fields Based on Two-Level Karatsuba Decomposition
Authors: Chiou-Yng Lee, Wen-Yo Lee, Chieh-Tsai Wu, Cheng-Chen Yang
Abstract:
The shifted polynomial basis (SPB) is a variation of the polynomial basis representation. SPB has potential for efficient bit-level and digit-level implementations of multiplication over binary extension fields with subquadratic space complexity. For efficient implementation of pairing computation with large finite fields, this paper presents a new SPB multiplication algorithm based on Karatsuba schemes and uses it to derive a novel scalable multiplier architecture. Analytical results show that the proposed multiplier provides a trade-off between space and time complexities. Our proposed multiplier is modular, regular, and suitable for very large scale integration (VLSI) implementation. It involves less area complexity than multipliers based on traditional decomposition methods. It is therefore more suitable for efficient hardware implementation of pairing-based cryptography and elliptic curve cryptography (ECC) in constraint-driven applications.
Keywords: Digit-serial systolic multiplier, elliptic curve cryptography (ECC), Karatsuba algorithm (KA), shifted polynomial basis (SPB), pairing computation.
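Polynomials over GF(2) can be held in machine integers (bit i is the coefficient of x^i), where carry-less Karatsuba multiplication looks as follows; this illustrates the decomposition the multiplier builds on, not the systolic SPB architecture itself.

```python
def gf2_mul_school(a, b):
    """Carry-less (XOR) schoolbook multiplication of GF(2) polynomials stored as integers."""
    result = 0
    while b:
        if b & 1:
            result ^= a
        a <<= 1
        b >>= 1
    return result

def gf2_mul_karatsuba(a, b, cutoff_bits=32):
    """Karatsuba over GF(2)[x]: three half-size products instead of four; + and - are both XOR."""
    n = max(a.bit_length(), b.bit_length())
    if n <= cutoff_bits:
        return gf2_mul_school(a, b)
    half = n // 2
    mask = (1 << half) - 1
    a_lo, a_hi = a & mask, a >> half
    b_lo, b_hi = b & mask, b >> half
    lo = gf2_mul_karatsuba(a_lo, b_lo, cutoff_bits)
    hi = gf2_mul_karatsuba(a_hi, b_hi, cutoff_bits)
    mid = gf2_mul_karatsuba(a_lo ^ a_hi, b_lo ^ b_hi, cutoff_bits) ^ lo ^ hi   # cross term
    return (hi << (2 * half)) ^ (mid << half) ^ lo

assert gf2_mul_karatsuba(0b10011011, 0b1100111, cutoff_bits=4) == gf2_mul_school(0b10011011, 0b1100111)
```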
1075 Fused Structure and Texture (FST) Features for Improved Pedestrian Detection
Authors: Hussin K. Ragb, Vijayan K. Asari
Abstract:
In this paper, we present a pedestrian detection descriptor called Fused Structure and Texture (FST) features, based on the combination of local phase information with texture features. Since the phase of a signal conveys more structural information than the magnitude, the phase congruency concept is used to capture the structural features. On the other hand, the Center-Symmetric Local Binary Pattern (CSLBP) approach is used to capture the texture information of the image. The dimensionless quantity of the phase congruency and the robustness of the CSLBP operator on flat images, as well as to blur and illumination changes, make the proposed descriptor more robust and less sensitive to light variations. The proposed descriptor is formed by extracting the phase congruency and the CSLBP values of each pixel of the image with respect to its neighborhood. The histogram of the oriented phase and the histogram of the CSLBP values for the local regions in the image are computed and concatenated to construct the FST descriptor. Several experiments were conducted on the INRIA and the low resolution DaimlerChrysler datasets to evaluate the detection performance of the pedestrian detection system based on the FST descriptor. A linear Support Vector Machine (SVM) is used to train the pedestrian classifier. These experiments showed that the proposed FST descriptor has better detection performance than a set of state-of-the-art feature extraction methodologies.
Keywords: Pedestrian detection, phase congruency, local phase, LBP features, CSLBP features, FST descriptor.
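As an illustration of the texture half of the descriptor, a minimal (and unoptimised) center-symmetric LBP for the standard 8-neighbourhood is sketched below; the small comparison threshold and the histogram binning are the usual CSLBP choices, assumed rather than taken from the paper.

```python
import numpy as np

# Offsets of the 8-neighbourhood; opposite pairs sit P/2 = 4 positions apart
OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]

def cslbp(gray, threshold=3):
    """Center-Symmetric LBP: compare the 4 opposite neighbour pairs, giving codes in [0, 15]."""
    g = np.asarray(gray, dtype=np.int32)
    h, w = g.shape
    codes = np.zeros((h - 2, w - 2), np.uint8)
    for bit, ((dy1, dx1), (dy2, dx2)) in enumerate(zip(OFFSETS[:4], OFFSETS[4:])):
        n1 = g[1 + dy1:h - 1 + dy1, 1 + dx1:w - 1 + dx1]
        n2 = g[1 + dy2:h - 1 + dy2, 1 + dx2:w - 1 + dx2]
        codes |= (n1 - n2 > threshold).astype(np.uint8) << bit
    return codes

def cslbp_histogram(gray):
    """Normalised 16-bin CSLBP histogram for one local region."""
    hist, _ = np.histogram(cslbp(gray), bins=16, range=(0, 16), density=True)
    return hist
```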
1074 An Improved Method on Static Binary Analysis to Enhance the Context-Sensitive CFI
Authors: Qintao Shen, Lei Luo, Jun Ma, Jie Yu, Qingbo Wu, Yongqi Ma, Zhengji Liu
Abstract:
Control Flow Integrity (CFI) is one of the most promising techniques to defend against Code-Reuse Attacks (CRAs). Traditional CFI systems and recent Context-Sensitive CFI use coarse control flow graphs (CFGs) to analyze whether a control flow hijack occurs, leaving vast space for attackers at indirect call-sites. Coarse CFGs make it difficult to decide which target to execute at indirect control-flow transfers and actually weaken existing CFI systems. Extracting CFGs precisely and completely from binaries remains an unsolved problem. In this paper, we present an algorithm to obtain a more precise CFG from binaries. First, parameters are analyzed at indirect call-sites and functions. By comparing the counts of parameters prepared before call-sites with those consumed by functions, the targets of indirect calls are reduced. The control flow is then more constrained at indirect call-sites at runtime. Combined with CCFI, we implement our policy. Experimental results on some popular programs show that our approach is efficient. Further analysis shows that it can mitigate COOP and other advanced attacks.
Keywords: Context-sensitive, CFI, binary analysis, code reuse attack.
1073 Drivers of Land Degradation in Trays Ecosystem as Modulated under a Changing Climate: Case Study of Côte d'Ivoire
Authors: Kadio Valere R. Angaman, Birahim Bouna Niang
Abstract:
Land degradation is a serious problem in developing countries, including Cote d’Ivoire, whose economy is focused on agriculture. It occurs in all kinds of ecosystems over the world. However, the drivers of land degradation vary from one region to another and from one ecosystem to another. Thus, identifying these drivers is an essential prerequisite to developing and implementing appropriate policies to reverse the trend of land degradation in the country, especially in the trays ecosystem. Using a binary logistic model with primary data obtained from a survey of 780 farmers, we analyze and identify the drivers of land degradation in the trays ecosystem. The descriptive statistics show that 52% of the farmers interviewed stated that they face land degradation on their farmland, a high rate that shows the extent of land degradation in this ecosystem. The results obtained from the binary logit regression reveal that land degradation is significantly influenced by a set of variables such as sex, education, slope, erosion, pesticide, agricultural activity, deforestation, and temperature. The drivers identified are mostly local; as a result, the government must implement policies and strategies that facilitate and incentivize the adoption of sustainable land management practices by farmers to reverse the negative trend of land degradation.
Keywords: Drivers, land degradation, trays ecosystem, sustainable land management.
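The econometric step corresponds to a standard binary logit; a sketch with statsmodels and hypothetical column names (degraded as the 0/1 outcome, the listed drivers as regressors, and an illustrative file name) is:

```python
import pandas as pd
import statsmodels.formula.api as smf

# One row per surveyed farmer; the file and column names are illustrative, not the survey's actual coding
df = pd.read_csv("farmer_survey.csv")
model = smf.logit(
    "degraded ~ sex + education + slope + erosion + pesticide"
    " + agricultural_activity + deforestation + temperature",
    data=df,
).fit()
print(model.summary())                  # coefficients and p-values identify the significant drivers
print(model.get_margeff().summary())    # average marginal effects are often reported alongside
```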
1072 Composite Relevance Feedback for Image Retrieval
Authors: Pushpa B. Patil, Manesh B. Kokare
Abstract:
This paper presents a content-based image retrieval (CBIR) framework with relevance feedback (RF) based on the combined learning of support vector machines (SVM) and AdaBoost. The framework incorporates only the most relevant images obtained from both learning algorithms. To speed up the system, it removes from the database the irrelevant images returned by the SVM learner. This is key to achieving effective retrieval performance in terms of time and accuracy. The experimental results show that this framework yields a significant improvement in retrieval effectiveness, which ultimately improves retrieval performance.
Keywords: Image retrieval, relevance feedback, wavelet transform.
1071 Person Identification using Gait by Combined Features of Width and Shape of the Binary Silhouette
Authors: M.K. Bhuyan, Aragala Jagan.
Abstract:
Current image-based individual human recognition methods, such as fingerprint, face, or iris biometric modalities, generally require a cooperative subject, views from certain aspects, and physical contact or close proximity. These methods cannot reliably recognize non-cooperating individuals at a distance in the real world under changing environmental conditions. Gait, which concerns recognizing individuals by the way they walk, is a relatively new biometric without these disadvantages. The inherent gait characteristic of an individual makes it irreplaceable and useful in visual surveillance. In this paper, an efficient gait recognition system for human identification is proposed, based on two features: the width vector of the binary silhouette and MPEG-7 region-based shape descriptors. In the proposed method, foreground objects, i.e., humans and other moving objects, are extracted by estimating background information with a Gaussian Mixture Model (GMM), and subsequently a median filtering operation is performed to remove noise in the background-subtracted image. A moving target classification algorithm, which uses shape and boundary information, is used to separate human beings (i.e., pedestrians) from other foreground objects (viz., vehicles). Subsequently, the width vector of the outer contour of the binary silhouette and the MPEG-7 Angular Radial Transform coefficients are taken as the feature vector. Next, Principal Component Analysis (PCA) is applied to the selected feature vector to reduce its dimensionality. The extracted feature vectors are used to train a Hidden Markov Model (HMM) for the identification of individuals. The proposed system is evaluated on several gait sequences, and the experimental results show the efficacy of the proposed algorithm.
Keywords: Gait recognition, Gaussian Mixture Model, Principal Component Analysis, MPEG-7 Angular Radial Transform.
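The first feature, the width vector of the binary silhouette, is simple to extract once a background-subtracted silhouette is available; the sketch below uses OpenCV's Gaussian-mixture background subtractor as a stand-in for the GMM estimation described, with placeholder parameters.

```python
import cv2
import numpy as np

subtractor = cv2.createBackgroundSubtractorMOG2(history=200, detectShadows=False)

def silhouette_width_vector(frame_bgr, n_rows=64):
    """Row-wise width (left-to-right extent) of the largest foreground blob, resampled to n_rows values."""
    mask = subtractor.apply(frame_bgr)
    mask = cv2.medianBlur(mask, 5)                           # remove residual noise
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return np.zeros(n_rows)
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    silhouette = mask[y:y + h, x:x + w] > 0
    widths = np.array([row.nonzero()[0].ptp() + 1 if row.any() else 0 for row in silhouette])
    rows = np.linspace(0, len(widths) - 1, n_rows).astype(int)
    return widths[rows].astype(float)
```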
1070 Clustering Categorical Data Using the K-Means Algorithm and the Attribute’s Relative Frequency
Authors: Semeh Ben Salem, Sami Naouali, Moetez Sallami
Abstract:
Clustering is a well-known data mining technique used in pattern recognition and information retrieval. The initial dataset to be clustered can contain either categorical or numeric data, and each type of data has its own specific clustering algorithm. In this context, two algorithms are commonly used: k-means for clustering numeric datasets and k-modes for categorical datasets. A frequently encountered problem in data mining applications is the clustering of the categorical data that is so prevalent in real datasets. One way to achieve clustering on categorical values is to transform the categorical attributes into numeric measures and directly apply the k-means algorithm instead of the k-modes. In this paper, we propose and experiment with an approach that transforms the categorical values into numeric ones using the relative frequency of each modality in the attributes. The proposed approach is compared with a previous method based on transforming the categorical datasets into binary values. The scalability and accuracy of the two methods are evaluated experimentally. The results obtained show that our proposed method outperforms the binary method in all cases.
Keywords: Clustering, k-means, categorical datasets, pattern recognition, unsupervised learning, knowledge discovery.
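A compact version of the proposed transformation, replacing each categorical value by the relative frequency of its modality within the attribute and then running ordinary k-means, could look like this with pandas and scikit-learn (the toy data is illustrative):

```python
import pandas as pd
from sklearn.cluster import KMeans

def frequency_encode(df):
    """Replace every categorical value by the relative frequency of that modality in its column."""
    encoded = pd.DataFrame(index=df.index)
    for col in df.columns:
        freq = df[col].value_counts(normalize=True)   # modality -> relative frequency
        encoded[col] = df[col].map(freq)
    return encoded

df = pd.DataFrame({"colour": ["red", "blue", "red", "green", "red"],
                   "shape":  ["round", "square", "round", "round", "square"]})
X = frequency_encode(df)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)
```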
1069 Performance Evaluation of Iris Region Detection and Localization for Biometric Identification System
Authors: Chit Su Htwe, Win Htay
Abstract:
Iris recognition technology is the most accurate, fast and least invasive biometric technique compared to others based on, for example, fingerprints, face, retina, hand geometry, voice or signature patterns. The system developed in this study has the potential to play a key role in areas of high-risk security and can give organizations a fast and secure means of granting access to such areas only to authorized personnel. The aim of this paper is to perform iris region detection and localization of the inner and outer iris boundaries. The system was implemented on the Windows platform using the Visual C# programming language, which is an easy and efficient tool for image processing with good performance and accuracy. In particular, the system includes two main parts: the first preprocesses the iris images using Canny edge detection and segments the iris region from the rest of the image, and the second determines the location of the iris boundaries by applying the Hough transform. The proposed system was tested on 756 iris images from 60 eyes of the CASIA iris database.
Keywords: Canny, C#, Hough transform, image preprocessing.
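The two localisation steps map naturally onto OpenCV's Canny and circular Hough transforms; a sketch is given below, with parameter values that are placeholders to be tuned on the CASIA images rather than the system's actual settings.

```python
import cv2
import numpy as np

def locate_iris_boundaries(eye_gray):
    """Return (x, y, r) circles for the pupil (inner) and iris (outer) boundaries, plus the edge map."""
    blurred = cv2.medianBlur(eye_gray, 5)
    edges = cv2.Canny(blurred, 40, 120)               # edge map used to inspect the segmentation
    # Inner (pupil) boundary: small dark circle
    pupil = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1, minDist=200,
                             param1=120, param2=20, minRadius=15, maxRadius=60)
    # Outer (iris/sclera) boundary: larger circle
    iris = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1, minDist=200,
                            param1=120, param2=30, minRadius=60, maxRadius=140)
    first = lambda c: tuple(np.round(c[0, 0]).astype(int)) if c is not None else None
    return first(pupil), first(iris), edges
```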
1068 Unified Method to Block Pornographic Images in Websites
Authors: Sakthi Priya Balaji R., Vijayendar G.
Abstract:
This paper proposes a technique to block adult images displayed in websites. The filter is designed to perform even in exceptional cases, such as when face detection is not possible or face visibility is poor. This is achieved by using an alternative phase that extracts the MFC (Most Frequent Color) from the human body regions estimated using a biometric based on anthropometric distances between fixed, rigidly connected body locations. The logical results generated can be protected from being overridden by a firewall or intrusion by encrypting the result in an SSH data packet.
Keywords: Face detection, characteristics extraction and classification, component-based shape analysis and classification, open source SSH V2 protocol.