Search results for: Fog vision system
8607 The Role of Synthetic Data in Aerial Object Detection
Authors: Ava Dodd, Jonathan Adams
Abstract:
The purpose of this study is to explore the characteristics of developing a machine learning application using synthetic data. The study is structured to develop the application for the purpose of deploying the computer vision model. The findings discuss the realities of attempting to develop a computer vision model for a practical purpose, and detail the processes, tools and techniques that were used to meet accuracy requirements. The research reveals that synthetic data represent another variable that can be adjusted to improve the performance of a computer vision model. Further, a suite of tools and tuning recommendations is provided.
Keywords: computer vision, machine learning, synthetic data, YOLOv4
8606 Intelligent Vision System for Human-Robot Interface
Authors: Al-Amin Bhuiyan, Chang Hong Liu
Abstract:
This paper addresses the development of an intelligent vision system for human-robot interaction. The two novel contributions of this paper are 1) detection of human faces and 2) localization of the eyes. The method is based on visual attributes of human skin colors and geometrical analysis of the face skeleton. This paper introduces a spatial domain filtering method named the 'Fuzzily skewed filter', which incorporates fuzzy rules for deciding the gray level of pixels from their neighborhoods and takes advantage of both the median and averaging filters. The effectiveness of the method has been demonstrated by implementing eye-tracking commands on an entertainment robot named 'AIBO'.
Keywords: Fuzzily skewed filter, human-robot interface, RMS contrast, skin color segmentation.
8605 Human Motion Capture: New Innovations in the Field of Computer Vision
Authors: Najm Alotaibi
Abstract:
Human motion capture has become one of the major areas of interest in the field of computer vision. Some of the major application areas that have been rapidly evolving include advanced human interfaces, virtual reality and security/surveillance systems. This study provides a brief overview of the techniques and applications used for markerless human motion capture, which deals with analyzing human motion in the form of mathematical formulations. The major contribution of this research is that it classifies the computer vision based techniques of human motion capture according to a taxonomy, and then breaks them down into four systematically different categories: tracking, initialization, pose estimation and recognition. Detailed descriptions, and the relationships between them, are given for the techniques of tracking and pose estimation, and the subcategories of each process are further described. The various hypotheses used by researchers in this domain are surveyed, and the evolution of these techniques is explained. The survey concludes that most researchers have focused on using mathematical body models for markerless motion capture.
Keywords: Human Motion Capture, Computer Vision, Vision based, Tracking.
8604 Capturing an Unknown Moving Target in Unknown Territory using Vision and Coordination
Authors: Kiran Ijaz, Umar Manzoor, Arshad Ali Shahid
Abstract:
In this paper we present an extension to Vision Based LRTA* (VLRTA*), known as Vision Based Moving Target Search (VMTS), for capturing an unknown moving target in unknown territory with randomly generated obstacles. The target position is unknown to the agents, and they cannot predict its position using any probabilistic method. Agents have omnidirectional vision but can see in only one direction at any point in time. An agent's vision is blocked by the obstacles in the search space, so an agent cannot see through obstacles. The proposed algorithm is evaluated on a large number of scenarios. Scenarios include grids of sizes from 10x10 to 100x100, with obstacles randomly placed to occupy 0% to 50% of the search space, in increments of 10%. Experiments used 2 to 9 agents for each randomly generated maze with the same obstacle ratio. The observed results suggest that VMTS is effective in terms of target-location time, solution quality and virtual targets. In addition, VMTS becomes more efficient if the number of agents is increased in proportion to the obstacle ratio.
Keywords: Vision, MTS, Unknown Target, Coordination, VMTS, Multi-Agent.
8603 Real-Time Vision-based Korean Finger Spelling Recognition System
Authors: Anjin Park, Sungju Yun, Jungwhan Kim, Seungk Min, Keechul Jung
Abstract:
Finger spelling is an art of communicating with signs made by the fingers, and has been introduced into sign language to serve as a bridge between sign language and the verbal language. Previous approaches to finger spelling recognition fall into two categories: glove-based and vision-based. The glove-based approach is simpler and recognizes hand posture more accurately than the vision-based approach, yet its interface requires the user to wear a cumbersome glove and carry a load of cables connecting the device to a computer. In contrast, vision-based approaches provide an attractive alternative to this cumbersome interface and promise more natural and unobtrusive human-computer interaction. Vision-based approaches generally consist of two steps, hand extraction and recognition, which are processed independently. This paper proposes a real-time vision-based Korean finger spelling recognition system that integrates hand extraction into recognition. First, we tentatively detect a hand region using the CAMShift algorithm. Then the fill factor and aspect ratio, computed from the width and height estimated by CAMShift, are used to choose candidates from the database, which reduces the number of matches in the recognition step. To recognize the finger spelling, we use DTW (dynamic time warping) based on modified chain codes, which is robust to scale and orientation variations. In this procedure, since accurate hand regions, without holes and noise, should be extracted to improve precision, we use the graph cuts algorithm, which globally minimizes an energy function elegantly expressed by Markov random fields (MRFs). In the experiments, the computational times are less than 130 ms and do not depend on the number of finger spelling templates in the database, as candidate templates are selected in the extraction step.
Keywords: CAMShift, DTW, Graph Cuts, MRF.
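For illustration, the matching step named above, dynamic time warping over chain-code sequences, can be sketched as follows; the cyclic distance function and the toy templates are assumptions for the example, not the paper's modified chain codes or its template database.

```python
# Minimal sketch of DTW over 8-direction chain codes (illustrative only;
# the paper's "modified chain codes" and candidate selection are not reproduced).
import numpy as np

def chain_code_dist(a, b):
    # Distance between two 8-direction codes on the cyclic ring 0..7.
    d = abs(a - b) % 8
    return min(d, 8 - d)

def dtw(seq_a, seq_b):
    # Classic O(len(a)*len(b)) dynamic-time-warping cost.
    n, m = len(seq_a), len(seq_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = chain_code_dist(seq_a[i - 1], seq_b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

# Toy usage: match an observed contour code against two hypothetical templates.
observed  = [0, 1, 1, 2, 3, 3, 4]
templates = {"template_a": [0, 1, 2, 2, 3, 4, 4], "template_b": [6, 6, 7, 0, 1, 1, 2]}
best = min(templates, key=lambda k: dtw(observed, templates[k]))
print("best matching template:", best)
```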
8602 Automated Textile Defect Recognition System Using Computer Vision and Artificial Neural Networks
Authors: Atiqul Islam, Shamim Akhter, Tumnun E. Mursalin
Abstract:
Least Developed Countries (LDCs) like Bangladesh, which earns 25% of its revenue from textile exports, need to produce less defective textile to minimize production cost and time. Inspection processes in these industries are mostly manual and time consuming. Reducing errors in identifying fabric defects requires a more automated and accurate inspection process. Considering this gap, this research implements a Textile Defect Recognizer which uses computer vision methodology in combination with multi-layer neural networks to identify four classes of textile defects. The recognizer, suitable for LDCs, identifies fabric defects at an economical cost and provides a less error-prone inspection system in real time. To generate the input set for the neural network, the recognizer first captures digital fabric images with an image acquisition device and converts the RGB images into binary images by a restoration process and local thresholding techniques. The outputs of the processed image (the area of the faulty portion, the number of objects in the image and the sharpness factor of the image) are then fed as the input layer to the neural network, which uses the back-propagation algorithm to compute the weights and generates the desired defect classifications as output.
Keywords: Computer vision, image acquisition device, machine vision, multi-layer neural networks.
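A rough sketch of the pipeline described here (binarize, extract a few global features, classify with a back-propagation network) might look like the following; the feature definitions, thresholds, file names and the use of OpenCV/scikit-learn are illustrative assumptions rather than the authors' implementation.

```python
# Hedged sketch: binarize a fabric image, derive simple global features and
# classify them with a back-propagation MLP. Feature definitions, thresholds
# and file names are illustrative assumptions, not the authors' exact method.
import cv2
import numpy as np
from sklearn.neural_network import MLPClassifier

def fabric_features(path):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    # Local (adaptive) thresholding stands in for the paper's binarization step.
    binary = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                   cv2.THRESH_BINARY_INV, 35, 10)
    num_labels, _ = cv2.connectedComponents(binary)
    faulty_area = int(np.count_nonzero(binary))               # pixels flagged as defect
    sharpness = float(cv2.Laplacian(gray, cv2.CV_64F).var())  # one possible "sharp factor"
    return [faulty_area, num_labels - 1, sharpness]

# Hypothetical training set: file paths paired with one of four defect classes.
train_paths = ["hole1.png", "oil_spot1.png", "slub1.png", "missing_yarn1.png"]
train_labels = ["hole", "oil_spot", "slub", "missing_yarn"]

X = np.array([fabric_features(p) for p in train_paths])
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000).fit(X, train_labels)
print(clf.predict([fabric_features("unknown_sample.png")]))
```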
8601 FPGA based Relative Distance Measurement using Stereo Vision Technology
Authors: Manasi Pathade, Prachi Kadam, Renuka Kulkarni, Tejas Teredesai
Abstract:
In this paper, we propose a novel concept of relative distance measurement using stereo vision technology and discuss its implementation on an FPGA-based real-time image processor. We capture two images using two CCD cameras and compare them. Disparity is calculated for each pixel using a real-time dense disparity calculation algorithm based on the concept of an indexed histogram for matching. Since disparity is inversely proportional to distance (proved later), we can thus obtain the relative distances of objects in front of the cameras. The output is displayed on a TV screen in the form of a depth image (optionally using pseudo colors). This system works in real time at the full PAL frame rate (720 x 576 active pixels @ 25 fps).
Keywords: Stereo Vision, Relative Distance Measurement, Indexed Histogram, Real-time FPGA Image Processor.
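The inverse relation referred to above is the standard rectified-stereo identity Z = f·B/d (focal length times baseline over disparity); a tiny sketch with made-up calibration values:

```python
# Depth from disparity for a rectified stereo pair: Z = f * B / d.
# The focal length and baseline below are made-up example values.
def depth_from_disparity(disparity_px, focal_px=700.0, baseline_m=0.12):
    if disparity_px <= 0:
        return float("inf")   # zero disparity corresponds to a point at infinity
    return focal_px * baseline_m / disparity_px

for d in (70, 35, 7):
    print(f"disparity {d:3d} px -> depth {depth_from_disparity(d):.2f} m")
```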
8600 Design and Implementation a Fully Autonomous Soccer Player Robot
Authors: S. H. Mohades Kasaei, S. M. Mohades Kasaei, S. A. Mohades Kasaei, M. Taheri, M. Rahimi, H. Vahiddastgerdi, M. Saeidinezhad
Abstract:
Omnidirectional mobile robots have been popularly employed in several applications, especially as soccer player robots in RoboCup competitions. However, the omnidirectional navigation system, omni-vision system and solenoid kicking mechanism in such mobile robots have never been combined. This situation gives rise to the idea of a robot with no head direction: a comprehensive omnidirectional mobile robot. Such a robot can respond more quickly and is capable of more sophisticated behaviors, using a multi-sensor data fusion algorithm for global localization. This paper focuses on the research improvements in the mechanical, electrical and software design of the robots of team ADRO Iran. The main improvements are the world model, the new strategy framework, the mechanical structure, the omni-vision sensor for object detection, robot path planning, the active ball handling mechanism, the new kicker design, and other subjects related to mobile robots.
Keywords: Mobile robot, Machine vision, Omni directional movement, Autonomous Systems, Robot path planning, Object Localization.
8599 A Real-Time Specific Weed Recognition System Using Statistical Methods
Authors: Imran Ahmed, Muhammad Islam, Syed Inayat Ali Shah, Awais Adnan
Abstract:
The identification and classification of weeds are of major technical and economic importance in the agricultural industry. To automate these activities, a weed control system based on properties such as shape, color and texture is feasible. The goal of this paper is to build a real-time, machine vision weed control system that can detect weed locations. In order to accomplish this objective, a real-time robotic system is developed to identify and locate outdoor plants using machine vision technology and pattern recognition. The algorithm is developed to classify images into broad and narrow classes for real-time selective herbicide application. The developed algorithm has been tested on weeds at various locations, and the tests have shown the algorithm to be very effective in weed identification. Further, the results show very reliable performance on weeds under varying field conditions. The analysis of the results shows over 90 percent classification accuracy over 140 sample images (broad and narrow), with 70 samples from each category of weeds.
Keywords: Weed detection, Image Processing, Real-time recognition, Standard Deviation.
8598 Powerful Laser Diode Matrixes for Active Vision Systems
Authors: Dzmitry M. Kabanau, Vladimir V. Kabanov, Yahor V. Lebiadok, Denis V. Shabrov, Pavel V. Shpak, Gevork T. Mikaelyan, Alexandr P. Bunichev
Abstract:
This article deals with experimental investigations of laser diode matrixes (LDMs) based on AlGaAs/GaAs heterostructures (lasing wavelength 790-880 nm) to find optimal LDM parameters for active vision systems. In particular, the dependence of the LDM radiation pulse power on the pulse duration and on active layer heating, as well as the LDM radiation divergence, are discussed.
Keywords: Active vision systems, laser diode matrixes, thermal properties, radiation divergence.
8597 Non-contact Gaze Tracking with Head Movement Adaptation based on Single Camera
Authors: Ying Huang, Zhiliang Wang, An Ping
Abstract:
With advances in computer vision, non-contact gaze tracking systems are heading towards being much easier to operate and more comfortable to use; the technique proposed in this paper is specially designed to achieve these goals. For convenience of operation, the proposal aims at a system with a simple configuration, composed of a fixed wide-angle camera and dual infrared illuminators. Then, in order to enhance the usability of the single-camera system, a self-adjusting method called the Real-time gaze Tracking Algorithm with head movement Compensation (RTAC) is developed to estimate the gaze direction under natural head movement while simplifying the calibration procedure. In actual evaluations, an average accuracy of about 1° is achieved over a field of 20×15×15 cm³.
Keywords: computer vision, gaze tracking, human-computer interaction.
8596 Usability Evaluation Framework for Computer Vision Based Interfaces
Authors: Muhammad Raza Ali, Tim Morris
Abstract:
Human-computer interaction has progressed considerably from the traditional modes of interaction. Vision based interfaces are a revolutionary technology, allowing interaction through human actions and gestures. Researchers have developed numerous accurate techniques; however, with few exceptions, these techniques are not evaluated using standard HCI methods. In this paper we present a comprehensive framework to address this issue. Our evaluation of a computer vision application shows that, in addition to accuracy, it is vital to address human factors.
Keywords: Usability evaluation, cognitive walkthrough, think aloud, gesture recognition.
8595 Vision-Based Collision Avoidance for Unmanned Aerial Vehicles by Recurrent Neural Networks
Authors: Yao-Hong Tsai
Abstract:
Thanks to advances in sensor technology, video surveillance has become the main means of security control in every big city in the world. Surveillance is usually used by governments for intelligence gathering, the prevention of crime, the protection of a process, person, group or object, or the investigation of crime. Many surveillance systems based on computer vision technology have been developed in recent years. Moving target tracking is the most common task for an Unmanned Aerial Vehicle (UAV) to find and track objects of interest in mobile aerial surveillance for civilian applications. This paper focuses on vision-based collision avoidance for UAVs using recurrent neural networks. First, images from the cameras on the UAV are fused by a deep convolutional neural network. Then, a recurrent neural network is constructed to obtain high-level image features for object tracking and to extract low-level image features for noise reduction. The system distributes its computation across local and cloud platforms to efficiently perform object detection, tracking and collision avoidance with multiple UAVs. Experiments on several challenging datasets showed that the proposed algorithm outperforms state-of-the-art methods.
Keywords: Unmanned aerial vehicle, object tracking, deep learning, collision avoidance.
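As a generic illustration of pairing a convolutional feature extractor with a recurrent network for tracking (not the paper's architecture), a minimal PyTorch sketch could look like this; all layer sizes and shapes are assumptions.

```python
# Hedged sketch only: a small CNN frame encoder feeding an LSTM that predicts
# an object's next image-plane position from a short frame history. This is a
# generic stand-in, not the paper's architecture; shapes are illustrative.
import torch
import torch.nn as nn

class TrackPredictor(nn.Module):
    def __init__(self, feat_dim=64, hidden=128):
        super().__init__()
        self.encoder = nn.Sequential(                 # per-frame feature extractor
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )
        self.rnn = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)              # predicted (x, y) offset

    def forward(self, frames):                        # frames: (B, T, 3, H, W)
        b, t = frames.shape[:2]
        feats = self.encoder(frames.flatten(0, 1)).view(b, t, -1)
        out, _ = self.rnn(feats)
        return self.head(out[:, -1])                  # prediction from last step

frames = torch.randn(2, 8, 3, 96, 96)                 # dummy batch of 8-frame clips
print(TrackPredictor()(frames).shape)                 # torch.Size([2, 2])
```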
8594 Localization by DKF Multi Sensor Fusion in the Uncertain Environments for Mobile Robot
Authors: Omid Sojodishijani, Saeed Ebrahimijam, Vahid Rostami
Abstract:
This paper presents an optimized algorithm for robot localization which increases the correctness and accuracy of the estimated position of a mobile robot to more than 150% of past methods [1] in uncertain and noisy environments. In this method, the odometry and vision sensors are combined by an adapted, well-known discrete Kalman filter [2]. This technique also reduces the computational cost of the algorithm through a simple DKF implementation. The experimental trial of the algorithm is performed on a RoboCup middle-size soccer robot; the system can also be used in more general environments.
Keywords: Discrete Kalman filter, odometry sensor, omnidirectional vision sensor, Robot Localization.
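The fusion idea, predicting with odometry and correcting with an absolute vision fix through a discrete Kalman filter, can be sketched minimally as below; the 2D state, noise covariances and measurement model are illustrative assumptions, not the paper's tuning.

```python
# Minimal sketch of a discrete Kalman filter fusing odometry (prediction) with
# an absolute position fix from an omnidirectional vision sensor (correction).
# Noise values and the 2D state are illustrative, not the paper's parameters.
import numpy as np

x = np.zeros(2)                 # state: robot position (x, y) in metres
P = np.eye(2) * 1.0             # state covariance
Q = np.eye(2) * 0.02            # odometry (process) noise
R = np.eye(2) * 0.10            # vision (measurement) noise
H = np.eye(2)                   # vision measures position directly

def dkf_step(x, P, odom_delta, vision_fix):
    # Predict: dead-reckon with the odometry increment.
    x_pred = x + odom_delta
    P_pred = P + Q
    # Correct: blend in the vision fix weighted by the Kalman gain.
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (vision_fix - H @ x_pred)
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new

x, P = dkf_step(x, P, odom_delta=np.array([0.10, 0.02]),
                vision_fix=np.array([0.12, 0.00]))
print("fused position estimate:", x)
```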
8593 Dead-Reckoning Error Calibration using Ceiling-Looking Vision Camera
Authors: Jae-Young Choi, Sung-Gaun Kim
Abstract:
This paper suggests a calibration method to reduce errors that occur due to mobile robot sliding during location estimation using dead reckoning. Due to sliding between the mobile robot's wheels and the road surface during free run, location estimation can be erroneous. Sliding especially occurs when the mobile robot is cornering. Therefore, in order to reduce these frequent sliding errors in cornering, we calibrate the mobile robot's heading values using a vision camera and templates of the ceiling.
Keywords: Dead-reckoning, Localization, Odometry, Vision Camera.
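The underlying scheme, integrating wheel odometry and replacing the drifting heading whenever the ceiling camera yields an absolute orientation, can be sketched as follows; the differential-drive geometry and the ceiling-matching stub are assumptions for illustration.

```python
# Sketch of the calibration idea: integrate wheel odometry (dead reckoning) and
# overwrite the accumulated heading whenever a ceiling-template match returns an
# absolute heading. The template-matching step is only a hypothetical stub here.
import math

def dead_reckon(pose, d_left, d_right, wheel_base):
    """Differential-drive pose update from wheel displacements (metres)."""
    x, y, theta = pose
    d = 0.5 * (d_left + d_right)           # distance travelled by the centre
    dtheta = (d_right - d_left) / wheel_base
    x += d * math.cos(theta + 0.5 * dtheta)
    y += d * math.sin(theta + 0.5 * dtheta)
    return (x, y, theta + dtheta)

def heading_from_ceiling(image):
    # Hypothetical placeholder: match ceiling templates in the camera image and
    # return an absolute heading in radians, or None if no confident match.
    return None

pose = (0.0, 0.0, 0.0)
for d_l, d_r in [(0.20, 0.22), (0.20, 0.25), (0.21, 0.21)]:
    pose = dead_reckon(pose, d_l, d_r, wheel_base=0.30)
    absolute = heading_from_ceiling(image=None)
    if absolute is not None:               # correct sliding-induced heading drift
        pose = (pose[0], pose[1], absolute)
print("estimated pose:", pose)
```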
8592 Stereo Motion Tracking
Authors: Yudhajit Datta, Jonathan Bandi, Ankit Sethia, Hamsi Iyer
Abstract:
Motion tracking and stereo vision are complicated, albeit well-understood, problems in computer vision. Existing software that combines the two approaches to perform stereo motion tracking typically employs complicated and computationally expensive procedures. The purpose of this study is to create a simple and effective solution capable of combining the two approaches. The study explores a strategy that combines two techniques: two-dimensional motion tracking using a Kalman filter, and depth detection of objects using stereo vision. In conventional approaches, objects in the scene of interest are observed using a single camera. For stereo motion tracking, however, the scene of interest is observed using video feeds from two calibrated cameras. Using two simultaneous measurements from the two cameras, the depth of the object from the plane containing the cameras is calculated. The approach attempts to capture the entire three-dimensional spatial information of each object in the scene and represent it through a software estimator object. At discrete intervals, the estimator tracks object motion in the plane parallel to the plane containing the cameras and updates the object's perpendicular distance from that plane as its depth. The ability to efficiently track the motion of objects in three-dimensional space using a simplified approach could prove to be an indispensable tool in a variety of surveillance scenarios. The approach may find application in settings ranging from high-security surveillance scenes, such as the premises of bank vaults, prisons or other detention facilities, to low-cost applications in supermarkets and car parking lots.
Keywords: Kalman Filter, Stereo Vision, Motion Tracking, Matlab, Object Tracking, Camera Calibration, Computer Vision System Toolbox.
8591 The Corporate Vision Effect on Rajabhat University Brand Building in Thailand
Authors: Pisit Potjanajaruwit
Abstract:
This study aims to (1) investigate the corporate vision factor influencing Rajabhat University brand building in Thailand and (2) explore the influence of brand building upon Rajabhat University stakeholders' loyalty. The research uses mixed methods, combining qualitative and quantitative research. The qualitative approach consisted of in-depth interviews with 6 key informants among the executives of the Rathanagosin Rajabhat University group; the quantitative data were collected by questionnaires distributed to stakeholders, including instructors, staff, students and parents of the Rathanagosin Rajabhat University group, with 400 respondents selected by a multi-stage sampling method. Data were analyzed by Structural Equation Modeling (SEM), and a focus group interview was conducted to confirm the model. The findings show that corporate vision had a direct and positive influence on Rajabhat University brand building, which in turn had a direct and positive influence on stakeholders' loyalty, and that stakeholders' loyalty was indirectly influenced by corporate vision through Rajabhat University brand building.
Keywords: Brand building, corporate vision, Rajabhat University, stakeholders’ loyalty.
8590 Information Retrieval in the Semantic LIFE Personal Digital Memory Framework
Authors: Hanh Huu Hoang, Tho Manh Nguyen
Abstract:
Ever increasing capacities of contemporary storage devices inspire the vision of accumulating (personal) information without the need to delete old data over a long time-span. Hence the target of the SemanticLIFE project is to create a Personal Information Management system for a human lifetime's data. One of the most important characteristics of the system is its dedication to retrieving information in a very efficient way. By taking up user demands regarding the reduction of ambiguities, our approach aims at a user-oriented yet sufficiently powerful system with satisfactory query performance. We introduce the query system of SemanticLIFE, the Virtual Query System, which uses emerging Semantic Web technologies to fulfill users' requirements.
Keywords: Ontology-based Information Retrieval, Digital Memories, SemanticLIFE.
8589 Harris Extraction and SIFT Matching for Correlation of Two Tablets
Authors: Ali Alzaabi, Georges Alquié, Hussain Tassadaq, Ali Seba
Abstract:
This article presents the development of efficient algorithms for comparing copies of tablets. Image recognition has specialized uses in digital systems such as medical imaging, computer vision, defense and communication. Comparison between two images that look indistinguishable is a formidable task: two images taken from different sources might look identical but, due to different digitizing properties, they are not, and small variations in image information such as cropping, rotation and slight photometric alteration make simple matching techniques unsuitable. In this paper we introduce different matching algorithms designed to help art centers distinguish images of real paintings from fake ones. Different vision algorithms for local image features are implemented using MATLAB. In this framework a Table Comparison Computer Tool, "TCCT", is designed to facilitate our research. The TCCT is a Graphical User Interface (GUI) tool used to identify images by their shapes and objects. The parameters of the vision system are fully accessible to the user through this graphical user interface. For matching, it then applies different description techniques that can identify exact figures of objects.
Keywords: Harris extraction, SIFT matching.
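The two building blocks named in the title, Harris corner extraction and SIFT matching, are available in OpenCV; a minimal sketch (with placeholder file names, and without the TCCT tool's own parameters) is shown below.

```python
# Hedged sketch of Harris corner response plus SIFT matching with a ratio test.
# File names are placeholders; this is not the TCCT implementation itself.
import cv2
import numpy as np

img1 = cv2.imread("tablet_original.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("tablet_copy.png", cv2.IMREAD_GRAYSCALE)

# Harris response highlights corner-like structure on the first tablet image.
harris1 = cv2.cornerHarris(np.float32(img1), 2, 3, 0.04)

# SIFT keypoints/descriptors and brute-force matching with Lowe's ratio test.
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)
matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]
print(f"{len(good)} good matches out of {len(kp1)}/{len(kp2)} keypoints")
```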
8588 An Evaluation of Neural Network Efficacies for Image Recognition on Edge-AI Computer Vision Platform
Abstract:
Image recognition enables machines such as robots to understand a scene and plays an important role in computer vision applications. Computer vision platforms, as the physical infrastructure supporting neural networks for image recognition, are decisive for the performance achievable by different neural networks. In this paper, three different computer vision platforms – an edge-AI device (Jetson Nano, with 4 GB), a standalone laptop (with an RTX 3000-series GPU, using CUDA), and a web-based service (Google Colab, using a GPU) – are investigated. In the case study, four prominent neural network architectures (AlexNet, VGG16, GoogLeNet, and ResNet-34/50) are deployed. Using the public CIFAR-10 dataset, our findings provide a nuanced perspective on optimizing image recognition tasks across edge-AI platforms, offering guidance on selecting appropriate neural network structures to maximize performance under hardware constraints.
Keywords: AlexNet, VGG, GoogleNet, ResNet, ImageNet, Cifar-10, Edge AI, Jetson Nano, CUDA, GPU.
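A benchmark of the kind described, timing a torchvision ResNet on CIFAR-10 batches on whichever device is available, can be sketched as follows; the batch size, input resizing and untrained weights are illustrative choices, not the paper's setup.

```python
# Hedged sketch: time one ResNet-50 inference batch on CIFAR-10 with torchvision.
# Untrained weights, batch size and resizing are illustrative, not the paper's setup.
import time
import torch
import torchvision
from torchvision import transforms

device = "cuda" if torch.cuda.is_available() else "cpu"   # Jetson/laptop GPU or CPU fallback
model = torchvision.models.resnet50(num_classes=10).to(device).eval()

testset = torchvision.datasets.CIFAR10(
    root="./data", train=False, download=True,
    transform=transforms.Compose([transforms.Resize(224), transforms.ToTensor()]))
loader = torch.utils.data.DataLoader(testset, batch_size=32)

images, _ = next(iter(loader))
with torch.no_grad():
    start = time.perf_counter()
    model(images.to(device))
    print(f"one batch of 32 on {device}: {(time.perf_counter() - start) * 1000:.1f} ms")
```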
8587 MAGNI Dynamics: A Vision-Based Kinematic and Dynamic Upper-Limb Model for Intelligent Robotic Rehabilitation
Authors: Alexandros Lioulemes, Michail Theofanidis, Varun Kanal, Konstantinos Tsiakas, Maher Abujelala, Chris Collander, William B. Townsend, Angie Boisselle, Fillia Makedon
Abstract:
This paper presents a home-based robot-rehabilitation instrument, called "MAGNI Dynamics", that utilizes a vision-based kinematic/dynamic module and an adaptive haptic feedback controller. The system is expected to provide personalized rehabilitation by adjusting its resistive and supportive behavior according to a fuzzy intelligence controller that acts as an inference system, correlating the user's performance to different stiffness factors. The vision module uses the Kinect's skeletal tracking to monitor the user's effort in an unobtrusive and safe way, by estimating the torque that affects the user's arm. The system's torque estimations are validated by capturing electromyographic data from primitive hand motions (shoulder abduction and shoulder forward flexion). Moreover, we present and analyze how the Barrett WAM generates a force field with a haptic controller to support or challenge the users. Experiments show that shifting the proportional value, which corresponds to different stiffness factors of the haptic path, can potentially help the user improve his or her motor skills. Finally, potential areas for future research are discussed, addressing how a rehabilitation robotics framework may incorporate multi-sensing data to improve the user's recovery process.
Keywords: Human-robot interaction, Kinect, kinematics, dynamics, haptic control, rehabilitation robotics, artificial intelligence.
8586 Pre-Analysis of Printed Circuit Boards Based On Multispectral Imaging for Vision Based Recognition of Electronics Waste
Authors: Florian Kleber, Martin Kampel
Abstract:
The increasing demand for gallium, indium and rare-earth elements for the production of electronics, e.g. solid-state lighting, photovoltaics, integrated circuits, and liquid crystal displays, will exceed the world-wide supply according to current forecasts. Recycling systems to reclaim these materials are not yet in place, which challenges the sustainability of these technologies. This paper proposes a multispectral imaging system as the basis for a vision based recognition system for valuable components of electronics waste. Multispectral images are intended to enhance the contrast of images of printed circuit boards (single components as well as labels) for further analysis, such as optical character recognition and entire printed circuit board recognition. The results show that a higher contrast is achieved in the near infrared compared to ultraviolet and visible light.
Keywords: Electronic Waste, Recycling, Multispectral Imaging, Printed Circuit Boards, Rare-Earth Elements.
8585 An Example of Open Robot Controller Architecture - For Power Distribution Line Maintenance Robot System -
Authors: Yingxin He, Kyouichi Tatsuno
Abstract:
In this paper, we propose an architecture for easily constructing a robot controller. The architecture is a multi-agent system with eight agents: the Man-machine interface, Task planner, Task teaching editor, Motion planner, Arm controller, Vehicle controller, Vision system and CG display. The controller has three databases: the Task knowledge database, the Robot database and the Environment database. Based on this controller architecture, we are constructing an experimental power distribution line maintenance robot system and are carrying out experiments on maintenance tasks, for example, the "bolt insertion" task.
Keywords: Robot controller, Software library, Maintenance robot, Robot language, Agent system.
8584 Processing Web-Cam Images by a Neuro-Fuzzy Approach for Vehicular Traffic Monitoring
Authors: A. Faro, D. Giordano, C. Spampinato
Abstract:
Traffic management in an urban area is highly facilitated by knowledge of the traffic conditions in every street or highway involved in the vehicular mobility system. The aim of this paper is to propose a neuro-fuzzy approach able to compute the main parameters of a traffic system, i.e., car density, velocity and flow, by using the images collected by web-cams located at the crossroads of the traffic network. The performance of this approach encourages its application when the traffic system is far from saturation. A fuzzy model is also outlined to evaluate when it is appropriate to use more accurate, though more time-consuming, algorithms for measuring traffic conditions near saturation.
Keywords: Neuro-fuzzy networks, computer vision, Fuzzy systems, intelligent transportation system.
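The kind of fuzzy evaluation mentioned in the last sentence, deciding from density and speed memberships whether the slower, more accurate algorithm is warranted, could look like the toy example below; the membership break-points and the single rule are invented for illustration, not taken from the paper.

```python
# Toy sketch of a fuzzy saturation check: triangular memberships for density and
# speed feed one rule deciding when to switch to the more accurate algorithm.
def tri(x, a, b, c):
    """Triangular membership function peaking at b on the interval [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def near_saturation(density_veh_per_km, mean_speed_kmh):
    high_density = tri(density_veh_per_km, 40, 90, 140)
    low_speed = tri(mean_speed_kmh, -1, 0, 30)
    # Rule: IF density is high AND speed is low THEN traffic is near saturation.
    return min(high_density, low_speed)

degree = near_saturation(density_veh_per_km=80, mean_speed_kmh=12)
print(f"degree of 'near saturation': {degree:.2f}",
      "-> use accurate algorithm" if degree > 0.5 else "-> fast neuro-fuzzy estimate")
```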
8583 Hand Gesture Recognition Based on Combined Features Extraction
Authors: Mahmoud Elmezain, Ayoub Al-Hamadi, Bernd Michaelis
Abstract:
Hand gesture recognition is an active area of research in the vision community, mainly for the purposes of sign language recognition and human-computer interaction. In this paper, we propose a system to recognize alphabet characters (A-Z) and numbers (0-9) in real time from stereo color image sequences using Hidden Markov Models (HMMs). Our system is based on three main stages: automatic segmentation and preprocessing of the hand regions, feature extraction, and classification. In the automatic segmentation and preprocessing stage, color and a 3D depth map are used to detect hands, and the hand trajectory is then obtained using the mean-shift algorithm and a Kalman filter. In the feature extraction stage, 3D combined features of location, orientation and velocity with respect to Cartesian coordinate systems are used. Then, k-means clustering is employed to form the HMM codewords. In the final classification stage, the Baum-Welch algorithm is used to fully train the HMM parameters. The gestures for alphabet characters and numbers are recognized using a left-right banded model in conjunction with the Viterbi algorithm. Experimental results demonstrate that our system can successfully recognize hand gestures with a 98.33% recognition rate.
Keywords: Gesture Recognition, Computer Vision & Image Processing, Pattern Recognition.
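The decoding step mentioned above, the Viterbi algorithm over a discrete-observation left-right banded HMM, can be written compactly in NumPy; the toy transition and emission values below are placeholders, not Baum-Welch-trained parameters.

```python
# Minimal Viterbi decoder over a discrete-observation HMM. Transition/emission
# values are toy numbers for a small left-right banded model, not trained ones.
import numpy as np

def viterbi(obs, pi, A, B):
    """Most likely state path for observation indices `obs` (log-domain)."""
    T, N = len(obs), len(pi)
    logd = np.log(pi) + np.log(B[:, obs[0]])       # delta for t = 0
    back = np.zeros((T, N), dtype=int)
    for t in range(1, T):
        scores = logd[:, None] + np.log(A)         # scores[i, j]: end in j via i
        back[t] = scores.argmax(axis=0)
        logd = scores.max(axis=0) + np.log(B[:, obs[t]])
    path = [int(logd.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Toy 3-state left-right banded model with 4 possible codewords.
pi = np.array([1.0, 1e-12, 1e-12])
A  = np.array([[0.6, 0.4, 1e-12],
               [1e-12, 0.6, 0.4],
               [1e-12, 1e-12, 1.0]])
B  = np.array([[0.7, 0.1, 0.1, 0.1],
               [0.1, 0.7, 0.1, 0.1],
               [0.1, 0.1, 0.4, 0.4]])
print(viterbi([0, 0, 1, 2, 3], pi, A, B))          # most likely hidden-state path
```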
8582 Texture Based Weed Detection Using Multi Resolution Combined Statistical and Spatial Frequency (MRCSF)
Authors: R.S.Sabeenian, V.Palanisamy
Abstract:
Texture classification is a popular and rapidly growing technique in the field of texture analysis. Textures, i.e. repeated patterns, have different frequency components along different orientations. Our work is based on texture classification and its applications, which span various fields such as medical image classification, computer vision, remote sensing, agriculture, and the textile industry. Weed control has a major effect on agriculture. A large amount of herbicide is used for controlling weeds in agricultural fields, lawns, golf courses, sports fields, etc. Random spraying of herbicides does not meet the exact requirement of the field: certain areas in the field have more weed patches than estimated. So we need a visual system that can discriminate weeds from the field image, which will reduce or even eliminate the amount of herbicide used. This would allow farmers to not use any herbicides or to apply them only where they are needed. A machine vision precision automated weed control system could reduce the usage of chemicals in crop fields. In this paper, an intelligent system for an automatic weeding strategy, Multi Resolution Combined Statistical and Spatial Frequency (MRCSF), is used to discriminate weeds from crops and to classify them as narrow, little or broad weeds.
Keywords: Crop weed discrimination, MRCSF, MRFM, Weed detection, Spatial Frequency.
8581 Enhanced Traffic Light Detection Method Using Geometry Information
Authors: Changhwan Choi, Yongwan Park
Abstract:
In this paper, we propose a method that allows faster and more accurate detection of traffic lights by a vision sensor during driving. DGPS is used to obtain the physical location of a traffic light, so that only the traffic light area at that location is extracted from the image of the vision sensor to ascertain whether the signal is in operation and to determine its form. This method can solve the problem in existing research where low visibility at night or reflection under bright light makes it difficult to recognize the form of the traffic light, thus making driving unstable. We compared our success rate of traffic light recognition in day and night road environments. Compared to previous research, the method showed similar performance during the day but a 50% improvement at night.
Keywords: Traffic light, Intelligent vehicle, Night, Detection, DGPS (Differential Global Positioning System).
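The geometric idea, projecting the DGPS-derived traffic light position into the image and restricting the search to a small window around it, can be sketched with a pinhole camera model; the intrinsics, extrinsics and example coordinates below are placeholder values, not the paper's calibration.

```python
# Sketch: project the traffic light's known world position (from DGPS/map data)
# into the camera image with a pinhole model, then crop a small region of
# interest around it. All calibration values here are placeholders.
import numpy as np

K = np.array([[800.0, 0.0, 640.0],      # assumed intrinsics (fx, fy, cx, cy)
              [0.0, 800.0, 360.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)                           # assumed camera orientation in vehicle frame
t = np.zeros(3)                         # assumed camera position offset

def project_to_pixel(point_vehicle):
    """Project a 3D point (metres, camera-aligned frame) to pixel coordinates."""
    p_cam = R @ point_vehicle + t
    u, v, w = K @ p_cam
    return int(u / w), int(v / w)

def roi_around(u, v, half=40, width=1280, height=720):
    """Clamp a (2*half)x(2*half) crop window to the image bounds."""
    return (max(u - half, 0), max(v - half, 0),
            min(u + half, width), min(v + half, height))

# Example: light 30 m ahead, 2 m to the left, 5 m above the camera.
u, v = project_to_pixel(np.array([-2.0, -5.0, 30.0]))
print("search only this window for the light:", roi_around(u, v))
```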
8580 Optical 3D-Surface Reconstruction of Weak Textured Objects Based on an Approach of Disparity Stereo Inspection
Authors: Thomas Kerstein, Martin Laurowski, Philipp Klein, Michael Weyrich, Hubert Roth, Jürgen Wahrburg
Abstract:
Optical 3D measurement of objects is meaningful in numerous industrial applications. In many cases, shape acquisition of weakly textured objects is essential. Examples are repetition parts made of plastic or ceramic, such as housing parts or ceramic bottles, as well as agricultural products like tubers. These parts are often conveyed in a wobbling way during automated optical inspection, so conventional 3D shape acquisition methods like laser scanning might fail. In this paper, a novel approach for acquiring the 3D shape of weakly textured and moving objects is presented. To facilitate such measurements, an active stereo vision system with structured light is proposed. The system consists of multiple camera pairs and auxiliary laser pattern generators. It performs the shape acquisition within one shot and is beneficial for rapid inspection tasks. An experimental setup including hardware and software has been developed and implemented.
Keywords: Automated optical inspection, depth from structured light, stereo vision, surface reconstruction.
8579 Analysis and Measuring Surface Roughness of Nonwovens Using Machine Vision Method
Authors: Dariush Semnani, Javad Yekrang, Hossein Ghayoor
Abstract:
The Kawabata Evaluation System (KES) measures the friction properties of textiles and fabrics, but its output is limited to the surface friction factor of the fabric and no other data are generated; this research was therefore conducted to gain information about surface roughness in relation to the surface friction factor. To assess the roughness properties of light nonwovens, a 3-dimensional model of a surface with regular sinusoidal waves through it was simulated as an ideal surface. A new factor, the Surface Roughness Factor, was defined by comparing the roughness properties of the simulated surface and real specimens. The relation between the proposed factor and the friction factor of the specimens was analyzed by regression, and the results showed a significant correlation between them. It can be inferred that the newly presented factor can be used as an acceptable criterion for evaluating the roughness properties of light nonwoven fabrics.
Keywords: Surface roughness, Nonwoven, Machine vision, Image processing.
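The notion of an ideal sinusoidal reference surface can be illustrated with a simple simulation and an RMS height deviation; note that the paper's Surface Roughness Factor is defined by comparison with real specimens and is not reproduced here, and the amplitude and units below are assumptions.

```python
# Illustrative sketch: simulate an ideal sinusoidal surface and compute a simple
# RMS roughness value from it. The paper's Surface Roughness Factor itself,
# defined against real specimens, is not reproduced here.
import numpy as np

nx, ny = 200, 200
x = np.linspace(0, 4 * np.pi, nx)
y = np.linspace(0, 4 * np.pi, ny)
X, Y = np.meshgrid(x, y)

amplitude = 0.05                                  # assumed units: millimetres
Z = amplitude * np.sin(X) * np.sin(Y)             # regular sinusoidal height map

# RMS roughness (Sq): root-mean-square deviation from the mean surface height.
Sq = np.sqrt(np.mean((Z - Z.mean()) ** 2))
print(f"simulated ideal surface RMS roughness: {Sq:.4f} mm")
```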
8578 LINUX Cluster Possibilities in 3-D PHOTO Quality Imaging and Animation
Authors: Arjun Jain, Himanshu Agrawal, Nalini Vasudevan
Abstract:
In this paper we present the PC cluster built at R.V. College of Engineering (with great help from the Department of Computer Science and Electrical Engineering). The structure of the cluster is described, and its performance is evaluated by rendering complex 3D Persistence of Vision (POV) images with the ray-tracing algorithm. Here, we propose a novel method to render such images in a distributed manner on a low-cost, scalable cluster.
Keywords: PC cluster, parallel computations, ray tracing, persistence of vision, rendering.