Search results for: Computer Vision System Toolbox.
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9336

9336 Optimizing Machine Vision System Setup Accuracy by Six-Sigma DMAIC Approach

Authors: Joseph C. Chen

Abstract:

A machine vision system provides automatic inspection that can reduce manufacturing costs considerably. However, few principles have been established for optimizing a machine vision system so that it functions more accurately in industrial practice; most existing design techniques for improving its accuracy are complicated and impractical. This paper discusses implementing the Six Sigma Define, Measure, Analyze, Improve, and Control (DMAIC) approach to optimize the setup parameters of a machine vision system when it is used as a direct measurement technique. A case study shows how the Six Sigma DMAIC methodology was put into use.
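Process capability, named in the keywords below, is a standard Six Sigma metric; as a reminder (a textbook formula, not a result from this paper), the capability index Cpk compares the gap between the process mean and the nearer specification limit to three standard deviations. The measurement values and limits in this sketch are purely illustrative.

```python
import statistics

# Hypothetical setup-accuracy measurements (mm) and specification limits; illustrative only.
measurements = [10.02, 9.98, 10.05, 9.97, 10.01, 10.03, 9.99, 10.00]
usl, lsl = 10.10, 9.90   # upper / lower specification limits

mu = statistics.mean(measurements)
sigma = statistics.stdev(measurements)   # sample standard deviation

# Cpk = min(USL - mu, mu - LSL) / (3 * sigma); values >= 1.33 are a common target.
cpk = min(usl - mu, mu - lsl) / (3 * sigma)
print(f"mean = {mu:.3f}, sigma = {sigma:.4f}, Cpk = {cpk:.2f}")
```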

Keywords: DMAIC, machine vision system, process capability, Taguchi parameter design.

9335 Stereo Motion Tracking

Authors: Yudhajit Datta, Jonathan Bandi, Ankit Sethia, Hamsi Iyer

Abstract:

Motion tracking and stereo vision are complicated, albeit well-understood, problems in computer vision. Existing software packages that combine the two approaches to perform stereo motion tracking typically employ complicated and computationally expensive procedures. The purpose of this study is to create a simple and effective solution that combines the two approaches. The study explores a strategy to combine two techniques: two-dimensional motion tracking using a Kalman filter, and depth estimation using stereo vision. In conventional approaches, objects in the scene of interest are observed using a single camera. For stereo motion tracking, however, the scene of interest is observed using video feeds from two calibrated cameras. From the two simultaneous measurements, the depth of the object from the plane containing the cameras is calculated. The approach attempts to capture the entire three-dimensional spatial information of each object in the scene and represent it through a software estimator object. At discrete intervals, the estimator tracks object motion in the plane parallel to the plane containing the cameras and updates the object's perpendicular distance from that plane as its depth. The ability to efficiently track the motion of objects in three-dimensional space using a simplified approach could prove to be an indispensable tool in a variety of surveillance scenarios, from high-security scenes such as bank vaults, prisons, or other detention facilities to low-cost applications in supermarkets and car parking lots.
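For readers unfamiliar with the depth step described above, the standard relation for a rectified, calibrated stereo pair is Z = f·B/d, where f is the focal length, B the baseline, and d the disparity between the two views. The sketch below is a minimal illustration of that relation, not the authors' implementation; the calibration values and pixel coordinates are assumed.

```python
# Minimal sketch: depth of a tracked object from a rectified, calibrated stereo pair.
F_PX = 800.0        # focal length in pixels (assumed; obtained from calibration in practice)
BASELINE_M = 0.12   # distance between the two camera centres in metres (assumed)

def depth_from_disparity(x_left: float, x_right: float) -> float:
    """Z = f * B / d for a rectified pair; x_* are the object's column coordinates."""
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("expected positive disparity for an object in front of the rig")
    return F_PX * BASELINE_M / disparity

# Example: the same tracked centroid seen at column 412 in the left view and 396 in the right.
print(f"depth ~ {depth_from_disparity(412.0, 396.0):.2f} m")
```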

Keywords: Kalman Filter, Stereo Vision, Motion Tracking, Matlab, Object Tracking, Camera Calibration, Computer Vision System Toolbox.

9334 Development of a Computer Vision System for the Blind and Visually Impaired Person

Authors: Roselyn A. Maaño

Abstract:

Eyes are an essential and conspicuous organ of the human body. Human eyes are outward and inward portals of the body that allow us to see the outside world and provide glimpses into one's inner thoughts and feelings. Blindness and visual impairment may result from eye-related disease, trauma, or congenital or degenerative conditions that cannot be corrected by conventional means. This study emphasizes innovative tools that serve as an aid to blind and visually impaired (VI) individuals. The researchers fabricated a prototype that utilizes the Microsoft Kinect for Windows and an Arduino microcontroller board. The prototype facilitates advanced gesture recognition, voice recognition, obstacle detection, and indoor environment navigation. Open Source Computer Vision (OpenCV) performs the image analysis and gesture tracking needed to transform Kinect data into the desired output. A computer vision technology device provides greater accessibility for those with vision impairments.

Keywords: Algorithms, Blind, Computer Vision, Embedded Systems, Image Analysis.

9333 Computer Vision Applied to Flower, Fruit and Vegetable Processing

Authors: Luis Gracia, Carlos Perez-Vidal, Carlos Gracia

Abstract:

This paper presents the theoretical background and the real implementation of an automated computer system that introduces machine vision into flower, fruit, and vegetable processing for collection, cutting, packaging, classification, and fumigation tasks. The considerations and implementation issues presented in this work can be applied to a wide range of varieties of flowers, fruits, and vegetables, although some of them are especially relevant due to the large number of units that are manipulated and processed around the world each year. The computer vision algorithms developed in this work are shown in detail and can be easily extended to other applications. Special attention is given to electromagnetic compatibility in order to avoid noisy images. Furthermore, real experimentation has been carried out in order to validate the developed application. In particular, the tests show that the method is robust and achieves a high success rate in object characterization.

Keywords: Image processing, Vision system, Automation

9332 Machine Vision System for Automatic Weeding Strategy in Oil Palm Plantation using Image Filtering Technique

Authors: Kamarul Hawari Ghazali, Mohd. Marzuki Mustafa, Aini Hussain

Abstract:

Machine vision is an application of computer vision that automates conventional work in industry, manufacturing, or other fields. Nowadays, the agriculture industry has embarked on research into implementing engineering technology in farming activities. One precision farming activity that involves a machine vision system is the automatic weeding strategy. An automatic weeding strategy in oil palm plantations could minimize the volume of herbicides sprayed onto the fields. This paper discusses an automatic weeding strategy in an oil palm plantation using a machine vision system for the detection and differential spraying of weeds. The implementation of the vision system involved the use of image processing techniques to analyze weed images in order to recognize and distinguish weed types. An image filtering technique was used to process the images, together with a feature extraction method to classify the weed images by type. As a result, the image processing technique delivers promising classification results for implementation in a machine vision system for an automated weeding strategy.

Keywords: Machine vision, Automatic Weeding Strategy, filter, feature extraction

9331 A Stereo Image Processing System for Visually Impaired

Authors: G. Balakrishnan, G. Sainarayanan, R. Nagarajan, Sazali Yaacob

Abstract:

This paper presents a review of vision-aided systems and proposes an approach for visual rehabilitation using stereo vision technology. The proposed system utilizes stereo vision, an image processing methodology, and a sonification procedure to support blind navigation. The developed system includes a wearable computer, stereo cameras as the vision sensor, and stereo earphones, all moulded into a helmet. The image of the scene in front of the visually handicapped user is captured by the vision sensors. The captured images are processed to enhance the important features in the scene ahead for navigation assistance. The image processing is designed as a model of human vision, identifying the obstacles and their depth information. The processed image is mapped onto musical stereo sound for the blind user's understanding of the scene in front. The developed method has been tested in indoor and outdoor environments, and the proposed image processing methodology is found to be effective for object identification.
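The sonification step (mapping scene depth and position to stereo musical tones) could be pictured roughly as in the sketch below. The pitch range, panning law, and sample rate are assumptions for illustration and are not the mapping used by the authors.

```python
import numpy as np

SAMPLE_RATE = 44100  # Hz (assumed)

def tone_for_object(distance_m: float, x_norm: float, duration: float = 0.2) -> np.ndarray:
    """Stereo tone for one detected object: nearer -> higher pitch, x_norm in [0, 1] -> pan."""
    # Map distance (clipped to 0.5 m .. 5 m) onto a pitch range of 880 Hz down to 220 Hz.
    d = float(np.clip(distance_m, 0.5, 5.0))
    freq = 880.0 - (d - 0.5) / 4.5 * (880.0 - 220.0)
    t = np.linspace(0.0, duration, int(SAMPLE_RATE * duration), endpoint=False)
    mono = 0.3 * np.sin(2.0 * np.pi * freq * t)
    # Simple linear panning: x_norm = 0 is fully left, 1 is fully right.
    left, right = mono * (1.0 - x_norm), mono * x_norm
    return np.stack([left, right], axis=1)   # (samples, 2) ready for a stereo audio sink

stereo = tone_for_object(distance_m=1.2, x_norm=0.8)   # a close object to the listener's right
```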

Keywords: Blind navigation, stereo vision, image processing, object preference, music tones.

9330 The Role of Synthetic Data in Aerial Object Detection

Authors: Ava Dodd, Jonathan Adams

Abstract:

The purpose of this study is to explore the characteristics of developing a machine learning application using synthetic data. The study is structured around developing the application through to deployment of the computer vision model. The findings discuss the realities of attempting to develop a computer vision model for a practical purpose and detail the processes, tools, and techniques that were used to meet accuracy requirements. The research reveals that synthetic data represent another variable that can be adjusted to improve the performance of a computer vision model. Further, a suite of tools and tuning recommendations is provided.

Keywords: computer vision, machine learning, synthetic data, YOLOv4

9329 Vision Based People Tracking System

Authors: Boukerch Haroun, Luo Qing Sheng, Li Hua Shi, Boukraa Sebti

Abstract:

In this paper we present the design and implementation of a target tracking system where the target is a moving person in a video sequence. The system can be applied easily as a vision system for a mobile robot. The system is composed of two major parts. The first is the detection of the person in the video frame using an SVM classifier based on HOG descriptors. The second is the tracking of the moving person, done using a combination of the Kalman filter and a modified version of the Camshift tracking algorithm that adds a target motion feature to the color feature. The experimental results show that the new algorithm outperforms the traditional Camshift algorithm in robustness and in cases of occlusion.
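A minimal constant-velocity Kalman filter for the 2D tracking part might look like the sketch below. This is the textbook predict/update cycle, not the authors' combined Kalman/Camshift scheme, and the frame rate and noise covariances are placeholders.

```python
import numpy as np

class ConstantVelocityKF:
    """Tracks [x, y, vx, vy] from noisy image-plane measurements [x, y]."""

    def __init__(self, dt=1.0 / 30.0):             # frame period, assumed 30 fps
        self.F = np.array([[1, 0, dt, 0],
                           [0, 1, 0, dt],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], float)
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], float)
        self.Q = np.eye(4) * 1e-2                   # process noise (placeholder)
        self.R = np.eye(2) * 4.0                    # measurement noise, px^2 (placeholder)
        self.x = np.zeros(4)
        self.P = np.eye(4) * 100.0

    def step(self, z):
        """One predict/update cycle given the detector's centroid z = [u, v]."""
        self.x = self.F @ self.x                    # predict
        self.P = self.F @ self.P @ self.F.T + self.Q
        S = self.H @ self.P @ self.H.T + self.R     # update
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (np.asarray(z, float) - self.H @ self.x)
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]                           # filtered position

kf = ConstantVelocityKF()
print(kf.step([320.0, 240.0]))
```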

Keywords: Camshift Algorithm, Computer Vision, Kalman Filter, Object tracking.

9328 Decimation Filter Design Toolbox for Multi-Standard Wireless Transceivers using MATLAB

Authors: Shahana T. K., Babita R. Jose, K. Poulose Jacob, Sreela Sasi

Abstract:

The demand for new telecommunication services requiring higher capacities, data rates, and different operating modes has motivated the development of new-generation multi-standard wireless transceivers. A multi-standard design often involves extensive system-level analysis and architectural partitioning, typically requiring extensive calculations. In this research, a decimation filter design tool for wireless communication standards consisting of GSM, WCDMA, WLANa, WLANb, WLANg, and WiMAX is developed in MATLAB® using the GUIDE environment for visual analysis. The user can select a required wireless communication standard and obtain the corresponding multistage decimation filter implementation using this toolbox. The toolbox helps the user or design engineer perform a quick design and analysis of decimation filters for multiple standards without carrying out the extensive calculations the underlying methods require.
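As a rough illustration of multistage decimation in general (not of this toolbox's design procedure), a large overall decimation factor is normally split into cascaded stages, each preceded by its own anti-aliasing filter. The sampling rate, test tone, and stage factors below are arbitrary.

```python
import numpy as np
from scipy.signal import decimate

FS = 64_000_000                       # assumed oversampled input rate (Hz)
t = np.arange(200_000) / FS
x = np.sin(2 * np.pi * 100e3 * t)     # 100 kHz test tone

# Overall factor 64 split as 8 x 4 x 2; decimate() filters before each downsampling step.
y = x
for q in (8, 4, 2):
    y = decimate(y, q, ftype="fir", zero_phase=True)

print(f"{len(x)} samples at {FS} Hz -> {len(y)} samples at {FS // 64} Hz")
```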

Keywords: Decimation filter, MATLAB® toolbox, Multistandard transceivers, Sigma-delta A/D converter.

9327 Hand Gesture Recognition using Blob Detection for Immersive Projection Display System

Authors: Hasup Lee, Yoshisuke Tateyama, Tetsuro Ogi

Abstract:

We developed a vision interface framework for an immersive projection system, CAVE, in the virtual reality research field using hand gesture recognition with computer vision techniques. A background image is subtracted from the current image frame of a webcam and the color space of the image is converted into HSV space. Then we mask skin regions using a skin color range threshold and apply a noise reduction operation. We make blobs from the image and gestures are recognized using these blobs. Using our hand gesture recognition, we could implement an effective interface for CAVE without bothersome devices.
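The pipeline just described (background subtraction, skin-color thresholding in HSV, noise reduction, blob extraction) might be sketched as below. The HSV skin range, motion threshold, and minimum blob area are assumptions, not the authors' calibrated values.

```python
import numpy as np
from scipy import ndimage

def skin_blob_centroids(frame_hsv, background_hsv,
                        skin_lo=(0, 40, 60), skin_hi=(25, 255, 255), min_area=200):
    """Return centroids of moving skin-coloured blobs; inputs are HxWx3 uint8 HSV images."""
    # Background subtraction on the value channel (simplified).
    moving = np.abs(frame_hsv[..., 2].astype(int) - background_hsv[..., 2].astype(int)) > 25
    # Skin-colour range threshold in HSV (assumed range).
    lo, hi = np.array(skin_lo), np.array(skin_hi)
    skin = np.all((frame_hsv >= lo) & (frame_hsv <= hi), axis=-1)
    mask = moving & skin
    # Noise reduction: label connected regions and keep only reasonably large blobs.
    labels, n = ndimage.label(mask)
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    keep = [i + 1 for i, s in enumerate(sizes) if s >= min_area]
    return ndimage.center_of_mass(mask, labels, keep)   # (row, col) blob centres
```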

Keywords: CAVE, Computer Vision, Gesture Recognition, Virtual Reality.

9326 Partial 3D Reconstruction using Evolutionary Algorithms

Authors: Mónica Pérez-Meza, Rodrigo Montúfar-Chaveznava

Abstract:

When reconstructing a scenario, it is necessary to know the structure of the elements present in the scene in order to interpret it. In this work we link 3D scene reconstruction to evolutionary algorithms through stereo vision theory. We consider stereo vision a method that reconstructs a scene using only a pair of images of the scene and some computation. Through several images of a scene, captured from different positions, stereo vision can give us an idea of the three-dimensional characteristics of the world. Stereo vision usually requires two cameras, in analogy to the mammalian vision system. In this work we employ only one camera, which is translated along a path, capturing images at regular distance intervals. As we cannot perform all the computations required for an exhaustive reconstruction, we employ an evolutionary algorithm to partially reconstruct the scene in real time. The algorithm employed is the fly algorithm, which uses "flies" to reconstruct the principal characteristics of the world following certain evolutionary rules.
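A toy version of the fly-algorithm idea (a population of 3D points evolved toward photo-consistent positions in two views) is sketched below. The projection model, single-pixel fitness, scene bounds, and mutation scheme are simplified assumptions and do not reproduce the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def project(P, X):
    """Pinhole projection of 3D points X (N, 3) with a 3x4 camera matrix P -> pixel coords (N, 2)."""
    Xh = np.hstack([X, np.ones((len(X), 1))])
    x = Xh @ P.T
    return x[:, :2] / x[:, 2:3]

def fitness(X, img_a, img_b, P_a, P_b):
    """Photo-consistency: a fly scores higher when its two projections see similar grey values."""
    ua, ub = project(P_a, X).astype(int), project(P_b, X).astype(int)
    h, w = img_a.shape
    ok = ((ua >= 0).all(1) & (ua[:, 0] < w) & (ua[:, 1] < h)
          & (ub >= 0).all(1) & (ub[:, 0] < w) & (ub[:, 1] < h))
    score = np.full(len(X), -1e9)
    diff = np.abs(img_a[ua[ok, 1], ua[ok, 0]].astype(float)
                  - img_b[ub[ok, 1], ub[ok, 0]].astype(float))
    score[ok] = -diff
    return score

def evolve(img_a, img_b, P_a, P_b, n_flies=500, generations=30, sigma=0.05):
    """Evolve a population of 3D 'flies'; survivors cluster near photo-consistent surfaces."""
    flies = rng.uniform([-1.0, -1.0, 1.0], [1.0, 1.0, 4.0], size=(n_flies, 3))  # assumed bounds
    for _ in range(generations):
        f = fitness(flies, img_a, img_b, P_a, P_b)
        keep = flies[np.argsort(f)[n_flies // 2:]]             # keep the better half
        children = keep + rng.normal(0.0, sigma, keep.shape)   # Gaussian mutation
        flies = np.vstack([keep, children])
    return flies
```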

Keywords: 3D Reconstruction, Computer Vision, Evolutionary Algorithms, Stereo Vision.

9325 Web-Based Architecture of a System for Design Assessment of Night Vision Devices

Authors: Daniela I. Borissova, Ivan C. Mustakerov, Evgeni D. Bantutov

Abstract:

Nowadays night vision devices are widely used in both military and civil applications. The variety of night vision applications requires a variety of night vision device designs. A web-based architecture of a software system for design assessment prior to the production of night vision devices is developed. The proposed architecture of the web-based system is based on the application of a mathematical model for the design of night vision devices. An algorithm with two components, one for iterative design and one for intelligent design, is developed and integrated into the system architecture. The iterative component suggests compatible module combinations to choose from. The intelligent component provides compatible combinations of modules satisfying given user requirements on device parameters. The proposed web-based architecture of a system for design assessment of night vision devices is tested via a prototype of the system. The testing showed the applicability of both the iterative and the intelligent components of the algorithm.
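The intelligent component's behaviour (filtering compatible module combinations against user requirements on device parameters) can be pictured with a small sketch like the one below. The module catalogues, parameter names, and requirement thresholds are entirely hypothetical.

```python
from itertools import product

# Hypothetical module catalogues: (name, parameters) pairs.
objectives = [("OBJ-50", {"focal_mm": 50}), ("OBJ-80", {"focal_mm": 80})]
tubes = [("T-GEN2", {"gain": 5000}), ("T-GEN3", {"gain": 10000})]
oculars = [("OC-25", {"magnification": 4}), ("OC-18", {"magnification": 6})]

def meets_requirements(objective, tube, ocular, min_gain=8000, min_magnification=5):
    """User requirements on device parameters (illustrative thresholds only)."""
    return tube[1]["gain"] >= min_gain and ocular[1]["magnification"] >= min_magnification

compatible = [(o[0], t[0], c[0])
              for o, t, c in product(objectives, tubes, oculars)
              if meets_requirements(o, t, c)]
print(compatible)   # combinations offered to the user
```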

Keywords: Night vision devices, design modeling, software architecture, web-based system.

9324 Piezoelectric Transducer Modeling: with System Identification (SI) Method

Authors: Nora Taghavi, Ali Sadr

Abstract:

System identification is the process of creating models of a dynamic process from input-output signals. The aim of system identification can be stated as "to find a model with adjustable parameters and then to adjust them so that the predicted output matches the measured output". This paper presents a method of modeling and simulation with system identification to achieve the maximum fit for the transfer function. First, using an optimized KLM equivalent circuit for a PVDF piezoelectric transducer and applying different inputs, including a sinusoid, a step, and a sum of sinusoids, we obtain the corresponding outputs. Then, using the System Identification Toolbox in MATLAB, we estimate the transfer function from these inputs and outputs. Finally, we compare the fit of the transfer functions obtained from the ARX, OE (Output-Error), and BJ (Box-Jenkins) models in the System Identification Toolbox with the original transfer function from the KLM equivalent circuit.
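For readers unfamiliar with the ARX structure mentioned above, the model y(t) + a1·y(t-1) + ... + a_na·y(t-na) = b1·u(t-1) + ... + b_nb·u(t-nb) + e(t) can be estimated by ordinary least squares. The sketch below is a generic NumPy version of that idea, not the MATLAB toolbox's algorithm, and the second-order system generating the data is made up.

```python
import numpy as np

rng = np.random.default_rng(1)

# Made-up 2nd-order system standing in for the measured input/output data:
# y(t) = 1.5*y(t-1) - 0.7*y(t-2) + 0.5*u(t-1) + 0.25*u(t-2) + noise
u = rng.standard_normal(500)
y = np.zeros(500)
for t in range(2, 500):
    y[t] = 1.5 * y[t - 1] - 0.7 * y[t - 2] + 0.5 * u[t - 1] + 0.25 * u[t - 2] \
           + 0.01 * rng.standard_normal()

# ARX(na=2, nb=2): build the regressor matrix and solve for [a1, a2, b1, b2] by least squares.
rows = range(2, len(y))
Phi = np.array([[-y[t - 1], -y[t - 2], u[t - 1], u[t - 2]] for t in rows])
theta, *_ = np.linalg.lstsq(Phi, y[2:], rcond=None)
print("estimated [a1, a2, b1, b2]:", np.round(theta, 3))   # expect ~[-1.5, 0.7, 0.5, 0.25]
```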

Keywords: PVDF modeling, ARX, BJ (Box-Jenkins), OE (Output-Error), System Identification.

9323 A 2D-3D Hybrid Vision System for Robotic Manipulation of Randomly Oriented Objects

Authors: Moulay A. Akhloufi

Abstract:

This paper presents a new vision technique for the robotic manipulation of randomly oriented objects in industrial applications. The proposed approach uses 2D and 3D vision to efficiently extract the 3D pose of an object in the presence of multiple randomly positioned objects. 2D vision permits quick selection of the objects of interest for 3D processing with a new modified ICP algorithm (FaR-ICP), thus significantly reducing the processing time. The extracted 3D pose is then sent to the robot manipulator for picking. The tests show that the proposed system achieves high performance.
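The classical ICP step underlying approaches like this one pairs each source point with its nearest neighbour in the target cloud and then solves for the rigid transform in closed form via SVD. The sketch below shows only that textbook step, not the authors' modified FaR-ICP.

```python
import numpy as np

def icp_step(P, Q):
    """One ICP iteration aligning source points P (N, 3) toward target points Q (M, 3).
    Returns R, t such that R @ p + t moves each p toward its matched target point."""
    # 1. Nearest-neighbour correspondences (brute force, for clarity only).
    d2 = ((P[:, None, :] - Q[None, :, :]) ** 2).sum(-1)
    matched = Q[d2.argmin(axis=1)]
    # 2. Closed-form rigid transform (Kabsch / SVD) between P and its matches.
    Pc, Qc = P.mean(axis=0), matched.mean(axis=0)
    H = (P - Pc).T @ (matched - Qc)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = Qc - R @ Pc
    return R, t

# Typical use: apply repeatedly, transforming P with (R, t) each time, until the motion is small.
```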

Keywords: 3D vision, Hand-Eye calibration, robot visual servoing, random bin picking.

9322 An Approach for Integration of Industrial Robot with Vision System and Simulation Software

Authors: Ahmed Sh. Khusheef, Ganesh Kothapalli, Majid Tolouei-Rad

Abstract:

The utilization of various sensors has made it possible to extend the capabilities of industrial robots. Among these are vision sensors, which are used to provide visual information to assist robot controllers. This paper presents a method of integrating a vision system and a simulation program with an industrial robot. The vision system is employed to detect a target object and compute its location in the robot environment. The target object's information is then sent to the robot controller via a parallel communication port. The robot controller uses the extracted object information and the simulation program to control the robot arm for approaching, grasping, and relocating the object. This paper presents the technical details of the system components and describes the methodology used for this integration. It also provides a case study to prove the validity of the developed methodology.

Keywords: industrial robot, integration, simulation, vision system

9321 Human Motion Capture: New Innovations in the Field of Computer Vision

Authors: Najm Alotaibi

Abstract:

Human motion capture has become one of the major areas of interest in the field of computer vision. Some of the major application areas that have been rapidly evolving include advanced human interfaces, virtual reality, and security/surveillance systems. This study provides a brief overview of the techniques and applications used for markerless human motion capture, which deals with analyzing human motion in the form of mathematical formulations. The major contribution of this research is that it classifies computer vision based techniques for human motion capture according to a taxonomy and then breaks them down into four systematically different categories: initialization, tracking, pose estimation, and recognition. Detailed descriptions, and the relationships between them, are given for the tracking and pose estimation techniques, and the subcategories of each process are further described. The various hypotheses used by researchers in this domain are surveyed, and the evolution of these techniques is explained. The survey concludes that most researchers have focused on using mathematical body models for markerless motion capture.

Keywords: Human Motion Capture, Computer Vision, Vision based, Tracking.

9320 Usability Evaluation Framework for Computer Vision Based Interfaces

Authors: Muhammad Raza Ali, Tim Morris

Abstract:

Human computer interaction has progressed considerably from the traditional modes of interaction. Vision based interfaces are a revolutionary technology, allowing interaction through human actions and gestures. Researchers have developed numerous accurate techniques; however, with few exceptions, these techniques are not evaluated using standard HCI methods. In this paper we present a comprehensive framework to address this issue. Our evaluation of a computer vision application shows that, in addition to accuracy, it is vital to address human factors.

Keywords: Usability evaluation, cognitive walkthrough, think aloud, gesture recognition.

9319 Vision Based Robot Experiment: Measurement of Path Related Characteristics

Authors: M. H. Korayem, K. Khoshhal, H. Aliakbarpour

Abstract:

In this paper, a vision based system has been used for controlling an industrial 3P Cartesian robot. The vision system recognizes the target and controls the robot by obtaining images from the environment and processing them. In the first stage, images from the environment are converted to grayscale; objects are then separated from noise and identified by thresholding, the candidate objects are stored in different frames, and the main object is recognized. This information is used to control the robot so that it reaches the target. A vision system can be an appropriate tool for measuring the errors of a robot, and here the experimental tests are conducted on a 3P robot. Finally, the international standard ANSI/RIA R15.05-2 is used for evaluating the path-related characteristics of the robot, and experimental tests are carried out to evaluate the performance of the proposed method.

Keywords: Robot, Vision, Experiment, Standard.

9318 Verification of Space System Dynamics Using the MATLAB Identification Toolbox in Space Qualification Test

Authors: Y. V. Kim

Abstract:

This article presents an approach to the functional testing of a Space System (SS), which could be a space vehicle (spacecraft, S/C) and/or its equipment and components (S/C subsystems). This test should finalize the Space Qualification Test (SQT) campaign. It can be considered a generic test, usable for the wide class of SS that, from the point of view of system dynamics and control theory, may be described by ordinary differential equations. The suggested methodology is based on a semi-natural experimental laboratory stand that does not require complicated, precise, and expensive technological control-verification equipment. However, it allows a fully assembled system to be tested during Assembly, Integration and Testing (AIT) activities at the final phase of SQT, involving the system hardware (HW) and software (SW). The test physically activates the system inputs (sensors) and outputs (actuators) and requires recording their signals in real time. The data are then loaded into a laboratory computer, where they are post-processed by the MATLAB/Simulink Identification Toolbox. This allows the system dynamics to be estimated, in the form of its differential equation coefficients, through the verification test and compared with the expected mathematical model, previously verified by mathematical simulation during the design process. The mathematical simulation results presented in the article show that this approach can be applicable and helpful in SQT practice. Further semi-natural experiments should specify detailed requirements for the test laboratory equipment and test procedures.

Keywords: system dynamics, space system ground tests, space qualification, system dynamics identification, satellite attitude control, assembling integration and testing

9317 Paddy/Rice Singulation for Determination of Husking Efficiency and Damage Using Machine Vision

Authors: M. Shaker, S. Minaei, M. H. Khoshtaghaza, A. Banakar, A. Jafari

Abstract:

In this study, a machine vision and singulation system was developed to separate paddy from rice and determine the paddy husking and rice breakage percentages. The machine vision system consists of three main components: an imaging chamber, a digital camera, and a computer equipped with image processing software. The singulation device consists of a kernel holding surface, a motor with a vacuum fan, and a dimmer. To separate paddy from rice in the image, it was necessary to set a threshold. Therefore, images of paddy and rice were sampled and the RGB values of the images were extracted using MATLAB. The mean and standard deviation of the data were then determined. An image processing algorithm was developed in MATLAB to determine paddy/rice separation and the rice breakage and paddy husking percentages using the blue-to-red ratio. Tests showed that a threshold of 0.75 is suitable for separating paddy from rice kernels. Results from the evaluation of the image processing algorithm showed accuracies of 98.36% and 91.81% for the paddy husking and rice breakage percentages, respectively. Analysis also showed that a suction of 45 mmHg to 50 mmHg, yielding 81.3% separation efficiency, is appropriate for operation of the kernel singulation system.
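The classification rule described above (a blue-to-red ratio compared against the 0.75 threshold) could be expressed roughly as in the sketch below. How kernel pixels are segmented from the background, and which side of the threshold corresponds to paddy versus rice, are assumptions here.

```python
import numpy as np

PADDY_THRESHOLD = 0.75   # blue-to-red ratio threshold reported in the abstract

def classify_kernel(rgb_pixels: np.ndarray) -> str:
    """rgb_pixels: (N, 3) array holding the pixels of one segmented kernel."""
    mean_r = rgb_pixels[:, 0].astype(float).mean()
    mean_b = rgb_pixels[:, 2].astype(float).mean()
    ratio = mean_b / max(mean_r, 1e-6)
    # Which side of the threshold is paddy and which is rice is an assumption here.
    return "paddy" if ratio < PADDY_THRESHOLD else "rice"

# Example with a dummy kernel patch.
print(classify_kernel(np.array([[180, 150, 120], [175, 148, 118]], dtype=np.uint8)))
```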

Keywords: Computer vision, rice kernel, husking, breakage.

9316 Non-contact Gaze Tracking with Head Movement Adaptation based on Single Camera

Authors: Ying Huang, Zhiliang Wang, An Ping

Abstract:

With advances in computer vision, non-contact gaze tracking systems are becoming much easier to operate and more comfortable to use, and the technique proposed in this paper is specifically designed to achieve these goals. For convenience of operation, the proposal aims at a system with a simple configuration composed of a fixed wide-angle camera and dual infrared illuminators. Then, to enhance the usability of the single-camera system, a self-adjusting method called the Real-time gaze Tracking Algorithm with head movement Compensation (RTAC) is developed to estimate the gaze direction under natural head movement while simplifying the calibration procedure. In the evaluations, an average accuracy of about 1° is achieved over a working volume of 20×15×15 cm³.

Keywords: computer vision, gaze tracking, human-computer interaction.

9315 A Real-time Computer Vision System for Vehicle Tracking and Collision Detection

Authors: Mustafa Kisa, Fatih Mehmet Botsali

Abstract:

Recent developments in automotive technology are focused on economy, comfort, and safety. Vehicle tracking and collision detection systems are attracting the attention of many investigators concerned with driving safety in the field of automotive mechatronics. In this paper, a vision-based vehicle detection system is presented. The developed system is intended to be used for collision detection and driver alerting. The system uses RGB images captured by a camera in a car driven on the highway. Images captured by the moving camera are used to detect the moving vehicles in the scene, and a vehicle ahead of the camera is detected in daylight conditions. The proposed method detects moving vehicles by subtracting successive images. The height of the vehicle's license plate is determined using a plate recognition algorithm, and the distance to the moving vehicle is calculated from the plate height. The relative speed of the vehicle and the time-to-collision are then calculated using the distances measured in successive images. Results obtained in road tests are discussed in order to validate the proposed method.
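The distance and time-to-collision reasoning above follows the usual pinhole-camera relations: with a known real plate height H and its imaged height h in pixels, the distance is roughly Z = f·H/h, and the time-to-collision is the current distance divided by the closing speed between frames. A minimal sketch, in which the focal length and plate height are assumed values, not figures from the paper:

```python
F_PX = 1200.0            # camera focal length in pixels (assumed)
PLATE_HEIGHT_M = 0.11    # real licence-plate height in metres (assumed; varies by country)

def distance_from_plate(h_px: float) -> float:
    """Pinhole estimate of the distance to the lead vehicle from its plate height in pixels."""
    return F_PX * PLATE_HEIGHT_M / h_px

def time_to_collision(h_prev_px: float, h_curr_px: float, dt: float) -> float:
    """TTC from the plate heights measured in two successive frames taken dt seconds apart."""
    z_prev, z_curr = distance_from_plate(h_prev_px), distance_from_plate(h_curr_px)
    closing_speed = (z_prev - z_curr) / dt       # positive when the gap is shrinking
    return float("inf") if closing_speed <= 0 else z_curr / closing_speed

# Example: the plate grows from 22 px to 24 px between frames captured at 25 fps.
print(f"TTC ~ {time_to_collision(22.0, 24.0, 1.0 / 25.0):.2f} s")
```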

Keywords: Image processing, vehicle tracking, license plate detection, computer vision.

9314 Kinematics and Control System Design of Manipulators for a Humanoid Robot

Authors: S. Parasuraman

Abstract:

In this work, a new approach is proposed to control the manipulators of a humanoid robot. The kinematics of the manipulators, in terms of the joint positions, velocity, acceleration, and torque of each joint, is computed using the Denavit-Hartenberg (D-H) notation. These variables are used to design the manipulator control system proposed in this work. To support the development of a controller, a simulation of the manipulator is designed for the humanoid robot. This simulation is developed using the Virtual Reality Toolbox and Simulink in MATLAB. The Virtual Reality Toolbox in MATLAB provides the interfacing and controls to an environment developed in the Virtual Reality Modeling Language (VRML). Chains of bones are used to represent the robot.
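For reference, the standard D-H convention mentioned above builds each link's homogeneous transform from the four parameters (θ, d, a, α) and chains them to obtain the end-effector pose. A minimal sketch follows, using illustrative link parameters rather than the humanoid's actual D-H table.

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Standard Denavit-Hartenberg homogeneous transform for one link."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(dh_table):
    """Chain the link transforms; dh_table is a list of (theta, d, a, alpha) rows."""
    T = np.eye(4)
    for row in dh_table:
        T = T @ dh_transform(*row)
    return T  # pose of the end effector in the base frame

# Illustrative 2-link planar arm (not the humanoid's real parameters).
print(forward_kinematics([(np.pi / 4, 0.0, 0.3, 0.0), (np.pi / 6, 0.0, 0.25, 0.0)]))
```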

Keywords: Mobile robot, Robot Kinematics, Robot Navigation, MATLAB.

9313 An Evaluation of Neural Network Efficacies for Image Recognition on Edge-AI Computer Vision Platform

Authors: Jie Zhao, Meng Su

Abstract:

Image recognition enables machines such as robots to understand a scene and plays an important role in computer vision applications. Computer vision platforms, as the physical infrastructure supporting neural networks for image recognition, are decisive in determining the performance of different neural networks. In this paper, three different computer vision platforms are investigated: an edge-AI device (Jetson Nano, with 4 GB), a standalone laptop (with an RTX 3000-series GPU, using CUDA), and a web-based service (Google Colab, using a GPU). In the case study, four prominent neural network architectures (AlexNet, VGG16, GoogleNet, and ResNet-34/50) are deployed. Using the public CIFAR-10 dataset, our findings provide a nuanced perspective on optimizing image recognition tasks across edge-AI platforms, offering guidance on selecting appropriate neural network structures to maximize performance under hardware constraints.
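A minimal timing harness in the spirit of this comparison might look like the sketch below, using torchvision's ResNet-18 with untrained weights and a dummy input batch; the batch size, input resolution, and repeat counts are arbitrary choices rather than the benchmark settings used in the paper.

```python
import time
import torch
import torchvision.models as models

device = "cuda" if torch.cuda.is_available() else "cpu"
model = models.resnet18(weights=None).eval().to(device)   # architecture only, untrained weights
x = torch.randn(1, 3, 224, 224, device=device)

with torch.no_grad():
    for _ in range(5):                 # warm-up iterations
        model(x)
    if device == "cuda":
        torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(50):
        model(x)
    if device == "cuda":
        torch.cuda.synchronize()
    elapsed = time.perf_counter() - start

print(f"{device}: mean inference latency {elapsed / 50 * 1000:.1f} ms")
```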

Keywords: AlexNet, VGG, GoogleNet, ResNet, ImageNet, Cifar-10, Edge AI, Jetson Nano, CUDA, GPU.

9312 Double Aperture Camera for High Resolution Measurement

Authors: Venkatesh Bagaria, Nagesh AS, Varun AV

Abstract:

In the domain of machine vision, the measurement of length is done using cameras, where the accuracy is directly proportional to the resolution of the camera and inversely proportional to the size of the object. Since, in a conventional system, most of the pixels are wasted imaging the entire body rather than just the edges, a double aperture system is constructed that focuses on the edges to measure at higher resolution. The paper discusses the complexities involved and how they are mitigated to realize a practical machine vision system.
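The resolution argument can be made concrete with a little arithmetic (the numbers below are invented for illustration): spending a fixed pixel budget on only the edge regions, rather than the whole part, multiplies the pixels available per millimetre.

```python
# Illustrative numbers only.
sensor_width_px = 2000

# Conventional single view: the whole 200 mm part spans the sensor.
full_fov_mm = 200.0
res_full = full_fov_mm / sensor_width_px       # 0.1 mm per pixel

# Double-aperture idea: the same pixel budget images only two 10 mm windows around the edges.
edge_fov_mm = 2 * 10.0
res_edges = edge_fov_mm / sensor_width_px      # 0.01 mm per pixel

print(f"{res_full:.3f} mm/px over the full part vs {res_edges:.3f} mm/px at the edges "
      f"({res_full / res_edges:.0f}x finer)")
```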

Keywords: Machine Vision, double aperture camera, accurate length measurement

9311 Automated Textile Defect Recognition System Using Computer Vision and Artificial Neural Networks

Authors: Atiqul Islam, Shamim Akhter, Tumnun E. Mursalin

Abstract:

Least Developed Countries (LDCs) like Bangladesh, which earns 25% of its revenue from textile exports, need to produce less defective textile to minimize production cost and time. Inspection processes in these industries are mostly manual and time-consuming, and reducing errors in identifying fabric defects requires a more automated and accurate inspection process. Considering this gap, this research implements a textile defect recognizer which uses computer vision methodology in combination with multi-layer neural networks to identify four classes of textile defects. The recognizer, suitable for LDCs, identifies fabric defects at an economical cost and provides a less error-prone inspection system in real time. To generate the input set for the neural network, the recognizer first captures digital fabric images with an image acquisition device and converts the RGB images into binary images through a restoration process and local thresholding techniques. The features of the processed image (the area of the faulty portion, the number of objects in the image, and the sharpness factor of the image) are then fed as the input layer to the neural network, which uses the back-propagation algorithm to compute the weights and generates the desired defect classifications as output.
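The feature-plus-network pipeline described above might be sketched as follows. The feature extraction is a simplified stand-in (defect area, object count, and a crude sharpness proxy), the training data are random placeholders, and scikit-learn's MLP is used in place of the authors' custom back-propagation network.

```python
import numpy as np
from scipy import ndimage
from sklearn.neural_network import MLPClassifier

def fabric_features(binary_img: np.ndarray) -> list:
    """binary_img: 2D bool array, True where thresholding flagged a potential defect."""
    labels, n_objects = ndimage.label(binary_img)
    defect_area = int(binary_img.sum())
    # Crude "sharpness" proxy: density of horizontal transitions in the binary image.
    sharpness = float(np.abs(np.diff(binary_img.astype(int), axis=1)).mean())
    return [defect_area, n_objects, sharpness]

rng = np.random.default_rng(0)
# Placeholder training data: random binary patches and random labels for four defect classes.
train_imgs = [rng.random((32, 32)) > 0.8 for _ in range(40)]
train_labels = rng.integers(0, 4, size=40)

X = np.array([fabric_features(img) for img in train_imgs])
clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
clf.fit(X, train_labels)
print(clf.predict([fabric_features(rng.random((32, 32)) > 0.8)]))
```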

Keywords: Computer vision, image acquisition device, machine vision, multi-layer neural networks.

9310 Machine Vision for the Inspection of Surgical Tasks: Applications to Robotic Surgery Systems

Authors: M. Ovinis, D. Kerr, K. Bouazza-Marouf, M. Vloeberghs

Abstract:

The use of machine vision to inspect the outcome of surgical tasks is investigated, with the aim of incorporating this approach into robotic surgery systems. Machine vision is a non-contact form of inspection, i.e., no part of the vision system is in direct contact with the patient, and it is therefore well suited to surgery, where sterility is an important consideration. As a proof of concept, three primary surgical tasks for a common neurosurgical procedure were inspected using machine vision. Experiments were performed on cadaveric pig heads to simulate the two possible outcomes, i.e., satisfactory or unsatisfactory, for the tasks involved in making a burr hole, namely incision, retraction, and drilling. We identify low-level image features that distinguish the two outcomes and report results that validate our proposed approach. The potential of using machine vision in a surgical environment, and the challenges that must be addressed, are identified and discussed.

Keywords: Visual inspection, machine vision, robotic surgery.

9309 2D and 3D Finite Element Method Packages of CEMTool for Engineering PDE Problems

Authors: Choon Ki Ahn, Jung Hun Park, Wook Hyun Kwon

Abstract:

CEMTool is a command-style design and analysis package for scientific and technological algorithms, built on a matrix-based computation language. In this paper, we present new 2D and 3D finite element method (FEM) packages for CEMTool. We discuss the detailed structures and the important features of the pre-processor, solver, and post-processor of the CEMTool 2D and 3D FEM packages. In contrast to the existing MATLAB PDE Toolbox, our proposed FEM packages can deal with combinations of the reserved words. Also, we can control the mesh in a very effective way. With the introduction of a new mesh generation algorithm and a fast solving technique, our FEM packages can guarantee shorter computation times than the MATLAB PDE Toolbox. Consequently, with our new FEM packages, we can overcome some disadvantages and limitations of the existing MATLAB PDE Toolbox.

Keywords: CEMTool, Finite element method (FEM), Numerical analysis, Partial differential equation (PDE)

9308 FPGA Implement of a Vision Based Lane Departure Warning System

Authors: Yu Ren Lin, Yi Feng Su

Abstract:

Using vision-based solutions in intelligent vehicle applications often requires large amounts of memory to handle the video stream and image processing, which increases the complexity of the hardware and software. In this paper, we present an FPGA implementation of a vision-based lane departure warning system. For each video frame, the gradient of each line is estimated and the lane marks are found. By analyzing the positions of the lane marks, departure of the vehicle is detected in time. This design has been implemented on a Xilinx Spartan-6 FPGA. The lane departure warning system uses 39% of the device's logic resources and none of its memory. The average availability is 92.5%, and the frame rate is more than 30 frames per second (fps).
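The departure test itself, comparing the detected lane-mark positions against the camera's centre line, reduces to a simple check like the sketch below; the image width, threshold, and example positions are arbitrary, and finding the lane-mark x-positions is assumed to have been done already.

```python
IMAGE_WIDTH = 640
DEPARTURE_THRESHOLD = 0.25   # fraction of the lane width (arbitrary)

def departure_warning(left_mark_x: float, right_mark_x: float) -> bool:
    """True if the camera (assumed at the image centre) drifts too far from the lane centre."""
    lane_centre = (left_mark_x + right_mark_x) / 2.0
    lane_width = right_mark_x - left_mark_x
    offset = IMAGE_WIDTH / 2.0 - lane_centre
    return abs(offset) > DEPARTURE_THRESHOLD * lane_width

print(departure_warning(150.0, 480.0))   # roughly centred -> False
print(departure_warning(40.0, 370.0))    # drifted to one side -> True
```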

Keywords: Lane departure warning system, image, FPGA.

9307 Supervisory Fuzzy Learning Control for Underwater Target Tracking

Authors: C.Kia, M.R.Arshad, A.H.Adom, P.A.Wilson

Abstract:

This paper presents recent work on the improvement of a robotic vision-based control strategy for an underwater pipeline tracking system. The study focuses on developing image processing algorithms and a fuzzy inference system for the analysis of the terrain. The main goal is to implement a supervisory fuzzy learning control technique to reduce navigation decision errors caused by the pipeline occlusion problem. The system developed is capable of interpreting underwater images containing an occluded pipeline, the seabed, and other unwanted noise. The algorithm proposed in previous work does not exploit the cooperation between fuzzy controllers, knowledge, and learned data to improve the outputs for underwater pipeline tracking. Computer simulations and prototype simulations demonstrate the effectiveness of this approach. The system's accuracy level is also discussed.
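A minimal Mamdani-style fuzzy inference step for such a tracking decision (triangular membership functions over the pipeline's lateral offset, a three-rule base, centroid defuzzification) is sketched below. The membership shapes, rule base, and output range are illustrative and are not the paper's tuned controller.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

def heading_correction(offset: float) -> float:
    """offset: pipeline's lateral offset in the image, normalised to [-1, 1]."""
    # Fuzzify the input: is the pipeline left of, centred on, or right of the camera axis?
    left = tri(offset, -1.5, -1.0, 0.0)
    centred = tri(offset, -1.0, 0.0, 1.0)
    right = tri(offset, 0.0, 1.0, 1.5)
    # Output universe: heading correction in degrees; steer toward the pipeline to re-centre it.
    u = np.linspace(-30.0, 30.0, 301)
    steer_left = tri(u, -40.0, -20.0, 0.0)    # fired when the pipeline appears to the left
    steer_zero = tri(u, -10.0, 0.0, 10.0)     # fired when the pipeline is centred
    steer_right = tri(u, 0.0, 20.0, 40.0)     # fired when the pipeline appears to the right
    aggregated = np.maximum.reduce([np.minimum(left, steer_left),
                                    np.minimum(centred, steer_zero),
                                    np.minimum(right, steer_right)])
    # Centroid defuzzification.
    return float((u * aggregated).sum() / (aggregated.sum() + 1e-9))

print(heading_correction(-0.6))   # pipeline to the left -> negative (leftward) correction
```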

Keywords: Fuzzy logic, Underwater target tracking, Autonomous underwater vehicles, Artificial intelligence, Simulations, Robot navigation, Vision system.
