Search results for: Vision for graphics.
425 The Conception of Implementation of Vision for European Forensic Science 2020 in Lithuania
Authors: Eglė Bilevičiūtė, Vidmantas Egidijus Kurapka, Snieguolė Matulienė, Sigutė Stankevičiūtė
Abstract:
The Council of the European Union (EU Council) has stressed on several occasions the need for a concerted, comprehensive and effective solution to delinquency problems in EU communities. In the context of establishing a European Forensic Science Area and the development of forensic science infrastructure in Europe, the EU Council believes that forensic science can significantly contribute to the efficiency of law enforcement, crime prevention and combating crimes. Lithuanian scientists have joined forces to implement a project named “Conception of the vision for European Forensic Science 2020 implementation in Lithuania” (the project is funded for the period 1 March 2014 - 31 December 2016) with the objective of creating a conception of implementation of the vision for European Forensic Science 2020 in Lithuania by 1) evaluating the current status of Lithuania’s forensic system and opportunities for its improvement; 2) analysing achievements and knowledge in the investigation of the crimes listed in the conclusions of the EU Council on the vision for European Forensic Science 2020, including the creation of a European Forensic Science Area and the development of forensic science infrastructure in Europe: trafficking in human beings, organised crime and terrorism; 3) analysing the conceptions of criminalistics, which differ among EU member states due to the variety of forensic schools, and finding means for their harmonization. Apart from the conception of implementation of the vision for European Forensic Science 2020 in Lithuania, the Project is expected to suggest provisions that will be relevant to other EU countries as well. Consequently, the presented conception of implementation of the vision for European Forensic Science 2020 in Lithuania could initiate a project for a common vision of European Forensic Science and contribute to the development of the EU as an area of freedom, security and justice. The article presents the main ideas of the conception of the EU Council’s vision for European Forensic Science 2020 and analyses its legal background, as well as the prospects of and challenges for its implementation in Lithuania and the EU.
Keywords: EUROVIFOR, standardization, Vision for European Forensic Science 2020.
424 3D Oil Reservoir Visualisation Using Octree Compression Techniques Utilising Logical Grid Co-Ordinates
Authors: S. Mulholland
Abstract:
Octree compression techniques have been used for several years to compress large three-dimensional data sets into homogeneous regions. This compression technique is ideally suited to datasets which have similar values in clusters. Oil engineers represent reservoirs as a three-dimensional grid where hydrocarbons occur naturally in clusters. This research looks at the efficiency of storing these grids using octree compression techniques where grid cells are broken into active and inactive regions. Initial experiments yielded high compression ratios, as only active leaf nodes and their ancestor (header) nodes are stored as a bitstream in a file on disk. Savings in computational time and memory were possible at decompression, as only active leaf nodes are sent to the graphics card, eliminating the need to reconstruct the original matrix. This results in a more compact vertex table, which can be loaded into the graphics card more quickly and generates shorter refresh delay times.
Keywords: 3D visualisation, compressed vertex tables, octree compression techniques, oil reservoir grids.
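For illustration, here is a minimal sketch of the active-cell octree idea described above, assuming a cubic boolean grid whose side length is a power of two; the bitstream serialization of header nodes used in the actual system is omitted.

```python
# A minimal sketch of octree compression of a reservoir grid: recursively
# subdivide the grid and keep only homogeneous active regions as leaf nodes.
# The cubic, power-of-two grid shape is an assumption of this sketch.
import numpy as np

def compress_octree(active, origin=(0, 0, 0)):
    """Recursively collect homogeneous active regions as (origin, size) leaves."""
    size = active.shape[0]
    if active.all():                  # homogeneous active block -> one leaf node
        return [(origin, size)]
    if not active.any():              # fully inactive block -> store nothing
        return []
    half = size // 2
    leaves = []
    for dx in (0, half):
        for dy in (0, half):
            for dz in (0, half):
                sub = active[dx:dx + half, dy:dy + half, dz:dz + half]
                child_origin = (origin[0] + dx, origin[1] + dy, origin[2] + dz)
                leaves += compress_octree(sub, child_origin)
    return leaves

# Example: an 8x8x8 grid with one active cluster compresses to a single leaf.
grid = np.zeros((8, 8, 8), dtype=bool)
grid[0:4, 0:4, 0:4] = True
print(len(compress_octree(grid)), "active leaf nodes")
```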
423 Computer Vision Applied to Flower, Fruit and Vegetable Processing
Authors: Luis Gracia, Carlos Perez-Vidal, Carlos Gracia
Abstract:
This paper presents the theoretical background and the real implementation of an automated computer system to introduce machine vision in flower, fruit and vegetable processing for recollection, cutting, packaging, classification, or fumigation tasks. The considerations and implementation issues presented in this work can be applied to a wide range of varieties of flowers, fruits and vegetables, although some of them are especially relevant due to the great number of units that are manipulated and processed each year around the world. The computer vision algorithms developed in this work are shown in detail and can easily be extended to other applications. Special attention is given to electromagnetic compatibility in order to avoid noisy images. Furthermore, real experimentation has been carried out in order to validate the developed application. In particular, the tests show that the method has good robustness and a high success percentage in object characterization.
Keywords: Image processing, Vision system, Automation.
422 Dead-Reckoning Error Calibration Using Ceiling-Looking Vision Camera
Authors: Jae-Young Choi, Sung-Gaun Kim
Abstract:
This paper suggests a calibration method to reduce errors occurring due to mobile robot sliding during location estimation using dead reckoning. Due to sliding between the mobile robot's wheels and the road surface while on a free run, location estimation can be erroneous. Sliding especially occurs during cornering of the mobile robot. Therefore, in order to reduce these frequent sliding errors in cornering, we calibrated the mobile robot's heading values using a vision camera and templates of the ceiling.
Keywords: Dead-reckoning, Localization, Odometry, Vision Camera.
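A hypothetical sketch of the idea above: propagate the pose by wheel odometry (dead reckoning) and, whenever the ceiling camera yields a heading estimate, correct the accumulated drift with a simple complementary blend. Variable names and the blend gain are illustrative, not the paper's values.

```python
# Dead-reckoning pose update plus a vision-based heading correction (sketch).
import math

def dead_reckoning_step(x, y, theta, d_left, d_right, wheel_base):
    """Differential-drive pose update from left/right wheel displacements."""
    d_center = (d_left + d_right) / 2.0
    d_theta = (d_right - d_left) / wheel_base
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    theta += d_theta
    return x, y, theta

def correct_heading(theta_odo, theta_vision, gain=0.3):
    """Blend the odometry heading toward the vision-derived heading (gain in [0, 1])."""
    error = math.atan2(math.sin(theta_vision - theta_odo),
                       math.cos(theta_vision - theta_odo))
    return theta_odo + gain * error

x, y, theta = 0.0, 0.0, 0.0
x, y, theta = dead_reckoning_step(x, y, theta, d_left=0.10, d_right=0.12, wheel_base=0.3)
theta = correct_heading(theta, theta_vision=0.05)   # camera says the robot slid in the turn
print(round(x, 3), round(y, 3), round(theta, 3))
```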
421 The Corporate Vision Effect on Rajabhat University Brand Building in Thailand
Authors: Pisit Potjanajaruwit
Abstract:
This study aims to (1) investigate the corporate vision factor influencing Rajabhat University brand building in Thailand and (2) explore the influence of brand building on the loyalty of Rajabhat University stakeholders. The research uses mixed methods, combining qualitative and quantitative research. The qualitative approach consisted of in-depth interviews with six key informants from the executive of the Rathanagosin Rajabhat University group. The quantitative data were collected by questionnaires distributed to stakeholders, including instructors, staff, students and parents of the Rathanagosin Rajabhat University group; a sample of 400 was selected by the multi-stage sampling method. Data were analyzed by Structural Equation Modeling (SEM), and a focus group interview was conducted to confirm the model. Findings show that corporate vision had a direct and positive influence on Rajabhat University brand building, which in turn had a direct and positive influence on stakeholders' loyalty; stakeholders' loyalty was also indirectly influenced by corporate vision through Rajabhat University brand building.
Keywords: Brand building, corporate vision, Rajabhat University, stakeholders’ loyalty.
420 An Evaluation of Neural Network Efficacies for Image Recognition on Edge-AI Computer Vision Platform
Abstract:
Image recognition enables machines such as robots to understand a scene and plays an important role in computer vision applications. Computer vision platforms, as the physical infrastructure supporting neural networks for image recognition, largely determine the performance different neural networks can achieve. In this paper, three different computer vision platforms are investigated: an edge-AI device (Jetson Nano, 4 GB), a standalone laptop (RTX 3000-series GPU, using CUDA), and a web-based service (Google Colab, using a GPU). In the case study, four prominent neural network architectures are deployed: AlexNet, VGG16, GoogleNet, and ResNet (34/50). Using the publicly available CIFAR-10 dataset, our findings provide a nuanced perspective on optimizing image recognition tasks across edge-AI platforms, offering guidance on selecting appropriate neural network architectures to maximize performance under hardware constraints.
Keywords: AlexNet, VGG, GoogleNet, ResNet, ImageNet, Cifar-10, Edge AI, Jetson Nano, CUDA, GPU.
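A hedged sketch of the kind of benchmark described above: load one pretrained network, run a batch of CIFAR-10-sized images, and time inference on whatever device is present. The model choice, batch size, upscaling to 224x224, and the use of a recent torchvision weights API are assumptions, not the authors' exact protocol.

```python
# Time inference throughput of a pretrained ResNet-50 on GPU or CPU.
import time
import torch
import torchvision.models as models

device = "cuda" if torch.cuda.is_available() else "cpu"
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).to(device).eval()

batch = torch.rand(32, 3, 224, 224, device=device)   # CIFAR-10 images upscaled to ImageNet size

with torch.no_grad():
    for _ in range(3):                                # warm-up iterations
        model(batch)
    if device == "cuda":
        torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(10):
        model(batch)
    if device == "cuda":
        torch.cuda.synchronize()
    elapsed = time.perf_counter() - start

print(f"{device}: {10 * batch.size(0) / elapsed:.1f} images/s")
```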
419 2D Spherical Spaces for Face Relighting under Harsh Illumination
Authors: Amr Almaddah, Sadi Vural, Yasushi Mae, Kenichi Ohara, Tatsuo Arai
Abstract:
In this paper, we propose a robust face relighting technique using spherical space properties. The proposed method aims to reduce illumination effects on face recognition. Given a single 2D face image, we relight the face object by extracting the nine spherical harmonic bases and the face spherical illumination coefficients. First, an internal training illumination database is generated by computing face albedo and face normals from 2D images under different lighting conditions. Based on the generated database, we analyze the target face pixels and compare them with the training bootstrap by using pre-generated tiles. In this work, practical real-time processing speed and small image size were considered when designing the framework. In contrast to other works, our technique requires no 3D face models for the training process and takes a single 2D image as input. Experimental results on publicly available databases show that the proposed technique works well under severe lighting conditions, with significant improvements in face recognition rates.
Keywords: Face synthesis and recognition, Face illumination recovery, 2D spherical spaces, Vision for graphics.
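The nine-term spherical-harmonic lighting model underlying this kind of relighting can be written compactly: evaluate the second-order SH basis at each surface normal and combine with per-image illumination coefficients. The albedo, normals, and coefficients below are placeholders, not the authors' data.

```python
# Second-order (nine-term) spherical-harmonic irradiance model (sketch).
import numpy as np

def sh_basis(normals):
    """Nine spherical-harmonic basis values for unit normals of shape (N, 3)."""
    nx, ny, nz = normals[:, 0], normals[:, 1], normals[:, 2]
    return np.stack([
        0.282095 * np.ones_like(nx),
        0.488603 * ny,
        0.488603 * nz,
        0.488603 * nx,
        1.092548 * nx * ny,
        1.092548 * ny * nz,
        0.315392 * (3.0 * nz ** 2 - 1.0),
        1.092548 * nx * nz,
        0.546274 * (nx ** 2 - ny ** 2),
    ], axis=1)                                   # shape (N, 9)

def relight(albedo, normals, coeffs):
    """Render pixel intensities as albedo times SH irradiance."""
    return albedo * (sh_basis(normals) @ coeffs)

normals = np.array([[0.0, 0.0, 1.0], [0.7071, 0.0, 0.7071]])
albedo = np.array([0.8, 0.6])
coeffs = np.array([0.9, 0.0, 0.4, 0.1, 0.0, 0.0, 0.05, 0.0, 0.0])  # example lighting
print(relight(albedo, normals, coeffs))
```

Given an observed image with known albedo and normals, the nine coefficients can be estimated by linear least squares against the SH basis, which is the usual route to recovering the illumination of a single input photograph.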
418 FPGA Implementation of a Vision-Based Blind Spot Warning System
Authors: Yu Ren Lin, Yu Hong Li
Abstract:
Vision-based intelligent vehicle applications often require large amounts of memory to handle video streaming and image processing, which in turn increases the complexity of hardware and software. This paper presents an FPGA implementation of a vision-based blind spot warning system. Using video frames, the information in the blind spot area is reduced to a one-dimensional signal. Analysis of the estimated image entropy allows an object to be detected in time. This idea has been implemented in the XtremeDSP video starter kit. The blind spot warning system uses only 13% of the device's logic resources and 95 kbits of block memory, and its frame rate is over 30 frames per second (fps).
Keywords: blind-spot area, image, FPGA
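A hedged software equivalent of the entropy cue named above: estimate the grey-level entropy of the blind-spot region each frame and raise a warning when it jumps above an empty-road baseline. The ROI coordinates and threshold are illustrative; the paper's FPGA pipeline is not reproduced.

```python
# Entropy-based object presence check for a blind-spot region (sketch).
import numpy as np

def roi_entropy(gray_roi, bins=64):
    """Shannon entropy (bits) of the grey-level histogram of an image region."""
    hist, _ = np.histogram(gray_roi, bins=bins, range=(0, 255), density=True)
    p = hist[hist > 0]
    p = p / p.sum()
    return float(-(p * np.log2(p)).sum())

def blind_spot_warning(gray_frame, baseline, threshold=0.8):
    roi = gray_frame[200:400, 500:640]        # assumed blind-spot window in the frame
    return roi_entropy(roi) - baseline > threshold

baseline = roi_entropy(np.full((200, 140), 90, dtype=np.uint8))   # empty road is nearly uniform
busy = np.random.randint(0, 255, (480, 640), dtype=np.uint8)      # a vehicle adds texture
print(blind_spot_warning(busy, baseline))
```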
417 A Real-Time Rendering based on Efficient Updating of Static Objects Buffer
Authors: Youngjae Chun, Kyoungsu Oh
Abstract:
Real-time 3D applications have to guarantee interactive rendering speed. The number of polygons that can be rendered is restricted by the performance of the graphics hardware and the graphics algorithms. In general, rendering performance increases drastically when only the dynamic 3D models, which are far fewer than the static ones, are handled. Since the shapes and colors of static objects do not change while the viewing direction is fixed, this information can be reused. We render huge numbers of polygons that cannot be handled by conventional rendering techniques in real time by using a static object image and merging it with the rendering result of the dynamic objects. Performance necessarily drops when the static object image is updated, which involves removing a static object that starts to move and re-rendering the other static objects overlapped by the moving ones. Based on the visibility of the object beginning to move, we can skip the updating process. As a result, we enhance rendering performance and reduce differences in rendering speed between frames. The proposed method renders 200,000,000 polygons in total, consisting of 500,000 dynamic polygons and the rest static polygons, at about 100 frames per second.
Keywords: Occlusion query, Real-time rendering, Temporal coherence.
416 FPGA Implementation of a Vision-Based Lane Departure Warning System
Authors: Yu Ren Lin, Yi Feng Su
Abstract:
Vision-based solutions in intelligent vehicle applications often need large amounts of memory to handle video streaming and image processing, which increases the complexity of hardware and software. In this paper, we present an FPGA implementation of a vision-based lane departure warning system. From video frames, the line gradient is estimated and the lane marks are found. By analyzing the positions of the lane marks, departure of the vehicle is detected in time. This idea has been implemented in a Xilinx Spartan-6 FPGA. The lane departure warning system uses 39% of the device's logic resources and no block memory. The average availability is 92.5%. The frame rate is more than 30 frames per second (fps).
Keywords: Lane departure warning system, image, FPGA.
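A rough software analogue of the pipeline described above (the paper's version runs in FPGA logic): estimate line gradients with an edge detector, find lane marks with a Hough transform, and flag departure when the lane centre drifts too far from the image centre. The ROI, thresholds, and the centre-offset rule are assumptions of this sketch.

```python
# Lane-departure check from lane marks found by Canny + probabilistic Hough (sketch).
import cv2
import numpy as np

def detect_departure(bgr_frame, offset_thresh=0.25):
    gray = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2GRAY)
    h, w = gray.shape
    roi = gray[int(0.6 * h):, :]                        # look at the road region only
    edges = cv2.Canny(roi, 80, 160)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                            minLineLength=40, maxLineGap=20)
    if lines is None:
        return False
    # Use the mean x-position of detected lane-mark segments as the lane centre.
    xs = [(x1 + x2) / 2.0 for x1, _, x2, _ in lines[:, 0]]
    lane_centre = float(np.mean(xs))
    offset = (lane_centre - w / 2.0) / w                # normalised lateral offset
    return abs(offset) > offset_thresh

frame = np.zeros((480, 640, 3), dtype=np.uint8)
cv2.line(frame, (520, 300), (620, 470), (255, 255, 255), 5)   # a single mark far to the right
print(detect_departure(frame))
```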
415 Vision Based People Tracking System
Authors: Boukerch Haroun, Luo Qing Sheng, Li Hua Shi, Boukraa Sebti
Abstract:
In this paper we present the design and implementation of a target tracking system where the target is a moving person in a video sequence. The system can easily be applied as a vision system for a mobile robot. The system is composed of two major parts: the first is the detection of the person in the video frame using an SVM learning machine based on HOG descriptors; the second is the tracking of the moving person, done by combining a Kalman filter with a modified version of the CamShift tracking algorithm that adds a target motion feature to the color feature. The experimental results show that the new algorithm outperforms the traditional CamShift algorithm in robustness and in cases of occlusion.
Keywords: Camshift Algorithm, Computer Vision, Kalman Filter, Object tracking.
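A compact OpenCV sketch of the two stages named above: HOG+SVM person detection to initialise the track window, then CamShift on a hue histogram to follow the person. The Kalman filter and motion-feature modifications from the paper are intentionally omitted here.

```python
# Person detection (HOG + SVM) and colour-based CamShift tracking (sketch).
import cv2
import numpy as np

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def init_track(frame):
    """Detect a person and build the hue histogram used by CamShift."""
    rects, _ = hog.detectMultiScale(frame, winStride=(8, 8))
    if len(rects) == 0:
        return None, None
    x, y, w, h = rects[0]
    hsv = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0], None, [32], [0, 180])
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
    return (x, y, w, h), hist

def track(frame, window, hist):
    """One CamShift update on the hue back-projection of the current frame."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    backproj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
    _, window = cv2.CamShift(backproj, window, criteria)
    return window
```

A capture loop would call init_track on the first frame in which a person appears and then track on each subsequent frame.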
414 Hand Gesture Recognition using Blob Detection for Immersive Projection Display System
Authors: Hasup Lee, Yoshisuke Tateyama, Tetsuro Ogi
Abstract:
We developed a vision interface framework for an immersive projection system, CAVE, in the virtual reality research field, using hand gesture recognition with computer vision techniques. The background image was subtracted from the current image frame of a webcam, and the color space of the image was converted into HSV space. Skin regions were then masked using a skin color range threshold and a noise reduction operation was applied. Blobs were formed from the image, and gestures were recognized using these blobs. Using our hand gesture recognition, we could implement an effective interface for CAVE without cumbersome devices.
Keywords: CAVE, Computer Vision, Gesture Recognition, Virtual Reality.
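A minimal OpenCV sketch of the reconstructed pipeline: subtract the background, keep pixels inside an HSV skin-colour range, denoise, and extract blobs whose contours would then be matched against gestures. The skin range and area cut-off are assumptions; the CAVE integration is out of scope here.

```python
# Background subtraction + HSV skin masking + blob extraction (sketch).
import cv2
import numpy as np

def skin_blobs(frame_bgr, background_bgr, min_area=500):
    fg = cv2.absdiff(frame_bgr, background_bgr)                 # background subtraction
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    skin = cv2.inRange(hsv, np.array([0, 40, 60]), np.array([25, 255, 255]))  # rough skin range
    moving = cv2.cvtColor(fg, cv2.COLOR_BGR2GRAY)
    _, moving = cv2.threshold(moving, 25, 255, cv2.THRESH_BINARY)
    mask = cv2.bitwise_and(skin, moving)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))  # noise reduction
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [c for c in contours if cv2.contourArea(c) > min_area]
```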
413 LINUX Cluster Possibilities in 3-D PHOTO Quality Imaging and Animation
Authors: Arjun Jain, Himanshu Agrawal, Nalini Vasudevan
Abstract:
In this paper we present the PC cluster built at R.V. College of Engineering (with great help from the Department of Computer Science and Electrical Engineering). The structure of the cluster is described and its performance is evaluated by rendering complex 3D Persistence of Vision (POV) images with the ray-tracing algorithm. We propose a novel method to render such images distributedly on a low-cost, scalable cluster.
Keywords: PC cluster, parallel computations, ray tracing, persistence of vision, rendering.
412 Exploring Dynamics of Regional Creative Economy
Authors: Ari Lindeman, Melina Maunula, Jani Kiviranta, Ronja Pölkki
Abstract:
The aim of this paper is to build a vision for the utilization of creative industry competences in industrial and service firms connected to the smart specialization focus areas of the Kymenlaakso region, Finland. Research indicates that creativity and the use of creative industry inputs can enhance innovation and competitiveness. Currently, creative methods and services are underutilized in regional businesses and the added value they provide is not well grasped. Methodologically, the research adopts a qualitative exploratory approach. Data are collected in multiple ways, including a survey, focus groups, and interviews. Theoretically, the paper contributes to the discussion about the use of creative industry competences in regional development, and argues for building regional creative economy ecosystems in close co-operation with regional strategies and traditional industries rather than treating regional creative industry ecosystem initiatives as separate from them. The practical contribution of the paper is the creative vision for regional authorities to use in updating the smart specialization strategy as well as in boosting the competitiveness of the industrial and creative and cultural sectors. The paper also illustrates a research-based model of vision building.
Keywords: Business, cooperation, creative economy, regional development, vision.
411 Vision-based Network System for Industrial Applications
Authors: Taweepol Suesut, Arjin Numsomran, Vittaya Tipsuwanporn
Abstract:
This paper presents a communication network for machine vision systems to be implemented in control systems and logistics applications in an industrial environment. Real-time distribution of data over the network is very important for communication among the vision nodes, image processing and control, as well as the distributed I/O nodes. A robust implementation, with respect both to camera packaging and to data transmission, has been accounted for. The network consists of a gigabit Ethernet network and a switch with an integrated firewall, used to distribute the data and provide connections to the imaging control station and IEC 61131-conform signal integration via the Modbus TCP protocol. The real-time and delay-time properties of each part of the network were considered and worked out in this paper.
Keywords: Distributed Real-Time Automation, Machine Vision, Ethernet.
410 Accurate Dimensional Measurement of 3D Round Holes Based on Stereo Vision
Authors: Zhiguo Ren, Lilong Cai
Abstract:
This paper presents an effective method to accurately reconstruct and measure the 3D curved edges of small industrial parts based on stereo vision. To effectively fit the curve of the measured parts using a series of line segments in the images, a coarse-to-fine strategy based on multi-scale curve fitting is employed. After reconstructing the 3D curve of a hole in a curved surface, its axis is adjusted so that it is parallel to the Z axis with least-squares error, and the dimensions of the hole can then be calculated easily on the XY plane. Experimental results show that the presented method can accurately measure the dimensions of round holes through a curved surface.
Keywords: Stereo Vision, 3D Round Hole Measurement, Curve Fitting, 3D Curve Reconstruction, Least Squares Error.
409 Video Based Ambient Smoke Detection By Detecting Directional Contrast Decrease
Authors: Omair Ghori, Anton Stadler, Stefan Wilk, Wolfgang Effelsberg
Abstract:
Fire-related incidents account for extensive loss of life and material damage. Quick and reliable detection of occurring fires has high real-world implications. Whereas a major research focus lies on the detection of outdoor fires, indoor camera-based fire detection is still an open issue. Cameras in combination with computer vision help to detect flames and smoke more quickly than conventional fire detectors. In this work, we present a computer vision-based smoke detection algorithm based on contrast changes and a multi-step classification. This work accelerates computer vision-based fire detection considerably in comparison with classical indoor fire detection.
Keywords: Contrast analysis, early fire detection, video smoke detection, video surveillance.
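A hedged sketch of the core cue named above: smoke lowers local contrast, so compare the per-block standard deviation of the current frame against a smoke-free reference and report the fraction of blocks whose contrast has dropped markedly. Block size and drop ratio are assumptions, and the paper's multi-step classifier is not reproduced.

```python
# Block-wise contrast-decrease score as a smoke indicator (sketch).
import numpy as np

def block_contrast(gray, block=16):
    h, w = gray.shape
    h, w = h - h % block, w - w % block
    blocks = gray[:h, :w].reshape(h // block, block, w // block, block)
    return blocks.std(axis=(1, 3))                      # one contrast value per block

def smoke_score(gray_frame, gray_reference, drop_ratio=0.5):
    ref = block_contrast(gray_reference.astype(np.float32))
    cur = block_contrast(gray_frame.astype(np.float32))
    dropped = (cur < drop_ratio * ref) & (ref > 5.0)    # ignore blocks that were flat anyway
    return float(dropped.mean())                        # fraction of low-contrast blocks

rng = np.random.default_rng(0)
reference = rng.integers(0, 255, (240, 320)).astype(np.uint8)
hazy = (0.2 * reference + 0.8 * reference.mean()).astype(np.uint8)   # simulated smoke veil
print(smoke_score(hazy, reference))   # close to 1.0 -> likely smoke
```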
408 RV-YOLOX: Object Detection on Inland Waterways Based on Optimized YOLOX through Fusion of Vision and 3+1D Millimeter Wave Radar
Authors: Zixian Zhang, Shanliang Yao, Zile Huang, Zhaodong Wu, Xiaohui Zhu, Yong Yue, Jieming Ma
Abstract:
Unmanned Surface Vehicles (USVs) hold significant value for their capacity to undertake hazardous and labor-intensive operations over aquatic environments. Object detection tasks are significant in these applications. Nonetheless, the efficacy of USVs in object detection is impeded by several intrinsic challenges, including the intricate dispersal of obstacles, reflections emanating from coastal structures, and the presence of fog over water surfaces, among others. To address these problems, this paper provides a fusion method for USVs to effectively detect objects in the inland surface environment, utilizing vision sensors and 3+1D millimeter-wave radar. The MMW radar is a complementary tool to vision sensors, offering reliable environmental data. This approach involves the conversion of the radar’s 3D point cloud into a 2D radar pseudo-image, thereby standardizing the format for radar and vision data by leveraging a point transformer. Furthermore, this paper proposes the development of a multi-source object detection network, named RV-YOLOX, which leverages radar-vision integration specifically tailored for inland waterway environments. The performance is evaluated on our self-recorded waterways dataset. Compared with the YOLOX network, our fusion network significantly improves detection accuracy, especially for objects under poor lighting conditions.
Keywords: Inland waterways, object detection, YOLO, sensor fusion, self-attention, deep learning.
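A simplified sketch of the radar-to-pseudo-image step described above: project the 3+1D radar points into the camera image plane with a pinhole model and splat depth and Doppler velocity into a two-channel image that can be fused with the RGB stream. The intrinsics, image size, and channel layout are placeholders; the point transformer used in the paper is not reproduced.

```python
# Convert a 3+1D radar point cloud into a 2D pseudo-image (sketch).
import numpy as np

K = np.array([[800.0, 0.0, 320.0],      # assumed camera intrinsics
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

def radar_pseudo_image(points_xyzv, image_shape=(480, 640)):
    """points_xyzv: (N, 4) array of x, y, z (metres, camera frame) and radial velocity."""
    img = np.zeros((*image_shape, 2), dtype=np.float32)   # channels: depth, velocity
    for x, y, z, v in points_xyzv:
        if z <= 0:                                         # behind the camera
            continue
        u, w, _ = K @ np.array([x, y, z]) / z
        u, w = int(round(u)), int(round(w))
        if 0 <= w < image_shape[0] and 0 <= u < image_shape[1]:
            img[w, u, 0] = z
            img[w, u, 1] = v
    return img

points = np.array([[1.0, 0.2, 12.0, -0.8], [-2.0, 0.0, 25.0, 1.5]])
pseudo = radar_pseudo_image(points)
print(np.argwhere(pseudo[..., 0] > 0))   # pixel locations hit by radar returns
```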
407 Vision Based Robotic Interception in Industrial Manipulation Tasks
Authors: Ahmet Denker, Tuğrul Adıgüzel
Abstract:
In this paper, a solution is presented for a robotic manipulation problem in industrial settings. The problem is sensing objects on a conveyor belt, identifying the target, and planning and tracking an interception trajectory between the end effector and the target. Such a problem can be formulated as a combination of object recognition, tracking, and interception. For this purpose, we integrated a vision system into the manipulation system and employed tracking algorithms. The control approach is implemented on a real industrial manipulation setting, which consists of a conveyor belt, objects moving on it, a robotic manipulator, and a visual sensor above the conveyor. The trajectory for robotic interception at a rendezvous point on the conveyor belt is analytically calculated. Test results show that tracking the target along this trajectory results in interception and grabbing of the target object.
Keywords: robotics, robot vision, rendezvous planning, self-organizing maps.
406 Robot Vision Application based on Complex 3D Pose Computation
Authors: F. Rotaru, S. Bejinariu, C. D. Niţâ, R. Luca, I. Pâvâloi, C. Lazâr
Abstract:
The paper presents a technique suitable for robot vision applications where it is not possible to establish the object position from one view. Usually, one-view pose calculation methods are based on the correspondence of image features established at a training step and exactly the same image features extracted at the execution step, for a different object pose. When such a correspondence is not feasible because of the lack of specific features, a new method is proposed. In the first step, the method computes the 3D pose of feature points from two views. Subsequently, using a registration algorithm, the set of 3D feature points extracted at the execution phase is aligned with the set of 3D feature points extracted at the training phase. The result is a Euclidean transform which has to be used by the robot head for reorientation at the execution step.
Keywords: features correspondence, registration algorithm, robot vision, triangulation method.
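A small sketch of the registration step referred to above: given corresponding 3D feature points from the training and execution phases, recover the rigid (Euclidean) transform that aligns them in the least-squares sense via the Kabsch/SVD method. Correspondences are assumed known; the paper's specific registration algorithm may differ.

```python
# Rigid registration of two corresponding 3D point sets (Kabsch method).
import numpy as np

def rigid_transform(src, dst):
    """Return R, t such that R @ src[i] + t ~= dst[i] (both arrays shaped (N, 3))."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:             # repair a reflection if the SVD produced one
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

train = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
angle = np.deg2rad(30)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
exec_pts = train @ R_true.T + np.array([0.2, -0.1, 0.5])
R, t = rigid_transform(train, exec_pts)
print(np.allclose(R, R_true), np.round(t, 3))
```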
405 FPGA based Relative Distance Measurement using Stereo Vision Technology
Authors: Manasi Pathade, Prachi Kadam, Renuka Kulkarni, Tejas Teredesai
Abstract:
In this paper, we propose a novel concept of relative distance measurement using stereo vision technology and discuss its implementation on an FPGA-based real-time image processor. We capture two images using two CCD cameras and compare them. Disparity is calculated for each pixel using a real-time dense disparity calculation algorithm. This algorithm is based on the concept of an indexed histogram for matching. Since disparity is inversely proportional to distance (proved later), we can thus obtain the relative distances of objects in front of the camera. The output is displayed on a TV screen in the form of a depth image (optionally using pseudo-colors). This system works in real time at the full PAL frame rate (720 x 576 active pixels @ 25 fps).
Keywords: Stereo Vision, Relative Distance Measurement, Indexed Histogram, Real-time FPGA Image Processor.
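The inverse relation between disparity and distance used above can be written in a couple of lines: for rectified cameras, depth Z = f * B / d, with focal length f in pixels, baseline B in metres, and disparity d in pixels. The numbers below are illustrative, not the parameters of the described system.

```python
# Depth from disparity for a rectified stereo pair (sketch).
def depth_from_disparity(disparity_px, focal_px=700.0, baseline_m=0.12):
    if disparity_px <= 0:
        return float("inf")            # zero disparity corresponds to a point at infinity
    return focal_px * baseline_m / disparity_px

for d in (84, 42, 21):                 # halving the disparity doubles the distance
    print(d, "px ->", round(depth_from_disparity(d), 2), "m")
```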
404 Consensus on Climate Change Adaptation among Government and Populace
Authors: Tsung-Hsien Yu, Ya-Hsuan Chou, Ming-Wei Chen, Chi-Ming Chen, Yi-Hsuan Li
Abstract:
Observations and long-term trends indicate that climate change impacts would be significant and would affect Taiwan directly and severely. Taiwan engages not only in mitigation, but also in adaptation. However, there are cognitive gaps on adaptation between the government and the populace. Moreover, a vision of zero carbon and 100% renewable energy will be adopted in the future. Therefore, the objectives of this article are to 1) hold a National Forum to identify differences between the strategies of zero carbon and 100% renewable energy and the cognition of the general populace, and 2) plan a clear roadmap for the vision, strategy, and measures. In this forum, we set 5 group topics and 5 presumed themes, and reviewed the issues mentioned in order to arrive at the critical issues. Finally, there are 4 strategies and 14 critical issues which correlate with the vision and strategy of the government and the cognition of the general populace.
Keywords: Cognitive gap, world café, renewable energy, zero-carbon.
403 Vision Based Hand Gesture Recognition
Authors: Pragati Garg, Naveen Aggarwal, Sanjeev Sofat
Abstract:
With the development of ubiquitous computing, current user interaction approaches with keyboard, mouse and pen are not sufficient. Due to the limitations of these devices, the usable command set is also limited. Direct use of the hands as an input device is an attractive method for providing natural human-computer interaction, which has evolved from text-based interfaces through 2D graphical interfaces and multimedia-supported interfaces to fully fledged multi-participant Virtual Environment (VE) systems. Imagine the human-computer interaction of the future: a 3D application where you can move and rotate objects simply by moving and rotating your hand, all without touching any input device. In this paper a review of vision-based hand gesture recognition is presented. The existing approaches are categorized into 3D model-based approaches and appearance-based approaches, highlighting their advantages and shortcomings and identifying the open issues.
Keywords: Computer Vision, Hand Gesture, Hand Posture, Human Computer Interface.
402 Paddy/Rice Singulation for Determination of Husking Efficiency and Damage Using Machine Vision
Authors: M. Shaker, S. Minaei, M. H. Khoshtaghaza, A. Banakar, A. Jafari
Abstract:
In this study, a system of machine vision and singulation was developed to separate paddy from rice and determine paddy husking and rice breakage percentages. The machine vision system consists of three main components: an imaging chamber, a digital camera, and a computer equipped with image processing software. The singulation device consists of a kernel-holding surface, a motor with a vacuum fan, and a dimmer. For separation of paddy from rice (in the image), it was necessary to set a threshold. Therefore, some images of paddy and rice were sampled and the RGB values of the images were extracted using MATLAB software. Then the mean and standard deviation of the data were determined. An image processing algorithm was developed in MATLAB to determine the paddy/rice separation and the rice breakage and paddy husking percentages, using the blue-to-red ratio. Tests showed that a threshold of 0.75 is suitable for separating paddy from rice kernels. Results from the evaluation of the image processing algorithm showed that the accuracies obtained with the algorithm were 98.36% and 91.81% for the paddy husking and rice breakage percentages, respectively. Analysis also showed that a suction of 45 mmHg to 50 mmHg, yielding 81.3% separation efficiency, is appropriate for operation of the kernel singulation system.
Keywords: Computer vision, rice kernel, husking, breakage.
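A sketch of the colour rule reported above: classify kernel pixels by the blue-to-red ratio against the 0.75 threshold from the abstract. Which side of the threshold corresponds to paddy versus rice is an assumption here (husked rice is whiter, so its ratio is taken to be closer to 1), as is the prior segmentation of individual kernels from the background.

```python
# Classify a segmented kernel as paddy or rice from its blue-to-red ratio (sketch).
import numpy as np

def classify_kernel(rgb_pixels, threshold=0.75):
    """rgb_pixels: (N, 3) array of the pixels belonging to one kernel."""
    r = rgb_pixels[:, 0].astype(np.float32) + 1e-6     # avoid division by zero
    b = rgb_pixels[:, 2].astype(np.float32)
    mean_ratio = float((b / r).mean())
    return "paddy" if mean_ratio < threshold else "rice"

paddy_like = np.array([[180, 150, 90]] * 100)    # yellowish husk: low blue-to-red ratio
rice_like = np.array([[200, 200, 190]] * 100)    # whitish kernel: ratio near 1
print(classify_kernel(paddy_like), classify_kernel(rice_like))
```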
401 Deep iCrawl: An Intelligent Vision-Based Deep Web Crawler
Authors: R.Anita, V.Ganga Bharani, N.Nityanandam, Pradeep Kumar Sahoo
Abstract:
The explosive growth of the World Wide Web has posed a challenging problem in extracting relevant data. Traditional web crawlers focus only on the surface web, while the deep web keeps expanding behind the scenes. Deep web pages are created dynamically as a result of queries posed to specific web databases. The structure of deep web pages makes it impossible for traditional web crawlers to access deep web content. This paper, Deep iCrawl, gives a novel vision-based approach for extracting data from the deep web. Deep iCrawl splits the process into two phases: the first phase includes query analysis and query translation, and the second covers vision-based extraction of data from the dynamically created deep web pages. There are several established approaches for the extraction of deep web pages, but the proposed method aims at overcoming their inherent limitations. This paper also aims at comparing the data items and presenting them in the required order.
Keywords: Crawler, Deep web, Web Database.
400 Automatic Inspection of Percussion Caps by Means of Combined 2D and 3D Machine Vision Techniques
Authors: A. Tellaeche, R. Arana, I. Maurtua
Abstract:
Exhaustive quality control is becoming more and more important when commercializing competitive products in the world's globalized market. Taking this affirmation as an undeniable truth, quality control becomes critical in certain market sectors that must satisfy the strictest quality requirements. One of these examples is percussion cap mass production, a critical element assembled in firearm ammunition. These elements, built in great quantities at very high speed, must achieve a minimum tolerance deviation in their fabrication, due to their vital importance in firing the piece of ammunition in which they are assembled. This paper outlines a machine vision development for 100% inspection of percussion caps, obtaining data from simultaneous 2D and 3D images. The acquisition speed and precision of these images of a reflective metallic piece such as a percussion cap, the accuracy of the measurements taken from these images, and the multiple fabrication errors detected constitute the main findings of this work.
Keywords: critical tolerance, high-speed decision making, simultaneous 2D/3D machine vision.
399 Stereo Motion Tracking
Authors: Yudhajit Datta, Jonathan Bandi, Ankit Sethia, Hamsi Iyer
Abstract:
Motion tracking and stereo vision are complicated, albeit well-understood, problems in computer vision. Existing software that combines the two approaches to perform stereo motion tracking typically employs complicated and computationally expensive procedures. The purpose of this study is to create a simple and effective solution capable of combining the two approaches. The study explores a strategy that combines two-dimensional motion tracking using a Kalman filter with depth detection of the object using stereo vision. In conventional approaches, objects in the scene of interest are observed using a single camera. For stereo motion tracking, however, the scene of interest is observed using video feeds from two calibrated cameras. Using two simultaneous measurements from the two cameras, the depth of the object from the plane containing the cameras is calculated. The approach attempts to capture the entire three-dimensional spatial information of each object in the scene and represent it through a software estimator object. At discrete intervals, the estimator tracks object motion in the plane parallel to the plane containing the cameras and updates the perpendicular distance of the object from that plane as depth. The ability to efficiently track the motion of objects in three-dimensional space using a simplified approach could prove to be an indispensable tool in a variety of surveillance scenarios. The approach may find application in high-security surveillance scenes, such as the premises of bank vaults, prisons or other detention facilities, as well as in low-cost applications in supermarkets and car parking lots.
Keywords: Kalman Filter, Stereo Vision, Motion Tracking, Matlab, Object Tracking, Camera Calibration, Computer Vision System Toolbox.
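A hedged sketch of the estimator described above (the paper uses MATLAB; Python is used here for consistency with the other sketches): a constant-velocity Kalman filter tracks the object in the plane parallel to the cameras, and each stereo measurement also refreshes a depth value computed from the disparity of the two calibrated views. Noise levels and camera parameters are assumptions.

```python
# Constant-velocity Kalman tracking in the image plane plus stereo depth (sketch).
import numpy as np

dt = 1.0 / 30.0                                        # frame interval
F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]], float)
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)      # we measure image position only
Q = np.eye(4) * 1e-3                                   # process noise
R = np.eye(2) * 2.0                                    # measurement noise (pixels^2)

x = np.zeros(4)                                        # state: [u, v, du, dv]
P = np.eye(4) * 100.0

def kalman_update(x, P, z):
    x = F @ x
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

def stereo_depth(u_left, u_right, focal_px=700.0, baseline_m=0.12):
    d = max(u_left - u_right, 1e-6)
    return focal_px * baseline_m / d

depth = None
for u_l, v, u_r in [(320, 240, 300), (324, 241, 303), (329, 242, 307)]:
    x, P = kalman_update(x, P, np.array([u_l, v], float))
    depth = stereo_depth(u_l, u_r)

print(np.round(x[:2], 1), round(depth, 2), "m")
```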
398 Real-Time Vision-based Korean Finger Spelling Recognition System
Authors: Anjin Park, Sungju Yun, Jungwhan Kim, Seungk Min, Keechul Jung
Abstract:
Finger spelling is an art of communicating by signs made with the fingers and has been introduced into sign language to serve as a bridge between sign language and the verbal language. Previous approaches to finger spelling recognition fall into two categories: glove-based and vision-based. The glove-based approach is simpler and recognizes hand postures more accurately than the vision-based one, yet its interfaces require the user to wear a cumbersome glove and carry a load of cables connecting the device to a computer. In contrast, vision-based approaches provide an attractive alternative to the cumbersome interface and promise more natural and unobtrusive human-computer interaction. Vision-based approaches generally consist of two steps, hand extraction and recognition, which are processed independently. This paper proposes a real-time vision-based Korean finger spelling recognition system that integrates hand extraction into recognition. First, we tentatively detect a hand region using the CamShift algorithm. The fill factor and aspect ratio estimated from the width and height given by CamShift are then used to choose candidates from the database, which reduces the number of matches in the recognition step. To recognize the finger spelling, we use DTW (dynamic time warping) based on modified chain codes, to be robust to scale and orientation variations. In this procedure, since accurate hand regions, without holes and noise, should be extracted to improve the precision, we use the graph cuts algorithm that globally minimizes the energy function elegantly expressed by Markov random fields (MRFs). In the experiments, the computational times are less than 130 ms, and they do not depend on the number of finger spelling templates in the database, as candidate templates are selected in the extraction step.
Keywords: CAMShift, DTW, Graph Cuts, MRF.
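A self-contained sketch of the matching step named above: dynamic time warping between two chain-code sequences, using a circular distance on 8-direction codes so that directions 0 and 7 are treated as neighbours. The modified chain coding and the graph-cut segmentation from the paper are not reproduced here.

```python
# Dynamic time warping over 8-direction chain codes (sketch).
import numpy as np

def chain_dist(a, b):
    """Circular distance between two 8-direction chain codes (0..7)."""
    d = abs(a - b)
    return min(d, 8 - d)

def dtw(seq_a, seq_b):
    n, m = len(seq_a), len(seq_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            c = chain_dist(seq_a[i - 1], seq_b[j - 1])
            cost[i, j] = c + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

template = [0, 0, 1, 2, 2, 3, 4]          # chain code of a stored finger-spelling contour
observed = [0, 1, 1, 2, 3, 3, 4, 4]       # slightly stretched observation of the same shape
print(dtw(template, observed))            # small value -> good match
```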
397 A Hybrid Distributed Vision System for Robot Localization
Authors: Hsiang-Wen Hsieh, Chin-Chia Wu, Hung-Hsiu Yu, Shu-Fan Liu
Abstract:
Localization is one of the critical issues in the field of robot navigation. With an accurate estimate of the robot pose, robots are capable of navigating the environment autonomously and efficiently. In this paper, a hybrid Distributed Vision System (DVS) for robot localization is presented. The presented approach integrates odometry data from the robot and images captured by overhead cameras installed in the environment to help reduce the possibility of failed localization due to the effects of illumination, accumulated encoder errors, and low-quality range data. An odometry-based motion model is applied to predict robot poses, and robot images captured by the overhead cameras are then used to update the pose estimates with an HSV histogram-based measurement model. Experimental results show that the presented approach can localize robots in a global world coordinate system with localization errors within 100 mm.
Keywords: Distributed Vision System, Localization, Measurement model, Motion model.
396 Design of a Computer Vision Based Exercise Video Game for Senior Citizens
Abstract:
There are numerous changes, both mental and physical, that take place as people age. We need to understand the different aspects required for healthy living, including meeting nutritional needs, regular physical activity to maintain agility, sufficient rest and sleep for physical and mental well-being, social engagement to avoid the risk of social isolation and depression, and access to healthcare to detect and manage chronic conditions. Promoting physical activity for an ageing population is necessary, as many may have led sedentary lifestyles for some time. In our study, we evaluate the considerations in designing a computer vision video game for the elderly. We need to design low-impact activities, such as stretching and gentle movements, because some elderly individuals may have joint pains or mobility issues. The exercise game should consist of simple movements that are easy to follow and remember. It should be fun and enjoyable so that players are motivated to exercise. Social engagement can keep the elderly motivated and competitive, making them more willing to engage in game exercises. Elderly citizens can compare their game scores and try to improve them. We propose a computer vision-based video game for the elderly that captures and tracks the movement of the player's hand pushing a ball on the screen into a circle. It can be easily set up using a PC laptop with a webcam. Our video game adhered to the design framework we employed, encompassing ease of use, a simple graphical interface, easy-to-play game exercises, and fun gameplay.
Keywords: Computer vision, video games, gerontology technology, caregiving.