Search results for: embedded camera
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1640

1640 Flicker Detection with Motion Tolerance for Embedded Camera

Authors: Jianrong Wu, Xuan Fu, Akihiro Higashi, Zhiming Tan

Abstract:

CMOS image sensors with a rolling shutter are used widely in the digital cameras embedded in mobile devices. The rolling shutter suffers from easily visible flicker artifacts caused by fluorescent lamps. In this paper, the characteristics of illumination flicker in the motion case are analyzed, and two efficient detection methods based on matching-fragment selection are proposed. According to the experimental results, our methods achieve as high as 100% accuracy in static scenes and at least 97% in motion scenes.
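
The paper's matching-fragment-selection method is not reproduced here, but the underlying phenomenon lends itself to a short illustration: under a rolling shutter, fluorescent flicker appears as a periodic pattern in the per-row brightness along the readout direction. A minimal sketch of that idea, assuming a known row readout time and using a frame difference to suppress scene content (the function name and parameters are illustrative, not from the paper):

```python
import numpy as np

def flicker_score(frame, prev_frame, line_period_s, mains_hz=50.0):
    """Estimate how strongly a rolling-shutter frame contains mains flicker.

    frame, prev_frame: grayscale images of shape (H, W).
    line_period_s: readout time of one sensor row, in seconds.
    Returns the fraction of row-signal spectral energy at the expected
    flicker frequency (twice the mains frequency).
    """
    # Row means form a 1-D signal along the readout (time) axis;
    # differencing consecutive frames suppresses static scene content.
    rows = frame.mean(axis=1) - prev_frame.mean(axis=1)
    rows = rows - rows.mean()
    spectrum = np.abs(np.fft.rfft(rows))
    freqs = np.fft.rfftfreq(rows.size, d=line_period_s)
    flicker_hz = 2.0 * mains_hz              # lamps flicker at twice the mains frequency
    idx = int(np.argmin(np.abs(freqs - flicker_hz)))
    return float(spectrum[idx] / (spectrum.sum() + 1e-9))
```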

Keywords: illumination flicker, embedded camera, rolling shutter, detection

Procedia PDF Downloads 418
1639 Video Sharing System Based on Wi-Fi Camera

Authors: Qidi Lin, Jinbin Huang, Weile Liang

Abstract:

This paper introduces a video sharing platform based on Wi-Fi, which consists of a camera, mobile phones, and a PC server. The platform receives the wireless signal from the camera and shows the live video captured by the camera on the mobile phone. In addition, it is able to send commands to the camera and control the camera’s holder to rotate. The platform can be applied to interactive teaching, monitoring of dangerous areas, and similar scenarios. Testing results show that the platform can share live video to the mobile phone. Furthermore, when the system’s PC server, the camera, and many mobile phones are connected together, the platform can transfer photos concurrently.
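
As a rough sketch of the kind of plumbing such a platform needs (not the authors' implementation), the snippet below pushes JPEG-compressed frames from a camera to a receiver over a TCP socket, each frame prefixed with its length; the host, port, and JPEG quality are illustrative assumptions.

```python
import socket
import struct
import cv2

def stream_camera(host="192.168.0.10", port=9000, device=0):
    """Minimal sender: grab frames from a local camera and push
    length-prefixed JPEG frames over TCP to a receiving server."""
    cap = cv2.VideoCapture(device)
    sock = socket.create_connection((host, port))
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            ok, jpeg = cv2.imencode(".jpg", frame, [cv2.IMWRITE_JPEG_QUALITY, 80])
            if not ok:
                continue
            payload = jpeg.tobytes()
            # 4-byte big-endian length header, then the JPEG bytes.
            sock.sendall(struct.pack(">I", len(payload)) + payload)
    finally:
        cap.release()
        sock.close()
```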

Keywords: Wifi Camera, socket mobile, platform video monitoring, remote control

Procedia PDF Downloads 334
1638 Emotion Detection in a General Human-Robot Interaction System Optimized for Embedded Platforms

Authors: Julio Vega

Abstract:

Expression recognition is a field of Artificial Intelligence whose main objectives are to recognize the basic forms of affective expression that appear on people’s faces and to contribute to behavioral studies. In this work, a ROS node has been developed that, based on Deep Learning techniques, is capable of detecting the facial expressions of the people who appear in the image. These algorithms were optimized so that they can be executed in real time on an embedded platform. The experiments were carried out on a PC with a USB camera and on a Raspberry Pi 4 with a PiCamera. The final results show a viable system that is capable of working in real time even on an embedded platform.

Keywords: python, low-cost, raspberry pi, emotion detection, human-robot interaction, ROS node

Procedia PDF Downloads 127
1637 Supporting Embedded Medical Software Development with MDevSPICE® and Agile Practices

Authors: Surafel Demissie, Frank Keenan, Fergal McCaffery

Abstract:

Emerging medical devices rely heavily on embedded software that runs on a specific platform in real time. The development of embedded software differs from ordinary software development due to the hardware-software dependency. MDevSPICE® has been developed to provide guidance to support such development. To increase the flexibility of this framework, agile practices have been introduced. This paper outlines the challenges of embedded medical device software development, describes the structure of MDevSPICE®, and suggests a suitable combination of agile practices that will help to add flexibility and address the corresponding challenges of embedded medical device software development.

Keywords: agile practices, challenges, embedded software, MDevSPICE®, medical device

Procedia PDF Downloads 262
1636 Embedded Digital Image System

Authors: Dawei Li, Cheng Liu, Yiteng Liu

Abstract:

This paper introduces an embedded digital image system for a Chinese space-environment vertical-exploration sounding rocket. In order to record the flight status of the sounding rocket as well as the payloads, an onboard embedded image processing system based on the ADV212, a JPEG2000 compression chip, is designed. Since the sounding rocket is not designed to be recovered, all image data must be transmitted to the ground station before re-entry, while the downlink band available for image transmission is only about 600 kbps. At the same compression ratio, the JPEG2000 standard achieves better image quality than other algorithms, so JPEG2000 compression is applied under this limited downlink band. The embedded image system supports lossless to 200:1 real-time compression, with two cameras monitoring nose ejection and motor separation and two cameras monitoring boom deployment. The ADV7182 receives the PAL signal from the cameras and outputs an ITU-R BT.656 signal to the ADV212, switching among the four input video channels according to the programmed sequence. Two SRAMs are used for ping-pong operation, and one 512 Mb SDRAM buffers high-frame-rate images. The whole image system has low power dissipation, low cost, small size, and high reliability, which makes it well suited to this sounding rocket application.
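
A back-of-the-envelope check shows why aggressive compression is required: a PAL frame carried over ITU-R BT.656 (720x576, 4:2:2, 2 bytes per pixel) far exceeds a 600 kbps downlink at any useful frame rate. The frame rate used below is an assumed value for illustration only.

```python
def required_compression_ratio(width, height, bytes_per_pixel, fps, downlink_bps):
    """Back-of-the-envelope: compression ratio needed so the raw video
    stream fits into the available downlink bandwidth."""
    raw_bps = width * height * bytes_per_pixel * 8 * fps
    return raw_bps / downlink_bps

# PAL frame in ITU-R BT.656 (4:2:2, 2 bytes/pixel); 5 fps and 600 kbps are assumptions.
ratio = required_compression_ratio(720, 576, 2, fps=5, downlink_bps=600e3)
print(f"required compression ratio: {ratio:.0f}:1")   # roughly 55:1 at 5 fps
```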

Keywords: ADV212, image system, JPEG2000, sounding rocket

Procedia PDF Downloads 418
1635 A Study of Effective Stereo Matching Method for Long-Wave Infrared Camera Module

Authors: Hyun-Koo Kim, Yonghun Kim, Yong-Hoon Kim, Ju Hee Lee, Myungho Song

Abstract:

In this paper, we describe an efficient stereo matching method and a pedestrian detection method using a stereo LWIR camera. We compared three stereo matching algorithms: block matching, ELAS, and SGM. For pedestrian detection with the stereo LWIR camera, we used the SGM stereo matching method, free-space detection based on u/v-disparity, and HOG-feature-based pedestrian detection. According to the test results, the SGM method performs better than block matching and the ELAS algorithm. The combination of SGM, free-space detection, and pedestrian detection using HOG features with SVM classification can detect pedestrians at a distance of 30 m with a distance error of about 30 cm.
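
The LWIR processing chain itself is not reproduced here, but OpenCV's SGBM matcher (a semi-global matching variant) gives a compact illustration of the SGM step and of turning disparity into metric depth; all parameter values, and the focal length and baseline, are placeholders.

```python
import cv2
import numpy as np

def sgm_disparity(left_gray, right_gray, max_disp=64, block=5):
    """Semi-global matching disparity map via OpenCV's SGBM implementation."""
    matcher = cv2.StereoSGBM_create(
        minDisparity=0,
        numDisparities=max_disp,      # must be divisible by 16
        blockSize=block,
        P1=8 * block * block,         # smoothness penalties, common heuristic
        P2=32 * block * block,
        uniquenessRatio=10,
        speckleWindowSize=100,
        speckleRange=2,
    )
    # compute() returns fixed-point disparity scaled by 16.
    return matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0

def depth_from_disparity(disp, focal_px, baseline_m):
    """Triangulate metric depth: Z = f * B / d, valid where disparity > 0."""
    with np.errstate(divide="ignore"):
        return np.where(disp > 0, focal_px * baseline_m / disp, 0.0)
```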

Keywords: advanced driver assistance system, pedestrian detection, stereo matching method, stereo long-wave IR camera

Procedia PDF Downloads 411
1634 Image Features Comparison-Based Position Estimation Method Using a Camera Sensor

Authors: Jinseon Song, Yongwan Park

Abstract:

In this paper, we propose a method that estimates a user’s position from a single camera based on a pre-built image database. Previous positioning approaches calculate distance from the arrival time of signals, as in GPS (Global Positioning System) or RF (Radio Frequency). However, these methods have a weakness: a large error range caused by signal interference. Our solution estimates position with a camera sensor. A single camera, however, has difficulty obtaining relative position data, and a stereo camera has difficulty providing real-time position data because of the large amount of image data. First, we build an image database of the space in which the positioning service is to be provided, using a single camera. Next, we judge similarity through image matching between the database images and the image transmitted by the user. Finally, we determine the user’s position from the position of the most similar database image. To verify the proposed method, we experimented in real indoor and outdoor environments. The proposed method has a wide positioning range and can determine not only the user’s position but also the user’s direction.
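
As an illustrative sketch of the database-matching step (not the authors' code), the snippet below scores a query image against each database image by counting good feature matches and returns the most similar entry; it uses ORB instead of the SURF features named in the paper, and the match-distance threshold is an assumption.

```python
import cv2

def best_database_match(query_img, db_images):
    """Return the index of the database image most similar to the query,
    using ORB features and brute-force Hamming matching."""
    orb = cv2.ORB_create(nfeatures=1000)
    bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    _, q_des = orb.detectAndCompute(query_img, None)
    best_idx, best_score = -1, -1
    for idx, db_img in enumerate(db_images):
        _, des = orb.detectAndCompute(db_img, None)
        if des is None or q_des is None:
            continue
        matches = bf.match(q_des, des)
        good = [m for m in matches if m.distance < 40]   # threshold is an assumption
        if len(good) > best_score:
            best_idx, best_score = idx, len(good)
    # The user's position is then read from the stored location of this image.
    return best_idx
```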

Keywords: positioning, distance, camera, features, SURF (Speeded-Up Robust Features), database, estimation

Procedia PDF Downloads 347
1633 Multichannel Object Detection with Event Camera

Authors: Rafael Iliasov, Alessandro Golkar

Abstract:

Object detection based on event vision has been a dynamically growing field in computer vision for the last 16 years. In this work, we create multiple channels from a single event camera and propose an event fusion method (EFM) to enhance object detection in event-based vision systems. Each channel uses a different accumulation buffer to collect events from the event camera. We implement YOLOv7 for object detection, followed by a fusion algorithm. Our multichannel approach outperforms single-channel object detection by 0.7% in mean Average Precision (mAP) for detections overlapping ground truth at IoU = 0.5.
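
A minimal sketch of the multichannel idea, assuming an event stream with timestamp, pixel coordinates, and polarity: each channel accumulates events over a different time window, producing image-like tensors that a detector such as YOLOv7 could consume. The field names and window lengths are illustrative, and the paper's EFM fusion step is not reproduced.

```python
import numpy as np

def accumulate_channels(events, height, width, windows_us=(1000, 5000, 20000)):
    """Build multiple image-like channels from one event stream by accumulating
    events over different time windows (accumulation buffers).

    events: structured array with integer fields 't' (microseconds),
            'x', 'y' (pixel coordinates) and 'p' (polarity, +1/-1).
    Returns an array of shape (len(windows_us), height, width).
    """
    t_end = events["t"].max()
    channels = np.zeros((len(windows_us), height, width), dtype=np.float32)
    for c, win in enumerate(windows_us):
        recent = events[events["t"] >= t_end - win]
        # Signed accumulation: repeated events at a pixel add up.
        np.add.at(channels[c], (recent["y"], recent["x"]), recent["p"])
    return channels
```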

Keywords: event camera, object detection with multimodal inputs, multichannel fusion, computer vision

Procedia PDF Downloads 26
1632 Subpixel Corner Detection for Monocular Camera Linear Model Research

Authors: Guorong Sui, Xingwei Jia, Fei Tong, Xiumin Gao

Abstract:

Camera calibration is a fundamental issue in high-precision non-contact measurement, and it is necessary to analyze the reliability and application range of the linear model often used in camera calibration. Based on the imaging characteristics of monocular cameras, a camera model relating image pixel coordinates to three-dimensional space coordinates is built. Using our own customized template, the image pixel coordinates are obtained by a subpixel corner detection method. Without considering the aberration of the optical system, feature extraction and linearity analysis of the line segments in the template are performed. The experiment is repeated 11 times while varying the measuring distance, and the linearity of the camera is obtained by fitting the 11 groups of data. The measurement results of the camera model show that the relative error does not exceed 1% and that the repeated measurement error is on the order of 0.1 mm or less. Meanwhile, the model shows some measurement differences across regions and object distances. The experimental results show that this linear model is simple and practical and has good linearity within a certain range of object distances. These results provide a solid basis for establishing the linear camera model and have potential value for practical engineering measurement.
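
The customized template and the full linear-model fitting are not reproduced here, but the subpixel corner extraction and a simple linearity check can be sketched with standard OpenCV and NumPy calls; the detector parameters are illustrative.

```python
import cv2
import numpy as np

def subpixel_corners(gray, max_corners=100):
    """Detect corners, then refine them to sub-pixel accuracy."""
    corners = cv2.goodFeaturesToTrack(gray, maxCorners=max_corners,
                                      qualityLevel=0.01, minDistance=10)
    if corners is None:
        return np.empty((0, 2), dtype=np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 40, 0.001)
    refined = cv2.cornerSubPix(gray, np.float32(corners), (5, 5), (-1, -1), criteria)
    return refined.reshape(-1, 2)

def linearity_error(pixel_coords, world_coords):
    """Fit a straight line between one pixel axis and one world axis and
    report the maximum residual relative to the pixel-coordinate range."""
    slope, intercept = np.polyfit(world_coords, pixel_coords, 1)
    pred = slope * world_coords + intercept
    return np.max(np.abs(pred - pixel_coords)) / (pixel_coords.max() - pixel_coords.min())
```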

Keywords: camera linear model, geometric imaging relationship, image pixel coordinates, three dimensional space coordinates, sub-pixel corner detection

Procedia PDF Downloads 275
1631 X-Corner Detection for Camera Calibration Using Saddle Points

Authors: Abdulrahman S. Alturki, John S. Loomis

Abstract:

This paper discusses a corner detection algorithm for camera calibration. Calibration is a necessary step in many computer vision and image processing applications. Robust corner detection for an image of a checkerboard is required to determine intrinsic and extrinsic parameters. In this paper, an algorithm for fully automatic and robust X-corner detection is presented. Checkerboard corner points are automatically found in each image without user interaction or any prior information regarding the number of rows or columns. The approach represents each X-corner with a quadratic fitting function. Using the fact that the X-corners are saddle points, the coefficients in the fitting function are used to identify each corner location. The automation of this process greatly simplifies calibration. Our method is robust against noise and different camera orientations. Experimental analysis shows the accuracy of our method using actual images acquired at different camera locations and orientations.
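
A small sketch of the saddle-point test described above, assuming a grayscale patch centered on a candidate corner: fit a quadratic surface, check that the Hessian determinant is negative (the saddle condition), and solve the zero-gradient equations for the sub-pixel corner location. Patch selection and thresholds are left out, and this is not the authors' implementation.

```python
import numpy as np

def is_saddle_point(patch):
    """Fit z = a*x^2 + b*x*y + c*y^2 + d*x + e*y + f to an intensity patch
    and test whether its stationary point is a saddle (an X-corner)."""
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    x, y, z = xs.ravel(), ys.ravel(), patch.astype(np.float64).ravel()
    A = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    a, b, c, d, e, f = np.linalg.lstsq(A, z, rcond=None)[0]
    hessian_det = 4 * a * c - b * b          # negative determinant -> saddle point
    if hessian_det >= 0:
        return False, None
    # Zero-gradient condition gives the sub-pixel corner location inside the patch.
    cx, cy = np.linalg.solve([[2 * a, b], [b, 2 * c]], [-d, -e])
    return True, (cx, cy)
```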

Keywords: camera calibration, corner detector, edge detector, saddle points

Procedia PDF Downloads 405
1630 Frame Camera and Event Camera in Stereo Pair for High-Resolution Sensing

Authors: Khen Cohen, Daniel Yankelevich, David Mendlovic, Dan Raviv

Abstract:

We present a 3D stereo system for high-resolution sensing in both the spatial and the temporal domains by combining a frame-based camera and an event-based camera. We establish a method to merge both devices into one unified system and introduce a calibration process, followed by a correspondence technique and an interpolation algorithm for 3D reconstruction. We further provide a quantitative analysis of our system in terms of depth resolution, along with additional parameter analysis. We show experimentally how our system performs temporal super-resolution up to effectively 1 ms and can detect fast-moving objects and human micro-movements that can be used for micro-expression analysis. We also demonstrate how our method can extract colored events for an event-based camera without any degradation in spatial resolution, compared to a colored filter array.

Keywords: DVS-CIS stereo vision, micro-movements, temporal super-resolution, 3D reconstruction

Procedia PDF Downloads 295
1629 H.263 Based Video Transceiver for Wireless Camera System

Authors: Won-Ho Kim

Abstract:

In this paper, the design of an H.263-based wireless video transceiver for a wireless camera system is presented. It uses a standard Wi-Fi transceiver, and its coverage area extends up to 100 m. The standard H.263 video encoding technique is used for video compression, since a wireless video transmitter cannot transmit high-volume raw data in real time; the implemented system is capable of streaming NTSC 720x480 video at less than 1 Mbps.

Keywords: wireless video transceiver, video surveillance camera, H.263 video encoding, digital signal processing

Procedia PDF Downloads 361
1628 Development of 3D Laser Scanner for Robot Navigation

Authors: Ali Emre Öztürk, Ergun Ercelebi

Abstract:

Autonomous robotic systems need equipment analogous to the human eye for their movement. Robotic camera systems, distance sensors, and 3D laser scanners have been used in the literature. In this study, a 3D laser scanner has been produced for such autonomous robotic systems. In general, 3D laser scanners use two-dimensional laser range finders moving on one axis (1D) to generate the model. In this study, the model is obtained by a one-dimensional laser range finder moving on two axes (2D), which makes the laser scanner cheaper to produce. Furthermore, a motor driver and an embedded control board are used for the laser scanner, together with a user interface card that handles communication between these boards and the computer. With this laser scanner, the density of objects, the distances between objects, and the necessary pathways for the robot can be calculated. The data collected by the laser scanner system are converted into Cartesian coordinates to be modeled in the AutoCAD program. This study also demonstrates the synchronization between the computer user interface, AutoCAD, and the embedded systems, and as a result it makes the solution cheaper for such systems. The scanning results are sufficient for an autonomous robot, but the scan cycle time should be improved. This study also contributes to further work on hardware and software needs, since the system offers strong performance at low cost.
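
The conversion from pan/tilt range readings to Cartesian coordinates mentioned above can be sketched in a few lines; the axis conventions (pan about the vertical axis, tilt as elevation above the horizontal plane) are an assumption, since the scanner geometry is not specified in the abstract.

```python
import math

def scan_point_to_cartesian(distance_m, pan_deg, tilt_deg):
    """Convert one range reading from a 1-D laser range finder on a two-axis
    (pan/tilt) mount into Cartesian coordinates for CAD export."""
    pan = math.radians(pan_deg)    # rotation about the vertical axis
    tilt = math.radians(tilt_deg)  # elevation above the horizontal plane
    x = distance_m * math.cos(tilt) * math.cos(pan)
    y = distance_m * math.cos(tilt) * math.sin(pan)
    z = distance_m * math.sin(tilt)
    return x, y, z

# A full scan becomes a point list that can be written out for AutoCAD.
points = [scan_point_to_cartesian(d, p, t)
          for d, p, t in [(2.0, 0.0, 10.0), (2.1, 1.0, 10.0)]]
```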

Keywords: 3D laser scanner, embedded system, 1D laser range finder, 3D model

Procedia PDF Downloads 272
1627 A Wide View Scheme for Automobile's Black Box

Authors: Jaemyoung Lee

Abstract:

We propose a wide-view camera scheme for an automobile's black box. The proposed scheme uses commercially available camera lenses with view angles of about 120°. In the proposed scheme, we extend the view angle to approximately 200° using two cameras at the front side instead of the three lenses used in conventional black boxes.

Keywords: camera, black box, view angle, automobile

Procedia PDF Downloads 410
1626 Modal Analysis of a Cantilever Beam Using an Inexpensive Smartphone Camera: Motion Magnification Technique

Authors: Hasan Hassoun, Jaafar Hallal, Denis Duhamel, Mohammad Hammoud, Ali Hage Diab

Abstract:

This paper aims to prove the accuracy of an inexpensive smartphone camera as a non-contact vibration sensor for recovering the vibration modes of a vibrating structure such as a cantilever beam. A video of a vibrating beam is filmed using a smartphone camera and then processed by the motion magnification technique. Based on this method, the first two natural frequencies and their associated mode shapes are estimated experimentally and compared to the analytical ones. Results show a relative error of less than 4% between the experimental and analytical approaches for the first two natural frequencies of the beam. Also, for the first two mode shapes, a Modal Assurance Criterion (MAC) value above 0.9 between the two approaches is obtained. This small error between the different techniques confirms the viability of a cheap smartphone camera as a non-contact vibration sensor, particularly for structures vibrating at relatively low natural frequencies.
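
The MAC value quoted above has a standard definition; a minimal implementation for two mode-shape vectors is shown below (the motion-magnification pipeline itself is not reproduced).

```python
import numpy as np

def mac(phi_exp, phi_ana):
    """Modal Assurance Criterion between an experimental and an analytical
    mode-shape vector; values near 1 indicate well-correlated shapes."""
    num = np.abs(phi_exp.conj() @ phi_ana) ** 2
    den = (phi_exp.conj() @ phi_exp) * (phi_ana.conj() @ phi_ana)
    return float(np.real(num / den))
```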

Keywords: modal analysis, motion magnification, smartphone camera, structural vibration, vibration modes

Procedia PDF Downloads 146
1625 GIS-Based Automatic Flight Planning of Camera-Equipped UAVs for Fire Emergency Response

Authors: Mohammed Sulaiman, Hexu Liu, Mohamed Binalhaj, William W. Liou, Osama Abudayyeh

Abstract:

Emerging technologies such as camera-equipped unmanned aerial vehicles (UAVs) are increasingly being applied in building fire rescue to provide real-time visualization and 3D reconstruction of the entire fireground. However, flight planning of camera-equipped UAVs is usually a manual process, which is not sufficient to fulfill the needs of emergency management. This research proposes a Geographic Information System (GIS)-based approach to automatic flight planning of camera-equipped UAVs for building fire emergency response. In this research, the Haversine formula and lawn-mowing patterns are employed to automate flight planning based on geometrical and spatial information from GIS. The resulting flight mission satisfies the 3D reconstruction requirements of the fireground while accounting for flight execution safety and the visibility of camera frames. The proposed approach is implemented within a GIS environment through an application programming interface. A case study is used to demonstrate the effectiveness of the proposed approach. The results show that a flight mission can be generated in a timely manner for application to fire emergency response.
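
The Haversine formula and the lawn-mowing pattern mentioned above are standard; a minimal sketch of both is shown below, with the lane spacing given in degrees of latitude purely for simplicity (the paper's GIS integration and safety constraints are not reproduced).

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two latitude/longitude points."""
    R = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

def lawnmower_waypoints(south, west, north, east, lane_spacing_deg):
    """Generate a back-and-forth (lawn-mowing) waypoint pattern over a
    rectangular latitude/longitude area."""
    waypoints, lat, leftward = [], south, False
    while lat <= north:
        row = [(lat, east), (lat, west)] if leftward else [(lat, west), (lat, east)]
        waypoints.extend(row)
        leftward = not leftward          # alternate sweep direction each lane
        lat += lane_spacing_deg
    return waypoints
```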

Keywords: GIS, camera-equipped UAVs, automatic flight planning, fire emergency response

Procedia PDF Downloads 124
1624 Object Recognition System Operating from Different Type Vehicles Using Raspberry and OpenCV

Authors: Maria Pavlova

Abstract:

Nowadays, it is possible to mount a camera on different vehicles such as a quadcopter, a train, or an airplane. The camera can also be the input sensor in many different systems, which means that object recognition, as an integral part of monitoring and control, can be a key component of most intelligent systems. The aim of this paper is to focus on the object recognition process during vehicle movement. During the vehicle’s movement, the camera takes pictures of the environment without storing them in a database. If the camera detects an object of interest (for example, a human or an animal), the system saves the picture and sends it to the workstation in real time. This functionality is very useful in emergency or security situations where it is necessary to find a specific object. In another application, the camera can be mounted at a crossroad with little pedestrian traffic; if one or more persons approach the road, the traffic light turns green so that they can cross. This paper presents a system that solves the aforementioned problems. The architecture of the object recognition system includes the camera, a Raspberry Pi platform, a GPS system, a neural network, software, and a database. The camera in the system takes the pictures, and object recognition is done in real time using the OpenCV library on the Raspberry Pi board. An additional feature is the ability to display the GPS coordinates of the captured object’s position. The results of this processing are sent to a remote station, so the location of the specific object is known. Using a neural network, the module can learn to solve problems from incoming data and become part of a larger intelligent system. The present paper focuses on the design and integration of image recognition as a part of smart systems.

Keywords: camera, object recognition, OpenCV, Raspberry

Procedia PDF Downloads 217
1623 Person Re-Identification using Siamese Convolutional Neural Network

Authors: Sello Mokwena, Monyepao Thabang

Abstract:

In this study, we propose a comprehensive approach to address the challenges in person re-identification models. By combining a centroid tracking algorithm with a Siamese convolutional neural network model, our method excels in detecting, tracking, and capturing robust person features across non-overlapping camera views. The algorithm efficiently identifies individuals in the camera network, while the neural network extracts fine-grained global features for precise cross-image comparisons. The approach's effectiveness is further accentuated by leveraging the camera network topology for guidance. Our empirical analysis on benchmark datasets highlights its competitive performance, particularly evident when background subtraction techniques are selectively applied, underscoring its potential in advancing person re-identification techniques.

Keywords: camera network, convolutional neural network topology, person tracking, person re-identification, siamese

Procedia PDF Downloads 70
1622 Hand Gesture Recognition Interface Based on IR Camera

Authors: Yang-Keun Ahn, Kwang-Soon Choi, Young-Choong Park, Kwang-Mo Jung

Abstract:

Vision-based user interfaces for controlling TVs and PCs have the advantage of enabling natural control without being limited to a specific device. Accordingly, various studies on hand gesture recognition using RGB cameras or depth cameras have been conducted. However, such cameras have the disadvantage of lacking accuracy or of high construction cost. The proposed method uses a low-cost IR camera to accurately differentiate between the hand and the background. Also, complicated learning and template matching methodologies are not used; instead, the correlation between the fingertips extracted through curvatures is utilized to recognize Click and Move gestures.

Keywords: recognition, hand gestures, infrared camera, RGB cameras

Procedia PDF Downloads 403
1621 An Investigation of Direct and Indirect Geo-Referencing Techniques on the Accuracy of Points in Photogrammetry

Authors: F. Yildiz, S. Y. Oturanc

Abstract:

Advances in photogrammetric technology have replaced analog cameras with digital aerial cameras used together with aircraft GPS/IMU systems. In this system, while the position of the camera is determined with the GPS, the camera rotations are determined by the IMU. All around the world, digital aerial cameras have been used for photogrammetric applications over the last ten years. In this way, photogrammetric work can be carried out quickly and accurately, time can be used effectively, and costs can be reduced to a minimum. Geo-referencing techniques, which are the cornerstone of GPS/INS systems, bring flexibility to the photogrammetric triangulation of images required for adjustment (interior and exterior orientation). The geo-referencing process also helps to reduce the number of ground control points needed in photogrammetric applications. In this study, the effect of direct and indirect geo-referencing techniques on point accuracy was investigated in the production of photogrammetric maps.

Keywords: photogrammetry, GPS/IMU systems, geo-referencing, digital aerial camera

Procedia PDF Downloads 410
1620 Self-Calibration of Fish-Eye Camera for Advanced Driver Assistance Systems

Authors: Atef Alaaeddine Sarraj, Brendan Jackman, Frank Walsh

Abstract:

Tomorrow’s car will be more automated and increasingly connected. Innovative and intuitive interfaces are essential to accompany this functional enrichment. For that, today the automotive companies are competing to offer an advanced driver assistance system (ADAS) which will be able to provide enhanced navigation, collision avoidance, intersection support and lane keeping. These vision-based functions require an accurately calibrated camera. To achieve such differentiation in ADAS requires sophisticated sensors and efficient algorithms. This paper explores the different calibration methods applicable to vehicle-mounted fish-eye cameras with arbitrary fields of view and defines the first steps towards a self-calibration method that adequately addresses ADAS requirements. In particular, we present a self-calibration method after comparing different camera calibration algorithms in the context of ADAS requirements. Our method gathers data from unknown scenes while the car is moving, estimates the camera intrinsic and extrinsic parameters and corrects the wide-angle distortion. Our solution enables continuous and real-time detection of objects, pedestrians, road markings and other cars. In contrast, other camera calibration algorithms for ADAS need pre-calibration, while the presented method calibrates the camera without prior knowledge of the scene and in real-time.
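
The self-calibration pipeline itself is not reproduced here; assuming intrinsics K and fisheye distortion coefficients D have already been estimated by some calibration step, the wide-angle distortion correction mentioned above can be sketched with OpenCV's fisheye module.

```python
import cv2
import numpy as np

def undistort_fisheye(img, K, D):
    """Correct wide-angle (fisheye) distortion given a 3x3 intrinsic matrix K
    and a 4-element fisheye distortion vector D from a prior calibration."""
    h, w = img.shape[:2]
    new_K = cv2.fisheye.estimateNewCameraMatrixForUndistortRectify(
        K, D, (w, h), np.eye(3), balance=0.0)
    map1, map2 = cv2.fisheye.initUndistortRectifyMap(
        K, D, np.eye(3), new_K, (w, h), cv2.CV_16SC2)
    return cv2.remap(img, map1, map2, interpolation=cv2.INTER_LINEAR)
```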

Keywords: advanced driver assistance system (ADAS), fish-eye, real-time, self-calibration

Procedia PDF Downloads 250
1619 A Simple Autonomous Hovering and Operating Control of Multicopter Using Only Web Camera

Authors: Kazuya Sato, Toru Kasahara, Junji Kuroda

Abstract:

In this paper, an autonomous hovering control method for a multicopter using only a Web camera is proposed. Recently, various control methods for autonomous multicopter flight have been proposed. However, in the previously proposed methods, a motion capture system (e.g., OptiTrack) or a laser range finder is often used to measure the position and attitude of the multicopter. To achieve autonomous flight control of a multicopter with simple equipment, we propose an autonomous flight control method using an AR marker and a Web camera. The AR marker provides the position of the multicopter in three-dimensional Cartesian coordinates, and this position is then connected to the aileron, elevator, and throttle operations. A simple PID control method is applied to each operation, and the controller gains are adjusted. Experimental results are given to show the effectiveness of our proposed method. Moreover, another simple operation method for autonomous multicopter flight control is also proposed.
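
As a sketch of the control portion (not the authors' implementation), a textbook PID loop per axis is enough to illustrate the idea; the gains below are placeholders and would need tuning as described in the abstract.

```python
class PID:
    """Textbook PID controller; one instance per control axis."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, error, dt):
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# One controller per axis; the marker pose gives (x, y, z) errors relative to the
# hover setpoint. Gains are illustrative placeholders.
roll_pid = PID(0.8, 0.0, 0.2)
pitch_pid = PID(0.8, 0.0, 0.2)
thrust_pid = PID(1.2, 0.1, 0.3)
```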

Keywords: autonomous hovering control, multicopter, Web camera, operation

Procedia PDF Downloads 560
1618 Restructuring of Embedded System Design Course: Making It Industry Compliant

Authors: Geetishree Mishra, S. Akhila

Abstract:

Embedded System Design, the most challenging course of electronics engineering, has always been appreciated and well acclaimed by the students of electronics and its related branches of engineering. An embedded system, being a product of multiple application domains, requires skilled manpower to be well designed and tested in every important aspect of both hardware and software. In the current industrial scenario, the requirements are even more rigorous and highly demanding and need to be on par with advanced technologies. Fresh engineers are expected to be thoroughly groomed by the academic system and the teaching community. Graduates with the ability to understand both complex technological processes and technical skills are increasingly sought after in today's embedded industry. So, the need of the day is to restructure the undergraduate course, both theory and lab practice, along with the teaching methodologies, to meet the industrial requirements. This paper focuses on the importance of such a need in the present education system.

Keywords: embedded system design, industry requirement, syllabus restructuring, project-based learning, teaching methodology

Procedia PDF Downloads 660
1617 An Automated Procedure for Estimating the Glomerular Filtration Rate and Determining the Normality or Abnormality of the Kidney Stages Using an Artificial Neural Network

Authors: Hossain A., Chowdhury S. I.

Abstract:

Introduction: The use of a gamma camera is a standard procedure in nuclear medicine facilities or hospitals to diagnose chronic kidney disease (CKD), but the gamma camera does not precisely stage the disease. The authors sought to determine whether an artificial neural network (ANN) could be used to determine whether CKD is in a normal or abnormal stage based on GFR values. Method: The 250 kidney patients (188 for training, 62 for testing) who underwent an ultrasonography test for renal diagnosis in our nuclear medicine center were scanned using a gamma camera. Before the scanning procedure, the patients received an injection of ⁹⁹ᵐTc-DTPA. The gamma camera computes the pre- and post-syringe radioactive counts after the injection has been pushed into the patient's vein. The artificial neural network uses the softmax function with cross-entropy loss to determine whether CKD is normal or abnormal based on the GFR value in the output layer. Results: The proposed ANN model had an accuracy of 99.20% according to k-fold cross-validation. The sensitivity and specificity were 99.10% and 99.20%, respectively. The AUC was 0.994. Conclusion: The proposed model can distinguish between normal and abnormal stages of CKD by using an artificial neural network. The gamma camera could be upgraded to diagnose normal or abnormal stages of CKD with an appropriate GFR value following the clinical application of the proposed model.
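
A toy sketch of the output stage described above, softmax with cross-entropy over two classes (normal/abnormal), is shown below with NumPy; the input features, network size, and weights are made up for illustration and are not the authors' model.

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max(axis=1, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(probs, labels):
    """Mean cross-entropy loss for integer labels (0 = normal, 1 = abnormal)."""
    n = labels.shape[0]
    return -np.mean(np.log(probs[np.arange(n), labels] + 1e-12))

# Toy forward pass: one hidden layer on a GFR-derived feature vector.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 3))             # e.g. [GFR, age, counts] per patient (made-up features)
W1, b1 = rng.normal(size=(3, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 2)), np.zeros(2)
hidden = np.maximum(0, X @ W1 + b1)      # ReLU hidden layer
probs = softmax(hidden @ W2 + b2)
loss = cross_entropy(probs, np.array([0, 1, 0, 1]))
```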

Keywords: artificial neural network, glomerular filtration rate, stages of the kidney, gamma camera

Procedia PDF Downloads 102
1616 Smart Side View Mirror Camera for Real Time System

Authors: Nunziata Ivana Guarneri, Arcangelo Bruna, Giuseppe Spampinato, Antonio Buemi

Abstract:

In the last decade, automotive companies have invested heavily in innovation across many aspects of automatic driver assistance systems. One innovation concerns the use of a smart camera placed on the car’s side mirror for monitoring the rear and lateral road situation. A common road scenario is overtaking the preceding car; in this case, a brief distraction or a loss of concentration can lead the driver to undertake this action even if another vehicle is already overtaking, leading to serious accidents. A valid support for secure driving is a smart camera system that automatically analyzes the road scenario and consequently warns the driver when another vehicle is overtaking. This paper describes a method for monitoring the side view of a vehicle by using camera optical-flow motion vectors. The proposed solution detects the presence of incoming vehicles, assesses their distance from the host car, and warns the driver through different levels of alert according to the estimated distance. Due to its low complexity and computational cost, the proposed system ensures real-time performance.
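
A minimal sketch of the motion-vector step, assuming a fixed region of interest in the mirror image: dense optical flow is computed between consecutive frames, and the horizontal flow component is used as a crude approaching-vehicle cue. The distance estimation, alert levels, and Kalman-filter tracking mentioned in the abstract and keywords are not reproduced.

```python
import cv2
import numpy as np

def approaching_motion(prev_gray, gray, roi):
    """Score how strongly objects inside a side-mirror region of interest
    move in the positive horizontal direction, using dense optical flow.

    roi: (x, y, w, h). Whether positive flow means 'approaching' depends on
    the mirror/camera orientation and is an assumption here.
    """
    x, y, w, h = roi
    flow = cv2.calcOpticalFlowFarneback(
        prev_gray[y:y + h, x:x + w], gray[y:y + h, x:x + w],
        None, 0.5, 3, 15, 3, 5, 1.2, 0)
    fx = flow[..., 0]                      # horizontal motion vectors
    return float(np.mean(fx[fx > 0])) if np.any(fx > 0) else 0.0
```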

Keywords: camera calibration, ego-motion, Kalman filters, object tracking, real time systems

Procedia PDF Downloads 225
1615 Multiplayer RC-car Driving System in a Collaborative Augmented Reality Environment

Authors: Kikuo Asai, Yuji Sugimoto

Abstract:

We developed a prototype system for multiplayer RC-car driving in a collaborative Augmented Reality (AR) environment. The tele-existence environment is constructed by superimposing digital data onto images captured by a camera on an RC-car, enabling players to experience an augmented coexistence of the digital content and the real world. Marker-based tracking was used for estimating the position and orientation of the camera. Multiple RC-cars can be operated in a field where square markers are arranged. The video images captured by the camera are transmitted to a PC for visual tracking. The RC-cars are also tracked by an infrared camera attached to the ceiling, reducing instability in the visual tracking. Multimedia data such as text and graphics are visualized and overlaid onto the video images in a geometrically correct manner. The prototype system allows a tele-existence sensation to be augmented in a collaborative AR environment.

Keywords: multiplayer, RC-car, collaborative environment, augmented reality

Procedia PDF Downloads 287
1614 Discussing Embedded versus Central Machine Learning in Wireless Sensor Networks

Authors: Anne-Lena Kampen, Øivind Kure

Abstract:

Machine learning (ML) can be implemented in Wireless Sensor Networks (WSNs) as a central solution or as a distributed solution where the ML is embedded in the nodes. Embedding improves privacy and may reduce prediction delay. In addition, the number of transmissions is reduced. However, quality factors such as prediction accuracy, fault detection efficiency, and coordinated control of the overall system suffer. Here, we discuss and highlight the trade-offs that should be considered when choosing between embedded and centralized ML, especially for multihop networks. In addition, we present estimations that demonstrate the energy trade-offs between embedded and centralized ML. Although the total network energy consumption is lower with central prediction, it makes the network more prone to partitioning due to the high forwarding load on the one-hop nodes. Moreover, continuous improvements in the number of operations per joule for embedded devices will move the energy balance toward embedded prediction.
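
A deliberately crude per-round energy model can make the trade-off concrete; whether the central or the embedded variant wins on total energy depends entirely on the placeholder parameters, but the load concentrated on the sink's one-hop neighbours grows with the number of raw messages forwarded, which is the partitioning concern noted above. This sketch is not the authors' estimation model.

```python
def energy_profile(n_nodes, avg_hops, n_one_hop, e_tx, e_rx, e_infer,
                   raw_msgs, embedded):
    """Illustrative per-round energy model for embedded vs. central ML in a
    multihop WSN. All per-message and per-inference energies are placeholders.

    Returns (total_network_energy, energy_spent_by_one_hop_relay_nodes).
    """
    msgs = 1 if embedded else raw_msgs               # one result vs. all raw samples per node
    radio = n_nodes * msgs * avg_hops * (e_tx + e_rx)
    compute = n_nodes * e_infer if embedded else e_infer   # per-node vs. single central model
    # Every message eventually crosses the one-hop ring around the sink.
    relay_load = n_nodes * msgs * (e_tx + e_rx) / max(n_one_hop, 1)
    return radio + compute, relay_load

central = energy_profile(50, 3, 5, e_tx=1.0, e_rx=0.5, e_infer=5.0,
                         raw_msgs=10, embedded=False)
local = energy_profile(50, 3, 5, e_tx=1.0, e_rx=0.5, e_infer=5.0,
                       raw_msgs=10, embedded=True)
```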

Keywords: central machine learning, embedded machine learning, energy consumption, local machine learning, wireless sensor networks, WSN

Procedia PDF Downloads 152
1613 Improvement of Camera Calibration Based on the Relationship between Focal Length and Aberration Coefficient

Authors: Guorong Sui, Xingwei Jia, Chenhui Yin, Xiumin Gao

Abstract:

In camera-based high-precision, non-contact measurement, geometric-optical aberration inevitably disturbs the measuring system. Moreover, the aberration varies with focal length, which increases the difficulty of calibrating the system. Therefore, understanding the relationship between focal length and aberration properties is very important for calibrating such measuring systems. In this study, we propose a new mathematical model, based on the plane calibration method of Zhang Zhengyou, and establish a relationship between the focal length and the aberration coefficient. By using this mathematical model and carefully modified compensation templates, the calibration precision of the system can be dramatically improved. The experimental results show that the relative error is less than 1%. This is important for optoelectronic imaging systems that measure, track, and position targets by changing the camera's focal length.

Keywords: camera calibration, aberration coefficient, vision measurement, focal length, mathematics model

Procedia PDF Downloads 363
1612 Analysis and Control of Camera Type Weft Straightener

Authors: Jae-Yong Lee, Gyu-Hyun Bae, Yun-Soo Chung, Dae-Sub Kim, Jae-Sung Bae

Abstract:

In general, fabric is heat-treated using a stenter machine in order to dry it and fix its shape. It is important to shape the fabric before the heat treatment because it is difficult to revert once the fabric is formed. To produce a product of the right shape, a camera-type weft straightener has recently been applied to capture and process fabric images quickly. It is more powerful than a photo-sensor in determining the final textile quality. Positioned in front of a stenter machine, the weft straightener helps spread the fabric evenly and keeps the angle between warp and weft constantly at a right angle by handling the skew and bow rollers. To handle this tricky procedure, a structural analysis should be carried out in advance, from which the control technology can be derived. The structural analysis identifies the specific contact/slippage characteristics between fabric and roller. We previously examined the applicability of the camera-type weft straightener to plain weave fabric and established its feasibility and the specific working conditions of the machine and rollers. In this research, we aimed to explore another application of the camera-type weft straightener, namely whether it can be used for special fabrics. To find the optimum condition, we increased the number of rollers. The analysis is carried out with ANSYS software using the Finite Element Analysis method, and the control function is demonstrated by experiment. In conclusion, the structural analysis of the weft straightener is done to identify the specific characteristics between roller and fabrics, the control of the skew and bow rollers is done to decrease the error in the angle between warp and weft, and it is proved that the camera-type straightener can also be used for special fabrics.

Keywords: camera type weft straightener, structure analysis, control, skew and bow roller

Procedia PDF Downloads 291
1611 Development of Intelligent Construction Management System Using Web-Camera Image and 3D Object Image

Authors: Hyeon-Seung Kim, Bit-Na Cho, Tae-Woon Jeong, Soo-Young Yoon, Leen-Seok Kang

Abstract:

Recently, construction projects have become large in size and complicated in site work. Web cameras are used to manage the construction sites of such large projects; they can be used for monitoring the construction schedule by comparing the actual work images with the planned work schedule. In particular, because the 4D CAD system, in which the construction appearance is continually simulated as a 3D CAD object according to the work schedule, is widely applied to construction projects, a system that compares the real image of the actual work captured by a web camera with the simulated image of the planned work represented by the 3D CAD object can serve as an intelligent construction schedule management system (ICON). Activities that are delayed relative to the planned schedule can be shown in red in the ICON as virtual reality objects. This study developed the ICON, and it was verified in a real bridge construction project in Korea. To verify the developed system, a web camera was installed and operated in a case project for a month. Because the angle and zoom of the web camera can be operated over the Internet, a project manager can easily monitor the site and take corrective action.

Keywords: 4D CAD, web-camera, ICON (intelligent construction schedule management system), 3D object image

Procedia PDF Downloads 505