Search results for: Camera calibration
308 Adjustment and Compensation Techniques for the Rotary Axes of Five-axis CNC Machine Tools
Authors: Tung-Hui Hsu, Wen-Yuh Jywe
Abstract:
Five-axis computer numerical control (CNC) machine tools (three linear and two rotary axes) are ideally suited to the fabrication of complex workpieces, such as dies, turbo blades, and cams. The locations of the axis average line and centerline of the rotary axes strongly influence the performance of these machines; however, techniques to compensate for eccentric error in the rotary axes remain weak. This paper proposes optical (Non-Bar) techniques capable of calibrating five-axis CNC machine tools and compensating for eccentric error in the rotary axes. The approach employs the measurement path in ISO/CD 10791-6 to determine the eccentric error in the two rotary axes, for which compensatory measures can then be implemented. Experimental results demonstrate that the proposed techniques can improve the performance of various five-axis CNC machine tools by more than 90%. Finally, a cutting test using a B-type five-axis CNC machine tool confirmed the usefulness of the proposed compensation technique.
Keywords: Calibration, compensation, rotary axis, five-axis computer numerical control (CNC) machine tools, eccentric error, optical calibration system, ISO/CD 10791-6
307 The Mechanistic Deconvolutive Image Sensor Model for an Arbitrary Pan–Tilt Plane of View
Authors: S. H. Lim, T. Furukawa
Abstract:
This paper presents a generalized form of the mechanistic deconvolution technique (GMD) for modeling image sensors in arbitrary pan–tilt planes of view. The mechanistic deconvolution technique (UMD) is modified with the given angles of a pan–tilt plane of view to formulate constraint parameters and characterize distortion effects, and thereby determine the corrected image data. As a result, no experimental setup or calibration is required. Owing to the mechanistic nature of the sensor model, the sensor image plane no longer needs to be orthogonal to its z-axis, and the dependency on image data is reduced. An experiment was constructed to evaluate the accuracy of a model created by GMD and its insensitivity to changes in sensor properties and in pan and tilt angles. This was compared with a pre-calibrated model and a model created by UMD using two sensors with different specifications. The GMD model achieved accuracy similar to the pre-calibrated model with one-seventh the number of iterations, and attained a mean error lower by a factor of 2.4 than the UMD model. The model also proved robust and, in comparison with the pre-calibrated and UMD models, improved the accuracy significantly.
Keywords: Image sensor modeling, mechanistic deconvolution, calibration, lens distortion
306 Low-Cost Robotic-Assisted Laparoscope
Authors: Ege Can Onal, Enver Ersen, Meltem Elitas
Abstract:
Laparoscopy is a surgical operation, well known as keyhole surgery. Because the operation is performed through small holes, the patient's scars are much smaller, recovery is faster, and the hospital stay is shorter than with open surgery. Several tools are used in laparoscopic operations; among them, the laparoscope has a crucial role. It provides the vision during the operation and is the main focus here. Since the operation area is very small, the motion of the surgical tools can be more limited in laparoscopic operations than in traditional surgery. To overcome this limitation, most laparoscopic tools have become more precise, dexterous, multi-functional, or automated. Here, we present a robotic-assisted laparoscope that is controlled with pedals directly by the surgeon. The movement of the laparoscope can thus be controlled better, so there is no need to calibrate the camera during the operation. The need for an assistant to control the movement of the laparoscope is eliminated, and the duration of the laparoscopic operation may be shorter since the surgeon operates the camera directly.
Keywords: Laparoscope, laparoscopy, low-cost, minimally invasive surgery, robotic-assisted surgery.
305 Automated Driving Deep Neural Network Model Accuracy and Performance Assessment in a Simulated Environment
Authors: David Tena-Gago, Jose M. Alcaraz Calero, Qi Wang
Abstract:
The evolution and integration of automated vehicles have become more and more tangible in recent years. State-of-the-art technological advances in the field of camera-based Artificial Intelligence (AI) and computer vision greatly favor the performance and reliability of Advanced Driver Assistance Systems (ADAS), leading to greater knowledge of vehicular operation and closer resemblance to human behaviour. However, the exclusive use of this technology still seems insufficient to control vehicular operation fully. To reveal the degree of accuracy of current camera-based automated driving AI modules, this paper studies the structure and behavior of one of the main solutions in a controlled testing environment. The results obtained clearly show the lack of reliability when the AI model is used exclusively in the perception stage, and hence the need for additional complementary sensors to improve safety and performance.
Keywords: Accuracy assessment, AI-Driven Mobility, Artificial Intelligence, automated vehicles.
304 In situ Real-Time Multivariate Analysis of Methanolysis Monitoring of Sunflower Oil Using FTIR
Authors: Pascal Mwenge, Tumisang Seodigeng
Abstract:
Population growth combined with the third industrial revolution has led to high demand for fuels. At the same time, the decline of global fossil fuel deposits and the air pollution caused by these fuels have compounded the challenges the world faces in meeting its need for energy. Therefore, new forms of environmentally friendly and renewable fuels, such as biodiesel, are needed. The primary analytical techniques for monitoring methanolysis yield have been chromatography and spectroscopy; these methods have proven reliable but are demanding, costly, and do not provide real-time monitoring. In this work, the in situ monitoring of biodiesel production from sunflower oil using FTIR (Fourier Transform Infrared) spectroscopy has been studied; the study was performed using an EasyMax Mettler Toledo reactor equipped with a DiComp (diamond) probe. The quantitative monitoring of methanolysis was performed by building a quantitative model with multivariate calibration using the iC Quant module of the iC IR 7.0 software. Fifteen samples of known concentration, taken in duplicate, were used for model calibration and cross-validation; the data were pre-processed using mean centering, variance scaling, square-root spectrum math, and solvent subtraction. These pre-processing steps improved the performance indexes RMSEC, RMSECV, RMSEP, and cumulative R² from 7.98 to 0.0096, 11.2 to 3.41, 6.32 to 2.72, and 0.9416 to 0.9999, respectively. R² values of 1 (training), 0.9918 (test), and 0.9946 (cross-validation) indicated the goodness of fit of the model. The model was tested against a univariate model; small discrepancies were observed at low concentrations due to unmodelled intermediates, but the two agreed closely at concentrations above 18%. The software eliminated the complexity of the Partial Least Squares (PLS) chemometrics. It was concluded that the model obtained could be used to monitor the methanolysis of sunflower oil at industrial and laboratory scale.
Keywords: Biodiesel, calibration, chemometrics, FTIR, methanolysis, multivariate analysis, transesterification.
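As a minimal sketch of the kind of multivariate calibration described above (not the authors' iC Quant workflow), the following Python example builds a PLS model with mean centering and variance scaling and reports RMSEC and RMSECV; the spectra, concentrations, and number of latent variables are synthetic placeholders.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

# Synthetic stand-ins for FTIR spectra (rows) and known conversion levels;
# a real workflow would load the measured spectra instead.
rng = np.random.default_rng(0)
n_samples, n_points = 30, 200
conc = np.linspace(2, 40, n_samples)                       # % conversion (placeholder)
band = np.exp(-((np.arange(n_points) - 120) / 10.0) ** 2)  # one absorption band
spectra = np.outer(conc, band) + 0.05 * rng.standard_normal((n_samples, n_points))

# PLS with scale=True applies mean centering and variance scaling internally,
# mirroring the pre-processing mentioned in the abstract.
pls = PLSRegression(n_components=3, scale=True)
pls.fit(spectra, conc)

y_cal = pls.predict(spectra).ravel()
y_cv = cross_val_predict(pls, spectra, conc, cv=5).ravel()

rmsec = np.sqrt(np.mean((y_cal - conc) ** 2))    # calibration error
rmsecv = np.sqrt(np.mean((y_cv - conc) ** 2))    # cross-validation error
print(f"RMSEC = {rmsec:.4f}, RMSECV = {rmsecv:.4f}")
```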
303 Calibration of 2D and 3D Optical Measuring Instruments in Industrial Environments at Submillimeter Range
Authors: A. Mínguez-Martínez, J. de Vicente
Abstract:
Modern manufacturing processes have led to the miniaturization of systems and, as a result, parts at the micro and nanoscale are produced. This trend seems set to become increasingly important in the near future. Moreover, as a requirement of Industry 4.0, the digitalization of production models and processes makes it very important to ensure that the dimensions of newly manufactured parts meet the specifications of the models. This makes it possible to reduce scrap and the cost of non-conformities while ensuring the stability of production. To ensure the quality of manufactured parts, it becomes necessary to carry out traceable measurements at scales below one millimeter. Providing 2D and 3D measurements at this scale with adequate traceability to the SI unit of length (the meter) is a problem that does not have a unique solution in industrial environments. Researchers in the field of dimensional metrology all around the world are working on this issue. A solution for industrial environments, even if incomplete, would enable working with some traceability. At this point, we believe that the study of surfaces could provide a first approximation to a solution. In this paper, we propose a calibration procedure for the scales of optical measuring instruments, particularized for a confocal microscope, using material standards that are easy to find and calibrate in metrology and quality laboratories in industrial environments. Confocal microscopes are measuring instruments capable of filtering out-of-focus reflected light so that only the in-focus part of the surface reaches the detector. By taking pictures at different Z levels of focus, specialized software interpolates between the planes and reconstructs the surface geometry as a 3D model. As is easy to deduce, it is necessary to give traceability to each axis. As a complementary result, the roughness parameter Ra will be traced to the reference. Although the solution is designed for a confocal microscope, it may be used, with minor changes, for the calibration of other optical measuring instruments.
Keywords: Industrial environment, confocal microscope, optical measuring instrument, traceability.
302 Low Cost Technique for Measuring Luminance in Biological Systems
Abstract:
In this work, the relationship between the melanin content of a tissue and the subsequent absorption of light through that tissue was determined using a digital camera. This technique proved to be simple, cost-effective, efficient, and reliable. Tissue phantom samples were created using milk and soy sauce to simulate the optical properties of melanin in human tissue; increasing the concentration of soy sauce in the milk corresponds to an increase in an individual's melanin content. Two methods were employed to measure the light transmitted through the sample. The first was direct measurement of the transmitted intensity using a conventional lux meter. The second involved correctly calibrating an ordinary digital camera and using image analysis software to calculate the intensity transmitted through the phantom. The results from these methods were then compared graphically with the theoretical relationship between the intensity of transmitted light and the concentration of absorbers in the sample. Conclusions were then drawn about the effectiveness and efficiency of these low-cost methods.
Keywords: Tissue phantoms, scattering coefficient, albedo, low-cost method.
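To illustrate the kind of comparison described (not the authors' actual data), the following Python sketch evaluates a Beer-Lambert-style attenuation curve against hypothetical camera-derived intensities obtained through a linear pixel-to-intensity calibration; every numerical value below is an illustrative assumption.

```python
import numpy as np

# Beer-Lambert-style attenuation: I = I0 * exp(-mu_a * L * c), where c is the
# absorber ("melanin", here soy sauce) concentration. Values are illustrative.
I0 = 1000.0                               # incident intensity (arbitrary units)
mu_a_L = 0.08                             # absorption coefficient x path length (assumed)
concentrations = np.linspace(0, 50, 6)    # % soy sauce in the phantom (assumed)
I_theory = I0 * np.exp(-mu_a_L * concentrations)

# Hypothetical camera readings: mean pixel value of the transmitted spot,
# converted to intensity via an assumed linear calibration I = a * pixel + b.
a, b = 3.9, 2.0
pixel_means = np.array([255, 116, 52, 23, 10, 4], dtype=float)
I_camera = a * pixel_means + b

relative_error = np.abs(I_camera - I_theory) / I_theory
print(np.round(relative_error * 100, 1))   # % deviation from the theoretical curve
```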
301 Blinking Characteristics and Corneal Staining in Different Soft Lens Materials
Authors: Bashirah Ishak, Jacyln JiaYing Thye, Bariah Mohd Ali, Norhani Mohidin
Abstract:
Background: Contact lens (CL) wear can cause changes in blinking and corneal staining. Aims and Objectives: To determine the effects of CL materials (HEMA and SiHy) on spontaneous blink rate, blinking patterns, and corneal staining after 2 months of wear. Methods: Ninety subjects in 3 groups (control, HEMA, and SiHy) were assessed at baseline and at 2 months. Blink rate was recorded using a video camera. Blinking patterns were assessed with a digital camera and slit lamp biomicroscope. Corneal staining was graded using the IER grading scale. Results: There were no significant differences in any parameter at baseline. At 2 months, CL wearers showed a significant increase in average blink rate (F(1.626, 47.141) = 7.250, p = 0.003; F(2, 58) = 6.240, p = 0.004) and corneal staining (χ²(2, n = 30) = 31.921, p < 0.001; χ²(2, n = 30) = 26.909, p < 0.001). Conclusion: Blinking characteristics and corneal staining were not influenced by the soft CL material.
Keywords: Spontaneous blinking, cornea staining, grading, soft contact lenses.
300 A New Approach for Counting Passersby Utilizing Space-Time Images
Authors: A. Elmarhomy, S. Karungaru, K. Terada
Abstract:
Understanding the number of people and the flow of persons is useful for efficient facility management and for improving a company's sales. This paper introduces an automated method for counting passersby using virtual vertical measurement lines. The process of recognizing a passerby is carried out using an image sequence obtained from a USB camera. The space-time image represents the human regions, which are extracted using a segmentation process. To handle the problem of mismatching, template matching is performed in different color spaces, and the best match is chosen automatically to determine the passerby's direction and speed. A relation between passerby speed and the human-pixel area is used to distinguish between one and two passersby. In the experiment, the camera is fixed at the entrance door of the hall in a side-viewing position. Finally, experimental results verify the effectiveness of the presented method, which correctly detects and counts passersby with respect to direction with an accuracy of 97%.
Keywords: Counting passersby, virtual-vertical measurement line, passerby speed, space-time image
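A minimal Python/OpenCV sketch of two of the ingredients named above: building a space-time image from the pixel column under a virtual vertical line, and running template matching in more than one color space to keep the strongest match. The file names, line position, and template are assumptions, not the authors' implementation.

```python
import cv2
import numpy as np

# Build a space-time image by stacking the pixel column under a virtual
# vertical measurement line over consecutive frames.
cap = cv2.VideoCapture("entrance.avi")      # hypothetical video
line_x = 320                                # virtual vertical line position (assumed)
columns = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    columns.append(frame[:, line_x])        # H x 3 slice at the virtual line
cap.release()
space_time = np.stack(columns, axis=1)      # height x time x 3 image

# Template matching in two color spaces; keep whichever match is stronger.
template = cv2.imread("passerby_template.png")   # hypothetical template
best = None
for code in (None, cv2.COLOR_BGR2HSV):
    st = space_time if code is None else cv2.cvtColor(space_time, code)
    tp = template if code is None else cv2.cvtColor(template, code)
    res = cv2.matchTemplate(st, tp, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(res)
    if best is None or max_val > best[0]:
        best = (max_val, max_loc)
print("best match score/location:", best)
```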
299 The Excess Loop Delay Calibration in a Bandpass Continuous-Time Delta Sigma Modulators Based on Q-Enhanced LC Filter
Authors: Sorore Benabid
Abstract:
Q-enhanced LC filters are the most widely used architecture in bandpass (BP) continuous-time (CT) delta-sigma (ΣΔ) modulators due to their high-frequency operation, higher linearity than active filters, and the high quality factor obtained by the Q-enhancement technique. This technique consists of using a negative resistance that compensates for the ohmic losses in the on-chip inductor. However, it introduces a zero in the filter transfer function that affects the modulator performance in terms of dynamic range (DR), stability, and in-band noise (signal-to-noise ratio, SNR). In this paper, we study the effect of this zero and demonstrate that a calibration of the excess loop delay (ELD) is required to ensure the best performance of the modulator. System-level simulations are carried out for a 2nd-order BP CT ΣΔ modulator at a center frequency of 300 MHz. Simulation results indicate that the optimal ELD should be reduced by 13% to achieve the maximum SNR and DR compared to the ideal LC-based ΣΔ modulator.
Keywords: Continuous-time bandpass delta-sigma modulators, excess loop delay, on-chip inductor, Q-enhanced LC filter.
298 Wrap-around View Equipped on Mobile Robot
Authors: Sun Lim, Sewoong Jun, Il-Kyun Jung
Abstract:
This paper presents a wrap-around view system built from four smart-camera modules and a remote motion control scheme for a mobile robot equipped with this system. A two-level scheme for remote motion control with a smart pad (iPad) is introduced. At the low level, the wrap-around view system is operated to keep the reference points lying on the top-view image plane. At the higher level, an image-based motion controller drives the mobile platform to the desired position or tracks the desired motion plan through image-feature feedback. The designed wrap-around view system offers the following advantages: 1) a satisfactory solution to the FOV and affine problem; 2) freedom from complex constraints on the robot pose. The performance of the wrap-around view system for mobile robot remote control is demonstrated by experimental results.
Keywords: Four smart cameras, wrap-around view, remote mobile robot control
297 The Use of the Flat Field Panel for the On-Ground Calibration of Metis Coronagraph on Board of Solar Orbiter
Authors: C. Casini, V. Da Deppo, P. Zuppella, P. Chioetto, A. Slemer, F. Frassetto, M. Romoli, F. Landini, M. Pancrazzi, V. Andretta, E. Antonucci, A. Bemporad, M. Casti, Y. De Leo, M. Fabi, S. Fineschi, F. Frassati, C. Grimani, G. Jerse, P. Heinzel, K. Heerlein, A. Liberatore, E. Magli, G. Naletto, G. Nicolini, M.G. Pelizzo, P. Romano, C. Sasso, D. Spadaro, M. Stangalini, T. Straus, R. Susino, L. Teriaca, M. Uslenghi, A. Volpicelli
Abstract:
Solar Orbiter, launched on February 9th, 2020, is an ESA/NASA mission conceived to study the Sun. The payload is composed of 10 instruments, among which is the Metis coronagraph. A coronagraph takes images of the solar corona: its occulter element simulates a total solar eclipse. This work presents some of the results obtained in the visible light band (580-640 nm) using a flat field panel source. The flat field panel gives uniform illumination; consequently, it was used during the on-ground calibration for several purposes: evaluating the response of each pixel of the detector (linearity) and characterizing the field of view of the coronagraph. A major result is the verification that the field-of-view (FoV) requirement of Metis is fulfilled. Investigations are in progress to verify that the performance measured on-ground has not changed after launch.
Keywords: Space instrumentation, Metis, solar coronagraph, flat field.
296 A Low-Cost Vision-Based Unmanned Aerial System for Extremely Low-Light GPS-Denied Navigation and Thermal Imaging
Authors: Chang Liu, John Nash, Stephen D. Prior
Abstract:
This paper presents the design and implementation details of a complete unmanned aerial system (UAS) based on commercial-off-the-shelf (COTS) components, focusing on safety, security, and search-and-rescue scenarios in GPS-denied environments. In particular, the aerial platform is capable of semi-autonomously navigating through extremely low-light, GPS-denied indoor environments based on onboard sensors only, including a downward-facing optical flow camera. In addition, a low-cost payload camera system is developed to stream both infrared video and visible-light video to a ground station in real time, for the purpose of detecting signs of life and hidden humans. The total cost of the complete system is estimated to be $1150, and the effectiveness of the system has been tested and validated in practical scenarios.
Keywords: Unmanned aerial system, commercial-off-the-shelf, extremely low-light, GPS-denied, optical flow, infrared video.
295 Improved Rare Species Identification Using Focal Loss Based Deep Learning Models
Authors: Chad Goldsworthy, B. Rajeswari Matam
Abstract:
The use of deep learning for species identification in camera trap images has revolutionised our ability to study, conserve, and monitor species in a highly efficient and unobtrusive manner, with state-of-the-art models achieving accuracies surpassing manual human classification. The high imbalance of camera trap datasets, however, results in poor accuracies for minority (rare or endangered) species due to their relative insignificance to the overall model accuracy. This paper investigates the use of Focal Loss, in comparison to the traditional Cross Entropy Loss function, to improve the identification of minority species in the “255 Bird Species” dataset from Kaggle. The results show that, although Focal Loss slightly decreased the accuracy of the majority species, it increased the F1-score by 0.06 and improved the identification of the bottom two, five, and ten (minority) species by 37.5%, 15.7%, and 10.8%, respectively, as well as yielding an improvement in overall accuracy of 2.96%.
Keywords: Convolutional neural networks, data imbalance, deep learning, focal loss, species classification, wildlife conservation.
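For reference, a minimal NumPy sketch of the two loss functions being compared: multi-class cross-entropy and focal loss (Lin et al.). The γ and α values and the toy predictions are assumptions, not the settings used in the paper.

```python
import numpy as np

def cross_entropy(probs, labels):
    # Standard multi-class cross-entropy: -log p_true, averaged over samples.
    p_true = probs[np.arange(len(labels)), labels]
    return -np.mean(np.log(p_true + 1e-12))

def focal_loss(probs, labels, gamma=2.0, alpha=0.25):
    # Focal loss: the (1 - p_true)^gamma factor down-weights well-classified
    # (mostly majority-class) examples so rare classes contribute more to the
    # gradient. gamma and alpha here are assumed, commonly used values.
    p_true = probs[np.arange(len(labels)), labels]
    return -np.mean(alpha * (1.0 - p_true) ** gamma * np.log(p_true + 1e-12))

# Toy example: two easy majority-class predictions, one hard minority-class one.
probs = np.array([[0.95, 0.03, 0.02],
                  [0.90, 0.05, 0.05],
                  [0.30, 0.60, 0.10]])
labels = np.array([0, 0, 2])            # third sample is a misclassified rare class
print("CE   :", round(cross_entropy(probs, labels), 3))
print("Focal:", round(focal_loss(probs, labels), 3))
```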
294 Process and Supply-Chain Optimization for Testing and Verification of Formation Tester/Pressure-While-Drilling Tools
Authors: Vivek V, Hafeez Syed, Darren W Terrell, Harit Naik, Halliburton
Abstract:
Applying a rigorous process to optimize the elements of a supply-chain network resulted in reduced waiting time for both the service provider and the customer. Different sources of downtime of the hydraulic pressure controller/calibrator (HPC) were causing interruptions in operations. The process examined all the issues to drive greater efficiencies. The issues included inherent design issues with the HPC pump, contamination of the HPC with impurities, and the lead time required for annual calibration in the USA. The HPC is used for mandatory testing/verification of formation tester/pressure measurement/logging-while-drilling tools by oilfield service providers, including Halliburton. After market study and analysis, it was concluded that the current HPC model is best suited to the oilfield industry. To use the existing HPC model effectively, design and contamination issues were addressed through design and process improvements. An optimum network is proposed after comparing different supply-chain models for calibration lead-time reduction.
Keywords: Hydraulic Pressure Controller/Calibrator, M/LWD, Pressure, FTWD
293 Controlling 6R Robot by Visionary System
Authors: Azamossadat Nourbakhsh, Moharram Habibnezhad Korayem
Abstract:
In visual servoing systems, the data obtained by vision is used for controlling robots. In this project, the simulator previously proposed for simulating the performance of a 6R robot was first examined in terms of software and testing, and existing defects in it were remedied. In the first version of the simulation, the robot was directed toward the target object only in a position-based manner using two cameras in the environment. In the new version of the software, three cameras are used simultaneously. The camera installed as eye-in-hand on the end-effector of the robot is used for visual servoing in a feature-based manner. The target object is recognized according to its characteristics, and the robot is directed toward the object following an algorithm similar to the function of the human eye. Then, the function and accuracy of the operation of the robot are examined through position-based visual servoing using two cameras installed as eye-to-hand in the environment. Finally, the obtained results are tested against the ANSI/RIA R15.05-2 standard.
Keywords: 6R robot, camera, visual servoing, feature-based visual servoing, position-based visual servoing, performance tests.
292 Investigation on Toxicity of Manufactured Nanoparticles to Bioluminescence Bacteria Vibrio fischeri
Authors: E. Binaeian, SH. Soroushnia
Abstract:
Acute toxicity of nano-SiO2, ZnO, MCM-41 (mesoporous silica), Cu, multi-wall carbon nanotubes (MWCNT), single-wall carbon nanotubes (SWCNT), and coated Fe to the bacterium Vibrio fischeri was evaluated using a homemade luminometer. The nominal effective concentrations (EC) causing 20% and 50% inhibition of bioluminescence were calculated using two mathematical models at exposure times of 5 and 30 minutes. The luminometer was designed with a photomultiplier (PMT) detector. A luminol chemiluminescence reaction was carried out to obtain the calibration graph. In the linear calibration range, the correlation coefficient and coefficient of variation (CV) were 0.988 and 3.21%, respectively, demonstrating suitable accuracy and reproducibility of the instrument. An important part of this research was optimizing the conditions for maximum bioluminescence. The optimal culture conditions for Vibrio fischeri in liquid media were stirring at 120 rpm at a temperature of 15 °C to 18 °C with incubation for 24 to 72 hours, while the solid medium was held at 18 °C for 48 hours. After a 30-min contact time with Vibrio fischeri, the ZnO nanoparticle suspension showed the highest toxicity, while the SiO2 nanoparticles showed the lowest. After a 5-min exposure, ZnO was the strongest and MCM-41 the weakest toxicant.
Keywords: Bioluminescence, effective concentration, nanomaterials, toxicity, Vibrio fischeri.
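As a sketch of how EC20/EC50 values of this kind can be estimated (the paper's two mathematical models are not specified here), the following Python example fits a Hill-type log-logistic inhibition curve to illustrative luminescence-inhibition data with SciPy; the data points, units, and model form are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

# Log-logistic (Hill-type) inhibition model: fraction of bioluminescence
# inhibited as a function of nanoparticle concentration c (mg/L, assumed).
def inhibition(c, ec50, slope):
    return 1.0 / (1.0 + (ec50 / c) ** slope)

# Illustrative inhibition data after a fixed exposure time (not measured values).
conc = np.array([1, 5, 10, 25, 50, 100, 200], dtype=float)
inhib = np.array([0.04, 0.15, 0.28, 0.47, 0.66, 0.82, 0.91])

(ec50, slope), _ = curve_fit(inhibition, conc, inhib, p0=[30.0, 1.0])

# EC20 follows from inverting the model at inhibition = 0.2.
ec20 = ec50 * (0.2 / 0.8) ** (1.0 / slope)
print(f"EC50 = {ec50:.1f} mg/L, EC20 = {ec20:.1f} mg/L, slope = {slope:.2f}")
```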
291 Multi-Sensor Image Fusion for Visible and Infrared Thermal Images
Authors: Amit Kr. Happy
Abstract:
This paper is motivated by the importance of multi-sensor image fusion, with specific focus on infrared (IR) and visible image (VI) fusion for various applications including military reconnaissance. Image fusion can be defined as the process of combining two or more source images into a single composite image with extended information content that improves visual perception or feature extraction. These images can come from different modalities, such as a visible camera and an IR thermal imager. While visible images are captured from reflected radiation in the visible spectrum, thermal images are formed from thermal (IR) radiation that may be reflected or self-emitted. A digital color camera captures the visible source image, and a thermal IR camera acquires the thermal source image. In this paper, image fusion algorithms based on Multi-Scale Transforms (MST) and a region-based selection rule with consistency verification are proposed and presented. This research includes an implementation of the proposed image fusion algorithm in MATLAB, along with a comparative analysis to decide the optimum number of levels for the MST and the coefficient fusion rule. The results are presented, and several commonly used evaluation metrics are used to assess the method's validity. Experiments show that the proposed approach is capable of producing good fusion results. In deploying our image fusion approaches, we observed several challenges with popular image fusion methods: while their high computational cost and complex processing steps provide accurate fused results, they also make those methods hard to deploy in systems and applications that require real-time operation, high flexibility, and low computational load. The methods presented in this paper therefore aim to offer good results with minimal time complexity.
Keywords: Image fusion, IR thermal imager, multi-sensor, Multi-Scale Transform.
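A minimal Python sketch (the paper's implementation is in MATLAB) of one common MST fusion scheme: Laplacian-pyramid decomposition of the visible and thermal images with a max-absolute coefficient selection rule. The file names, number of levels, and the simple pixel-wise rule (rather than the paper's region-based rule with consistency verification) are assumptions.

```python
import cv2
import numpy as np

def laplacian_pyramid(img, levels):
    # Gaussian pyramid, then detail (Laplacian) levels plus the coarsest level.
    gp = [img.astype(np.float32)]
    for _ in range(levels):
        gp.append(cv2.pyrDown(gp[-1]))
    lp = []
    for i in range(levels):
        up = cv2.pyrUp(gp[i + 1], dstsize=(gp[i].shape[1], gp[i].shape[0]))
        lp.append(gp[i] - up)
    lp.append(gp[-1])
    return lp

def fuse(vis_path="visible.png", ir_path="thermal.png", levels=4):
    vis = cv2.imread(vis_path, cv2.IMREAD_GRAYSCALE)
    ir = cv2.imread(ir_path, cv2.IMREAD_GRAYSCALE)
    lp_v, lp_i = laplacian_pyramid(vis, levels), laplacian_pyramid(ir, levels)
    # Max-absolute coefficient rule at every pyramid level.
    fused_lp = [np.where(np.abs(a) >= np.abs(b), a, b) for a, b in zip(lp_v, lp_i)]
    # Reconstruct by upsampling and adding the detail levels back in.
    out = fused_lp[-1]
    for lvl in reversed(fused_lp[:-1]):
        out = cv2.pyrUp(out, dstsize=(lvl.shape[1], lvl.shape[0])) + lvl
    return np.clip(out, 0, 255).astype(np.uint8)
```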
290 Optical Flow Based System for Cross Traffic Alert
Authors: Giuseppe Spampinato, Salvatore Curti, Ivana Guarneri, Arcangelo Bruna
Abstract:
This document describes an advanced system and methodology for Cross Traffic Alert (CTA), able to detect vehicles that move into the vehicle's driving path from the left or right side. The camera may be mounted not only on a stationary vehicle, e.g., at a traffic light or an intersection, but also on one moving slowly, e.g., in a car park. In all of the aforementioned conditions, a driver's short loss of concentration or distraction can easily lead to a serious accident. The proposed system represents valid support to avoid these kinds of car crashes. It is an extension of our previous work on a clustering system that only works with fixed cameras. Only a vanishing-point calculation and simple optical-flow filtering, to eliminate motion vectors due to the car's own movement, are performed, which lets the system achieve high performance across different scenarios, cameras, and resolutions. The proposed system uses only the optical flow as input, which is implemented in hardware on the proposed platform; since the processing of the whole system is fast and low in power consumption, it is embedded directly in the camera framework, allowing all the processing to be executed in real time.
Keywords: Clustering, cross traffic alert, optical flow, real time, vanishing point.
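A minimal Python/OpenCV sketch of the filtering idea described above: compute sparse optical flow between two frames and keep only flow vectors that are roughly lateral and not radial with respect to an assumed vanishing point, i.e. candidates for crossing traffic. The thresholds, file names, and vanishing-point location are illustrative assumptions, not the hardware implementation in the paper.

```python
import cv2
import numpy as np

prev = cv2.imread("frame_t0.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("frame_t1.png", cv2.IMREAD_GRAYSCALE)
vp = np.array([prev.shape[1] / 2.0, prev.shape[0] * 0.45])   # assumed vanishing point

pts0 = cv2.goodFeaturesToTrack(prev, maxCorners=500, qualityLevel=0.01, minDistance=7)
pts1, status, _ = cv2.calcOpticalFlowPyrLK(prev, curr, pts0, None)

cross_traffic = []
for p0, p1, ok in zip(pts0.reshape(-1, 2), pts1.reshape(-1, 2), status.ravel()):
    if not ok:
        continue
    flow = p1 - p0
    if np.linalg.norm(flow) < 1.0:                # ignore near-static points
        continue
    radial = p0 - vp                              # direction of ego-motion (radial) flow
    cos_angle = np.dot(flow, radial) / (np.linalg.norm(flow) * np.linalg.norm(radial) + 1e-9)
    if abs(cos_angle) < 0.5 and abs(flow[0]) > abs(flow[1]):
        cross_traffic.append((p0, flow))          # lateral, non-radial motion
print(f"{len(cross_traffic)} candidate cross-traffic vectors")
```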
289 Geometric Contrast of a 3D Model Obtained by Means of Digital Photogrametry with a Quasimetric Camera on UAV Classical Methods
Authors: Julio Manuel de Luis Ruiz, Javier Sedano Cibrián, Rubén Pérez Álvarez, Raúl Pereda García, Cristina Diego Soroa
Abstract:
Nowadays, the use of drones has been extended to practically any human activity. One of the main applications is in the surveying field. In this regard, software programs that process the images captured by the drone's sensor in an almost automatic way have been developed and commercialized, but they only allow contrasting the results through control points. This work proposes the contrast of a 3D model obtained from a flight carried out with a drone and a non-metric camera (chosen for its low cost) against a second model obtained by means of historically endorsed classical methods. In addition, the contrast is carried out over a territory with significant unevenness, so as to test the model generated with photogrammetry, considering that photogrammetry with drones encounters more difficulties in terms of accuracy in this kind of situation. Distances, heights, surfaces, and volumes are measured on the basis of the 3D models generated, and the results are contrasted. The differences are about 0.2% for the measurement of distances and heights, 0.3% for surfaces, and 0.6% when measuring volumes. Although these differences are not important, they do not match the order of magnitude presented by salespeople.
Keywords: Accuracy, classical topographic, 3D model, photogrammetry, UAV.
288 Motion-Based Detection and Tracking of Multiple Pedestrians
Authors: A. Harras, A. Tsuji, K. Terada
Abstract:
Tracking of moving people has gained great importance due to rapid technological advancements in the field of computer vision. The objective of this study is to design a motion-based method for detecting and tracking multiple pedestrians walking randomly in different directions. In our proposed method, a Gaussian mixture model (GMM) is used to determine moving persons in image sequences. It adapts to changes that take place in the scene, such as varying illumination and moving objects that start and stop often. Background noise in the scene is eliminated by applying morphological operations, and the motion of tracked people is determined using the Kalman filter. The Kalman filter is applied to predict the tracked location in each frame and to determine the likelihood of each detection. We used a benchmark data set, recorded by a stationary side-wall camera, for the evaluation. The scenes from the data set are taken on a street and include up to eight people in front of the camera in two different scenes, with durations of 53 and 35 seconds, respectively. In the case of walking pedestrians in close proximity, the proposed method achieved a detection ratio of 87% and a tracking ratio of 77%. When the pedestrians are farther apart from each other, the detection ratio increases to 90% and the tracking ratio to 79%.
Keywords: Automatic detection, tracking, pedestrians.
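A minimal Python/OpenCV sketch of the pipeline described above: MOG2 (GMM) background subtraction, morphological clean-up, and a constant-velocity Kalman filter predicting and correcting a detection's position. For brevity a single Kalman filter is used rather than one per pedestrian, and the video file name, thresholds, and noise covariances are assumptions.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("street_scene.avi")
bg = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16, detectShadows=True)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

# Constant-velocity model: state [x, y, vx, vy], measurement [x, y].
kf = cv2.KalmanFilter(4, 2)
kf.transitionMatrix = np.array([[1, 0, 1, 0], [0, 1, 0, 1],
                                [0, 0, 1, 0], [0, 0, 0, 1]], np.float32)
kf.measurementMatrix = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], np.float32)
kf.processNoiseCov = 1e-2 * np.eye(4, dtype=np.float32)
kf.measurementNoiseCov = 1e-1 * np.eye(2, dtype=np.float32)
kf.errorCovPost = np.eye(4, dtype=np.float32)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = bg.apply(frame)                                        # GMM foreground mask
    mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)[1]    # drop shadow pixels
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)         # remove background noise
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    predicted = kf.predict()                                      # predicted [x, y, vx, vy]
    for c in contours:
        if cv2.contourArea(c) < 400:                              # ignore small blobs
            continue
        x, y, w, h = cv2.boundingRect(c)
        centroid = np.array([[x + w / 2.0], [y + h / 2.0]], np.float32)
        kf.correct(centroid)                                      # update with the detection
cap.release()
```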
287 Research on the Strategy of Orbital Avoidance for Optical Remote Sensing Satellite
Authors: Zheng Dian Xun, Cheng Bo, Lin Hetong
Abstract:
This paper focuses on the orbital avoidance strategy of an optical remote sensing satellite. The optical remote sensing satellite, moving along a Sun-synchronous orbit, is equipped with laser-warning equipment to warn of laser attacks on the CCD camera. This paper explores a satellite avoidance strategy to protect the CCD camera and the satellite itself. The satellite can evade to several target points expressed in the orbital coordinates of a virtual satellite. The so-called virtual satellite is a passive vehicle that coincides with the satellite at the initial stage of avoidance. The target points share the same orbital period and semi-major axis as the virtual satellite, which ensures that the properties of the satellite's Sun-synchronous orbit remain unchanged. Moreover, to further strengthen its avoidance capability, the satellite can perform multi-target-point avoidance maneuvers. When the satellite's orbital tasks must be fulfilled, the orbit can be restored to that of the virtual satellite through orbit maneuvers. The avoidance maneuvers adopt pulse guidance, and the fuel consumption is optimized. The avoidance strategy discussed in this article is applicable to optical remote sensing satellites encountering hostile attacks from space-based anti-satellite lasers.
Keywords: Optical remote sensing satellite, satellite avoidance, virtual satellite, avoid target-point, avoid maneuver.
286 Image-Based UAV Vertical Distance and Velocity Estimation Algorithm during the Vertical Landing Phase Using Low-Resolution Images
Authors: Seyed-Yaser Nabavi-Chashmi, Davood Asadi, Karim Ahmadi, Eren Demir
Abstract:
The landing phase of a UAV is very critical, as there are many uncertainties in this phase which can easily lead to a hard landing or even a crash. In this paper, the estimation of relative distance and velocity to the ground, one of the most important processes during the landing phase, is studied. Using accurate measurement sensors as an alternative approach can be very expensive (e.g., LIDAR) or limited in operational range (e.g., ultrasonic sensors). Additionally, absolute positioning systems like GPS or IMU cannot provide the distance to the ground independently. The focus of this paper is to determine whether the relative distance and velocity between the UAV and the ground can be measured in the landing phase using only low-resolution images taken by a monocular camera. The Lucas-Kanade technique is employed to extract and track the most suitable feature in a series of images taken during the UAV landing. Two different approaches based on Extended Kalman Filters (EKF) are proposed, and their performance in estimating the relative distance and velocity is compared. The first approach uses the kinematics of the UAV as the process model and the calculated optical flow as the measurement. The second approach uses the feature's projection on the camera plane (pixel position) as the measurement, while employing both the kinematics of the UAV and the dynamics of the projected point's variation as the process model, to estimate both relative distance and relative velocity. To verify the results, a sequence of low-quality images taken by a camera moving on a specifically developed testbed has been used to compare the performance of the proposed algorithms. The case studies show that the image quality results in considerable noise, which reduces the performance of the first approach. Using the projected feature position, on the other hand, is much less sensitive to the noise and estimates the distance and velocity with relatively high accuracy. This approach can also be used to predict the future projected feature position, which can drastically decrease the computational workload, an important criterion for real-time applications.
Keywords: Automatic landing, multirotor, nonlinear control, parameters estimation, optical flow.
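A minimal Python/OpenCV sketch of the feature-tracking front end implied above: a strong corner is selected in the first frame and followed with pyramidal Lucas-Kanade optical flow, producing the pixel-position measurements that an EKF like the second approach would consume. The frame file pattern and tracker parameters are assumptions.

```python
import cv2
import numpy as np

# Hypothetical landing sequence of low-resolution grayscale frames.
frames = [cv2.imread(f"landing_{i:03d}.png", cv2.IMREAD_GRAYSCALE) for i in range(50)]

p0 = cv2.goodFeaturesToTrack(frames[0], maxCorners=1, qualityLevel=0.1, minDistance=10)
track = [p0.reshape(2)]
lk_params = dict(winSize=(21, 21), maxLevel=3,
                 criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))

for prev, curr in zip(frames[:-1], frames[1:]):
    p1, status, _ = cv2.calcOpticalFlowPyrLK(prev, curr, p0, None, **lk_params)
    if status[0, 0] == 0:
        break                      # track lost; a real system would re-detect a feature
    track.append(p1.reshape(2))
    p0 = p1

pixel_track = np.array(track)      # (N, 2) pixel-position measurements for the EKF
print(pixel_track[:5])
```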
285 Study on Plasma Creation and Propagation in a Pulsed Magnetoplasmadynamic Thruster
Authors: Tony Schönherr, Kimiya Komurasaki, Georg Herdrich
Abstract:
The performance of, and the plasma created by, a pulsed magnetoplasmadynamic thruster for small satellite applications are studied to better understand the ablation and plasma propagation processes occurring during the short-duration discharge. The results can be applied to improve the thruster in terms of efficiency and to tune the propulsion system to the needs of the satellite mission. Therefore, plasma measurements with a high-speed camera and induction probes, and performance measurements of mass bit and impulse bit, were conducted. Values for current-sheet propagation speed, mean exhaust velocity, and thrust efficiency were derived from these experimental data. A maximum in current-sheet propagation speed was found in the high-speed camera measurements for a medium energy input and confirmed by the induction probes. A quasi-linear relationship was found between the mass bit and the energy input (the current action integral, respectively), as well as a linear relationship between the created impulse and the discharge energy. The highest mean exhaust velocity and thrust efficiency were found for the highest energy input.
Keywords: Electric propulsion, low-density plasma, pulsed magnetoplasmadynamic thruster, space engineering.
284 Robot Control by ERPs of Brain Waves
Authors: K. T. Sun, Y. H. Tai, H. W. Yang, H. T. Lin
Abstract:
This paper presents a technique for robot control by event-related potentials (ERPs) of brain waves. Based on the proposed technique, people with severe physical disabilities can freely browse the outside world. A specific component of the ERPs, N2P3, was found and used to control the movement of the robot and the view of its camera through the designed brain-computer interface (BCI). Users only needed to watch the stimulus of the attended button on the BCI; the evoked potential of the target button, N2P3, had the greatest amplitude among all control buttons. An experimental scene was constructed in which the robot had to walk to a specific position, move the camera view to see the mission instruction, and then complete the task. Twelve volunteers participated in this experiment, and the experimental results showed that the correct rate of BCI control reached 80% and the average execution time for completing the mission was 353 seconds. This research makes four main contributions: (1) finding an efficient ERP component, N2P3, for BCI control; (2) embedding the robot's viewpoint image into the user interface for robot control; (3) designing an experimental scene and conducting the experiment; and (4) evaluating the performance of the proposed system to assess its practicability.
Keywords: Brain-computer interface (BCI), event-related potentials (ERPs), robot control, severe physical disabilities.
283 Recording Video in the CAVE
Authors: Mohamed Mediouni
Abstract:
Evaluating the performance of a simulator in the CAVE has to be confirmed by encouraging people to live the virtual reality experience. In this paper, a detailed procedure for recording video in the CAVE is presented. The limitations of the experimental device are first exposed, and solutions for improving the approach are then described.
Keywords: Virtual reality, CAVE, stereoscopic, camera.
282 Development of Moving Multifocal Electroretinogram with a Precise Perimetry Apparatus
Authors: Naoto Suzuki
Abstract:
A decline in visual sensitivity at arbitrary points on the retina can be measured using a precise perimetry apparatus along with a fundus camera. However, the retinal layer associated with this decline cannot be identified accurately with current medical technology. To investigate cryptogenic diseases, such as macular dystrophy, acute zonal occult outer retinopathy (AZOOR), and multiple evanescent white dot syndrome (MEWDS), we evaluated an electroretinogram (ERG) function that allows moving the center of the multifocal hexagonal stimulus array to a chosen position. Macular dystrophy is a general term for a variety of functional disorders of the macula lutea, and the ERG shows a diminution of the b-wave in these disorders. AZOOR causes an acute functional disorder of an outer layer of the retina, and the ERG shows a-wave and b-wave amplitude reduction as well as delayed 30 Hz flicker responses. MEWDS causes acute visual loss, and the ERG shows a decrease in a-wave amplitude. We combined an electroretinographic optical system and a perimetric optical system into an experimental apparatus that has the same optical system as a fundus camera. We deployed an EO-50231 Edmund infrared camera, a 45-degree cold mirror, a lens with a 25-mm focal length, a halogen lamp, and an 8-inch monitor. We also employed a differential amplifier with gain 10, a 50 Hz notch filter, a high-pass filter with a 21.2 Hz cut-off frequency, and two non-inverting amplifiers with gains of 1001 and 11. In addition, we used a USB-6216 National Instruments I/O device, an NE-113A Nihon Kohden plate electrode, an SCB-68A shielded connector block, and LabVIEW 2017 software for data retrieval. Software written with C++Builder 10.2 was used to generate the multifocal hexagonal stimulus array on the computer monitor and to move the center of the array left, right, up, and down. Cone and bright-flash ERG results were observed using the moving ERG function. The a-wave, b-wave, c-wave, and photopic negative response were identified with the cone ERG. The moving ERG function allowed the identification of the retinal layer causing visual alterations.
Keywords: Moving ERG, multifocal ERG, precise perimetry, retinal layers, visual sensitivity
281 Optical Verification of an Ophthalmological Examination Apparatus Employing the Electroretinogram Function on Fundus-Related Perimetry
Authors: Naoto Suzuki
Abstract:
Japanese people are affected by the most common causes of eyesight loss, such as glaucoma, diabetic retinopathy, pigmentary retinal degeneration, and age-related macular degeneration. We developed an ophthalmological examination apparatus with a fundus camera, precise fundus-related perimetry (microperimetry), and electroretinogram (ERG) functions to diagnose a variety of diseases that cause eyesight loss. The experimental apparatus was constructed with the same optical system as a fundus camera. The microperimetry optical system was calculated and added to the experimental apparatus using the German company Optenso's optical engineering software (OpTaliX-LT 10.8). We also added an Edmund infrared camera (EO-0413), a lens with a 25 mm focal length, a 45° cold mirror, a 12 V/50 W halogen lamp, and an 8-inch monitor. An artificial eye was made from a plano-convex lens, a black spacer, and a hemispherical cup; the hemispherical cup had a small piece of paper at the bottom. The artificial eye was photographed five times using the experimental apparatus. Software was created to display the examination target on the monitor and save the examination data using C++Builder 10.2. The retinal fundus was displayed on the monitor with a length and width of 1 mm at resolutions of 70.4 ± 4.1 and 74.7 ± 6.8 pixels, respectively. The microperimetry and ERG functions were successfully added to the experimental ophthalmological apparatus. A moving machine was developed to measure the artificial eye's movement. The rear part of the artificial eye was painted black, with the central area painted white, and it was rotated 10 degrees from one side to the other. The movement was captured five times as motion videos. Three static images were extracted from one of the captured videos, showing the artificial eye facing the center, right, and left. The three images were processed using Scilab 6.1.0 and its Image Processing and Computer Vision Toolbox 4.1.2, including trimming, binarization, windowing, deletion of the peripheral area, and morphological operations. To calculate the center of the artificial eye's fundus, we added a center-of-gravity method to the program to compute the centroid of the connected components. From the three images, the image processing could calculate the center position.
Keywords: Ophthalmological examination apparatus, microperimetry, electroretinogram, eye movement.
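For illustration, a Python/OpenCV sketch of an equivalent processing chain (the authors used Scilab): trim the peripheral area, binarize, apply a morphological opening, and take the centroid (center of gravity) of the largest connected component. The file name, trim margins, and kernel size are assumptions.

```python
import cv2
import numpy as np

img = cv2.imread("artificial_eye_center.png", cv2.IMREAD_GRAYSCALE)  # hypothetical image
roi = img[50:-50, 50:-50]                                   # trim the peripheral area

_, binary = cv2.threshold(roi, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
clean = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)    # remove small speckles

# Keep the largest connected component and compute its center of gravity.
n, labels, stats, centroids = cv2.connectedComponentsWithStats(clean)
largest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])        # skip background label 0
cx, cy = centroids[largest]
print(f"fundus center (ROI coordinates): ({cx:.1f}, {cy:.1f})")
```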
280 Novel Approach for Wideband VNA by Sixport Principle
Authors: Tomáš Urbanec
Abstract:
This paper presents the simple sixport principle and its frequency bandwidth. The novel multi-sixport approach is then presented, together with its possibilities, typical parameters, and frequency bandwidth. A practical implementation is shown with its measurement parameters and calibration. A bandwidth of approximately 1:100 is obtained.
Keywords: microwave measurement, sixport, VNA, wideband.
279 Nuclear Medical Image Treatment System Based On FPGA in Real Time
Authors: B. Mahmoud, M.H. Bedoui, R. Raychev, H. Essabbah
Abstract:
We present in this paper an acquisition and treatment system designed for semi-analog gamma cameras. It consists of a nuclear medical Image Acquisition, Treatment and Display (IATD) chain ensuring the acquisition of the signals (resulting from the gamma-camera detection head, DH), their treatment, and the construction of the scintigraphic image in real time. The chain is composed of an analog treatment board and a digital treatment board, and serves as an interface for the semi-analog cameras of Sopha Medical Vision (SMVi), taking the SOPHY DS7 as an example. We describe the designed system and the digital treatment algorithms, in which we have improved the performance and the flexibility. A previous version of the chain was designed around a DSP [2]; in this paper we present the architecture of a new version of the IATD chain in which the treatment algorithms are integrated on a specific reprogrammable circuit, an FPGA (Field Programmable Gate Array).
Keywords: Nuclear medical image, scintigraphic image, digital treatment, linearity, spectrometry, FPGA.