Search results for: Marina Camera
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 729

669 Study on Construction of 3D Topography by UAV-Based Images

Authors: Yun-Yao Chi, Chieh-Kai Tsai, Dai-Ling Li

Abstract:

In this paper, a method for fast 3D topography modeling from high-resolution camera images is studied, based on the characteristics of an Unmanned Aerial Vehicle (UAV) system for low-altitude aerial photogrammetry and the need for three-dimensional (3D) urban landscape modeling. Firstly, an imaging scheme with specially designed image overlap is developed for the existing high-resolution digital camera by reconstructing and analyzing the auto-flying paths of the UAV; this improves the self-calibration function, achieves high-precision imaging in software, and further increases the effective resolution of the imaging system. Secondly, multi-angle images, including vertical and oblique images obtained by the UAV system, are used for the detailed measurement of urban land surfaces and for texture extraction. Finally, aerial photography and 3D topography construction are carried out both on the campus of Chang-Jung University and in the Guerin district of Tainan, Taiwan, providing validation models for the construction of 3D topography from combined UAV-based camera images. The results demonstrate that a UAV system for low-altitude aerial photogrammetry can be used for 3D topography production, and the technical solution presented in this paper offers a new and fast plan for the 3D expression, fine modeling, and visualization of the city landscape.
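
The overlap-driven flight-path design mentioned above reduces to simple footprint arithmetic. The sketch below illustrates it for a nadir-pointing camera with assumed sensor parameters; it is not the authors' planning software, and all numeric values are placeholders.

```python
# Illustrative flight-planning arithmetic for overlapping UAV images: ground
# footprint from similar triangles, then exposure spacing for a target overlap.

def footprint_m(altitude_m, sensor_width_mm, focal_length_mm):
    """Ground width covered by one image for a nadir-pointing camera."""
    return altitude_m * sensor_width_mm / focal_length_mm

def shot_spacing_m(footprint, overlap):
    """Distance between exposures that yields the requested forward overlap."""
    return footprint * (1.0 - overlap)

if __name__ == "__main__":
    # Assumed values for illustration only.
    fp = footprint_m(altitude_m=120.0, sensor_width_mm=13.2, focal_length_mm=8.8)
    print(f"footprint is about {fp:.1f} m")
    print(f"spacing for 80% overlap is about {shot_spacing_m(fp, 0.80):.1f} m")
```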

Keywords: 3D, topography, UAV, images

Procedia PDF Downloads 303
668 Visual Search Based Indoor Localization in Low Light via RGB-D Camera

Authors: Yali Zheng, Peipei Luo, Shinan Chen, Jiasheng Hao, Hong Cheng

Abstract:

Most traditional visual indoor navigation algorithms consider localization only in ordinary daylight, whereas in this paper we focus on indoor re-localization in low light. Because RGB images are degraded in low light, the less discriminative infrared and depth image pairs acquired by an RGB-D camera are used as the input, and the most similar candidates are retrieved as the output from a database built within the bag-of-words framework. Epipolar constraints are then used to re-localize the query infrared and depth image sequence. We evaluate our method on two datasets captured with a Kinect v2. The results demonstrate very promising re-localization performance for indoor navigation systems in low-light environments.
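
As a rough illustration of the bag-of-visual-words retrieval step described above, the sketch below clusters ORB descriptors into a vocabulary and retrieves the most similar database image by histogram similarity. The paper's actual pipeline (infrared/depth pairs, epipolar verification) is not reproduced; feature type, vocabulary size, and similarity measure are assumptions.

```python
# Minimal bag-of-visual-words retrieval sketch: ORB features, k-means
# vocabulary, L2-normalized word histograms, cosine-similarity lookup.
import cv2
import numpy as np
from sklearn.cluster import MiniBatchKMeans

orb = cv2.ORB_create(nfeatures=500)

def descriptors(img_gray):
    _, des = orb.detectAndCompute(img_gray, None)
    return np.float32(des) if des is not None else np.empty((0, 32), np.float32)

def build_vocabulary(db_images, k=200):
    all_des = np.vstack([descriptors(im) for im in db_images])
    return MiniBatchKMeans(n_clusters=k).fit(all_des)

def bow_histogram(img_gray, vocab):
    des = descriptors(img_gray)
    hist = np.zeros(vocab.n_clusters, np.float32)
    if len(des):
        words = vocab.predict(des)
        hist = np.bincount(words, minlength=vocab.n_clusters).astype(np.float32)
    return hist / (np.linalg.norm(hist) + 1e-9)

def most_similar(query_hist, db_hists):
    sims = db_hists @ query_hist      # cosine similarity (histograms are unit norm)
    return int(np.argmax(sims))
```

A database would be indexed once by stacking `bow_histogram` outputs for every stored image; `most_similar` then returns the index of the best candidate, which a geometric (e.g. epipolar) check would verify.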

Keywords: indoor navigation, low light, RGB-D camera, vision based

Procedia PDF Downloads 460
667 A Study on the Comparison of Mechanical and Thermal Properties According to Laminated Orientation of CFRP through Bending Test

Authors: Hee Jae Shin, Lee Ku Kwac, In Pyo Cha, Min Sang Lee, Hyun Kyung Yoon, Hong Gun Kim

Abstract:

Rapid industrial development has increased the demand for high-strength, lightweight materials, and various Carbon Fiber Reinforced Plastic (CFRP) composites are therefore being used. The design variables of CFRP are its lamination direction, stacking order, and thickness, so the hardness and strength of CFRP depend strongly on these variables. In this paper, the lamination direction of CFRP was varied to produce symmetrical plies [0°/0°, -15°/+15°, -30°/+30°, -45°/+45°, -60°/+60°, -75°/+75°, and 90°/90°] and asymmetrical plies [0°/15°, 0°/30°, 0°/45°, 0°/60°, 0°/75°, and 0°/90°]. The bending flexure stress of the CFRP specimens was evaluated through a bending test, and their thermal behavior was measured using an infrared camera. The symmetrical and asymmetrical specimens were then compared. The results showed that the bending load of the asymmetrical specimens increased with the orientation angle, whereas, from 0°, the symmetrical specimens showed the opposite tendency, because the tensile force carried by the fibers differs in the direction perpendicular to the load. The infrared camera also showed that the thermal behavior followed a trend similar to that of the mechanical properties.
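
The bending flexure stress mentioned above is normally reduced from the measured load with the standard flexure relation. The abstract does not state whether a three- or four-point configuration was used, so the common three-point form is quoted here only as a reference.

```latex
% Flexural stress in a three-point bending test (standard relation; the loading
% configuration used in the paper is not stated in the abstract):
\sigma_f = \frac{3 F L}{2 b d^{2}}
% F: applied load, L: support span, b: specimen width, d: specimen thickness.
```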

Keywords: Carbon Fiber Reinforced Plastic (CFRP), bending test, infrared camera, composite

Procedia PDF Downloads 398
666 Crater Detection Using PCA from Captured CMOS Camera Data

Authors: Tatsuya Takino, Izuru Nomura, Yuji Kageyama, Shin Nagata, Hiroyuki Kamata

Abstract:

We propose a method for detecting craters in images of the lunar surface. The proposal is intended for the SLIM (Smart Lander for Investigating Moon) working group, which aims at pinpoint landing on the lunar surface for scientific investigation. It is difficult to equip a small space probe with high-performance computers, so a small computer with dedicated hardware such as an FPGA must be used instead. We have studied crater detection using principal component analysis (PCA). In this paper, we implement the detection algorithm on an FPGA, and the detection is performed on data captured by the CMOS camera.
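
A compact sketch of PCA-based patch classification in the spirit of the abstract is given below: a crater subspace is learned from training patches and candidate patches are scored by reconstruction error. The patch size, number of components, and threshold are assumptions, and the flight FPGA implementation is not shown.

```python
# PCA crater-patch classifier sketch: fit principal directions with SVD,
# then accept patches whose reconstruction error is small.
import numpy as np

def fit_pca(patches, n_components=8):
    """patches: (N, H*W) flattened training crater patches."""
    mean = patches.mean(axis=0)
    centered = patches - mean
    # Rows of vt are the principal directions (right singular vectors).
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_components]

def reconstruction_error(patch, mean, components):
    centered = patch - mean
    coeffs = components @ centered
    recon = components.T @ coeffs
    return float(np.linalg.norm(centered - recon))

def is_crater(patch, mean, components, threshold):
    # Small reconstruction error means the patch lies close to the crater subspace.
    return reconstruction_error(patch, mean, components) < threshold
```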

Keywords: crater detection, PCA, FPGA, image processing

Procedia PDF Downloads 550
665 Camera Model Identification for Mi Pad 4, Oppo A37f, Samsung M20, and Oppo f9

Authors: Ulrich Wake, Eniman Syamsuddin

Abstract:

The camera model identification model is trained using the pretrained models ResNet34 and ResNet50. The dataset consists of 500 photos from each phone, divided into 1280 photos for training, 320 for validation, and 400 for testing. The model is trained using the One Cycle Policy method and tested using Test-Time Augmentation. Furthermore, the model is trained for 50 epochs with regularization such as dropout and early stopping. The result is 90% accuracy on the validation set and above 85% with Test-Time Augmentation using ResNet50. Every model is also trained by slightly updating (fine-tuning) the pretrained model's weights.
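
A hedged sketch of this training setup is shown below using PyTorch/torchvision: a ResNet50 pretrained on ImageNet, a four-class head (one class per phone model) with dropout, and the One Cycle learning-rate policy. Dataset loading, transforms, early stopping, and exact hyperparameters are placeholders, not the authors' code.

```python
# ResNet50 fine-tuning sketch with a dropout head and OneCycleLR scheduling.
import torch
import torch.nn as nn
from torchvision import models

def build_model(num_classes=4):
    model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
    model.fc = nn.Sequential(nn.Dropout(p=0.5),            # dropout regularization
                             nn.Linear(model.fc.in_features, num_classes))
    return model

def train(model, train_loader, epochs=50, max_lr=1e-3, device="cuda"):
    model.to(device)
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.AdamW(model.parameters(), lr=max_lr / 25)
    scheduler = torch.optim.lr_scheduler.OneCycleLR(
        optimizer, max_lr=max_lr, epochs=epochs,
        steps_per_epoch=len(train_loader))
    for _ in range(epochs):
        for images, labels in train_loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
            scheduler.step()                                 # per-batch LR update
    return model
```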

Keywords: One Cycle Policy, ResNet34, ResNet50, Test-Time Augmentation

Procedia PDF Downloads 208
664 The Strategy of Orbit Avoidance for Optical Remote Sensing Satellite

Authors: Dianxun Zheng, Wuxing Jing, Lin Hetong

Abstract:

An optical remote sensing satellite, usually operating on a Sun-synchronous orbit, is equipped with laser warning equipment to warn of laser attacks on its CCD camera. There are three ways to protect the CCD camera: closing the camera cover, performing a satellite attitude maneuver, and performing satellite orbit avoidance. In order to enhance the in-orbit safety of the optical remote sensing satellite, this paper explores the strategy of satellite orbit avoidance. The avoidance strategy is expressed as the evasion of pre-determined target points in the orbital coordinates of a virtual satellite. The so-called virtual satellite is a passive vehicle that coincides with the real satellite at the initial stage of avoidance. The target points share the same cycle time (orbital period) and semi-major axis as the virtual satellite, which ensures that the properties of the Sun-synchronous orbit remain unchanged. Moreover, to further strengthen the avoidance capability of the satellite, multi-target-point avoidance maneuvers can be performed. Once the satellite has fulfilled its orbital tasks, its orbit can be restored to that of the virtual satellite through orbit maneuvers. The avoidance maneuvers adopt pulse guidance, and the fuel consumption is also optimized. The avoidance strategy discussed in this article is applicable to optical remote sensing satellites encountering hostile laser attacks.

Keywords: optical remote sensing satellite, orbit avoidance

Procedia PDF Downloads 400
663 Evaluation of a Data Fusion Algorithm for Detecting and Locating a Radioactive Source through Monte Carlo N-Particle Code Simulation and Experimental Measurement

Authors: Hadi Ardiny, Amir Mohammad Beigzadeh

Abstract:

Through the combination of various sensors and data fusion methods, the detection of potential nuclear threats can be significantly enhanced by extracting more information from different data. In this research, an experimental and modeling approach was employed to track a radioactive source by combining a surveillance camera and a radiation detector (NaI). To run this experiment, three mobile robots were utilized, one of which was equipped with a radioactive source. An algorithm was developed to identify the contaminated robot through correlation between the camera images and the detector data. The computer vision method extracts the movements of all robots in the XY plane coordinate system, and the detector system records the gamma-ray count. The positions of the robots and the corresponding counts of the moving source were modeled using the MCNPX simulation code while considering the experimental geometry. The results demonstrated a high level of accuracy in finding and locating the target in both the simulation model and the experimental measurement. The modeling techniques prove valuable for designing different scenarios and intelligent systems before initiating any experiments.
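
As a simplified illustration of the fusion idea above, the sketch below flags the robot whose camera-tracked trajectory best explains the gamma-ray count rate under an inverse-square distance model. The names and the correlation criterion are illustrative assumptions, not the authors' exact algorithm.

```python
# Correlate an inverse-square prediction from each tracked trajectory with the
# measured count series; the best-correlated robot is the suspected carrier.
import numpy as np

def expected_count_signal(track_xy, detector_xy):
    """Inverse-square law prediction (up to a constant) from a 2D track."""
    d2 = np.sum((track_xy - detector_xy) ** 2, axis=1)
    return 1.0 / np.maximum(d2, 1e-6)

def identify_source_robot(tracks, counts, detector_xy):
    """tracks: dict robot_id -> (T, 2) positions; counts: (T,) gamma counts."""
    scores = {}
    for robot_id, track in tracks.items():
        pred = expected_count_signal(np.asarray(track), np.asarray(detector_xy))
        scores[robot_id] = np.corrcoef(pred, counts)[0, 1]
    return max(scores, key=scores.get), scores
```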

Keywords: nuclear threats, radiation detector, MCNPX simulation, modeling techniques, intelligent systems

Procedia PDF Downloads 123
662 Application of an Optical Method Based on a Laser Device as Non-Destructive Testing for the Calculation of Mechanical Deformation

Authors: R. Daïra, V. Chalvidan

Abstract:

We present the speckle interferometry method for determining the deformation of a workpiece. This holographic imaging method uses a CCD camera for the simultaneous digital recording of two states, object and reference, and the reconstruction is obtained numerically. This method has the advantage of being simpler than the methods currently available, and it does not suffer from the faults of in-line holographic configurations. Furthermore, it is entirely digital and avoids heavy analysis after recording the hologram. This work was carried out in the HOLO 3 laboratory (an optical metrology laboratory in Saint-Louis, France) and consists of qualitatively and quantitatively controlling the deformation of an object by using a CCD camera connected to a computer equipped with fringe analysis software.

Keywords: speckle, nondestructive testing, interferometry, image processing

Procedia PDF Downloads 497
661 Model Development for Real-Time Human Sitting Posture Detection Using a Camera

Authors: Jheanel E. Estrada, Larry A. Vea

Abstract:

This study developed models to detect proper and improper sitting posture using a built-in web camera that detects the locations of, and distances between, upper-body points (chin, manubrium, and acromion process). It also established relationships between human body frame and proper sitting posture. The models were developed by training well-known classifiers such as KNN, SVM, MLP, and Decision Tree on data collected from 60 students with different body frames. The Decision Tree classifier demonstrated the most promising performance, with an accuracy of 95.35% and a kappa of 0.907 for head and shoulder posture. Results also showed a relationship between body frame and posture through the Body Mass Index.
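
A minimal sketch of the classifier comparison described above is shown below with scikit-learn. The feature vector (distances between the chin, manubrium, and acromion points), split ratio, and tree depth are illustrative assumptions rather than the study's settings.

```python
# Train a decision tree on body-point distance features and report the same
# metrics quoted in the abstract (accuracy and Cohen's kappa).
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score, cohen_kappa_score

def evaluate_posture_classifier(X, y):
    """X: (n_samples, n_features) body-point distances; y: proper/improper labels."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y,
                                              random_state=0)
    clf = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_tr, y_tr)
    pred = clf.predict(X_te)
    return accuracy_score(y_te, pred), cohen_kappa_score(y_te, pred)
```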

Keywords: posture, spinal points, gyroscope, image processing, ergonomics

Procedia PDF Downloads 329
660 Obstacle Detection and Path Tracking Application for the Disabled

Authors: Aliya Ashraf, Mehreen Sirshar, Fatima Akhtar, Farwa Kazmi, Jawaria Wazir

Abstract:

Vision, the basis for performing navigational tasks, is absent or greatly reduced in visually impaired people, who therefore face many hurdles. To increase the navigational capabilities of visually impaired people, a desktop application, ODAPTA, is presented in this paper. The application uses a camera to capture video of the surroundings, applies various image processing algorithms to obtain information about the path and obstacles, tracks them, and delivers that information to the user through voice commands. Experimental results show that the application works effectively for straight paths in daylight.

Keywords: visually impaired, ODAPTA, Region of Interest (ROI), driver fatigue, face detection, expression recognition, CCD camera, artificial intelligence

Procedia PDF Downloads 549
659 The Application of Collision Damage Analysis in Reconstruction of Sedan-Scooter Accidents

Authors: Chun-Liang Wu, Kai-Ping Shaw, Cheng-Ping Yu, Wu-Chien Chien, Hsiao-Ting Chen, Shao-Huang Wu

Abstract:

Objective: This study analyzed three criminal judicial cases. We applied damage analysis of the two vehicles involved in each case to verify other evidence, such as the dashboard camera records of the accident, reconstruct the scenes, and pursue the truth. Methods: In evidence analysis, evidence and the reasoning behind the findings are collected within judicial procedures, and the damage evidence involved is then analyzed to verify the other evidence. In collision damage analysis, the damage to the vehicles is inspected, and the principles of tool mark analysis, Newtonian physics, and vehicle structure are used to understand the relevant factors of the collision. Results: Case 1: Sedan A turned right at a T-junction and collided with Scooter B, which was going straight on the road to the left. The dashboard camera record showed that the left side of Sedan A's front bumper collided with the body of Scooter B and rider B. After the analysis in this study, the finding was that the front left side of Sedan A impacted the right pedal of Scooter B and the right lower limb of rider B. Case 2: Sedan C collided with Scooter D, coming from the road to the left, at a crossroads. The dashboard camera record showed that the left side of Sedan C's front bumper collided with the body of Scooter D and rider D. After the analysis in this study, the finding was that the left side of Sedan C impacted the left side of the body and the front wheel of Scooter D and rider D. Case 3: Sedan E collided with Scooter F, coming from the road to the right, at a crossroads. The dashboard camera record showed that the right side of Sedan E's front bumper collided with the body of Scooter F and rider F. After the analysis in this study, the finding was that the right side of the front bumper and the right side of Sedan E impacted Scooter F. Conclusion: The application of collision damage analysis in the reconstruction of sedan-scooter collisions can uncover the truth and provide a basis for judicial justice. The cases and methods can serve as a reference for road safety policy.

Keywords: evidence analysis, collision damage analysis, accident reconstruction, sedan-scooter collision, dashboard camera records

Procedia PDF Downloads 78
658 6D Posture Estimation of Road Vehicles from Color Images

Authors: Yoshimoto Kurihara, Tad Gonsalves

Abstract:

In the field of object pose estimation, current research estimates the position and orientation of an object by storing a 3D model of the object in advance on a computer and matching the observation against that model. In this research, however, we have created a module that is much simpler, smaller in scale, and faster in operation. Our 6D pose estimation model consists of two different networks: a classification network and a regression network. From a single RGB image, the trained model estimates the class of the object in the image, the coordinates of the object, and its rotation angle in 3D space. In addition, we compared the estimation accuracy for each camera position, i.e., the angle from which the object was captured. The highest accuracy was recorded when the camera position was 75°: the classification accuracy was about 87.3%, and the regression accuracy was about 98.9%.
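
A hedged sketch of the two-branch design described above is shown below: a shared backbone followed by a classification head (object class) and a regression head (3D coordinates plus rotation). The backbone is AlexNet because the keywords mention it; the output sizes and layer widths are assumptions, not the paper's architecture.

```python
# Two-headed pose network sketch: shared convolutional features, one head for
# class logits and one head for pose parameters.
import torch.nn as nn
from torchvision import models

class PoseNet(nn.Module):
    def __init__(self, num_classes=10, pose_dim=6):   # e.g. (x, y, z) + 3 angles
        super().__init__()
        backbone = models.alexnet(weights=None)
        self.features = backbone.features
        self.pool = nn.AdaptiveAvgPool2d((6, 6))
        self.cls_head = nn.Sequential(nn.Flatten(), nn.Linear(256 * 6 * 6, 512),
                                      nn.ReLU(), nn.Linear(512, num_classes))
        self.reg_head = nn.Sequential(nn.Flatten(), nn.Linear(256 * 6 * 6, 512),
                                      nn.ReLU(), nn.Linear(512, pose_dim))

    def forward(self, x):
        f = self.pool(self.features(x))
        return self.cls_head(f), self.reg_head(f)      # logits, pose parameters
```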

Keywords: 6D posture estimation, image recognition, deep learning, AlexNet

Procedia PDF Downloads 155
657 Automatic Identification and Monitoring of Wildlife via Computer Vision and IoT

Authors: Bilal Arshad, Johan Barthelemy, Elliott Pilton, Pascal Perez

Abstract:

Getting reliable, informative, and up-to-date information about the location, mobility, and behavioural patterns of animals will enhance our ability to research and preserve biodiversity. The fusion of infrared sensors and camera traps offers an inexpensive way to collect wildlife data in the form of images. However, extracting useful data from these images, such as the identification and counting of animals, remains a manual, time-consuming, and costly process. In this paper, we demonstrate that such information can be retrieved automatically by using state-of-the-art deep learning methods. Another major challenge that ecologists face is counting a single animal multiple times because it reappears in other images taken by the same or other camera traps; nonetheless, such information can be extremely useful for tracking wildlife and understanding its behaviour. To tackle the multiple-count problem, we have designed a meshed network of camera traps so that they can share the captured images along with timestamps, cumulative counts, and the dimensions of the animal. The proposed method leverages edge computing to support real-time tracking and monitoring of wildlife. The method has been validated in the field and can easily be extended to other applications focusing on wildlife monitoring and management, where the traditional way of monitoring is expensive and time-consuming.

Keywords: computer vision, ecology, internet of things, invasive species management, wildlife management

Procedia PDF Downloads 138
656 Research on the Strategy of Orbital Avoidance for Optical Remote Sensing Satellite

Authors: Zheng DianXun, Cheng Bo, Lin Hetong

Abstract:

This paper focuses on the orbit avoidance strategies of an optical remote sensing satellite. The optical remote sensing satellite, moving along a Sun-synchronous orbit, is equipped with laser warning equipment to warn of laser attacks on its CCD camera. There are three ways to protect the CCD camera: closing the camera cover, performing a satellite attitude maneuver, and performing satellite orbit avoidance. In order to enhance the in-orbit safety of the optical remote sensing satellite, this paper explores the strategy of satellite avoidance. The avoidance strategy is expressed as the evasion of pre-determined target points in the orbital coordinates of a virtual satellite. The so-called virtual satellite is a passive vehicle that coincides with the satellite at the initial stage of avoidance. The target points share the same cycle time (orbital period) and semi-major axis as the virtual satellite, which ensures that the properties of the satellite's Sun-synchronous orbit remain unchanged. Moreover, to further strengthen the avoidance capability of the satellite, multi-target-point avoidance maneuvers can be performed. Once the satellite has fulfilled its orbital tasks, its orbit can be restored to that of the virtual satellite through orbit maneuvers. The avoidance maneuvers adopt pulse guidance, and the fuel consumption is also optimized. The avoidance strategy discussed in this article is applicable to an optical remote sensing satellite encountering a hostile attack by a space-based anti-satellite laser.
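
The requirement above that the target points keep the virtual satellite's semi-major axis follows from the standard two-body period relation, quoted here only for reference.

```latex
% Two-body orbital period: keeping the semi-major axis a unchanged keeps the
% cycle time unchanged, which is why the avoidance target points share a with
% the virtual satellite.
T = 2\pi \sqrt{\frac{a^{3}}{\mu}}
% a: semi-major axis, \mu: Earth's gravitational parameter.
```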

Keywords: optical remote sensing satellite, satellite avoidance, virtual satellite, avoid target-point, avoid maneuver

Procedia PDF Downloads 404
655 DBN-Based Face Recognition System Using Light Field

Authors: Bing Gu

Abstract:

Most conventional facial recognition systems are based on image features such as LBP and SIFT. Recently, some DBN-based 2D facial recognition systems have been proposed; however, there are few DBN-based 3D facial recognition systems and related studies. 3D facial images contain all of an individual's biometric information, which can be used to build more accurate features, so we present our DBN-based face recognition system using light fields. A light field can be seen as another representation of a 3D image, and a light field camera offers a way to capture one. We use a commercially available light field camera as the image collector of our face recognition system, and the system achieves state-of-the-art performance while remaining as convenient as a conventional 2D face recognition system.

Keywords: DBN, face recognition, light field, Lytro

Procedia PDF Downloads 464
654 Automatic Detection of Suicidal Behaviors Using an RGB-D Camera: Azure Kinect

Authors: Maha Jazouli

Abstract:

Suicide is one of the leading causes of death in the prison environment, both in Canada and internationally. Rates of suicide attempts and self-harm have been rising in recent years, with hanging being the most frequently used method. The objective of this article is to propose a method to automatically detect suicidal behaviors in real time. We present a gesture recognition system that consists of three modules: model-based movement tracking, feature extraction, and gesture recognition using machine learning algorithms (MLA). Our proposed system gives satisfactory results. This smart video surveillance system can assist the staff responsible for the safety and health of inmates by alerting them when suicidal behavior is detected, which helps reduce mortality rates and save lives.

Keywords: suicide detection, Azure Kinect, RGB-D camera, SVM, machine learning, gesture recognition

Procedia PDF Downloads 188
653 Constrained RGBD SLAM with a Prior Knowledge of the Environment

Authors: Kathia Melbouci, Sylvie Naudet Collette, Vincent Gay-Bellile, Omar Ait-Aider, Michel Dhome

Abstract:

In this paper, we address the problem of real-time localization and mapping in an indoor environment, assisted by a partial prior 3D model and using an RGB-D sensor. The proposed solution relies on a feature-based RGB-D SLAM algorithm to localize the camera and update the 3D map of the scene. To improve the accuracy and robustness of the localization, we propose to combine, in a local bundle adjustment process, geometric information provided by a prior coarse 3D model of the scene (e.g., generated from the 2D floor plan of the building) with RGB-D data from a Kinect camera. The proposed approach is evaluated on a public benchmark dataset as well as on a real scene acquired with a Kinect sensor.
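
As a hedged illustration of how the prior model can enter a local bundle adjustment, one plausible objective adds a model-distance penalty to the usual reprojection term; the paper's exact residuals and weighting may differ.

```latex
% One plausible form of the constrained local bundle adjustment objective
% (illustrative only):
E(\{T_i\},\{X_j\}) = \sum_{i}\sum_{j \in \mathcal{V}(i)}
      \rho\!\left(\lVert x_{ij} - \pi(K, T_i, X_j) \rVert^{2}\right)
  + \lambda \sum_{j} d\!\left(X_j, \mathcal{M}\right)^{2}
% T_i: camera poses, X_j: 3D points, \pi: pinhole projection with intrinsics K,
% \rho: robust kernel, d(X_j, M): distance from point X_j to the prior coarse
% 3D model M (e.g. the nearest plane of the floor-plan model).
```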

Keywords: SLAM, global localization, 3D sensor, bundle adjustment, 3D model

Procedia PDF Downloads 414
652 A Non-Destructive Estimation Method for Internal Time in Perilla Leaf Using Hyperspectral Data

Authors: Shogo Nagano, Yusuke Tanigaki, Hirokazu Fukuda

Abstract:

Vegetables harvested early in the morning or late in the afternoon are valued in plant production, so the time of harvest is important. The biological functions known as circadian clocks have a significant effect on this harvest timing. The purpose of this study was to non-destructively estimate the state of the circadian clock and thereby construct a method for determining a suitable harvest time. We took eight samples of green perilla (Perilla frutescens var. crispa) every 4 hours, six times over 1 day, and analyzed all samples at the same time. A hyperspectral camera was used to collect spectral intensities at 141 different wavelengths (350–1050 nm). Correlations between the spectral intensity at each wavelength and the harvest time suggested that the hyperspectral camera is suitable for non-destructive estimation. However, even the most highly correlated wavelength showed only a weak correlation, so we used machine learning to improve the accuracy of the estimation and constructed a machine learning model to estimate the internal time of the circadian clock. Artificial neural networks (ANN) were used because they are an effective analysis method for large amounts of data. The estimation model yielded an error between estimated and true times of 3 minutes, and the estimations were made in less than 2 hours. Thus, we successfully demonstrated this method of non-destructively estimating internal time.
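
A hedged sketch of the ANN regression idea above is given below: predict the internal time from a 141-band spectrum. Encoding the target as (sin, cos) of the phase is an assumption introduced here to handle the 24-hour wrap-around; the paper's network and target encoding may differ.

```python
# MLP regression from hyperspectral intensities to a circular time-of-day
# target encoded as (sin, cos) of the 24 h phase.
import numpy as np
from sklearn.neural_network import MLPRegressor

def encode_time(hours):
    phase = 2 * np.pi * np.asarray(hours) / 24.0
    return np.column_stack([np.sin(phase), np.cos(phase)])

def decode_time(sin_cos):
    phase = np.arctan2(sin_cos[:, 0], sin_cos[:, 1])
    return (phase % (2 * np.pi)) * 24.0 / (2 * np.pi)

def fit_clock_model(spectra, harvest_hours):
    """spectra: (n_samples, 141) hyperspectral intensities."""
    model = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=5000,
                         random_state=0)
    model.fit(spectra, encode_time(harvest_hours))
    return model

def predict_internal_time(model, spectra):
    return decode_time(model.predict(spectra))
```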

Keywords: artificial neural network (ANN), circadian clock, green perilla, hyperspectral camera, non-destructive evaluation

Procedia PDF Downloads 299
651 Estimating the Ladder Angle and the Camera Position From a 2D Photograph Based on Applications of Projective Geometry and Matrix Analysis

Authors: Inigo Beckett

Abstract:

In forensic investigations, the most potentially useful recorded evidence often comes from coincidental imagery recorded immediately before or during an incident, and during the incident (e.g. a 'failure' or fire event) the evidence is changed or destroyed. To an image analysis expert carrying out photogrammetric analysis for civil or criminal proceedings, traditional computer vision methods involving calibrated cameras are often not appropriate because the image metadata cannot be relied upon. This paper presents an approach for resolving this problem, considering in particular, by way of a case study, the angle of a simple ladder shown in a photograph. The UK Health and Safety Executive (HSE) guidance document published in 2014 (INDG455) advises that a leaning ladder should be erected at 75 degrees to the horizontal. Personal injury cases can arise in the construction industry because a ladder is too steep or too shallow, and ad-hoc photographs of such ladders in their incident position provide a basis for analysis of their angle. This paper presents a direct approach for ascertaining the position of the camera and the angle of the ladder simultaneously from the photograph(s), by way of a workflow that encompasses a novel application of projective geometry and matrix analysis. Mathematical analysis shows that, for a given pixel ratio of directly measured collinear points (i.e. features that lie on the same line segment) in the 2D digital photograph with respect to a given viewing point, the 3D camera position can be constrained to the surface of a sphere in the scene. Depending on what is known about the ladder, another independent constraint can be enforced on the possible camera positions, which narrows them down even further. Experiments were conducted using synthetic and real-world data. The synthetic data modelled a ladder on a horizontal plane resting against a vertical wall; the real-world data were captured using an Apple iPhone 13 Pro together with 3D laser scan survey data, with a ladder placed at a known location and at a known angle to the vertical. For each case, the camera positions and ladder angles were calculated using this method and cross-compared against their respective 'true' values.

Keywords: image analysis, projective geometry, homography, photogrammetry, ladders, forensics, mathematical modeling, planar geometry, matrix analysis, collinear points, cameras, photographs

Procedia PDF Downloads 52
650 Full-Field Estimation of Cyclic Threshold Shear Strain

Authors: E. E. S. Uy, T. Noda, K. Nakai, J. R. Dungca

Abstract:

Cyclic threshold shear strain is the cyclic shear strain amplitude that serves as an indicator of the development of pore water pressure. The parameter can be obtained by performing a cyclic triaxial test, a shaking table test, a cyclic simple shear test, or a resonant column test. In a cyclic triaxial test, other researchers install measuring devices in close proximity to the soil to measure the parameter. In this study, an attempt was made to estimate the cyclic threshold shear strain using a full-field measurement technique, which uses a camera to monitor and measure the movement of the soil. The technique was incorporated into a strain-controlled, consolidated undrained cyclic triaxial test. Calibration of the camera was first performed to ensure that it can properly measure deformation under cyclic loading, and its capacity to measure deformation was also investigated using a cylindrical rubber dummy. Two-dimensional image processing was implemented, and the Lucas-Kanade optical flow algorithm was applied to track the movement of the soil particles. Results from the full-field measurement technique were compared with results from the linear variable displacement transducer. A range of values was determined from the estimation, owing to the non-homogeneous deformation of the soil observed during cyclic loading. The minimum values were of the order of 10⁻²% in some areas of the specimen.
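
A minimal OpenCV sketch of the particle-tracking step mentioned above is given below: sparse Lucas-Kanade optical flow between two frames of the specimen. The feature and window parameters are illustrative, and the reduction of the tracked displacements to shear strain is not reproduced here.

```python
# Track corner features between two grayscale frames with pyramidal
# Lucas-Kanade optical flow and return per-point pixel displacements.
import cv2

def track_soil_points(frame0_gray, frame1_gray):
    p0 = cv2.goodFeaturesToTrack(frame0_gray, maxCorners=400,
                                 qualityLevel=0.01, minDistance=5)
    p1, status, _ = cv2.calcOpticalFlowPyrLK(frame0_gray, frame1_gray, p0, None,
                                             winSize=(21, 21), maxLevel=3)
    good = status.ravel() == 1
    displacements = (p1[good] - p0[good]).reshape(-1, 2)   # pixel displacements
    return p0[good].reshape(-1, 2), displacements
```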

Keywords: cyclic loading, cyclic threshold shear strain, full-field measurement, optical flow

Procedia PDF Downloads 234
649 Characterization of Thermal Images Due to Aging of H.V Glass Insulators Using Thermographic Scanning

Authors: Nasir A. Al-Geelani, Zulkurnain Abdul-Malek, M. Afendi M. Piah

Abstract:

This investigation was carried out in the laboratory on single units of transmission line glass insulators characterized by different thermal images, with the aim of determining the age of the insulators. The tests were carried out on virgin and aged insulators using thermographic scanning. Samples with different aging periods of 20, 15, and 5 years, taken from a 132 kV transmission line, exhibited different degrees of corrosion: the second group of insulator samples was moderately aged, the third group was lightly aged, and the fourth group consisted of brand-new insulators. The results revealed a strong correlation between aging and the thermal images captured by the infrared camera. This technique can be used to monitor the aging of high voltage insulators as a precaution against catastrophic failure.

Keywords: glass insulator, infrared camera, corona discharge, transmission lines, thermography, surface discharge

Procedia PDF Downloads 160
648 A Rotating Facility with High Temporal and Spatial Resolution Particle Image Velocimetry System to Investigate the Turbulent Boundary Layer Flow

Authors: Ruquan You, Haiwang Li, Zhi Tao

Abstract:

A time-resolved particle image velocimetry (PIV) system has been developed to investigate boundary layer flow under the effects of the rotating Coriolis and buoyancy forces. The time-resolved PIV system consists of a 10 W continuous laser diode and a high-speed camera. The laser diode provides a light sheet less than 1 mm thick, and the high-speed camera can capture 6400 frames per second at 1024×1024 pixels. The laser and the camera are both mounted on the rotating facility, which has a radius of 1 m and rotates at up to 500 revolutions per minute, so the boundary layer velocity in the rotating channel, with and without ribs, can be measured directly under rotating conditions. To investigate the effect of the buoyancy force, transparent heater glasses are used to provide a constant thermal heat flux; density differences are then generated near the channel wall, and the buoyancy force can be simulated while the channel is rotating. Owing to the high temporal and spatial resolution of the system, proper orthogonal decomposition (POD) can be applied to analyze the characteristics of the turbulent boundary layer flow under rotating conditions. With this rotating facility and PIV system, the velocity profile, Reynolds shear stress, spatial and temporal correlations, and the POD modes of the turbulent boundary layer flow can be discussed.
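
A compact snapshot-POD sketch (method of snapshots via the SVD) is shown below for the kind of velocity fields such a time-resolved PIV system produces; the array shapes and variable names are assumptions, not the facility's processing code.

```python
# Snapshot POD of PIV velocity fields: subtract the mean field, take the SVD,
# and return spatial modes, modal energy fractions, and temporal coefficients.
import numpy as np

def snapshot_pod(velocity_snapshots):
    """velocity_snapshots: (n_snapshots, n_points) instantaneous velocity fields."""
    mean_field = velocity_snapshots.mean(axis=0)
    fluctuations = velocity_snapshots - mean_field
    # Rows of vt are spatial POD modes; s**2 is proportional to modal energy.
    u, s, vt = np.linalg.svd(fluctuations, full_matrices=False)
    energy_fraction = s**2 / np.sum(s**2)
    temporal_coeffs = u * s            # coefficients of each mode over time
    return mean_field, vt, energy_fraction, temporal_coeffs
```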

Keywords: rotating facility, PIV, boundary layer flow, spatial and temporal resolution

Procedia PDF Downloads 180
647 Optimized Road Lane Detection Through a Combined Canny Edge Detection, Hough Transform, and Scaleable Region Masking Toward Autonomous Driving

Authors: Samane Sharifi Monfared, Lavdie Rada

Abstract:

Nowadays, autonomous vehicles are developing rapidly toward facilitating human driving. One of the main issues is road lane detection for suitable guidance and accident prevention. This paper aims to improve and optimize road lane detection based on a combination of camera calibration, the Hough transform, and Canny edge detection. The video processing is implemented using the OpenCV library, with the novelty of a scalable region mask. The aim of the study is to introduce automatic road lane detection techniques with minimal manual intervention from the user.
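
A minimal OpenCV sketch of the pipeline named above is given below: Canny edges, a region mask defined as fractions of the frame so it scales with the resolution, and a probabilistic Hough transform. The thresholds and mask polygon are illustrative assumptions, not the paper's tuned values.

```python
# Lane-line candidate detection: blur, Canny, scalable region-of-interest mask,
# then probabilistic Hough line extraction.
import cv2
import numpy as np

def scalable_region_mask(shape, bottom=1.0, top=0.6, apex=0.5):
    """Triangular ROI defined as fractions of the frame, so it scales with it."""
    h, w = shape[:2]
    polygon = np.array([[(0, int(h * bottom)), (w, int(h * bottom)),
                         (int(w * apex), int(h * top))]], dtype=np.int32)
    mask = np.zeros(shape[:2], dtype=np.uint8)
    cv2.fillPoly(mask, polygon, 255)
    return mask

def detect_lane_lines(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(blurred, 50, 150)
    masked = cv2.bitwise_and(edges, scalable_region_mask(edges.shape))
    lines = cv2.HoughLinesP(masked, rho=1, theta=np.pi / 180, threshold=40,
                            minLineLength=30, maxLineGap=100)
    return [] if lines is None else lines.reshape(-1, 4)   # (x1, y1, x2, y2)
```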

Keywords: Hough transform, Canny edge detection, optimisation, scaleable masking, camera calibration, image quality improvement, image processing, video processing

Procedia PDF Downloads 94
646 Quantitative Characterization of Single Orifice Hydraulic Flat Spray Nozzle

Authors: Y. C. Khoo, W. T. Lai

Abstract:

The single-orifice hydraulic flat spray nozzle was evaluated with two global imaging techniques, high-resolution flow visualization and Particle Image Velocimetry (PIV), to characterize various aspects of the resulting spray. A CCD camera with 29 million pixels was used to capture shadowgraph images revealing ligament formation and collapse as well as droplet interaction, and quantitative analysis was performed to obtain sizing information for the droplets and ligaments. The same camera was then used with a PIV system to evaluate the overall velocity field of the spray, from the nozzle exit to droplet discharge. The PIV images were further post-processed to determine the inclusion angle of the spray. The results of these investigations provided significant quantitative understanding of the spray structure, and a detailed understanding of the spray behavior was achieved on that basis.

Keywords: spray, flow visualization, PIV, shadowgraph, quantitative sizing, velocity field

Procedia PDF Downloads 381
645 Experimental Investigation of the Out-of-Plane Dynamic Behavior of Adhesively Bonded Composite Joints at High Strain Rates

Authors: Sonia Sassi, Mostapha Tarfaoui, Hamza Ben Yahia

Abstract:

In this investigation, an experimental technique is presented in which the dynamic response, damage kinetics, and heat dissipation of adhesively bonded joints are measured simultaneously at high strain rates. The material used in this study is widely used in the design of structures for military applications. It was composed of a 45° biaxial fiberglass mat of 0.286 mm thickness in a polyester resin matrix. For the adhesive bonding, a NORPOL polyvinylester layer of 1 mm thickness was used to assemble the composite substrates. The experimental setup consists of a compression split Hopkinson pressure bar (SHPB), a high-speed infrared camera, and a high-speed Fastcam camera. For the dynamic compression tests, 13 mm x 13 mm x 9 mm samples were used for out-of-plane tests at strain rates from 372 to 1030 s⁻¹. The specimen surface is monitored in situ and in real time using the high-speed camera, which records the damage progression in the specimens, and the infrared camera, which provides thermal images in time sequence. Preliminary compressive stress-strain versus strain-rate data show that the dynamic material strength increases with increasing strain rate. Damage investigations revealed that failure occurred mainly at the adhesive/adherend interface because of the brittle nature of the polymeric adhesive. The results show the dependency of the dynamic parameters on the strain rate. A significant temperature rise was observed in the dynamic compression tests: the temperature change depends on the strain rate and the damage mode, and its maximum exceeds 100 °C. The dependence of these results on strain rate indicates that there is a strong correlation between damage rate sensitivity and heat dissipation, which might be useful when developing damage models under dynamic loading that take into account the energy balance of adhesively bonded joints.

Keywords: adhesive bonded joints, Hopkinson bars, out-of-plane tests, dynamic compression properties, damage mechanisms, heat dissipation

Procedia PDF Downloads 212
644 Camera Trapping Coupled with Field Sign Surveys Reveals the Mammalian Diversity and Abundance at Murree-Kotli Sattian-Kahuta National Park, Pakistan

Authors: Shehnila Kanwal

Abstract:

Murree-Kotli Sattian-Kahuta National Park (MKKNP) was declared in 2009; however, not much is known about the diversity and relative abundance of the mammalian fauna of this park. In the current study, we used field sign surveys and infrared camera trapping to gain insight into the diversity of mammalian species and their relative abundance. We conducted field surveys in different areas of the park at various elevations from April 2023 to March 2024 to record the field signs (scats, pug marks, etc.) of mammal species; in addition, we deployed a total of 22 infrared trail camera traps in different areas of the park for 116 nights and obtained a total of 5201 photographs. Camera trapping coupled with the field sign surveys confirmed the presence of twenty-one different mammalian species (large, meso- and small mammals) in the study area. The common leopard was recorded at four different sites in the park, within an altitudinal range of 648-1533 m. The Asiatic jackal and the red fox were recorded at all sites surveyed in the park, within altitudinal ranges of 498-1287 m and 433-2049 m, respectively. Leopard cats were recorded at two sites within an altitudinal range of 498-894 m, and the jungle cat at three sites within 498-846 m. Asian palm civets and small Indian civets were each recorded at three sites, while the grey mongoose and the small Indian mongoose were recorded at four and three sites, respectively. We also collected a total of 75 scats of different mammal species in the park to further confirm their occurrence, and for the Indian pangolin we recorded three burrows at two different sites. The diversity index (H' = 2.369960) and species evenness (E = 0.81995) were calculated. Analysis of the data revealed that the wild boar (Sus scrofa) was the most abundant species in the park. Most of the mammal species were found to be nocturnal, remaining active from dusk throughout the night, and some also remained active at dawn. The temporal activity patterns of the leopard and the Asian palm civet overlapped strongly (61%) in the study area. The barking deer and the Indian crested porcupine were also found to be nocturnal, remaining active throughout the night.
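
The diversity and evenness figures above come from the standard Shannon-Wiener and Pielou formulas; the short sketch below shows the calculation on an arbitrary vector of species counts (illustrative only, not the study's data).

```python
# Shannon-Wiener diversity index H' and Pielou's evenness E from species counts.
import numpy as np

def shannon_index(counts):
    counts = np.asarray(counts, dtype=float)
    p = counts / counts.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

def pielou_evenness(counts):
    s = int(np.count_nonzero(counts))       # number of species recorded
    return shannon_index(counts) / np.log(s)
```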

Keywords: MKKNP, diversity, abundance, evenness, distribution, mammals, overlapped

Procedia PDF Downloads 18
643 Optical Flow Localisation and Appearance Mapping (OFLAAM) for Long-Term Navigation

Authors: Daniel Pastor, Hyo-Sang Shin

Abstract:

This paper presents a novel method for using optical flow navigation for long-term navigation. Unlike standard SLAM approaches for augmented reality, OFLAAM is designed for Micro Air Vehicles (MAVs). It uses an optical flow camera pointing downwards, an IMU, and a monocular camera pointing forwards. This configuration avoids the expensive mapping and tracking of 3D features: the features are only stored in a vocabulary by a localization module that tackles the loss of the navigation estimate. That module, based on the well-established DBoW2 algorithm, is also used to close the loop and allow long-term navigation in confined areas. The combination of high-rate optical flow navigation with a low-rate localization algorithm allows fully autonomous navigation for MAVs while reducing the overall computational load. The framework is implemented in ROS (Robot Operating System) and tested attached to a laptop. A representative scenario is used to analyse the performance of the system.
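
A simplified sketch of the high-rate part of such a system is given below: dead-reckoning a planar position by integrating the flow of a downward-looking camera, scaled by altitude and focal length and rotated by yaw. The scaling and variable names are generic assumptions, not the paper's implementation.

```python
# Integrate 2D position from per-frame optical flow, altitude and yaw.
import numpy as np

def flow_to_displacement(flow_px, altitude_m, focal_px):
    """Convert pixel flow (dx, dy) between frames into metres on the ground."""
    return np.asarray(flow_px) * altitude_m / focal_px

def integrate_position(flow_samples, altitudes, yaws, focal_px, start=(0.0, 0.0)):
    """Dead-reckon a 2D position from per-frame flow, altitude and yaw."""
    pos = np.array(start, dtype=float)
    for flow, alt, yaw in zip(flow_samples, altitudes, yaws):
        d_body = flow_to_displacement(flow, alt, focal_px)
        c, s = np.cos(yaw), np.sin(yaw)
        pos += np.array([c * d_body[0] - s * d_body[1],
                         s * d_body[0] + c * d_body[1]])   # rotate into world frame
    return pos
```

Because such dead-reckoning drifts, the low-rate appearance-based localization module described in the abstract is what keeps the estimate usable over long missions.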

Keywords: vision, UAV, navigation, SLAM

Procedia PDF Downloads 606
642 Control of Belts for Classification of Geometric Figures by Artificial Vision

Authors: Juan Sebastian Huertas Piedrahita, Jaime Arturo Lopez Duque, Eduardo Luis Perez Londoño, Julián S. Rodríguez

Abstract:

The process of generating computer vision is called artificial vision. Artificial vision is a branch of artificial intelligence that allows the acquisition, processing, and analysis of any type of information, especially information obtained from digital images. Artificial vision is currently used in manufacturing for quality control and production, as these processes can be realized through algorithms for counting, positioning, and recognizing objects measured by a single camera (or more). On the other hand, companies use assembly lines formed by conveyor systems with actuators to move pieces from one location to another in their production; these devices must be programmed in advance with a logic routine for good performance. Nowadays, production is the main target of every industry, together with quality and the fast execution of the different stages and processes in the chain of production of any product or service being offered. The main goal of this project is to program a computer that recognizes geometric figures (circle, square, and triangle) through a camera, each figure with a different color, and to link it with a group of conveyor systems that organize the figures in cubicles, which also differ from one another by color. The project is based on artificial vision, so the methodology must be strict; it is detailed below. 1. Methodology: 1.1 The software used in this project is Qt Creator, linked with the OpenCV libraries; together, these tools are used to write the program that identifies colors and shapes directly from the camera. 1.2 Image acquisition: to start using the OpenCV libraries, it is necessary to acquire images, which can be captured by a computer's web camera or by a specialized camera. 1.3 The recognition of RGB colors is done in code by traversing the matrices of the captured images and comparing pixels, identifying the primary colors red, green, and blue. 1.4 To detect shapes, it is necessary to segment the images: the first step is converting the image from RGB to grayscale to work with the dark tones of the image; then the image is binarized, which means obtaining the figure in white on a black background; finally, the contours of the figure are found and the number of edges is counted to identify which figure it is (a sketch of these steps is given below). 1.5 After the color and figure have been identified, the program communicates with the conveyor systems, which, through the actuators, classify the figures into their respective cubicles. Conclusions: The OpenCV library is a useful tool for projects in which an interface between a computer and the environment is required, since the camera obtains external characteristics and any process can then be carried out. With the program developed for this project, any type of assembly line can be optimized, because images of the environment can be obtained and the process becomes more accurate.
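
The sketch referred to in step 1.4 is shown here: binarize, find contours, count approximated vertices to name the figure, and read the dominant color inside each contour. Thresholds are illustrative, and the Qt interface and conveyor control are not shown.

```python
# Shape and color classification sketch: Otsu binarization, external contours,
# polygon approximation for the shape, mean color inside the contour mask.
import cv2
import numpy as np

def classify_shape(contour):
    approx = cv2.approxPolyDP(contour, 0.04 * cv2.arcLength(contour, True), True)
    if len(approx) == 3:
        return "triangle"
    if len(approx) == 4:
        return "square"
    return "circle"

def dominant_color(frame_bgr, contour):
    mask = np.zeros(frame_bgr.shape[:2], dtype=np.uint8)
    cv2.drawContours(mask, [contour], -1, 255, thickness=-1)
    b, g, r, _ = cv2.mean(frame_bgr, mask=mask)
    return ("red", "green", "blue")[int(np.argmax([r, g, b]))]

def detect_figures(frame_bgr, min_area=500):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [(classify_shape(c), dominant_color(frame_bgr, c))
            for c in contours if cv2.contourArea(c) > min_area]
```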

Keywords: artificial intelligence, artificial vision, binarized, grayscale, images, RGB

Procedia PDF Downloads 378
641 Low-Cost Robotic-Assisted Laparoscope

Authors: Ege Can Onal, Enver Ersen, Meltem Elitas

Abstract:

Laparoscopy is a surgical operation, well known as keyhole surgery. The operation is performed through small holes; hence, the patient's scars are much smaller, patients recover in a short time, and the hospital stay is shorter in comparison to open surgery. Several tools are used in laparoscopic operations; among them, the laparoscope has a crucial role, as it provides the vision during the operation, which is the main focus here. Since the operation area is very small, the motion of the surgical tools can be more limited in laparoscopic operations than in traditional surgery. To overcome this limitation, most laparoscopic tools have become more precise, dexterous, multi-functional, or automated. Here, we present a robotic-assisted laparoscope that is controlled with pedals directly by the surgeon. Thus, the movement of the laparoscope can be controlled better, so there will be no need to recalibrate the camera during the operation; the need for an assistant who controls the movement of the laparoscope is eliminated, and the duration of the laparoscopic operation may be shorter since the surgeon operates the camera directly.

Keywords: laparoscope, laparoscopy, low-cost, minimally invasive surgery, robotic-assisted surgery

Procedia PDF Downloads 342
640 Automated Driving Deep Neural Networks Model Accuracy and Performance Assessment in a Simulated Environment

Authors: David Tena-Gago, Jose M. Alcaraz Calero, Qi Wang

Abstract:

The evolution and integration of automated vehicles have become more and more tangible in recent years. State-of-the-art technological advances in the field of camera-based Artificial Intelligence (AI) and computer vision greatly favor the performance and reliability of Advanced Driver Assistance Systems (ADAS), leading to greater knowledge of vehicle operation and closer resemblance to human behavior. However, the exclusive use of this technology still seems insufficient to control vehicle operation completely. To reveal the degree of accuracy of current camera-based automated driving AI modules, this paper studies the structure and behavior of one of the main solutions in a controlled testing environment. The results obtained clearly outline the lack of reliability when the AI model is used exclusively in the perception stage, thereby entailing the use of additional complementary sensors to improve safety and performance.

Keywords: accuracy assessment, AI-driven mobility, artificial intelligence, automated vehicles

Procedia PDF Downloads 113