Search results for: Web camera
561 Cracks Detection and Measurement Using VLP-16 LiDAR and Intel Depth Camera D435 in Real-Time
Authors: Xinwen Zhu, Xingguang Li, Sun Yi
Abstract:
Cracks are among the most common forms of damage in buildings, bridges, roads, and other structures, and they may pose safety hazards. They occur in structures made of a wide variety of materials. Traditional methods of manual detection and measurement are subjective, time-consuming, and labor-intensive, and are increasingly unable to meet the needs of modern development. In addition, crack detection and measurement must be carried out safely, considering space limitations and hazards, so intelligent crack detection has become a necessary area of research. In this paper, an efficient method for crack detection and quantification using 3D sensors, a LiDAR and a depth camera, is proposed. The method works even in a dark environment, which is common in real-world applications. The LiDAR spins rapidly to scan the surrounding environment and discover cracks, firing laser pulses thousands of times per second and providing a rich 3D point cloud in real time with fairly accurate depth information. The distance of each point can be determined to within about ±3 cm, and top-range models can measure ranges of over 100 m. However, this accuracy is still too coarse for some high-precision structures and materials, so a depth camera is needed to measure crack depth more accurately. The cracks are scanned by the depth camera at the same time. Finally, all data from the LiDAR and depth camera are analyzed, and the size of the cracks can be quantified successfully. The comparison shows that the minimum and mean absolute percentage errors between measured and calculated widths are about 2.22% and 6.27%, respectively. The experiments and results are presented in this paper.
Keywords: LiDAR, depth camera, real-time, detection and measurement
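The reported error figures come from a straightforward absolute-percentage-error comparison; a minimal sketch follows, assuming hypothetical arrays of manually measured and sensor-derived crack widths (the actual measurement pipeline is not detailed in the abstract).

```python
import numpy as np

# Hypothetical example values: manually measured vs. LiDAR/depth-camera-derived crack widths (mm)
measured_width = np.array([2.5, 3.1, 4.0, 1.8, 2.9])
calculated_width = np.array([2.45, 3.3, 4.2, 1.9, 3.1])

# Absolute percentage error for each crack
ape = np.abs(measured_width - calculated_width) / measured_width * 100.0

print(f"Minimum absolute percentage error: {ape.min():.2f}%")
print(f"Mean absolute percentage error:    {ape.mean():.2f}%")
```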
Procedia PDF Downloads 224
560 Fundamental Study on Reconstruction of 3D Image Using Camera and Ultrasound
Authors: Takaaki Miyabe, Hideharu Takahashi, Hiroshige Kikura
Abstract:
The Government of Japan and Tokyo Electric Power Company Holdings, Incorporated (TEPCO) are struggling with the decommissioning of the Fukushima Daiichi Nuclear Power Plant, especially fuel debris retrieval. For fuel debris retrieval, information on the amount, location, characteristics, and distribution of the debris is important. Recently, a survey was conducted using a robot with a small camera. Progress reports on remote robot and camera research have speculated that fuel debris is present both at the bottom of the Primary Containment Vessel (PCV) and inside the Reactor Pressure Vessel (RPV). The investigation found a 'tie plate', a handle on the fuel rod assembly, at the bottom of the containment vessel. It is therefore assumed that a hole large enough to allow the tie plate to fall has opened at the bottom of the reactor pressure vessel, and exploring the existence of holes leading into the RPV is also an issue. Investigations of the lower part of the RPV are currently underway, but no investigations have been made inside or above the PCV, so a survey must be conducted for future fuel debris retrieval. The environment inside the RPV cannot be imagined because of the effects of the melted fuel, and a way to accurately check the internal situation is needed. What we propose here is the adaptation of a technique called 'Structure from Motion', which reconstructs a 3D image from multiple photos taken by a single camera. The plan is to mount a monocular camera on the tip of a long-arm robot, reach it into the upper part of the PCV, and take video. We are currently building a long-arm robot for use in high-level radiation environments. However, the environment above the pressure vessel is not known exactly: fog may be generated by the cooling water of the fuel debris, and the radiation level may be high. Since a camera alone cannot provide sufficient sensing in these environments, we further propose using ultrasonic measurement technology in addition to the camera. Ultrasonic sensors are resistant to environmental changes such as fog and to high radiation doses, so these systems can be used for a long time. The purpose is to develop a system adapted to the inside of the containment vessel by combining a camera and ultrasound. Therefore, in this research, we performed a basic experiment on 3D image reconstruction using a camera and ultrasound. In this report, we identify the favorable and unfavorable conditions for each sensing modality and propose the reconstruction and detection method. The results revealed the strengths and weaknesses of each approach.
Keywords: camera, image processing, reconstruction, ultrasound
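As a rough illustration of the Structure-from-Motion step named above, the sketch below recovers relative camera pose and sparse 3D points from two frames of a monocular video using OpenCV. The intrinsic matrix and image paths are placeholder assumptions, and the authors' actual pipeline (and the ultrasound fusion) is not specified in the abstract.

```python
import cv2
import numpy as np

K = np.array([[700.0, 0, 320.0],   # assumed camera intrinsics (focal length, principal point)
              [0, 700.0, 240.0],
              [0, 0, 1.0]])

img1 = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)  # placeholder video frames
img2 = cv2.imread("frame_010.png", cv2.IMREAD_GRAYSCALE)

# Detect and match ORB features between the two frames
orb = cv2.ORB_create(2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)

pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# Estimate relative pose from the essential matrix
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, prob=0.999, threshold=1.0)
_, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

# Triangulate inlier correspondences into 3D points (up to scale)
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])
inliers = mask.ravel() > 0
pts4d = cv2.triangulatePoints(P1, P2, pts1[inliers].T, pts2[inliers].T)
pts3d = (pts4d[:3] / pts4d[3]).T
print(f"Reconstructed {len(pts3d)} sparse 3D points")
```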
Procedia PDF Downloads 104
559 Evaluation of Sensor Pattern Noise Estimators for Source Camera Identification
Authors: Benjamin Anderson-Sackaney, Amr Abdel-Dayem
Abstract:
This paper presents a comprehensive survey of recent source camera identification (SCI) systems. The performance of various sensor pattern noise (SPN) estimators was then experimentally assessed under common photo response non-uniformity (PRNU) frameworks. The experiments used 1350 natural and 900 flat-field images captured by 18 individual cameras. Twelve different experiments, grouped into three sets, were conducted, and the results were analyzed using receiver operating characteristic (ROC) curves. The experimental results demonstrated that combining the basic SPN estimator with a wavelet-based filtering scheme provides promising results, whereas the phase SPN estimator fits better with both patch-based (BM3D) and anisotropic diffusion (AD) filtering schemes.
Keywords: sensor pattern noise, source camera identification, photo response non-uniformity, anisotropic diffusion, peak to correlation energy ratio
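For readers unfamiliar with PRNU-based SCI, the sketch below shows the basic idea the abstract evaluates: estimate a noise residual per image with a denoising filter, average residuals from flat-field images into a camera fingerprint, and score a query image by normalized correlation. The Gaussian filter stands in for the wavelet/BM3D/AD denoisers compared in the paper, and all data here are placeholders.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def noise_residual(img, sigma=1.0):
    """Basic SPN estimate: image minus a denoised version of itself."""
    return img - gaussian_filter(img, sigma)

def camera_fingerprint(flat_field_images):
    """Average the residuals of flat-field images to suppress scene content."""
    residuals = [noise_residual(img) for img in flat_field_images]
    return np.mean(residuals, axis=0)

def correlation_score(query_img, fingerprint):
    """Normalized cross-correlation between a query residual and a fingerprint."""
    r = noise_residual(query_img).ravel()
    f = fingerprint.ravel()
    r = (r - r.mean()) / (r.std() + 1e-12)
    f = (f - f.mean()) / (f.std() + 1e-12)
    return float(np.dot(r, f) / r.size)

# Hypothetical usage: flat-field frames from one camera, then a query image
flats = [np.random.rand(480, 640) for _ in range(20)]   # stand-in for real flat-field captures
fp = camera_fingerprint(flats)
query = np.random.rand(480, 640)
print("correlation score:", correlation_score(query, fp))
```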
Procedia PDF Downloads 441
558 Depth Camera Aided Dead-Reckoning Localization of Autonomous Mobile Robots in Unstructured GNSS-Denied Environments
Authors: David L. Olson, Stephen B. H. Bruder, Adam S. Watkins, Cleon E. Davis
Abstract:
In global navigation satellite system (GNSS)-denied settings such as indoor environments, autonomous mobile robots are often limited to dead-reckoning navigation techniques to determine their position, velocity, and attitude (PVA). Localization is typically accomplished by employing an inertial measurement unit (IMU), which, while precise in nature, accumulates errors rapidly and severely degrades the localization solution. Standard sensor fusion methods, such as Kalman filtering, aim to fuse precise IMU measurements with accurate aiding sensors to establish a solution that is both precise and accurate. In indoor environments, where GNSS is unavailable and no other a priori information about the environment is known, effective sensor fusion is difficult to achieve because accurate aiding sensor choices are sparse. However, an opportunity arises by employing a depth camera in the indoor environment. A depth camera can capture point clouds of the surrounding floors and walls, and extracting attitude from these surfaces can serve as an accurate aiding source that directly combats the errors arising from gyroscope imperfections. This configuration for sensor fusion leads to a dramatic reduction of PVA error compared to traditional aiding sensor configurations. This paper provides the theoretical basis for the depth camera aiding method, initial expectations of the performance benefit via simulation, and a hardware implementation verifying its efficacy. The hardware implementation uses the Quanser Qbot 2™ mobile robot with a VectorNav VN-200™ IMU and a Microsoft Kinect™ camera.
Keywords: autonomous mobile robotics, dead reckoning, depth camera, inertial navigation, Kalman filtering, localization, sensor fusion
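A minimal sketch of the aiding idea, assuming a single attitude angle: gyro rates are integrated for dead reckoning, and occasional attitude measurements derived from depth-camera plane fits correct the drift through a scalar Kalman filter. All noise parameters and measurements below are made up for illustration; the paper's actual filter is a full PVA estimator.

```python
import numpy as np

dt = 0.01                 # IMU sample period (s), assumed
q_gyro = 1e-5             # process noise from gyro imperfections (rad^2), assumed
r_cam = 1e-3              # depth-camera attitude measurement noise (rad^2), assumed

theta_est, p_est = 0.0, 1e-2   # state (attitude angle) and its variance

rng = np.random.default_rng(0)
theta_true = 0.0
for k in range(2000):
    # True motion and a biased, noisy gyro reading (the drift source)
    rate_true = 0.1 * np.sin(0.5 * k * dt)
    gyro = rate_true + 0.02 + rng.normal(0, 0.01)
    theta_true += rate_true * dt

    # Prediction: dead reckoning by integrating the gyro
    theta_est += gyro * dt
    p_est += q_gyro

    # Update every 50 steps with a depth-camera attitude measurement (e.g., from a wall-plane fit)
    if k % 50 == 0:
        z = theta_true + rng.normal(0, np.sqrt(r_cam))
        K = p_est / (p_est + r_cam)          # Kalman gain
        theta_est += K * (z - theta_est)
        p_est *= (1.0 - K)

print(f"final attitude error: {abs(theta_est - theta_true):.4f} rad")
```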
Procedia PDF Downloads 207
557 Non-Contact Measurement of Soil Deformation in a Cyclic Triaxial Test
Authors: Erica Elice Uy, Toshihiro Noda, Kentaro Nakai, Jonathan Dungca
Abstract:
Deformation in a conventional cyclic triaxial test is normally measured using a point-wise measuring device. In this study, a non-contact measurement technique was applied to monitor and measure the non-homogeneous behavior of the soil under cyclic loading. Non-contact measurement is executed through image processing: two-dimensional measurements were performed using the Lucas-Kanade optical flow algorithm, implemented in LabVIEW. In this technique, the non-homogeneous deformation was monitored using a mirrorless camera, chosen because it is economical and can take pictures at a fast rate. The camera was first calibrated to remove the distortion introduced by the lens and by the testing environment. Calibration was divided into two phases: the first phase calibrated the camera parameters and the distortion caused by the lens, while the second phase eliminated the distortion introduced by the plexiglass of the triaxial cell, from which a correction factor was established. A series of consolidated undrained cyclic triaxial tests was performed on a coarse soil. The results from the non-contact measurement technique were compared to the deformation measured by a linear variable displacement transducer. It was observed that deformation was higher in the area where failure occurs.
Keywords: cyclic loading, non-contact measurement, non-homogeneous, optical flow
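A minimal sketch of the Lucas-Kanade tracking step described above, using OpenCV's pyramidal implementation rather than the authors' LabVIEW code; the video path, corner-detection settings, and pixel-to-millimeter scale are placeholder assumptions.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("triaxial_test.avi")          # placeholder video of the specimen
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

# Pick trackable feature points on the specimen surface
pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200, qualityLevel=0.01, minDistance=7)

mm_per_pixel = 0.05                                   # assumed scale from calibration
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Pyramidal Lucas-Kanade optical flow from the previous frame to the current one
    new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None,
                                                  winSize=(21, 21), maxLevel=3)
    good_new = new_pts[status.ravel() == 1]
    good_old = pts[status.ravel() == 1]

    # Displacement of each tracked point converted to millimeters
    disp_mm = np.linalg.norm(good_new - good_old, axis=-1) * mm_per_pixel
    print(f"mean displacement this frame: {disp_mm.mean():.3f} mm")

    prev_gray, pts = gray, good_new.reshape(-1, 1, 2)
```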
Procedia PDF Downloads 301
556 Single-Camera Basketball Tracker through Pose and Semantic Feature Fusion
Authors: Adrià Arbués-Sangüesa, Coloma Ballester, Gloria Haro
Abstract:
Tracking sports players is a widely challenging scenario, especially in single-feed videos recorded in tight courts, where clutter and occlusions cannot be avoided. This paper presents an analysis of several geometric and semantic visual features to detect and track basketball players. An ablation study is carried out and then used to show that a robust tracker can be built with deep learning features, without the need to extract contextual features such as proximity or color similarity, or to apply camera stabilization techniques. The presented tracker consists of (1) a detection step, which uses a pretrained deep learning model to estimate the players' pose, followed by (2) a tracking step, which leverages pose and semantic information from the output of a convolutional layer in a VGG network. Its performance is analyzed in terms of MOTA over a basketball dataset with more than 10k instances.
Keywords: basketball, deep learning, feature extraction, single-camera, tracking
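The tracking step amounts to associating detections across frames by feature similarity; the sketch below illustrates one common way to do this, matching per-player feature vectors (e.g., pooled VGG activations) between consecutive frames with the Hungarian algorithm. The feature dimensionality and cost threshold are assumptions, not values from the paper.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(prev_feats, curr_feats, max_cost=0.5):
    """Match players between frames by cosine distance of their appearance features."""
    prev = prev_feats / np.linalg.norm(prev_feats, axis=1, keepdims=True)
    curr = curr_feats / np.linalg.norm(curr_feats, axis=1, keepdims=True)
    cost = 1.0 - prev @ curr.T                  # cosine distance matrix
    rows, cols = linear_sum_assignment(cost)    # Hungarian assignment
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= max_cost]

# Hypothetical usage: 10 tracked players vs. 10 new detections, 512-D features each
rng = np.random.default_rng(0)
prev_frame_features = rng.normal(size=(10, 512))
curr_frame_features = prev_frame_features + rng.normal(scale=0.1, size=(10, 512))
matches = associate(prev_frame_features, curr_frame_features)
print("matched (track_id, detection_id) pairs:", matches)
```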
Procedia PDF Downloads 138
555 Quantitative Analysis of Camera Setup for Optical Motion Capture Systems
Authors: J. T. Pitale, S. Ghassab, H. Ay, N. Berme
Abstract:
Biomechanics researchers commonly use marker-based optical motion capture (MoCap) systems to extract human body kinematic data. These systems use cameras to detect passive or active markers placed on the subject and rely on triangulation methods to form images of the markers, which typically requires each marker to be visible to at least two cameras simultaneously. Cameras in a conventional optical MoCap system are mounted at a distance from the subject, typically on walls, the ceiling, or fixed or adjustable frame structures. To accommodate space constraints, and as portable force measurement systems become popular, there is a need for smaller and smaller capture volumes. When the efficacy of a MoCap system is investigated, it is important to consider the tradeoff among camera distance from the subject, pixel density, and field of view (FOV). If cameras are mounted relatively close to a subject, the area corresponding to each pixel shrinks, thus increasing the image resolution; however, the cross section of the capture volume also decreases, reducing the visible area, so additional cameras may be required in such applications. On the other hand, mounting cameras relatively far from the subject increases the visible area but reduces the image quality. The goal of this study was to develop a quantitative methodology to investigate marker occlusions and optimize camera placement for a given capture volume and set of subject postures using three-dimensional computer-aided design (CAD) tools. We modeled a 4.9 m x 3.7 m x 2.4 m (L x W x H) MoCap volume and designed a mounting structure for the cameras using SOLIDWORKS (Dassault Systèmes, MA, USA). The FOV was used to generate the capture volume for each camera placed on the structure. A human body model with configurable posture was placed at the center of the capture volume in the CAD environment. We studied three postures: initial contact, mid-stance, and early swing. The human body CAD model was adjusted for each posture based on the range of joint angles, and markers were attached to the model to enable a full-body capture. The cameras were placed around the capture volume at a maximum distance of 2.7 m from the subject. We used the Camera View feature in SOLIDWORKS to generate images of the subject as seen by each camera, and the number of markers visible to each camera was tabulated. The approach presented in this study provides a quantitative method to investigate the efficacy and efficiency of a MoCap camera setup. It enables optimization of a camera setup by adjusting the position and orientation of cameras in the CAD environment and quantifying marker visibility, and it allows different camera setup options to be compared on the same quantitative basis. The flexibility of the CAD environment enables accurate representation of the capture volume, including any objects that may cause obstructions between the subject and the cameras. With this approach, it is possible to compare different camera placement options to each other, as well as to optimize a given camera setup based on quantitative results.
Keywords: motion capture, cameras, biomechanics, gait analysis
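The quantitative criterion in the study (each marker visible to at least two cameras) can be expressed as a simple geometric test; the sketch below checks, for assumed camera poses and a pinhole view cone, how many cameras see each marker. It ignores occlusion by the body model, which the CAD workflow handles, and all positions and FOV values are illustrative assumptions.

```python
import numpy as np

def visible(marker, cam_pos, cam_dir, half_fov_deg=30.0, max_range=6.0):
    """True if the marker lies inside the camera's view cone and within an assumed range."""
    v = marker - cam_pos
    dist = np.linalg.norm(v)
    if dist > max_range:
        return False
    cos_angle = np.dot(v / dist, cam_dir / np.linalg.norm(cam_dir))
    return cos_angle >= np.cos(np.radians(half_fov_deg))

# Hypothetical setup: four cameras around a capture volume, all aimed toward its center
cams = [(np.array([ 2.4,  1.8, 2.0]), np.array([-1.0, -0.75, -0.6])),
        (np.array([-2.4,  1.8, 2.0]), np.array([ 1.0, -0.75, -0.6])),
        (np.array([ 2.4, -1.8, 2.0]), np.array([-1.0,  0.75, -0.6])),
        (np.array([-2.4, -1.8, 2.0]), np.array([ 1.0,  0.75, -0.6]))]

markers = [np.array([0.0, 0.0, 1.0]), np.array([0.3, 0.1, 1.4]), np.array([2.0, 1.5, 0.2])]
for i, m in enumerate(markers):
    n_seen = sum(visible(m, pos, direction) for pos, direction in cams)
    status = "OK" if n_seen >= 2 else "NOT TRIANGULATABLE"
    print(f"marker {i}: visible to {n_seen} cameras -> {status}")
```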
Procedia PDF Downloads 310
554 Exploring the Impacts of Field of View on 3D Game Experiences and Task Performances
Authors: Jiunde Lee, Meng-Yu Wun
Abstract:
The present study explored how differences in the range of the geometric field of view (GFOV) and differences in camera control in a 3D simulation game, the 2013 PC version of OMSI—The Bus Simulator, affected players' cognitive load, anxiety, and task performance. The study employed a between-subjects factorial experimental design, and a total of 80 subjects completed the experiment and provided data eligible for further analysis. The results showed that, regarding field of view, players performed tasks better with a spacious view. Although the spacious view consumed more of the players' cognitive resources in terms of 'mental demand', 'physical demand', and 'temporal demand', they performed better in the experiment and their anxiety was effectively reduced. With the narrow GFOV, on the other hand, players felt they spent more cognitive resources on 'effort' and 'frustration degree' and performed worse, but the effect was not significant enough to reduce their anxiety. In terms of camera control, players performed worse with a fixed lens because it restricted dexterous control, but there was no significant difference in the players' subjective cognitive load or anxiety. The results further illustrated that task performance was affected by the interaction of GFOV and camera control.
Keywords: geometric field of view, camera lens, cognitive load, anxiety
Procedia PDF Downloads 149
553 Omni-Modeler: Dynamic Learning for Pedestrian Redetection
Authors: Michael Karnes, Alper Yilmaz
Abstract:
This paper presents the application of the Omni-Modeler to pedestrian redetection. The pedestrian redetection task creates several challenges for deep neural networks (DNNs) due to the variation of pedestrian appearance with camera position, the variety of environmental conditions, and the specificity required to recognize one pedestrian from another. DNNs require significant training sets and are not easily adapted to changes in class appearance or changes in the set of classes held in their knowledge domain. Pedestrian redetection requires an algorithm that can actively manage its knowledge domain as individuals move in and out of the scene and can learn individual appearances from a few frames of video. The Omni-Modeler is a dynamically learning few-shot visual recognition algorithm developed for tasks with limited training data availability. It adapts the knowledge domain of pre-trained deep neural networks to novel concepts with a calculated localized language encoder. The knowledge domain is generated as a dynamic dictionary of concept definitions, which is directly updatable as new information becomes available, and query images are identified through nearest-neighbor comparison to the learned object definitions. The study presented in this paper evaluates its performance in re-identifying individuals as they move through a scene in both single-camera and multi-camera tracking applications. The results demonstrate that the Omni-Modeler shows potential for cross-camera pedestrian redetection and is highly effective for single-camera redetection, with 93% accuracy across 30 individuals using 64 example images for each individual.
Keywords: dynamic learning, few-shot learning, pedestrian redetection, visual recognition
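The dynamic-dictionary idea described above can be sketched as an updatable store of per-person embedding prototypes queried by nearest-neighbor search; the embedding function, dimensionality, and threshold below are placeholders, not the Omni-Modeler's actual encoder.

```python
import numpy as np

class DynamicDictionary:
    """Updatable store of per-identity prototypes, queried by nearest neighbor."""
    def __init__(self):
        self.prototypes = {}   # identity -> mean embedding

    def update(self, identity, embeddings):
        """Add or refresh an identity from a few example embeddings (few-shot update)."""
        self.prototypes[identity] = np.mean(embeddings, axis=0)

    def remove(self, identity):
        """Drop an identity when it leaves the scene."""
        self.prototypes.pop(identity, None)

    def query(self, embedding, threshold=5.0):
        """Return the closest known identity, or None if nothing is near enough."""
        best_id, best_dist = None, np.inf
        for identity, proto in self.prototypes.items():
            d = np.linalg.norm(embedding - proto)
            if d < best_dist:
                best_id, best_dist = identity, d
        return best_id if best_dist <= threshold else None

# Hypothetical usage with random 128-D "embeddings" standing in for encoder outputs
rng = np.random.default_rng(1)
dictionary = DynamicDictionary()
dictionary.update("person_A", rng.normal(size=(64, 128)) * 0.05 + 1.0)
dictionary.update("person_B", rng.normal(size=(64, 128)) * 0.05 - 1.0)
query = rng.normal(size=128) * 0.05 + 1.0
print("query matched to:", dictionary.query(query))
```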
Procedia PDF Downloads 76
552 Mechanism of Changing a Product Concept
Authors: Kiyohiro Yamazaki
Abstract:
The purpose of this paper is to examine a hypothesis explaining the mechanism by which a product's fundamental function is removed or reduced through changes in the product concept, using the digital camera industry as a case. The paper points out that not owning the fundamental technology can cause the product concept to change. Because Casio was able to create a new competitive factor in this way, the paper discusses a possible mechanism of changing the product concept.
Keywords: firm without fundamental technology, product development, product concept, digital camera industry, Casio
Procedia PDF Downloads 562
551 Flicker Detection with Motion Tolerance for Embedded Camera
Authors: Jianrong Wu, Xuan Fu, Akihiro Higashi, Zhiming Tan
Abstract:
CMOS image sensors with a rolling shutter are used broadly in the digital cameras embedded in mobile devices. The rolling shutter suffers from flicker artifacts caused by fluorescent lamps, and these artifacts are easily observed. In this paper, the characteristics of illumination flicker in the motion case were analyzed, and two efficient detection methods based on matching-fragment selection were proposed. According to the experimental results, our methods achieve as high as 100% accuracy in static scenes and at least 97% in motion scenes.
Keywords: illumination flicker, embedded camera, rolling shutter, detection
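The abstract does not detail its detection methods; as generic background, the sketch below shows why rolling-shutter flicker is detectable at all: under a fluorescent lamp driven at mains frequency, the row-averaged intensity oscillates at twice the mains frequency, which appears as a spectral peak along the row axis. This is a simple FFT illustration with assumed sensor parameters, not the matching-fragment method proposed in the paper.

```python
import numpy as np

rows, row_readout_time = 1000, 20e-6       # assumed row count and per-row readout time
mains_hz = 50.0                            # fluorescent lamps flicker at twice the mains frequency

# Simulate the row means of one rolling-shutter frame under flickering illumination
t = np.arange(rows) * row_readout_time
row_means = 100.0 + 10.0 * np.sin(2 * np.pi * 2 * mains_hz * t) + np.random.normal(0, 1, rows)

# Look for a spectral peak near 100 Hz in the row-wise intensity profile
spectrum = np.abs(np.fft.rfft(row_means - row_means.mean()))
freqs = np.fft.rfftfreq(rows, d=row_readout_time)
peak_hz = freqs[np.argmax(spectrum)]
print(f"dominant row-intensity frequency: {peak_hz:.1f} Hz (expected ~{2 * mains_hz:.0f} Hz)")
```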
Procedia PDF Downloads 420
550 Design and Implementation of Wireless Synchronized AI System for Security
Authors: Saradha Priya
Abstract:
Developing a virtual human is important for meeting the challenges that occur in many applications where humans find a task difficult or risky to perform. A robot is a machine that can perform a task automatically or with guidance. Robotics is generally a combination of artificial intelligence and physical machines (motors), and computational intelligence involves the programmed instructions. This project proposes a robotic vehicle that has a camera and a PIR sensor and moves in response to text commands. It is specially designed to perform surveillance and a few other tasks in the most efficient way. Serial communication takes place between a remote base station, a GUI application, and a PC.
Keywords: ZigBee, camera, PIR sensor, wireless transmission, DC motor
Procedia PDF Downloads 349
549 Invisible Aircraft Using Plasma Display
Authors: C. Ramamoorthy, R. Ranga Raj
Abstract:
In ancient times, the Ramayana epic depicted the use of an invisible, fuel-less aircraft named Pushpavimana, and the color change of the chameleon in the reptile family points to a color-change phenomenon available in nature. In the present scenario, aircraft are visible and therefore easily identified, which exposes them to many threats. Research on this problem is still ongoing using liquid crystal displays (LCD). The objective of this paper is to propose a better approach: an invisible aircraft concept using a plasma display driven by a charge-coupled device (CCD) camera, which has high resolution and can be used for many purposes such as spying and defense. Moreover, it is inexpensive and helps the aircraft escape enemy observation.
Keywords: CCD camera, chameleon, invisible, plasma display
Procedia PDF Downloads 403
548 2-Dimensional Kinematic Analysis on Sprint Start with Sprinting Performance of Novice Athletes
Authors: Satpal Yadav, Biswajit Basumatary, Arvind S. Sajwan, Ranjan Chakravarty
Abstract:
The purpose of the study was to assess the effect of selected 2D kinematic variables on the sprint start and sprinting performance of novice athletes. Six athletes (3 National and 3 State level) of the Sports Authority of India, Guwahati, were selected for this study. The mean (M) and standard deviation (SD) of the sprinters were: age (17.44, 1.55 years), height (1.74 m, 0.84 m), weight (62.25 kg, 4.55 kg), arm length (65.00 cm, 3.72 cm), and leg length (96.35 cm, 2.71 cm). The Biokin-2D motion analysis system V4.5 was used for acquiring two-dimensional kinematic data/variables on the sprint start and sprinting performance. For the kinematic analysis, a standard motion camera with a frame rate of 60 frames per second (a Sony handy camera) was used. The photographic sequence was taken under controlled conditions; the camera was placed 12 m from the athletes at a fixed height of 1.2 m. The results showed that National and State level athletes differed significantly in knee trajectory, ankle trajectory, knee displacement, ankle displacement, knee linear velocity, ankle linear velocity, and ankle linear acceleration, whereas no significant difference was found between National and State level athletes in knee joint linear acceleration during the sprint start and sprinting performance. For all statistical tests, the level of significance was set at p<0.05.
Keywords: 2D kinematic analysis, sprinting performance, novice athletes, sprint start
Procedia PDF Downloads 323
547 Examining Foreign Student Visual Perceptions of Online Marketing Tools at a Hungarian University
Authors: Anita Kéri
Abstract:
Higher education marketing has been a widely researched field in recent years. Due to increasing competition among higher education institutions worldwide, it has become crucial to target foreign students with effective marketing tools, and online marketing tools have become central to attracting, retaining, and satisfying the needs of foreign students. Therefore, the aim of the current study is to reveal how the online marketing tools of a Hungarian university are perceived visually by its first-year foreign students, with special emphasis on the university webpage content. Eye-camera tracking and retrospective think-aloud interviews were used to measure visual perceptions. Results show that freshman students remember online marketing content better when it features familiar content. Pictures of real-life students and their experiences attract students' attention more, and students also remember the information on these webpage elements better, compared to designs with stock photos. This research is novel in that it uses eye-camera tracking in the field of higher education marketing, thereby providing insight into the perception of online higher education marketing by foreign students.
Keywords: higher education, marketing, eye-camera, visual perceptions
Procedia PDF Downloads 100
546 Temperature-Based Detection of Initial Yielding Point in Loading of Tensile Specimens Made of Structural Steel
Authors: Aqsa Jamil, Tamura Hiroshi, Katsuchi Hiroshi, Wang Jiaqi
Abstract:
The yield point represents the upper limit of the forces that can be applied to a specimen without causing any permanent deformation. After yielding, the behavior of the specimen changes suddenly, including the possibility of cracking or buckling, so the accumulation of damage and the type of fracture change depending on this condition. As it is difficult to accurately detect the yield points at the several stress concentration points in structural steel specimens, an effort has been made in this research to develop a convenient technique using thermography (temperature-based detection) during tensile tests for the precise detection of yield point initiation. To verify the applicability of the thermography camera, tests were conducted under different loading conditions, measuring deformation with various strain gauges and monitoring the surface temperature with a thermography camera. The yield point of the specimens was estimated with the help of the temperature dip, which occurs due to the thermoelastic effect during plastic deformation. The scattering of the data was checked by performing a repeatability analysis. The effects of temperature imperfection and of the light source were checked by carrying out tests in the daytime as well as at midnight and by calculating the signal-to-noise ratio (SNR) of the noisy data from the infrared thermography camera; from this it can be concluded that the camera is independent of testing time and of the presence of a visible light source. Furthermore, a fully coupled thermal-stress analysis was performed using the Abaqus/Standard exact implementation technique to validate the temperature profiles obtained from the thermography camera and to check the feasibility of numerical simulation for predicting the results extracted with the thermographic technique.
Keywords: signal to noise ratio, thermoelastic effect, thermography, yield point
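The SNR check mentioned above can be illustrated with a simple calculation; the sketch below treats a smoothed temperature trace as the signal and the residual as noise and reports the ratio in decibels. The definition of SNR used in the paper is not stated, so this is an assumed, common formulation applied to synthetic data.

```python
import numpy as np

def snr_db(trace, window=25):
    """SNR of a 1-D temperature trace: moving-average trend as signal, residual as noise."""
    kernel = np.ones(window) / window
    signal = np.convolve(trace, kernel, mode="same")     # smoothed trend
    noise = trace - signal
    return 10.0 * np.log10(np.sum(signal**2) / np.sum(noise**2))

# Synthetic surface-temperature trace: slow thermoelastic dip plus sensor noise (assumed values)
t = np.linspace(0, 60, 3000)                             # 60 s test duration
temperature = 23.0 - 0.02 * t + np.random.normal(0, 0.01, t.size)

print(f"SNR of the simulated trace: {snr_db(temperature):.1f} dB")
```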
Procedia PDF Downloads 107
545 3D Plant Growth Measurement System Using Deep Learning Technology
Authors: Kazuaki Shiraishi, Narumitsu Asai, Tsukasa Kitahara, Sosuke Mieno, Takaharu Kameoka
Abstract:
The purpose of this research is to facilitate productivity advances in agriculture. To accomplish this, we developed an automatic three-dimensional (3D) recording system for the growth of field crops that consists of a number of inexpensive modules: a very low-cost stereo camera, a pair of ZigBee wireless modules, a Raspberry Pi single-board computer, and a third-generation (3G) wireless communication module. Our system uses an inexpensive web stereo camera in order to keep total costs low; however, inexpensive video cameras record low-resolution images that are very noisy. To resolve these problems, we adopted a deep learning method. Based on the results of an extended operation test conducted without an external power supply, we found that by using the Super-Resolution Convolutional Neural Network (SRCNN) method, our system could achieve a balance between the competing goals of low cost and superior performance. Our experimental results showed the effectiveness of our system.
Keywords: 3D plant data, automatic recording, stereo camera, deep learning, image processing
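For reference, the Super-Resolution Convolutional Neural Network mentioned above is a small three-layer model (Dong et al.); the PyTorch sketch below reproduces its standard 9-1-5 layer layout applied to a bicubically upscaled input. The channel counts follow the original SRCNN paper, not values reported in this abstract.

```python
import torch
import torch.nn as nn

class SRCNN(nn.Module):
    """Standard SRCNN: patch extraction, non-linear mapping, reconstruction."""
    def __init__(self, channels=1):
        super().__init__()
        self.features = nn.Conv2d(channels, 64, kernel_size=9, padding=4)   # patch extraction
        self.mapping = nn.Conv2d(64, 32, kernel_size=1)                     # non-linear mapping
        self.reconstruction = nn.Conv2d(32, channels, kernel_size=5, padding=2)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.relu(self.features(x))
        x = self.relu(self.mapping(x))
        return self.reconstruction(x)

# The low-resolution frame is first upscaled (e.g., bicubic) to the target size, then refined
model = SRCNN(channels=1)
lr_upscaled = torch.rand(1, 1, 240, 320)     # placeholder noisy, upscaled camera frame
sr = model(lr_upscaled)
print(sr.shape)                               # torch.Size([1, 1, 240, 320])
```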
Procedia PDF Downloads 273
544 Automated Localization of Palpebral Conjunctiva and Hemoglobin Determination Using Smart Phone Camera
Authors: Faraz Tahir, M. Usman Akram, Albab Ahmad Khan, Mujahid Abbass, Ahmad Tariq, Nuzhat Qaiser
Abstract:
The objective of this study was to evaluate the degree of anemia by taking a picture of the palpebral conjunctiva using a smartphone camera. We first localize the region of interest in the image, then extract certain features from that region of interest and train an SVM classifier on those features; as a result, our system classifies the image in real time according to hemoglobin level. The proposed system achieved an accuracy of 70%. The classifier was trained on a locally gathered dataset of 30 patients.
Keywords: anemia, palpebral conjunctiva, SVM, smartphone
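A minimal sketch of the classification stage described above, assuming simple mean-color features extracted from the conjunctiva region of interest; the feature choice, class labels, and data below are placeholders rather than the study's actual pipeline.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def roi_features(roi_rgb):
    """Hypothetical features: mean and std of each color channel in the conjunctiva ROI."""
    return np.concatenate([roi_rgb.reshape(-1, 3).mean(axis=0),
                           roi_rgb.reshape(-1, 3).std(axis=0)])

# Placeholder dataset: ROIs labelled by hemoglobin-level category (0 = low, 1 = normal)
rng = np.random.default_rng(0)
rois = [rng.integers(60, 200, size=(40, 80, 3)).astype(float) for _ in range(30)]
labels = rng.integers(0, 2, size=30)

X = np.array([roi_features(r) for r in rois])
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X, labels)

# Classify the ROI of a new conjunctiva image
new_roi = rng.integers(60, 200, size=(40, 80, 3)).astype(float)
print("predicted class:", clf.predict([roi_features(new_roi)])[0])
```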
Procedia PDF Downloads 505
543 Sniff-Camera for Imaging of Ethanol Vapor in Human Body Gases after Drinking
Authors: Toshiyuki Sato, Kenta Iitani, Koji Toma, Takahiro Arakawa, Kohji Mitsubayashi
Abstract:
A two-dimensional imaging system (Sniff-camera) for gaseous ethanol emissions from human palm skin was constructed and demonstrated. This imaging system measures gaseous ethanol concentrations as intensities of chemiluminescence (CL) produced by the luminol reaction induced by alcohol oxidase and the luminol-hydrogen peroxide system. The conversion of ethanol distributions and concentrations to two-dimensional CL was performed on an enzyme-immobilized mesh substrate in a dark box containing a luminol solution. In order to visualize ethanol emissions from human palm skin, we developed a highly sensitive and selective imaging system for transpired gaseous ethanol at sub-ppm levels. This high-sensitivity imaging allowed us to successfully visualize the emission dynamics of transdermal gaseous ethanol. The intensity of each pixel on the palm reflects the ethanol concentration distribution resulting from the metabolism of orally administered alcohol. This imaging system is significant and useful for the assessment of ethanol at the palmar skin.
Keywords: sniff-camera, gas-imaging, ethanol vapor, human body gas
Procedia PDF Downloads 370
542 A Process of Forming a Single Competitive Factor in the Digital Camera Industry
Authors: Kiyohiro Yamazaki
Abstract:
This paper considers the process by which a single competitive factor forms in the digital camera industry, from the viewpoint of the product platform. To make product development easier for companies and to increase product introduction rates, development efforts concentrate on improving and strengthening certain product attributes, and in this process the product platform is continuously formed. It is pointed out that the formation of this product platform raises the product development efficiency of individual companies, but on the other hand involves a trade-off: it causes the unification of competitive factors across the whole industry. This research analyzes product specification data collected from the web pages of digital camera companies. Specifically, all product specification data released in Japan from 1995 to 2003 were collected, the composition of image sensors and optical lenses was analyzed, and product platforms shared by multiple products were identified and their application discussed. As a result, this research found that product platform formation arose in the development of standard products for the major market segments. Every major company built product platforms of image sensors and optical lenses, and as a result, competitive factors were unified across the entire industry through product platform formation. In other words, product platform formation brought product development efficiency to individual firms; however, it also caused the industry's competitive factors to become unified.
Keywords: digital camera industry, product evolution trajectory, product platform, unification of competitive factors
Procedia PDF Downloads 158
541 Characterization of Kopff Crater Using Remote Sensing Data
Authors: Shreekumari Patel, Prabhjot Kaur, Paras Solanki
Abstract:
Moon Mineralogy Mapper (M3), Miniature Radio Frequency (Mini-RF), Kaguya Terrain Camera images, the Lunar Orbiter Laser Altimeter (LOLA) digital elevation model (DEM), and Lunar Reconnaissance Orbiter Camera (LROC) Narrow Angle Camera (NAC) and Wide Angle Camera (WAC) images were used to study the mineralogy, surface physical properties, and age of the 42 km diameter Kopff crater. M3 indicates that the low-albedo crater floor is dominated by high-Ca pyroxene and is associated with floor fractures, suggesting igneous activity of gabbroic material. A signature of anorthositic material is sampled on the eastern edge, where target material excavated by a ~3 km diameter impact crater provides access to the crustal composition. Several occurrences of spinel were detected in the northwestern rugged terrain. Our observation can be explained by exposure of spinel by this crater, which impacted onto the inner rings of the Orientale basin: the spinel was part of the pre-impact target, an intrinsic unit of the basin ring. The crater floor was dated by crater counts performed on Kaguya TC images. The nature of the surface was studied in detail with LROC NAC and Mini-RF; freshly exposed surfaces and boulders or debris seen in LROC NAC images show an enhanced radar signal in comparison to the mature terrain of Kopff crater. This multidisciplinary analysis of remote sensing data helps to assess the lunar surface in detail.
Keywords: crater, mineralogy, moon, radar observations
Procedia PDF Downloads 160
540 The Way Digitized Lectures and Film Presence Coaching Impact Academic Identity: An Expert Facilitated Participatory Action Research Case Study
Authors: Amanda Burrell, Tonia Gary, David Wright, Kumara Ward
Abstract:
This paper explores the concept of academic identity as it relates to the lecture, in particular the digitized lecture delivered to a camera in the absence of a student audience. Many academics have the performance aspect of the role thrust upon them with little or no training. For the purpose of this study, we look at the performance of the academic identity and examine tailored film presence coaching for its contribution to academic identity, specifically in relation to feelings of self-confidence and the diminishment of discomfort or stage fright. The case is articulated through the lens of scholar-practitioners, using expert-facilitated participatory action research. It demonstrates that, in our sample of experienced academics, all reported some feelings of uncertainty about presenting lectures to camera prior to coaching. We share how power poses and reframing fear produced improvements in the ease and competency of all participants, share how this insight could be adapted for self-coaching by any academic called to present to a camera, and consider the relationship between this and academic identity.
Keywords: academic identity, digitized lecture, embodied learning, performance coaching
Procedia PDF Downloads 337
539 A Simple Algorithm for Real-Time 3D Capturing of an Interior Scene Using a Linear Voxel Octree and a Floating Origin Camera
Authors: Vangelis Drosos, Dimitrios Tsoukalos, Dimitrios Tsolis
Abstract:
We present a simple algorithm for capturing a 3D scene (focused on the use of mobile device cameras in the context of augmented/mixed reality) by using a floating-origin camera solution and storing the resulting information in a linear voxel octree. Data are derived from cloud points captured by a mobile device camera. For the purposes of this paper, we assume a scene of fixed size (known to us or determined beforehand) and a fixed voxel resolution. The resulting data are stored in a linear voxel octree using a hashtable. We begin by briefly discussing the logic behind floating-origin approaches and the use of linear voxel octrees for efficient storage. Following that, we present the algorithm for translating captured feature points into voxel data in the context of a fixed-origin world and storing them. Finally, we discuss potential applications and areas of future development and improvement to the efficiency of our solution.
Keywords: voxel, octree, computer vision, XR, floating origin
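As an illustration of the storage scheme named above, the sketch below hashes voxels by their Morton (Z-order) code, which is the usual way to linearize an octree, and translates camera-relative points back into the fixed-origin world using the accumulated floating-origin offset. The scene size, resolution, and depth are placeholder values; the paper's exact encoding is not specified in the abstract.

```python
import numpy as np

SCENE_SIZE = 8.0          # assumed fixed scene size in meters (cube edge)
DEPTH = 8                 # octree depth -> 2^8 = 256 voxels per axis
VOXELS = 1 << DEPTH

def morton3d(ix, iy, iz):
    """Interleave the bits of three voxel indices into one linear octree key."""
    key = 0
    for b in range(DEPTH):
        key |= ((ix >> b) & 1) << (3 * b + 2)
        key |= ((iy >> b) & 1) << (3 * b + 1)
        key |= ((iz >> b) & 1) << (3 * b)
    return key

def insert_points(octree, points_cam, origin_offset):
    """Store camera-relative points as occupied voxels keyed by Morton code."""
    world = points_cam + origin_offset            # undo the floating-origin shift
    idx = np.floor(world / SCENE_SIZE * VOXELS).astype(int)
    idx = np.clip(idx, 0, VOXELS - 1)
    for ix, iy, iz in idx:
        key = morton3d(ix, iy, iz)
        octree[key] = octree.get(key, 0) + 1      # hashtable: key -> observation count

# Hypothetical usage: one batch of captured feature points, camera offset from the world origin
octree = {}
origin_offset = np.array([3.0, 1.0, 2.0])         # accumulated floating-origin translation
points_cam = np.random.rand(500, 3) * 2.0 - 1.0   # placeholder point cloud around the camera
insert_points(octree, points_cam, origin_offset)
print(f"occupied voxels: {len(octree)}")
```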
Procedia PDF Downloads 133
538 A Monocular Measurement for 3D Objects Based on Distance Area Number and New Minimize Projection Error Optimization Algorithms
Authors: Feixiang Zhao, Shuangcheng Jia, Qian Li
Abstract:
High-precision measurement of a target's position and size is one of the hotspots in the field of visual inspection. This paper proposes a three-dimensional object positioning and measurement method using a monocular camera and GPS, namely Distance Area Number-New Minimize Projection Error (DAN-NMPE). Our algorithm contains two parts: DAN, a picture-sequence algorithm, and NMPE, a projection-error optimization algorithm, which together greatly improve the measurement accuracy of the target's position and size. Comprehensive experiments validate the effectiveness of our proposed method on a self-made traffic sign dataset. The results show that, with a laser point cloud as the ground truth, the size and position errors of the traffic signs measured by this method are ±5% and 0.48 ± 0.3 m, respectively. In addition, we compared it with the current mainstream method that uses a monocular camera to locate and measure traffic signs. DAN-NMPE attains significant improvements compared to existing state-of-the-art methods, improving the measurement accuracy of size and position by 50% and 15.8%, respectively.
Keywords: monocular camera, GPS, positioning, measurement
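The "minimize projection error" part of the method's name corresponds to a standard reprojection-error optimization; the sketch below estimates a 3D point from its observations in several frames with known camera poses using least squares. The intrinsics, poses, and observations are synthetic, and this is a generic illustration rather than the DAN-NMPE algorithm itself.

```python
import numpy as np
from scipy.optimize import least_squares

K = np.array([[800.0, 0, 320.0], [0, 800.0, 240.0], [0, 0, 1.0]])   # assumed intrinsics

def project(point3d, R, t):
    """Project a 3D world point into pixel coordinates for camera pose (R, t)."""
    p = K @ (R @ point3d + t)
    return p[:2] / p[2]

# Synthetic setup: a true point observed from three camera positions along a path (e.g., GPS-tagged)
true_point = np.array([2.0, 0.5, 12.0])
poses = [(np.eye(3), np.array([x, 0.0, 0.0])) for x in (0.0, -0.5, -1.0)]
observations = [project(true_point, R, t) + np.random.normal(0, 0.5, 2) for R, t in poses]

def residuals(x):
    """Stacked reprojection errors over all frames -- the quantity being minimized."""
    return np.concatenate([project(x, R, t) - obs for (R, t), obs in zip(poses, observations)])

result = least_squares(residuals, x0=np.array([0.0, 0.0, 5.0]))
print("estimated 3D point:", np.round(result.x, 3))
```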
Procedia PDF Downloads 144
537 Unmanned Aerial Vehicle Use for Emergency Purpose
Authors: Shah S. M. A., Aftab U.
Abstract:
It is imperative in today's world to obtain real-time information about emergency situations occurring in the environment. Helicopters are mostly used to access places that are hard to reach in emergencies such as earthquakes, floods, bridge failures, or other disaster conditions, but the use of helicopters to properly collect such data is considered costly. Therefore, a new technique has been introduced in this research to collect data promptly using drones. The drone designed in this research is based on trial-and-error experimental work with the objective of constructing an economical drone. Locally available materials were used for this purpose, and a mobile camera was attached to record video during flight. It was found that, within very limited resources, the results were quite successful.
Keywords: UAV, real time, camera, disasters
Procedia PDF Downloads 237
536 Study on Construction of 3D Topography by UAV-Based Images
Authors: Yun-Yao Chi, Chieh-Kai Tsai, Dai-Ling Li
Abstract:
In this paper, a method of fast 3D topography modeling using high-resolution camera images is studied, based on the characteristics of an Unmanned Aerial Vehicle (UAV) system for low-altitude aerial photogrammetry and the need for three-dimensional (3D) urban landscape modeling. Firstly, an imaging scheme with specially designed image overlap is devised for the existing high-resolution digital camera by reconstructing and analyzing the auto-flying paths of the UAVs; this improves the self-calibration function so as to achieve high-precision imaging in software and further increases the effective resolution of the imaging system. Secondly, multi-angle images, including vertical and oblique images obtained by the UAV system, are used for detailed measurement of urban land surfaces and for texture extraction. Finally, aerial photography and 3D topography construction are both carried out on the campus of Chang-Jung University and in the Guerin district of Tainan, Taiwan, providing a validation model for the construction of 3D topography based on combined UAV-based camera images. The results demonstrate that a UAV system for low-altitude aerial photogrammetry can be used for 3D topography production, and the technical solution in this paper offers a new, fast plan for 3D expression of the city landscape, fine modeling, and visualization.
Keywords: 3D, topography, UAV, images
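The overlap design mentioned above follows standard photogrammetric flight-planning arithmetic; the sketch below computes the ground footprint of a frame and the exposure and flight-line spacing needed for chosen forward and side overlap. All sensor and flight parameters are assumed example values, not those of the paper's UAV.

```python
# Assumed example parameters for a small UAV camera
altitude_m = 100.0                       # flying height above ground
focal_mm = 8.8                           # lens focal length
sensor_w_mm, sensor_h_mm = 13.2, 8.8     # sensor dimensions
forward_overlap, side_overlap = 0.80, 0.70

# Ground footprint of one frame (similar triangles: footprint = altitude * sensor / focal)
footprint_w = altitude_m * sensor_w_mm / focal_mm
footprint_h = altitude_m * sensor_h_mm / focal_mm

# Spacing between exposures along a line and between adjacent flight lines
photo_base = footprint_h * (1.0 - forward_overlap)
line_spacing = footprint_w * (1.0 - side_overlap)

print(f"frame footprint: {footprint_w:.1f} m x {footprint_h:.1f} m")
print(f"exposure spacing along track: {photo_base:.1f} m")
print(f"flight line spacing: {line_spacing:.1f} m")
```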
Procedia PDF Downloads 303
535 Visual Search Based Indoor Localization in Low Light via RGB-D Camera
Authors: Yali Zheng, Peipei Luo, Shinan Chen, Jiasheng Hao, Hong Cheng
Abstract:
Most traditional visual indoor navigation algorithms and methods only consider localization in ordinary daytime conditions, whereas in this paper we focus on indoor re-localization in low light. Since RGB images are degraded in low light, the less discriminative infrared and depth image pairs captured by RGB-D cameras are taken as input, and the most similar candidates are retrieved as output from a database built in the bag-of-words framework. Epipolar constraints can then be used to re-localize the query infrared and depth image sequence. We evaluate our method on two datasets captured by a Kinect2. The results demonstrate very promising re-localization performance for indoor navigation systems in low-light environments.
Keywords: indoor navigation, low light, RGB-D camera, vision based
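The retrieval stage described above follows the usual bag-of-words recipe: quantize local descriptors against a learned visual vocabulary, describe each database image by a word histogram, and return the closest histograms for a query. The sketch below uses k-means and synthetic descriptors as placeholders for the paper's infrared/depth features.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Placeholder local descriptors (e.g., ORB/SIFT-like) for 50 database images
db_descriptors = [rng.normal(size=(200, 32)) for _ in range(50)]

# 1. Learn a visual vocabulary from all descriptors
vocab = KMeans(n_clusters=64, n_init=10, random_state=0)
vocab.fit(np.vstack(db_descriptors))

def bow_histogram(descriptors):
    """Quantize descriptors to visual words and return a normalized histogram."""
    words = vocab.predict(descriptors)
    hist = np.bincount(words, minlength=vocab.n_clusters).astype(float)
    return hist / (np.linalg.norm(hist) + 1e-12)

# 2. Index the database as histograms
db_hists = np.array([bow_histogram(d) for d in db_descriptors])

# 3. Retrieve the most similar database images for a query image
query_hist = bow_histogram(rng.normal(size=(180, 32)))
scores = db_hists @ query_hist                     # cosine similarity (vectors are normalized)
top3 = np.argsort(scores)[::-1][:3]
print("top-3 candidate images:", top3, "scores:", np.round(scores[top3], 3))
```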
Procedia PDF Downloads 461
534 A Study on the Comparison of Mechanical and Thermal Properties According to Laminated Orientation of CFRP through Bending Test
Authors: Hee Jae Shin, Lee Ku Kwac, In Pyo Cha, Min Sang Lee, Hyun Kyung Yoon, Hong Gun Kim
Abstract:
Rapid industrial development has increased the demand for high-strength and lightweight materials, and various carbon fiber reinforced plastic (CFRP) composite materials are therefore being used. The design variables of CFRP are its lamination direction, order, and thickness, so the hardness and strength of CFRP depend strongly on these design variables. In this paper, the lamination direction of CFRP was used to produce symmetrical plies [0°/0°, -15°/+15°, -30°/+30°, -45°/+45°, -60°/+60°, -75°/+75°, and 90°/90°] and asymmetrical plies [0°/15°, 0°/30°, 0°/45°, 0°/60°, 0°/75°, and 0°/90°]. The bending flexural stress of the CFRP specimens was evaluated through a bending test, and their thermal properties were measured using an infrared camera. Both the symmetrical and the asymmetrical specimens were analyzed. The results showed that the asymmetrical specimens carried increasing bending loads as the orientation angle increased, whereas from 0° the symmetrical specimens showed the opposite tendency, because the tensile force of the fibers differs in the direction perpendicular to the load. The infrared camera also showed that the thermal properties followed a trend similar to that of the mechanical properties.
Keywords: Carbon Fiber Reinforced Plastic (CFRP), bending test, infrared camera, composite
Procedia PDF Downloads 398
533 Crater Detection Using PCA from Captured CMOS Camera Data
Authors: Tatsuya Takino, Izuru Nomura, Yuji Kageyama, Shin Nagata, Hiroyuki Kamata
Abstract:
We propose a method of detecting craters from images of the lunar surface. This proposal assumes application within the SLIM (Smart Lander for Investigating Moon) working group, which is aiming at pinpoint landing on the lunar surface and scientific investigation. It is difficult to equip and use high-performance computers on a small space probe, so it is necessary to use a small computer with dedicated hardware such as an FPGA. We have studied crater detection using principal component analysis (PCA). In this paper, we implement the detection algorithm on the FPGA, and detection is performed on data captured from the CMOS camera.
Keywords: crater detection, PCA, FPGA, image processing
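PCA-based detection of this kind is typically done in an eigen-image fashion: learn a low-dimensional subspace from example crater patches and score new patches by how well the subspace reconstructs them. The sketch below shows that pattern with scikit-learn on synthetic patches; the paper's exact formulation and its FPGA implementation are not described in the abstract.

```python
import numpy as np
from sklearn.decomposition import PCA

PATCH = 32
rng = np.random.default_rng(0)

def fake_crater_patch():
    """Synthetic stand-in for a crater patch: a ring-like bowl plus noise."""
    y, x = np.mgrid[0:PATCH, 0:PATCH]
    r = np.hypot(x - PATCH / 2, y - PATCH / 2)
    return np.exp(-((r - 8.0) ** 2) / 10.0) + rng.normal(0, 0.1, (PATCH, PATCH))

# Learn a crater subspace from example patches
train = np.array([fake_crater_patch().ravel() for _ in range(200)])
pca = PCA(n_components=10).fit(train)

def crater_score(patch):
    """Reconstruction error in the crater subspace (lower = more crater-like)."""
    v = patch.ravel()[None, :]
    recon = pca.inverse_transform(pca.transform(v))
    return float(np.linalg.norm(v - recon))

crater = fake_crater_patch()
flat_terrain = rng.normal(0, 0.1, (PATCH, PATCH))
print(f"crater patch error:  {crater_score(crater):.2f}")
print(f"terrain patch error: {crater_score(flat_terrain):.2f}")
```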
Procedia PDF Downloads 550
532 Camera Model Identification for Mi Pad 4, Oppo A37f, Samsung M20, and Oppo f9
Authors: Ulrich Wake, Eniman Syamsuddin
Abstract:
The model for camera model identification is trained using the pretrained models ResNet34 and ResNet50. The dataset consists of 500 photos of each phone and is divided into 1280 photos for training, 320 photos for validation, and 400 photos for testing. The model is trained using the One Cycle Policy method and tested using test-time augmentation. Furthermore, the model is trained for 50 epochs using regularization such as dropout and early stopping. The result is 90% accuracy on the validation set and above 85% with test-time augmentation using ResNet50. Every model is also trained by slightly updating the pretrained model's weights.
Keywords: One Cycle Policy, ResNet34, ResNet50, Test-Time Augmentation
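A minimal PyTorch sketch of the training recipe named above: a pretrained ResNet34 with its final layer replaced for the four phone classes, trained under a one-cycle learning-rate schedule, with a simple flip-based test-time augmentation at inference. The data loaders, hyperparameters, and augmentation choice are placeholder assumptions; the original work may have used a different framework or settings.

```python
import torch
import torch.nn as nn
from torchvision import models

num_classes, epochs, steps_per_epoch = 4, 50, 40   # assumed (1280 train images / batch size 32)

# Pretrained backbone with a new head for the four camera models; dropout as extra regularization
model = models.resnet34(weights=models.ResNet34_Weights.DEFAULT)
model.fc = nn.Sequential(nn.Dropout(0.5), nn.Linear(model.fc.in_features, num_classes))

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.OneCycleLR(
    optimizer, max_lr=1e-3, epochs=epochs, steps_per_epoch=steps_per_epoch)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    """One optimization step under the one-cycle schedule."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    scheduler.step()
    return loss.item()

def predict_tta(images):
    """Test-time augmentation: average predictions over the image and its horizontal flip."""
    model.eval()
    with torch.no_grad():
        logits = model(images) + model(torch.flip(images, dims=[3]))
    return logits.argmax(dim=1)

# Smoke test with random tensors standing in for real photos
loss = train_step(torch.rand(8, 3, 224, 224), torch.randint(0, num_classes, (8,)))
print("loss:", round(loss, 3), "preds:", predict_tta(torch.rand(2, 3, 224, 224)).tolist())
```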
Procedia PDF Downloads 208