Depth Camera Aided Dead-Reckoning Localization of Autonomous Mobile Robots in Unstructured Global Navigation Satellite System Denied Environments

Authors: David L. Olson, Stephen B. H. Bruder, Adam S. Watkins, Cleon E. Davis

Abstract:

In global navigation satellite system (GNSS) denied settings, such as indoor environments, autonomous mobile robots are often limited to dead-reckoning navigation techniques to determine their position, velocity, and attitude (PVA). Localization is typically accomplished with an inertial measurement unit (IMU), whose measurements, while precise, accumulate errors rapidly and quickly degrade the localization solution. Standard sensor fusion methods, such as Kalman filtering, fuse the precise IMU measurements with accurate aiding sensors to establish a solution that is both precise and accurate. In indoor environments where GNSS is unavailable and no a priori information about the environment is known, effective sensor fusion is difficult to achieve because few accurate aiding sensors are available. An opportunity arises, however, from employing a depth camera in the indoor environment. A depth camera can capture point clouds of the surrounding floors and walls, and the attitude extracted from these surfaces can serve as an accurate aiding source that directly combats the errors arising from gyroscope imperfections. This sensor fusion configuration yields a dramatic reduction in PVA error compared to traditional aiding sensor configurations. This paper provides the theoretical basis for the depth camera aiding method, initial performance expectations obtained via simulation, and a hardware implementation that verifies the predicted benefit. The hardware implementation uses the Quanser Qbot 2™ mobile robot with a VectorNav VN-200™ IMU and a Microsoft Kinect™ depth camera.
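To make the aiding principle concrete, the Python/NumPy sketch below illustrates one plausible pipeline under stated assumptions: a least-squares (SVD) plane fit stands in for the paper's actual plane-detection step, a front-right-down body axis convention is assumed, the function names are hypothetical, and the update shown is a generic Kalman measurement update rather than the authors' exact filter formulation.

import numpy as np

def fit_plane_normal(points):
    # Least-squares plane fit to an N x 3 point-cloud patch (e.g., the
    # floor as seen by the depth camera). The right singular vector
    # paired with the smallest singular value spans the direction of
    # least variance, i.e., the plane normal.
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]
    # Orient the normal along gravity (positive z in an assumed
    # front-right-down body frame); the sign convention is illustrative.
    return normal if normal[2] > 0 else -normal

def roll_pitch_from_floor_normal(n_body):
    # If the floor is level, its (gravity-aligned) normal expressed in
    # body axes encodes the vehicle tilt, so roll and pitch follow
    # directly from the standard gravity-vector relations.
    nx, ny, nz = n_body
    roll = np.arctan2(ny, nz)
    pitch = np.arctan2(-nx, np.hypot(ny, nz))
    return np.array([roll, pitch])

def kalman_attitude_update(x, P, z, H, R):
    # One generic Kalman measurement update:
    #   x, P : error-state estimate and covariance (e.g., attitude
    #          errors and gyro biases) after IMU propagation
    #   z    : residual (depth-camera roll/pitch minus INS-predicted)
    #   H, R : measurement model and measurement noise covariance
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_new = x + K @ z
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

Because the roll/pitch observation is derived from surface geometry rather than from integrated gyroscope outputs, its error does not grow with time, which is what allows the update step to bound the attitude drift of the dead-reckoning solution.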

Keywords: Autonomous mobile robotics, dead reckoning, depth camera, inertial navigation, Kalman filtering, localization, sensor fusion.

