Tape-Shaped Multiscale Fiducial Marker: A Design Prototype for Indoor Localization
Authors: Marcell S. A. Martins, Benedito S. R. Neto, Gerson L. Serejo, Carlos G. R. Santos
Abstract:
Indoor positioning systems use sensors such as Bluetooth, ZigBee, and Wi-Fi, as well as cameras for image capture, which can be fixed or mobile. Computer vision-based positioning approaches are low-cost to implement, especially when a mobile camera is used. This study presents the design of a fiducial marker for a low-cost indoor localization system. The marker is tape-shaped to allow continuous reading, and it employs two detection algorithms: one for larger capture distances and another for smaller ones. The localization service therefore remains operational even as the capture distance varies. A minimal localization and reading algorithm was implemented to validate the proposed marker design. The accuracy tests compare the proposed marker with others at capture distances ranging from 0.5 to 10 meters. The tests showed that the proposed marker has a broader capture range than ArUco and QR Code markers of the same size, reducing visual pollution and maximizing tracking, since the environment can be covered entirely.
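The two-detector idea described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the detector functions, the focal length, the module size, and the pixel thresholds are all assumptions, and the apparent marker scale is estimated with a simple pinhole-camera model.

```python
def detect_far(pixels_per_module: float) -> bool:
    # Hypothetical far-range detector: reads coarse features and only
    # needs each marker module to span at least ~1 pixel.
    return pixels_per_module >= 1.0

def detect_near(pixels_per_module: float) -> bool:
    # Hypothetical near-range detector: decodes fine cells, so it needs
    # several pixels per module (threshold assumed here).
    return pixels_per_module >= 4.0

def read_marker(distance_m: float, focal_px: float = 800.0,
                module_size_m: float = 0.02) -> str:
    """Select a detector from the marker's apparent scale.

    Pinhole model: a module of physical size `module_size_m` seen at
    `distance_m` projects to roughly focal_px * module_size_m / distance_m
    pixels. Both camera parameters are illustrative values.
    """
    pixels_per_module = focal_px * module_size_m / distance_m
    if detect_near(pixels_per_module):
        return "near"   # fine decoding available at short range
    if detect_far(pixels_per_module):
        return "far"    # coarse localization only at long range
    return "none"       # marker too small in the image to detect
```

With these assumed parameters, a 0.5 m capture distance selects the near-range decoder, while a 10 m distance falls back to the far-range one, mirroring how the tape-shaped marker keeps the localization service operational across the tested [0.5, 10] m range.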
Keywords: multiscale recognition, indoor localization, tape-shaped marker, fiducial marker.