Object Detection Based on Plane Segmentation and Features Matching for a Service Robot

Authors: António J. R. Neves, Rui Garcia, Paulo Dias, Alina Trifan

Abstract:

With the aging of the world population and the continuous growth of technology, service robots are increasingly explored as alternatives to caregivers or as personal assistants for elderly or disabled people. Any service robot should be capable of interacting with its human companion, receiving commands, navigating through known or unknown environments, and recognizing objects. This paper proposes an approach to object recognition for a service robot based on depth information and color images. We present a study of two of the most widely used methods for object detection, in which 3D data is used to locate the objects to be classified that lie on horizontal surfaces. Since most objects of interest accessible to a service robot rest on such surfaces, the proposed 3D segmentation reduces processing time and simplifies the scene for object recognition. The first recognition approach is based on color histograms, while the second relies on the SIFT and SURF feature descriptors. We present comparative experimental results obtained with a real service robot.
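
The pipeline summarized above can be illustrated with a short sketch: a RANSAC plane fit [19] isolates the horizontal supporting surface in the depth data, and the remaining object candidates are then scored either by color-histogram comparison [21] or by SIFT feature matching [3]. The code below is a minimal, hypothetical illustration using NumPy and the OpenCV Python bindings, not the authors' implementation (which relies on PCL [18] and OpenCV [20]); the function names, thresholds, and point-cloud layout are assumptions.

import numpy as np
import cv2

def ransac_plane(points, iterations=200, threshold=0.01):
    """Fit the dominant plane in an (N, 3) point cloud with RANSAC [19]."""
    best_inliers = np.zeros(len(points), dtype=bool)
    rng = np.random.default_rng(0)
    for _ in range(iterations):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:                      # skip degenerate (collinear) samples
            continue
        normal /= norm
        d = -normal.dot(sample[0])
        dist = np.abs(points @ normal + d)   # point-to-plane distances (meters)
        inliers = dist < threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers                      # True for points on the supporting plane

def histogram_score(img_a, img_b, bins=32):
    """Compare two BGR image patches by their hue-saturation histograms [21]."""
    def hs_hist(img):
        hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([hsv], [0, 1], None, [bins, bins], [0, 180, 0, 256])
        return cv2.normalize(hist, hist).flatten()
    return cv2.compareHist(hs_hist(img_a), hs_hist(img_b), cv2.HISTCMP_CORREL)

def sift_match_count(img_a, img_b, ratio=0.75):
    """Count SIFT correspondences [3] that pass Lowe's ratio test."""
    sift = cv2.SIFT_create()
    _, des_a = sift.detectAndCompute(cv2.cvtColor(img_a, cv2.COLOR_BGR2GRAY), None)
    _, des_b = sift.detectAndCompute(cv2.cvtColor(img_b, cv2.COLOR_BGR2GRAY), None)
    if des_a is None or des_b is None or len(des_b) < 2:
        return 0
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = 0
    for pair in matcher.knnMatch(des_a, des_b, k=2):
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good += 1
    return good

In a setup such as the one described, the plane inliers would be removed from the Kinect point cloud, the remaining points clustered into object candidates, and each candidate's image region compared against the object database with one of the two scoring functions above. SURF [4] could be used in place of SIFT through cv2.xfeatures2d.SURF_create in OpenCV builds that include the non-free contrib module.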

Keywords: Service Robot, Object Recognition, 3D Sensors, Plane Segmentation.

Digital Object Identifier (DOI): https://doi.org/10.5281/zenodo.1124017


References:


[1] L. Martinez, P. Loncomilla, and P. Ruiz-del Solar, “Object recognition for manipulation tasks in real domestic settings: A comparative study,” in Proceedings of RoboCup 2014 Symposium, Joao Pessoa, Brazil, July 2014.
[2] “RoboCup Federation official website,” www.robocup.org, accessed: 2015-09-30.
[3] D. G. Lowe, “Distinctive image features from scale-invariant keypoints,” Int. J. Comput. Vision, vol. 60, no. 2, pp. 91–110, Nov. 2004.
[4] H. Bay, A. Ess, T. Tuytelaars, and L. Van Gool, “Speeded-up robust features (surf),” Comput. Vis. Image Underst., vol. 110, no. 3, pp. 346–359, Jun. 2008.
[5] R. B. Rusu, Z. C. Marton, N. Blodow, and M. Beetz, “Persistent Point Feature Histograms for 3D Point Clouds,” in Proceedings of the 10th International Conference on Intelligent Autonomous Systems (IAS-10), Baden-Baden, Germany, 2008.
[6] J. Borenstein and Y. Koren, “The Vector Field Histogram - Fast Obstacle Avoidance for Mobile Robots,” IEEE Transactions on Robotics and Automation, vol. 7, no. 3, pp. 278–288, 1991.
[7] R. B. Rusu, A. Holzbach, G. Bradski, and M. Beetz, “Detecting and segmenting objects for mobile manipulation,” in Proceedings of IEEE Workshop on Search in 3D and Video (S3DV), held in conjunction with the 12th IEEE International Conference on Computer Vision (ICCV), Kyoto, Japan, September 27 2009.
[8] R. B. Rusu and S. Cousins, “3D is here: Point Cloud Library (PCL),” in IEEE International Conference on Robotics and Automation (ICRA), Shanghai, China, 2011.
[9] Y. Holdstein and A. Fischer, “Three-dimensional surface reconstruction using meshing growing neural gas (mgng),” The Visual Computer, vol. 24, no. 4, pp. 295–302, 2008.
[10] R. Dwiputra, M. Füller, F. Hegger, S. Schneider, I. A. J. M. S. Loza, A. Y. Ozhigov, S. Biswas, N. V. Deshpand, A. H. I. Ivanovska, P. G. Ploeger, and G. K. Kraetzschmar, “The b-it-bots RoboCup@Home 2014 team description paper,” Joao Pessoa, Brazil, 2014.
[11] S. A. M. C. J. J. M. Lunenburg and T. T. J. Derksen, “Tech United Eindhoven @Home 2014 team description paper,” Joao Pessoa, Brazil, 2014.
[12] T. D. Jager, “Robust object detection for service robotics,” PhD thesis, Utrecht University, 2013.
[13] “Robot Operating System official website,” www.ros.org, accessed: 2015-09-30.
[14] J. Stückler and S. Behnke, “Multi-resolution surfel maps for efficient dense 3D modeling and tracking,” J. Vis. Commun. Image Represent., vol. 25, no. 1, pp. 137–147, Jan. 2014.
[15] J. Stückler, B. Waldvogel, H. Schulz, and S. Behnke, “Dense real-time mapping of object-class semantics from RGB-D video,” Journal of Real-Time Image Processing, 2014.
[16] “CAMBADA@HOME official website,” http://robotica.ua.pt/CAMBADA@HOME/, accessed: 2015-09-30.
[17] “Microsoft Kinect official website,” https://dev.windows.com/en-us/kinect, accessed: 2015-09-30.
[18] “Point Cloud Library official website,” http://pointclouds.org/, accessed: 2015-09-30.
[19] M. A. Fischler and R. C. Bolles, “Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography,” Commun. ACM, vol. 24, no. 6, pp. 381–395, 1981.
[20] G. R. Bradski and A. Kaehler, Learning OpenCV, 1st ed. O’Reilly Media, Inc., 2008.
[21] M. J. Swain and D. H. Ballard, “Color indexing,” Int. J. Comput. Vision, vol. 7, no. 1, pp. 11–32, Nov. 1991.
[22] B. Schiele and J. L. Crowley, “Object recognition using multidimensional receptive field histograms,” in Proceedings of the 4th European Conference on Computer Vision (ECCV ’96), Volume I. London, UK: Springer-Verlag, 1996, pp. 610–619.
[23] M. Muja and D. G. Lowe, “Fast approximate nearest neighbors with automatic algorithm configuration,” in Proceedings of the International Conference on Computer Vision Theory and Applications (VISAPP), 2009, pp. 331–340.