Segmentation of Korean Words on Korean Road Signs

Authors: Lae-Jeong Park, Kyusoo Chung, Jungho Moon

Abstract:

This paper introduces an effective method for segmenting Korean text (place names written in Korean) from Korean road sign images. A Korean advanced directional road sign contains several types of visual information, such as arrows, place names in Korean and English, and route numbers. Automatically classifying this visual information and extracting the Korean place names from road sign images removes much of the manual input otherwise required by a nationwide road-sign management database system. We propose a series of problem-specific heuristics that correctly segment the Korean place names, which are the most crucial information, from the other contents while effectively discarding non-text information. Experiments on a dataset of 368 road sign images show a detection rate of 96% per Korean place name and 84% per road sign image.
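
As a concrete illustration of this kind of pipeline, the sketch below is a minimal example in Python with OpenCV, not the authors' method: it binarizes a road-sign image, extracts connected components, and filters out non-text blobs (e.g., arrows or solid symbols) with simple geometric rules. The function name candidate_text_regions, the input file road_sign.jpg, and all threshold values are hypothetical placeholders chosen for illustration only.

# Minimal sketch (assumptions, not the paper's exact heuristics): find candidate
# text regions on a road-sign image via connected components and geometric filters.
import cv2

def candidate_text_regions(image_path: str):
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # Sign text is typically light on a dark (blue/green) panel, so Otsu
    # thresholding separates strokes from the background.
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    n, labels, stats, _ = cv2.connectedComponentsWithStats(binary, connectivity=8)
    h_img, w_img = gray.shape
    boxes = []
    for i in range(1, n):  # label 0 is the background
        x, y, w, h, area = stats[i]
        aspect = w / float(h)
        fill = area / float(w * h)
        # Hypothetical heuristic filters: drop tiny noise, very elongated blobs
        # (arrows, panel borders), and nearly solid blobs (symbols).
        if area < 0.0005 * h_img * w_img:
            continue
        if aspect > 4.0 or aspect < 0.15:
            continue
        if fill > 0.9:
            continue
        boxes.append((x, y, w, h))
    return boxes

if __name__ == "__main__":
    for box in candidate_text_regions("road_sign.jpg"):
        print(box)

In practice, surviving components would still need to be grouped into place-name strings and separated from route numbers and English text, which is where the problem-specific heuristics of the paper come in.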

Keywords: Segmentation, road signs, characters, classification.

Digital Object Identifier (DOI): doi.org/10.5281/zenodo.1110385
