Block-Based 2D to 3D Image Conversion Method

Authors: S. Sowmyayani, V. Murugan

Abstract:

With the advent of three-dimensional (3D) technology, there has been considerable research on converting 2D images to 3D images. The main difference between 2D and 3D images is the visual illusion of depth present in 3D images. In recent years, many depth estimation techniques have been proposed. The objective of this paper is to convert 2D images to 3D images with low computation time. To this end, the input image is divided into blocks from which depth information is obtained. From this depth information, a depth map is generated. The 3D image is then warped using the original image and the depth map. The proposed method is tested on the Make3D and NYU-V2 datasets, and the experimental results are compared with other recent methods. The proposed method is shown to achieve lower computation time with good accuracy.
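
Implementation details are not given in this abstract, so the following Python sketch is only a minimal illustration of the pipeline described above: divide the image into blocks, assign one depth value per block, smooth the block-wise depth map with a bilateral filter, and warp the original image according to the depth map to form the second view. The per-block depth cue (mean intensity), the block size, the filter parameters, the maximum disparity, and the file name are all illustrative assumptions, not the authors' actual choices.

```python
# Minimal sketch of a block-based 2D-to-3D pipeline (illustrative assumptions only).
import numpy as np
import cv2

def block_depth_map(gray, block=16):
    """Assign one depth value per block; mean intensity is a stand-in depth cue."""
    h, w = gray.shape
    depth = np.zeros_like(gray, dtype=np.float32)
    for y in range(0, h, block):
        for x in range(0, w, block):
            patch = gray[y:y + block, x:x + block]
            depth[y:y + block, x:x + block] = patch.mean()
    # Bilateral filtering smooths block boundaries while preserving depth edges.
    depth = cv2.bilateralFilter(depth, d=9, sigmaColor=25, sigmaSpace=25)
    return cv2.normalize(depth, None, 0.0, 1.0, cv2.NORM_MINMAX)

def warp_second_view(image, depth, max_disparity=16):
    """Shift pixels horizontally in proportion to depth (simple depth-image-based rendering)."""
    h, w = depth.shape
    right = np.zeros_like(image)
    shift = (depth * max_disparity).astype(np.int32)
    for y in range(h):
        for x in range(w):
            nx = x - shift[y, x]
            if 0 <= nx < w:
                right[y, nx] = image[y, x]
    return right

img = cv2.imread("input.jpg")                              # hypothetical input file
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY).astype(np.float32)
depth_map = block_depth_map(gray, block=16)
right_view = warp_second_view(img, depth_map)              # left view: the original image
```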

Keywords: Depth map, 3D image warping, image rendering, bilateral filter, minimum spanning tree.

