Hybrid Temporal Correlation Based on Gaussian Mixture Model Framework for View Synthesis
As 3D video has been explored as a hot research topic over the last few decades, free-viewpoint TV (FTV) is undoubtedly a promising field for its better visual experience and incomparable interactivity. View synthesis is a crucial technology for FTV: it enables rendering images at an unlimited number of virtual viewpoints from the information of a limited number of reference views. In this paper, a novel hybrid synthesis framework is proposed and blending priority is explored. In contrast to the commonly used View Synthesis Reference Software (VSRS), the presented synthesis process takes the temporal correlation of image sequences into account. These temporal correlations are exploited to produce fine synthesis results even near foreground boundaries. As for the blending priority, the proposed scheme selects one of the two reference views as the main reference view based on the distances between the reference views and the virtual view; the other view is chosen as the auxiliary viewpoint and assists only in filling hole pixels with the help of background information. The proposed approach shows significant improvement over the state-of-the-art pixel-based virtual view synthesis method: subjective gains can be observed in the experiments, while objective PSNR average gains range from 0.5 to 1.3 dB and SSIM average gains range from 0.01 to 0.05.
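The two ideas summarized above — a per-pixel temporal background model in the spirit of the Gaussian mixture framework, and a blending priority that fills holes in the main reference view from the auxiliary view's background — can be sketched as follows. This is a minimal illustration, not the paper's implementation: it keeps a single Gaussian per pixel rather than a full mixture, and all function names, the learning rate `lr`, and the match threshold `k` are illustrative assumptions.

```python
import numpy as np

def update_background(bg_mean, bg_var, frame, lr=0.05, k=2.5):
    """Simplified per-pixel temporal background model (one Gaussian per
    pixel for brevity, standing in for a full mixture): pixels within
    k standard deviations of the running mean are treated as background
    and absorbed into the statistics; foreground outliers are not."""
    diff = frame - bg_mean
    is_bg = np.abs(diff) < k * np.sqrt(bg_var)
    bg_mean = np.where(is_bg, bg_mean + lr * diff, bg_mean)
    bg_var = np.where(is_bg, (1 - lr) * bg_var + lr * diff ** 2, bg_var)
    return bg_mean, bg_var

def pick_main_view(dist_left, dist_right):
    """Blending priority: the reference view closer to the virtual
    viewpoint becomes the main reference view."""
    return "left" if dist_left <= dist_right else "right"

def blend_views(main_warped, aux_background, hole_mask):
    """The main view supplies every visible pixel; disocclusion holes
    are filled from the auxiliary view's accumulated background."""
    return np.where(hole_mask, aux_background, main_warped)
```

Run over the sequence, `update_background` gradually learns the static scene behind moving foreground objects, which is what allows hole pixels near foreground boundaries to be filled with plausible background rather than inpainted texture.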
Digital Object Identifier (DOI): doi.org/10.5281/zenodo.1129684