NANCY: Combining Adversarial Networks with Cycle-Consistency for Robust Multi-Modal Image Registration

Authors: Mirjana Ruppel, Rajendra Persad, Amit Bahl, Sanja Dogramadzi, Chris Melhuish, Lyndon Smith

Abstract:

Multimodal image registration is a profoundly complex task, which is why deep learning has been widely applied to it in recent years. However, two main challenges remain: first, the lack of ground-truth data calls for an unsupervised learning approach, which leads to the second challenge of defining a feasible loss function that can compare two images of different modalities to judge their level of alignment. To avoid this issue altogether, we implement a generative adversarial network consisting of two registration networks G_AB and G_BA and two discriminator networks D_A and D_B, connected by spatial transformation layers. G_AB learns to generate a deformation field that registers an image of modality B to an image of modality A. To do so, it uses the feedback of the discriminator D_B, which learns to judge the quality of alignment of the registered image B. G_BA and D_A learn the mapping from modality A to modality B. Additionally, a cycle-consistency loss is implemented: both registration networks are applied twice, yielding images Â and B̂, which were registered to B̃ and Ã, which in turn were registered to the initial image pair A and B. The resulting and initial images of the same modality can thus be compared directly. A dataset of liver CT and MRI scans was used to evaluate the quality of our approach and to compare it against learning-based and non-learning-based registration algorithms. Our approach achieves Dice scores of up to 0.80 ± 0.01 and is therefore comparable to, and slightly more successful than, algorithms such as SimpleElastix and VoxelMorph.
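The cycle-consistency idea described above can be sketched in a minimal NumPy example. This is an illustrative sketch, not the paper's implementation: the registration networks G_AB and G_BA are stood in for by fixed deformation fields, the differentiable bilinear sampler of a real spatial transformation layer is replaced by nearest-neighbour sampling, and the helper names (`warp`, `cycle_consistency_loss`) are hypothetical.

```python
import numpy as np

def warp(image, flow):
    """Warp a 2D image with a dense per-pixel deformation field.

    flow[..., 0] / flow[..., 1] hold row / column displacements.
    Nearest-neighbour sampling is used here for simplicity; a spatial
    transformation layer would use differentiable bilinear sampling.
    """
    h, w = image.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    src_y = np.clip(np.round(ys + flow[..., 0]).astype(int), 0, h - 1)
    src_x = np.clip(np.round(xs + flow[..., 1]).astype(int), 0, w - 1)
    return image[src_y, src_x]

def cycle_consistency_loss(a, a_hat):
    """L1 loss between the original image A and its cycle reconstruction Â."""
    return float(np.mean(np.abs(a - a_hat)))

# Toy cycle: shift A one pixel towards B's frame, then shift back.
a = np.arange(16, dtype=float).reshape(4, 4)
flow_ab = np.zeros((4, 4, 2)); flow_ab[..., 1] = 1.0   # stand-in for G_AB output
flow_ba = np.zeros((4, 4, 2)); flow_ba[..., 1] = -1.0  # stand-in for G_BA output

b_tilde = warp(a, flow_ab)      # B̃: A registered towards B's frame
a_hat = warp(b_tilde, flow_ba)  # Â: registered back, should approximate A
loss = cycle_consistency_loss(a, a_hat)
```

Because Â and A share a modality, a simple intensity-based loss like the L1 above is meaningful, which is exactly what the cycle construction buys: no cross-modality similarity metric is ever needed. In the toy example the reconstruction is exact away from the image border, where clipping introduces a small residual.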

Keywords: Deep learning, GAN, multimodal image registration, cycle consistency


References:


[1] A. E. Kavur, M. A. Selver, O. Dicle, M. Bar, and N. S. Gezer, “Chaos - combined (ct-mr) healthy abdominal organ segmentation challenge data,” Apr. 2019. [Online]. Available: https://doi.org/10.5281/zenodo.3362844
[2] E. Ferrante and N. Paragios, “Slice-to-volume medical image registration: A survey,” Medical image analysis, vol. 39, pp. 101–123, 2017.
[3] M. Simonovsky, B. Gutiérrez-Becker, D. Mateus, N. Navab, and N. Komodakis, “A deep metric for multimodal registration,” in International conference on medical image computing and computer-assisted intervention. Springer, 2016, pp. 10–18.
[4] D. Mattes, D. R. Haynor, H. Vesselle, T. K. Lewellyn, and W. Eubank, “Nonrigid multimodality image registration,” in Medical Imaging 2001: Image Processing, vol. 4322. International Society for Optics and Photonics, 2001, pp. 1609–1620.
[5] D. Mahapatra, B. Antony, S. Sedai, and R. Garnavi, “Deformable medical image registration using generative adversarial networks,” in 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018). IEEE, 2018, pp. 1449–1453.
[6] G. Balakrishnan, A. Zhao, M. R. Sabuncu, J. Guttag, and A. V. Dalca, “An unsupervised learning model for deformable medical image registration,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2018, pp. 9252–9260.
[7] G. Balakrishnan, A. Zhao, M. R. Sabuncu, and J. Guttag, “Voxelmorph: a learning framework for deformable medical image registration,” IEEE transactions on medical imaging, 2019.
[8] X. Yang, R. Kwitt, and M. Niethammer, “Fast predictive image registration,” in Deep Learning and Data Labeling for Medical Applications. Springer, 2016, pp. 48–57.
[9] X. Yang, R. Kwitt, M. Styner, and M. Niethammer, “Quicksilver: Fast predictive image registration–a deep learning approach,” NeuroImage, vol. 158, pp. 378–396, 2017.
[10] J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros, “Unpaired image-to-image translation using cycle-consistent adversarial networks,” in Proceedings of the IEEE international conference on computer vision, 2017, pp. 2223–2232.
[11] B. C. Lowekamp, D. T. Chen, L. Ibáñez, and D. Blezek, “The design of simpleitk,” Frontiers in neuroinformatics, vol. 7, p. 45, 2013.
[12] J. Mitra, S. Ghose, D. Sidibé, A. Oliver, R. Marti, X. Llado, J. C. Vilanova, J. Comet, and F. Mériaudeau, “Weighted likelihood function of multiple statistical parameters to retrieve 2d trus-mr slice correspondence for prostate biopsy,” in 2012 19th IEEE International Conference on Image Processing. IEEE, 2012, pp. 2949–2952.
[13] C. Reynier, J. Troccaz, P. Fourneret, A. Dusserre, C. Gay-Jeune, J.-L. Descotes, M. Bolla, and J.-Y. Giraud, “Mri/trus data fusion for prostate brachytherapy. preliminary results,” Medical physics, vol. 31, no. 6, pp. 1568–1575, 2004.
[14] S. Xu, J. Kruecker, B. Turkbey, N. Glossop, A. K. Singh, P. Choyke, P. Pinto, and B. J. Wood, “Real-time mri-trus fusion for guidance of targeted prostate biopsies,” Computer Aided Surgery, vol. 13, no. 5, pp. 255–264, 2008.
[15] S. Zagoruyko and N. Komodakis, “Learning to compare image patches via convolutional neural networks,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2015, pp. 4353–4361.
[16] S. Miao, Z. J. Wang, and R. Liao, “A cnn regression approach for real-time 2d/3d registration,” IEEE transactions on medical imaging, vol. 35, no. 5, pp. 1352–1363, 2016.
[17] M. F. Stollenga, W. Byeon, M. Liwicki, and J. Schmidhuber, “Parallel multi-dimensional lstm, with application to fast biomedical volumetric image segmentation,” in Advances in neural information processing systems, 2015, pp. 2998–3006.
[18] R. Wright, B. Khanal, A. Gomez, E. Skelton, J. Matthew, J. V. Hajnal, D. Rueckert, and J. A. Schnabel, “Lstm spatial co-transformer networks for registration of 3d fetal us and mr brain images,” in Data Driven Treatment Response Assessment and Preterm, Perinatal, and Paediatric Image Analysis. Springer, 2018, pp. 149–159.
[19] J. Fan, X. Cao, Q. Wang, P.-T. Yap, and D. Shen, “Adversarial learning for mono- or multi-modal registration,” Medical Image Analysis, vol. 58, p. 101545, 2019.
[20] Z. Zhang, L. Yang, and Y. Zheng, “Translating and segmenting multimodal medical volumes with cycle- and shape-consistency generative adversarial network,” CoRR, vol. 1802.09655, 2018.
[21] O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for biomedical image segmentation,” in International Conference on Medical image computing and computer-assisted intervention. Springer, 2015, pp. 234–241.
[22] C. Ledig, L. Theis, F. Huszár, J. Caballero, A. Cunningham, A. Acosta, A. Aitken, A. Tejani, J. Totz, Z. Wang et al., “Photo-realistic single image super-resolution using a generative adversarial network,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2017, pp. 4681–4690.
[23] C. Li and M. Wand, “Precomputed real-time texture synthesis with markovian generative adversarial networks,” in European conference on computer vision. Springer, 2016, pp. 702–716.
[24] T. Sørenson, A method of establishing groups of equal amplitude in plant sociology based on similarity of species content and its application to analyses of the vegetation on Danish commons, 1948.
[25] P. Cignoni, C. Rocchini, and R. Scopigno, “Metro: measuring error on simplified surfaces,” in Computer Graphics Forum, vol. 17, no. 2. Wiley Online Library, 1998, pp. 167–174.
[26] C. Wang, J. Yang, L. Xie, and J. Yuan, “Kervolutional neural networks,” CoRR, vol. 1904.03955, 2019.