task, which is why deep learning has been widely used to address it in recent years. However, two main challenges remain. First, the lack of ground-truth data calls for an unsupervised learning approach, which leads to the second challenge: defining a feasible loss function that can compare two images of different modalities and judge their level of alignment. To avoid this issue altogether, we implement a generative adversarial network consisting of two registration networks G_AB, G_BA and two discrimination networks D_A, D_B connected by spatial transformation layers. G_AB learns to generate a deformation field that registers an image of modality B to an image of modality A. To do so, it uses the feedback of the discriminator D_B, which learns to judge the quality of alignment of the registered image B. G_BA and D_A learn the mapping from modality A to modality B. Additionally, a cycle-consistency loss is implemented: both registration networks are applied twice, yielding images Â, B̂ that were registered to B̃, Ã, which in turn were registered to the initial image pair A, B. The resulting and initial images of the same modality can thus be compared directly. A dataset of liver CT and MRI scans was used to evaluate the quality of our approach and to compare it against learning-based and non-learning-based registration algorithms. Our approach achieves Dice scores of up to 0.80 ± 0.01 and is therefore comparable to and slightly more successful than
algorithms such as SimpleElastix and VoxelMorph.

References

[1] A. E. Kavur, M. A. Selver, O. Dicle, M. Barış, and N. S. Gezer, "CHAOS – Combined (CT-MR) healthy abdominal organ segmentation challenge data," Apr. 2019. [Online]. Available: https://doi.org/10.5281/zenodo.3362844
[2] E. Ferrante and N. Paragios, "Slice-to-volume medical image registration: A survey," Medical Image Analysis, vol. 39, pp. 101–123, 2017.
[3] M. Simonovsky, B. Gutiérrez-Becker, D. Mateus, N. Navab, and N. Komodakis, "A deep metric for multimodal registration," in International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, 2016, pp. 10–18.
[4] D. Mattes, D. R. Haynor, H. Vesselle, T. K. Lewellyn, and W. Eubank, "Nonrigid multimodality image registration," in Medical Imaging 2001: Image Processing, vol. 4322. International Society for Optics and Photonics, 2001, pp. 1609–1620.
[5] D. Mahapatra, B. Antony, S. Sedai, and R. Garnavi, "Deformable medical image registration using generative adversarial networks," in 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018). IEEE, 2018, pp. 1449–1453.
[6] G. Balakrishnan, A. Zhao, M. R. Sabuncu, J. Guttag, and A. V. Dalca, "An unsupervised learning model for deformable medical image registration," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 9252–9260.
[7] G. Balakrishnan, A. Zhao, M. R. Sabuncu, and J. Guttag, "VoxelMorph: A learning framework for deformable medical image registration," IEEE Transactions on Medical Imaging, 2019.
[8] X. Yang, R. Kwitt, and M. Niethammer, "Fast predictive image registration," in Deep Learning and Data Labeling for Medical Applications. Springer, 2016, pp. 48–57.
[9] X. Yang, R. Kwitt, M. Styner, and M. Niethammer, "Quicksilver: Fast predictive image registration – a deep learning approach," NeuroImage, vol. 158, pp. 378–396, 2017.
[10] J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros, "Unpaired image-to-image translation using cycle-consistent adversarial networks," in Proceedings of the IEEE International Conference on Computer Vision, 2017, pp. 2223–2232.
[11] B. C. Lowekamp, D. T. Chen, L. Ibáñez, and D. Blezek, "The design of SimpleITK," Frontiers in Neuroinformatics, vol. 7, p. 45, 2013.
[12] J. Mitra, S. Ghose, D. Sidibé, A. Oliver, R. Marti, X. Llado, J. C. Vilanova, J. Comet, and F. Mériaudeau, "Weighted likelihood function of multiple statistical parameters to retrieve 2D TRUS-MR slice correspondence for prostate biopsy," in 2012 19th IEEE International Conference on Image Processing. IEEE, 2012, pp. 2949–2952.
[13] C. Reynier, J. Troccaz, P. Fourneret, A. Dusserre, C. Gay-Jeune, J.-L. Descotes, M. Bolla, and J.-Y. Giraud, "MRI/TRUS data fusion for prostate brachytherapy. Preliminary results," Medical Physics, vol. 31, no. 6, pp. 1568–1575, 2004.
[14] S. Xu, J. Kruecker, B. Turkbey, N. Glossop, A. K. Singh, P. Choyke, P. Pinto, and B. J. Wood, "Real-time MRI-TRUS fusion for guidance of targeted prostate biopsies," Computer Aided Surgery, vol. 13, no. 5, pp. 255–264, 2008.
[15] S. Zagoruyko and N. Komodakis, "Learning to compare image patches via convolutional neural networks," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 4353–4361.
[16] S. Miao, Z. J. Wang, and R. Liao, "A CNN regression approach for real-time 2D/3D registration," IEEE Transactions on Medical Imaging, vol. 35, no. 5, pp. 1352–1363, 2016.
[17] M. F. Stollenga, W. Byeon, M. Liwicki, and J. Schmidhuber, "Parallel multi-dimensional LSTM, with application to fast biomedical volumetric image segmentation," in Advances in Neural Information Processing Systems, 2015, pp. 2998–3006.
[18] R. Wright, B. Khanal, A. Gomez, E. Skelton, J. Matthew, J. V. Hajnal, D. Rueckert, and J. A. Schnabel, "LSTM spatial co-transformer networks for registration of 3D fetal US and MR brain images," in Data Driven Treatment Response Assessment and Preterm, Perinatal, and Paediatric Image Analysis. Springer, 2018, pp. 149–159.
[19] J. Fan, X. Cao, Q. Wang, P.-T. Yap, and D. Shen, "Adversarial learning for mono- or multi-modal registration," Medical Image Analysis, vol. 58, p. 101545, 2019.
[20] Z. Zhang, L. Yang, and Y. Zheng, "Translating and segmenting multimodal medical volumes with cycle- and shape-consistency generative adversarial network," CoRR, vol. abs/1802.09655, 2018.
[21] O. Ronneberger, P. Fischer, and T. Brox, "U-Net: Convolutional networks for biomedical image segmentation," in International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, 2015, pp. 234–241.
[22] C. Ledig, L. Theis, F. Huszár, J. Caballero, A. Cunningham, A. Acosta, A. Aitken, A. Tejani, J. Totz, Z. Wang et al., "Photo-realistic single image super-resolution using a generative adversarial network," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 4681–4690.
[23] C. Li and M. Wand, "Precomputed real-time texture synthesis with Markovian generative adversarial networks," in European Conference on Computer Vision. Springer, 2016, pp. 702–716.
[24] T. Sørensen, A Method of Establishing Groups of Equal Amplitude in Plant Sociology Based on Similarity of Species Content and Its Application to Analyses of the Vegetation on Danish Commons, 1948.
[25] P. Cignoni, C. Rocchini, and R. Scopigno, "Metro: Measuring error on simplified surfaces," in Computer Graphics Forum, vol. 17, no. 2. Wiley Online Library, 1998, pp. 167–174.
[26] C. Wang, J. Yang, L. Xie, and J. Yuan, "Kervolutional neural networks," CoRR, vol. abs/1904.03955, 2019.

Published by World Academy of Science, Engineering and Technology, Open Science Index 164, 2020.