Abstract |
---|
Multimodal registration is a challenging problem due to the high variability of tissue appearance under different imaging modalities. The crucial component is the choice of a suitable similarity measure. We take a step towards a general learning-based solution that can be adapted to specific situations, and present a metric based on a convolutional neural network. Our network can be trained from scratch even from a few aligned image pairs. The metric is validated on intersubject deformable registration on a dataset different from the one used for training, demonstrating good generalization. In this task, we outperform mutual information by a significant margin. |
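The abstract describes learning a patch-similarity metric with a convolutional network instead of using a handcrafted measure such as mutual information. A minimal, untrained sketch of that idea is below: patches from the two modalities are stacked as input channels, passed through a small convolutional network to produce a scalar similarity score, and a hinge loss on aligned/misaligned pair labels would drive training. All layer sizes, names, and the plain-numpy implementation are illustrative assumptions, not the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d_valid(x, w):
    """Valid-mode 2D correlation of a multi-channel input with one filter.
    x: (C, H, W), w: (C, kH, kW) -> (H-kH+1, W-kW+1)."""
    C, H, W = x.shape
    _, kH, kW = w.shape
    out = np.zeros((H - kH + 1, W - kW + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[:, i:i + kH, j:j + kW] * w)
    return out

class TinyMetricNet:
    """Toy two-channel patch-similarity network (illustrative only):
    one conv layer + ReLU, global average pooling, linear output."""
    def __init__(self, n_filters=4, k=3):
        # Random, untrained weights -- a real metric would be learned
        # from aligned image pairs as the abstract describes.
        self.filters = rng.standard_normal((n_filters, 2, k, k)) * 0.1
        self.w_out = rng.standard_normal(n_filters) * 0.1

    def score(self, patch_a, patch_b):
        x = np.stack([patch_a, patch_b])  # (2, H, W): modalities as channels
        feats = [np.maximum(conv2d_valid(x, f), 0).mean() for f in self.filters]
        return float(np.dot(self.w_out, np.array(feats)))

def hinge_loss(score, label, margin=1.0):
    """Hinge loss on +1 (aligned) / -1 (misaligned) pair labels."""
    return max(0.0, margin - label * score)

net = TinyMetricNet()
a = rng.standard_normal((16, 16))  # patch from modality A
b = rng.standard_normal((16, 16))  # patch from modality B
s = net.score(a, b)
print("similarity score:", s)
print("loss if aligned:", hinge_loss(s, +1))
```

Stacking the two modalities as input channels lets the early layers learn joint features of both appearances, which is one common design choice for learned multimodal metrics.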
Year | Venue | DocType
---|---|---
2016 | MICCAI | Conference

Volume | Citations | PageRank
---|---|---
abs/1609.05396 | 23 | 0.88

References | Authors
---|---
8 | 5
Name | Order | Citations | PageRank |
---|---|---|---
Martin Simonovsky | 1 | 121 | 5.33 |
Benjamín Gutiérrez-Becker | 2 | 36 | 2.18 |
Diana Mateus | 3 | 417 | 32.74 |
Nassir Navab | 4 | 6594 | 578.60 |
Nikos Komodakis | 5 | 2301 | 108.03 |