Abstract |
---|
Accurate and robust fusion of pre-procedure magnetic resonance imaging (MRI) to intra-procedure trans-rectal ultrasound (TRUS) imaging is necessary for image-guided prostate cancer biopsy procedures. The current clinical standard for image fusion relies on non-rigid surface-based registration between semi-automatically segmented prostate surfaces in both the MRI and TRUS. This surface-based registration method does not take advantage of internal anatomical prostate structures, which have the potential to provide useful information for image registration. However, non-rigid, multi-modal intensity-based MRI-TRUS registration is challenging due to highly non-linear intensity relationships between MRI and TRUS. In this paper, we present preliminary work using image synthesis to cast this problem into a mono-modal registration task, using a large database of over 100 clinical MRI-TRUS image pairs to learn a joint model of MRI-TRUS appearance. Thus, given an MRI, we use this learned joint appearance model to synthesize the patient's corresponding TRUS image appearance, with which we could potentially perform mono-modal intensity-based registration. We present preliminary results of this approach. |
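The core idea of the abstract, that synthesizing a TRUS-like image from the MRI lets a simple mono-modal intensity metric succeed where a direct MRI-TRUS comparison fails, can be illustrated with a minimal sketch. The paper's learned joint appearance model is stood in for here by a hypothetical intensity lookup table (`lut`), and sum-of-squared-differences (SSD) serves as the mono-modal metric; neither detail is taken from the paper itself.

```python
import numpy as np

def synthesize_trus(mri, lut):
    """Hypothetical stand-in for the learned joint MRI-TRUS appearance model:
    map each MRI intensity to an expected TRUS intensity via a lookup table."""
    return lut[mri]

def ssd(a, b):
    """Sum-of-squared-differences: a standard mono-modal similarity metric."""
    return float(np.sum((a.astype(float) - b.astype(float)) ** 2))

# Toy example: a 4x4 "MRI" with intensities 0-3 and an inverse-contrast "TRUS".
rng = np.random.default_rng(0)
mri = rng.integers(0, 4, size=(4, 4))
lut = np.array([3, 2, 1, 0])   # assumed learned mapping (purely illustrative)
trus = lut[mri]                # the "real" TRUS, here generated exactly

synth = synthesize_trus(mri, lut)
# Direct multi-modal SSD between MRI and TRUS is large even though the images
# are perfectly aligned, while SSD between the synthesized and real TRUS is 0,
# so an intensity-based optimizer could now drive the registration.
print(ssd(mri, trus), ssd(synth, trus))
```

In practice the non-linear MRI-TRUS relationship is far richer than a lookup table, which is why the paper learns a joint appearance model from over 100 clinical image pairs; the sketch only shows why synthesis makes a mono-modal metric meaningful.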
Year | DOI | Venue
---|---|---
2016 | 10.1007/978-3-319-46630-9_16 | Lecture Notes in Computer Science

DocType | Volume | ISSN
---|---|---
Conference | 9968 | 0302-9743

Citations | PageRank | References
---|---|---
2 | 0.38 | 10
Authors |
---|
6 |
Name | Order | Citations | PageRank |
---|---|---|---
John A. Onofrey | 1 | 23 | 6.16 |
Ilkay Öksüz | 2 | 54 | 9.32 |
Saradwata Sarkar | 3 | 6 | 1.46 |
Rajesh Venkataraman | 4 | 6 | 1.46 |
Lawrence H. Staib | 5 | 526 | 63.63 |
Xenophon Papademetris | 6 | 248 | 16.31 |