Abstract
---|
Combined PET/CT scanning is an important diagnostic tool in modern medicine, e.g. for staging or treatment planning in the field of oncology. In small structures such as tumours in particular, textural variations that are visible in a PET image are not visually recognizable in a CT scan of the same region. Thus, both modalities are necessary for diagnosis. Since both techniques expose the patient to radiation, it would be desirable to obtain the information about metabolic activity contained in the PET image from a CT scan alone. To investigate the relationship between the two imaging modalities, we propose a machine learning approach to automatically identify regions in a CT scan that correspond to areas of high FDG uptake in a PET image.
Year | Venue | Field |
---|---|---|
2018 | ICASSP | Pattern recognition, Imaging modalities, Computer science, Radiation treatment planning, Feature extraction, Computed tomography, Positron emission tomography, Artificial intelligence, Radiology
DocType | Citations | PageRank
---|---|---|
Conference | 1 | 0.36
References | Authors
---|---|
0 | 5
Name | Order | Citations | PageRank |
---|---|---|---|
Annika Liebgott | 1 | 3 | 2.44 |
Florian Liebgott | 2 | 1 | 0.36 |
Bin Yang | 3 | 4 | 2.42 |
Sergios Gatidis | 4 | 31 | 8.17 |
Konstantin Nikolaou | 5 | 23 | 4.36 |