Title
Deep Residual Learning for Instrument Segmentation in Robotic Surgery.
Abstract
Detection, tracking, and pose estimation of surgical instruments provide critical information that can be used to correct inaccuracies in kinematic data in robotic-assisted surgery. Such information can be used for various purposes, including the integration of pre- and intra-operative images into the endoscopic view. In some cases, automatic segmentation of surgical instruments is a crucial step towards full instrument pose estimation, but it can also be used on its own to improve user interaction with the robotic system. In our work we focus on binary instrument segmentation, where the objective is to label every pixel as instrument or background, and on instrument part segmentation, where different semantically separate parts of the instrument are labeled. We improve upon previous work by leveraging recent techniques such as deep residual learning and dilated convolutions, and advance both binary segmentation and instrument part segmentation performance on the EndoVis 2017 Robotic Instruments dataset. The source code for the experiments reported in the paper has been made public (https://github.com/warmspringwinds/pytorch-segmentation-detection).
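The abstract combines two ingredients: residual (identity-shortcut) blocks and dilated convolutions. As an illustration only — this is not the authors' implementation (their PyTorch source is linked above) — here is a minimal single-channel NumPy sketch of a residual block built around a dilated convolution; the names `dilated_conv2d` and `residual_block` are hypothetical and chosen for this example.

```python
import numpy as np

def dilated_conv2d(x, kernel, dilation=2):
    """Naive 2D dilated convolution with zero padding so the output
    keeps the input's spatial size (a sketch, not an optimized op)."""
    kh, kw = kernel.shape
    # Effective receptive field of the dilated kernel.
    eh = (kh - 1) * dilation + 1
    ew = (kw - 1) * dilation + 1
    ph, pw = eh // 2, ew // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x, dtype=float)
    H, W = x.shape
    for i in range(H):
        for j in range(W):
            acc = 0.0
            for a in range(kh):
                for b in range(kw):
                    # Taps are spaced `dilation` pixels apart.
                    acc += kernel[a, b] * xp[i + a * dilation, j + b * dilation]
            out[i, j] = acc
    return out

def residual_block(x, kernel, dilation=2):
    """y = x + ReLU(dilated_conv(x)): an identity shortcut plus a
    dilated convolution, the two techniques named in the abstract."""
    return x + np.maximum(dilated_conv2d(x, kernel, dilation), 0.0)
```

With a 3x3 kernel whose only nonzero tap is the center, the dilated convolution reduces to the identity, so the residual block simply doubles its input, which makes the shortcut behavior easy to check by hand.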
Year
2017
DOI
10.1007/978-3-030-32692-0_65
Venue
Lecture Notes in Computer Science
Field
Residual, Computer vision, Binary segmentation, Pattern recognition, Computer science, Segmentation, Convolution, Pose, Robotic surgery, Pixel, Artificial intelligence
DocType
Journal
Volume
11861
ISSN
0302-9743
Citations
10
PageRank
0.65
References
8
Authors
5
Name                   Order  Citations  PageRank
Daniil Pakhomov        1      11         1.69
Vittal Premachandran   2      64         5.39
Max Allan              3      129        10.14
Mahdi Azizian          4      14         1.75
Nassir Navab           5      6594       578.60