Abstract
---
Detection, tracking, and pose estimation of surgical instruments provide critical information that can be used to correct inaccuracies in kinematic data in robotic-assisted surgery. Such information can be used for various purposes, including the integration of pre- and intra-operative images into the endoscopic view. In some cases, automatic segmentation of surgical instruments is a crucial step towards full instrument pose estimation, but it can also be used on its own to improve user interaction with the robotic system. In our work, we focus on binary instrument segmentation, where the objective is to label every pixel as instrument or background, and on instrument part segmentation, where semantically distinct parts of the instrument are labeled. We improve upon previous work by leveraging recent techniques such as deep residual learning and dilated convolutions, and we advance both binary segmentation and instrument part segmentation performance on the EndoVis 2017 Robotic Instruments dataset. The source code for the experiments reported in the paper has been made public (https://github.com/warmspringwinds/pytorch-segmentation-detection).
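The dilated (atrous) convolutions mentioned in the abstract enlarge a filter's receptive field without adding parameters, which is what makes them attractive for dense per-pixel prediction. As a rough illustration of the operation (a minimal NumPy sketch, not the paper's actual implementation, which lives in the linked PyTorch repository):

```python
import numpy as np

def dilated_conv2d(image, kernel, dilation=1):
    """Naive 2-D dilated (atrous) convolution with 'valid' padding.

    Spreading the kernel taps (dilation - 1) pixels apart enlarges the
    effective receptive field while keeping the parameter count fixed.
    """
    kh, kw = kernel.shape
    # Effective kernel extent once the taps are spread out.
    eh = (kh - 1) * dilation + 1
    ew = (kw - 1) * dilation + 1
    H, W = image.shape
    out = np.zeros((H - eh + 1, W - ew + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Strided slicing picks out the dilated sampling grid.
            patch = image[i:i + eh:dilation, j:j + ew:dilation]
            out[i, j] = np.sum(patch * kernel)
    return out

img = np.arange(36, dtype=float).reshape(6, 6)
k = np.ones((3, 3))
# dilation=1 is an ordinary 3x3 convolution; dilation=2 covers a 5x5 region
# with the same nine weights.
print(dilated_conv2d(img, k, dilation=1).shape)  # (4, 4)
print(dilated_conv2d(img, k, dilation=2).shape)  # (2, 2)
```

In a segmentation network, such dilated filters let the backbone keep a dense feature map at higher resolution, after which each pixel is assigned an instrument/background (or part) label.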
Year | DOI | Venue |
---|---|---|
2017 | 10.1007/978-3-030-32692-0_65 | Lecture Notes in Computer Science |
Field | DocType | Volume
---|---|---|
Residual, Computer vision, Binary segmentation, Pattern recognition, Computer science, Segmentation, Convolution, Pose, Robotic surgery, Pixel, Artificial intelligence | Journal | 11861
ISSN | Citations | PageRank
---|---|---|
0302-9743 | 10 | 0.65
References | Authors
---|---|
8 | 5
Name | Order | Citations | PageRank |
---|---|---|---|
Daniil Pakhomov | 1 | 11 | 1.69 |
Vittal Premachandran | 2 | 64 | 5.39 |
Max Allan | 3 | 129 | 10.14 |
Mahdi Azizian | 4 | 14 | 1.75 |
Nassir Navab | 5 | 6594 | 578.60 |