Abstract |
---|
Autonomous vehicles must operate robustly across widely varied scenarios and conditions; however, environmental factors such as weather and lighting can impede the perception systems required for safe operation. In this work we investigate the effects lighting changes can have on semantic segmentation of urban road scenes, specifically how segmentation performance is affected by underexposed imagery. Using two publicly available datasets, we simulate incorrectly set camera exposure and compare the performance of a standard pre-trained deep semantic segmentation network on correctly and incorrectly exposed images. We then introduce a novel input optimisation network, which modifies a given image so that it elicits an optimal response from a pre-trained semantic segmentation network. We compare our approach to an adversarially trained model and demonstrate significantly improved semantic segmentation performance over that of unoptimised images. |
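The abstract does not specify how incorrect exposure is simulated, so the following is a minimal sketch of one common approach: darkening an 8-bit sRGB image by a given number of photographic stops in (approximately) linear light. The function name, the simple gamma model, and the default parameters are illustrative assumptions, not the authors' method.

```python
import numpy as np

def simulate_underexposure(image, stops=-2.0, gamma=2.2):
    """Darken an image as if the camera exposure were set `stops` stops too low.

    Assumes an 8-bit sRGB-like input: linearise with a simple gamma curve,
    scale linear intensity by 2**stops (each stop halves the captured light),
    then re-encode back to 8-bit.
    """
    linear = (image.astype(np.float32) / 255.0) ** gamma
    linear *= 2.0 ** stops
    encoded = np.clip(linear, 0.0, 1.0) ** (1.0 / gamma)
    return np.rint(encoded * 255.0).astype(np.uint8)

# Example: darken a synthetic mid-grey image by two stops.
img = np.full((4, 4, 3), 200, dtype=np.uint8)
dark = simulate_underexposure(img, stops=-2.0)
```

A gamma of 2.2 is a rough stand-in for the sRGB transfer function; a faithful simulation would use the piecewise sRGB curve and operate on raw sensor values where available.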
Year | DOI | Venue
---|---|---
2020 | 10.1109/SSRR50563.2020.9292626 | 2020 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR)
Keywords | DocType | ISSN
---|---|---
autonomous vehicles, environmental factors, urban road scenes, input optimisation network, underexposed images, pretrained deep semantic segmentation network, camera exposure, multiple object classes, image features, road scene understanding | Conference | 2374-3247

ISBN | Citations | PageRank
---|---|---
978-1-6654-0391-7 | 1 | 0.38

References | Authors
---|---
0 | 3
Name | Order | Citations | PageRank
---|---|---|---
Christopher J. Holder | 1 | 1 | 0.38
Majid Khonji | 2 | 1 | 0.38
Jorge Dias | 3 | 175 | 33.83