Title
CLoDSA: a tool for augmentation in classification, localization, detection, semantic segmentation and instance segmentation tasks.
Abstract
Deep learning techniques have been successfully applied to bioimaging problems; however, these methods are highly data demanding. One approach to dealing with the lack of data and avoiding overfitting is data augmentation, a technique that generates new training samples from the original dataset by applying different kinds of transformations. Several tools exist for data augmentation in the context of image classification, but no similar tool exists for the problems of localization, detection, semantic segmentation, or instance segmentation that works not only with two-dimensional images but also with multi-dimensional images (such as stacks or videos). In this paper, we present a generic strategy that can be applied to automatically augment a dataset of images, or multi-dimensional images, devoted to classification, localization, detection, semantic segmentation, or instance segmentation. The augmentation method presented in this paper has been implemented in the open-source package CLoDSA. To demonstrate the benefits of using CLoDSA, we have employed this library to improve the accuracy of models for Malaria parasite classification, stomata detection, and automatic segmentation of neural structures. CLoDSA is, to the best of our knowledge, the first image augmentation library for object classification, localization, detection, semantic segmentation, and instance segmentation that works not only with two-dimensional images but also with multi-dimensional images.
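The key difficulty the abstract points at is that, beyond plain classification, augmenting a segmentation dataset requires transforming the annotation together with the image so that labels stay aligned with pixels. The following minimal NumPy sketch (an illustration of the general idea, not CLoDSA's actual API) flips an image and its per-pixel segmentation mask jointly:

```python
import numpy as np

def flip_pair(image, mask):
    """Horizontally flip an image together with its segmentation mask,
    so the augmented annotation stays aligned with the augmented image."""
    return np.fliplr(image), np.fliplr(mask)

# Toy 2x3 grayscale image and its per-pixel label mask.
image = np.array([[10, 20, 30],
                  [40, 50, 60]])
mask = np.array([[0, 0, 1],
                 [0, 1, 1]])

aug_image, aug_mask = flip_pair(image, mask)
# Each label moves with its pixel: aug_mask is [[1, 0, 0], [1, 1, 0]].
```

The same principle extends to the other tasks the paper covers: for detection and localization the bounding-box coordinates must be transformed with the image, and for multi-dimensional inputs (stacks or videos) the transformation must be applied consistently across every slice or frame.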
Year
2019
DOI
10.1186/s12859-019-2931-1
Venue
BMC Bioinformatics
Keywords
Data augmentation, Classification, Detection, Segmentation, Multi-dimensional images
Field
Biology, Pattern recognition, Segmentation, Artificial intelligence, Overfitting, Deep learning, Bioinformatics, Contextual image classification
DocType
Journal
Volume
20
Issue
1
ISSN
1471-2105
Citations
1
PageRank
0.36
References
5
Authors
7