Abstract |
---|
While deep learning has led to significant advances in visual recognition over the past few years, such advances often require large amounts of annotated data. Unsupervised domain adaptation has emerged as an alternative approach that requires less annotated data, but prior evaluations of domain adaptation have been limited to relatively simple datasets. This work pushes the state of the art in unsupervised domain adaptation through an in-depth evaluation of AlexNet, DenseNet, and Residual Transfer Networks (RTN) on multimodal benchmark datasets, identifying which layers more effectively transfer features across different domains. We also modify the existing RTN architecture and propose a novel domain adaptation architecture called MagNet, which combines Deep Convolutional Blocks with multiple Maximum Mean Discrepancy losses. Our experiments show quantitative and qualitative improvements in the performance of our method on benchmark datasets for complex data domains. |
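The abstract's central mechanism is the Maximum Mean Discrepancy (MMD) loss, which measures the distance between source- and target-domain feature distributions so the network can be trained to align them. A minimal NumPy sketch of the standard (biased) MMD estimate with a Gaussian kernel follows; the bandwidth `sigma` and the synthetic feature batches are illustrative assumptions, not details from the paper, which applies such losses at multiple network layers:

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    # Pairwise Gaussian (RBF) kernel matrix between rows of x and y.
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def mmd2(source, target, sigma=1.0):
    # Biased squared-MMD estimate between two feature batches:
    # E[k(s, s')] + E[k(t, t')] - 2 E[k(s, t)].
    k_ss = gaussian_kernel(source, source, sigma).mean()
    k_tt = gaussian_kernel(target, target, sigma).mean()
    k_st = gaussian_kernel(source, target, sigma).mean()
    return k_ss + k_tt - 2 * k_st

rng = np.random.default_rng(0)
same = mmd2(rng.normal(size=(64, 16)), rng.normal(size=(64, 16)))
shifted = mmd2(rng.normal(size=(64, 16)), rng.normal(loc=2.0, size=(64, 16)))
# A distribution shift between batches yields a larger MMD value.
print(same < shifted)
```

In a domain adaptation setup, `source` and `target` would be intermediate-layer activations, and `mmd2` would be added to the classification loss as a regularizer encouraging domain-invariant features.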
Year | Venue | Field
---|---|---|
2017 | arXiv: Computer Vision and Pattern Recognition | Maximum mean discrepancy, Residual, Architecture, Computer science, Domain adaptation, Complex data type, Visual recognition, Artificial intelligence, Deep learning, Benchmarking, Machine learning
DocType | Volume | Citations
---|---|---|
Journal | abs/1712.02286 | 0

PageRank | References | Authors
---|---|---|
0.34 | 21 | 3
Name | Order | Citations | PageRank
---|---|---|---|
Yunhan Zhao | 1 | 0 | 0.68
Haider Ali | 2 | 84 | 15.04
Rene Victor Valqui Vidal | 3 | 5331 | 260.14