Abstract |
---|
Deep metric learning has gained much popularity in recent years, following the success of deep learning. However, existing deep metric learning frameworks based on the contrastive loss and the triplet loss often suffer from slow convergence, partly because each update uses only one positive and one negative example and does not interact with the other positive or negative examples. In this paper, we first propose the strict discrimination concept for seeking an optimal embedding space. Based on this concept, we then propose a new metric learning objective, the Margin-based Discriminate Loss, which keeps similar and dissimilar examples strictly discriminated by pulling multiple positive examples together while pushing multiple negative examples away at each update. Importantly, it requires no expensive sampling strategies. We demonstrate the validity of the proposed loss against the triplet loss and other competing loss functions on a variety of fine-grained image clustering and retrieval tasks. |
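The abstract only describes the loss at a high level: pull multiple positives toward the anchor while pushing multiple negatives past a margin in a single update. The exact formulation is not given here, so the following is a minimal illustrative sketch under assumed design choices (squared Euclidean pull term, hinge-style margin push term, uniform averaging); the function name and `margin` parameter are hypothetical, not from the paper.

```python
import numpy as np

def margin_discriminate_loss(anchor, positives, negatives, margin=1.0):
    """Illustrative sketch of a margin-based discriminative loss.

    anchor:    (d,) embedding of the anchor example
    positives: (P, d) embeddings of multiple positive examples
    negatives: (N, d) embeddings of multiple negative examples
    margin:    assumed hinge margin for the negative push term
    """
    pos_d = np.linalg.norm(positives - anchor, axis=1)  # distances to all positives
    neg_d = np.linalg.norm(negatives - anchor, axis=1)  # distances to all negatives
    pull = np.mean(pos_d ** 2)                          # pull every positive toward the anchor
    push = np.mean(np.maximum(0.0, margin - neg_d) ** 2)  # push negatives beyond the margin
    return pull + push
```

Because all positives and all negatives contribute to every update, no triplet-mining step is needed, which is the convergence advantage the abstract claims over contrastive and triplet losses.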
Year | DOI | Venue |
---|---|---|
2018 | 10.1007/978-3-319-97785-0_11 | Lecture Notes in Computer Science |

Keywords | DocType | Volume |
---|---|---|
Metric learning, Deep embedding, Representation learning, Neural networks | Conference | 11004 |

ISSN | Citations | PageRank |
---|---|---|
0302-9743 | 0 | 0.34 |

References | Authors |
---|---|
0 | 3 |
Name | Order | Citations | PageRank |
---|---|---|---|
Peng Sun | 1 | 38 | 14.21 |
Wenzhong Tang | 2 | 1 | 1.70 |
Bai Xiao | 3 | 470 | 48.67 |