Abstract
---
We introduce a new method for representing the dynamics of human-object interactions in videos. Previous algorithms tend to focus on modeling the spatial relationships between objects and actors, but ignore how this relationship evolves through time. Our algorithm captures the dynamic nature of human-object interactions by modeling how these patterns evolve over time. Our experiments show that encoding such temporal evolution is crucial for correctly discriminating human actions that involve similar objects and spatial human-object relationships but differ only in the temporal aspect of the interaction, e.g., *answer phone* vs. *dial phone*. We validate our approach on two human activity datasets and show performance improvements over competing state-of-the-art representations.
Year | DOI | Venue
---|---|---
2013 | 10.1109/ICCVW.2013.72 | ICCVW '13 Proceedings of the 2013 IEEE International Conference on Computer Vision Workshops

Keywords | Field | DocType
---|---|---
human action, answer phone, action recognition, human-object interaction, spatial relationship, human activity datasets, dynamic nature, dial phone, temporal aspect, spatio-temporal human-object interactions, show performance improvement, spatial human-object relationship | Computer vision, Computer science, Action recognition, Support vector machine, Phone, Artificial intelligence, Hidden Markov model, Machine learning, Semantics, Encoding (memory) | Conference

Volume | Issue | Citations
---|---|---
2013 | 1 | 7

PageRank | References | Authors
---|---|---
0.45 | 17 | 2
Name | Order | Citations | PageRank
---|---|---|---
Victor Escorcia | 1 | 88 | 5.44
Juan Carlos Niebles | 2 | 1485 | 80.45