Abstract |
---|
To realize context-aware applications for smart home environments, it is necessary to recognize the function or usage of objects as well as their categories. In conventional research on environment recognition in indoor environments, most previous methods are based on shape models. In this paper, we propose a method for recognizing objects that focuses on the relationship between human actions and the functions of objects. This relationship becomes evident in a person's action patterns when he or she handles an object. To estimate object categories from action patterns, we represent this relationship with Dynamic Bayesian Networks (DBNs). By statistically learning human actions toward objects, the objects can be recognized. Finally, we performed experiments and confirmed that objects can be recognized from human actions without shape models. |
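The abstract's core idea, inferring an object's category from the pattern of actions a person performs on it, can be sketched as recursive Bayesian filtering, i.e. one repeated time slice of a DBN. Everything below is a hypothetical illustration: the categories, action labels, and probability values are invented for the sketch and are not taken from the paper.

```python
# Minimal sketch of DBN-style inference: estimate an object's category from
# a sequence of observed human actions via recursive Bayesian filtering.
# Categories, actions, and probabilities are all hypothetical.

CATEGORIES = ["cup", "book", "remote"]

# P(action | category): likelihood of observing each action while a person
# handles an object of the given category (illustrative values only).
LIKELIHOOD = {
    "cup":    {"grasp": 0.5, "lift_to_mouth": 0.4, "press_button": 0.1},
    "book":   {"grasp": 0.5, "lift_to_mouth": 0.1, "press_button": 0.4},
    "remote": {"grasp": 0.3, "lift_to_mouth": 0.1, "press_button": 0.6},
}

def update(belief, action):
    """One time step: multiply the prior belief by the action likelihood, renormalize."""
    posterior = {c: belief[c] * LIKELIHOOD[c][action] for c in CATEGORIES}
    total = sum(posterior.values())
    return {c: p / total for c, p in posterior.items()}

def recognize(actions):
    """Filter a whole observed action sequence, starting from a uniform prior."""
    belief = {c: 1.0 / len(CATEGORIES) for c in CATEGORIES}
    for a in actions:
        belief = update(belief, a)
    return belief

belief = recognize(["grasp", "lift_to_mouth", "lift_to_mouth"])
print(max(belief, key=belief.get))  # prints "cup"
```

Repeatedly seeing `lift_to_mouth` concentrates the belief on `cup`, mirroring how the paper's DBN accumulates evidence from an action pattern rather than from a shape model.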
Year | DOI | Venue |
---|---|---|
2008 | 10.1109/FGCN.2008.62 | Future Generation Communication and Networking, 2008. FGCN '08. Second International Conference |
Keywords | Field | DocType
---|---|---|
object recognition,dynamic bayesian networks,mobile projection display,environment recognition,small volume,probability networks,lcos display,video communication,video conference,ubiquitous computing,dynamic bayesian network,face,smart home,probability,feature extraction,skin | Computer science,Feature extraction,Home automation,Artificial intelligence,Ubiquitous computing,Machine learning,Dynamic Bayesian network,Cognitive neuroscience of visual object recognition | Conference

Volume | ISBN | Citations
---|---|---|
2 | 978-0-7695-3431-2 | 0

PageRank | References | Authors
---|---|---|
0.34 | 6 | 3
Name | Order | Citations | PageRank |
---|---|---|---|
Hiroshi Miki | 1 | 0 | 0.34 |
Atsuhiro Kojima | 2 | 178 | 16.61 |
Koichi Kise | 3 | 948 | 139.96 |