Abstract |
---|
This paper outlines an architecture for multi-camera and multi-modal sensor fusion. We define a high-level architecture in which image sensors such as standard color, thermal, and time-of-flight cameras can be fused with high-accuracy location systems based on UWB, Wi-Fi, Bluetooth, or RFID technologies. This architecture is especially well suited to indoor environments, where such heterogeneous sensors usually coexist. The main advantage of such a system is that a combined, non-redundant output is provided for all detected targets. In its simplest form, the fused output contains the location of each target, augmented with additional features depending on the sensors involved in its detection, e.g., location plus thermal information. In this way, a surveillance or context-aware system obtains more accurate and complete information than it would from a single kind of technology. |
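The fusion step the abstract describes (merging camera detections with location-system fixes into one non-redundant record per target) can be sketched as a nearest-neighbor association. This is a minimal illustration, not the paper's algorithm; the record fields (`pos`, `id`, `features`) and the gating distance are hypothetical assumptions.

```python
import math

def fuse(camera_detections, location_fixes, max_dist=1.0):
    """Associate camera detections with location fixes (e.g., UWB tags)
    by nearest-neighbor gating, emitting one merged record per target.
    Data model is hypothetical: each detection/fix carries a 2-D 'pos'."""
    fused, used = [], set()
    for det in camera_detections:
        best, best_d = None, max_dist
        for i, fix in enumerate(location_fixes):
            if i in used:
                continue  # each location fix is matched at most once
            d = math.dist(det["pos"], fix["pos"])
            if d < best_d:
                best, best_d = i, d
        if best is not None:
            used.add(best)
            fix = location_fixes[best]
            # merged output: precise location plus camera-derived features
            fused.append({"id": fix["id"], "pos": fix["pos"],
                          "features": det.get("features", {})})
        else:
            # camera-only target: keep its (coarser) estimated position
            fused.append({"id": None, "pos": det["pos"],
                          "features": det.get("features", {})})
    return fused
```

For example, a thermal-camera detection near a UWB tag would yield a single record combining the tag's identity and precise position with the camera's thermal reading.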
Year | DOI | Venue |
---|---|---|
2010 | 10.1007/978-3-642-14883-5_39 | DISTRIBUTED COMPUTING AND ARTIFICIAL INTELLIGENCE |
Field | DocType | Volume |
---|---|---|
Data mining, Architecture, Multi camera, Image sensor, Computer science, Sensor fusion, Real-time computing, Ultra-wideband, Complete information, Modal, Bluetooth, Embedded system | Conference | 79 |
ISSN | Citations | PageRank |
---|---|---|
1867-5662 | 2 | 0.42 |
References | Authors |
---|---|
10 | 3 |
Name | Order | Citations | PageRank |
---|---|---|---|
Alvaro Luis Bustamante | 1 | 13 | 3.70 |
José M. Molina | 2 | 604 | 67.82 |
Miguel A. Patricio | 3 | 305 | 38.05 |