Abstract |
---|
Video alignment is an important task in environments with multiple distributed cameras, making it possible, for instance, to determine the exact instant at which an event occurred in different views. Alignment consists of establishing a temporal correspondence among the frames captured by different video cameras. Many works address this problem when the cameras have overlapping Fields of View (FOV) or are located close to each other. In this work, however, we present a novel approach to video alignment for cameras without overlapping FOV (e.g., cameras located on different floors of a building). The method employs sensor data generated by a smartphone synchronized to a time server: multiple videos are aligned by finding a temporal match between the videos that captured a person in the scene and the signal of the accelerometer in the smartphone carried by that individual, which provides the exact time at which a movement was performed. To the best of our knowledge, this is the first attempt to perform this type of alignment. Experimental results show that the proposed approach aligned multiple 30-minute videos with errors as low as 160 ms. |
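The core idea of matching a video-derived motion signal against an accelerometer signal can be illustrated with a cross-correlation sketch. This is not the authors' implementation; it is a minimal example assuming both signals have been resampled to a common rate, with hypothetical inputs `video_motion` (per-frame motion energy) and `accel_mag` (accelerometer magnitude):

```python
import numpy as np

def estimate_lag(video_motion, accel_mag, fps):
    """Estimate the offset (seconds) between a per-frame motion-energy
    signal and an accelerometer magnitude signal sampled at the same
    rate, using normalized cross-correlation.

    A positive result means the video signal is delayed relative to
    the accelerometer signal.
    """
    # Zero-mean, unit-variance normalization so the correlation peak
    # reflects signal shape rather than amplitude.
    a = (video_motion - video_motion.mean()) / (video_motion.std() + 1e-12)
    b = (accel_mag - accel_mag.mean()) / (accel_mag.std() + 1e-12)
    # Full cross-correlation; index (len(b) - 1) corresponds to zero lag.
    corr = np.correlate(a, b, mode="full")
    lag_samples = int(corr.argmax()) - (len(b) - 1)
    return lag_samples / fps

# Synthetic check: the "video" signal is a delayed copy of the
# "accelerometer" signal (45 samples at 30 Hz = 1.5 s offset).
rng = np.random.default_rng(0)
accel = rng.normal(size=300)
video = np.roll(accel, 45)
print(estimate_lag(video, accel, fps=30.0))
```

In practice, resampling the accelerometer stream (typically 50-200 Hz) to the video frame rate and windowing around detected motion bursts would precede this step.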
Year | DOI | Venue |
---|---|---|
2018 | 10.1109/AVSS.2018.8639468 | 2018 15th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS) |
Keywords | Field | DocType
---|---|---|
Tracking, Cameras, Visualization, Accelerometers, Synchronization, Servers, Data mining | Network Time Protocol, Field of view, Computer vision, Synchronization, Multi camera, Computer science, Accelerometer, Visualization, Server, Artificial intelligence | Conference
ISBN | Citations | PageRank
---|---|---|
978-1-5386-9294-3 | 0 | 0.34
References | Authors
---|---|
0 | 3
Name | Order | Citations | PageRank |
---|---|---|---|
Antonio C. Nazare | 1 | 7 | 1.81 |
Filipe De O. Costa | 2 | 22 | 2.01 |
William Robson Schwartz | 3 | 956 | 75.15 |