Title: Vi-Fi: Associating Moving Subjects across Vision and Wireless Sensors
Abstract: In this paper, we present Vi-Fi, a multi-modal system that leverages a user's smartphone WiFi Fine Timing Measurements (FTM) and inertial measurement unit (IMU) sensor data to associate the user detected in camera footage with their corresponding smartphone identifier (e.g., WiFi MAC address). Our approach uses a recurrent multi-modal deep neural network that exploits FTM and IMU measurements, along with the distance between the user and the camera (depth information), to learn affinity matrices. As a baseline for comparison, we also present a traditional non-deep-learning approach that uses bipartite graph matching. To facilitate evaluation, we collected a multi-modal dataset comprising camera videos with depth information (RGB-D), WiFi FTM, and IMU measurements from multiple participants in diverse real-world settings. Using association accuracy as the key metric for evaluating the fidelity of Vi-Fi in associating human users in the camera feed with their phone identifiers, we show that Vi-Fi achieves association accuracy ranging from 81% (real-time) to 91% (offline).
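For readers unfamiliar with the baseline mentioned above, the sketch below illustrates how affinity-based association via bipartite graph matching can work in principle. The affinity values, the `associate` helper, and the use of SciPy's Hungarian-algorithm solver (`linear_sum_assignment`) are illustrative assumptions, not the paper's actual implementation.

```python
# A minimal sketch (not the authors' code) of affinity-based association:
# given an affinity matrix scored by some model, recover a one-to-one
# assignment between camera detections and smartphone identifiers.
import numpy as np
from scipy.optimize import linear_sum_assignment


def associate(affinity: np.ndarray) -> list[tuple[int, int]]:
    """Return (detection, phone) index pairs maximizing total affinity.

    affinity[i, j] = similarity between camera detection i and smartphone
    identifier j (higher = more likely the same person).
    """
    det_idx, phone_idx = linear_sum_assignment(affinity, maximize=True)
    return list(zip(det_idx, phone_idx))


# Toy example: 3 camera detections vs. 3 smartphone MAC addresses.
affinity = np.array([
    [0.9, 0.1, 0.3],  # detection 0 best matches phone 0's FTM/IMU trace
    [0.2, 0.8, 0.1],
    [0.3, 0.2, 0.7],
])
print(associate(affinity))  # -> [(0, 0), (1, 1), (2, 2)]
```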
Year: 2022
DOI: 10.1109/IPSN54338.2022.00024
Venue: 2022 21st ACM/IEEE International Conference on Information Processing in Sensor Networks (IPSN)
Keywords: Vi-Fi, moving subjects, wireless sensors, multimodal system, camera footage, corresponding smartphone identifier, multimodal deep neural network, depth information, traditional non-deep-learning approach, multimodal dataset, camera videos, human users, camera feed, 91% association accuracy
DocType: Conference
ISBN: 978-1-6654-9625-4
Citations: 0
PageRank: 0.34
References: 0
Authors: 12

Name             Order  Citations  PageRank
Hansi Liu            1         23      2.58
Abrar Alali          2          0      0.34
Mohamed Ibrahim      3          0      0.34
Bryan Bo Cao         4          0      0.68
Nicholas Meegan      5          0      0.68
Hongyu Li            6        149     17.22
Marco Gruteser       7       4631    309.81
Shubham Jain         8          0      1.35
Kristin J. Dana      9        946    115.45
Ashwin Ashok        10          0      1.35
Bin Cheng           11         63      8.38
Hongsheng Lu        12         71      8.73