The main goal here was to identify people in a crowd, or vehicles at an intersection, by their Wi-Fi signature. Tracking people by their MAC address has surveillance applications, and since smartphones are ubiquitous and constantly broadcast exploratory Wi-Fi probe requests, they make a convenient tracking beacon. Tracking vehicles at an intersection is also valuable: paired with lidar and vision data, it can be used to extract datasets of different driver behaviours. Such datasets have become essential as the field of autonomous driving has increasingly adopted machine learning techniques.
What it does
Given a MAC address to track, the three antennae continuously log the signal strength of the device with that MAC address. These signal strengths are then sent to Firebase. On a separate computer, we use a Kinect to extract the RGB and IR vision data of a given scene. This data is used to identify the approximate "skeleton" of every person in the frame, and based on the Kinect's IR beams we can determine where the center of each skeleton lies in 3-dimensional space. The Kinect-connected computer reads the antennae readings from Firebase and triangulates them to estimate the location of the MAC address. The closest "skeleton" then corresponds to the person most likely to be carrying the device with that MAC address, and is highlighted to denote this.
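The triangulation step above can be sketched in two pieces: convert each antenna's RSSI into an approximate distance with a log-distance path-loss model, then intersect the three range circles by linearizing the circle equations into a 2×2 linear system. This is a minimal sketch, not our calibrated pipeline; the reference RSSI, path-loss exponent, and antenna coordinates are placeholder assumptions.

```python
def rssi_to_distance(rssi_dbm, rssi_at_1m=-40.0, path_loss_exp=2.5):
    """Log-distance path-loss model: distance in metres from an RSSI reading.

    rssi_at_1m and path_loss_exp are assumed constants; in practice they
    must be calibrated for the antennae and the environment.
    """
    return 10 ** ((rssi_at_1m - rssi_dbm) / (10 * path_loss_exp))

def trilaterate_2d(anchors, distances):
    """Estimate a 2-D position from three antenna positions and range estimates.

    Subtracting the third anchor's circle equation from the other two
    cancels the quadratic terms, leaving a 2x2 linear system in (x, y).
    """
    (x1, y1), (x2, y2), (x3, y3) = anchors
    d1, d2, d3 = distances
    # Coefficients of the two linearized equations.
    a11, a12 = 2 * (x1 - x3), 2 * (y1 - y3)
    a21, a22 = 2 * (x2 - x3), 2 * (y2 - y3)
    b1 = d3**2 - d1**2 + x1**2 - x3**2 + y1**2 - y3**2
    b2 = d3**2 - d2**2 + x2**2 - x3**2 + y2**2 - y3**2
    # Solve by Cramer's rule (the anchors must not be collinear).
    det = a11 * a22 - a12 * a21
    x = (b1 * a22 - b2 * a12) / det
    y = (a11 * b2 - a21 * b1) / det
    return x, y
```

With noisy real-world RSSI the three circles rarely intersect in a single point, so a least-squares solve over more antennae, or smoothing the RSSI stream first, would improve the estimate.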
How we built it
To identify the humans in a given frame, as well as their locations, we used the Microsoft Kinect C# SDK, along with some of the library functions developed by Vangos Pterneas. The Firebase operations were done using the Firebase Admin Python SDK. The antennae were attached to Raspberry Pis, and the signal-strength data was extracted using Python.
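The writeup doesn't spell out how the per-MAC signal strength was pulled off the Pis, so here is a hypothetical sketch of one common approach: run a monitor-mode capture tool such as tcpdump, parse each printed frame for the source MAC and reported dBm, and push matching readings to Firebase with the Admin SDK. The tcpdump line layout varies by driver and version, and the Firebase path is made up for illustration.

```python
import re

# Patterns for tcpdump's link-layer output (`tcpdump -i mon0 -e -l`);
# the exact format is an assumption, not a spec.
SIGNAL_RE = re.compile(r"(-?\d+)dBm signal")
SA_RE = re.compile(r"SA:([0-9A-Fa-f:]{17})")

def rssi_for_mac(line, target_mac):
    """Return the RSSI (dBm) if this capture line came from target_mac, else None."""
    sa = SA_RE.search(line)
    sig = SIGNAL_RE.search(line)
    if sa and sig and sa.group(1).lower() == target_mac.lower():
        return int(sig.group(1))
    return None

def push_reading(antenna_id, rssi):
    """Hypothetical Firebase upload; assumes firebase_admin was initialized
    elsewhere with credentials and a databaseURL."""
    from firebase_admin import db  # imported lazily: needs a configured app
    db.reference(f"readings/{antenna_id}").push({"rssi": rssi})
```

On the Kinect side, the triangulation code would then read the latest reading per antenna back out of the same paths.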
Challenges we ran into
We found the antenna hardware to be somewhat inconsistent when interfacing it with the Raspberry Pis. We also ran into a lot of issues using the C# APIs with Firebase, so we had to come up with Python workarounds.
Accomplishments that we're proud of
We were able to get solid results with the antenna hardware we had, and we were quite happy with how quickly we got a working solution for the Kinect portion of the project. Most of all, we were pleased with the wide variety of technologies this project exposed us to.
What we learned
We got a great opportunity to learn how to interface with a variety of hardware peripherals, such as antennae, the Kinect, and Raspberry Pis. We also learned how the Kinect determines the location of users in a given frame, and how to triangulate a signal from multiple readings. Finally, we gained some experience interfacing with Firebase from Python, which was completely new to us.
What's next for Wi-Fi Localization using a Kinect
We would like to generalize this to other types of signals, such as Bluetooth and GPS. These could be fused to improve localization of the target, which has applications ranging from robotic search and rescue all the way to improved target surveillance.