A lot of health monitoring technologies today let us measure a variety of metrics about ourselves, but what if you were interested in connecting yourself with the measurements of others around you? What if you were a track coach looking to push your athletes to their full potential? What if you were a football coach looking to do the same while also monitoring your players' risk of concussion during a match? What if you were a health care professional who could receive vital-sign data just by looking at your patients?
There are many situations in which the need to monitor other people's health is important, and while all of that information can be collected on portable or wearable devices such as smartphones or watches, we believe that constantly having to check a device is still a bit of an inconvenience. Why not have clients' sensors report back to a wearable head-mounted display and have the information displayed in an augmented reality fashion?
What it does
This weekend we decided to explore that possibility with a Google Cardboard display. We created a proof-of-concept project featuring an Android smartphone running a Google Cardboard app, connected via Bluetooth to a mesh network of sensors. This mesh network represents a group of clients whose physiology you are interested in monitoring, and it constantly streams sensor readings to the Android device. Depending on whom the wearer of the Google Cardboard is looking at, information from the corresponding sensors is displayed next to that subject. The metrics we currently collect include heart rate, temperature, steps taken, concussion probability, and speed.
How we built it
To pick out individual clients in the wearer's field of view, we used OpenCV to process frames from the phone's camera feed. Ultimately we want to use facial recognition to detect clients, but for now we rely on color markers.
Challenges we ran into
Finding a way to combine the capabilities of OpenCV with the Google Cardboard API was probably the biggest challenge on the software end.
The 2.4 GHz mesh network was initially unstable. Connections between the modules were frequently lost at random due to power ripple on the RF module's supply rail; we resolved this by adding an RC filter to smooth the rail. In addition, the transmit and receive buffers were consistently overflowing with packets, delaying packets from previous cycles; clearing both buffers each cycle kept packet processing on schedule. Furthermore, the data had to be sent and received as a single unsigned 64-bit integer packet, so we wrote a routine that builds and splices the outgoing and incoming packets at fixed bit offsets, with an index designated for each health factor such as heart rate, velocity, and percent chance of concussion.
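The packet splicing described above can be sketched like this. The field widths and bit offsets here are illustrative assumptions, not our actual packet layout:

```python
# Hypothetical layout of the unsigned 64-bit packet, MSB first:
#   client id (8) | heart rate (8) | temp, degrees C (8) |
#   concussion % (8) | speed, cm/s (16) | step count (16)

def pack(client_id, heart_rate, temp_c, concussion_pct, speed, steps):
    """Splice the individual health metrics into one 64-bit unsigned int."""
    return ((client_id      & 0xFF)   << 56 |
            (heart_rate     & 0xFF)   << 48 |
            (temp_c         & 0xFF)   << 40 |
            (concussion_pct & 0xFF)   << 32 |
            (speed          & 0xFFFF) << 16 |
            (steps          & 0xFFFF))

def unpack(packet):
    """Recover the metrics from a received 64-bit packet."""
    return {
        "client_id":      (packet >> 56) & 0xFF,
        "heart_rate":     (packet >> 48) & 0xFF,
        "temp_c":         (packet >> 40) & 0xFF,
        "concussion_pct": (packet >> 32) & 0xFF,
        "speed":          (packet >> 16) & 0xFFFF,
        "steps":           packet        & 0xFFFF,
    }
```

Fixing the offset of every field means both ends of the link agree on the layout without any framing metadata, which keeps the per-cycle packet small enough for the RF modules' buffers.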
Accomplishments that we're proud of
On the software side, our biggest accomplishment was consistent color recognition and the subsequent display of a client's sensor information as an overlay on the phone's camera feed.
What we learned
Exposure to OpenCV and the Google Cardboard API, and the pains of implementing a mesh network on an 8-bit microcontroller.
What's next for AR Biometrics Overwatch
There's plenty more to be done with AR Biometrics Overwatch. As mentioned above, a key component of the system would be facial recognition of clients. Ideally, we would also like to move away from the Cardboard to a head-mounted display better suited for augmented reality. In terms of data collection, we hope to expand the range of biometrics measured by each sensor to provide a more exhaustive picture of each client.
We may even introduce a web component that combines all of the data collected by the mesh network, opening the door to big-data analysis and visualization features for users of the system.