You're walking down the street and run into a friend or coworker you know really well, but for the life of you, you can't remember their name, and now it's too awkward to ask. It happens to me almost daily. Using our device, we can send a notification to your smartwatch or Google Glass with their name.

Or, if you're at a conference and want to find other people with similar interests or work areas, but introductions are tough - using our app, we can pull up the LinkedIn profiles of the people around you.

What it does

The device works by first taking an image with a phone, smartwatch, or Google Glass and passing it through our app to our server, which uses Microsoft's Project Oxford API to cross-reference it with photos from LinkedIn and Facebook. Once a match is found, the person's name and profile are sent to your smartwatch or Google Glass and voila, you know their name!
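
The server side of that round trip can be sketched roughly as below. This is a minimal illustration, not our actual code: the endpoint URLs follow the (since-retired) Project Oxford Face API, and the key, threshold, and helper names are placeholders we made up for this sketch.

```python
import json
import urllib.request

# Placeholder key and the historical Project Oxford Face API endpoints.
OXFORD_KEY = "YOUR-SUBSCRIPTION-KEY"
DETECT_URL = "https://api.projectoxford.ai/face/v1.0/detect"


def detect_face_ids(jpeg_bytes):
    """POST a camera frame to the detect endpoint; return the faceIds found."""
    req = urllib.request.Request(
        DETECT_URL,
        data=jpeg_bytes,
        headers={
            "Ocp-Apim-Subscription-Key": OXFORD_KEY,
            "Content-Type": "application/octet-stream",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return [face["faceId"] for face in json.load(resp)]


def best_candidate(identify_result, threshold=0.5):
    """Pick the highest-confidence person from an identify response,
    or None if nothing clears the confidence threshold."""
    candidates = [c for face in identify_result
                  for c in face.get("candidates", [])]
    if not candidates:
        return None
    best = max(candidates, key=lambda c: c["confidence"])
    return best["personId"] if best["confidence"] >= threshold else None
```

Once `best_candidate` returns a person ID, the server looks up the matching name and profile and pushes it down to the wearable.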

How I built it

We broke the idea down into multiple subparts and divided them across the team. Our first task was hacking the Android phone to stream its camera feed live to a web server, where a Python script downloaded the frames and ran them through Project Oxford. We also had to find a way to collect data from Facebook and LinkedIn and train the API.

After the initial stages, we developed an Android app for the phone and a Microsoft Band app to collect and present the data. The Android app acts as a relay between the Microsoft Band and the server. Once all the devices could communicate with each other, we trained our API to speed up the recognition process. Lastly, we spent our remaining time tidying up the backend and improving the user experience on the front end.
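
The relay pattern can be illustrated with a toy server like the one below. This is a sketch under our own assumptions, not the code we shipped: the routes and the in-memory "latest result" store are invented for illustration, with the phone POSTing recognition results and the Band (via the phone) polling with GET.

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical in-memory store for the most recent recognition result.
LATEST = {"name": b"unknown"}


class RelayHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Phone uploads the latest recognized name.
        length = int(self.headers.get("Content-Length", 0))
        LATEST["name"] = self.rfile.read(length)
        self.send_response(200)
        self.end_headers()

    def do_GET(self):
        # Band (through the phone relay) polls for the latest name.
        self.send_response(200)
        self.end_headers()
        self.wfile.write(LATEST["name"])

    def log_message(self, *args):
        # Keep the demo output quiet.
        pass


def serve(port=0):
    """Start the relay on a background thread; port 0 picks a free port."""
    server = HTTPServer(("127.0.0.1", port), RelayHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

In the real system the Band app only talks Bluetooth to the phone; the phone is the only piece that speaks HTTP to the server.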

Challenges I ran into

Our main challenges were the cap Microsoft's Project Oxford imposes (20 image verifications per minute) and getting the Android phone and the Microsoft Band to communicate. We were also unable to host our entire app on an online server such as Linode because of Brown University's firewall. We did get the Linode server working through a VPN, but the time lag the VPN introduced made it unusable for our final demonstration.
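
One way to stay under a 20-calls-per-minute cap is a sliding-window throttle in front of the API client. This is a generic sketch of that technique, not our actual workaround; the injectable `clock` parameter is just there so the limiter can be exercised without real waiting.

```python
import time
from collections import deque


class RateLimiter:
    """Sliding-window throttle: allow at most max_calls per period seconds."""

    def __init__(self, max_calls=20, period=60.0, clock=time.monotonic):
        self.max_calls = max_calls
        self.period = period
        self.clock = clock
        self.calls = deque()  # timestamps of recent calls

    def try_acquire(self):
        """Return True if a call is allowed right now, else False."""
        now = self.clock()
        # Drop timestamps that have aged out of the window.
        while self.calls and now - self.calls[0] >= self.period:
            self.calls.popleft()
        if len(self.calls) < self.max_calls:
            self.calls.append(now)
            return True
        return False
```

A caller that gets `False` back can queue the frame and retry once the window opens, rather than burning a verification on a rejected request.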

Accomplishments that I'm proud of

The link between the Microsoft API and the camera, and all the Python scripting in between, was the most challenging yet most rewarding aspect of the project. We're also proud of building for the Microsoft Band, and of our device's ability to scan and recognise faces relatively quickly (almost real time) and fairly accurately (8 out of 10 successes during testing).

What I learned

How to build an app for the Microsoft Band and an Android phone. How to hack the camera of the Samsung Note 4. How to use the Microsoft Project Oxford API and collect data from LinkedIn. How to get the Android app to communicate with the Microsoft Band and the server we set up.

What's next for Remember me

To have the entire system contained in either a single mobile app, or an app plus a server. Currently we have the server set up, but because of Brown's firewall we cannot access it from outside Brown's Wi-Fi.

Also, we hope to move away from photo-based detection to real-time video feed detection, removing the need to take multiple photos. Ideally this would use OpenCV as a multilayer filter: first filtering by gender to reduce our pool, then running face verification with our trained API.
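
The multilayer idea can be sketched as a generic two-stage pipeline: a cheap coarse predicate prunes frames before the expensive API call ever runs. The function names below are our own placeholders; the OpenCV Haar-cascade helper stands in for whatever coarse stage (gender, face presence) a real build would use.

```python
def multilayer_filter(frames, coarse, verify):
    """Run the cheap coarse filter first; only survivors reach the
    expensive verification stage (e.g. a rate-limited API call)."""
    return [f for f in frames if coarse(f) and verify(f)]


def opencv_face_present(frame_bgr):
    """Example coarse stage: does OpenCV's frontal-face Haar cascade
    find any face in this BGR frame?"""
    import cv2  # optional dependency; only needed for the real pipeline
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    return len(cascade.detectMultiScale(gray, 1.1, 4)) > 0
```

Because the coarse stage is local and fast, only a small fraction of video frames would ever count against the API's per-minute cap.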
