We decided to build fbIRL when our team realized we were all passionate about both AR and helping people connect in the real world. We bridged these two interests by building an AR social network that lets people connect with others in real life through an events-based technological medium.

What it does

fbIRL is an AR social network for real-world connections. When users open the app, they're prompted to log in with their Facebook account, giving us access to the data we use to create their fbIRL account.

How we built it

On the frontend, we used React Native and ARKit to build an application that recognizes faces, tracks them across the screen, and sends a photo to a backend endpoint running on Flask whenever a new face appears in the phone's camera frame. That endpoint uses an object segmentation model for bounding-box face detection, followed by a 1-vs-k classifier to identify whose face it is. Once we knew who the user was, we made an HTTP request to the Facebook Graph API to fetch information about them, including their likes, hometown, and email. We packaged all of this in a JSON response to the frontend, where it was interpreted and displayed in AR as a "virtual profile". We also built Facebook login using the react-native Facebook SDK, with location-based tracking to find events near you. The idea behind that feature was that if you attended nearby events and used fbIRL there, you could build more meaningful connections with people who share your interests.
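The backend flow described above can be sketched roughly as follows. This is an illustrative outline, not our production code: the names (`classify_face`, `fetch_graph_profile`, `GRAPH_FIELDS`) are hypothetical, and the detection/classification and Graph API calls are stubbed out.

```python
# Illustrative sketch: detect and classify a face from an uploaded frame,
# fetch the matching user's Facebook data, and package a "virtual profile"
# as JSON for the AR frontend. All function names here are stand-ins.
import json
from typing import Optional

# Fields we would request from the Graph API (hypothetical constant)
GRAPH_FIELDS = "name,hometown,email,likes"

def classify_face(image_bytes: bytes) -> Optional[str]:
    """Stand-in for the real pipeline: bounding-box detection,
    face embedding, then 1-vs-k classification to a user id."""
    return "user_123" if image_bytes else None

def fetch_graph_profile(user_id: str) -> dict:
    """Stand-in for an HTTP GET to the Facebook Graph API
    (e.g. /<user_id>?fields=... with an access token)."""
    return {"name": "Ada", "hometown": "London", "email": "ada@example.com"}

def build_profile_response(image_bytes: bytes) -> str:
    """Assemble the JSON payload the frontend renders in AR."""
    user_id = classify_face(image_bytes)
    if user_id is None:
        return json.dumps({"match": False})
    profile = fetch_graph_profile(user_id)
    return json.dumps({"match": True, "profile": profile})
```

In the real app this logic sat behind a Flask route that received the frame as an upload; the sketch just shows how the pieces compose.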

Challenges we ran into

Making sure the machine learning classifier was accurate was very hard. We spent hours running through about 5 different models of varying complexity (from SVM to CNN) before we arrived at a pipeline for face detection and classification that worked well. This was particularly hard because we had to gather a lot of training data and construct high-level embeddings for each face that would boost our classifier's accuracy.
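The 1-vs-k idea on face embeddings can be sketched as a nearest-centroid classifier: average each known user's embeddings into a centroid, then classify a new embedding by cosine similarity, falling back to "unknown" below a threshold. This is a simplified, pure-Python illustration under assumed names; our actual pipeline used learned embeddings and heavier models.

```python
# Minimal nearest-centroid sketch of 1-vs-k face classification.
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def centroids(training):
    """training: {user: [embedding, ...]} -> {user: mean embedding}."""
    out = {}
    for user, embs in training.items():
        n = len(embs)
        out[user] = [sum(dim) / n for dim in zip(*embs)]
    return out

def classify(embedding, cents, threshold=0.8):
    """Return the closest user by cosine similarity, or 'unknown'
    if no centroid clears the threshold."""
    best_user, best_sim = None, -1.0
    for user, c in cents.items():
        sim = cosine(embedding, c)
        if sim > best_sim:
            best_user, best_sim = user, sim
    return best_user if best_sim >= threshold else "unknown"
```

The threshold is what keeps strangers from being mislabeled as a known user, which mattered a lot for accuracy in practice.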

Writing performant code was a surprisingly hard part of this project. We had to ensure that the image being tracked on the frontend in realtime did not take too long to load as we searched for a profile to match the face with a name. This involved a lot of experimentation with compressing the image on the client side, as well as playing around with model architectures, network latency, multiprocessing/concurrency, and caching certain parts of the image in memory on the server side.
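The server-side caching idea can be sketched like this: key each incoming frame by a hash of its bytes, so a repeated frame skips the expensive model call entirely. The names (`expensive_classify`, `classify_cached`) are illustrative stand-ins for our real pipeline.

```python
# Sketch of hash-keyed server-side caching for repeated camera frames.
import hashlib

_cache = {}
calls = {"model": 0}  # counter just to demonstrate the cache working

def expensive_classify(image_bytes: bytes) -> str:
    """Stand-in for the real detection + classification pass."""
    calls["model"] += 1
    return "user_123"

def classify_cached(image_bytes: bytes) -> str:
    """Hash the frame; only run the model on frames we haven't seen."""
    key = hashlib.sha256(image_bytes).hexdigest()
    if key not in _cache:
        _cache[key] = expensive_classify(image_bytes)
    return _cache[key]
```

Combined with client-side compression (fewer bytes per frame over the network), this kind of caching is what kept the AR overlay feeling responsive.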

Accomplishments that we're proud of

We are particularly proud of the mobile front-end that cleanly and efficiently tracks faces across a screen. Additionally, we worked hard to train an ensemble of machine learning models that could be both accurate and performant. Through overcoming significant technical challenges, we were able to produce a working application that we are all very proud of.

What we learned

This project was a learning experience for all of us. It was our first introduction to building an augmented reality application, and it had a steep but enjoyable learning curve. We also learned a lot about team communication and working together efficiently: none of us had met before the hackathon, and we had to learn how to collaborate well as quickly as possible.

What's next for fbIRL

We believe that the next barrier to adoption for this project is the amount of data available. We were constrained by the amount of time we had here, but if we had access to a facebook-scale dataset, we could build a network in AR that would connect the entire world and drive meaningful real-world connections for everyone.
