As a dedicated student who checks his online school work maybe once per month, I find it incredibly impressive how hard teachers are working to bring the classroom experience online. Some classes are certainly easier to adapt to this new, virtual format, while others, not so much. One course that is inherently limited by this format is chemistry. Students miss out on the crucial experimental component of the class: doing labs, interacting with chemicals, and seeing physical reactions. LabARtory aims to address this issue.
What it does
LabARtory enables students to upload pictures of their lab procedures and then conduct experiments with that lab procedure in Augmented Reality. Our mission is to bring science to every student stuck at their home with minimal equipment.
How we built it
Our hack has two main components: a mobile frontend and a Google Cloud-based backend.
The frontend is a React Native mobile app that lets users take pictures of their lab procedures. Each picture is processed by our backend, and key terms (e.g., lab equipment used, actions, etc.) are sent back. We then redirect to an AR.js web page where the AR lab experience begins. (We were unable to get Unity running because it turns out we all had iPhones and no Macs/Xcode.) The AR component comes in the form of small "lab-pads": QR codes that project AR lab equipment holograms. Once on this screen, the user can interact with the various pieces of lab equipment.
As for the backend, we built it using a combination of Firebase's storage and GCP's Vision and Pub/Sub APIs. When a user uploads an image from their phone through the frontend, it is piped to our Firebase storage bucket. A cloud function listens for changes in that bucket; it applies both Google's Cloud Vision API (specifically optical character recognition) and a Spacy NLP model that picks nouns and verbs out of the extracted text. The cloud function then publishes the resulting terms to a Pub/Sub topic. A second cloud function listens to that topic, formats the terms, and writes them to a bucket as a .txt file.
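The shape of the two cloud functions can be sketched as below. This is a minimal, illustrative sketch: the Cloud Vision OCR call is omitted and the hard-coded keyword sets stand in for the Spacy part-of-speech tagger, so the function names and word lists here are assumptions, not our deployed code.

```python
# Sketch of the two backend cloud functions. A keyword lookup stands in
# for the Spacy noun/verb tagger used in the real pipeline.

def extract_terms(text):
    """First function: pull lab equipment and actions out of OCR'd text."""
    equipment = {"beaker", "flask", "burner", "pipette"}  # illustrative
    actions = {"pour", "heat", "mix", "measure"}          # illustrative
    words = [w.strip(".,").lower() for w in text.split()]
    return {
        "equipment": sorted({w for w in words if w in equipment}),
        "actions": sorted({w for w in words if w in actions}),
    }

def format_terms(terms):
    """Second function: format the published terms as the .txt payload
    written back to the storage bucket."""
    return "equipment: {}\nactions: {}".format(
        ", ".join(terms["equipment"]), ", ".join(terms["actions"]))
```

For example, `extract_terms("Pour the acid into the beaker, then heat the flask.")` yields `{"equipment": ["beaker", "flask"], "actions": ["heat", "pour"]}`, which `format_terms` turns into the two-line text file the frontend reads.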
Challenges we ran into
We hit a massive bug setting up Spacy with cloud functions. The problem, we eventually found, was that we forgot to include our en_core_web_md folder as a Python package. Once we added the proper dependency as a folder, the bug was fixed.
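One common way to declare the model as a real dependency is to list it in the cloud function's requirements.txt alongside Spacy, pointing pip at the model's release tarball. The exact versions below are illustrative assumptions; they should match the Spacy version actually deployed.

```
# requirements.txt for the cloud function (versions are illustrative)
spacy==2.3.2
https://github.com/explosion/spacy-models/releases/download/en_core_web_md-2.3.1/en_core_web_md-2.3.1.tar.gz
```

With the model installed as a package, `spacy.load("en_core_web_md")` resolves without any vendored folders in the deployment bundle.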
Accomplishments that we're proud of
One of our design decisions was to keep the hardware requirements for this hack minimal. We were able to pare the requirements down to just a phone and a few pieces of cardboard, and we're pretty proud of fulfilling that design commitment.
This is the first time we have ever done anything related to AR! Working with echoAR was an amazing experience overall, and we learned a lot about how projections work, how to design immersive UIs, etc.
This is also the first time we have ever used Firebase's cloud functions. We decided to take on the challenge of learning something new, and it paid off.
What we learned
- AR basics w/ echoAR
- How to update metadata in echoAR
- How to deploy models to echoAR
- Cloud functions w/ Firebase/GCP
- Better overall integration
- Having multiple models (echoAR) display at the same time
- Better video editing skills lol
(Video was edited backwards whoops, best viewing is from 1:10 to the end and then from start to 1:10)
Also, our Domain.com name is LabARtory.