Inspiration
During the COVID-19 pandemic, schools worldwide moved almost entirely online. However, the technology used in virtual education still falls short for students, as it isn't as interactive or hands-on as in-person learning.
Researchers have found that interactive activities are almost six times more likely to help students learn, with students being more likely to participate and less likely to drop out of an interactive class.
— Carnegie Mellon University
With AugmentEd, our goal is to provide students and institutions with an accessible, affordable, and interactive way of learning.
What it does
AugmentEd is an educational tool that projects virtual simulations of science experiments right in front of you. All the user needs to do is draw a picture representing what type of experiment they want to visualize, and AugmentEd will project a simulation directly on the sheet of paper the picture is drawn on. This way, students will be able to visualize concepts such as tectonic plate movement or circuits right from the comfort of their home. The only materials required are a printer, camera, and paper!
How it works
The tool starts by scanning the user's drawing with OpenCV. The drawing is then passed through our MobileNetV2 model, which is trained on a custom dataset to identify what the user has drawn. Once the model has determined which simulation the user would like to run, AugmentEd renders the simulation in real time directly onto the drawing by detecting ArUco markers on the sheet of paper. The ArUco markers establish the correspondence between real-world coordinates in the camera feed and the projected simulation (or experiment).
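As an illustration, the marker-detection step can look like the minimal sketch below. The dictionary choice (4x4_50), the marker IDs (0-3 at the page corners), and the pre-4.7 cv2.aruco API are illustrative assumptions, not confirmed details of our code:

```python
import cv2
import numpy as np

# Requires opencv-contrib-python; uses the pre-OpenCV-4.7 aruco API.
# Assumption: four ArUco markers from the 4x4_50 dictionary (IDs 0-3)
# are printed at the corners of the sheet.
ARUCO_DICT = cv2.aruco.Dictionary_get(cv2.aruco.DICT_4X4_50)
ARUCO_PARAMS = cv2.aruco.DetectorParameters_create()

def find_page_corners(frame):
    """Detect the four corner markers; return their image points in ID order."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _ = cv2.aruco.detectMarkers(gray, ARUCO_DICT,
                                              parameters=ARUCO_PARAMS)
    if ids is None:
        return None
    # Keep one image point per marker (its first corner), keyed by marker ID.
    found = {int(i): c[0][0] for i, c in zip(ids.flatten(), corners)}
    if not all(i in found for i in range(4)):
        return None  # not all four markers visible in this frame
    return np.float32([found[i] for i in range(4)])
```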
How we built it
AugmentEd is built entirely in Python, from the computer vision pipeline in OpenCV to the machine learning done with MobileNetV2 and TensorFlow/Keras on Google Colab. We also made a Figma prototype of the tool's future implementation as an iOS app.
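As a sketch of the training side, a MobileNetV2 classifier can be fine-tuned on a custom drawing dataset with Keras roughly as follows; the directory layout, class count, and hyperparameters here are illustrative assumptions:

```python
import tensorflow as tf

# Assumptions: drawings are sorted into one folder per class under data/train,
# and there is a small number of experiment types. Both are illustrative.
NUM_CLASSES = 4
IMG_SIZE = (224, 224)

train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train", image_size=IMG_SIZE, batch_size=32)

base = tf.keras.applications.MobileNetV2(
    input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
base.trainable = False  # freeze ImageNet features; train only the new head

model = tf.keras.Sequential([
    tf.keras.Input(shape=IMG_SIZE + (3,)),
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # MobileNetV2 expects [-1, 1]
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=10)
```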
Challenges we ran into
Running inference with our custom machine learning model was a challenge, as we only had experience training models. We decided to load our model with Keras; however, we would like to try using TensorFlow graphs (via tf.function) for faster inference.
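For example, the loaded model can be wrapped in a tf.function so that repeat calls run as a traced graph; the model path is a placeholder, and the preprocessing assumes the rescaling layer is baked into the saved model as in the training sketch above:

```python
import cv2
import numpy as np
import tensorflow as tf

model = tf.keras.models.load_model("augmented_classifier.h5")  # placeholder path

@tf.function  # traces the forward pass into a TF graph for faster repeat calls
def predict(batch):
    return model(batch, training=False)

def classify_drawing(frame):
    """Classify a cropped drawing from the live camera feed."""
    img = cv2.resize(frame, (224, 224))
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB).astype(np.float32)
    # Rescaling to [-1, 1] is assumed to happen inside the saved model;
    # otherwise, scale here before predicting.
    probs = predict(img[np.newaxis]).numpy()[0]
    return int(np.argmax(probs))
```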
Another challenge was projecting the simulation onto a live camera feed. We ran into many bugs while figuring out how to use a homography transformation to warp the perspective of our simulation so that it appears to lie flat on the paper.
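A minimal sketch of that warp with cv2.findHomography and cv2.warpPerspective, assuming the page corners come from the marker detector and are ordered to match the simulation frame's corners:

```python
import cv2
import numpy as np

def project_onto_page(camera_frame, sim_frame, page_corners):
    """Warp sim_frame so it appears to lie on the paper in camera_frame.

    page_corners: 4x2 float32 array of the sheet's corners in the camera
    image, ordered to match the corners of sim_frame (an assumption).
    """
    h, w = sim_frame.shape[:2]
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])  # sim corners, clockwise
    H, _ = cv2.findHomography(src, page_corners)
    out_size = (camera_frame.shape[1], camera_frame.shape[0])
    warped = cv2.warpPerspective(sim_frame, H, out_size)
    # Composite: copy warped simulation pixels over the camera frame.
    mask = cv2.warpPerspective(np.full((h, w), 255, np.uint8), H, out_size)
    out = camera_frame.copy()
    out[mask > 0] = warped[mask > 0]
    return out
```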
Accomplishments that we're proud of
This project has a lot of moving parts: machine learning, computer vision, and graphics all have to run together. Getting all of these programs and tasks to cooperate and deliver a fast, pleasant user experience is something we're proud of.
What we learned
We learned how to train an image classifier and run inference in a real-time application. We also learned how to project graphics into 3D space using a homography transformation and ArUco markers.
What's next for AugmentEd
In the future, we are looking to host the app in the cloud, which will let us scale our machine learning and run the app on smaller devices that could not handle a larger neural network locally. We also want to upgrade from MobileNetV2 to an SSD MobileNetV2 object detector, since the extra spatial data would let us interact with the user in new ways and understand more complicated simulation setups. Finally, we'd like to improve accessibility for those without printers by removing the ArUco markers, making AugmentEd fully usable with only paper and a camera.
Built With
- arucotracker
- computer-vision
- figma
- keras
- machine-learning
- mobilenet
- opencv
- photoshop
- python
- tensorflow