Inspiration

We want to create a 3D point cloud mapping tool for robotics and autonomous driving applications using visual SLAM, a powerful computer vision technique.

What it does

We create 3D point cloud maps of physical environments using feature-based visual Simultaneous Localization and Mapping (SLAM), a computer vision technique that localizes an observer and maps an unknown environment by tracking and matching feature points between video frames. Users can upload a video of an observer exploring a physical environment to generate a 3D point cloud map of that environment, which is visualized in Three.js.
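
To illustrate the idea, here is a rough sketch of one tracking step using OpenCV's Python bindings (our implementation is in C++, where the API is analogous). It is not our exact pipeline: it assumes a known camera intrinsic matrix `K`, matches ORB features between two frames, estimates the relative camera pose, and triangulates the inlier matches into 3D points.

```python
import cv2
import numpy as np

def track_frame_pair(frame1, frame2, K):
    """Match ORB features between two frames, estimate the relative pose,
    and triangulate matched points into a chunk of the point cloud."""
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(frame1, None)
    kp2, des2 = orb.detectAndCompute(frame2, None)

    # Brute-force Hamming matching suits binary ORB descriptors.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # RANSAC rejects outlier matches while estimating the essential matrix.
    E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                      prob=0.999, threshold=1.0)
    _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)

    # Triangulate the surviving correspondences into homogeneous 3D points.
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    good = mask.ravel().astype(bool)
    pts4d = cv2.triangulatePoints(P1, P2, pts1[good].T, pts2[good].T)
    points3d = (pts4d[:3] / pts4d[3]).T  # de-homogenize to Nx3
    return R, t, points3d
```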

How we built it

We implemented visual SLAM in C++ using OpenCV. The web app is built with Flask (Python), Three.js, SQLite, and GCP.
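
As a rough sketch of how the pieces fit together (not our exact code), the Flask layer could wire a video upload to the SLAM core like this; the binary name `vslam`, the file paths, and the JSON point-cloud format are hypothetical placeholders.

```python
import json
import subprocess
import tempfile
from pathlib import Path

from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/upload", methods=["POST"])
def upload():
    # Save the uploaded exploration video to a scratch directory.
    video = request.files["video"]
    workdir = Path(tempfile.mkdtemp())
    video_path = workdir / "input.mp4"
    video.save(str(video_path))

    # Run the (hypothetical) C++ SLAM binary, which writes the point
    # cloud to a JSON file the Three.js front end can render.
    cloud_path = workdir / "cloud.json"
    subprocess.run(["./vslam", str(video_path), str(cloud_path)], check=True)

    return jsonify(json.loads(cloud_path.read_text()))
```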

Challenges we ran into

Implementing a visual SLAM algorithm from scratch is complicated, and debugging C++ can be difficult. We had to implement a large amount of functionality, such as a generalized RANSAC discriminator and reference counting that handles circular references. The hardest part was building the pose graph between frames.
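
To give a flavor of the RANSAC part, here is a minimal, generic RANSAC loop in Python; the `fit_model` and `residual` callables and the parameter values are illustrative placeholders, not our actual C++ interfaces.

```python
import random

def ransac(data, fit_model, residual, sample_size,
           threshold=1.0, iterations=200):
    """Return the model with the most inliers over random minimal samples."""
    best_model, best_inliers = None, []
    for _ in range(iterations):
        sample = random.sample(data, sample_size)
        model = fit_model(sample)  # fit on a minimal random sample
        if model is None:
            continue
        # Points within the residual threshold count as inliers.
        inliers = [d for d in data if residual(model, d) < threshold]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = model, inliers
    # Optionally refit on all inliers for a more stable final estimate.
    if best_inliers:
        best_model = fit_model(best_inliers)
    return best_model, best_inliers
```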

Accomplishments that we're proud of

Despite all the challenges, we were able to create a web app that lets anyone try out visual SLAM technology in the browser. We are also proud of implementing the visual SLAM algorithm from scratch.

What we learned

We learned about visual SLAM and how to build a web app around it.

What's next for 3D point cloud mapping vSLAM

There is so much left to do: we could use KeyFrames for wider baselines, multithread our RANSAC, perform radius searches more efficiently, and more. We would also optimize our visual SLAM algorithm to make it run faster.
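
As an illustration of the keyframe idea, a frame could be promoted to a KeyFrame only once the camera has translated far enough from the previous one, giving a wider baseline for triangulation; the threshold and pose representation below are assumptions, not our current code.

```python
import numpy as np

def is_keyframe(t_current, t_last_keyframe, min_baseline=0.1):
    """Promote a frame to a KeyFrame once its baseline to the previous
    KeyFrame exceeds min_baseline (in world units)."""
    return np.linalg.norm(t_current - t_last_keyframe) > min_baseline
```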
