💡 Inspiration

2 million square feet. That's how big UTD's campus is. With a campus that large, it's natural to get lost and wonder: what in the world is Canyon Creek? No student has explored all of UTD, and yet every student is expected to navigate it from day one. So we decided to build Astrolabe: Google Street View-like navigation for UTD, powered by our custom CV model.

🧭 What it does

Astrolabe is a campus navigation app that uses your phone's camera to figure out where you are and where you need to go. Point it at any interior space on campus and our CV model tells you what it is, what's inside, and the fastest walking route to your destination. No more squinting at a static PDF map or asking a stranger "who is JSOM". It's real-time, visual, and built by UTD students for UTD students.

🛠️ How we built it

The front end runs on all mobile platforms with React Native & Expo. Our FastAPI backend handles the HTTP requests, interfacing with various APIs from Nebula Labs to ElevenLabs. Our campus building metadata and location info lives in a MongoDB database, leveraging its free-form NoSQL structure to hold anything and everything we could need. The CV model itself was trained entirely on video of UTD's campus that we went out and collected ourselves. We processed these videos into individual frames, sorted them by location, and used TensorFlow's Keras API to train the model. Finally, we exported the model as a TFLite file, which we're able to run on-device.

😵‍💫 Challenges we ran into

There is no existing dataset of UTD campus imagery, so we built one from scratch, taking videos from every angle in every lighting condition we could manage during a hackathon. Getting the model to hold up across extremely similar locations (rooms in the same hallway or building) was very hard and took a lot of strategy to get right. Keeping the AR direction overlay anchored correctly as users walk around turned out to be far harder than we expected, and took more iterations and compute than we anticipated. The hardest part wasn't building the app, it was generating the data to make it work.

🏆 Accomplishments that we're proud of

We built a functional, labeled vertical-slice training dataset of UTD's campus from nothing in under 23 hours. Watching Astrolabe correctly identify a location in real time for the first time was one of the best hackathon moments we've ever experienced. The full pipeline runs fast enough to feel live, not laggy: camera input, CV inference, FastAPI response, AR overlay. We came in wanting to build something useful for UTD students specifically, and we did exactly that.

📝 What we learned

We learned that data collection is the unglamorous backbone of any computer vision project. Behind every great model is an even better dataset. We also learned how big the gap is between a model that works on your laptop and one that performs on a real phone in variable conditions. Coordinating a mobile frontend alongside a CV pipeline and a backend API under time pressure forced us to get much better at scoping and integration, fast. Time wasted at the beginning added up near the end, and as the clock crept toward 11am, we entered a flow state.

If we could take only one thing away from this experience, it would be to plan in depth. Thorough planning builds background knowledge, and that knowledge makes your agent instructions more accurate.

💫 What's next for Astrolabe

The obvious next frontier is to expand our dataset. Our project proves the concept but doesn't fully realize it. We want to improve the reliability and accuracy of our model and integrate cross-building navigation. The architecture is built to generalize, so expanding to other campuses is a real possibility. With so much potential, our only next step is to shoot for the moon. Even if we miss, we'll land right where we want to: amongst the stars.
