Inspiration
Despite living and learning alongside thousands of people, students often remain unaware of the lived experiences of those around them. Social, cultural, and identity-based intersections remain invisible in everyday spaces, even when people are sharing the same campus, the same courtyard, the same bench.
Most storytelling platforms amplify the loudest voices through algorithmic ranking, burying quieter, more personal perspectives before they can be heard. No space exists that is hyper-local, community-driven, and built around empathy rather than virality.
Voices Around Us makes those intersections visible.
What it does
Voices Around Us is a mobile, location-based storytelling platform where anyone can drop a short personal story — voice or text — tied to the exact physical location where it happened. Others discover these stories simply by browsing the map or by walking nearby.
No algorithm. No ranking. Discovery is purely geographic: if you are standing in that place, you can find that story. A first-generation student's memory holds the same weight on the map as anyone else's. Stories are location-anchored; they live where they happened, permanently. Users can also tag stories by identity, culture, and experience, letting others filter and browse by those tags.
Voices Around Us is empathy-first: it's designed to foster understanding through glimpses into someone's lived experience, not engagement metrics.
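The discovery model above — no ranking, purely geographic, with optional tag filters — can be sketched as a simple radius filter over story pins. This is a minimal client-side illustration, not the app's actual code: the names (`nearbyStories`, the `lat`/`lng`/`tags` fields) are hypothetical, and the real app presumably queries Supabase rather than filtering in memory.

```javascript
// Haversine distance in meters between two {lat, lng} points.
function distanceMeters(a, b) {
  const R = 6371000; // Earth radius in meters
  const toRad = (deg) => (deg * Math.PI) / 180;
  const dLat = toRad(b.lat - a.lat);
  const dLng = toRad(b.lng - a.lng);
  const h =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(a.lat)) * Math.cos(toRad(b.lat)) * Math.sin(dLng / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(h));
}

// No ranking: keep stories within the radius (and matching the tag, if given).
function nearbyStories(stories, user, radiusMeters, tag = null) {
  return stories.filter(
    (s) =>
      distanceMeters(s, user) <= radiusMeters &&
      (tag === null || s.tags.includes(tag))
  );
}

// Example: two pins on a campus; only story 1 is within 200 m of the user.
const stories = [
  { id: 1, lat: 40.4443, lng: -79.9436, tags: ["first-gen"], text: "..." },
  { id: 2, lat: 40.4579, lng: -79.9297, tags: ["international"], text: "..." },
];
const user = { lat: 40.4435, lng: -79.943 };
console.log(nearbyStories(stories, user, 200).map((s) => s.id));
```

Because every pin passes or fails the same distance check, no story can outrank another — which is the whole point of the design.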
How we built it
We started with LoFi prototypes in Figma to validate our flows and finalize the UI. We then built the frontend in React Native with Expo, using Mapbox for the map and pin rendering and Supabase for the backend and file storage.
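Mapbox renders pins from GeoJSON sources, so stories fetched from the backend need to be reshaped into a FeatureCollection before they can appear on the map. A hedged sketch of that transformation is below; the field names (`lat`, `lng`, `text`, `tags`) are assumptions, not the actual Supabase schema.

```javascript
// Convert story rows into a GeoJSON FeatureCollection that a Mapbox
// source can consume. Note GeoJSON coordinate order is [lng, lat].
function storiesToGeoJSON(stories) {
  return {
    type: "FeatureCollection",
    features: stories.map((s) => ({
      type: "Feature",
      geometry: { type: "Point", coordinates: [s.lng, s.lat] },
      properties: { id: s.id, text: s.text, tags: s.tags },
    })),
  };
}

const fc = storiesToGeoJSON([
  { id: 1, lat: 40.4443, lng: -79.9436, text: "...", tags: ["first-gen"] },
]);
console.log(fc.features.length);
```

Keeping story metadata in `properties` means tag filters can be applied on the rendered layer without refetching.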
Challenges we ran into
Setting up development environments across everyone's devices took time, and connecting to Supabase was tricky. Early on, the project was disorganized, with conflicting versions and no proper version control. Authentication cost us a few hours of debugging, and getting the iOS simulator running in Xcode was slow and finicky.
Accomplishments that we're proud of
We're proud that we built a full-stack project end to end, integrating APIs like OpenAI's for audio transcription.
What we learned
We learned how to integrate Supabase and React Native in a full-stack project.
What's next for Voices Around Us
We'd like to gather actual users and begin collecting real user stories.
Built With
- javascript
- json
- mapkit
- netlify
- postgresql
- react
- reactnative
- sql
- supabase