Inspiration
We've seen Snapchat, we've seen BeReal, we've seen Twitter. They're fun social media, but they lack a human feel, a human voice. And while they make it easier for us to connect across long distances, they also push us to doomscroll, procrastinate, and overthink, and we lose our sense of empathy and our sensitivity to tone when all we do is read what someone wrote.
What it does
That's why we made Echosphere! It's a proximity- and voice-based social media platform where users upload audio clips of their thoughts, which we call Echoes, either from their phone or through Omi's little necklace!
And remember, it's proximity-based: the content on your homepage, the Echoes accessible to you, comes only from people around you, making the whole experience feel more localized and intimate. You simply tap record, set a title, and you're done! You can also do it hands-free by saying "Hey Echo!" to the Omi necklace, and it will automatically record, title, and upload your voice, without you opening any other device!
And did I mention that you can Re-Echo??? Hm, what's that, you might ask? Well, it's a way to engage with others' Echoes: Re-Echoing makes your location a new center point for that Echo, enlarging its reach and giving more people the opportunity to listen, and even potentially Re-Echo it themselves. It's what we call the "Echo Effect".
Simply put, if you like someone's 8 AM rambling about the next shoddy cryptocurrency that much, you can Re-Echo it, and now more people can tune in, and, if they like it too, perhaps even Re-Echo it themselves!
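To make the Echo Effect concrete, here is a minimal sketch under our own assumptions (the data structure and function names are ours, not the project's): each Echo keeps a list of center points, the original upload location plus one per Re-Echo, and a listener can hear an Echo if they fall within the radius of any center.

```python
# Hypothetical sketch of the "Echo Effect": a Re-Echo appends a new center
# point, and audibility is a within-radius check against ANY center.

def add_reecho(echo, lat, lon):
    """Record a Re-Echo by adding the re-echoer's location as a new center."""
    echo["centers"].append((lat, lon))

def is_audible(echo, lat, lon, radius_m, distance_fn):
    """True if the listener at (lat, lon) is within radius_m of any center.
    distance_fn computes the distance between two (lat, lon) pairs."""
    return any(
        distance_fn(lat, lon, c_lat, c_lon) <= radius_m
        for c_lat, c_lon in echo["centers"]
    )
```

In the real app the distance check runs server-side in Postgres; this sketch just illustrates why a Re-Echo enlarges an Echo's reach: the audible region becomes the union of circles around every center.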
How we built it
Audio Pipeline: We used Omi's Python SDK to receive live audio streams from the Omi devkit. The audio is processed in real time through Deepgram's live transcription API, which continuously transcribes the user's speech. We built a trigger-word detection system that listens for specific commands like "record echo" — when one is detected, the device automatically records the next 30 seconds of audio and uploads it to the backend.
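The trigger-and-record flow can be sketched roughly as follows. This is a simplified illustration, not the project's actual code: the Deepgram streaming client and Omi SDK plumbing are omitted, and we assume transcript chunks arrive as plain strings while audio arrives as raw byte frames.

```python
import time

TRIGGER = "record echo"   # the spoken command that arms recording

class EchoRecorder:
    """Hypothetical sketch: arm a 30-second recording window when the
    trigger phrase appears in the live transcript, then buffer audio
    frames until the window closes."""

    def __init__(self, record_seconds=30):
        self.record_seconds = record_seconds
        self.recording_until = None   # deadline while a window is open
        self.frames = []

    def on_transcript(self, text, now=None):
        """Called with each live transcript chunk from the transcriber."""
        now = time.time() if now is None else now
        if self.recording_until is None and TRIGGER in text.lower():
            self.recording_until = now + self.record_seconds
            self.frames = []

    def on_audio_frame(self, frame, now=None):
        """Buffer raw audio while recording; return the finished clip
        (bytes) once the window closes, else None."""
        now = time.time() if now is None else now
        if self.recording_until is None:
            return None
        if now < self.recording_until:
            self.frames.append(frame)
            return None
        clip = b"".join(self.frames)
        self.recording_until = None
        self.frames = []
        return clip   # in the real pipeline this is uploaded to the backend
```

In the actual app, `on_transcript` would be wired to Deepgram's streaming results and `on_audio_frame` to the Omi device's audio callback.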
Backend Infrastructure: The backend is powered by Supabase, leveraging its Postgres database and serverless functions. Recorded audio files are uploaded to Supabase storage buckets, which trigger processing functions to extract and store key details from each recording. We utilized the PostGIS library to handle geospatial data, implementing custom Postgres functions that efficiently query "echoes" and "reechoes" within a specific radius of given coordinates.
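The radius query itself runs server-side in PostGIS, but the core idea is easy to show in plain Python. The sketch below uses the haversine great-circle distance as an approximation of what a PostGIS distance check does; the function and field names are our own illustration, not the project's schema.

```python
import math

EARTH_RADIUS_M = 6_371_000   # mean Earth radius in metres

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two (lat, lon) points."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def echoes_within_radius(echoes, lat, lon, radius_m):
    """Filter echo records (dicts with 'lat'/'lon' keys) to those within
    radius_m metres of the listener's position."""
    return [e for e in echoes
            if haversine_m(lat, lon, e["lat"], e["lon"]) <= radius_m]
```

In production this filter belongs in the database (e.g. a PostGIS within-distance query backed by a spatial index) rather than in application code, which is exactly why we pushed it into custom Postgres functions.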
Frontend: The user interface was built with Next.js, allowing for a responsive and modern web experience. We "vibe coded" much of the frontend with assistance from Claude, rapidly iterating on designs and functionality to create an intuitive user experience.
Challenges we ran into
SDK Migration: Our biggest initial hurdle was with the Omi SDK. We originally planned to use the React Native SDK but encountered significant integration issues that slowed our progress. After valuable time spent troubleshooting, we made the tactical decision to pivot to the Python SDK, which proved more straightforward to implement.
Limited Documentation: The Omi DevKit's sparse documentation presented ongoing challenges throughout development. This lack of comprehensive guides and examples limited the features we could confidently implement and required significant trial-and-error experimentation.
Geospatial Calculations: Working with Postgres functions proved trickier than anticipated, particularly when calculating distances between coordinates. The nuances of PostGIS distance calculations and coordinate system transformations required deep dives into the documentation and several iterations to get accurate results for our location-based queries.
Accomplishments that we're proud of
One thing we're proud of is pulling all of this off in a Berkeley dorm hall on abysmal sleep. It's our first hackathon together, so there were a lot of hurdles and hiccups here and there, especially on the first day, when one of our flights was delayed by more than two hours. But ultimately we got through them together, and I feel that that is what counts in hackathons like these.
What we learned
We learned how valuable time is, and, to be honest, we did not expect to be so panicked in the last few hours before submission.
What's next for Echosphere: Next Generation Social Media
We will develop a monetization path for the project: users could pay to make longer Echoes (one minute instead of 30 seconds), and their Echoes would get a bigger radius that reaches far more people. Think of it as a verified badge on Instagram.
In the future, we will also develop a map view alongside the list view we have now. The map view will clearly show each Echo's radius, nearby Echo spots, and, when you click on an Echo, how many Re-Echoes it has received.
Built With
- claude
- deepgram
- next.js
- omi
- postgresql
- python
- supabase
- tailwindcss