## Inspiration

We’ve all had that “Wait, what did they say again?” moment right after a conversation. Whether it’s a lecture, a meeting, or a deep 2AM heart-to-heart, the human brain just... forgets. So I thought: what if we had an app that remembered for us?

EchoTwin was born out of a need to capture, transcribe, and summarize real-life conversations — kind of like having an AI-powered brain twin. The goal? Let people live in the moment without losing the important stuff.

## What it does

EchoTwin is a mobile-first app that:

- Records real-world conversations (through your phone mic)
- Transcribes them using OpenAI Whisper
- Summarizes the key points with Bolt AI / GPT
- Delivers clean, readable summaries back to the user

Basically, it’s like the Notes app x AI x superpowers. Perfect for students, interviewers, journalists, or anyone with goldfish memory.

## How we built it

- Frontend: built in React Native with Expo for easy cross-platform development
- Backend: Node.js + Express, hosted on Render
- Audio uploads: handled using Multer
- Transcription: powered by OpenAI Whisper
- Summarization: generated with Bolt AI / GPT
- APIs: REST architecture to connect frontend ↔ backend ↔ AI
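Before a recording ever reaches Whisper, the backend can reject bad uploads cheaply. A minimal sketch of that guard (the function, constant, and allowed-extension list are our own choices, not EchoTwin's actual code; the 25 MB cap matches OpenAI's documented file limit for the Whisper API):

```javascript
// Hypothetical pre-transcription guard: check extension and size
// before handing an uploaded file to Whisper.
const MAX_BYTES = 25 * 1024 * 1024; // OpenAI's Whisper API rejects files over 25 MB
const ALLOWED = new Set(['.m4a', '.mp3', '.wav', '.webm']);

function validateUpload(filename, sizeBytes) {
  const ext = filename.slice(filename.lastIndexOf('.')).toLowerCase();
  if (!ALLOWED.has(ext)) return { ok: false, reason: 'unsupported format' };
  if (sizeBytes > MAX_BYTES) return { ok: false, reason: 'file too large' };
  return { ok: true };
}
```

In an Express route this would run right after Multer writes the temp file, so an oversized or non-audio upload never costs an AI call.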

The flow goes like this:

1. User records audio → the file gets sent to the backend
2. Backend runs Whisper for transcription
3. The transcript is passed to GPT/Bolt for summarization
4. The summary is returned to the user in the app
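The steps above can be sketched as one async pipeline. Both AI calls are stubbed here; `transcribeWithWhisper` and `summarizeWithGpt` are hypothetical names standing in for the real REST calls, not EchoTwin's actual functions:

```javascript
// Hypothetical pipeline matching the flow above, with the AI calls stubbed.
async function transcribeWithWhisper(audioPath) {
  // Real version: POST the audio file to the Whisper endpoint.
  return `transcript of ${audioPath}`;
}

async function summarizeWithGpt(transcript) {
  // Real version: send the transcript to GPT/Bolt with a summarization prompt.
  return `summary: ${transcript.slice(0, 40)}`;
}

async function processRecording(audioPath) {
  const transcript = await transcribeWithWhisper(audioPath); // step 2
  const summary = await summarizeWithGpt(transcript);        // step 3
  return { transcript, summary };                            // step 4: back to the app
}
```

Keeping the two AI steps as separate awaited calls makes it easy to return the raw transcript alongside the summary, or to swap either provider later.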

## Challenges we ran into

- Expo audio recording: permissions, file formats, and file paths were chaotic at first
- Large file uploads: Whisper sometimes choked on longer recordings, so we had to limit durations and compress files
- CORS issues: a classic dev rite of passage, fixed with good ol’ `cors()` and some backend tweaks
- API rate limits + latency: managing AI response time while keeping the app responsive
- Debugging on physical devices: logging anything was a headache; console logs and caffeine saved the day
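For the rate-limit and latency problems, the standard fix is retrying with exponential backoff. A minimal sketch (`callAi` stands in for any Whisper or GPT request, and the retry counts and delays are our illustrative numbers, not EchoTwin's actual settings):

```javascript
// Retry a flaky async AI call with exponential backoff:
// waits baseMs, 2*baseMs, 4*baseMs, ... between attempts.
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function withBackoff(callAi, retries = 3, baseMs = 200) {
  let lastErr;
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return await callAi();
    } catch (err) {
      lastErr = err;
      if (attempt === retries) break;
      await sleep(baseMs * 2 ** attempt); // back off before the next try
    }
  }
  throw lastErr; // exhausted all attempts
}
```

Wrapping the AI calls this way keeps the request handler responsive on transient 429s without hammering the API.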

## Accomplishments that we're proud of

- Got a full AI pipeline working: record → transcribe → summarize

- Built a smooth mobile interface that actually feels like a real app
- Learned to wrangle audio files, deploy a backend, and integrate multiple AI tools — solo
- Created a project that solves a real, relatable problem

## What we learned

- How to build a full-stack AI-powered app from scratch

- Real-world Expo and React Native development tricks
- Managing backend file uploads and async AI calls
- That Whisper is powerful, but can be picky
- That deploying to Render is way easier than it looks (and free-ish!)
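On the Render side, a Node service like this can be deployed from a small blueprint file. A rough sketch only: the service name and commands are illustrative, and Render's current schema may differ (newer blueprints use `runtime` in place of `env`):

```yaml
# render.yaml -- illustrative Render blueprint for the Node/Express backend
services:
  - type: web
    name: echotwin-backend   # hypothetical service name
    env: node
    buildCommand: npm install
    startCommand: node server.js
```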

## What's next for EchoTwin

- Add user authentication and secure transcript storage
- Improve the UX/UI for vibe and accessibility
- Let users edit, delete, or bookmark key transcripts
- Build a custom AI model for more tailored summaries
- Add language support for non-English convos
- Possibly integrate a “Voice Highlights” feature — like Spotify Wrapped, but for your convos 👀
