Meemaw: Digital Dignity for Everyone

Inspiration

Watching older generations struggle with everyday technology sparked our idea. Adrián was thinking about his grandmother, who constantly faces hurdles with simple tasks like changing the TV input. We wanted to build a tool that could step in and patiently assist our loved ones whenever we aren't available to help.

What it does

Meemaw is your patient, on-demand digital assistant for everyday problems. Having trouble getting a wireless mouse to connect? Just snap a picture! Meemaw will guide you through the debugging process with the calm, friendly tone of a helpful family member (but with infinite patience).

Meemaw visually annotates images to pinpoint exactly what to click or look at, strictly avoiding confusing tech jargon. If technical terms are absolutely unavoidable (like a printed label on a device), Meemaw breaks them down and explains them simply.
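The annotation step boils down to mapping model-returned coordinates onto the user's photo. As a rough sketch of that idea (the helper name is ours, and we assume Gemini's convention of returning bounding boxes as `[ymin, xmin, ymax, xmax]` normalized to a 0-1000 scale):

```python
def box_to_pixels(box, img_w, img_h):
    """Convert a [ymin, xmin, ymax, xmax] box normalized to 0-1000
    (Gemini's bounding-box convention) into (left, top, right, bottom)
    pixel coordinates for drawing an annotation on the photo."""
    ymin, xmin, ymax, xmax = box
    return (
        int(xmin / 1000 * img_w),
        int(ymin / 1000 * img_h),
        int(xmax / 1000 * img_w),
        int(ymax / 1000 * img_h),
    )

# e.g. locate a button the model flagged on a 1920x1080 photo
left, top, right, bottom = box_to_pixels([100, 250, 200, 500], 1920, 1080)
```

From there, any drawing library can circle or highlight that rectangle on the image before it's sent back to the user.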

How we built it

After iterating through several ideas and learning from a few failed attempts, we landed on a robust stack that got the job done:

Frontend: TypeScript + Next.js (you really can't go wrong here).

Backend: FastAPI powered by Gemini-2.5, with Microsoft Presidio layered in to securely process and anonymize user prompts.

Data: MongoDB (though we intentionally store very little user data to respect privacy).

Auth: Firebase.

Deploy: DigitalOcean.
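The privacy layer in the backend works by scrubbing personal details from a prompt before it ever reaches the model. Here is a minimal sketch of that idea, using a toy regex redactor as a stand-in for Presidio's actual analyzer/anonymizer pipeline (the patterns and labels below are illustrative, not Presidio's API):

```python
import re

# Toy stand-in for the Presidio anonymization layer: mask obvious
# PII patterns before the prompt is forwarded to Gemini.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}

def anonymize(prompt: str) -> str:
    """Replace each recognized PII span with a placeholder label."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"<{label}>", prompt)
    return prompt

print(anonymize("My email is meemaw@example.com, call 555-123-4567"))
```

The real Presidio library does this with trained recognizers rather than hand-rolled regexes, which is why we layered it in instead of rolling our own.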

Challenges we ran into

Architecturally, building out the AR overlay proved to be a massive hurdle; we ultimately had to abandon it and pivot to static image annotations instead.

On a more personal note, Adrián (who is writing this section) wanted to throw Swagger out the window. After spending three agonizing hours debugging an issue where Swagger was crashing calls before they even hit the backend, he is now officially a hardcore curl evangelist (not actually; no Curl Church exists, at least we hope).

Accomplishments that we're proud of

We are incredibly proud of pushing through the architectural troubles and getting the image annotations to work most of the time. We're especially hyped about the dynamic image "re-pulling" feature—the ability to take a previously processed image and re-annotate it on the fly for a completely different task. It works beautifully.

What we learned

We leveled up our technical skills across the board, specifically diving into:

Configuring MongoDB Atlas setups

Integrating Google AI Studio

Leveraging ElevenLabs for voice generation

Using uv for lightning-fast Python package management (we previously only had conda experience)

What's next for Meemaw

We'd love to fully implement Augmented Reality (AR). We ran into a ton of roadblocks trying to build it within the strict 24-hour time window, but we strongly believe an AR overlay guiding users in real time through their phone cameras would completely change the Meemaw experience.
