Inspiration

The idea for Anchor was born from a very personal struggle with "Task Paralysis." For neurodivergent minds (ADHD/Autism) or just overwhelmed humans, a simple instruction like "Clean the house" isn't a single task—it is a terrifying wall of 50 undefined decisions.

I realized that traditional to-do lists actually make this anxiety worse by showing you everything at once. I wanted to build something that acts as a calming "First Mate"—an AI that doesn't just list tasks, but understands them, breaks them down, and hands them to you one at a time. The nautical theme wasn't just an aesthetic choice; it was the perfect metaphor. When life feels like a storm, you need an Anchor.

What it does

Anchor is an Executive Function tool disguised as a pirate's deck of cards.

The Quartermaster: Users input a vague, chaotic task (e.g., "My kitchen is a disaster").

The Reasoning Engine: Powered by Google Gemini 3, the app analyzes the request, detects urgency, and breaks it down into a single, immediate starting point (e.g., "Step 1: Throw away the trash on the counter").

The Deck: Instead of a long scrollable list, the user is presented with one "Treasure Map" card at a time. They mark it completed or snooze it, gamifying the process of clearing the mental clutter.

How we built it

I built Anchor as a mobile-first experience using React Native (Expo) to ensure it feels responsive and tactile. I then moved from working directly in the repo to building in ai.studio.

The Brain: The core is powered by the Google Gemini 3 Multimodal API. I use structured prompting to force the LLM to return valid JSON data containing task summaries, urgency ratings, and specific next steps.
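As a rough sketch of that structured-prompting layer (the schema, field names, and prompt wording here are my illustration, not the exact production code), the idea is to pin the model to a strict JSON shape and then validate the reply before rendering a card:

```typescript
// Illustrative sketch of the structured-prompting layer.
// The TaskCard schema and prompt text are assumptions for this example.

interface TaskCard {
  summary: string;                     // short label for the card
  urgency: "low" | "medium" | "high";  // detected urgency rating
  nextStep: string;                    // the single concrete first action
}

// Prompt that pins the model to one strict JSON shape.
function buildPrompt(userInput: string): string {
  return [
    "You are a calm first mate helping an overwhelmed captain.",
    "Break the task below into ONE immediate starting point.",
    'Reply with ONLY valid JSON: {"summary": string, "urgency": "low"|"medium"|"high", "nextStep": string}',
    `Task: ${userInput}`,
  ].join("\n");
}

// Never trust raw model output: parse and validate before showing a card.
function parseTaskCard(raw: string): TaskCard {
  const data = JSON.parse(raw);
  if (typeof data.summary !== "string" || typeof data.nextStep !== "string") {
    throw new Error("Model reply missing required fields");
  }
  if (!["low", "medium", "high"].includes(data.urgency)) {
    throw new Error(`Unexpected urgency: ${data.urgency}`);
  }
  return data as TaskCard;
}

// Example: a well-behaved reply, once validated.
const reply =
  '{"summary":"Tame the kitchen","urgency":"high","nextStep":"Throw away the trash on the counter"}';
const card = parseTaskCard(reply);
console.log(card.nextStep); // → Throw away the trash on the counter
```

The validation step matters because even a well-prompted model occasionally drifts from the schema; failing loudly here keeps a malformed reply from ever reaching the deck.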

The Body: I used TypeScript for robust code and Tamagui for a performant, universal UI that allowed me to build the custom "Parchment" card aesthetic.

The Engine: State management is handled by Zustand, ensuring the app remains snappy as the user swipes through their deck. I then pointed ai.studio at my GitHub repo and let it analyse the codebase to supercharge the build.
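The deck logic that Zustand manages is small enough to sketch as pure state transitions. The names below are illustrative (in the real app this slice would live inside a Zustand `create(...)` store), but the transitions show why swipes stay snappy: each action just produces a new deck array.

```typescript
// Illustrative deck state: the slice Zustand would manage.
// Written here as pure functions so the transitions are easy to follow;
// the Card fields and function names are assumptions for this sketch.

interface Card { id: string; title: string; }

interface DeckState {
  deck: Card[];      // cards still to be dealt, top card first
  completed: Card[]; // cleared cards (the treasure log)
}

// Mark the top card done and reveal the next one.
function markCompleted(state: DeckState): DeckState {
  const [top, ...rest] = state.deck;
  if (!top) return state; // empty deck: nothing to do
  return { deck: rest, completed: [...state.completed, top] };
}

// Snooze the top card: send it to the bottom of the deck.
function snooze(state: DeckState): DeckState {
  const [top, ...rest] = state.deck;
  if (!top) return state;
  return { ...state, deck: [...rest, top] };
}

// Example run-through.
let state: DeckState = {
  deck: [
    { id: "1", title: "Throw away the trash" },
    { id: "2", title: "Clear the counter" },
  ],
  completed: [],
};
state = snooze(state);        // "Clear the counter" is now on top
state = markCompleted(state); // it moves to the completed log
console.log(state.deck[0].title); // → Throw away the trash
```

Because every transition returns a fresh object rather than mutating in place, a Zustand store built on these functions re-renders only the components subscribed to the slice that changed.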

Challenges we ran into

This voyage wasn't smooth sailing! I battled several krakens:

​The "Time Traveler" Bug: I initially struggled with API 404 errors because I was trying to access models that had been deprecated or region-locked. I had to carefully navigate model versions to find the stable gemini-2.5 channel.

Library Mutinies: Integrating high-performance animations (react-native-reanimated) with gesture handlers caused crashes on Android. I had to pivot quickly from a complex physics-based deck to a stable "Captain's Log" scroll view to ensure a crash-free demo for the judges.

Expo SDK Updates: Midway through the build, an Expo Go update removed the notification library I was relying on, forcing me to engineer a custom in-app Alert system to keep the "Add Task" feature functional.

No foghorn in the distance! At the time of writing, I am still struggling to get the app to produce background ambient ocean music, as well as a notification sound when you've completed a task.

Accomplishments that we're proud of

Real Reasoning: I didn't just build a wrapper; I built a tool that actually thinks. Seeing Gemini correctly infer that "Schedule Dentist" implies "Find the phone number first" was a huge win.

Resilience: When my animation libraries broke, I successfully refactored the entire core component (Deck.tsx) into a stable list view without losing the pirate aesthetic.

The Treasure: ai.studio saved my project and did work that would otherwise have taken a number of collaborators. It made this project INSANELY fun and addictive, and I have definitely caught the bug to explore my ideas.

What we learned

The biggest lesson was that LLMs are excellent executive function prosthetics. I initially expected Gemini to just summarize text. Instead, I found it could effectively simulate the pre-frontal cortex functions that neurodivergent users struggle with: prioritizing, sequencing, and initiating. I also learned the vital importance of building "Safe Mode" UI components so the ship stays afloat even when experimental features fail.

What's next for Anchor

I am just leaving the harbor. My roadmap includes:

"Message in a Bottle": Integrating voice-to-text so users can speak their chaos directly into the app.

The Seamless Agent: Evolving Anchor from a passive list into a background agent that proactively nudges you based on time and location.

​Full "Ocean Physics": Restoring the full swiping deck mechanics with the new Reanimated V3 architecture for that perfect tactile feel.

Full Admiral Promotion: Integrating fully with the Gmail suite so neurodiverse users can manage their day-to-day tasks more easily.

Thank you all, and all the best of health.

Built With

  • ai.studio
  • api
  • gemini3
  • react
  • typescript