đź’ˇ Inspiration
The inspiration for ChoiceEase comes from the "Invisible Tax" paid by neurodivergent individuals. For those with ADHD, Autism, or Anxiety, the "cognitive battery" required for executive function is a finite daily resource. We noticed that while AI is often used for high-level complex reasoning, it is rarely used as a cognitive prosthetic for the "small" daily choices—like what to eat or whether to shower—that lead to burnout when the brain is overwhelmed.
We wanted to build a tool that intervenes when the brain's "Executive Function" hits zero, turning analysis paralysis into manageable momentum.
🛠️ How we built it
We focused on a "Low-Friction, High-Empathy" architecture:
- The Intelligence: We used Llama 3.1 8B Instruct via the Hugging Face Inference API. We built a modular prompt engine that performs Prompt Augmentation. Before the model ever sees the user's question, it is provided with a "Context Block" containing the user's neurotype, core values, and current energy level.
- The Stack: We chose an HTML/CSS/JS frontend with a Node.js/Express backend. This "no-build-step" approach keeps the app lightweight and extremely fast, which is critical for a user who is already feeling overstimulated.
- The Math of Overwhelm: We modeled the user's decision fatigue as an energy-decay process. Starting from an initial cognitive energy $E_0$, each decision with $n$ options drains energy according to: $$E_{next} = E_{current} - \sum_{i=1}^{n} (O_i \cdot \omega_i)$$ where $O_i$ is the base cost of evaluating option $i$ and $\omega_i$ is its "neurodivergent weight," or difficulty factor. ChoiceEase uses Llama to artificially cap $n$ at 3, preventing the energy $E$ from reaching zero.
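The decay model above can be sketched in a few lines of JavaScript. This is an illustrative implementation only; the function and constant names (`nextEnergy`, `MAX_OPTIONS`) are hypothetical, not taken from the ChoiceEase codebase.

```javascript
// Hypothetical sketch of the decision-fatigue model described above.
const MAX_OPTIONS = 3; // ChoiceEase caps n at 3 via the Llama prompt

// Sum of option costs O_i scaled by their neurodivergent weights w_i
function decisionCost(optionCosts, weights) {
  return optionCosts.reduce((sum, cost, i) => sum + cost * weights[i], 0);
}

// E_next = E_current - sum(O_i * w_i), with n capped at MAX_OPTIONS
// and energy floored at zero.
function nextEnergy(currentEnergy, optionCosts, weights) {
  const cost = decisionCost(
    optionCosts.slice(0, MAX_OPTIONS),
    weights.slice(0, MAX_OPTIONS)
  );
  return Math.max(0, currentEnergy - cost);
}
```

Capping the option list before the sum is exactly why presenting fewer choices preserves energy: the fourth and later options simply never enter the drain.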
đźš§ Challenges we ran into
- Prompt "Tone" Calibration: Llama 3.1 is highly capable, but early versions of our prompts felt too "clinical" or "bossy." For neurodivergent users, being told exactly what to do can trigger demand avoidance. We had to iterate extensively on Permission Language, ensuring the model always includes phrases like "It is okay to pick the low-energy option" or "You know yourself best."
- State Management without a Database: To prioritize privacy, we committed to not using a traditional database. The challenge was maintaining a personalized feel across sessions. We solved this by building a robust localStorage sync that hydrates the Llama context block on every request.
- Contextual Adaptation: Teaching the model to distinguish between a "High Energy" recommendation and a "Low Energy" one required carefully layered system instructions, so it wouldn't just give the same advice regardless of the user's mood check-in.
🏆 Accomplishments that we're proud of
- Zero-Jargon Accessibility: We managed to take complex AI outputs and structure them into "scannable" lists, avoiding the "wall of text" that often causes neurodivergent users to close an app in frustration.
- Privacy-First AI: We built a deeply personalized experience where the most sensitive data (the user's neurotype and personal values) never actually lives on a server.
📚 What we learned
We learned that the most "meaningful" AI applications are often the most invisible. We gained deep insights into Cognitive Accessibility—designing for different "brain fuel" states. We also learned that small, open-weight models like Llama 3.1 8B are more than sufficient for high-empathy tasks when paired with precise, context-aware prompt engineering.
🚀 What's next for ChoiceEase
The hackathon is just the beginning. We plan to add:
- On-Device Inference: Transitioning to WebLLM or Llama.cpp to allow the app to work 100% offline.
- Voice-to-Decision: Allowing users to "vent" their stress via audio, which ChoiceEase will then distill into a calm action plan.
- Semantic History: Using vector search to remind users of "Past Wins" (e.g., "Last time you felt this overwhelmed, a 5-minute walk helped. Want to try that again?").
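The "Past Wins" lookup planned above boils down to nearest-neighbor retrieval over embeddings. A minimal sketch, assuming each past win has already been embedded into a vector (the embedding step itself is out of scope here, and the names `cosine` and `nearestPastWin` are hypothetical):

```javascript
// Cosine similarity between two equal-length vectors.
function cosine(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Return the past win whose embedding is closest to the current query,
// e.g. the query "feeling overwhelmed" retrieving "a 5-minute walk helped".
function nearestPastWin(queryVec, pastWins) {
  // pastWins: array of { text, vec }
  return pastWins.reduce((best, win) =>
    cosine(queryVec, win.vec) > cosine(queryVec, best.vec) ? win : best);
}
```

At ChoiceEase's scale a linear scan like this is plenty; a dedicated vector index would only matter with a much larger history.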
Built With
- express.js
- html
- huggingface
- javascript
- llama
- node.js