Inspiration
We all know the cycle: you sit down to study, check one notification, and suddenly you’ve lost an hour to doomscrolling on TikTok or Instagram. For students, the phone is the ultimate enemy of productivity.
We wanted to flip the script. Instead of fighting the urge to check our phones, what if we occupied the phone with something helpful? We were inspired by the concept of "body doubling"—where just having someone else in the room helps you focus—and decided to combine that with a "digital lock." We built Spuddy to be a companion that demands your phone’s attention so you can't give it yours.
What it does
Spuddy is an emotionally intelligent desk companion that effectively "bricks" your phone with cuteness.
To work, Spuddy needs to see you. You have to prop your phone up on your desk, open the app, and leave it alone. The app uses the front-facing camera to continuously monitor your facial expressions, categorizing them into 7 basic emotions (like Happy, Sad, or Frustrated).
As long as the app is open and facing you, Spuddy hangs out. If you look stressed, he might sigh sympathetically ("Ughhh...") or offer encouragement. If you look happy, he bounces. But the moment you pick up your phone to scroll social media, you break the connection. By forcing the phone to be a passive "observer" on your desk, Spuddy physically removes the temptation to procrastinate.
How we built it
We built Spuddy using React Native (Expo). The architecture relies on the app remaining in the foreground to access the camera stream:
The "Eyes" (Face Detection): We implemented real-time face detection to capture the user's emotional state. This requires the phone to be stationary and facing the user.
The "Brain" (Groq & Llama 3.1): When a mood shift is detected, the app hits the Groq API with a specific system prompt that gives Spud his lazy, supportive personality.
The "Voice" (Fish Audio): We pipe text responses into Fish Audio to generate realistic, emotive speech with phonetic markers.
The "Body" (React Native Reanimated): Spud is a dynamic puppet controlled by shared values. When he speaks, we trigger a physics-based "squash and stretch" animation loop to make him feel alive.
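The foreground loop described above can be sketched as: camera frame → emotion label → debounced mood-shift check → reaction. As a rough illustration (the class name, labels, and threshold here are ours, not the actual app code), a mood-shift detector that ignores frame-to-frame flicker might look like this:

```typescript
// Hypothetical sketch of mood-shift detection; Spuddy's real names and
// thresholds may differ.
type Emotion =
  | "happy" | "sad" | "angry" | "fearful"
  | "disgusted" | "surprised" | "neutral";

class MoodTracker {
  private current: Emotion = "neutral";
  private candidate: Emotion | null = null;
  private streak = 0;

  // Require `minStreak` consecutive frames of a new emotion before
  // reporting a shift, so millisecond-level flicker doesn't fire reactions.
  constructor(private minStreak = 10) {}

  // Feed one detected frame; returns the new emotion only when a
  // sustained shift is confirmed, otherwise null.
  push(frame: Emotion): Emotion | null {
    if (frame === this.current) {
      this.candidate = null;
      this.streak = 0;
      return null;
    }
    if (frame === this.candidate) {
      this.streak += 1;
    } else {
      this.candidate = frame;
      this.streak = 1;
    }
    if (this.streak >= this.minStreak) {
      this.current = frame;
      this.candidate = null;
      this.streak = 0;
      return frame;
    }
    return null;
  }
}
```

At roughly 30 fps, a `minStreak` of 10 means about a third of a second of a sustained expression before the app pings the LLM for a reaction.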
Challenges we ran into
The biggest technical hurdle was concurrency and state synchronization.
Because facial expressions change milliseconds apart, Spuddy would often try to trigger three different reactions at once. This caused race conditions where his audio would overlap, or he would get stuck in a "busy" state forever. We had to engineer a custom mutex (a locking mechanism built on React refs, so re-renders don't reset it) plus a watchdog timer, ensuring Spuddy finishes one thought before starting another.
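Stripped of the React specifics, the idea can be sketched as a tiny lock with a stale-lock watchdog (the class, timings, and injectable clock below are our illustration, not Spuddy's actual code):

```typescript
// Illustrative reaction lock: only one reaction may run at a time, and a
// watchdog treats a lock held longer than `staleMs` as crashed and frees it.
class ReactionLock {
  private heldSince: number | null = null;

  constructor(
    private staleMs = 8000,
    private now: () => number = Date.now, // injectable clock for testing
  ) {}

  // Try to start a reaction; returns false if another is still playing.
  tryAcquire(): boolean {
    const t = this.now();
    if (this.heldSince !== null && t - this.heldSince < this.staleMs) {
      return false; // busy: Spuddy is mid-thought
    }
    // Either free, or the previous holder went stale (watchdog recovery).
    this.heldSince = t;
    return true;
  }

  // Called when audio playback finishes normally.
  release(): void {
    this.heldSince = null;
  }
}
```

Each detected mood shift calls `tryAcquire()`; reactions that lose the race are simply dropped, which is fine for ambient companion behavior, and the watchdog guarantees a crashed playback can never leave Spud "busy" forever.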
Accomplishments that we're proud of
The "Anti-Scroll" UX: Designing an interface that encourages not touching the screen. By making the character reactive to the user's face, we gamified the act of leaving the phone alone.
The "Squash and Stretch": Getting the animation to perfectly sync with the audio state so Spud looks like he is actually talking.
Prompt Engineering: Getting an LLM to be concise and passive is harder than it looks. We tuned Spud to feel like a pet, not a chatbot.
What we learned
This project taught us that constraint is a feature. By forcing the user to keep the app open and the camera active, we solved the physical problem of phone addiction with a software solution. We also learned deep lessons in React Native animation states, specifically how to sync an asynchronous audio stream with a 60fps visual loop.
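One concrete form that lesson took: the visual loop should never await the audio. Instead, each animation frame reads the latest audio level and maps it to scale values. In the app that level feeds a Reanimated shared value; the mapping itself can be shown as plain TypeScript (the function name and constants are made up for illustration):

```typescript
// Hypothetical squash-and-stretch mapping: louder audio squashes Spud
// vertically and stretches him horizontally, preserving apparent volume.
function squashStretch(
  amplitude: number, // latest audio level, 0..1
  intensity = 0.25,  // how exaggerated the deformation is
): { scaleX: number; scaleY: number } {
  const a = Math.min(1, Math.max(0, amplitude)); // clamp bad samples
  const scaleY = 1 - intensity * a; // squash down as volume rises
  const scaleX = 1 / scaleY;        // stretch out to keep area constant
  return { scaleX, scaleY };
}
```

Because `scaleX * scaleY` is always 1, Spud deforms without appearing to grow or shrink, which is what makes squash and stretch read as "talking" rather than "glitching."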
What's next for Spuddy
We are currently experimenting with Bluetooth (BLE) integration to gamify the experience further—perhaps locking the phone screen entirely while a separate physical "Spud" device sits on the desk. We also plan to add "skins" (hats, glasses) that users unlock based on how many minutes they spend studying without picking up their device.
Built With
- fishaudio
- groq
- hume
- openai
- react-native-reanimated
- reactnative
- typescript