Inspiration
Everyone deserves a butler — someone who just handles things for you. Alfred is that idea brought to life through AI and robotics. It's a general-purpose voice-controlled robot butler that can assist anyone with physical tasks around the home. But today, we're showcasing it for the people who need it most: amputees and individuals with limited use of their arms or legs. For them, simple daily tasks like eating a meal or grabbing something from another room aren't just inconvenient — they're impossible without help. We wanted to build something that gives them that help on demand, with nothing but their voice.
What it does
Alfred is an AI-powered robot butler you control entirely by voice. You call Alfred from your phone and just talk. Alfred listens, understands what you need, and commands the right robot to do it.
While Alfred is designed to be a general physical assistant for anyone, today's demo focuses on two life-changing actions for people with limb disabilities: a robotic feeding arm that helps the user eat bite by bite, and a mobile robot dog that fetches items from around the house. Say "I'm hungry" and the feeding arm activates. Say "grab my water" and the robot dog heads to the kitchen. No app, no buttons — just your voice.
How we built it
We built a three-layer pipeline that turns a phone call into a robot action. Smallest.ai powers the real-time voice conversation — the user talks naturally and Alfred responds like a person, not a machine. Toolhouse is the orchestration brain — it interprets what the user needs and routes the request to the correct robot via API. Cyberwave is the execution layer — it holds pre-trained robot instructions and carries out commands like "feed the user" or "fetch from the kitchen." We also created a knowledge base so Alfred knows what items are in the home and where to find them.
Challenges we ran into
Our first voice prompt over-engineered the conversation. It asked for confirmation before every action — "are you sure?" "which room?" — which is frustrating for anyone, but especially for someone who physically cannot interact any other way. We stripped it down to act immediately when intent is clear. The robot also hung mid-task occasionally, leaving the user in silence. We built recovery handling into the voice prompt itself so Alfred detects the issue, tells the user what's happening, and retries. Chaining three systems together with low enough latency for real-time voice interaction was the biggest technical hurdle.
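The recovery pattern described above — detect the stall, tell the user, retry — looks roughly like this. `send_command` and `speak` are hypothetical stand-ins for the execution and voice layers; the simulated single-attempt stall is for illustration only.

```python
import time

class RobotTimeout(Exception):
    """Raised when the execution layer reports a hung or stalled task."""

def send_command(action: str, attempt: int) -> str:
    # Placeholder: simulate a robot that hangs on the first attempt
    # and succeeds on the retry.
    if attempt == 1:
        raise RobotTimeout(action)
    return "done"

def speak(message: str) -> None:
    # Stand-in for the real-time voice layer.
    print(f"Alfred: {message}")

def execute_with_recovery(action: str, max_attempts: int = 3) -> bool:
    for attempt in range(1, max_attempts + 1):
        try:
            send_command(action, attempt)
            return True
        except RobotTimeout:
            # Never leave the user in silence: narrate the problem, then retry.
            speak(f"The robot stalled on '{action}'. Retrying now.")
            time.sleep(0.1)  # brief backoff before the retry
    speak(f"I couldn't complete '{action}'. I'll get help with this one.")
    return False

execute_with_recovery("fetch water")
```

In the real system the narration lives in the voice prompt itself, so the status update comes out in Alfred's own conversational voice rather than as a canned error message.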
Accomplishments that we're proud of
Alfred feels invisible. The user doesn't think about APIs, robots, or which system is doing what. They just talk and things happen. We're proud that we got the interaction down to one sentence per response with immediate action — no loops, no menus, no friction. For our target users, that directness isn't a nice-to-have — it's the difference between independence and waiting for someone else.
What we learned
The robots aren't the hard part — the interface is. Great hardware means nothing if controlling it requires a complex app or memorized commands. Voice is the most accessible interface that exists. We also learned that voice agent prompting is its own discipline — completely different from text chatbots. Brevity, assumptions, and instant action matter far more than thoroughness or confirmation.
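To make that difference concrete, here is a sketch of prompt rules in that brevity-first spirit. The wording is illustrative, not our production prompt:

```text
You are Alfred, a voice butler. Rules:
- Reply in one short sentence, then act.
- If intent is clear, do NOT ask for confirmation.
- Ask at most one clarifying question, and only when the request is ambiguous.
- If a robot stalls, say what happened and that you are retrying.
```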
What's next for Alfred Robot Butler
Today Alfred handles feeding and fetching for users with limb disabilities. But the architecture is general-purpose. The same voice-to-robot pipeline can expand to anyone who wants a physical assistant — elderly individuals, people recovering from surgery, or anyone who just wants a robot butler. Next steps include controlling wheelchairs, smart home devices, door locks, lights, and appliances. One voice, full control over your physical world.
Built With
- claudecode
- cyberwave
- jade.hosting
- openai
- smallest.ai
- toolhouse.ai

