Inspiration:
Across the U.S., a significant proportion of long-term care facilities report being understaffed: a survey by the American Health Care Association found that nearly 99% of nursing homes reported a staffing shortage. Meanwhile, new federal rules set a minimum standard of 3.48 hours of direct nursing care per resident per day, yet only about 19% of facilities meet this threshold, according to JAMA Network. In this context, the need for innovative solutions is clear. That's why Hudson, an autonomous, conversational, vision-enabled robotic assistant, aims to fill the human-care gap by providing personalized interaction, monitoring, and support when human caregivers simply cannot keep up.
How we built it:
Hudson was built by carefully combining readily available components into a cohesive autonomous system. Using a Python–Arduino bridge, a Raspberry Pi, and an Arduino UNO, it integrates the Gemini and ElevenLabs APIs with OpenCV to coordinate vision, speech, reasoning, and motion in real time. Each component is modular and upgradeable, allowing flexible development. The result is a practical, human-centered robot capable of understanding context and acting on it to assist in real-world environments.
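The Python–Arduino bridge described above can be sketched roughly as follows. The serial port name, baud rate, and the JSON-line command format are all assumptions for illustration, not Hudson's actual protocol:

```python
# Minimal sketch of a Python-to-Arduino command bridge.
# Assumptions (illustrative, not the project's actual protocol):
#   - the Arduino appears as /dev/ttyACM0 at 9600 baud
#   - each command is a newline-terminated JSON object
import json


def encode_command(action: str, value: int = 0) -> bytes:
    """Pack a motion command as a newline-terminated JSON line
    that an Arduino sketch could parse field by field."""
    return (json.dumps({"action": action, "value": value}) + "\n").encode()


def send_command(port, action: str, value: int = 0) -> None:
    """Write an encoded command to an already-open serial port,
    e.g. serial.Serial('/dev/ttyACM0', 9600) from pyserial."""
    port.write(encode_command(action, value))
```

Keeping the encoding separate from the serial write is what makes each module swappable: the Arduino firmware, the transport, or the command set can change independently.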
What it can do:
While we cannot fix the healthcare industry's staffing problems directly, we can mitigate some of their side effects. AI can help address the lack of attention elderly people receive by automating the more repetitive caregiving tasks, such as fetching medication. Hudson can receive voice instructions directing it to locate and pick up medication. You can talk to it and give it commands to move in a given direction (forward, right, left, backward), grab and drop found objects, or run an automatic routine that searches for a desired object nearby. To mitigate false activations, every command must begin with a wake word.
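The wake-word gate described above might look something like the sketch below. The wake word "hudson" and the specific command vocabulary are assumptions for illustration:

```python
# Hedged sketch of wake-word-gated command parsing.
# The wake word and command set below are illustrative assumptions.
WAKE_WORD = "hudson"
KNOWN_COMMANDS = {"forward", "backward", "left", "right", "grab", "drop", "find"}


def parse_utterance(text: str):
    """Return the first recognized command in a transcribed utterance,
    or None if the wake word is absent (filtering out false activations)."""
    words = text.lower().split()
    if not words or words[0] != WAKE_WORD:
        return None  # ignore speech that does not start with the wake word
    for word in words[1:]:
        if word in KNOWN_COMMANDS:
            return word
    return None  # wake word heard, but no known command followed
```

For example, "Hudson move forward" would yield `forward`, while "move forward" on its own is ignored.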
Accomplishments we are proud of:
While we had many struggles, we're proud that we increased Hudson's speed to the point where you can hold a near-real-time conversation with it. We are also proud of its autonomy and its design, which was custom-made from individual parts and 3D-printed pieces for our specific needs.
What we Learned:
We learned to stay focused and put in the effort needed to bring such an ambitious project to life. Even though we weren't able to accomplish every goal we initially had in mind, the experience made us grow as a team and taught us a lot through the challenges we faced. Moreover, we learned to have fun even under stress.
Challenges We Faced
Throughout HackPSU, our main challenges were judging whether an idea was feasible and working within the limits of our underpowered components, mainly the low-powered Arduino and Raspberry Pi 3B+. Every idea seemed possible when we imagined it, but the hardware often said otherwise. Two features we attempted were object detection in front of a walking stick to help the blind and a sensor that would detect when someone fell to the ground. Both are good ideas, but they proved very hard to implement on the weak hardware we had. These challenges showed us how to work through hardships and overcome difficulties under time pressure.