Research Focus
The world of home conversational AI, represented by intelligent voice assistants such as Amazon Alexa and Google Home, spans multiple areas. With Speak UP, we designed a working application that serves members of the accessibility community whose needs at home can be supported by voice technology. This includes supporting the home ecosystem through a voice UI and developing skills for particular populations, such as aging adults or cognitively impaired adults.
What it does
Speak UP offers a range of inclusivity tools, including a speech therapy training model, an accent and pronunciation guide, and tools for learning figurative speech in various languages.
How we built it
We built Speak UP using Voiceflow to design and prototype our voice chatbots. Additionally, we integrated Node.js and Firebase backend services into our project so that user information could be saved in a database across sessions.
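As a rough sketch of that persistence layer: the functions below model saving and loading a user's profile keyed by user ID. The in-memory `Map` is an illustrative stand-in for our Firebase database (in the real app these calls would wrap Firestore operations such as `set(..., { merge: true })`); the function and field names here are examples, not the actual skill's code.

```javascript
// Stand-in for the Firebase 'users' collection: userId -> profile object.
const store = new Map();

// Save or merge a user's preferences so they persist across sessions.
function saveUserPrefs(userId, prefs) {
  const existing = store.get(userId) || {};
  store.set(userId, { ...existing, ...prefs });
}

// Load a user's saved preferences at the start of a new session.
function loadUserPrefs(userId) {
  return store.get(userId) || {};
}

// Example: remember a user's target accent, then later add a
// figurative expression they taught the assistant. The merge keeps both.
saveUserPrefs('user-123', { accent: 'en-GB' });
saveUserPrefs('user-123', { expressions: ['raining cats and dogs'] });
const prefs = loadUserPrefs('user-123'); // has both accent and expressions
```

Merging rather than overwriting is what lets the assistant accumulate what it knows about a user over many sessions instead of starting fresh each time.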
Challenges we ran into
We ran into several challenges early on, such as deciding which steps to take and what to work on while researching our project. We needed to identify the project's boundaries: which aspects of accessibility to focus on, which user groups to address, and how deep to go into technology integration. To address these challenges, we met to clearly define the objectives and scope of our research, outlining the specific areas of accessibility and user groups we aimed to target, while communicating regularly with each other and with our research guide.
Accomplishments
We successfully developed a range of functionalities: retaining user preferences and needs across sessions, saving a user's distinctive figurative expressions, guiding users through structured speech therapy exercises, and breaking down word pronunciation to enhance comprehension and enable more personalized interactions.
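To illustrate the pronunciation-breakdown idea, here is a deliberately naive sketch (not the skill's actual logic): it chunks a word around vowel groups so the user can rehearse it in parts. `syllableChunks` is a hypothetical helper name, and real syllabification would need a phonetic dictionary rather than this heuristic.

```javascript
// Naive pronunciation breakdown: split a word into chunks, each built
// around one vowel group, so it can be practiced piece by piece.
// A real implementation would use phonetic data, not a regex heuristic.
function syllableChunks(word) {
  const chunks = word
    .toLowerCase()
    .match(/[^aeiouy]*[aeiouy]+(?:[^aeiouy]+$)?/g);
  return chunks || [word]; // fall back to the whole word if no vowels
}

console.log(syllableChunks('pronunciation'));
// → [ 'pro', 'nu', 'ncia', 'ti', 'on' ]
```

The chunks always reassemble into the original word, so the assistant can read them back slowly and then say the full word at normal speed.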
What we learned
We learned a lot about the accessibility community, as well as the importance of user-centered design. Placing the user at the core of our design process helped us break down our work, since understanding users' needs, preferences, and challenges is crucial for creating effective solutions. We also learned how to work and communicate effectively with our research mentors and with each other on a virtual platform, which allowed us to pursue a shared vision, resolve issues, and accelerate progress.
What's next for Speak UP
Our next steps are to make Speak UP more robust and personalized. We can do this by gathering user feedback on the current exercises to build a more personalized approach to learning. Additionally, the foundation of Speak UP can be extended to languages other than English, furthering our mission of inclusivity and accessibility. Finally, we would like to train the AI to better understand each user's individual voice (cadence, accent, etc.), for example through reinforcement from user feedback and neural networks trained to recognize the user's speech patterns. These additions will create a more engaging and accessible experience for users of voice recognition devices.