To fully experience the project and get a more comprehensive overview, please refer to my GitHub: Project Github Link
I've been thinking about...
How might the integration of Augmented Reality (AR), Artificial Intelligence (AI), and the Internet of Things (IoT) technologies in Head-Mounted Devices (HMD) potentially open up new avenues for improving the home living experience of DHH individuals?
Inspiration
The inspiration for this project came from a critical observation: today's technology relies heavily on voice commands, which excludes the Deaf and Hard of Hearing (DHH) community.
Therefore, my goal is to create a solution that not only integrates with the existing smart home ecosystem but also enhances accessibility, safety, and communication, improving DHH individuals' home living experience.
What it does
Quiet Interaction" is a Mixed Reality (MR) application that combines AR, AI, and the IoT to create an accessible smart home environment tailored for the DHH community. This application allows users to interact with their home environment visually and intuitively, offering real-time control over household devices without reliance on auditory cues.
How I built it
I developed this project in Unity with the Meta XR SDK, leveraging its AR capabilities to create an accessible interface. Smart home connectivity was achieved through a Zigbee dongle connected to a Raspberry Pi, with Zigbee devices communicating over MQTT.
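For illustration, here's a minimal sketch of how an app layer can talk to Zigbee devices over MQTT. The broker hostname and topic names are assumptions for this example (they follow the common zigbee2mqtt conventions, not necessarily this project's actual setup), and it assumes paho-mqtt ≥ 2.0 with a zigbee2mqtt bridge running on the Pi:

```python
import json
import paho.mqtt.client as mqtt

BROKER_HOST = "raspberrypi.local"  # assumed hostname of the Pi running the MQTT broker

client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
client.connect(BROKER_HOST, 1883)

# zigbee2mqtt convention: publish to <device>/set to control a device
client.publish("zigbee2mqtt/living_room_lamp/set", json.dumps({"state": "ON"}))

# Subscribe to state updates so the AR layer can mirror each device visually
def on_message(client, userdata, msg):
    status = json.loads(msg.payload)
    print(msg.topic, status)  # e.g. {"state": "ON", "brightness": 254}

client.on_message = on_message
client.subscribe("zigbee2mqtt/+")
client.loop_forever()
```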
Additionally, I've integrated speech services from Microsoft Azure to better meet the communication needs of the DHH community. This includes features like real-time translation and transcription.
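As a sketch of the transcription side (with placeholder credentials, since the project's actual wiring isn't shown here), Azure's Speech SDK can stream both partial and final results, which is what lets captions update live as someone speaks:

```python
import azure.cognitiveservices.speech as speechsdk

# Placeholder credentials; the real key and region come from the Azure portal
speech_config = speechsdk.SpeechConfig(subscription="YOUR_KEY", region="YOUR_REGION")
speech_config.speech_recognition_language = "en-US"

recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config)

# Partial results update the caption mid-utterance; final results replace it
recognizer.recognizing.connect(lambda evt: print("partial:", evt.result.text))
recognizer.recognized.connect(lambda evt: print("final:  ", evt.result.text))

recognizer.start_continuous_recognition()
input("Press Enter to stop...\n")
recognizer.stop_continuous_recognition()
```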
Challenges I ran into
One of the major challenges was designing an interface that is both intuitive and effective for DHH users. Balancing the visual complexity necessary for informative feedback with ease of use required careful consideration and multiple design iterations. Additionally, ensuring real-time performance and reliability in translating IoT device statuses into AR visualizations posed significant technical hurdles.
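One concrete version of that reliability problem: Zigbee devices can emit bursts of status messages, and redrawing an AR indicator on every message causes flicker. A hypothetical debouncing layer (the class name and threshold below are illustrative, not taken from the project) can coalesce updates so only settled state changes reach the visualization:

```python
import time

class DebouncedDeviceState:
    """Coalesce bursty device updates; emit only settled state changes."""

    def __init__(self, hold_seconds=0.3):
        self.hold = hold_seconds     # how long a state must persist to count as stable
        self._pending = {}           # device -> (latest state, timestamp)
        self._stable = {}            # device -> last state pushed to the AR layer

    def update(self, device, state, now=None):
        now = time.monotonic() if now is None else now
        self._pending[device] = (state, now)

    def stable_states(self, now=None):
        now = time.monotonic() if now is None else now
        for device, (state, ts) in self._pending.items():
            if now - ts >= self.hold and self._stable.get(device) != state:
                self._stable[device] = state
                yield device, state  # only settled changes reach the visualization
```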
Quest 3: audio capture limits (Meta, please be aware and fix it!)
Unfortunately, I couldn't implement speaker diarization: the pipeline I use, a Python script that captures audio and sends it to Azure, doesn't currently support it.
Most importantly, the microphone on the Quest 3 headset isn't well suited to capturing clear audio from anyone other than the wearer :( This constraint significantly shaped the app's design.
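For reference, a minimal sketch of the kind of pipeline described above: externally captured PCM audio (e.g., forwarded from the headset) can be fed to Azure through a push stream. The 16 kHz mono format and the feed_chunk helper are assumptions for this example, not the project's actual script:

```python
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="YOUR_KEY", region="YOUR_REGION")

# A push stream lets externally captured PCM be fed to the recognizer chunk by chunk
stream_format = speechsdk.audio.AudioStreamFormat(samples_per_second=16000,
                                                  bits_per_sample=16,
                                                  channels=1)
push_stream = speechsdk.audio.PushAudioInputStream(stream_format)
audio_config = speechsdk.audio.AudioConfig(stream=push_stream)

recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config,
                                        audio_config=audio_config)
recognizer.recognized.connect(lambda evt: print(evt.result.text))
recognizer.start_continuous_recognition()

# Hypothetical helper: called with raw PCM bytes received from the headset
def feed_chunk(pcm_bytes: bytes):
    push_stream.write(pcm_bytes)
```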
Accomplishments that I am proud of
I am particularly proud of developing an innovative application that directly addresses a gap in technology accessibility for the DHH community. The successful integration of different emerging technologies into a cohesive and functional system that enhances the daily lives of DHH individuals stands as a significant achievement.
What's the next step?
The next phase will involve incorporating real DHH participants into the development process. Due to time constraints and limited resources, the first stages of development proceeded without direct input from the community. Engaging DHH individuals will provide valuable feedback and insights, ensuring the application meets their needs more effectively. This step is crucial for refining the functionality and user interface, and for enhancing the overall effectiveness and accessibility of the application.


