Inspiration
Our inspiration came from realizing how prevalent hearing issues are worldwide, and that most of the people affected have no accessible interface for communication. We built our project using a Raspberry Pi, PyTorch, OpenCV, and sockets. We faced many challenges, but the biggest was building the AI model that translates ASL to English. Along the way we had to switch libraries, and the new ones were not compatible with the Pi, so we could no longer use the Raspberry Pi Camera or our previously written OpenCV programs. Instead, we relied on our computer's local webcam and sent the AI model's output over a socket to the Pi, which shows the translation on the small watch-emulation display.
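The laptop-to-Pi handoff can be sketched roughly like this: the Pi runs a small socket server waiting for text, and the laptop pushes each ASL-to-English prediction to it. This is a minimal sketch using Python's standard `socket` module, not our exact code; the host, port, and function names are illustrative, and here both ends run on one machine over loopback (in the real setup `HOST` would be the Pi's LAN address, and the Pi side would draw the text on the display instead of collecting it in a list).

```python
import socket
import threading

HOST, PORT = "127.0.0.1", 5005  # illustrative; the real setup uses the Pi's LAN address

def pi_display_server(results):
    """Stands in for the Raspberry Pi: receive one translation string."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            text = conn.recv(1024).decode("utf-8")
            results.append(text)  # real code would render this on the watch display

def send_translation(text):
    """Runs on the laptop: push the latest ASL-to-English prediction to the Pi."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((HOST, PORT))
        cli.sendall(text.encode("utf-8"))

results = []
server = threading.Thread(target=pi_display_server, args=(results,))
server.start()
send_translation("HELLO")
server.join()
print(results[0])  # -> HELLO
```

Keeping the heavy PyTorch inference on the laptop and shipping only short strings over TCP keeps the Pi's job trivial, which is the main reason this split works despite the library incompatibilities.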