Inspiration

The inspiration for this project was to build a fun educational tool that delivers quality instruction in American Sign Language (ASL), offering a more accessible way to communicate with the deaf and hard of hearing.

What it does

Sign Meow interprets your hand signs, compares them against a large ASL alphabet database, and uses Gemini to generate unique sentences as letter prompts. The user then signs out the letters in these sentences; when a letter is signed correctly, the cat companion does a happy dance to encourage continued improvement.
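The letter-matching step can be sketched as a nearest-neighbor lookup: compare the detected hand-landmark coordinates against stored reference poses and pick the closest letter. This is a minimal illustration, not the project's actual code; the tiny three-letter "database" and six-value landmark vectors are placeholders (a real hand tracker such as MediaPipe reports 21 landmarks per hand).

```python
import math

# Hypothetical mini "database": each letter maps to a flattened list of
# normalized (x, y) hand-landmark coordinates. Real entries would come
# from a hand-tracking library and use far more landmarks.
SIGN_DATABASE = {
    "A": [0.0, 0.0, 0.1, 0.2, 0.2, 0.4],
    "B": [0.0, 0.0, 0.0, 0.5, 0.0, 0.9],
    "C": [0.0, 0.0, 0.3, 0.1, 0.5, 0.3],
}

def classify_sign(landmarks, database=SIGN_DATABASE):
    """Return the letter whose stored pose is closest in Euclidean distance."""
    def distance(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(database, key=lambda letter: distance(landmarks, database[letter]))

# A reading near the stored "B" pose classifies as "B".
print(classify_sign([0.0, 0.05, 0.02, 0.48, 0.01, 0.88]))  # → B
```

A production version would normalize the landmarks (e.g. relative to the wrist) so the match is robust to where the hand sits in the frame.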

How we built it

We used Python, with the assistance of Microsoft Copilot, DeepSeek, OpenAI's ChatGPT, Google's Gemini, and Cursor, to develop the computer vision software and bridge the code to be compatible with our hardware. We also used C++ with Gemini's API to generate the text to be signed for Sign Meow. With the ESP32 as the brain of the operation, assisted by the Arduino Uno as a power source, we are able to power our fun cat buddy and help motivate students. The cat companion is 3D printed in PLA filament and has a loose chain connected to a servo to control the cat's motion.
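The sentence-generation step described above can be sketched as a request to Gemini's generateContent REST endpoint. The payload shape below follows Google's documented API, but the model name, prompt wording, and helper function are our assumptions for illustration (shown in Python for brevity, though the project sends the request from C++):

```python
import json

# Endpoint shape from Google's Generative Language REST API; the model
# name here is an assumption, and a real call also needs an API key.
GEMINI_URL = ("https://generativelanguage.googleapis.com/v1beta/"
              "models/gemini-1.5-flash:generateContent")

def build_request(letters):
    """Build a JSON body asking Gemini for a practice sentence (sketch)."""
    prompt = ("Write one short, kid-friendly sentence using words that "
              "start with these letters: " + ", ".join(letters))
    return json.dumps({"contents": [{"parts": [{"text": prompt}]}]})

body = build_request(["C", "A", "T"])
print(body)
```

The returned sentence would then be shown on the LCD as the next signing prompt.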

Challenges we ran into

As a team of two mechanical and two electrical engineers, we initially struggled to set up the proper frameworks and tools to create our product. For example, the computer vision code required downloading multiple libraries and datasets to accurately read ASL alphabet signs; while it was easy to make the code track the hand, getting the software to accurately interpret the signs was challenging. Going back to our roots as engineers, we also wanted a hardware component in this project. Because the computer vision code was written in Python, we had to figure out how to bridge it over to the C++-based ESP32 and Arduino architecture.
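One way to bridge the two sides is to have the Python vision code send each recognized letter to the ESP32 as a small framed serial message. The framing below (start byte, ASCII payload, checksum) is our assumption for illustration, not the project's actual protocol; a real setup would write these bytes to the port with a library like pyserial, and the C++ firmware would parse the same frame on the other end.

```python
# Minimal sketch of a Python -> ESP32 serial message. The frame layout
# (start byte, one-letter payload, modular checksum) is an assumption.
START_BYTE = 0x7E

def frame_letter(letter):
    """Pack one recognized letter into a framed serial message."""
    payload = letter.encode("ascii")
    checksum = sum(payload) % 256  # simple integrity check for the payload
    return bytes([START_BYTE]) + payload + bytes([checksum])

msg = frame_letter("B")
print(msg.hex())  # → 7e4242
```

The checksum lets the firmware discard bytes corrupted in transit instead of triggering the cat's dance on a bad read.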

Accomplishments that we're proud of

We are proud of incorporating AI, hardware, and software in a single project. Especially as this is our first project involving AI and computer vision, we are proud that we could successfully create software that reads signs and pair it with something tangible like a cat companion.

What we learned

We learned how to set up a 3D printer; how different hardware platforms compare and when to use each, such as an Arduino versus an ESP32; how to integrate the Gemini API and display its output on the LCD screen; and how to use computer vision libraries along with basic machine learning principles to recognize signs from ASL datasets.

What's next for Sign Meow

We were only able to integrate ASL alphabet reading. In the future, we plan to incorporate sentence recognition into the neural network and hopefully reach an advanced enough stage where Sign Meow could work with multiple sign languages, since sign languages vary by country and even across regions and dialects; for example, certain signs in Canada differ from those in the US. We want to make the world of sign language more accessible to everyone around the world.
