Inspiration

We are a group of female Deaf students who recently attended the 2023 WiHacks event at Rochester Institute of Technology. At the event, we were challenged to build a proof-of-concept project demonstrating how AI technologies can augment people's lives in positive and meaningful ways. Almost immediately, we thought about how fruitful the intersection of AI and American Sign Language (ASL) could be. All of us primarily use ASL in our daily lives, and we believe ASL is a beautiful and powerful language that casts a bright light on American Deaf culture and deserves recognition and appreciation. At the same time, we recognize that Deaf people are a linguistic and cultural minority in the U.S., and for that reason many people are not familiar with ASL. So we thought it would be wonderful to harness the power of AI to facilitate ASL learning and thereby increase awareness of ASL and Deaf culture. We are particularly interested in ChatGPT as a platform for developing ASL awareness and proficiency because it is freely available to everyone and has low barriers to entry.

What it does

SignGPT is a plugin that extends ChatGPT's capabilities: it is a tool for learning and practicing ASL and for learning about Deaf culture. Like ChatGPT, SignGPT provides a personalized, adaptive learning experience that caters to each user's individual needs and preferences.

How we built it

We used Figma for design and Visual Studio Code to write our HTML, CSS, and React code. In the future, we plan to call the ChatGPT API from within our React application to power SignGPT. We elaborate on the technical details of our process in our documentation, the README file in our GitHub repository. The documentation also covers many other aspects of our project, so please check it out if you can!
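As a rough sketch of how that future API call might look, here is one way to build a request to the OpenAI Chat Completions endpoint from JavaScript. The function names (`buildChatRequest`, `askSignGPT`), the system prompt, and the model choice are our own illustrative assumptions, not code that exists in the repository yet.

```javascript
// Hypothetical sketch: wiring SignGPT to the OpenAI Chat Completions API.
// SYSTEM_PROMPT, buildChatRequest, and askSignGPT are illustrative names.
const SYSTEM_PROMPT =
  "You are SignGPT, a friendly tutor for American Sign Language and Deaf culture.";

// Build the JSON body for a single chat turn.
function buildChatRequest(userMessage) {
  return {
    model: "gpt-3.5-turbo", // example model; any chat model would work
    messages: [
      { role: "system", content: SYSTEM_PROMPT },
      { role: "user", content: userMessage },
    ],
  };
}

// Send the request and return the assistant's reply text.
async function askSignGPT(userMessage, apiKey) {
  const response = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify(buildChatRequest(userMessage)),
  });
  const data = await response.json();
  return data.choices[0].message.content;
}
```

In a real React app, a call like `askSignGPT(...)` would typically live behind a small backend proxy so the API key is never shipped to the browser.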

Challenges we ran into

One of the major difficulties we encountered was a lack of sleep. HAHA! But on a more serious note, we struggled to find a way to display resource videos that was compatible with our codebase. We tried embedding videos, but they appeared in SignGPT only as empty black screens; even after verifying that the file paths were correct, the videos still would not display properly. Another major difficulty was finding a way for SignGPT to detect whether a user's signs are correct or incorrect so that it can provide accurate feedback.
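One common cause of black-screen videos in a React app is that a `src` path valid on disk is not valid in the built app: files placed in `public/` are served from the app's public URL, not from the source tree. A minimal sketch of building the `src` against the public URL (our guess at a fix; `resolveVideoSrc` is a hypothetical helper, not code from our repository):

```javascript
// Hypothetical helper: build a <video> src from the app's public URL.
// In a Create React App build, assets under public/ are served relative
// to process.env.PUBLIC_URL, not to the source file referencing them.
function resolveVideoSrc(publicUrl, relativePath) {
  const base = publicUrl.replace(/\/$/, ""); // drop trailing slash
  const path = relativePath.replace(/^\//, ""); // drop leading slash
  return `${base}/${path}`;
}

// In a component:
//   <video src={resolveVideoSrc(process.env.PUBLIC_URL, "videos/hello.mp4")} controls />
```

Alternatively, importing the video file directly (`import demo from "./demo.mp4"`) lets the bundler resolve the real URL at build time.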

Accomplishments that we're proud of

We are proud of having created and designed SignGPT, our precious brainchild! :) We had a lot of fun during the process, and we feel that our codebase faithfully reflects our Figma design.

What we learned

We learned how ChatGPT technology can be used to raise awareness of ASL and Deaf culture and to educate people. We also learned a great deal about translating our ideas into design and code: figuring out how to combine HTML, CSS, and React into a well-executed, beautiful expression of our vision was an enlightening experience for everyone involved. Along the way, we learned how to incorporate ASL video datasets and camera views into our code, and how to draw on each teammate's skills to collaborate effectively. Given the project's complexity and scale, continuing to work on it will surely involve a great deal of continuous learning :)

Built With

figma, html, css, react