Inspiration
Our inspiration came from brainstorming together about the problems people still face in this day and age, despite such incredible advancements in technology. We knew that the biggest barrier between people is language: not understanding one another, or not being able to communicate in a shared language. So we targeted American Sign Language (ASL) as the language we wanted to try to translate using neural networks.
What it does
With SignOn, you capture an image of a person performing a character from American Sign Language. Once captured, the image is sent to our neural network, which performs several transformations and outputs confidence percentages for the characters it thinks it sees.
How we built it
We built it using OpenCV and Python. Because both are cross-platform, it has the potential to run on other systems as well. A sketch of the capture-and-classify loop is below.
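For the curious, here is a minimal sketch of that loop. The `model` object, its Keras-style `predict` method, the 64×64 input size, and the A–Z label set are all stand-ins for illustration, not our exact code:

```python
# Minimal sketch of the capture-and-classify loop. `model` is a stand-in
# for a trained network (any Keras-style classifier with a predict()
# method); the preprocessing shown here is illustrative, not our exact code.
import cv2
import numpy as np

ASL_CHARACTERS = [chr(c) for c in range(ord("A"), ord("Z") + 1)]  # example label set

def classify_sign(model, frame: np.ndarray) -> dict:
    """Preprocess one captured frame and return per-character percentages."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)   # drop color channels
    resized = cv2.resize(gray, (64, 64))             # hypothetical input size
    batch = resized.astype("float32")[None, ..., None] / 255.0  # shape (1, 64, 64, 1)
    probs = model.predict(batch)[0]                  # one probability per character
    return {ch: round(100 * float(p), 1) for ch, p in zip(ASL_CHARACTERS, probs)}

cap = cv2.VideoCapture(0)   # default webcam
ok, frame = cap.read()
cap.release()
# if ok: print(classify_sign(model, frame))
```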
Challenges we ran into
Getting OpenCV and iOS to talk to each other. In particular, shuttling image data between the iOS app and the Python backend meant learning the hard way how Base64 encoding actually works.
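As a rough sketch of the piece that bit us: an OpenCV frame has to be compressed and Base64-encoded before it can travel as text between the iOS app and the Python backend. The helper names below are ours for illustration; our actual transport code differed:

```python
# Sketch of Base64 round-tripping an OpenCV frame (illustrative names).
import base64
import cv2
import numpy as np

def encode_frame(frame: np.ndarray) -> str:
    """JPEG-compress an OpenCV BGR frame and return it as a Base64 string."""
    ok, buffer = cv2.imencode(".jpg", frame)
    if not ok:
        raise ValueError("JPEG encoding failed")
    return base64.b64encode(buffer.tobytes()).decode("ascii")

def decode_frame(payload: str) -> np.ndarray:
    """Decode a Base64 string back into an OpenCV BGR image."""
    raw = base64.b64decode(payload)
    return cv2.imdecode(np.frombuffer(raw, dtype=np.uint8), cv2.IMREAD_COLOR)
```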
Accomplishments that we're proud of
Making a neural network work!
What we learned
Python scripting fairly well, and how to tell people they are about to participate in our hackathon project!
What's next for iSign
Increasing the number of recognized characters
Special Thanks
To all of the lovely mentors and students who helped us make this hack: thank you! Thanks as well to FSU's supercomputer lab; with its help, we trained our neural network EXCEPTIONALLY fast.