Background

Half a million people in the United States identify as deaf or hard of hearing. Accessibility has always been a huge problem, especially now that the number of people who know ASL is decreasing each year. The best time to learn a language is as a child, so we aim to teach ASL at the root. Our app, HelloSign, is a gamified e-learning web app focused on teaching young children ASL.

What is HelloSign?

HelloSign uses artificial intelligence to provide instant feedback on hand sign technique and augmented reality for an interactive and kinesthetic learning experience. Our main features include:

Lessons & Quizzes

Students can learn through our lessons and test their skills with our quizzes. The lessons are easy to follow, and augmented reality lets learners practice kinesthetically. The quizzes use machine learning to detect signs and provide real-time feedback.

Badges & Prizes

We took into account that kids find traditional e-learning boring, so we gamified it: you can earn badges and prizes by completing lessons and quizzes.

Friends & Leaderboard

We wanted to make sure you can interact and practice with your friends through video calls. We believe that socializing with friends is part of what makes learning fun, and the leaderboard adds a competitive aspect that encourages children to do their very best.

Donate Cryptocurrency

To further develop and maintain our free app, we give users the option to donate cryptocurrency, a digital payment system that uses blockchain technology to verify transactions without relying on banks.
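
As described under "How HelloSign was built," donations go through Ethers and MetaMask. The sketch below is a minimal illustration of that flow; the recipient address is a placeholder and the helper name is hypothetical, not our actual code.

```typescript
// Minimal donation sketch using Ethers (v5-style API) with the MetaMask provider.
// DONATION_ADDRESS and the helper name are placeholders, not HelloSign's real values.
import { ethers } from "ethers";

const DONATION_ADDRESS = "0x0000000000000000000000000000000000000000"; // placeholder

export async function donate(amountEth: string): Promise<void> {
  // MetaMask injects an EIP-1193 provider at window.ethereum.
  const provider = new ethers.providers.Web3Provider((window as any).ethereum);
  await provider.send("eth_requestAccounts", []); // ask the user to connect a wallet
  const signer = provider.getSigner();

  // MetaMask shows a confirmation dialog before the transaction is sent.
  const tx = await signer.sendTransaction({
    to: DONATION_ADDRESS,
    value: ethers.utils.parseEther(amountEth),
  });
  await tx.wait(); // resolve once the transaction is mined
}
```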

How HelloSign was built

HelloSign was built by a team consisting of both beginner and advanced hackers, including both designers and developers.

HelloSign’s design was created based on our audience, need-finding, user personas, and user flows.

HelloSign’s tech was built with React, Redux, Material-UI, Framer Motion, and many other technologies. The TensorFlow Object Detection API and Python were used to create our own machine learning model, which we then converted to TensorFlow.js and hosted on a cloud object store so that we could use it with our frontend. The backend was made with Node and Express, with MongoDB as our database to store user data. Cryptocurrency transactions are made possible with Ethers and MetaMask. Finally, EchoAR was used to view 3D hand sign models in augmented reality.
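
As a rough sketch of how the backend might store user data, here is a minimal Node/Express endpoint backed by MongoDB. We assume Mongoose as the ODM, and the schema, route, and connection string are illustrative placeholders rather than our exact backend.

```typescript
// Minimal Express + MongoDB sketch for storing quiz progress.
// Schema, route, and connection string are illustrative assumptions.
import express from "express";
import mongoose from "mongoose";

const Progress = mongoose.model(
  "Progress",
  new mongoose.Schema({
    userId: { type: String, required: true },
    lessonId: { type: String, required: true },
    score: Number,
    badges: [String],
  })
);

const app = express();
app.use(express.json());

// Save a quiz result so badges and the leaderboard can be computed later.
app.post("/api/progress", async (req, res) => {
  const saved = await Progress.create(req.body);
  res.status(201).json(saved);
});

async function main() {
  await mongoose.connect("mongodb://localhost:27017/hellosign"); // placeholder URI
  app.listen(3000, () => console.log("API listening on port 3000"));
}

main();
```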

Engineering

Our tech stack.


UX/UI

User personas helped guide us in the design of our app.

Hi-fi Prototypes

We selected a red, green, and blue color palette and a font, then developed the art style from there.

Challenges

We used a wide variety of technologies for our frontend, the main ones being React, Redux, JavaScript, HTML, and CSS. We also used Framer Motion for animations and Material-UI as a component library for icons and modals.
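
For instance, a badge card could pair a Material-UI icon with a Framer Motion animation roughly as follows; the component and its props are a hypothetical illustration, not our actual code.

```tsx
// Hypothetical badge card: Material-UI icon plus Framer Motion entrance/hover animation.
import React from "react";
import { motion } from "framer-motion";
import StarIcon from "@material-ui/icons/Star";

export function BadgeCard({ title }: { title: string }) {
  return (
    <motion.div
      initial={{ opacity: 0, y: 20 }}   // start faded out and slightly lower
      animate={{ opacity: 1, y: 0 }}    // slide/fade in when mounted
      whileHover={{ scale: 1.1 }}       // grow slightly on hover
      style={{ padding: 16, borderRadius: 8, textAlign: "center" }}
    >
      <StarIcon fontSize="large" />
      <p>{title}</p>
    </motion.div>
  );
}
```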

The real-time object detection model was trained through transfer learning with SSD MobileNet and the TensorFlow Object Detection API, using our own dataset that we labeled with LabelImg. As a result, it can detect hand signs in real time from your webcam with OpenCV. After training our own model, we converted it to TensorFlow.js and hosted it on a cloud object store. From there, we could use our machine learning model with our frontend React app.
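
On the frontend side, loading the converted model and running it on a webcam frame looks roughly like the sketch below. The model URL is a placeholder, and the output tensor order is an assumption (it depends on how the model was exported), so treat this as a hedged outline rather than our exact code.

```typescript
// Hedged sketch: load the converted TensorFlow.js model from object storage
// and run detection on the current webcam frame. URL and output order are assumptions.
import * as tf from "@tensorflow/tfjs";

const MODEL_URL = "https://storage.example.com/hellosign/model.json"; // placeholder

let model: tf.GraphModel | null = null;

export async function detectSign(video: HTMLVideoElement) {
  if (!model) model = await tf.loadGraphModel(MODEL_URL);

  // Grab the current frame as an int32 batch of one image, as SSD exports expect.
  const input = tf.tidy(() => tf.browser.fromPixels(video).expandDims(0).toInt());

  // Models exported from the TF Object Detection API typically return detection
  // boxes, scores, and class ids; the exact order depends on the export.
  const output = (await model.executeAsync(input)) as tf.Tensor[];
  const [boxes, scores, classes] = await Promise.all(output.map((t) => t.array()));

  input.dispose();
  output.forEach((t) => t.dispose());
  return { boxes, scores, classes };
}
```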

The most frustrating part was creating our own dataset from scratch, which was very large and time-consuming. Another challenge was implementing bounding boxes for the image recognition; the feature itself wasn't strictly necessary, but it greatly improves the user interface. We also struggled to provide real-time feedback and scoring, to figure out Base64 encoding, and to integrate all of these components within a short period of time.
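
For reference, the Base64 step boils down to drawing the current webcam frame onto a canvas and serializing it. The helper below is a generic browser-side sketch, not our exact implementation.

```typescript
// Generic sketch: capture the current webcam frame and encode it as Base64,
// e.g. to send a snapshot to the backend alongside a quiz attempt.
export function frameToBase64(video: HTMLVideoElement): string {
  const canvas = document.createElement("canvas");
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;

  const ctx = canvas.getContext("2d");
  if (!ctx) throw new Error("2D canvas context unavailable");
  ctx.drawImage(video, 0, 0, canvas.width, canvas.height);

  // toDataURL returns "data:image/jpeg;base64,<payload>"; keep only the payload.
  return canvas.toDataURL("image/jpeg").split(",")[1];
}
```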

Accomplishments we are proud of

We are proud that we managed to polish and finish our app! We finished slightly ahead of the hackathon deadline, so we decided to add extra details with Material-UI icons and Framer Motion animations to make our user interface look more professional and organized.

What we learned

We learned to work remotely from each other, since this was an online hackathon. We relied on Discord for communication, Google Docs for brainstorming, Figma for our design, and more. The technologies and APIs we used were a fun and worthwhile challenge as well.

What’s next?

If we had more time to work on our project, we would have added many more features, including making our own 3D models to view in AR and providing more lessons and quizzes. We would also train our machine learning model to recognize more hand signs, including signs from other sign languages such as Chinese Sign Language and French Sign Language. Ultimately, we hope to see the app launch one day; we want to encourage more children to learn ASL and socialize!
