We wanted to create an app that incorporated computer vision into something educational. That led us to the idea of helping people learn to write Chinese characters by hand, and so Mashi Mashi was born!

What it does

With a user interface built to cater to people of all ages, the app lets the user take a picture of their hand-drawn character on white paper and upload it. The app then compares the drawing to the given character and reports whether or not it was written correctly. The app itself has two modes:
- Game Mode: The app gives you a random word (the English word, not the character) and asks you to submit a picture of how you believe the character is written. It then checks whether or not you wrote it correctly.
- Learn Mode: You can look up a word in the app, and if the word exists, the app shows you the character. You then write it on a piece of paper, take a picture of it, and upload it to the app. Again, it will tell you whether or not it was drawn correctly.
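In both modes the final step is the same: the classifier's prediction for the uploaded photo is compared against the expected character. A minimal sketch of that decision logic (the function name and confidence threshold here are hypothetical illustrations, not our exact code):

```python
def grade_submission(predicted_char: str, confidence: float,
                     expected_char: str, threshold: float = 0.6) -> bool:
    """Return True if the drawing is accepted as correct.

    predicted_char and confidence come from the classifier; the threshold
    guards against accepting low-confidence guesses.
    """
    return predicted_char == expected_char and confidence >= threshold

# e.g. the model is 92% sure the photo shows 三, and 三 was expected
print(grade_submission("三", 0.92, "三"))  # True
print(grade_submission("二", 0.92, "三"))  # wrong character -> False
```

The threshold is a design choice: a blurry photo that the model barely recognizes is safer to reject than to accept.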

How we built it

We built an app and trained models of Chinese characters, using a database found on Kaggle as a "template":
- App: We used Xcode and the Swift programming language to create the storyboard of the application.
- Computer vision: Our database consists of the 15 Chinese numeral characters (一, 二, 三, 四, 五, 六, ...) written by hand. We used 330 of the pictures for the initial version of the model and are currently training on the full database of 15,000 images to create a more thorough model. For this we used ImageAI, an "easy to use Computer Vision Python Library".
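Training with ImageAI can be sketched roughly as below. The class list, directory name, and hyperparameters are assumptions for illustration (the full character list is our guess at the dataset's 15 numeral classes), and ImageAI's argument names vary between versions; this follows the `ClassificationModelTrainer` interface and is an untested sketch, not our exact script.

```python
# Assumed class list for the 15 handwritten numeral characters in the
# Kaggle dataset (exact folder names depend on the dataset).
CLASSES = ["零", "一", "二", "三", "四", "五", "六", "七",
           "八", "九", "十", "百", "千", "万", "亿"]

if __name__ == "__main__":
    # ImageAI expects chinese_digits/train/<class>/ and
    # chinese_digits/test/<class>/ folders of images.
    from imageai.Classification.Custom import ClassificationModelTrainer

    trainer = ClassificationModelTrainer()
    trainer.setModelTypeAsResNet50()
    trainer.setDataDirectory("chinese_digits")
    # Slow step: this ran for hours on our hardware even with 330 images.
    # Argument names differ across ImageAI versions; these are illustrative.
    trainer.trainModel(num_objects=len(CLASSES), num_experiments=50,
                       batch_size=32)
```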
- Link between the two: To connect our models with the application, we used Firebase. The app pushes images to Firebase, which the computer vision models then pull; the models in turn push result variables to Firebase, which the app pulls back into the UI.
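On the Python side, that exchange could look like the sketch below, using the official `firebase_admin` SDK. The bucket name, database URL, file paths, and the `classify()` helper are all hypothetical stand-ins, and the SDK calls only run once real credentials are supplied.

```python
def result_payload(expected: str, predicted: str) -> dict:
    """The variables the model pushes to Firebase for the app to pull."""
    return {"expected": expected,
            "predicted": predicted,
            "correct": expected == predicted}

def process_submission(image_path: str, expected: str) -> None:
    # Imported here so the sketch reads without the SDK installed.
    import firebase_admin
    from firebase_admin import credentials, storage, db

    firebase_admin.initialize_app(
        credentials.Certificate("serviceAccountKey.json"),   # hypothetical
        {"storageBucket": "mashi-mashi.appspot.com",         # hypothetical
         "databaseURL": "https://mashi-mashi.firebaseio.com"})

    # 1. Pull the photo the app pushed to Firebase Storage.
    storage.bucket().blob(image_path).download_to_filename("submission.jpg")

    # 2. Run the computer vision model (classify() is a stand-in).
    predicted = classify("submission.jpg")

    # 3. Push the verdict back; the app pulls it from this path.
    db.reference("results/latest").set(result_payload(expected, predicted))
```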

Challenges we ran into

On Friday, none of us knew how to code in Swift or use any of the programs mentioned above. Learning and implementing all of this in a weekend was a real challenge. Perhaps the most frustrating part of the entire process, though, was the constant waiting we had to endure: from downloading Xcode to creating the models, each step could run for hours with little for us to do but wait for it to finish, mostly due to the limitations of the hardware we had at hand.

Accomplishments that we're proud of

Our proudest achievement is the app itself. Being able to upload an image to the application and analyze it with computer vision models is another highlight. Above all, though, we are most proud of having created an easy-to-use UI that can genuinely help people gain knowledge.

What we learned

Our biggest takeaway from this weekend may be just how much time creating and implementing computer vision takes: training the model took us over six hours for a sample of only 330 images. Not only can it be a challenge to code, but you also have to be patient while your models train. We also learned about the hardware limitations involved in AI and computer vision. Finally, we gained a useful skill in programming iOS apps, which may help us in future projects.

What's next for Mashi Mashi - A Chinese Learning App

In the near future, we would like to expand our database, moving from digits to full numbers and perhaps even words and phrases. We are also very interested in extending the app to detect where the problems are in a drawn character, for example a missing stroke. Further in the future, we could see Mashi Mashi becoming a platform hosting a multitude of languages that do not use the Latin alphabet, such as Japanese, Arabic, and Korean.

Built With

- Swift / Xcode
- Python / ImageAI
- Firebase