Inspiration
As anime fans, we all enjoy the classics, and Naruto is one of the main ones. It was always fun imagining what we could do in that world. So we thought, why not bring that world to us?
What it does
The idea was to have the app detect what sort of symbol the user was making with their hands. Once a certain sequence of symbols was made, the appropriate animation would play (a fireball, summoning a toad, etc.).
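The sequence-matching part could be sketched like this. Note that the sign names and jutsu sequences below are made up for illustration; the write-up doesn't list the actual labels or sequences the project used.

```python
from collections import deque

# Hypothetical hand-sign labels and jutsu sequences (not the project's real ones).
JUTSU_SEQUENCES = {
    ("tiger", "snake", "ram"): "fireball",
    ("boar", "dog", "bird"): "summon_toad",
}

MAX_LEN = max(len(seq) for seq in JUTSU_SEQUENCES)


def make_matcher():
    """Return a function that accepts one detected hand sign at a time
    and returns the name of the matched animation, or None."""
    history = deque(maxlen=MAX_LEN)

    def feed(sign):
        history.append(sign)
        recent = tuple(history)
        # Trigger when a known sequence is a suffix of the recent signs.
        for seq, animation in JUTSU_SEQUENCES.items():
            if recent[-len(seq):] == seq:
                history.clear()  # reset after a successful match
                return animation
        return None

    return feed
```

For example, feeding `"tiger"`, `"snake"`, `"ram"` in order would return `None`, `None`, then `"fireball"`.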
How we built it
In order to capture the user's motion and display the animations, the plan was to use Unity and Google ARCore. We took an image every few frames and sent it to our server, where it was processed to check whether it contained any hands. If so, that part of the image would be sent to the Google Vision AI model that we had trained with our own images and labels, which would return the type of symbol the hands were making. From there, the plan was to put the symbols in order and display an animation if they matched a preset sequence.
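The server-side loop described above could be sketched as follows. The hand detector and the classifier are stubbed out here; in the real project the classifier was the custom-trained Google Vision AI model reached over the network, and the hand detection used an ML library not named in the write-up.

```python
def detect_hands(image_bytes):
    """Stub: return the cropped hand region, or None if no hands are found.
    (The real project used an ML library here.)"""
    return image_bytes  # placeholder: pretend the whole frame is the crop


def classify_sign(hand_crop):
    """Stub: return the predicted hand-sign label.
    (The real project called a custom-trained Google Vision AI model.)"""
    return "tiger"  # placeholder prediction


def process_frame(image_bytes, sign_buffer):
    """Handle one frame sent from the Unity client: detect hands,
    classify the sign, and append it to the running sequence."""
    crop = detect_hands(image_bytes)
    if crop is None:
        return sign_buffer  # no hands in this frame; sequence unchanged
    sign_buffer.append(classify_sign(crop))
    return sign_buffer
```

Once the buffer matches a preset sequence, the server would tell the Unity client which animation to play.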
Challenges we ran into
The main issue was getting Unity to work. Problem after problem occurred until, eventually, the teammate working on the Unity portion was unable to open the project at all! Because of this, the client side of the project could not be completely finished. In addition, choosing which ML libraries to use, as well as decoding and encoding images from Google ARCore, proved to be quite a problem. Another issue was figuring out how OAuth worked, which turned out to be required for the Google Vision AI requests.
Accomplishments that we are proud of
The major accomplishment we are proud of is getting the image recognition AI trained and actually able to detect images. In addition, learning to do networking from Unity was something new for us as well.
What's next for Ninjutsu Simulator
Once we get Unity fixed, finishing the animations would be great.