We found that almost all (if not all) virtual pet apps follow a hard-coded procedure, and players must do a lot of tedious, repetitive tasks. Once all those tasks are done, the only purpose of playing is keeping the pet alive, since there is nothing left to explore. On the other hand, today's game AI is almost always adversarial to human players, like DeepMind's StarCraft AI or AlphaGo. To the best of our knowledge, there is no game AI that actually works with human players.
We are trying to (1) build game AI that learns from human players, works with them, and even grows with them (especially for kids); (2) make a virtual pet game whose roadmap is largely unpredictable: different players may raise pets with unique characteristics and "real" emotions; (3) let pets talk to each other and share their feelings and interesting stories, helping their masters extend their social networks (people tend not to chat with strangers, but if their pets do, the pets can help their masters start a conversation); (4) collect, as more users play, a massive dataset covering almost all scenarios of human daily life, far more valuable than any dataset we currently have in academia. We believe what we are doing is a promising path toward AGI (artificial general intelligence) that brings positive effects to society.
What it does
We set out to make an AI virtual pet that progressively grows its own characteristics and learns to express emotions through continuous interaction with humans. Moreover, pets can talk to each other in a virtual "playground," so they learn not only from interactions with their masters but also from other pets. Some pets may be more peaceful because they follow peaceful masters; others may crave adventure because they have seen other pets go hiking or traveling with their masters.
Given the limited time of the hackathon, we did our best to convey the idea completely within a few scenarios: (1) a pet can interact with its master by looking at photos its master takes of it (the game has a photo-taking button); (2) the pet can respond with its feelings about the photo using GIFs (from Giphy); (3) a pet receives images from other pets (with their masters' permission) and shares these new stories with its own master by showing the images and its feelings about them.
How we built it
We explored multiple game frameworks (e.g. cocos2d-x, react-native, iOS SpriteKit) and different machine learning models (e.g. a classification network trained on the MS COCO dataset, YOLO, iOS face detection and facial landmark detection). We also explored the sponsors' software offerings (e.g. Giphy, Google Cloud Platform).
In practice, we used react-native as our game engine, and built a multi-instance object detection and recognition pipeline with YOLO + TensorFlow.js + Node.js, deployed on Google Cloud Platform and exposed through custom APIs. We also designed an emotion encoder and an emotion scoring mechanism; the score is converted into keywords used to find memes via the Giphy API, conveying the pet's feelings to its master. Finally, we built a demo of the hidden playground, where the pet learns interesting stories from other pets.
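As a minimal sketch of the scoring step (the labels, weights, thresholds, and keywords below are illustrative assumptions, not our actual tuned values): detected object labels from the YOLO pipeline are summed into an emotion score, the score is bucketed into a search keyword, and the keyword is used to build a query against Giphy's real search endpoint.

```javascript
// Hypothetical emotion-scoring sketch. The weight table and thresholds
// are placeholders chosen for illustration, not the values from our demo.
const EMOTION_WEIGHTS = { dog: 2, cat: 2, pizza: 3, person: 1, knife: -3 };

// Sum the weights of all labels the detection pipeline returned.
function scoreEmotion(labels) {
  return labels.reduce((sum, label) => sum + (EMOTION_WEIGHTS[label] || 0), 0);
}

// Bucket the score into a Giphy search keyword for the pet's reaction.
function scoreToKeyword(score) {
  if (score >= 3) return "excited dog";
  if (score >= 1) return "happy dog";
  if (score <= -1) return "scared dog";
  return "curious dog";
}

// Build the Giphy search URL (real endpoint; the API key is a placeholder).
function giphySearchUrl(keyword, apiKey) {
  const params = new URLSearchParams({ api_key: apiKey, q: keyword, limit: "1" });
  return `https://api.giphy.com/v1/gifs/search?${params}`;
}
```

In the game, the resulting GIF URL is what the pet shows its master as a reaction to the photo it was given.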
Challenges we ran into
The biggest challenges came from integrating the sub-components. We found no game engine that readily supports machine learning applications on the iOS platform. We tried cocos2d-x first but ran into many difficulties, such as using the camera and integrating TensorFlow models. We then switched to react-native, which is a bit easier to program, but lots of random errors popped up and severely prolonged development.
Accomplishments that we're proud of
At the beginning, we were quite lost about how to move forward: our goal is ambitious, and there did not seem to be a good balance between the hackathon's time limit and conveying our core concepts intuitively (in other words, a demo that is feasible to implement yet sufficient to convey the idea). But as the project progressed and more discussions came up, we gradually consolidated our ideas into a few specific scenarios for the game demo.
Additionally, we successfully tested several different machine learning models, created an online API from a YOLO implementation, and deployed it on Google Cloud Platform.
What we learned
During the hackathon, we actually learned a lot! We tried different game frameworks and ML frameworks, which improved our technical skills considerably, and we learned how to collaborate as a team to run a project from scratch in such a short time. More importantly, we had a wonderful time just making our hack work!
What's next for PennPets.ai
The next things to do for our game are threefold:
Improving the graphics and the look of the pets. We are considering making the game 3D, and we may offer more pet styles.
Improving the machine learning models behind the scenes. The ML pipeline we currently use is fairly simple and relies on strong assumptions. We may adopt models suited to time-series data, such as RNNs and deep reinforcement learning.
Completing the playground and multi-pet communication. We will continue building out the playground and multi-pet communication features, including discovery of interesting stories and sharing of stories and feelings between pets.