💡 Inspiration

Riding the trend of being artistic, our group thought of creating either a platform that brings out a person's artistic side, a space that showcases other people's artwork, or a tool that assists the art scene. From there, we decided that our platform should have as many interactive elements as possible — and what better way for users to interact than with their own bodies? So we explored AI and ML models that detect users' actions and motions, and built our platform around them!

🔍 What it does

Just a fun little art exhibit! We created an art exhibition as a way to try out new visualization stacks. Play around with our interactive features and minigames as we use our new skills to bring out a more immersive exhibition.

🔧 How we built it

Initially, we tried to integrate MediaPipe Pose into our system architecture. After further research, however, we decided instead to practice using the TensorFlow.js pre-trained pose-detection models. This was less straightforward than expected, as quite a few dependencies and existing modules had been deprecated in favor of other frameworks, which led to a lot of experimenting and researching to replace and update the open-source code we found. Additionally, we used Pixi.js to create the game canvases that connect the pose-detection APIs to the game mechanics we built. At the base, we used ReactJS, HTML, and CSS to build the website.
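To give a feel for the pose-to-game bridge: TensorFlow.js pose models return keypoints in the pixel space of the source video, which rarely matches the game canvas size, so each keypoint has to be rescaled before it can drive Pixi.js objects. The sketch below (all names, like `mapKeypointToStage`, are hypothetical, not our exact code) shows the idea:

```javascript
// Map a pose-detection keypoint (in video pixel space) into the
// coordinate space of a Pixi.js stage of a different size.
function mapKeypointToStage(keypoint, video, stage) {
  return {
    x: (keypoint.x / video.width) * stage.width,
    y: (keypoint.y / video.height) * stage.height,
    score: keypoint.score, // confidence, used to skip unreliable points
  };
}

// Example: a 640x480 webcam keypoint mapped onto an 800x600 game canvas.
const mapped = mapKeypointToStage(
  { x: 320, y: 240, score: 0.9 },
  { width: 640, height: 480 },
  { width: 800, height: 600 }
);
// mapped is { x: 400, y: 300, score: 0.9 }
```

In a game loop, the mapped coordinates would then be assigned to a Pixi sprite's position each frame.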

🏃‍♂️ Challenges we ran into

We were initially confused by the TensorFlow.js pose-detection API, since many of the examples we found used outdated APIs. The app was sometimes laggy, and our limited understanding of ML restricted our ability to resolve latency issues in the open-source models. We also had to deal with the webcam input feeding the model a flipped image; we managed to find an interesting workaround that ultimately resolved this particular issue.
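For context on the flipped-image issue: a webcam feed acts like a mirror, so a raised right hand appears on the left of the frame and keypoint x-coordinates come out reversed. One workaround in the same spirit as ours (the function name here is hypothetical) is to mirror every keypoint's x-coordinate across the frame width after detection:

```javascript
// Mirror pose keypoints horizontally so on-screen motion matches the
// user's actual motion (undoes the webcam's mirror effect).
function mirrorKeypoints(keypoints, frameWidth) {
  return keypoints.map((kp) => ({ ...kp, x: frameWidth - kp.x }));
}

const flipped = mirrorKeypoints(
  [{ name: 'right_wrist', x: 100, y: 50, score: 0.8 }],
  640
);
// flipped[0].x === 540; y and score are unchanged
```

The TensorFlow.js pose-detection package also exposes a `flipHorizontal` option on `estimatePoses` for the same purpose.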

🏅 Accomplishments that we're proud of

We are proud that we got a working model that identifies poses with acceptable latency. We also take pride in tackling ReactJS together with Pixi.js without using the existing binding library, reactPIXI. Overall, it was quite a feat to create this many features within the timeframe given for this hackathon (2 days).

🧠 What we learned

How to use a pre-trained pose-detection model! This was a new concept to us, as we had not dived deep into AI or ML in a few months, and the furthest we had explored as a team was HandTrack.js, another existing, simplified ML library. Furthermore, we set up the basics of Pixi faster than expected and were able to create generative art, which we found cool!
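As a minimal sketch of the kind of generative art we mean: much of it boils down to computing a parametric curve as a list of points that a Pixi.js `Graphics` object can trace with `moveTo`/`lineTo`. The rose curve below (r = cos(kθ); function and parameter names are illustrative, not our actual code) is one such flower-like shape:

```javascript
// Generate points along a rose curve r = radius * cos(k * theta),
// suitable for tracing with a Pixi.js Graphics object.
function roseCurvePoints(k, steps, radius) {
  const points = [];
  for (let i = 0; i <= steps; i++) {
    const theta = (i / steps) * 2 * Math.PI;
    const r = radius * Math.cos(k * theta);
    points.push({ x: r * Math.cos(theta), y: r * Math.sin(theta) });
  }
  return points;
}

const petals = roseCurvePoints(3, 360, 100); // 361 points, 3-petal rose
// the first point lies on the positive x-axis at the full radius:
// petals[0] is approximately { x: 100, y: 0 }
```

Varying `k` or the radius over time, and redrawing each frame, gives the animated, "growing" look that fits the garden theme.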

⏭️ What's next for Interactive Art Garden

Moving forward, we want to link our features together and improve them around the theme of the art garden. Currently our features are rather detached from each other and implement only the basics of the game design and exhibition feel we hope to achieve. To improve platform usability, more information and instructions could be provided for a more intuitive experience. Given more time and assistance, we believe we can bring forth a platform that better engages a younger audience in learning about gardening and green landscapes through our interactive activities.
