AVATAR - TARDIS | Agora.io Challenge
The project aims to build a playground/plugin that developers can use to define gestures within video calls. For example: adding VFX like Doctor Strange's golden circles, Thor lit up with electricity flowing through him, or even sign language recognition (not finger-level fidelity) and human pose detection for a variety of tasks. Everything runs in the browser, on PC or mobile, using TensorFlow for the ML + AR and Agora to do the video-chat heavy lifting.
About
We've used the Agora.io stack to help people learn, instruct others, and participate in immersive AR experiences without requiring costly hardware, delivered via a webpage for PC and Android browsers. Selling hosting and add-ons would be another way for developers to create more use cases with our product, by incorporating the human pose detection data, which is computed on the device itself.
YouTube: https://youtu.be/sN4ITLhJ5zs
Steps to run
1. Clone this repo & `cd` into it.
2. Install node.js, then run:
   ```
   npm install --global surge
   ```
3. Run:
   ```
   surge
   ```
   and provide a domain of your choosing.
4. Voila, it will be hosted at your subdomain on surge.sh; visit it over https.
Modules used
Agora.io, with a token server for Agora on Heroku at https://tardis-demo.herokuapp.com/access_token?channel=test&uid=1234
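As a sketch of how the client talks to that token server: the base URL and its `channel`/`uid` query parameters come straight from the endpoint above, while the helper name and the `token` response field are assumptions for illustration.

```javascript
// Hypothetical helper: builds the token-server request URL from the
// channel name and numeric uid (mirrors the example URL above).
function buildTokenUrl(channel, uid) {
  const url = new URL('https://tardis-demo.herokuapp.com/access_token');
  url.searchParams.set('channel', channel);
  url.searchParams.set('uid', String(uid));
  return url.toString();
}

// Usage in the browser, before joining the Agora channel
// (the `token` field name in the JSON response is an assumption):
// const res = await fetch(buildTokenUrl('test', 1234));
// const { token } = await res.json();
```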
TensorFlow.js, PoseNet Model for realtime human pose detection
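To show how a gesture check could sit on top of PoseNet's output: the keypoint shape below matches what PoseNet's `estimateSinglePose()` returns (an array of named keypoints with pixel positions and confidence scores), but the helper name and the score threshold are our own assumptions, not part of the project's code.

```javascript
// Sketch: detect a "hand raised" gesture from a PoseNet pose object.
// `side` is 'left' or 'right'; `minScore` filters low-confidence keypoints.
function isHandRaised(pose, side = 'left', minScore = 0.5) {
  const get = (part) => pose.keypoints.find((k) => k.part === part);
  const wrist = get(side + 'Wrist');
  const shoulder = get(side + 'Shoulder');
  if (!wrist || !shoulder) return false;
  if (wrist.score < minScore || shoulder.score < minScore) return false;
  // Image coordinates grow downward, so a raised wrist has a smaller y.
  return wrist.position.y < shoulder.position.y;
}
```

A check like this can run every frame on the local video element, and its result can trigger the VFX overlay for that participant.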
Materialize CSS for theming, jQuery for helping with the JS.
Media
Built With
- agora.io
- javascript-html
- tensorflow.js
