Inspiration

I've been doing a lot of WebVR work lately and have been interested in learning more about WebRTC. GigHacks seemed like a good opportunity to merge the two.

How it works

One browser creates a WebRTC peer connection to the other and sets up a data channel. Each browser acquires Kinect depth data over a web socket and uses it to update a local WebGL particle system. It also sends the depth data across the data channel to the other browser, which uses it to update a second particle system positioned opposite the camera. The effect is that you see particles representing your hands in blue in the foreground and the other person in orange facing you. Running this on Chromium or Firefox Nightly with WebVR lets you view the scene as an immersive 3D environment.
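
Roughly, the plumbing looks like the sketch below (TypeScript). The signaling exchange, the ws://localhost:8080/depth Kinect bridge URL, and the particle-update hooks are illustrative assumptions, not the project's actual names:

```typescript
// Sketch: forward Kinect depth frames over a WebRTC data channel.
// Signaling (offer/answer/ICE exchange) is assumed to happen elsewhere;
// the answering browser would listen on pc.ondatachannel instead.

const pc = new RTCPeerConnection({
  iceServers: [{ urls: "stun:stun.l.google.com:19302" }],
});

// Unordered, unreliable delivery: a late depth frame is worthless.
const channel = pc.createDataChannel("depth", {
  ordered: false,
  maxRetransmits: 0,
});
channel.binaryType = "arraybuffer";

// Hypothetical local bridge exposing Kinect depth frames over a web socket.
const kinect = new WebSocket("ws://localhost:8080/depth");
kinect.binaryType = "arraybuffer";

kinect.onmessage = (e: MessageEvent<ArrayBuffer>) => {
  updateLocalParticles(e.data);        // blue hands in the foreground
  if (channel.readyState === "open") {
    channel.send(e.data);              // same frame goes to the peer
  }
};

channel.onmessage = (e: MessageEvent<ArrayBuffer>) => {
  updateRemoteParticles(e.data);       // orange peer, positioned opposite
};

// Stand-ins for the WebGL particle system updates.
function updateLocalParticles(depth: ArrayBuffer): void { /* upload to point cloud */ }
function updateRemoteParticles(depth: ArrayBuffer): void { /* upload to mirrored cloud */ }
```

An unordered, zero-retransmit channel suits this traffic: a depth frame that arrives late is useless, so there is no point resending it.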

Challenges I ran into

WebRTC is complicated. I spent most of my time simply trying to send a text message between browsers. WebRTC data channels limit the size of a single message, so the geometry data that can be sent in a single frame is currently capped at 64K. Exposing a Kinect over a web socket also turned out to be more complicated than I anticipated.
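
One way to live inside that cap is to decimate the depth frame before sending. A rough sketch, assuming the Kinect v2's 512x424 16-bit depth image (the resolution and stride logic are illustrative, not what the project actually ships):

```typescript
// Sketch: decimate a 16-bit depth frame until it fits in one <64 KB message.
const MAX_MESSAGE_BYTES = 64 * 1024;

function packDepthFrame(
  depth: Uint16Array,
  width = 512,   // Kinect v2 depth resolution (assumed)
  height = 424,
): Uint16Array {
  let stride = 1;
  while (Math.ceil(width / stride) * Math.ceil(height / stride) * 2 > MAX_MESSAGE_BYTES) {
    stride *= 2;
  }
  const outW = Math.ceil(width / stride);
  const outH = Math.ceil(height / stride);
  const out = new Uint16Array(outW * outH);
  for (let y = 0; y < outH; y++) {
    for (let x = 0; x < outW; x++) {
      out[y * outW + x] = depth[y * stride * width + x * stride];
    }
  }
  return out; // channel.send(out.buffer) now fits in a single message
}
```

A raw 512x424 frame is about 434 KB; a stride of 4 brings it down to roughly 27 KB, comfortably under the message cap.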

Accomplishments that I'm proud of

I'm proud of getting the WebRTC connection working and moving the data through the system fast enough to maintain high refresh rates. This was also my first experience with WebGL particle effect shaders, and I felt the visual effect came off well.
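
For the curious, the depth-to-particles step can look something like the sketch below. It assumes three.js (the project may use raw WebGL), reuses the 128x106 downsampled frame from the earlier sketch, and shows a CPU-side update for clarity where the project uses particle shaders; the unprojection constants are placeholders:

```typescript
import * as THREE from "three";

// Sketch: drive a point cloud from a depth frame.
const W = 128, H = 106;                  // downsampled frame size (assumed)
const positions = new Float32Array(W * H * 3);
const geometry = new THREE.BufferGeometry();
geometry.setAttribute("position", new THREE.BufferAttribute(positions, 3));
const particles = new THREE.Points(
  geometry,
  new THREE.PointsMaterial({ color: 0x3399ff, size: 0.01 }), // blue = local hands
); // scene.add(particles) in the render setup

function updateParticles(depth: Uint16Array): void {
  for (let i = 0; i < W * H; i++) {
    const z = depth[i] / 1000;                 // millimetres -> metres
    const x = ((i % W) / W - 0.5) * z;         // crude unprojection, made-up FOV
    const y = (0.5 - Math.floor(i / W) / H) * z;
    positions.set([x, y, -z], i * 3);
  }
  geometry.attributes.position.needsUpdate = true;
}
```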

What I learned

WebRTC is complicated, but not really hard, and everything goes down better with VR.

What's next for Kinect Across

Chunked data for larger 3D data frames (sketched below). A signaling server to communicate offers and ICE candidates. 3D data sources other than the Kinect. A better Kinect web socket solution. Attaching camera position to skeleton tracking data. On and on...
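
Of these, chunking is the most concrete to sketch: split each frame into sub-64 KB messages with a tiny header so the receiver can reassemble them. The header layout here is invented for illustration:

```typescript
// Sketch: split one frame into sub-64 KB messages the receiver can reassemble.
// Header layout is made up: [frameId: u32][chunkIndex: u16][chunkCount: u16].
const CHUNK_PAYLOAD = 64 * 1024 - 8;

function* chunkFrame(frame: ArrayBuffer, frameId: number): Generator<ArrayBuffer> {
  const chunkCount = Math.ceil(frame.byteLength / CHUNK_PAYLOAD);
  for (let i = 0; i < chunkCount; i++) {
    const payload = frame.slice(i * CHUNK_PAYLOAD, (i + 1) * CHUNK_PAYLOAD);
    const msg = new ArrayBuffer(8 + payload.byteLength);
    const view = new DataView(msg);
    view.setUint32(0, frameId);      // which frame this chunk belongs to
    view.setUint16(4, i);            // chunk index
    view.setUint16(6, chunkCount);   // total chunks in this frame
    new Uint8Array(msg, 8).set(new Uint8Array(payload));
    yield msg;
  }
}

// Usage: for (const msg of chunkFrame(frameBuffer, frameId)) channel.send(msg);
```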

Built With

WebRTC, WebGL, WebVR, WebSockets, Kinect