- Creating a humanoid model using Maya
- Using computer vision to determine joints and limbs
Virtual reality often isolates the user from the world around them, creating an immersive refuge for gamers who strap on their headsets. But we decided to do away with convention. Our goal was to break down those barriers and bring elements of the real world into the fray. Welcome to a virtual world that others can simply step into. No strings attached.
What it does
"Where's Rares" drops users into an immersive 3D environment, far removed from their computers and desk chairs. There are no NPCs to be found in the source code. Characters are generated on the fly using the camera on the rear of the Gear VR. Computer vision algorithms analyze live video to detect people, piping this information through a trained neural network to identify features such as the face, torso, elbows, and other joints of the human skeleton. A virtual skeleton is then rendered and used to control the movements of a character within the 3D world. Magic pixies dancing through the Samsung's quad-core processors at 1.21 Jiggahertz animate these characters, bringing the real world into the virtual one.
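The "identify joints" step above can be sketched roughly as follows. This is a hypothetical, stdlib-only illustration, not our actual inference code: pose-estimation networks typically emit one confidence map per joint, and the peak of each map is taken as that joint's 2D position. The joint names and map sizes here are made up for the example.

```python
# Hypothetical joint set; the real network's output layout is an assumption.
JOINTS = ["head", "neck", "l_elbow", "r_elbow", "l_wrist", "r_wrist"]

def heatmaps_to_joints(heatmaps):
    """Take the peak of each per-joint confidence map as that joint's (x, y).

    heatmaps: one 2D grid (list of rows) of scores per joint, in JOINTS order.
    Returns {joint name: (x, y)} in pixel coordinates.
    """
    joints = {}
    for name, grid in zip(JOINTS, heatmaps):
        best_xy, best_score = (0, 0), float("-inf")
        for y, row in enumerate(grid):
            for x, score in enumerate(row):
                if score > best_score:
                    best_xy, best_score = (x, y), score
        joints[name] = best_xy
    return joints

# Toy maps: one bright cell per grid stands in for real network output.
maps = [[[0.0] * 8 for _ in range(8)] for _ in JOINTS]
maps[0][2][1] = 1.0  # "head" peak at x=1, y=2
print(heatmaps_to_joints(maps)["head"])  # (1, 2)
```

Once every joint has a 2D position, the skeleton is just a matter of connecting the dots in a fixed parent-child order.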
How we built it
- OpenCV for computer vision support, providing the tools for feature detection, attribute tracking, etc.
- Unity and the Oculus Mobile SDK for interfacing with the Gear VR to create the 3D environment
- Berkeley Vision and Learning Center's Caffe - a deep learning framework providing an extensive amount of sample data for feature recognition
- Coffee and Sugary Snacks
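To give a feel for how the detected joints end up driving the rig in Unity, here is a minimal sketch, under our own simplifying assumptions (2D joints, a hypothetical `limb_angle` helper not taken from the actual codebase): the bend angle at a joint like the elbow is computed from its two neighbouring joints and fed to the character's bone rotation.

```python
import math

def limb_angle(parent, joint, child):
    """Interior angle (degrees) at `joint`, formed by its two neighbours.

    E.g. shoulder-elbow-wrist gives the elbow bend that the character rig
    applies to its forearm bone. Points are (x, y) pixel coordinates.
    """
    def heading(a, b):
        # Direction of the ray from a to b, in radians.
        return math.atan2(b[1] - a[1], b[0] - a[0])

    theta = heading(joint, parent) - heading(joint, child)
    deg = abs(math.degrees(theta)) % 360.0
    # Always report the interior angle (0..180).
    return 360.0 - deg if deg > 180.0 else deg

# A straight arm (shoulder, elbow, wrist on one line) bends ~180 degrees.
print(limb_angle((0, 0), (1, 0), (2, 0)))
# A right-angle bend at the elbow gives ~90 degrees.
print(limb_angle((0, 0), (1, 0), (1, 1)))
```

In the real pipeline this per-joint angle (or a full rotation) would be smoothed over frames before being applied, since raw per-frame detections jitter.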
Challenges we ran into
- It took the better part of two days to get the build environment set up, because our vision component was based on research papers, and researchers don't know how to code
- May have developed diabetes
Accomplishments that we're proud of
- Ryan asked that cute girl out
- Doing absolutely nothing and reading C++ forums for two days (Rares)