We were inspired to build Viro to create safety for everyone by keeping people out of harm's way.
What it Does
Think of Viro as an extension of yourself. Be in two places at the same time, or venture into unexplored environments off limits to humans.
How we Built Viro
We interfaced head-tracking and accelerometer data from the Oculus Rift, along with motion-tracking and EMG data from the Myo armband, to control the robot. The robot then sent a live video stream back to our computers so we could see where it was going.
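To illustrate the control mapping, here is a minimal sketch of how headset orientation could drive a differential-drive robot. This is a hypothetical reconstruction, not our exact code: the function name, angle-to-speed scaling, and the assumption that the robot takes per-wheel speed commands are all illustrative.

```python
import math

def orientation_to_drive(yaw_rad, pitch_rad, max_speed=1.0):
    """Map headset orientation to (left, right) wheel speeds.

    Turning the head left or right steers the robot; tilting forward or
    back sets the base speed. Both inputs are clamped so extreme head
    motion saturates rather than overflows.
    """
    # Full speed at a 45-degree forward tilt (negative pitch = looking down)
    forward = max(-1.0, min(1.0, -pitch_rad / (math.pi / 4)))
    # Full-rate turn at a 90-degree head rotation
    turn = max(-1.0, min(1.0, yaw_rad / (math.pi / 2)))
    left = forward + turn
    right = forward - turn
    # Rescale so neither wheel exceeds max_speed
    scale = max(1.0, abs(left), abs(right))
    return (left / scale * max_speed, right / scale * max_speed)
```

For example, looking straight ahead while tilting 45 degrees forward yields full speed on both wheels, while a pure 90-degree head turn spins the wheels in opposite directions.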
Challenges we Ran Into
Byte-array data generated by the robot's camera was hard to parse into an image or video file. We are still having trouble reconstructing images in real time from the raw binary data and incorporating those frames into the Oculus view. Furthermore, because of how the Oculus environment works, the virtual camera tracking driven by head movements is glitchy in software alone; mounting a physical 3D camera rig on top of the robot may be needed to improve user interaction.
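One common way to carve a raw camera byte stream into displayable frames, which may be close to what this problem needs, is to scan for JPEG frame markers (the approach MJPEG streams use). The sketch below is an assumption about the stream format, not a description of the robot's actual protocol.

```python
def extract_jpeg_frames(buffer: bytes):
    """Split a raw byte stream into complete JPEG frames.

    MJPEG-style streams delimit each frame with the JPEG start-of-image
    marker (FF D8) and end-of-image marker (FF D9); the bytes from one
    SOI through the next EOI form one decodable frame. Returns the list
    of complete frames plus any trailing partial frame to carry over
    into the next network read.
    """
    SOI, EOI = b"\xff\xd8", b"\xff\xd9"
    frames = []
    while True:
        start = buffer.find(SOI)
        if start < 0:
            return frames, b""            # no frame started yet
        end = buffer.find(EOI, start + 2)
        if end < 0:
            return frames, buffer[start:] # incomplete frame: keep for next chunk
        frames.append(buffer[start:end + 2])
        buffer = buffer[end + 2:]
```

Each returned frame is a self-contained JPEG that an image library can decode directly; keeping the leftover bytes between reads is what makes real-time reassembly from arbitrary network chunk boundaries work.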
Accomplishments that we're proud of
Interfacing three different pieces of technology, building the necessary networking and framework, and doing it all in a single programming language was an accomplishment in itself.
What we learned
As first-time hackathon contestants, we learned new programming languages, how to collaborate as a team, and the basics of robotics and virtual reality simulation.
What's next for Viro
Next steps include streaming video directly into the Oculus Rift, as well as automatic object recognition using external APIs, so that the robot can identify potential hazards and threats before its human operator does.
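The hazard-identification idea could be sketched as a small filter over whatever an external recognition API returns. The watchlist, function name, and (label, confidence) result format below are all assumptions for illustration; any real service would need its own client code and label set.

```python
# Hypothetical watchlist; a real deployment would tune this to the environment.
HAZARD_LABELS = {"fire", "smoke", "person", "exposed wire"}

def flag_hazards(detections, min_confidence=0.6):
    """Filter generic object-recognition results down to potential hazards.

    `detections` is a list of (label, confidence) pairs, the shape many
    recognition APIs return after decoding their response. Labels on the
    watchlist at or above the confidence threshold are flagged so the
    operator can be alerted before approaching.
    """
    return [(label, conf) for label, conf in detections
            if label.lower() in HAZARD_LABELS and conf >= min_confidence]
```

This keeps the API-specific networking separate from the decision logic, so swapping recognition providers would not change how alerts are raised.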