What it does
VR LiDAR immerses you in a virtual world of Velodyne point clouds. Much like Oakland Robotics' autonomous robot, you must navigate the world with only a rudimentary mesh as your guide.
We originally planned to build a LiDAR tracking system, but the idea evolved into a user experience aimed at building appreciation for the AI systems that control autonomous vehicles, from robots to cars. The project places the user in a virtual world built from LiDAR data received wirelessly from a Velodyne Puck mounted on top of an omni-directional robot. The user must then drive the robot through the real world by remote control with only point clouds as visual feedback. We created VR LiDAR so that users could better understand how LiDAR data is processed and get the chance to view it in first person through an Oculus VR headset.
Challenges we ran into
Considering we spent 8.5 hours at the start just trying to set up the Oculus correctly (not knowing that it required a dedicated graphics card connected directly over HDMI), we hit more than a few hiccups along the way. We were able to borrow a desktop computer to get the Oculus working, but that meant installing a completely fresh copy of Windows. Our original plan was to use ROS packages to decode the LiDAR data, but ever since Facebook bought Oculus, Linux is unsupported... so we had to write our own scripts from scratch to parse the LiDAR packets and compute the Cartesian plot points for Unity.
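Our parsing scripts aren't in this writeup, but the core of the job can be sketched from the standard Velodyne packet layout: each 1206-byte UDP payload carries twelve 100-byte data blocks, and each block is a 0xEEFF flag, a little-endian azimuth in hundredths of a degree, then 32 channel records of (distance, reflectivity) with distance in 2 mm units. A minimal sketch, assuming that layout (function names are ours, not from the project):

```python
import struct

BLOCK_FLAG = 0xEEFF  # marks the start of a Velodyne data block


def parse_data_block(block: bytes):
    """Parse one 100-byte Velodyne data block into (azimuth_deg, points).

    Each point is (distance_m, reflectivity); distance comes off the wire
    as an unsigned 16-bit count of 2 mm units.
    """
    flag, azimuth_raw = struct.unpack_from("<HH", block, 0)
    if flag != BLOCK_FLAG:
        raise ValueError("not a Velodyne data block")
    azimuth_deg = azimuth_raw / 100.0
    points = []
    for ch in range(32):
        dist_raw, reflectivity = struct.unpack_from("<HB", block, 4 + 3 * ch)
        points.append((dist_raw * 0.002, reflectivity))  # convert to metres
    return azimuth_deg, points
```

In the full pipeline, a UDP socket bound to the sensor's data port (11101 by default on the Puck) would receive each 1206-byte payload and feed its twelve blocks through a parser like this.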
Accomplishments that we're proud of
We're proud that we implemented VR LiDAR successfully despite the added speed bumps of a new computer, a fresh operating system, no ROS, and limited WiFi capabilities, to say the least.
What we learned
We learned how to parse LiDAR packets over Ethernet/WiFi and convert the readings into Cartesian coordinates for import into Unity. Speaking of which, we also learned how to use Unity to create a complete VR world in which real-life objects are displayed as point clouds for the user to explore.
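The Cartesian conversion above boils down to standard spherical-to-Cartesian math using each laser channel's fixed elevation angle. A hedged sketch, assuming the VLP-16's interleaved elevation table and a Unity-style left-handed, y-up frame (the exact axis mapping is a project-level choice, not something fixed by the sensor):

```python
import math

# VLP-16 per-channel elevation angles in firing order (degrees),
# interleaved from -15 to +15 in 2-degree steps.
VLP16_ELEVATIONS_DEG = [-15, 1, -13, 3, -11, 5, -9, 7,
                        -7, 9, -5, 11, -3, 13, -1, 15]


def to_unity_xyz(distance_m: float, azimuth_deg: float, channel: int):
    """Map one (distance, azimuth, channel) sample to Unity-style x, y, z."""
    omega = math.radians(VLP16_ELEVATIONS_DEG[channel % 16])  # elevation
    alpha = math.radians(azimuth_deg)                         # azimuth
    x = distance_m * math.cos(omega) * math.sin(alpha)  # right
    y = distance_m * math.sin(omega)                    # up
    z = distance_m * math.cos(omega) * math.cos(alpha)  # forward
    return x, y, z
```

With points in this form, Unity can render each frame as a cloud of small meshes or a particle system positioned at the computed coordinates.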