About the Project
We are a team of four engineering students from Wayne State University with a shared passion for exploring the digital world around us. Our latest challenge was to dive deep into the fascinating realm of 360-degree sensor systems and learn how to fuse them for real-world applications. Inspired by autonomous technologies, we set out to create a platform that integrates data from a 360-degree camera and a LiDAR sensor, merging them into a unified visualization environment. The ultimate goal? To better understand sensor fusion and push the boundaries of immersive, real-time data visualization.
What Inspired Us
As the world moves toward smarter and more connected systems, we saw an opportunity to contribute to this transformation. The idea of fusing 360-degree visual data with LiDAR depth information fascinated us. It felt like a perfect way to combine advanced engineering concepts with cutting-edge tools, while also preparing ourselves for the challenges in the rapidly evolving fields of robotics and autonomous systems.
How We Built It
Tools and Technologies:
- Hardware: Insta360 X4, Velodyne VLP-16, LinkStar Router
- Software: ROS 2 (Humble), Unity 2022, OpenCV, CUDA 11.5, CuPy
- Visualization: Meta Quest VR headset
- Programming Languages: Python, C++
Step-by-Step Process:
- Data Capture:
- Captured 360-degree camera frames and LiDAR point clouds simultaneously (a synchronized-capture sketch appears after this list).
- Established a LAN/WLAN network with the LinkStar Router to share data between devices.
- Fusion Algorithm:
- Used OpenCV to project the LiDAR point cloud onto the 360-degree camera image (see the projection sketch after this list).
- Accelerated processing with CuPy and CUDA for efficient computation.
- Lane Segmentation:
- Applied HSV filters in OpenCV to detect and segment lane markings (an example filter appears after this list).
- Visualization:
- Built a Unity 3D environment and projected equirectangular video feeds onto a Skybox.
- Integrated the Meta Quest VR headset for an immersive experience.
- LiDAR Interpolation:
- Developed a median interpolation algorithm to enhance LiDAR resolution from 16 layers to 32 (sketched after this list).
- Hardware Acceleration:
- Improved processing time by a factor of 20 by applying hardware acceleration (see the CuPy sketch after this list).
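
To make the simultaneous capture concrete, here is a minimal ROS 2 (Humble) sketch that pairs camera frames and LiDAR scans by timestamp. The camera topic name and the 50 ms pairing window are illustrative assumptions; /velodyne_points is the Velodyne driver's usual point cloud topic.

```python
# Sketch of a ROS 2 (Humble) node that receives camera frames and LiDAR
# scans together. Topic names below are assumptions for illustration.
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image, PointCloud2
from message_filters import Subscriber, ApproximateTimeSynchronizer


class CaptureNode(Node):
    def __init__(self):
        super().__init__('capture_node')
        # Subscribe to both sensors; the Velodyne driver publishes
        # /velodyne_points, while the camera topic is a placeholder.
        image_sub = Subscriber(self, Image, '/insta360/image_raw')
        cloud_sub = Subscriber(self, PointCloud2, '/velodyne_points')
        # Pair messages whose timestamps fall within 50 ms of each other.
        self.sync = ApproximateTimeSynchronizer(
            [image_sub, cloud_sub], queue_size=10, slop=0.05)
        self.sync.registerCallback(self.on_pair)

    def on_pair(self, image_msg, cloud_msg):
        # Hand the synchronized pair off to the fusion pipeline.
        self.get_logger().info('Got a synced frame + point cloud')


def main():
    rclpy.init()
    rclpy.spin(CaptureNode())
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```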
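The core of the fusion step is mapping each LiDAR point to a pixel of the equirectangular 360-degree image. Below is a minimal sketch, assuming the points have already been transformed into the camera frame and that the image follows the usual equirectangular convention (azimuth along the width, elevation along the height); the range-based coloring is only a quick visual check, not the project's exact rendering.

```python
# Minimal sketch: project LiDAR points (already in the camera frame) onto an
# equirectangular 360-degree image and color them by range.
import numpy as np
import cv2


def project_to_equirect(points, image):
    """points: (N, 3) XYZ in the camera frame; image: equirectangular BGR frame."""
    h, w = image.shape[:2]
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    rng = np.linalg.norm(points, axis=1)

    # Spherical angles: azimuth around the vertical axis, elevation above
    # the horizontal plane.
    azimuth = np.arctan2(y, x)                          # [-pi, pi]
    elevation = np.arcsin(z / np.maximum(rng, 1e-6))    # [-pi/2, pi/2]

    # Map angles to pixel coordinates of the equirectangular image.
    u = np.clip((((azimuth / (2 * np.pi)) + 0.5) * w).astype(int), 0, w - 1)
    v = np.clip(((0.5 - elevation / np.pi) * h).astype(int), 0, h - 1)

    # Color the projected pixels by normalized range for inspection.
    norm = np.clip(rng / np.maximum(rng.max(), 1e-6) * 255, 0, 255)
    colors = cv2.applyColorMap(norm.astype(np.uint8).reshape(-1, 1),
                               cv2.COLORMAP_JET).reshape(-1, 3)
    overlay = image.copy()
    overlay[v, u] = colors
    return overlay
```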
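For the lane segmentation step, the sketch below shows the kind of HSV thresholding we mean; the specific threshold values are placeholders rather than the tuned values from the project.

```python
# Sketch of HSV-based lane filtering; threshold values are illustrative.
import cv2
import numpy as np


def segment_lanes(frame_bgr):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)

    # White lane paint: low saturation, high value.
    white_mask = cv2.inRange(hsv, (0, 0, 200), (179, 40, 255))
    # Yellow lane paint: hue roughly 20-35 on OpenCV's 0-179 scale.
    yellow_mask = cv2.inRange(hsv, (20, 80, 120), (35, 255, 255))

    mask = cv2.bitwise_or(white_mask, yellow_mask)
    # Light morphological opening to suppress speckle noise.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    return cv2.bitwise_and(frame_bgr, frame_bgr, mask=mask)
```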
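The idea behind the median interpolation is to synthesize a new ring between every pair of neighbouring LiDAR rings. The sketch below assumes the cloud has been organized into a 16-row range image (rings by azimuth bins), which is one common way to set this up; it yields 31 rings, and an extra ring can be extrapolated at the edge to reach 32.

```python
# Sketch of median interpolation over an organized LiDAR range image.
# The (16, M) range-image layout is an assumption made for illustration.
import numpy as np


def interpolate_rings(range_image):
    """range_image: (16, M) array of ranges; returns a (31, M) upsampled image."""
    rings, cols = range_image.shape
    out = np.zeros((2 * rings - 1, cols), dtype=range_image.dtype)
    out[0::2] = range_image  # keep the original rings in place

    # New rings: median over a small neighbourhood spanning the two
    # surrounding rings, which is more robust to dropouts than a plain mean.
    for i in range(rings - 1):
        stack = np.stack([range_image[i],
                          range_image[i + 1],
                          np.roll(range_image[i], 1),
                          np.roll(range_image[i + 1], 1),
                          np.roll(range_image[i], -1),
                          np.roll(range_image[i + 1], -1)])
        out[2 * i + 1] = np.median(stack, axis=0)
    return out
```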
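The acceleration itself came from moving the per-point math onto the GPU. CuPy mirrors the NumPy API, so the same array code can run on either device by swapping the array module, as in this sketch (the 20x figure refers to the fusion pipeline as a whole, not to this snippet).

```python
# Illustration of the CuPy drop-in pattern: identical array math runs on the
# GPU (CUDA) when CuPy is available, and falls back to NumPy otherwise.
import numpy as np

try:
    import cupy as cp
    xp = cp   # GPU path
except ImportError:
    xp = np   # CPU fallback


def spherical_angles(points):
    """points: (N, 3) array; returns azimuth, elevation, and range per point."""
    pts = xp.asarray(points)
    x, y, z = pts[:, 0], pts[:, 1], pts[:, 2]
    rng = xp.sqrt(x * x + y * y + z * z)
    azimuth = xp.arctan2(y, x)
    elevation = xp.arcsin(z / xp.maximum(rng, 1e-6))
    return azimuth, elevation, rng
```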
What We Learned
- Fusion and Transformation:
- Mastered the use of transformation matrices for aligning LiDAR and camera data (a worked example follows this list).
- Networking and Streaming:
- Explored VLC Media Player, FFmpeg, and various protocols for streaming fused data.
- Hardware-Software Integration:
- Learned to optimize system resources and overcome latency challenges in distributed systems.
- VR Visualization:
- Discovered how to use Unity's Skybox for immersive visualization and VR playback.
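
As a worked example of those transformation matrices, the sketch below builds a 4x4 homogeneous extrinsic that rotates points from a LiDAR tilted 45 degrees into the camera frame and applies a translation; the tilt axis and the 10 cm offset are illustrative placeholders, not our calibrated values.

```python
# Minimal sketch of a 4x4 homogeneous extrinsic transform (LiDAR -> camera).
# Rotation axis and translation are placeholder values for illustration.
import numpy as np

tilt = np.deg2rad(45.0)
# Rotation about the Y axis by the mounting tilt.
R = np.array([[ np.cos(tilt), 0.0, np.sin(tilt)],
              [ 0.0,          1.0, 0.0         ],
              [-np.sin(tilt), 0.0, np.cos(tilt)]])
t = np.array([0.0, 0.0, 0.10])   # e.g. LiDAR mounted 10 cm above the camera

T_cam_lidar = np.eye(4)
T_cam_lidar[:3, :3] = R
T_cam_lidar[:3, 3] = t


def lidar_to_camera(points_lidar):
    """points_lidar: (N, 3) in the LiDAR frame -> (N, 3) in the camera frame."""
    homogeneous = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])
    return (T_cam_lidar @ homogeneous.T).T[:, :3]
```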
Challenges We Faced
- Data Streaming:
- Struggled with streaming ROS 2 message types over HTTP/RTSP but gained valuable insights into video transport methods.
- Transformation Accuracy:
- Mounting and calibrating sensors proved tricky, especially when the LiDAR was tilted 45 degrees.
- Processing Power:
- Fusion algorithms demanded more computational resources than anticipated, resulting in latency.
Celebrations
- Success with Sensor Fusion:
- Achieved a high-quality fused image that exceeded our expectations.
- VR Playback:
- Successfully integrated the Meta Quest VR headset, allowing users to experience the "robot's perspective."
- LiDAR Interpolation:
- Enhanced LiDAR resolution with a scalable interpolation scheme, striking a balance between accuracy and point density.
- New Skills Unlocked:
- Gained a deeper understanding of video encoding, point cloud processing, and 360-degree data representation.
Final Thoughts
This project was a thrilling journey into the world of 360-degree sensor systems and real-time data fusion. It challenged us, taught us, and left us inspired to explore even more advanced applications in sensor integration and immersive visualization. We're excited to take these lessons forward into future projects and push the boundaries of what's possible!