Baseline Goals

Baseline 1: Run ORB-SLAM with a monocular camera on an Nvidia Jetson TX1.
Baseline 2: Run SLAM on the onboard computer.
Baseline 3: Get the camera pose from ORB-SLAM and publish it on a ROS topic.

Reach Goals

Reach 1: Perform SLAM on a car and generate a closed-loop map in a real environment.
Reach 2: Stop sign detection.
Reach 3: Pure pursuit.

Overall system architecture

[Figures: Plan and System Architecture diagrams]

Software effort

ORB-SLAM

We began with ORB-SLAM (version 1) running on Ubuntu 14.04 and ROS Indigo. After calibrating the camera, we could efficiently build a feature map of the room we worked in. But we soon realized the software had significant limitations:

  1. It could not save the localized map we created.
  2. It did not support stereo cameras (which are important for future versions of this project), nor RGB-D cameras.

[Figure: Camera calibration]
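
For reference, the calibration itself follows the standard OpenCV checkerboard procedure. The sketch below is a minimal, self-contained version of that workflow; the board size, square size, frame count, and image filenames are illustrative assumptions, not our exact setup.

    // Minimal monocular calibration sketch with OpenCV. Assumes ~20 saved
    // checkerboard frames named calib_0.png ... calib_19.png (hypothetical).
    #include <opencv2/opencv.hpp>
    #include <iostream>
    #include <string>
    #include <vector>

    int main() {
        const cv::Size board(9, 6);   // inner corners per row/column (assumed)
        const float square = 0.025f;  // square size in meters (assumed)

        // 3D corner positions on the board plane (z = 0), reused for every view.
        std::vector<cv::Point3f> grid;
        for (int r = 0; r < board.height; ++r)
            for (int c = 0; c < board.width; ++c)
                grid.emplace_back(c * square, r * square, 0.f);

        std::vector<std::vector<cv::Point3f>> objPts;
        std::vector<std::vector<cv::Point2f>> imgPts;
        cv::Size imgSize;

        for (int i = 0; i < 20; ++i) {
            cv::Mat im = cv::imread("calib_" + std::to_string(i) + ".png",
                                    cv::IMREAD_GRAYSCALE);
            if (im.empty()) continue;
            imgSize = im.size();
            std::vector<cv::Point2f> corners;
            if (cv::findChessboardCorners(im, board, corners)) {
                cv::cornerSubPix(im, corners, cv::Size(11, 11), cv::Size(-1, -1),
                                 cv::TermCriteria(cv::TermCriteria::EPS +
                                                  cv::TermCriteria::COUNT, 30, 0.01));
                imgPts.push_back(corners);
                objPts.push_back(grid);
            }
        }

        // K and dist feed the ORB-SLAM settings file (fx, fy, cx, cy, k1, k2, ...).
        cv::Mat K, dist;
        std::vector<cv::Mat> rvecs, tvecs;
        double rms = cv::calibrateCamera(objPts, imgPts, imgSize, K, dist, rvecs, tvecs);
        std::cout << "reprojection RMS: " << rms << "\nK =\n" << K
                  << "\ndist = " << dist << std::endl;
    }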

Hence we decided to switch to a newer version of the software, ORB-SLAM2, which runs on Ubuntu 16.04 with ROS Kinetic. Although ORB-SLAM2 can store the localized map points, it too had a limitation: the map points were discarded when we exited the program. We wanted it to load the previous map when we relaunched ORB-SLAM2, so we implemented that functionality by adding Save/Load functions. Here is a demo of our work on localization and map saving:
Video: Demo 1 (localization and map saving)
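
Conceptually, the Save/Load functions serialize the map (map points and keyframes) to disk on shutdown and restore it on the next launch. Below is a heavily simplified sketch of that pattern using Boost.Serialization; in the real change, ORB-SLAM2's own Map, MapPoint, and KeyFrame classes have to be made serializable, and MapData here is only a stand-in for them.

    // Sketch of the save/load pattern. MapData is a placeholder for
    // ORB-SLAM2's Map; the real classes need serialization support added.
    #include <boost/archive/binary_iarchive.hpp>
    #include <boost/archive/binary_oarchive.hpp>
    #include <boost/serialization/vector.hpp>
    #include <fstream>
    #include <string>
    #include <vector>

    struct MapData {                    // stand-in for ORB-SLAM2's Map
        std::vector<float> points;      // flattened x,y,z of map points
        template <class Archive>
        void serialize(Archive& ar, const unsigned int /*version*/) {
            ar & points;
        }
    };

    void SaveMap(const MapData& map, const std::string& file) {
        std::ofstream out(file, std::ios::binary);
        boost::archive::binary_oarchive ar(out);
        ar << map;                      // writes the map to disk on shutdown
    }

    bool LoadMap(MapData& map, const std::string& file) {
        std::ifstream in(file, std::ios::binary);
        if (!in.good()) return false;   // no previous map saved yet
        boost::archive::binary_iarchive ar(in);
        ar >> map;                      // restores the map on relaunch
        return true;
    }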

Our next goal was to try this in a real environment, on the Toyota Prius. We mounted the camera on the car and drove around the Pennovation parking lot. It took a couple of days to figure out which loop gave us a higher map-point density and hence better localization.
Video: Creating the map
Video: Localization mode

Pure Pursuit

The next stage of integration was Pure Pursuit, a tracking algorithm that moves a vehicle along an arc from its current position toward a goal position. The key idea is to choose a goal point some lookahead distance ahead of the vehicle on the path, so the car is always chasing a point just in front of it. To achieve this on our car, we created a map in an indoor setting and saved waypoints while localizing in it; these waypoints become the targets the car chases while the algorithm runs. The video below shows a partial implementation of the algorithm:
Video: Pure Pursuit
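
The core steering computation looks roughly like the sketch below: find a waypoint at least one lookahead distance away, transform it into the vehicle frame, and steer along the arc through it. The lookahead, wheelbase, and the simple "first waypoint far enough away" search are illustrative assumptions, not our tuned implementation.

    // Pure pursuit steering sketch. Gains and dimensions are assumptions.
    #include <cmath>
    #include <vector>

    struct Point { double x, y; };

    // (x, y, theta): vehicle pose in the map frame. Returns steering angle (rad).
    double PurePursuitSteer(const std::vector<Point>& waypoints,
                            double x, double y, double theta,
                            double lookahead /* e.g. 1.0 m */,
                            double wheelbase /* e.g. 0.33 m on an F1/10 car */) {
        // 1. Pick the first waypoint at least `lookahead` away from the car.
        const Point* goal = nullptr;
        for (const Point& w : waypoints) {
            if (std::hypot(w.x - x, w.y - y) >= lookahead) { goal = &w; break; }
        }
        if (!goal) return 0.0;                   // end of path: go straight

        // 2. Express the goal in the vehicle frame (x forward, y left).
        double dx = goal->x - x, dy = goal->y - y;
        double gx =  std::cos(theta) * dx + std::sin(theta) * dy;
        double gy = -std::sin(theta) * dx + std::cos(theta) * dy;

        // 3. Curvature of the arc through the goal: k = 2*gy / L^2, then
        //    convert curvature to a steering angle via the bicycle model.
        double L2 = gx * gx + gy * gy;
        double curvature = 2.0 * gy / L2;
        return std::atan(wheelbase * curvature);
    }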

Stop Sign Detection

To get initial object detection working, we decided to detect stop signs using open-source detectors. We used YOLOv3 through darknet_ros, a ROS package for object detection that can run on either GPU or CPU. The detector draws bounding boxes around the objects in frame and reports a confidence score for each detection. On the Jetson TX2, the package made use of the CUDA cores and ran successfully on the GPU. Link: YOLOv3 GitHub
[Figure: Stop sign detection]
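
A node that reacts to the detections only needs to subscribe to the bounding-box topic that darknet_ros publishes. A minimal sketch follows; the 0.5 confidence threshold and the reaction itself are placeholders to tune.

    // Sketch of a roscpp node reacting to darknet_ros detections.
    #include <ros/ros.h>
    #include <darknet_ros_msgs/BoundingBoxes.h>

    void OnDetections(const darknet_ros_msgs::BoundingBoxes::ConstPtr& msg) {
        for (const auto& box : msg->bounding_boxes) {
            if (box.Class == "stop sign" && box.probability > 0.5) {
                ROS_INFO("Stop sign detected (p=%.2f)", box.probability);
                // Here we would publish a zero-velocity drive command.
            }
        }
    }

    int main(int argc, char** argv) {
        ros::init(argc, argv, "stop_sign_listener");
        ros::NodeHandle nh;
        ros::Subscriber sub =
            nh.subscribe("/darknet_ros/bounding_boxes", 1, OnDetections);
        ros::spin();
    }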

Hardware effort

  • Learning to use the Nvidia Jetson TX1 and TX2.
  • Getting the camera drivers working and calibrating the camera correctly.
  • Starting the Toyota Prius and getting it to Pennovation.
  • Finding the correct placement and building a mount for the camera.
    • We realized that placing the camera inside the car was a problem: the windshield acted like a lens and offset the calibration, so we had to re-calibrate.
    • Finding a mount that would keep the camera stable; Robert Ward from another team helped us 3D print one.
  • Installing the Jetson in the car, powering it up, and connecting it to an old monitor.
    [Figure: Hardware used]

System Evaluation

  • Our solution works well across environments: we tested it on a real car outdoors at night and on a small F1/10 car in full daylight, showing that it is both robust and reliable under different conditions.
  • Closed loop in the map: while running SLAM, if we drive in a loop and return to the starting point, the map closes on itself. This demonstrates the accuracy of the calibration and the reliability of the system.

Why this project is awesome

This project is a first step toward an autonomous driving solution. It uses computer vision to let the vehicle localize itself and map its environment. With this in place, we can take on more challenging parts of the autonomous-vehicle stack, including control algorithms, path planning, and deep-learning-based detection.

Built With

  • ros
  • orbslam
  • opencv
  • nvidiatx2
  • yolov3
  • darknet-ros