Inspiration

The inspiration for this project came from taking the Connected Automated Vehicles class this semester, and from an interest in 360-degree cameras.

What it does

Currently, it projects the images onto spherical coordinates and starts the stitching process.
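The spherical-projection step can be sketched as mapping each camera ray direction to longitude/latitude, then to a pixel in an equirectangular panorama. This is a minimal illustration, not the project's actual code; the function name and panorama size are assumptions.

```python
import numpy as np

def ray_to_equirect(ray, width, height):
    """Map a 3D ray direction to equirectangular panorama pixel coords.

    Hypothetical helper: convert the ray to spherical angles
    (longitude theta, latitude phi), then scale to image size.
    """
    x, y, z = ray / np.linalg.norm(ray)
    theta = np.arctan2(x, z)   # longitude in [-pi, pi]
    phi = np.arcsin(y)         # latitude in [-pi/2, pi/2]
    u = (theta / (2 * np.pi) + 0.5) * width
    v = (phi / np.pi + 0.5) * height
    return u, v

# A ray pointing straight ahead (+z) lands at the panorama's center.
u, v = ray_to_equirect(np.array([0.0, 0.0, 1.0]), 1000, 500)
print(u, v)  # -> 500.0 250.0
```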

How we built it

I got the data from the Ford AV Dataset. The dataset provides calibration data for each camera: intrinsic values (focal length, sensor skew, principal point) and extrinsic values (rotation and translation). Using this information, I was able to project where each camera's rays would land on the other projections.

$$ \begin{bmatrix} x \\ y \\ w \end{bmatrix} = \begin{bmatrix} \alpha & s & u_0 \\ 0 & \beta & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} r_{11} & r_{12} & r_{13} & t_x \\ r_{21} & r_{22} & r_{23} & t_y \\ r_{31} & r_{32} & r_{33} & t_z \end{bmatrix} \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix} $$

Challenges we ran into

  • Getting the projection math right
  • The rear camera's projection looking strange

Accomplishments that we're proud of

While the result isn't perfect, you can clearly see where the front and sides of the car are supposed to be. It also ran moderately fast.

What we learned

  • Not to fully trust calibration data
  • This was our first multi-day hack

What's next for Multi-Cam to Sphere Projection

  • Depth maps for overlapping images
  • Creating a map of the environment based on the panorama
