Inspiration

Digital sensors such as lidar support the operation of autonomous vehicles on land, at sea and in the air. This equipment produces valuable information for navigating urban or harsh environments; lidar exists primarily to scan the environment for obstacles. Reconstructing the environment an autonomous vehicle occupied or passed through can provide military intelligence, or support forensic reconstruction and timelining of incidents involving autonomous vehicles.

What it does

Transforms a PCD (Point Cloud Data) file into a human-readable plot, with the ability to produce a video stream as if seen from a third-person view. Visualising the street and its surroundings is important for identifying features of the location.
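As a rough illustration of the first step, here is a minimal sketch of reading an ASCII-format PCD file into a NumPy array of x, y, z points. The function name and file layout are assumptions for illustration; it handles only the simple ASCII case, not the binary or compressed PCD variants.

```python
import numpy as np

def load_ascii_pcd(path):
    """Parse a minimal ASCII-format PCD file into an (N, 3) array of x, y, z.

    Only handles the common case: header lines followed by a 'DATA ascii'
    line, then whitespace-separated rows whose first three fields are x y z.
    """
    points = []
    in_data = False
    with open(path) as f:
        for line in f:
            if in_data:
                fields = line.split()
                if len(fields) >= 3:
                    points.append([float(v) for v in fields[:3]])
            elif line.strip().lower().startswith("data"):
                if "ascii" not in line.lower():
                    raise ValueError("binary PCD is not supported by this sketch")
                in_data = True
    return np.asarray(points)
```

The resulting array can then be fed straight into a plotting library for the third-person view.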

How we built it

We discovered the libraries needed to handle the point cloud and plotted the points, plus a handy script to visualise the car's path in bird's-eye view.
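The bird's-eye view boils down to projecting the points onto the ground plane. A minimal sketch of that idea, rasterising x/y coordinates into a top-down occupancy grid (function name, cell size and ranges are illustrative choices, not the actual script):

```python
import numpy as np

def birds_eye_grid(points, cell=0.5, x_range=(-50.0, 50.0), y_range=(-50.0, 50.0)):
    """Count points per ground-plane cell: a top-down occupancy grid.

    points: (N, 3) array of x, y, z; z is ignored for the projection.
    Returns a (rows, cols) int array where each cell holds the number of
    points that fall into that x/y bin.
    """
    x, y = points[:, 0], points[:, 1]
    mask = (x >= x_range[0]) & (x < x_range[1]) & (y >= y_range[0]) & (y < y_range[1])
    xi = ((x[mask] - x_range[0]) / cell).astype(int)
    yi = ((y[mask] - y_range[0]) / cell).astype(int)
    nx = int((x_range[1] - x_range[0]) / cell)
    ny = int((y_range[1] - y_range[0]) / cell)
    grid = np.zeros((ny, nx), dtype=np.int32)
    np.add.at(grid, (yi, xi), 1)  # accumulate repeated cell hits correctly
    return grid
```

Rendering the grid per frame (e.g. with an image plot) gives the top-down street view frame by frame.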

Challenges we ran into

Figuring out the place from human input alone, based on guesses and intuition. Timelining was hard as well. We had a dot-mapping process in place that created simulated dots on the maps; it took quite a while, but it gave us a quick way of narrowing down where the location could be. The rest was human input, which we were working to convert into a more automated process.

Accomplishments that we're proud of

Recreating the trajectory of the vehicle with a third-person bird's-eye camera, as well as figuring out the place where the lidar recording took place:

Park Rd, right in front of the Computer Science Building.

What we learned

We learned about the numerous point-cloud features, and how to decide which of them are needed to plot the lidar data in a humanly comprehensible way.

What's next for OxINT2022

Define a "home point" as the coordinate reference, and translate the lidar frames into a KML file that can be loaded into Google Earth for immediate visualisation in the 3D view.
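A sketch of how that translation could work: treat lidar x/y offsets as metres east/north of the home point, convert them to longitude/latitude with a small-offset spherical approximation, and emit a minimal KML LineString using only the standard library. The home coordinates, function names and the flat-earth approximation are all assumptions for illustration, not a finished georeferencing pipeline.

```python
import math

EARTH_R = 6378137.0  # WGS84 equatorial radius, metres

def enu_to_lonlat(east, north, home_lon, home_lat):
    """Convert metre offsets east/north of a home point to lon/lat degrees.

    Valid only for small offsets (a few km), which suits a single recording.
    """
    dlat = math.degrees(north / EARTH_R)
    dlon = math.degrees(east / (EARTH_R * math.cos(math.radians(home_lat))))
    return home_lon + dlon, home_lat + dlat

def path_to_kml(offsets, home_lon, home_lat, name="lidar path"):
    """Emit a minimal KML document with one LineString for the vehicle path.

    offsets: iterable of (east, north) metre offsets from the home point.
    """
    coords = " ".join(
        "%.7f,%.7f,0" % enu_to_lonlat(e, n, home_lon, home_lat)
        for e, n in offsets
    )
    return (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<kml xmlns="http://www.opengis.net/kml/2.2"><Document><Placemark>'
        "<name>%s</name><LineString><coordinates>%s</coordinates>"
        "</LineString></Placemark></Document></kml>" % (name, coords)
    )
```

The resulting file opens directly in Google Earth, which draws the path over the 3D terrain.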

Built With
