Inspiration

We wanted to combine deep learning with VR, and we were impressed by the artistic style transfer results in recent papers. We wanted to apply those techniques to real-world imagery such as Google Street View.

What it does

Our project is a VR environment that uses deep learning to transfer an artistic style onto any given panorama. We can then visualize the panorama with an Oculus headset, and with Google Street View we can do this for any real-world place, making it feel like you are walking around in a painted world.

How we built it

For artistic style transfer we used a fast Chainer implementation of the method of Johnson et al. (ECCV 2016), which lets us convert images into stylized versions. We then used Google Street View to get real-world panoramas (e.g. from Berkeley) and converted them. We use an Oculus headset to visualize the painted world, and a Kinect to move around in it naturally.
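To give a sense of the Street View step: the Google Street View Static API returns one perspective image per camera heading, so covering a full rotation means requesting several headings around the same point. A minimal sketch, assuming that API; the API key, the `tile_urls` helper, and the Berkeley coordinates are illustrative placeholders, not our actual code.

```python
# Build Street View Static API request URLs covering a 360-degree rotation.
from urllib.parse import urlencode

BASE_URL = "https://maps.googleapis.com/maps/api/streetview"

def tile_urls(lat, lng, fov=90, pitch=0, size="640x640", key="YOUR_API_KEY"):
    """Return one request URL per camera heading so the perspective
    images together cover a full 360-degree rotation."""
    urls = []
    for heading in range(0, 360, fov):  # e.g. headings 0, 90, 180, 270
        params = urlencode({
            "size": size,
            "location": f"{lat},{lng}",
            "heading": heading,  # compass direction of the camera
            "fov": fov,          # horizontal field of view per image
            "pitch": pitch,      # camera tilt relative to the horizon
            "key": key,
        })
        urls.append(f"{BASE_URL}?{params}")
    return urls

# Four 90-degree views around a point in Berkeley (illustrative coordinates).
urls = tile_urls(37.8702, -122.2595)
```

Each URL can then be fetched, stylized, and handed to the stitching step.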

Challenges we ran into

We ran into many challenges with our libraries and equipment breaking. After successfully using Unity to detect gestures from the Kinect input, the Kinect stopped working. We wanted to stitch together images from Google's Street View API into panoramas, but we had a lot of issues using libraries like OpenCV. There was also the problem of images pulled from Street View missing certain information and being distorted, which caused errors when we produced panoramas.
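Part of why stitching was troublesome is that each Street View image is a perspective projection, while a VR panorama is equirectangular, so every crop has to be warped and placed by camera direction. A minimal sketch of the direction-to-pixel mapping involved, assuming a 4096x2048 equirectangular canvas; the sizes and the helper name are illustrative, not from our actual code.

```python
# Map a camera direction to pixel coordinates in an equirectangular
# (360 x 180 degree) panorama image.

def equirect_coords(heading, pitch, width=4096, height=2048):
    """heading: degrees clockwise from north, in [0, 360)
    pitch:   degrees above the horizon, in [-90, 90]"""
    x = int(heading / 360.0 * width)          # longitude -> column
    y = int((90.0 - pitch) / 180.0 * height)  # latitude  -> row
    return x, y

# The horizon due north sits at the leftmost column, mid-height:
print(equirect_coords(0, 0))    # (0, 1024)
# Looking due south at the horizon lands halfway across:
print(equirect_coords(180, 0))  # (2048, 1024)
```

Missing or distorted regions in the source images show up directly as gaps and seams on this canvas, which is where most of our panorama errors came from.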

Accomplishments that we're proud of

We are proud of pulling data from Google's Street View API and writing a stitching algorithm that combines the images reasonably well. We also got gesture recognition working with Unity and the Kinect, and used deep learning to create images in the style of an artist, with quite nice results.

What we learned

Hacking is hard, but also fun. A lot of things didn't go our way, and almost every one of our tools and libraries was broken in some way or other, but we persevered, and now we are quite proud of the result. We also collectively learned about gesture recognition, deep learning, and modelling games in Unity.

What's next for The VR-ry Night

Next steps would be allowing users to select any Google Street View location to view as a panorama in VR, rather than only our pre-selected locations. We also want to improve movement in Street View, so that users can navigate through VR panoramas as they can through Google Maps, and to add more high-quality panoramas covering a wide range of styles and locations.


Libraries and examples we used: Google Street View API, Kinect library, OpenCV, Oculus SDK for Unity.
