We wanted to bring the fun experience of Prisma to the immersive world of VR.

What it does

The user explores Google Street View photospheres that our app transforms into the styles of different artists.

How we built it

We have a distributed server architecture comprising a Node.js server and a Python server (which hosts our trained Torch models). The two run on separate AWS instances and communicate via gRPC. A latitude/longitude request is sent to the Node.js server, which saves the corresponding photosphere to Amazon S3 and forwards the link to the Python server. The Python server processes the image in the artist's style and returns an S3 link to the result, which the Node.js server then sends back to the VR app.
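The request flow above can be sketched as a single handler. This is a minimal illustration, not the actual implementation: the helper names (`fetch_photosphere`, `upload_to_s3`, `stylize_remote`) are hypothetical stand-ins for the real Street View download, S3 upload, and gRPC call to the Torch model server, passed in here as callables so the sketch stays self-contained.

```python
def handle_style_request(lat, lon, style,
                         fetch_photosphere, upload_to_s3, stylize_remote):
    """Return an S3 link to a stylized photosphere for (lat, lon).

    The three callables are hypothetical stand-ins for the real
    Street View, S3, and gRPC integrations.
    """
    # 1. Download the Street View photosphere for the requested coordinates.
    raw_image = fetch_photosphere(lat, lon)
    # 2. Save it to S3 so the model server can retrieve it by link.
    raw_link = upload_to_s3(raw_image)
    # 3. Ask the Python/Torch server (over gRPC) to apply the artist's style;
    #    it replies with an S3 link to the stylized image.
    styled_link = stylize_remote(raw_link, style)
    # 4. Hand that link back to the VR app.
    return styled_link
```

Injecting the integrations as callables also makes the flow easy to exercise with stubs before wiring up the real AWS and gRPC clients.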

Challenges we ran into

1) The photosphere images for VR are around 16MB, while current deep learning models typically handle images closer to 1MB.

2) Training a deep learning network and integrating it into the Python server.

3) Reducing the latency of generating different styles, which is difficult because style transfer is an iterative optimization process.
