Inspiration
We set out to create a fascinating visual art experience. Our work was inspired by “A Neural Algorithm of Artistic Style” by Gatys, Ecker, and Bethge. We hope the project inspires users to seek out artwork in real life, but for those who cannot, we have created a way for anyone to enrich their lives with visual art through virtual reality. We want everyone to be able to experience an EverydaY MasterpiecE.
What it does
The user enters a virtual reality environment where they can switch between original images and stylized versions of them. Using the algorithm created by Gatys, Ecker, and Bethge, the user experiences the same image translated into the style of a famous painting.
How we built it
We used the algorithm created by Gatys, Ecker, and Bethge to transform pictures into the styles of famous masterpieces, then built a program to present the results as a personal experience. First, we captured images using fisheye lenses and filters. We then ran the images through the algorithm to re-render them in different art styles. Finally, we created a program to display these images in virtual reality on the Oculus Rift.
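The batch-rendering step can be sketched as a small script that invokes neural-style once per captured image and painting. This is a minimal sketch, not our exact tooling: the directory names and file names are hypothetical, while the `-content_image`, `-style_image`, `-output_image`, and `-gpu` flags come from the jcjohnson/neural-style README (where `-gpu -1` selects CPU mode, the fallback we ended up using).

```python
import subprocess
from pathlib import Path

# Hypothetical locations for our captured photos and style paintings.
CONTENT_DIR = Path("captures")
STYLES = ["starry_night.jpg", "woman_with_a_hat.jpg", "wheatfield.jpg"]
OUTPUT_DIR = Path("stylized")

def build_command(content, style, output, use_gpu=False):
    """Build one neural-style invocation for a content/style pair.

    Flags follow the jcjohnson/neural-style README; -gpu -1 runs in
    CPU mode, which is what we fell back to on AWS.
    """
    return [
        "th", "neural_style.lua",
        "-content_image", str(content),
        "-style_image", str(style),
        "-output_image", str(output),
        "-gpu", "0" if use_gpu else "-1",
    ]

def render_all():
    """Render every captured photo in every painting style."""
    OUTPUT_DIR.mkdir(exist_ok=True)
    for content in sorted(CONTENT_DIR.glob("*.jpg")):
        for style in STYLES:
            out = OUTPUT_DIR / f"{content.stem}__{Path(style).stem}.jpg"
            subprocess.run(build_command(content, style, out), check=True)

if __name__ == "__main__":
    render_all()
```

In practice each CPU render took a long time, which is why the batch loop matters: it can be left running unattended on the AWS instance.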
Challenges we ran into
At first, we could not even figure out how to connect the Oculus Rift to the computer. We also had difficulty adding our images to Unity and switching between them. For the non-photorealistic rendering, we based our method on a recent advancement in the deep neural network literature, using demo code available online to render our images. However, getting all of the dependencies, including Caffe, Torch, cutorch, and cuDNN, to function correctly was not a trivial task given the limited time we had. Because deep neural networks require a huge amount of computation, we tried to use Amazon Web Services (AWS) to accelerate our rendering. We were able to complete our rendering on the CPU, but we were unable to get the GPU working for faster rendering.
Accomplishments that we're proud of
We are proud to be using some of the latest technologies and especially a very recent advancement in non-photorealistic rendering using deep neural networks.
What we learned
We learned the importance of search engine optimization while creating our webpage.
What's next for EyMe
We would try to move toward real-time rendering. We could attach a camera to the front of the Oculus Rift so the world would be translated into art in real time. This would require huge improvements to both the algorithm and the hardware used for rendering. The goal is lofty, but there is one feasible step that could get us started: using GPU computing through AWS instead of the CPU, which would greatly reduce our rendering time. Another step would be to automate the entire process. Currently, manually submitting each photo for rendering without a queue is tedious. By creating a queue and auto-retrieving results, a lot of time could be saved.
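The queue-and-auto-retrieve idea could be sketched as follows. Everything here is hypothetical (the class, method names, and the placeholder `render` function are ours for illustration); a real version would replace `render` with a call to the neural-style renderer on AWS.

```python
import queue
import threading

def render(photo):
    """Placeholder for the actual neural-style rendering step."""
    return f"stylized_{photo}"

class RenderQueue:
    """Accept photo submissions and auto-collect rendered results."""

    def __init__(self):
        self.jobs = queue.Queue()
        self.results = {}
        # A background worker drains the queue so submissions
        # never have to wait on a render finishing.
        self.worker = threading.Thread(target=self._run, daemon=True)
        self.worker.start()

    def submit(self, photo):
        """Enqueue a photo for rendering and return immediately."""
        self.jobs.put(photo)

    def _run(self):
        while True:
            photo = self.jobs.get()
            self.results[photo] = render(photo)
            self.jobs.task_done()

    def wait(self):
        """Block until every submitted photo is rendered."""
        self.jobs.join()
        return self.results

rq = RenderQueue()
for p in ["scene1.jpg", "scene2.jpg"]:
    rq.submit(p)
print(rq.wait())
```

Even this single-worker version removes the manual babysitting: photos can be submitted as they are captured, and the finished renders are collected in one place.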
Paintings used:
The Starry Night by Vincent van Gogh
Woman with a Hat by Henri Matisse
A Wheatfield with Cypresses by Vincent van Gogh
Please note: As attributed above, the algorithm for the rendering comes from “A Neural Algorithm of Artistic Style” by Gatys, Ecker, and Bethge. We did not write our own code for the non-photorealistic rendering. We used the GitHub project https://github.com/jcjohnson/neural-style, which depends on a few key projects:
https://github.com/soumith/cudnn.torch
https://github.com/szagoruyko/loadcaffe
as well as the following Caffe install instructions: https://github.com/BVLC/caffe/wiki/Install-Caffe-on-EC2-from-scratch-(Ubuntu,-CUDA-7,-cuDNN)