Inspiration

In Proust’s magnum opus, "À la recherche du temps perdu", the narrator is hurled down memory lane upon tasting a madeleine. Suddenly, we enter the world of reminiscence.

Memories surround us. They inhabit us.

For this hackathon, we wanted to center presence in the act of remembering, sharing and building together.

We sought to answer: what if we could enhance remembrance through technology?

What it does

Our piece invites users to step into a collective memory space and living sculpture! We use passthrough virtual reality to place users in the virtual and real environments simultaneously. In that space you can explore a sculpture built from memory data - and then add your own memory to the piece. Your memory is sent to our Python server, where the language is analyzed for meaning and emotion with machine learning. That data is then sent back to Unity and incorporated into the sculpture, changing the color, composition, movement, and textual content of the moving artwork.
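As a rough illustration, the sentiment returned by the server could drive the sculpture's look along these lines (a minimal sketch; the function and parameter names are hypothetical, not our exact code):

```python
def sentiment_to_visuals(compound: float) -> dict:
    """Map a sentiment score in [-1, 1] to hypothetical render parameters."""
    hue = (compound + 1.0) / 2.0       # negative memories cool, positive warm
    speed = 0.5 + 1.5 * abs(compound)  # stronger feelings move faster
    return {"hue": hue, "motion_speed": speed}
```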

Within the experience, you can reach out with hand tracking to pull a particular memory out of the sculpture and towards you for closer inspection. You can read the text left by the person who contributed the memory and examine the form and textual content of that particular piece of the larger puzzle.

How we built it

We built the experience for the Vive XR Elite headset in Unity with Vive's WAVE SDK, using underlay passthrough and hand tracking. We send the user's memory to a Python server, where we run language processing and sentiment analysis on the text. The analyzed memory is then incorporated back into the collective sculpture for future users to read and interact with. The sculpture itself was built with Unity's VFX Graph and Shader Graph.
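For context, here is a minimal sketch of the kind of analysis server this describes, assuming Flask and NLTK's VADER sentiment analyzer (our actual stack and endpoint names may differ):

```python
from flask import Flask, request, jsonify
from nltk.sentiment import SentimentIntensityAnalyzer

app = Flask(__name__)
# VADER needs its lexicon downloaded once: nltk.download("vader_lexicon")
analyzer = SentimentIntensityAnalyzer()

@app.route("/memory", methods=["POST"])
def add_memory():
    text = request.get_json()["text"]
    scores = analyzer.polarity_scores(text)  # {"neg", "neu", "pos", "compound"}
    # Unity reads this data back to recolor and reshape the sculpture.
    return jsonify({"text": text, "sentiment": scores})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```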

Challenges we ran into

Working with the Vive headset, though extremely fun, proved difficult, and we had issues exporting our builds to the device.

We spent about a day getting the underlay passthrough to work with Unity's Universal Render Pipeline, which we needed in order to build the sculpture itself with VFX Graph. Shoutout to the Vive representatives who worked with us to solve the problem and update the SDK documentation!

Another challenge was the speech-to-text plugin. Ideally, users would be able to input memories by speaking into the microphone, and the text would be automatically transcribed and sent to the server. We got the speech-to-text working in-editor, but the Android build process proved incompatible with the location of the language models in our project. We tried working with both Unity and Vive reps to resolve this, but ran out of time. Instead, for demo purposes we add each memory's text to the server manually - roughly as sketched below - where the sentiment analysis still runs and where the memory can still live on.
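A hand-added memory for the demo looked roughly like this (the endpoint and payload shape are illustrative, matching the server sketch above):

```python
import requests

memory = {"text": "The smell of my grandmother's kitchen on Sunday mornings."}
response = requests.post("http://localhost:5000/memory", json=memory)
print(response.json())  # the analyzed memory, now part of the sculpture's data
```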

Accomplishments that we're proud of

Integrating the new render pipeline with the Vive WAVE SDK.

Setting up the hand tracking and making the piece interactive with gestures.

Combining all the technical aspects while maintaining sensitivity to the concept and the message we wanted to send. Balancing engineering, art, and design.

What we learned

  • Working with passthrough
  • Working with hand tracking
  • Shader Graph and Unity's VFX Graph
  • Building live experiences in Unity
  • Making a digital art piece that responds to new data points added by users
  • Working with speech-to-text recognition

What's next for Mems

  • We would like to have each instance of the sculpture connected to a real-life location - so that the sculpture is populated with the memories of those who actually pass through that space. Perhaps the memories could even cross-pollinate with each other to a small degree.
  • The speech-to-text is important functionality that we would like to finish.
  • The dream of Mems lives beyond the Hackathon! We will continue to build interfaces that allow people to be vulnerable and present.
