In 2020, there's never been a better time to ReImagine Reality. The pandemic has touched everyone in the entire world, and almost all of us have lost touch with something important. Often it's the places that we miss most: the restaurants, parks, friends' houses, amusement parks, and even the office buildings that serve as the backdrop of our lives. Without them, life just doesn't feel the same.
But what if there was a way to bring those places to us, even if we can't go to them? What if there was a way to reconnect with the reality that we lost, and ReImagine it along the way? And what if there was a way to make the places we know and love even better than they were before, using the power of our creativity and imagination? And best of all, what if it was accessible to everyone, regardless of how skilled they are at technology and computer science?
WonderLab is the solution.
What it does
WonderLab places the user in a digital realm accompanied by the assistant robot Rollanda ("Roll", get it?). Rollanda can make anything you desire appear in the space, and has access to a large library of assets in many colors. She can make rooms, trees, garden fences, chairs, paintings, computers, kitchen items, and anything else you can imagine appear out of thin air, just by telling her "I want a [object]". For example, if you wanted a room, you would simply tell Rollanda "I want a room", and she would make one appear for you. She'll also give a verbal affirmation that she did what you asked, including her favorite word, "voila!" (pronounced voy-lah).
This means that anybody, at any skill level, can jump into the WonderLab and reimagine their reality.
Now, let's take a look at a few examples of where this technology can be useful, and how it improves upon previous VR applications:
Let's imagine that you've been in quarantine for the last few months, dutifully avoiding public spaces and forgoing family visits during the coronavirus pandemic. As Thanksgiving rolls around, you may not be able to head home to eat at the family table like always. With WonderLab, you have the power to recreate that environment in an immersive and intuitive way, without needing any coding or modelling experience at all. No application in the Oculus store will ever simulate the most cherished locations in our lives, the ones that hold meaning only for ourselves. WonderLab can.
Or perhaps you have some great ideas about how beautiful your neighborhood could be, if only you could add your own touches. Maybe you think it would look better if your house were a massive castle, or if your neighbor's house (the neighbor who never mows the lawn) were in the dungeon. WonderLab empowers users to exercise their creativity regardless of skill level, and allows them to create imaginative environments that are still based on and reflect the real world. In this way you can still connect with the places you love while exercising your imagination, all while breaking the monotony of quarantined life. You are empowered to reimagine your reality: to make it better and more ideal than the one you left behind before the pandemic.
Lastly, let's imagine you're a youth coordinator who runs a number of programs in your neighborhood, but you find that you can no longer meet up in person due to the coronavirus. While you could meet in a Zoom call or even in services like RecRoom, nothing will ever replicate the places that you know and love. With WonderLab, you can create virtual environments that reflect the places you're already attached to, so you don't have to lose touch with them even though you can't be there in person.
How we built it
Hackathon projects require a great deal of organization and time management to pull off, especially when the members are not located in the same space. We exercised project management skills to break our team up into a modelling team and a development team. The development of this project (especially the IBM integration) proved to be quite difficult, so almost all of our members worked primarily on coding the WonderLab environment. Thus, our team divided into these groups:
Development
The development team integrated Unity with 3 IBM services:
- Watson Assistant
- Text to Speech
- Speech to Text
They built two primary functions: spawning items on voice command, and swapping spawned items for other prefabbed objects. They utilized dictionary and list data structures to manage the set of items that Rollanda could spawn, and they set up the Oculus Quest for use with the WonderLab.
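The spawn-on-command flow described above might be sketched like this. This is a hypothetical illustration in language-agnostic Python rather than the project's actual Unity C# code; the names (PREFAB_LIBRARY, parse_command, spawn) and the sample object list are invented for this sketch, not taken from WonderLab's source.

```python
import random

# Dictionary mapping spoken object names to lists of prefab variants
# (hypothetical entries; the real project had nearly 50 items).
PREFAB_LIBRARY = {
    "room":  ["room_basic", "room_japanese"],
    "tree":  ["tree_oak", "tree_pine", "tree_maple"],
    "chair": ["chair_wood", "chair_office"],
}

def parse_command(transcript):
    """Extract the requested object from an 'I want a [object]' utterance."""
    words = transcript.lower().rstrip(".!").split()
    if words[:3] in (["i", "want", "a"], ["i", "want", "an"]):
        return " ".join(words[3:])
    return None

def spawn(transcript):
    """Return a prefab name to instantiate, or None if unrecognized."""
    obj = parse_command(transcript)
    if obj in PREFAB_LIBRARY:
        # Pick one of the prefabbed variants at random.
        return random.choice(PREFAB_LIBRARY[obj])
    return None
```

In the real application, the transcript would arrive from IBM's Speech to Text service and the lookup would resolve to a Unity prefab to instantiate in the scene.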
Another aspect of the development team's work was to code the interactions that users can have with objects in the scene, including rotation and scaling. This was accomplished thanks to Patrick's knowledge of UI/UX design in Oculus, and it was one of the most fun aspects of this project.
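As a rough sketch of the rotation interaction, here is one way a per-frame update could map controller input to an object's yaw. This is an illustrative assumption, not WonderLab's actual code: it assumes the horizontal stick deflection arrives as a value in [-1, 1], and the names and rotation speed are hypothetical.

```python
# Degrees of yaw per second at full stick deflection (hypothetical value).
DEGREES_PER_SECOND = 90.0

def update_yaw(current_yaw, stick_x, dt):
    """Advance the object's yaw by the stick deflection over one frame,
    wrapping the result into [0, 360)."""
    return (current_yaw + stick_x * DEGREES_PER_SECOND * dt) % 360.0
```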
This chart demonstrates the user interface development workflow for object interaction that we developed during this Hackathon.
Much of what we accomplished during this hackathon none of us had ever tried before. We aimed to provide a novel use for IBM's speech to text software. While most use cases concern data collection or process automation, we felt that it would be interesting to see what could be done with IBM cloud services beyond those fields. There were few tutorials or resources for anything like this on the internet, and we truly learned a great deal in the development of WonderLab.
The modelling team was mainly represented by Austin, who used storyboarding and reference imagery to develop a repository of items for use in this application. Due to the number of items necessary for this scene, we had to rely on a small number of free asset packs to fill in the items that were impossible to model in the short time frame. We used assets from Unity's Essential Japan Office scene, as well as a forest pack that supplied trees. There were nearly 50 items in total that Rollanda could instantiate, far too many for one person to model and texture in 36 hours.
The modelling team relied on the Maya - Substance Painter - Unity pipeline to produce these assets. Almost every asset has at least one variant, which Rollanda can instantiate at random. This was accomplished through the use of prefabs: textures exported from Substance Painter were used to create a variety of themed materials for each mesh in Unity. These were saved as prefabs and used to provide a sense of variety and immersion in the simulation.
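The mesh-plus-themed-material scheme above can be sketched as a simple cross product: every mesh paired with every texture theme yields one prefab variant. The mesh names, theme names, and naming convention below are hypothetical, chosen only to illustrate the idea.

```python
def build_variants(meshes, themes):
    """Pair every mesh with every themed material, producing a table of
    prefab variant names keyed by mesh."""
    return {mesh: [f"{mesh}_{theme}" for theme in themes] for mesh in meshes}

# Hypothetical example inputs.
MESHES = ["chair", "table", "painting"]
THEMES = ["wood", "modern", "japanese"]
VARIANTS = build_variants(MESHES, THEMES)
```

A table like this is what lets Rollanda pick a variant at random each time she instantiates an object, giving the scene its sense of variety.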
Challenges we ran into
Our team ran into a number of challenges during the hackathon, from communicating with a member who lived in another state, to wrangling with IBM's many services for the first time, to learning how speech-to-text events can impact game objects, to producing the sheer number of assets that this project required. At the moment, sleep deprivation is probably the greatest challenge this author faces in completing our submission for HackGT 2020.
One particularly difficult challenge was developing the scaling feature. We calculated the scaling in real time by measuring the distance between the controllers. This was quite difficult and required a huge time investment, occupying one of our members throughout HackGT 2020. We are proud to say that we successfully built out this feature.
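The core arithmetic of that scaling feature might look something like the sketch below: the object's scale tracks the ratio of the current controller separation to the separation when the grab began. This is a hedged reconstruction in Python, not the team's C# code; the function names and the clamp range are assumptions.

```python
import math

def distance(a, b):
    """Euclidean distance between two 3-D points given as (x, y, z) tuples."""
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))

def update_scale(base_scale, grab_distance, current_distance):
    """Scale the grabbed object by the ratio of the current controller
    separation to the separation at grab time, clamped to a sane range
    (the 0.1-10.0 bounds here are illustrative)."""
    ratio = current_distance / grab_distance
    return max(0.1, min(10.0, base_scale * ratio))
```

So moving the controllers to twice their initial separation doubles the object's scale, and pulling them together shrinks it, recomputed every frame.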
We created a guide on how to use this product, posted on our GitHub, so that future users interested in making this service work on their own computers might have an easier time with it and learn from some of our mistakes.
Accomplishments that we're proud of
Hackathons are just plain difficult, but they are ultimately rewarding. We built what we think is an amazing product, capable of producing a variety of entertaining, useful, and beautiful environments that are accessible to people of all skill levels.
All of the team members remarked that none of us would've been able to make a project like this only two months ago. This hackathon season was difficult, but each of us learned a lot in every hackathon and improved by strides every weekend. This project represented the culmination of our growth thus far, and we feel that we worked together extremely well.
We used Premiere Pro for the first time. Austin spent 12 HOURS making the 2-minute demo video. PLEASE ENJOY!
What we learned
The modelling team learned a great deal about Unity and Premiere Pro this hackathon, having used both of these tools only infrequently in the past. The development team learned how to make Watson Assistant instantiate objects in-game, which was a major accomplishment. The development team is also proud to say that they learned how to scale objects in real time with the Oculus controllers using a new and innovative method. This is both visually appealing and backed by interesting code, which we count as a major accomplishment for our team.
What's next for WonderLab
The major piece of functionality that we want to add is a multiplayer feature. We would also like to create many more assets than we had time to make during this hackathon.