The idea arose at the Great Uni Hackathon 2016. The challenge announced by Noma was to create a solution to the following problem:

How do we help a potential occupier understand size and available space options in a building without physically taking them there?

We began with the end user in mind. Who would actually be using this product?

The operator would be the agencies trying to let space in the buildings. The user would be anyone interested in occupying a building. We made no assumptions about the technical abilities of either group, so we needed something intuitive to use but not difficult to set up. Virtual reality is one option, as it allows the potential occupier to 'walk' around the office and get a real feel for the size.

We couldn't use an Oculus Rift. Here's why:

  • It requires a computer with exceptional graphics ability
  • It requires someone with technical knowledge to set up
  • It is relatively expensive
  • The wires can be very cumbersome

We decided to go instead with the Google Cardboard:

  • It uses a phone. Most people in business have a phone capable of rendering VR scenes.
  • The setup is minimal. You launch an app and the experience is set up.
  • A Google Cardboard costs £5; a more comfortable Homido costs £50.

What it does

There are two parts to the system: the website and the mobile application.

The Website

The website is for use by the operator, the agent letting the building space. It provides a simple interface which allows the operator to draw out the scene, and then see a basic visualisation of how the scene might look in three dimensions. It provides options for visualising the scene, clearing any mistakes, and sending the data over to the mobile application.
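The drawn scene can be thought of as a 2D grid matrix, which is the representation the rest of the system consumes. A minimal sketch of that idea is below; the function names (`buildEmptyGrid`, `toggleCell`) and the 1 = wall / 0 = floor convention are illustrative assumptions, not the project's actual code.

```javascript
// Hypothetical sketch: the drawn floor plan stored as a 2D grid matrix,
// where 1 marks a wall cell and 0 marks open floor.
function buildEmptyGrid(rows, cols) {
  return Array.from({ length: rows }, () => new Array(cols).fill(0));
}

function toggleCell(grid, row, col) {
  // Clicking a cell in the drawing interface toggles wall <-> floor.
  grid[row][col] = grid[row][col] === 0 ? 1 : 0;
  return grid;
}

const grid = buildEmptyGrid(3, 4);
toggleCell(grid, 0, 0);
toggleCell(grid, 0, 1);
console.log(JSON.stringify(grid));
// → [[1,1,0,0],[0,0,0,0],[0,0,0,0]]
```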

The Application

The mobile application was built for Android and renders the scene in stereoscopic virtual reality. With simplicity in mind, the user only needs to launch the app, request the data from the website with a single tap, then put the phone in the headset to have a look around.

How we built it

The Website

The website was developed with the standard suite of web languages: HTML, CSS and JavaScript. For the CSS, we used Bootstrap to ensure a consistent style guide and to make the webpage pseudo-responsive. For the 3D modelling and display, we used three.js, which provides a JavaScript wrapper around WebGL. We also used jQuery to send AJAX requests to and from the API.

We took the following approach to building the website:

  1. Design and create the layout
  2. Have the 3D visualisation work for a fake input matrix
  3. Create a grid matrix to feed into the visualisation
  4. Create the API which can then be read by the mobile application
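The final step is the API that the mobile application reads. A hedged sketch of what the payload might look like is below; the field names (`rows`, `cols`, `cells`) and the `/api/scene` URL are assumptions for illustration only.

```javascript
// Hypothetical sketch of the payload the mobile app would request.
// Field names and endpoint are assumptions, not the project's real API.
function buildScenePayload(grid) {
  return {
    rows: grid.length,
    cols: grid[0].length,
    cells: grid, // 1 = wall, 0 = open floor
  };
}

// On the website, jQuery could send this to the server with AJAX, e.g.:
// $.ajax({ url: '/api/scene', method: 'POST',
//          contentType: 'application/json',
//          data: JSON.stringify(buildScenePayload(grid)) });

const payload = buildScenePayload([[1, 1], [0, 0]]);
console.log(JSON.stringify(payload));
// → {"rows":2,"cols":2,"cells":[[1,1],[0,0]]}
```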

The Application

The application was built with Unity, which provides tools to create virtual reality scenes. The application connects to the API and downloads a matrix representing the geometry of the space. It then walks through the matrix and procedurally generates the space. The user can look around the scene using the phone's gyroscope, and move by performing different actions.
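The procedural generation step is essentially a walk over the matrix, placing a wall block for each filled cell. The real app does this in Unity with C#, but the core logic can be sketched in JavaScript; `CELL_SIZE` and the returned position format are assumptions for illustration.

```javascript
// Sketch: convert grid cells into world-space positions for wall blocks.
// CELL_SIZE and the output shape are illustrative assumptions; in Unity
// each position would be passed to Instantiate(wallPrefab, position, rotation).
const CELL_SIZE = 1.0;

function wallPositions(grid) {
  const positions = [];
  grid.forEach((row, z) => {
    row.forEach((cell, x) => {
      if (cell === 1) {
        positions.push({ x: x * CELL_SIZE, y: 0, z: z * CELL_SIZE });
      }
    });
  });
  return positions;
}

const walls = wallPositions([[1, 0], [0, 1]]);
console.log(JSON.stringify(walls));
// → [{"x":0,"y":0,"z":0},{"x":1,"y":0,"z":1}]
```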

Challenges we ran into

  • Deserialising JSON in Unity
  • Setting up the Unity development environment
  • Demoing what would be seen by the person using VR. Manchester Metropolitan University blocked port 8888, which made screen mirroring from phone to laptop impossible.

Accomplishments that we're proud of

  • For two of the four team members, this was their first Hackathon
  • Nobody in the team knew how to work with Unity or virtual reality before the project started
  • We managed to create everything that we had planned for

What we learned

  • Unity and developing for virtual reality
  • Git and Collaborative Programming (including Pair Programming)
  • How to create an API between a website and an application

What's next for VSpace

Who knows? :)
