The team formation process at this hackathon posed a real challenge for us. We needed to form groups of people who were interested in the same topic or type of project while also bringing an appropriate diversity of skills to the table. Our group found four members but got stuck looking for a fifth to fill the "designer/artist" role. Amid the chaos of the crowd, it was almost impossible to locate people with this skill set so that we could pitch them our original idea (which was related to AR overlays for clothing). We pivoted on the spot and decided to work on this specific problem: the difficulty of live team formation at a hackathon. The more we fleshed out our concept, however, the more we realized the wide variety of use cases that could be addressed with similar technology. Thus, the VSiBL platform project was born.
What it does
VSiBL makes it possible for people in the same physical space to discover and connect with one another, using AR to reveal contextual information anchored to each user's personal device. A multi-user spatial mesh network establishes a shared reference frame, and VSiBL users join an AR session overlaid on their current environment. In the session, participants can visually identify each other's location along with whatever information they have chosen to share about themselves.
This tool is extremely useful in situations where traditional methods of communication (such as speaking) are not viable. There are many such occasions: in the hackathon team formation example, it is difficult to be heard when yelling over a crowded room. In another scenario, a hearing-impaired individual might be attending a social mixer and wish to visually identify who else in the room can speak sign language and where those people are standing. Other people at the same social mixer may want to visually identify who is single and ready to mingle. Finally, teachers may wish to scan their classroom before beginning a lecture and identify where students with certain learning disorders are sitting so that they can adjust their pedagogy accordingly during the lesson. This is just a small sample of the possible use cases for VSiBL.
Consent was one of the primary topics of discussion for our team during this hackathon. VSiBL makes your personal information visible and localized to your position in an environment, so there may be scenarios in which users engage in predatory behavior. We would like to emphasize that VSiBL will only display information that users have knowingly and voluntarily chosen to share with others. Future versions of the platform will (1) provide extensive privacy controls and (2) have UI features that clearly indicate to users what information about them is visible to others at all times.
How we built it
We aimed to build a shared experience that would allow hackathon participants to view each other's roles via data anchored to them: an AR marker plus the title of the role, color-coded by role.
We used Apple's ARKit to synchronize multiple devices into a single multi-user session, relying on spatial mapping to build a point cloud of the environment and establish a shared reference frame.
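The core of a shared reference frame is a coordinate transform: each device learns a rigid transform mapping its local frame into the common frame, after which anchors observed by one device can be placed correctly for everyone. A minimal, platform-agnostic sketch of that mapping (the actual app uses ARKit; the 4x4 homogeneous transform here is an illustrative stand-in for what ARKit computes during world-map alignment):

```python
import numpy as np

def to_shared_frame(device_to_shared: np.ndarray, local_point: np.ndarray) -> np.ndarray:
    """Map a point from a device's local coordinate frame into the shared frame.

    device_to_shared: 4x4 homogeneous transform for this device
                      (stand-in for ARKit's world-map alignment).
    local_point: 3-vector observed in the device's local frame.
    """
    p = np.append(local_point, 1.0)        # homogeneous coordinates
    return (device_to_shared @ p)[:3]

# Example: this device's origin sits at (1, 0, 0) in the shared frame.
T = np.eye(4)
T[0, 3] = 1.0
print(to_shared_frame(T, np.array([0.0, 0.0, 0.0])))
```

Once every device knows its own `device_to_shared`, an anchor broadcast in shared coordinates renders at the same physical spot on all participants' screens.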
Once this shared reference frame is established, new devices/participants join a mesh network in which users can point their device cameras at the space to visualize information anchored above each other's physical location (the information is tied to a unique identifier associated with every user).
Simultaneously, we created a real-time database hosted on Firebase. This database adds and removes users and stores their profile information, such as name, occupation, and other fields.
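Conceptually, the database maps each unique user ID to a dictionary of profile fields. A small in-memory sketch of those operations (the real app backs this with Firebase; the class and field names here are illustrative):

```python
class UserDirectory:
    """In-memory stand-in for the real-time user database (Firebase in the app)."""

    def __init__(self):
        self.users = {}  # unique user id -> profile fields

    def add_user(self, user_id, **fields):
        """Register a new participant with their initial profile."""
        self.users[user_id] = dict(fields)

    def update_user(self, user_id, **fields):
        """Merge new or changed fields into an existing profile."""
        self.users.setdefault(user_id, {}).update(fields)

    def remove_user(self, user_id):
        """Drop a participant who has left the session."""
        self.users.pop(user_id, None)

db = UserDirectory()
db.add_user("u1", name="Ada", occupation="Designer")
db.update_user("u1", skills=["Figma"])
```

The AR layer then only needs a user's unique ID to look up whatever profile fields should be rendered above their marker.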
Each user can enter information that is dynamically stored in and retrieved from this real-time database, then displayed in AR. Users can also run queries to filter what appears in their view, so that only participants whose information matches the query criteria remain visible while everyone else is "turned off."
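The filtering step above reduces to matching each user's profile against the query criteria and hiding non-matches. A minimal sketch of that logic (function and field names are illustrative, not the app's actual API):

```python
def visible_users(users, **criteria):
    """Return the ids of users whose profile matches every criterion.

    users: dict mapping user id -> profile fields.
    criteria: field=value pairs; non-matching users are hidden ("turned off").
    """
    return [
        uid for uid, profile in users.items()
        if all(profile.get(field) == value for field, value in criteria.items())
    ]

users = {
    "u1": {"name": "Ada", "role": "Designer"},
    "u2": {"name": "Lin", "role": "Developer"},
}
print(visible_users(users, role="Designer"))  # -> ['u1']
```

In the AR view, the returned ids keep their markers rendered while all other markers are suppressed.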
Challenges we ran into
We wanted to display data and 3D assets above the user's head. However, the current prototype uses mobile devices exclusively, so we only had the pose of the mobile device, not of the user's head. To overcome this, we implemented a head pose estimation that positions data and content above the user. More work is needed to make this robust for a diverse audience.
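One simple way to approximate the head from the device pose is a fixed vertical offset: a handheld phone typically sits some distance below the holder's head along the gravity-aligned up axis. The sketch below uses that crude assumption (the offset value and function name are hypothetical, and this is exactly the part the writeup notes needs more robustness for a diverse audience):

```python
def estimate_head_position(device_position, head_offset_m=0.35):
    """Estimate the user's head position from their device's position.

    Crude assumption: a handheld phone sits roughly head_offset_m below
    the head along the gravity-aligned up axis (y, as in ARKit).
    device_position: (x, y, z) of the device in the shared frame, in meters.
    """
    x, y, z = device_position
    return (x, y + head_offset_m, z)

print(estimate_head_position((0.0, 1.2, -0.5)))
```

A more robust version would account for user height, whether the phone is raised or lowered, and the device's orientation rather than a single fixed offset.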
Once a common reference frame is established, data is communicated over the decentralized mesh network. This is both a strength and a weakness: when a user drops from an active session, their information is lost from the mesh. To maintain persistence across sessions, we used a cloud real-time database to keep data synced and backed up; it essentially served as our source of truth.
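With the cloud database as the source of truth, recovering from a dropped peer is just a resync: restore any records missing from the mesh state and let cloud values win on conflict. A minimal sketch of that merge (names are illustrative):

```python
def resync_from_cloud(mesh_state, cloud_state):
    """Restore session state from the cloud database (the source of truth).

    Records missing from the mesh (e.g., lost when a user was dropped)
    are restored, and cloud values win on any conflict.
    """
    merged = dict(mesh_state)
    merged.update(cloud_state)
    return merged

mesh_state = {"u1": {"role": "Designer"}}  # u2 was lost when they dropped
cloud_state = {"u1": {"role": "Designer"}, "u2": {"role": "Developer"}}
print(resync_from_cloud(mesh_state, cloud_state))  # u2 is restored
```

This keeps the mesh fast for live updates while the cloud copy guarantees nothing is permanently lost.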
Accomplishments that we're proud of
Most AR applications let you anchor 3D content to a space or apply face filters. To our knowledge, however, anchoring content to a person who can move through a space is a novel concept. VSiBL gives information a spatial context. We are really proud of this and cannot wait to continue developing our platform to support XR creators in building the future of personal, spatial communication and expression.
VSiBL will enable a new generation of social and spatial applications across a wide variety of use cases, including social networking, education, accessibility, entertainment, and much more. We look forward to supporting XR creators as they build the future with the VSiBL platform.
What we learned
We gained a fundamental understanding of the technology underlying frameworks such as ARKit and RealityKit. Each AR-enabled device builds a spatial map to understand its surroundings, and to create a multi-user experience, devices must share a common understanding of the environment and a common reference frame. Synchronizing devices in new environments can be challenging, so this was certainly a great learning experience. We also learned that device synchronization can be greatly improved by pre-mapping environments, which we will implement in future iterations of the VSiBL platform.
What's next for VSiBL
We are so excited by all the possible use cases for VSiBL and plan to build a much more robust platform to support them.
Future users of VSiBL will be able to filter the markers that appear in the AR overlay, making it easier to find the people they are looking for. In the hackathon team formation scenario, markers might use different colors and/or icons to identify each person as a Developer, Designer, or Specialist. If I were trying to find a Designer, I could use a checklist on the marker key/legend to hide everyone who did not identify as a Designer.
Through a combination of UI information bars, public and private rooms, and functions such as “incognito mode,” users will always know exactly what others can see about them and may choose to stop sharing their information at any time. For example, while all information might be visible to everyone in a "public" VSiBL environment, users can also create "private" environments in which participants' data is only visible to certain people.
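The visibility controls described above can be modeled as a small rule check applied before rendering any marker. A sketch under illustrative assumptions (the rule names and room structure are hypothetical, not a settled design): incognito hides a user from everyone but themselves, private rooms restrict visibility to listed members, and public rooms show everything that is shared.

```python
def can_see(viewer_id, owner, room):
    """Decide whether viewer_id may see the owner's marker.

    owner: {"id": ..., "incognito": bool}
    room:  {"kind": "public" | "private", "members": set of allowed viewer ids}
    """
    if owner["incognito"]:
        return viewer_id == owner["id"]  # incognito: visible only to yourself
    if room["kind"] == "private":
        return viewer_id in room["members"]  # private room: members only
    return True  # public room: visible to everyone

room = {"kind": "private", "members": {"v1"}}
owner = {"id": "o1", "incognito": False}
print(can_see("v1", owner, room))  # -> True
```

Running every marker through a single gate like this also makes it easy for the UI information bars to show users exactly which rule currently applies to them.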
We imagine use cases such as team formation and job fairs in which users scan the room, find the people they want to connect with, and approach them to start a conversation. As they get closer to others, more information would become available in the AR layer.
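That progressive disclosure can be expressed as distance tiers: at range only a role marker is shown, and closer in, more profile fields unlock. A sketch with illustrative tier distances and field names (none of these values are from a finalized design):

```python
def fields_visible_at(distance_m,
                      tiers=((5.0, ["role"]),
                             (2.0, ["role", "name", "bio"]))):
    """Return the profile fields revealed at a given viewer distance.

    tiers: (max distance in meters, fields shown at or under that distance),
    ordered from farthest to nearest. Beyond the farthest tier, nothing shows.
    """
    shown = []
    for max_dist, fields in tiers:
        if distance_m <= max_dist:
            shown = fields  # nearer tiers override farther ones
    return shown

print(fields_visible_at(3.0))  # -> ['role']
print(fields_visible_at(1.0))  # -> ['role', 'name', 'bio']
```

Since the shared reference frame already gives every participant's position, the viewer-to-owner distance needed here falls out of data the session maintains anyway.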
The future VSiBL platform will integrate with other platforms that have public APIs so that users can easily make more types of information about themselves available when they are in a context where sharing it is relevant.
In this hackathon, we built an app for iOS. Next, we will build for HoloLens in anticipation of a future where more people wear headsets on a daily basis.