On Day 1 of Reality Hack, our team members found one another through a shared interest in how spatial realities relate to each other, how the display of location-persistent text and models should be prioritized, and how to help users curate the spatial media relevant to their needs.
For Leon, coming to Cambridge for the hackathon brought him back to last summer and the many happy moments he shared with dancers on the MIT ballroom team. Yuting and Colin were lost between workshop sessions, wishing they could call up relevant wayfinding but anticipating that such an interface could be just as cluttered as the cork boards in the university hallways. Jared and William came to the project specifically to answer, “What is a hashtag for spatial reality?”
Our team’s design philosophy examines the precedent of social media to clarify the utility of hashtags and user-assigned metadata channels within existing virtual spaces and forums. While user-submitted tags differ from platform to platform, they share a curation function: connecting users with channels of communication relevant to their wants and needs. From Internet Relay Chat to Twitter and beyond, hashtags have an established lineage as a stand-in for a user saying, “I think this content is relevant to topic X.” We immediately saw a need for equivalent functionality in virtual spaces and across digital realities. Once users are empowered to post content affixed to the underlying ‘default reality,’ the volume of content will quickly exceed the physical space’s ability to comfortably accommodate it.
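To make the tagging idea concrete, here is a minimal sketch of what a tagged, location-persistent post could look like as a data structure. The type and field names are illustrative only, not our exact schema.

```csharp
using System;
using System.Collections.Generic;

// Hypothetical data model for a tagged, location-persistent post.
// Field names are illustrative; the real backend schema may differ.
[Serializable]
public class SpatialPost
{
    public string id;                 // unique post identifier
    public string authorId;           // user who placed the content
    public List<string> tags;         // hashtag-style channels, e.g. "#wayfinding"
    public double latitude;           // GPS coordinates from the phone
    public double longitude;
    public string anchorImageId;      // tracked image the content is pinned to
    public string mediaUrl;           // location of the text, model, or video payload
    public long createdAtUnixMs;      // timestamp for sorting and filtering
}
```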
AR is still an emerging technology, and many of the challenges we faced came from pushing it right up to its current limits. Things we take for granted, like using a smartphone keyboard or posting a video, become radically different when the underlying tech is still rough around the edges.
We learned about how the world of atoms and the world of bits interact. You wouldn’t think that a lightweight pair of glasses could be uncomfortable, but try wearing one as its embedded computer starts warming your eyebrows. Or try wearing your regular glasses behind the Nreal, which pushes it closer to the tip of your nose.
We learned that there’s so much room to grow in this space, and we’re unbelievably excited to see what the future holds!
What languages, APIs, hardware, hosts, libraries, UI Kits or frameworks are you using?
The client is built with Unity (in C#) for Nreal using the beta NRSDK. We used many of Nreal’s unique capabilities, including its accurate image tracking and phone-as-controller input. We also experimented with the offline mapping system currently in beta, but found that changing ambient conditions in our workspace significantly affected our anchoring. For now we base our anchoring on tracked images, and plan to switch to the SDK’s offline mapping system once it becomes more robust.
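As a rough sketch of the image-based anchoring, the snippet below keeps a piece of content glued to the pose of a tracked marker. The detection call here is a hypothetical placeholder; in the real client that pose comes from NRSDK’s image-tracking subsystem, which we omit since its API differs between SDK versions.

```csharp
using UnityEngine;

// Minimal sketch of image-based anchoring: content follows the pose of a
// tracked marker image. The detection step is a placeholder.
public class ImageAnchor : MonoBehaviour
{
    [SerializeField] private Transform anchoredContent; // media pinned to this marker

    private void Update()
    {
        // Hypothetical helper; replace with the SDK's tracked-image query.
        if (TryGetTrackedImagePose(out Pose markerPose))
        {
            // Keep the content aligned with the physical marker.
            anchoredContent.SetPositionAndRotation(markerPose.position, markerPose.rotation);
        }
        // If tracking is lost, the content simply stays at its last known pose.
    }

    private bool TryGetTrackedImagePose(out Pose pose)
    {
        // Placeholder: query the SDK for the tracked image's center pose here.
        pose = default;
        return false;
    }
}
```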
Nreal’s choice of an Android phone as the controller gave us many opportunities to use the phone’s services, so we were able to implement a user-friendly UI on the phone screen. For input fields, the built-in Android keyboard gives a much more fluid typing experience than other AR or VR devices, and responsive button presses on the phone screen also contribute to the experience. We also poll the phone’s built-in location and compass data in order to tag objects with location data. Finally, the Android phone’s mobile data connection fits perfectly with our vision of seeing and placing media outdoors, where the user may not have Wi-Fi.
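The location and compass polling relies on Unity’s standard Input.location and Input.compass services. The sketch below shows the general pattern, simplified from what runs in the client.

```csharp
using System.Collections;
using UnityEngine;

// Sketch of polling the phone's GPS and compass so that newly placed media
// can be tagged with location data.
public class PhoneSensors : MonoBehaviour
{
    private IEnumerator Start()
    {
        if (!Input.location.isEnabledByUser)
            yield break;                      // user has location services disabled

        Input.compass.enabled = true;         // required before reading trueHeading
        Input.location.Start();               // default accuracy and update distance

        // Wait up to ~20 seconds for the location service to come online.
        int wait = 20;
        while (Input.location.status == LocationServiceStatus.Initializing && wait-- > 0)
            yield return new WaitForSeconds(1f);
    }

    // Called when the user places a piece of media in the world.
    public void LogCurrentFix()
    {
        if (Input.location.status != LocationServiceStatus.Running)
            return;

        LocationInfo fix = Input.location.lastData;
        Debug.Log($"lat={fix.latitude}, lon={fix.longitude}, heading={Input.compass.trueHeading}");
    }
}
```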
The backend is written in Python (with pymongo) and is deployed on Azure as a Functions App. It serves as a bridge between our client and our Azure Cosmos DB database by providing handlers for various HTTP requests. To keep the API scalable, we designed it following REST conventions, and we wrote a custom RESTful client in Unity on top of the UnityWebRequest class to interact with the database from the client.
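For illustration, here is a simplified version of the kind of UnityWebRequest calls the client makes. The base URL and routes are placeholders, not our actual deployment.

```csharp
using System.Collections;
using System.Text;
using UnityEngine;
using UnityEngine.Networking;

// Simplified sketch of the Unity-side REST calls against the Functions backend.
// The endpoint and routes below are hypothetical placeholders.
public class RestClient : MonoBehaviour
{
    private const string BaseUrl = "https://example-functions-app.azurewebsites.net/api";

    // GET: fetch posts near a location and hand back the raw JSON response.
    public IEnumerator GetPosts(double lat, double lon, System.Action<string> onDone)
    {
        string url = $"{BaseUrl}/posts?lat={lat}&lon={lon}";
        using (UnityWebRequest req = UnityWebRequest.Get(url))
        {
            yield return req.SendWebRequest();
            if (string.IsNullOrEmpty(req.error))
                onDone(req.downloadHandler.text);
            else
                Debug.LogWarning($"GET failed: {req.error}");
        }
    }

    // POST: upload a new post as a JSON body.
    public IEnumerator CreatePost(string json)
    {
        using (var req = new UnityWebRequest($"{BaseUrl}/posts", UnityWebRequest.kHttpVerbPOST))
        {
            req.uploadHandler = new UploadHandlerRaw(Encoding.UTF8.GetBytes(json));
            req.downloadHandler = new DownloadHandlerBuffer();
            req.SetRequestHeader("Content-Type", "application/json");
            yield return req.SendWebRequest();
            if (!string.IsNullOrEmpty(req.error))
                Debug.LogWarning($"POST failed: {req.error}");
        }
    }
}
```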
To experiment with how our program could scale to VR devices, we used Aardvark, an open-source framework for developing AR-style apps inside VR. We used Figma, Rhino, and Vectary to prototype and design the UI elements and user experience, and the video was created with Premiere Pro and After Effects.
What is your team number?
28
What is your table number?
6B-07
Grand Challenge Choice #1 (For your project to be eligible for a prize, choose the MAIN category for which this project is submitted for judging:)
Best of AR
Grand Challenge Choice #2 (optional) (If your project fits more than one category, what is the SECONDARY category for which this project is submitted for judging?)
Best of VR