Inspiration

AURA was inspired by a recurring gap we observed in social XR and remote meditation experiences.

Many existing systems emphasize explicit interaction, instruction, or performance.
They focus on what individuals do or say, rather than how a group gradually comes to feel together.

In real-world rituals, togetherness rarely emerges from talking or coordinating actions.
Instead, it forms through rhythm, repetition, sound, light, and shared pacing, creating a collective atmosphere that participants enter together.

This led us to ask a different question:
can XR support emotional co-presence by sensing and shaping shared states, allowing a collective field to emerge rather than simulating interaction?


What it does

AURA is an XR system that helps people feel together: shared emotional states, sensed through breathing, movement, and social feedback, shape the virtual environment in real time.

Users begin grounded in physical reality.
Through music, light, and subtle spatial guidance, the environment gradually transitions into a shared virtual space.

Both individual regulation and group alignment are visualized in real time, allowing users to feel together without needing to speak or perform actions.
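
To make "group alignment" concrete: one simple way to score it is the circular mean of participants' breathing phases. The Unity-style sketch below is illustrative only; it assumes per-participant phases are estimated elsewhere, and the names and heuristic are ours, not the exact production code.

```csharp
using UnityEngine;

// Illustrative sketch: derive a 0-to-1 group-alignment score from each
// participant's breathing phase (in radians, estimated elsewhere).
// Uses the circular mean resultant length: 1.0 = perfectly in phase.
public static class GroupAlignment
{
    public static float FromBreathPhases(float[] phases)
    {
        if (phases == null || phases.Length == 0) return 0f;

        float sumSin = 0f, sumCos = 0f;
        foreach (float phase in phases)
        {
            sumSin += Mathf.Sin(phase);
            sumCos += Mathf.Cos(phase);
        }

        // Length of the mean resultant vector: high when phases cluster.
        float r = Mathf.Sqrt(sumSin * sumSin + sumCos * sumCos) / phases.Length;
        return Mathf.Clamp01(r);
    }
}
```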

The experience includes multiple ritual-based scenes, such as a fire gathering, where visuals, sound, and spatial qualities respond continuously rather than through discrete triggers.
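
As a minimal sketch of what "continuous rather than discrete" means in practice, the fire scene can ease its sound and light toward a shared 0-to-1 group signal every frame; the component below and its constants are illustrative assumptions, not our exact scene code.

```csharp
using UnityEngine;

// Illustrative sketch: drive a fire scene's sound and light continuously
// from a shared 0-to-1 signal instead of firing discrete events.
public class ContinuousFireResponse : MonoBehaviour
{
    [Range(0f, 1f)] public float sharedSignal; // fed by the network layer
    public AudioSource crackle;
    public Light fireLight;

    float velocity; // state for SmoothDamp

    void Update()
    {
        // Ease toward the target so the scene drifts rather than jumps.
        crackle.volume = Mathf.SmoothDamp(crackle.volume, sharedSignal,
                                          ref velocity, 0.8f);
        fireLight.intensity = Mathf.Lerp(0.5f, 2.0f, crackle.volume);
    }
}
```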


How we built it

AURA is built as a mixed-reality XR platform integrating embodied sensing, real-time networking, and immersive audiovisual feedback.

Unity serves as the core development environment for rendering, spatial interaction, audio control, and state transitions. Input is divided into system-level signals, such as rhythm and melody, and user-level signals, including breathing, head movement, hand motion, and voice intensity.
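
As a sketch of how the two tiers might be modeled (the type and field names are illustrative assumptions, not AURA's actual data model):

```csharp
// Hypothetical data model for the two input tiers described above.
public enum SignalTier { System, User }

[System.Serializable]
public struct AuraSignal
{
    public SignalTier tier;   // System: rhythm, melody; User: body signals
    public string channel;    // e.g. "breath", "headMotion", "voiceIntensity"
    public float value;       // raw sample before normalization
    public double timestamp;  // seconds since session start
}
```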

Multi-user synchronization is implemented using a TCP-based service unit that relays and normalizes signals between participants. Environmental and audio responses are driven by aggregated data mapped into a shared 0-to-1 signal space.
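
A minimal sketch of that mapping, assuming per-channel running min/max scaling and simple averaging across participants (the real relay may weight channels or users differently):

```csharp
using System.Collections.Generic;
using System.Linq;
using UnityEngine;

// Illustrative sketch of the normalization step: each channel keeps a
// running min/max and maps raw samples into the shared 0-to-1 space;
// the relay then averages per-user values into one group signal.
public class SignalNormalizer
{
    float min = float.MaxValue;
    float max = float.MinValue;

    public float Normalize(float raw)
    {
        min = Mathf.Min(min, raw);
        max = Mathf.Max(max, raw);
        float range = max - min;
        return range > 0f ? (raw - min) / range : 0.5f;
    }

    // Aggregate already-normalized per-participant values.
    public static float Aggregate(IEnumerable<float> normalized)
    {
        var values = normalized.ToList();
        return values.Count > 0 ? values.Average() : 0f;
    }
}
```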

To support the transition from reality to virtual space, we adapted a Gaussian-field-inspired rendering pipeline. Since direct real-time Gaussian field rendering proved difficult to port to VR, we modified the approach using high-resolution modeling and material-based rendering optimized for immersive scenes.
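
In this approach, scene materials can be driven directly from the shared group signal; the sketch below uses Unity's MaterialPropertyBlock, with `_Glow` as a placeholder shader property rather than our actual material.

```csharp
using UnityEngine;

// Illustrative sketch of material-based rendering driven by group state.
public class MaterialStateDriver : MonoBehaviour
{
    static readonly int GlowId = Shader.PropertyToID("_Glow"); // placeholder

    public Renderer targetRenderer;
    MaterialPropertyBlock block;

    void Awake() => block = new MaterialPropertyBlock();

    public void Apply(float groupSignal01)
    {
        // A property block avoids instantiating a new material per frame.
        targetRenderer.GetPropertyBlock(block);
        block.SetFloat(GlowId, groupSignal01);
        targetRenderer.SetPropertyBlock(block);
    }
}
```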

The system supports two participation setups: an XR headset with a breathing belt for embodied sensing, and a desktop-based setup with wearable or simulated input for broader accessibility.
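
A sketch of how both setups can share one code path, assuming a common breath-input interface (all names here are illustrative):

```csharp
using UnityEngine;

// Illustrative: one interface, two providers, so the rest of the system
// never needs to know which rig a participant is using.
public interface IBreathProvider
{
    float SampleNormalized(); // breath signal already mapped to 0..1
}

// Headset setup: reads the breathing belt (value written by hardware code).
public class BeltBreathProvider : IBreathProvider
{
    public float latestBeltValue;
    public float SampleNormalized() => Mathf.Clamp01(latestBeltValue);
}

// Desktop setup: simulated breathing as a slow sine wave (~16 s cycle).
public class SimulatedBreathProvider : IBreathProvider
{
    public float SampleNormalized() =>
        0.5f + 0.5f * Mathf.Sin(Time.time * 0.4f);
}
```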

Challenges we ran into

One major challenge was achieving a smooth transition from reality to virtual space.
Most Gaussian-field pipelines are not designed for immersive VR contexts, making direct integration difficult.

Another challenge was synchronizing multiple input modalities across users in real time while maintaining stability.
Mapping breath, motion, and voice into a coherent collective response required extensive tuning and iteration.
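
To give a flavor of that tuning, the sketch below smooths each channel with an exponential moving average before a weighted blend; the weights and time constant are placeholder values, not our final parameters.

```csharp
using UnityEngine;

// Illustrative sketch: smooth each channel so no single spike destabilizes
// the scene, then blend into one collective response value.
public class CollectiveBlender
{
    float breathEma, motionEma, voiceEma;

    public float Blend(float breath, float motion, float voice, float dt)
    {
        // Exponential moving average; smaller alpha = heavier smoothing.
        float alpha = 1f - Mathf.Exp(-dt / 0.5f); // ~0.5 s time constant
        breathEma = Mathf.Lerp(breathEma, breath, alpha);
        motionEma = Mathf.Lerp(motionEma, motion, alpha);
        voiceEma  = Mathf.Lerp(voiceEma,  voice,  alpha);

        // Placeholder weights of the kind arrived at through iteration.
        return Mathf.Clamp01(
            0.5f * breathEma + 0.3f * motionEma + 0.2f * voiceEma);
    }
}
```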

We also needed to balance technical ambition with accessibility.
This led us to design alternative input setups that lower the barrier to participation without compromising the core experience.


Accomplishments that we’re proud of

We successfully implemented real-time synchronization between two users, enabling shared environmental and audio responses driven by embodied input.

We built an experience that prioritizes a gradual transition and emotional regulation over explicit interaction.

We also created a flexible system that supports both XR and desktop setups, making the project more inclusive and easier to scale.


What we learned

We learned that co-presence does not require constant interaction.
Subtle, continuous feedback based on rhythm and embodied signals can be more powerful than explicit control.

We also learned that cinematic rendering pipelines often need to be fundamentally rethought when applied to immersive and interactive systems.

Designing for collective experience means treating synchronization, pacing, and ambiguity as design features rather than technical problems.


What’s next for AURA

Next, we plan to scale beyond two users and explore larger group dynamics.

We aim to deepen physiological sensing, including more robust breathing and arousal detection, and explore eye movement as an additional signal.

We also plan to expand AURA into more generative and customizable ritual worlds, allowing communities to create shared environments shaped by collective affect.
