Sensaura
Inspiration
Human conversations are full of signals that we rarely notice consciously: micro-expressions, subtle hesitation, changes in vocal tone, and shifts in attention. These signals shape how we interpret trust, confidence, and emotion, yet they often pass too quickly for us to process in real time.
Sensaura was inspired by the idea that augmented reality could help visualize these hidden layers of communication. Instead of relying solely on instinct, we imagined a wearable system that could surface subtle behavioral cues during conversations, helping people better understand the dynamics of human interaction.
What it does
Sensaura is a concept for augmented reality glasses that interpret subtle behavioral signals and visualize them in real time.
Using embedded sensors and analysis systems, the glasses detect patterns such as:
- Blink rate and eye behavior
- Facial micro-expressions
- Vocal tension or stress patterns
- Engagement and attention signals
- Hesitation indicators like throat movement
These signals are translated into intuitive visual cues in the AR display. For example:
- Cyan pulses represent blink activity
- Magenta waveforms represent speech tension
- Red highlights indicate micro-expressions
- Green auras represent engagement
- Violet distortions suggest cognitive overload
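Conceptually, the signal-to-cue mapping above can be sketched as a simple lookup table. The signal names, cue types, and lookup function below are illustrative assumptions, not part of any real implementation:

```python
# Hypothetical mapping from detected behavioral signals to AR visual cues.
# Signal names and cue descriptions are illustrative only.
SIGNAL_CUES = {
    "blink_activity":     {"cue": "pulse",      "color": "cyan"},
    "speech_tension":     {"cue": "waveform",   "color": "magenta"},
    "micro_expression":   {"cue": "highlight",  "color": "red"},
    "engagement":         {"cue": "aura",       "color": "green"},
    "cognitive_overload": {"cue": "distortion", "color": "violet"},
}

def cue_for(signal: str) -> str:
    """Return a human-readable description of the AR cue for a signal."""
    entry = SIGNAL_CUES.get(signal)
    if entry is None:
        return "no cue (unrecognized signal)"
    return f"{entry['color']} {entry['cue']}"
```

A real system would attach intensity and position to each cue, but even this flat table captures the core design choice: one stable color per signal class, so wearers can learn the vocabulary quickly.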
The goal is not to replace human judgment, but to augment perception by revealing patterns that might otherwise go unnoticed by our ordinary senses of sight and hearing.
How we built it
Sensaura was developed as a conceptual prototype combining wearable design, behavioral analysis, and visual storytelling.
First, we designed the hardware concept: a lightweight AR glasses form factor with transparent lenses, a minimal metallic frame, and embedded sensing points for eye tracking, voice input, and facial detection.
Next, we mapped behavioral signals to visual outputs. Instead of presenting raw data, we designed a system of color-coded holographic indicators that communicate insights quickly and intuitively.
Finally, we created visual demonstrations of the product experience. These included product renders and simulated augmented-reality overlays that illustrate how Sensaura might interpret human signals during a conversation.
Challenges we ran into
One of the biggest challenges was figuring out how to demonstrate how the glasses would actually work, given that the underlying technology does not yet exist.
At first, we tried explaining the concept through static images, but it was difficult to convey how signals would appear dynamically during conversations. We then experimented with video demonstrations to show interactions more clearly, but these were often too complex or time-consuming to produce.
Eventually, we landed on GIF-based scenario demonstrations, which struck the right balance. GIFs allowed us to show the behavioral signals appearing and changing in real time while keeping the demonstrations lightweight and easy to understand.
Accomplishments that we're proud of
One accomplishment we are particularly proud of is the visual design language we created for behavioral signals. Translating complex psychological cues into simple, intuitive visual indicators required careful design and iteration.
We are also proud of the product concept itself. Sensaura explores a new direction for augmented reality, one focused not just on digital information but on helping people understand each other better.
Finally, we successfully created a clear visual demonstration of the concept, making an abstract idea feel tangible and understandable.
What we learned
This project taught us a great deal about the complexity of human communication. Signals like facial expressions, vocal tone, and attention patterns are deeply interconnected and context-dependent.
We also learned how important visual communication is when presenting a technical idea. Even a powerful concept can be difficult to understand without clear demonstrations of how it works in practice.
Most importantly, we learned that technology aimed at interpreting human behavior must be designed carefully, with strong attention to clarity, ethics, and user trust.
What's next for Sensaura
Looking forward, the next steps for Sensaura would involve developing a more functional prototype.
This could include integrating real sensing technologies such as eye tracking, voice analysis, and computer vision models for facial signal detection. From there, we could test how real users interpret the augmented signals during conversations and refine the visualization system.
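As one illustration of the kind of sensing integration described above, a blink-rate estimator could be prototyped from a stream of normalized eye-openness values. The threshold and sample rate below are illustrative assumptions, not measured parameters:

```python
# Hypothetical blink-rate estimator: counts downward crossings of an
# eye-openness threshold over a window of samples. The threshold (0.2)
# and sample rate (30 Hz) are illustrative assumptions.
def blinks_per_minute(openness: list[float],
                      sample_rate_hz: float = 30.0,
                      threshold: float = 0.2) -> float:
    """Estimate blink rate from normalized eye-openness values
    (1.0 = fully open, 0.0 = fully closed)."""
    blinks = 0
    closed = False
    for value in openness:
        if not closed and value < threshold:
            blinks += 1          # eye just closed: count one blink
            closed = True
        elif closed and value >= threshold:
            closed = False       # eye reopened
    duration_min = len(openness) / sample_rate_hz / 60.0
    return blinks / duration_min if duration_min > 0 else 0.0
```

A functional prototype would feed this from an eye-tracking model rather than a list, but even a sketch like this is enough to start user-testing how blink-driven cyan pulses feel in conversation.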
Long-term, Sensaura could evolve into a platform that helps people navigate complex social environments, improve communication awareness, and better understand the subtle signals that shape human interaction.
Built With
- figma