Inspiration: One of our team members has neurodivergent friends and noticed that they were often ostracized or bullied because of it. After hearing about this problem, we looked into the statistics on how many people face these challenges today. We wanted to build something that could help people with neurodivergent conditions such as autism and ADHD. In terms of technology, we believe augmented reality will be the next major platform, and one that many people can benefit from.
What it does: Nara is an assistive augmented reality system that helps neurodivergent individuals interpret social cues in real time. Through an AR interface, Nara analyzes signals such as gaze direction, tone, and conversational context, then provides subtle visual guidance to help users stay engaged and interpret interactions more confidently. For example, Nara can gently guide a user’s attention back toward a conversation partner, detect cues such as sarcasm or tone shifts, and provide feedback that helps users better understand what might otherwise be missed. By making the invisible visible, Nara helps users build confidence in everyday social interactions.
How we built it: We designed Nara as a prototype AR experience using Figma Make to simulate how assistive augmented reality could support real-time social awareness. The prototype recreates a classroom conversation scenario from the user's point of view through AR glasses. A gaze-based cursor simulates where the user is looking, while subtle visual cues help guide attention back toward the conversation partner when focus drifts. Interactive elements demonstrate how the system might detect tone or conversational cues and present them in a clear but unobtrusive way. In addition to the AR interface, we designed a companion app that helps users reflect on interactions by tracking metrics such as social attention ratio, stress signals, and social cue detection over time.
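To make the mechanics above concrete, here is a minimal TypeScript sketch of how the gaze-drift cue and the companion app's "social attention ratio" metric could work. All names, types, and thresholds here are hypothetical illustrations, not code from the actual Figma Make prototype:

```typescript
// Hypothetical gaze sample, e.g. produced by the simulated gaze cursor.
interface GazeSample {
  timestampMs: number;
  onPartner: boolean; // true if the gaze cursor overlaps the conversation partner
}

// Companion-app metric: fraction of the conversation spent looking at the partner.
function socialAttentionRatio(samples: GazeSample[]): number {
  if (samples.length === 0) return 0;
  const onPartner = samples.filter((s) => s.onPartner).length;
  return onPartner / samples.length;
}

// AR-side logic: show a subtle refocus cue once attention has drifted
// away from the partner for longer than the threshold (illustrative value).
function shouldShowRefocusCue(
  samples: GazeSample[],
  driftThresholdMs = 3000
): boolean {
  let driftStart: number | null = null;
  for (const s of samples) {
    if (s.onPartner) {
      driftStart = null; // attention returned; reset the drift timer
    } else if (driftStart === null) {
      driftStart = s.timestampMs; // drift just began
    } else if (s.timestampMs - driftStart >= driftThresholdMs) {
      return true; // drifted long enough to warrant a gentle cue
    }
  }
  return false;
}
```

In a real system these samples would come from eye-tracking hardware rather than a simulated cursor, and the threshold would likely be personalized per user.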
Challenges we ran into: One of the biggest challenges was designing an interface that provides helpful feedback without overwhelming the user. Augmented reality overlays must remain subtle so they support the interaction rather than distract from it. Another challenge was translating complex behavioral signals into meaningful, understandable UI elements: because many social cues are ambiguous, the system must present guidance carefully so that it assists interpretation rather than making definitive judgments. We also explored how to visually represent AR interactions within a prototype environment while maintaining a realistic point-of-view experience. Visualizing an AR glasses experience inside Figma Make was new to all of our team members, and we had to carefully refine our prompts to generate the output we needed for the demo. Creating the classroom scenario in Figma Make was also important for telling the story; simply showing the UI would not have been enough to communicate the experience. Designing the animations and interactions we wanted was challenging for a no-code prototype, but Figma Make helped us bring the concept to life.
Accomplishments that we're proud of: We are proud of creating a prototype that demonstrates how AR could meaningfully support neurodivergent users in everyday conversations. Our project shows how subtle social signals can be quantified and visualized in real time, turning complex human interactions into understandable guidance. We were also excited to design a cohesive ecosystem that pairs the AR interface with a reflection-based companion app, showing how immediate assistance and long-term learning could work together. Finally, we are proud of the way our team came together. We were complete strangers at the beginning of the hackathon, but quickly aligned on a problem that resonated with all of us and moved from ideation to execution in a very short amount of time. Despite being in different locations and time zones, we collaborated effectively. Each of us brought a different background in design, development, or research, which created a strong combination of perspectives. We were able to combine our expertise and prior experience to create Nara, and we genuinely believe a solution like this could help many people.
What we learned: This project reinforced how important thoughtful interface design is when building assistive technologies. We learned that the most effective solutions are often the least intrusive. Rather than overwhelming users with information, assistive systems should provide simple, contextual guidance that supports natural interaction. We also gained insight into how augmented reality could become a powerful medium for accessibility, helping people navigate complex social environments with greater confidence. Through our research, we learned more about the real challenges neurodivergent individuals face, particularly the invisible sensory and social experiences that are difficult to interpret in real time. This inspired us to think about how technology could help visualize and support those experiences. Finally, we learned that Figma Make is a powerful tool, but it requires multiple iterations and careful prompt refinement to achieve the desired results; it is not as simple as typing a prompt and receiving the perfect output. Designing AR experiences within Figma was particularly challenging, and we had to work together to creatively solve those constraints.
What's next for Nara — Neurodivergence Augmented Reality Assistant: The next step for Nara would be expanding the system beyond a prototype into a more fully integrated assistive platform.
We would like to explore additional functionality that supports other cognitive and sensory challenges beyond focus and social cues. For example, we discussed ways the system could help users who become overstimulated in loud or overwhelming environments by providing calming music, relaxing visuals, or sensory regulation tools. Due to the short duration of the hackathon, we were not able to fully research and prototype these ideas, but we would like to explore them further in future iterations. Future development could include real-time multimodal sensing, personalized feedback models, and adaptive interfaces that learn from user preferences and experiences. Our long-term vision is to create an assistive AR ecosystem that empowers neurodivergent individuals by making invisible social signals visible, understandable, and actionable.
Built With
- ai-assisted-prototyping
- augmented-reality-interaction-design
- chatgpt
- claude
- css
- figma
- figma-make
- hci
- html
- react
- typescript
- ux-research-methods
