Inspiration

Modern industrial production is increasingly complex, fast-paced, and safety-critical. Skilled workers are expected to master detailed procedures, adapt to frequent process changes, and maintain high quality under pressure. At the same time, industries worldwide face a growing shortage of experienced workers, making onboarding and training more challenging than ever.

Our team saw an opportunity to combine advances in XR, multimodal AI, and wearable computing to support workers directly where they need it most: on the factory floor. Seeing how even small mistakes can lead to downtime, safety issues, or expensive quality deviations inspired us to envision an adaptive system that guides, protects, and empowers workers in real time.

We were also motivated by the ethical dimension: industrial AI must not become surveillance. Instead, it should be privacy-respecting, worker-centric, and aligned with EU AI Act principles. This balance between innovation and responsibility shaped our concept from the beginning.

What it does

Our system delivers adaptive XR-based training and real-time assistance for industrial workers. It uses multimodal AI (camera data, eye gaze, hand tracking, and spatial understanding) to interpret tasks and provide context-aware visual, audio, and haptic guidance.
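To make the idea concrete, here is a minimal sketch of how fused multimodal signals could be mapped to a guidance cue. The signal names, thresholds, and messages are illustrative assumptions, not the actual pipeline we built:

```python
from dataclasses import dataclass

@dataclass
class WorkerState:
    gaze_on_target: bool      # eye tracking: is the worker looking at the current part?
    hands_near_hazard: bool   # hand tracking: proximity to a flagged zone
    step_confidence: float    # vision model's confidence that the step matches

def choose_guidance(state: WorkerState) -> dict:
    """Map a fused sensor state to a multisensory guidance cue."""
    if state.hands_near_hazard:
        # Safety overrides everything: strong haptic plus audio alert.
        return {"channel": "haptic+audio", "message": "Hazard zone: move hands away"}
    if not state.gaze_on_target:
        # Worker is looking elsewhere: visual pointer toward the current step.
        return {"channel": "visual", "message": "Look here for the next step"}
    if state.step_confidence < 0.6:
        # Unsure the step is being done correctly: gentle audio hint.
        return {"channel": "audio", "message": "Check the step instructions"}
    return {"channel": "none", "message": "On track"}

print(choose_guidance(WorkerState(gaze_on_target=True,
                                  hands_near_hazard=False,
                                  step_confidence=0.9)))
```

The key design point is the priority ordering: safety alerts pre-empt instructional guidance, which in turn pre-empts quality hints.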

Workers receive step-by-step instructions, safety alerts, and personalized feedback as they gain experience. Supervisors use a dashboard to view anonymized system-level metrics, respond to critical events, and optionally request access to a worker’s point of view.

All data is processed with privacy-by-design, storing only lightweight metadata to keep latency low and ensure responsible, human-centric use.
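As a sketch of what "lightweight metadata" can mean in practice, the following assumes a design where only pseudonymized event records, never raw camera or gaze data, leave the headset. Field names and the hashing scheme are assumptions for illustration:

```python
import hashlib
import time

def make_event_record(worker_id: str, station: str, event_type: str) -> dict:
    """Build an anonymized, low-footprint event record for the dashboard."""
    # Hash the worker ID so the dashboard sees aggregate behavior, not identities.
    anon_id = hashlib.sha256(worker_id.encode()).hexdigest()[:12]
    return {
        "anon_id": anon_id,      # pseudonymous; raw ID never leaves the device
        "station": station,      # coarse location only, no spatial mesh or video
        "event": event_type,     # e.g. "step_completed", "safety_alert"
        "timestamp": int(time.time()),
    }

record = make_event_record("worker-42", "assembly-3", "step_completed")
print(record["anon_id"], record["event"])
```

Records like this stay small, which keeps latency low, and contain nothing that identifies a worker directly, which supports the privacy-by-design goal.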

How we built it

We began day one by exploring all tracks and challenges, discussing different ideas, and ultimately choosing the industrial use case because of its strong real-world relevance and transformative potential. Once aligned, we defined a clear structure for the pipeline, the XR ecosystem, and the demos we wanted to deliver.

On day two, we focused on building: implementing the multimodal AI pipeline, developing the codebase, and assembling the interactive demos. We refined the workflow, integrated multisensory elements, and consulted mentors to ensure that ethical, technical, and design aspects were addressed. This iterative process allowed us to move from concept to a coherent, working prototype within the hackathon timeframe.

Challenges we ran into

Our first challenge was that the team wasn’t complete until midday on the first day, which meant we spent significant time finding teammates and aligning on which track and challenge to pursue. Even after the team was finalized, we needed to reassess all ideas and narrow down several strong candidates before committing to the industrial use case.

A second challenge was coordinating different working styles, strengths, and expectations. We addressed this by dividing responsibilities clearly and aligning frequently to keep the project coherent. Because our topic involved high-stakes industrial environments, we also had to carefully consider ethical and privacy implications. With guidance from mentors, we ensured that governance, transparency, and worker protection were fully integrated into the design.

Finally, building the codebase and demo required thinking through many possible edge cases and real-world scenarios. Ensuring robustness in such a complex context was challenging, but it helped us create a more realistic and comprehensive prototype.

Accomplishments that we're proud of

We’re proud of how quickly we formed a coherent, collaborative team, bringing together different skills and perspectives. We’re also proud to have chosen an application domain with real economic and societal value, something that can genuinely improve safety, training, and working conditions in industry.

Despite the limited time, we built a functional prototype, refined a complete workflow, and delivered a clear narrative. And finally, we’re proud of the connections we made during the hackathon: new teammates, mentors, and friends who helped shape the project.

What we learned

We learned how to narrow down a broad set of ideas into a focused concept with a clear and realistic scope. This helped us understand how to move from abstract discussions to a concrete workflow and a demo that others can actually follow and evaluate.

We also gained insight into real industrial processes: how work is structured, where difficulties arise, and what workers genuinely need. Turning a theoretical idea into a practical, comprehensible prototype taught us a lot about teamwork, communication, and presentation. Along the way, we deepened our understanding of the industrial domain and discovered new opportunities where XR and AI can add real value.

What's next for AIXR-Coach

We see clear potential to continue this project beyond the hackathon. Based on feedback, we plan to expand the codebase, refine the prototype, and explore additional industrial scenarios where XR and AI can make a meaningful impact. Some team members (including Vivek, who is already active in this research area) may extend the work into master’s theses or exploratory research projects.

In the longer term, as XR hardware and multimodal AI continue to advance, AIXR-Coach could evolve into a fully realised, ethical industrial training framework with a complete technical stack. Our goal is to develop something that remains worker-centric, privacy-preserving, and genuinely useful in real industrial environments.
