Inspiration

The inspiration for this project came from a deep desire to make education accessible to all, particularly students with physical disabilities, who often face additional challenges in traditional learning environments. After talking with friends who rely on assistive technologies for their education, we realized that current tools often fall short of providing a seamless, engaging, and fully inclusive experience. AR, VR, hand sign recognition, and voice assistants have made huge strides in recent years, and we wanted to harness these technologies to break down barriers to learning for students with diverse physical abilities.

What it does

The platform is an accessible learning environment that combines AR, VR, hand sign recognition, and a voice assistant so that students with physical disabilities can engage with educational content on their own terms. Immersive AR/VR lessons run directly in the browser through WebXR; real-time hand sign recognition, built with MediaPipe-style hand tracking, translates gestures into commands or text for students who rely on sign language; and an integrated voice assistant lets students navigate the entire platform hands-free. Together, these input modes mean every student can choose the interaction style that works best for them, regardless of physical limitations.

How we built it

The project was built with React.js on the frontend and Python (Flask) on the backend. For AR and VR, we used the WebXR API, which enables immersive learning experiences directly in the browser. We implemented hand sign recognition with TensorFlow.js, using machine learning models to detect hand gestures in real time and translate them into commands or text for users who rely on sign language. For voice control, we integrated the Google Assistant SDK so that students can navigate the platform entirely by voice, making it truly hands-free. Throughout the project, we kept accessibility front and center, with intuitive navigation and user-friendly design.
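As a rough illustration of the gesture-to-command layer described above, here is a minimal sketch in plain JavaScript that maps MediaPipe-style hand landmarks (21 normalized points, with y increasing downward) to platform commands. The specific gestures and command names are hypothetical placeholders, not the project's actual mapping:

```javascript
// Fingertip and PIP-joint indices in the 21-point hand landmark layout.
// A finger counts as "extended" when its tip sits above its PIP joint
// (smaller y in image coordinates).
const FINGERS = [
  { tip: 8, pip: 6 },   // index
  { tip: 12, pip: 10 }, // middle
  { tip: 16, pip: 14 }, // ring
  { tip: 20, pip: 18 }, // pinky
];

function countExtendedFingers(landmarks) {
  return FINGERS.filter(f => landmarks[f.tip].y < landmarks[f.pip].y).length;
}

// Map one frame of landmarks to a command string (hypothetical mapping).
function gestureToCommand(landmarks) {
  const extended = countExtendedFingers(landmarks);
  if (extended === 0) return "pause";  // closed fist
  if (extended === 4) return "play";   // open palm
  if (extended === 1) return "select"; // pointing
  return null;                         // unrecognized gesture
}
```

In practice, a learned classifier handles far more variation than threshold rules like these, but a rule-based layer is a useful way to prototype the command mapping before the model is trained.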

Challenges we ran into

One of the biggest challenges was integrating multiple assistive technologies into a cohesive platform. The hand sign recognition required extensive fine-tuning to work accurately across different lighting conditions and hand shapes. Keeping the AR/VR components performant across devices was also difficult, especially on lower-end hardware. Syncing the voice assistant with real-time actions so that it worked seamlessly alongside the gesture controls took considerable effort as well. Finally, creating a truly inclusive experience meant continually testing and iterating with potential users, which was time-consuming but essential to the success of the project.
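One common way to tame the per-frame jitter described above is to smooth predictions over a short sliding window before acting on them. A minimal sketch of that idea in JavaScript (the window size and majority threshold are illustrative choices, not values from the project):

```javascript
// Stabilize noisy per-frame gesture labels with a strict-majority vote
// over a sliding window of recent frames.
class GestureSmoother {
  constructor(windowSize = 5) {
    this.windowSize = windowSize;
    this.frames = [];
  }

  // Feed one per-frame label; returns the stable label once a strict
  // majority of the window agrees, otherwise null.
  push(label) {
    this.frames.push(label);
    if (this.frames.length > this.windowSize) this.frames.shift();
    const counts = new Map();
    for (const l of this.frames) counts.set(l, (counts.get(l) || 0) + 1);
    for (const [l, n] of counts) {
      if (n > this.windowSize / 2) return l;
    }
    return null;
  }
}
```

This trades a few frames of latency for stability: a single misclassified frame under bad lighting no longer flips the active command.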

Accomplishments that we're proud of

We’re proud of creating a truly accessible learning platform that combines cutting-edge technologies like AR, VR, hand sign recognition, and voice assistants. One of the major accomplishments was successfully integrating real-time hand sign recognition, allowing students who rely on sign language to communicate and interact with the platform seamlessly. The immersive AR/VR features bring a new dimension to education, making learning interactive and engaging for all students, regardless of their physical abilities. The voice assistant integration has also made it possible for students to navigate the platform without the need for physical interaction, ensuring that the platform is truly hands-free and inclusive. Seeing the platform come together as a functional tool for students is something we’re particularly proud of.

What we learned

Throughout this project, we learned the importance of user-centered design and of continuous feedback, particularly from the target audience. Working on accessibility features taught us how critical an inclusive mindset is, not just technically but in terms of user experience. On the technical side, we deepened our understanding of the WebXR API, TensorFlow.js, and the Google Assistant SDK, and of how to integrate these technologies into a single, cohesive platform. We also learned that the intersection of AI and accessibility opens up incredible possibilities for education, and that overcoming challenges like gesture accuracy and device compatibility requires constant iteration and user testing.

What's next for AI & AR/VR Learning: Personalized, Immersive, and Engaging

The next steps for this project involve expanding the platform’s capabilities. We aim to add more personalization features, tailoring the learning content to each student’s unique needs, abilities, and learning pace using AI-driven insights. We’re also looking to enhance the AR/VR learning environments, creating subject-specific modules like virtual labs for science courses, or immersive historical reconstructions for history lessons. In the future, we plan to expand support for more languages and enhance the hand sign recognition to cover a broader range of gestures, potentially incorporating regional sign languages. Ultimately, we envision this platform being adopted by schools globally, empowering physically disabled students with the tools they need to engage fully in their education.
