Inspiration

The aim of this project is to enhance STEM education, specifically human anatomy, through augmented reality. Traditional teaching tools such as anatomy books and physical models can be confusing for new students. This project overlays a wearable 3D flashcard directly onto the body, making anatomy more accessible and engaging outside of classrooms and labs. Interactive elements also help students learn the pronunciation and identification of anatomical structures, further improving their understanding and retention of the subject.

What it does

The app makes learning anatomy engaging and interactive by combining an open-source anatomy library in Blender with a user-friendly interface. Users can view and explore specific facial muscles: as the slider moves, different structures are highlighted, and the corresponding labels are revealed and read aloud, helping users retain and understand the information. A quiz mode lets users test their knowledge and strive for a high score, reinforcing their understanding of the subject.

How I built it

The project was developed using a blend of 3D modeling, VoiceML, Text-to-Speech, scripting, and interface design. The open-source Blender anatomy library z-anatomy was used to create 3D models of the musculature. UI components control the display of anatomical labels, and Text-to-Speech recites each anatomical name. Custom scripts respond to the opacity and muscle-name slider values, updating the properties of the 50+ meshes in real time. The quiz mode uses the Levenshtein distance to measure the similarity between the transcribed spoken word and the anatomical structure's name, producing a comparison score that classifies each answer as correctly, closely, or incorrectly heard. The anatomical structures are sorted alphabetically, and a confetti particle effect with a clap sound celebrates high scores.
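The quiz scoring described above can be sketched in Python. This is a minimal illustration, not the lens's actual script: the helper names and the 0.8 "close" threshold are my own assumptions.

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    m, n = len(a), len(b)
    prev = list(range(n + 1))  # distances for the previous row
    for i in range(1, m + 1):
        curr = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            curr[j] = min(prev[j] + 1,         # deletion
                          curr[j - 1] + 1,     # insertion
                          prev[j - 1] + cost)  # substitution
        prev = curr
    return prev[n]

def classify_answer(spoken: str, target: str) -> str:
    """Classify a transcribed answer as correct, close, or incorrect.

    The score normalizes edit distance by the longer string's length,
    so 1.0 means an exact match. The 0.8 cutoff for "close" is an
    illustrative choice, not necessarily the value used in the lens.
    """
    spoken, target = spoken.lower().strip(), target.lower().strip()
    dist = levenshtein(spoken, target)
    score = 1.0 - dist / max(len(spoken), len(target), 1)
    if score == 1.0:
        return "correct"
    if score >= 0.8:
        return "close"
    return "incorrect"
```

Normalizing by the longer word keeps the tolerance proportional: a one-letter slip in a long term like "zygomaticus" still counts as close, while the same slip in a short word matters more.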

Challenges I ran into

I faced a number of challenges during development. One was the presence of multiple render meshes in certain geometries, which caused issues when referencing shaders. Another was that some meshes were inverted by negative scaling, which broke the highlight shader. Both required careful debugging to ensure a seamless, functional end product.

Initially, I planned to use touch interaction, where users would tap a structure to reveal its name. But the structures are layered and often occluded, making touch interaction unreliable. Using buttons to navigate among 50+ structures would overload the UI and invite mis-taps from users with larger fingers, so I opted for a UI slider instead.
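The slider approach can be sketched by dividing the slider's normalized range into one bin per structure. This is an illustrative Python sketch (the real lens does this in a Lens Studio script; the function name and muscle list here are my own):

```python
def structure_for_slider(value: float, structures: list[str]) -> str:
    """Map a normalized slider value in [0, 1] to one of N structures.

    Splitting the range into N equal bins lets a single control
    navigate 50+ layered structures without crowding the UI with buttons.
    """
    value = min(max(value, 0.0), 1.0)  # clamp to the slider's range
    index = min(int(value * len(structures)), len(structures) - 1)
    return structures[index]

# Structures are sorted alphabetically, as in the lens
muscles = sorted(["Masseter", "Temporalis", "Orbicularis oris",
                  "Zygomaticus major"])
first = structure_for_slider(0.0, muscles)  # first structure alphabetically
```

One slider scales cleanly as structures are added, since the bins simply get narrower rather than requiring new UI elements.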

To transcribe uncommon anatomical terms, I used speech context boosters. Even so, some words were still transcribed incorrectly, so I added a toggleable multiple-choice format to the quiz.

Accomplishments that I'm proud of

  • The ability to change the transparency of the structures
  • The ability to randomly highlight important anatomical structures
  • The ability to add speech-to-text, text-to-speech, study/quiz modes, and high scores
  • Keeping the lens size under 2 MB!

What I learned

During the development of this project, I learned about:

  • Working with shader graph
  • Interfacing with UI elements
  • Writing custom scripts
  • Interfacing custom variables with shaders and UI elements
  • Using VoiceML
  • Using Persistent Storage
  • Mode switching within the lens

In the future, I plan to

  • Add blendshapes for the face so the anatomical structures respond to facial movements
  • Create similar experiences for hands and feet

This technology could also be extended to educational experiences in the surgical, architectural, manufacturing, automotive, and veterinary domains. Overall, the project aims to make the study of complex assemblies like human anatomy more engaging and interactive for students through innovative 3D wearable flashcards.
