The Problem We Faced

Care for people with disabilities: it is important to pay attention to the communication needs of people with speech and hearing impairments. Current applications:

  • HearMe: translates voice into animated sign language motion
  • HandTalk: takes the same approach as HearMe

Gap: no application yet uses motion sensing to convert sign language to text in real time.

What it does

Solution description: the Master Gesture application uses camera-based motion sensing to translate sign language into text. Main advantages:

  • Facilitates two-way communication between people with disabilities and others
  • Real-time translation from sign language to text (sketched below)
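A minimal sketch of that real-time loop in Python, assuming the trained YOLO model from the build steps below and OpenCV for camera capture; the model path master_gesture.pt and the on-screen text overlay are hypothetical illustration, not the project's actual code:

```python
import cv2
from ultralytics import YOLO  # assumption: Ultralytics YOLO, since a YOLO model is named below

model = YOLO("master_gesture.pt")  # hypothetical path to the trained sign-gesture model
cap = cv2.VideoCapture(0)          # open the default camera

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # Detect sign gestures in the current frame.
    results = model(frame, verbose=False)
    for box in results[0].boxes:
        sign = model.names[int(box.cls)]  # detected class name = recognized sign
        cv2.putText(frame, sign, (20, 40),
                    cv2.FONT_HERSHEY_SIMPLEX, 1.2, (0, 255, 0), 2)
    cv2.imshow("Master Gesture", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
        break

cap.release()
cv2.destroyAllWindows()
```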

How we built it

  1. Collect a master dataset of sign gestures
  2. Train the model on the dataset using Python and YOLO (see the sketch after this list)
  3. Validate the trained model
  4. Optimize the model and dataset
  5. Export the trained model to PyTorch and ONNX formats
  6. Implement the frontend and backend
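A minimal sketch of steps 2-5, assuming the Ultralytics YOLO package (the write-up names YOLO but no specific version) and a hypothetical dataset config sign_language.yaml describing the collected gestures:

```python
from ultralytics import YOLO  # assumption: Ultralytics YOLO

# Step 2: fine-tune a pretrained checkpoint on the sign-gesture dataset.
# "sign_language.yaml" is a hypothetical dataset config (class names, image paths).
model = YOLO("yolov8n.pt")
model.train(data="sign_language.yaml", epochs=100, imgsz=640)

# Step 3: validate the trained model on the validation split.
metrics = model.val()

# Step 5: export to ONNX for lighter client-side inference;
# the PyTorch weights (.pt) are already saved during training.
model.export(format="onnx")
```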

Challenges we ran into

Device compatibility: dataset training and prediction require a higher-spec GPU, and the client device also needs relatively high specs to keep translation accurate in real time.
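One way to ease the client-side requirement, sketched under assumptions the write-up does not spell out, is to run the exported ONNX model on CPU with ONNX Runtime; the filename master_gesture.onnx and the default YOLO input name "images" are assumptions:

```python
import cv2
import numpy as np
import onnxruntime as ort

# Load the exported model on CPU ("master_gesture.onnx" is a hypothetical filename).
session = ort.InferenceSession("master_gesture.onnx",
                               providers=["CPUExecutionProvider"])

def preprocess(frame):
    # Standard YOLO-style preprocessing: resize, BGR->RGB, scale to [0, 1], NCHW.
    img = cv2.resize(frame, (640, 640))
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB).astype(np.float32) / 255.0
    return img.transpose(2, 0, 1)[None, ...]  # shape (1, 3, 640, 640)

frame = cv2.imread("sample_sign.jpg")  # hypothetical test image
# Assumption: the exported graph keeps YOLO's default input name "images".
outputs = session.run(None, {"images": preprocess(frame)})
print(outputs[0].shape)  # raw predictions; box/label decoding happens downstream
```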

Accomplishments that we're proud of

  • Working prototype: successfully built a prototype that translates sign language movements into text
  • Positive feedback: received positive feedback from the community of users with disabilities

What we learned

  • User needs: a deeper understanding of the needs and challenges people with disabilities face when communicating
  • Technological innovation: insight into how motion-sensing technology and AI can be used for inclusion

What's next for Master Gesture

Collaboration with video conferencing platforms: integrate with tools such as Google Meet, Zoom, and Microsoft Teams to add real-time sign-language-to-text conversion during calls.

Built With

python, yolo, pytorch, onnx
