The Problem We Faced
Disability awareness: the communication needs of people with speech and hearing impairments deserve far more attention.
Current applications:
- HearMe: translates voice into sign-language motion animation
- HandTalk: same approach as HearMe
The gap: there is no app yet that uses camera-based motion sensing to convert sign language into text in real time.
What it does
Solution: the Master Gesture application uses camera-based motion detection to translate sign language into text.
Main advantages:
- Facilitates two-way communication between people with disabilities and others
- Real-time translation from sign language to text (a rough inference sketch is shown below)
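The project code is not included in this write-up, so the following is only a minimal sketch of how camera frames could be fed to the exported ONNX model using onnxruntime and OpenCV. The model path (master_gesture.onnx), the class labels, the 640x640 input size, and the confidence threshold are assumptions, and the output decoding follows a common YOLOv5-style layout rather than Master Gesture's actual post-processing.

```python
# Hypothetical sketch: real-time sign-to-text inference with the exported ONNX model.
# The model file name, class labels, input size, and output layout are assumptions.
import cv2
import numpy as np
import onnxruntime as ort

LABELS = ["hello", "thank_you", "yes", "no"]  # placeholder class names

session = ort.InferenceSession("master_gesture.onnx")  # assumed model path
input_name = session.get_inputs()[0].name

cap = cv2.VideoCapture(0)  # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break

    # Preprocess: BGR -> RGB, resize to the assumed 640x640 input, NCHW float32 in [0, 1].
    img = cv2.cvtColor(cv2.resize(frame, (640, 640)), cv2.COLOR_BGR2RGB)
    blob = img.astype(np.float32).transpose(2, 0, 1)[None] / 255.0

    # YOLOv5-style export produces one tensor of shape (1, N, 5 + num_classes):
    # [x, y, w, h, objectness, class scores...] per candidate box.
    preds = session.run(None, {input_name: blob})[0][0]
    best = preds[preds[:, 4].argmax()]  # candidate with the highest objectness
    if best[4] > 0.5:                   # assumed confidence threshold
        label = LABELS[int(best[5:].argmax())]
        print("Detected sign:", label)

cap.release()
```

On the real client the same idea would run in the Ionic/Vue frontend rather than a Python loop, but the preprocessing and ONNX call are analogous.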
How we built it
- Collect the sign-language gesture dataset
- Train the model on the dataset using Python and YOLO (see the sketch after this list)
- Validate the trained model
- Optimize the dataset and model
- Export the trained model to PyTorch and ONNX formats
- Implement the frontend and backend
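The write-up only names Python and YOLO, so this is a minimal sketch of what the train, validate, and export steps could look like with the ultralytics package. The yolov8n.pt base weights, the signs.yaml dataset config, and the hyperparameters are assumptions, not the team's actual setup.

```python
# Hypothetical sketch of the train -> validate -> export pipeline described above.
# Base weights, dataset config, and hyperparameters are assumptions.
from ultralytics import YOLO

# Start from a small pretrained YOLO checkpoint (assumed: yolov8n).
model = YOLO("yolov8n.pt")

# Train on the labelled sign-gesture dataset (assumed config file: signs.yaml).
model.train(data="signs.yaml", epochs=100, imgsz=640)

# Validate on the held-out split defined in signs.yaml.
metrics = model.val()
print("mAP50-95:", metrics.box.map)

# The .pt weights are already PyTorch; additionally export to ONNX
# so the client side can run inference with onnxruntime.
model.export(format="onnx")
```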
Challenges we ran into
Device compatibility: training and prediction require a fairly powerful GPU, and the client side needs a capable device to keep real-time translation accurate.
Accomplishments that we're proud of
Working prototype: we successfully built a prototype that translates sign-language movements into text.
Positive feedback: we received positive feedback from the community of users with disabilities.
What we learned
We came to understand the needs and challenges people with disabilities face when communicating.
Technological innovation: we gained insight into how motion-sensing technology and AI can be used for inclusion.
What's next for Master Gesture
Collaboration with video-conference platforms: integrate with tools such as Google Meet, Zoom, and Microsoft Teams to add a real-time sign-language-to-text conversion feature.
Built With
- bun
- ionic
- onnx
- postgresql
- python
- vue
- yolo