Inspiration

One of our teammates grew up with deaf immigrant parents, experiencing firsthand how the world isn't designed for the deaf community. The statistics are striking:

- 466 million people worldwide are deaf or hard of hearing
- Less than 5% of online video content is accessible
- The ADA and WCAG require accessible digital content, yet current solutions are inefficient

Beyond deafness, immigrant parents face additional challenges with rapid English captions in a language they're still learning. This personal connection, combined with the clear need for better accessibility solutions, inspired us to create a platform that makes digital content truly accessible in ASL.

What it does

Signify transforms digital content into fluid ASL video sequences through AWS's powerful infrastructure.

Architecture:

Input Processing

- Text-to-ASL conversion using AWS Bedrock
- Speech-to-text using Amazon Transcribe
- Real-time processing via Lambda functions
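A minimal sketch of the text-to-gloss step, with the Bedrock call stubbed out by a toy lexicon so the lookup-and-fallback logic is visible. The names `GLOSS_LEXICON` and `text_to_gloss`, and the fingerspelling token format, are illustrative assumptions, not our production code.

```python
# Sketch of the text-to-ASL-gloss step. In the real pipeline a Bedrock
# model produces the gloss sequence; here a small hard-coded lexicon
# stands in. All names and token formats are illustrative.

# Hypothetical subset of WLASL gloss labels we have clips for.
GLOSS_LEXICON = {"hello", "thank", "you", "help", "book"}

def text_to_gloss(text: str) -> list[str]:
    """Map English text to a sequence of gloss tokens.

    Words missing from the lexicon fall back to fingerspelling:
    one token per letter, prefixed with 'fs-'.
    """
    glosses = []
    for word in text.lower().split():
        word = word.strip(".,!?")
        if word in GLOSS_LEXICON:
            glosses.append(word)
        else:
            # Fingerspelling fallback for out-of-vocabulary words.
            glosses.extend(f"fs-{ch}" for ch in word)
    return glosses
```

For example, `text_to_gloss("Thank you, Bo!")` yields `["thank", "you", "fs-b", "fs-o"]`: known words map to gloss clips, and the unknown name is fingerspelled letter by letter.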

Video Generation

- WLASL dataset integration for accurate signing
- Seamless video sequencing through MoviePy
- S3 storage for efficient video retrieval
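The sequencing step can be sketched as follows, assuming a hypothetical S3 key layout (`wlasl-clips/<gloss>.mp4`); the helper names are illustrative, and only the MoviePy concatenation call reflects the library's actual API.

```python
# Sketch of sequencing per-gloss clips into one ASL video.
# The S3 key layout and helper names are assumptions for illustration.

CLIP_PREFIX = "wlasl-clips"  # hypothetical S3 prefix

def gloss_to_key(gloss: str) -> str:
    """Map a gloss label to its clip's S3 object key."""
    return f"{CLIP_PREFIX}/{gloss}.mp4"

def stitch_clips(local_paths: list[str], out_path: str) -> None:
    """Concatenate already-downloaded clips into one video.

    MoviePy is imported lazily so the key-mapping helper above
    stays usable without MoviePy installed.
    """
    from moviepy.editor import VideoFileClip, concatenate_videoclips
    clips = [VideoFileClip(p) for p in local_paths]
    # method="compose" tolerates clips with differing resolutions,
    # which matters given the variation across WLASL source videos.
    final = concatenate_videoclips(clips, method="compose")
    final.write_videofile(out_path, codec="libx264", audio=False)
    for clip in clips:
        clip.close()
```

In the pipeline, each gloss from the conversion step is mapped to a key, the clips are fetched from S3 to Lambda's local storage, and `stitch_clips` writes the final sequence back to S3.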

Delivery System

- API Gateway for request handling
- CloudFront for fast video delivery
- React frontend for smooth playback
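The delivery path can be sketched as a Lambda proxy handler behind API Gateway that hands the frontend a CloudFront URL instead of a direct S3 link. The distribution domain and response shape here are illustrative assumptions, not our deployed configuration.

```python
import json

# Sketch of the API Gateway -> Lambda handler on the delivery path.
# The CloudFront domain and JSON response shape are hypothetical.
CDN_DOMAIN = "dxxxxxxxx.cloudfront.net"  # placeholder distribution domain

def handler(event, context):
    body = json.loads(event.get("body") or "{}")
    video_key = body.get("video_key", "")
    if not video_key:
        return {"statusCode": 400,
                "body": json.dumps({"error": "video_key is required"})}
    # Return a CDN URL so the React frontend streams the stitched
    # video through CloudFront rather than directly from S3.
    return {"statusCode": 200,
            "body": json.dumps({"url": f"https://{CDN_DOMAIN}/{video_key}"})}
```

Routing playback through CloudFront keeps latency low for repeated requests, since stitched videos are cached at the edge rather than re-fetched from S3.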

How we built it

We developed Signify using a comprehensive tech stack:

Frontend:

- React for a dynamic user interface
- Next.js for the web application stack

Backend:

- Python for video processing
- AWS Lambda for serverless computing
- WLASL dataset integration
- MoviePy for video concatenation

AWS Services:

- S3 for video storage
- Bedrock for language processing
- API Gateway for endpoints
- Lambda for serverless functions

Challenges we ran into

Dataset Limitations

- Gaps in WLASL vocabulary
- Multiple signers affecting consistency
- Video quality variations

Technical Hurdles

- Video sequence optimization
- AWS service integration
- Performance optimization

Accomplishments that we're proud of

- Successfully built an end-to-end AWS-powered ASL translation system
- Integrated and optimized the WLASL dataset
- Created smooth video transitions despite dataset challenges
- Developed a scalable, cloud-based architecture
- Built with real users in mind

What we learned

Through building Signify, we gained crucial insights:

- **AWS Infrastructure:** Orchestrating cloud services for real-time video processing
- **Video Processing:** The complexities of handling sequential ASL video playback
- **Accessibility First:** The importance of designing with the deaf community in mind
- **Dataset Management:** Working with WLASL and handling its limitations
- **Cultural Impact:** Understanding how technology can bridge communication gaps for deaf immigrants

What's next for Signify

Open Source Growth

- Share our enhanced WLASL dataset with the community
- Contribute to public ASL resources
- Enable collaborative dataset improvement

Technical Optimization

- Implement AWS performance enhancements
- Improve video transition smoothness
- Develop a mobile-first experience

Community Impact

- Support multiple sign languages
- Enable community contributions
- Build real-time translation

Built With
