This project was inspired by the need to bridge the communication gap between deaf-mute individuals and the hearing community. We built a system that captures hand gestures through a camera, processes the frames, and uses a CNN model to recognize signs and translate them into text. During development, we faced challenges such as limited datasets, varying backgrounds, and accurate hand segmentation. Despite these challenges, we successfully created a high-accuracy recognition system that relies on deep learning alone, with no special hardware such as sensor gloves. Through this work, we learned how to preprocess images effectively, design and train CNN models, and improve gesture recognition using data augmentation and hyperparameter tuning.
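
As a rough illustration of the preprocessing and augmentation steps mentioned above, here is a minimal NumPy-only sketch (the actual project likely used OpenCV or Keras utilities; the function names, the 64x64 input size, and the nearest-neighbour resize are assumptions, not the project's real pipeline):

```python
import numpy as np

def preprocess(frame, size=64):
    # Convert an RGB frame (H, W, 3) to grayscale by averaging channels,
    # then normalize pixel values to [0, 1] for CNN input.
    gray = frame.mean(axis=2) / 255.0
    # Naive nearest-neighbour resize to size x size.
    h, w = gray.shape
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    return gray[np.ix_(rows, cols)]

def augment(img):
    # Simple augmentations: horizontal flip and 90-degree rotations,
    # which expand a small gesture dataset cheaply.
    return [img, np.fliplr(img), np.rot90(img), np.rot90(img, 3)]

# Example: a fake camera frame standing in for a captured gesture.
frame = np.random.randint(0, 256, (120, 160, 3), dtype=np.uint8)
x = preprocess(frame)        # shape (64, 64), values in [0, 1]
batch = augment(x)           # 4 augmented variants per frame
```

The normalized, fixed-size images produced this way would then be fed to the CNN, with the augmented variants helping the model cope with varied hand orientations and backgrounds.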
