Inspiration

The inspiration for our project, CLIF (Communicative Learning Intelligent Friend), stemmed from a deep, personal connection with a deaf friend who navigated daily communication challenges with resilience and determination. This individual, along with the roughly 20% of the global population who live with some degree of hearing loss, inspired our team to bring CLIF to life.

Project Story

CLIF originally started out as just a Python program and a webcam. We later expanded on the idea, building it into a self-contained module and adding cosmetic touches to give it a friendly appearance that can fit into a variety of households.

What it does

CLIF (Communicative Learning Intelligent Friend) is a device designed to facilitate communication for mute and deaf individuals. By leveraging computer vision and machine learning, CLIF interprets sign language gestures captured by a camera in real time. It then translates these gestures into spoken words, which are audibly relayed through a speaker and simultaneously displayed on an LCD screen. This device bridges the communication gap, empowering individuals with hearing and speech impairments to engage more effectively with others in their daily lives.
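The capture-classify-relay flow described above can be sketched as a simple loop. All function names below are illustrative stand-ins for CLIF's actual components, with stubs in place of the camera, classifier, speaker, and LCD:

```python
# Sketch of CLIF's main loop. Every name here is a hypothetical
# stand-in, not the project's actual API.

def classify_gesture(frame):
    """Stub classifier: in CLIF this would run MediaPipe plus the
    trained model; here it just maps a fake frame to a label."""
    return {"frame_hello": "HELLO", "frame_thanks": "THANK YOU"}.get(frame)

def speak(text):
    # On the real device this would go to the Bluetooth speaker via TTS.
    return f"speaking: {text}"

def show_on_lcd(text):
    # On the real device this would be sent over serial to the Arduino LCD.
    return f"lcd: {text}"

def run_clif(frames):
    outputs = []
    for frame in frames:
        word = classify_gesture(frame)
        if word:  # only relay recognized gestures
            outputs.append((speak(word), show_on_lcd(word)))
    return outputs

print(run_clif(["frame_hello", "noise", "frame_thanks"]))
```

In the real device the frame source is a live camera stream rather than a list, but the control flow is the same: unrecognized frames are skipped, and each recognized word is relayed to both outputs at once.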

How we built it

We began by crafting a sturdy physical prototype, prioritizing approachability and user-friendliness in its design. We then curated a diverse dataset of 30 distinct categories of images, covering the alphabet and common sign language phrases. With 200 images per class, all captured by our team, the dataset totaled 6,000 images.

Using the MediaPipe library, we processed each image and its corresponding label, landmarking the hands depicted in every image. MediaPipe Hands identifies 21 landmarks per hand, yielding 42 x-y coordinates per hand and 84 when both hands are in frame, which gave our program precise features for hand gesture recognition. We then used scikit-learn with a random forest classifier to train the model that lets CLIF recognize various sign language gestures swiftly and accurately in real time.

To enhance user interaction, we integrated a Bluetooth speaker into the design, enabling CLIF to translate American Sign Language gestures into spoken English for instant comprehension. Lastly, we employed PySerial to communicate between CLIF and an Arduino board, transmitting the recognized words to an LCD screen in real time so users receive immediate visual feedback on the interpreted gestures.
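As a rough illustration of the landmark-to-feature step: MediaPipe Hands returns 21 landmarks per hand, each with normalized x and y coordinates, so two hands flatten into an 84-value vector. The min-shift normalization shown here is a common approach for making features translation-invariant, not necessarily CLIF's exact code:

```python
def landmarks_to_features(hands):
    """Flatten up to two hands' landmarks into an 84-value feature vector.

    `hands` is a list of hands, each a list of 21 (x, y) tuples such as
    MediaPipe Hands produces. Each hand's coordinates are shifted
    relative to its own minimum x/y so the features do not depend on
    where the hand sits in the frame.
    """
    features = []
    for hand in hands[:2]:
        xs = [x for x, _ in hand]
        ys = [y for _, y in hand]
        for x, y in hand:
            features.append(x - min(xs))
            features.append(y - min(ys))
    # Pad with zeros when only one hand is visible, so the
    # classifier always sees a fixed-length vector.
    features += [0.0] * (84 - len(features))
    return features

one_hand = [[(i * 0.01, i * 0.02) for i in range(21)]]
print(len(landmarks_to_features(one_hand)))  # 84
```

A vector like this, paired with its class label, is exactly the kind of sample a scikit-learn random forest classifier can be fitted on.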
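The PySerial link to the Arduino can be sketched as follows. The message framing, port name, and baud rate here are assumptions for illustration, not CLIF's actual protocol; the framing helper is kept separate so it can be exercised without hardware attached:

```python
def frame_message(word, max_len=16):
    """Build the bytes sent to the Arduino for display.

    A common 16x2 character LCD fits 16 characters per row, so the
    word is truncated and terminated with a newline that the Arduino
    sketch can split on. (This framing is an assumed convention, not
    necessarily CLIF's.)
    """
    return (word[:max_len] + "\n").encode("ascii", errors="replace")

print(frame_message("HELLO"))  # b'HELLO\n'

# With hardware attached (port name and baud rate are illustrative):
# import serial
# with serial.Serial("/dev/ttyACM0", 9600, timeout=1) as port:
#     port.write(frame_message("HELLO"))
```

Keeping the framing in a plain function also makes it easy to unit-test the serial protocol independently of the board.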

Accomplishments that we’re proud of

  • Development of a real-time interpreter and display system.
  • Implementation of two-hand detection, enabling the device to track both hands using 84 landmark coordinates rather than the standard 42 for a single hand.
  • Achieving 100% accuracy on test cases, demonstrating the robustness and reliability of the system.

Challenges we ran into

Throughout this project, our team faced challenges including ideating a prototype that could realistically be used in daily life, developing a working solution within the constraints of available electronics and materials, and connecting and configuring a variety of new integrated electronics. Despite the hurdles we faced, we embraced these challenges as opportunities for growth and innovation, ultimately creating a communication device that we believe has the potential to improve the lives of mute and deaf individuals significantly.

What we learned

Throughout this project, we learned a variety of invaluable lessons. In the initial phase, our team learned how to develop an idea into a fully functional prototype, considering aspects such as problem definition, usage goals, and ergonomics. As participants in the MakeUofT Hackathon, we gained expertise in formulating a plan that works within material constraints. In addition, our team explored several different approaches, integrating various hardware components such as cameras, Bluetooth modules, and Arduino Uno boards into our design. This gave us hands-on knowledge of a range of technology applications and communication methods.

What's next for CLIF

The future of CLIF is bright. We are committed to refining its functionality and accessibility, ensuring it becomes increasingly useful for empowering individuals with hearing and speech impairments in various settings and communities. More specifically, incorporating Bluetooth components and improving cable management can further simplify CLIF's usage and portability. Furthermore, supporting a wider variety of languages and recognizable ASL phrases can introduce CLIF internationally. Lastly, improving the UI and optimizing the accuracy and speed of processing can bring CLIF to new heights of effectiveness and usability.
