Inspiration

Our inspiration is to harness the power of technology to make a positive impact on the lives of visually impaired individuals. By applying AI, machine learning, and computer vision, we aim to build a user-friendly solution that promotes inclusivity, equality, and independence, empowering blind users to navigate their environment, access information, and perform daily tasks with confidence.

What it does

Ozmo is an assistant for visually impaired users that combines object detection, text messaging, phone call capabilities, facial recognition for identifying familiar faces, and label reading into a single, comprehensive toolset. By bringing together AI, machine learning, and computer vision, it helps blind individuals navigate their environment, access information, and communicate with ease, ultimately promoting independence and inclusivity.

How we built it

Object Detection: We utilized computer vision techniques and pre-trained machine learning models to detect objects in the environment. We used popular libraries such as OpenCV and TensorFlow to implement object detection capabilities.
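The detection model itself needs pre-trained weights we can't reproduce here, but the post-processing step that turns raw detections into spoken directions can be sketched as below. The `describe_detections` helper and the `(label, confidence, box)` tuple format are illustrative assumptions, roughly the shape of output a MobileNet-SSD style detector produces:

```python
# Hypothetical sketch: converting raw detector output into spoken-style
# announcements. Detections are (label, confidence, (x1, y1, x2, y2))
# tuples such as an SSD head might produce; model loading is omitted.

CONF_THRESHOLD = 0.5  # discard low-confidence detections

def describe_detections(detections, frame_width):
    """Turn detection tuples into phrases like 'chair on your left',
    using the horizontal centre of the box to pick a direction."""
    phrases = []
    for label, conf, (x1, y1, x2, y2) in detections:
        if conf < CONF_THRESHOLD:
            continue
        centre = (x1 + x2) / 2
        if centre < frame_width / 3:
            side = "on your left"
        elif centre > 2 * frame_width / 3:
            side = "on your right"
        else:
            side = "ahead"
        phrases.append(f"{label} {side}")
    return phrases

print(describe_detections(
    [("chair", 0.91, (10, 40, 120, 200)),   # left third of a 640px frame
     ("person", 0.35, (300, 0, 400, 200)),  # filtered: below threshold
     ("door", 0.80, (500, 0, 630, 400))],   # right third
    frame_width=640))
# → ['chair on your left', 'door on your right']
```

In the full pipeline these phrases would be passed to a text-to-speech engine rather than printed.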
Text Messaging and Phone Call: We integrated the Twilio API to send text messages and place phone calls programmatically, enabling our solution to reach a user's contacts for communication purposes.
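The Twilio wiring can be sketched as below. The credentials, phone numbers, and the `build_alert` helper are placeholders, and the `twilio` package is imported inside the functions so the sketch reads standalone; `messages.create` and `calls.create` are the documented Twilio REST client calls:

```python
# Hypothetical sketch of the Twilio integration. Account SID, auth token
# and phone numbers are placeholders supplied by the caller.

def build_alert(contact_name, message):
    """Compose the SMS body sent to a trusted contact (illustrative)."""
    return f"Ozmo alert for {contact_name}: {message}"

def send_sms(account_sid, auth_token, from_number, to_number, body):
    from twilio.rest import Client            # Twilio SDK entry point
    client = Client(account_sid, auth_token)
    # messages.create is Twilio's documented call for sending an SMS
    return client.messages.create(from_=from_number, to=to_number, body=body)

def place_call(account_sid, auth_token, from_number, to_number, twiml_url):
    from twilio.rest import Client
    client = Client(account_sid, auth_token)
    # calls.create dials the number and plays TwiML fetched from twiml_url
    return client.calls.create(from_=from_number, to=to_number, url=twiml_url)

print(build_alert("Alex", "I need assistance"))
# → Ozmo alert for Alex: I need assistance
```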
Facial Recognition: We employed facial recognition algorithms to detect and recognize familiar faces. We used libraries such as OpenCV and dlib for facial recognition tasks, and trained our own facial recognition model using labeled data.
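The matching step can be sketched as a nearest-neighbour search over face embeddings. dlib's face encoder produces 128-dimensional embeddings; the toy 3-dimensional vectors, the names, and the `identify` helper below are illustrative, and the 0.6 threshold is the value dlib's documentation suggests for its real encodings:

```python
import math

# Hypothetical sketch of embedding-based face matching. A face counts as
# "familiar" when its nearest known embedding lies within the threshold.

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def identify(embedding, known_faces, threshold=0.6):
    """Return the name of the closest known face, or None if no known
    embedding is within the distance threshold."""
    best_name, best_dist = None, float("inf")
    for name, known in known_faces.items():
        d = euclidean(embedding, known)
        if d < best_dist:
            best_name, best_dist = name, d
    return best_name if best_dist <= threshold else None

known = {"mum": [0.1, 0.2, 0.3], "brother": [0.9, 0.8, 0.7]}
print(identify([0.12, 0.18, 0.31], known))  # close to "mum"
print(identify([5.0, 5.0, 5.0], known))     # a stranger → None
```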
Label Reading: We utilized optical character recognition (OCR) to read and interpret labels on objects. We used the Tesseract OCR engine, accessed from Python through the pytesseract wrapper, to extract text from images and recognize labels.
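The label-reading step can be sketched as below. `pytesseract.image_to_string` is the wrapper's real entry point; it is imported inside the function so the cleanup helper, which is our own illustrative addition, reads standalone:

```python
# Hypothetical sketch of the label-reading step: OCR the image, then
# collapse Tesseract's multi-line output into one speakable phrase.

def read_label(image):
    import pytesseract                      # Python wrapper for Tesseract
    raw = pytesseract.image_to_string(image)
    return clean_ocr_text(raw)

def clean_ocr_text(raw):
    """Drop blank lines and stray whitespace that OCR output tends to
    contain, and join the rest into a single line for text-to-speech."""
    lines = [line.strip() for line in raw.splitlines()]
    return " ".join(line for line in lines if line)

print(clean_ocr_text("ASPIRIN\n\n  500 mg \n\ntake with food\n"))
# → ASPIRIN 500 mg take with food
```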
Backend and Frontend Development: We built the backend of our solution using Python and Flask, a web development framework, to handle server-side logic, API integrations, and data processing. For the frontend, we used HTML, CSS, and JavaScript to create a user-friendly interface for interacting with our solution.
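A minimal sketch of how a Flask route ties the pieces together is below; the `/identify` endpoint, its JSON payload, and the placeholder response are hypothetical, standing in for the real handlers that call the recognition, detection, and OCR pipelines:

```python
# Hypothetical sketch of the Flask backend wiring; the real handlers
# would invoke the detection / recognition / OCR pipelines.

from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/identify", methods=["POST"])
def identify():
    # In the real service the uploaded frame would run through the
    # facial-recognition pipeline; here we echo a placeholder result.
    data = request.get_json(silent=True) or {}
    return jsonify({"face": data.get("hint", "unknown")})

if __name__ == "__main__":
    app.run(debug=True)
```

The frontend then calls routes like this one via JavaScript and reads the JSON responses aloud to the user.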
Testing and Iteration: We conducted thorough testing and iterative development to ensure the reliability, accuracy, and usability of our solution. We gathered feedback from visually impaired individuals to make improvements and refine our solution based on their needs and preferences.

Challenges we ran into

One of the main challenges we encountered was linking all the components together: connecting object detection, facial recognition, OCR, text messaging, and phone calls into a cohesive, user-friendly whole. Ensuring these pieces worked together seamlessly and presented a unified experience for visually impaired users required careful coordination of the various technologies and APIs involved.

Accomplishments that we're proud of

We are proud of creating a straightforward, user-friendly system. This meant designing an intuitive interface and integrating all the functionalities seamlessly, so that visually impaired users can interact with the system and use its features effectively. Our focus on accessibility and inclusivity resulted in a solution that genuinely accommodates individuals with visual impairments, which we consider a significant accomplishment.

What we learned

Throughout development we gained valuable insight into incorporating different APIs into a cohesive system, and into optimizing the system for fast runtimes to ensure smooth, efficient performance. The project gave us hands-on experience with application development, machine learning, computer vision, and human-computer interaction, as well as a deeper understanding of the challenges faced by individuals with visual impairments and the importance of accessible design. Overall, it allowed our team to expand our technical knowledge and develop practical skills in building inclusive, empowering solutions.

What's next for Ozmo

The next steps for Ozmo involve refining the familiar-face recognition feature. We aim to improve the accuracy and reliability of the facial recognition system so it can identify multiple family members with greater precision, through additional model training, algorithm fine-tuning, and continuous incorporation of user feedback. We also plan to explore integration with other functionalities, such as voice recognition for more seamless interaction and customization options for users. Our goal is to make Ozmo a comprehensive, reliable assistant that provides valuable day-to-day support to visually impaired individuals.
