Inspiration

Millions of displaced persons and refugees are unable to get proper access to healthcare because of language barriers and communication gaps. Deaf refugees in particular cannot describe their symptoms to healthcare providers, which often leads to ineffective treatment. Moreover, a sizable share of refugees are illiterate or do not speak English as their first language, which makes communicating with clinicians even harder. LYRA AI was built to address these disparities and give these populations a better chance at effective healthcare. By putting AI and sign language recognition front and center, we aim to empower refugees and migrants with a tool to articulate their symptoms clearly, improving care for these vulnerable populations.

What it does

LYRA AI is a new app that lets refugees express their symptoms in the following ways:

Sign Language Recognition: Recognizes the gestures used to express symptoms and converts them into written text, enabling deaf refugees to describe what they feel.

Real-Time Translation Output: Renders signed symptoms as text that health professionals can read, regardless of how they are signed, supporting better diagnosis and treatment.

Language and disability barriers normally make this exchange difficult; LYRA AI makes it possible for healthcare providers to make better-informed decisions.

How we built it

We used the following technologies in the development of LYRA AI:

Frontend Development: React Native, for a cross-platform mobile application that runs on both Android and iOS.

Backend Development: Flask handles API requests and processes symptom data; Firebase or MongoDB stores user data, session histories, and translations.

AI & Machine Learning: TensorFlow and OpenCV power the sign language recognition model that detects gestures and converts them into text. PyTorch and scikit-learn were used to train models that analyze signs and produce text descriptions (e.g., 'pain in chest').

Cloud & Deployment: Google Cloud Platform hosts the AI models and ensures the app can easily scale up. Heroku was used for quick backend deployment and real-time testing.
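As one illustration of the recognition-to-text step, here is a minimal sketch assuming a classifier that outputs one confidence score per known symptom sign. The label list and threshold below are placeholders for the sketch, not the actual LYRA AI vocabulary or model:

```python
# Illustrative decoding of classifier scores into a symptom phrase.
# SYMPTOM_LABELS and CONFIDENCE_THRESHOLD are assumptions for this sketch.

SYMPTOM_LABELS = ["pain in chest", "headache", "fever", "nausea"]
CONFIDENCE_THRESHOLD = 0.6  # below this, ask the user to repeat the gesture

def decode_prediction(scores):
    """Map a list of per-class scores to a symptom phrase, or None if unsure."""
    best_idx = max(range(len(scores)), key=lambda i: scores[i])
    if scores[best_idx] < CONFIDENCE_THRESHOLD:
        return None  # low confidence: prompt the user to sign again
    return SYMPTOM_LABELS[best_idx]
```

In a full pipeline, the scores would come from the TensorFlow model running on OpenCV-captured frames, and the returned phrase would be shown to the healthcare provider.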

Challenges we ran into

Sign Language Detection Accuracy: Training the sign language recognition system to detect gestures accurately and translate them into meaningful medical terminology required more attention than anticipated. We had to fine-tune the model while testing the application in rapid iterations across multiple languages and gestures.

Real-Time Processing: Keeping latency low enough for real-time gesture translation meant optimizing our backend and AI models for speed, especially in low-resource environments such as mobile devices.

Multilingual Support: Given the diversity of the users we support, we had to account for multiple languages. This required research into how a sign language gesture could be mapped with medical precision into many different target languages.
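To give a flavor of the real-time latency work, one generic way to bound inference load on a low-end device is to cap how many frames reach the model per second. This is a sketch under that assumption, not our actual pipeline code:

```python
import time

class FrameThrottler:
    """Let at most `max_fps` frames per second through to the recognition model."""

    def __init__(self, max_fps=10):
        self.min_interval = 1.0 / max_fps
        self._last = float("-inf")  # time of the last processed frame

    def should_process(self, now=None):
        """Return True if enough time has passed to process another frame."""
        if now is None:
            now = time.monotonic()
        if now - self._last >= self.min_interval:
            self._last = now
            return True
        return False
```

Frames rejected here are simply dropped, so the model only ever sees a bounded stream, which keeps per-frame latency predictable on weaker phones.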

Accomplishments that we're proud of

Sign Language Recognition: Successfully integrated sign language recognition into the app with a high degree of accuracy on commonly used signs (e.g., "pain," "headache").

Real-Time Translation: The app translates signs into text in real time, giving the healthcare provider an immediate view of the patient's symptoms.

Mobile App Deployment: Successfully launched the app on both Android and iOS, making it accessible to refugees through widely available mobile devices.

What we learned

User-Centered Design: Understanding users' needs proved critical. Our research showed that the application had to be simple and user-friendly, since refugees may have limited experience with high-tech devices or apps, so we put strong emphasis on an easy-to-navigate UI.

AI Optimization Under Resource Constraints: Optimizing the AI models to run well on mobile devices was essential for real-time processing. We learned to reduce latency so the experience stays smooth even on less capable smartphones.

Multilingual Support in Healthcare: The project revealed the challenges of linking medical vocabularies across languages. Careful data preprocessing was needed to map symptom expressions onto a shared representation.

Cross-Disciplinary Collaboration: Working with AI researchers, mobile developers, and healthcare professionals taught us how to combine distinct skill sets into a cohesive solution. Collaboration is essential when tackling a challenge like healthcare access for refugees.
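The multilingual mapping problem described above can be sketched as a simple lookup from a recognized sign to localized symptom terms, with English as a fallback. The language codes and translations here are illustrative assumptions for the sketch, not vetted clinical translations:

```python
# Illustrative mapping from a recognized symptom sign to localized terms.
# The entries below are placeholder translations, not clinical vocabulary.
SYMPTOM_TRANSLATIONS = {
    "headache": {"en": "headache", "ar": "صداع", "fr": "mal de tête"},
    "fever": {"en": "fever", "ar": "حمى", "fr": "fièvre"},
}

def localize(symptom, lang="en"):
    """Return the symptom term in `lang`, falling back to English, then the key."""
    entry = SYMPTOM_TRANSLATIONS.get(symptom, {})
    return entry.get(lang, entry.get("en", symptom))
```

In practice such a table would be curated with medical translators, since a naive word-for-word mapping can lose the clinical precision the prose above describes.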

What's next for Lyra.ai

LYRA AI is an innovative project intended to provide medical access to refugees who face language barriers. Despite the challenges with sign language recognition accuracy and real-time translation, we have produced a working prototype that could help close the communication gap, so that refugees can describe their symptoms accurately and receive optimal treatment.

Updates

Why LYRA AI? It's a real-time, AI-powered sign language translator that bridges the gap between deaf patients and doctors, ensuring accurate diagnosis, effective treatment, and equal access to healthcare.

Healthcare is a right. LYRA AI makes it a reality.