Project Inspiration:
The inspiration for this project stems from recognizing the often-overlooked challenges faced by caregivers and families of Alzheimer's patients. While much focus is placed on the patients, the immense responsibility, emotional toll, and hard work of caregivers are frequently neglected. Our full-stack application, powered by AI/ML and video recognition, aims to ease this burden by providing fall detection and other supportive features, ensuring not only the safety of patients but also the well-being of their caregivers. We were motivated by the belief that caregivers, too, deserve care and support in their duties. NeuroGuard was created to keep Alzheimer's patients safe and to support their caregivers.
What it does:
NeuroGuard is a full-stack web application with multiple features, including video recognition for fall detection, a personalized medical agent, fall history and medical records, an alert message system, and much more, to aid caregivers in caring for their patients. Since Alzheimer's patients are at a much higher risk of falling and serious injury, video recognition technology can quickly identify when a patient falls and alert the caregiver, and emergency services if needed. In addition, the personalized medical agent, built on an AI API and tailored to each individual user, gives the caregiver quick access to the application, to information, and to support.
How we built it:
Back-end: Implemented computer vision techniques coupled with image analysis and processing to detect sudden changes in human motion. Utilized OpenCV for real-time image processing and motion analysis. Integrated the pose detection solution from Google's MediaPipe machine learning pipeline to track and interpret human movement with high accuracy. Researched incorporating habit pattern recognition using the Roboflow library and the Pixela public API, a promising addition that would have significantly enriched the system's functionality, though it was not fully realized within the project timeline. We also aimed to apply object-oriented programming (OOP) principles to design a modular timer class, which would further optimize system performance and flexibility.
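The pose-based detection described above can be sketched roughly as follows: track the normalized y coordinate of a hip landmark across frames and flag a sudden downward jump. The 0.25 threshold, the choice of LEFT_HIP, and the print-based alert are illustrative assumptions, not the project's actual values.

```python
FALL_DROP_RATIO = 0.25  # assumed fraction of frame height; tune per camera setup

def sudden_drop(prev_hip_y, curr_hip_y, threshold=FALL_DROP_RATIO):
    # MediaPipe landmark y is normalized (0 = top of frame, 1 = bottom),
    # so a fall shows up as a large positive delta between frames.
    return (curr_hip_y - prev_hip_y) > threshold

def monitor(camera_index=0):
    # Imported lazily so the heuristic above stays dependency-free.
    import cv2
    import mediapipe as mp

    mp_pose = mp.solutions.pose
    cap = cv2.VideoCapture(camera_index)
    prev_y = None
    with mp_pose.Pose() as pose:
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if results.pose_landmarks:
                hip = results.pose_landmarks.landmark[mp_pose.PoseLandmark.LEFT_HIP]
                if prev_y is not None and sudden_drop(prev_y, hip.y):
                    print("Possible fall detected")  # hook the alert system in here
                prev_y = hip.y
    cap.release()
```

A single-landmark delta is deliberately simple; a fuller version would smooth over several frames and use the relationships between multiple landmarks, as the team did.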
Personalized Medical Agent: We developed an AI chatbot using Voiceflow to support caregivers in Alzheimer’s care, integrating external patient data through Voiceflow APIs, including Zendesk Ticket APIs, to deliver personalized interactions. The system combines workflow automation, knowledge base features, and LLM integration for specialized training, ensuring accuracy in healthcare scenarios. It dynamically processes real-time data from patients, streamlining communication between caregivers and healthcare providers. The chatbot operates in two modes: an AI assistant powered by a custom GPT-4 model that provides context-specific responses to healthcare-related questions, and an incident reporting feature that guides users through submitting detailed reports. The system pre-fills the user's email, requests a minimum input for incidents, and sends reports via email to healthcare providers while offering the option to continue the session or conclude it. This solution enhances caregiver support by automating information flow and improving communication efficiency.
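The two-mode flow above can be sketched as a small routing function: assistant questions go to the LLM, while incident reports are checked for a minimum amount of detail before being emailed with the user's pre-filled address. The mode names, the 20-character minimum, and the provider address are hypothetical; the real project implements this flow inside Voiceflow.

```python
MIN_INCIDENT_CHARS = 20  # assumed "minimum input" rule for incident reports

def route_message(mode, text, user_email):
    """Return the next action for a caregiver message, mirroring the two modes."""
    if mode == "assistant":
        # In the real system this step calls the custom GPT-4 model.
        return {"action": "llm_reply", "prompt": text}
    if mode == "incident":
        if len(text.strip()) < MIN_INCIDENT_CHARS:
            return {"action": "ask_more_detail"}
        # Email is pre-filled from the session, as in the flow described above.
        return {"action": "email_report",
                "to": "provider@example.org",  # hypothetical recipient
                "from": user_email,
                "body": text}
    return {"action": "clarify_mode"}
```

Keeping the routing as a pure function like this makes each branch easy to test independently of the chat platform.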
Front-end: Developed dynamic and visually appealing web interfaces using React.js to showcase project functionality. Utilized the Axios library to seamlessly integrate the Flask backend with the React frontend through API communication, enabling efficient data flow. Overcame challenges such as implementing robust user authentication logic and establishing a live video stream from the backend to the frontend for real-time display. The result is an interactive and responsive frontend that enhances user experience while ensuring smooth backend integration.
Challenges we ran into:
The challenges we faced were numerous, but each obstacle helped us grow and refine our approach. Initially, we struggled with setting up the video recognition system, which required extensive troubleshooting. Without access to an external camera, we improvised by using an iPhone-to-PC connection to capture video. Training the model to accurately identify falls also proved difficult, so we incorporated MediaPipe to create nodes and leveraged the relationships between them to improve recognition accuracy. Additionally, integrating the Flask backend with the React frontend posed some challenges, but through persistence and problem-solving, we were able to overcome these hurdles and push the project forward.
Accomplishments that we're proud of:
Creating a fully functional video recognition application without prior experience, and persevering through several challenges along the way. We're also proud of pushing ourselves to take on new technologies and to combine multiple techniques and tools.
What we learned:
Life doesn't always go as expected, especially during a time-crunched hackathon. Having a solid plan, taking reasonable risks, and challenging ourselves with new tech tools and techniques can be very rewarding!
What's next for NeuroGuard:
Extracting, visualizing, and predicting behaviour changes from a trained ML model could give healthcare professionals valuable insight into the effects of medication and environmental changes. We therefore hope to expand our technology into the healthcare sector, which is currently in need of more support.