Inspiration

The inspiration for this project came from a deeply personal experience. One of our team members lost their grandmother in a tragic home fire while her children were away. This made us wonder: how can we remotely ensure the safety of our loved ones, especially those who may not be able to seek help themselves? The problem isn't limited to the elderly. Children, patients recovering at home, and people with limited mobility or cognitive impairments face similar risks. Our app is built for all of them, ensuring that no one is left unsafe or unsupported, even when alone at home. In the US alone, 15 million at-home care patients need constant attention and personalized spaces that prevent hazards.

What it does

Our AI-powered healthcare safety app, RiskItFree, revolutionizes patient care by creating a dynamic bridge between healthcare spaces and patient needs. At its core, the system maintains a semantic knowledge base that captures the complex relationships between spaces and patients. Through remote monitoring, it detects potential safety hazards, both critical hazards and microhazards, and assigns each space a safety score so caregivers can better understand and analyze it. It also provides actionable safety recommendations and enables meaningful user engagement through our chatbot.
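To give a concrete feel for the safety score, here is a minimal sketch of one way such a score could be computed. The severity levels and penalty weights are our own illustrative assumptions, not the app's actual algorithm:

```python
# Hypothetical per-space safety score: start at 100 and subtract a
# weighted penalty for each detected hazard. Weights are illustrative.
SEVERITY_PENALTY = {"high": 25, "medium": 10, "low": 3}

def safety_score(hazards: list[dict]) -> int:
    """hazards: e.g. [{"name": "loose rug", "severity": "medium"}, ...]"""
    penalty = sum(SEVERITY_PENALTY[h["severity"]] for h in hazards)
    return max(0, 100 - penalty)  # clamp so the score never goes negative

print(safety_score([{"name": "exposed wiring", "severity": "high"},
                    {"name": "loose rug", "severity": "medium"}]))  # 65
```

A real scoring function would also weigh the patient's specific vulnerabilities (mobility, cognition) against each hazard, which is what the semantic knowledge base enables.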

How we built it

Our development journey began with extensive research into existing healthcare safety solutions. We recognized that caregivers needed a way to assess environmental risks for elderly or vulnerable patients across multiple spaces. Our team was then divided into specialized units that focused on different aspects of the solution. The backend team established the foundation with robust database architecture and API development, while the frontend team crafted an intuitive user interface with strong accessibility features. As the whole team was well-versed in AI, we split up the tasks and worked on different features using VLMs, LLMs, and LangGraph for the chatbot.

Development order:

  1. Core data structures for patients, spaces, and safety assessments
  2. MongoDB integration for persistent storage
  3. React Native mobile UI for capturing and displaying information
  4. AI research into the best models for our use case, and AI integration
  5. Backend development with database and AI inference
  6. Experiments with different hyperparameters, optimizations, and data privacy
  7. Chatbot development using agentic AI for intelligent retrieval and response generation
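The core data structures for patients, spaces, and safety assessments could look something like the following Pydantic models. The field names and types here are illustrative guesses, not the project's actual schemas:

```python
from pydantic import BaseModel

class Hazard(BaseModel):
    description: str
    priority: str          # "high" | "medium" | "low" (assumed levels)
    recommendation: str

class Space(BaseModel):
    name: str              # e.g. "kitchen"
    safety_score: int = 100  # 0-100, updated after each assessment
    hazards: list[Hazard] = []

class Patient(BaseModel):
    name: str
    relationship: str      # e.g. "father"; used by the chatbot for queries
    spaces: list[Space] = []

patient = Patient(name="Maria", relationship="grandmother",
                  spaces=[Space(name="kitchen")])
```

One nice property of this shape: in Pydantic v2, `model_dump()` turns each model into a plain dict, which stores directly as a MongoDB document.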

Challenges we ran into

  • Context-Aware AI Responses: Ensuring that the Gemini AI model provided relevant, factual responses based only on available data. We solved this by structuring detailed prompts with explicit instructions to prevent hallucination.
  • Prioritizing Safety Information: The system needed to intelligently determine which information was most relevant to a user's query (e.g., when they ask about a specific relationship like "father" or a specific space).
  • Efficient Multimodal Inference: Processing images efficiently for inference with multimodal models, and iteratively redesigning the AI systems for better performance.
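The anti-hallucination approach from the first challenge, structuring the prompt so the model answers only from retrieved data, can be sketched as follows. The function name and instruction wording are hypothetical, not our exact production prompt:

```python
def build_grounded_prompt(question: str, context_records: list[str]) -> str:
    # Explicit instructions like these reduce hallucination by restricting
    # the model to the supplied records (illustrative wording).
    context = "\n".join(f"- {r}" for r in context_records)
    return (
        "You are a home-safety assistant. Answer ONLY from the data below.\n"
        "If the data does not contain the answer, say you don't know.\n\n"
        f"Data:\n{context}\n\nQuestion: {question}"
    )

prompt = build_grounded_prompt(
    "Is the kitchen safe for my father?",
    ["kitchen: safety score 65", "hazard: loose rug near stove (medium)"],
)
```

The retrieval step (choosing which records to include, e.g. only the spaces linked to "father") is where the relevance-prioritization challenge comes in.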

Accomplishments that we're proud of

  1. Our sophisticated context-aware AI agent that can understand natural language queries about specific relationships or spaces.
  2. The hierarchical hazard classification system (high/medium/low priority) with targeted recommendations.
  3. Understanding our user base and bridging the care gap in a novel way.
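The hierarchical hazard classification could be used roughly as follows to surface the most urgent recommendations first. This is a minimal sketch of the idea, with assumed priority labels, not our production ranking logic:

```python
PRIORITY_ORDER = {"high": 0, "medium": 1, "low": 2}

def top_recommendations(hazards: list[dict], limit: int = 3) -> list[str]:
    # Sort hazards so high-priority recommendations come first.
    ranked = sorted(hazards, key=lambda h: PRIORITY_ORDER[h["priority"]])
    return [h["recommendation"] for h in ranked[:limit]]

hazards = [
    {"priority": "low", "recommendation": "Tidy loose cables"},
    {"priority": "high", "recommendation": "Secure exposed wiring"},
    {"priority": "medium", "recommendation": "Add a non-slip mat"},
]
print(top_recommendations(hazards))
# ['Secure exposed wiring', 'Add a non-slip mat', 'Tidy loose cables']
```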

What we learned

We learned a lot about end-to-end development of an AI app that uses state-of-the-art models, multimodal input processing, and agentic AI with LangGraph. Along the way, we also learned to manage our time to deliver the best results in a short window while holding full-time jobs. We attended all the workshops and loved the interactions with the speakers and fellow participants, and we got the opportunity to learn more about Google ADK, LlamaGuard and PurpleLlama, responsible use of infrastructure, and more.

What's next for Risk It Free

Our future plan is to turn this into real-time monitoring of patients for hazards: detecting whether the person being monitored has had a fall, whether an obstacle has appeared in their walking path, whether a spill has gone unnoticed, whether something sharp has been left out, and so on. We aim to detect these hazards in real time and notify both the patient and the remote caretaker. We also plan to make our features more efficient, improving latency and performance through fine-tuning and RLHF, and scaling with distributed processing. We aim to improve the algorithm behind the safety score so that it is more realistic and aligns well with the performance of the AI models. We see potential in integrating cameras and motion sensors to go beyond safety monitoring and track metrics like movement (steps), sleep schedule, and eating habits. We can also collaborate with at-home care clinics to help them monitor their patients.

Built With

  • react-native (mobile frontend)
  • python (backend APIs and PyTorch)
  • mongodb (database and retrieval)
  • google-gemini-ai
  • amazon-web-services
  • s3 (secure storage) with presigned urls
  • langgraph (agent workflow)
  • owlv2 (image ml models)
  • pydantic (data validation)
  • instructor (response architecture)