Inspiration

Late one evening, one of our team members was walking home through an unfamiliar neighborhood. He pulled up a crime map app—it showed dozens of incidents nearby, but offered no guidance. Should he take the well-lit main road or the shorter path through the park? Did it matter that it was dark and foggy? The app couldn't answer these questions because existing safety tools are blind to context.

What it does

GuardAIn is a context-aware AI safety companion that combines real-time data from multiple sources to provide intelligent, personalized route recommendations. At its core, GuardAIn performs multi-factor context analysis: it aggregates UK Police crime statistics for any location, checks current weather conditions including visibility and precipitation, and calculates day/night status from sunrise and sunset times.
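As a rough sketch of this aggregation step, the three signals could be merged into a single context object once the raw values have been fetched. All names here are illustrative, not GuardAIn's actual code:

```python
from dataclasses import dataclass
from datetime import time

@dataclass
class SafetyContext:
    crime_count: int      # e.g. incidents near the location from the UK Police API
    visibility_km: float  # e.g. from a weather feed
    is_night: bool        # derived from sunrise/sunset times

def build_context(crime_count: int, visibility_km: float,
                  now: time, sunrise: time, sunset: time) -> SafetyContext:
    """Combine the three raw signals into one context object."""
    is_night = not (sunrise <= now <= sunset)
    return SafetyContext(crime_count, visibility_km, is_night)
```

A context like this is what the downstream analysis reasons over, rather than three disconnected data feeds.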

What makes GuardAIn different is that it doesn't just display data—it thinks. Using Claude AI with our custom MCP server, it identifies dangerous combinations that matter for real-world safety. For example, it understands that dark plus foggy conditions in a high theft area creates elevated risk, while daytime with good visibility on a main road presents lower concern.
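The "dangerous combinations" idea can be illustrated with a toy scoring rule. The thresholds and weights below are invented for illustration, not GuardAIn's actual classification, which is performed by the LLM:

```python
def classify_risk(crime_count: int, is_night: bool, visibility_km: float) -> str:
    """Toy rule: individual factors add up, so dark + fog + high-crime compounds."""
    score = 0
    if crime_count > 10:      # high-theft area (illustrative threshold)
        score += 2
    if is_night:              # after sunset
        score += 1
    if visibility_km < 1.0:   # fog or heavy precipitation
        score += 1
    if score >= 3:
        return "high"
    if score >= 2:
        return "moderate"
    return "low"
```

For example, darkness and fog in a high-crime area score 4 (high), while the same area in clear daylight scores only 2 (moderate).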

How we built it

We employed a divide-and-conquer approach by splitting the work into four parallel streams:

  1. Developing the MCP client and implementing data post-processing.
  2. Creating the MCP server to deliver essential functions.
  3. Engineering prompts to execute specific tasks.
  4. Building the front-end for the user interface.

By developing all four components in parallel and integrating continuously, we completed the project both quickly and efficiently.

Challenges we ran into

Understanding the relationship between the MCP server, MCP client, and LLM (Claude) proved difficult. We needed both system prompts and user prompts to guide the LLM through the task step by step. The most demanding aspect was teaching the LLM to analyze and classify safety levels by incorporating references and examples while considering all the data retrieved from the tools.
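A simplified sketch of that system/user prompt split: the system prompt fixes the rubric and examples once, while the user prompt carries the per-request tool output. The wording and helper below are hypothetical, not our production prompts:

```python
# Hypothetical prompt split (illustrative wording, not GuardAIn's real prompts).
SYSTEM_PROMPT = """You are a pedestrian-safety analyst.
Given crime, weather, and daylight data, classify a route as
LOW, MODERATE, or HIGH risk. Combinations compound: darkness plus
fog in a high-theft area is HIGH even if each factor alone is not.
Always cite which data points drove your classification."""

def build_user_prompt(crime_count: int, conditions: str, is_night: bool) -> str:
    """Format the tool-retrieved data into the per-request user prompt."""
    return (
        f"Crime incidents nearby (last month): {crime_count}\n"
        f"Weather: {conditions}\n"
        f"Daylight: {'night' if is_night else 'day'}\n"
        "Classify the risk level and explain your reasoning."
    )
```

Keeping the rubric in the system prompt and only the fresh data in the user prompt made the step-by-step guidance more reliable across requests.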

Deploying the mixed Node.js and Python environment was also technically challenging.

Accomplishments that we're proud of

The LLM successfully uses tools to gather data from multiple sources, then analyzes and classifies safety levels accordingly.

What's next for GuardAIn

We plan to incorporate additional features that enable further user input to generate more contextually relevant and personalized results. Future enhancements may include routing functionality that calculates paths from the user's current location to a specified destination while intelligently avoiding hazardous areas.
