Inspiration

I've always believed that safety shouldn't be a luxury: it should be accessible, transparent, and local. But most civic safety tools are outdated, disconnected, or too complex for everyday people to use. I wanted to build something that felt more intelligent and human, powered by AI but grounded in reality: something that lets people report issues, see what's happening around them, and get real-time insights they can trust. That's how SafeNet.AI was born.

What it does

SafeNet.AI is a community-driven safety platform. Users can report incidents like crime, fires, or suspicious activity either through text or images. Once submitted, our system uses AI to:

1) Understand the type of incident (like theft, accident, etc.)
2) Check for dangerous or toxic language
3) Analyze any images that were uploaded
4) Automatically pin the incident on a live, interactive map
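The four steps above can be sketched as one pipeline. This is an illustrative outline, not SafeNet.AI's actual code: the analyzer functions are injected so each AI service (classification, toxicity, vision) can be swapped or mocked, and all names are assumptions.

```javascript
// Sketch of the report pipeline: classify -> toxicity check -> optional
// image analysis -> a pinnable incident with the reporter's coordinates.
async function processReport(report, { classify, checkToxicity, analyzeImage }) {
  const category = await classify(report.text);       // e.g. "theft", "fire"
  const toxicity = await checkToxicity(report.text);  // flag abusive reports
  const imageFindings = report.imageUrl
    ? await analyzeImage(report.imageUrl)             // only if a photo exists
    : null;
  return {
    category,
    flagged: toxicity.toxic,
    imageFindings,
    location: { lat: report.lat, lng: report.lng },   // for the map pin
  };
}
```

Injecting the analyzers also makes the pipeline testable without burning API quota.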

How we built it

This was built as a full-stack web app using:

1) React & Tailwind for the frontend
2) Node.js & Express for the backend
3) MongoDB Atlas to store geolocation-tagged reports
4) Mapbox to show crime heatmaps
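For the geolocation-tagged reports, MongoDB's 2dsphere index expects GeoJSON with longitude before latitude, which is what later lets nearby-incident queries feed the Mapbox heatmap. A minimal sketch of shaping a report document (field names are my illustrative assumptions, not the actual schema):

```javascript
// Shape a report for MongoDB Atlas. GeoJSON coordinates are [lng, lat],
// the opposite of the usual "lat, lng" convention; getting this backwards
// silently places pins in the wrong hemisphere.
function toReportDocument({ text, category, lat, lng }) {
  return {
    text,
    category,
    createdAt: new Date(),
    location: {
      type: "Point",
      coordinates: [lng, lat], // longitude first, then latitude
    },
  };
}
```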

For the AI part:

1) I used DeepSeek R1 to classify the incident and summarize it in plain language.
2) NLPCloud helped with detecting things like hate, stress, or sarcasm in the report.
3) For any photos submitted, I ran them through Gemini Vision to understand what's happening visually (e.g., fire, accident, damage).
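The classification step works best when the model is constrained to a fixed label set, which makes its free-text reply easy to parse downstream. An illustrative sketch of the kind of prompt sent to DeepSeek R1 (the labels and wording here are my assumptions, not the exact prompts used):

```javascript
// Fixed label set keeps the LLM's answer machine-parseable.
const CATEGORIES = ["theft", "fire", "accident", "suspicious-activity", "other"];

function buildClassificationPrompt(reportText) {
  return [
    "You are classifying a community safety report.",
    `Reply with exactly one label from: ${CATEGORIES.join(", ")}.`,
    "Then give a one-sentence plain-language summary.",
    "",
    `Report: """${reportText}"""`,
  ].join("\n");
}
```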

Challenges we ran into

Honestly, getting all the AI tools to talk to each other, especially across text and image, was tricky. I had to fine-tune prompts so that LLMs like DeepSeek could make sense of unstructured incident reports.
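One recurring chore when chaining tools like this: LLM replies are free-form text, so each step needs a tolerant parser before the next tool can consume its output. A sketch (the label list is illustrative) of normalizing a classifier's reply:

```javascript
// Map a free-form LLM reply onto a known label, tolerating extra prose.
// "Label: THEFT. Summary: ..." still resolves to "theft".
function parseClassifierReply(reply, categories) {
  const lower = reply.toLowerCase();
  return categories.find((c) => lower.includes(c)) ?? "other";
}
```

Falling back to `"other"` means an unparseable reply degrades gracefully instead of crashing the pipeline.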

Also, rate limits on free APIs were a constant battle, and designing for ethical issues like privacy, bias, and safety took a lot of thought. I didn’t want to just throw AI at the problem — I wanted it to be responsible and useful.
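Free-tier APIs typically signal rate limiting with HTTP 429, and a small retry-with-backoff wrapper is one way to ride it out. This is a hedged sketch, not the exact code used; `callApi` stands for any async call that throws an error carrying a `status` field:

```javascript
// Retry a rate-limited call with exponential backoff: 500ms, 1s, 2s, ...
// Non-429 errors and exhausted retries are re-thrown to the caller.
async function withBackoff(callApi, retries = 3, baseDelayMs = 500) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await callApi();
    } catch (err) {
      if (err.status !== 429 || attempt >= retries) throw err;
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** attempt));
    }
  }
}
```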

Accomplishments that we're proud of

I’m really proud of building something that works end-to-end: from someone writing a report, to AI understanding it, to that incident showing up on the map in real time.

Integrating multiple AI tools (text, vision, and classification) into one cohesive app was a big win. And I’m proud that it’s more than just a tech demo: it’s something that could actually help people in the real world.

What we learned

This project taught me that working with LLMs is not just about code — it’s about how you ask the AI the right questions. Prompt design is as important as model choice.

I also learned how hard but rewarding it is to merge AI, UX, and real-world needs into one clean system. It made me think deeply about how we present AI decisions in a way that people can understand and trust.

What's next for SafeNet.AI

There’s so much potential here. Some next steps I’m planning:

1) Letting users post and receive reports in multiple languages.
2) Launching a mobile app for faster incident sharing.
3) Creating push alerts when there's a spike in reports nearby.
4) Building a dashboard for local authorities or city councils to monitor trends.
5) Continuing to work on fairness, privacy, and responsible AI use.
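The "spike in reports nearby" alert could work roughly like this hypothetical sketch: count recent reports within a radius of a user and compare against a threshold. The haversine distance is standard; the radius and threshold values are made-up placeholders.

```javascript
// Great-circle distance in km between two {lat, lng} points (haversine).
function haversineKm(a, b) {
  const toRad = (d) => (d * Math.PI) / 180;
  const dLat = toRad(b.lat - a.lat);
  const dLng = toRad(b.lng - a.lng);
  const h =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(a.lat)) * Math.cos(toRad(b.lat)) * Math.sin(dLng / 2) ** 2;
  return 2 * 6371 * Math.asin(Math.sqrt(h)); // 6371 km = Earth's radius
}

// A spike = at least `threshold` recent reports within `radiusKm` of a point.
function isSpike(reports, center, { radiusKm = 2, threshold = 5 } = {}) {
  const nearby = reports.filter((r) => haversineKm(r, center) <= radiusKm);
  return nearby.length >= threshold;
}
```

In production this count would more likely be a MongoDB geospatial query than an in-memory filter, but the logic is the same.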

This project started as a hackathon idea, but I genuinely want to see how far it can go in the real world.
