Inspiration

Underserved communities face violence rates 3x higher than affluent areas but lack resources for dedicated security personnel. Schools, shelters, and community centers operate with minimal staff who can't monitor multiple camera feeds simultaneously. When incidents occur, existing systems flood staff with duplicate alerts instead of providing coordinated responses, delaying critical interventions when seconds matter most. This is what Lookout seeks to solve.

What it does

Lookout connects multiple security cameras so they work as a team instead of as separate systems. When unusual behavior happens - fighting, robbery, a visible weapon - the AI detects it across 13 different threat categories. But here's the key part: instead of bombarding security staff with duplicate alerts from every camera, the system contextualizes information between cameras to give you one clear picture of what's actually happening.

It automatically:

  • Tracks threats as they move between camera zones
  • Saves evidence clips with timestamps (kind of)
  • Generates incident reports
  • Prevents duplicate alerts

How I built it

I built this using four specialized AI agents that work together:

Camera Detection Agents - Process every camera feed frame by frame with OpenCV, running a PyTorch Convolutional Neural Network I trained on the DCSASS surveillance dataset (16,853 videos across 13 threat categories) to spot violence in real time

Coordination Hub - The brain of the system that uses Redis to share information between all cameras and prevent duplicate alerts

Risk Assessment Agent - Figures out how serious each threat is (Code Yellow for fights, Code Red for weapons) and attaches a confidence score
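The severity mapping could look something like this. Only the Code Yellow / Code Red split comes from the write-up; the specific category labels are hypothetical placeholders:

```python
# Hypothetical groupings of the 13 threat categories into severity codes.
CODE_RED = {"shooting", "robbery", "assault"}       # weapons / severe threats
CODE_YELLOW = {"fighting", "vandalism", "abuse"}    # serious but unarmed

def assess(threat_type, confidence):
    """Return (severity_code, confidence) for a single detection."""
    if threat_type in CODE_RED:
        return "CODE_RED", confidence
    if threat_type in CODE_YELLOW:
        return "CODE_YELLOW", confidence
    return "MONITOR", confidence  # log it, but don't page anyone
```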

Response Agent - Handles the actual alerts and evidence collection (sends a Discord message to security if the threat severity is HIGH and confidence is over 90%)
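The alert gate translates directly to code. A sketch posting to a Discord webhook (placeholder URL) with the thresholds described above; the injectable `send` parameter is there so the gate logic can be exercised without hitting Discord:

```python
import json
import urllib.request

WEBHOOK_URL = "https://discord.com/api/webhooks/<id>/<token>"  # placeholder

def maybe_alert(severity, confidence, summary, send=None):
    """Post to Discord only when severity is HIGH and confidence exceeds 0.90."""
    if severity != "HIGH" or confidence <= 0.90:
        return False
    payload = {"content": f"ALERT: {summary} (confidence {confidence:.0%})"}
    if send is None:
        req = urllib.request.Request(
            WEBHOOK_URL,
            data=json.dumps(payload).encode(),
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(req)
    else:
        send(payload)  # test hook: capture instead of POSTing
    return True
```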

The whole thing runs on Python with a React dashboard that shows live camera feeds. I used NVIDIA's AgentIQ toolkit to orchestrate all the agents.

Challenges I ran into

Getting all these different systems to work together was super hard at first. I had PyTorch models, Redis messaging, React frontend, and NVIDIA's toolkit all trying to coordinate without stepping on each other.

I'm still working on automatically generating building maps from camera angles using computer vision. Right now you have to manually set up the floor plan, which isn't ideal.

Accomplishments that I'm proud of

  • I let my CNN train overnight while I actually got sleep, woke up to ~70% accuracy
  • Getting cameras to share context without creating duplicate alerts is the unique part of this project, so I'm proud of that
  • This time I actually wrote out my technical plan before coding, which saved me hours of debugging
  • Making four AI agents work together with NVIDIA AgentIQ was cool when it finally clicked

What I learned

Planning makes everything easier. Usually I just start coding and figure it out as I go, but this time I documented exactly which tools I'd use and how they'd fit together. It cut my debugging time significantly.

What's next for Lookout

  • Auto-generated building maps: Use computer vision to create floor plans automatically from camera angles
  • Better accuracy: Keep training and tuning the CNN to improve detection, and explore model compression to keep inference fast
  • Prettier interface: In this hackathon, I mainly focused on the backend - the frontend needs help
  • Edge deployment: Run everything locally for places that need privacy or don't have reliable internet

P.S. Don't worry about the noise in the demo video, oops.
