Inspiration

An Army officer and four other personnel were killed near the LoC in a bid to stop infiltrators. This led us to wonder whether we could improve the existing AIOS (Anti-Infiltration Obstacle System) by making the fence smarter, so that defence personnel on the lookout for infiltrators crossing it would have an easier job; that is why we chose this very topic. As students of the Computer Science Engineering department, we are keen on solving problems that call for technical solutions. Our mentor Nagamalleshwari ma'am, our AI faculty, suggested we take up this challenge!

What it does

We have built an image-detection model using OpenCV, a computer vision library for Python. The model runs detection on surveillance footage and draws a coloured bounding box around any moving human, animal, or other suspicious movement near the fence. To improve on this, we also built a YOLO (You Only Look Once) model, an object-detection algorithm quite different from region-based algorithms such as R-CNN. In YOLO, a single convolutional network predicts both the bounding boxes and the class probabilities for those boxes. YOLO works by splitting an image into an SxS grid; within each grid cell it predicts m bounding boxes, and for each box the network outputs a class probability and offset values. The bounding boxes whose class probability is above a threshold value are selected and used to locate the objects within the image. This makes YOLO much faster (around 45 frames per second) than other object-detection algorithms.
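The thresholding step described above can be sketched in plain Python. The prediction values below are toy, hypothetical numbers for illustration, not the output of our trained network:

```python
# Minimal sketch of YOLO's box-selection step: every grid cell proposes
# bounding boxes with a class probability, and only boxes whose probability
# clears a threshold are kept. Values here are illustrative only.

THRESHOLD = 0.5

# Each prediction is (x, y, w, h, class_probability) — toy numbers.
predictions = [
    (10, 12, 40, 80, 0.92),    # confident detection (e.g. a person)
    (15, 14, 38, 75, 0.30),    # low-confidence duplicate of the same object
    (120, 60, 25, 25, 0.10),   # background noise
    (200, 40, 60, 120, 0.71),  # confident detection (e.g. an animal)
]

def select_boxes(boxes, threshold=THRESHOLD):
    """Keep only the boxes whose class probability exceeds the threshold."""
    return [b for b in boxes if b[4] >= threshold]

kept = select_boxes(predictions)
print(len(kept))  # 2 boxes survive the 0.5 cutoff
```

In the full algorithm, a non-maximum-suppression pass would then merge overlapping survivors into one box per object; the sketch stops at the thresholding step the paragraph describes.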

How we built it

We used OpenCV, an extremely useful computer vision library for Python. It is a cross-platform library with which we can develop real-time computer vision applications without noticeable delay. It focuses mainly on image processing and on video capture and analysis, including features like face detection and object detection. HOG, or Histogram of Oriented Gradients, is a feature descriptor that is often used to extract features from image data, and it is widely used in computer vision tasks for object detection.
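The core idea behind HOG can be shown in a few lines: for each small cell of the image, compute pixel gradients and bin their orientations into a histogram weighted by gradient magnitude. This is a simplified, pure-Python sketch of that one step (the real pipeline, e.g. OpenCV's `HOGDescriptor`, adds block normalisation and a trained SVM on top):

```python
import math

def hog_cell_histogram(cell, n_bins=9):
    """Orientation histogram for one cell — a simplified HOG building block.

    cell: 2-D list of grayscale values. Returns an n_bins histogram of
    unsigned gradient directions (0-180 degrees), weighted by magnitude.
    """
    h, w = len(cell), len(cell[0])
    hist = [0.0] * n_bins
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = cell[y][x + 1] - cell[y][x - 1]  # horizontal gradient
            gy = cell[y + 1][x] - cell[y - 1][x]  # vertical gradient
            mag = math.hypot(gx, gy)
            if mag == 0:
                continue  # flat region contributes nothing
            ang = math.degrees(math.atan2(gy, gx)) % 180.0  # unsigned angle
            hist[min(int(ang / (180.0 / n_bins)), n_bins - 1)] += mag
    return hist

# Toy 8x8 patch with a vertical edge: all gradients point horizontally,
# so the weight concentrates in the bin around 0 degrees.
patch = [[0.0] * 4 + [1.0] * 4 for _ in range(8)]
hist = hog_cell_histogram(patch)
print(hist.index(max(hist)))  # bin 0: near-horizontal gradients dominate
```

Concatenating these per-cell histograms over a detection window gives the feature vector that a classifier (typically a linear SVM) scores for "person" vs. "not person".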

Challenges we ran into

Some challenges we ran into included:

  1. Choosing the right model
  2. Preparing custom datasets
  3. Labelling and annotation
  4. Live detection
  5. Errors in the code
  6. The limitation of the YOLO algorithm: it struggles with small objects within the image.

Accomplishments that we're proud of

We have successfully made a user-friendly, working model that, for any given input, detects unwanted movement and draws a box around it. This shows that our model is able to detect infiltrators, which was our aim. We have a complete working prototype of the demo, which can be executed repeatedly with any input. We went out of our way to think about both hardware and software technologies and how our model could actually be implemented. We had other ideas in mind, but we wanted to try only the solutions that were practically possible in such an area, and so we did. However, this was not it for us: we started working on another model, YOLO, for better precision.

What we learned

Honestly, we learned a lot. To begin with: how to come up with solutions for a problem statement, how to think out of the box (since we took up a problem that was only partially solved), and presentation and time management. On the technical side, we learnt about ML models using computer vision, how to build a custom dataset and annotate images, how to set weights and biases, and how to train and test the model on the dataset. We went through a large number of resources, including websites and online videos. Lastly, we learnt the importance of working in teams.

What's next for HE-4

We are working on a more accurate ML model, YOLO, and have completed the coding for it. Testing the model is next!

Output Videos and Screenshots

Click on the link below: https://drive.google.com/drive/folders/1pMdBi9mFpqSJRyjXaWJJHgU3Mt2tMzp0?usp=sharing
