Inspiration

With the rise of IoT devices and the backbone support of emerging 5G technology, BVLOS drone flights are becoming more readily available. According to CB Insights, Gartner, and IBISWorld, this US$3.34B market has strong potential for growth and innovation.

What it does

Reconnaissance drone software that uses custom object recognition and machine learning to track wanted targets. It runs at close to real-time speed with nearly 100% accuracy and lets a single operator fly many drones at once. Bundled with a lightweight, sleekly designed web interface, it is inexpensive to maintain and easy to operate.

A Snapdragon Dragonboard mounted on each drone captures real-time data and processes the video feed to identify targets. Identified targets are tagged and sent to an operator overseeing several drones at a time; that information can then be relayed to the appropriate parties.
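As a minimal sketch of the tagging step, a drone could report each identified target to the operator as a small JSON message like the one below. The field names and `make_target_report` helper are hypothetical, not taken from the actual codebase:

```python
import json
import time


def make_target_report(drone_id, label, confidence, lat, lon):
    """Build a (hypothetical) JSON-serializable report that a drone could
    send to the operator backend once the on-board model tags a target."""
    return {
        "droneId": drone_id,
        "target": {"label": label, "confidence": confidence},
        "location": {"lat": lat, "lon": lon},
        "timestamp": time.time(),  # when the target was spotted
    }


report = make_target_report("drone-01", "person-of-interest", 0.97, 43.47, -80.54)
print(json.dumps(report, indent=2))
```

A payload of this shape keeps each report self-describing, so the operator frontend can render targets from many drones in a single view.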

How I built it

A Snapdragon Dragonboard running physically on the drone captures real-time data and processes the video feed to identify targets. A Python script on the board sends this information to a backend server built with Node.js (coincidentally also running on the Dragonboard for the demo), which handles processing and uses Microsoft Azure to identify potential targets. Operators access this information through a frontend.
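The Python-to-backend hop could look roughly like the sketch below: each frame is base64-encoded along with the current GPS fix and POSTed to the Node.js server. The endpoint URL, field names, and `send_frame` helper are assumptions for illustration, not the project's actual API:

```python
import base64
import json
from urllib import request

# Hypothetical demo endpoint exposed by the Node.js backend.
BACKEND_URL = "http://localhost:3000/frames"


def build_frame_payload(jpeg_bytes, lat, lon):
    """Package one camera frame plus its GPS fix as a JSON body that the
    backend can forward to Azure for target identification."""
    return json.dumps({
        "image": base64.b64encode(jpeg_bytes).decode("ascii"),
        "gps": {"lat": lat, "lon": lon},
    }).encode("utf-8")


def send_frame(jpeg_bytes, lat, lon):
    """POST a single frame to the backend and return its JSON response
    (e.g. a list of tagged targets)."""
    req = request.Request(
        BACKEND_URL,
        data=build_frame_payload(jpeg_bytes, lat, lon),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())
```

Keeping the on-board script this thin means the heavy lifting (classification via Azure) stays on the backend, so the Dragonboard only has to capture and ship frames.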

Challenges I ran into

Reliably demonstrating the project was a challenge, since during the demonstration the drone is stationary and its GPS fix never changes. The solution was to feed the program a video feed with simulated moving GPS coordinates, so that the system believes it is moving in the air.
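The simulated-motion trick could be as simple as a generator that produces a straight-line track of fake GPS fixes to pair with frames from a recorded video. This is a hypothetical sketch; the step size and function name are illustrative:

```python
def simulated_gps_track(start_lat, start_lon, step=1e-4, n=10):
    """Yield n fake GPS fixes along a straight line so the pipeline
    believes the (stationary) demo drone is flying."""
    lat, lon = start_lat, start_lon
    for _ in range(n):
        yield (round(lat, 6), round(lon, 6))
        lat += step  # drift north-east a little each frame
        lon += step


# Pair each simulated fix with a frame index from the recorded video.
for frame_idx, fix in enumerate(simulated_gps_track(43.47, -80.54, n=5)):
    print(frame_idx, fix)
```

Because the rest of the pipeline consumes (frame, GPS) pairs the same way either source is used, swapping the live camera and GPS module for this feed requires no other code changes.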

Training the model also required multiple engineers to devote most of their time to it over the course of the hackathon.

Accomplishments that I'm proud of

The code flow adapts to virtually any scenario with almost no hardcoding; for the demo, the only change was feeding it the recorded video and simulated GPS coordinates rather than the live camera feed and actual GPS readings.

What I learned

We learned a great deal about computer vision and about building and training custom classification models. We used Node.js, a highly versatile environment that can be configured to relay information very efficiently. We also picked up a few JavaScript tricks and some pitfalls to avoid.

What's next for Recognaissance

Improving the classification model with more expansive datasets, and enhancing the software to distinguish several objects at once for greater versatility.
