95% of the world's population occupies only 3% of the Earth's surface. Given the sheer size of our world, it may seem almost impossible to monitor what happens to the planet we live on. But with human ingenuity, we can pair two pieces of technology - satellites and autonomous drones - to achieve near-omniscience over our surroundings. Then even the smallest changes that cause the largest impacts can be observed and dealt with before they become a threat. Drying rivers and lakes will be detected before we are forced to abandon them to their fate. Forest fires will be noticed early and will never again reach the scale of California's devastating wildfires. The farmers who feed us will be able to monitor their crops to avoid blights and prevent food shortages; this is especially critical for genetically identical crops such as bananas. By perceiving the smallest changes, we can create the greatest impacts.
What it does
PlaNET uses the Planet API to access satellite images and geospatial data. We then use GeoJSON to define the specific areas we want to monitor: particular plantations on a farm, unique patches of forest, or even Tech Green. Those areas of interest are persistently monitored by our computer vision engine, which issues a warning whenever a significant anomaly is detected.
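As a concrete illustration, an area of interest can be expressed as a small GeoJSON Feature. This is a minimal sketch: the coordinates below are placeholder longitude/latitude pairs near the Georgia Tech campus, not the actual polygon we used.

```python
import json

# Hypothetical area of interest as a GeoJSON Feature.
# Coordinates are illustrative placeholders, not the real Tech Green polygon.
tech_green = {
    "type": "Feature",
    "properties": {"name": "Tech Green"},
    "geometry": {
        "type": "Polygon",
        "coordinates": [[
            [-84.3980, 33.7745],
            [-84.3960, 33.7745],
            [-84.3960, 33.7756],
            [-84.3980, 33.7756],
            [-84.3980, 33.7745],  # closing point: ring must end where it starts
        ]],
    },
}

# Serialize for use as an on-disk AOI definition or an API request filter.
aoi_json = json.dumps(tech_green)
```

The same Feature can be dropped into a FeatureCollection to track many areas of interest at once.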
The system then deploys semi-autonomous drones to scout the location. These drones can take high-resolution pictures, map the area, and confirm the anomaly. Alternatively, though not the solution we implemented, the drones could be controlled manually and used to annotate objects of interest by hand. For example, a farmer curious about an anomaly detected in one of their fields could send a drone to the region to see what's going on.
How we built it
We used the Planet API to collect satellite images and geolocation data. The area we monitored (Tech Green) was encoded in GeoJSON, making it suitable for geotagging and analysis by our computer vision algorithms. These were built with OpenCV and Python, implementing a variety of change-detection algorithms such as fuzzy XOR, the image-ratio method, and the subtraction method. These algorithms could be scaled to send automated alerts over arbitrarily large areas in any given period of time.
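To illustrate the change-detection family we drew on, here is a minimal NumPy sketch of the subtraction and image-ratio methods on two grayscale frames, plus one common fuzzy-XOR formulation (a + b - 2ab on images normalized to [0, 1]). The thresholds are illustrative, and our actual OpenCV pipeline differs in detail.

```python
import numpy as np

def change_masks(before, after, diff_thresh=30, ratio_thresh=0.25):
    """Flag changed pixels between two uint8 grayscale images.

    Returns boolean masks from the subtraction method and the
    image-ratio method, plus a fuzzy-XOR score map.
    """
    a = before.astype(np.float64)
    b = after.astype(np.float64)

    # Subtraction method: a large absolute difference means change.
    diff_mask = np.abs(a - b) > diff_thresh

    # Image-ratio method: a ratio far from 1 means change
    # (the epsilon guards against division by zero).
    ratio = (a + 1e-6) / (b + 1e-6)
    ratio_mask = np.abs(np.log(ratio)) > ratio_thresh

    # Fuzzy XOR on images normalized to [0, 1]: the score is
    # highest where exactly one of the two pixels is bright.
    an, bn = a / 255.0, b / 255.0
    fuzzy_xor = an + bn - 2.0 * an * bn

    return diff_mask, ratio_mask, fuzzy_xor
```

In practice the masks would be cleaned up with morphological filtering and aggregated per area of interest before any alert is raised.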
Where satellite resolution falls short, we supplement the data with drones. Planet's satellites capture images at a resolution of roughly one pixel per three meters; once an anomaly is detected in any given pixel, a clear picture of the situation can be obtained through automated drone surveillance. We programmed a Parrot AR.Drone 2.0 to respond to the semi-autonomous system's alerts, recording quality video of the area where the anomaly was detected along with high-resolution pictures that highlight the changes concisely. Reaching the area of interest means flying to the GPS location received from the satellite. We achieved all of this through the ar-drone API, ROS, and the tum_ardrone package.
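The navigation step reduces to computing range and compass bearing from the drone's current GPS fix to the anomaly's coordinates. Below is a minimal sketch of that math using an equirectangular (flat-Earth) approximation, which is accurate to well under a meter over campus-scale distances; the function name is our own, not part of the ar-drone API or tum_ardrone.

```python
import math

EARTH_RADIUS_M = 6371000.0  # mean Earth radius

def range_and_bearing(lat1, lon1, lat2, lon2):
    """Distance (meters) and compass bearing (degrees, 0 = north)
    from point 1 to point 2, via an equirectangular approximation."""
    lat1_r, lat2_r = math.radians(lat1), math.radians(lat2)
    dlat = lat2_r - lat1_r
    dlon = math.radians(lon2 - lon1)

    # Scale the longitude difference by cos(latitude) so both axes
    # are in comparable angular units before converting to meters.
    x = dlon * math.cos((lat1_r + lat2_r) / 2.0)  # east component
    y = dlat                                      # north component

    distance = EARTH_RADIUS_M * math.hypot(x, y)
    bearing = math.degrees(math.atan2(x, y)) % 360.0
    return distance, bearing
```

The controller would then yaw the drone to the computed bearing and fly forward until the remaining distance drops below a tolerance, re-running the calculation as new fixes arrive.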
Challenges we ran into
We discovered that change estimation is a relatively scarce theme in computer vision, which caused a lot of unforeseen trouble finding usable methods for anomaly detection. Due to time constraints, we restricted our satellite mapping applications to the city of Atlanta and the Georgia Tech campus.
Most importantly, we had problems setting up ROS on our machines. It took around 10 hours just to get the package to install on three of our computers, and we were still unable to get the software fully working on any of our macOS, Ubuntu, or Windows machines. Luckily, a GTRI researcher lent us a VirtualBox image with the software installed, enabling us to implement our drone control system.
Accomplishments that we're proud of
First and foremost, we all stand behind our project's concept. Enabling humans to expand their vision of the world by pairing them with satellites and semi-autonomous drones spans multiple disciplines in which our interests overlap. We are proud of each other and of our teamwork in getting this idea off the ground and into reality.
We're also proud of creating anomaly detection algorithms that were able to detect the transition from Tech Brown to Tech Green over the past five months, entirely through computer vision. We wrote code that learns the dynamics of a Parrot AR drone, enabling it to fly almost autonomously over the Tech Green region. And we created a semi-autonomous system that can fly directly to any GPS coordinate and map the area by taking high-resolution pictures and video.
What we learned
Programming drones is more difficult than we initially thought; this was the main bottleneck of our hack. Because our Parrot AR drone lacks a built-in state estimator, it was difficult to guarantee how accurately its actuators tracked commands over long periods of autonomous flight. This is why we adopted a semi-automated approach rather than the fully autonomous implementation we had originally envisioned. Since building a state estimator in 24 hours proved difficult, we relied on manual human intervention to supply the corrections we could not perform autonomously. We also programmed a gap period into our test flight during which the drone simply hovered in mid-air so that people could manually maneuver it to inspect interesting features. In practice, we had to provide increasingly large amounts of manual control toward the end of each flight to compensate for accumulated state-estimation errors. Aerial autonomous robotics is not for the faint of heart.
We also learned how to do image change recognition. We were surprised to discover that relatively elementary algorithms are still used to describe changes in global descriptors, and we wondered how future research could improve anomaly detection in satellite imagery.
What's next for PlaNET
We would love to build a platform that enables users to track different land parcels over time with satellite data; we believe it would be a great challenge for our computer vision module. We would also love to build on the existing image change recognition algorithms to make anomaly detection more accurate across more use cases, and to integrate a swarm of drones into the system so they can share information with each other.