Surveying biodiversity in forests like the Amazon is challenging. Drones provide a high-level (literally) view of the forest but cannot capture the details below the dense canopy. Fixed sensors and in-person surveys are difficult and time-consuming to set up. How else can we non-invasively monitor forest biodiversity?
What it does
The robot takes advantage of the dense understory tree layer, moving along tree trunks and branches. Operating at this level limits disruption of the forest-floor ecosystem while still providing an opportunity to capture film and sensor readings from the environment.
How we built it
Minimum Viable Product: Breaking the robot down into a minimum viable product, we focused on prototyping its movement from branch to branch (or trunk to trunk) using extendable arms and firm grippers.
Ideally, once placed onto an initial branch, the robot uses computer vision to locate the next nearest branch. Rotating its body about the initial point of contact, the robot pinpoints the location of the target trunk and extends its other arm to meet it. Once the new grip is firm, the original gripper is relaxed and its arm retracted, and the cycle repeats.
For our final demo, we simplified the motion. By placing the robot body on wheels, we eliminated the need for the contact points to support its entire weight. On each side of the body we focus on a single motion: the wrist rotation on one side, and the extension/grip on the other.
Mechanical Details: The key components of our prototype are the wrist-rotation servo, a linear actuator, and a gripper. The three parts act in sequence to produce the desired motion: rotating about one trunk toward the next, then extending and gripping. Each component uses a different power source:
The linear actuator is a DC motor running off a 12V DC power supply. To reverse the direction of current, allowing both extension and retraction, we used a 2-channel 12V relay switch, which flips the current direction on command. This wiring is pictured below. Meanwhile, the wrist-rotation servo uses a 6V wired battery pack, and the claw uses a separate 9V battery pack.
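As a rough illustration, the relay logic can be sketched in Python. This is a hypothetical sketch, not our exact wiring: the GPIO pin numbers and the mapping of commands to relay channels are assumptions.

```python
# Hypothetical sketch of driving the 2-channel relay from the Pi to
# reverse the linear actuator. Pin numbers and channel mapping are
# assumptions, not our exact wiring.

EXTEND, RETRACT, STOP = "extend", "retract", "stop"

def relay_states(command):
    """Map a motion command to (channel_1_on, channel_2_on).

    Energising the two channels in opposite configurations swaps the
    polarity of the 12V supply across the DC motor, reversing its
    direction; de-energising both cuts power and stops the actuator."""
    if command == EXTEND:
        return (True, False)
    if command == RETRACT:
        return (False, True)
    return (False, False)

# On the Pi itself this would drive the relay inputs via GPIO,
# e.g. (assumed BCM pins 17 and 27):
#   import RPi.GPIO as GPIO
#   GPIO.setmode(GPIO.BCM)
#   GPIO.setup([17, 27], GPIO.OUT)
#   ch1, ch2 = relay_states(EXTEND)
#   GPIO.output(17, ch1)
#   GPIO.output(27, ch2)
```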
Code and Computer Vision: The robot is controlled by a central Raspberry Pi. The mini-computer provides the signal input to each of the mechanical components and reads from the camera.
Currently, each component is activated in sequence manually by running its corresponding script on the Pi, though the scripts could easily be combined. A simple next step would therefore be to connect the scripts and drive the mechanical components automatically from the camera's vision.
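Combining the scripts could look something like the sketch below. The step names and callables are assumptions standing in for our individual component scripts; this is a minimal sequencer, not our actual control code.

```python
# Hypothetical sequencer chaining the per-component scripts into one
# locomotion cycle. Step names are assumptions; each callable would
# wrap the corresponding existing component script.

STEP_SEQUENCE = [
    "rotate_wrist",   # swing the free arm toward the target trunk
    "extend_arm",     # linear actuator pushes the gripper outward
    "grip",           # close the claw on the new trunk
    "release_rear",   # relax the original gripper
    "retract_arm",    # pull the body in toward the new anchor point
]

def run_cycle(actions):
    """Run one branch-to-branch cycle.

    `actions` maps each step name to a callable. Returns the ordered
    list of steps executed, which is handy for logging/debugging."""
    executed = []
    for name in STEP_SEQUENCE:
        actions[name]()
        executed.append(name)
    return executed
```

With the camera in the loop, `rotate_wrist` and `extend_arm` would take their targets from the vision output rather than fixed values.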
The camera's job is to identify the "tree". Our original plan was to use two cameras to recognize tree trunks, fine-tuning a semantic segmentation model (https://innovation.ox.ac.uk/licence-details/semantic-image-segmentation/). The parallax between the trunk's positions in the two video streams, against the image background, would then let us precisely calculate the distance the arm needs to extend. However, due to budget constraints we could only attach one camera to the Raspberry Pi at a time, and we did not have the computational power to fine-tune a computer-vision model. To simplify things, we made the tree trunk pink and use a simple HSV filter, combined with a couple of denoising and edge-detection operations, to determine the location of the trunk.
Challenges we ran into
The largest issue in creating the prototype was that we underestimated the weight of the individual components and the need for sturdy fasteners.
The gripper and actuators proved much more cumbersome and heavy than initially believed. As a result, in our initial prototyping stage we placed the robot body on wheels, which supported the robot's full weight and let us focus on the rotation and extension motions. Below we see the initial setup with a "tree", the robot body, and a fixed wrist-rotation servo.
In the final design, mounting both actuators and the gripper on the robot proved infeasible. The linear-actuator fasteners did not arrive within the time scale of this project, so we had no good way of connecting the wrist servo or the gripper to the actuators. We instead created a rigid connection between the robot body and the rotation servo with some dowels (pictured below) to get the body to move. For now, the gripper rests on the ground adjacent to the target "tree" and grips on command.
What's next for Automating Forest Biodiversity Surveys
We hope to work on the automation and the physical construction of the biodiversity robot!