NatureNet

Upgrade Your Camping Experience

Inspiration

As students who are passionate about innovation and accessibility, we saw the potential for technology to enhance the outdoor camping and hiking experience, specifically by providing information about the plants and wildlife that hikers encounter on their adventures.

So, we came up with NatureNet! Equipped with technology to detect various flora and fauna, this device simultaneously educates users and, with the user's permission, records data crucial to the preservation of these forests. It also lets you enjoy nature without pulling out your phone and risking getting lost in the distractions of the busy life you are trying to take a break from.

What it does

A curious camper out in the woods wants to learn more about the greenery around them but finds it difficult to access this information without a personal guide. Now, all they need is NatureNet's magic glove!

With a camera and speaker attached, the product detects the plants and wildlife that the user points toward with a “tap” of two fingers. When the metallic sensors on the glove's index finger and thumb connect, the camera captures an image, and the species is recognized by an image recognition API. Relevant information about the species is then obtained using OpenAI's GPT-3 API and, through text-to-speech conversion, narrated over the attached speaker, guiding the user through their nature expedition!

How we built it

NatureNet is built from a combination of hardware and software components. The hardware includes a Raspberry Pi and Pi Camera worn on the user's wrist with a 3D-printed mount and a glove. The glove has two conducting plates that control when the camera takes pictures. A 7805 5 V regulator handles power management, with six AA batteries supplying the current the Pi requires. A speaker or headphones worn by the user relays the information narrated by GPT-3.
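As an illustration of that trigger mechanism, tap detection on the Pi might look like the minimal sketch below (the BCM pin number and debounce delay are assumptions, not our exact wiring):

```python
# Minimal sketch: trigger a capture when the glove's contact plates touch.
import time
import RPi.GPIO as GPIO

TAP_PIN = 17  # hypothetical BCM pin wired to the index-finger/thumb plates

GPIO.setmode(GPIO.BCM)
# Internal pull-up: the pin reads HIGH until the plates close the circuit to ground.
GPIO.setup(TAP_PIN, GPIO.IN, pull_up_down=GPIO.PUD_UP)

def wait_for_tap():
    # Block until the plates connect (pin pulled LOW), then debounce briefly.
    GPIO.wait_for_edge(TAP_PIN, GPIO.FALLING)
    time.sleep(0.05)
```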

On the software side, the Raspberry Pi serves as the device's main processing unit, supported by OpenCV, a computer vision library. Through the GPIO pins, the Pi recognizes an identification request; OpenCV then processes the image captured by the camera, and the plants and wildlife in the user's surroundings are recognized using the Plant.id API. Once a species is recognized, we send a request to the GPT-3 API to learn more about the plant in question. The response is converted to speech and played through a connected audio device.
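A simplified sketch of that pipeline follows (it skips the OpenCV preprocessing step); the Plant.id endpoint, GPT-3 model name, and helper functions are assumptions for illustration, not our exact code:

```python
# Minimal sketch of the capture -> identify -> describe -> narrate pipeline.
import base64

import openai
import pyttsx3
import requests
from picamera import PiCamera

openai.api_key = "YOUR_OPENAI_KEY"   # placeholder
PLANT_ID_KEY = "YOUR_PLANT_ID_KEY"   # placeholder

def capture_image(path="/tmp/capture.jpg"):
    # Grab a single frame from the wrist-mounted Pi Camera.
    with PiCamera() as camera:
        camera.capture(path)
    return path

def identify_plant(path):
    # Assumes Plant.id's v2 identification endpoint and response shape.
    with open(path, "rb") as f:
        payload = {
            "api_key": PLANT_ID_KEY,
            "images": [base64.b64encode(f.read()).decode()],
        }
    resp = requests.post("https://api.plant.id/v2/identify", json=payload)
    resp.raise_for_status()
    return resp.json()["suggestions"][0]["plant_name"]

def describe(species):
    # GPT-3 completion call as it looked in the pre-1.0 openai SDK.
    result = openai.Completion.create(
        engine="text-davinci-003",
        prompt=f"Give a short field-guide description of {species}.",
        max_tokens=120,
    )
    return result.choices[0].text.strip()

def narrate(text):
    # Offline text-to-speech; a cloud TTS service would also work here.
    tts = pyttsx3.init()
    tts.say(text)
    tts.runAndWait()

species = identify_plant(capture_image())
narrate(f"This looks like {species}. {describe(species)}")
```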

Challenges we ran into

On the hardware side, packaging the Pi Camera and Raspberry Pi around the user's wrist proved to be a challenge, particularly since the user still needed full mobility of their wrist and everything had to be as compact as possible. Using Fusion 360, we designed (and later 3D printed) a suitable mount with adequate clearance, enabling even the smallest member of our group to make full use of the device. Additionally, finding an efficient and portable way of powering the Pi was crucial, although we didn't have many options. We settled on a 7805 regulator to step the 9 V produced by the six AA batteries down to a constant 5 V with enough current for the Pi to operate.
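As a rough sanity check on that design (the Pi's load current here is an estimate, not a measurement), the heat a linear regulator like the 7805 must shed is just the voltage drop times the load current:

```python
# Back-of-envelope: power dissipated by the 7805 linear regulator.
V_IN, V_OUT = 9.0, 5.0    # six AA cells in series -> ~9 V; the 7805 outputs 5 V
I_LOAD = 1.0              # assumed Raspberry Pi draw in amps (illustrative)

P_HEAT = (V_IN - V_OUT) * I_LOAD   # watts burned as heat in the regulator
print(f"Regulator dissipates ~{P_HEAT:.1f} W at {I_LOAD:.1f} A")  # ~4.0 W
```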

On the software side, we faced many connectivity issues, largely network-related, when trying to communicate with the Pi. The full detection, recognition, and narration pipeline was also difficult to get right. The Pi was coded entirely over SSH in Vim, and much of the code was written not just for the device's pipeline but also to ensure the Pi operated reliably across different networks and could start on boot.
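For reference, the start-on-boot behaviour can be achieved with a systemd unit along these lines (the file path and script name are hypothetical):

```ini
# /etc/systemd/system/naturenet.service  (hypothetical path and entry point)
[Unit]
Description=NatureNet detection pipeline
After=network-online.target
Wants=network-online.target

[Service]
ExecStart=/usr/bin/python3 /home/pi/naturenet/main.py
Restart=on-failure
User=pi

[Install]
WantedBy=multi-user.target
```

Enabling the unit once with systemctl enable naturenet then launches the pipeline on every boot and restarts it if it crashes.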

Accomplishments that we're proud of

We are very proud of the continuous and successful integration between the hardware and software components of our project. Both were developed in parallel and later connected, followed by a few hours of debugging. The goal of this project was to let users learn about the environment around them with ease and without a cell phone, given the host of distractions associated with mobile devices. On the hardware side, we accomplished this with a well-designed power-management system, wrist mount, and Raspberry Pi packaging; on the software side, with a solid image recognition pipeline built on the Plant.id and GPT-3 APIs and text-to-speech software.

What we learned

We learned how to prototype software and hardware in parallel; in previous hackathon projects, we usually developed one after the other. This gave us a working prototype far sooner than anticipated! We also had a first-time hacker join our team, who integrated seamlessly into the group and worked alongside the software development team.

What's next for NatureNet

There is a long way to go for NatureNet! That includes making the device more portable and compact: we used a Raspberry Pi because it is what we had available, but much smaller microprocessors could be used for this task instead. It would also be great for adventurers to store the photos they take and maintain a history of their travels on a companion website. Finally, additional sensors to help the camera classify plants would improve the accuracy of each scan.

Built With

Fusion 360, GPT-3, OpenCV, Plant.id, Raspberry Pi
