Project TetraNet

Project TetraNet is a novel research project for wildfire mitigation that utilizes footage from nanosatellites deployed to the edge of space. We created a sub-orbital nanosat that combines computer vision models, a U-Net convolutional network for image segmentation, a linear-regression artificial neural network, and a heuristic fire spread simulator to accurately apply wildfire patterns to a real-world setting. Our mission is to allow anyone to access quality terrain analysis for a fraction of the cost.
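
To illustrate the heuristic fire spread component, here is a minimal cellular-automaton sketch: a cell ignites when a neighboring cell is burning and the local vegetation density is high enough. The grid size, ignition point, and threshold below are illustrative placeholders, not the parameters used onboard.

import numpy as np

def spread_step(burning, density, threshold=0.4):
    """One heuristic step: unburned cells ignite when any 4-neighbor
    is burning and local vegetation density exceeds the threshold."""
    neighbors = np.zeros_like(burning, dtype=bool)
    neighbors[1:, :] |= burning[:-1, :]   # fire arriving from the north
    neighbors[:-1, :] |= burning[1:, :]   # from the south
    neighbors[:, 1:] |= burning[:, :-1]   # from the west
    neighbors[:, :-1] |= burning[:, 1:]   # from the east
    return burning | (neighbors & (density > threshold))

# Hypothetical 100x100 vegetation-density map with values in [0, 1]
density = np.random.rand(100, 100)
burning = np.zeros((100, 100), dtype=bool)
burning[50, 50] = True                    # single ignition point

for _ in range(25):                       # simulate 25 spread steps
    burning = spread_step(burning, density)
print(f"{burning.sum()} cells burned")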

Flexible Sub-Orbital Positioning

Here we filled a 600 g latex weather balloon with helium to carry the payload containing a TetraNet device. Varying the amount of helium gave us access to any region within a 30+ kilometer radius [500 m for live streaming].
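
For a rough sense of how the helium fill controls the flight: net (free) lift is the buoyancy of the displaced air minus the mass carried, and it determines ascent rate and burst altitude. A minimal sketch, assuming sea-level densities and a hypothetical fill volume and payload mass:

# Approximate densities at sea level, 0 degrees C
RHO_AIR = 1.293     # kg/m^3
RHO_HELIUM = 0.179  # kg/m^3

def free_lift_kg(fill_volume_m3, balloon_mass_kg, payload_mass_kg):
    """Buoyant lift of the displaced air minus the mass being lifted."""
    gross_lift = fill_volume_m3 * (RHO_AIR - RHO_HELIUM)
    return gross_lift - (balloon_mass_kg + payload_mass_kg)

# Hypothetical fill: 2.5 m^3 of helium, 600 g envelope, 1.2 kg payload
print(f"Free lift: {free_lift_kg(2.5, 0.6, 1.2):.2f} kg")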

Establishing Hardware Components

A single TetraNet satellite, launched on a specialized weather balloon, can deliver low-latency communication to affected regions using radio transmission technology, all from the safety of the skies. With the onboard neural network, a single TetraNet device can monitor wildfires and inform officials of undetected firestorms within a 20+ mile range. With forest fires brought under control quickly, millions of acres of land can be conserved while entire populations are safeguarded from devastating damage. TetraNet's goal is to minimize the time it takes for locals to be informed, as we believe communication is a crucial factor in the outcome of a wildfire, especially when it means protecting vulnerable populations and the lives of many.
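
The wide coverage comes from altitude: the radio line-of-sight horizon grows roughly with the square root of height. A small sketch using the standard approximation d = sqrt(2Rh), with illustrative altitudes rather than our exact flight profile:

import math

EARTH_RADIUS_M = 6_371_000

def line_of_sight_km(altitude_m):
    """Distance to the geometric horizon for a transmitter at altitude_m."""
    return math.sqrt(2 * EARTH_RADIUS_M * altitude_m) / 1000

for altitude in (500, 10_000, 30_000):   # live-stream ceiling up to burst altitude
    print(f"{altitude:>6} m -> {line_of_sight_km(altitude):7.1f} km horizon")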


Heuristic Terrain Analysis

Image databases from NASA MODIS satellite imagery, as well as EPA aerial databases, serve as a good benchmark for the performance of TetraNet data. To begin the data analysis process, an image segmentation algorithm extracts important features from the wildfire data. The following images demonstrate the convolutional network's results.
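
For reference, the sketch below shows what a minimal U-Net of this kind looks like in Keras; the depth, filter counts, and input size are illustrative rather than the exact architecture we trained.

import tensorflow as tf
from tensorflow.keras import layers, Model

def build_unet(input_shape=(128, 128, 3)):
    """Minimal U-Net: two downsampling blocks, a bottleneck,
    and two upsampling blocks with skip connections."""
    inputs = layers.Input(shape=input_shape)

    c1 = layers.Conv2D(16, 3, activation='relu', padding='same')(inputs)
    p1 = layers.MaxPooling2D()(c1)
    c2 = layers.Conv2D(32, 3, activation='relu', padding='same')(p1)
    p2 = layers.MaxPooling2D()(c2)

    b = layers.Conv2D(64, 3, activation='relu', padding='same')(p2)

    u2 = layers.Concatenate()([layers.UpSampling2D()(b), c2])
    c3 = layers.Conv2D(32, 3, activation='relu', padding='same')(u2)
    u1 = layers.Concatenate()([layers.UpSampling2D()(c3), c1])
    c4 = layers.Conv2D(16, 3, activation='relu', padding='same')(u1)

    # One output channel: per-pixel probability of high-risk vegetation
    outputs = layers.Conv2D(1, 1, activation='sigmoid')(c4)
    return Model(inputs, outputs)

model = build_unet()
model.compile(optimizer='adam', loss='binary_crossentropy')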

Exporting to Azure ML Services

We created a machine learning model and an ANN that allowed us to identify dense vegetation with a higher probability of igniting. The ML model was trained by splitting footage from a prior TetraNet launch into individual frames. After the ML script was trained on each image frame, we built an ANN that let us model the path and direction in which a fire could potentially spread.
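
Frame extraction of this kind can be done with OpenCV; the sketch below is a minimal version in which the paths and sampling rate are placeholders.

import os

import cv2

def extract_frames(video_path, out_dir, every_n=30):
    """Save every n-th frame of a launch video as a training image."""
    os.makedirs(out_dir, exist_ok=True)
    capture = cv2.VideoCapture(video_path)
    index = saved = 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if index % every_n == 0:
            cv2.imwrite(f"{out_dir}/frame_{saved:05d}.png", frame)
            saved += 1
        index += 1
    capture.release()
    return saved

# One image every 30 frames (about 1 per second at 30 fps)
extract_frames('launch_footage.mp4', 'training_frames', every_n=30)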

Running with the REST API

We used a REST API as the medium for integrating and exposing our web service.

Example Python invocation:

import requests

# An example URL for accessing the web service
azure_aci_url = 'http://67534526-f00a-ds33-a447-22a76351d991.eastus.azurecontainer.io/score'

# Read the image frame to be segmented (the path is a placeholder)
image_directory = 'frame.png'
files = {'image': open(image_directory, 'rb').read()}

# POST the frame to the scoring endpoint and parse the JSON response
response = requests.post(azure_aci_url, files=files)
mask_data = response.json()

The response carries the model's segmentation mask for the submitted frame.
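
Assuming the scoring script serializes the mask as a nested list of per-pixel probabilities (the exact payload shape depends on score.py), the response can be decoded back into an image like so:

import numpy as np

# Hypothetical payload format: {'mask': [[0.1, 0.9, ...], ...]}
mask = np.array(mask_data['mask'])
binary_mask = (mask > 0.5).astype(np.uint8) * 255  # threshold to black/white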

Challenges and What we Learned

Full-Stack Flask API Integration

Initially, we decided to use Flutter to build our UI. However, after completing the majority of the UI, we learned that Flutter does not support serial communication in desktop applications. We then had to quickly learn Flask for HTML applications, and under the time constraint we deployed our own website client in just under 12 hours.
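
A minimal sketch of the Flask approach is below; the route names, serial port, and template are hypothetical stand-ins for our actual client.

from flask import Flask, jsonify, render_template
import serial  # pyserial provides the serial access Flutter desktop lacked

app = Flask(__name__)

# Hypothetical ground-station radio receiver on a USB serial port
ground_station = serial.Serial('/dev/ttyUSB0', 9600, timeout=1)

@app.route('/')
def index():
    # Serves the HTML client from templates/index.html
    return render_template('index.html')

@app.route('/telemetry')
def telemetry():
    # Read one line of telemetry from the receiver and return it as JSON
    line = ground_station.readline().decode('utf-8', errors='ignore').strip()
    return jsonify({'telemetry': line})

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)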

Azure Machine Learning Service

We built an ML script through Azure ML notebooks and used ACI containers to expose the model as a REST endpoint. None of our team members had ever used REST APIs or Azure before, so we read through the extensive Azure documentation and eventually established a connection to the HTTP server.
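
For reference, the Azure ML SDK (v1) flow for deploying a registered model to an ACI endpoint looks roughly like this; the model, environment, and service names are placeholders:

from azureml.core import Environment, Model, Workspace
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice

ws = Workspace.from_config()  # reads the config.json downloaded from the portal

# Register the trained model file with the workspace (names are placeholders)
model = Model.register(ws, model_path='unet_model.h5', model_name='tetranet-unet')

# score.py defines init()/run(); environment.yml lists the Python dependencies
env = Environment.from_conda_specification('tetranet-env', 'environment.yml')
inference_config = InferenceConfig(entry_script='score.py', environment=env)

# Deploy to an Azure Container Instance, which exposes a REST scoring endpoint
deployment_config = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=2)
service = Model.deploy(ws, 'tetranet-service', [model], inference_config, deployment_config)
service.wait_for_deployment(show_output=True)
print(service.scoring_uri)  # the URL consumed by the client code above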

Azure Blob Service

It was interesting to use Azure in our project for the first time, and there was a significant learning curve in operating and utilizing storage for our data. We decided to use the Azure Blob service to optimize our costs and to flexibly scale up to high-performance computing for an extensive dataset.
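
Uploading launch frames to Blob Storage with the azure-storage-blob SDK looks roughly like this; the connection string, container, and file names are placeholders:

from azure.storage.blob import BlobServiceClient

# The connection string comes from the storage account's access keys (placeholder)
client = BlobServiceClient.from_connection_string('<connection-string>')
container = client.get_container_client('launch-frames')

# Upload one training frame; the blob name mirrors the local file name
with open('training_frames/frame_00001.png', 'rb') as data:
    container.upload_blob(name='frame_00001.png', data=data, overwrite=True)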

Apeer to Generate Our Own Datasets

Although satellite imagery was a viable option, most services required us to be part of a government institution. As a result, we conducted two separate launches, the first of which was used to generate image datasets for training. Apeer let us create the annotations the machine learning model needed to recognize sparse and dense vegetation.
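
Assuming the exported masks share file names with their frames (our actual layout may differ), pairing images with Apeer annotations for training could look like this:

import glob
import os

import cv2
import numpy as np

def load_dataset(image_dir, mask_dir, size=(128, 128)):
    """Pair launch frames with their exported annotation masks."""
    images, masks = [], []
    for image_path in sorted(glob.glob(os.path.join(image_dir, '*.png'))):
        mask_path = os.path.join(mask_dir, os.path.basename(image_path))
        image = cv2.resize(cv2.imread(image_path), size) / 255.0
        mask = cv2.resize(cv2.imread(mask_path, cv2.IMREAD_GRAYSCALE), size)
        images.append(image)
        masks.append((mask > 0)[..., None].astype(np.float32))  # binary vegetation mask
    return np.array(images), np.array(masks)

X, y = load_dataset('training_frames', 'apeer_masks')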

What's next for Project TetraNet

One avenue for progression is improving connection stability and upgrading the sensors, both of which left room for improvement. Given TetraNet's relevance in an increasingly fire-prone world, it could also be tested on an active wildfire, potentially opening connections to other firefighting services.

We also designed a future TetraNet revision capable of hosting its own development board with programmable I/O pins.

