Inspiration

We always wondered how urban planners visualize the landscape of a city solely from its 2D layout on a map. Moreover, it is difficult to plan sustainably for the future in a world increasingly affected by climate change and global warming. So we built a tool that lets city planners view part of a city in 3D while also suggesting improvements based on real-time and historical weather data.

What it does

We built a web dashboard that first lets the user select the location they want to develop on a map. They can then generate a 3D mesh of the city tile and interact with it, viewing it from multiple angles. Charts forecasting the air pressure, wind speed, precipitation, and temperature of the chosen location are displayed. The website also recommends the most suitable installation (solar panels, trees, rainwater harvesters, or windmills) based on the weather data for the chosen location. The user can simulate how a particular installation would look in real life by placing 3D models of it in a 3D landscape of the city. Finally, it reports the Return on Investment (ROI) and the number of years until the investment breaks even, important figures for a government on a tight budget.

How we built it

Google Maps

  • Used the Google Maps JavaScript API to read the marker's latitude and longitude dynamically as the user moves it on the map.
  • Used the Google Maps Static API to capture a satellite image of the current map view, which is sent to the backend for image processing.
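On the backend, the Static API request is just a URL built from the marker coordinates. A minimal sketch (the zoom, size, and key values here are illustrative, not the project's actual parameters):

```python
from urllib.parse import urlencode

def static_map_url(lat: float, lng: float, zoom: int = 17,
                   size: str = "640x640", api_key: str = "YOUR_API_KEY") -> str:
    """Build a Google Static Maps request URL for a satellite tile
    centred on the marker position sent from the frontend."""
    params = {
        "center": f"{lat},{lng}",
        "zoom": zoom,
        "size": size,
        "maptype": "satellite",
        "key": api_key,
    }
    return "https://maps.googleapis.com/maps/api/staticmap?" + urlencode(params)
```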

Image segmentation

  • The satellite image is processed to estimate the percentage of green (signifying trees) and the percentage of gray (signifying roads); the resulting ratio is used to recommend whether to plant trees.
  • The Keras API was used to build the image segmentation model.
  • We fine-tuned ResNet, a deep Convolutional Neural Network (CNN) architecture developed at Microsoft Research, to adapt it to the land-cover image domain.
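The green/gray ratio step can be illustrated with a simple color-threshold sketch (a stand-in for the actual CNN segmentation; the thresholds here are assumptions for illustration only):

```python
import numpy as np

def vegetation_ratio(img: np.ndarray) -> float:
    """Estimate green (vegetation) vs. gray (road) cover in an RGB
    satellite tile. Simplified threshold-based stand-in for the
    Keras segmentation model; thresholds are illustrative."""
    r = img[..., 0].astype(int)
    g = img[..., 1].astype(int)
    b = img[..., 2].astype(int)
    green = (g > r + 10) & (g > b + 10)                        # green channel dominates
    gray = (np.abs(r - g) < 15) & (np.abs(g - b) < 15) & (r > 60)  # near-neutral, bright
    n_green, n_gray = int(green.sum()), int(gray.sum())
    return n_green / max(n_green + n_gray, 1)
```

A ratio near 0 would suggest planting trees; a ratio near 1 suggests the area is already well vegetated.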

Mesh Analysis

  • We run a custom mesh analysis script in the background with the open-source 3D modelling software Blender to detect and mark all the building tops in a given area's mesh. Calculations on these values then drive the recommendation of solar panels, windmills, or rainwater harvesters.
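The core idea of the building-top pass can be sketched in plain Python (the real script runs inside Blender via `bpy`; this simplified version, with assumed thresholds, marks roughly horizontal faces above a height cutoff as roof candidates):

```python
def flat_roof_faces(verts, faces, min_height=3.0, up_thresh=0.9):
    """Mark roughly horizontal faces above a height threshold as
    candidate building tops. verts: list of (x, y, z) tuples;
    faces: list of vertex-index triples. Simplified stand-in for
    the Blender mesh analysis script."""
    roofs = []
    for i, (a, b, c) in enumerate(faces):
        pa, pb, pc = verts[a], verts[b], verts[c]
        # Face normal via the cross product of two edge vectors.
        u = [pb[j] - pa[j] for j in range(3)]
        v = [pc[j] - pa[j] for j in range(3)]
        n = [u[1] * v[2] - u[2] * v[1],
             u[2] * v[0] - u[0] * v[2],
             u[0] * v[1] - u[1] * v[0]]
        length = (n[0] ** 2 + n[1] ** 2 + n[2] ** 2) ** 0.5 or 1.0
        upward = abs(n[2]) / length          # 1.0 = perfectly horizontal face
        height = min(pa[2], pb[2], pc[2])    # lowest corner of the face
        if upward >= up_thresh and height >= min_height:
            roofs.append(i)
    return roofs
```

Summing the areas of the marked faces then gives the usable roof area for sizing solar panels or rainwater harvesters.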

3D object placement

  • We used three.js to load a 3D model's object and material properties in the web browser; the before and after processed models can be moved with a single set of controls and a common camera.
  • You can also place trees, solar panels, and other assets on the terrain in a separate, distraction-free view, see the impact on the budget, and plan how many of each asset you need based on our suggestions. This view also shows the ROI and break-even point of the investment.
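The ROI and break-even figures follow from a simple calculation. A minimal sketch, assuming a fixed asset lifetime (the 25-year figure and the inputs below are illustrative, not the project's actual numbers):

```python
def roi_and_payback(cost: float, annual_saving: float):
    """Return (ROI over the asset lifetime, years to break even).
    The 25-year lifetime is an assumed illustrative value; a real
    deployment would use local tariffs and installation quotes."""
    lifetime_years = 25
    total_return = annual_saving * lifetime_years
    roi = (total_return - cost) / cost       # e.g. 4.0 = 400% over the lifetime
    payback_years = cost / annual_saving     # years until cumulative savings equal cost
    return roi, payback_years
```

For example, a $10,000 installation saving $2,000 per year breaks even in 5 years.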

Dashboard

  • The dashboard was created using HTML, CSS (Flexbox), and vanilla JavaScript.
  • Highcharts was used for visualizing weather data.
  • The air quality index was determined by sending the latitude and longitude to a public API.
  • We used Flask for the backend.
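The frontend talks to the Flask backend over small JSON endpoints. A minimal sketch of what such an endpoint might look like (the route name, query parameters, and placeholder rule are assumptions, not the actual app's code):

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/api/recommendation")
def recommendation():
    """Hypothetical endpoint: the frontend sends the marker's
    coordinates and receives weather-driven advice back."""
    lat = float(request.args.get("lat", 0.0))
    lng = float(request.args.get("lng", 0.0))
    # Placeholder rule for illustration; the real backend would
    # combine weather data, segmentation, and mesh analysis here.
    advice = "solar_panels" if abs(lat) < 35 else "windmills"
    return jsonify({"lat": lat, "lng": lng, "recommendation": advice})
```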

Challenges we ran into

  • Using the Google Maps API to dynamically send the longitude and latitude of the chosen location to the backend whenever the marker on the map is moved.

  • Adding self-defined neural network layers while fine-tuning the image segmentation model built with the Keras API.

  • Integrating the front end and back end, because many API calls had to be made between them.

  • Speeding up the image processing algorithms, which initially took more than 10 seconds to return the data.

Accomplishments that we're proud of

  • The website we built would aid policy-makers in browsing the whole global land system as well as examining in detail a specific region whose environmental policy needs improvement. It is also simple to use.

  • The land segmentation, powered by a deep Convolutional Neural Network (CNN), predicts a fine-grained class for each pixel based on both local features (what is near the pixel) and features of the whole image. This feature hierarchy, enabled by the deep CNN, makes the prediction highly accurate.

What we learned

  • Manipulating data using Google Maps

  • Frontend-backend communication using Flask

  • Making 3D models using Blender

What's next for Terrainier

  • Improve the accuracy of the image segmentation model by fine-tuning ResNet with more data obtained via data-augmentation techniques.

  • Adding more 3D models other than solar panels and trees to the 3D simulation environment.

  • Including more data points and more methods of improving the environment.
