Pothole AI was designed after looking at a dashcam and asking: "What if we used edge analytics on a dashcam to map out where potholes are, so that local governments and communities knew where to focus their efforts?"
It consists of two parts:
- A Raspberry Pi-based camera using a PyTorch model to do edge inference of road quality
- A website to aggregate readings and provide a full view of road conditions.
The idea behind these two components is to let members of the community automatically record and upload road conditions, using edge inference to reduce the data sent across the network. This data can then be displayed on a website, giving communities and local governments a geographic density map of potholes: a better idea of where to spend their efforts, and a way to find and fix potholes that haven't been reported (because let's face it, who's actually reported a pothole to their local council?).
We believe that this solution is necessary because a quick Google search for "pothole" leads you to:
- Many questions on whether cities can be sued for damage to cars due to potholes
- News articles about a village throwing a birthday party for a pothole
- News articles about a man who, despite warnings from the police, is committed to filling in potholes himself
- News articles about a man celebrating the third birthday of a pothole
These articles are not an indication of a system that is working well.
Some of the more serious consequences of potholes include:
- Annual average cost to vehicles of $377 due to rough pavement
- Of approximately 33,000 traffic fatalities each year, one-third involve poor road conditions.
- Raspberry Pi 4B
- Ublox Neo M8 GPS
- Pi Camera
- Optional: 3A Power Bank
- Optional: USB to FTDI converter
- Optional: Raspberry Pi Heatsink
To build this system we used a Raspberry Pi 4B with a PiCam v2 and a Neo M8 GPS. The Raspberry Pi was responsible for getting the current GPS position, taking a picture, and running the PyTorch model to get a confidence score of the road being in poor condition.
This information was then sent to an AWS Lambda endpoint which in turn stores the latitude, longitude, device name and image in a database.
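The ingest side can be very small. Here is a hedged sketch of such a Lambda handler; the field names and the storage step are our assumptions rather than the project's actual schema, and a real implementation would write the image to S3 and the row to the database:

```python
import json

# Fields we assume the camera includes in each reading (illustrative names).
REQUIRED_FIELDS = ("latitude", "longitude", "device_name", "image")


def handler(event, context):
    """AWS Lambda entry point for a pothole reading sent as a JSON POST body."""
    body = json.loads(event.get("body") or "{}")
    missing = [field for field in REQUIRED_FIELDS if field not in body]
    if missing:
        # Reject incomplete readings so bad data never reaches the database.
        return {"statusCode": 400,
                "body": json.dumps({"error": f"missing fields: {missing}"})}
    # A real implementation would base64-decode body["image"], upload it to S3,
    # and insert (latitude, longitude, device_name, photo key) into the database.
    return {"statusCode": 200, "body": json.dumps({"status": "stored"})}
```

Keeping validation in the handler means a misconfigured camera fails loudly with a 400 rather than silently polluting the map.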
We would not recommend replacing the Pi Camera with a generic USB camera: the way we have written our code assumes a Pi Camera, and results with a USB camera are uncertain.
You can substitute the Ublox Neo M8 GPS with many other models, so long as they support sending NMEA strings over UART and are powered by 3.3v-5v. We ended up passing ours through an FTDI -> USB header so we didn't need to worry about setting up the Raspberry Pi's GPIO pins.
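For anyone wiring up a different module: the NMEA `$GPGGA` sentence carries the fix in degrees-and-minutes form, and decoding it takes only a few lines. A sketch follows; the helper is ours, and on the Pi the sentence would come from reading the UART/FTDI serial port (e.g. with `pyserial`) rather than a hard-coded string:

```python
def gga_to_decimal(sentence):
    """Convert a NMEA $GPGGA sentence's ddmm.mmmm / dddmm.mmmm position
    fields into signed decimal-degree (latitude, longitude)."""
    fields = sentence.split(",")
    # Latitude: first two digits are degrees, the rest are minutes.
    lat = float(fields[2][:2]) + float(fields[2][2:]) / 60.0
    if fields[3] == "S":
        lat = -lat
    # Longitude: first three digits are degrees, the rest are minutes.
    lon = float(fields[4][:3]) + float(fields[4][3:]) / 60.0
    if fields[5] == "W":
        lon = -lon
    return lat, lon


# Widely used example sentence from NMEA 0183 documentation.
lat, lon = gga_to_decimal(
    "$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,*47")
```

Any receiver that emits standard NMEA over UART will work with this kind of parsing, which is why the exact GPS model matters so little.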
After observing our system we would make three recommendations:
- Put a heatsink on the Raspberry Pi, as it gets hot when doing inference like this.
- Put the camera slightly elevated on your car's dashboard. You may notice a duct-taped box that ours was sitting on. We ended up discarding a lot of our initial readings because too much of the field of view was blocked by the windscreen wipers. (Note for those who are concerned: Australian steering wheels are on the right side of the car, not the left.)
- Use a USB power bank instead of a USB slot powered by your car. The Raspberry Pi is very particular about current/voltage.
Deep learning can be very intimidating at first, and taking inspiration from Fast.ai we wanted to show how easy it is to get something that works.
See the notebook for a full end-to-end downloading and training example that will output a model.
We took the dataset created by the amazing M.J. Booysen, an annotated dataset of labeled images of roads with and without potholes. While this dataset is originally meant for localisation (locating potholes in images), we chose to use it as a classification dataset (image contains potholes or no potholes), similar to Jian Yang's famous app.
We used the PyTorch hosted MobileNet V2 network, which is pretrained on Imagenet. This is a convolutional neural network which is small enough to be used in embedded applications.
We then cut off the classification layer to change it from trying to predict 1000 classes to just two.
Existing classifier:

```
Sequential(
  (0): Dropout(p=0.2, inplace=False)
  (1): Linear(in_features=1280, out_features=1000, bias=True)
)
```

New classifier:

```
Sequential(
  (0): Dropout(p=0.2, inplace=True)
  (1): Linear(in_features=1280, out_features=2, bias=True)
)
```
We used transfer learning to speed up the training process, freezing the initial weights of the ImageNet-trained model and training only our classifier. We then progressively unfroze more weights and trained at lower learning rates.
Once we had our saved model, we used it to perform inference on the Raspberry Pi, allowing us to only send data when the predicted score reached above a custom threshold, to conserve bandwidth.
The Raspberry Pi software is available as a command line tool that can be installed with pip (see the installation steps below).
It supports sending data to multiple endpoints (AWS SQS, HTTP POST, stdout, or a file) and is configurable via command line inputs and environment variables.
You can view the help by passing --help to any of the commands provided by the cameraai Python package.
```
Usage: aicamera [OPTIONS] COMMAND [ARGS]...

Options:
  --camera_number INTEGER    Raspberry Pi camera number according to
                             https://picamera.readthedocs.io/en/release-1.13/api_camera.html#picamera,
                             Default: 0
  --camera_invert BOOLEAN    Vertical invert camera, Default: False
  --baud_rate INTEGER        Baud rate on GPS, Default: 9600
  --serial_port TEXT         Serial port for GPS, Default: /dev/ttyUSB0
  --model_path TEXT          Pytorch Model Location, Default:
                             /home/pi/aicamera/models/thirdstep.model
  --device_name TEXT         Device Name, Default: devpi
  --min_predict_score FLOAT  Minimum prediction score to send, Default: 0.5
  --help                     Show this message and exit.

Commands:
  to_file
  to_http
  to_sqs
  to_stdout
```
Most of these CLI options are also exposed as environment variables.
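The usual precedence for such tools is: explicit CLI flag if given, else the environment variable, else the built-in default. A minimal sketch of that pattern, where the helper function is illustrative (CLI libraries like Click provide this via an `envvar` option):

```python
import os


def resolve_option(cli_value, env_name, default):
    """Return the CLI value if provided, else the named environment
    variable if set, else the built-in default."""
    if cli_value is not None:
        return cli_value
    return os.environ.get(env_name, default)


# e.g. the GPS serial port: --serial_port flag, GPS_SERIAL_PORT env var,
# then the documented default of /dev/ttyUSB0.
port = resolve_option(None, "GPS_SERIAL_PORT", "/dev/ttyUSB0")
```

This makes the systemd deployment convenient: the unit file can set environment variables once rather than carrying a long `ExecStart` argument list.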
```
# Base URL to send HTTP post to
# BASE_URL=127.0.0.1
# Vertical invert camera
# CAMERA_INVERT=False
# Raspberry Pi camera number according to https://picamera.readthedocs.io/en/release-1.13/api_camera.html#picamera
# CAMERA_NUMBER=0
# Device Name
# DEVICE_NAME=simul8
# Baud rate on GPS
# GPS_BAUD_RATE=9600
# Serial port for GPS
# GPS_SERIAL_PORT=/dev/ttyAMA0
# PyTorch Model Location
# MODEL_PATH=/opt/model
```
Follow these steps to get the website ready for development and deployment.
Install app dependencies:
```
$ cd app
$ npm install
```
Set app environment variables by creating the file `.env.local` with the following variables. You'll need a Google Maps API key.
```
VUE_APP_API_URL=http://localhost:8888
VUE_APP_GOOGLE_MAPS_API_KEY=YOUR_GOOGLE_MAPS_KEY
VUE_APP_GOOGLE_ANALYTICS_KEY=YOUR_GOOGLE_ANALYTICS_KEY
```
Start app in development mode:
```
$ npm run dev
```
Create a Python virtual environment and install lambda dependencies:
```
$ cd lambdas
$ virtualenv --python python3 venv
$ source venv/bin/activate
$ pip install -r requirements.txt
$ npm install
```
Set lambda environment variables by creating the file `.env.development` with the following variables:
```
DEBUG=True
PRODUCTION=False
QUERY_MAX_RESULT_COUNT=100
QUERY_DEFAULT_RESULT_COUNT=10
PHOTO_BUCKET_NAME=S3_BUCKET_NAME
PHOTO_KEY_PREFIX=potholes/
DB_HOST=YOUR_DATABASE_HOST
DB_PORT=YOUR_DATABASE_PORT
DB_NAME=YOUR_DATABASE_NAME
DB_USER=YOUR_DATABASE_USER
DB_PASSWORD=YOUR_DATABASE_PASSWORD
```
Start lambdas in development mode:
```
$ npm run dev
```
Building & Deployment
Be sure to check your AWS keys and `serverless.yml` before deployment so as not to incur costs.
Run the following commands to create a production build of the front-end app. The content of the resulting `dist/` folder can be uploaded to any static site hosting provider. We're using Firebase.
```
$ cd app
$ npm run build
```
Deploy lambdas using serverless:
```
$ cd lambdas
$ npm run deploy
```
Run these commands on a Raspberry Pi with an internet connection.
If you're behind a corporate proxy you may need to modify your http_proxy and https_proxy environment settings on the Raspberry Pi.
Enable Pi Camera
Configure the camera with `raspi-config`.
Give your pi user permission to access serial devices.
```
sudo usermod -a -G dialout pi
```
```
git clone --recursive https://github.com/pytorch/pytorch
cd pytorch
git submodule update --remote third_party/protobuf
sudo -E USE_MKLDNN=0 USE_QNNPACK=0 USE_NNPACK=0 USE_DISTRIBUTED=0 BUILD_TEST=0 python3 setup.py install
```
```
git clone --recursive https://github.com/pytorch/vision.git
cd vision
sudo -E USE_MKLDNN=0 USE_QNNPACK=0 USE_NNPACK=0 USE_DISTRIBUTED=0 BUILD_TEST=0 python3 setup.py install
```
```
sudo apt update
sudo apt upgrade -y
sudo apt-get install python-picamera python3-picamera libopenjp2-7 libtiff5 -y
sudo apt install libopenblas-dev libblas-dev m4 cmake cython python3-dev python3-yaml python3-setuptools -y
git clone https://github.com/SrzStephen/Aicamera.git
cd Aicamera
pip3 install -r requirements.txt
pip3 install .
```
You should then be able to type `cameraai` and see the help page come up.
You can use the `cameraai.service` file in the unit file directory to auto-start the cameraai program on boot. To do this:
```
sudo cp unit/cameraai.service /etc/systemd/system/cameraai.service
sudo systemctl enable cameraai
sudo systemctl start cameraai
```
You will probably want to modify the `ExecStart` command to suit your setup, and point the `--model_path` parameter to the model you wish to use.
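As an illustration of what an edited unit file might look like (the binary path and the choice of subcommand here are assumptions; adjust them to your install):

```
[Unit]
Description=Pothole AI camera service
After=network-online.target

[Service]
User=pi
ExecStart=/usr/local/bin/cameraai to_sqs --model_path /home/pi/aicamera/models/thirdstep.model
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

`Restart=on-failure` is worth keeping: in a car the service will see flaky power and serial hiccups, and systemd restarting it automatically saves a trip to the dashboard.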
Available in this repo are several pretrained models, including `thirdstep.model`, which represent each of the training stages. By default you will use `thirdstep.model`.
While the pretrained models are available in the `models` directory, you can also train one yourself.
The full code required to train the model is available in the notebook.
We would suggest training this on a computer with a decent GPU, as it is time-intensive otherwise. The notebook should work on your system without modification; if it doesn't, please reach out to us with a link to whatever was giving you an error.
To use a different model on the Raspberry Pi, copy it to your Pi and refer to it with the `--model_path` parameter.
Challenges we ran into
No prebuilt Python wheels on PyPI for ARMv7l
Currently PyPI does not have any ARMv7l (the Raspberry Pi's current architecture) wheels for PyTorch. There are also some issues compiling PyTorch from source on a Raspberry Pi; shout out to Minki-Kim95 for their post on how to fix this. Compiling PyTorch on a Raspberry Pi took 2+ hours.
The version of torchvision on PyPI for ARMv7l is very old and does not support `torchvision.models`. The latest version also had to be compiled from source (this was a lot faster).
Initially we were planning to use AWS Greengrass to deploy the model to the device. What we found when writing the guide was that it was becoming too complicated for someone without extensive AWS experience to follow, so we scrapped it.
Initially, as part of the AWS Greengrass deployment, we were going to use Apache NiFi to provide some on-disk persistence for when you don't have an internet connection; a perfect use case is only uploading images when you're on WiFi.
We ended up removing this part because it made the system too complicated for an end user and a lot more difficult to debug.
Running the model.
To collect initial data we drove around for ~45 minutes. Unfortunately, after turning the device off, we realised we had been logging to `/tmp/`, which deletes itself on power down. Whoops.
Thanks to M.J. Booysen for his data on potholes.
- S. Nienaber, M.J. Booysen, R.S. Kroon, "Detecting potholes using simple image processing techniques and real-world footage", SATC, July 2015, Pretoria, South Africa.
- S. Nienaber, R.S. Kroon, M.J. Booysen, "A Comparison of Low-Cost Monocular Vision Techniques for Pothole Distance Estimation", IEEE CIVTS, December 2015, Cape Town, South Africa.
Thanks to Minki-Kim95 for their answers on GitHub on how to compile PyTorch from source on a Raspberry Pi.
Thanks to ptrblck for all the questions he has answered on the PyTorch forums; when we were looking at forum posts for things we didn't know, he was the one who had answered a lot of them.