Inspiration

The recent snowstorm that led to drivers being stranded overnight on I-95 in Virginia: https://www.washingtonpost.com/transportation/2022/01/05/virginia-snow-i95-closed/

Growing up in the mountains of North Carolina, Virginia's neighbor to the south, I'm personally familiar with ice and snow. I ran off the road not once but twice due to snow and ice. Later in life, while totally cognizant of driving on icy roads (driving slowly, allowing a much greater distance to stop, etc.), I still slid through an intersection. I had a moment of panic, and it was by the grace of God that I was not involved in a serious accident.

So, when I saw this particular AWS disaster contest, it hit home.

In fact, during development, soon after we had started capturing camera footage, this occurred.

Since I only just saw this contest and am soliciting participants, let me introduce myself and my background. Afterwards, I will tell you what I have in mind.

Previous experience

As part of working with TensorFlow, I met Paige Bailey, Google's TensorFlow lead, and she turned me on to Devpost, where I worked on the Powered by TensorFlow 2.0 hackathon. See here: https://github.com/rtp-gcp/tf-hackathon

I also worked with North Carolina Emergency Management when I was working for Greenstream doing embedded firmware for LTE and GOES satellite development. I intend to reach out to the NCEM team to help us in this endeavor. We will need some sample camera imagery.

I have also worked with another startup on satellite imagery, where we used Gaussian edge detection and image differencing to detect regions that changed over time. I'm thinking snowfall or ice is a very similar problem.
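
The same change-detection idea could carry over here. A minimal, dependency-light sketch (NumPy only, toy 20×20 frames, made-up threshold): smooth both frames with a Gaussian kernel, difference them, and threshold the absolute difference to flag regions that changed, e.g. a sidewalk crack that vanishes under snow.

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """Build a normalized 2-D Gaussian kernel."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def smooth(img, kernel):
    """Naive 2-D convolution with edge padding (slow but dependency-free)."""
    pad = kernel.shape[0] // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            window = padded[i:i + kernel.shape[0], j:j + kernel.shape[1]]
            out[i, j] = np.sum(window * kernel)
    return out

def changed_mask(before, after, threshold=0.2):
    """Smooth both frames, difference them, threshold to flag changed pixels."""
    k = gaussian_kernel()
    diff = np.abs(smooth(after, k) - smooth(before, k))
    return diff > threshold

# Toy frames: a "crack" visible before, gone after (covered by snow).
before = np.zeros((20, 20)); before[10, 5:15] = 1.0
after = np.zeros((20, 20))
mask = changed_mask(before, after)
print(mask.sum() > 0)  # True: the vanished crack shows up as a changed region
```

The threshold and kernel size here are placeholders; real camera frames would need tuning, plus registration to cope with cameras that move between frames.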

I am not an expert, but I have taken some classes using AWS's preferred vision ML frameworks, MXNet and Gluon. See here: https://coursera.org/share/34397e0821bec44faad9e289e8b19597 Later I did some work with AWS DeepLens. See here: https://github.com/rtp-aws/aws_deeplens

In fact, I have taken a lot of classes in ML and image processing via Coursera. See here: https://www.coursera.org/user/45e0a4102512b8ed2ec8255efb605a9c and here: https://profile.edx.org/u/netskink

I have also done work with GCP using their offerings, and I'm not opposed to using their tools. Whichever gets us up and running as cheaply as possible is fair game.

At this point I am forming the team and soliciting input. If you are interested, please join us. With that said, this is what I am thinking:

  • Weather conditions definitely come into play. What data is available? Use these conditions to subset our timeframe of interest.
  • Camera imaging. I realize a roadway is going to produce a lot of variable imagery, which will be a problem. However, the boundary of the image will show sidewalks, the shoulder of the road, etc. These regions will stay relatively the same: a crack in a sidewalk, for instance, will disappear when covered with snow. Black ice on bridges is what I am particularly worried about, and it mostly forms when cameras will be in infrared mode. So it's a tough problem. However, let's think about it.
  • I am not really focused on disaster response at this moment, but more on disaster avoidance, i.e., what can we do to alert the NCDOT to a potential problem so they can investigate it beforehand? For example, an automated alert that says, "Here is a bridge which is possibly going to have a problem. We need to send someone out to examine it or send a work crew out to remediate it."
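
As a starting point for the avoidance idea, the weather-gating rule might look like the sketch below. The `WeatherObs` fields and thresholds are hypothetical, not from any real NCDOT or weather feed; a real version would also use road-surface temperature and forecast data.

```python
from dataclasses import dataclass

@dataclass
class WeatherObs:
    temp_f: float              # air temperature, Fahrenheit (hypothetical field)
    precip_last_6h_in: float   # precipitation over the last six hours, inches

def bridge_at_risk(obs: WeatherObs, freezing_f: float = 32.0) -> bool:
    """Flag a bridge for inspection when recent precipitation may freeze.

    Placeholder rule: at or below freezing plus any recent precipitation.
    """
    return obs.temp_f <= freezing_f and obs.precip_last_6h_in > 0.0

print(bridge_at_risk(WeatherObs(temp_f=28.0, precip_last_6h_in=0.1)))  # True
```

Bridges that pass this gate would be the ones whose camera frames we bother to analyze, keeping the image-processing workload small.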

Lastly, I have two user groups, one based around GCP and one around AWS. We meet each weekend for a workshop where we work on code together for the respective platforms. I'm certain some of my teammates from those groups will be interested in joining us. You can find more info on the user groups here:

What it does

It was supposed to detect icy bridges. Sadly, I ran out of time.

How we built it

  • The frontend runs on AWS Elastic Beanstalk; it uses AWS S3 to upload images and performs default-label and custom-label object detection with AWS Rekognition.
  • There is a Lambda function that does collision detection; it is in place but not used. It does work with API Gateway. It was going to include an initial test for intersection over union. The API is open and responds, but currently it just computes the collision of two bounding boxes.
  • AWS SageMaker Ground Truth was used to label images as input to Rekognition. The models built with Rekognition did not work well enough for me; see the icy-bridge folder and its README for details. Mostly this was due to not knowing well enough how to label images. It was really just me working on it, with not enough time due to a late start.
  • Cognito unauthenticated identities were used for S3 and Rekognition access.
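
For reference, the bounding-box logic the Lambda currently exposes (collision of two boxes) and the planned intersection-over-union test can be sketched as below. The `(left, top, width, height)` box format and function names are my illustration here, not the deployed function.

```python
def boxes_collide(a, b):
    """True if two axis-aligned (left, top, width, height) boxes overlap."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def iou(a, b):
    """Intersection over union of two (left, top, width, height) boxes."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))   # overlap width
    iy = max(0.0, min(ay + ah, by + bh) - max(ay, by))   # overlap height
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0

a = (0, 0, 10, 10)
b = (5, 5, 10, 10)
print(boxes_collide(a, b))  # True
print(iou(a, b))            # 25 / 175 ≈ 0.1429
```

The same math works whether the coordinates are pixels or the 0–1 ratios Rekognition returns in its `BoundingBox` objects, as long as both boxes use the same convention.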

Challenges we ran into

I could immediately see that access to camera imagery was going to be a problem, and the cameras were indeed a tremendous one. The cameras changed position. The URLs changed mid-project. The images themselves were CORS-restricted. It took a considerable amount of time to learn enough JavaScript to build the frontend, much less extract the images. I also had to learn Rekognition and Ground Truth.

I never really got to spend much time on SageMaker proper. I did some initial work and set it up with instructions on how to get started.

Having no teammates was problematic. I spent a lot of time recruiting and trying to encourage people to participate, but got little cooperation in that area.

Accomplishments that we're proud of

I learned a lot. I am happy to know more about JavaScript, Elastic Beanstalk, API Gateway, Rekognition, and Ground Truth. I documented everything used in the project: Make/Makefiles which manage a full directory filesystem, PGP secrets for a public git repo, CORS, etc.

What we learned

See above. I knew very little of AWS and pretty much came up to speed on it from January 10th until now.

What's next for the project

I will continue to refine it. I want to see it driven to completion. Not at the same pace, but I will most definitely continue.

I want to get to where I can get bounding boxes and then work with them in MXNet/GluonCV. I took a class on that previously and never really got a chance to revisit it.

Built With

Elastic Beanstalk, S3, Rekognition, Lambda, API Gateway, SageMaker Ground Truth, Cognito, JavaScript

Updates