Inspiration

I wanted to see what Deep Lens could do out of the box. Most new Deep Lens users will have little experience with machine learning or deep learning. SageMaker is a powerful tool, but it takes time to get up to speed, so I wanted to see what could be done using just the sample applications. It turns out that quite a bit can.

What it does

Dog Park runs the hotdog or no hotdog model and captures the probabilities for the highest matches. It then examines the highest match to see if it is a dog, by checking it against the subset of the 1,000 categories that are dogs. If it is, it checks the probability: if it is greater than a threshold (arbitrarily set to 0.5), it asserts that a dog has been found. It then looks at the next category match and determines whether it is also a dog. If so, it writes the full-sized image (not the smaller frame that recognition is performed on) to an S3 bucket, and writes the top two breed probabilities and image URL to an MQTT topic. The results can be monitored in the AWS IoT console. A second lambda function (written in Node.js) called DogParkNotifier receives the message from the topic and extracts the relevant fields using an SQL statement. The fields are formatted to truncate the probability to a whole number and are sent to the user using SNS, either as an SMS text message or an email. An extra step (creating a topic and verifying the email address) is required to set up the email path.
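The per-frame decision logic above can be sketched in a few lines of Python. The category IDs below are hypothetical stand-ins for the real dog subset of the model's 1,000 categories, and the function name is mine; the threshold matches the arbitrary 0.5 cutoff:

```python
# Sketch of the per-frame decision logic. DOG_CATEGORY_IDS is a
# hypothetical stand-in for the real dog subset of the 1000 categories.
DOG_CATEGORY_IDS = {151, 152, 153}
PROBABILITY_THRESHOLD = 0.5  # arbitrary cutoff, as in the write-up

def find_dog(top_matches, threshold=PROBABILITY_THRESHOLD):
    """top_matches: (category_id, probability) pairs, sorted descending.

    Returns the top two matches when the best match is a dog above the
    threshold and the runner-up is also a dog; otherwise None."""
    if len(top_matches) < 2:
        return None
    (id1, p1), (id2, p2) = top_matches[0], top_matches[1]
    if id1 in DOG_CATEGORY_IDS and p1 > threshold and id2 in DOG_CATEGORY_IDS:
        return [(id1, p1), (id2, p2)]
    return None
```

On a hit, the lambda then uploads the full-sized frame to S3 and publishes the two breed probabilities and the image URL to the MQTT topic.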

The most visual result is an SMS text message to the phone, where the dog's picture is presented along with the probability results. It is actually quite impressive to get a result a few seconds after aiming the Deep Lens camera at a dog.

What I found interesting is that the probabilities change with each frame. Dogs are rarely still, so the recognition probability changes, sometimes by up to 10%, based on the movement of the dog.

People are open to being part of a technology experiment. They were very interested in seeing what Deep Lens said about their dog.

How I built it

I built it by starting with the hotdog or no hotdog sample application. I found it could recognize a large variety of dog breeds, so I felt it would make a great use case for beginners. The code I created can be reused easily. The user needs to have an AWS account, create an S3 bucket in the same region where the lambda functions will be deployed, set the IAM policy on the S3 bucket, and attach an IAM role with sufficient privileges to the lambda function. The code has placeholders for [YOUR_BUCKET_NAME]. You also need access to the Deep Lens 'Hot Dog or No Hot Dog' sample application.
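As a rough sketch of the "sufficient privilege" part, the lambda's role needs at least permission to put objects into the bucket. This is an illustrative minimal policy, not the exact one from the project, reusing the same [YOUR_BUCKET_NAME] placeholder:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject"],
      "Resource": "arn:aws:s3:::[YOUR_BUCKET_NAME]/*"
    }
  ]
}
```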

The DogParkNotifier lambda is the second part of the application. Here the user's phone number or email address can be used for SNS notification.
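The actual DogParkNotifier is written in Node.js, but its formatting-and-notify step amounts to something like the Python sketch below. The function names are mine, and the topic ARN is a placeholder:

```python
def format_message(breed1, prob1, breed2, prob2, image_url):
    """Truncate the probabilities to whole percentages and build the text."""
    return "Dog Park: {} {}%, {} {}% {}".format(
        breed1, int(prob1 * 100), breed2, int(prob2 * 100), image_url)

def notify(message, topic_arn):
    """Publish to an SNS topic; subscribers can be SMS or (verified) email."""
    import boto3  # deferred so the formatting logic is testable without AWS
    boto3.client("sns").publish(TopicArn=topic_arn, Message=message)
```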

Challenges I ran into

I created well over 100 builds to make this demo. Honestly, it took me over 100 hours across four weeks; it was a huge learning curve. Debugging python lambda code on Deep Lens took some getting used to. Registering the Deep Lens device itself went quickly, and the instructions were good. I was learning Node.js at the same time for the DogParkNotifier. And of course I ran into the operating-system automatic-upgrade issue, which caused Wi-Fi to fail, but the Deep Lens AWS team provided a workaround quickly.

One challenge was how to power Deep Lens for outdoor use. I tried a DC battery pack, such as you would use to charge your phone, but its output was 3.2A, and Deep Lens requires 4.0A. The system would attempt to power up, then fail after a few seconds. I found other DC battery packs on the market that output 4.8A, but I was afraid of frying Deep Lens. So I went with the safe bet, a portable AC power pack. The advantage is that it stores plenty of energy and can keep Deep Lens operating for at least 10 hours. The downside is that it is somewhat bulky. The cost was $79, a bit more than I would have spent on a DC battery pack.

My phone service vendor (Verizon) blocked incoming text messages from Dog Park after a few hundred messages. I could not reset this limit, and there is no documentation from Verizon on it. AWS did attempt to transmit the messages and reported that the carrier failed to deliver them. The workaround was to deliver the messages by email.

I ran into weather issues which limited the amount of testing. We had a deep freeze in New Jersey and no one was walking their dog. Then we had several days of constant rain just as the contest was only days from closing. But we were successful in getting to the park twice, and the results were surprisingly accurate.

Another issue was internet connectivity. Deep Lens needs an internet connection to send images to S3 and populate the MQTT topic. My phone doubled as a hotspot and a recipient of the text messages. What I found was that the hotspot connection shut down when not in demand, and Deep Lens would not attempt to reconnect. The solution was twofold. First, I brought an iPad with me and set it up to use the hotspot. For some reason, the iPad keeps the connection to the hotspot on my phone alive, and that lets Deep Lens stay connected. Secondly, I removed the alternate wireless connections from the Deep Lens Ubuntu system. I had a high-speed wireless connection for development and the hotspot for remote use. When the hotspot became unreachable, Deep Lens attempted to reconnect to the high-speed Wi-Fi network, which of course was not available in the dog park. And there it sat, never going back to the hotspot. So I had to delete any other Wi-Fi connections in the list before leaving the house for the dog park.

I had a mysterious problem where lambda functions would deploy to Deep Lens but the camera light would not come on. The python log file on the device showed no errors. The AWS team (during one of the #officehours) had me try:

On the DeepLens system, type:

```
aws_cam@Deepcam:~$ git clone https://github.com/boto/boto3.git
aws_cam@Deepcam:~$ cd boto3
aws_cam@Deepcam:~$ sudo cp -r boto3 /usr/local/lib/python2.7/dist-packages
aws_cam@Deepcam:~$ sudo pip install awscli --force-reinstall --upgrade
```

This worked. It was the reinstallation of the AWS CLI that did the trick.

Accomplishments that I'm proud of

I'm proud that it works. I learned a lot about lambda, deep lens, python, IAM, topics, and all the myriad problems that stop a project from being successful. I had estimated this would take about 10 hours. It took 100 hours. I was up past 1:00am most nights.

So I am proud that I finished it. It takes the hot dog sample and stretches it into something interesting, something outside. It takes deep learning and makes people curious about it. It makes them ask how it works.

What I learned

I learned that I am a poor estimator of how long a project will take! I learned a lot about AWS services. I learned the practical side of putting everything together. I learned to be faster as I went. I no longer wondered where a lambda function goes when you deploy it. I found the location for python log messages on Deep Lens.

What's next for Dog Park

Dog Park needs to use a smaller DC battery pack. I need to make the threshold programmable from an Apple iOS app, so it can be adjusted in the field. I also need to tighten security; I left it too wide open during testing.

I have more ideas that deal with bird recognition at feeders. But that will take fine-tuning the model and training with a variety of bird pictures. Of course, SageMaker will be key to making that happen.

Built With
