We wanted to explore new technologies and have people who were comfortable with hardware experience software, and vice versa. We all explored new areas, whether software, hardware, or server architecture. We all learned a lot, and solved a great problem too!
What it does
Our project provides a touch-screen interface with a doorbell, video camera, and smart lock. Using facial recognition, we can determine who is at the door and act appropriately: unlock the door for known visitors, or alert the homeowner through multiple channels about who is there. The Google Assistant can also report recent door unlocks and visitors.
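The "act appropriately" step boils down to a confidence threshold on the face-match result. A minimal sketch of that decision logic, assuming a hypothetical `FaceMatch` result and an illustrative 90% threshold (not our exact implementation):

```python
# Hypothetical sketch of the door-decision logic: the names, threshold,
# and FaceMatch structure are illustrative, not the project's actual code.
from dataclasses import dataclass
from typing import Optional

UNLOCK_THRESHOLD = 90.0  # assumed minimum match confidence (percent)

@dataclass
class FaceMatch:
    name: Optional[str]  # recognized visitor, or None if no match
    confidence: float    # similarity score from the recognizer

def decide_action(match: FaceMatch) -> str:
    """Map one recognition result to a door action."""
    if match.name is not None and match.confidence >= UNLOCK_THRESHOLD:
        return f"unlock:{match.name}"          # known face: open the door
    if match.name is not None:
        return f"alert:possible:{match.name}"  # low-confidence match: notify only
    return "alert:unknown"                     # stranger: notify only
```

Keeping this as a pure function makes it easy to test without any hardware or cloud calls attached.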
How we built it
The primary driver for the project is a Kotlin app running on an Android Things i.MX7D board. The board sends images to a Flask (Python) script hosted on a DigitalOcean droplet, which communicates with both a facial-recognition function on AWS Lambda and Actions on Google, and returns the result to the board. We also used some custom-cut components for our demo door.
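The droplet-side flow can be sketched as a single handler per doorbell press; here `recognize_face` and `VISITOR_LOG` are stand-ins for, respectively, the AWS Lambda call and the store that Actions on Google reads (all names are assumptions, not our exact code):

```python
# Sketch of the relay logic on the droplet. recognize_face() stands in for
# the AWS Lambda facial-recognition call, and VISITOR_LOG for whatever
# store the Google Assistant queries. Names are illustrative assumptions.
import time

VISITOR_LOG = []  # recent visitors/unlocks, readable by the Assistant

def recognize_face(image_bytes):
    """Stand-in for the AWS Lambda facial-recognition call."""
    # A real implementation would POST image_bytes to the Lambda endpoint
    # and return its parsed JSON response.
    return {"name": "alice", "confidence": 95.0}

def handle_doorbell(image_bytes):
    """Process one doorbell press: recognize, log, and answer the board."""
    result = recognize_face(image_bytes)
    unlocked = result["name"] is not None and result["confidence"] >= 90.0
    VISITOR_LOG.append({
        "name": result["name"] or "unknown",
        "unlocked": unlocked,
        "time": time.time(),
    })
    # The board uses this response to drive the lock and notifications.
    return {"action": "unlock" if unlocked else "alert",
            "visitor": result["name"]}
```

In the real build this handler would sit behind a Flask route that accepts the image upload from the board; the sketch keeps it as a plain function so the flow is visible without the web framework.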
Challenges we ran into
Most of our challenges came from the sheer number of technologies involved, especially connecting Android Things to AWS Lambda and to Actions on Google. Eventually we decided to use a Flask script to tie it all together.
Accomplishments that we're proud of
We're very proud of this whole thing: how much we learned, and getting something working on Android Things. The Android Things kit is such an incredible combination of hardware and software that we'll definitely be working with it again.
What we learned
We all learned a lot. Some of us learned more about hardware, while others learned more about software and backend development. We all explored, and mostly worked in, areas we weren't used to.
What's next for A-door-able
There's a lot of room for improvement and new features: more notifications, more touch-screen features and UI elements, and more ways to get information about who is at your door!