Inspiration

We all look forward to being home. The familiarity of snuggling up in our own comfortable beds and the feeling of our own shower pressure is what we desire, even after a luxurious vacation. But when faced with the cold and dark of an empty house, the word "home" almost doesn't fit. Haven aims to make opening the front door a pleasure. We hope to redefine the meaning of "welcome home" by personalizing and enhancing the experience.

While numerous smart home technologies already exist, we aim to create a new culture along with new technology. Consumers don't want to take complicated steps to set up a smart home, and they are heavily worried about security and privacy. We can't force people to trust us, so we rely on people trusting each other to make the smart home the norm.

What it does

Haven is not just an app or a function; it is a platform. Haven uses machine learning and image processing to recognize the faces registered in a household. It then adjusts the house accordingly: unlocking the door, setting the lighting and heat, and anything else the user might want for the most welcoming experience. It leaves behind the days of coming home to a lonely, stark house after a long, tiring day. The API allows easy user registration and keeps the system secure, and the broad scope of our technology lets more and more functions be added in the future, creating the most personalized experience possible.
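To make "personalized" concrete, here is a rough sketch of how each registered household member could map to their own welcome-home actions. The data shape and names below are illustrative assumptions, not our exact code:

```python
# Illustrative only: one possible shape for per-member welcome preferences.
WELCOME_PREFERENCES = {
    "alice": {"unlock_door": True, "lights": "warm", "heat_celsius": 21},
    "bob":   {"unlock_door": True, "lights": "dim",  "heat_celsius": 19},
}

def apply_home_settings(prefs):
    # Placeholder for the hardware layer (door lock, lights, thermostat).
    print(f"applying settings: {prefs}")

def welcome(user_id):
    prefs = WELCOME_PREFERENCES.get(user_id)
    if prefs is None:
        return  # face not registered in this household: do nothing
    apply_home_settings(prefs)
```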

We added a functions library similar to a smartphone's app store, making Haven a development platform run by the public, for the public. With simpler steps come more users; with more users comes more trust; and with more trust comes the popularization of smart homes.
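As a hedged sketch of what a library entry could look like (the decorator and registry below are assumptions for illustration, not our actual API), a user-written function registers itself and then runs whenever a resident is recognized:

```python
# Hypothetical sketch of the functions library: user-written functions
# install themselves into a registry and run on recognition events.
FUNCTIONS = {}

def haven_function(name):
    """Decorator that installs a user-written function into the library."""
    def register(fn):
        FUNCTIONS[name] = fn
        return fn
    return register

@haven_function("porch_light")
def porch_light(user_id):
    print(f"turning on the porch light for {user_id}")

def run_installed_functions(user_id):
    # Called when a registered face is recognized at the door.
    for fn in FUNCTIONS.values():
        fn(user_id)
```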

How we built it

Haven was built using Python and Flask for the face recognition and the API respectively. It is hosted on Amazon AWS and utilises libraries including OpenCV for image processing and OpenFace for face recognition.
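A minimal sketch of the server side, assuming a single recognition endpoint (the route name and the match_face helper are placeholders, not our exact code):

```python
# Hypothetical sketch of the Flask recognition endpoint.
from flask import Flask, request, jsonify

app = Flask(__name__)

def match_face(image_bytes):
    # Placeholder for the OpenFace step: embed the face and compare it to
    # the registered household embeddings, returning (user_id, confidence).
    return None, 0.0

@app.route("/recognize", methods=["POST"])
def recognize():
    image_bytes = request.files["frame"].read()  # JPEG frame from the board
    user_id, confidence = match_face(image_bytes)
    if user_id is None:
        return jsonify({"recognized": False})
    return jsonify({"recognized": True, "user_id": user_id,
                    "confidence": confidence})
```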

The hardware was powered by a Qualcomm DragonBoard, which used a USB camera to take continuous images; these were processed with OpenCV and sent to our server. The server powered our face recognition, using OpenFace neural networks that we trained on many pictures of our own faces. Tagged with an ID, the board receives the permissions needed to trigger its actions, so the homeowner's face can turn on a light or open the door.
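Roughly, the board-side loop looked like the following sketch (the server URL, JSON fields, and action helper are illustrative assumptions, not our exact protocol):

```python
# Hypothetical board-side loop: capture frames with the USB camera, forward
# them to the recognition server, and trigger home actions on a match.
import time
import cv2
import requests

SERVER_URL = "http://example-haven-server/recognize"  # placeholder address

def trigger_actions(user_id):
    # Placeholder: on the board this would toggle GPIO pins for the lock/lights.
    print(f"welcome home, {user_id}")

camera = cv2.VideoCapture(0)  # USB camera attached to the DragonBoard
while True:
    grabbed, frame = camera.read()
    if not grabbed:
        continue
    encoded, jpeg = cv2.imencode(".jpg", frame)
    if not encoded:
        continue
    reply = requests.post(SERVER_URL, files={"frame": jpeg.tobytes()}).json()
    if reply.get("recognized"):
        trigger_actions(reply["user_id"])
    time.sleep(1)  # continuous but rate-limited capture
```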

Then, on the front end, we wireframed our user story on paper before creating graphics in Adobe Illustrator and screens in Sketch. These were then imported into Principle to create the guiding animations, and finally assembled into the user-centred software platform that fronts our smart technology.

Challenges we ran into

Since no one on our team had any experience with machine learning or neural networks, we had to learn a lot very quickly to implement face recognition. Learning to use these complex algorithms was difficult; completely understanding them would have needed a PhD.

Hosting our application on AWS turned out to be difficult: setting up OpenFace without Docker involved a lot of configuration and required a lot of storage, forcing us to set up our servers multiple times.

On the hardware side of things, we struggled most with performing the necessary functions on the Qualcomm DragonBoard. Documentation was sparse, and as a result we wrestled quite a bit with permissions and dependencies.

Accomplishments that we're proud of

Although we are nowhere close to being experts on this subject, it was definitely super cool to see how image processing, even given a slightly limited set of data, could be so helpful and accurate in building our solution. Hosting this type of computation over the web was certainly not easy either, and we're happy about how well that turned out, as well as how seamless the integrations became as a result of AWS.

On the user experience front, we feel accomplished about the seamless animated interactions integrated into the front-end of our software platform. The custom illustrations, transitions, and user profiles allow an easy visual understanding of the Haven technology.

What we learned

For some of our team members, this was their first hackathon. Learning to work as a key subunit of a bigger team in a development setting was a new experience, especially seeing the pieces fit together in the end. Some of us had never worked with Git before.

By utilising neural networks and machine learning without prior knowledge, we learned a lot about how they actually work. We've always joked about these "buzzwords", but working with them certainly isn't easy, and the experience will always remind us how much there is still to learn.

What's next for Haven

In order to truly give each individual their own home welcoming, we hope to grow the set of functions Haven can execute. We provide the technology, security, and platform; our vision is for the functions to be written by our users, for our users, just like skills for Amazon Alexa or apps for iOS, to wholeheartedly capture the customization and popularization of the smart home.

Built With

Python, Flask, Amazon AWS, OpenCV, OpenFace, Qualcomm DragonBoard, Adobe Illustrator, Sketch, Principle
