Inspiration

We were inspired by a simple question: what if a machine became a person's companion? We wanted something that would work with the user in a way that revolutionizes the relationship between people and machines, so we began by listing the key features such a relationship would require. The inspiration also comes from the character Matilda, who is very responsible and can remotely control appliances based on certain needs.

What it does

The app, Matilda, reduces a person's stress load by decreasing the effort required in everyday life. For example, Matilda can adjust the air conditioning based on the temperature outside, helping maintain a cost-efficient gas bill at home. Matilda also uses facial recognition to welcome users as they enter the house after a busy day at work, and to turn off appliances (lights, stove, etc.) and bid them farewell as they leave. Through spatial recognition, Matilda can also modify real-time events, such as turning on lights or opening blinds, depending on the user's needs and location. Another key feature is advising the user on apparel based on the weather outside. Last but not least, Matilda can recognize faces and greet people according to the information stored in the database.

How we built it

We built a mobile application to manage buildings, rooms, cameras, and devices, and to let users program triggers, using React Native, Redux-Saga, Firebase, and Expo Kit. Information about the houses, rooms, etc. is stored in Firebase's Firestore database, and we used Firebase Cloud Messaging to deliver notifications when events were triggered. To detect when a user was in a trigger zone, we combined OpenCV and AWS Rekognition: OpenCV detects motion in the camera's field of view, and Rekognition then confirms that the motion was caused by a person and identifies which user is in the zone. Once the user is identified, a cloud function sends a message to any connected IoT devices and notifies the user that the trigger has been executed.
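The trigger pipeline described above can be sketched as a small orchestration function. This is an illustrative sketch only: `identify_user` stands in for the Rekognition call and `run_trigger` for the cloud function / FCM dispatch, both stubbed here; the names, fields, and zone-to-action mapping are our assumptions, not the app's actual code.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class TriggerEvent:
    zone: str
    user: str
    actions: list  # e.g. ["lights_on", "greet_user"]

def handle_motion(zone: str,
                  identify_user: Callable[[str], Optional[str]],
                  run_trigger: Callable[[TriggerEvent], None],
                  zone_actions: dict) -> Optional[TriggerEvent]:
    """Step that runs after OpenCV reports motion in a zone:
    confirm/identify the person (Rekognition in the real app, stubbed
    here), then fire the zone's configured actions via the trigger
    callback (a cloud function notifying IoT devices in the real app)."""
    user = identify_user(zone)   # None means the motion wasn't a person
    if user is None:
        return None
    event = TriggerEvent(zone=zone, user=user,
                         actions=zone_actions.get(zone, []))
    run_trigger(event)           # would message IoT devices + notify user
    return event

# Stubbed wiring for illustration:
fired = []
event = handle_motion(
    "front_door",
    identify_user=lambda zone: "alice",      # pretend Rekognition match
    run_trigger=fired.append,                # pretend cloud function
    zone_actions={"front_door": ["lights_on", "greet_user"]},
)
```

Keeping identification and dispatch behind callables like this is one way to swap real services (Rekognition, FCM) for stubs during testing.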

Challenges we ran into

We came into this project with a moderate amount of coding experience, so developing a UI like this meant learning as we went. Learning the material on demand slowed down the process, but we pushed each other through and taught each other what we learned along the way. In the end, it was never a one-person effort, and no one could have done it without the others.

Accomplishments that we're proud of

We are proud of implementing AWS video-recognition technology in our app: we programmed it to detect human motion by capturing an image from the previous frame and comparing it to the current frame, accurately detecting a person's activity and triggering real-time events in response to an array of scenarios.
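The previous-frame comparison above is the classic frame-differencing idea (the same idea as thresholding `cv2.absdiff` on consecutive frames in OpenCV). A minimal pure-Python sketch, assuming grayscale frames as flat lists of 0-255 intensities and an illustrative threshold:

```python
def frame_delta(prev, curr):
    """Mean absolute per-pixel difference between two grayscale frames,
    each given as a flat list of 0-255 intensity values."""
    assert len(prev) == len(curr)
    return sum(abs(a - b) for a, b in zip(prev, curr)) / len(prev)

def motion_detected(prev, curr, threshold=15.0):
    """Flag motion when the average pixel change between consecutive
    frames exceeds a threshold (threshold value is an assumption)."""
    return frame_delta(prev, curr) > threshold

still = [100] * 64                    # 8x8 frame, unchanged
moved = [100] * 32 + [200] * 32       # half the pixels jumped by 100
print(motion_detected(still, still))  # False: no pixel changed
print(motion_detected(still, moved))  # True: mean delta is 50.0
```

In practice you would also blur and threshold per-pixel to ignore sensor noise, which is what makes the OpenCV version robust on real camera feeds.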

What we learned

By the end of building this app, we understood what Node and React Native are and how they shaped our experience with app development. Incorporating AWS into this project gave us a broader perspective on spatial awareness and recognition technology, helping us understand how such technologies can be implemented in applications that require efficiency and reliability.

What's next for Ambience

Given more time to develop Matilda, we could add many more features, including a way for Matilda to remember where the user left their keys or wallet. We believe these additions would truly revolutionize the coexistence of people and machines by changing the user's perspective, so that they see Matilda not as a machine but as a virtual companion.
