Frontend status page
Backend image processing
Image to process
Processed image - finding white bag
How many times have you rushed out the door in the morning and arrived at work or school only to realize that you forgot that one book you needed or left your computer charger plugged in at your desk? The technology that Amazon modeled to the world in their grocery store, Amazon Go, could be implemented in your home to track your most important belongings.
What it does
The Elephant in the Room is an addition to the home security and smart home networks that are already rising in today's market. IP cameras in your home send their feeds to a local server, where OpenCV is used to identify and track objects that the user has selected. Items flagged as important can be searched for and located by the server. Additionally, geo-fencing technology can detect when you leave your home, and the server can then notify you if you left an important belonging behind.
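The geo-fencing check can be sketched with a simple distance test against the home's coordinates. This is a minimal stand-in, assuming the phone reports latitude/longitude; the function names and the 100 m fence radius are illustrative, not part of the actual system.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in meters."""
    r = 6371000  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def has_left_home(phone_lat, phone_lon, home_lat, home_lon, radius_m=100):
    """True once the phone is outside the geo-fence around the house."""
    return haversine_m(phone_lat, phone_lon, home_lat, home_lon) > radius_m
```

Once `has_left_home` flips to True, the server knows it is time to check whether any tracked item is missing from the camera's view.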
How we built it
In order to accomplish this task here on campus, a public AWS server had to be used in place of a local server. Additionally, a computer's webcam was used in place of an IP camera due to a lack of proper hardware. Finally, the 'door' is a distance sensor that latches open or closed when somebody passes in front of it, signifying when a user enters or leaves their home.
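The latching behavior of the 'door' sensor can be sketched as a small state machine: each new beam break toggles the home/away state once, regardless of how many readings fall below the trip distance. The 50 cm threshold and the class name are hypothetical; real values depend on the sensor and the doorway.

```python
class DoorLatch:
    """Toggles between home and away each time something passes the sensor."""

    def __init__(self, threshold_cm=50):
        self.threshold_cm = threshold_cm  # hypothetical trip distance
        self.user_home = True             # assume the user starts at home
        self._tripped = False             # debounce: one toggle per pass

    def update(self, distance_cm):
        """Feed one distance reading; flip state only on a fresh beam break."""
        if distance_cm < self.threshold_cm:
            if not self._tripped:
                self._tripped = True
                self.user_home = not self.user_home
        else:
            self._tripped = False
        return self.user_home
```

The debounce flag matters: while a person is standing in the doorway, many consecutive readings are below the threshold, and without it the state would flutter between home and away.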
There are three segments to our project: the server, the 'IP camera', and the 'door'. The server runs a looping program that takes a picture with the IP camera, runs an image recognition script on the picture, then checks whether the user has left through the 'door'. If an item has been left behind and the user is away, the program sends one text message to the user indicating which object was left. The server also hosts a webpage that reports whether or not an object is present in the camera's field of view.
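The decision step of that loop can be sketched as a single function with the camera check and SMS delivery injected as callables. This is a simplified sketch, not the server's actual code; `detect` and `send_text` are hypothetical hooks (the real server would back them with the OpenCV script and an SMS service), and the returned flag is what enforces the one-text-per-departure rule.

```python
def check_and_notify(tracked_items, detect, user_home, already_notified, send_text):
    """One pass of the server loop's decision step.

    detect(item) -> bool: is the item visible to the camera?
    send_text(msg): deliver the SMS.
    Returns the updated already_notified flag so only one text
    is sent per departure.
    """
    missing = [item for item in tracked_items if not detect(item)]
    if missing and not user_home and not already_notified:
        send_text("You left behind: " + ", ".join(missing))
        return True
    if user_home:
        return False  # reset the latch once the user is back home
    return already_notified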
Challenges we ran into
Time and resources were a large hindrance. If done right, this project would implement many more hardware components, and it would be made more useful to the public with software elements that we did not have time to start, such as more extensive image recognition with deep learning, or an app that reports the local server's status on the go.
Much of our time was spent trying to interface a small hardware camera with an Arduino, which would have given that element of the project more of an IP-camera feel, but enough problems eventually arose that solving them would have taken too much time.
We also discovered that UA's OIT automatically quarantines computers that repeatedly access remote servers, in order to block potentially malicious activity. For a long time, we were scratching our heads as to why one of our laptops could access the remote AWS server but our headless RasPi no longer could. After much troubleshooting, we found that UA OIT had blocked the RasPi from accessing our AWS server after it pinged it too many times in one minute.
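A simple client-side throttle would have kept the RasPi under that per-minute threshold. The sketch below is a generic rate limiter, not code from our project; the 10-second interval is a guess at a safe polling rate, and the injectable clock just makes the logic easy to verify.

```python
import time

class Throttle:
    """Allow at most one request every min_interval_s seconds."""

    def __init__(self, min_interval_s=10.0, clock=time.monotonic):
        self.min_interval_s = min_interval_s
        self.clock = clock   # injectable for testing
        self._last = None

    def allow(self):
        """Return True if enough time has passed to send another request."""
        now = self.clock()
        if self._last is None or now - self._last >= self.min_interval_s:
            self._last = now
            return True
        return False
```

Wrapping every outgoing request in `throttle.allow()` caps the ping rate regardless of how fast the main loop spins.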
Accomplishments that we're proud of
We are proud of the OpenCV-enabled image recognition routine we created to determine whether one of our selected objects has been removed from the frame.
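The core idea of that routine (illustrated by the "finding white bag" image above) can be shown with a dependency-free stand-in: threshold the frame for near-white pixels, then decide presence by how much of the frame they cover. Our actual routine uses OpenCV; here the frame is a plain 2D list of grayscale values, and both threshold numbers are illustrative, not tuned values from the project.

```python
def white_object_present(gray, bright_threshold=200, min_fraction=0.02):
    """Return True if enough near-white pixels fill the frame.

    gray: 2D list of 0-255 grayscale values. Simplified stand-in for the
    OpenCV routine: threshold, then compare the bright-pixel fraction
    against a minimum blob size to reject sensor noise.
    """
    total = sum(len(row) for row in gray)
    bright = sum(1 for row in gray for px in row if px >= bright_threshold)
    return total > 0 and bright / total >= min_fraction
```

If this returns False after the user has left, the white bag is flagged as left behind and the notification path fires.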
What we learned
That SO MUCH can be done with OpenCV and deep learning for image recognition and tracking, and that we merely scratched the surface of what is possible.
That forum support for the ArduCAM model we have is sparse, at best :(
What's next for The Elephant in the Room
While the structure we created for this demo captures the idea of Elephant in the Room, the system we envision is much more extensive than what we built here.
As far as hardware is concerned, the system should have multiple IP cameras, small RFID tags on some selected items and sensors spread throughout the house, and a local server as opposed to an AWS remote server.
In the software realm, the project would be improved with an app and better use of OpenCV and deep learning to more accurately identify a variety of objects.