Security is a constant concern in modern society. Meet SentryAI, the answer to your security issues. Let SentryAI scout the premises in autopilot or manual mode. He roams around, keeping a keen eye out for suspicious behaviour, including wanted persons and potential weapons. Let SentryAI move around on autopilot without fear: he will alert you to suspicious activity not only out loud, but also through text. If you want SentryAI to move a certain way, simply tell this smart robot, and he will do your bidding. See what he sees at any time on the LCD. Feel safer knowing SentryAI is on the lookout.

SentryAI was built using a strategic combination of software and hardware. The physical robot was assembled from MakeBlock parts, 3D-printed components, and the MegaPi controller. Two-wheel drive lets SentryAI move in any direction, and an LCD displays its current mode through facial expressions. An ultrasound sensor attached to the front of the robot ensures that SentryAI does not crash into objects, and a webcam mounted on a servo gives all-around vision of what SentryAI sees. A sketch of the obstacle-avoidance loop appears below.
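The autopilot idea is simple: poll the front ultrasonic sensor and steer away whenever a reading drops below a safety threshold. The following is a minimal Python sketch of that loop, not the project's actual code; `read_front_distance_cm` and `set_wheel_speeds` are hypothetical helpers standing in for the MegaPi sensor and motor calls, and the 30 cm threshold and wheel speeds are assumed values.

```python
import time

SAFE_DISTANCE_CM = 30   # assumed threshold, not the project's actual tuning
CRUISE_SPEED = 80       # arbitrary wheel speed for illustration

def read_front_distance_cm():
    """Hypothetical helper: return the front ultrasonic reading in centimetres."""
    raise NotImplementedError("wrap the MegaPi ultrasonic sensor call here")

def set_wheel_speeds(left, right):
    """Hypothetical helper: drive the two wheels through the MegaPi motor ports."""
    raise NotImplementedError("wrap the MegaPi motor calls here")

def roam_step():
    """One iteration of the autopilot loop: cruise forward, turn away from obstacles."""
    distance = read_front_distance_cm()
    if distance < SAFE_DISTANCE_CM:
        # Obstacle ahead: spin in place until the path clears.
        set_wheel_speeds(CRUISE_SPEED, -CRUISE_SPEED)
    else:
        set_wheel_speeds(CRUISE_SPEED, CRUISE_SPEED)

if __name__ == "__main__":
    while True:
        roam_step()
        time.sleep(0.1)  # poll the sensor roughly ten times a second
```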

The smart robot was trained using machine learning to recognize potentially dangerous objects such as knives, as well as known wanted persons, through object detection and face recognition. If any of these are detected, SentryAI goes into active mode, sending alert signals and flashing red LEDs as a warning. Using Twilio, an alert message is issued via SMS, and a picture of the suspicious activity is captured. During normal roaming mode, people can view what SentryAI sees on the LCD screen, or even control SentryAI by voice. Voice control was implemented using Google Assistant, and Python was used to orchestrate the components and bridge them together, as in the sketch below.
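The write-up does not name the exact detection model, so the sketch below uses OpenCV's bundled Haar face detector purely as a stand-in for the project's object- and face-recognition step; the alert path follows the standard twilio-python client. The credentials, phone numbers, and output filename are placeholders, not the project's real values.

```python
import cv2
from twilio.rest import Client

# Placeholder credentials and numbers -- replace with real values.
TWILIO_SID = "ACXXXXXXXXXXXXXXXX"
TWILIO_TOKEN = "your_auth_token"
FROM_NUMBER = "+15550000000"
TO_NUMBER = "+15551111111"

# Stand-in detector: OpenCV's bundled Haar cascade for frontal faces.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def check_frame(frame):
    """Return True if the stand-in detector finds anything of interest in the frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    detections = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return len(detections) > 0

def send_alert(frame):
    """Save a snapshot of the suspicious activity and text an alert via Twilio SMS."""
    cv2.imwrite("suspicious_activity.jpg", frame)
    client = Client(TWILIO_SID, TWILIO_TOKEN)
    client.messages.create(
        body="SentryAI alert: suspicious activity detected.",
        from_=FROM_NUMBER,
        to=TO_NUMBER,
    )

if __name__ == "__main__":
    cap = cv2.VideoCapture(0)  # the webcam mounted on the servo
    ok, frame = cap.read()
    if ok and check_frame(frame):
        send_alert(frame)
    cap.release()
```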

The vision for SentryAI's future involves incorporating a more sophisticated model for identifying security threats, with a larger library of detectable objects and the ability to study the behaviour of human subjects over time. To improve the overall human interface, next steps include adding customized Twilio phone calls that adapt and react on a situational basis, as well as building a virtual-reality scene from the rover's sensor and camera data. These next steps are challenging, involving 3D reconstruction, SLAM, and human pose estimation.

Video

https://docs.google.com/document/d/1_Lnxl72-vh71YZbZOG5dsXBS686dxFSpv3GPSqZHGdo/edit?usp=sharing
