Inspiration


We wanted to use our knowledge of hardware integration and Java to create a fun, useful project that can be adapted to as many roles as the user can imagine.

What it does


It currently detects a person's face and tracks it, moving servos to keep the detected face in the center of the video frame.

How we built it


Currently, the project uses an HD webcam mounted on servos that control the camera's X, Y, and Z axes. Using OpenCV's Java bindings, we wrote a face-detection program that finds faces in each frame and sends movement commands to the servos.
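In outline, the per-frame tracking math looks something like the following. This is a minimal sketch: the class and method names and the tuning constant are our illustrative assumptions, and the real program gets the face rectangle from OpenCV's detector rather than raw integers.

```java
// Sketch of the per-frame tracking math (hypothetical names; the real
// program gets the face's bounding box from OpenCV's face detector).
public class TrackingStep {
    /**
     * Given the detected face's bounding box and the frame width, return
     * how many degrees to nudge the pan servo so the face moves toward
     * the center of the frame.
     */
    static int panAdjustment(int faceX, int faceWidth, int frameWidth) {
        int faceCenter = faceX + faceWidth / 2;
        int error = faceCenter - frameWidth / 2;   // pixels off-center
        double degreesPerPixel = 0.05;             // assumed tuning constant
        return (int) Math.round(error * degreesPerPixel);
    }

    public static void main(String[] args) {
        // Face at x=400, width=80, in a 640-px-wide frame:
        // face center = 440, error = 120 px -> nudge 6 degrees
        System.out.println(panAdjustment(400, 80, 640)); // prints 6
    }
}
```

The same arithmetic applies to the tilt axis using the frame height.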

Challenges we ran into


Communicating between the Arduino and the camera proved challenging at first: Java has no built-in serial output, but the Arduino requires serial commands to operate the servos. After some experimenting, we succeeded in getting the two talking.

Additionally, the motors had issues with overcompensating when trying to center the face. We fixed this by adding a buffer zone: instead of centering exactly (which would be impossible without motors that can change angles very precisely), the camera only has to bring the face within that buffer space.
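The buffer-zone fix can be sketched as a simple dead-zone check. The threshold value below is an assumption for illustration; the real value was tuned by hand.

```java
// Minimal sketch of the centering buffer ("dead zone"). DEAD_ZONE_PX is a
// hypothetical value; the real threshold was tuned experimentally.
public class DeadZone {
    static final int DEAD_ZONE_PX = 40;

    /**
     * Return true if the face is close enough to center that the servos
     * should stay still, preventing overshoot oscillation.
     */
    static boolean isCentered(int faceCenter, int frameCenter) {
        return Math.abs(faceCenter - frameCenter) <= DEAD_ZONE_PX;
    }

    public static void main(String[] args) {
        System.out.println(isCentered(340, 320)); // within buffer -> true
        System.out.println(isCentered(450, 320)); // far off-center -> false
    }
}
```

Only when `isCentered` returns false does the program send a movement command, so small jitter in the detected face position no longer causes the servos to hunt back and forth.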

Accomplishments that we're proud of


Finally getting it working consistently was a very proud moment for us, but we had proud moments leading up to it too.

Every leap forward made us more excited as we got closer to our goal, but here are the three big ones:

  1. Setting up the JavaFX frame to display the camera feed
  2. Setting up the OpenCV library and getting it to detect faces
  3. Setting up the motors and passing them input correctly to follow the user's face

What we learned


Software:

How to set up a JavaFX scene with .fxml files

How to pull frames in through the OpenCV library and convert them to update the JavaFX scene.

How face-detection programs find faces (hint: it's the eyes and nose bridge!)

How to make Java output serial commands.

How to resolve merge conflicts with Git
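The serial-output step above can be sketched as follows. The `"X<angle>\n"` wire format and the class name here are assumptions for illustration; in practice a third-party library (the JDK has no serial API of its own) writes the resulting bytes to the Arduino's serial port.

```java
// Sketch of building a servo command string for the Arduino. The
// "X<angle>\n" wire format is a hypothetical example; a serial library
// (not part of the JDK) would then write these bytes to the port.
public class ServoCommand {
    static String format(char axis, int angle) {
        // Clamp to a standard servo's physical range before sending.
        int clamped = Math.max(0, Math.min(180, angle));
        return axis + Integer.toString(clamped) + "\n";
    }

    public static void main(String[] args) {
        System.out.print(format('X', 135));  // "X135\n"
        System.out.print(format('Y', 200));  // clamped to "Y180\n"
    }
}
```

Keeping the protocol to one letter plus a number makes the Arduino-side parsing trivial.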

Hardware:

How to create a simple circuit with multiple servos in parallel.

How to control servos with the Arduino's built-in Servo library.

How to open and receive communication via the serial monitor

How to convert serial communication from a byte type into an intuitive string label.

How to mount all the necessary components onto a platform with the limited supplies available.
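The byte-to-label conversion mentioned above can be sketched like this. The specific byte values and label names are hypothetical; the real mapping depends on the protocol we defined between the two sides.

```java
// Sketch of turning a raw serial byte into a readable label. The byte
// values here are hypothetical; the real mapping depends on the protocol.
public class SerialLabel {
    static String label(byte b) {
        switch (b) {
            case 0x01: return "PAN_LEFT";
            case 0x02: return "PAN_RIGHT";
            case 0x03: return "TILT_UP";
            case 0x04: return "TILT_DOWN";
            default:   return "UNKNOWN(" + (b & 0xFF) + ")";
        }
    }

    public static void main(String[] args) {
        System.out.println(label((byte) 0x02)); // prints PAN_RIGHT
        System.out.println(label((byte) 0x7F)); // prints UNKNOWN(127)
    }
}
```

The `b & 0xFF` mask in the fallback prints the byte as an unsigned value, which is how it appears on the wire.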

What's next for Facial Detection and Tracking


Live Video Stream - Set up the camera video output feed to stream to a website/server

Voice Control - Use the Amazon Alexa API to implement additional features such as 'Hide and Seek', 'Sentry Mode', or even 'Expression Detecting'

Expression Detecting - We would add functionality to determine a person's expression, for example whether they are smiling or frowning.

Hide and Seek - Have the computer output via voice, "I've found you!" when a face is detected, and then if a face is not detected after a certain amount of time it would output via voice, "Where'd you go?"

Sentry Mode - Have the camera rotate back and forth, searching for a face, and lock on when it finds one. When the face is lost (e.g., if a person moves out of frame too quickly), it would return to sentry mode.

Full 360 Support & Movement - We could give the camera the ability to rotate a full 360 degrees, so it wouldn't be limited by its range of motion. We could also add wheels so the device could move around like a robot: if it briefly detects something far away that might be a face, it could roll toward it, confirm whether it really is a face, and then follow it!

Independence from the computer - We would set it up to run on hardware small enough to fit inside the box, so the device carries everything with it. This would pair especially well with movement and the live video stream: with movement, it wouldn't have to lug around a laptop, and with the live stream, you would still be able to access the feed.
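The sentry-mode behavior described above can be sketched as a small state machine. Everything here is a hypothetical design sketch, not implemented code; the grace-period constant is an assumed value.

```java
// Hypothetical sketch of the planned sentry mode: sweep until a face
// appears, lock on while it is visible, and fall back to sweeping after
// the face has been gone for a grace period.
public class SentryMode {
    enum State { SEARCHING, TRACKING }

    static final int LOST_GRACE_FRAMES = 30;  // assumed timeout (~1 s at 30 fps)

    State state = State.SEARCHING;
    int framesWithoutFace = 0;

    /** Advance the state machine by one camera frame. */
    void onFrame(boolean faceDetected) {
        if (faceDetected) {
            state = State.TRACKING;
            framesWithoutFace = 0;
        } else if (state == State.TRACKING
                && ++framesWithoutFace > LOST_GRACE_FRAMES) {
            state = State.SEARCHING;  // face lost too long -> resume sweeping
        }
    }

    public static void main(String[] args) {
        SentryMode s = new SentryMode();
        s.onFrame(true);                               // face appears
        System.out.println(s.state);                   // TRACKING
        for (int i = 0; i < 31; i++) s.onFrame(false); // face gone past grace
        System.out.println(s.state);                   // SEARCHING
    }
}
```

The grace period keeps a single missed detection frame from kicking the camera back into its sweep.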
