The original idea was an alarm clock that would aim at the sleeping victim's face and squirt water instead of playing a sound to wake them up.

Obviously, nobody carries peristaltic pumps around at hackathons, so the water-squirting part had to be dropped, but the idea of a platform that could aim at a person's face remained.

What it does

It simply tries to always keep a webcam pointed directly at the largest face in its field of view.

How I built it

The brain is a Raspberry Pi 3 with an attached webcam that streams raw pictures to Microsoft Cognitive Services. The cloud API identifies any faces in the picture and returns the position of each face in pixel coordinates.
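To make the "largest face" selection concrete, here is a minimal sketch of processing a Face detect response. The list-of-faces shape with a `faceRectangle` of `top`/`left`/`width`/`height` matches the Cognitive Services Face API; the sample response itself is made up for illustration.

```python
def largest_face_center(faces):
    """Return the (x, y) pixel center of the biggest detected face, or None."""
    if not faces:
        return None
    rect = max(
        faces,
        key=lambda f: f["faceRectangle"]["width"] * f["faceRectangle"]["height"],
    )["faceRectangle"]
    return (rect["left"] + rect["width"] // 2,
            rect["top"] + rect["height"] // 2)

# Hypothetical response for a 640x480 frame containing two faces:
sample = [
    {"faceRectangle": {"top": 100, "left": 200, "width": 80, "height": 80}},
    {"faceRectangle": {"top": 50, "left": 400, "width": 40, "height": 40}},
]
print(largest_face_center(sample))  # (240, 140)
```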

These coordinates are then converted to an offset (in pixels) from the current aim point, i.e. the center of the frame.
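The conversion is just a subtraction against the frame center. A minimal sketch, assuming a 640x480 frame (the actual resolution isn't stated in the write-up):

```python
def pixel_offset(face_center, frame_size=(640, 480)):
    """Offset of the face from the frame center.

    Positive dx means the face is right of center; positive dy, below it.
    """
    cx, cy = frame_size[0] // 2, frame_size[1] // 2
    return (face_center[0] - cx, face_center[1] - cy)

print(pixel_offset((240, 140)))  # (-80, -100)
```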

This offset (computed in both X and Y, though only X is used) is then transmitted to the Arduino that controls the stepper motor: the data is encoded as a JSON string, sent over the serial connection between the Pi and the Arduino, and parsed on the Arduino. A translation then turns the offset into an actual number of steps. The translation doesn't need to be precise, as the algorithm naturally converges toward the center of the face.
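The Pi-side half of that exchange might look like the sketch below. The message format, the `PIXELS_PER_STEP` gain, and the function names are all assumptions for illustration; in the real setup the string would be written to a pyserial connection (e.g. `serial.Serial("/dev/ttyACM0", 9600)`), which is omitted here so the example runs without hardware.

```python
import json

PIXELS_PER_STEP = 10  # made-up coarse calibration; only the sign really
                      # matters, since the control loop converges anyway

def to_steps(offset_px):
    """Rough translation from a pixel offset to stepper-motor steps."""
    return offset_px // PIXELS_PER_STEP

def make_command(dx, dy):
    # Both offsets are sent, even though only X is acted on.
    return json.dumps({"x": to_steps(dx), "y": to_steps(dy)})

print(make_command(-80, -100))  # {"x": -8, "y": -10}
```

On the Arduino side, the matching step is parsing this JSON string and feeding the `x` value to the stepper driver.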

Challenges I ran into

Building the enclosure was a lot harder than I initially believed. Building it with two axes of freedom proved impossible, so a compromise was reached: the whole assembly rotates only on the X axis (it can pan but not tilt).

Acrylic panels were used. This was sub-optimal, as we had no proper equipment to drill into acrylic and secure screws correctly. Furthermore, the shape of the stepper motors made it very hard to attach anything to their rotating shafts. This is the reason the tilt feature had to be abandoned.

Proper tooling and expertise could have fixed these issues.

Accomplishments that I'm proud of

Stepping out of my comfort zone by making a project that depends on an area of expertise I am not familiar with (physical fabrication).

What I learned

It's easier to write software than to build physical things. There is no "fast iteration" in hardware.

It was also my first time using epoxy resin and laser-cut acrylic. Both are interesting materials to work with and a good alternative to the thin wood I was used to: epoxy glues far faster than wood glue, and laser-cutting gives the acrylic a precision that's hard to match with wood.

Working with the electronics was a lot easier than I imagined, as driver and library support already existed, and both the hardware and the libraries were well documented.

What's next for FaceTracker

Re-do the enclosure with appropriate materials and proper engineering.

Switch to OpenCV for face detection, as using a cloud service incurs too much latency.

Refine the algorithm to take advantage of the reduced latency.

Add tilt capabilities to the project.
