Inspiration

We want to use computer vision and machine learning to let a blind or visually-impaired person know who they are talking to via auditory feedback, as part of the "public utility" theme.

What it does

Our proof of concept would be able to describe a face based on distinguishing features, add “new faces” to the database, determine whether another face is “new”, and match new input to “recognized faces”.

How we built it

1. Detect person
   • Use an IR receiver to detect a heat signature
   • The heat threshold is manually determined; we basically assume any heat signature is a person
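
A minimal sketch of that threshold check, with a hypothetical read_ir_value() helper standing in for the board-specific sensor code:

```python
# Hypothetical person-detection gate; read_ir_value() stands in for
# however the IR receiver is actually sampled on the Dragonboard.
HEAT_THRESHOLD = 0.5  # manually tuned; units depend on the sensor

def read_ir_value() -> float:
    """Hypothetical: return a reading from the IR receiver."""
    raise NotImplementedError("replace with board-specific sensor code")

def person_present() -> bool:
    # Any heat signature above the manual threshold counts as a person.
    return read_ir_value() > HEAT_THRESHOLD
```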

2. Detect face
   • Use a webcam connected to the camera module to capture a raw image of the face
   • Use Python code to extract distinguishing features, refine the inputs, build a model, and categorize the face
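
One way to do the capture-and-detect step is with OpenCV's bundled Haar cascade; the post doesn't name our exact feature pipeline, so treat this as a sketch:

```python
import cv2

# Load OpenCV's stock frontal-face Haar cascade.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)  # the external webcam
ok, frame = cap.read()
cap.release()

if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # detectMultiScale returns (x, y, w, h) boxes for candidate faces.
    for (x, y, w, h) in cascade.detectMultiScale(gray, scaleFactor=1.1,
                                                 minNeighbors=5):
        face = gray[y:y + h, x:x + w]  # cropped face for the next step
```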

3. Classify face
   • Use Python code to label the “features” (the processed data that represents the face) with a “name”
   • The user records themselves saying who the person they are talking to is
   • For instance, “Bill” (bad) or “Bill from UCSD HKN” (better)
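
A sketch of the enroll-and-match logic using the LBPH recognizer from opencv-contrib-python; the post doesn't say which classifier we used, so the recognizer and the distance threshold here are assumptions:

```python
import cv2
import numpy as np

recognizer = cv2.face.LBPHFaceRecognizer_create()  # from opencv-contrib-python
names = {0: "Bill from UCSD HKN"}  # label id -> name the user recorded

def enroll(faces, label):
    """Train on grayscale face crops (from step 2) for one person."""
    recognizer.train(faces, np.array([label] * len(faces)))

def identify(face, max_distance=80.0):
    """Return the enrolled name, or None if the face looks new."""
    label, distance = recognizer.predict(face)  # lower distance = closer match
    return names.get(label) if distance < max_distance else None
```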

4. Auditory feedback on future encounters
   • On future encounters, the assembly remembers the face and predicts who the user is looking at
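
The spoken output could look like the following, assuming pyttsx3 for text-to-speech; the post doesn't specify how audio is routed through the Audio Mezzanine, so this is just one option:

```python
import pyttsx3

engine = pyttsx3.init()

def announce(name):
    # Speak the recognized name, or prompt that the face is new.
    engine.say(f"You are talking to {name}" if name else "This is a new face")
    engine.runAndWait()
```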

Challenges we ran into

The Dragonboard does not run x86 code, which many Anaconda packages require. In addition, Python's OpenCV is too large to download onto the Dragonboard in one go and has to be built up slowly. While trying to fix these issues, we also upgraded a program that introduced a major bug, which broke HDMI output.

Accomplishments that we're proud of

We found out that the Dragonboard can recognize an external web camera and microphone. We also became more confident in our ability to flash operating systems onto the Dragonboard.

What we learned

The Dragonboard has many incompatibilities with our project, including a bug that breaks HDMI output.

Built With

  • audio-mezzanine
  • dragonboards
  • python