As we enter an age increasingly defined by artificial intelligence and machine learning, we must come to terms with some of their limitations. The past few years have been rife with news about racial bias in AI algorithms. One of the primary problems with AI today is that it can reproduce inherent human biases in real applications. As these errors accumulate, they will only further disenfranchise specific groups within our society.
Most solutions to this problem involve completely retooling individual AI algorithms to reduce bias, which is an inefficient and resource-heavy approach for most companies. Our solution is an image/video preprocessor that normalizes faces before they are analyzed by AI. This allows our product to be dropped into existing software stacks as a single discrete step.
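As a rough sketch of how that "single discrete step" could slot into an existing pipeline (the function and pipeline names here are illustrative, not our actual API):

```python
import numpy as np

def normalize_faces(frame: np.ndarray) -> np.ndarray:
    """Placeholder for the normalization step: in the real product this
    detects faces, fits a landmark model, and composites a neutral mask
    over each face before the frame reaches any downstream AI."""
    # Stand-in: return the frame unchanged so the wiring is visible.
    return frame

def analysis_pipeline(frames, downstream_model):
    """An existing analysis pipeline with the preprocessor injected as
    one extra stage in front of the downstream model."""
    for frame in frames:
        masked = normalize_faces(frame)  # the single discrete step we add
        yield downstream_model(masked)
```

Because the preprocessor only transforms frames, the downstream model needs no changes at all.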
How we built it
We used constrained local models, fitted via regularized landmark mean-shift, to track facial landmarks in video. By tracking individual feature points, we were able to overlay a face-normalization mask on the video. This mask lets us present race/sex/age-agnostic expressions and responses to existing AI algorithms.
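The compositing step can be sketched roughly as follows. Given landmark points tracked per frame (here as synthetic (x, y) coordinates), we fill the face region with a flat color so appearance cues are suppressed while position and shape survive. This is a simplification: a real mask would fill the landmark convex hull rather than the bounding box, and would render a neutral textured face instead of a flat color.

```python
import numpy as np

def overlay_mask(frame: np.ndarray, landmarks: np.ndarray,
                 mask_color=(128, 128, 128)) -> np.ndarray:
    """Composite a flat mask over the region spanned by the tracked
    landmarks, hiding skin appearance while preserving face location.
    `landmarks` is an (N, 2) integer array of (x, y) feature points."""
    out = frame.copy()
    # Simplified to a bounding-box fill; the actual product fills the
    # landmark hull so the mask follows the face outline per frame.
    x0, y0 = landmarks.min(axis=0)
    x1, y1 = landmarks.max(axis=0)
    out[y0:y1 + 1, x0:x1 + 1] = mask_color
    return out
```

Running this per frame with the tracked landmarks yields the masked video that downstream algorithms see.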
Challenges we ran into
What's next for Face Normalizer
Moving forward, we want to apply more accurate face-tracking techniques to our masking layer to reduce glitches and improve performance.