Inspiration

AWS Rekognition, store window displays, and the awesome photography at Unsplash.

What it does

Expression AI uses a webcam to capture images of the user, uploads them to S3, and analyzes them with AWS Rekognition via API Gateway and Lambda. Based on the emotion detected, a set of images is pulled from an Aurora (PostgreSQL) database and returned to the browser. JavaScript in the browser picks one of the images from that emotion set at random and displays it to the user. It also draws an outline of the user's face using the Rekognition bounding box; the outline fades over time to show the data going stale. In addition, AWS Polly announces each expression as it is detected (if it changes). This repeats every four seconds while a face is in view. A JavaScript component performs lightweight client-side face detection so that API calls are made only when someone is actually there.
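The detect-and-display loop described above could be sketched roughly like this. This is a minimal sketch, not the project's actual code: the `/detect` endpoint name and the helper functions `faceInView`, `captureFrame`, `showImage`, and `drawOutline` are hypothetical placeholders, and only the four-second cadence and the fading outline come from the description.

```javascript
// Pick one image at random from the set returned for the detected emotion.
function pickRandomImage(images) {
  return images[Math.floor(Math.random() * images.length)];
}

// Outline opacity: fades linearly from 1 to 0 over the four seconds
// between API calls, visualizing the detection data going stale.
function fadeAlpha(msSinceDetection, periodMs = 4000) {
  return Math.max(0, 1 - msSinceDetection / periodMs);
}

// Main loop, run every four seconds (browser only; helper functions
// faceInView, captureFrame, showImage, drawOutline are hypothetical).
async function detectLoop(video, canvas) {
  if (!faceInView(video)) return;         // cheap local check gates API calls
  const frame = captureFrame(video);      // grab a JPEG frame via a canvas
  const res = await fetch('/detect', { method: 'POST', body: frame });
  const { emotion, boundingBox, images } = await res.json();
  showImage(pickRandomImage(images));     // random image for that emotion
  drawOutline(canvas, boundingBox);       // outline starts at full opacity
}
```

The helpers `pickRandomImage` and `fadeAlpha` are pure functions, which keeps the randomness and fade timing easy to test independently of the browser APIs.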

How we built it

Visual Studio Code, photo resources at Unsplash, etc.

Challenges we ran into

Database permission issues.

Accomplishments that we're proud of

Demonstrating how HTML video and canvas elements can be integrated with AWS Rekognition and Polly services.
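One detail of that canvas integration: Rekognition reports face bounding boxes as ratios of the frame dimensions, so they have to be scaled to pixel coordinates before drawing on the canvas. The sketch below shows the conversion plus a fading `strokeRect` outline; it is an illustration of the technique, not the project's actual code.

```javascript
// Rekognition's BoundingBox gives { Left, Top, Width, Height } as ratios
// (0..1) of the frame size; convert them to canvas pixel coordinates.
function boxToPixels(box, width, height) {
  return {
    x: box.Left * width,
    y: box.Top * height,
    w: box.Width * width,
    h: box.Height * height,
  };
}

// Draw the face outline on a 2D canvas context at the given opacity
// (browser only; alpha would come from the fade timer).
function drawOutline(ctx, box, alpha) {
  const { x, y, w, h } = boxToPixels(box, ctx.canvas.width, ctx.canvas.height);
  ctx.save();
  ctx.globalAlpha = alpha;
  ctx.strokeStyle = 'white';
  ctx.lineWidth = 2;
  ctx.strokeRect(x, y, w, h);
  ctx.restore();
}
```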

What we learned

How to use AWS Rekognition, Polly, Cognito, and Aurora DB.

What's next for expression-ai

Fixing some bugs so it will work on mobile devices.

Built With

  • api-gateway
  • aws-aurora
  • aws-certificate-manager
  • aws-polly
  • aws-rekognition
  • canvas
  • cloudfront
  • html
  • lambda
  • route53
  • s3
  • video
  • webcam