Inspiration

Algorithmic bias (and awareness of it) has become an increasingly prominent issue in facial recognition software and the ML training processes underlying it. We're here to think about that and design a tool that could help combat the problem.

What it does

Our project lets a user upload an image, queries AWS Rekognition's APIs with that image, and returns whether it "is a hotdog" or "is not a hotdog," along with some basic scene-label and content-moderation stats. This is a proof of concept; see "What's next" for more.
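
Roughly, the classification step looks like the sketch below. This is a minimal version using the AWS SDK for JavaScript (v2), not our exact source; the "Hot Dog" label string, the confidence thresholds, and the classifyImage helper name are illustrative assumptions.

    // classify.js — a minimal sketch of the Rekognition query (AWS SDK for JS v2).
    // Names and thresholds are illustrative, not our exact source.
    const AWS = require('aws-sdk');

    const rekognition = new AWS.Rekognition({ region: 'us-east-1' });

    async function classifyImage(imageBytes) {
      // General scene labels (the "environmental" stats) for the uploaded image.
      const { Labels } = await rekognition
        .detectLabels({ Image: { Bytes: imageBytes }, MaxLabels: 10, MinConfidence: 70 })
        .promise();

      // Moderation labels cover the inappropriate-content side.
      const { ModerationLabels } = await rekognition
        .detectModerationLabels({ Image: { Bytes: imageBytes }, MinConfidence: 60 })
        .promise();

      // "Is a hotdog" iff Rekognition tagged the image with a hot dog label
      // (assumed label string).
      const isHotdog = Labels.some((label) => label.Name === 'Hot Dog');
      return { isHotdog, labels: Labels, moderation: ModerationLabels };
    }

    module.exports = { classifyImage };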

How we built it

Built with React on the front end and Node.js on the back end; AWS Rekognition handles the image classification logic.
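
A hedged sketch of how the backend might wire an upload to that classification call, assuming Express with multer handling multipart uploads; the route path, form-field name, and port are illustrative, not our actual configuration.

    // server.js — illustrative Express wiring, not our exact code.
    const express = require('express');
    const multer = require('multer');
    const { classifyImage } = require('./classify'); // sketch from "What it does"

    const app = express();
    // Keep the upload in memory so we can pass raw bytes straight to Rekognition.
    const upload = multer({ storage: multer.memoryStorage() });

    // The React front end POSTs the selected image here as multipart form data.
    app.post('/api/classify', upload.single('image'), async (req, res) => {
      try {
        const result = await classifyImage(req.file.buffer);
        res.json(result);
      } catch (err) {
        res.status(500).json({ error: err.message });
      }
    });

    app.listen(3001, () => console.log('hotornotdog backend listening on 3001'));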

Challenges we ran into

Not sleeping is hard! But seriously, we had some trouble getting the AWS CLI set up (yikes), scraping the info we actually wanted from the JSON objects Rekognition returns, and streamlining our asynchronous calls so the user gets a seamless upload-to-results experience (a rough sketch of how that can be tamed follows).
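
For the last two challenges, one approach is to fire the Rekognition calls in parallel and reduce the verbose JSON before it reaches the UI. A minimal sketch under those assumptions: the response fields (Labels, ModerationLabels, Name, Confidence) are Rekognition's, but the summary shape and getDisplayData name are ours.

    // Streamlining: run both Rekognition calls concurrently and keep only
    // the fields the front end renders. Illustrative names and thresholds.
    const AWS = require('aws-sdk');
    const rekognition = new AWS.Rekognition({ region: 'us-east-1' });

    async function getDisplayData(imageBytes) {
      const image = { Bytes: imageBytes };
      // Promise.all lets both API calls run in parallel instead of back to back.
      const [labelRes, moderationRes] = await Promise.all([
        rekognition.detectLabels({ Image: image, MaxLabels: 10, MinConfidence: 70 }).promise(),
        rekognition.detectModerationLabels({ Image: image, MinConfidence: 60 }).promise(),
      ]);

      // Scrape just what the UI needs out of the verbose responses.
      return {
        isHotdog: labelRes.Labels.some((l) => l.Name === 'Hot Dog'),
        topLabels: labelRes.Labels.map((l) => `${l.Name} (${Math.round(l.Confidence)}%)`),
        flagged: moderationRes.ModerationLabels.map((l) => l.Name),
      };
    }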

Accomplishments that we're proud of

This is a solid basis for moving forward toward a robust and meaningful product! It also does a little more than Silicon Valley's "Not Hotdog," so we think that's worth something...

What we learned

We learned more about algorithmic bias and the injustice baked into ML-based photo recognition software, through readings and by thinking deeply about the issues. We'd been avoiding Node.js (Django is better), so learning it was cool. We also learned how to work as a team and deliver an MVP under a time crunch!

What's next for hotornotdog

The end goal is to build a robust image classifier that flags inappropriate content. We could also foray into cross-comparing image classification tools (think: IBM, Face++, Microsoft, Clarifai) and their results, alongside intuitive data analysis that could expose underlying trends of bias in those same tools.
