Inspiration

Our team has long been concerned about the homogeneity of many computer vision training sets, since homogeneous data can bake cultural and societal biases into the resulting machine learning models.

CultureCV is designed to combat this problem while giving users a real incentive to participate: by photographing objects that are culturally relevant to them and contributing their own tags to new data sets, our mobile app encourages both cultural exchange and a future with more diverse CV training sets.

By using our app, users learn their target language from the objects and situations they already encounter. They are prompted to tag objects that are meaningful to them, which in turn strengthens the diversity of our custom training data set.

What it does

CultureCV is a cross-platform, real-time, real-world flashcard community. When the user starts the app, the phone's camera captures several images of the object in front of them and sends them to the Microsoft Image Analysis API. As the API returns descriptions and tags for the object, our app displays the best tags, translates them with the Microsoft Translation API, and shows a brief description of the scene. The user can set the translation language; French, Spanish, and German are currently supported.
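
The round trip looks roughly like the sketch below, written as fetch calls from React Native. The endpoint region, API versions, and key names are placeholders rather than our exact configuration:

```typescript
// Sketch of the capture-to-flashcard round trip (keys/endpoints are placeholders).
const VISION_KEY = '<your-computer-vision-key>';
const TRANSLATOR_KEY = '<your-translator-key>';

interface Tag { name: string; confidence: number; }

// Ask the Image Analysis API for tags and a caption for one captured frame.
async function analyzeImage(imageBytes: Blob): Promise<{ tags: Tag[]; caption: string }> {
  const res = await fetch(
    'https://westus.api.cognitive.microsoft.com/vision/v3.2/analyze?visualFeatures=Tags,Description',
    {
      method: 'POST',
      headers: {
        'Ocp-Apim-Subscription-Key': VISION_KEY,
        'Content-Type': 'application/octet-stream',
      },
      body: imageBytes,
    },
  );
  const json = await res.json();
  return {
    tags: json.tags ?? [],
    caption: json.description?.captions?.[0]?.text ?? '',
  };
}

// Translate the best tags into the user's chosen language (fr, es, or de).
async function translateTags(tags: string[], to: 'fr' | 'es' | 'de'): Promise<string[]> {
  const res = await fetch(
    `https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&to=${to}`,
    {
      method: 'POST',
      headers: {
        'Ocp-Apim-Subscription-Key': TRANSLATOR_KEY,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify(tags.map((t) => ({ Text: t }))),
    },
  );
  const json = await res.json();
  return json.map((entry: any) => entry.translations[0].text);
}
```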

When our custom data set (built with the Microsoft Custom Vision API) produces a more confident match for the image than the tags returned by the Microsoft Image Analysis API, we display and translate our own tags instead.
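
The fallback logic amounts to comparing confidence scores from the two services. A sketch, assuming the Custom Vision prediction endpoint with placeholder project and iteration names:

```typescript
interface Tag { name: string; confidence: number; }

const PREDICTION_KEY = '<your-prediction-key>';
const PREDICTION_URL =
  'https://southcentralus.api.cognitive.microsoft.com/customvision/v3.0/Prediction/<project-id>/classify/iterations/<iteration-name>/image';

// Score the frame against the community-trained Custom Vision model.
async function predictCustom(imageBytes: Blob): Promise<Tag[]> {
  const res = await fetch(PREDICTION_URL, {
    method: 'POST',
    headers: {
      'Prediction-Key': PREDICTION_KEY,
      'Content-Type': 'application/octet-stream',
    },
    body: imageBytes,
  });
  const json = await res.json();
  return (json.predictions ?? []).map((p: any) => ({
    name: p.tagName,
    confidence: p.probability,
  }));
}

// Both services return tags sorted by confidence, so comparing the top
// entries decides which set of tags the user sees.
function chooseTags(general: Tag[], custom: Tag[]): Tag[] {
  const bestGeneral = general[0]?.confidence ?? 0;
  const bestCustom = custom[0]?.confidence ?? 0;
  return bestCustom > bestGeneral ? custom : general;
}
```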

Users can create their own language "flashcards" from any object they want by tagging the objects they find meaningful. This is powerful both as a democratization of training data and as a deep form of personalization.

How we built it

CultureCV was built entirely with Expo and React Native -- a huge advantage, because every member of our team carried a different mobile device.
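
For a sense of how little native code this required, here is a minimal capture screen sketched against the expo-camera API (permission handling omitted; exact component and prop names vary by SDK version):

```tsx
import React, { useRef } from 'react';
import { Button, View } from 'react-native';
import { Camera } from 'expo-camera';

// Minimal capture screen: a full-screen camera preview and a capture button.
export default function CaptureScreen({ onCapture }: { onCapture: (base64: string) => void }) {
  const cameraRef = useRef<Camera>(null);

  const snap = async () => {
    // takePictureAsync returns a local URI plus (optionally) base64 data.
    const photo = await cameraRef.current?.takePictureAsync({ base64: true, quality: 0.5 });
    if (photo?.base64) onCapture(photo.base64);
  };

  return (
    <View style={{ flex: 1 }}>
      <Camera ref={cameraRef} style={{ flex: 1 }} />
      <Button title="Identify" onPress={snap} />
    </View>
  );
}
```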

Challenges we ran into

We spent a lot of time uncovering new features by poring over the new Microsoft documentation for the Custom Vision Service. Letting users submit their own tags to the data set was particularly challenging.
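
In the end, a contribution boils down to two Custom Vision training calls: create (or reuse) the tag, then upload the photo against it. A sketch with placeholder keys and project IDs; the exact API version may differ from what we used:

```typescript
const TRAINING_KEY = '<your-training-key>';
const TRAINING_BASE =
  'https://southcentralus.api.cognitive.microsoft.com/customvision/v3.3/training/projects/<project-id>';

// Create the tag so the upload has something to attach to.
async function ensureTag(name: string): Promise<string> {
  const res = await fetch(`${TRAINING_BASE}/tags?name=${encodeURIComponent(name)}`, {
    method: 'POST',
    headers: { 'Training-Key': TRAINING_KEY },
  });
  const tag = await res.json();
  return tag.id;
}

// Upload the user's photo as a training example for that tag.
async function contributeImage(imageBytes: Blob, tagName: string): Promise<void> {
  const tagId = await ensureTag(tagName);
  await fetch(`${TRAINING_BASE}/images?tagIds=${tagId}`, {
    method: 'POST',
    headers: {
      'Training-Key': TRAINING_KEY,
      'Content-Type': 'application/octet-stream',
    },
    body: imageBytes,
  });
}
```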

Additionally, two out of three of our developers had never used Expo or React Native before. Thankfully, after the initial difficulties of getting up and running, we were all able to test using Expo.

Accomplishments that we're proud of

We are especially proud of the near-instant training of our custom model -- users who tap the "contribute" button to add tags for an object can see that object correctly identified just moments later.
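
The "instant" feel comes from kicking off a new training iteration right after the upload and polling until it finishes. A sketch of that loop, reusing the placeholder key and project URL from the training sketch above (note that the Custom Vision service also enforces minimums on images per tag before a training run will succeed):

```typescript
// Same placeholder key and project URL as in the training sketch above.
const TRAINING_KEY = '<your-training-key>';
const TRAINING_BASE =
  'https://southcentralus.api.cognitive.microsoft.com/customvision/v3.3/training/projects/<project-id>';

// Kick off a new training iteration and poll until it is ready to use.
async function trainAndWait(): Promise<void> {
  const res = await fetch(`${TRAINING_BASE}/train`, {
    method: 'POST',
    headers: { 'Training-Key': TRAINING_KEY },
  });
  let iteration = await res.json();

  // Poll every two seconds until the service reports the run as done.
  while (iteration.status !== 'Completed') {
    await new Promise((resolve) => setTimeout(resolve, 2000));
    const poll = await fetch(`${TRAINING_BASE}/iterations/${iteration.id}`, {
      headers: { 'Training-Key': TRAINING_KEY },
    });
    iteration = await poll.json();
  }
}
```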

What we learned

The most exciting thing our team learned was that we could create a high-quality machine learning app without ever having to glance at a gradient! We also learned how to collaborate as a team using Expo (two of our members learned how to use Expo, full stop).

What's next for CultureCV

The platform generalizes to any type of information -- we see future applications in museums, at famous landmarks and historical sites, or even directly in the classroom. Ultimately, we see our platform becoming something like Wikipedia for the real world.

Built With

Expo, React Native, Microsoft Image Analysis API, Microsoft Translation API, Microsoft Custom Vision API