Inspiration

Our own and our friends' experience of dealing with food allergies in Japan, while being unable to read packaging and unwilling or unable to ask store employees about every single product. We see two types of customers:

  • the person living in Japan who always eats the same things because they are afraid of new products.
  • the tourist who is eager to try as many Japanese foods as possible in a limited time but has no idea what each product contains, so they end up passing up many opportunities for fear of eating something with an allergen in it.

What it does

Typical user journey:

  1. On first use, the user registers their food allergies.
  2. Whenever they encounter a product, they can scan the package and the app tells them whether it contains any of the allergens they selected, along with a confidence percentage (all in English for this first iteration); a sketch of this matching step follows the list.
  3. Bonus: the user can save the product and where it was bought, for future purchases and sharing (work in progress).
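
To make step 2 concrete, here is a minimal Python sketch of the final matching step, assuming OCR and translation have already produced English ingredient text. The function name and the flat 0.9 score are illustrative placeholders, not our actual scoring logic:

```python
# Minimal sketch (not our production code): match translated English
# ingredient text against the user's registered allergens.
from typing import Dict, List

def detect_allergens(ingredient_text: str,
                     user_allergens: List[str]) -> List[Dict]:
    """Return each registered allergen found in the text with a score."""
    text = ingredient_text.lower()
    matches = []
    for allergen in user_allergens:
        if allergen.lower() in text:
            # Placeholder score: the real confidence percentage would
            # also factor in the OCR and translation steps.
            matches.append({"allergen": allergen, "confidence": 0.9})
    return matches

print(detect_allergens("wheat flour, sugar, egg, soybean oil",
                       ["egg", "peanut", "wheat"]))
# -> matches for 'egg' and 'wheat', nothing for 'peanut'
```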

How we built it

We used Node.js and Python for the back end, built on the SYSTRAN.io API, the Microsoft Computer Vision API, and the Microsoft Text Translation API, with React Native for the mobile app (iOS and Android).
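
As a rough illustration, the Japanese OCR stage conceptually looks like the Python sketch below. The request and response shapes follow Microsoft's documented Computer Vision OCR REST API, but the endpoint region and key are placeholders and this is a simplified reconstruction rather than our exact code:

```python
# Sketch of the Japanese OCR step; endpoint region and key are placeholders.
import requests

ENDPOINT = "https://japaneast.api.cognitive.microsoft.com/vision/v2.0/ocr"
KEY = "YOUR_SUBSCRIPTION_KEY"  # placeholder

def ocr_japanese(image_bytes: bytes) -> str:
    """Send a package photo to the OCR endpoint and join the words found."""
    resp = requests.post(
        ENDPOINT,
        params={"language": "ja", "detectOrientation": "true"},
        headers={"Ocp-Apim-Subscription-Key": KEY,
                 "Content-Type": "application/octet-stream"},
        data=image_bytes,
    )
    resp.raise_for_status()
    result = resp.json()
    # The response nests regions -> lines -> words
    words = [w["text"]
             for region in result.get("regions", [])
             for line in region["lines"]
             for w in line["words"]]
    return "".join(words)  # Japanese text has no spaces between words
```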

Challenges we ran into

Many OCR APIs are English-centric, so we had to translate the Japanese text into English through two separate APIs and then compare the two results as English words (see the sketch below). It would have been better to compare the two OCR outputs in the native language (Japanese) for higher accuracy.
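
A small sketch of that comparison step, assuming both pipelines have already returned English text; the set-intersection comparison is our illustration of the idea, not a documented algorithm:

```python
# Illustrative sketch: keep only the ingredient terms both translation
# pipelines agree on. Real matching would need stemming ('egg' vs 'eggs')
# and phrase handling, which is exactly where the translate-then-compare
# approach loses accuracy.
def agreed_terms(translation_a: str, translation_b: str) -> set:
    terms_a = {w.strip("(),.").lower() for w in translation_a.split()}
    terms_b = {w.strip("(),.").lower() for w in translation_b.split()}
    return terms_a & terms_b

a = "wheat flour, sugar, egg, soybean oil"
b = "flour (wheat), sugar, eggs, soy bean oil"
print(agreed_terms(a, b))  # 'egg'/'eggs' already fail to match
```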

We had many technical issues, ranging from database connection and access problems to routine debugging. Because we want to bring ML into the app not only through third-party APIs but also by training our own neural network in the future, we decided to separate the back end into Node.js and Python, which added complexity to our prototyping; one way such a split can be wired is sketched below.
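
One way to picture the split is the Node.js layer calling the Python side over HTTP. The sketch below is a hypothetical minimal Python service in that spirit; the /detect route, payload shape, and use of Flask are assumptions for illustration, not our actual wiring:

```python
# Hypothetical sketch of the Python half of a Node.js/Python split:
# a tiny HTTP service the Node.js layer could call for ML-related work.
# The /detect route and payload shape are assumptions for illustration.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/detect", methods=["POST"])
def detect():
    payload = request.get_json(force=True)
    text = payload.get("ingredient_text", "").lower()
    allergens = payload.get("allergens", [])
    return jsonify({"matches": [a for a in allergens if a.lower() in text]})

if __name__ == "__main__":
    app.run(port=5000)  # the Node.js back end would POST to this port
```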

At times we could not even tell whether we were hitting real technical issues or just unreliable venue Wi-Fi.

Accomplishments that we're proud of

-Coming together as a group of strangers from diverse backgrounds just days before the hackathon, we formed a close-knit team and put together an actual "working" prototype of the application in a very short amount of time.

-Our lead developer, who had no Python experience and very little React Native experience, was able to successfully deploy our app.

-Each small breakthrough was a cause for celebration and pride.

-Coming up with an innovative way to solve a real "pain point" for travelers with allergies.

What we learned

-In addition to learning new and interesting development methods and technologies, we got to experience the whole process of putting together a real product, from idea to prototype, while working with a diverse team of people.

What's next for Alletector

  • In the future, in addition to more robust AI-driven detection, users will be able to scan a product's barcode and get information from a larger database built from all users' contributions, improving detection accuracy.

  • Photos can be stored with comments and geolocation, then shared through social media.

  • Through aggregation, our data could help governments and research institutions study allergy trends, and help retailers improve targeted marketing to foreign residents and visitors, first in Japan and then scaled to other markets.

Built With

Node.js, Python, React Native, SYSTRAN.io API, Microsoft Computer Vision API, Microsoft Text Translation API