Inspiration

In Health class last week, we learned how serious a problem obesity is: around two-thirds of Americans are overweight, and on any given day about 33% of Americans eat at a fast food restaurant for the sake of convenience. Many of these foods are processed and contain ingredients that should only be consumed in moderation. If this trend continues, obesity could become a leading cause of death within a few years, and we hope to help prevent that by educating people with this application.

What it does

This program uses Tesseract OCR to convert the text in an image file into actual text that we can use. From there, we read the text from the image file (whether it is a screenshot or a picture) and save that data for processing. Then we check the text against a list of ingredients to watch out for, especially ones common in processed foods, and bold or highlight those ingredients so that the user knows to be aware of them. A minimal sketch of that flagging step is shown below.
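As an illustration, here is a minimal Swift sketch of the flagging step. The ingredient list and function name here are our own for the example, not the exact ones in the app:

```swift
import UIKit

// A small sample of ingredients to flag; the real list is much longer.
let flaggedIngredients = ["high fructose corn syrup", "sodium nitrite",
                          "partially hydrogenated oil", "monosodium glutamate"]

/// Returns the label text with every flagged ingredient shown in bold red.
func highlight(labelText: String) -> NSAttributedString {
    let result = NSMutableAttributedString(string: labelText)
    let text = labelText as NSString
    for ingredient in flaggedIngredients {
        var searchRange = NSRange(location: 0, length: text.length)
        // Mark every case-insensitive occurrence of the ingredient.
        while searchRange.length > 0 {
            let found = text.range(of: ingredient,
                                   options: .caseInsensitive,
                                   range: searchRange)
            if found.location == NSNotFound { break }
            result.addAttributes([
                .foregroundColor: UIColor.red,
                .font: UIFont.boldSystemFont(ofSize: 16)
            ], range: found)
            let next = found.location + found.length
            searchRange = NSRange(location: next, length: text.length - next)
        }
    }
    return result
}
```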

How we built it

We built this app using Xcode. There were two parts. The first was converting the image or screenshot into text; for this we used OCR, which stands for Optical Character Recognition and is what actually extracts the text. Once we got that to work, we compiled a list of bad ingredients, especially ones found in processed foods, and made them turn up in red font to alert the user that those ingredients could be potentially harmful to their health. We built all of this in Swift, which we had to learn along the way, and we also had to make the app user-friendly with a good-looking interface. The basic layout for our app is in the image gallery.
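For reference, the extraction step with the TesseractOCRiOS wrapper generally looks like the following. This is a sketch under the assumption of that library's API, not necessarily the exact code in our app:

```swift
import UIKit
import TesseractOCR

/// Extracts the raw label text from a photo or screenshot.
/// Returns nil if the OCR engine fails to initialize.
func extractText(from image: UIImage) -> String? {
    guard let tesseract = G8Tesseract(language: "eng") else { return nil }
    tesseract.image = image          // the screenshot or photo to scan
    tesseract.recognize()            // run the OCR pass
    return tesseract.recognizedText  // plain text, ready to check
}
```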

Challenges we ran into

We ran into several challenges while writing our code. First, the WiFi was very slow and hard to work with, which made it difficult to be efficient at learning a new language and completing our app. Furthermore, the image processing and OCR often failed because pictures are frequently taken at a slanted angle, and OCR cannot work at its optimal accuracy on slanted images. We went about solving this problem in two ways. First, the user can input a screenshot or a picture file instead of taking the picture with the camera (a sketch of that flow is below). Second, we tried to improve the accuracy of the recognition, and it now works a majority of the time; we hope to improve it further on our own time. One last challenge we ran into was the syntax of Swift in Xcode, since none of us had formal experience with it. We did a lot of research on how to code in Swift and became proficient in time to build this app.
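Letting the user pick an existing screenshot or photo is a standard UIImagePickerController flow. A minimal sketch (the view controller and the handoff to OCR are illustrative):

```swift
import UIKit

class ScannerViewController: UIViewController,
        UIImagePickerControllerDelegate, UINavigationControllerDelegate {

    // Lets the user choose an existing screenshot or photo instead of
    // taking a (possibly slanted) picture with the camera.
    func pickImageFromLibrary() {
        let picker = UIImagePickerController()
        picker.sourceType = .photoLibrary
        picker.delegate = self
        present(picker, animated: true)
    }

    func imagePickerController(_ picker: UIImagePickerController,
            didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey: Any]) {
        picker.dismiss(animated: true)
        guard let image = info[.originalImage] as? UIImage else { return }
        // Hand the image to the OCR step sketched earlier,
        // e.g. let text = extractText(from: image)
    }
}
```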

Accomplishments that we're proud of

We are proud that even though we didn't have any experience developing applications, we were able to learn during this hackathon and apply it to a well-working app. We are also quite proud that we could implement OCR in our app, since it isn't an easy thing to do. We are overjoyed that we were able to work around syntax errors in Swift, especially since none of us had experience with the language. There were often problems, such as the OCR not working at all or the image not showing up on the screen, but we were able to approach each one with a fresh mind and persevere.

What we learned

When approaching such a big project, we first learned to break it up into steps. We wrote out the logic as follows: 1) build a program that can convert the text in an image file into actual text; 2) once that works in general, focus on ingredients, so that taking a picture of an ingredients label, or loading an image file, parses it into text; 3) check for bad ingredients and clearly bold them and turn their font red as a warning sign. Another thing we learned is how to use OCR to process an image file and convert it into text, which was key to how this app works. Put together, those steps reduce to a short pipeline, sketched below.
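Using the illustrative helpers from the earlier sketches, the whole flow is just:

```swift
/// Steps 1–3 chained together: image → raw text → highlighted ingredients.
func scan(image: UIImage) -> NSAttributedString? {
    guard let text = extractText(from: image) else { return nil } // steps 1–2: OCR
    return highlight(labelText: text)                             // step 3: flag bad ones
}
```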

  • We learned how to work around problems such as the slow WiFi.
  • We learned how to make an iOS app in Xcode and make it fairly beautiful by fixing the layouts of the buttons and choosing colors that fit together.
  • Furthermore, we learned how to make a logo as well.

What's next for Ingredient Scanner - Do you know what's in your food?

We aren’t going to stop here with this idea; we have many ideas for how to improve it. The first is to improve the accuracy of the image detection. We weren’t able to make it perfect, since we focused more on making the bad ingredients red and standing out, which was the core problem we were trying to solve; as of now, the image detection works about 90% of the time. Second, we can try to look at the nutrition facts and determine how much of each part of the label is bad (e.g., sugar). By showing the amount of a bad ingredient, we can let users judge for themselves whether they want to consume it; a rough sketch of that parsing is below. Another future plan is to develop the application for watchOS on the Apple Watch Series 2. Once Apple Watches start coming with built-in cameras or better camera support (such as the CMRA camera band), our application could be even more useful and efficient.
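That nutrition-facts idea could start from pulling gram amounts out of the OCR'd text. A rough sketch, where the regex pattern and nutrient names are assumptions for illustration:

```swift
import Foundation

/// Pulls a "Sugars 12g"-style gram amount for one nutrient out of label text.
func gramAmount(of nutrient: String, in labelText: String) -> Double? {
    // Matches e.g. "Sugars 12g" or "Total Sugars: 12.5 g", case-insensitively.
    let pattern = NSRegularExpression.escapedPattern(for: nutrient)
                + #"[^\d]*(\d+(?:\.\d+)?)\s*g"#
    guard let regex = try? NSRegularExpression(pattern: pattern,
                                               options: .caseInsensitive),
          let match = regex.firstMatch(in: labelText, range:
                          NSRange(labelText.startIndex..., in: labelText)),
          let range = Range(match.range(at: 1), in: labelText)
    else { return nil }
    return Double(String(labelText[range]))
}

// Usage: gramAmount(of: "Sugars", in: ocrText) might return 12.0
```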

Additional Comment

IMPORTANT: Make sure to download the file from the bit.ly link, since the video format is not supported.

Built With

Swift, Xcode, Tesseract OCR
