What inspired our project was honestly just our own experiences shopping online. Especially with makeup, we kept running into the same issue: products with amazing reviews that didn’t actually work as expected. In one case, one of our teammates tried a product with tons of positive reviews and still ended up breaking out, because none of those reviews included important context like skin type or sensitivities. That moment really stood out to us, because it showed how misleading reviews can be when they lack detail or honesty.

We also realized that the review process for makeup products is actually a lot more complex than it seems, especially for people with specific skin conditions like acne-prone or sensitive skin. A product might work really well for one person and be terrible for someone else, but most review sections don’t reflect that nuance. People end up having to dig through tons of content across different platforms just to get a clearer picture.

While working on this, we learned a lot about how messy and inconsistent online data actually is. Scraping reviews from websites sounds simple, but every site structures things differently, so we had to constantly adjust our approach. We also learned how to work with APIs like YouTube’s to pull in external data, and how to use Gemini to actually analyze and summarize that information in a meaningful way. Another big learning curve was figuring out how to structure our backend so everything, from scraping to AI processing, works smoothly together.
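To give a feel for the "every site structures things differently" problem, here is a minimal sketch of per-site extraction rules with a generic fallback. The site names, patterns, and function name are all hypothetical illustrations, not our actual scraper code:

```javascript
// Sketch: per-site extraction rules, since every retailer marks up
// reviews differently. These patterns are illustrative placeholders.
const SITE_RULES = {
  "sephora.com": /data-review-text="([^"]+)"/g,
  "ulta.com": /<p class="review-body">([^<]+)<\/p>/g,
};

// Fallback pattern for sites we have no dedicated rule for.
const GENERIC_RULE = /<p[^>]*review[^>]*>([^<]+)<\/p>/gi;

function extractReviews(hostname, html) {
  const rule = SITE_RULES[hostname] ?? GENERIC_RULE;
  const reviews = [];
  for (const match of html.matchAll(rule)) {
    reviews.push(match[1].trim());
  }
  return reviews;
}
```

In practice each rule tends to break whenever a site redesigns its page, which is why this part of the project needed constant adjustment.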

We built our project as a browser extension connected to a Node.js backend. The extension extracts product information and on-page reviews, then sends that data to our server. From there, we bring in external insights like YouTube content and use Gemini to generate a rating, key pros and cons, and a comparison between what the product page says and what people are saying elsewhere. We also started adding personalization, so users can input things like their skin type and get more relevant results.
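The flow above can be sketched as a single prompt-building step on the backend: the extension's payload (product info, on-page reviews) plus external mentions get assembled into one request for Gemini. The field names and wording here are illustrative assumptions, not our exact schema:

```javascript
// Sketch: turn the extension's payload plus external data into a Gemini
// prompt. Field names are hypothetical, not our actual schema.
function buildAnalysisPrompt({ productName, pageClaims, onPageReviews, externalMentions, skinType }) {
  return [
    `Product: ${productName}`,
    skinType ? `User skin type: ${skinType}` : null, // optional personalization
    `Claims from the product page: ${pageClaims.join("; ")}`,
    `On-page reviews: ${onPageReviews.join(" | ")}`,
    `What people say elsewhere (e.g. YouTube): ${externalMentions.join(" | ")}`,
    "Return a 1-5 rating, key pros and cons, and note any gaps between the page's claims and outside discussion.",
  ].filter(Boolean).join("\n");
}
```

Keeping this as one plain function made it easy to iterate on the prompt wording without touching the scraping or API code around it.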

One of the biggest challenges we faced was coordination and timing. Since different parts of the project, like scraping, external data collection, and AI analysis, depended on each other, we had to build them in parallel without always being able to test them together right away. Another challenge was getting the AI output to be actually useful rather than just generic summaries. It took a lot of iteration to make the results feel specific and helpful. We also ran into issues like API limits and debugging communication between the extension and the backend.
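For the API-limit issue, one common pattern (sketched here as an assumption about how you might handle it, not our exact code) is wrapping rate-limited calls in a retry with exponential backoff:

```javascript
// Sketch: retry a rate-limited API call (e.g. a YouTube Data API request)
// with exponential backoff. Retry counts and delays are illustrative.
async function withRetry(fn, { retries = 3, baseDelayMs = 500 } = {}) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt >= retries) throw err; // out of attempts, give up
      // Backoff doubles each attempt: 500ms, 1000ms, 2000ms, ...
      await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** attempt));
    }
  }
}
```

A wrapper like this keeps the backoff logic out of the scraping and analysis code, so a quota hiccup doesn't bring down the whole pipeline.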

Looking ahead, we want to expand the kinds of content we analyze. Right now we focus mainly on written reviews and longer videos, but we plan to support shorter content like TikToks and YouTube Shorts by transcribing and analyzing them as well. We also want to incorporate platforms like Reddit and other forums, since those often have more honest and detailed discussions. Beyond that, we’re thinking about improving personalization even further, refining how we score credibility, and making the interface more interactive so users can explore different perspectives more easily.

Overall, this project pushed us to think beyond just building something functional and really focus on solving a problem we’ve actually experienced ourselves.
