Inspiration

We want people to understand the news and the media biases they interact with while staying engaged with the world and society around them. To that end, we built an API and Chrome extension with a detector that informs readers of potential bias in news articles. The generational reframe of each article also puts a humorous spin on it, keeping people engaged and making the content more accessible.

What it does

The extension works by extracting article text from a webpage, sending it to the Reframe API for processing, and then replacing the on-page content with the reframed version in the selected generational style.
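The first step of that pipeline, pulling readable article text out of a page's HTML, can be sketched with BeautifulSoup. The selector heuristics below (preferring an `<article>` tag, keeping only `<p>` text) are illustrative assumptions, not the project's exact logic:

```python
from bs4 import BeautifulSoup

def extract_article_text(html: str) -> str:
    """Pull paragraph text out of a news page, skipping nav chrome."""
    soup = BeautifulSoup(html, "html.parser")
    # Prefer an <article> element if present; fall back to the whole body.
    root = soup.find("article") or soup.body or soup
    paragraphs = [p.get_text(strip=True) for p in root.find_all("p")]
    return "\n\n".join(p for p in paragraphs if p)

sample = """
<html><body>
  <nav>Home | Politics</nav>
  <article>
    <h1>Headline</h1>
    <p>First paragraph of the story.</p>
    <p>Second paragraph with more detail.</p>
  </article>
</body></html>
"""
print(extract_article_text(sample))
```

The extracted text is what gets posted to the API; the reframed response then replaces the page content in place.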

How we built it

We used Python, FastAPI/Starlette, and Pydantic, with additional libraries for web scraping, LLM access, and configuration management (e.g., BeautifulSoup for scraping and the Gemini API for generation).

Challenges we ran into

As a team of all beginner hackers, we had to spend time researching APIs before starting our project. Throughout the project, we relied on Google Antigravity/Gemini 3 Pro to help us choose libraries and establish the logic and overall structure of our API. The hard part was figuring out how to use AI efficiently and effectively so that it produced exactly the results we wanted.

Accomplishments that we're proud of

We are proud of the visual design of the extension and the quality of the generation-based translations, as well as how well the bias detector works across different news platforms.

What we learned

We learned how to design and deploy a RESTful API, including structuring endpoints, handling requests and responses, and testing functionality using Postman. Since none of us had extensive prior experience building APIs, this project helped us understand how backend services communicate with clients and how to create clear, predictable developer-facing interfaces.

What's next for Reframe

To prepare for public deployment, we plan to replace the current multi-key Gemini rotation approach with a scalable production solution, such as a paid API tier. This will be more reliable and accessible for public users. We could also expand the bias analysis to provide more in-depth explanations, rather than only providing simple labels.
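For context, the multi-key rotation we use today amounts to round-robin cycling through a pool of free-tier keys so that no single key exhausts its quota. A minimal sketch (key values are placeholders, and the real service would pass the selected key to its Gemini client):

```python
from itertools import cycle

class KeyRotator:
    """Round-robin over a pool of API keys to spread free-tier quota."""

    def __init__(self, keys: list[str]):
        if not keys:
            raise ValueError("need at least one API key")
        self._pool = cycle(keys)

    def next_key(self) -> str:
        # Each call returns the next key, wrapping back to the start.
        return next(self._pool)

rotator = KeyRotator(["KEY_A", "KEY_B", "KEY_C"])  # placeholder keys
print([rotator.next_key() for _ in range(5)])
# → ['KEY_A', 'KEY_B', 'KEY_C', 'KEY_A', 'KEY_B']
```

This works for a demo, but it offers no per-key quota tracking or failover, which is why a paid API tier is the better fit for public deployment.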
