Inspiration
Inspired by https://www.allsides.com/unbiased-balanced-news and https://ground.news/, two organizations trying to provide more balanced views by annotating biases in news.
We thought we could apply LLMs to this problem.
What it does
Takes the text you select in the browser, embeds it, and looks it up in a vector database, which returns pertinent news articles. We then prompt an LLM for a general opinion grounded in the returned articles (retrieval-augmented generation).
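The lookup step is a nearest-neighbor search over article embeddings. A minimal in-memory sketch of that search (in the real system this runs inside Postgres via pgvector's distance operators, and embeddings come from Together AI; the toy vectors here are placeholders):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def top_k_articles(query_vec: list[float],
                   articles: list[tuple[str, list[float]]],
                   k: int = 3) -> list[str]:
    """Return titles of the k articles whose embeddings are closest
    to the query embedding -- a stand-in for the pgvector query."""
    ranked = sorted(articles,
                    key=lambda art: cosine_similarity(query_vec, art[1]),
                    reverse=True)
    return [title for title, _ in ranked[:k]]
```

The retrieved titles (and article bodies) are then stuffed into the LLM prompt as context.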
How we built it
First, we scraped various news sources or used their APIs. We embedded the articles into vector space using Together AI, set up Postgres with the pgvector extension, built a FastAPI backend, and wrote the browser extension in TypeScript.
selected text => FastAPI backend [ Together AI embedding => pgvector lookup => Together AI LLM inference ] => response to browser request => displayed in the browser
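That flow can be sketched end to end as one function. The three callables (`embed`, `vector_search`, `generate_opinion`) are hypothetical stand-ins for the Together AI embedding call, the pgvector query, and the Together AI inference call, injected so the pipeline is testable without network access:

```python
from typing import Callable

def run_pipeline(selected_text: str,
                 embed: Callable[[str], list[float]],
                 vector_search: Callable[[list[float]], list[str]],
                 generate_opinion: Callable[[str], str]) -> str:
    """Sketch of the request flow: selected text -> embed ->
    pgvector lookup -> LLM inference -> response to the browser."""
    query_vec = embed(selected_text)        # Together AI embedding
    articles = vector_search(query_vec)     # pgvector nearest-neighbor lookup
    prompt = (
        "Give a general opinion on the selected text, "
        "grounded in these articles:\n"
        + "\n".join(f"- {a}" for a in articles)
        + f"\n\nSelected text: {selected_text}"
    )
    return generate_opinion(prompt)         # Together AI LLM inference
```

In the real backend this runs inside a FastAPI route handler that the browser extension calls with the selected text.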
Challenges we ran into
Legality / terms of service: some news sources require payment, and the NYT recently updated its ToS to ban training on its data without explicit approval.
Together AI's API was blocked by Cornell's network for a good part of the day.
Sparse documentation for several of the tools we used.
Accomplishments that we're proud of
Creating this working prototype
What we learned
How many moving parts there are in an LLM application.
What's next for Conifer
- Safer responses
- Using the retrieved sources to start an LLM chat, so you can debate/discuss your views, and see if you are missing any sides to an issue.
Built With
- docker
- fast.api
- nix
- postgresql
- python
- together.ai
- typescript