Inspiration

Media sources today are increasingly polarized, and so is the world around them. This got us thinking.

What if you could get an all-in-one news briefing each morning on the things you ACTUALLY care about? What if you could listen, read, explore, and understand what went into the news YOU care about? What if it was free of bias, and you could dive into the details while exploring a variety of sources that show the whole picture, not just one side?

What it does

Axon creates an all-in-one, customizable newsletter and podcast delivered to you each morning. It includes fact-checking features and lets you research topics of interest through a deep-research interface.

How we built it

Stack

Frontend

  • Onboarding with auth to capture topics of interest, plus protected route pages for the dashboard

We created an onboarding customization flow, along with a user dashboard showing the previous day's newsletters and podcasts, gated behind Supabase Auth.
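The onboarding step boils down to validating a user's topic picks and persisting them against their account. A minimal sketch of that logic — the topic names and row shape are illustrative, not our exact Supabase schema:

```python
# Placeholder topic list; the real set lives in our config.
SUPPORTED_TOPICS = {"politics", "technology", "finance", "science", "sports"}

def build_profile_row(user_id: str, selected_topics: list[str]) -> dict:
    """Normalize topic picks and reject anything we don't support,
    returning the row we would persist (via Supabase in the real app)."""
    topics = sorted({t.strip().lower() for t in selected_topics})
    unknown = [t for t in topics if t not in SUPPORTED_TOPICS]
    if unknown:
        raise ValueError(f"unsupported topics: {unknown}")
    return {"user_id": user_id, "topics": topics, "wants_podcast": True}

row = build_profile_row("uuid-123", ["Technology", "finance "])
```

Normalizing up front keeps the daily pipeline simple: every downstream script can trust that a profile's topics are lowercase and deduplicated.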

  • Email newsletter sending and dashboard page with custom newsletter and podcast

We use Resend to send an email with the plaintext custom newsletter each morning. This email encourages users to go to the website, where they can interact with and explore sources and topics contained in the newsletter.
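A sketch of the morning send: we build the email payload here, and the actual delivery goes through the Resend SDK (shown commented out; the sender address and API key are placeholders, not our real config):

```python
def newsletter_email(to_addr: str, date: str, body_text: str) -> dict:
    """Build the plaintext newsletter email for one user."""
    return {
        "from": "Axon <digest@example.com>",  # placeholder sender
        "to": [to_addr],
        "subject": f"Your Axon briefing for {date}",
        "text": body_text,
    }

params = newsletter_email("reader@example.com", "2025-01-12", "Top stories...")

# The real pipeline then sends it (not run here):
# import resend
# resend.api_key = "re_..."      # key elided
# resend.Emails.send(params)
```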

  • Newsletter chatting and exploration with forked deep-research implementation

We combine a deep-research implementation with a curated (hardcoded) list of websites to find opposing sources and data that users can interact with, giving them a holistic view of the events they are interested in.
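The "opposing sources" idea reduces to picking a balanced set of articles across political leans. A simplified sketch, assuming each scraped article is already tagged with a lean (the tagging itself comes from our curated site list):

```python
from collections import defaultdict

def balanced_sources(articles: list[dict], per_lean: int = 2) -> list[dict]:
    """Group articles by political lean and take up to `per_lean`
    from each side, so the reader sees more than one perspective."""
    by_lean = defaultdict(list)
    for article in articles:
        by_lean[article["lean"]].append(article)
    picked = []
    for lean in ("left", "center", "right"):
        picked.extend(by_lean[lean][:per_lean])
    return picked

articles = [
    {"title": "A", "lean": "left"},
    {"title": "B", "lean": "left"},
    {"title": "C", "lean": "left"},
    {"title": "D", "lean": "right"},
    {"title": "E", "lean": "center"},
]
picked = balanced_sources(articles)  # A, B, E, D -- capped at 2 per lean
```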

Server

  • AWS services and Supabase

We used Supabase for user accounts and settings, including personalization settings and UUIDs. We used AWS S3 buckets for large files and for passing data among our many scripts. Each day, an AWS queue system coordinates those scripts to get a custom newsletter and podcast ready for every user, and handles async requests that take longer than a regular API endpoint could.
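The daily fan-out is essentially one queue message per user per artifact. A sketch of how those job messages are built — the message fields are illustrative, and the actual enqueue (via boto3) is commented out:

```python
import json

def fanout_jobs(user_ids: list[str], date: str) -> list[str]:
    """Build one queue message per user per artifact, so the newsletter
    and podcast builders can run independently and in parallel."""
    jobs = []
    for uid in user_ids:
        for kind in ("newsletter", "podcast"):
            jobs.append(json.dumps({"user_id": uid, "kind": kind, "date": date}))
    return jobs

jobs = fanout_jobs(["u1", "u2"], "2025-01-12")

# In the real system each message goes to an AWS queue (not run here):
# import boto3
# sqs = boto3.client("sqs")
# for body in jobs:
#     sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=body)
```

Decoupling through a queue is what lets the slow steps (scraping, generation, text-to-speech) run overnight without any request timing out.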

  • Newsletter fetching and parsing with IMAP and OpenAI API

We used IMAP to subscribe to newsletters across different topics and parse their contents into our database. We then used OpenAI models to synthesize them into one information file per topic, and extracted keywords for later use.
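Once IMAP hands back the raw message bytes, the parsing side is standard-library work. A self-contained sketch using a sample message (the fetch itself, via `imaplib`, is commented out since it needs live credentials):

```python
import email
from email import policy

# In production the raw bytes come from the mailbox, roughly:
# import imaplib
# conn = imaplib.IMAP4_SSL("imap.example.com")   # placeholder host
# conn.login(USER, PASSWORD)
# ... conn.fetch(msg_id, "(RFC822)") ...

raw = (
    b"From: digest@example-newsletter.com\r\n"
    b"Subject: Morning Tech Digest\r\n"
    b"Content-Type: text/plain; charset=utf-8\r\n"
    b"\r\n"
    b"Chipmakers rally as new fabs come online.\r\n"
)

msg = email.message_from_bytes(raw, policy=policy.default)
subject = str(msg["Subject"])
body = msg.get_content().strip()  # plaintext body fed to the synthesis step
```

The extracted subject and body are what we store and later hand to the OpenAI synthesis step.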

  • Keyword-to-article mapping with Scrapybara virtual machines, filtering out unreliable or paywalled sources

We used those keywords to scrape news websites from across the political spectrum and fed the results into our deep-research implementation. Scrapybara finds resources based on keywords extracted from the newsletters, which serve as the basis for fact-checking and extra context.
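The filtering step itself is a simple domain blocklist applied to scraped results. A sketch, with placeholder domains standing in for our hand-curated list:

```python
from urllib.parse import urlparse

# Placeholder blocklist -- the real list of unreliable and
# paywalled domains is curated by hand in our config.
BLOCKED_DOMAINS = {"paywalled-example.com", "unreliable-example.net"}

def usable_sources(urls: list[str]) -> list[str]:
    """Drop results whose domain is on the blocklist before
    they reach the deep-research step."""
    kept = []
    for url in urls:
        host = urlparse(url).netloc.lower()
        if host.startswith("www."):
            host = host[4:]
        if host and host not in BLOCKED_DOMAINS:
            kept.append(url)
    return kept

urls = [
    "https://www.paywalled-example.com/story",
    "https://goodnews-example.org/story",
]
kept = usable_sources(urls)
```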

  • Reinforcement learning scoring system for newsletter generation using OpenAI and Claude

We used a reinforcement-learning-style loop to generate and score the final user-facing newsletter, repeating until it reaches a satisfactory score. Claude 3.5 Sonnet reads and judges GPT-4 Turbo's output, feeding critiques back into the next draft to produce the best possible custom newsletter.
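The loop's shape is simple: draft, judge, feed the critique back, stop at a threshold. A sketch with the model calls stubbed out (in the real pipeline `draft_fn` wraps GPT-4 Turbo and `judge_fn` wraps Claude 3.5 Sonnet; the threshold and stubs here are illustrative):

```python
def refine_newsletter(draft_fn, judge_fn, threshold=8.0, max_rounds=5):
    """Generate-and-judge loop: redraft with the judge's feedback
    until the score clears the threshold or we run out of rounds."""
    feedback = None
    for _ in range(max_rounds):
        draft = draft_fn(feedback)          # GPT-4 Turbo in the real pipeline
        score, feedback = judge_fn(draft)   # Claude 3.5 Sonnet as judge
        if score >= threshold:
            break
    return draft, score

# Deterministic stubs for demonstration: quality "improves" each round.
state = {"round": 0}

def fake_draft(feedback):
    state["round"] += 1
    return f"draft v{state['round']}"

def fake_judge(draft):
    return 6.0 + 1.5 * state["round"], "tighten the intro"

best, score = refine_newsletter(fake_draft, fake_judge)
```

Capping the rounds matters: without `max_rounds`, a judge that never awards a passing score would loop (and bill us) forever.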

  • Deep research module implementation for exploring/fact-checking sources

We implemented a fork of Scira, an LLM tooling platform, in order to explore user-relevant topics in the newsletter and allow users to view a wide variety of sources.

  • Elevenlabs podcast generation

We use our final newsletter and the ElevenLabs API to create a podcast version of the newsletter that is included on the protected dashboard page.
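A sketch of the podcast step: turn the final newsletter text into a text-to-speech request. The endpoint shape follows ElevenLabs' public API, but the voice ID, model ID, and key here are placeholders; we only build the request without sending it:

```python
def tts_request(newsletter_text: str, voice_id: str = "VOICE_ID") -> dict:
    """Build the text-to-speech request for the podcast audio."""
    return {
        "url": f"https://api.elevenlabs.io/v1/text-to-speech/{voice_id}",
        "headers": {"xi-api-key": "ELEVENLABS_KEY"},  # placeholder key
        "json": {
            "text": newsletter_text,
            "model_id": "eleven_multilingual_v2",  # assumed model name
        },
    }

req = tts_request("Good morning, here is your briefing...")
# The real pipeline POSTs this, writes the returned MP3 to S3,
# and links it from the protected dashboard page.
```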

Challenges we ran into

Prompt engineering to perfect our fact-checking methods and content generation; optimizing our use of a virtual machine through the Scrapybara interface to navigate and scrape websites reliably; and implementing a reinforcement-learning-style loop between AI models to create interesting, natural-sounding newsletters and podcasts.

Putting together the wide variety of scripts we each worked on individually into a scalable production environment was difficult, but doable thanks to the great tools in the AWS suite. We also had some trouble figuring out the right sources to hardcode, and which sources we could trust as ground truth for a fact-checking service. This is still a work in progress, especially with regard to scaling and caching strategies, and we look forward to continuing work on it.

Accomplishments that we're proud of

We shipped a scalable, working, persistent system in 36 hours. We’re super excited about the mission, and glad we were able to work on such an interesting project.

What we learned

We learned a lot about working in a team in a production environment, breaking things, merging the wrong branches, and never giving up. We also used many new tools we had never touched, including innovative services like Scrapybara.

What's next for Axon

We don’t want to stop here. There’s a data crisis brewing, and we believe a solution like ours is what’s needed to truly provide perspective on world events accurately and efficiently. Our next steps include building pipelines for a spectrum of sites and sources on current topics, and adapting even more to what users want to see each morning, perfecting the personalized experience. By providing a customizable daily dose of truth, we want to unify a splintered public.
