Brieflit — Hackathon Submission

Tagline / Elevator Pitch

Brieflit algorithmically condenses 600-page classic masterpieces to 10% of their length, strictly preserving the author's original prose, dialogue, and voice, so busy readers can conquer the classics over a single lunch break.


Inspiration

In today's hyper-fast world, few readers have time for a sprawling classic like Frankenstein or Pride and Prejudice. Most people fall back on 5-bullet-point Wikipedia summaries or generic study guides, but those summaries destroy the author's precise voice, the emotional depth of each scene, and the beautiful dialogue that made the books famous in the first place.

We were inspired to use AI not to replace the author, but to edit them. We wanted to build a "Reader's Digest Condensed Books" for the modern digital era—an application that intelligently maps out the narrative and cuts out the filler, leaving behind a perfectly intact, miniature diamond of the original text.

What it does

Brieflit is an AI-powered digital library and reading application built specifically for condensed literature.

  • The Library: Users navigate a beautiful, lightning-fast UI to discover compressed masterpieces.
  • The Engine: Our custom 4-pass AI pipeline downloads raw novels directly from Project Gutenberg, maps the scenes, calculates compression ratios, and condenses the text while programmatically verifying that all dialogue is kept verbatim (a simplified verification sketch follows this list).
  • The "Reader's Lens": Upon finishing a book, Brieflit dynamically provisions a DigitalOcean Agent so users can "Ask the Book" specific questions or pull character traits, entirely grounded within that specific novel's universe.

How we built it

Brieflit is a full-stack Next.js and Node.js monorepo architected entirely around the DigitalOcean ecosystem.

  • Frontend: Next.js 14 (App Router), Tailwind CSS, and shadcn/ui for a premium, distraction-free reading experience.
  • Backend: Express.js + TypeScript handling the heavy file processing, orchestrated by BullMQ running on an Upstash Redis queue (sketched after this list).
  • Database: Serverless Neon Postgres with the pgvector extension for storing semantic embeddings of each book.
  • Infrastructure: DO App Platform, DO Spaces, and DO Gradient AI (detailed below).
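As a taste of the queue layer, here is a minimal sketch of the producer/worker pair (queue and job names are illustrative; note that BullMQ requires maxRetriesPerRequest: null on the underlying ioredis connection):

```ts
import { Queue, Worker } from 'bullmq';
import IORedis from 'ioredis';

// BullMQ needs maxRetriesPerRequest: null so its blocking commands work.
const connection = new IORedis(process.env.UPSTASH_REDIS_URL!, {
  maxRetriesPerRequest: null,
});

// Producer: the Express API enqueues a book for the 4-pass pipeline and returns immediately.
export const condenseQueue = new Queue('condense-book', { connection });
await condenseQueue.add('condense', { gutenbergId: 84 }); // 84 = Frankenstein

// Consumer: a background worker runs the ~8-minute AI workload off the request path.
new Worker(
  'condense-book',
  async (job) => {
    await job.updateProgress({ pass: 1, note: 'downloading from Gutenberg' });
    // ...passes 2-4: strip boilerplate, chunk, condense, stitch with a continuity check...
  },
  { connection, concurrency: 2 },
);
```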

Challenges we ran into

The hardest challenge was preventing the AI from hallucinating or summarizing the text in its own voice. Ask a standard model to "summarize" a chapter and it responds with "This chapter is about...". We had to meticulously design a chained, 4-pass processing pipeline that explicitly forces the AI into an "editing" mode rather than a "summary" mode. Handling cross-origin real-time infrastructure (bouncing Server-Sent Events from the BullMQ pipeline inside DO App Platform out to a Next.js frontend on a different origin) also took significant architectural debugging.
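The bridge that finally worked is small; here is a stripped-down version (CORS origin and event shape simplified for illustration):

```ts
import express from 'express';
import { QueueEvents } from 'bullmq';
import IORedis from 'ioredis';

const app = express();
const connection = new IORedis(process.env.UPSTASH_REDIS_URL!, { maxRetriesPerRequest: null });
const queueEvents = new QueueEvents('condense-book', { connection });

app.get('/pipeline/stream', (req, res) => {
  // Explicit CORS + SSE headers so the cross-origin Next.js frontend can subscribe.
  res.writeHead(200, {
    'Access-Control-Allow-Origin': process.env.FRONTEND_ORIGIN ?? '*',
    'Content-Type': 'text/event-stream',
    'Cache-Control': 'no-cache',
    Connection: 'keep-alive',
  });

  // Relay BullMQ progress events to the browser as SSE messages.
  const onProgress = ({ jobId, data }: { jobId: string; data: unknown }) => {
    res.write(`data: ${JSON.stringify({ jobId, data })}\n\n`);
  };
  queueEvents.on('progress', onProgress);

  // Detach the listener when the client disconnects so streams don't leak.
  req.on('close', () => queueEvents.off('progress', onProgress));
});

app.listen(4000);
```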

Accomplishments that we're proud of

We are incredibly proud of the 4-Pass AI Pipeline. Building an asynchronous background worker that can rip a raw .txt file from Project Gutenberg, strip the legal license boilerplate with custom regex, slice the book into token-safe 15,000-word chunks, process them concurrently, and stitch them back together with a continuity check was a massive engineering feat for a single hackathon.
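The heart of that ingestion step fits in a few lines. A simplified version (the marker regexes cover the standard Project Gutenberg header and footer; 15,000 words was our empirically token-safe chunk size):

```ts
// Project Gutenberg texts are wrapped in "*** START OF ... ***" / "*** END OF ... ***" markers.
const START_RE = /\*\*\*\s*START OF (?:THE|THIS) PROJECT GUTENBERG EBOOK.*?\*\*\*/i;
const END_RE = /\*\*\*\s*END OF (?:THE|THIS) PROJECT GUTENBERG EBOOK.*?\*\*\*/i;

// Keep only the novel itself, dropping the legal header and footer.
function stripGutenbergBoilerplate(raw: string): string {
  const startMatch = raw.match(START_RE);
  const start = startMatch ? raw.indexOf(startMatch[0]) + startMatch[0].length : 0;
  const end = raw.search(END_RE);
  return raw.slice(start, end >= 0 ? end : undefined).trim();
}

// Slice the book into ~15,000-word chunks so each fits safely inside a prompt.
function chunkByWords(text: string, wordsPerChunk = 15_000): string[] {
  const words = text.split(/\s+/);
  const chunks: string[] = [];
  for (let i = 0; i < words.length; i += wordsPerChunk) {
    chunks.push(words.slice(i, i + wordsPerChunk).join(' '));
  }
  return chunks;
}
```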

What we learned

We learned that context windows are not a silver bullet. Even with large context windows available in Llama 3 70B and Gemini, stuffing an entire book into a prompt degrades the AI's ability to follow strict prose-preservation rules. We learned the immense value of programmatic chunking and the absolute necessity of robust asynchronous queues (BullMQ) to prevent our Express servers from dropping connections during 8-minute AI workloads.

What's next for Brieflit

We plan to grow Brieflit into a fully monetized platform. Next steps include implementing Razorpay subscriptions to gate premium condensed titles, building an "Author Chat" feature using grounded DO Agents so readers can ask "Mary Shelley" herself about her inspirations, and expanding our custom Gutenberg pipeline to non-fiction historical texts.


Our Experience Building with DigitalOcean & Gradient™ AI

Deploying an application with this sheer level of asynchronous complexity and AI integration usually requires stringing together a dozen disconnected services. DigitalOcean allowed us to unify our entire architecture under a single, cohesive developer experience.

Architecture Breakdown:

  1. DigitalOcean App Platform: We deployed our heavily asynchronous Node.js backend natively. Because App Platform correctly handles long-lived connections and auto-scales, our Express server easily maintained open Server-Sent Event (SSE) streams, piping real-time BullMQ pipeline metrics directly to our admin frontend without timing out behind an aggressive proxy.
  2. DigitalOcean Spaces: As the backbone of our library's CMS, DO Spaces securely hosts and delivers the high-resolution book cover assets pulled during the Gutenberg ingestion pipeline. The S3-compatible structure allowed us to immediately integrate @aws-sdk/client-s3 without any friction.
  3. DO Gradient Inference (llama3-70b-instruct): We built our flagship "Ask the Book" feature directly on DO Inference. Because the llama3-70b model runs natively on DO's low-latency infrastructure, readers get near-instant chat replies about the text without the round-trip latency of routing to external AI providers (an illustrative sketch follows this list).
  4. DO Gradient Agents: Rather than building a clumsy, manual RAG (Retrieval-Augmented Generation) pipeline from scratch, our background worker taps the DO Agents API in its final post-processing step. The moment a book finishes compression, Brieflit programmatically provisions a DO Agent grounded strictly in that specific book's compressed text, keeping the AI inside the narrative's universe instead of hallucinating beyond it.
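For flavor, the reader-facing chat reply is a single OpenAI-compatible chat completion against DO Inference. The sketch below inlines the condensed text as grounding for simplicity (in production the provisioned DO Agent handles grounding), and the base URL is an assumption about DO's serverless inference endpoint rather than canonical API usage:

```ts
import OpenAI from 'openai';

// DO's serverless inference speaks the OpenAI wire format; base URL is an assumption here.
const gradient = new OpenAI({
  apiKey: process.env.DO_INFERENCE_KEY!,
  baseURL: 'https://inference.do-ai.run/v1',
});

// "Ask the Book": ground the model in the condensed text before it answers anything.
export async function askTheBook(condensedText: string, question: string) {
  const completion = await gradient.chat.completions.create({
    model: 'llama3-70b-instruct',
    messages: [
      {
        role: 'system',
        content:
          'Answer only from the condensed novel below. ' +
          'If the answer is not in the text, say so.\n\n' + condensedText,
      },
      { role: 'user', content: question },
    ],
  });
  return completion.choices[0].message.content;
}
```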

DigitalOcean gave us the premium, simplified cloud experience we needed, ensuring we spent 90% of our hackathon actually building features, rather than debugging Kubernetes clusters.

Built With

Next.js 14 (App Router), TypeScript, Express.js, Tailwind CSS, shadcn/ui, BullMQ, Upstash Redis, Neon Postgres (pgvector), DigitalOcean App Platform, DigitalOcean Spaces, DigitalOcean Gradient AI (Inference + Agents), Project Gutenberg