Inspiration

Standard LLMs suffer from "Context Amnesia" when writing long-form content. After 5,000 words, they lose the plot, hallucinate details, and break character. We wanted to build a system that doesn't just "chat," but "engineers"—an autonomous assembly line that can write a 250-page book without losing a single thread. We realized that to achieve human-level manuscript generation, we needed to mimic a real publishing house.

What it does

Scriptirix AI is a full-stack, multi-agent manuscript engine. It takes a simple user idea and passes it through a "Neural Assembly Line":

  1. Planning: The AI architects a highly detailed JSON blueprint, plotting out chapters, scenes, and character arcs.
  2. Writing: The AI generates high-speed, stylistically consistent prose based on the blueprint.
  3. Quality Assurance: A separate AI agent audits the prose, scoring it and forcing rewrites if it fails to meet genre standards.
  4. Memory: Every generated chapter is embedded into a vector database, allowing the AI to recall exact details from Chapter 1 while writing Chapter 50.
  5. Marketing: Once the book is finished, a specialized agent generates a full social media, SEO, and back-cover marketing suite.
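The five stages above can be sketched as a plain sequential pipeline. This is a minimal illustration, not the actual Scriptirix API: each stage is passed in as a callable, where in production those callables would be the Nova- and Titan-backed agents.

```python
# Minimal sketch of the five-stage assembly line. The stage callables
# are illustrative stand-ins for the production AI agents.

def run_pipeline(idea, plan, write, audit, remember, market):
    blueprint = plan(idea)                    # 1. Planning -> JSON blueprint
    manuscript = []
    for chapter_spec in blueprint["chapters"]:
        draft = write(chapter_spec)           # 2. Writing
        final = audit(draft)                  # 3. Quality Assurance
        remember(final)                       # 4. Memory (vector store)
        manuscript.append(final)
    return manuscript, market(manuscript)     # 5. Marketing suite
```

The key property is that every stage consumes the previous stage's structured output rather than a shared free-form chat history.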

How we built it (The Technical Deep Dive)

  • The Multi-Agent Assembly Line (Powered by Amazon Nova): Instead of relying on a single prompt, Scriptirix utilizes a multi-agent topology where AI agents build upon each other’s results sequentially. We mapped specific tasks to optimal Nova models. Standard generation tasks (bulk creative prose) are routed to Amazon Nova Lite for high speed and creative fluency. Premium tasks requiring complex reasoning (architecting the JSON blueprint, deep logic auditing) are routed to Amazon Nova Pro.
  • The Iterative QA Loop: When a Chapter Director agent (Nova Lite) drafts a section, it is not immediately saved. It is passed to the Quality Assurance Department (Nova Pro). The QA agent acts as a strict editor, scoring the draft out of 100 based on pacing, logic, and "show, don't tell" rules. If the draft scores below 75, Nova Pro generates a "Surgeon's Prescription" of defects, forcing Nova Lite into a conversational rewrite. This loop iterates up to 3 times to ensure a polished final product.
  • Asynchronous Orchestration & Resilience: Generating a 250-page book takes hours, far beyond what a standard HTTP request-response cycle can hold open. The Django REST Framework (DRF) API gateway therefore acts purely as a dispatcher; the heavy lifting is offloaded to Celery worker nodes using Redis as the message broker.
  • Distributed Redis Locks: To prevent race conditions during highly concurrent operations (like syncing generated chapters to Google Drive simultaneously), we implemented distributed Mutex locks (self.redis_client.lock) to force threads into a safe, single-file line.
  • Zombie Recovery Watchdog: If a Celery worker dies mid-generation, the AI engine pulses a heartbeat to Redis. A background Celery beat task scans these heartbeats; if a project hasn't pulsed in 45 minutes, it catches the "Zombie" process, safely terminates the lock, and allows the user to seamlessly resume via idempotent HTML markers.
  • "The Archivist" (Titan + pgvector RAG): Completed chapters are semantically chunked using NLTK, embedded via Amazon Titan Text Embeddings v2, and stored in PostgreSQL using the pgvector extension for cosine-distance similarity search.
  • The Narrative State Machine: To maintain immediate plot awareness, we treat the story mathematically as $S_{t+1} = f(S_t, C_t)$. After every chapter ($C_t$), a lightweight model extracts the "Delta State" ($S_{t+1}$)—unresolved mysteries, active tension, and character inventory. This strict JSON state is injected into the prompt of the next chapter, ensuring the AI never hallucinates plot holes.
  • Real-Time WebSockets & Security: As the Celery workers execute the Nova agents, they stream the AI's internal reasoning and output directly to the client in real-time using Django Channels (Daphne ASGI). We secured the REST API using strict JWT token rotation and implemented Google OAuth2 for social login and Google Drive integration. To secure the WebSockets, the client requests a short-lived (60-second), single-use UUID ticket via REST before connecting.
  • The Native Android Client: Built entirely in Kotlin with Jetpack Compose, the client uses Retrofit2 and OkHttp3 to handle encrypted HTTPS/WSS connections. The complex, deeply nested JSON blueprints generated by Nova Pro are parsed with GSON into reactive UI state, letting users edit or regenerate chapters directly on their devices.
  • Architectural Philosophy, Escaping the 17x Multi-Agent Trap: Most multi-agent systems fail from Compound Reliability Decay, where small errors in early agents cascade into "confident nonsense" later in the chain. Scriptirix AI was engineered to bypass this trap with three safeguards:
  • Structured Topology: We avoided the "bag of agents" approach, using a Supervisor-Worker pattern in which Amazon Nova Pro acts as a high-reasoning auditor over Nova Lite's creative output.
  • Stateful Contracts: By using a Narrative State Machine to extract JSON delta-states after every chapter, we ensure an explicit contract between agents, preventing the 17.2x error amplification common in unstructured LLM pipelines.
  • Deterministic Guardrails: Our 3-step iterative QA loop acts as a circuit breaker, ensuring that only prose meeting a >75/100 quality threshold moves downstream.
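The iterative QA loop described above can be sketched as a draft-audit-rewrite cycle. Here `draft_fn` and `audit_fn` are placeholders for the Nova Lite writer and Nova Pro auditor calls; the 75/100 threshold and the three-pass cap are the values from the write-up.

```python
# Sketch of the Writer/Auditor QA loop. draft_fn and audit_fn stand in
# for the Nova Lite and Nova Pro model calls respectively.

QA_THRESHOLD = 75   # minimum acceptable score out of 100
MAX_PASSES = 3      # cap on audit/rewrite iterations

def qa_loop(spec, draft_fn, audit_fn):
    draft = draft_fn(spec, feedback=None)
    score = 0
    for _ in range(MAX_PASSES):
        score, prescription = audit_fn(draft)  # the "Surgeon's Prescription"
        if score >= QA_THRESHOLD:
            break
        # Feed the defect list back for a conversational rewrite
        draft = draft_fn(spec, feedback=prescription)
    return draft, score
```

The loop acts as a circuit breaker: only drafts clearing the threshold (or surviving the pass cap) move downstream.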
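The distributed-lock bullet above relies on redis-py's `lock()` helper; the toy class below illustrates the underlying set-if-absent-with-owner-token pattern, with a plain dict standing in for Redis so the sketch runs without a server.

```python
# Toy mutex illustrating the pattern behind Redis distributed locks.
# A shared dict stands in for the Redis instance.
import uuid

class TinyLock:
    def __init__(self, store, name):
        self.store, self.name = store, name
        self.token = uuid.uuid4().hex   # unique owner token

    def acquire(self):
        # Equivalent of SET name token NX: succeeds only if nobody holds it
        if self.name not in self.store:
            self.store[self.name] = self.token
            return True
        return False

    def release(self):
        # Only the holder of the matching token may release the lock
        if self.store.get(self.name) == self.token:
            del self.store[self.name]

store = {}  # stands in for the shared Redis instance
worker_a = TinyLock(store, "drive-sync")
worker_b = TinyLock(store, "drive-sync")
```

The owner token is what makes the lock safe: a second worker can neither acquire nor release a lock it does not hold.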
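The zombie-recovery watchdog above reduces to a heartbeat table plus a reaper. In this sketch, dicts stand in for the Redis keys and `reap_zombies` stands in for the periodic Celery beat task; the 45-minute staleness window is the value from the write-up.

```python
# Sketch of the heartbeat watchdog: workers stamp a per-project key,
# and a periodic reaper frees the locks of projects gone silent.
import time

STALE_AFTER = 45 * 60  # seconds of silence before a project is a "zombie"

def pulse(heartbeats, project_id, now=None):
    # Called periodically by the worker while it is generating
    heartbeats[project_id] = time.time() if now is None else now

def reap_zombies(heartbeats, locks, now=None):
    # Periodic beat task: find silent projects and free their locks
    now = time.time() if now is None else now
    dead = [p for p, last in heartbeats.items() if now - last > STALE_AFTER]
    for project in dead:
        locks.pop(project, None)      # safely terminate the stale lock
        del heartbeats[project]       # project can now be resumed
    return dead
```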
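The Archivist's retrieval step can be sketched in plain Python. In production the query vector would come from Titan Text Embeddings v2 and the ranking would be done in SQL via pgvector's cosine-distance operator (`<=>`); the functions below mimic that metric without a database.

```python
# Stand-in for pgvector cosine-distance retrieval over chapter chunks.
import math

def cosine_distance(a, b):
    # Same metric as pgvector's <=> operator: 1 - cosine similarity
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return 1 - dot / norm

def nearest_chunks(query_vec, chunks, k=3):
    # chunks: (text, embedding) rows, one per semantically chunked passage
    return sorted(chunks, key=lambda c: cosine_distance(query_vec, c[1]))[:k]
```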
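The Narrative State Machine above amounts to folding each chapter's extracted delta into the running state and injecting the result into the next prompt. The field names below (`open_mysteries`, `tension`) are illustrative, not the engine's actual schema.

```python
# Sketch of S_{t+1} = f(S_t, C_t): merge a chapter's delta-state into
# the running narrative state, then inject it into the next prompt.
import json

def advance_state(state, delta):
    nxt = dict(state)
    nxt["open_mysteries"] = [m for m in state["open_mysteries"]
                             if m not in delta.get("resolved", [])]
    nxt["open_mysteries"] = nxt["open_mysteries"] + delta.get("introduced", [])
    nxt["tension"] = delta.get("tension", state["tension"])
    return nxt

def build_prompt(chapter_brief, state):
    # The strict JSON state rides ahead of the next chapter's brief
    return f"STATE: {json.dumps(state, sort_keys=True)}\nWRITE: {chapter_brief}"
```

Because each transition is an explicit JSON merge rather than an implicit chat history, the contract between agents stays inspectable and debuggable.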
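The single-use WebSocket ticket flow above can be sketched as an issue/redeem pair. A dict stands in for the Redis-backed ticket store, and the 60-second TTL is the value from the write-up.

```python
# Sketch of single-use WebSocket ticket auth: REST mints a short-lived
# UUID; the socket handshake consumes it exactly once.
import time
import uuid

TICKET_TTL = 60  # seconds a ticket stays valid

def issue_ticket(store, user_id, now=None):
    # REST endpoint: mint a short-lived ticket for an authenticated user
    now = time.time() if now is None else now
    ticket = uuid.uuid4().hex
    store[ticket] = (user_id, now + TICKET_TTL)
    return ticket

def redeem_ticket(store, ticket, now=None):
    # WebSocket handshake: pop() makes the ticket strictly single-use
    now = time.time() if now is None else now
    user_id, expires = store.pop(ticket, (None, 0.0))
    return user_id if now < expires else None
```

This keeps long-lived JWTs out of WebSocket URLs: the socket only ever sees an opaque, expiring, one-shot token.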

Challenges we ran into

  1. The "State" of a Stateless API: Managing the context of a 50-chapter book across decoupled, asynchronous tasks was incredibly difficult. We solved this by building the aforementioned Narrative State Machine and combining it with the Titan-backed RAG memory.
  2. AWS Infrastructure Constraints: A temporary administrative hurdle in verifying a new AWS account (one that will take longer than the hackathon deadline to resolve) prevented us from deploying our backend directly into an AWS VPC. As a result, we couldn't lean on surrounding AWS services such as Amazon RDS or AWS Fargate as deeply as we wanted. We did, however, successfully use the Amazon Nova and Titan models via API to power the core reasoning engine.

Accomplishments that we're proud of

  • Successfully shifting our entire multi-agent logic to Amazon Nova and seeing a massive jump in reasoning quality.
  • The Self-Correcting QA Loop, which creates a genuine conversation between two AIs (Writer and Auditor) to ensure the user gets a production-ready final product.
  • Building an enterprise-grade backend with a Zombie Recovery Protocol that ensures an AI generation task can survive server restarts or network drops.

What we learned

We learned that "Agentic Topologies" are far superior to singular prompts. By breaking the task into departments (Planning, Writing, QA), we can play to the specific strengths of each Nova model size, saving tokens while vastly improving output quality.

What's next for Scriptirix AI

Scriptirix is evolving into a universal content suite. Now that the Nova-powered foundation is built, we are working on:

  • Deep AWS Integration: Once our AWS account is fully provisioned, we plan to migrate our PostgreSQL database to Amazon RDS, our workers to AWS Fargate, and utilize Amazon Bedrock directly to leverage the ecosystem's full potential.
  • Audiobook Integration: Converting manuscripts to high-fidelity audio via Amazon Polly.
  • Multimodal Illustration: Using Amazon Nova Canvas to generate context-accurate book covers and interior art based on the Narrative State.
  • Vertical Expansion: Re-tuning the Assembly Line to act as an AI math solver, a rigorously cited academic essay writer, and a multi-agent-powered article/blog writer.

Testing it live

  • To test it live, join the Google Group, opt in to the app's test track, and install it from Google Play. Links are given below!
  • To review just the code, see the GitHub repo.

Built With
