Nova Code — Devpost Project Report
Inspiration
Amazon Nova models are uniquely good at reasoning. The configurable extended thinking feature lets the model slow down, work through hard problems step by step, and catch things a faster pass would miss. That felt like a perfect fit for a coding assistant — not just autocomplete, but a tool that can actually think through your codebase with you.
The goal was simple: take that reasoning power and wrap it in something a developer can use without friction — both from the terminal and directly inside VS Code. No third-party API keys, no cloud middleware. Just you, your AWS account, and a model that can read your files, run your tests, and help you ship faster.
What It Does
Nova Code is an AI coding assistant powered by Amazon Bedrock Nova models. It comes in two flavors:
Python CLI (nova command)
- nova chat — a full interactive REPL where you can have a multi-turn conversation with the model. Sessions are saved automatically per project directory and can be resumed later.
- Extended thinking on demand (--thinking low/medium/high/auto) for harder problems.
- Auto-approve mode to let the agent run fully autonomously.
VS Code Extension
- A chat panel that opens alongside your editor, so you never leave your workspace.
- Automatically passes your open file and any selected text as context with every message — no manual attachment needed.
- Approve or reject every file edit before it's applied, with a color-coded diff (green for additions, red for deletions), including an expand toggle for large changes.
- Full session management — resume any past conversation from a quick-pick list.
- Thinking mode selector (Off / Auto / Low / Medium / High) right in the chat header.
- Credentials stored securely via VS Code's native SecretStorage (OS keychain), never in plain text.
Built-in Agent Tools
The model can autonomously use a suite of tools to work inside your codebase:
| Category | Tools |
|---|---|
| File system | read_file, write_file, edit_file, multi_edit, glob_files, grep, list_directory, bash |
| Web | web_search (DuckDuckGo), web_fetch |
| Notebooks | notebook_read, notebook_edit |
| Task tracking | todo_read, todo_write |
All file-editing tools require your explicit approval before changes are applied (unless auto-approve is on). Read-only tools run automatically.
Project instructions via NOVA.md — drop a NOVA.md in your project root and Nova will pick up your conventions, stack preferences, and any persistent context automatically.
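For example, a NOVA.md might read like this (the file is free-form prose; the conventions below are invented for illustration):

```markdown
# Project conventions (example)

- Python 3.11, strict type hints, `ruff` for linting
- Tests live in `tests/`, run with `pytest -q`
- Prefer small, focused commits; never edit generated files in `build/`
```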
How We Built It
The project has three layers that work together:
1. Core (Python)
The brains of the operation. ChatSession manages conversation history, the LangChain agent loop, and tool execution. It communicates tool approvals back to the caller through a TurnCallbacks interface, so the same core logic drives both the CLI and the extension without duplication.
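The callback seam can be sketched like this. The names and signatures are illustrative, not Nova Code's actual interface, but they show how one core loop can drive both a terminal prompt and a webview approval dialog:

```python
from abc import ABC, abstractmethod

class TurnCallbacks(ABC):
    """Frontend-agnostic hooks the core calls during an agent turn.
    (Illustrative sketch; the real interface may differ.)"""

    @abstractmethod
    def on_text_chunk(self, text: str) -> None:
        """Called as the model streams tokens."""

    @abstractmethod
    def request_tool_approval(self, tool_name: str, diff: str) -> bool:
        """Return True to run the tool, False to reject it."""

class AutoApprove(TurnCallbacks):
    """Example implementation: print chunks, approve every tool call,
    roughly what an auto-approve mode would plug in."""

    def on_text_chunk(self, text: str) -> None:
        print(text, end="", flush=True)

    def request_tool_approval(self, tool_name: str, diff: str) -> bool:
        return True
```

The CLI would supply an implementation that prompts in the terminal; the extension supplies one that forwards the request over stdio and waits for the webview's answer.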
2. CLI (Python / Click + Rich)
A thin wrapper around the core. It uses Rich for live Markdown rendering as the model streams tokens, Click for the command interface, and standard asyncio for the agent loop. Sessions are stored as JSON files keyed to the current working directory so each project has its own history.
3. VS Code Extension (TypeScript)
The extension spawns the Python nova serve process as a child process and communicates with it over a newline-delimited JSON protocol on stdin/stdout. Every user action (send message, approve tool, switch thinking mode) becomes a JSON message written to stdin. Every model event (text chunk, tool approval request, session data) comes back as a JSON message on stdout. The chat UI is a VS Code WebviewPanel with vanilla JS, marked.js for Markdown rendering, and VS Code's own CSS variables for theming.
This stdio protocol design means the Python backend is completely decoupled from VS Code — it's the same process the CLI uses, just invoked in server mode.
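A minimal version of such a JSON-lines loop looks like this; the message shapes (`send`, `chunk`) are invented for the example, not Nova's actual protocol:

```python
import json
import sys

def serve(handle):
    """Newline-delimited JSON loop: one request per stdin line,
    zero or more events per request on stdout. (Sketch of the idea.)"""
    for line in sys.stdin:
        line = line.strip()
        if not line:
            continue
        msg = json.loads(line)
        for event in handle(msg):
            sys.stdout.write(json.dumps(event) + "\n")
            sys.stdout.flush()  # flush so the extension sees events immediately

def handle(msg):
    """Example handler: echo text chunks for a hypothetical "send" message."""
    if msg.get("type") == "send":
        yield {"type": "chunk", "text": f"echo: {msg['text']}"}

if __name__ == "__main__":
    serve(handle)
```

Because the transport is just stdin/stdout, the same backend can be exercised from a shell pipe or a test harness with no VS Code involved.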
Challenges We Ran Into
- Race condition between the webview and the backend — early messages were silently dropped before the UI finished loading
- File links opening in the wrong editor column — clicking a file path from the chat panel didn't behave as expected
- Making multi-edit operations atomic — ensuring multiple edits to the same file either all succeed or all roll back
- Balancing the tool approval UX — showing enough context for the user to make a decision without cluttering the chat
- Glob pattern support in the grep tool — certain common patterns weren't handled by Python's standard library
Accomplishments We're Proud Of
Extended thinking that actually integrates with your workflow
Most tools that expose "thinking" make it a hidden background thing. In Nova Code it's a first-class control — a toggle in the VS Code header and a CLI flag. You can switch modes mid-session. When thinking is active, the status bar tells you. It changes how you use the tool: turn it off for quick lookups, set it to high when you're debugging something genuinely hard.
A real tool approval system with diffs
The approval flow isn't just "yes or no." You see exactly what's about to change, in green and red, before it's applied. For multi-file edits (multi_edit), each edit gets its own diff block. You can reject a tool and give the model a redirect instruction — "skip this, do X instead" — without cancelling the whole turn.
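A rough sketch of how such a colored diff can be produced with Python's standard `difflib`, with ANSI colors standing in for the webview's styling:

```python
import difflib

GREEN, RED, RESET = "\033[32m", "\033[31m", "\033[0m"

def colored_diff(old: str, new: str, name: str = "file") -> str:
    """Render a unified diff with green additions and red deletions,
    roughly what an approval prompt shows. (Illustrative sketch.)"""
    lines = difflib.unified_diff(
        old.splitlines(), new.splitlines(),
        fromfile=f"a/{name}", tofile=f"b/{name}", lineterm="",
    )
    out = []
    for line in lines:
        if line.startswith("+") and not line.startswith("+++"):
            out.append(GREEN + line + RESET)   # addition
        elif line.startswith("-") and not line.startswith("---"):
            out.append(RED + line + RESET)     # deletion
        else:
            out.append(line)                   # context and headers
    return "\n".join(out)
```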
The stdio protocol
Decoupling the Python backend from the VS Code extension via a JSON-lines stdio protocol turned out to be one of the cleanest decisions in the project. It meant the CLI and the extension share the exact same agent, tools, and session logic — zero duplication. It also makes the backend completely testable in isolation.
Session persistence per project
Sessions are automatically namespaced by the current working directory. Switch between projects and your conversation history follows. Resume any past session by timestamp. It makes Nova feel less like a one-shot chatbot and more like a persistent collaborator.
What We Learned
VS Code extension development is a world of its own
Going into this project with no prior TypeScript or VS Code extension experience, the learning curve was steep. The webview sandbox is strict — no Node.js APIs, no direct filesystem access, everything goes through message passing. The Content Security Policy blocks inline resources unless you explicitly whitelist them. The ViewColumn and localResourceRoots APIs behave in non-obvious ways (the file-link bug is a perfect example of this). VS Code's excellent documentation helped, but a lot of the real behavior only becomes clear when something breaks in production.
Coding tool design is genuinely hard
The gap between "an AI that can answer questions about code" and "an AI that can safely modify a codebase" is enormous. You have to think about: what does the user need to see before approving? what happens when a tool fails halfway through? what if the model calls the wrong edit on the wrong file? how do you cancel a long-running turn cleanly? Every one of those is a real design and engineering problem. Building this gave us a much deeper appreciation for what tools like Claude Code, Cursor, and Copilot have had to solve at scale.
Streaming UX matters more than you'd expect
Watching a response appear token by token feels qualitatively different from waiting for a complete response. Getting the Rich live display right in the CLI and the streaming Markdown rendering right in the extension were small details that made the tool feel alive rather than sluggish.
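A minimal sketch of the streaming pattern using Rich's `Live` display; the chunks here are simulated rather than streamed from Bedrock:

```python
# Assumes the `rich` package is installed (pip install rich).
from rich.live import Live
from rich.markdown import Markdown

def stream_markdown(chunks) -> str:
    """Accumulate streamed text and re-render it as Markdown on each chunk,
    so partial formatting appears live instead of after the full response."""
    buffer = ""
    with Live(refresh_per_second=8) as live:
        for chunk in chunks:
            buffer += chunk
            live.update(Markdown(buffer))
    return buffer

if __name__ == "__main__":
    demo = ["# Resu", "lt\n\nThe fix is in `", "session.py`.\n"]
    stream_markdown(demo)
```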
What's Next for Nova Code
AWS MCP Integration
The most natural next step is connecting Nova Code to the AWS Model Context Protocol (MCP) server, giving the agent live access to your AWS infrastructure. Query CloudWatch logs, describe running ECS tasks, check S3 bucket contents, inspect Lambda function configs — all from the chat panel. For developers building on AWS, this would close the loop between writing the code and understanding what's happening in prod.
Smarter context awareness
Right now the agent gets the open file and selection as context. The next level is automatic project-wide context — feeding in the dependency graph, recent git changes, open GitHub issues, and error logs so Nova already knows the shape of your project before you type a word.
Voice input
Quick voice-to-text for the chat input. Useful when you want to describe a problem faster than you can type it.
Inline diff in the editor
Instead of showing diffs inside the chat panel, surface proposed edits directly as VS Code inline diff decorations — the same style you see in a git diff. Accept or reject individual hunks without leaving the file you're editing.
Test generation from coverage reports
Point Nova at your Jest or pytest coverage output and have it automatically write tests for uncovered lines. The tool infrastructure is already there; it's a matter of adding a coverage-aware prompt layer on top.
Built With
- amazon-web-services
- bedrock
- langchain
- python
- typescript