Inspiration

We built NemoStage because presenting well is harder than just getting through a slide deck. A speaker has to track timing, audience reactions, questions, and whether they are drifting from the main point, all while still sounding natural. We wanted a control plane for presentations: one that gives the presenter useful AI help without getting in the way.

What it does

NemoStage is a desktop presentation assistant for NemoClaw. You can upload a PowerPoint deck, send it to a backend running on a DGX Spark, and get AI-powered help understanding and presenting the content. The backend also has early support for live transcript analysis and audience engagement signals, so the system can eventually help with slide tracking and off-slide detection, and surface audience confusion, interest, and questions.
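To make the engagement signals concrete, here is a minimal sketch of how an utterance from a live transcript might be bucketed into the categories above. Everything here is hypothetical (the real analysis runs through NemoClaw agents, not a keyword heuristic); it only illustrates the kinds of signals the system aims to surface.

```python
from enum import Enum

class Signal(Enum):
    """Hypothetical audience-signal categories NemoStage aims to surface."""
    ON_SLIDE = "on_slide"
    OFF_SLIDE = "off_slide"
    QUESTION = "question"

def classify_utterance(text: str, slide_keywords: set[str]) -> Signal:
    """Toy heuristic: flag questions first, then check overlap with the
    current slide's keywords to detect drifting off-slide."""
    if text.rstrip().endswith("?"):
        return Signal.QUESTION
    words = set(text.lower().split())
    return Signal.ON_SLIDE if words & slide_keywords else Signal.OFF_SLIDE
```

In the real system this decision would come from the transcript-analysis agent; the sketch just shows the shape of the output the UI would consume.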

How we built it

We built the frontend as an Electron, React, and TypeScript desktop app. The backend is a FastAPI server running on an ASUS DGX Spark. Uploaded PowerPoint files are handled by the backend, placed into a sandbox, and passed to NemoClaw agents through the OpenClaw gateway. The system uses specialized agents for the main presentation assistant, live transcript analysis, and audience engagement analysis.
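The backend flow above (receive an uploaded deck, stage it into a sandbox, route it to a specialized agent) can be sketched roughly as follows. All function and agent names here are hypothetical stand-ins, not the actual NemoClaw or OpenClaw API:

```python
import shutil
from pathlib import Path

# Hypothetical task-to-agent routing table; the real system dispatches
# through the OpenClaw gateway to NemoClaw agents.
AGENTS = {
    "presentation": "main-presentation-assistant",
    "transcript": "live-transcript-analysis",
    "engagement": "audience-engagement-analysis",
}

def stage_upload(src: Path, sandbox: Path) -> Path:
    """Copy an uploaded deck into the agent sandbox directory."""
    sandbox.mkdir(parents=True, exist_ok=True)
    dest = sandbox / src.name
    shutil.copy2(src, dest)
    return dest

def pick_agent(task: str) -> str:
    """Select which specialized agent should handle a given task."""
    if task not in AGENTS:
        raise ValueError(f"unknown task: {task}")
    return AGENTS[task]
```

The point of the sketch is the separation of concerns: the FastAPI layer only stages files and picks a route, while the agents do the actual analysis inside the sandbox.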

Challenges we ran into

Much of the challenge was getting the local desktop UI, remote backend services, sandboxed agent execution, and model gateways to talk to each other reliably. We had to be careful about which gateway was used for which task, how files moved between the laptop, the backend, and the sandbox, and how to make agent calls work from the host environment. Debugging cold starts and long-running agent responses also took some patience.

Accomplishments that we're proud of

We are proud that NemoStage is more than a mockup. It has a real desktop app, a real FastAPI backend, remote deployment to the DGX Spark, file upload support, and working agent calls through NemoClaw. We also built the system with room to grow into live presentation support rather than only static slide analysis.

What we learned

We learned a lot about connecting desktop software to remote AI infrastructure, working with sandboxed agents, and designing a system where the UI stays simple while the backend handles complex orchestration. We also learned that presentation assistance needs to be fast, calm, and practical. The AI has to help the speaker stay present, not distract them.

What's next for NemoStage

Next, we want to fully wire the live transcript and audience agents into the UI, add real-time slide tracking, and give presenters useful signals during a talk without overwhelming them. We also want to improve the upload flow, make agent responses faster, and build a smoother dashboard for reviewing deck feedback, audience questions, and presentation performance after a session.
