Inspiration
The transition to AI-driven operations presents a massive security paradox. On one hand, large language models can parse intent and automate workflows at unprecedented speeds. On the other, allowing an autonomous agent direct access to high-stakes infrastructure like payment gateways or database provisioning is a catastrophic security risk.
I was inspired by the need for a "middle ground." I wanted to build a system that possessed the conversational fluidity and speed of an AI agent but was grounded by strict, immutable human governance. AuraOps was born from the idea that the best operational tools shouldn't replace the human administrator; they should empower them through a secure, frictionless "human-in-the-loop" architecture.
What it does
AuraOps transforms Slack from a standard communication tool into a highly secure, AI-driven command center for enterprise operations. Instead of switching between multiple dashboards to manage infrastructure, team members can simply send a natural language command to @AuraOps in Slack.
Currently, the agent is capable of:
Infrastructure Management: Automatically parsing parameters to provision digital wallets.
Financial Operations: Processing transactions and managing funds securely via the Paystack API.
Automated Scheduling: Interpreting time-based intents to seamlessly book and manage meetings via the Google Calendar API.
Crucially, AuraOps does not blindly execute. For any action flagged as "high-stakes," it halts the process and generates a secure Auth0 step-up authentication link. The operation is only executed after an authorized administrator logs into a React web dashboard and manually clicks "Approve."
How we built it
AuraOps is built on a highly decoupled, tripartite architecture that separates intent generation from authorization and execution:

The Interface (Slack & Bolt): We use Slack as the primary conversational UI. Users interact with the @AuraOps bot using natural language.
The Brain (FastAPI & Gemini): The Slack events payload is routed to a Python FastAPI backend. Here, Google's Gemini 3 Flash operates as a strict intent-parser, extracting the core command and parameters into a structured JSON payload rather than conversational text.
The Security Gate (Auth0): For high-stakes operations, the system intercepts the execution. We defined our execution policy so that the execution function $E$ for a given parsed command $C$ depends strictly on the evaluated risk $\mathcal{R}(C)$ and the presence of step-up authorization $A_{step}$:

$$E(C) = \begin{cases} \text{API\_Call}(C) & \text{if } \mathcal{R}(C) < \tau \lor (\mathcal{R}(C) \ge \tau \land A_{step} = \text{True}) \\ \text{Reject} & \text{otherwise} \end{cases}$$

(where $\tau$ is the threshold for high-stakes operations).
The Command Center (React/Vite): If $A_{step}$ is required, the backend generates a unique secure session link and sends it back to Slack. The administrator clicks it, authenticates via Auth0, and lands on a custom React dashboard to manually review the parsed JSON and approve the execution.

The Execution Layer: Once approved, the backend releases the payload to external APIs, integrating Paystack for payment processing, Google Calendar (via Service Accounts) for scheduling, and Supabase for state management.
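The gate policy above can be sketched in a few lines of Python. The risk scores, threshold value, and function names here are illustrative assumptions, not the actual AuraOps values:

```python
# Sketch of the execution-gate policy E(C): low-risk commands run
# immediately; high-risk commands require prior step-up approval.
# RISK scores and TAU are hypothetical placeholders.
RISK = {"schedule_meeting": 0.2, "provision_wallet": 0.7, "transfer_funds": 0.9}
TAU = 0.5  # hypothetical high-stakes threshold (tau)

def execute(command: str, step_up_approved: bool = False) -> str:
    risk = RISK.get(command, 1.0)  # unknown commands get maximum risk
    if risk < TAU or (risk >= TAU and step_up_approved):
        return f"API_CALL:{command}"
    return "REJECT"
```

Note that, by construction, a command is rejected only when it is high-risk and lacks approval; everything else falls through to the API call, matching the two branches of the cases formula.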
Challenges we ran into
Bridging the gap between an asynchronous chat interface, a stateless API backend, and a secure frontend dashboard was incredibly complex.
State Management Across Domains: When a user initiates a command in Slack, the system has to "pause" that state in the FastAPI backend, generate a unique request ID, and wait for an entirely separate React frontend application to ping back with a valid Auth0 token before resuming the function.
LLM Output Determinism: Early on, the AI would occasionally append markdown formatting (like ```json) or conversational filler to its output, breaking our backend logic. We had to heavily refine our prompt engineering to force Gemini into strict, deterministic tool-calling behavior.
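Even with hardened prompts, defensive parsing helps. A sketch of the kind of guard this requires (not the exact AuraOps implementation):

```python
# Defensive parsing of LLM output: strip markdown fences and
# conversational filler before handing the string to json.loads.
import json
import re

def parse_llm_json(raw: str) -> dict:
    # Remove ```json ... ``` fences if the model added them anyway.
    cleaned = re.sub(r"^```(?:json)?\s*|\s*```$", "", raw.strip())
    # Fall back to the first {...} span if filler text remains.
    match = re.search(r"\{.*\}", cleaned, re.DOTALL)
    if match is None:
        raise ValueError("no JSON object found in model output")
    return json.loads(match.group(0))
```

Raising instead of guessing keeps the failure visible: a malformed payload surfaces as an error in Slack rather than a silently wrong API call.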
Tunneling and Webhooks: Dealing with dynamic ngrok URLs during development meant constantly updating our Slack Event Subscriptions and Auth0 callback URLs. Deploying the backend to a live Render environment was a necessary hurdle to stabilize the webhook flow.
Accomplishments that we're proud of
The Tripartite Architecture: Successfully bridging a stateless Python/FastAPI backend, a dynamic Vite/React frontend, and an asynchronous Slack bot into one cohesive loop. Getting Slack to talk to our server, having the server wait for a React frontend approval, and then completing the Slack loop is a massive architectural win.
Deterministic AI Outputs: Prompt engineering Gemini 3 Flash to perfectly and consistently extract conversational intent into strict, application-ready JSON payloads without hallucinating markdown or conversational filler.
Zero-Trust AI Execution: We successfully built a system that proves AI can be used for sensitive enterprise operations without sacrificing security, effectively solving the "rogue agent" problem through our Auth0 human-in-the-loop integration.
What we learned
Beyond mastering specific APIs like Paystack and Google Calendar, the biggest technical takeaway was designing secure OAuth 2.0 flows and step-up authentication. We learned how to properly validate webhook signatures to prevent payload spoofing and how to architect an AI agent not just for intelligence, but for enterprise-grade security.
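For Slack specifically, webhook validation means recomputing the documented `v0=HMAC-SHA256(signing_secret, "v0:{timestamp}:{body}")` signature and comparing it in constant time. A sketch with a placeholder secret:

```python
# Verify a Slack request signature to prevent payload spoofing.
# Slack signs each request over "v0:{timestamp}:{raw body}".
import hashlib
import hmac
import time

def verify_slack_signature(signing_secret: str, timestamp: str,
                           body: bytes, signature: str) -> bool:
    # Reject stale requests to blunt replay attacks (5-minute window).
    if abs(time.time() - int(timestamp)) > 60 * 5:
        return False
    basestring = f"v0:{timestamp}:".encode() + body
    expected = "v0=" + hmac.new(signing_secret.encode(), basestring,
                                hashlib.sha256).hexdigest()
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(expected, signature)
```

The raw request body must be hashed before any JSON parsing, since re-serialized JSON rarely matches the bytes Slack actually signed.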
What's next for AuraOps
We have three primary milestones to evolve AuraOps from a functional prototype into a production-grade SaaS platform:
Native Function Calling: Upgrading from static intent parsing to true autonomous tool use. Currently, our action space $\mathcal{A}$ is a rigid mapping of intents to functions. In the future state, the AI will evaluate the state $S_t$ and dynamically select the optimal tool sequence to achieve the goal $G$:

$$a_t = \pi(S_t, G), \quad \text{where } a_t \in \mathcal{A}_{tools}$$
Real-Time Dashboard Telemetry: Replacing our static HTTP polling with Server-Sent Events (SSE) or WebSockets. This will allow the React command center to update with pending Slack authorizations in real-time, reducing approval latency to milliseconds.
Multi-Tenant OAuth Distribution: Re-architecting the backend database (Supabase) to handle full Slack OAuth 2.0 flows, allowing any organization to install AuraOps in their workspace with a single click.
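For the telemetry milestone, the core of an SSE upgrade is just the wire format: each pending authorization becomes one `event:`/`data:` frame. A framework-free sketch (in FastAPI, a generator yielding these strings would back a `StreamingResponse` with `media_type="text/event-stream"`; the event name is illustrative):

```python
# Serialize one Server-Sent Event frame for the dashboard stream.
import json

def sse_event(event: str, data: dict) -> str:
    """Format an SSE frame: named event, JSON body, blank-line terminator."""
    return f"event: {event}\ndata: {json.dumps(data)}\n\n"
```

Because SSE is plain HTTP, the React command center can consume the stream with the browser-native `EventSource` API, with no WebSocket infrastructure required.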
My project is a dev tool built on Slack. To test it out, join the Slack workspace using the link in the "Try it out" section, then start the backend by clicking the backend URL and waiting for the Render server to wake. You can now instruct the Slack bot @AuraOps to carry out different tasks.
📝 BONUS BLOG POST: Surviving the Panic of Giving an LLM the Company Checkbook
I love Slack, and I love building AI agents. Naturally, I thought it would be a brilliant idea to combine the two for my hackathon project. The idea was AuraOps: an autonomous bot that sits in a Slack channel, reads natural language, and executes heavy ecosystem operations like provisioning hundreds of NFC wallets, paying vendor invoices, or scheduling events via Google Calendar. It sounded incredibly sleek until I actually started writing the backend code.
Suddenly, a terrifying thought hit me at 3:00 AM. To make this work, I was about to hand over my production API keys to an LLM. I was giving a chatbot absolute, unrestricted God-mode over a financial database. If the AI hallucinated a command, or if a team member made a careless typo in Slack, the bot could instantly drain the ecosystem's funds. I completely panicked; I spent the next few hours ripping my hair out trying to build custom middleware to restrict the AI. Everything I wrote felt hacky, insecure, and honestly, a bit miserable.
I was ready to scrap the high-stakes features entirely until I finally dug deep into the documentation for the Auth0 Token Vault for AI Agents. It was exactly the architectural lifeline I was desperately looking for. It completely shifted my dilemma from "How do I make the AI perfectly safe?" to "How do I build a zero-trust environment where the AI's mistakes don't matter?"
Here is the beautiful technical reality I ended up building. Token Vault allowed me to ruthlessly strip my FastAPI backend of all its permanent administrative credentials. Now, when I tell AuraOps to pay a ₦500,000 catering invoice, the Gemini AI does the heavy lifting of parsing the intent and drafting the API payload. But when it tries to actually execute the payment, it slams into a brick wall; it simply does not hold the transfer:funds scope.
Instead of crashing, the Token Vault intercepts the workflow. The agent shoots a message back into Slack begging for permission. I click the link, get routed to my secure Vite dashboard, and Auth0 triggers a step-up authentication. I log in, verify the details, and explicitly grant the delegated consent. Only then does Auth0 hand a temporary, scoped token back to the AI to finish the job.
That moment of seeing the step-up authentication work flawlessly was the absolute peak of my hackathon. It wasn't just a cool feature to tick a box for the judges; it was a fundamental lesson in engineering. I learned that building enterprise AI is not about trusting the bot; it is about cryptographically ensuring you never have to.