Inspiration

As a Chief of Staff, I was asked to automate executive emails, but quickly hit a wall with AI. It was only about 80% reliable, and the remaining 20% risk of factual errors or an impersonal, AI-generated tone posed a major reputational risk to our executives—and to me.

My initial solution was to build a manual approval step into the workflow, but this created a new bottleneck: I had to approve every single email. The tools I was using could already loop a human into a workflow, and then it struck me: that human didn't have to be me. Tired of being at the mercy of AI's limitations and of my own role as a manual approver, I used this hackathon as an opportunity to build the solution I desperately needed.

What it does

Hoop is a human-in-the-loop platform that plugs real people into AI or automation processes whenever human judgment is needed. It acts as an on-demand human fallback for AI workflows, turning fragile automations into robust, trustworthy systems.

When AI produces inconsistent outputs, or when a human touch is needed to finalize a task, like adjusting the tone of an email, Hoop brings in a qualified human (a "Hooper") to handle the decision. The Hooper completes the task based on guidelines set by the user (a "Requester"), for example, "Rate this AI-generated answer for accuracy" or "Check whether this support ticket has been assigned to the correct category". The structured result is then sent directly back into the Requester's workflow, whether that's a Zap, a spreadsheet, or a CRM, with the reassurance that the AI outputs have been verified. This allows users to automate 100% of a process by covering the final, crucial "last mile" with human intelligence.
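To make "structured result" concrete, here is a minimal sketch of what a completed Hoop task might look like when it lands back in a Requester's workflow. The field names and values are assumptions for illustration, not Hoop's actual schema:

```javascript
// Hypothetical example of the structured result a Hooper sends back.
// Because it is plain structured data, it can drop into a Zap step,
// a spreadsheet row, or a CRM field without any parsing.
const exampleResult = {
  task_id: 'task_123',                                   // hypothetical identifier
  instructions: 'Rate this AI-generated answer for accuracy',
  decision: 'needs_revision',                            // the Hooper's judgment
  rating: 3,                                             // e.g., accuracy on a 1-5 scale
  notes: 'Second paragraph cites an outdated figure.',
  completed_at: '2025-01-15T10:32:00Z',
};

// A downstream automation step can then branch on the Hooper's decision,
// e.g., only sending the email once a human has approved it:
function shouldSend(result) {
  return result.decision === 'approved';
}
```

The point of the structured format is exactly this kind of branching: the rest of the workflow never has to interpret free text, only fields.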

How I built it

Hoop was built as a platform that could seamlessly integrate into the toolchains that people like me already use. The process involved:

  1. Task Submission: Building out an API and a user-friendly web interface so that tasks can be sent to Hoop from anywhere—be it a no-code tool like Zapier or Make, or directly through the API.
  2. Defining Judgment: Creating a clear and simple interface for the Requester user to specify exactly what decision or action is needed from a human.
  3. The Hooper Network: Establishing a system for a global network of expert freelance Hoopers to accept and complete these tasks quickly and accurately. This includes two key features: (a) a dashboard for monitoring task progress in real time, and (b) a dispatch system that routes tasks to online Hoopers with the skills needed to complete them (e.g., copywriting, research review, German translation).
  4. Structured Results & Payouts: Ensuring the platform returns the completed task data in a structured format that drops directly back into the original system. The platform operates on a pay-per-task model using prepaid credits, ensuring that every time a Requester uses Hoop, a Hooper is compensated for their quality work.
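The submission side of the flow above (steps 1 and 2) can be sketched as a single API call. The endpoint URL, field names, and auth scheme below are assumptions for illustration; the real Hoop API may differ:

```javascript
// Build the JSON body for a hypothetical Hoop task: what the Hooper should
// judge, which skills the dispatch system should match on, and where the
// structured result should be delivered.
function buildTaskRequest({ instructions, payload, skills, resultWebhook }) {
  return {
    instructions,                   // e.g., "Rate this AI-generated answer for accuracy"
    payload,                        // the AI output the Hooper will review
    required_skills: skills,        // used by the dispatch system to route the task
    result_webhook: resultWebhook,  // Hoop POSTs the structured result here
  };
}

// Submit the task with a prepaid-credit API key (hypothetical endpoint).
async function submitTask(task, apiKey) {
  const res = await fetch('https://api.hoop.example/v1/tasks', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify(task),
  });
  return res.json();
}
```

A no-code tool like Zapier or Make would simply fill in these same fields through a form instead of code; either way, the same request reaches the Hooper network.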

Challenges I ran into

Nuances of Bolt: While Bolt was phenomenal for rapidly building the project's architecture, API foundation, and initial prototype, asking it to implement large code changes led to unsaved work, bugs, and code regressions. I learned to break complex tasks down into small, manageable steps to avoid these issues. Even so, prompting the application into existence over a month as a non-developer naturally introduced some technical debt. A key challenge was then shifting from rapid prototyping to stabilization, which required support to refactor the code. That refactor improved security, eliminated console errors, and ultimately led to a more robust and scalable application.

Integrating with Zapier: I struggled to offer Hoop’s core "pause and resume" functionality in Zapier. I discovered that enabling it required moving from Zapier's user-friendly UI builder to its code-intensive Command-Line Interface (CLI) platform. This pivot demanded a steep learning curve, but with Bolt’s discuss feature and Google Gemini, I was able to make it work.
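For readers curious what "pause and resume" looks like on Zapier's CLI platform: the sketch below uses Zapier's callback mechanism (`z.generateCallbackUrl()` plus a `performResume` handler), which is how the CLI platform lets a Zap wait on an external service. The Hoop endpoint and field names are assumptions; only the Zapier callback pattern itself is documented Zapier behavior:

```javascript
// perform: submit the task to Hoop along with a callback URL, then let the
// Zap wait. Returning before the final result is available leaves the Zap
// in a waiting state until the callback URL receives a POST.
const perform = async (z, bundle) => {
  const callbackUrl = z.generateCallbackUrl(); // Zapier waits for a POST here
  const response = await z.request({
    method: 'POST',
    url: 'https://api.hoop.example/v1/tasks',  // hypothetical Hoop endpoint
    body: {
      instructions: bundle.inputData.instructions,
      payload: bundle.inputData.payload,
      callback_url: callbackUrl,               // Hoop POSTs the Hooper's result here
    },
  });
  return { taskId: response.data.id, status: 'waiting_for_hooper' };
};

// performResume: runs when Hoop POSTs the completed task to the callback URL.
// bundle.cleanedRequest holds the POSTed body, and bundle.outputData holds
// what perform returned; merging them gives later Zap steps one structured result.
const performResume = async (z, bundle) => {
  return { ...bundle.outputData, ...bundle.cleanedRequest };
};

module.exports = { perform, performResume };
```

The non-obvious part, and the reason the UI builder wasn't enough, is that the "resume" half lives in a separate handler that only fires when the external POST arrives.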

Accomplishments that I’m proud of

My biggest accomplishment is bridging the gap between concept and reality. Just two months ago, I simply wanted to put my finger on the pulse of agentic AI. Today, I've built a functional tool that solves a critical problem I face daily in my demanding Chief of Staff role. I’m proud of navigating the steep learning curve of a new development platform and the intricacies of APIs.

The most rewarding part was the empowering experience of taking a technical vision and making it tangible. I created a product that I know I would use—a tool that provides the control and reliability needed to confidently deploy AI in a high-stakes professional environment. That, to me, is the ultimate validation of this project.

What I learned

This project was a deep dive into the world of agentic AI. I went beyond theory and had the opportunity to research the rapidly evolving landscape, test new tools, and speak with people about their own needs and limitations regarding trust in AI.

Technically, I learned the ins and outs of APIs and how to build with a new platform. However, the most critical lesson was about strategy: to move fast and build a reliable application, you must first lay a solid foundation. You have to invest time upfront to map out the architecture before writing a single line of code. This principle became the core of my process, as knowledge is what gives you control over unpredictable systems. In the end, the journey of building Hoop mirrored its very mission: creating a solid foundation to manage and control the unpredictable nature of AI.

What's next for Hoop

The vision for Hoop is to become the essential trust layer for AI automation. The next steps are focused on expansion and intelligence:

  • Grow the Hooper Community: Actively recruit and vet a diverse, global community of Hoopers with specialized domain expertise (e.g., copywriting, data analysis, content moderation) to handle an even wider range of tasks.
  • Deeper Integrations: Move beyond Zapier to build direct, one-click integrations with all agentic AI platforms where this problem is most acute. The goal is to make adding a "Hoop" as easy as adding an AI step.
  • Intelligent Feedback Loops: Develop a system where the corrections and decisions made by Hoopers can be fed back to the user. This data could then be used to fine-tune and improve the original AI models over time, creating a virtuous cycle of improvement.
  • Expanding Language Support: Leverage native-speaking Hoopers to offer nuanced translation and localization services, ensuring that multilingual content makes sense to native readers far better than a literal AI translation could.

The goal is to let everyone building with AI finally have the confidence to deploy their systems without the lingering worry of "What if it gets it wrong?"
