About Packt

Inspiration

Packt came from a very real problem in the creator space. A lot of YouTubers spend huge amounts of time planning, filming, editing, and polishing their videos, but many of those videos still do not perform as well as they should. Often, the issue is not the content itself. The real issue is the packaging around it. The title may not create enough curiosity. The hook may take too long to get to the point. The thumbnail may not stand out strongly enough to earn the click. In other words, strong content can still fail if the presentation does not immediately capture attention.

That felt like a genuine gap in the market. Most tools available today either focus on search engine optimisation, or they help only after a video has already been published. Some AI tools can generate suggestions, but many of them feel generic and are not clearly grounded in evidence. We wanted to build something that could help creators earlier, at the point where the most important decision is still being made: what should I publish, and how should I package it?

Another major part of our inspiration came directly from the challenge itself. The Kaggle dataset provided for the hackathon made the problem much more interesting, because it gave us access to real YouTube trending data instead of forcing us to rely only on assumptions. Rather than building a tool that simply gives subjective advice, we wanted to create something that could actually compare a creator’s video against patterns from videos that had already trended. That made the idea behind Packt much stronger. It meant the product could sit at the intersection of creator intuition, benchmark data, and AI-based analysis.

So the idea became simple but powerful: a creator pastes in a YouTube link, and Packt helps them understand how well that video is likely to perform, what is helping it, what is hurting it, and what they should change before or around publishing.

What it does

Packt is a pre-publish intelligence tool for YouTube creators. Its job is to help creators make better publishing decisions by turning a YouTube link into a structured set of recommendations.

The user submits a YouTube URL, and Packt runs that through two core systems. The first is the Main Engine. This system generates and scores possible hooks, titles, and thumbnail concepts. It is focused on recommendation and evaluation. It looks at the video context and produces structured options that are meant to improve click potential and overall performance.

The second system is the AI Swarm. This is a persona-based layer where different audience archetypes simulate how different kinds of viewers might respond. Instead of relying on a single opinion, Packt uses multiple points of view. Some personas may care more about curiosity, while others may react more strongly to clarity, emotion, novelty, or pacing. The result is a stronger consensus process that gives the product more depth.
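As a rough illustration, the consensus step can be thought of as a vote tally across personas. This is a toy sketch, not the actual Swarm logic: the persona names and the simple majority-vote rule below are assumptions for illustration only.

```typescript
// Toy consensus step: each persona casts a vote for its preferred option,
// and the option with the most votes wins. Persona names are illustrative;
// the real Swarm weighs reactions in a richer way than a plain majority.
function consensus(votes: Record<string, string>): { winner: string; share: number } {
  // Count how many personas voted for each option.
  const tally = new Map<string, number>();
  for (const choice of Object.values(votes)) {
    tally.set(choice, (tally.get(choice) ?? 0) + 1);
  }
  // Find the option with the highest count.
  let winner = "";
  let best = 0;
  tally.forEach((count, option) => {
    if (count > best) {
      best = count;
      winner = option;
    }
  });
  // Share = fraction of personas that agreed with the winner.
  return { winner, share: best / Object.values(votes).length };
}
```

A low `share` value is exactly the "disagreement between audience types" signal that makes a multi-persona layer more informative than a single opinion.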

From those two layers, Packt gives the creator a final recommendation. That recommendation includes the best hook, the best title, the best thumbnail direction, a virality score, confidence levels, and reasoning for why one combination performed better than others. The goal is not only to tell the user what to do, but also to explain why that recommendation makes sense.
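To make that output concrete, the final recommendation can be modelled as a small typed structure. The field names and score ranges below are illustrative assumptions, not the real API contract:

```typescript
// Hypothetical shape of the final recommendation shown to the creator.
interface ScoredOption {
  text: string;
  score: number; // assumed 0-100 scale
}

interface Recommendation {
  bestHook: ScoredOption;
  bestTitle: ScoredOption;
  thumbnailDirection: string;
  viralityScore: number; // assumed 0-100 composite
  confidence: number;    // assumed 0-1
  reasoning: string;     // why this combination beat the others
}

// Selecting the best candidate from a scored list is a max-by-score pass.
function pickBest(options: ScoredOption[]): ScoredOption {
  return options.reduce((best, o) => (o.score > best.score ? o : best));
}
```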

In the broader backend version of the system, Packt also compares the submitted video against benchmark patterns built from the Kaggle YouTube trending dataset. That benchmark layer gives the product a more grounded understanding of what successful content tends to look like across categories. So Packt is not just generating ideas. It is trying to connect those ideas to real performance patterns from YouTube itself.

In practical terms, this means Packt helps creators answer a simple but important question before they go live: does this video have the right packaging to perform well, and if not, what should I change?

How we built it

We built Packt as a modern web application with a strong focus on product experience. On the frontend, the project is a Vite, React, and TypeScript single page application designed as a dark premium dashboard for creators. We wanted the interface to feel like a real software platform, not just a rough technical demo, so a lot of attention went into layout, hierarchy, flow, and clarity.

The dashboard is structured like a modern SaaS product. It includes pages such as Overview, New Analysis, Engine Output, Swarm Consensus, Thumbnail Lab, Hooks, Titles, Analytics, and Settings. This gave the project a product-shaped architecture instead of a single results screen. A user can move between different parts of the analysis and understand the recommendation from multiple angles.

For routing and structure, we used React Router and a central app shell. For styling and interface design, we used Tailwind CSS with shadcn-style components so the product could look sharp and consistent. For motion and transitions, we used Framer Motion. For charts and score visualisation, we used Recharts. For user feedback and interaction states, we used Sonner toasts. Altogether, that stack helped us create a dashboard that feels much closer to a polished creator tool than a basic prototype.

On the data persistence side, we used two React context providers. One stores the latest Main Engine analysis, and the other stores the latest Swarm analysis. Both are persisted in sessionStorage, which means users can refresh the page or move between routes without losing their last successful result. That was important because analysis is not useful if it disappears the moment the user navigates away.
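The persistence idea behind those providers can be sketched as a pair of helpers over a Storage-like interface. The key names and types here are illustrative, not the actual implementation:

```typescript
// Minimal sketch of context state persistence to sessionStorage.
// The StorageLike interface matches the browser Storage API surface
// we need, so the helpers can also run against a mock in tests.
interface StorageLike {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

// Load a previously saved analysis; null if absent or corrupt.
function loadState<T>(storage: StorageLike, key: string): T | null {
  const raw = storage.getItem(key);
  if (raw === null) return null;
  try {
    return JSON.parse(raw) as T;
  } catch {
    return null; // corrupt entry: fall back to a fresh analysis
  }
}

// Save the latest successful analysis so a refresh does not lose it.
function saveState<T>(storage: StorageLike, key: string, state: T): void {
  storage.setItem(key, JSON.stringify(state));
}
```

In the app, each context provider would call `loadState` once on mount and `saveState` whenever a new analysis arrives, giving refresh-safe state without a backend session.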

The actual analysis flow works through external APIs. When a creator submits a YouTube link, the frontend sends it to two systems in parallel. One request goes to the Main Engine API, which returns scored hooks, titles, thumbnail concepts, and a recommended final combination. The other goes to the Swarm API, which returns persona-based reactions, vote patterns, and consensus information. We used Promise.allSettled so the app could still recover gracefully if one service failed while the other succeeded. That made the overall experience much more resilient.
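The parallel-request pattern looks roughly like this. The function names and result shape are placeholders for the real endpoint wiring:

```typescript
// Sketch of the dual-request flow using Promise.allSettled, so that
// one failing service does not take down the whole analysis.
interface AnalysisResult<E, S> {
  engine: E | null; // null if the Main Engine call failed
  swarm: S | null;  // null if the Swarm call failed
}

async function analyze<E, S>(
  fetchEngine: () => Promise<E>,
  fetchSwarm: () => Promise<S>,
): Promise<AnalysisResult<E, S>> {
  // allSettled never rejects: each entry reports fulfilled or rejected.
  const [engine, swarm] = await Promise.allSettled([fetchEngine(), fetchSwarm()]);
  return {
    engine: engine.status === "fulfilled" ? engine.value : null,
    swarm: swarm.status === "fulfilled" ? swarm.value : null,
  };
}
```

With this shape, a failed Swarm call still leaves the Main Engine result usable, which is what drives the partial-success and fallback states in the UI.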

Behind that, in the broader backend system, we also built a benchmark and scoring pipeline that uses the Kaggle dataset. That dataset contains a large collection of trending YouTube videos across multiple countries. We processed that information into benchmark statistics by category so the system could compare a creator’s video against actual patterns linked to virality. That includes things like views, engagement signals, title structures, tag behaviour, timing patterns, and category-specific trends.
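A toy version of that benchmark step is grouping trending rows by category and computing a per-category baseline. The row fields below are assumptions for illustration, not the real dataset schema, and the real pipeline covers far more signals than mean views:

```typescript
// Toy benchmark aggregation: mean views per category from trending rows.
interface TrendingRow {
  category: string;
  views: number;
}

function categoryBenchmarks(rows: TrendingRow[]): Map<string, number> {
  // Accumulate running sums and counts per category.
  const totals = new Map<string, { sum: number; n: number }>();
  for (const row of rows) {
    const t = totals.get(row.category) ?? { sum: 0, n: 0 };
    t.sum += row.views;
    t.n += 1;
    totals.set(row.category, t);
  }
  // Reduce each category to its mean, the per-category baseline.
  const means = new Map<string, number>();
  totals.forEach((t, cat) => means.set(cat, t.sum / t.n));
  return means;
}
```

A submitted video can then be compared against the baseline for its own category rather than against all of YouTube at once.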

We also integrated multimodal AI analysis so the system was not limited to metadata alone. This let Packt move beyond just looking at numbers and also consider the content-level characteristics of a video. In combination, the benchmark pipeline and AI layer made the final virality score more explainable and more useful.

Challenges we ran into

One of the biggest challenges was credibility. There are many AI products that generate suggestions, but not all of them feel trustworthy. We knew early on that if Packt was going to be useful, it could not just sound smart. It had to feel grounded. That meant we had to think carefully about the logic behind the scores, the structure of the outputs, and the reasoning shown to the user. The challenge was not only building intelligence, but presenting it in a way that felt reliable rather than vague.

Another major challenge was orchestrating multiple systems at the same time. Our frontend depends on separate APIs, each with its own response structure, latency, and possible failure points. In a normal flow, that is manageable, but in a hackathon environment, where services can be unstable or incomplete, it becomes a real design and engineering problem. We had to make sure one failing system did not destroy the entire experience. That is why fallback logic, loading states, demo mode behaviour, and partial success handling became such an important part of the product.

We also faced a strong design challenge. Because Packt is AI heavy, it would have been very easy to make it look overly flashy, crowded, or full of familiar AI visual clichés. We did not want that. We wanted the opposite. We wanted it to feel calm, premium, and intentional. That meant being very selective with the layout, the use of motion, the colour system, and the way information was revealed. Making something feel minimal while still showing a lot of analytical output turned out to be harder than expected.

Working with real YouTube analysis added another level of complexity. Metadata can be inconsistent. External services can fail. Model responses can be slow. Some parts of the analysis are naturally probabilistic rather than exact. So another challenge was building the product in a way that stayed useful even when parts of the system were imperfect. In other words, the product had to be practical, not fragile.

Finally, there was the challenge of turning the Kaggle dataset into something actually usable. A dataset by itself is not a product. We had to think about how to transform raw trending data into benchmark signals that would be meaningful for a creator in a real decision-making scenario. That meant extracting patterns, simplifying them into usable metrics, and connecting them back to recommendations a creator could act on.

Accomplishments that we are proud of

One accomplishment we are especially proud of is that Packt feels like a real product rather than just an idea. We did not stop at a concept or a static mockup. We built a functioning dashboard, connected it to live-style API flows, created persistent analysis state, and structured the app like a real piece of creator software. That gave the project a much stronger sense of completeness.

We are also proud of the product architecture. Packt is not simply one AI call wrapped in a user interface. It combines multiple layers of intelligence. There is a Main Engine for structured recommendation, an AI Swarm for simulated audience response, and a benchmark layer built from the Kaggle trending dataset for grounding and comparison. That combination gives the product more depth, more defensibility, and a clearer reason for existing.

Another thing we are proud of is the way we used the Kaggle dataset. Instead of mentioning the data as background context, we actually treated it as part of the logic of the system. It became the basis for understanding virality patterns and benchmarking videos against real examples. That made Packt feel far more evidence-driven than a generic AI feedback tool.

We are also proud of the user experience. A lot of hackathon projects are technically interesting but hard to understand. We wanted Packt to be the opposite. We wanted someone to open the product and immediately understand what it is for, where to click, what the outputs mean, and why the dashboard exists. We think that clarity is one of the strongest parts of the build.

Finally, we are proud that Packt focuses on a real workflow that creators actually care about. It is not trying to solve everything in the creator economy. It is focused on one valuable moment, which is the publishing decision. That focus gave the product more sharpness and more practical value.

What we learned

This project taught us that the strongest products often come from a very specific problem rather than a huge abstract one. In our case, the problem was not that creators need more data in general. The problem was that creators need better decisions before they publish. That framing changed everything. It shaped the product, the dashboard, the scoring system, and the value proposition.

We also learned a great deal about combining technical systems into a single user experience. It is one thing to build an API. It is another thing to connect multiple APIs, maintain state across pages, handle failure gracefully, and still make the product feel smooth and intuitive. Building Packt forced us to think beyond isolated features and focus on orchestration and usability.

Another important lesson was about AI product design. AI by itself is not enough. If the output is unclear, ungrounded, or badly presented, the user does not trust it. We learned that explanation matters just as much as generation. A score is only useful if the user can understand what it means and what action to take next. That is why we kept returning to clarity, structure, and reasoning throughout the build.

The Kaggle dataset also taught us something important. Real world data makes a huge difference when building products like this. Without that dataset, it would have been much easier to fall into vague generalisations about virality. With it, we were forced to think more carefully about patterns, benchmarks, and what success actually looks like in different categories. That made the project much more rigorous.

Perhaps the biggest lesson was that AI products become much more valuable when they reduce uncertainty. Creators do not necessarily want endless analysis. They want help making a decision with more confidence. That is the role we want Packt to play.

What's next for Packt

The next step for Packt is to move from a strong prototype toward a more complete creator operating system. There are several directions we want to push further.

One major area is thumbnail generation and testing. Right now, Packt can return thumbnail concepts and recommendations, but a future version should be able to generate richer visual outputs and help creators compare alternatives more directly.

Another important area is post-publish analytics. At the moment, Packt is strongest as a pre-publish intelligence tool, but in the future we want it to connect more closely to real performance after upload. That would let creators compare prediction versus reality and improve their next decision even more effectively.

We also want stronger personalisation. Different niches, audiences, and creator styles perform differently, so the system should become more tailored over time. A gaming creator, a finance educator, and an entertainment channel should not all receive the same style of recommendation. That is where deeper benchmark segmentation and better modelling can make a major difference.

On the technical side, we want to improve performance, caching, and reliability, especially as the number of analyses grows. We also want to deepen the AI Swarm so persona behaviour feels even more distinct and useful, especially when there is disagreement between audience types.

Most importantly, we want to continue building Packt into a product that creators would genuinely rely on. The long term goal is not just to make another analytics dashboard. It is to create a system that helps creators package content more intelligently, publish with more confidence, and learn faster from every video they make.

In short, we want Packt to turn YouTube publishing from guesswork into a more structured, data informed process.

Built With

React, TypeScript, Vite, React Router, Tailwind CSS, shadcn-style components, Framer Motion, Recharts, Sonner
