Inspiration
In today’s rapidly evolving VALORANT Esports scene, scouting and recruiting top talent is critical for building competitive teams. Recognizing the growing need for more efficient data-driven decisions in this process, our group was “hired” to develop a cutting-edge solution. Our mission: create a powerful LLM-powered digital assistant capable of answering complex questions and building optimized team rosters based on professional player data.
What it does
Leveraging Amazon Bedrock’s generative AI capabilities, we developed a unique assistant designed to streamline the recruitment process. By combining advanced data analysis with intuitive user interactions, our tool provides users with real-time insights into player performance, team compositions, and VALORANT Esports trends. This digital assistant equips coaches, analysts, and managers with the knowledge needed to quickly and confidently scout players, whether for regional tournaments or international competitions. From assessing individual player stats to comparing team performance, the assistant offers a deep understanding of the VALORANT esports landscape, helping anyone stay ahead in this highly competitive field. In practice, the assistant works as a chat interface: users enter prompts and receive proposed team compositions along with the statistics that justify them.
How we built it
We leveraged multiple data sources to develop our VCT eSports Manager, primarily focusing on esports and game data. In addition to these structured datasets, we gathered player earnings and rankings data from VLR.gg, and we manually scraped community discussions on VLR.gg and Reddit to gather insights into regional playstyles and strategic tendencies. This extra context helped inform our data analysis and prompt generation for the assistant.
A key aspect of our model’s development was making informed design decisions on how to choose the best teams and player statistics. To ensure accuracy and relevance, we consulted with industry experts. Specifically, we spoke with an R6 professional coach to understand which statistics are most critical for evaluating player performance. We also collaborated with a Game Changers (GC) player to optimize our model for GC results, refining how we identify top players. By generalizing these optimizations to all regions, we ensured that our approach scales globally and remains applicable across various levels of competition.
The game data was then processed and organized into Pandas tables for further analysis. We focused heavily on player performance, filtering players across all regions based on criteria such as ranking, earnings, and key statistics. We also dealt with stale data, such as retired or duplicate players, doing our best to clean up the dataset. Since in-game leader (IGL) data is challenging to find, we manually identified 60 IGLs from various teams. Finally, we leveraged detailed game statistics from Riot’s AWS bucket, such as identifying players who primarily use the Operator and calculating stats like variance in kills-per-round (KPR), providing a more granular understanding of player and agent performance.
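The filtering-and-derived-stats step above can be sketched in pandas. The column names, thresholds, and toy data here are illustrative assumptions, not the actual dataset schema:

```python
import pandas as pd

# Toy stand-in for the processed player table (hypothetical schema).
players = pd.DataFrame({
    "player": ["a", "b", "c", "d"],
    "region": ["NA", "EMEA", "NA", "APAC"],
    "ranking": [3, 12, 55, 7],
    "earnings": [120_000, 80_000, 5_000, 95_000],
    # Kills-per-round recorded per map played.
    "kpr_per_map": [[0.9, 0.8, 1.0], [0.7, 0.9, 0.6],
                    [0.5, 0.4, 0.6], [0.8, 0.8, 0.9]],
})

# Per-player variance in KPR across maps: low variance suggests
# consistency, one of the granular signals mentioned above.
players["kpr_var"] = players["kpr_per_map"].apply(lambda xs: pd.Series(xs).var())

# Filter candidates by ranking and earnings (hypothetical cutoffs).
candidates = players[(players["ranking"] <= 20) & (players["earnings"] >= 50_000)]
```

The same pattern extends to any other per-map statistic (headshot rate, first-kill rate) by swapping the column fed into the variance computation.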
For more details regarding our methodology and model architecture, please reference the detailed write-up provided.
Challenges we ran into
We ran into quite a few issues with hosting limitations, especially when running our services in certain regions. Some models available on Bedrock lacked the tools our project needed or were simply not fully optimized for Bedrock. We also hit token limits: the free-tier cap of only 20-30k tokens per minute proved to be a significant constraint. We found a few workarounds for these problems, as explained in our detailed write-up.
What we learned
Some of the most interesting and instructive work was handling the massive amount of data available to us. By applying various data engineering and data cleaning techniques, we saw firsthand how careful data manipulation can improve a codebase and lead to better responses. On the generative AI side, learning how to prompt engineer and work with an LLM proved valuable, especially given where the technology is heading.
What's next for VCT Hackathon Esports Manager Challenge: Forge Your Team
Looking ahead, one of our key goals is to enhance the flexibility of Amazon Bedrock by enabling direct orchestration of the agent’s workflow from within the console. Currently, we build custom workflows for Bedrock, but it would be highly beneficial to adjust orchestration dynamically, especially when managing token usage. For example, incorporating summarization within individual conversations could help us reduce token consumption—so that raw stats don’t take up large portions of context (e.g., 10k tokens per message). This would optimize performance, allowing us to manage the token limit more efficiently.
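The summarization idea above can be sketched as a pre-request pass over the conversation history: recent turns are kept verbatim, while older turns that would blow the token budget are compressed. Everything here is a hedged sketch, not our actual orchestration code: `summarize()` is a placeholder for a cheap separate model call, and token counting uses a rough 4-characters-per-token heuristic.

```python
def approx_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token.
    return max(1, len(text) // 4)

def summarize(text: str) -> str:
    # Placeholder: in practice this would be a separate LLM call
    # that condenses raw stats into a short summary.
    return text[:200] + "..." if len(text) > 200 else text

def compact_history(turns: list[str], budget: int) -> list[str]:
    """Keep recent turns verbatim; summarize older turns once the
    running token count would exceed the budget."""
    kept, used = [], 0
    for turn in reversed(turns):      # walk newest-first
        cost = approx_tokens(turn)
        if used + cost > budget:
            turn = summarize(turn)    # compress instead of dropping
            cost = approx_tokens(turn)
        used += cost
        kept.append(turn)
    return list(reversed(kept))       # restore chronological order
```

The design choice is to summarize rather than truncate, so a 10k-token stats dump still contributes context to later turns instead of disappearing entirely.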
Looking further ahead, we may incorporate vector search for cases where vague or contextual queries are needed, such as finding optimal team compositions for specific maps or retrieving detailed agent abilities. This would enable more flexible queries and improve the system’s responsiveness to complex requests.
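The vector-search direction boils down to embedding short descriptions and ranking them by cosine similarity to a query. As a minimal sketch (with a toy bag-of-words vectorizer standing in for a real embedding model, and invented example documents):

```python
import numpy as np

# Toy corpus of descriptions (invented examples).
docs = [
    "controller agent smokes for Ascent",
    "duelist entry fragging on Haven",
    "sentinel site anchor for Ascent",
]

vocab = sorted({w for d in docs for w in d.split()})

def embed(text: str) -> np.ndarray:
    # Toy embedding: word-count vector over the corpus vocabulary.
    # A real system would call an embedding model instead.
    return np.array([text.split().count(w) for w in vocab], dtype=float)

def top_match(query: str) -> str:
    # Return the document with the highest cosine similarity to the query.
    q = embed(query)
    sims = [
        float(np.dot(q, embed(d)) /
              (np.linalg.norm(q) * np.linalg.norm(embed(d)) + 1e-9))
        for d in docs
    ]
    return docs[int(np.argmax(sims))]
```

Swapping the toy vectorizer for real embeddings and the linear scan for a vector index is what would make vague queries like "best comp for this map" resolvable against stored agent and map knowledge.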
Built With
- amazon-web-services
- bedrock
- jupyter
- litestar
- pandas
- python
- s3
- vercel
