Inspiration and About

We wanted to create an LLM-powered chatbot that assists with picking competitive Valorant players based on their stats over the course of competitive games.

We fundamentally believe that a player who reaches high-tier stages and tournaments has the experience and consistency to excel as a competitive player, and should therefore be prioritized in our selection process. Our solution is built around this premise.

What it does

This is a chatbot interface connected directly to our operational agent on AWS Bedrock.

The agent itself is powered by Claude 3 Sonnet and utilizes a knowledge base containing tabular definitions.

How we built it

The different sections of this build are listed below, with further breakdowns as needed.

1. Data Fetch and ETL

  • The data was primarily sourced from VLR, where we used BeautifulSoup (BS4) to scrape player stats across each tournament.
  • We then cleaned the data and imputed missing values with average values, to avoid the misleading stats that gaps in the data would cause.
  • The data was formatted and joined into our SQL tables and stored in RDS.
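The imputation step can be sketched in plain Python. The stat names here (like `acs`) are illustrative placeholders, not our actual schema:

```python
# Mean-impute a missing numeric stat (None) so gaps don't read as zeros
# and skew rankings. Stat names like "acs" are illustrative placeholders.
def impute_mean(rows, key):
    present = [r[key] for r in rows if r[key] is not None]
    mean = sum(present) / len(present)
    for r in rows:
        if r[key] is None:
            r[key] = mean
    return rows

players = [
    {"player": "A", "acs": 250.0},
    {"player": "B", "acs": None},   # missing game data
    {"player": "C", "acs": 210.0},
]
impute_mean(players, "acs")  # player B now carries the average, 230.0
```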

2. Methodology

A shortened summary is provided here; you can find a more detailed breakdown here.

  • Data: We quickly identified that there was no way for us to rank players without oversights. Other websites and organizations have attempted this before, but we saw fundamental flaws in a number of those implementations. We understood the vision behind the Rating metric, but also saw how it can fall short when evaluating players on consistency and experience. It can also favor players over those who play supporting agents, whose contributions do not take front and center on a stat screen.

    At its core, our approach introduces two new main stats to rank and order players, which attempt to take consistency, experience, and supporting roles into account.
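    The exact stat definitions are in the detailed breakdown; purely as an illustration of the idea, a consistency-aware rating can penalize variance across games so a streaky player ranks below an equally-averaged steady one. The `penalty` weight here is a made-up parameter, not our actual formula:

```python
import statistics

# Illustrative sketch only, not our actual stat: subtract a fraction of
# the per-game standard deviation so volatile performances score lower.
def consistency_adjusted(ratings, penalty=0.5):
    return statistics.mean(ratings) - penalty * statistics.pstdev(ratings)

steady = consistency_adjusted([1.1, 1.1, 1.1])   # stdev 0, no penalty
streaky = consistency_adjusted([1.6, 0.6, 1.1])  # same mean, penalized
```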

  • LLM: For our LLM we utilized Claude 3 Sonnet; we chose this model because we saw significant performance improvements when asking it to formulate a plan for execution. We utilized a knowledge base and action groups to implement SQL pulls of the cleaned and adjusted data in our RDS instance. The model was prompt-engineered to formulate a plan and interpret the data provided. We also used function calls and Lambda as a secondary method to answer follow-up questions about the data. Some of the calls done here are fairly complex, so we're very proud of this.
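    A minimal sketch of what one of those action-group Lambdas can look like. The event and response field names follow the Bedrock agent function-calling contract; the table, columns, and function names are hypothetical stand-ins for our schema:

```python
# Hypothetical Bedrock action-group Lambda: the agent calls this with named
# parameters, we build a SQL query for our RDS tables (names illustrative),
# and return the result in the function-response shape Bedrock expects.

def build_query(tournament: str, limit: int = 5) -> str:
    # Real code should use parameterized queries, not string interpolation.
    return (
        "SELECT player, acs, kd FROM player_stats "
        f"WHERE tournament = '{tournament}' ORDER BY acs DESC LIMIT {limit}"
    )

def lambda_handler(event, context):
    params = {p["name"]: p["value"] for p in event.get("parameters", [])}
    body = build_query(params["tournament"])  # would execute against RDS here
    return {
        "messageVersion": "1.0",
        "response": {
            "actionGroup": event["actionGroup"],
            "function": event["function"],
            "functionResponse": {"responseBody": {"TEXT": {"body": body}}},
        },
    }
```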

  • App: For our app we used Streamlit as an interactive front-end user interface. Responses and traces are parsed and returned via streaming to reduce the time the user waits for a response; it makes the orchestration and thinking time a lot more user-friendly and bearable.
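The streaming hand-off can be sketched as a small generator: `invoke_agent` returns an event stream whose `chunk` events carry bytes (trace events are skipped here), and the decoded pieces can be fed to something like Streamlit's `st.write_stream`. The event shape is assumed from boto3's `bedrock-agent-runtime` response:

```python
# Yield decoded text pieces from a Bedrock invoke_agent event stream so the
# UI can render the answer incrementally instead of waiting for the whole
# response. Trace events (used for debugging the agent's plan) are skipped.
def stream_chunks(events):
    for event in events:
        chunk = event.get("chunk")
        if chunk is not None:
            yield chunk["bytes"].decode("utf-8")

# In the app this would be roughly:
#   response = client.invoke_agent(...)
#   st.write_stream(stream_chunks(response["completion"]))
```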

Challenges we ran into

  • Data: As with anyone working with data, we struggled a lot with data errors and with building a story from the data we had access to. One of the biggest issues was determining the best realistic way to identify good players. Since these types of questions have no real answers, we really struggled without deeper competitive knowledge on this front.

  • Domain: Our domain knowledge, specifically in competitive Valorant, was also very limited. We mostly happen to be low-rank Bronzes and Silvers, but we tried our best with the knowledge we had!

  • LLM/AI: We struggled a lot with getting the LLM to work the way we wanted it to. There was a lot of guessing and randomness that we had a hard time flushing out, especially when it came to prompt engineering. Scenarios like random logic shifts, or certain prompt instructions getting drowned out or ignored, led to many cups of coffee and energy drinks.

  • AWS: This was the first time any of us had used this many AWS services at this scale, and we struggled while learning new tools and techniques. For this, I want to give special thanks to everyone from AWS who offered help and guidance to all entrants throughout this process!

  • Approach: We also had a lot of issues with how we approached this challenge.

    For one, we tried a completely knowledge-base-dependent solution, and saw many issues with sourcing data that way: primarily hallucinations, and terrible runtimes as the agent pulls chunks of data at a time. Aggregations would all have to be done manually beforehand for the agent to use, and even then they could exceed the maximum size of a chunk.

    We also attempted more complex stats, but would often end up with the agent failing to understand the stat or dropping pieces of the guidelines or instructions. We also tried giving the agent more leeway and freedom, but this often led to more issues with the rigid structure of SQL table names and columns, as well as issues with 'mixing' data.

  • Throttling: Partway through the project, our ability to call the InvokeModel API was throttled, which caused significant issues with testing and implementing further functional improvements. We were lucky not to be more heavily affected, since we had originally planned a small, less step-intensive solution; even so, the agent can randomly re-invoke itself multiple times and cause errors.

What we would do if we didn’t have throttling issues

We wanted to give the agent more freedom to re-query the knowledge base and to use more functions to further improve the quality of the responses returned, but the limited number of calls made this a hard ask. Given more API calls per minute, I believe an implementation incorporating more data about each player, the way they play, and sentiment analysis would expand the information available and help the agent build even better reasoning and arguments for the players it selects.

Accomplishments that we're proud of

We're happy to have finished our first prototype, start to finish! It took a lot to get here, and I'm glad we were able to make it on time. We also persevered through a lot of data problems and around-the-clock teamwork, so I personally want to thank my team for this!

What we learned

We gained a lot of foundational experience with modern tools and services offered by AWS.

Many of us were also working with LLMs for the first time, so this served as an excellent introductory experience for us.

What's next for JettReviveMe

Given more time, we hoped to implement a more in-depth ranking system. We originally kept it simple because we were not sure how the agent would react to a large number of steps and requests; in hindsight, the throttling errors suggest that was the right call. Still, I believe a functional implementation with more in-depth stats could use the mass of available data even more efficiently if we were unthrottled.

Built With
