Inspiration

As longtime VALORANT players and students with an interest in LLMs, this hackathon inspired us to experiment with generative AI at the intersection of our interests, creating our project "Team ACE (AI Competitive Evaluator)". We want to build a reliable LLM assistant that makes effective use of available data to give clear, logical responses for managing esports teams (and perhaps also help with our VCT Pick'Ems next season).

What it does

Team ACE uses state-of-the-art LLM capabilities combined with a knowledge base of current VCT players to suggest team compositions for the highest levels of competitive play.

How we built it

We use Python to work with the Amazon Bedrock API, and the Streamlit package for the user interface, since Streamlit makes it convenient to connect the UI directly to the backend for this project. We wanted to use the Claude 3 Sonnet model, as it is one of the few models that offers both strong reasoning and regional availability for agents, but our quota was set to 0 with no support response, so we went with Amazon Titan instead. For our knowledge base, we scraped player stats from VLR.gg with BeautifulSoup, as it houses the relevant data, and stored them in Pinecone using Cohere English embeddings.
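The Titan call itself is a thin wrapper around the Bedrock Runtime `invoke_model` API. A minimal sketch of that backend piece (the model ID, region, and generation parameters here are illustrative assumptions, not necessarily the exact values we shipped):

```python
import json


def build_titan_request(prompt, max_tokens=512, temperature=0.2):
    """Build the JSON request body for an Amazon Titan Text model."""
    return json.dumps({
        "inputText": prompt,
        "textGenerationConfig": {
            "maxTokenCount": max_tokens,
            "temperature": temperature,
            "topP": 0.9,
        },
    })


def ask_titan(prompt, model_id="amazon.titan-text-express-v1"):
    """Send a prompt to Bedrock Runtime and return the generated text.

    Requires AWS credentials and Bedrock model access to actually run.
    """
    import boto3  # assumed available in the deployment environment

    client = boto3.client("bedrock-runtime", region_name="us-east-1")
    response = client.invoke_model(
        modelId=model_id,
        body=build_titan_request(prompt),
        contentType="application/json",
        accept="application/json",
    )
    payload = json.loads(response["body"].read())
    return payload["results"][0]["outputText"]
```

In the app, the retrieved player stats from the knowledge base get prepended to the user's question before it is passed to `ask_titan`.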

Challenges we ran into

The major challenge for us was learning how to use Amazon Bedrock and understanding what exactly Bedrock, Bedrock Runtime, and Bedrock Agents each are. Since we joined the hackathon right around midterm season (about a week ago), we missed the workshop environments, so we watched tutorial videos and worked through lots of demo code to develop our understanding bit by bit.

Another major issue that affected us throughout the project was being rate limited on the Anthropic models: we couldn't get even one request through. We saw that others were having similar issues with no solution, so we waited for a resolution, but none arrived until the day before the deadline, which is where we stand now, still waiting on support. In the meantime, we tried our best with Titan, getting only somewhat passable results even with rigorous prompt engineering.

A final challenge was navigating the multitude of Amazon services and configuring all the different accounts, permissions, and resources at each step. That meant lots of googling, reading FAQs, and double-checking everything (so many open tabs...). Overall it was a lot of learning, but it all paid off and we've picked up some valuable, applicable skills!

Accomplishments that we're proud of

As beginners in the LLM and AWS space, we're proud of building our first LLM assistant, especially in the amount of time we had, and of learning how to configure LLMs and knowledge bases in general, a skill that is sure to be useful in the near future.

What we learned

We learned how to set up LLMs and customize them with instructions, formatting, and knowledge bases. We also learned how to interact with powerful LLM APIs in Python and integrate them into our own applications. One interesting thing we came across is that structured tags in the prompt reportedly help Anthropic models understand a problem much better; we never got to apply that here, but it would be interesting to try in the future!
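For reference, the tag-based structuring Anthropic's prompting guides recommend uses XML-style tags to separate context from instructions. A sketch of how we might apply it to this project (the tag names and helper are our own invention):

```python
def tagged_prompt(stats, question):
    """Wrap retrieved player stats and the user's question in XML-style
    tags so the model can cleanly distinguish context from the task."""
    return (
        "<player_stats>\n" + stats + "\n</player_stats>\n\n"
        "<question>\n" + question + "\n</question>\n\n"
        "Using only the stats above, suggest a team composition."
    )
```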

What's next for VCT Esports Assistant - Team Ace

Some additional ideas we have not yet had time to implement would be to get through to support for Claude access, and to add diverse sources of VCT-related online discussion to the knowledge base, since community insights should be a non-negligible factor in reasoning. Additionally, we could try out the fine-tuning option and calibrate our model with game data to develop its judgement. From a UI standpoint, we could upgrade the chat feature to include message permanence and scrolling.
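A sketch of what message permanence could look like in Streamlit (the function names are ours and the model call is a placeholder): the chat history lives in `st.session_state`, so it survives Streamlit's top-to-bottom script reruns and the page scrolls through it naturally.

```python
def add_message(history, role, content):
    """Append one chat turn to the persistent history list."""
    history.append({"role": role, "content": content})
    return history


def render_chat():
    """Chat loop with persistent, scrollable history.

    Run inside a Streamlit app (`streamlit run app.py`).
    """
    import streamlit as st  # assumed installed, as in our current UI

    if "messages" not in st.session_state:
        st.session_state.messages = []

    # Replay the saved conversation on every rerun.
    for msg in st.session_state.messages:
        with st.chat_message(msg["role"]):
            st.markdown(msg["content"])

    if prompt := st.chat_input("Ask about VCT rosters..."):
        add_message(st.session_state.messages, "user", prompt)
        reply = "..."  # placeholder: call the Bedrock backend here
        add_message(st.session_state.messages, "assistant", reply)
        st.rerun()
```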

Built With

Python, Streamlit, Amazon Bedrock, BeautifulSoup, Pinecone, Cohere
