Inspiration
We had the concept quite a while ago but didn't have a reason to execute on it until we heard about the hackathon. We wanted to give all of our users a convenient way to extract the data and insights they need without first having to learn our HUD/UI/UX. Our long-term vision is to simplify all data needs for actors in the esports space, from esports teams to production/broadcasting teams.
What it does
It ingests all types of data from our sources: VDP, the Dev Portal, and the Riot API. In the future it will also use our AI data to produce insights and visualizations for both teams and broadcasting partners.
How we built it
Built with:
- AWS SageMaker Notebooks: data processing, flattening, and uploading
- AWS Bedrock: language model provider
- Cloudflare Workers: serverless APIs
- Cloudflare Durable Objects: serverless WebSocket coordination
- Cloudflare D1: serverless SQL database holding the processed, flattened data
- Cloudflare R2: bucket storage for user chat history
Methodology:
- Identify critical statistics that measure team or player performance: model responses must be grounded in real, up-to-date statistics to give responsible and accurate insights.
- Flatten and store source data in a database: efficient data queries reduce response wait time, and returning simply structured data improves the model's output accuracy.
- Build data query tools: the model must be able to call tools on demand to query statistics from our database, including filtering, grouping, and selecting.
- Develop the backend APIs and WebSocket handlers: the application must have a way to use the model, coordinating prompts, tool uses, and responses while ensuring scalability.
- Craft the frontend application's UI and widget interactions: the chat is expected to render interactive widgets on top of the model's text responses.
- Design system prompts and instructions: these set the model's role, encourage certain behaviors, and instruct the model on handling data that is rendered separately as widgets.
Data source used: hackathon-provided GRID data for 2024. Assets from https://dash.valorant-api.com/
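As a rough illustration of the flattening step (function and key names here are hypothetical; the real pipeline runs in SageMaker notebooks), deeply nested match JSON can be collapsed into flat key/value pairs that map directly to SQL columns:

```python
def flatten(record, parent_key="", sep="_"):
    """Recursively collapse nested dicts/lists into a single-level dict.

    Keys are joined with `sep`, so {"team": {"score": 13}} becomes
    {"team_score": 13} -- ready to insert as columns of a SQL row.
    """
    items = {}
    if isinstance(record, dict):
        for key, value in record.items():
            new_key = f"{parent_key}{sep}{key}" if parent_key else key
            items.update(flatten(value, new_key, sep))
    elif isinstance(record, list):
        for index, value in enumerate(record):
            items.update(flatten(value, f"{parent_key}{sep}{index}", sep))
    else:
        items[parent_key] = record
    return items

row = flatten({"match": {"teams": [{"name": "FNC", "score": 13}]}})
# row == {"match_teams_0_name": "FNC", "match_teams_0_score": 13}
```

Applied repeatedly, this takes even 13-level-deep structures down to a single level in one pass.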
Challenges we ran into
- Flattening complex source data: nesting in the source data reaches 13 levels deep; this had to be compiled and organized down to 1 or 2 levels for efficient storage and querying in the database.
- Highly flexible query tool: a tool designed for language model use must be as flexible as possible to ensure content-rich and varied responses.
- Name-to-ID conversion: users send prompts using team names, and these often do not match the official names (e.g. Vitality -> Team Vitality, team names with non-ASCII characters, etc.), so the ID finder tool must get creative when handling inputs.
- Model response and tool use request conflicts: the chat is designed as a three-party interaction rather than a user-assistant interaction, the third party being the backend holding the tools; when the model requests a tool use, it also includes a message that is not relevant to the user.
- Model response and widget rendering conflicts: by default the model relays the data returned by tools in its response, which conflicts with us additionally rendering that data in our interactive widgets; system prompts were not able to stop this.
- User experience issues: because messages attached to tool use requests are not sent to the user, there are prolonged periods where the user has no visual feedback, reducing engagement and interactivity.
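The name-to-ID challenge above can be sketched roughly as follows (the lookup table, function names, and thresholds are illustrative; the real data lives in D1):

```python
import difflib
import unicodedata

# Illustrative lookup table; the real IDs come from the GRID dataset.
TEAM_IDS = {
    "team vitality": 101,
    "fnatic": 102,
    "team liquid": 103,
}

def normalize(name: str) -> str:
    """Lowercase and strip accents/non-ASCII so 'Vitalité' becomes 'vitalite'."""
    decomposed = unicodedata.normalize("NFKD", name.lower().strip())
    return "".join(ch for ch in decomposed if ch.isascii())

def find_team_id(user_input: str):
    """Resolve a user-typed team name to an ID, tolerating partial names."""
    query = normalize(user_input)
    if query in TEAM_IDS:
        return TEAM_IDS[query]
    # Substring match handles 'Vitality' -> 'team vitality'.
    for official, team_id in TEAM_IDS.items():
        if query and query in official:
            return team_id
    # Fall back to closest spelling, e.g. 'fnatik' -> 'fnatic'.
    close = difflib.get_close_matches(query, TEAM_IDS, n=1, cutoff=0.75)
    return TEAM_IDS[close[0]] if close else None
```

Exact match, substring match, and fuzzy match are tried in that order so the cheap, unambiguous paths win before any guessing happens.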
Accomplishments that we're proud of
We already have a few of our pro team clients and content creators using it.
What we learned
- Building complex SQL queries: for the tools to be flexible, they must build SQL queries on demand, fitted to each request.
- Prefills over system prompts: since system prompts could not stop the model from repeating data already shown in widgets, prefilling the response forces the text response to skip the queried data and instead offer key takeaways and interpretations.
- Returning the model's live activity: while the user waits for a response, the message attached to each tool use request is surfaced as "thoughts" in temporary secondary text, letting users know their prompt is being processed step by step.
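The on-demand query building described above can be sketched like this (table and column names are hypothetical, not our actual D1 schema; in practice identifiers would also be validated against an allowlist):

```python
def build_query(table, select=("*",), filters=None, group_by=None):
    """Assemble a parameterized SQL query from a model's tool-call arguments.

    `filters` maps column -> value; values are bound as `?` placeholders so
    the model cannot inject raw SQL through its arguments. Note that `table`,
    `select`, and `group_by` are interpolated directly, so real code must
    check them against an allowlist of known identifiers.
    """
    sql = f"SELECT {', '.join(select)} FROM {table}"
    params = []
    if filters:
        clauses = []
        for column, value in filters.items():
            clauses.append(f"{column} = ?")
            params.append(value)
        sql += " WHERE " + " AND ".join(clauses)
    if group_by:
        sql += f" GROUP BY {group_by}"
    return sql, params

sql, params = build_query(
    "player_stats",
    select=("player", "AVG(kills)"),
    filters={"team": "FNC", "year": 2024},
    group_by="player",
)
# sql == "SELECT player, AVG(kills) FROM player_stats WHERE team = ? AND year = ? GROUP BY player"
# params == ["FNC", 2024]
```

Because the model supplies only the arguments and the backend assembles the SQL, one tool covers filtering, grouping, and selecting without a separate tool per statistic.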
What's next for Augment Codex
Add the rest of our features: highlight generation, simplifying digital media management for broadcasting teams (for example, generating 2024 highlights of Boaster in clutch scenarios), helping with storywriting for broadcasts (tracking stats of key players and teams), and providing an easier way for teams to anti-strat during tournaments. We are also potentially expanding into League of Legends in the following months.
We are also open to other ideas from the jury.
Built With
- amazon-web-services
- aws-sagemaker-notebooks
- bedrock
- cloudflare
- workers
- durable-objects
- d1
- r2
- serverless
- sql
- websockets