
By the way, as many others have probably noticed: if you ask the model many questions in a row, there's a 20,000-token-per-minute limit, so you may run into throttling. This is especially likely if you create many teams in the same conversation, because all of that data stays cached in its context and each request carries it along.
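As a rough illustration of how to stay under a tokens-per-minute cap client-side, here is a minimal sketch of a sliding-window budget tracker. The `TokenBudget` class, its method names, and the 20,000 figure are assumptions for illustration, not part of any official API:

```python
import time

TOKENS_PER_MINUTE = 20_000  # hypothetical limit taken from the note above


class TokenBudget:
    """Tracks token usage over a sliding 60-second window."""

    def __init__(self, limit=TOKENS_PER_MINUTE, window=60.0, clock=time.monotonic):
        self.limit = limit
        self.window = window
        self.clock = clock
        self.events = []  # list of (timestamp, tokens) pairs

    def _prune(self, now):
        # Drop usage records older than the window
        self.events = [(t, n) for t, n in self.events if now - t < self.window]

    def try_spend(self, tokens):
        """Record usage if it fits in the current window; return False to signal a wait."""
        now = self.clock()
        self._prune(now)
        used = sum(n for _, n in self.events)
        if used + tokens > self.limit:
            return False  # caller should sleep and retry
        self.events.append((now, tokens))
        return True
```

Before each request you'd call `try_spend(estimated_tokens)` and, on `False`, pause until the oldest record ages out of the window rather than retrying immediately.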
