Inspiration

We chose this challenge because we recognized a critical gap in education. We are driven by the potential to revolutionize teacher training, giving educators a space to refine their skills without high-stakes pressure. Beyond the social impact, we were captivated by the technical complexity. Orchestrating a multi-agent system where distinct LLM personalities interact realistically offered a far more intriguing engineering challenge than standard application development.

What it does

MindSim AI is a simulator for teachers. It creates a virtual classroom where you can practice teaching by speaking into your microphone.

  • AI Students: The class is full of AI agents (like the "Class Clown" or "Shy Student") who listen and react to you.
  • Real-time Voice: You speak, and the students answer back with their own voices.
  • Feedback: The system tracks the "Mood" of the class. If students get bored or confused, you see it on a dashboard immediately.
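
The mood dashboard boils down to a small piece of logic. The sketch below is a minimal, hypothetical version (names and weights are illustrative, not our exact code): each batch of student reactions nudges a class-wide score between 0 (checked out) and 1 (fully engaged) via an exponential moving average, so the meter reacts quickly without jittering on a single outlier.

```typescript
// Hypothetical mood tracker for the dashboard. Reaction labels and the
// smoothing weight are placeholders.
type Reaction = "engaged" | "curious" | "confused" | "bored";

const REACTION_SCORE: Record<Reaction, number> = {
  engaged: 1.0,
  curious: 0.8,
  confused: 0.3,
  bored: 0.0,
};

const SMOOTHING = 0.3; // weight given to the newest batch of reactions

function updateMood(current: number, reactions: Reaction[]): number {
  if (reactions.length === 0) return current; // silence leaves the mood unchanged
  const batch =
    reactions.reduce((sum, r) => sum + REACTION_SCORE[r], 0) / reactions.length;
  return current + SMOOTHING * (batch - current);
}
```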

How we built it

We used Next.js for the frontend and Supabase for the database. The core intelligence is powered by Azure:

  • Azure OpenAI (GPT-4o) acts as the brain for the students.
  • Azure Speech Services handles Speech-to-Text (listening) and Text-to-Speech (speaking). We stream data in both directions so the conversation feels fast and natural.
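
The streaming hand-off can be sketched like this (the helper name is illustrative, not our exact code): as GPT-4o tokens arrive, we buffer them and flush each complete sentence to Text-to-Speech immediately, so a student starts "speaking" before its full reply has been generated.

```typescript
// Illustrative token-to-sentence chunker for the LLM -> TTS hand-off.
// Yields each complete sentence as soon as its closing punctuation
// appears, then yields any trailing fragment at the end of the stream.
function* flushSentences(tokens: Iterable<string>): Generator<string> {
  let buffer = "";
  for (const token of tokens) {
    buffer += token;
    // A sentence is "done" once punctuation is followed by whitespace.
    const match = buffer.match(/^(.*?[.!?])\s+(.*)$/s);
    if (match) {
      yield match[1];
      buffer = match[2];
    }
  }
  if (buffer.trim()) yield buffer.trim(); // flush the final partial sentence
}
```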

Challenges we ran into

  • Speed: Getting the AI to reply without awkward lag was hard. Streaming responses, instead of waiting for them to complete, fixed most of the delay.
  • Realism: Making the AI students sound like real teenagers rather than robots was difficult. We spent a lot of time iterating on the prompts.
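
The prompt iteration converged on short, concrete behavioral rules rather than "act like a teenager". The builder below is illustrative (our exact wording differs), but it shows the shape that worked:

```typescript
// Illustrative persona -> system prompt builder. Field names and rule
// wording are examples, not our production prompt.
interface StudentPersona {
  name: string;
  archetype: string; // e.g. "Class Clown", "Shy Student"
  quirks: string[];  // concrete behaviors the LLM can act on
}

function buildSystemPrompt(p: StudentPersona): string {
  return [
    `You are ${p.name}, a high-school student known as the "${p.archetype}".`,
    `Stay in character. Speak in 1-2 short, casual sentences like a real teenager.`,
    `Never explain that you are an AI. Never lecture the teacher.`,
    `Behavioral quirks: ${p.quirks.join("; ")}.`,
  ].join("\n");
}
```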

Accomplishments that we're proud of

We are proud that the live voice interaction actually works end to end. Watching the "Mood Meter" move up and down as we talk to the students feels amazing. We also built a generator that creates a new, unique student with one click.
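
A stripped-down version of the one-click generator looks like this. It only samples from fixed pools; the pools and voice names here are examples (the voices are standard Azure neural voice IDs), and the real generator also asks GPT-4o for a backstory, which is omitted:

```typescript
// Simplified one-click student generator. Pools are illustrative; the
// GPT-4o backstory step from the real app is left out.
const ARCHETYPES = ["Class Clown", "Shy Student", "Overachiever", "Daydreamer"];
const NAMES = ["Maya", "Leo", "Priya", "Sam", "Nora"];
const VOICES = ["en-US-AnaNeural", "en-US-GuyNeural", "en-US-JennyNeural"];

interface GeneratedStudent {
  name: string;
  archetype: string;
  voice: string;
}

// `rand` is injectable so the sampling is testable; defaults to Math.random.
function generateStudent(rand: () => number = Math.random): GeneratedStudent {
  function pick<T>(pool: T[]): T {
    return pool[Math.floor(rand() * pool.length)];
  }
  return { name: pick(NAMES), archetype: pick(ARCHETYPES), voice: pick(VOICES) };
}
```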

What we learned

We learned a lot about the Azure Speech SDK and how to connect it to LLMs in real time. We also learned that managing the "silence" in a conversation is just as important as the speaking parts.
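
The silence lesson comes down to deciding when the teacher has actually finished talking. A fixed timeout interrupts people mid-thought, so the idea is to wait longer after a trailing fragment than after a complete sentence. The function below is a sketch with placeholder thresholds, not our exact tuning:

```typescript
// Sketch of end-of-utterance detection. Thresholds (in milliseconds)
// are illustrative placeholders.
function isUtteranceFinished(
  transcriptSoFar: string,
  msSinceLastSpeech: number
): boolean {
  const trimmed = transcriptSoFar.trim();
  if (trimmed === "") return false; // nothing said yet
  const endsSentence = /[.!?]$/.test(trimmed);
  // Complete sentences can be handed off quickly; fragments get extra
  // time in case the speaker is mid-sentence.
  const threshold = endsSentence ? 600 : 1500;
  return msSinceLastSpeech >= threshold;
}
```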

What's next for MindSim AI

We want to add visual emotions, so the student avatars smile or frown based on their mood. We also plan to add harder scenarios, like talking to an angry parent or handling an exam situation.
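
A rough sketch of how the avatar-emotion feature could map the existing mood score (0 = checked out, 1 = fully engaged) onto an expression the frontend renders; the thresholds and expression names are placeholders for a feature we have not built yet:

```typescript
// Speculative mapping for the planned avatar-emotion feature.
// Thresholds and labels are placeholders.
type Expression = "frown" | "neutral" | "smile";

function expressionForMood(mood: number): Expression {
  if (mood < 0.35) return "frown";
  if (mood < 0.65) return "neutral";
  return "smile";
}
```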

Built With

  • Next.js
  • Supabase
  • Azure OpenAI (GPT-4o)
  • Azure Speech Services
