Inspiration

Every college student knows the feeling: you just sat down with a beautiful plate of barbeque chicken. You are ready to eat in peace. But suddenly, a wave of dread washes over you. Wait, do I have a CMSC420 project due tonight? Does my math lecture start in 15 minutes? Checking Canvas, Google Calendar, and syllabi to figure out your schedule takes time and ruins the vibe. We wanted to build a hands-free, hyper-intelligent assistant that handles the cognitive load for us. We just wanted to ask the room, "Hey Jarvis, can I eat this chicken right now?" and get a brutally honest, context-aware answer.

What it does

Barbeque Chicken is an AI-powered voice assistant that acts as your personal time-management bodyguard.

Using a fully local wake-word engine, you can activate the assistant hands-free. When you ask a question like "Can I chill right now?", the system springs into action. It dynamically queries a local SQLite database of your upcoming Canvas assignments and pulls live, recurring events from your Google Calendar.

It then feeds your current context (time, calendar, and assignments) into a Gemini-powered reasoning engine. The LLM evaluates the difficulty of your upcoming tasks based on their names and deadlines, calculates your free time, and speaks out loud—advising you to either enjoy your barbeque chicken in peace, or warning you that you have a midterm in three hours and need to start studying immediately. It can also generate and pop open a sleek, custom HTML dashboard of your schedule on the fly.
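Under the hood, that reasoning call boils down to flattening the current context into a prompt and handing it to Gemini. A minimal sketch of the assembly step (function and field names here are illustrative, not our exact code):

```python
from datetime import datetime

def build_context_prompt(now: datetime, events: list[dict], assignments: list[dict]) -> str:
    """Flatten time, calendar events, and Canvas assignments into the
    context block we feed the LLM. `events` and `assignments` are
    illustrative dicts: {"title": ..., "start"/"due": datetime}."""
    lines = [f"Current time: {now:%A %Y-%m-%d %H:%M}"]
    lines.append("Calendar events today:")
    for ev in events:
        lines.append(f"- {ev['title']} at {ev['start']:%H:%M}")
    lines.append("Upcoming Canvas assignments:")
    for a in assignments:
        lines.append(f"- {a['title']} due {a['due']:%Y-%m-%d %H:%M}")
    lines.append("Question: can the user relax right now? Be brutally honest.")
    return "\n".join(lines)
```

The model then does the actual judgment call (is "Phase 1 Project Implementation" a three-hour job or a ten-minute one?) on top of this flat text.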

How we built it

We built the core pipeline using Python, stitching together several distinct modules:

The Ears (Wake Word & STT): We used openWakeWord running on an ONNX inference framework to catch "Hey Jarvis" entirely offline without draining API credits. We then cleanly hand off the microphone lock to Google's Web Speech API for fast transcription.
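openWakeWord's Model.predict() hands back a per-frame score for each loaded wake-word model, so the trigger decision is just thresholding with a little debounce. A sketch of that decision logic (the threshold and frame count are illustrative tuning values, not our exact ones):

```python
def wake_detected(scores: dict, history: list,
                  threshold: float = 0.5, frames: int = 3) -> bool:
    """Decide whether the wake word fired.
    `scores` is the per-frame score dict that openWakeWord's
    Model.predict() returns (e.g. {"hey_jarvis": 0.87}). Requiring
    `frames` consecutive frames above `threshold` filters out one-frame
    false triggers; `history` carries state between calls."""
    history.append(max(scores.values(), default=0.0))
    del history[:-frames]  # keep only the most recent few frames
    return len(history) == frames and all(s >= threshold for s in history)
```

In the real loop this runs on each 16 kHz PCM chunk pulled from PyAudio before we ever touch a network API.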

The Brain (Intent Routing): We used the gemini-2.5-flash API not just for chatting, but as a strict JSON intent router. It takes the user's natural language, categorizes it, and maps it to local Python functions.
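The routing side is simpler than it sounds: prompt the model to emit nothing but JSON, parse it, and dispatch to a local function table. A sketch with illustrative intent names (the handler table and fallback are stand-ins for our real ones):

```python
import json

# Illustrative local handlers the router can trigger
HANDLERS = {
    "check_schedule": lambda args: f"checking schedule for {args.get('day', 'today')}",
    "open_dashboard": lambda args: "opening dashboard",
}

ROUTER_PROMPT = (
    "Classify the user's request. Respond with ONLY JSON: "
    '{"intent": "<check_schedule|open_dashboard|chat>", "args": {...}}'
)

def route(model_reply: str) -> str:
    """Parse the LLM's (hopefully) strict JSON reply and dispatch locally.
    Any malformed reply or unknown intent falls back to plain chat."""
    try:
        parsed = json.loads(model_reply)
        return HANDLERS[parsed["intent"]](parsed.get("args", {}))
    except (json.JSONDecodeError, KeyError):
        return "fallback: treat as plain chat"
```

The try/except matters: LLMs occasionally wrap the JSON in prose, and the fallback keeps one bad reply from crashing the voice loop.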

The Integrations: We bypassed tedious OAuth setups by parsing secret iCal links using icalevents to automatically expand recurring classes. Canvas assignments were queried dynamically using SQLite.
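Once icalevents has expanded the recurrences, the remaining work is just windowing: keep the occurrences that overlap the slice of time you care about. A sketch, using plain objects as stand-ins for the expanded event objects icalevents returns:

```python
def events_in_window(events, window_start, window_end):
    """Keep events overlapping [window_start, window_end), soonest first.
    `events` stands in for what icalevents gives back after expanding a
    secret iCal link: objects with .summary plus .start/.end datetimes,
    one object per expanded occurrence of a recurring class."""
    return sorted(
        (e for e in events if e.start < window_end and e.end > window_start),
        key=lambda e: e.start,
    )
```

The overlap test (`start < window_end and end > window_start`) also catches a lecture you're already sitting in, not just ones that start later.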

The Voice (TTS): We utilized pyttsx3 to tap into native Windows SAPI5 voices for zero-latency, offline text-to-speech.

Challenges we ran into

We spent hours fighting through "Dependency Hell" and battling Windows hardware locks:

C-Level Audio Deadlocks: When transitioning from the wake-word listener to the speech-to-text listener, Windows Audio Service would lock the microphone, causing the entire terminal to freeze. We had to engineer a precise hardware hand-off using .close() commands and time.sleep() delays to let the drivers breathe.
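The fix reduces to a strict release-wait-acquire sequence. Sketched here with the two sides injected as callables so the timing logic is visible (in the app, `release` is the wake-word PyAudio stream's .close() and `acquire` starts the speech-to-text listener; the settle delay was tuned by hand):

```python
import time

def hand_off_microphone(release, acquire, settle_s=0.5):
    """Hand the mic from one audio consumer to the next without a deadlock.
    `release`/`acquire` are callables; `settle_s` gives Windows Audio
    Service time to actually drop the hardware lock before the next
    consumer opens the device."""
    release()             # explicitly close the current PyAudio stream
    time.sleep(settle_s)  # let the driver breathe and free the device
    return acquire()      # now the STT engine can open the mic safely
```

Skipping the sleep is exactly what froze our terminal: the second open() would race the driver's teardown of the first stream.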

The "Silent Speaker" Bug: Modern audio drivers go to sleep to save power, which was cutting off the first few words of Jarvis's responses. We hacked a fix by injecting a 200ms, 800Hz winsound.Beep right before the TTS engine fires to force the hardware awake.
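The whole hack fits in a two-line wrapper: tone first, words second. Shown here with the beep and TTS injected as callables so the ordering is testable off-Windows (in the app, `beep` is winsound.Beep and `tts` wraps pyttsx3's say() + runAndWait()):

```python
def speak(text, beep, tts):
    """Wake a power-saving audio device, then speak.
    The 200 ms, 800 Hz tone forces the output hardware out of sleep so
    pyttsx3's first syllables stop getting swallowed."""
    beep(800, 200)  # frequency in Hz, duration in ms (winsound.Beep order)
    tts(text)
```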

API Rate Limiting: We hit 429 RESOURCE_EXHAUSTED errors during testing. We had to build exponential backoff and error-handling wrappers into the intent router to prevent the whole app from crashing during traffic spikes.
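The backoff wrapper is standard retry-with-doubling-delay plus jitter. A sketch (in the router this wraps the Gemini request; here `call` is any callable whose exception message contains "429" when throttled, and the retry counts are illustrative):

```python
import random
import time

def with_backoff(call, retries=4, base_delay=1.0):
    """Retry `call` on rate-limit errors with exponential backoff.
    Delay doubles each attempt (base, 2x, 4x, ...) plus a little random
    jitter so parallel callers don't retry in lockstep. Non-429 errors
    and the final failed attempt are re-raised."""
    for attempt in range(retries):
        try:
            return call()
        except Exception as exc:
            if "429" not in str(exc) or attempt == retries - 1:
                raise
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, base_delay))
```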

Recurring Event Math: Standard calendar parsers were deleting all our classes because the "start date" was in January. We had to pivot libraries mid-hackathon to correctly calculate RRULE expansions.
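The underlying math the broken parsers got wrong: a weekly event whose DTSTART is months old isn't "in the past", its next occurrence is the start plus a whole number of weeks. A stdlib sketch for the FREQ=WEEKLY case (our real fix leaned on icalevents rather than hand-rolling this, but the projection is the same idea):

```python
from datetime import datetime, timedelta
from math import ceil

def next_weekly_occurrence(first_start: datetime, after: datetime) -> datetime:
    """Project a weekly-recurring event (RRULE:FREQ=WEEKLY) forward:
    return its first occurrence on or after `after`. A January DTSTART
    just means we advance by however many whole weeks have elapsed."""
    if after <= first_start:
        return first_start
    weeks = ceil((after - first_start) / timedelta(weeks=1))
    return first_start + timedelta(weeks=weeks)
```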

Accomplishments that we're proud of

Zero-Click Architecture: Building a voice loop that can listen, transcribe, think, speak, and go back to listening without ever requiring a keyboard or crashing.

LLM as a Router: Successfully coercing Gemini into outputting strict, parsable JSON to trigger local UI functions, mimicking enterprise-level Model Context Protocol (MCP) behavior.

Contextual Reasoning: The AI doesn't just read a list of events; it actually understands that a "Phase 1 Project Implementation" takes more time than a "Read Chapter 2" assignment, and tailors its advice accordingly.

What we learned

Working with low-level audio streams (PyAudio) requires extremely careful memory and hardware management.

Rate limits are real: build backoff and caching early, or be ready to spend real money to make the 429s go away.

Large Language Models are incredibly powerful when used as logic routers for local code, not just text generators.

What's next for Barbeque Chicken

Real Canvas Integration: Swapping out the SQLite database for live Canvas API tokens to pull real-time grades and syllabus updates.

Model Context Protocol (MCP): Upgrading our custom JSON router to a standardized MCP Host, allowing us to easily plug in Spotify (to play focus music) or GitHub.

Built With

python, gemini-2.5-flash, openwakeword, onnx, sqlite, icalevents, pyttsx3, pyaudio, winsound
