Inspiration
We kept coming back to one number: 500,000 small trucking fleets in the United States, running on spreadsheets and phone calls. Enterprise TMS software exists for the big carriers, but the owner-operator running 10 trucks has nothing. Every load assignment is a gut call. Every delay is discovered when the customer calls. Every HOS violation is a surprise. We wanted to build the tool that actually serves them.
What it does
DispatchIQ is an AI copilot for fleet dispatchers. It gives small trucking operations four things they've never had in one place:
- Live Fleet Dashboard — real driver cards showing location, HOS remaining, fuel level, load progress, cost per mile, and automated severity-scored alerts for out-of-route deviations, fuel risk, and schedule delays. All grounded in live Supabase data.
- Smart Dispatch — type in a load origin and destination, get a ranked list of available drivers with AI reasoning covering HOS compliance, proximity, fuel status, and cost per mile. No more gut calls.
- AI Chat Assistant — ask any fleet question in plain English ("Who can take a Phoenix to Dallas load right now?") and get a structured, data-grounded answer card in under 2 seconds, powered by Groq LLaMA 3 and a 9-intent LangChain router (sketched just after this list).
- Voice Copilot — press a button and ElevenLabs reads your fleet status or critical alerts aloud in a natural voice. Built for dispatchers who are on the phone, driving, or just need their hands free.
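Under the hood, the chat path boils down to: classify the question into one of the nine intents, pull the matching fleet data, then let the LLM compose the answer. The real router is built with LangChain; the sketch below shows roughly the flow it implements, where the intent names, the Groq model id, and the fetch_fleet_data helper are illustrative stand-ins rather than our production values:

```python
# Rough shape of the chat flow: classify the question into one fleet intent,
# fetch grounded data for that intent, then have the LLM compose the answer.
# Intent names, the model id, and fetch_fleet_data are illustrative.
import os
from groq import Groq  # pip install groq

INTENTS = [
    "driver_availability", "hos_status", "fuel_risk", "load_status",
    "route_deviation", "cost_per_mile", "schedule_delay",
    "dispatch_recommendation", "fleet_summary",
]

client = Groq(api_key=os.environ["GROQ_API_KEY"])

def classify_intent(question: str) -> str:
    """Ask the model for exactly one intent label; fall back to a summary."""
    resp = client.chat.completions.create(
        model="llama3-8b-8192",  # assumed Groq model id
        messages=[
            {"role": "system",
             "content": "Reply with exactly one of: " + ", ".join(INTENTS)},
            {"role": "user", "content": question},
        ],
    )
    label = resp.choices[0].message.content.strip()
    return label if label in INTENTS else "fleet_summary"

def answer(question: str, fetch_fleet_data) -> str:
    """Route the question, ground it in fleet data, compose a reply."""
    intent = classify_intent(question)
    context = fetch_fleet_data(intent)  # e.g. rows pulled from Supabase
    resp = client.chat.completions.create(
        model="llama3-8b-8192",
        messages=[
            {"role": "system",
             "content": f"Answer the dispatcher using only this data:\n{context}"},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content
```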
How we built it
We split the backend into two Flask services deployed on Railway. The Fleet API (port 8000) handles driver data from Supabase PostgreSQL, driver scoring, out-of-route ratio calculations using haversine distance, and load recommendations. The AI Chat service (port 5001) runs a LangChain intent router with 9 intents, calls Groq LLaMA 3 for reasoning, and pipes responses to ElevenLabs for voice synthesis.
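The out-of-route check is the easiest piece to show: compare the miles a driver has actually logged on a leg against the great-circle (haversine) distance covered. A minimal sketch, with illustrative field names and an illustrative alert threshold:

```python
# Out-of-route ratio: miles actually driven on the current leg vs. the
# great-circle distance from the leg's origin to the driver's GPS fix.
# Field names and the 1.2 alert threshold are illustrative.
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_MI = 3958.8

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in miles."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * EARTH_RADIUS_MI * asin(sqrt(a))

def out_of_route_ratio(miles_driven, origin, current_pos):
    """Ratios well above 1.0 mean the driver is covering far more ground
    than straight-line progress accounts for, i.e. likely off route."""
    direct = haversine_miles(origin["lat"], origin["lon"],
                             current_pos["lat"], current_pos["lon"])
    return miles_driven / direct if direct else 0.0

# Example: 150 miles logged, but only ~106 straight-line miles of progress
ratio = out_of_route_ratio(
    150.0,
    {"lat": 33.4484, "lon": -112.0740},  # leg origin (Phoenix)
    {"lat": 32.2226, "lon": -110.9747},  # driver's current GPS fix
)
if ratio > 1.2:
    print(f"out-of-route alert, severity proportional to ratio {ratio:.2f}")
```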
The frontend is React with TypeScript and Tailwind CSS, deployed on Vercel, with a provider-agnostic fleet data abstraction layer that switches seamlessly between live Supabase data and mock fallback — so the AI layer never breaks even if the ops backend is unreachable.
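The abstraction itself is small: one provider interface, a live implementation, a mock one, and a guarded fallback. The real code is TypeScript in the React frontend; the pattern, sketched here in Python with illustrative names, is essentially:

```python
# Shape of the provider fallback: callers ask for fleet data and never know
# which provider answered. A real Supabase query goes where the live provider
# raises; names and fields here are illustrative.
from typing import Protocol

class FleetDataProvider(Protocol):
    def get_drivers(self) -> list[dict]: ...

class LiveSupabaseProvider:
    def get_drivers(self) -> list[dict]:
        # query the drivers table via the Fleet API / Supabase client here
        raise ConnectionError("ops backend unreachable")

class MockProvider:
    def get_drivers(self) -> list[dict]:
        return [{"id": "D-101", "name": "Demo Driver", "hos_remaining_hrs": 6.5}]

def get_fleet_snapshot(primary: FleetDataProvider, fallback: FleetDataProvider):
    """Serve live data when we can, mock data when we can't, same shape either way."""
    try:
        return primary.get_drivers()
    except Exception:
        return fallback.get_drivers()

print(get_fleet_snapshot(LiveSupabaseProvider(), MockProvider()))
```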
We set up full CI/CD from day one: GitHub Actions for both services, Railway Git integration for auto-deploy on every push to main, and Vercel connected to the repo for zero-touch frontend deploys.
Challenges we ran into
The hardest part was the merge conflict marathon. Three branches — deployment, integrate, and main — had diverged significantly, and the TTS implementation had been built differently in each one. We had to surgically resolve conflicts in chat_service.py, tts.py, and config.py while preserving the correct behavior from each branch.
The voice agent was broken right up until the final hours. The root cause turned out to be three separate issues stacked on top of each other: a hardcoded localhost URL in the frontend, a route name mismatch between /api/tts and /api/voice/tts, and the frontend expecting JSON with base64 audio while the backend was streaming raw audio/mpeg bytes. Debugging across Railway logs, browser DevTools network tab, and two deployed services simultaneously was genuinely painful.
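That third issue is worth spelling out, because it's an easy trap: the two sides of the app assumed different response contracts for the same endpoint. A minimal Flask sketch of both contracts, where synthesize() stands in for the actual ElevenLabs call and the route paths are illustrative:

```python
# The same TTS endpoint under two different response contracts; the bug was
# that the frontend assumed one while the backend implemented the other.
# synthesize() stands in for the ElevenLabs call and returns MP3 bytes.
import base64
from flask import Flask, Response, jsonify, request

app = Flask(__name__)

def synthesize(text: str) -> bytes:
    """Placeholder for the ElevenLabs text-to-speech request."""
    raise NotImplementedError

@app.post("/api/voice/tts")
def tts_stream():
    # Contract A: raw audio/mpeg bytes the browser can hand straight to <audio>
    audio = synthesize(request.json["text"])
    return Response(audio, mimetype="audio/mpeg")

@app.post("/api/voice/tts-json")
def tts_json():
    # Contract B: JSON wrapping base64 audio, decoded client-side before playback
    audio = synthesize(request.json["text"])
    return jsonify({"audio_base64": base64.b64encode(audio).decode("ascii")})
```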
Railway's build cache bit us too: after we pushed a fix, the old code kept running for 40 minutes because every Docker layer had been cached. We learned to force a cache bust whenever a route refuses to appear even though the code is correct.
Accomplishments that we're proud of
Getting the full stack — two Railway backends, Vercel frontend, Supabase database, Groq, and ElevenLabs — all talking to each other in production, with real data, within 23 hours is something we're genuinely proud of. The voice copilot working end-to-end in the deployed version (not just locally) felt like a real win given how close to the wire it came.
The provider abstraction layer is also something we think is genuinely good engineering. The AI layer doesn't know or care whether it's talking to live Supabase data or the mock provider — which means plugging in the Trucker Path NavPro live GPS feed is a configuration change, not a rewrite.
What we learned
Ship the CI/CD pipeline first, not last. We spent hours manually deploying and debugging environment variable mismatches that a proper pipeline would have caught in minutes. By the time GitHub Actions, Railway Git integration, and Vercel were all wired up correctly, the last few hours of the hackathon became dramatically more productive.
We also learned that debugging across multiple deployed services requires discipline — always check the Railway runtime logs before the browser console, because the error you see in the browser is almost never the real error.
What's next for DispatchIQ
The Trucker Path NavPro API integration is already architected — plugging in live GPS and ELD feeds requires swapping the mock provider for the live API provider, which is a single configuration change. From there, the roadmap is real-time push notifications for HOS violations, a mobile-optimized dispatcher view, and a $49/dispatcher/month SaaS offering targeting the 500,000 small fleets that enterprise TMS has left behind.
Built With
- docker
- elevenlabs
- flask
- github-actions
- groq
- javascript
- langchain
- llama-3
- postgresql
- python
- railway
- react
- supabase
- tailwind-css
- trucker-path-navpro-api
- typescript
- vercel
- vite