Inspiration
Hotels are still screen-first environments while millions of travelers rely on voice and assistive tech. Multilingual guests and people with visual impairments struggle to request services, navigate facilities, or communicate urgent needs. We wanted a zero-screen, voice-only layer that makes every hotel interaction accessible from the first minute of arrival.
What it does
LumiVoice is a multilingual voice assistant for hotels. Guests speak in their own language to request services (towels, checkout, taxi, wake-up calls), navigate the property with audio directions, trigger accessibility or emergency help, and receive confirmations via natural TTS. The system routes each intent to hotel workflows and staff dashboards in real time.
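A rough sketch of how guest intents map onto hotel workflows and staff queues. The intent names, queues, and fields here are illustrative, not the production routing table:

```python
# Illustrative sketch of intent-to-workflow routing; names are hypothetical.
from dataclasses import dataclass

@dataclass
class GuestRequest:
    intent: str    # e.g. "request_towels", "wake_up_call", "emergency_help"
    room: str      # room number resolved from the guest session
    language: str  # language tag of the guest's speech, e.g. "es-ES"

# Each supported intent maps to a hotel workflow and a staff queue.
INTENT_ROUTES = {
    "request_towels": {"workflow": "housekeeping", "queue": "floor_staff"},
    "checkout":       {"workflow": "front_desk",   "queue": "reception"},
    "book_taxi":      {"workflow": "concierge",    "queue": "concierge"},
    "wake_up_call":   {"workflow": "front_desk",   "queue": "reception"},
    "emergency_help": {"workflow": "security",     "queue": "security", "priority": "urgent"},
}

def route(request: GuestRequest) -> dict:
    """Resolve a recognized intent to the ticket pushed to the staff dashboard."""
    target = INTENT_ROUTES.get(request.intent)
    if target is None:
        # Unknown intents fall back to a human-handled queue instead of failing silently.
        target = {"workflow": "front_desk", "queue": "reception", "priority": "review"}
    return {"room": request.room, "language": request.language, **target}
```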
How we built it
Voice-first stack: speech-to-text → intent orchestration → LLM reasoning → hotel action API → multilingual TTS. Frontend: lightweight web app optimized for screen readers. Backend: FastAPI services, vector memory for context, role-based staff panel, end-to-end logging and latency guardrails.
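A minimal sketch of that pipeline as a single FastAPI endpoint. The stage helpers (transcribe, classify_intent, plan_action, synthesize_speech) are hypothetical stand-ins for our STT, LLM, and TTS providers, and the demo return values are placeholders:

```python
# Sketch of the voice pipeline: STT -> intent -> LLM reasoning -> action API -> TTS.
from fastapi import FastAPI, UploadFile
from pydantic import BaseModel

app = FastAPI()

class VoiceReply(BaseModel):
    intent: str
    confirmation_text: str
    audio_url: str  # URL of the synthesized confirmation played back to the guest

async def transcribe(audio: bytes) -> tuple[str, str]:
    """STT stage: return (transcript, detected language). Placeholder implementation."""
    return "two towels to room 412, please", "en"

async def classify_intent(transcript: str) -> str:
    """Intent orchestration stage: map free-form speech to a hotel intent. Placeholder."""
    return "request_towels"

async def plan_action(intent: str, transcript: str) -> str:
    """LLM reasoning stage: trigger the hotel action API and draft the confirmation. Placeholder."""
    return "Two towels are on their way to room 412."

async def synthesize_speech(text: str, language: str) -> str:
    """Multilingual TTS stage: render the confirmation and return an audio URL. Placeholder."""
    return "https://example.com/audio/confirmation.mp3"

@app.post("/voice", response_model=VoiceReply)
async def handle_voice(audio: UploadFile) -> VoiceReply:
    transcript, language = await transcribe(await audio.read())
    intent = await classify_intent(transcript)
    confirmation = await plan_action(intent, transcript)
    audio_url = await synthesize_speech(confirmation, language)
    return VoiceReply(intent=intent, confirmation_text=confirmation, audio_url=audio_url)
```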
Challenges we ran into
- Handling diverse accents and noisy lobby audio
- Designing flows without any visual dependency
- Keeping responses under 2 seconds (see the latency sketch below)
- Mapping free-form speech to structured hotel operations
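Keeping replies under two seconds came down to budgeting each stage. A minimal sketch of that guardrail, assuming an asyncio-based pipeline; the 1.5 s budget and fallback phrase are illustrative values, not measured ones:

```python
# Latency guardrail sketch: cap the slow LLM stage so the guest never waits past budget.
import asyncio

LLM_BUDGET_SECONDS = 1.5

async def answer_with_budget(llm_call, fallback: str = "One moment, I'm on it.") -> str:
    """Run the slow LLM stage, but fall back to a canned acknowledgement on timeout."""
    try:
        return await asyncio.wait_for(llm_call(), timeout=LLM_BUDGET_SECONDS)
    except asyncio.TimeoutError:
        # Acknowledge immediately; the full answer can follow as a second utterance.
        return fallback
```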
Accomplishments that we're proud of
- Fully functional voice-only journey in 3 languages
- 92% intent accuracy on hospitality scenarios
- Accessible UX validated with screen-reader users
- Working staff dashboard with live tickets
What we learned
Accessibility improves the experience for all guests, not only those with disabilities. Voice design requires tighter constraints than chat design. Real-world hospitality needs deterministic workflows on top of generative AI.
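To make that last point concrete, here is a minimal sketch of the deterministic layer, assuming a Pydantic schema; the field names and intent list are illustrative:

```python
# The LLM may phrase replies freely, but any action it proposes must validate
# against a closed schema before it touches hotel systems.
from typing import Literal

from pydantic import BaseModel, ValidationError

class HotelAction(BaseModel):
    intent: Literal["request_towels", "checkout", "book_taxi", "wake_up_call", "emergency_help"]
    room: str
    quantity: int = 1

def parse_action(llm_json: str) -> HotelAction | None:
    """Accept the LLM's proposal only if it matches the closed action schema."""
    try:
        return HotelAction.model_validate_json(llm_json)
    except ValidationError:
        # Anything outside the schema is escalated to staff instead of being executed.
        return None
```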
What's next for LumiVoice
- PMS integrations (Opera, Cloudbeds)
- Offline fallback for poor connectivity
- Adaptive accent model
- Pilot deployment in a partner hotel with analytics on wait-time reduction
Built With
- elevenlabs (TTS)
- fastapi
- next.js
- redis
- vercel