Inspiration
We created Talking-Oski to deliver campus news and updates and to welcome students in an engaging way. We aimed to make the experience accessible to everyone, including speech capabilities for people who are blind or visually impaired, ensuring an inclusive platform for all students.
What it does
Talking-Oski is a chatbot that offers real-time campus information and updates through both text and audio. Users can ask questions about UC Berkeley, and Oski responds with informative guidance. The tool also includes speech capabilities to accommodate students with disabilities.
How we built it
We built it using:
- Next.js for the frontend and the backend API routes.
- OpenAI's GPT-3.5 for generating text responses.
- Deepgram's API for speech-to-text transcription and text-to-speech audio.
- React for the interface, handling both text and voice interactions.
- Tailwind CSS for a clean, responsive design.
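As a rough sketch of how these pieces connect (the route path, prompt wording, and helper names here are illustrative assumptions, not our exact code), a Next.js API route can forward the user's question to OpenAI's chat completions endpoint and return the reply:

```javascript
// Hypothetical Next.js API route (e.g. pages/api/ask.js). The system prompt
// and route shape are illustrative, not taken from the project's codebase.

// Build the JSON payload for a chat completions request.
function buildChatRequest(question) {
  return {
    model: "gpt-3.5-turbo",
    messages: [
      { role: "system", content: "You are Oski, a helpful UC Berkeley campus guide." },
      { role: "user", content: question },
    ],
  };
}

// Relay the question to OpenAI and return the model's answer as JSON.
async function handler(req, res) {
  const response = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify(buildChatRequest(req.body.question)),
  });
  const data = await response.json();
  res.status(200).json({ answer: data.choices[0].message.content });
}
```

On the client, the React interface would POST the typed or transcribed question to this route and either render the answer or hand it to text-to-speech.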
Challenges we ran into
Our biggest challenge was keeping transcription, text generation, and audio playback synchronized, so that interactions between the user and Oski stayed smooth.
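One pattern that helps with this kind of ordering problem (a simplified sketch, not our exact implementation) is chaining playback onto a single promise queue, so audio clips always play in the order their text was generated, even when upstream requests finish out of order:

```javascript
// A minimal promise queue: each enqueued task runs only after the previous
// one finishes, so playback order matches enqueue (generation) order.
class PlaybackQueue {
  constructor() {
    this.tail = Promise.resolve();
  }
  // Enqueue an async task; returns a promise for its result.
  enqueue(task) {
    const result = this.tail.then(task);
    // Keep the chain alive even if a task rejects.
    this.tail = result.catch(() => {});
    return result;
  }
}

// Demo: three "clips" with different durations; playback stays serialized
// in enqueue order regardless of how long each clip takes.
async function demo() {
  const queue = new PlaybackQueue();
  const played = [];
  const play = (name, ms) =>
    queue.enqueue(
      () =>
        new Promise((resolve) =>
          setTimeout(() => {
            played.push(name);
            resolve(name);
          }, ms)
        )
    );
  await Promise.all([play("clip-1", 30), play("clip-2", 5), play("clip-3", 1)]);
  return played;
}
```

Even though `clip-2` and `clip-3` are shorter, they play only after `clip-1` completes.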
Accomplishments that we're proud of
We successfully integrated real-time text and audio capabilities, creating an inclusive platform where users can interact with Oski through both text and voice. This allows us to support students with visual impairments.
What we learned
We learned a lot about API integration, real-time streaming, and handling asynchronous requests for multimedia processing, all while ensuring an inclusive user experience.
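For instance, consuming a streamed text response incrementally follows a standard reading loop (here simulated with an in-memory `ReadableStream`; a real streamed `fetch` response body is read the same way):

```javascript
// Read a streamed text response chunk by chunk, accumulating the text as it
// arrives. The loop is identical for a real network stream.
async function readStream(stream) {
  const reader = stream.getReader();
  const decoder = new TextDecoder();
  let text = "";
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    text += decoder.decode(value, { stream: true });
  }
  return text;
}

// Simulated stream that delivers a reply in several chunks.
function fakeStream(chunks) {
  const encoder = new TextEncoder();
  return new ReadableStream({
    start(controller) {
      for (const chunk of chunks) controller.enqueue(encoder.encode(chunk));
      controller.close();
    },
  });
}
```

Rendering each partial chunk as it arrives, rather than waiting for the full reply, is what keeps the interaction feeling real-time.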
What's next for Talking-Oski
- Expand Oski’s knowledge of campus resources.
- Enhance speech features with customizable voices.
- Improve scalability to handle more users simultaneously.
- Integrate with campus systems for real-time updates on events and services.
Built With
- css
- html
- javascript
- next.js
- react