As college students who have all sat through long lectures, we decided to find a way to enhance the lecturing and presentation experience for both the presenter and the audience.
What it does
FlowDecks recognizes specific keywords through natural language processing and advances slides naturally and automatically. The presenter and the audience have separate views but can both watch the presentation in real time. This makes for a more engaging and enjoyable presentation experience.
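The keyword-driven advancement described above could be sketched roughly like this (a minimal illustration, not FlowDecks' actual code; the object shape and function names are assumptions). Each slide lists the keywords that, once heard in the live transcript, move the deck forward:

```javascript
// Illustrative deck structure: each slide declares the spoken keywords
// that trigger advancement to the next slide.
const deck = {
  current: 0,
  slides: [
    { title: "Intro", advanceOn: ["agenda", "overview"] },
    { title: "Demo", advanceOn: ["conclusion", "wrap up"] },
    { title: "Questions", advanceOn: [] },
  ],
};

// Called with each transcript fragment from the speech recognizer;
// advances the deck when the current slide's keywords are heard.
function maybeAdvance(deck, transcript) {
  const slide = deck.slides[deck.current];
  const text = transcript.toLowerCase();
  const hit = slide.advanceOn.some((kw) => text.includes(kw));
  if (hit && deck.current < deck.slides.length - 1) {
    deck.current += 1;
  }
  return deck.current;
}
```

In a real deployment the transcript fragments would come from a streaming speech-recognition callback rather than direct calls.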
How we built it
We used a microservice architecture for our backend and HTML and CSS for our frontend. We used Firebase to store the content of the slides, and Google's speech API together with natural language processing to recognize speech. To sync the presenter's view with the audience's view, we used WebSockets.
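The WebSocket sync can be thought of as a small JSON message protocol: the presenter's client sends a slide-change message, and the server rebroadcasts it to every audience socket in the same room. The sketch below shows that message format as pure functions (field names like `type`, `room`, and `index` are assumptions, not FlowDecks' actual protocol):

```javascript
// Build the message a presenter's client sends when the slide changes.
function makeSlideMessage(roomId, slideIndex) {
  return JSON.stringify({ type: "slide", room: roomId, index: slideIndex });
}

// Applied on each audience client when a raw WebSocket message arrives;
// jumps the local view to the presenter's current slide.
function handleMessage(raw, view) {
  const msg = JSON.parse(raw);
  if (msg.type === "slide") {
    view.slideIndex = msg.index;
  }
  return view;
}
```

With a library such as `ws` on the server, broadcasting would amount to calling `socket.send(makeSlideMessage(roomId, index))` for each connected client in the room.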
Challenges we ran into
Our biggest challenge was designing a data format for our slides in Firebase that made it easy to organize the messaging protocol for our room system. We also ran into problems with WebSockets and with sending voice data between the server, presenters, and clients.
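One possible shape for room-keyed slide data is sketched below (purely illustrative; the actual FlowDecks schema is not shown in this write-up). Keying everything by room ID is one way to make routing room-scoped messages straightforward:

```javascript
// Hypothetical room-keyed slide data, similar in spirit to what a
// Firebase Realtime Database tree might hold.
const roomData = {
  rooms: {
    "room-abc": {
      presenter: "uid123",
      currentSlide: 0,
      slides: {
        0: { title: "Welcome", body: "Thanks for coming", advanceOn: ["next topic"] },
        1: { title: "Results", body: "What we found", advanceOn: [] },
      },
    },
  },
};

// Helper to look up the slide a room is currently showing.
function currentSlide(data, roomId) {
  const room = data.rooms[roomId];
  return room.slides[room.currentSlide];
}
```

Because the room ID appears at the top of the tree, a sync message only needs to carry the room ID and a slide index for the server to resolve the right content.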
Accomplishments that we're proud of
We are proud of getting our WebSockets to work, as well as our use of Firebase. We are also happy with how our CSS styling came out. All in all, we are pleased with how the initial idea came to fruition and with our creation of an interesting and engaging educational tool.
What we learned
We definitely improved our CSS design skills, and we learned more about Firebase's functionality. We had also never used WebSockets before, and we were surprised by how easy it was to create our own messaging protocols.
What's next for FlowDecks
Our microservice architecture allows for easy scalability, so adding features that would enable more student engagement (like clicker or polling features) is a definite possibility. On the presenter side, we hope to make it easy for anyone to create their own presentations compatible with FlowDecks.