Inspiration
More than 1 billion devices had voice assistant integration by the end of 2018, most of them smartphones with deep integration of Google Assistant and Apple's Siri. We're sure that number jumped again after COVID, when going digital became the norm. That ubiquity means users are getting used to activating their devices by voice.
Thus, voice interaction provides:
- increased convenience
- increased accessibility
- a more human experience
With Biden's recent announcement of student loan debt relief, finances are on our minds these days. That, combined with our team's interest in natural language understanding, our passion for UI/UX design and data visualization, and our desire to build a portfolio project, led us to create a voice-activated expense tracker.
What it does
We created a voice-activated expense tracker that logs your earnings and expenses by manual input or by voice. We then offer a feature to share that information in aggregate, helping you and others visualize monthly student expenses, filtered by category.
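The per-category, per-month aggregation described above can be sketched roughly as follows. This is a hypothetical illustration, not our actual code; the names (`Expense`, `totalsByCategory`) and the sample categories are made up for the example.

```typescript
// Hypothetical expense record, assuming ISO date strings.
interface Expense {
  amount: number;   // dollars; positive = expense
  category: string; // e.g. "Food", "Rent", "Books"
  date: string;     // ISO date, e.g. "2022-09-14"
}

// Sum expenses per category for a given month ("YYYY-MM").
function totalsByCategory(
  expenses: Expense[],
  month: string
): Record<string, number> {
  const totals: Record<string, number> = {};
  for (const e of expenses) {
    // ISO dates sort/group cleanly by their "YYYY-MM" prefix.
    if (e.date.startsWith(month)) {
      totals[e.category] = (totals[e.category] ?? 0) + e.amount;
    }
  }
  return totals;
}
```

Whether the entry came from manual input or from a transcribed voice command, it can land in the same record shape, so the aggregation and visualization layers don't need to care about the input channel.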
How we built it
We built the front end in React and the natural language understanding/voice-activation piece with bits of Node, C#, and Azure services. We used MongoDB on the back end to persist data, and we built the visualizations with Chart.js and D3.
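As a rough sketch of how the aggregated totals could feed Chart.js, here is a plain bar-chart config object (the shape Chart.js v3+ expects). The `categoryTotals` numbers are invented for the example, and in the real app the object would be passed to `new Chart(ctx, chartConfig)` in the browser.

```typescript
// Assumed sample output of the aggregation step; values are made up.
const categoryTotals: Record<string, number> = {
  Food: 240,
  Rent: 800,
  Books: 120,
};

// Chart.js-style config: labels are the categories, the single
// dataset holds the dollar totals in the same order.
const chartConfig = {
  type: "bar" as const,
  data: {
    labels: Object.keys(categoryTotals),
    datasets: [
      {
        label: "Monthly expenses ($)",
        data: Object.values(categoryTotals),
      },
    ],
  },
};
```

Keeping the config a plain object makes it easy to unit-test the data mapping without rendering a chart.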
Challenges we ran into
We found we could not implement real-time voice transcription within the time constraints of this hackathon, and we discovered this late enough that starting over on an online platform would have been rough. Instead, we found a way to leverage the service to demo how it would work.
The React front end was also new to our team, so there was a learning curve both in building the front end and in integrating it with the back end. In a 24-hour hackathon, we did not have enough time to finish that integration.
Accomplishments that we're proud of
We all stepped out of our comfort zones to learn technologies that were new to us.
What we learned
We learned which paths are closed to us in a 24-hour hackathon, such as training real-time voice transcription with cloud services, and which paths we could pursue in the future for an easier implementation, such as developing on a platform like Google Actions/Dialogflow.
We also grew an appreciation for writing down project requirements and organizing important information, like our data structures, up front.
What's next for Voice Wallet
Developing on another platform!
