Inspiration 🌟

Two weeks ago was OpenAI's Dev Day, so I thought, "What better to use than their newly released Text-to-Speech (TTS) API?" And with that, I started brainstorming. I'm indecisive, so what if I made something that made decisions for me? What if I made something where I could see and read every opinion? Read? Why should I read when I can have it speak to me, and why should I type when I can talk to it? Oh, and of course we can't forget to add in some cute assets! ...And that was how InnerVoice was born.

What it does 📱

InnerVoice is a web app for the indecisive, or for those who want to hear every opinion. It features eight contrasting personas, from The Practical Realist and The Creative Dreamer to The Charismatic Leader, each with their own take on the world. Whether you want to be persuaded or just enjoy watching two personalities clash, InnerVoice is easy to use and typing-free.
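To give a feel for how contrasting personas can be driven from one model, here is a minimal sketch: a mapping from persona names to system prompts, with a lookup helper. The prompt wording and the helper are my own illustration, not InnerVoice's actual prompts.

```python
# Hypothetical persona -> system prompt table (three of the eight shown).
# The prompt text here is illustrative, not InnerVoice's real prompts.
PERSONAS = {
    "The Practical Realist": "You give grounded, no-nonsense advice focused on feasibility.",
    "The Creative Dreamer": "You answer with imaginative, unconventional ideas.",
    "The Charismatic Leader": "You answer with confident, motivating advice.",
}

def system_prompt(persona: str) -> str:
    """Return the system prompt for a persona, defaulting to the realist."""
    return PERSONAS.get(persona, PERSONAS["The Practical Realist"])
```

Each user question would then be sent to the chat model once per selected persona, with only the system prompt changing between calls.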

How I built it 🔨

There were three parts to this: Design, Frontend, and Backend. My forte is Design and Frontend, so I was confident and had plenty of experience there, but my Backend experience was limited, having done it only once before. Knowing this, I set deadlines for myself, and here's how it went: the Design was done in Figma and finished within two hours. The Frontend used Next.js, TypeScript, Tailwind, and DaisyUI (a styling component library new to me) and was finished by midnight on Friday, after which I went to sleep. The Backend was Flask, accompanied by the Whisper, TTS, and OpenAI APIs, plus LangChain. This took more than a day to complete, but I managed to finish everything I set out to do.
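A minimal sketch of the kind of Flask backend described above, with one GET and one POST endpoint. The route names and payload shape are my assumptions, and the handler returns a canned reply; the real app would forward the prompt to the OpenAI chat API (e.g. via LangChain).

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/health", methods=["GET"])
def health():
    # GET: a read-only status check, no request body needed.
    return jsonify({"status": "ok"})

@app.route("/ask", methods=["POST"])
def ask():
    # POST: the frontend sends a JSON body with the question and chosen persona.
    data = request.get_json(force=True)
    persona = data.get("persona", "The Practical Realist")
    question = data.get("question", "")
    # Placeholder reply; the real handler would call the OpenAI chat API here.
    return jsonify({"persona": persona, "reply": f"[{persona} answering: {question!r}]"})
```

With the Frontend and Backend stored separately, the Next.js app just `fetch`es these routes; run the server with `flask --app app run`.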

Challenges I ran into 🚧

The number of challenges I encountered while building the Backend made me realize that Backend developers are geniuses. This was my first solo project and only my second time doing Backend, which felt like wading through mud. I learned the basics, all under a time crunch: setting up Flask, understanding what endpoints are, and the difference between GET and POST requests. A major setback was implementing the Speech-to-Text (STT) and Text-to-Speech (TTS) functions, especially since the TTS API had been released only two weeks earlier. There was only one reliable piece of documentation, and being sleep-deprived and a Backend beginner didn't help. Eventually, I got it working.
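For reference, the two audio calls boil down to something like the sketch below, using the OpenAI Python SDK's audio endpoints as documented around Dev Day 2023. The model and voice names are assumptions, and a real run needs the `openai` package installed and `OPENAI_API_KEY` set.

```python
def transcribe(audio_path: str) -> str:
    """Speech-to-text: send an audio file to Whisper and return the transcript."""
    from openai import OpenAI  # imported here so the sketch loads without the SDK
    client = OpenAI()
    with open(audio_path, "rb") as f:
        result = client.audio.transcriptions.create(model="whisper-1", file=f)
    return result.text

def speak(text: str, out_path: str = "reply.mp3") -> str:
    """Text-to-speech: synthesize `text` with the TTS API and save it as MP3."""
    from openai import OpenAI
    client = OpenAI()
    response = client.audio.speech.create(model="tts-1", voice="alloy", input=text)
    response.stream_to_file(out_path)
    return out_path
```

Chaining `transcribe` → chat completion → `speak` gives the full voice-in, voice-out loop.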

Accomplishments that I'm proud of 🏆

For my first "real" time doing Backend, saying I learned a lot is an understatement. However, aside from the Backend knowledge, realizing I can create a successful project solo is astounding. All my previous hackathons have been with teams of at least three. Going solo is an experience I've looked forward to for a while now, and in this hackathon, I proved that I could do it. It was quite enjoyable, and I learned things I wouldn't have if I were in a team.

What I learned 📚

Starting with DaisyUI: this Tailwind-based component library proved very useful. I had never used it before and took this opportunity to try it out. Compared to plain CSS and even raw Tailwind, it made Frontend styling much faster and neater. The Backend was a whole other adventure. I learned how to use Flask, virtual Python environments, and Postman, how to create and call endpoints, how to keep the Frontend and Backend stored separately, and how to use all three APIs (TTS, Whisper, and OpenAI). This could easily be the hackathon where I learned the most.

What's next for InnerVoice 🌈

For this hackathon, I was able to finish all the basic features I wanted, but the vision is much grander. InnerVoice currently has eight personalities, yet there are so many more personas out there. With more fine-tuning, each personality could gain far more depth and expertise. The potential is huge: InnerVoice could expand into supporting mental health, becoming a virtual buddy that helps people through tough times with all sorts of advice.

Built With

Next.js, TypeScript, Tailwind, DaisyUI, Flask, Whisper, OpenAI TTS, LangChain