Inspiration

The Idea

Mental health has become a widespread concern, and managing it calls for constant emotional regulation for many people. What better tool for that than music? With plenty of documented cases of music acting as a healing medium for people struggling with mental health, we set out to build an app that helps users cope with their emotions through music.

Similar Cases

When we first surveyed the feasibility of this idea, we found that plenty of projects had already done something similar, which got us thinking: what could make this app stand out from the rest? Who really needs such an app? The answer came quickly: users who struggle with mental health at a clinical level, particularly those prone to sudden panic attacks. In a sense, we are building a solution for people we know who are going through exactly that.

Pinning Down The Idea

With the end goal in mind, we discussed what else we could do to improve the overall user experience compared with similar projects. Finally, our stage was set: build a personalized, AI-powered music recommendation app driven by emotions, with a UI that is as simple and straightforward as possible, curating personalized messages and music that best fit the occasion, from everyday moods to potentially life-threatening moments.

What it does

The main features of the app are:

  • Music Recommendation based on Emotions/State of Mind
  • Emotion Journaling
  • AI-powered "Paragraph of the Day" that outputs words of affirmation based on your emotion and its description
  • "Panic Button": instant music playback for users experiencing an alarming burst of emotion (e.g. a panic attack)

How we built it

Frontend

  • Expo
  • React Native

Moodz is an emotion-based audio therapy app, the kind of experience users reach for in vulnerable moments: during a panic attack, before bed, or when they need to decompress. That demands a native-feeling presence on the device, not a browser tab they might never reopen.

Why React Native over a pure web app?

  1. Push notifications for consistency: Therapeutic apps live or die by daily engagement. Native push notifications let us send gentle check-in reminders ("How are you feeling today?") that keep users building their journaling streak. Web push is unreliable and unsupported on iOS Safari.
  2. Still one codebase: React Native with Expo lets us ship iOS, Android, and web from the same TypeScript codebase. We're not sacrificing development speed: we get the cross-platform benefits of web development with native capabilities where it matters.
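As a rough sketch of the daily check-in reminder described above (a hypothetical helper, not the actual Moodz source), the payload below matches the shape that expo-notifications' `scheduleNotificationAsync()` expects for a repeating daily trigger; the function name and copy are our own illustration.

```typescript
// Hypothetical helper for the daily check-in reminder (not real Moodz code).
// The returned object is shaped for expo-notifications'
// scheduleNotificationAsync({ content, trigger }) with a daily trigger.

interface CheckInReminder {
  content: { title: string; body: string };
  trigger: { hour: number; minute: number; repeats: boolean };
}

function buildCheckInReminder(hour: number, minute: number): CheckInReminder {
  return {
    content: {
      title: "Moodz",
      body: "How are you feeling today?", // gentle daily prompt
    },
    // Fires every day at the given local time.
    trigger: { hour, minute, repeats: true },
  };
}

// Example: a 9:00 PM check-in.
const reminder = buildCheckInReminder(21, 0);
console.log(reminder.content.body); // "How are you feeling today?"
```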

Backend

  • Express.js: backend framework serving all Elastic- and S3-related REST API calls
  • Supabase: hosted low-code database that connects to the frontend directly, without creating or expanding a backend
  • Elastic: search based on tag queries and semantic analysis of journal paragraphs, plus GenAI to output personalised prompts based on the user's emotions
  • S3: stores all audio files in cloud storage for easy, secure playback access
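To illustrate the tag-based search mentioned above, the sketch below builds an Elasticsearch bool query matching tracks by emotion tag. The field name `emotionTags` and the query shape are our assumptions, not the actual Moodz schema; the returned object is a standard query body that could be passed to `client.search()` from `@elastic/elasticsearch`.

```typescript
// Hypothetical query builder for the emotion-tag search (field names are
// illustrative, not the real schema). Builds a standard Elasticsearch
// bool-query body.

interface EsQuery {
  query: { bool: { filter: Array<Record<string, unknown>> } };
  size: number;
}

function buildEmotionQuery(emotions: string[], limit = 10): EsQuery {
  return {
    query: {
      bool: {
        // Filter context: exact tag matches, no relevance scoring needed.
        filter: [{ terms: { emotionTags: emotions } }],
      },
    },
    size: limit,
  };
}

const q = buildEmotionQuery(["anxious", "restless"], 5);
console.log(JSON.stringify(q.query.bool.filter[0]));
```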

DevOps

  • GitHub Actions: automated CI/CD pipeline that mainly checks code style consistency and verifies that the app builds.

Challenges we ran into

Learning New Technology

Since most of the tech stack was new to us, it took a lot of time to figure out how everything works: CI/CD on GitHub Actions, integrating Elasticsearch and its provisioned AI agent, and assigning IAM roles scoped to S3 access only. The learning curve cost us some time, but it was also fun to explore and implement technologies we had only heard of. Fortunately, the whole team comes from a computer science background, so picking up the stack wasn't as stressful as we expected.

Setting up CI/CD pipeline on GitHub Actions

Our main challenge was creating separate CI/CD pipelines for the frontend and backend, all within a monorepo. We took time to understand the minimum checks required for each side and configured which checks run based on the files changed in a pull request. In the end, we got the pipeline going, and it ensures all our code is properly formatted through linting and scanned for security issues with CodeQL.
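The path-based triggering described above can be sketched roughly as follows; the workflow name, directory layout, and scripts are assumptions for illustration, not our actual config.

```yaml
# Illustrative monorepo workflow (names and scripts are hypothetical).
# Only runs when files under frontend/ change; a sibling workflow does
# the same for backend/.
name: frontend-ci
on:
  pull_request:
    paths:
      - "frontend/**"
jobs:
  lint-and-build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci
        working-directory: frontend
      - run: npm run lint && npm run build
        working-directory: frontend
```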

Integrating Elastic Search

Since Elasticsearch was newly introduced to us during this hackathon, we decided to opt in and see what it's capable of. We initially assumed it would work much like MongoDB, and we were partially right: setting up the search engine felt familiar, but integrating an LLM through the same platform was the challenging part. From our initial research, we thought we had to configure a local AI agent in Kibana just to get the inference details, when in fact everything was already available; we simply forgot to check the console for those details. In the end, Kibana was entirely redundant for us, and we wasted some time there.
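For context, the "Paragraph of the Day" feature ultimately comes down to posting a prompt to an Elastic inference endpoint (e.g. `POST /_inference/completion/<endpoint-id>` with an `input` field). A rough sketch of building that request body follows; the prompt wording and endpoint naming are our assumptions.

```typescript
// Hypothetical request-body builder for Elastic's _inference completion API.
// The body would be POSTed to /_inference/completion/<endpoint-id>;
// the prompt wording here is illustrative, not the real Moodz prompt.

function buildAffirmationRequest(emotion: string, description: string) {
  return {
    input:
      `The user feels ${emotion}. They wrote: "${description}". ` +
      "Reply with a short, gentle paragraph of affirmation.",
  };
}

const body = buildAffirmationRequest("anxious", "Big exam tomorrow");
console.log(body.input.startsWith("The user feels anxious")); // true
```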

Accomplishments that we're proud of

Music Player

We wanted to focus on royalty-free audio rather than Spotify, along with the initial idea of extending the player with generative music (which we couldn't complete in time). So we took a leap of faith and built our own recommendation engine that automatically plays sounds from audio files uploaded to AWS S3. It was a great highlight moment seeing everything connect in the end and the player run on our dynamic data.
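To give a feel for the idea (not the actual implementation), a minimal version of "pick something to play from S3 objects by emotion" might look like this; the key layout `<emotion>/<file>.mp3` is an assumption we made for the sketch.

```typescript
// Minimal sketch of the recommendation step (hypothetical; assumes S3 keys
// are laid out as "<emotion>/<filename>.mp3"). Given the listed keys and
// the user's current emotion, pick a matching track at random. The rand
// parameter is injectable so the choice can be made deterministic in tests.

function pickTrack(
  keys: string[],
  emotion: string,
  rand: () => number = Math.random
): string | undefined {
  const matches = keys.filter((k) => k.startsWith(`${emotion}/`));
  if (matches.length === 0) return undefined;
  return matches[Math.floor(rand() * matches.length)];
}

const keys = ["calm/rain.mp3", "calm/waves.mp3", "upbeat/sunrise.mp3"];
console.log(pickTrack(keys, "calm", () => 0)); // "calm/rain.mp3"
```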

What we learned

One of the main things we learned is that solutions with social impact don't have to address large-scale problems; they can work on a personal scale too. While ideating, we bounced around a lot of ideas aimed at whole demographics, but when this idea came up, everyone instantly clicked with it because it involved them personally. That is also why the whole team was so passionate about bringing this solution to life: along the way, it felt like we were learning more about our own emotions too, especially when picking our songs. We hope that no matter who you are or where you're from, you can also learn more about yourself through this simple yet personal solution.

What's next for Malong Dream Team

We have a few ideas for this app that we couldn't complete due to time constraints:

  • Next of Kin Alert: for users who register a next of kin, the AI agent can alert that contact if the user's journal or emotion history shows anomalies or causes for concern.
  • Expand Choices: We wanted to discover more sounds that could help with people suffering psychologically.
  • Generative Sound/Music: With Generative AI, we can generate sound on the fly and use it as part of our sound/music bank.