Social media, and the conversations we have on it, often affect our mental health, for children and adults alike. Messages may contain abusive or profane words that make us feel bad, and even when they are not profane, chats of a particular sentiment can affect us negatively at different times. After brainstorming solutions to this problem, we came up with the idea for this project.

What it does

A few of the things you can do with SpoilNoMore:

  • Make an account and chat with your friends
  • Detect profanity in your own phrases
  • Check the emotion of your phrases
  • Get to know the emotion of an ongoing chat without opening it
  • Avoid being affected by abusive words, which are censored by us
  • Analyse the emotion of a chat you are in through our customised emotion-colour backgrounds
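
The emotion-colour backgrounds could be driven by a simple mapping from the detected emotion label to a SwiftUI colour. This is an illustrative sketch only: the label names and colour choices below are assumptions, not the exact ones used in the app.

```swift
import SwiftUI

// Hypothetical mapping from a detected emotion label to a chat
// background colour. Labels and colours here are illustrative.
func backgroundColor(for emotion: String) -> Color {
    switch emotion {
    case "joy":     return Color.yellow.opacity(0.3)
    case "sadness": return Color.blue.opacity(0.3)
    case "anger":   return Color.red.opacity(0.3)
    case "fear":    return Color.purple.opacity(0.3)
    default:        return Color(.systemBackground)
    }
}
```

A chat view can then apply `backgroundColor(for:)` to its background whenever the classifier returns a new label for the conversation.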

How we built it

  • We created a .mlmodel using Create ML and the Emotion Detection for NLP dataset.
  • We then created a UI mock-up in Figma.
  • The design was implemented in Xcode.
  • For emotion detection, we used our Create ML model.
  • For the profanity filter, we used PromptAPI's Bad Words censor.
  • For authentication and the database, we used Firebase.
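
Training a text classifier with Create ML follows a standard pattern (it runs on macOS only). A minimal sketch, assuming the dataset is a CSV with "text" and "emotion" columns; the file paths and column names are assumptions:

```swift
import CreateML
import Foundation

// Load the labelled emotion dataset (assumed CSV layout).
let data = try MLDataTable(contentsOf: URL(fileURLWithPath: "emotions.csv"))
let (training, testing) = data.randomSplit(by: 0.8, seed: 5)

// Train a text classifier on the "text" column with "emotion" labels.
let classifier = try MLTextClassifier(trainingData: training,
                                      textColumn: "text",
                                      labelColumn: "emotion")

// Evaluate on the held-out split, then export the .mlmodel
// so it can be dropped into the Xcode project.
let evaluation = classifier.evaluation(on: testing,
                                       textColumn: "text",
                                       labelColumn: "emotion")
print("Accuracy:", (1.0 - evaluation.classificationError) * 100)

try classifier.write(to: URL(fileURLWithPath: "EmotionClassifier.mlmodel"))
```

In the app, the exported model can be queried with `prediction(from:)` (or via the class Xcode generates for the .mlmodel) to label each phrase or chat.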

Challenges we ran into

  • Finding a good dataset for accurate detection of emotions from text was hard.
  • We implemented various features in a very short amount of time.

Accomplishments that we're proud of

  • Our app detects emotions with an accuracy of 99.52%.
  • It successfully censors profane words.
  • It depicts emotion-based colour backgrounds while maintaining an aesthetic sense.
