I can be insensitive in my communications, especially online, so I built this app to check my speech before I post it and regret it. Because, you know, no matter how fast you delete or edit a post, someone has probably already screenshotted it to use against you when you least expect it. Am I paranoid or am I funny? No, don't answer and disillusion me, please.
The app lets you choose one of 15 categories of unsafe speech to check your text against, or you can let it check against all of them.
The 15 categories:
- age
- disability
- drugs
- education
- gender
- historical
- income
- inhumane
- mental health
- peaceful
- profanity
- race
- religion
- respectful
- self-harm
If it decides your speech isn't safe for one or more of these categories, it shows an alert for each one plus a summary. If, on the other hand, you've seen the light and are a saint, it tells you your speech is safe, or at least safe for the single category you chose if you only want to test one at a time.
Mind you, my code isn't judging your speech. I leave that to Llama 3.1, which judges based on its training data. So blame Meta and how it trained its LLM; don't blame me if you disagree with an alert from the app. As for why I chose Llama, I could say it's because it's open source and I want to support open-source software. But in reality, it's because I don't want to pay for OpenAI and the other closed-source LLMs, which probably do a better job. Am I biased or am I joking? No, don't answer and disillusion me, please.
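For the curious, here is a minimal sketch of how a check like this might be wired up. The prompt wording, the response format, and the function names are my assumptions for illustration, not the app's actual code; the real app presumably sends a prompt like this to Llama 3.1 through LangChain, with tenacity retrying transient failures.

```python
# Hypothetical sketch: build a safety-check prompt, parse the model's reply,
# and summarize the result. The actual prompt and parsing in the app may differ.

CATEGORIES = [
    "age", "disability", "drugs", "education", "gender", "historical",
    "income", "inhumane", "mental health", "peaceful", "profanity",
    "race", "religion", "respectful", "self-harm",
]

def build_prompt(text: str, categories: list[str]) -> str:
    """Ask the model for one SAFE/UNSAFE verdict per category."""
    cats = ", ".join(categories)
    return (
        "You are a content-safety checker. For each category below, answer "
        "with '<category>: SAFE' or '<category>: UNSAFE' on its own line.\n"
        f"Categories: {cats}\n"
        f"Text: {text}"
    )

def parse_verdicts(response: str) -> dict[str, bool]:
    """Map each 'category: SAFE|UNSAFE' reply line to True (safe) / False (unsafe)."""
    verdicts = {}
    for line in response.splitlines():
        if ":" in line:
            category, _, verdict = line.partition(":")
            verdicts[category.strip().lower()] = verdict.strip().upper() == "SAFE"
    return verdicts

def summarize(verdicts: dict[str, bool]) -> str:
    """One-line summary: either all clear, or the list of flagged categories."""
    unsafe = [c for c, safe in verdicts.items() if not safe]
    if not unsafe:
        return "Your speech is safe."
    return "Unsafe for: " + ", ".join(sorted(unsafe))

# The model call itself (not shown, assumed) would sit between these two
# helpers and could be wrapped with tenacity, e.g.:
#   @retry(stop=stop_after_attempt(3))
#   def check(text): return llm.invoke(build_prompt(text, CATEGORIES))
```

The two pure helpers keep the LLM call itself thin, so retries only wrap the network-bound part.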
Built With
- gradio
- langchain
- llama-3.1
- prompt-engineering
- python
- tenacity
