Inspiration
- 2.4 million consumers reported losing money to scammers.
- Americans lost $8.8 billion to fraud in 2022, a 30% increase over the previous year.
- Experts predict $44 billion in worldwide fraud losses by 2025.
- 72% of business leaders cite fraud as a growing concern over the past 12 months.
Cyber security experts warn 2023 is shaping up to be a dangerous year, thanks to huge advances in artificial intelligence technology.
Today’s human-run scams are limited by the labor-intensive process of persuading people. The introduction of generative AI, such as large language models and voice clones, is about to change the scam pipeline. AI-powered scam chatbots can generate more diverse and persuasive content at a lower cost, operating 24/7. AI’s ability to turbocharge fraud puts more people at risk and is a serious concern.
What it does
Call Guard uses a classification model trained on Co:here to detect risky spam and scam messages and sends a scam score to the user. We provide a spam score showing how likely the message is spam, along with an evaluation of how likely the caller is a scammer.
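As a sketch of how a 0–1 spam score could be turned into the user-facing evaluation, assuming illustrative thresholds (the actual cut-offs Call Guard uses are not stated in this write-up):

```python
def evaluate_message(spam_score: float) -> str:
    """Map a 0-1 spam score from the classifier to a user-facing evaluation.

    The cut-off values below are illustrative assumptions, not the
    thresholds the deployed model necessarily uses.
    """
    if not 0.0 <= spam_score <= 1.0:
        raise ValueError("spam score must be between 0 and 1")
    if spam_score >= 0.8:
        return "Scam likely"
    if spam_score >= 0.5:
        return "Possible spam"
    return "Probably safe"
```

The app would display both the raw score and this evaluation string alongside the incoming message.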
What is the difference between spam and scams?
Spam refers to unsolicited messages sent in bulk to a large number of people, often for advertising purposes. These messages can take the form of emails, text messages, social media posts, or any other form of electronic communication. While spam may be annoying, it is usually not harmful, and the sender is not attempting to defraud the recipient. Scams, on the other hand, are designed to deceive and defraud the recipient. Scammers use various techniques, including phishing emails, fake websites, and phone calls, to trick people into providing personal information, such as passwords or credit card numbers, or to transfer money to them. Unlike spam, scams are intended to harm the recipient financially or otherwise.
How we built it
We built it with a React Native frontend and a Django backend. When a new message arrives from a caller, we use a model trained on Cohere with datasets scraped from Hugging Face to predict a spam score between 0 and 1, and we generate a warning using the ChatGPT API, with the prompt set up as a scam detector that outputs whether the caller is likely a scammer.
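The flow above can be sketched as a small pipeline. The keyword heuristic below is only a stand-in for the trained Cohere classifier (and the warning text stands in for the ChatGPT output); both the function names and thresholds are assumptions made for illustration:

```python
from dataclasses import dataclass

@dataclass
class ScreeningResult:
    spam_score: float  # 0-1 probability that the message is spam/scam
    warning: str       # text shown to the user

def classify_message(text: str) -> float:
    """Placeholder for the trained Cohere classifier.

    In the real app this would call the hosted model and return its
    0-1 spam probability; the keyword heuristic here only stands in
    so the flow can be demonstrated end to end.
    """
    suspicious = ("prize", "urgent", "wire transfer", "gift card")
    hits = sum(word in text.lower() for word in suspicious)
    return min(1.0, hits / 2)

def screen_message(text: str, threshold: float = 0.5) -> ScreeningResult:
    """Score an incoming message, then build a warning for the user."""
    score = classify_message(text)
    if score >= threshold:
        warning = "Scam likely: do not share personal or payment details."
    else:
        warning = "No strong scam signals detected."
    return ScreeningResult(spam_score=score, warning=warning)
```

A Django view would wrap `screen_message` and return the result as JSON to the React Native client.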
Challenges we ran into
The model we trained initially was slow and had a low true-positive rate because the dataset was imbalanced. We rebalanced the dataset to account for more scam messages and traded off response time against model accuracy.
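One simple way to rebalance such a dataset is random oversampling of the minority (scam) class, sketched below. The field names and the `target_label` value are assumptions for illustration; the write-up does not specify which resampling technique was used:

```python
import random

def oversample_minority(messages, labels, target_label="scam", seed=0):
    """Duplicate minority-class examples until the classes are balanced.

    A minimal random-oversampling sketch; real pipelines often use a
    library such as imbalanced-learn instead.
    """
    rng = random.Random(seed)
    minority = [(m, l) for m, l in zip(messages, labels) if l == target_label]
    majority = [(m, l) for m, l in zip(messages, labels) if l != target_label]
    if not minority:
        return list(messages), list(labels)
    balanced = majority + minority
    # Keep duplicating random scam examples until counts match.
    while sum(1 for _, l in balanced if l == target_label) < len(majority):
        balanced.append(rng.choice(minority))
    out_msgs, out_labels = zip(*balanced)
    return list(out_msgs), list(out_labels)
```

After rebalancing, the classifier sees scam examples as often as legitimate ones, which raises the true-positive rate at the cost of some extra training data volume.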
Built With
- chatgpt
- co:here
- django
- react-native