Inspiration

Our idea wasn't born from a desire to chase a grandiose technical goal, but from genuine empathy for the strained reality of social media users who feel powerless against the media they consume. We recognize the devastating impact misinformation has on public health institutions, corporations, and teens. In working to dismantle the core problem of digital scams, rumors, and fake information that thrive on social media, we found that the amplification and echo-chamber dynamics of AI-based recommendation algorithms are the systems we need to change. This inspired us to design an AI assistant that exposes manipulation, clarifies framing, and guides users who are vulnerable to false information through transparent, balanced feed recommendations.

What it does

Our project builds an AI assistant that analyzes media content to detect linguistic bias, emotional manipulation, and framing patterns. It indicates where users may be influenced by the framing, then calculates a neutrality score (0-100) using both universal linguistic cues and topic-specific criteria. Beyond analysis, the tool actively helps users escape algorithm-driven echo chambers by recommending more neutral content based on that score. Instead of telling users what to believe, our system reveals how information is being presented, empowering them to navigate media with balance and critical awareness.
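As a rough illustration of the recommendation step, the sketch below reranks a feed using precomputed neutrality scores. The `FeedItem` structure, the example threshold, and the sample posts are assumptions for demonstration only, not our production logic.

```python
from dataclasses import dataclass

@dataclass
class FeedItem:
    item_id: str
    text: str
    neutrality: float  # 0 (heavily biased) .. 100 (neutral), produced by the analysis layer

def rerank_feed(items: list[FeedItem], min_neutrality: float = 40.0) -> list[FeedItem]:
    """Surface more neutral content first; drop items below a floor score."""
    eligible = [item for item in items if item.neutrality >= min_neutrality]
    return sorted(eligible, key=lambda item: item.neutrality, reverse=True)

# Example: three posts whose scores were computed upstream
feed = [
    FeedItem("a", "SHOCKING: they don't want you to know this!", 12.0),
    FeedItem("b", "The committee released its quarterly report today.", 88.0),
    FeedItem("c", "Experts disagree on the new policy's likely impact.", 71.0),
]
for item in rerank_feed(feed):
    print(item.item_id, item.neutrality)
```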

How we built it

We built our system by combining natural language processing, topic-aware analysis, and a two-layer scoring framework. The language analysis begins with a topic classifier that identifies whether the content is political, medical, or social. We then generate sentence embeddings using Simple Contrastive Learning of Sentence Embeddings (SimCSE), which capture sentence structure, tone, framing, and contextual nuance. To produce a neutrality score, the first layer detects universal signs of manipulation such as emotional intensity, biased framing, and unsupported claims, while the second layer applies topic-specific criteria depending on whether the content is political, medical, or social. The recommendation engine then uses the neutrality score to surface higher-scoring content to users, helping them escape echo-chamber patterns and find more transparent content. We illustrated the full user experience with a Figma prototype that demonstrates how the assistant would function inside a social media environment.
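Below is a minimal sketch of that pipeline, assuming the public supervised SimCSE checkpoint from Hugging Face; the cue lexicon, topic penalties, and layer weights are placeholders for illustration, and our real topic classifier and criteria are not shown.

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Assumed checkpoint: the public supervised SimCSE model
MODEL = "princeton-nlp/sup-simcse-bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
encoder = AutoModel.from_pretrained(MODEL)

def embed(sentences: list[str]) -> torch.Tensor:
    """SimCSE sentence embeddings via [CLS] pooling."""
    batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        return encoder(**batch).last_hidden_state[:, 0]

def universal_layer(sentences: list[str]) -> float:
    """Layer 1: universal manipulation cues (emotional intensity, loaded framing)."""
    loaded = {"shocking", "disaster", "outrageous", "miracle"}
    hits = sum(any(w in s.lower() for w in loaded) for s in sentences)
    return hits / max(len(sentences), 1)  # 0 = clean, 1 = every sentence flagged

def topic_layer(topic: str, embeddings: torch.Tensor) -> float:
    """Layer 2: topic-specific criteria (placeholder penalties per topic)."""
    topic_penalty = {"political": 0.3, "medical": 0.2, "social": 0.1}
    return topic_penalty.get(topic, 0.15)

def neutrality_score(sentences: list[str], topic: str) -> float:
    """Combine both layers into a 0-100 neutrality score (higher = more neutral)."""
    bias = 0.6 * universal_layer(sentences) + 0.4 * topic_layer(topic, embed(sentences))
    return round(100 * (1 - min(bias, 1.0)), 1)

print(neutrality_score(["This SHOCKING policy is an outrageous disaster.",
                        "Officials have not yet commented."], topic="political"))
```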

Challenges we ran into

Perhaps the biggest challenge we faced was navigating the ethical questions around building the system while keeping the solution realistically achievable. One of the main issues was that an AI's definition of “balance” or “bias” can easily reflect the values of its programmers or the dominant cultural norms in its training data, resulting in an opaque or unintentionally biased scoring system. So we looked for a machine learning technique that minimized such risks. We ultimately decided to use Reinforcement Learning from Human Feedback (RLHF), as it allows the model to learn from diverse human evaluations rather than relying solely on a potentially biased training dataset. This helped us create a more ethical and achievable solution.
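To make the RLHF idea concrete, here is a hedged sketch of the reward-modeling step we have in mind: a small model learns from pairwise human judgments of which of two passages reads as more neutral, using a standard preference loss. The architecture, dimensions, and random embeddings standing in for annotated pairs are illustrative assumptions, not our trained system.

```python
import torch
import torch.nn as nn

class NeutralityRewardModel(nn.Module):
    """Maps a sentence embedding (e.g., from SimCSE) to a scalar neutrality reward."""
    def __init__(self, dim: int = 768):
        super().__init__()
        self.head = nn.Sequential(nn.Linear(dim, 256), nn.ReLU(), nn.Linear(256, 1))

    def forward(self, emb: torch.Tensor) -> torch.Tensor:
        return self.head(emb).squeeze(-1)

def preference_loss(r_preferred: torch.Tensor, r_rejected: torch.Tensor) -> torch.Tensor:
    """Pairwise loss: annotators judged the 'preferred' passage as more neutral."""
    return -torch.nn.functional.logsigmoid(r_preferred - r_rejected).mean()

# Toy training step on random embeddings standing in for human-annotated pairs
model = NeutralityRewardModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
emb_preferred, emb_rejected = torch.randn(8, 768), torch.randn(8, 768)
loss = preference_loss(model(emb_preferred), model(emb_rejected))
loss.backward()
optimizer.step()
print(float(loss))
```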

Accomplishments that we're proud of

In the early stages, we considered allowing the AI to directly evaluate the risk level of the content users encountered. However, this approach quickly raised concerns about ethical responsibility. After extensive discussion, we redefined the role of the AI. Instead of making automatic recommendations, the system now analyzes each article or video, calculates its bias rate, and presents users with a curated list of alternative perspectives. The user then chooses which content to engage with, whether it reflects a completely different viewpoint or simply a partial counterbalance. This ensures that every decision remains grounded in human judgment, ultimately achieving a balance between technological innovation and ethical accountability. This accomplishment represents more than a technical milestone; it reflects our commitment to aligning innovation with responsibility. We are proud of this refinement of the problem because it shows that we didn't just build a powerful AI system; we built one that preserves human judgment, turning a potential ethical risk into a design strength that fosters trust and critical thinking.

What we learned

The process of preparing this project wasn't always smooth. Because we came from diverse backgrounds, there was a wide spectrum of perspectives, and reaching consensus required persistent questioning and rigorous debate, which often made the process challenging. Nevertheless, navigating our differences proved essential to our growth: we learned how to cooperate with people who hold different perspectives, which enabled us to refine our ideas and arrive at more thoughtful solutions. For instance, when deciding how to present the result in the prototype, we held two opposing opinions about whether to use positive and negative signs to indicate bias. Since signed labels could be perceived as strong or polarizing, after extensive discussion we agreed to adopt a more neutral scoring method that conveys fairness and balance more effectively. In the end, we learned how to reach consensus through constructive debate, and this experience strengthened our ability to transform disagreement into innovation.

What's next for Vera

The next phase of Vera is to adapt its bias-checking copilot AI to the financial domain. Stock markets are highly sensitive to rumors, emotionally charged narratives, and manipulative content that can distort investor behavior. Investors often rely on broad benchmarks like the S&P 500 or on asset management firms simply because they lack trustworthy information to determine what is accurate. With Vera expanding into finance, even individual investors can make decisions without depending solely on institutional filters. By detecting bias in financial news and social media, calculating an “emotional risk index,” and presenting balanced alternative perspectives, Vera empowers users to separate market signals from market noise. This evolution will help guide investors with clarity and fairness rather than hype or fear.
