Inspiration

FinalSay was inspired by a frustrating experience at DECA, a competitive business event where participants argue their case in front of judges. One of our debates ended in a loss with no reasoning from the judge, leaving the decision feeling unfair. That experience sparked the idea: what if there were a way to get truly objective, constructive feedback that helps debaters improve, rather than just telling them they won or lost?

That’s where the idea for FinalSay was born. We wanted to create an AI-powered debate coach that provides constant feedback, helping debaters refine their arguments in real time and strengthen their reasoning.


What It Does

FinalSay is more than just an AI judge. It acts as a real-time debate coach, giving you continuous feedback to challenge your thinking and improve your argument. Instead of just scoring a debate, our web app provides personalized feedback by asking thought-provoking questions that challenge your argument, expose biases, and highlight areas for improvement.

Our AI algorithm assesses arguments on both logical and emotional levels, centered on the "Triple Bottom Line": a success metric that evaluates an argument's social, economic, and environmental feasibility. This lets us identify strengths, weaknesses, and biases while giving debaters constructive feedback.
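The Triple Bottom Line evaluation can be pictured as three sub-scores rolled into one overall score. A minimal sketch, assuming (hypothetically) that the app has already extracted a 0–10 score per axis from the LLM's analysis:

```python
from dataclasses import dataclass

# Hypothetical sketch: in the real app these sub-scores come from the LLM's
# analysis of the transcript; only the aggregation logic is shown here.

@dataclass
class TripleBottomLine:
    social: float         # 0-10: social feasibility of the argument
    economic: float       # 0-10: economic feasibility
    environmental: float  # 0-10: environmental feasibility

    def overall(self) -> float:
        """Average the three axes into a single score."""
        return (self.social + self.economic + self.environmental) / 3

    def weakest_axis(self) -> str:
        """Name the axis most in need of constructive feedback."""
        axes = {
            "social": self.social,
            "economic": self.economic,
            "environmental": self.environmental,
        }
        return min(axes, key=axes.get)

score = TripleBottomLine(social=7.0, economic=4.0, environmental=6.5)
print(round(score.overall(), 2))  # 5.83
print(score.weakest_axis())       # economic
```

Surfacing the weakest axis is what lets the coach target its follow-up questions at the least-developed part of an argument.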


How We Built It

To create an interactive and intelligent debate coaching experience, we combined AI tools for a smooth and intuitive design:

  • Backend: Powered by Python and Flask for easy integration with the AI libraries we used.
  • Frontend: Designed in Figma and built with HTML and CSS for a clean, user-friendly experience.
  • AI Technologies:
    • OpenAI Whisper: Handles speech-to-text conversion with high accuracy.
    • LangChain & Cohere: Generate structured, intelligent feedback that challenges and strengthens arguments.
    • OpenAI GPT-4: Cross-checks the analysis to reduce hallucinations and keep the feedback accurate and unbiased.
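The stack above boils down to a two-stage pipeline: transcribe the speech, then generate coaching feedback from the transcript. A simplified, framework-free sketch of that control flow; both stages are stubbed stand-ins (the real app calls Whisper and the LangChain + Cohere chain), so the structure runs offline:

```python
# Sketch of the FinalSay pipeline: audio -> transcript -> structured feedback.
# Both stages below are hypothetical stubs replacing the real API calls.

def transcribe(audio_path: str) -> str:
    # Stand-in for OpenAI Whisper speech-to-text.
    return "We should ban plastic bags because they harm wildlife."

def generate_feedback(transcript: str) -> dict:
    # Stand-in for the LangChain + Cohere feedback chain. The real prompt asks
    # for Triple Bottom Line analysis plus thought-provoking follow-up questions.
    return {
        "transcript": transcript,
        "questions": [
            "What economic trade-offs does a ban create for small retailers?",
            "Which stakeholders might disagree, and why?",
        ],
    }

def coach(audio_path: str) -> dict:
    """Full round trip: speech in, coaching feedback out."""
    return generate_feedback(transcribe(audio_path))

feedback = coach("debate_round_1.wav")
print(len(feedback["questions"]))  # 2
```

In the web app, `coach` sits behind a Flask route so the frontend can upload a recording and render the questions it gets back.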

Challenges We Ran Into

  • Setting up the development environment took longer than expected, since our team is made up of beginner coders.
  • We had to pivot our idea multiple times before settling on the final concept.
  • Coming up with an idea that was both unique and feasible was a challenge.
  • Learning LangChain, Whisper, and Figma had a steep learning curve.
  • Keeping up with LangChain’s frequently changing documentation made integration tricky.
  • Feedback accuracy is limited by the reasoning abilities of current LLMs.
  • Translating our Figma UI/UX designs into code in the existing Git repository was tricky.
  • We struggled to build additional pages due to time constraints.

Accomplishments That We're Proud Of

  • AI-Powered Debate Coach: We successfully built an application that provides real-time feedback to help debaters improve their arguments.
  • Triple Bottom Line Evaluation: Our system evaluates arguments from three key perspectives—economic, social, and environmental—to ensure well-rounded feedback.
  • Seamless Integration: We integrated LangChain, Whisper AI, and Cohere, leveraging cutting-edge AI tools to enhance debate coaching.
  • User-Centric Design: We designed a clear and intuitive interface in Figma to ensure a smooth user experience.

What We Learned

  • Adaptability & Communication: The importance of being adaptable and open to changing direction when needed.
  • Attention to Detail: The importance of carefully managing dependencies, permissions, and configurations to ensure that all components can work together as expected.
  • Importance of Planning: We learned the importance of planning for large projects and of tackling problems one at a time.
  • Troubleshooting: Debugging and troubleshooting are key to shipping any software, and they require patience.
  • Fundamental Coding: Given our beginner skill set, we learned the fundamentals of speech-to-text processing and real-time feedback generation.
    • How to effectively use LangChain despite its rapidly evolving documentation.
    • The challenges of applying AI to subjective tasks, like evaluating argument quality.

What's Next for FinalSay

  • More Languages: Add support for more languages to make it accessible to a wider audience.
  • More Applicability: Our current design uses a single standard rubric, which makes it hard to apply to other settings. By accepting and integrating rubrics provided by the user, our algorithm could base its decisions on structured evaluation metrics tailored to each situation.
  • Multiple Speakers: Fully integrate speaker diarization to track and analyze multi-speaker debates accurately.
  • Winner Winner: Develop a structured evaluation system to determine which argument is stronger in a debate (Debate Judge, instead of Debate Coach).
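The user-supplied-rubric idea above could work by folding each rubric criterion into the evaluation prompt, so the same pipeline can judge DECA rounds, policy debate, or classroom presentations. A hypothetical sketch (the function and rubric names are illustrative, not part of the current app):

```python
# Hypothetical sketch of rubric-driven evaluation: the user's rubric criteria
# are folded into the prompt sent to the feedback chain.

def build_prompt(transcript: str, rubric: dict[str, str]) -> str:
    criteria = "\n".join(f"- {name}: {desc}" for name, desc in rubric.items())
    return (
        "Evaluate the argument below against each criterion, "
        "then ask one question per weak area.\n\n"
        f"Criteria:\n{criteria}\n\n"
        f"Argument:\n{transcript}"
    )

deca_rubric = {
    "Delivery": "Clarity, pacing, and confidence of the speaker.",
    "Evidence": "Use of relevant data to support claims.",
}
prompt = build_prompt("Plastic bans protect wildlife.", deca_rubric)
print("Evidence" in prompt)  # True
```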

FinalSay has the potential to redefine debate training, and we're excited to continue improving it! 🎙️

Built With

python • flask • html • css • figma • whisper • langchain • cohere • gpt-4
