Coalesce


Inspiration

We wanted to explore what happens when humans and AI collaborate instead of compete. In mainstream discussions, AI is often portrayed as a threat, something that will replace or outperform humans. But we wanted to see what could happen if both worked together, blending human empathy and moral intuition with AI’s analytical reasoning. Coalesce was born out of that curiosity to bridge those two worlds.


What it does

Coalesce is an interactive tool where a user and an AI each answer a moral or ethical question separately. The system then compares their responses and extracts traits such as empathy, confidence, logical reasoning, and humility using the Gemini API. After that, the human and AI collaboratively craft a joint answer, and Coalesce analyzes how their shared reasoning changes these traits, showing how understanding evolves when human and AI “think together.” Finally, it generates a summary of key insights about their collaboration.
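Under the hood, each answer is scored against the same trait rubric. Here is a minimal sketch of what that score shape and a defensive parser for the model's JSON reply could look like (the `TraitScores` field names and the 0–10 scale are illustrative assumptions, not the exact production schema):

```typescript
// Hypothetical trait rubric; the real schema may differ.
interface TraitScores {
  empathy: number;
  confidence: number;
  logic: number;
  humility: number;
}

// Parse the model's JSON reply, clamping each score into the 0-10 range
// so a malformed or out-of-range value cannot break the comparison view.
function parseTraitScores(raw: string): TraitScores {
  const data = JSON.parse(raw);
  const clamp = (v: unknown): number =>
    Math.min(10, Math.max(0, typeof v === "number" ? v : 0));
  return {
    empathy: clamp(data.empathy),
    confidence: clamp(data.confidence),
    logic: clamp(data.logic),
    humility: clamp(data.humility),
  };
}
```

Clamping on the way in keeps a single bad model reply from producing nonsense scores downstream.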


How we built it

We used the Gemini API to generate and analyze responses, leveraging its language understanding to extract personality and moral reasoning traits. The web interface, built with React and TypeScript, provides a smooth, conversational experience. Visualization components show before-and-after comparisons of empathy, confidence, and other traits, helping users see how collaboration reshapes moral perspective.
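The before-and-after comparison boils down to per-trait deltas between the solo answer and the joint answer. A small sketch of that step, assuming scores are plain numbers keyed by trait name (the trait names here are illustrative):

```typescript
type Scores = Record<string, number>;

// For each trait present in the solo answer, compute how the joint
// answer's score differs; a positive delta means the trait strengthened
// through collaboration.
function traitDeltas(solo: Scores, joint: Scores): Scores {
  const deltas: Scores = {};
  for (const trait of Object.keys(solo)) {
    // Round to two decimals so the chart labels stay readable.
    deltas[trait] = +(joint[trait] - solo[trait]).toFixed(2);
  }
  return deltas;
}
```

The visualization layer can then feed these deltas straight into the before/after charts.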


Challenges we ran into

  • Quantifying abstract traits like empathy or humility in a consistent, explainable way.
  • Designing a flow that felt like a collaboration instead of a Q&A session.
  • Managing prompt consistency and ensuring the AI produced structured JSON outputs for reliable scoring.
  • Balancing interpretability with creativity, keeping the experience analytical yet human.
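Much of the structured-output problem came down to tolerating replies where the model wraps its JSON in a markdown fence despite being told to return raw JSON. A hedged sketch of the kind of extractor we mean (not the exact code we shipped):

```typescript
// Strip an optional ```json ... ``` fence before parsing, so replies
// that ignore a "raw JSON only" instruction can still be scored.
function extractJson(reply: string): unknown {
  const fenced = reply.match(/```(?:json)?\s*([\s\S]*?)```/);
  const body = fenced ? fenced[1] : reply;
  return JSON.parse(body.trim());
}
```

Pairing a strict prompt ("respond with JSON only, no prose") with a forgiving parser like this made scoring far more reliable than either measure alone.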

Accomplishments that we're proud of

  • Creating a space where humans and AI genuinely co-create moral reasoning rather than compete.
  • Building a working system that transforms philosophical ideas into interactive insights.
  • Demonstrating that human–AI collaboration can lead to measurable improvement in qualities like empathy and understanding.
  • Delivering a clean, working prototype under tight hackathon time limits.

What we learned

  • AI can serve as a mirror for human reasoning, not just a tool for automation.
  • Subtle prompt changes drastically affect perceived “personality” traits.
  • Collaboration between human intuition and AI analysis often produces more balanced, thoughtful moral outcomes.
  • Building interfaces for empathy and warmth is as important as building for speed or accuracy.

What's next for Coalesce

We plan to:

  • Add a “Perspective Switch” mode, where users and the AI swap roles to experience reasoning from each other’s viewpoint.
  • Improve visualization with real-time “Moral Mirror” graphs.
