The demo video is in my GitHub repo: https://github.com/JaydenLee0503/SendCheck. Live project: https://send-check-liart.vercel.app/

SendCheck

Inspiration

SendCheck was inspired by a very common problem: people often know what they want to say, but they are not sure how it will come across.

This happens everywhere. Students hesitate before emailing a teacher, messaging a group member, or writing to a recruiter. Tech workers second-guess Slack updates, PR comments, follow-ups, and status messages. A small wording mistake can make a message sound too blunt, too passive, too apologetic, or simply unclear.

In my own utopia, everyone is kind, thoughtful, and respectful of others. I hoped that building SendCheck would reduce unintended rudeness.

We noticed that most writing tools focus on grammar, spelling, or generic rewriting. Very few tools focus on the more human question:

“How will this message likely land?”

That became the core idea behind SendCheck: a tool that helps users understand how their message may be perceived before they send it, then gives them practical rewrites that preserve their intent while improving tone, clarity, and confidence.


What we learned

One of the biggest things we learned is that communication is not just about correctness. A message can be grammatically perfect and still create confusion or tension.

We also learned that people do not always want their message rewritten from scratch. In many cases, they want to:

  • keep their original meaning,
  • sound more confident or professional,
  • reduce the risk of being misunderstood,
  • and send the message faster.

Another major lesson was that AI feedback needs to be framed carefully. We did not want SendCheck to act like it could read minds. Instead, we designed it to give likely interpretation signals, not absolute judgments.

That difference mattered a lot. It made the product more honest, more useful, and more trustworthy.


How we built the project

We built SendCheck as a lightweight AI-powered web app focused on one fast workflow: paste a draft, choose the context, and get useful feedback immediately.

Frontend

We designed the interface as a clean two-panel experience:

  • an input area for the draft and settings,
  • and a results area for analysis and rewrites.

The user can choose:

  • the audience, such as teacher, manager, teammate, recruiter, or client,
  • the communication channel, such as email, Slack, Discord, text, or PR comment,
  • and the improvement goal, such as clearer, warmer, more confident, or more professional.

We also added recent-history support so users could revisit past checks without needing a full database.
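The recent-history behavior can be kept entirely client-side. Below is a minimal sketch of the idea, assuming a capped, newest-first list that a browser app would persist to `localStorage`; the `HistoryEntry` shape and the cap of 10 entries are illustrative, not SendCheck's actual values.

```typescript
// Hypothetical sketch: a capped recent-history list kept client-side
// (e.g. serialized to localStorage) instead of a full database.
interface HistoryEntry {
  draft: string;
  audience: string;
  checkedAt: number; // epoch milliseconds
}

const MAX_ENTRIES = 10; // illustrative cap

// Prepend the newest check and trim to the cap, newest first.
function pushHistory(history: HistoryEntry[], entry: HistoryEntry): HistoryEntry[] {
  return [entry, ...history].slice(0, MAX_ENTRIES);
}
```

Keeping the list immutable (returning a new array) fits naturally with React state updates in a Next.js frontend.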

AI pipeline

For the analysis layer, we used Featherless AI through its OpenAI-compatible API. The model receives:

  • the original draft,
  • the selected audience,
  • the selected channel,
  • and the user’s desired communication style.
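Assembling those four inputs into a chat request might look like the sketch below. The prompt wording and field names are my own illustration of the pattern, not SendCheck's exact prompt.

```typescript
// Hypothetical sketch of building a chat request for an
// OpenAI-compatible endpoint such as Featherless AI's.
interface CheckRequest {
  draft: string;
  audience: string; // e.g. "manager"
  channel: string;  // e.g. "email"
  goal: string;     // e.g. "more confident"
}

// Returns the messages array an OpenAI-compatible client would send.
function buildMessages(req: CheckRequest) {
  return [
    {
      role: "system",
      content:
        "You analyze how a message may be perceived. " +
        "Respond only with JSON matching the provided schema.",
    },
    {
      role: "user",
      content:
        `Audience: ${req.audience}\nChannel: ${req.channel}\n` +
        `Goal: ${req.goal}\n\nDraft:\n${req.draft}`,
    },
  ];
}
```

Putting the context fields on their own labeled lines keeps the draft clearly separated from the settings.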

From there, the model returns structured output containing:

  • a likely perception summary,
  • overall tone,
  • communication risk flags,
  • three rewrites in different styles,
  • an optional subject line for email,
  • and a suggested follow-up message.

To make the output reliable, we designed the response around a fixed JSON schema rather than loose text. That made the frontend easier to build and made the results more consistent.
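A fixed schema like that can be modeled directly in TypeScript. The field names below are assumptions based on the outputs listed above, not SendCheck's actual schema.

```typescript
// Illustrative TypeScript model of the fixed result schema.
interface CheckResult {
  perceptionSummary: string;
  overallTone: string;
  riskFlags: string[];
  rewrites: { style: string; text: string }[]; // three styles
  subjectLine?: string; // only for email drafts
  followUp?: string;
}

// Because every response fills the same fields, the UI can
// render each card unconditionally.
const example: CheckResult = {
  perceptionSummary: "Reads as polite but slightly vague about the deadline.",
  overallTone: "friendly",
  riskFlags: ["vague ask"],
  rewrites: [
    { style: "clearer", text: "…" },
    { style: "warmer", text: "…" },
    { style: "more confident", text: "…" },
  ],
};
```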

Analysis logic

The system is built around the idea that communication risk is multi-factor. At a simple level, we treated the overall message quality as a combination of several smaller signals.

For example, clarity may matter more in a message to a professor, while tone may matter more in a message to a teammate after a conflict.
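That weighting idea can be sketched as a small scoring function. The signal names and weight values here are invented for illustration; each signal is a 0-to-1 quality score, and the audience decides how much each one contributes to overall risk.

```typescript
// Illustrative multi-factor scoring: audience-specific weights
// (invented here) decide how much each signal matters.
type Signals = { clarity: number; tone: number; confidence: number };

const WEIGHTS: Record<string, Signals> = {
  // A professor email weights clarity most; a teammate chat weights tone.
  teacher:  { clarity: 0.5, tone: 0.3, confidence: 0.2 },
  teammate: { clarity: 0.2, tone: 0.5, confidence: 0.3 },
};

// Higher signal values mean better writing, so risk is the
// weighted shortfall from a perfect score of 1.
function overallRisk(audience: string, s: Signals): number {
  const w = WEIGHTS[audience] ?? { clarity: 1 / 3, tone: 1 / 3, confidence: 1 / 3 };
  return (
    w.clarity * (1 - s.clarity) +
    w.tone * (1 - s.tone) +
    w.confidence * (1 - s.confidence)
  );
}
```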

Product design

For the UI direction, we used Mobbin as inspiration for polished SaaS-style layouts, especially around:

  • spacing,
  • hierarchy,
  • card design,
  • empty states,
  • and split-screen workflows.

We did not want the product to feel like a plain chatbot. We wanted it to feel like a focused decision tool.


Challenges we faced

One of the hardest challenges was avoiding robotic rewrites.

A lot of AI-generated writing sounds polished but unnatural. In a tool like SendCheck, that would make the product less useful, because users still want the final message to sound like them. So we had to tune the prompting carefully to preserve the user’s intent and voice instead of replacing it with generic corporate language.

Another challenge was balancing honesty with confidence. We wanted SendCheck to say things like:

  • “this may sound too blunt,”
  • “this could be read as vague,”
  • or “this likely weakens your ask,”

without pretending to know exactly what another person will think.

We also faced a product-design challenge: too much feedback makes the tool overwhelming, but too little feedback makes it feel shallow. We solved that by keeping the result focused on a few high-value outputs:

  • one perception summary,
  • a short list of risk flags,
  • and three targeted rewrites.

Finally, structured output itself was a challenge. LLMs are powerful, but they do not always return clean, predictable data. We had to design the prompt and validation flow carefully so the app could depend on consistent fields without breaking the interface.
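A defensive parse step is one common way to handle that. The sketch below treats model output as untrusted text and rejects anything missing the fields the UI depends on; the field names are illustrative, matching the assumed schema rather than SendCheck's exact one.

```typescript
// Hypothetical sketch of validating LLM output before it reaches the UI.
interface SafeResult {
  perceptionSummary: string;
  riskFlags: string[];
  rewrites: string[];
}

// Returns null on any failure so the app can retry or show an error
// instead of rendering broken data.
function parseModelOutput(raw: string): SafeResult | null {
  try {
    const data = JSON.parse(raw);
    if (
      typeof data.perceptionSummary !== "string" ||
      !Array.isArray(data.riskFlags) ||
      !Array.isArray(data.rewrites)
    ) {
      return null; // schema mismatch
    }
    return data as SafeResult;
  } catch {
    return null; // not valid JSON at all
  }
}
```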


What we’re proud of

We are proud that SendCheck solves a small but real problem in a very usable way.

Instead of building another generic AI assistant, we built a focused communication tool that helps users make better decisions in moments that actually matter. We are also proud that:

  • the app is fast and intuitive,
  • the feedback is practical rather than preachy,
  • the rewrites are differentiated by style,
  • and the product feels relevant to both students and tech workers.

Most of all, we are proud that SendCheck is built around a real human tension: wanting to communicate well, but not always knowing how a message will land.


What’s next

The next step for SendCheck is to make the analysis even more context-aware.

We want to expand support for:

  • more message types,
  • stronger follow-up suggestions,
  • better personalization by audience and scenario,
  • and more nuanced interpretation of workplace and academic communication.

We also want to explore features like:

  • comparing two draft versions side by side,
  • scoring urgency and politeness balance,
  • and helping users adjust tone based on their specific goal.

Our long-term vision is simple: make thoughtful communication easier, faster, and less stressful.

Credits: OpenAI Codex for helping build the frontend, Next.js for the app structure, Tailwind CSS for styling, TypeScript for the code, and Featherless AI for the inference API.
