📖 About the Project

What Inspired Us

When we began tackling the IEEE Hackathon challenge, our guiding question was simple:

"What kind of assistant would employees actually use in a real-world office setting?"

Our answer: something easy to use, intuitive, and accessible from any device with no setup or training. That ruled out complex dashboards and heavyweight installations. A simple, smart chatbot felt like the most natural way to deliver actionable, personalized support to professionals navigating change in the workplace.


How We Built It

Our first instinct was to build a Telegram bot, thinking it would provide a lightweight, familiar interface. We got a basic version working, but quickly ran into issues with scalability, user authentication, and integration with enterprise systems. It just wasn't the right fit for the use case.

We pivoted to a web-based chatbot interface using plain HTML, CSS, and JavaScript for simplicity. Paired with a FastAPI backend, this allowed us to build something modular, portable, and fast.

Backend Architecture

  • Built using FastAPI for routing, feedback endpoints, and integration logic
  • Chat inputs are routed to a Groq-powered inference engine using the Llama 3 model
  • Uses a hybrid knowledge base with:
    • Change management frameworks (ADKAR, Lewin, Kotter, etc.)
    • FAQs, benchmarks, case studies, emotional strategies
    • Live market and tech trends pulled from NewsAPI
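As a rough illustration of the hybrid knowledge-base step, the sketch below ranks stored framework snippets against an incoming chat message. The snippet texts and the keyword-overlap scoring are hypothetical stand-ins, not Navi's actual retrieval logic:

```python
import re

# Illustrative knowledge-base entries; the real base also holds FAQs,
# benchmarks, case studies, and live NewsAPI trends.
KNOWLEDGE_BASE = [
    {"topic": "ADKAR", "text": "Awareness, Desire, Knowledge, Ability, Reinforcement."},
    {"topic": "Lewin", "text": "Unfreeze the current state, change, then refreeze."},
    {"topic": "Kotter", "text": "Create urgency, build a coalition, communicate the vision."},
]

def _tokens(s: str) -> set[str]:
    """Lowercase word tokens, punctuation stripped."""
    return set(re.findall(r"[a-z]+", s.lower()))

def retrieve(query: str, top_k: int = 2) -> list[dict]:
    """Rank entries by naive keyword overlap with the query."""
    words = _tokens(query)
    scored = [
        (len(words & _tokens(e["topic"] + " " + e["text"])), e)
        for e in KNOWLEDGE_BASE
    ]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [e for score, e in scored[:top_k] if score > 0]

hits = retrieve("Summarize ADKAR awareness and reinforcement steps")
print([h["topic"] for h in hits])  # the ADKAR entry ranks first
```

In practice the top-ranked snippets are folded into the prompt sent to the inference engine, so the model answers with framework-grounded context rather than from memory alone.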

Frontend

  • A clean, minimalist chat UI (index.html) designed for maximum accessibility
  • Styled using vanilla CSS with soft UI elements to make conversations feel human
  • All responses from the backend are streamed and displayed dynamically
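On the backend side, streaming boils down to yielding the reply in pieces; FastAPI's StreamingResponse wraps exactly this kind of iterator. The chunk size and reply text below are arbitrary, chosen only to show the pattern:

```python
from typing import Iterator

def stream_reply(reply: str, chunk_size: int = 16) -> Iterator[str]:
    """Yield the reply in small chunks, as a StreamingResponse body would."""
    for i in range(0, len(reply), chunk_size):
        yield reply[i:i + chunk_size]

# The frontend appends each chunk to the chat window as it arrives;
# concatenating the chunks reproduces the full message.
message = "Change sticks when people, not just plans, are ready."
chunks = list(stream_reply(message))
assert "".join(chunks) == message
```

The frontend's job is then just to read the response body incrementally and append each chunk to the current chat bubble, which is what makes replies feel immediate.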

Why Groq?

Choosing the right AI API was one of our biggest decisions.

We tested a few options, including OpenAI and locally hosted open-source models. While functional, each brought high latency, cost constraints, or scaling problems.

We ultimately selected Groq API because of:

  • Ultra-low latency: Lightning-fast responses, perfect for conversational AI
  • Built-in Llama 3 support: Strong reasoning, summarization, and framework adaptation
  • Developer-friendly integration: Easily connected with FastAPI
  • High scalability: Great performance even under load
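Because Groq exposes an OpenAI-compatible chat-completions endpoint, integration from FastAPI is essentially one HTTP call. The sketch below only assembles the request body; the model name, system prompt, and temperature are illustrative placeholders, and actually sending it requires a Groq API key:

```python
import json

# OpenAI-compatible Groq endpoint (authentication via Bearer API key omitted here).
GROQ_URL = "https://api.groq.com/openai/v1/chat/completions"

def build_chat_payload(user_message: str, context: str) -> dict:
    """Assemble a chat-completion request body.

    The model id and system prompt are illustrative placeholders,
    not Navi's exact configuration.
    """
    return {
        "model": "llama3-70b-8192",
        "messages": [
            {
                "role": "system",
                "content": (
                    "You are Navi, a change-management assistant.\n"
                    f"Context:\n{context}"
                ),
            },
            {"role": "user", "content": user_message},
        ],
        "temperature": 0.7,
    }

payload = build_chat_payload("How do I announce a reorg?", "ADKAR: build Awareness first.")
print(json.dumps(payload)[:60])
```

Keeping prompt assembly in one function like this also made prompt-engineering iterations cheap: only the system content changes between experiments.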

What We Learned

  • Design for the user first – Simplicity wins when you're targeting working professionals.
  • Prompt engineering is a skill – Tuning our prompts to return emotionally intelligent, context-aware answers was more iterative than expected.
  • Change management is nuanced – It’s not just about strategy—it’s about people. Navi needed to sound human, not robotic.
  • Real-time inputs improve relevance – Pulling live news into our reasoning made responses dynamic and situationally aware.
  • Feedback loops matter – Logging structured feedback allows future tuning and real-time learning.
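A structured feedback log can be as simple as append-only JSON lines. The field names and rating scale below are hypothetical, not Navi's actual schema:

```python
import json
import tempfile
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
from pathlib import Path

@dataclass
class Feedback:
    """One structured feedback record (field names are illustrative)."""
    session_id: str
    message: str
    rating: int  # e.g. 1 (unhelpful) to 5 (very helpful)

def log_feedback(entry: Feedback, path: Path) -> None:
    """Append the record, plus a UTC timestamp, as one JSON line."""
    record = asdict(entry) | {"ts": datetime.now(timezone.utc).isoformat()}
    with path.open("a", encoding="utf-8") as fh:
        fh.write(json.dumps(record) + "\n")

log_path = Path(tempfile.gettempdir()) / "navi_feedback.jsonl"
log_feedback(Feedback("abc123", "Answer felt generic", 2), log_path)
```

Because each line is self-contained JSON, the log can later be replayed for prompt tuning or aggregated into ratings dashboards without a database.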

Final Thoughts

Navi isn't just another AI chatbot—it's a specialized, domain-aware assistant built to help people navigate one of the most difficult parts of any organization: change.

We learned a lot about balancing speed, empathy, accuracy, and relevance and we’re excited by the potential to continue developing Navi into a tool that can truly support real-world transformation at scale.
