🚀 Inspiration

Every open-source project struggles to scale its community efforts. DevRel professionals are drowning in GitHub issues, trying to separate noise from insight manually. That’s where the idea struck: “What if issues could think for themselves?”

We imagined an AI DevRel teammate — one that reads GitHub issues, understands them like a PM, strategizes like a community manager, and works like a SaaS tool — all autonomously.

💡 What it does

DevRel AI Assistant is a lightweight autonomous pipeline that:

Ingests GitHub issues from any public repo

Uses a local TinyLLaMA model (via Ollama) to classify them into actionable DevRel categories

Generates concrete, strategic DevRel suggestions (under 120 words)

Enhances understanding with real-time web insights via Tavily

Visualizes it all in an interactive Streamlit dashboard

Lets users filter, explore, and export JSON/CSV/Markdown reports instantly

It’s a plug-and-play DevRel strategist that turns noisy feedback into clear actions, and it stays fully offline-capable when the optional Tavily web enrichment is skipped.
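The ingestion step above can be sketched against the public GitHub REST issues endpoint. This is an illustrative stand-in, not the project's actual code; the function names and field selection are assumptions:

```python
import json
import urllib.request

def fetch_issues(owner: str, repo: str, per_page: int = 30) -> list[dict]:
    """Fetch open issues from a public repo via the GitHub REST API."""
    url = (f"https://api.github.com/repos/{owner}/{repo}/issues"
           f"?state=open&per_page={per_page}")
    with urllib.request.urlopen(url) as resp:
        issues = json.load(resp)
    # The issues endpoint also returns pull requests; drop them.
    return [i for i in issues if "pull_request" not in i]

def simplify_issue(issue: dict) -> dict:
    """Reduce a raw issue payload to the fields the pipeline needs."""
    return {
        "number": issue["number"],
        "title": issue["title"],
        "body": issue.get("body") or "",
        "labels": [label["name"] for label in issue.get("labels", [])],
    }
```

Unauthenticated calls like this are rate-limited by GitHub, so a real run on large repos would want a token and pagination.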

🛠️ How we built it

🧠 LLM-powered agents: We used TinyLLaMA (via Ollama) for issue classification and DevRel suggestion generation.
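A minimal sketch of what such a classification agent might look like against Ollama's `/api/generate` endpoint. The category list, prompt wording, and function names are illustrative assumptions, not the project's actual prompts:

```python
import json
import urllib.request

# Hypothetical DevRel categories; the real taxonomy may differ.
CATEGORIES = ["docs-gap", "feature-request", "bug-report", "community-question", "other"]

def build_prompt(title: str, body: str) -> str:
    return (
        "You are a DevRel analyst. Classify this GitHub issue into exactly one "
        f"of: {', '.join(CATEGORIES)}. Reply with the category name only.\n\n"
        f"Title: {title}\nBody: {body[:1000]}"
    )

def parse_category(response_text: str) -> str:
    """Map the model's free-form reply onto a known category."""
    text = response_text.strip().lower()
    for cat in CATEGORIES:
        if cat in text:
            return cat
    return "other"

def classify(title: str, body: str, model: str = "tinyllama",
             host: str = "http://localhost:11434") -> str:
    payload = json.dumps({"model": model, "prompt": build_prompt(title, body),
                          "stream": False}).encode()
    req = urllib.request.Request(f"{host}/api/generate", data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return parse_category(json.load(resp)["response"])
```

Parsing the reply defensively matters with small models like TinyLLaMA, which rarely return the bare label on the first try.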

🔍 Web context agent: Integrated Tavily API to fetch relevant web data per issue.
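A hedged sketch of the web-context step, posting to Tavily's search endpoint and condensing the hits into prompt-sized context (the payload shape follows Tavily's public API; `format_context` and its truncation limits are our own illustrative choices):

```python
import json
import urllib.request

def tavily_search(query: str, api_key: str, max_results: int = 3) -> list[dict]:
    """Query the Tavily search API and return its result list."""
    payload = json.dumps({"api_key": api_key, "query": query,
                          "max_results": max_results}).encode()
    req = urllib.request.Request("https://api.tavily.com/search", data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp).get("results", [])

def format_context(results: list[dict], limit: int = 500) -> str:
    """Condense search hits into a short context block for the LLM prompt."""
    lines = [f"- {r['title']}: {r['content'][:150]}" for r in results]
    return "\n".join(lines)[:limit]
```

Capping each snippet keeps the enriched prompt within TinyLLaMA's small context window.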

📊 Interactive dashboard: Built in Streamlit with filters, search, and data export.

📦 Agent pipeline: Orchestrated via run_pipeline.py to combine classification, search, and suggestion logic into one core flow.
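The core flow of an orchestrator like `run_pipeline.py` can be sketched as a loop that composes the three agents; the stages are passed in as callables here purely for illustration:

```python
def run_pipeline(issues: list[dict], classify, web_context, suggest) -> list[dict]:
    """Core flow: classify each issue, enrich with web context, then suggest."""
    results = []
    for issue in issues:
        category = classify(issue["title"], issue["body"])
        context = web_context(issue["title"])
        suggestion = suggest(issue, category, context)
        results.append({**issue, "category": category,
                        "context": context, "suggestion": suggestion})
    return results
```

Injecting the stages as functions keeps the orchestration testable with stubs and makes swapping the model or search backend a one-line change.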

🧾 Auto-reporting: Generates stakeholder-ready reports for offline insights in Markdown, CSV, and JSON.
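Emitting the three report formats from the same result records is straightforward with the standard library. A minimal sketch (field names are assumptions matching the pipeline records above):

```python
import csv
import io
import json

FIELDS = ["number", "title", "category", "suggestion"]

def to_markdown(results: list[dict]) -> str:
    lines = ["# DevRel Report", ""]
    for r in results:
        lines += [f"## #{r['number']} {r['title']}",
                  f"- Category: {r['category']}",
                  f"- Suggestion: {r['suggestion']}", ""]
    return "\n".join(lines)

def to_csv(results: list[dict]) -> str:
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=FIELDS)
    writer.writeheader()
    for r in results:
        writer.writerow({k: r[k] for k in FIELDS})
    return buf.getvalue()

def to_json(results: list[dict]) -> str:
    return json.dumps(results, indent=2)
```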

⚙️ Deployed on Hugging Face Spaces, fully Dockerized with GPU-inference capabilities even under resource constraints.

🧱 Challenges we ran into

Running LLMs locally in a public cloud space (with Docker + GPU + Ollama) was extremely tricky; we had to build custom wait logic and optimize RAM and token budgets.
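Wait logic like this typically polls the Ollama server until it answers before the pipeline starts. A generic sketch (the readiness check and timeouts are illustrative assumptions):

```python
import time
import urllib.error
import urllib.request

def wait_for(check, timeout: float = 120, interval: float = 2) -> bool:
    """Poll `check` until it returns True or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if check():
            return True
        time.sleep(interval)
    return False

def ollama_ready(host: str = "http://localhost:11434") -> bool:
    """Ollama's root endpoint responds once the server is up."""
    try:
        with urllib.request.urlopen(host, timeout=2) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False
```

Taking the check as a callable means the same loop can also gate on model download completion or GPU availability.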

Prompt engineering was an art: getting concise yet high-quality DevRel suggestions consistently required multiple tuning loops.

Balancing latency, clarity, and relevance with TinyLLaMA was a tightrope walk given compute limits.

Tavily limits meant we had to merge context + suggestion logic smartly without over-querying.
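One common way to stay under a search API's quota is to memoize queries so near-duplicate issues reuse earlier results. A sketch of that idea (this decorator is our own illustration, not the project's code):

```python
def cached(fn):
    """Memoize by normalized query so repeated issues don't re-hit the API."""
    store = {}
    def wrapper(query: str):
        key = query.strip().lower()
        if key not in store:
            store[key] = fn(query)
        return store[key]
    wrapper.cache = store  # exposed for inspection/clearing
    return wrapper
```

Wrapping the search call (e.g. `search = cached(tavily_search_fn)`) makes every duplicate query free.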

🏆 Accomplishments that we're proud of

Shipped a fully functional DevRel SaaS-style agent powered entirely by open models: no OpenAI, no paid APIs.

Built an MVP that looks, feels, and performs like a product — not just a script.

Created something that real DevRel teams could use today to save hours each week.

Proved that even on minimal hardware (4GB VRAM), agentic AI can solve real business problems.

📚 What I learned

Open-source LLMs are more than good enough for intelligent task pipelines.

You don’t need OpenAI to build intelligent agents — with smart prompt design and modular thinking, TinyLLaMA did the job.

DevRel automation is not sci-fi — it’s already feasible today in a meaningful way.

A solo dev with resource limits can still ship full-stack AI tools if focused right.

🔮 What's next for DevRel AI Assistant

GitHub Actions integration: Trigger DevRel insights automatically on new issues.

Multi-model support: Plug in Mistral, Phi, or Mixtral as model backends.

Memory & learning: Add persistent memory to adapt suggestions per repo trends over time.

Fine-tuning: Train a lightweight DevRel-specific model for even sharper insights.

SaaSify it: Wrap the tool into a hosted app that DevRel teams can use without code.
