🧠 Information Credibility Assistant – Hackathon Reflection
💡 Inspiration
This project was inspired by a previous application I built: a 1-minute news summarizer. While that project focused on condensing information, I wanted to take things a step further by making the experience more interactive and intelligent.
Instead of just summarizing content, I aimed to create a tool that allows users to engage with information through an AI chatbot, helping them better understand and evaluate the credibility of what they are reading. With the growing concern around misinformation, I saw an opportunity to build something both practical and impactful.
🛠️ How I Built It
I structured the project as a full-stack application:
- Frontend: React (Vite)
- Backend: Node.js with Express
- AI Integration: Local LLM using Xenova/TinyLlama-1.1B-Chat-v1.0
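Xenova-hosted models are typically loaded through the Transformers.js library. As a rough sketch of how the integration might look (the pipeline call itself is only shown in comments, since it downloads model weights; the `formatChatPrompt` helper and the system/user messages are illustrative assumptions, not the project's actual code), TinyLlama's chat variant expects a Zephyr-style prompt template:

```javascript
// Hypothetical sketch: building a Zephyr-style chat prompt for
// TinyLlama-1.1B-Chat-v1.0 before handing it to a text-generation pipeline.
// formatChatPrompt and the message contents are assumptions for illustration.

function formatChatPrompt(messages) {
  // Each message is { role: 'system' | 'user' | 'assistant', content: string }.
  const parts = messages.map((m) => `<|${m.role}|>\n${m.content}</s>`);
  // End with the assistant tag so the model continues from there.
  return parts.join('\n') + '\n<|assistant|>\n';
}

// Usage with Transformers.js (not run here, as it fetches the model):
//   import { pipeline } from '@xenova/transformers';
//   const generate = await pipeline('text-generation', 'Xenova/TinyLlama-1.1B-Chat-v1.0');
//   const [out] = await generate(prompt, { max_new_tokens: 256 });

const prompt = formatChatPrompt([
  { role: 'system', content: 'You assess the credibility of articles.' },
  { role: 'user', content: 'Summarize this article and flag weak claims.' },
]);
console.log(prompt);
```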
Key Features:
- Users can input or paste text/articles
- AI chatbot analyzes and responds to user queries about the content
- Focus on credibility, summarization, and explanation
Development Tools:
- ChatGPT: Helped scaffold the initial project structure and guide implementation decisions
- GitHub Copilot: Assisted with writing boilerplate code and debugging issues quickly
📚 What I Learned
This project pushed me to explore several new areas:
- Working with local AI models instead of relying on cloud APIs
- Understanding token limitations and how they affect AI model usage
- Designing a system that balances performance vs. capability
- Improving my debugging workflow when integrating multiple technologies
- Learning how to adapt quickly when original plans fail
⚔️ Challenges I Faced
1. Token Limit Issues
Initially, I planned to use more capable cloud-hosted models such as Gemini through their APIs. However, I repeatedly ran into token limits, which prevented the application from functioning as intended.
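One common way to stay under a context-window limit is to estimate the token count of the input and truncate before building the prompt. A minimal sketch of that idea, assuming a rough chars-per-token heuristic (the ~4 characters per token ratio is a common English approximation, not an exact tokenizer count, and the function names are hypothetical):

```javascript
// Hypothetical guard against overrunning a model's context window:
// estimate tokens with a chars-per-token heuristic and trim the article.

const CHARS_PER_TOKEN = 4; // rough approximation for English text

function estimateTokens(text) {
  return Math.ceil(text.length / CHARS_PER_TOKEN);
}

function fitToBudget(article, maxTokens) {
  if (estimateTokens(article) <= maxTokens) return article;
  // Keep the start of the article; the lead usually carries the key claims.
  return article.slice(0, maxTokens * CHARS_PER_TOKEN);
}

const article = 'x'.repeat(10000);         // ~2500 estimated tokens
const trimmed = fitToBudget(article, 512); // enforce a 512-token budget
console.log(estimateTokens(trimmed));      // 512
```

A real implementation would use the model's own tokenizer for exact counts, but a heuristic like this is often enough to keep requests within limits during a hackathon.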
2. Switching to Local Models
After multiple failed attempts with external APIs, I pivoted to using TinyLlama locally. While this solved the token issue, it introduced new challenges:
- Less accurate and less sophisticated responses
- Performance limitations
- Additional setup complexity
3. Time Constraints
Since this was a hackathon project, I had limited time to:
- Experiment with multiple AI solutions
- Debug integration issues
- Polish the user experience
🚀 Final Thoughts
Despite the challenges, I’m proud of how the project turned out. I successfully:
- Built a working AI-powered application
- Adapted to technical limitations under pressure
- Explored a new approach using local AI models
If I had more time, I would:
- Improve the AI model quality (possibly using a hybrid approach)
- Enhance the UI/UX
- Add more advanced credibility analysis features
This project was a great learning experience and gave me deeper insight into the real-world challenges of integrating AI into applications.