# 🏔️ Mount AI: Scholar – The Complexity Translator

## 💡 Inspiration
As a student and developer, I've often felt overwhelmed by the sheer volume of complex information we have to process daily. Whether it's a 50-page medical research paper or a dense physics lecture, the cognitive load is real. I wanted to build a tool that doesn't just give answers, but actually teaches. Inspired by the Mount AI Pro productivity suite, Scholar was born to act as a bridge between raw complexity and human understanding. My goal was to create an AI tutor that could instantly turn any text into a structured, visual, and interactive learning experience.

## 🛠️ How I Built It
The project is a full-stack application designed for speed and precision:

- **AI Engine:** Llama 3.3 70B via the Groq API. The near-instant inference speed allows for a real-time feel that is crucial for maintaining focus during study sessions.
- **Frontend:** Built with React and Tailwind CSS for a clean, professional "Scholar" aesthetic.
- **Visual Learning:** I integrated Mermaid.js to dynamically generate mindmaps from AI responses, allowing users to visualize the hierarchy of concepts.
- **Mathematical Precision:** To support STEM students, I implemented LaTeX rendering so that formulas like the Schrödinger equation

  $$i\hbar \frac{\partial}{\partial t}\Psi(\mathbf{r}, t) = \hat{H}\Psi(\mathbf{r}, t)$$

  look as beautiful as they do in a textbook.
- **Backend:** An Express.js server handles API requests and keeps environment variables (such as the API key) secure on the server side.

## 🚧 Challenges I Faced
- **Prompt Engineering for Visuals:** Getting the AI to consistently output valid Mermaid.js syntax for mindmaps was tricky. I had to refine the system prompts multiple times to ensure the nodes and connections were always syntactically correct.
- **Multi-language Support:** Keeping the AI's "Expert Tutor" persona consistent across 8 different languages while using the correct technical vocabulary (e.g., precise medical terms in French vs. English) required deep prompt tuning.
- **Real-time UX:** Balancing loading states with Groq's high-speed output required careful state management in React to keep the UI smooth and responsive.

## 🧠 What I Learned
This hackathon taught me the true power of Llama 3.3 when combined with specialized infrastructure like Groq. Speed isn't just a luxury; it's a feature that changes how users interact with AI. I also deepened my knowledge of:

- **Visual Data Representation:** How to transform raw text into structured diagrams programmatically.
- **Pedagogical AI:** How to instruct an LLM to act as a teacher rather than just a chatbot.
- **Full-stack Integration:** Managing a seamless flow between a Vite frontend and an Express backend in a production-ready environment.

## 🚀 The Future of Mount AI: Scholar
This is just the beginning. I plan to add PDF parsing, voice-to-notes capabilities, and a collaborative "Study Room" feature. Mount AI: Scholar is here to ensure that no concept is too high to climb. I hope you'll like my app!
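The Mermaid prompt-engineering challenge described above pairs naturally with a lightweight guard on the frontend: pull the fenced `mermaid` block out of the model's reply and run a basic sanity check before handing it to Mermaid.js for rendering. This is a minimal sketch of that idea; the helper names and the `mindmap`-root check are my own illustrative assumptions, not the project's actual code:

```javascript
// Extract the first ```mermaid fenced block from an LLM reply.
// Returns null when no well-formed fence is found.
function extractMermaid(reply) {
  const match = reply.match(/```mermaid\s*\n([\s\S]*?)```/);
  return match ? match[1].trim() : null;
}

// Very light sanity check before passing the text to Mermaid.js:
// a mindmap diagram must start with the "mindmap" keyword
// and contain at least one node line after it.
function looksLikeMindmap(diagram) {
  if (!diagram) return false;
  const lines = diagram.split("\n").map((l) => l.trim()).filter(Boolean);
  return lines[0] === "mindmap" && lines.length > 1;
}

// Example reply as the system prompt would request it.
const reply = [
  "Here is your mindmap:",
  "```mermaid",
  "mindmap",
  "  root((Quantum Mechanics))",
  "    Wave functions",
  "    Operators",
  "```",
].join("\n");

const diagram = extractMermaid(reply);
console.log(looksLikeMindmap(diagram)); // true
```

A check like this makes it easy to fall back to a plain-text answer (or to re-prompt the model) instead of letting Mermaid.js throw a parse error mid-session.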

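To make the backend role above concrete, here is a sketch of how an Express server might assemble the request for Groq's OpenAI-compatible chat completions endpoint while keeping the secret key in an environment variable. The model id, temperature, and system-prompt wording are assumptions for illustration, not the project's actual configuration:

```javascript
// Build the request body for Groq's OpenAI-compatible
// chat completions endpoint. The model id and system prompt
// here are illustrative assumptions.
function buildGroqRequest(userText, language = "English") {
  return {
    model: "llama-3.3-70b-versatile",
    temperature: 0.3, // keep explanations consistent and focused
    messages: [
      {
        role: "system",
        content:
          `You are an expert tutor. Reply in ${language}. ` +
          "Structure every answer with headings, a Mermaid mindmap " +
          "in a ```mermaid fence, and LaTeX for all formulas.",
      },
      { role: "user", content: userText },
    ],
  };
}

// On the Express side, the body would be forwarded with the key read
// from the environment, so it is never shipped to the React client:
//   fetch("https://api.groq.com/openai/v1/chat/completions", {
//     method: "POST",
//     headers: {
//       Authorization: `Bearer ${process.env.GROQ_API_KEY}`,
//       "Content-Type": "application/json",
//     },
//     body: JSON.stringify(buildGroqRequest(req.body.text)),
//   });

const body = buildGroqRequest("Explain the Schrödinger equation", "French");
console.log(body.messages.length); // 2
```

Keeping the prompt builder as a pure function also makes the multi-language persona easy to test: the same tutor instructions are reused, with only the target language swapped in.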