Inspiration
The path to higher education is often blocked by financial barriers and complex application processes. We noticed that many deserving students miss out on life-changing scholarships simply because they lack access to expert guidance or don't know where to look. We built ScholarHunter to bridge this gap, putting an elite academic consultant in the hands of every student, making global education accessible regardless of their background or financial status.
What it does
ScholarHunter is a comprehensive AI ecosystem that manages every stage of the scholarship search and application journey. It creates personalized match rankings by aligning student profiles with global opportunities in real-time. Through a multimodal chat interface, users can consult with an expert AI that analyzes text, documents, and images. The platform also automates the creation of critical application materials, like Statements of Purpose, and provides a mock interview suite with intelligent feedback to prepare students for success.
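The match-ranking step can be sketched as a simple weighted scoring function. This is a minimal illustrative stand-in, not our production ranking logic: the field names (`degree`, `fields`, `gpa`, `min_gpa`) and the weights are hypothetical.

```python
def match_score(profile: dict, scholarship: dict) -> float:
    """Toy weighted score: degree match, field overlap, GPA eligibility."""
    score = 0.0
    if profile["degree"] == scholarship["degree"]:
        score += 0.5  # degree level is the strongest signal
    shared = set(profile["fields"]) & set(scholarship["fields"])
    score += 0.3 * (len(shared) / max(len(scholarship["fields"]), 1))
    if profile["gpa"] >= scholarship["min_gpa"]:
        score += 0.2  # student meets the eligibility floor
    return score

def rank(profile: dict, scholarships: list[dict]) -> list[dict]:
    """Return scholarships ordered from best to worst match for this profile."""
    return sorted(scholarships, key=lambda s: match_score(profile, s), reverse=True)
```

In the real system the ranking also folds in the AI's analysis of the student's documents, but the core idea is the same: score each opportunity against the profile, then sort.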
How we built it
We engineered a high-performance three-tier architecture. The frontend is a responsive Next.js 15 application designed for a professional academic experience. The Core API is built with NestJS, orchestrating authentication and real-time updates via WebSockets, while persisting data across PostgreSQL and MongoDB. The Intelligence Layer is a specialized FastAPI service that leverages the Google Gemini 3.0 Pro model, optimized for multimodal processing and low-latency streaming responses. It also runs a cron job that scrapes M.Sc. and PhD scholarships and saves them to the database, which are in turn displayed to the user. The entire system is fully containerized with Docker for seamless deployment.
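The filter-and-save step of that cron job can be sketched as below. This is a simplified stand-in: the `Scholarship` shape and the in-memory `db` dict are illustrative placeholders for our actual models and database layer, and the scraping itself is omitted.

```python
from dataclasses import dataclass

@dataclass
class Scholarship:
    title: str
    degree_level: str  # e.g. "M.Sc" or "PhD"
    url: str

# Stand-in "database" keyed by URL, so repeated cron runs upsert
# existing entries instead of creating duplicates.
db: dict[str, Scholarship] = {}

def save_scholarships(scraped: list[Scholarship]) -> int:
    """Keep only M.Sc/PhD listings and upsert them by URL; returns count stored."""
    stored = 0
    for s in scraped:
        if s.degree_level in {"M.Sc", "PhD"}:
            db[s.url] = s
            stored += 1
    return stored
```

Keying on the listing URL is what lets the job run on a schedule without flooding the database with duplicates.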
Challenges we ran into
One of our biggest hurdles was managing large multimodal payloads across the microservices; we had to meticulously tune payload limits and prompt lengths to support Base64-encoded files without hitting "Request Entity Too Large" errors. Additionally, synchronizing the frontend state with the AI's real-time "Live Discovery" engine required complex WebSocket event handling. Finally, engineering the LLM's system prompt to shift from generic encouragement to providing nuanced, data-driven academic advice was an intensive iterative process.
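The payload-tuning arithmetic is worth making explicit: Base64 expands every 3 bytes of input into 4 output characters, so a file inflates by roughly a third before it ever reaches the API. A minimal sketch of the pre-flight check (the 10 MB limit and overhead figure are hypothetical, not our actual configuration):

```python
import math

MAX_BODY_BYTES = 10 * 1024 * 1024  # hypothetical server body limit (10 MB)

def base64_size(raw_bytes: int) -> int:
    """Base64 output length: 4 chars per 3 input bytes, rounded up for padding."""
    return 4 * math.ceil(raw_bytes / 3)

def fits_limit(raw_bytes: int, overhead: int = 4096) -> bool:
    """True if the encoded file plus JSON/prompt overhead stays under the limit."""
    return base64_size(raw_bytes) + overhead <= MAX_BODY_BYTES
```

So a 7 MB upload still fits under a 10 MB limit after encoding, but an 8 MB one does not; checking this client-side avoids the 413 round trip entirely.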
Accomplishments that we're proud of
We are incredibly proud of being among the first to implement a fully multimodal chat interface using the latest Gemini 3.0 models. Our "Live Discovery" UI effectively creates a matching experience that feels "alive," where scholarships appear in real-time with dynamic progress tracking. Successfully orchestrating five distinct services—API, LLM, Redis, Postgres, and Mongo—into a single, reliable environment was also a major milestone for the team.
What we learned
Building ScholarHunter taught us that prompt engineering is essentially a new form of UX design—how the AI speaks is just as important as the data it provides. We gained deep experience in managing distributed application state, particularly handling live streaming chunks and persistent chat history across sessions. We also learned the immense power of Docker in simplifying the development and deployment of complex AI-driven infrastructures.
What's next for ScholarHunter
Our roadmap includes an Auto-Apply Engine that will move the platform from document generation to one-click applications by integrating directly with scholarship portals. We also plan to launch a mobile application for on-the-go discovery and build a secure Document Vault where students can version-control their academic transcripts and essays. Finally, we aim to foster a peer community where students can share verified success stories and tips.
In the nearer term, we also need to finish the remaining pages and functionality and complete the UI polish.