# Compify: AI Math Olympiad Trainer

## Inspiration
The path to mastering Math Olympiads (AMC, AIME, USAMO) is often gated by access to high-quality tutoring. While the Art of Problem Solving (AoPS) community provides incredible resources, students often struggle to bridge the gap between a problem they can't solve and the high-level logic found in past competition solutions. We wanted to build a "Digital Coach" that doesn't just give answers, but finds the most relevant historical context to teach the *why* behind the math.
## What it does
Compify is an AI-powered Math Olympiad Trainer built around a "Vision-to-Logic" pipeline:

1. **Vision:** A student uploads an image of a handwritten or printed math problem.
2. **RAG (Retrieval-Augmented Generation):** The app converts the problem into a vector embedding and searches a curated database of 2,000+ elite competition problems (from the Hendrycks/MATH dataset) for the closest logical match.
3. **Reasoning:** Using Gemini 2.5/3 models, the app explains the logic of the historical match and applies that specific strategy to the student's current problem.
4. **Tutor Chat:** A built-in chat interface lets students ask follow-up questions, request hints, or dive deeper into specific formulas.
## How we built it
We built the core application with Streamlit for the frontend and Python for the backend logic.

- **Database:** We processed over 2,000 problems from the Hendrycks/MATH dataset in Google Colab, converting them into a searchable memory bank with `models/text-embedding-004`.
- **LLM Integration:** We used the Google Generative AI SDK, specifically the early-access Gemini 2.5 Flash model for multimodal vision and reasoning.
- **Vector Search:** We implemented cosine similarity for the retrieval step, so the AI "remembers" similar problems before it attempts to solve the new one.
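The retrieval step can be sketched in a few lines of NumPy. This is a minimal illustration, not our production code: the function and field names are our own, and `embed_fn` stands in for a call to `models/text-embedding-004` through the Google Generative AI SDK.

```python
import numpy as np

def build_memory_bank(problems, embed_fn):
    """Embed each problem statement and stack the vectors into a matrix.

    `problems` is a list of dicts with a "problem" key (as in the
    Hendrycks/MATH dataset); `embed_fn` maps text -> vector and stands
    in for a call to models/text-embedding-004.
    """
    vectors = np.array([embed_fn(p["problem"]) for p in problems], dtype=float)
    # Normalize once so retrieval later reduces to a single dot product.
    return vectors / np.linalg.norm(vectors, axis=1, keepdims=True)

def retrieve_closest(query_vec, bank, k=3):
    """Return indices of the k stored problems most similar to the query,
    best match first, ranked by cosine similarity."""
    q = np.asarray(query_vec, dtype=float)
    sims = bank @ (q / np.linalg.norm(q))  # rows of `bank` are pre-normalized
    return np.argsort(sims)[-k:][::-1]
```

In practice the embeddings are computed once (in Colab) and saved to disk, and the embedding function batches its API calls rather than embedding one problem at a time.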
## Challenges we ran into
One of the biggest hurdles was **model compatibility**. Because we were using the latest "Flash" and "Preview" models, standard API calls often required custom handling, so we built a diagnostic "Model Finder" script to identify which next-gen endpoints were active for our API key. Additionally, cleaning raw LaTeX and Asymptote (`asy`) code from the database so the AI could interpret problems without being distracted by formatting required extensive prompt engineering.
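The cleaning step can be illustrated with a small regex-based pass. The `[asy]...[/asy]` delimiters below match how Asymptote figures appear in the MATH dataset, though the placeholder text and function name are illustrative choices of ours:

```python
import re

# Asymptote figure blocks as they appear in the Hendrycks/MATH dataset.
ASY_BLOCK = re.compile(r"\[asy\].*?\[/asy\]", re.DOTALL)

def clean_problem(text):
    """Strip Asymptote figure code and collapse whitespace so the model
    sees only the mathematical statement."""
    text = ASY_BLOCK.sub("[figure omitted]", text)
    return re.sub(r"\s+", " ", text).strip()
```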
## Accomplishments that we're proud of
We are incredibly proud of the **retrieval accuracy**: it is a genuine "eureka" moment when you upload a random geometry problem and the AI correctly surfaces a similar AIME problem from a decade ago to use as a teaching guide. We also implemented **persistent session state**, so the tutor maintains the context of the problem even as the user asks complex follow-up questions in the chat.
## What we learned
We learned that the quality of an AI's reasoning is directly tied to the context it is given. By using RAG to supply a "ground truth" (a solved Olympiad problem) to the LLM, we significantly reduced hallucinations and kept the math rigorous. We also gained deep experience with multimodal AI, managing the transition from image pixels to mathematical embeddings.
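The grounding idea amounts to putting the retrieved, solved problem into the prompt before the student's question. A minimal sketch of that assembly, with illustrative field names and wording:

```python
def build_tutor_prompt(student_problem, match):
    """Assemble a grounded prompt: the retrieved solved problem serves as
    a worked reference so the model's reasoning stays anchored to real math.

    `match` is a dict with "problem" and "solution" keys (illustrative).
    """
    return (
        "You are a Math Olympiad coach.\n\n"
        "Here is a solved reference problem:\n"
        f"PROBLEM: {match['problem']}\n"
        f"SOLUTION: {match['solution']}\n\n"
        "Explain the strategy used above, then apply it to the student's "
        f"new problem:\n{student_problem}"
    )
```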
## What's next for Compify

The next step for Compify is scaling the "Brain." We plan to:

- Add LaTeX rendering so problems and solutions display with proper mathematical notation.
- Expand the database to 15,000+ problems.
- Implement a handwriting-recognition specialist model to better handle messy student notes.
- Generate **Personalized Practice Sets** based on the problem types a student frequently struggles with, turning Compify into a full-scale, adaptive learning platform.
## Built With
- github
- python
- studio
- typescript