INSPIRATION: Every student has different learning needs, but homework is still a "one size fits all" approach. Teachers are stretched to the maximum and don't have time to develop their own resources. We wanted to use AI to provide every student with a learning experience customized to their unique cognitive strengths and interests, helping assignments feel more like games and less like chores.
WHAT IT DOES: Adaptify is an AI‑based platform that generates genuinely personalized, gamified assignments on the fly. Students complete a brief interactive quiz that assesses four cognitive factors: Logical Reasoning, Working Memory, Pattern Recognition, and Problem Solving. Teachers can upload their own materials (PDF, DOCX, or plain text); the platform applies Retrieval‑Augmented Generation (RAG) to ground all AI output in the real course material. From there, it builds a unique assignment for each student, broken into "Game Zones" matched to that student's cognitive profile. As they work, students chat with an AI tutor that delivers Socratic nudges (never direct answers) tailored to their interests and learning preferences.
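The tutor flow above can be sketched roughly as follows. This is an illustrative reconstruction, not the project's actual code: the profile keys, the `build_tutor_prompt` helper, and the prompt wording are all assumptions.

```python
# Hypothetical sketch: composing a Socratic-tutor system prompt from a
# student's cognitive profile and interests. All names are assumptions.

def build_tutor_prompt(profile: dict[str, float], interests: list[str]) -> str:
    """Build a system prompt that forces hint-only, analogy-rich replies."""
    strongest = max(profile, key=profile.get)  # e.g. "pattern_recognition"
    return (
        "You are a Socratic tutor. Never give the final answer; "
        "respond only with guiding questions and hints.\n"
        f"The student's strongest cognitive skill is {strongest}; "
        "frame your hints around it.\n"
        f"Draw analogies from the student's interests: {', '.join(interests)}."
    )

prompt = build_tutor_prompt(
    {"logical_reasoning": 0.6, "working_memory": 0.4,
     "pattern_recognition": 0.9, "problem_solving": 0.7},
    ["soccer", "video games"],
)
print(prompt)
```

In a setup like this, the system prompt (not the model weights) carries the personalization, so each chat turn stays grounded in the quiz results.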
HOW WE BUILT IT: Backend: FastAPI (Python) with SQLite and the SQLAlchemy ORM. AI Engine: Groq API (Llama 3 70B) for high-speed, low-latency generation. RAG & Vector Search: an in-memory vector store (vector_store.py) that fetches the relevant chunks from uploaded teacher materials. File Parsing: pdfplumber and python-docx for extracting text from documents. Frontend: vanilla JavaScript, HTML5, and CSS3 – dark mode with glassmorphism effects.
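An in-memory vector store of the kind described can be sketched in a few lines. This is a toy stand-in, not the project's vector_store.py: the bag-of-words `embed` function substitutes for a real embedding model, and the class and method names are assumptions.

```python
# Minimal in-memory vector store sketch: store (chunk, vector) pairs and
# return the top-k chunks by cosine similarity above a threshold.
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a real store would call an embedding model.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class VectorStore:
    def __init__(self) -> None:
        self.chunks: list[tuple[str, Counter]] = []

    def add(self, chunk: str) -> None:
        self.chunks.append((chunk, embed(chunk)))

    def search(self, query: str, k: int = 3, threshold: float = 0.1) -> list[str]:
        q = embed(query)
        scored = sorted(
            ((cosine(q, vec), text) for text, vec in self.chunks),
            reverse=True,
        )
        return [text for score, text in scored[:k] if score >= threshold]

store = VectorStore()
store.add("Photosynthesis converts light energy into chemical energy.")
store.add("The French Revolution began in 1789.")
print(store.search("How does photosynthesis work?"))
```

The `threshold` parameter is the knob mentioned under obstacles below: raising it keeps weakly related chunks out of the prompt at the cost of sometimes retrieving nothing.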
OBSTACLES WE ENCOUNTERED: Prompt engineering: getting the AI to consistently return valid JSON aligned with each student's cognitive profile took dozens of iterations. RAG accuracy: keeping the AI from hallucinating and relying only on teacher-uploaded materials was tough; we tuned the retrieval thresholds carefully. Context limits: striking a balance between the amount of retrieved content and the model's token limit while keeping assignments concise yet rich. Real-time interaction: keeping the UX responsive while the AI generates – Groq's speed was instrumental here.
THINGS WE'RE PROUD OF: Built a fully working platform where teachers can upload content and, within seconds, get a personalized assignment for any student. Developed a cognitive test that effectively maps students along four critical learning dimensions. Built a Socratic AI tutor that offers hints tailored to a student's interests (for example, explaining math with soccer analogies) without giving away answers. Grounded generation in teacher materials through full RAG integration to minimize hallucinations. Built a seamless, contemporary UI that makes even complex AI interactions feel simple and fun to use.
WHAT WE LEARNED: Personalized AI is much more than "recommendations"; it can generate entire pieces of content. RAG is crucial for anchoring AI to real data; without it, even the most sophisticated LLMs churn out plausible but wrong material. Prompt engineering is as important as model selection; structured prompts with examples perform significantly better. User experience matters too: delivering AI output in a game-like "zone" format keeps users engaged and reduces friction.
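The "structured prompts with examples" lesson can be illustrated with a few-shot prompt that pins down the output schema. The schema and the worked example below are assumptions for illustration, not the project's actual prompt.

```python
# Illustrative few-shot structured prompt: explicit schema + one worked
# example + the task. Schema and example are hypothetical.
SCHEMA = '{"title": str, "game_zones": [{"name": str, "task": str}]}'

EXAMPLE = (
    '{"title": "Fractions Quest", "game_zones": '
    '[{"name": "Pattern Arena", '
    '"task": "Continue the sequence 1/2, 1/4, 1/8, ..."}]}'
)

def build_prompt(topic: str) -> str:
    return (
        "Return ONLY valid JSON matching this schema:\n"
        f"{SCHEMA}\n\nExample output:\n{EXAMPLE}\n\n"
        f"Now generate an assignment on: {topic}"
    )

print(build_prompt("photosynthesis"))
```

Showing the model one concrete, parseable example tends to reduce malformed output far more than describing the format in prose alone.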
WHAT'S NEXT FOR ADAPTIFY: Expand the cognitive dimensions (verbal, spatial, and social learning styles). Add adaptive difficulty: tasks that adjust on the fly to how the student is doing. Allow teacher collaboration: educators will be able to edit AI‑created assignments before distribution. Develop a mobile application so the platform can be accessed from anywhere. Incorporate more media: automatically adding images, diagrams, or videos based on each student's preferred learning style.