Inspiration
Students often struggle to convert raw notes into structured, exam-ready material. Long paragraphs, disorganized content, and complex explanations make revision difficult and time-consuming.
At the same time, many people search online about health symptoms and receive alarming or misleading information. We wanted to build a system that not only improves academic understanding but also promotes responsible AI use in sensitive areas like health.
This inspired us to create an AI assistant that focuses on clarity, structure, and ethical guidance.
What it does
Our project is an AI-powered assistant that operates in two modes:
Academic Transformation Mode
- Converts raw notes into structured, point-wise explanations
- Generates beginner-friendly explanations
- Creates 5-mark and 10-mark exam answers
- Extracts key terms and definitions
- Provides quick revision summaries
Responsible Health Guidance Mode
- Automatically detects symptom-related input
- Does not diagnose or name diseases
- Responds calmly and responsibly
- Suggests general care steps
- Encourages consulting a healthcare professional
The system switches behavior silently based on input context.
How we built it
We built the project using:
- Flask (Python) for the backend
- The Gemini 3 Flash model for AI reasoning
- Structured system prompts to control behavior
- Keyword-based health detection logic
- Context-aware response switching
- A clean HTML/CSS interface for user interaction
We designed conditional logic to differentiate between academic content and health-related inputs.
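The conditional logic described above can be sketched roughly as follows. Everything here is illustrative, not the project's actual code: the keyword list, prompt text, and function names are assumptions, and in the real app this logic would sit behind a Flask route that forwards the chosen system prompt to the Gemini API.

```python
import re

# Hypothetical keyword list; the project's actual list is not shown.
HEALTH_KEYWORDS = {"fever", "headache", "cough", "pain", "nausea", "symptom"}

ACADEMIC_PROMPT = (
    "Convert the user's notes into structured, point-wise study material "
    "with key terms, definitions, and exam-style answers."
)
HEALTH_PROMPT = (
    "The user describes symptoms. Respond calmly, suggest general care "
    "steps, never name or diagnose a disease, and encourage consulting "
    "a healthcare professional."
)

def is_health_input(text: str) -> bool:
    # Tokenize into whole words so substrings inside unrelated words
    # do not trigger health mode.
    tokens = set(re.findall(r"[a-z]+", text.lower()))
    return bool(tokens & HEALTH_KEYWORDS)

def pick_system_prompt(text: str) -> str:
    # The switch is silent: the user never sees which prompt was chosen.
    return HEALTH_PROMPT if is_health_input(text) else ACADEMIC_PROMPT
```

The key design choice is that mode selection happens before the model is called, so a single endpoint can serve both behaviors without the user ever choosing a mode.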
Challenges we ran into
- Preventing false health-mode triggers
- Designing non-diagnostic health responses
- Maintaining structured academic output consistently
- Avoiding long, unstructured AI responses
- Balancing usefulness with ethical responsibility
We had to carefully engineer prompts and detection logic to ensure safe and structured outputs.
Accomplishments that we're proud of
- Successfully built context-aware AI behavior
- Implemented silent auto-detection of health-related input
- Enforced structured academic formatting
- Designed ethical safeguards for sensitive topics
- Built a dual-purpose AI assistant with clear role control
We are proud of creating a system that is both intelligent and responsible.
What we learned
- Prompt engineering significantly affects output quality
- Structured formatting improves user understanding
- Responsible AI design requires clear behavioral constraints
- Context detection improves realism and usability
- Ethical considerations are as important as technical performance
What's next for enhanced usage of Gemini 3
- Smarter semantic detection instead of keyword-based health triggers
- PDF and document upload for automatic academic conversion
- Multi-language support
- Voice input for accessibility
- Adaptive learning modes (Beginner / Exam / Interview)
- Image-based content understanding using Gemini’s multimodal capabilities
We aim to further leverage Gemini 3’s reasoning and multimodal strengths to make the system more intelligent and accessible.
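Replacing keyword triggers with semantic detection could look roughly like this sketch: score the input's similarity to a few health-related prototype sentences and compare against a tuned threshold. The `embed` function below is a toy placeholder; a real version would call a learned sentence-embedding model (e.g. a Gemini embedding endpoint).

```python
import math

def cosine(u, v):
    # Cosine similarity between two equal-length vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def embed(text):
    # Toy placeholder: a character-frequency vector. A real system
    # would use a sentence-embedding model here instead.
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    return vec

# Hypothetical prototype sentences representing health-related input.
HEALTH_PROTOTYPES = ["I have a fever and a headache", "my stomach hurts"]

def health_score(text):
    # Highest similarity to any prototype; enter health mode when the
    # score exceeds a threshold tuned on example queries.
    return max(cosine(embed(text), embed(p)) for p in HEALTH_PROTOTYPES)
```

Unlike keyword matching, this approach can flag paraphrases ("I feel awful and my head is pounding") that contain no listed keyword, at the cost of needing a threshold to tune.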