Inspiration
The idea for Snap Study was deeply personal. One of our team members has a close friend living with ADHD, and over the years we’ve seen the daily challenges they face in academic environments: struggling with focus, time management, and motivation despite a genuine love of learning. Their experience brought to light a much broader issue: traditional classrooms still rely heavily on dense readings, fast-paced lectures, and abstract concepts, which can overwhelm even the most motivated learners.
For neurodivergent students, especially those with ADHD, dyslexia, or low literacy, this creates a constant uphill battle. Existing AI tools like ChatGPT are powerful, but they’re designed for adults, not for students who need tailored, accessible, and engaging support. That gap inspired Snap Study.
We set out to build an AI-powered companion that truly meets learners where they are. Snap Study takes complex academic content and transforms it into simplified, structured summaries tailored for neurodiverse students. It emphasizes key terms, offers step-by-step explanations, incorporates visual storytelling, and provides text-to-speech audio for auditory learners. Every feature is designed to reduce cognitive overload and make studying more approachable, empowering students to learn in the way that works best for them.
What it does
Snap Study is an AI-powered learning companion specially designed to support neurodiverse students by transforming complex educational materials into simplified, engaging, and accessible formats. Every feature and detail in Snap Study is crafted with neurodiverse learners in mind to create a welcoming, stress-free environment where learning feels achievable and personalized.
Students start by uploading a photo or PDF of a textbook page, lecture slide, or handwritten notes. Using Amazon Textract, Snap Study extracts the text and processes it through a literacy-aware language model on Amazon Bedrock. The app then generates a simplified version of the content, organized into four core parts: a clear paragraph summary, a key terms section, a step-by-step explanation, and a visual storytelling component that brings concepts to life.
To support different learning styles, Snap Study offers audio narration of the simplified content via Amazon Polly. The built-in audio player features a single clear voice with straightforward play and pause controls, which eliminate distractions and allow students to listen and follow along comfortably at their own pace.
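The narration step described above can be sketched roughly as follows. This is a minimal illustration rather than our exact implementation: the `chunk_text` helper, the `narrate` function, and the default `VoiceId` are assumptions, and the character limit reflects the fact that Amazon Polly caps the text accepted by a single SynthesizeSpeech call, so longer summaries must be split first.

```python
# Sketch of the Polly narration step (illustrative, not Snap Study's
# production code). Polly caps text per SynthesizeSpeech request, so
# long summaries are split on sentence boundaries before synthesis.

def chunk_text(text: str, limit: int = 2800) -> list[str]:
    """Split text into chunks under a per-request size limit,
    breaking on sentence boundaries where possible."""
    chunks, current = [], ""
    for sentence in text.replace("\n", " ").split(". "):
        piece = sentence if sentence.endswith(".") else sentence + ". "
        if len(current) + len(piece) > limit and current:
            chunks.append(current.strip())
            current = ""
        current += piece
    if current.strip():
        chunks.append(current.strip())
    return chunks

def narrate(text: str, voice_id: str = "Joanna") -> list[bytes]:
    """Call Amazon Polly once per chunk and collect the MP3 bytes."""
    import boto3  # imported lazily so chunk_text needs no AWS deps
    polly = boto3.client("polly")
    audio = []
    for chunk in chunk_text(text):
        resp = polly.synthesize_speech(
            Text=chunk, OutputFormat="mp3", VoiceId=voice_id
        )
        audio.append(resp["AudioStream"].read())
    return audio
```

The resulting MP3 segments can then be stored in S3 and streamed back to the app's audio player in order.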
Students can also ask questions through an integrated chatbot, receiving real-time, easy-to-understand responses. All content, including text summaries, audio files, and visuals, is automatically saved in the Past Uploads section, where users can revisit lessons and mark favorites. The dashboard also includes essential accessibility options such as font size adjustments and light and dark modes to meet individual preferences.
Throughout the experience, students are accompanied every step of the way by Tobi the Tutor Bear, a friendly and reassuring mascot who acts as a constant companion. Tobi’s presence helps students feel supported and encouraged as they work through the text, reducing anxiety and building confidence, especially for learners with attention challenges.
In essence, Snap Study makes learning simpler, more personal, and more supportive by turning overwhelming academic content into an intuitive, multi-sensory experience tailored specifically for neurodiverse students. Every feature, big or small, is thoughtfully built to help these learners feel confident, understood, and encouraged every step of the way.
How we built it
Snap Study is a cross-platform app built with a Flutter frontend and a FastAPI backend that connects seamlessly with AWS AI services to deliver personalized learning.
- The frontend is developed in Flutter, allowing easy deployment on mobile and web. It handles user interactions like PDF uploads, audio playback, and displays simplified content with visual highlights.
- The backend uses FastAPI to manage API endpoints. It receives PDF files, sends them to Amazon Textract for text extraction, then processes the text with Amazon Bedrock (Claude 3.5 Sonnet) for multi-level simplification.
- Simplified content is then converted into natural speech with Amazon Polly, enabling an engaging voice-first experience.
- AWS S3 is used for storing uploaded files and generated audio assets securely.
- We built custom modules to coordinate these services, including error handling and asynchronous processing to keep the app responsive.
- The project uses environment variables to securely manage AWS credentials and configurations.
- The mascot “Tobi” and visual storytelling features enrich the UI, making the app friendly and accessible, particularly for neurodiverse learners.
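The Bedrock step in the pipeline above can be sketched as follows. The endpoint names, prompt wording, and helper structure are illustrative assumptions rather than our production code; the request body, however, follows the Anthropic Messages API format that Claude 3.5 Sonnet expects on Bedrock.

```python
# Illustrative sketch of the text-simplification call (simplified,
# not Snap Study's production backend). build_bedrock_body is pure
# so it can be tested without any AWS connection.
import json

BEDROCK_MODEL_ID = "anthropic.claude-3-5-sonnet-20240620-v1:0"

def build_bedrock_body(extracted_text: str, level: str = "simple") -> str:
    """Build the Anthropic Messages API body for Claude on Bedrock."""
    prompt = (
        f"Simplify the following text to a '{level}' reading level. "
        "Return a paragraph summary, key terms, and a step-by-step "
        "explanation.\n\n" + extracted_text
    )
    return json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 1024,
        "messages": [{"role": "user", "content": prompt}],
    })

def simplify(extracted_text: str, level: str = "simple") -> str:
    """Send Textract-extracted text to Claude 3.5 Sonnet via Bedrock."""
    import boto3  # imported lazily so the pure helper stays AWS-free
    bedrock = boto3.client("bedrock-runtime")
    resp = bedrock.invoke_model(
        modelId=BEDROCK_MODEL_ID,
        body=build_bedrock_body(extracted_text, level),
    )
    payload = json.loads(resp["body"].read())
    return payload["content"][0]["text"]
```

In the real backend this runs asynchronously behind a FastAPI endpoint, with the result passed on to Polly and stored in S3.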
Challenges we ran into
Building Snap Study came with several tough challenges that pushed us to learn and adapt quickly.
- Fine-tuning AI prompts for the text simplification pipeline was tricky. We had to carefully craft and iterate prompts to ensure the content stayed accurate while becoming easier to understand across three complexity levels.
- Initially, our visual storytelling feature struggled because images were not generating properly. We had to troubleshoot the integration and explore alternative ways to reliably create helpful visuals.
- We originally experimented with Llama models for content simplification but integration issues and inconsistent output led us to switch to Anthropic’s Claude 3.5 Sonnet, which provided more reliable and contextually accurate simplifications.
- Managing user state was another hurdle. We encountered issues with saving user info to distinguish new from returning users. This affected personalized features and session continuity.
- On the simplified output screen, some buttons like favoriting lessons malfunctioned. When a user saved one favorite, it sometimes caused others to reset or behave unexpectedly. Fixing this required reworking the frontend state management and backend syncing.
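The prompt iteration described above converged on level-specific instructions along these lines. The level names, wording, and the `build_prompt` helper are a sketch of the approach, not our final tuned prompts.

```python
# Illustrative three-level prompt templates (the wording is a sketch,
# not Snap Study's final prompts).
LEVEL_INSTRUCTIONS = {
    "gentle": (
        "Rewrite for a reader with low literacy: very short sentences, "
        "everyday words, one idea per sentence."
    ),
    "standard": (
        "Rewrite in plain language: short paragraphs, and define every "
        "technical term the first time it appears."
    ),
    "detailed": (
        "Keep the full detail but restructure the content into clearly "
        "labeled steps, highlighting key terms."
    ),
}

def build_prompt(text: str, level: str) -> str:
    """Combine a level-specific instruction with the extracted text,
    always asking the model not to invent facts."""
    if level not in LEVEL_INSTRUCTIONS:
        raise ValueError(f"unknown level: {level!r}")
    return (
        f"{LEVEL_INSTRUCTIONS[level]} Do not add facts that are not in "
        f"the source text.\n\nSOURCE TEXT:\n{text}"
    )
```

Keeping the accuracy constraint in every variant was the key lesson from our iterations: changing only the style instruction per level made outputs easier to compare and debug.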
Accomplishments that we're proud of
- Successfully built a multimodal learning platform that integrates complex AWS AI services like Textract, Bedrock, and Polly to transform dense educational content into accessible, personalized lessons.
- Created a voice-first experience with synced text highlighting, making listening and reading simultaneous and engaging.
- Developed a three-level simplification system that adapts content complexity to meet diverse learner needs, empowering students with ADHD, dyslexia, and low literacy.
- Designed and implemented a friendly, encouraging mascot Tobi, which helps reduce learning anxiety and makes the app feel welcoming and safe.
- Delivered cross-platform support with Flutter, enabling students to access Snap Study on mobile and web seamlessly.
- Overcame major technical hurdles, from AI prompt fine-tuning to fixing UI bugs, demonstrating resilience and a commitment to user-centered design.
What we learned
- Building Snap Study showed us that accessibility is a mindset, not an afterthought. Designing for neurodiverse learners means rethinking everything, from UI choices to how content is delivered and explained.
- We gained deep experience with AWS generative AI services like Textract, Bedrock, and Polly, learning how to chain them together smoothly and handle real-world issues such as latency and error handling.
- Crafting effective AI prompts for content simplification taught us the importance of iteration and testing. Small changes in prompts made big differences in output quality and usability.
- Managing user state and favorites pushed us to build more robust backend logic and frontend state management, showing how critical reliable data handling is for a seamless user experience.
- Above all, we learned the power of building with empathy by understanding the real struggles of learners. This helped us stay focused on creating a tool that truly supports and empowers them.
What's next for Snap Study
- Expand AI capabilities to support more languages and dialects, making learning accessible to even more students worldwide.
- Improve visual storytelling by integrating more reliable and diverse image generation tools to better illustrate concepts.
- Offer offline access so students can continue learning anytime, even without an internet connection.