Inspiration

As a father and an educator deeply involved in the ed-tech space, I've noticed a persistent gap in how we teach children with neurodiverse conditions (such as ADHD or mild autism). Standardized textbooks often fail to engage them.

I realized that these children engage deeply when learning is connected to their immediate physical reality. The "Aha!" moment came when I saw my child ignore a book about apples but spend 20 minutes examining a real apple on the table. I wanted to build a bridge between their physical world and abstract learning using the power of multimodal AI.

What it does

Lumina is an adaptive, multimodal learning companion powered by Google's Gemini Pro. It turns real-world objects into personalized, interactive learning stories.

Key features include:

  1. Snap-to-Story: A child takes a photo of anything around them (e.g., a toy car, a pet, a cloud). Lumina analyzes the image and instantly generates a unique, educational story featuring that specific object as the protagonist.
  2. Adaptive Complexity: The vocabulary and sentence structure adjust dynamically based on the child's reading level and real-time engagement.
  3. Voice Interaction: Children can talk to the characters in the story through voice, fostering speech and social skills.
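The Adaptive Complexity feature above could be sketched as a prompt-shaping step before generation. The level bands, word limits, and function names below are illustrative assumptions, not the project's actual logic:

```python
# Sketch: map a child's reading level (1-5) to prompt constraints for
# story generation. The bands and limits are illustrative assumptions.

def complexity_constraints(reading_level: int) -> dict:
    """Return vocabulary and sentence-length constraints for a 1-5 level."""
    if reading_level <= 2:
        return {"max_words_per_sentence": 8, "vocabulary": "common sight words"}
    if reading_level <= 4:
        return {"max_words_per_sentence": 14, "vocabulary": "everyday words"}
    return {"max_words_per_sentence": 20, "vocabulary": "grade-level words"}

def story_prompt(object_name: str, reading_level: int) -> str:
    """Build a generation prompt starring the photographed object."""
    c = complexity_constraints(reading_level)
    return (
        f"Write a short, gentle story starring a {object_name}. "
        f"Use {c['vocabulary']} and keep sentences under "
        f"{c['max_words_per_sentence']} words."
    )
```

In a real app the level could also be nudged up or down from engagement signals (time on page, replays) rather than set once.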

How we will build it

We plan to build Lumina using a strictly Google-native stack to ensure low latency and high integration:

  • Core AI: We will utilize Gemini 1.5 Pro via Vertex AI. Its multimodal capabilities are essential for analyzing user-uploaded images and generating context-aware narratives in a single step.
  • Backend: Google Cloud Functions (Python) for serverless orchestration.
  • Frontend: Flutter for a cross-platform (iOS/Android) experience that is accessible and visually soothing.
  • Database: Firebase Firestore to store user profiles and each child's reading-level progress, so stories stay adaptive across sessions.
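A minimal sketch of how the Snap-to-Story flow might wire together on this stack. The request-building step is plain Python; the commented call assumes the `vertexai` Python SDK, and the project ID, model name, and function names are assumptions for illustration:

```python
# Sketch of the Snap-to-Story backend flow on the planned stack.
# Building the request is pure Python; the Vertex AI call itself
# (commented below) is an untested, illustrative sketch.

def build_story_request(image_bytes: bytes, reading_level: int) -> dict:
    """Bundle the child's photo and reading level into a generation request."""
    return {
        "prompt": (
            "Identify the main object in this photo and write a short, "
            f"gentle story starring it, for reading level {reading_level}."
        ),
        "image": image_bytes,
        "mime_type": "image/jpeg",
    }

# Inside a Cloud Function, the request could be sent to Gemini via
# Vertex AI roughly like this:
#
#   import vertexai
#   from vertexai.generative_models import GenerativeModel, Part
#
#   vertexai.init(project="lumina-project", location="us-central1")
#   model = GenerativeModel("gemini-1.5-pro")
#   req = build_story_request(photo_bytes, reading_level=2)
#   response = model.generate_content(
#       [Part.from_data(req["image"], mime_type=req["mime_type"]),
#        req["prompt"]]
#   )
#   story_text = response.text
```

Keeping the request construction separate from the SDK call makes the serverless function easy to unit-test without touching the network.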

Built With

  • firebase
  • flutter
  • gemini
  • google-cloud-functions
  • python
  • vertex-ai


Updates

Private user posted an update

Excited to start our journey for the Google AI Hackathon!

We are building an AI Storyteller designed to help neurodiverse children connect with the real world. Currently, we are experimenting with Gemini 1.5 Pro and its multimodal capabilities to analyze images and generate context-aware stories.

Initial tests with Vertex AI are looking promising. Stay tuned for our MVP!
