Inspiration

The inspiration for AuraRecall Pro came from the "48-Hour Fade"—the frustration of spending hours highlighting a 50-page PDF only to realize you’ve forgotten most of it by the next day. We looked into the Von Restorff Effect, which suggests that the human brain prioritizes "bizarre" or "unique" information over dry, uniform data. We asked ourselves: What if we could turn boring academic facts into unforgettable sensory experiences?

What it does

AuraRecall Pro is a sensory memory engine that transforms static documents into a multimodal learning suite. By uploading a PDF or an image of notes, the app:

  • Identifies the "Aura": Detects the most complex or abstract concept using high-level reasoning.
  • Generates Surreal Mnemonics: Creates a bizarre visual analogy that is psychologically easier to remember than text.
  • Builds Logic Maps: Automatically generates Mermaid.js flowcharts to visualize the hierarchy of information.
  • Curates Playlists: Uses Google Search to find the top 5 YouTube tutorials specifically tailored to the identified concept.
  • Voices the Knowledge: Produces a "Study Hype-Man" audio script for auditory learners.
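The five outputs above can be pictured as one "study bundle" per upload. A minimal sketch of the shape — the class and field names here are illustrative assumptions, not the app's actual schema:

```python
# Illustrative only: the shape of a single AuraRecall study bundle.
# Names and example values are assumptions for this sketch, not the real schema.
from dataclasses import dataclass, field

@dataclass
class StudyBundle:
    aura: str                 # the most complex concept detected in the document
    mnemonic: str             # surreal visual analogy for the aura
    logic_map: str            # Mermaid.js flowchart source
    playlist: list = field(default_factory=list)  # curated YouTube tutorial links
    hype_script: str = ""     # "Study Hype-Man" audio narration text

bundle = StudyBundle(
    aura="Krebs cycle",
    mnemonic="A neon ferris wheel that burns sugar cubes for tickets",
    logic_map="graph TD; Glucose-->Pyruvate; Pyruvate-->AcetylCoA; AcetylCoA-->KrebsCycle;",
    playlist=["(video link placeholder)"],
    hype_script="Let's GO. The Krebs cycle is an eight-step energy relay...",
)
```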

How we built it

We leveraged the Gemini 3 Flash model for its lightning-fast multimodal reasoning.

  1. Context Processing: Used Gemini’s 1M+ token window to ingest dense PDFs and target specific user queries.
  2. Visual Reasoning: Employed native vision capabilities to "read" messy whiteboards and complex textbook diagrams.
  3. Code as Visualization: Programmed the system to output Mermaid.js syntax, allowing for dynamic, editable mind-maps.
  4. Mathematical Model: Mapped conceptual density as a function of Shannon information entropy, H = −Σᵢ pᵢ log₂ pᵢ, where pᵢ is the relative frequency of term i in the passage.
  5. Live Search Integration: Integrated the Google Search tool to bridge the gap between static files and real-time web resources.
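The entropy heuristic in step 4 can be sketched in a few lines. This is an illustrative scoring function under simplified assumptions (whitespace tokenization, no stop-word handling), not the production code:

```python
import math
from collections import Counter

def conceptual_density(text: str) -> float:
    """Score a passage by the Shannon entropy of its term distribution.

    Higher entropy means a more varied vocabulary, which this sketch uses
    as a rough proxy for conceptual density.
    """
    tokens = text.lower().split()
    if not tokens:
        return 0.0
    counts = Counter(tokens)
    total = len(tokens)
    # H = -sum(p_i * log2(p_i)) over term frequencies
    return -sum((c / total) * math.log2(c / total) for c in counts.values())
```

A highly repetitive passage scores near zero, while a passage of distinct terms scores higher — the denser, more varied sections are the candidates for the "Aura".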

Challenges we ran into

One major challenge was the PDF "Noise" Problem. Many academic papers have multi-column layouts and inline formulas that confuse standard text scrapers. We solved this by treating the PDF as a series of images, using Gemini's Vision to understand the layout spatially. We also struggled with "Analogy Accuracy"—sometimes the mnemonics were too surreal and lost the scientific meaning. We refined our System Instructions to enforce a strict "Logic Mapping" where every part of the weird visual must correspond to a factual component.
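The "Logic Mapping" constraint described above can also be enforced mechanically after generation. A minimal sketch — the JSON field names ("elements", "visual", "maps_to") are assumptions for illustration, not the real System Instruction contract:

```python
# Illustrative check: every element of the surreal visual must name the fact
# it encodes. Field names here are assumed for this sketch.

def mnemonic_is_grounded(mnemonic: dict) -> bool:
    """Return True only if each visual element maps to a factual component."""
    elements = mnemonic.get("elements", [])
    if not elements:
        return False
    return all(e.get("visual") and e.get("maps_to") for e in elements)

grounded = {
    "elements": [
        {"visual": "neon ferris wheel", "maps_to": "the cyclic nature of the Krebs cycle"},
        {"visual": "sugar-cube tickets", "maps_to": "glucose as the energy input"},
    ]
}
too_surreal = {"elements": [{"visual": "a screaming teapot"}]}  # no factual anchor
```

Rejecting ungrounded mnemonics and re-prompting was how we kept the "weird" without losing the science.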

Accomplishments that we're proud of

We are incredibly proud of the zero-friction workflow. In under 30 seconds, a student can go from a 40-page technical manual to a "Memory Palace" consisting of a visual anchor, a logical roadmap, and a curated video path. Successfully integrating Google Search to act as an automated "Librarian" was a major technical milestone that sets AuraRecall apart from standard summarizers.

What we learned

We learned that multimodality is the future of accessibility. By providing information in visual, auditory, and logical formats simultaneously, we can cater to any learning style. We also discovered that "Agentic Workflows"—where the AI plans the research and curation itself—are far more effective for deep study than simple Q&A bots.

What's next for AuraRecall

The next step is Augmented Reality (AR) Integration. We want to allow students to point their phones at a textbook and see the Mermaid.js flowcharts and surreal mnemonics float over the page in 3D. We also plan to implement Spaced Repetition (SRS), where AuraRecall automatically sends you a "vibe-check" quiz via audio one, three, and seven days after your first study session to ensure long-term memory consolidation.
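The planned 1/3/7-day review cadence is easy to sketch. The intervals mirror the schedule described above; the function name and the notification mechanism are placeholders:

```python
from datetime import date, timedelta

# Review offsets (in days) after the first study session, per the planned SRS.
REVIEW_OFFSETS = (1, 3, 7)

def review_dates(first_session: date) -> list:
    """Return the dates on which an audio 'vibe-check' quiz should fire."""
    return [first_session + timedelta(days=d) for d in REVIEW_OFFSETS]

# Example: studying on Jan 1 schedules quizzes for Jan 2, Jan 4, and Jan 8.
print(review_dates(date(2025, 1, 1)))
```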
