Inspiration
I've always been the kind of person who opens twelve browser tabs with the intention of "learning something today" and closes them all two hours later, having retained almost nothing. Apps like Blinkist and Headway showed me that format matters as much as content — compressing an idea into a focused five-minute read isn't dumbing it down, it's respecting the learner's time. But I kept hitting the same wall: the content felt generic, disconnected from what I actually cared about that week. At the same time, I was fascinated by habit-tracking apps — the way a daily streak, that simple \( \text{day}_n \geq \text{day}_{n-1} + 1 \) dopamine loop, makes showing up feel non-negotiable. And then there were social learning platforms, where interest graphs connect users to content across unexpected domains — the idea that knowing about neuroscience should eventually lead you to behavioral economics through a bridge you didn't know you needed. That collision — personalized micro-content + daily behavioral commitment + cross-domain discovery — became Synapse.
What it does
Synapse is an AI-native second brain that delivers a personalized daily queue of five Sparks — structured micro-articles generated by Amazon Nova Lite across 63 topics in 9 interest areas. Every Spark can be read in Zen mode, speed-read via RSVP (with Nova-calibrated per-word durations based on linguistic complexity), or listened to through Listening Mode powered by Amazon Polly Neural TTS. After every Spark, an Active Recall quiz forces memory consolidation. As users learn, a Neural Brain Map grows — a visual graph where nodes represent topics and edges represent cross-domain connections, mapping the user's knowledge the way cortical regions map cognition. Custom Sparks can be generated from any source — a URL, a PDF, a YouTube video — via Flash Sync, and multi-day structured study plans can be created with Deep Track.
How we built it
The backend runs entirely on AWS, deployed with SAM (AWS Serverless Application Model): 14 Lambda functions, API Gateway HTTP API with Cognito JWT auth, DynamoDB single-table design, S3 for audio and batch assets, and EventBridge for scheduled jobs. Amazon Nova Lite (amazon.nova-lite-v1:0) is the core intelligence layer — used in six distinct pipelines. The most ambitious is the weekly Batch Inference pipeline: every Sunday at 02:00 UTC, a Lambda builds a JSONL file with 2,520 prompts (63 topics × 35 focus angles + bridge pairs) and submits it as a single Bedrock Batch job, achieving a content generation cost of approximately:
$$C = N \times \bar{T}_{in} \times p_{in} + N \times \bar{T}_{out} \times p_{out} \approx \$0.75 \text{ per week}$$
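The weekly build step can be sketched roughly as follows. This is a minimal illustration, assuming the Bedrock batch-inference JSONL format (`recordId` plus a model-native `modelInput` body) and the Nova messages schema; `TOPICS` and `ANGLES` here are toy stand-ins for the real 63 × 35 catalog, and the prompt text is invented:

```python
import json

# Toy stand-ins for the real topic/angle catalog (63 topics x 35 angles).
TOPICS = ["neuroscience", "behavioral-economics"]
ANGLES = ["history", "common-misconception"]

def build_record(record_id: str, prompt: str) -> dict:
    """One JSONL line in the Bedrock batch-inference input format:
    a recordId plus the model-native request body (Nova messages schema)."""
    return {
        "recordId": record_id,
        "modelInput": {
            "messages": [{"role": "user", "content": [{"text": prompt}]}],
            "inferenceConfig": {"maxTokens": 1024, "temperature": 0.7},
        },
    }

def build_jsonl() -> str:
    """One prompt per topic x angle pair, one JSON object per line."""
    lines = []
    for topic in TOPICS:
        for angle in ANGLES:
            prompt = f"Write a Spark about {topic} from the angle: {angle}."
            lines.append(json.dumps(build_record(f"{topic}#{angle}", prompt)))
    return "\n".join(lines)

jsonl = build_jsonl()
print(len(jsonl.splitlines()))  # 4 records for this toy catalog
```

In production the resulting file is uploaded to S3 and submitted as a single job (via the Bedrock `create_model_invocation_job` API), which is what makes the per-week cost above possible.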
Daily personalization uses a TF-IDF cosine similarity between the user's free-text cognitive seed and pool spark content — no embedding API calls, no vector DB, just a lightweight ranker inside the Lambda. The Flutter frontend uses StatefulWidget + ChangeNotifier with no external state management library, and audioplayers v6 for Polly TTS playback.
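The ranker idea fits in a few dozen lines of standard library. The sketch below is a simplified stand-in, not the production Lambda — the real tokenizer and weighting may differ — but it shows why no embedding API or vector DB is needed:

```python
import math
from collections import Counter

def tfidf_rank(seed: str, docs: dict, top_k: int = 5) -> list:
    """Rank pool sparks by TF-IDF cosine similarity to the user's
    free-text cognitive seed. Pure stdlib, no external services."""
    tokenize = lambda s: [w for w in s.lower().split() if w.isalpha()]
    corpus = {**docs, "__seed__": seed}
    tf = {key: Counter(tokenize(text)) for key, text in corpus.items()}
    n = len(corpus)
    df = Counter(word for counts in tf.values() for word in counts)
    idf = {word: math.log(n / df[word]) for word in df}

    def vec(counts):
        return {w: counts[w] * idf[w] for w in counts}

    def cosine(a, b):
        dot = sum(a[w] * b.get(w, 0.0) for w in a)
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    query = vec(tf["__seed__"])
    scores = {key: cosine(query, vec(tf[key])) for key in docs}
    return sorted(docs, key=scores.get, reverse=True)[:top_k]
```

A seed like "sleep and memory consolidation" will rank a spark about memory consolidation above one about market volatility, with zero network calls at ranking time.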
Challenges we ran into
Every layer had at least one bug that taught us something. The audio pipeline was the hardest: audioplayers v6 silently ignores resume() on a player that has never started — we had to track a _hasStartedPlay flag and switch to play(UrlSource(...)) on the first invocation. The backend was returning 500 on every audio request because POLLY_VOICE_ID was set to Bianca (an Italian neural voice) while the content is English and LanguageCode was hardcoded to en-US — Polly rejected the mismatch. Pool sparks stored under POOL#{topicId} in DynamoDB were invisible to the audio Lambda, which only looked under USER#{userId} — fixed with a ?topicId= query param and a fallback lookup. On the frontend, the Profile dashboard and the full-screen Brain Map were computing stats from two different sources (raw API data vs. generated graph nodes), so the numbers never matched — we aligned both to derive stats from the same NeuralMapData.generate() output. And on Android, a ClassNotFoundException: com.synapse.app.MainActivity crash traced back to MainActivity.kt sitting in the wrong package directory (com.example.myapp instead of com.synapse.app).
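The pool-spark fallback fix can be illustrated against an in-memory stand-in for the single table (the key shapes follow the `USER#`/`POOL#` scheme described above, but the item contents and lookup helper are invented for this sketch; the real Lambda issues DynamoDB GetItem calls):

```python
# In-memory stand-in for the DynamoDB single table: {(PK, SK): item}.
TABLE = {
    ("POOL#neuroscience", "SPARK#42"): {"title": "Why sleep consolidates memory"},
    ("USER#alice", "SPARK#7"): {"title": "Loss aversion in 5 minutes"},
}

def get_spark(user_id, spark_id, topic_id=None):
    """Look under the user's partition first; if the spark is a pool
    spark, fall back to POOL#{topicId} when the caller passed ?topicId=."""
    item = TABLE.get((f"USER#{user_id}", f"SPARK#{spark_id}"))
    if item is None and topic_id:
        item = TABLE.get((f"POOL#{topic_id}", f"SPARK#{spark_id}"))
    return item
```

Without the `topicId` hint the audio Lambda has no way to guess which pool partition to read, which is exactly why the query param was needed.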
Accomplishments that we're proud of
We're proud that the Batch Inference pipeline works end-to-end autonomously — from EventBridge trigger to DynamoDB-populated pool — with no human intervention. The RSVP mode with Nova-calibrated word durations is something we haven't seen in any other micro-learning app: instead of a fixed WPM rate, each word gets a duration based on its linguistic complexity as understood by Nova, making speed-reading feel natural rather than mechanical. The Neural Brain Map — a CustomPainter visualization that maps learned topics to anatomical brain regions — gives users a genuinely memorable metaphor for their own knowledge growth. And the fact that the entire personalization system runs without embeddings, vector databases, or per-request model calls — just TF-IDF against pre-generated content — means the marginal cost per daily active user approaches zero.
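The RSVP timing model can be sketched as follows. This is an illustrative approximation, not the shipped player: the real per-word durations come from Nova at generation time, and the length-based fallback here is an invented stand-in for linguistic complexity:

```python
def rsvp_schedule(words, durations_ms=None, base_wpm=300):
    """Pair each word with a display duration. Prefer per-word durations
    (Nova-calibrated in the real app); otherwise fall back to a fixed-WPM
    baseline stretched for longer words."""
    base = 60_000 / base_wpm  # milliseconds per word at the baseline rate
    schedule = []
    for i, word in enumerate(words):
        if durations_ms and i < len(durations_ms):
            schedule.append((word, durations_ms[i]))
        else:
            # Crude complexity proxy: longer words get proportionally more time.
            schedule.append((word, int(base * max(1.0, len(word) / 5))))
    return schedule
```

The difference is visible immediately: at a fixed 300 WPM every word gets 200 ms, while calibrated durations let "the" flash by and "hippocampus" linger.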
What we learned
We learned that the distance between a working prototype and a working system is exactly where all the interesting engineering lives. Nova Lite understood structured generation tasks — title + body + RSVP word durations + quiz question in a single JSON response — almost immediately, which was genuinely surprising and let us focus on the system design rather than prompt debugging. We learned that DynamoDB single-table design requires you to think about every access pattern before you write your first item, because retrofitting a partition key later is painful. We learned that Bedrock Batch Inference is an underrated primitive: submitting 2,520 prompts as one job rather than 2,520 sequential calls isn't just cheaper, it's architecturally cleaner. And we learned, the hard way, that silent fallback behavior in UI libraries (a timer that ticks but plays no sound) is far more dangerous than an explicit crash — at least a crash tells you something is wrong.
What's next for Synapse
The immediate next step is spaced repetition scheduling — using quiz performance data to resurface Sparks at optimal intervals, turning the daily queue into a true memory system rather than just a discovery feed. We want to add voice-input Flash Sync, so users can describe a concept verbally and get a Spark back. The Neural Brain Map deserves an expanded interaction model — tapping a node should show a timeline of when that topic was learned, which bridge sparks it unlocked, and what connections were formed. On the infrastructure side, we want to move the personalization layer from TF-IDF to a lightweight embedding-based ranker using Titan Embeddings, while keeping the batch-generation cost model intact. And longer term, Synapse should be able to generate a full learning path for any goal a user describes — turning the Deep Track feature from a manual request into a proactive coaching agent.
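None of this is built yet, but as a starting point, the spaced-repetition scheduler could follow an SM-2-style update keyed off quiz scores — a sketch under assumed thresholds and bounds (the 0.6 pass mark and the ease limits are placeholder choices, not tuned values):

```python
def next_interval(prev_interval_days, ease, quiz_score):
    """SM-2-flavored update: a passing quiz grows the review interval by
    the ease factor; a failing one resets to one day and lowers the ease.
    quiz_score is in [0, 1]; thresholds here are illustrative."""
    if quiz_score < 0.6:                      # failed recall: start over
        return 1.0, max(1.3, ease - 0.2)
    ease = min(3.0, ease + 0.1 * (quiz_score - 0.6))
    return prev_interval_days * ease, ease

# A perfect quiz pushes the Spark further out; a miss pulls it back to tomorrow.
interval, ease = 1.0, 2.5
interval, ease = next_interval(interval, ease, 1.0)
```

Wiring this to the Active Recall quiz results already stored per Spark would turn the daily queue into the memory system described above.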