Tapsy — AI-Powered Microlearning

Inspiration

Okay real talk — I've tried learning from online courses more times than I can count. I open a module, it's a 45-minute video, and 10 minutes in I'm already on Instagram. It's not that I don't want to learn; the format just doesn't work for how people actually consume content anymore.

At the same time, I looked at it from the creator side. If you want to teach something today, your options are either spending days building a full course on Udemy, or just making a thread on X and hoping people read it. There's nothing in between that's actually good.

That's when the idea clicked — what if making a course felt as easy as sending a voice note, and learning it felt like scrolling through Reels? That's what I set out to build.


What it does

Tapsy lets you create a complete interactive course by just chatting with an AI. No writing, no design, no suffering.

You answer 6 questions — what you want to teach, who it's for, what the goal is. Tapsy handles the rest:

  • Structures your course into modules automatically
  • Generates all the cards — info, quizzes, polls, embedded videos
  • Fetches relevant cover images from Unsplash for each card
  • Creates a full podcast episode with two AI voices (a host and a guest) having an actual discussion about your topic

On the learner side, there's no sign-up. Just a link. Learners swipe through cards in a mobile-first format, answer questions, and get a personalised experience — the course literally rewrites itself based on what they already know.

And when they're done, they get a learning report showing how much they improved, and a certificate if they scored above 70%.


How I built it

The frontend is React + TypeScript with Framer Motion for animations — I wanted the card transitions to feel smooth, not janky.

For the backend I went all-in on Appwrite — it handled auth, database, file storage, and I deployed the whole thing on Appwrite Sites. No separate server, no DevOps headache.

The interesting part is the AI layer. I ended up using 4 different Gemini models because not every task needs the same model:

  • Course blueprint & module structure → gemini-3.1-pro-preview (needs deep reasoning)
  • Card content generation → gemini-2.5-flash (needs speed at scale)
  • Adaptive rewriting per learner → gemini-3-flash-preview (latency-sensitive)
  • Podcast audio (two voices) → gemini-2.5-flash-preview-tts
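The routing itself is just a lookup. A minimal sketch of how that mapping could live in code (the task names here are my own labels, not anything from an SDK):

```typescript
// Task → model routing. The model IDs are the preview identifiers
// listed above and may change as Google rotates releases.
const MODEL_FOR_TASK = {
  blueprint: "gemini-3.1-pro-preview",        // deep reasoning
  cardContent: "gemini-2.5-flash",            // speed at scale
  adaptiveRewrite: "gemini-3-flash-preview",  // latency-sensitive
  podcastTts: "gemini-2.5-flash-preview-tts", // two-voice audio
} as const;

type Task = keyof typeof MODEL_FOR_TASK;

function modelFor(task: Task): string {
  return MODEL_FOR_TASK[task];
}
```

Keeping this in one place means swapping a model for one task never touches the others.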

The podcast pipeline was probably the most fun/chaotic part. Gemini generates a script, then I call the TTS model twice — once for the host voice (Puck) and once for the guest voice (Charon) — convert each from PCM16 to WAV, merge the buffers, and upload the final file to Appwrite Storage. Learners just press play.
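The audio-assembly step looks roughly like this. It's a sketch, not the exact production code: I'm assuming mono PCM16 at 24 kHz (what Gemini TTS returned in my setup), and I concatenate the raw PCM before wrapping it in a single 44-byte WAV header, which is simpler than stitching two finished WAV files:

```typescript
// Wrap raw little-endian PCM16 samples in a minimal 44-byte WAV header.
function pcm16ToWav(pcm: Buffer, sampleRate = 24000, channels = 1): Buffer {
  const header = Buffer.alloc(44);
  header.write("RIFF", 0);
  header.writeUInt32LE(36 + pcm.length, 4); // RIFF chunk size
  header.write("WAVE", 8);
  header.write("fmt ", 12);
  header.writeUInt32LE(16, 16);                          // fmt chunk size
  header.writeUInt16LE(1, 20);                           // audio format: PCM
  header.writeUInt16LE(channels, 22);
  header.writeUInt32LE(sampleRate, 24);
  header.writeUInt32LE(sampleRate * channels * 2, 28);   // byte rate
  header.writeUInt16LE(channels * 2, 32);                // block align
  header.writeUInt16LE(16, 34);                          // bits per sample
  header.write("data", 36);
  header.writeUInt32LE(pcm.length, 40);                  // data chunk size
  return Buffer.concat([header, pcm]);
}

// Concatenate host + guest PCM, then wrap once.
function mergePcmToWav(hostPcm: Buffer, guestPcm: Buffer): Buffer {
  return pcm16ToWav(Buffer.concat([hostPcm, guestPcm]));
}
```

The resulting buffer is what gets uploaded to Appwrite Storage as the finished episode.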


Challenges I ran into

The module ID nightmare

Gemini is great at generating course structure but it invents its own IDs (m1, m2, m3). Appwrite generates completely different IDs when you actually save the modules to the database. So every single card was referencing module IDs that didn't exist.

I fixed this by building a post-save ID remapping step — after saving modules to Appwrite, I create a lookup map from Gemini's temp IDs to Appwrite's real IDs, then remap every card's moduleId before saving them.
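The remapping step, as a minimal sketch (the field names `tempId` and `$id` mirror Appwrite's document shape; the rest is illustrative):

```typescript
// Cards as Gemini drafts them, pointing at temp module IDs ("m1", "m2"…).
type DraftCard = { moduleId: string; title: string };
// Modules after saving: Appwrite's real $id, plus the temp ID we kept around.
type SavedModule = { $id: string; tempId: string };

function remapCardModuleIds(cards: DraftCard[], saved: SavedModule[]): DraftCard[] {
  // Lookup from Gemini's temp ID → Appwrite's real document ID.
  const idMap = new Map(saved.map((m) => [m.tempId, m.$id]));
  return cards.map((card) => {
    const realId = idMap.get(card.moduleId);
    // Fail loudly instead of silently saving a dangling reference.
    if (!realId) throw new Error(`No saved module for temp ID ${card.moduleId}`);
    return { ...card, moduleId: realId };
  });
}
```

Throwing on a missing mapping was deliberate — a dangling moduleId is exactly the bug this step exists to prevent.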

Adaptive rewrite losing card metadata

When Gemini rewrites cards for a specific learner, the output is just plain objects — no id, no courseId, no moduleId. If I used the AI output directly, I'd lose all the Appwrite metadata and break analytics.

Solution: merge AI output with the original cards using a cardId-keyed map, always spreading original fields first so metadata is preserved.
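That merge can be sketched like this — the card shape is simplified, but the key move is spreading the original card first so `$id`, `courseId`, and `moduleId` always survive:

```typescript
// Stored card with Appwrite metadata.
type Card = { $id: string; courseId: string; moduleId: string; body: string };
// What the rewrite model returns: content only, keyed by cardId.
type RewrittenCard = { cardId: string; body: string };

function mergeRewrites(originals: Card[], rewrites: RewrittenCard[]): Card[] {
  const byId = new Map(rewrites.map((r) => [r.cardId, r]));
  return originals.map((card) => {
    const rewrite = byId.get(card.$id);
    // Original fields first, so metadata is preserved; only content is replaced.
    return rewrite ? { ...card, body: rewrite.body } : card;
  });
}
```

Cards the model didn't touch pass through unchanged, which also covers partial rewrites.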

Learner sessions duplicating on every visit

The first version created a new session document every time someone opened the course. So returning learners had 5 sessions, analytics were broken, and resume-from-where-you-left-off didn't work.

Switched to a composite key (email_courseId) and a proper upsert pattern — check if a session exists for this key, update it if yes, create only if new.

Poll votes getting overwritten

Appwrite stores poll options as a JSON string. I was reading the string, updating it in memory, and writing back — but under concurrent users, votes were getting lost. Fixed it with a strict parse → update single index → re-stringify → updateDocument flow with rollback on error.


What I learned

Before this I'd call APIs and move on. This project forced me to actually think about system design — which model for which task, how data flows between AI output and database storage, how to build features that work correctly at scale with multiple users hitting the same data.

I also learned that the most satisfying bugs to fix are the ones that only show up in production with real users. Nothing like seeing your poll votes disappear live to make you write better code.

And honestly? Building something end-to-end solo — from database schema to AI pipeline to a deployed product — is a completely different feeling from following tutorials. 10/10 would stress myself out again.


What's next

  • Course marketplace so learners can discover courses publicly
  • Teams feature for collaborative course creation
  • LMS integrations (Moodle, Canvas) for institutional use
