Video: https://dastageer-siddiqui.short.gy/maximally_video_sub

What it does

  • Pulls ~160 short public snippets (Reddit, Hacker News, RSS) via app.sources.collect_snippets
  • Runs NLP + emotion analysis to pick a mood + color + sound + texture via app.mood_engine.synthesize
  • Serves a UI that shows the current vibe and animates it (Three.js + Web Audio) in index.html and app.js
  • Saves the latest snapshot + a daily history using app.storage.Storage into current.json and history.json
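
The snapshot/history step above could look roughly like this minimal sketch (the real `app.storage.Storage` API and field names are assumptions; only the two file names come from the project):

```python
import json
from datetime import date
from pathlib import Path

class Storage:
    """Sketch of the snapshot store: current.json holds the latest vibe,
    history.json keeps one entry per day (field names assumed)."""

    def __init__(self, root="."):
        self.current_path = Path(root) / "current.json"
        self.history_path = Path(root) / "history.json"

    def save(self, snapshot: dict) -> None:
        # Overwrite the latest snapshot for the UI to poll.
        self.current_path.write_text(json.dumps(snapshot, indent=2))
        # Upsert today's entry in the daily history.
        history = {}
        if self.history_path.exists():
            history = json.loads(self.history_path.read_text())
        history[date.today().isoformat()] = snapshot
        self.history_path.write_text(json.dumps(history, indent=2))
```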

How we built it

  • Backend: FastAPI app
  • Sources: Reddit JSON + HN Algolia + RSS parsing in sources.py
  • Mood engine: Transformer emotion model + spaCy entities + TF‑IDF clustering in mood_engine.py
  • Frontend: Glitch UI + animated 3D sphere + procedural audio in app.js and styles.css
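
The RSS leg of the source collection could be sketched like this with only the standard library (function name, snippet shape, and the 40-item limit are our assumptions, not necessarily what sources.py does):

```python
import xml.etree.ElementTree as ET

def parse_rss(xml_text: str, source: str, limit: int = 40) -> list[dict]:
    """Sketch: pull RSS item titles into the same snippet shape the
    Reddit/HN legs would produce (field names assumed)."""
    root = ET.fromstring(xml_text)
    snippets = []
    for item in root.iter("item"):
        title = item.findtext("title", default="").strip()
        if title:  # skip items with empty titles
            snippets.append({"text": title, "source": source})
        if len(snippets) >= limit:
            break
    return snippets
```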

Challenges we ran into

  • Model weight and performance: first-run model downloads plus inference cost for the Transformers + torch stack pinned in requirements.txt
  • Noisy, variable sources: RSS formats change without warning, and duplicated headlines forced us to add dedupe and per-source capping
  • Keeping the app resilient: refresh failures shouldn’t break the UI
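
The dedupe/capping fix could be as simple as this sketch (the cap value and snippet keys are assumptions):

```python
def dedupe_and_cap(snippets: list[dict], per_source_cap: int = 40) -> list[dict]:
    """Sketch: drop duplicate headlines (case/whitespace-insensitive)
    and cap how many snippets any one source contributes."""
    seen = set()
    counts = {}
    out = []
    for s in snippets:
        key = s["text"].lower().strip()
        src = s["source"]
        if key in seen or counts.get(src, 0) >= per_source_cap:
            continue
        seen.add(key)
        counts[src] = counts.get(src, 0) + 1
        out.append(s)
    return out
```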

Accomplishments that we're proud of

  • We analyze all collected snippets (not a tiny keyword list) using emotion + aggregation.
  • Clear debug trail saved in scrape_latest.json.
  • A polished “vibe” UI (3D + sound) that still stays minimal and readable.
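
The emotion aggregation mentioned above could be sketched like this: average each emotion label's score over every snippet and pick the strongest (the score shape is assumed; in the real app the per-snippet scores come from the transformer emotion classifier):

```python
from collections import defaultdict

def aggregate_emotions(per_snippet_scores: list[dict]) -> str:
    """Sketch: sum each emotion's score across all snippets and
    return the dominant label as the overall mood."""
    totals = defaultdict(float)
    for scores in per_snippet_scores:
        for label, score in scores.items():
            totals[label] += score
    return max(totals, key=totals.get)
```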

What we learned

  • Practical NLP plumbing: spaCy entities/topics + transformer emotion classification working together
  • Lightweight clustering and a “topic concentration” signal built from TF‑IDF
  • Frontend experience design: small visuals + audio can communicate “state” better than raw numbers.
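
One way to read "topic concentration" from TF‑IDF is the top term's share of total weight, as in this stdlib-only sketch (the exact formula here is ours, not necessarily the project's):

```python
import math
from collections import Counter

def topic_concentration(docs: list[str]) -> float:
    """Sketch: weight each term by tf * a smoothed idf across the corpus
    and return the top term's share of total weight; values near 1.0
    mean one topic dominates the feed."""
    tokenized = [doc.lower().split() for doc in docs]
    df = Counter(term for doc in tokenized for term in set(doc))
    n = len(tokenized)
    weights = Counter()
    for doc in tokenized:
        for term, count in Counter(doc).items():
            # +1 inside the log keeps corpus-wide terms from zeroing out.
            weights[term] += count * math.log(n / df[term] + 1)
    total = sum(weights.values()) or 1.0
    return max(weights.values()) / total
```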

What's next for internet-vibes

  • Add real widget content (trend pulse / keyword cloud) using the existing verbose debug payload
  • Speed-ups: batching emotion inference + smarter caching.
  • More sources + reliability (timeouts/retries, per-source health).
  • Deploy + scheduled refresh as a service (so it’s always “listening”).
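
The batching speed-up could start with a helper like this (batch size is an assumption; each chunk would become one forward pass through the emotion model instead of per-snippet calls):

```python
def batched(items: list, batch_size: int = 32):
    """Sketch: yield fixed-size chunks so the emotion model scores
    batch_size snippets per inference call."""
    for i in range(0, len(items), batch_size):
        yield items[i : i + batch_size]
```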
