Inspiration
The idea for pulpa.work was born from a personal need: to find a better way to engage in self-reflection. Traditional journaling felt static, and fleeting thoughts were often lost. I envisioned a tool that wasn't just a passive notebook, but an active partner in introspection. Inspired by Stoic philosophy, I imagined an empathetic AI interviewer that could help me converse with my own thoughts, to find the "pulpa"—the true essence and meaning—hidden within them.
How It Was Built
This project was built entirely within the Bolt.new cloud development environment, following an ambitious three-phase roadmap.
Phase 1 & 2: Building the Foundation & Memory
First, I developed the core conversational loop using React, Supabase Edge Functions, and a suite of AI APIs: Google STT for transcription, Google Gemini for AI reasoning, and ElevenLabs for voice. A key architectural decision was building an asynchronous transcription pipeline using Google Cloud Storage to handle long audio without timeouts. We then built the "Memory Lane," a full-featured interface to a PostgreSQL database, allowing users to save, review, and search every conversation. We also refactored state management onto Zustand to handle the application's growing complexity.
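The two-step asynchronous transcription flow can be sketched roughly as follows. This is an illustrative sketch only: the function names and the in-memory job store are stand-ins for the real Google Cloud long-running operation and its Cloud Storage URI, not the actual implementation.

```typescript
type JobStatus = { done: boolean; transcript?: string };

// Simulated job store standing in for the cloud STT service's
// long-running operations (illustrative only).
const jobs = new Map<string, { finishAt: number; transcript: string }>();

// Step 1: kick off a long-running transcription job and return immediately,
// so no single request has to outlive the audio's processing time.
function startTranscription(audioUri: string): string {
  const jobId = `job-${jobs.size + 1}`;
  jobs.set(jobId, {
    finishAt: Date.now() + 50, // simulate processing latency
    transcript: `transcript of ${audioUri}`,
  });
  return jobId;
}

// Step 2: poll for the result until the job completes.
function pollTranscription(jobId: string): JobStatus {
  const job = jobs.get(jobId);
  if (!job) throw new Error(`unknown job: ${jobId}`);
  return Date.now() >= job.finishAt
    ? { done: true, transcript: job.transcript }
    : { done: false };
}

async function transcribe(audioUri: string): Promise<string> {
  const jobId = startTranscription(audioUri);
  while (true) {
    const status = pollTranscription(jobId);
    if (status.done) return status.transcript!;
    await new Promise((resolve) => setTimeout(resolve, 10)); // back off between polls
  }
}
```

Splitting "start" from "poll" is what lets each serverless invocation stay well under its timeout, regardless of how long the audio is.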
Phase 3: Activating Insights
The final phase brought the project to life. We enabled the pgvector extension in Supabase and built a complete backend system to automatically generate vector embeddings for every message. On top of this, we created a semantic-search API endpoint, allowing the user to ask natural language questions about their own past thoughts and receive conceptually relevant results, effectively creating a dialogue with their own memory.
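Conceptually, the semantic-search endpoint ranks stored message embeddings by similarity to the query's embedding. In the real system this ranking happens inside Postgres via pgvector's distance operators; the sketch below only illustrates the idea in TypeScript, with hypothetical types and toy two-dimensional vectors.

```typescript
type StoredMessage = { id: number; text: string; embedding: number[] };

// Cosine similarity between two equal-length vectors.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Return the k stored messages most similar to the query embedding,
// mirroring what an ORDER BY embedding <=> query LIMIT k does in pgvector.
function semanticSearch(
  queryEmbedding: number[],
  rows: StoredMessage[],
  k = 3,
): StoredMessage[] {
  return [...rows]
    .sort(
      (x, y) =>
        cosineSimilarity(queryEmbedding, y.embedding) -
        cosineSimilarity(queryEmbedding, x.embedding),
    )
    .slice(0, k);
}
```

Because the comparison is between embedding vectors rather than keywords, a question phrased in entirely new words can still surface conceptually related past thoughts.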
Challenges Faced
The journey was filled with complex technical challenges.
- Asynchronous Processing: The initial design suffered from timeouts on long audio. This forced a complete re-architecture of the transcription service into a robust, two-step asynchronous flow.
- Database Schema Mismatches: We faced persistent `400 Bad Request` errors that were incredibly difficult to debug. Through methodical logging and analysis, we discovered that our local schema was out of sync with the deployed database, which we resolved by adopting a more direct SQL execution method.
- CORS Errors: As new serverless functions were added, we had to systematically debug and implement a standard CORS handling policy across the entire backend so the frontend could communicate securely.
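A shared CORS policy like the one described can be sketched as a small wrapper around a request handler. This is a minimal sketch, not the project's actual code: the header values (in particular the wildcard origin) are assumptions, and a real deployment would likely restrict the allowed origin.

```typescript
// Headers applied uniformly across every function (values are assumptions).
const corsHeaders: Record<string, string> = {
  "Access-Control-Allow-Origin": "*",
  "Access-Control-Allow-Headers": "authorization, content-type",
  "Access-Control-Allow-Methods": "GET, POST, OPTIONS",
};

// Wrap a handler so every function answers preflight requests and
// stamps the CORS headers onto its real responses.
function withCors(handler: (req: Request) => Promise<Response>) {
  return async (req: Request): Promise<Response> => {
    // Answer the browser's OPTIONS preflight before reaching the handler.
    if (req.method === "OPTIONS") {
      return new Response(null, { status: 204, headers: corsHeaders });
    }
    const res = await handler(req);
    const headers = new Headers(res.headers);
    for (const [key, value] of Object.entries(corsHeaders)) {
      headers.set(key, value);
    }
    return new Response(res.body, { status: res.status, headers });
  };
}
```

Centralizing the policy in one wrapper means a new function can't ship without CORS handling, which is exactly the class of bug this fixed.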
What I Learned
This hackathon was an incredible learning experience. I learned how to architect a full-stack, AI-native application from the ground up. I gained deep experience in serverless functions, asynchronous job handling, and the practical application of vector databases for semantic search. Most importantly, I learned the value of a methodical, iterative development process: building, testing, hitting a wall, analyzing the problem, and building again, better. pulpa.work is the result of that journey.
Built With
- bolt.new
- deno
- elevenlabs-api
- google-cloud-speech-to-text-api
- google-gemini-api
- pgvector
- postgresql
- react
- supabase
- supabase-edge-functions
- tailwindcss
- typescript
- vite
- zustand