Noe: An Intelligent Browsing Companion

Inspiration

This project started with a simple but ambitious question: What if a browser could do more than just display web pages? I was, and still am, an avid user of the Arc browser, and I was genuinely sad when I learned of its uncertain future, so I decided to embark on a huge task. What if a browser could be an intelligent companion that actively helps you understand the world, conduct deep research, and organize your digital life seamlessly? That's the vision behind "noe," an AI-powered browser MVP designed to reimagine the browsing experience.

I was inspired by the rise of AI assistants and the sophisticated interfaces of tools like Perplexity. I wanted to create a beautiful, functional, and intelligent interface that didn't just fetch search results, but synthesized information, cited its sources, and provided a clear, comprehensive understanding of any topic.

What it does

How we built it

  1. Phase 1: The UI Foundation: The project began as a frontend-only MVP, generated by Anima. This gave me a fantastic starting point with a clean, component-based structure using React and Shadcn/UI. The initial focus was on creating the core user experience: a fluid sidebar, a dynamic content area, and a sophisticated tab manager. At this stage, the "AI Research" was simulated with mock data to perfect the UI.

  2. Phase 2: The Need for a Secure Backend: To integrate a real AI like Google Gemini, I knew I couldn't call the API directly from the frontend, as this would expose my secret API key. This led to the most critical architectural decision: building a backend.

  3. Phase 3: Choosing Serverless with Supabase: Instead of a traditional, always-on server, I chose to use Supabase Edge Functions. This serverless approach offered several key advantages:

    • Security: My GEMINI_API_KEY is securely stored as an environment variable in Supabase, completely inaccessible to the client.
    • Scalability: The function only runs when needed, making it efficient and scalable.
    • Integration: It seamlessly connected with the other Supabase services I planned to use.
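The security point above is easiest to see from the client's side: the browser only ever talks to the Edge Function, never to Gemini. The helper below is a hypothetical sketch of the request the frontend builds (in the real app, supabase-js handles this via `supabase.functions.invoke`); note that only the public anon key travels with it.

```typescript
// Hypothetical helper illustrating what the frontend sends to the Edge
// Function. Only the public Supabase anon key is attached; the secret
// GEMINI_API_KEY never leaves the function's server-side environment.
export function buildResearchRequest(
  functionsUrl: string, // e.g. https://<project-ref>.supabase.co/functions/v1
  anonKey: string,
  query: string,
): { url: string; method: string; headers: Record<string, string>; body: string } {
  return {
    url: `${functionsUrl}/gemini-deep-research`,
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${anonKey}`, // public anon key, safe to ship to browsers
    },
    body: JSON.stringify({ query }),
  };
}
```

In practice supabase-js wraps this pattern: `supabase.functions.invoke("gemini-deep-research", { body: { query } })`.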
  4. Phase 4: Bringing it all to Life: With the architecture in place, I built out the core features:

    • A Supabase Edge Function (gemini-deep-research) that receives a query, enables Gemini's web search tool to find and analyze real-time sources, and returns a synthesized summary along with the source list.
    • A full authentication system using Supabase Auth, complete with a React Context (AuthContext) to manage user sessions and profiles across the app.
    • Database tables for user profiles, preferences, search history, and saved research sessions, all protected with PostgreSQL's Row Level Security (RLS).
    • Custom React hooks (useSearchHistory, useResearchSessions) to cleanly separate data-fetching logic from the UI components.
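A trimmed-down sketch of what the `gemini-deep-research` handler might look like, assuming the documented Gemini REST shape for grounded search (the endpoint path, request body, and response parsing below are simplified assumptions, not the exact production code):

```typescript
// Sketch of the gemini-deep-research Edge Function handler. The Gemini
// request/response shapes are simplified assumptions for illustration.
const CORS = {
  "Access-Control-Allow-Origin": "*",
  "Access-Control-Allow-Headers": "authorization, x-client-info, apikey, content-type",
};

export async function handleResearch(req: Request): Promise<Response> {
  // Browsers send a preflight OPTIONS request before the real POST.
  if (req.method === "OPTIONS") return new Response("ok", { headers: CORS });
  if (req.method !== "POST") {
    return new Response("Method not allowed", { status: 405, headers: CORS });
  }

  const { query } = await req.json();
  // Server-side secret: read from the Deno environment when running on Supabase.
  const apiKey = (globalThis as any).Deno?.env?.get("GEMINI_API_KEY");

  // Ask Gemini to ground its answer with its web-search tool (shape assumed).
  const res = await fetch(
    `https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-flash:generateContent?key=${apiKey}`,
    {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        contents: [{ parts: [{ text: `Research this topic and cite sources: ${query}` }] }],
        tools: [{ google_search: {} }],
      }),
    },
  );
  const data = await res.json();
  const summary = data.candidates?.[0]?.content?.parts?.[0]?.text ?? "";
  const sources = data.candidates?.[0]?.groundingMetadata?.groundingChunks ?? [];
  return new Response(JSON.stringify({ summary, sources }), {
    headers: { ...CORS, "Content-Type": "application/json" },
  });
}

// In the deployed function this handler is passed to Deno.serve(handleResearch).
```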

The Technology Stack

  • Frontend: React, TypeScript, Vite, Tailwind CSS
  • UI Components: Shadcn/UI, Lucide React Icons
  • Drag & Drop: dnd-kit for a smooth, accessible tab management experience.
  • Backend: Supabase Edge Functions (Deno)
  • AI Engine: Google Gemini 1.5 Flash with the google_search tool for grounded, real-time results.
  • Database & Auth: Supabase (PostgreSQL, GoTrue) for user authentication, profiles, and data persistence.

Challenges we ran into

  1. Securely Integrating the AI: The initial challenge was figuring out the best way to call the Gemini API without compromising security. The solution was adopting a client-server model and using a Supabase Edge Function as the intermediary. This was a crucial learning step in building real-world applications.

  2. The Dreaded CORS Error: When first connecting the frontend (running on localhost:5173) to the local Supabase functions (running on localhost:54321), I encountered the classic TypeError: Failed to fetch. This taught me a valuable lesson in debugging cross-origin requests. The fix involved:

    • Ensuring the Edge Function correctly handled preflight OPTIONS requests.
    • Aligning the HTTP method and body between client and server: the client had to send an actual POST with a JSON body, since that is what the function expected to read.
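The preflight half of that fix boils down to a small guard at the top of the function. A minimal sketch (the helper name is illustrative, and the header list mirrors the ones Supabase's own examples allow):

```typescript
// Minimal sketch of the CORS preflight fix: answer the browser's OPTIONS
// probe with the headers it asked about, and only fall through to the real
// POST handling for everything else.
const CORS_HEADERS: Record<string, string> = {
  "Access-Control-Allow-Origin": "*", // tighten to the app's origin in production
  "Access-Control-Allow-Headers": "authorization, x-client-info, apikey, content-type",
};

// Returns a ready preflight response, or null when the request should proceed
// to the normal handler.
export function preflightResponse(method: string): Response | null {
  if (method === "OPTIONS") {
    return new Response("ok", { status: 200, headers: CORS_HEADERS });
  }
  return null;
}
```

Forgetting this guard is exactly what surfaces in the browser as the opaque `TypeError: Failed to fetch`.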
  3. Managing Complex Application State: The browser's state is non-trivial. It includes the active tab, the URL or research query for each tab, the user's authentication status, and more. Centralizing the user session with AuthContext and managing the active content through a main screen component (DivFlexScreen) proved to be an effective strategy for keeping the state predictable and manageable.
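The tab portion of that state can be modeled as a small reducer, which is roughly the shape of state DivFlexScreen works with (the type and action names below are illustrative, not the actual component's API):

```typescript
// Illustrative model of the browser's tab state: each tab carries either a
// URL or a research query, plus a record of which tab is active.
type Tab = { id: string; kind: "page" | "research"; target: string };

interface BrowserState {
  tabs: Tab[];
  activeTabId: string | null;
}

type Action =
  | { type: "OPEN_TAB"; tab: Tab }
  | { type: "CLOSE_TAB"; id: string }
  | { type: "ACTIVATE"; id: string };

export function browserReducer(state: BrowserState, action: Action): BrowserState {
  switch (action.type) {
    case "OPEN_TAB":
      // A newly opened tab becomes the active one.
      return { tabs: [...state.tabs, action.tab], activeTabId: action.tab.id };
    case "CLOSE_TAB": {
      const tabs = state.tabs.filter((t) => t.id !== action.id);
      // If the active tab was closed, fall back to the last remaining tab.
      const activeTabId =
        state.activeTabId === action.id
          ? (tabs[tabs.length - 1]?.id ?? null)
          : state.activeTabId;
      return { tabs, activeTabId };
    }
    case "ACTIVATE":
      return { ...state, activeTabId: action.id };
  }
}
```

Keeping transitions pure like this is what makes the state "predictable": every UI event maps to exactly one action.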

Accomplishments that we're proud of

What we learned

This project was an incredible learning experience that went far beyond just building a UI. I gained hands-on experience with:

  • Full-Stack Development: Designing and building both a sophisticated frontend and a secure, serverless backend.
  • Serverless Architecture: Implementing and debugging cloud functions with Supabase, and understanding the serverless paradigm.
  • AI Integration: Working directly with a powerful large language model (Google Gemini), using its advanced features like tool-use for web grounding, and parsing its structured output.
  • Database Design & Security: Writing SQL schemas, defining table relationships, and implementing Row Level Security (RLS) in PostgreSQL to ensure user data is private and secure.
  • Advanced React Patterns: Building a robust application using custom hooks, context for state management, and modern data-fetching techniques.

What's next for Noe

This MVP is just the beginning. I envision a future for "noe" that includes follow-up conversations within research sessions, integrating MCPs as a way to supercharge the browser with extensions, AI browser automation, deeper personalization based on user history (such as suggesting pages and news based on bookmarks, which I plan to integrate later), and the ability for users to save, share, and collaborate on their research. This project has built the solid foundation needed to make that vision a reality.
