JL Intelligence: A Gemini-Powered Institutional Research Agent

1. Inspiration: Bridging the "State" Gap

As a serial founder ($6M exit) and a National Chess Athlete, I view financial research as a high-stakes game of strategy. While working on projects for institutional clients like ChinaAMC, I realized that standard AI often fails at the "stateful reasoning" required for professional analysis. It treats every query as a new start, losing the thread of complex financial logic.

I was inspired to build JL Intelligence to prove that we can move beyond simple chatbots. I wanted to create a resilient, institutional-grade "Analyst-in-the-Loop" that uses Gemini 3 to maintain deep context across 100-page documents, identifying "Alpha" and compliance risks that others miss.

2. How I Built It: The Technical Stack

The project is a full-stack implementation designed for production-level stability:

  • Intelligence Layer: Built using the google-genai SDK (≥ 1.55.0) to leverage the Gemini 3 Interactions API. This allows for a stateful session where the model "remembers" the document structure and previous audit steps.
  • Backend: A Python-based FastAPI server that manages the inference routing and document parsing.
  • Frontend: A React and Tailwind CSS dashboard designed for high-density financial data, featuring a real-time status monitor.
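The core idea of the intelligence layer is that every query carries the full conversation history rather than starting cold. Here is a minimal, SDK-agnostic sketch of that pattern; the `StatefulSession` class and `echo_model` stand-in are illustrative (the real stack would plug the google-genai chat interface in as `model_fn`):

```python
# SDK-agnostic sketch of a stateful session: the session keeps the running
# message history, so each new query is answered with full prior context.
from dataclasses import dataclass, field


@dataclass
class StatefulSession:
    history: list = field(default_factory=list)  # (role, text) turns so far

    def send(self, user_text: str, model_fn) -> str:
        # Every call sees the whole history, not just the latest query.
        self.history.append(("user", user_text))
        reply = model_fn(self.history)
        self.history.append(("model", reply))
        return reply


# Stand-in "model" that just reports how much context it was handed.
def echo_model(history):
    return f"seen {len(history)} turn(s)"


session = StatefulSession()
session.send("Summarize section 1 of the filing.", echo_model)
print(session.send("Now flag compliance risks in it.", echo_model))
# → seen 3 turn(s)  (the second call carries the first exchange as context)
```

A stateless setup would report "seen 1 turn(s)" on every call; the growing count is exactly the "memory" the document-audit workflow depends on.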

3. Challenges: The 512MB Memory Battle

The most significant challenge was engineering for extreme resource constraints. My deployment environment had a strict 512MB RAM cap. In standard builds, processing large financial PDFs results in memory usage $M_{u}$ where:

$$M_{u} \approx (\text{PDF size} \times C) + \Omega > 512\ \text{MB}$$

Where $C$ is the parsing buffer and $\Omega$ is the model overhead. To prevent Out-of-Memory (OOM) crashes, I implemented:
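A quick back-of-envelope check makes the problem concrete. The constants below are illustrative assumptions, not measured values from the project:

```python
# Plugging assumed values into M_u ≈ (PDF size × C) + Ω from above.
pdf_size_mb = 40   # a large, 100-page financial PDF (assumed)
C = 8              # parsing buffer multiplier (assumed)
omega_mb = 250     # model/runtime overhead Ω in MB (assumed)

m_u = pdf_size_mb * C + omega_mb
print(m_u, m_u > 512)
# → 570 True: a naive build blows past the 512MB cap on a single document
```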

  • Explicit Memory Management: Forcing manual garbage collection cycles using gc.collect() after every inference step.
  • Dual-Path Fallback: A logic gate that hot-swaps to a legacy inference path in under 100 ms if the primary Interactions route fails.
  • Diagnostic Heartbeat: A React status monitor that pings the backend every 12 seconds to manage server cold-starts and provide visual feedback to the user.
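The first two mitigations can be sketched in a few lines. The route functions below are stand-ins, not the project's actual inference code; the point is the shape of the control flow:

```python
# Sketch: every inference step ends with an explicit gc.collect(), and a
# failed primary call falls through to a legacy path.
import gc


def primary_route(doc_chunk: str) -> str:
    raise RuntimeError("Interactions route unavailable")  # simulated outage


def legacy_route(doc_chunk: str) -> str:
    return f"legacy result ({len(doc_chunk)} chars analyzed)"


def run_inference_step(doc_chunk: str) -> str:
    try:
        return primary_route(doc_chunk)
    except Exception:
        # Hot-swap: the switch is just an exception handler, so it
        # completes well inside a sub-100 ms budget.
        return legacy_route(doc_chunk)
    finally:
        gc.collect()  # release parsing buffers before the next step


print(run_inference_step("sample chunk"))
# → legacy result (12 chars analyzed)
```

Calling `gc.collect()` in the `finally` block means buffers are reclaimed on both the happy path and the fallback path, which is what keeps peak usage under the 512MB cap across many steps.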

4. What I Learned: Architecture over Prompts

This project taught me that the future of AI is "Architectural Engineering," not just prompt engineering. For an AI to be trusted by a VP of Product or a Quant Trader, it must be resilient. I learned how to turn a static document into a dynamic reasoning environment using Gemini 3's stateful capabilities, proving that AI can handle the "dirty work" of memory management and compliance while delivering elite-level insights.

5. Conclusion

JL Intelligence represents the intersection of Ivy League technical rigor and the startup grit required to build for the real world. It proves that Gemini 3 is ready for the most demanding environments in global finance.

Built With

  • FastAPI (backend)
  • GitHub (version control)
  • Google Gemini 3 (via google-genai SDK)
  • Google Search Grounding
  • Interactions API (for stateful reasoning)
  • JavaScript (JSX)
  • lucide-react
  • Netlify (frontend hosting)
  • Node.js
  • PDF parsing libraries
  • Python
  • Python garbage collector (gc, for memory optimization)
  • React (frontend)
  • Render (backend deployment)
  • Tailwind CSS (styling)
  • Vite