How LiquidMind works (high level)

Goal
Help someone reason about concentrated liquidity on Solana (Meteora-style DLMM) before they treat a range as final: combine LLM-backed roles with historical stress simulation so “deploy” is reviewed, not guessed.

Inputs
The user picks a pool and describes what they want in natural language (or supplies parameters directly in a simpler path). Market context comes from pool metadata and OHLCV-style history, not from the model’s memory.
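A minimal sketch of what the structured intent extracted from chat might look like; all field names and values here are illustrative assumptions, not the project's actual schema:

```python
from dataclasses import dataclass

# Hypothetical shape of the intent the user step produces from chat.
# Field names are assumptions for illustration only.
@dataclass
class UserIntent:
    pool: str            # pool address or pair, e.g. "SOL/USDC"
    goal: str            # e.g. "maximize fees", "preserve capital"
    risk_tolerance: str  # e.g. "low" | "medium" | "high"
    horizon_days: int    # intended holding period

intent = UserIntent(pool="SOL/USDC", goal="maximize fees",
                    risk_tolerance="medium", horizon_days=30)
```

The simpler direct-parameter path would populate the same structure without the LLM step.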

Pipeline

  1. A user step turns chat into structured intent (goals and risk).
  2. A strategy step proposes concrete LP parameters: range width, mode, rebalance behavior, and plain-language rationale.
  3. An adversarial step backtests that proposal across several historical windows (volatile and calmer), producing metrics (P&L, impermanent loss, time-in-range, etc.) and a risk readout.
  4. An arbiter step approves, adjusts, or rejects the proposal using both the strategy rationale and the stress results. If rejected, the strategy step retries with explicit feedback from the arbiter and the simulations until retry limits are hit.
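The four steps above form a propose → stress-test → arbitrate loop. A sketch of that control flow, where every function name, verdict shape, and the retry limit are assumptions rather than the real API:

```python
# Illustrative orchestration of the pipeline. The strategy, adversarial,
# and arbiter steps are passed in as callables; their signatures and the
# verdict dictionary keys ("decision", "feedback", "adjusted") are
# hypothetical, as is MAX_ATTEMPTS.
MAX_ATTEMPTS = 3

def run_pipeline(intent, propose, stress_test, arbitrate):
    feedback = None
    report = None
    for _ in range(MAX_ATTEMPTS):
        proposal = propose(intent, feedback)     # strategy step
        report = stress_test(proposal)           # adversarial backtests
        verdict = arbitrate(proposal, report)    # arbiter step
        if verdict["decision"] in ("approve", "adjust"):
            return verdict.get("adjusted", proposal), report
        feedback = verdict["feedback"]           # rejection fed back to strategy
    return None, report                          # limits hit without approval
```

The key property is that rejection does not end the run: the strategy step sees the arbiter's feedback and the simulation report on its next attempt.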

Fast vs full runs
A lighter mode trades some depth (shorter history, fewer windows, less LLM work) for speed; the shape of the pipeline is the same.
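One way to picture the two modes is as two settings of the same knobs; the specific knob names and values below are assumptions, chosen only to show that depth changes while the pipeline shape does not:

```python
from dataclasses import dataclass

# Hypothetical run profiles. The exact parameters and numbers are
# illustrative; only the fast-vs-full contrast reflects the text.
@dataclass(frozen=True)
class RunProfile:
    history_days: int    # how much price history the backtests cover
    stress_windows: int  # number of historical windows simulated
    llm_passes: int      # LLM calls budgeted per pipeline step

FAST = RunProfile(history_days=30, stress_windows=2, llm_passes=1)
FULL = RunProfile(history_days=180, stress_windows=6, llm_passes=3)
```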

Deploy model
The system does not sign transactions from chat. When parameters are acceptable, the app builds the on-chain action and the user signs with their own wallet (e.g. via an embedded wallet provider). That keeps custody and execution in the user’s environment.
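The custody boundary can be sketched as follows: the backend only assembles an unsigned payload and hands it to the frontend, where the user's wallet signs. The payload fields below are illustrative, not Meteora's actual instruction layout:

```python
import base64
import json

# Hypothetical deploy builder. The server serializes the approved action
# and returns it unsigned; the private key never leaves the user's wallet.
def build_unsigned_deploy(pool: str, lower: float, upper: float) -> str:
    payload = {
        "action": "open_position",  # illustrative action name
        "pool": pool,
        "range": [lower, upper],
    }
    # Base64-encoded JSON stands in for a real serialized transaction.
    return base64.b64encode(json.dumps(payload).encode()).decode()

unsigned_tx = build_unsigned_deploy("SOL/USDC", 140.0, 180.0)
```

Because only the unsigned artifact crosses the chat/backend boundary, a compromised prompt or model output still cannot move funds on its own.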

Agentverse / other agents
The same pipeline can be started or confirmed from an external agent (e.g. a Fetch agent) that talks to the backend. The outcome (approved range, run identifier, optional nudge to deploy) is shared with the web UI, so chat-originated and dashboard-originated flows stay aligned.

After deploy
A separate execution / monitoring layer can track the live position against the approved strategy (rebalances, bounds), using live market data and RPC as the operational substrate.
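The core monitoring check is comparing a live price against the approved bounds. A sketch of that comparison; the near-edge threshold and status names are assumptions:

```python
# Illustrative position check for the monitoring layer. A real monitor
# would pull the price from live market data or RPC; the 10% edge
# threshold and the status strings are hypothetical.
def position_status(price: float, lower: float, upper: float) -> str:
    if price < lower or price > upper:
        return "out_of_range"  # candidate for rebalance per the strategy
    width = upper - lower
    if min(price - lower, upper - price) < 0.1 * width:
        return "near_edge"     # warn before the position exits its range
    return "in_range"
```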
