Reduce AI guesswork.
Start building with structure.

Pipes is an MCP server that enforces Plan → Act → Verify workflows for any AI coding agent. It generates the system prompt for each phase automatically, so you never have to write one yourself.

Core Concepts

Pipes works by storing pre-configured workflows in a backend database. When you start a task, the MCP server identifies the right workflow, loads its instructions, and automatically generates the AI system prompt at every phase — so the agent always knows exactly what to do and what not to do.

🔀

Pipelines

Pre-configured workflows stored in Postgres. Each one defines a sequence of phases for a specific task type (e.g., New Feature, Debugging).

📊

Phases

Ordered stages within a pipeline. Each phase has exit conditions that must be satisfied before the AI can advance. Planning phases block code edits entirely.

🧩

Fragments

Atomic instruction sets (e.g., Identify User Intent, Assess Blast Radius) injected into the AI system prompt. Each fragment has instructions, constraints, expected outputs, and failure modes.

The Orchestrator

The backend reads the current phase, loads its fragments, and instructs an LLM to write a tailored system prompt. You never craft these prompts yourself — the Orchestrator does it automatically, every time.
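The three concepts above nest cleanly: a pipeline is an ordered list of phases, and each phase carries its fragments and exit conditions. A minimal sketch of that hierarchy in TypeScript (the real Postgres schema is not public, so these field names are illustrative, not the actual data model):

```typescript
// Hypothetical shapes for Pipes' core concepts. Field names are
// illustrative; the actual backend schema may differ.
interface Fragment {
  name: string;             // e.g. "Identify User Intent"
  instructions: string;
  constraints: string[];
  expectedOutputs: string[];
  failureModes: string[];
}

interface Phase {
  name: string;
  fragments: Fragment[];
  exitConditions: string[]; // all must be satisfied before advancing
  allowsCodeEdits: boolean; // planning phases block edits entirely
}

interface Pipeline {
  name: string;             // e.g. "New Feature Implementation"
  phases: Phase[];          // ordered stages
}

// A trimmed-down pipeline mirroring the walkthrough further down.
const newFeature: Pipeline = {
  name: "New Feature Implementation",
  phases: [
    { name: "Framing",  fragments: [], exitConditions: ["success criteria confirmed"], allowsCodeEdits: false },
    { name: "Context",  fragments: [], exitConditions: ["blast radius assessed"],      allowsCodeEdits: false },
    { name: "Design",   fragments: [], exitConditions: ["architecture approved"],      allowsCodeEdits: false },
    { name: "Planning", fragments: [], exitConditions: ["plan accepted by user"],      allowsCodeEdits: false },
  ],
};
```

The Orchestrator only ever needs the current phase's fragments to build a prompt, which is why each phase carries its own instruction set rather than sharing a global one.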

How It Works

Every conversation follows the same loop. These MCP tools orchestrate the AI automatically — the agent calls them behind the scenes while you just chat normally.

Step 1

create_aisession

Creates a stateful session and auto-generates a unique conversation ID.

Step 2

identify_pipeline

Sends the user's task description to a classifier that selects the best pipeline. Prevents mismatched workflows.

Step 3

update_aisession

Stores the confirmed pipeline in the session. Required before the Orchestrator can generate phase-specific prompts.

Step 4

generate_metaprompt

The Orchestrator. Loads the current phase's Fragments, builds a pipeline roadmap, and outputs a complete system prompt tailored to the conversation.

Step 5

Do the work

AI follows the generated prompt. Planning phases = no code. Execution phases = code allowed. Use get_pipeline_state to check constraints at any time.

Step 6

advance_phase

Transitions to the next phase, verifies exit conditions, and automatically returns the new metaprompt. Repeat from Step 5.
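The six steps above form a simple state machine. Here is a minimal in-memory sketch of that loop; the function names match the MCP tools, but the session and phase logic is an illustrative stand-in, not the real backend:

```typescript
// Illustrative stand-in for the Pipes orchestration loop. The real
// tools run server-side; this only mirrors their control flow.
type Session = { id: string; pipeline?: string; phaseIndex: number };

const PHASES = ["Framing", "Context", "Design", "Planning"];
let nextId = 0;

// Step 1: create a stateful session with a unique conversation ID.
function create_aisession(): Session {
  return { id: `conv-${++nextId}`, phaseIndex: 0 };
}

// Step 2: classifier stand-in (the real one is an LLM-backed service).
function identify_pipeline(task: string): string {
  return /bug|error|crash/i.test(task) ? "Debugging" : "New Feature Implementation";
}

// Step 3: store the confirmed pipeline in the session.
function update_aisession(s: Session, pipeline: string): void {
  s.pipeline = pipeline;
}

// Step 4: the Orchestrator builds a phase-specific system prompt.
function generate_metaprompt(s: Session): string {
  if (!s.pipeline) throw new Error("pipeline must be confirmed first");
  return `Pipeline: ${s.pipeline}. Phase ${s.phaseIndex + 1}/${PHASES.length}: ${PHASES[s.phaseIndex]}.`;
}

// Step 6: verify exit conditions, advance, return the new metaprompt.
function advance_phase(s: Session, exitConditionsMet: boolean): string {
  if (!exitConditionsMet) throw new Error("exit conditions not satisfied");
  s.phaseIndex++;
  return generate_metaprompt(s);
}

// Steps 1–4 in order:
const session = create_aisession();
update_aisession(session, identify_pipeline("I need a dashboard to track API usage."));
const metaprompt = generate_metaprompt(session);
```

Note the ordering constraint the sketch preserves: `generate_metaprompt` refuses to run before `update_aisession` has stored a pipeline, and `advance_phase` refuses to move on while exit conditions are unmet.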

The Workflow in Action

A real conversation using the New Feature Implementation pipeline. Notice how the Orchestrator injects different Fragments at each phase, changing the AI's behavior automatically.

Phase 1 of 4

Framing — Intent & Constraints

Fragments: Identify User Intent · Validate User Intent · Gather Requirements · Identify Success Criteria · Define Guardrails

U
I need a dashboard to track API usage.
Orchestrator injects Framing Fragments: "Validate intent. Do NOT propose solutions or write code. Gather missing requirements. Define success criteria before proceeding."
AI
Before I start anything, I need to understand the scope:

1. Who is the primary user? Admin team? End users?
2. What metrics are must-haves vs nice-to-haves?
3. Is there an existing database table for this data?
4. Any latency or freshness requirements?

I cannot proceed to design until these are answered.
U
Admin team only. Must show Daily Requests and Error Rate. Data lives in the access_logs table. Real-time isn't needed — daily aggregation is fine.
AI
Great. Here are the confirmed success criteria:

✅ Admin-only dashboard showing Daily Requests and Error Rate
✅ Data source: access_logs table
✅ Daily aggregation (no real-time)

Guardrails: No schema migrations in v1. No public-facing access.

Framing complete. Calling advance_phase
Phase 2 of 4

Context — System Analysis

Fragments: Read Codebase · Identify Codebase Stack · Read Project Guidelines · Identify System Boundaries · Identify Affected Components · Assess Blast Radius

advance_phase auto-generates new prompt → Context Fragments: "Read the codebase. Map system boundaries. Identify every affected component. Do NOT write or modify code."
🔧
Agent calls grep_search("access_logs")
Agent calls view_file("src/services/logs.ts")
Agent calls view_file("src/routes/index.tsx")
AI
System Context Report:

Stack: React 19 + Vite + Tailwind + Drizzle ORM + Postgres

Affected Components:
- src/services/logs.ts — has LogAggregator class, needs new getDailyStats() method
- src/routes/index.tsx — needs new /dashboard route
- src/db/schema.ts — no changes needed, access_logs table already exists

Blast Radius: Low. No shared components affected. New route is additive.

Context complete. Advancing to Design…
Phase 3 of 4

Design — Solution Architecture

Fragments: Generate Solution Hypothesis · Select Solution Hypothesis · Define Solution Architecture · Identify Impacted Files

New prompt injected → Design Fragments: "Generate ≥2 solution hypotheses. Select one with justification. Define architecture. Still NO code edits."
AI
Hypothesis A: Add a recharts-based chart component.
Hypothesis B: Server-side rendered HTML table with sparklines.

Selected: A — Recharts provides better interactivity for admin users and is already used in the org.

Architecture:
1. New UsageChart.tsx component
2. New getDailyStats() in LogAggregator
3. New /dashboard route

Do you approve? I can refine before moving to Planning.
U
Approved. Use recharts.
Phase 4 of 4

Planning — Implementation Steps

Fragments: Define Implementation Steps · Define Test Strategy · Assess Risks · Estimate Effort · Confirm Acceptance Criteria · Prepare Verification Report

New prompt → Planning Fragments: "Define step-by-step implementation plan. Write test strategy. Assess risks. Estimate effort. Get user acceptance."
AI
Implementation Plan:
1. npm install recharts
2. Add getDailyStats() to LogAggregator
3. Create src/components/UsageChart.tsx
4. Add /dashboard route in router
5. Add auth guard (admin only)

Test Strategy: Unit test for getDailyStats(), integration test for the route.
Risk: Low — additive change, no shared state.
Effort: ~2 hours.

🎉 Pipeline complete. Ready for implementation.

Available Pipelines

Setup

1

Create an Account

Sign up to get access to the Pipes MCP backend and dashboard.

2

Generate API Key

Go to your Dashboard to create a token.

3

Configure your IDE

Add the MCP server to your IDE settings (Cursor, VS Code, etc.):

{
  "mcpServers": {
    "mcp-pipes": {
      "command": "npx",
      "args": [
        "-y",
        "@dvasc/mcp-bridge",
        "https://pipes-admin.dcsv.cc/mcp"
      ],
      "env": {
        "API_KEY": "tok_your_token" // from Dashboard
      }
    }
  }
}