
Pipelines

Pipelines are RaiSE's orchestration layer. They take the skills you already use (like /rai-story-design or /rai-story-implement) and wire them into automated, gate-enforced sequences. Instead of invoking each skill manually, a pipeline runs the entire lifecycle — pausing only when human judgment is needed.

Legacy runbook commands (/rai-story-run, /rai-bugfix-run, /rai-epic-run) are deprecated. Use the pipeline engine through your agent integration instead of teaching new workflows around those commands.

Why Pipelines?

Without pipelines, you invoke skills one by one:

/rai-story-start → /rai-story-design → /rai-story-plan → /rai-story-implement → ...

This works, but it depends on you remembering the sequence, running each skill, and not skipping steps. Pipelines encode the sequence as data — a YAML file — and the engine executes it, enforcing gates and injecting context at each phase.
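The "sequence as data" idea can be sketched in a few lines. This is an illustrative Python sketch, not the engine's real code — the function and parameter names are hypothetical:

```python
# Hypothetical sketch: the pipeline encodes the skill sequence as data,
# so the engine, not the user, is responsible for ordering and gating.
PHASES = ["start", "design", "plan", "implement"]  # loaded from YAML in practice

def run_pipeline(phases, execute, gate_check):
    """Run each phase in order, pausing when a gate does not pass."""
    for phase in phases:
        execute(phase)              # AI follows the skill's SKILL.md
        if not gate_check(phase):   # a HITL gate may pause the run here
            return phase            # paused, awaiting human approval
    return "done"

# Usage: every gate auto-approves, so the run completes.
result = run_pipeline(PHASES, execute=lambda p: None, gate_check=lambda p: True)
# result == "done"
```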

How You Use Pipelines

You interact with pipelines through your agent's pipeline integration, pointing it at a work item such as RAISE-1234.

Behind the scenes:

  1. The runbook skill calls pipeline_start (MCP tool) to initialize the run
  2. At each phase, the AI reads the skill's SKILL.md and follows its steps
  3. pipeline_advance moves to the next phase after completion
  4. HITL gates pause and ask you for approval in the conversation
  5. pipeline_restore recovers state if the session restarts

You never need to call MCP tools directly — the runbook skills handle the orchestration. Your interaction is conversational: read the output, approve at gates, provide guidance when asked.
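The tool sequence behind those steps can be sketched as follows. FakeMCP is a stand-in for the real MCP transport; the tool names (pipeline_start, pipeline_advance) are the engine's actual tools, but everything else here is illustrative:

```python
# Hypothetical sketch of the orchestration loop a runbook skill drives.
class FakeMCP:
    """Stand-in for the MCP transport, pre-loaded with a phase list."""
    def __init__(self, phases):
        self.phases = list(phases)

    def call(self, tool, **kwargs):
        if tool == "pipeline_start":
            return {"id": "run-1", "phase": self.phases[0], "status": "running"}
        if tool == "pipeline_advance":
            self.phases.pop(0)
            nxt = self.phases[0] if self.phases else None
            return {"id": "run-1", "phase": nxt,
                    "status": "running" if nxt else "completed"}
        raise ValueError(f"unknown tool: {tool}")

def run_story(mcp, issue_id):
    """Drive a run from start to completion, advancing after each phase."""
    run = mcp.call("pipeline_start", pipeline="story", issue=issue_id)
    while run["phase"] is not None:
        # At each phase the AI reads the skill's SKILL.md and does the work,
        # then the runbook skill advances the run.
        run = mcp.call("pipeline_advance", run_id=run["id"])
    return run["status"]
```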

Legacy runbook commands (deprecated)

| Command | Pipeline | Phases | Use case |
|---|---|---|---|
| /rai-story-run | story | 8 | Feature development |
| /rai-bugfix-run | bugfix | 7 | Tracked bug fixes |
| /rai-epic-run | epic | 6 | Multi-story initiatives |

Pipeline YAML Anatomy

Each pipeline is defined in a YAML file that the engine loads:

```yaml
name: story
description: "8-phase story lifecycle pipeline"
issue_types:
  - story

defaults:
  story_type: code

execution:
  worktree_isolation: true
  branch_pattern: "story/{issue_id}/pipeline"

phases:
  - id: design
    type: llm
    skill: rai-story-design
    context:
      graph:
        - types: [pattern]
          limit: 3
        - types: [module]
          limit: 2
    validates:
      - pattern: "**/*-design.md"
        description: "Story design document"
    gate:
      type: hitl
      level: REVIEW
```

Key Fields

| Field | Description |
|---|---|
| phases[].type | llm (AI-driven via skill) or deterministic (shell commands) |
| phases[].skill | Which SKILL.md to load as the prompt |
| phases[].context.graph | Knowledge graph queries injected into context |
| phases[].validates | Glob patterns for artifacts that must exist after execution |
| phases[].gate | hitl (human review) or deterministic (automated check) |
| phases[].when | Condition expression (e.g., "story_type == 'code'") |
| execution.worktree_isolation | Run in a separate git worktree |
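Two of these fields are worth sketching: validates (artifact globs that must match after a phase) and when (a condition on pipeline variables). The field semantics come from the table above; the helper functions themselves are hypothetical, not the engine's real code:

```python
from pathlib import Path

def artifacts_exist(root, patterns):
    """True if every glob in `validates` matches at least one file under root."""
    return all(any(Path(root).glob(p)) for p in patterns)

def phase_applies(when, variables):
    """Evaluate a `when` expression such as "story_type == 'code'".

    A real engine would use a restricted expression parser; eval with
    empty builtins is only a sketch of the semantics.
    """
    if when is None:
        return True
    return bool(eval(when, {"__builtins__": {}}, dict(variables)))
```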

Built-in Pipelines

RaiSE ships with four lifecycle pipelines:

| Pipeline | Phases | Lifecycle |
|---|---|---|
| story | 8 | start, design, plan, implement, AR, QR, review, close |
| epic | 6 | start, design, plan, story-iteration, docs, close |
| bugfix | 7 | start, triage, analyse, plan, fix, review, close |
| hotfix | 3 | Minimal emergency fixes |

HITL Gates and Delegation

Gates are checkpoints where a human decision is required. The pipeline pauses and the AI presents what happened — you decide whether to proceed, adjust, or reject.

In Claude Code, this looks like a conversational prompt:

```text
── GATE: Design Approval ──

Story: RAISE-1234 — Add webhook notifications
Approach: Event-driven via existing hook system
Components: hooks/webhook.py (new), config schema (modify)

▸ Approve design? [y/edit/reject]
```

Your delegation level (from your developer profile) controls how much autonomy the pipeline has:

| Level | Behavior |
|---|---|
| REVIEW | Pipeline pauses at every HITL gate. You review and approve. (Default) |
| NOTIFY | Pipeline proceeds but notifies you. You can intervene. |
| AUTO | Pipeline proceeds without pausing. Gates are logged but not blocking. |

This maps to ShuHaRi: Shu developers use REVIEW (learn the process), Ri developers use AUTO (trust the process).
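How a delegation level translates into gate behavior can be sketched like this. The level names match the table above; the function and its return values are illustrative assumptions:

```python
def handle_gate(level, approve):
    """Resolve a HITL gate under a delegation level.

    `approve` is a callable that asks the human (only consulted at REVIEW).
    Returns "passed", "paused", or "notified" — hypothetical states for
    illustration only.
    """
    if level == "REVIEW":
        return "passed" if approve() else "paused"  # blocks on human input
    if level == "NOTIFY":
        return "notified"  # proceeds, but surfaces the gate to the human
    if level == "AUTO":
        return "passed"    # gate is logged, never blocks
    raise ValueError(f"unknown delegation level: {level}")
```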

Context Injection

Each phase can declare what knowledge graph context it needs. The engine queries the graph via MCP tools and includes the results:

```yaml
context:
  graph:
    - types: [pattern]    # "What patterns apply here?"
      limit: 5
    - types: [module]     # "What modules are relevant?"
      limit: 2
    - types: [decision]   # "What ADRs should I know about?"
      limit: 2
```

This is how the pipeline connects institutional knowledge to execution — patterns from past work inform current decisions automatically.
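Mechanically, context injection amounts to running each declared query and collecting the results for the prompt. In this sketch, query_graph is a stand-in for the real MCP graph tool, and the spec shape mirrors the context.graph YAML (a list of queries with types and limit):

```python
def build_context(graph_spec, query_graph):
    """Run each declared graph query and collect results for the prompt."""
    results = []
    for q in graph_spec:
        results.extend(query_graph(types=q["types"], limit=q["limit"]))
    return results

# Usage with a fake query function that returns synthetic node names.
fake = lambda types, limit: [f"{t}-{i}" for t in types for i in range(limit)]
ctx = build_context([{"types": ["pattern"], "limit": 2},
                     {"types": ["module"], "limit": 1}], fake)
# ctx == ["pattern-0", "pattern-1", "module-0"]
```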

MCP Tools (Under the Hood)

The pipeline engine exposes MCP tools that runbook skills consume:

| MCP Tool | Purpose |
|---|---|
| pipeline_start | Initialize a pipeline run |
| pipeline_advance | Complete current phase, move to next |
| pipeline_status | Check current state of a run |
| pipeline_restore | Recover full state after session restart |
| pipeline_pause | Pause a run (resumable) |
| pipeline_cancel | Cancel a run (non-resumable) |
| pipeline_list | List available pipeline definitions |
| pipeline_runs | List active and recent runs |

You don't call these directly — the runbook skills drive them for you. They are documented here as background on what happens under the hood.
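The value of pipeline_restore is easiest to see in a sketch: run state is persisted, so a restarted session resumes at the saved phase instead of starting over. The store and record shape below are assumptions, not the engine's actual schema:

```python
# Hypothetical in-memory stand-in for the engine's persisted run state.
STORE = {}

def pipeline_start_stub(run_id, phases):
    STORE[run_id] = {"phases": phases, "index": 0}

def pipeline_advance_stub(run_id):
    STORE[run_id]["index"] += 1

def pipeline_restore_stub(run_id):
    """Recover the current phase after a session restart."""
    rec = STORE[run_id]
    return rec["phases"][rec["index"]]

pipeline_start_stub("run-1", ["start", "design", "plan"])
pipeline_advance_stub("run-1")
# A new session can now pick up where the old one left off:
# pipeline_restore_stub("run-1") == "design"
```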

Skills Are Unchanged

Important: Skills work exactly the same whether invoked manually (/rai-story-design) or through a pipeline. The pipeline just automates the sequencing and adds context injection. You can always fall back to manual skill invocation.

Next Steps