Example: Full CI/CD Pipeline

A practical, end-to-end example using multiple node types. This pipeline performs AI code review, runs quality checks in parallel, requires human approval, and then deploys — with a reusable sub-pipeline for the deployment step. See the Node Types page for the full list of 8 available types including Loop.

Pipeline Overview

1. AI Code Review (ai-task)
     ↓ success
2. Quality Checks (parallel)
     2a. Run Tests (shell)
     2b. Lint & Type Check (shell)
     2c. AI Security Scan (ai-task)
     ↓ all succeed
3. Commit Changes (git)
     ↓ success
4. Deploy Approval (approval-gate)
     ↓ approved
5. Deploy to Staging (sub-pipeline)
     ↓ success
6. Post-Deploy Verification (ai-task)

Step-by-Step Walkthrough

#1 AI Code Review

ai-task

The pipeline starts by asking Claude to review all staged changes, using the code-reviewer agent so that review criteria stay consistent across runs.

What it does:

  • Sends all git diff output to Claude with the code-reviewer agent
  • Uses Sonnet model — good balance of quality and cost for code review
  • Claude analyzes for bugs, security issues, and code quality
  • Output is captured and available to downstream nodes via {output.node-1}
  • Cost is tracked (typically $0.01-0.05 depending on diff size)
Node configuration
json
{
  "id": "node-1",
  "name": "AI Code Review",
  "type": "ai-task",
  "agent": "code-reviewer",
  "model": "sonnet",
  "instructions": "Review all staged changes in this repository. Focus on bugs, security vulnerabilities, and code quality issues. Report findings grouped by severity.",
  "retry": { "max": 2, "delay": 10 },
  "timeout": 600,
  "inputs": [],
  "outputs": ["review_report"],
  "position": { "x": 300, "y": 0 }
}
Agent: .claude/agents/code-reviewer.md
markdown
# Code Reviewer

You are a senior engineer performing code review. Analyze all changes for:

1. **Bugs** - Logic errors, null references, race conditions
2. **Security** - Injection, auth bypass, data exposure (OWASP Top 10)
3. **Performance** - N+1 queries, unnecessary allocations, blocking calls

## Output Format
Group findings by severity: Critical > Warning > Info.
For each: file:line - description - suggested fix.

#2 Quality Checks

parallel

Runs three quality checks at the same time. All three must pass for the pipeline to continue. Running them concurrently saves roughly 60% of the wall-clock time compared to running them one after another.

What it does:

  • Spawns all three children concurrently via Tokio tasks
  • Waits for ALL to finish before continuing
  • If any child fails, the parallel node fails and the pipeline stops
Node configuration
json
{
  "id": "node-2",
  "name": "Quality Checks",
  "type": "parallel",
  "instructions": "Run all quality checks concurrently",
  "children": ["node-2a", "node-2b", "node-2c"],
  "inputs": [],
  "outputs": [],
  "position": { "x": 300, "y": 150 }
}
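The all-must-pass behavior of this node can be sketched in Python (a simplified model using threads; the engine itself uses Tokio tasks, and the child callables below are stand-ins for node-2a/2b/2c):

```python
from concurrent.futures import ThreadPoolExecutor

def run_parallel_node(children):
    """Run all child callables concurrently; succeed only if every one succeeds."""
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(child) for child in children]
        # Wait for ALL children to finish and collect each result (True = success).
        results = [f.result() for f in futures]
    return all(results)

# Hypothetical children: two checks pass, one fails.
checks = [lambda: True, lambda: True, lambda: False]
print(run_parallel_node(checks))  # False -> the parallel node fails
```

Note that the node waits for every child even after one has failed; a fail-fast variant would cancel the remaining futures instead.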

2a. Run Tests (shell)

Runs the full test suite with coverage. Times out after 5 minutes to prevent hanging test runners.

{
  "id": "node-2a",
  "name": "Run Tests",
  "type": "shell",
  "instructions": "npm run test -- --coverage --ci",
  "retry": { "max": 2, "delay": 5 },
  "timeout": 300,
  "inputs": [],
  "outputs": [],
  "position": { "x": 100, "y": 300 }
}

2b. Lint & Type Check (shell)

Runs ESLint and TypeScript type checking in a single bash command. Fast and free (no AI cost).

{
  "id": "node-2b",
  "name": "Lint & Type Check",
  "type": "shell",
  "instructions": "npx eslint src/ --max-warnings 0 && npx tsc --noEmit",
  "timeout": 120,
  "inputs": [],
  "outputs": [],
  "position": { "x": 300, "y": 300 }
}

2c. AI Security Scan (ai-task)

Uses Claude with a security-focused agent to do a deeper security analysis beyond what the code review catches. Specifically scans for OWASP Top 10 vulnerabilities.

{
  "id": "node-2c",
  "name": "AI Security Scan",
  "type": "ai-task",
  "agent": "security-scanner",
  "instructions": "Scan the entire src/ directory for OWASP Top 10 vulnerabilities. Check for SQL injection, XSS, CSRF, insecure deserialization, and secrets in code. Exit with a non-zero status if any critical issues are found.",
  "timeout": 600,
  "inputs": [],
  "outputs": ["security_report"],
  "position": { "x": 500, "y": 300 }
}

#3 Commit Changes

git

If the AI review prompted any auto-fixes, this node commits them, using a pipeline variable for the branch name.

What it does:

  • Stages all changes, creates a commit with a descriptive message
  • Uses the ${BRANCH} variable for push target
  • Free — git operations have no AI cost
Node configuration
json
{
  "id": "node-3",
  "name": "Commit Changes",
  "type": "git",
  "instructions": "git add -A && git commit -m 'chore: apply AI review fixes' --allow-empty && git push origin HEAD:${BRANCH}",
  "timeout": 60,
  "inputs": [],
  "outputs": [],
  "position": { "x": 300, "y": 450 }
}

#4 Deploy Approval

approval-gate

Pauses the pipeline and presents the human operator with a summary of everything that happened so far. They can approve to continue to deployment or reject to stop.

What it does:

  • Emits an approval-requested event to the UI
  • Shows the instructions text as a prompt to the operator
  • Blocks until the operator clicks Approve or Reject
  • Times out after 30 minutes with no response (auto-fails)
Node configuration
json
{
  "id": "node-4",
  "name": "Deploy Approval",
  "type": "approval-gate",
  "instructions": "Pipeline has completed code review, tests, lint, type check, and security scan. All checks passed. Changes have been committed and pushed to ${BRANCH}.\n\nReview the run logs above and approve deployment to ${DEPLOY_ENV}.",
  "timeout": 1800,
  "inputs": [],
  "outputs": [],
  "position": { "x": 300, "y": 600 }
}
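The block-until-response behavior can be modeled with a threading.Event plus a timeout (a simplified, single-process sketch; the real gate emits an approval-requested event to the UI rather than sharing memory with it):

```python
import threading

class ApprovalGate:
    def __init__(self, timeout):
        self.timeout = timeout            # seconds to wait (1800 for this node)
        self._answered = threading.Event()
        self.approved = False

    def respond(self, approved):
        """Called when the operator clicks Approve or Reject."""
        self.approved = approved
        self._answered.set()

    def wait(self):
        """Block until a response arrives; auto-fail on timeout."""
        if not self._answered.wait(self.timeout):
            return "timeout"              # no response within the window
        return "approved" if self.approved else "rejected"

gate = ApprovalGate(timeout=1.0)
# Simulate the operator approving from another thread shortly after the gate opens:
threading.Timer(0.01, gate.respond, args=[True]).start()
print(gate.wait())  # approved
```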

Resume behavior

If you resume this pipeline after it was previously approved, the gate is automatically re-approved. You won't be prompted again.

#5 Deploy to Staging

sub-pipeline

Calls a reusable "deploy-staging" pipeline that handles the actual deployment. This pipeline might be shared across multiple parent pipelines (e.g., CI/CD, hotfix, rollback).

What it does:

  • Loads .claude/pipelines/deploy-staging.pipeline.json
  • Executes it recursively as a full pipeline
  • Costs from AI nodes inside the sub-pipeline are aggregated into the parent run
  • Circular references are blocked (if deploy-staging calls back to this pipeline)
Node configuration
json
{
  "id": "node-5",
  "name": "Deploy to Staging",
  "type": "sub-pipeline",
  "instructions": "Execute the staging deployment pipeline",
  "pipeline_ref": "deploy-staging",
  "inputs": [],
  "outputs": [],
  "position": { "x": 300, "y": 750 }
}
Referenced: deploy-staging.pipeline.json (simplified)
json
{
  "name": "deploy-staging",
  "description": "Build and deploy to staging environment",
  "version": "1.0.0",
  "variables": {},
  "nodes": [
    {
      "id": "ds-1", "name": "Build", "type": "shell",
      "instructions": "npm run build",
      "position": { "x": 250, "y": 0 }
    },
    {
      "id": "ds-2", "name": "Deploy", "type": "shell",
      "instructions": "npx deploy --env staging",
      "timeout": 300,
      "position": { "x": 250, "y": 150 }
    }
  ],
  "edges": [
    { "id": "ds-e1", "from": "ds-1", "to": "ds-2", "condition": "success" }
  ]
}
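The circular-reference guard mentioned above can be sketched as a depth-first walk over pipeline_ref edges (a hypothetical check with made-up pipeline names; the engine's actual implementation may differ):

```python
def find_cycle(pipelines, root, stack=None):
    """Return True if following sub-pipeline references from `root` revisits a pipeline."""
    stack = stack or []
    if root in stack:
        return True                      # root is already on the call path -> cycle
    stack = stack + [root]
    for ref in pipelines.get(root, []):  # sub-pipelines referenced by `root`
        if find_cycle(pipelines, ref, stack):
            return True
    return False

# Hypothetical reference graph: full-ci-cd -> deploy-staging -> full-ci-cd
refs = {"full-ci-cd": ["deploy-staging"], "deploy-staging": ["full-ci-cd"]}
print(find_cycle(refs, "full-ci-cd"))  # True -> the run would be blocked
```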

#6 Post-Deploy Verification

ai-task

After deployment, uses Claude (Haiku for speed and low cost) to verify the staging environment is healthy. References the code review output from node 1 via {output.node-1} to check if flagged issues are resolved.

Node configuration
json
{
  "id": "node-6",
  "name": "Post-Deploy Verification",
  "type": "ai-task",
  "model": "haiku",
  "instructions": "The staging deployment just completed. The code review found: {output.node-1}\n\nVerify the deployment by:\n1. Checking the health endpoint at ${STAGING_URL}/health\n2. Running a quick smoke test of the main user flows\n3. Checking application logs for any new errors\n\nReport pass/fail with details.",
  "timeout": 300,
  "inputs": [],
  "outputs": ["verification_report"],
  "position": { "x": 300, "y": 900 }
}
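Both placeholder styles used in these instructions, ${VAR} pipeline variables and {output.node-N} upstream outputs, can be expanded with a small substitution pass (an illustrative sketch, not the engine's exact substitution rules; unknown placeholders are left untouched here):

```python
import re

def interpolate(text, variables, outputs):
    """Expand ${VAR} from pipeline variables and {output.node-N} from upstream results."""
    text = re.sub(r"\$\{(\w+)\}",
                  lambda m: variables.get(m.group(1), m.group(0)), text)
    text = re.sub(r"\{output\.([\w-]+)\}",
                  lambda m: outputs.get(m.group(1), m.group(0)), text)
    return text

variables = {"STAGING_URL": "https://staging.example.com"}
outputs = {"node-1": "2 warnings, 0 critical"}
print(interpolate("Review: {output.node-1}. Check ${STAGING_URL}/health",
                  variables, outputs))
# Review: 2 warnings, 0 critical. Check https://staging.example.com/health
```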

Complete Pipeline JSON

Here's the full pipeline file you can save as .claude/pipelines/full-ci-cd.pipeline.json:

full-ci-cd.pipeline.json
json
{
  "name": "Full CI/CD",
  "description": "AI code review, parallel quality checks, approval gate, and staged deployment",
  "version": "1.0.0",
  "budget": 0.50,
  "variables": {
    "BRANCH": "feature/my-changes",
    "DEPLOY_ENV": "staging",
    "STAGING_URL": "https://staging.example.com"
  },
  "nodes": [
    {
      "id": "node-1",
      "name": "AI Code Review",
      "type": "ai-task",
      "agent": "code-reviewer",
      "model": "sonnet",
      "instructions": "Review all staged changes. Focus on bugs, security, and code quality.",
      "retry": { "max": 2, "delay": 10 },
      "timeout": 600,
      "inputs": [],
      "outputs": ["review_report"],
      "position": { "x": 300, "y": 0 }
    },
    {
      "id": "node-2",
      "name": "Quality Checks",
      "type": "parallel",
      "instructions": "Run all quality checks concurrently",
      "children": ["node-2a", "node-2b", "node-2c"],
      "inputs": [],
      "outputs": [],
      "position": { "x": 300, "y": 150 }
    },
    {
      "id": "node-2a",
      "name": "Run Tests",
      "type": "shell",
      "instructions": "npm run test -- --coverage --ci",
      "retry": { "max": 2, "delay": 5 },
      "timeout": 300,
      "inputs": [],
      "outputs": [],
      "position": { "x": 100, "y": 300 }
    },
    {
      "id": "node-2b",
      "name": "Lint & Type Check",
      "type": "shell",
      "instructions": "npx eslint src/ --max-warnings 0 && npx tsc --noEmit",
      "timeout": 120,
      "inputs": [],
      "outputs": [],
      "position": { "x": 300, "y": 300 }
    },
    {
      "id": "node-2c",
      "name": "AI Security Scan",
      "type": "ai-task",
      "agent": "security-scanner",
      "instructions": "Scan src/ for OWASP Top 10 vulnerabilities. Exit non-zero if critical issues found.",
      "timeout": 600,
      "inputs": [],
      "outputs": ["security_report"],
      "position": { "x": 500, "y": 300 }
    },
    {
      "id": "node-3",
      "name": "Commit Changes",
      "type": "git",
      "instructions": "git add -A && git commit -m 'chore: apply AI review fixes' --allow-empty && git push origin HEAD:${BRANCH}",
      "timeout": 60,
      "inputs": [],
      "outputs": [],
      "position": { "x": 300, "y": 450 }
    },
    {
      "id": "node-4",
      "name": "Deploy Approval",
      "type": "approval-gate",
      "instructions": "All checks passed. Approve deployment to ${DEPLOY_ENV}?",
      "timeout": 1800,
      "inputs": [],
      "outputs": [],
      "position": { "x": 300, "y": 600 }
    },
    {
      "id": "node-5",
      "name": "Deploy to Staging",
      "type": "sub-pipeline",
      "instructions": "Execute staging deployment",
      "pipeline_ref": "deploy-staging",
      "inputs": [],
      "outputs": [],
      "position": { "x": 300, "y": 750 }
    },
    {
      "id": "node-6",
      "name": "Post-Deploy Verification",
      "type": "ai-task",
      "model": "haiku",
      "instructions": "Review findings: {output.node-1}. Verify staging at ${STAGING_URL}. Check health, smoke test, and logs.",
      "timeout": 300,
      "inputs": [],
      "outputs": ["verification_report"],
      "position": { "x": 300, "y": 900 }
    }
  ],
  "edges": [
    { "id": "e1", "from": "node-1", "to": "node-2", "condition": "success" },
    { "id": "e2", "from": "node-2", "to": "node-3", "condition": "success" },
    { "id": "e3", "from": "node-3", "to": "node-4", "condition": "success" },
    { "id": "e4", "from": "node-4", "to": "node-5" },
    { "id": "e5", "from": "node-5", "to": "node-6", "condition": "success" }
  ]
}

Expected Cost Breakdown

Here's a typical cost breakdown for one run of this pipeline:

| Node | Type | Cost |
|------|------|------|
| AI Code Review (Sonnet) | ai-task | $0.02 - $0.08 |
| Run Tests | shell | Free |
| Lint & Type Check | shell | Free |
| AI Security Scan | ai-task | $0.01 - $0.05 |
| Commit Changes | git | Free |
| Deploy Approval | approval-gate | Free |
| Deploy (sub-pipeline) | sub-pipeline | Free (shell only) |
| Post-Deploy Verify (Haiku) | ai-task | $0.001 - $0.01 |
| Total per run | | ~$0.03 - $0.14 |

Budget limit

This pipeline sets budget: 0.50 — if accumulated AI costs exceed $0.50, execution halts automatically and remaining nodes are cancelled.
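The budget cut-off amounts to a running-total check during execution (a minimal sketch, assuming each node's cost is known before the next node starts; whether the engine checks before or after a node runs is an assumption here):

```python
def run_with_budget(node_costs, budget):
    """Execute nodes in order, halting once accumulated AI cost would exceed the budget."""
    spent, completed = 0.0, []
    for name, cost in node_costs:
        if spent + cost > budget:
            return completed, "halted: budget exceeded"  # remaining nodes cancelled
        spent += cost
        completed.append(name)
    return completed, "finished"

# Hypothetical per-node costs at the top of each node's range:
costs = [("review", 0.08), ("scan", 0.05), ("verify", 0.01)]
print(run_with_budget(costs, budget=0.50))  # (['review', 'scan', 'verify'], 'finished')
print(run_with_budget(costs, budget=0.10))  # (['review'], 'halted: budget exceeded')
```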

Cost savings with resume & caching

If this pipeline fails at step 4 (rejected) and you re-run it after making changes, the resume feature skips the AI Code Review and AI Security Scan if the code hasn't changed — saving the bulk of the cost. Output caching further reduces cost on iterative runs where instructions haven't changed.

Failure Scenarios

Tests fail (node 2a)

The parallel node fails because a child failed. Nodes 3-6 are skipped. Fix the failing test and resume — the AI nodes from step 1 are reused if instructions haven't changed.

Approval rejected (node 4)

The approval gate fails with approval_state: "rejected". The pipeline stops. Review the logs, make changes, and run a new pipeline.

AI task times out (node 1)

The Claude CLI process is killed after 600 seconds. The retry policy then kicks in: the command is re-run up to 2 times, with a 10-second delay before each retry.

Sub-pipeline deploy fails (node 5)

The error propagates up. The parent pipeline records the sub-pipeline failure. Resume re-runs only the sub-pipeline, not the parent nodes.