Best Practices
Tips for using AgentFlow effectively, keeping costs low, and building reliable pipelines.
Keeping Costs Low
Break large AI tasks into focused steps
Instead of one big prompt that does everything, split work into smaller, targeted nodes. Each invocation uses fewer tokens and is easier to debug.
# Instead of:
"Review code, fix bugs, add tests, and update docs"
# Use 4 separate AI Task nodes:
1. "Review code for bugs"
2. "Fix the identified bugs"
3. "Generate unit tests for changed files"
4. "Update documentation for new APIs"
Use Shell nodes for non-AI work
Don't use AI Task nodes for things that bash can do. Running tests, building projects, linting — all of these are free as Shell nodes.
# Free (Shell node):
npm run test -- --coverage
# Costs tokens (AI Task node):
"Run the test suite and tell me if anything fails"
# The AI would just run the same command anyway!
Write specific, concise instructions
Vague prompts cause Claude to produce more output (more tokens). Be direct about what you want.
# Expensive (vague):
"Look at the codebase and suggest improvements"
# Cheaper (specific):
"Review src/auth.ts for SQL injection vulnerabilities.
List any found as: file:line - description"
Use the Cost Dashboard to identify expensive nodes
Check the Top Nodes and Top Pipelines views regularly. If one node consistently costs more than expected, refine its instructions or split it.
Leverage resume instead of full re-runs
When a pipeline fails partway through, use Resume instead of re-running from scratch. Successfully completed AI nodes are reused at zero additional cost.
Use agents to reduce repetition
If multiple AI nodes share system instructions, create an agent with the common context. This avoids repeating the same context in every node's instructions.
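For example, a shared reviewer agent might look like the sketch below. The file name and contents are illustrative assumptions, not a prescribed schema — agents are just Markdown files under .claude/agents/:

```markdown
# .claude/agents/code-reviewer.md  (hypothetical example)
You are a senior reviewer for this TypeScript monorepo.
Conventions: strict null checks, no default exports, Vitest for tests.
Always cite findings as file:line.
```

Each AI Task node can then reference the agent and carry only its task-specific instruction, instead of repeating the conventions everywhere.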
Set budget limits on expensive pipelines
Configure a max cost (USD) per run in Pipeline Settings. If accumulated AI task costs exceed the budget, execution halts automatically — preventing runaway costs during development.
# In Pipeline Settings panel:
Budget: $0.50
# Pipeline halts if total AI cost exceeds $0.50
# Remaining nodes are cancelled, not failed
Use output caching for iterative runs
Output caching skips re-execution of nodes whose instructions haven't changed since the last successful run. This saves tokens when you're iterating on later steps of a pipeline.
# First run: all nodes execute ($0.15)
# You tweak node-5's instructions
# Second run: nodes 1-4 are cached (free)
# node-5+ re-execute ($0.03)
Pick the right model per node
Use per-node model selection to match capability to complexity: Haiku for simple formatting, Sonnet for general tasks, Opus for complex reasoning. This keeps cost proportional to difficulty at each step.
# Expensive: all nodes use Opus
Node 1 (Opus): "Format this JSON" $0.02
Node 2 (Opus): "Architect new module" $0.08
# Optimized: right model per task
Node 1 (Haiku): "Format this JSON" $0.001
Node 2 (Opus): "Architect new module" $0.08
Use pre-run cost estimates
Before clicking Run, check the estimated cost based on historical averages. After 3-5 runs, estimates become reliable enough for budget planning.
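Conceptually, the estimate is just an average of the node's cost over recent runs. A minimal sketch of the arithmetic (the dollar figures are made up):

```shell
# Hypothetical history: the last three runs cost $0.14, $0.16, and $0.15
printf '0.14\n0.16\n0.15\n' \
  | awk '{ sum += $1; n++ } END { printf "$%.2f\n", sum / n }'
# → $0.15
```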
Using Loops Effectively
Set max_iterations to prevent runaway loops
Always configure max_iterations on loop nodes. Without a cap, a loop over a large list can consume significant time and cost.
# Safe: capped at 50 iterations
"loop_config": { "max_iterations": 50 }
# Dangerous: no cap on a dynamic list
"loop_config": {} // could iterate thousands of times
Use the right separator for your data
Choose the separator that matches your list format. Newline works for file lists, comma for CSV-style data, or set a custom separator for special cases.
# Newline-separated (default):
"file1.ts\nfile2.ts\nfile3.ts"
# Comma-separated:
"item1, item2, item3" → separator: "comma"
# Custom separator:
"task1|task2|task3" → separator: "|"
Reference loop variables in child instructions
Child nodes can use $LOOP_ITEM, $LOOP_INDEX, and $LOOP_COUNT to customize behavior per iteration.
# Child AI Task instructions:
"Review file $LOOP_ITEM (file $LOOP_INDEX of $LOOP_COUNT).
Focus on security issues and suggest fixes."
Monitor loop costs with pre-run estimates
Loop-aware cost estimation multiplies child AI Task costs by the iteration count. Check the estimate before running a loop over a large list.
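In other words, the estimate scales linearly with the list. A back-of-the-envelope sketch (both numbers are assumptions for illustration):

```shell
# 20 list items × ~$0.012 average per child AI Task
awk 'BEGIN { printf "estimated loop cost: $%.2f\n", 20 * 0.012 }'
# → estimated loop cost: $0.24
```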
Pipeline Design
Use conditional edges for error handling
Add "failure" edges to create fallback paths. Instead of letting the whole pipeline fail, route failures to cleanup or notification nodes.
Node: "Deploy to Production"
├── success → "Notify Team: Success"
└── failure → "Rollback Deploy"
└── "Notify Team: Rolled Back"
Add approval gates before destructive actions
Any node that pushes to main, deploys to production, or modifies shared resources should have an approval gate before it.
AI Review → Tests → ✅ Approval Gate → Git Push to Main
# The approval gate shows you exactly what the AI
# changed before you let it push.
Use parallel nodes to speed up pipelines
Independent tasks like linting, testing, and type-checking can run at the same time. This cuts total pipeline duration significantly.
# Sequential: ~90 seconds
Lint (30s) → Test (30s) → Typecheck (30s)
# Parallel: ~30 seconds
┌── Lint (30s) ──┐
├── Test (30s) ──┤ → Continue
└── Typecheck ───┘
Set timeouts on all execution nodes
AI tasks and shell commands can hang. Always set a reasonable timeout to prevent pipelines from running indefinitely.
// Typical timeout values, set per node:
"timeout": 300   // 5 minutes for tests
"timeout": 600   // 10 minutes for AI code review
"timeout": 60    // 1 minute for simple shell commands
Use variables for environment-specific values
Don't hardcode branch names, URLs, or environment names. Use pipeline variables so you can reuse the same pipeline across contexts.
// Pipeline variables:
{ "BRANCH": "main", "ENV": "staging", "REGION": "us-east-1" }
// Node instructions:
"Deploy to ${ENV} in ${REGION}"
"git push origin HEAD:${BRANCH}"
Compose with sub-pipelines
Build reusable pipeline modules. A 'deploy' sub-pipeline can be called from both your 'full CI/CD' and 'hotfix' pipelines.
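As a sketch, the calling pipeline would contain a node that points at the shared module. The node type and field names below are assumptions for illustration, not AgentFlow's actual schema:

```json
{
  "id": "deploy-step",
  "type": "sub_pipeline",
  "pipeline": ".claude/pipelines/deploy.json",
  "variables": { "ENV": "staging" }
}
```

The 'full CI/CD' and 'hotfix' pipelines can then each reference the same deploy module with different variables.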
Chain nodes with output data passing
Use {output.NODE_ID} to feed one node's output into another. This lets you build multi-step reasoning chains where each step builds on the previous result.
# Node 1: "Analyze the codebase for tech debt"
# → captures analysis report to stdout
# Node 2: "Based on this report: {output.node-1}
# Create a prioritized remediation plan"
# → receives Node 1's full output as context
Writing Effective Agents
Be specific about output format
Tell the agent exactly how to format its output. This makes downstream processing easier and responses more consistent.
# In your agent markdown:
## Output Format
Respond with a JSON object:
```json
{
"issues": [
{ "file": "path", "line": 42, "severity": "high", "message": "..." }
],
"summary": "One-line summary"
}
```
Define scope boundaries
Tell the agent what it should NOT do. This prevents scope creep and keeps responses focused.
# Good agent instructions:
You are a security reviewer. ONLY report security issues.
Do NOT suggest style changes, performance improvements,
or refactoring unless they directly relate to security.
Include examples in agent instructions
One good example is worth more than a paragraph of explanation. Show the agent exactly what good output looks like.
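For instance, a reviewer agent's instructions might pin down the expected shape with a single worked example. The content here is hypothetical:

```markdown
## Example
Input: a diff adding eval(userInput) to src/handler.ts
Good output:
src/handler.ts:12 - high - eval of untrusted input; use JSON.parse instead
```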
Keep agents single-purpose
A 'code-reviewer' agent and a 'test-writer' agent are better than one 'do-everything' agent. Each can be optimized independently.
Team Collaboration
Commit pipelines and agents to git
Everything in .claude/ is plain JSON and Markdown. Commit it, review it in PRs, and share it across the team.
git add .claude/agents/ .claude/pipelines/
git commit -m "feat: add code review pipeline"
git push
Use descriptive pipeline names and descriptions
Your teammates will see pipeline names in the selector. 'review-and-deploy-staging' is better than 'pipeline-1'.
CLI users can run the same pipelines
AgentFlow generates standard Claude Code agent files. Teammates who prefer the CLI can use agents directly without the desktop app.
# Direct CLI usage (no AgentFlow needed):
claude --agent code-reviewer --print "Review latest changes"
Use file watching for live collaboration
When a teammate pushes new agents or pipelines, AgentFlow's file watcher auto-detects the changes and refreshes the UI. Just pull and see updates.
Debugging Failed Pipelines
Check the live log first
The log viewer shows real-time output from each node. Scroll to the failed node and look at its stderr output for error messages.
Review run history for patterns
If a pipeline fails intermittently, check run history. Look for patterns — does it always fail on the same node? Same time of day? Same input variables?
Use retry policies for flaky operations
Network requests, API calls, and some test suites can be flaky. Add a retry policy with a short delay instead of failing the whole pipeline.
{
"retry": { "max": 3, "delay": 5 }
// Retries up to 3 times with 5-second delay
// Each attempt is logged separately
}
Check the app error log
AgentFlow logs internal errors to ~/.cache/com.agentflow.app/agentflow.log. Check this if the UI behaves unexpectedly or IPC calls fail.
Verify the Claude CLI path
Most execution failures are caused by the Claude CLI not being found. Check Settings → Claude CLI Path and verify it points to the correct binary.
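From a terminal, you can check where the binary resolves before comparing it against the setting. This assumes a standard PATH install; the fallback message is just illustrative:

```shell
# Print the resolved path of the claude binary, or a hint if it is missing
command -v claude || echo "claude not found on PATH"
```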
Disable output caching when debugging
If a node's output seems stale, it might be serving a cached result. Tweak the node's instructions slightly (even adding a space) to invalidate the cache and force re-execution.
Use the Latest button in long runs
During or after a pipeline run, click the "Latest" button in the log viewer to jump directly to the most recent node with results — it auto-expands and highlights it.
Quick diagnostic checklist
- Is the Claude CLI path correct? (Settings panel)
- Is the project path correct? (Top bar)
- Does .claude/ exist in the project?
- Can you run claude --version in a terminal?
- Check the error log for Rust-level errors