Execution & Runs
How AgentFlow executes pipelines, streams logs, handles failures, and manages run history.
Execution Flow
1. Sub-pipeline references are checked, MCP tools are validated, and an instruction hash is computed for each node.
2. Nodes and edges are analyzed to build a topological execution order; nodes at the same level can run concurrently.
3. For each level in the DAG, runnable nodes are identified based on edge conditions from their predecessors.
4. AI tasks spawn the Claude CLI; shell/git nodes spawn bash. All stdout/stderr is streamed line by line to the UI.
5. Each node transitions through pending → running → success/failed/skipped/cancelled, and an event is emitted on every change.
6. AI task costs are parsed from Claude CLI stderr and summed for the entire run.
7. Run metadata and per-node details (logs, exit codes, costs, timing) are stored in SQLite.
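The level-building step (2) can be sketched as Kahn's algorithm grouped by depth; this is an illustrative Python sketch of the idea, not AgentFlow's actual Rust implementation, and the function name is hypothetical:

```python
from collections import defaultdict

def execution_levels(nodes, edges):
    """Group nodes into levels: a node is placed one level past its
    deepest predecessor, so every node in a level can run concurrently."""
    indegree = {n: 0 for n in nodes}
    successors = defaultdict(list)
    for src, dst in edges:
        successors[src].append(dst)
        indegree[dst] += 1

    levels = []
    current = [n for n in nodes if indegree[n] == 0]
    seen = 0
    while current:
        levels.append(sorted(current))
        seen += len(current)
        nxt = []
        for n in current:
            for m in successors[n]:
                indegree[m] -= 1
                if indegree[m] == 0:
                    nxt.append(m)
        current = nxt
    if seen != len(nodes):
        raise ValueError("cycle detected in pipeline graph")
    return levels
```

For a diamond graph (a → b, a → c, b → d, c → d) this yields three levels, with b and c eligible to run concurrently.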
Data Flow
```
Frontend: startRun(pipeline, inputs, cliPath, projectPath)
        │
        ▼
Tauri IPC → executor::start_run()
        │
        ├── Validate sub-pipelines & MCP tools
        ├── Insert run record in SQLite
        ├── Spawn async task: run_pipeline_loop()
        │       │
        │       ├── Build topological order (DAG)
        │       ├── For each level:
        │       │       ├── Check edge conditions
        │       │       ├── Skip nodes without satisfied edges
        │       │       └── Execute runnable nodes concurrently
        │       │               │
        │       │               ├── ai-task       → spawn claude CLI
        │       │               ├── shell/git     → spawn bash -c
        │       │               ├── parallel      → tokio::spawn children
        │       │               ├── loop          → iterate items, spawn per-item
        │       │               ├── approval-gate → emit event, wait
        │       │               └── sub-pipeline  → recursive call
        │       │
        │       ├── Emit node-log events (streamed output)
        │       ├── Emit run-update events (status changes)
        │       ├── Parse cost from Claude stderr
        │       └── Insert run_steps in SQLite
        │
        ▼
Frontend: listens for events, updates canvas & log viewer
```

Loop Execution
Loop nodes iterate over a list of items, executing their child nodes once per item. Each iteration injects per-item variables.
1. Parse the instructions field into a list using the configured separator (newline, comma, or custom string)
2. Cap the list at max_iterations (default: 1000)
3. For each item in the list:
a. Inject loop variables: $LOOP_ITEM, $LOOP_INDEX, $LOOP_COUNT
b. Execute all child nodes with the injected variables
c. Track per-iteration status and cost
4. Loop succeeds only if ALL iterations succeed
5. If any iteration fails, remaining iterations are skipped
6. Costs from all iterations are aggregated into the loop node's totalPerformance
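The steps above can be condensed into a short sketch. This is an illustrative model of the loop semantics, assuming a `run_child` callback that executes all child nodes for one item and reports (success, cost); the real executor runs child nodes asynchronously in Rust:

```python
def run_loop(instructions, separator, run_child, max_iterations=1000):
    """Illustrative loop-node semantics: split items, cap the list,
    inject per-item variables, fail fast, and aggregate cost."""
    items = [s for s in instructions.split(separator) if s]
    items = items[:max_iterations]          # step 2: cap at max_iterations
    total_cost = 0.0
    statuses = []
    for index, item in enumerate(items):
        env = {                             # step 3a: loop variables
            "LOOP_ITEM": item,
            "LOOP_INDEX": str(index),
            "LOOP_COUNT": str(len(items)),
        }
        ok, cost = run_child(env)           # step 3b: execute children
        total_cost += cost                  # step 6: aggregate cost
        statuses.append("success" if ok else "failed")
        if not ok:                          # step 5: skip the rest
            statuses.extend(["skipped"] * (len(items) - index - 1))
            break
    succeeded = all(s == "success" for s in statuses)  # step 4
    return succeeded, statuses, total_cost
```

With items "a", "b", "c" and a child that fails on "b", the loop reports failed overall, with "c" skipped.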
Live Log Streaming
During execution, stdout and stderr from each node are captured and buffered, then flushed as batched events every 50ms to minimize IPC overhead.
```json
// node-log-batch event payload (batched every 50ms)
{
  "runId": "run-1708344567890",
  "nodeId": "node-2",
  "lines": [
    "PASS src/utils.test.ts (2.34s)",
    "PASS src/lib.test.ts (1.12s)"
  ]
}
```
```json
// run-update event payload
{
  "runId": "run-1708344567890",
  "status": "running",
  "nodeStates": {
    "node-1": "success",
    "node-2": "running",
    "node-3": "pending"
  },
  "totalCost": 0.0342
}
```

The live log viewer shows streamed output in a monospace panel. A "Latest" button lets you jump directly to the most recent node with results, expanding and highlighting it automatically. Only the last 200 log lines per node are rendered, with an indicator for hidden earlier lines.
Cancellation
Running pipelines can be cancelled at any time via Ctrl+R (toggles run/cancel) or the cancel button.
- Cancellation is propagated via a `tokio::sync::watch` channel for zero-cost, event-driven signaling
- Active processes receive SIGTERM first
- If a process doesn't exit within the grace period, SIGKILL is sent
- All pending nodes are marked as `cancelled`
- The run is recorded with status `cancelled` in the database
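The SIGTERM-then-SIGKILL escalation can be sketched in a few lines. This is an illustrative helper, assuming a `subprocess.Popen` handle and a hypothetical grace period; the real executor does this with async Rust process handles:

```python
import subprocess

def terminate_gracefully(proc, grace_seconds=5.0):
    """Send SIGTERM; if the process is still alive after the grace
    period, escalate to SIGKILL, mirroring the behaviour described above."""
    if proc.poll() is not None:
        return proc.returncode       # already exited
    proc.terminate()                 # SIGTERM: ask politely
    try:
        return proc.wait(timeout=grace_seconds)
    except subprocess.TimeoutExpired:
        proc.kill()                  # SIGKILL: no more waiting
        return proc.wait()
```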
Resume from Failure
When a pipeline fails, you can resume it from the point of failure instead of re-running everything from scratch.
1. Load original run and its steps from the database
2. Build maps:
- prior_results: { nodeId → status }
- prior_approvals: { nodeId → "approved" | "rejected" }
- prior_instructions: { nodeId → SHA256 hash }
3. During re-execution, for each node:
- If prior status was "success" AND instruction hash matches
→ Reuse result (skip execution)
- If prior approval was "approved"
→ Auto-approve (skip waiting)
- Otherwise → Execute normally
4. New run ID created, linked via resumed_from field

Instruction change detection: if a node's instructions have changed since the original run (the stored SHA-256 hash no longer matches), its cached result is discarded and the node executes again.
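The per-node decision in step 3 can be sketched directly. This is an illustrative Python version of the logic; the function name is hypothetical and the real implementation lives in the Rust executor:

```python
import hashlib

def sha256(text):
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

def resume_action(node_id, instructions,
                  prior_results, prior_approvals, prior_instructions):
    """Decide what to do with one node when resuming a failed run."""
    unchanged = prior_instructions.get(node_id) == sha256(instructions)
    if prior_results.get(node_id) == "success" and unchanged:
        return "reuse"          # skip execution, keep the cached result
    if prior_approvals.get(node_id) == "approved":
        return "auto-approve"   # skip waiting at the gate
    return "execute"            # run normally
```

Note that editing a node's instructions between runs changes its hash, so a previously successful node is re-executed rather than reused.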
Run History
Every pipeline run is stored in the SQLite database with full details.
Run Record
| Field | Description |
|---|---|
| id | Unique run ID, format: "run-{timestamp_ms}" |
| pipeline_name | Name of the executed pipeline |
| started_at | ISO 8601 UTC timestamp |
| finished_at | NULL until completion |
| status | "running" \| "success" \| "failed" \| "cancelled" |
| trigger_input | JSON string of input variables used |
| resumed_from | Original run ID if this is a resume |
| failed_node_id | ID of the first node that failed |
| pipeline_hash | SHA-256 of all instructions |
Step Record
| Field | Description |
|---|---|
| run_id | References the parent run |
| node_id | Pipeline node ID |
| node_name | Display name at execution time |
| started_at / finished_at | Execution timing |
| status | Node execution status |
| exit_code | Process exit code (0 = success) |
| log_output | Full captured stdout + stderr |
| attempt | Retry attempt number (starts at 1) |
| cost_usd | Claude API cost (NULL for non-AI nodes) |
| model | AI model name if tracked |
| approval_state | "approved" \| "rejected" for approval gates |
| instructions_hash | SHA-256 for resume detection |
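The two tables above can be sketched as SQLite DDL. The column names follow the field lists, but the SQL types and constraints here are assumptions for illustration, not AgentFlow's actual schema:

```python
import sqlite3

# Hypothetical schema matching the Run Record and Step Record fields;
# types are illustrative guesses, not the app's real migration.
SCHEMA = """
CREATE TABLE runs (
    id             TEXT PRIMARY KEY,   -- "run-{timestamp_ms}"
    pipeline_name  TEXT NOT NULL,
    started_at     TEXT NOT NULL,      -- ISO 8601 UTC
    finished_at    TEXT,               -- NULL until completion
    status         TEXT NOT NULL,
    trigger_input  TEXT,               -- JSON string
    resumed_from   TEXT,
    failed_node_id TEXT,
    pipeline_hash  TEXT
);
CREATE TABLE run_steps (
    run_id            TEXT NOT NULL REFERENCES runs(id),
    node_id           TEXT NOT NULL,
    node_name         TEXT,
    started_at        TEXT,
    finished_at       TEXT,
    status            TEXT,
    exit_code         INTEGER,
    log_output        TEXT,
    attempt           INTEGER DEFAULT 1,
    cost_usd          REAL,            -- NULL for non-AI nodes
    model             TEXT,
    approval_state    TEXT,
    instructions_hash TEXT
);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(SCHEMA)
```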
Filtering
The run history panel supports filtering by pipeline name, status (success, failed, cancelled), and date range. Filters persist across sessions so you can quickly return to the runs you care about.
Approval Handling
When execution reaches an approval gate:
- An `approval-requested` event is emitted
- The UI shows a dialog with the node's instructions
- The user clicks Approve or Reject
- `respond_to_approval(approved: bool)` is called via IPC
- Execution continues (on approve) or fails (on reject)
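The wait-then-respond handshake can be modeled as a one-shot signal. This thread-based sketch is only an analogy for the async Rust original; the `ApprovalGate` class and its method names are assumptions:

```python
import threading

class ApprovalGate:
    """Minimal gate model: the executor blocks in wait() until the UI
    side calls respond(); the real app does this over Tauri IPC."""
    def __init__(self):
        self._event = threading.Event()
        self._approved = False

    def respond(self, approved: bool):
        """UI side: record the decision and wake the waiting executor."""
        self._approved = approved
        self._event.set()

    def wait(self, timeout=None):
        """Executor side: block until a decision arrives."""
        if not self._event.wait(timeout):
            raise TimeoutError("no approval response")
        return self._approved
```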
Desktop Notifications
AgentFlow sends native OS notifications for key pipeline events, even when the app is minimized or in the background.
- Run completed: Notifies when a pipeline finishes successfully
- Run failed: Alerts immediately when a node fails
- Approval requested: Prompts you to review and approve/reject a gate
Run State Isolation
Run results are scoped to their pipeline. Switching between pipelines preserves each pipeline's run state independently — the canvas and LiveLog only show results matching the currently selected pipeline.
Expandable Node Output
During and after execution, each node on the canvas can be expanded to show its full output inline. Click a completed node to toggle its output panel, which displays the captured stdout/stderr with syntax highlighting.
Export Run Reports
Completed runs can be exported as JSON reports containing full run metadata, per-node results, logs, timing, and cost breakdowns. Use these for auditing, sharing with teammates, or feeding into external dashboards.