DAG Workflows and Jobs
Execute multi-step workflows with task dependencies and jobs with execution modes
This example demonstrates two orchestration methods in Propeller:
- Workflows — Define explicit task dependencies using a DAG structure. Tasks specify which other tasks they depend on via depends_on.
- Jobs — Group tasks with a shared execution mode (parallel or sequential). No explicit dependencies—the mode controls execution order.
Both use the same underlying tasks but differ in how execution order is determined.
Part 1: DAG Workflow
This section demonstrates DAG (Directed Acyclic Graph) workflow execution. Workflows allow you to define task dependencies where child tasks only execute after their parent tasks complete successfully.
Prerequisites
Ensure you have:
- Propeller services running (make start-supermq)
- A provisioned config with valid credentials
- The addition.wasm module built
Source Code
This example uses the addition-wat example, which exports a main function that adds two i32 integers:
(module
(func (export "main") (param i32 i32) (result i32)
local.get 0
local.get 1
    i32.add))

Build WASM
Propeller executes tasks as WebAssembly modules, so we need to compile the source code to .wasm format before uploading.
Build the addition module from WAT (WebAssembly Text format):
cd propeller
wat2wasm examples/addition-wat/addition.wat -o build/addition.wasm

The command produces no output on success.
Verify the build:
file build/addition.wasm

Your output should look like this:
build/addition.wasm: WebAssembly (wasm) binary module version 0x1 (MVP)

This confirms the file is a valid WebAssembly binary. "Version 0x1 (MVP)" indicates WebAssembly 1.0, the first standardized version supported by all runtimes.
Create Workflow
Create a DAG workflow with three tasks. Three tasks is the minimum needed to demonstrate parallel fan-out: Task 1 is the root, and Task 2 and Task 3 both depend on it. When Task 1 completes, Tasks 2 and 3 start simultaneously.
Task IDs are auto-generated by Propeller if omitted. In this example, we explicitly set "id": "task-001" so that task-002 and task-003 can reference it in their depends_on arrays:
curl -X POST "http://localhost:7070/workflows" \
-H "Content-Type: application/json" \
-d '{
"tasks": [
{
"id": "task-001",
"name": "main",
"inputs": [10, 20]
},
{
"id": "task-002",
"name": "main",
"inputs": [5, 15],
"depends_on": ["task-001"]
},
{
"id": "task-003",
"name": "main",
"inputs": [100, 200],
"depends_on": ["task-001"]
}
]
}'

Your output should look like this:
{
"tasks": [
{
"id": "task-001",
"name": "main",
"state": 0,
"cli_args": null,
"inputs": [10, 20],
"daemon": false,
"encrypted": false,
"workflow_id": "33b97dd3-f719-40f0-8a45-87254c27406f",
"start_time": "0001-01-01T00:00:00Z",
"finish_time": "0001-01-01T00:00:00Z",
"created_at": "2026-03-01T15:21:52.857522851Z",
"updated_at": "0001-01-01T00:00:00Z",
"next_run": "0001-01-01T00:00:00Z"
},
{
"id": "task-002",
"name": "main",
"state": 0,
"cli_args": null,
"inputs": [5, 15],
"daemon": false,
"encrypted": false,
"depends_on": ["task-001"],
"workflow_id": "33b97dd3-f719-40f0-8a45-87254c27406f",
"start_time": "0001-01-01T00:00:00Z",
"finish_time": "0001-01-01T00:00:00Z",
"created_at": "2026-03-01T15:21:52.857523790Z",
"updated_at": "0001-01-01T00:00:00Z",
"next_run": "0001-01-01T00:00:00Z"
},
{
"id": "task-003",
"name": "main",
"state": 0,
"cli_args": null,
"inputs": [100, 200],
"daemon": false,
"encrypted": false,
"depends_on": ["task-001"],
"workflow_id": "33b97dd3-f719-40f0-8a45-87254c27406f",
"start_time": "0001-01-01T00:00:00Z",
"finish_time": "0001-01-01T00:00:00Z",
"created_at": "2026-03-01T15:21:52.857524208Z",
"updated_at": "0001-01-01T00:00:00Z",
"next_run": "0001-01-01T00:00:00Z"
}
]
}

The response shows all three tasks created with state: 0 (Pending). Key fields:
- workflow_id: All tasks share the same ID, linking them as a single workflow
- depends_on: Task 2 and Task 3 both list task-001, establishing the fan-out dependency
- inputs: The integer pairs that will be passed to the main function
- start_time / finish_time: Zero values (0001-01-01) indicate the tasks haven't run yet
For details on dependency validation, see Task and Dependency Fields.
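The dependency mechanics described above can be sketched in a few lines of Python. This is a hypothetical illustration, not Propeller's actual scheduler: a task becomes runnable only once every ID in its depends_on list has completed, so the root runs first and its children form a parallel batch.

```python
# Minimal fan-out scheduling sketch (illustration only, not
# Propeller's scheduler): map each task ID to its depends_on list.
tasks = {
    "task-001": [],            # root: no dependencies
    "task-002": ["task-001"],  # fan-out child
    "task-003": ["task-001"],  # fan-out child
}

def schedule(tasks):
    """Return batches of task IDs; tasks in a batch can run in parallel."""
    done, batches = set(), []
    while len(done) < len(tasks):
        ready = sorted(t for t, deps in tasks.items()
                       if t not in done and all(d in done for d in deps))
        if not ready:
            raise ValueError("cycle detected: not a DAG")
        batches.append(ready)
        done.update(ready)
    return batches

print(schedule(tasks))  # [['task-001'], ['task-002', 'task-003']]
```

The second batch contains both children, which is exactly the fan-out the workflow above exercises.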
Upload WASM
Since we built addition.wasm locally, we upload it directly to each task. For production workflows, you can use image_url instead—see WASM Module Deployment.
Task 1
curl -X PUT "http://localhost:7070/tasks/task-001/upload" \
-F "file=@build/addition.wasm"

Your output should look like this:
{
"id": "task-001",
"name": "main",
"state": 0,
"file": "AGFzbQEAAAABBwFgAn9/AX8DAgEABwgBBG1haW4AAAoJAQcAIAAgAWoL",
"cli_args": null,
"inputs": [10, 20],
"daemon": false,
"encrypted": false,
"workflow_id": "33b97dd3-f719-40f0-8a45-87254c27406f",
"start_time": "0001-01-01T00:00:00Z",
"finish_time": "0001-01-01T00:00:00Z",
"created_at": "2026-03-01T15:21:52.857522851Z",
"updated_at": "2026-03-01T15:21:55.123456789Z",
"next_run": "0001-01-01T00:00:00Z"
}

The file field now contains the WASM binary encoded as base64. This is the actual module that will be sent to the proplet for execution. The updated_at timestamp reflects when the upload occurred.
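You can confirm the base64 payload really is a WebAssembly module. The check below is a standalone sketch (not part of the Propeller API) using the file value from the response above: every WebAssembly binary begins with the magic bytes "\0asm" followed by the version number (1 = MVP) as a little-endian 32-bit integer.

```python
import base64

# The "file" field from the upload response shown above.
encoded = "AGFzbQEAAAABBwFgAn9/AX8DAgEABwgBBG1haW4AAAoJAQcAIAAgAWoL"
module = base64.b64decode(encoded)

# WebAssembly magic bytes: "\0asm", then the version as little-endian u32.
assert module[:4] == b"\x00asm"
assert int.from_bytes(module[4:8], "little") == 1
print(f"valid wasm module, {len(module)} bytes")
```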
Tasks 2 and 3
Repeat the same upload for the remaining tasks. The response structure is identical—only the task ID, inputs, and depends_on fields differ:
curl -X PUT "http://localhost:7070/tasks/task-002/upload" \
-F "file=@build/addition.wasm"
curl -X PUT "http://localhost:7070/tasks/task-003/upload" \
-F "file=@build/addition.wasm"

Start Workflow
Start the root task (task-001). The manager automatically schedules dependent tasks (task-002 and task-003) when their dependencies complete:
curl -X POST "http://localhost:7070/tasks/task-001/start"

Your output should look like this:
{ "started": true }

Check Workflow Status
Query all tasks to see execution status. To narrow the listing to this workflow, pipe the response through jq and select on workflow_id:

curl "http://localhost:7070/tasks" | jq '.tasks[] | select(.workflow_id == "33b97dd3-f719-40f0-8a45-87254c27406f")'

After completion, the full listing looks like this (the jq filter above prints only the three matching task objects, without the offset/limit/total envelope):
{
"offset": 0,
"limit": 100,
"total": 3,
"tasks": [
{
"id": "task-001",
"name": "main",
"state": 3,
"file": "AGFzbQEAAAABBwFgAn9/AX8DAgEABwgBBG1haW4AAAoJAQcAIAAgAWoL",
"cli_args": null,
"inputs": [10, 20],
"daemon": false,
"encrypted": false,
"proplet_id": "ec182939-7940-4b25-869e-47b245ddec09",
"workflow_id": "33b97dd3-f719-40f0-8a45-87254c27406f",
"results": "30\n",
"start_time": "2026-03-01T15:22:08.180655563Z",
"finish_time": "2026-03-01T15:22:08.309737353Z",
"created_at": "2026-03-01T15:21:52.857522851Z",
"updated_at": "2026-03-01T15:22:08.309737353Z",
"next_run": "0001-01-01T00:00:00Z"
},
{
"id": "task-002",
"name": "main",
"state": 3,
"file": "AGFzbQEAAAABBwFgAn9/AX8DAgEABwgBBG1haW4AAAoJAQcAIAAgAWoL",
"cli_args": null,
"inputs": [5, 15],
"daemon": false,
"encrypted": false,
"depends_on": ["task-001"],
"proplet_id": "ec182939-7940-4b25-869e-47b245ddec09",
"workflow_id": "33b97dd3-f719-40f0-8a45-87254c27406f",
"results": "20\n",
"start_time": "2026-03-01T15:22:08.333634221Z",
"finish_time": "2026-03-01T15:22:08.513905318Z",
"created_at": "2026-03-01T15:21:52.857523790Z",
"updated_at": "2026-03-01T15:22:08.513905318Z",
"next_run": "0001-01-01T00:00:00Z"
},
{
"id": "task-003",
"name": "main",
"state": 3,
"file": "AGFzbQEAAAABBwFgAn9/AX8DAgEABwgBBG1haW4AAAoJAQcAIAAgAWoL",
"cli_args": null,
"inputs": [100, 200],
"daemon": false,
"encrypted": false,
"depends_on": ["task-001"],
"proplet_id": "ec182939-7940-4b25-869e-47b245ddec09",
"workflow_id": "33b97dd3-f719-40f0-8a45-87254c27406f",
"results": "300\n",
"start_time": "2026-03-01T15:22:08.371892377Z",
"finish_time": "2026-03-01T15:22:08.537613640Z",
"created_at": "2026-03-01T15:21:52.857524208Z",
"updated_at": "2026-03-01T15:22:08.537613640Z",
"next_run": "0001-01-01T00:00:00Z"
}
]
}

Results:

- Task 1: 10 + 20 = 30
- Task 2: 5 + 15 = 20 (started after Task 1 completed)
- Task 3: 100 + 200 = 300 (started after Task 1 completed, ran in parallel with Task 2)
Key fields in this example:
| Field | Value | Meaning |
|---|---|---|
| state | 3 | All three tasks completed successfully |
| results | 30, 20, 300 | Output from each main(a, b) call |
| proplet_id | ec182939-... | The single proplet that ran all tasks |
| start_time | 15:22:08.180 (task-001), .333 (task-002), .371 (task-003) | Tasks 2 and 3 started ~150ms after Task 1, confirming fan-out |
| depends_on | ["task-001"] | Only present on task-002 and task-003 |
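Instead of re-running the status query by hand, a small polling helper can wait until every task reports completion. This is a hypothetical sketch: fetch_tasks stands in for however you issue the GET request (for example, a urllib wrapper around the /tasks endpoint), and it relies only on the state field shown above, where 0 is Pending and 3 is Completed.

```python
import time

COMPLETED = 3  # state value for a successfully finished task

def wait_for_workflow(fetch_tasks, timeout=30.0, interval=0.5):
    """Poll until every task reports COMPLETED, or time out.

    fetch_tasks is any zero-argument callable returning the list of
    task dicts for the workflow (e.g. a wrapper around GET /tasks).
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        tasks = fetch_tasks()
        if tasks and all(t["state"] == COMPLETED for t in tasks):
            return tasks
        time.sleep(interval)
    raise TimeoutError("workflow did not complete in time")

# Demo with canned responses standing in for the HTTP calls:
responses = iter([
    [{"id": "task-001", "state": 0}],   # root still pending
    [{"id": "task-001", "state": 3},
     {"id": "task-002", "state": 3},
     {"id": "task-003", "state": 3}],   # fan-out children done
])
done = wait_for_workflow(lambda: next(responses), interval=0.01)
print([t["id"] for t in done])  # ['task-001', 'task-002', 'task-003']
```

Injecting the fetch callable keeps the helper testable without a running manager; in practice you would pass a function that performs the curl-equivalent HTTP request.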
For conditional execution with run_if and DAG validation rules, see Task Scheduling.
Part 2: Job
This section demonstrates job-based execution. Jobs group tasks with a shared execution mode—no explicit dependencies. The mode controls whether tasks run in parallel or sequentially.
Create Job
Create a job with parallel execution mode. All tasks in a parallel job start simultaneously:
curl -X POST "http://localhost:7070/jobs" \
-H "Content-Type: application/json" \
-d '{
"name": "addition-pipeline",
"execution_mode": "parallel",
"tasks": [
{
"id": "job-task-001",
"name": "main",
"inputs": [100, 50]
},
{
"id": "job-task-002",
"name": "main",
"inputs": [25, 75]
},
{
"id": "job-task-003",
"name": "main",
"inputs": [200, 300]
}
]
}'

Your output should look like this:
{
"job_id": "981a9c91-92e2-4518-a0b6-46b8b590d9bb",
"tasks": [
{
"id": "job-task-001",
"name": "main",
"state": 0,
"cli_args": null,
"inputs": [100, 50],
"daemon": false,
"encrypted": false,
"job_id": "981a9c91-92e2-4518-a0b6-46b8b590d9bb",
"start_time": "0001-01-01T00:00:00Z",
"finish_time": "0001-01-01T00:00:00Z",
"created_at": "2026-03-01T17:05:45.493390418Z",
"updated_at": "0001-01-01T00:00:00Z",
"next_run": "0001-01-01T00:00:00Z"
},
{
"id": "job-task-002",
"name": "main",
"state": 0,
"cli_args": null,
"inputs": [25, 75],
"daemon": false,
"encrypted": false,
"job_id": "981a9c91-92e2-4518-a0b6-46b8b590d9bb",
"start_time": "0001-01-01T00:00:00Z",
"finish_time": "0001-01-01T00:00:00Z",
"created_at": "2026-03-01T17:05:45.493390418Z",
"updated_at": "0001-01-01T00:00:00Z",
"next_run": "0001-01-01T00:00:00Z"
},
{
"id": "job-task-003",
"name": "main",
"state": 0,
"cli_args": null,
"inputs": [200, 300],
"daemon": false,
"encrypted": false,
"job_id": "981a9c91-92e2-4518-a0b6-46b8b590d9bb",
"start_time": "0001-01-01T00:00:00Z",
"finish_time": "0001-01-01T00:00:00Z",
"created_at": "2026-03-01T17:05:45.493390418Z",
"updated_at": "0001-01-01T00:00:00Z",
"next_run": "0001-01-01T00:00:00Z"
}
]
}

Key differences from workflows:

- job_id: Tasks share a job_id instead of a workflow_id
- No depends_on: Tasks have no explicit dependencies—execution order is controlled by execution_mode
- execution_mode: Set to parallel (all tasks start simultaneously) or sequential (tasks run one at a time)
Upload WASM
Upload the same addition.wasm module to each task:
curl -X PUT "http://localhost:7070/tasks/job-task-001/upload" \
-F "file=@build/addition.wasm"
curl -X PUT "http://localhost:7070/tasks/job-task-002/upload" \
-F "file=@build/addition.wasm"
curl -X PUT "http://localhost:7070/tasks/job-task-003/upload" \
-F "file=@build/addition.wasm"

Start Job
Start the job using the job endpoint (not individual tasks):
curl -X POST "http://localhost:7070/jobs/981a9c91-92e2-4518-a0b6-46b8b590d9bb/start"

Your output should look like this:
{
"job_id": "981a9c91-92e2-4518-a0b6-46b8b590d9bb",
"message": "job started"
}

Check Job Status
Query the job to see all task results:
curl "http://localhost:7070/jobs/981a9c91-92e2-4518-a0b6-46b8b590d9bb"

After completion, your output should look like this:
{
"job_id": "981a9c91-92e2-4518-a0b6-46b8b590d9bb",
"tasks": [
{
"id": "job-task-001",
"name": "main",
"state": 3,
"file": "AGFzbQEAAAABBwFgAn9/AX8DAgEABwgBBG1haW4AAAoJAQcAIAAgAWoL",
"cli_args": null,
"inputs": [100, 50],
"daemon": false,
"encrypted": false,
"proplet_id": "ec182939-7940-4b25-869e-47b245ddec09",
"job_id": "981a9c91-92e2-4518-a0b6-46b8b590d9bb",
"results": "150\n",
"start_time": "2026-03-01T17:06:17.452922184Z",
"finish_time": "2026-03-01T17:06:17.683702371Z",
"created_at": "2026-03-01T17:05:45.493390418Z",
"updated_at": "2026-03-01T17:06:17.683702145Z",
"next_run": "0001-01-01T00:00:00Z"
},
{
"id": "job-task-002",
"name": "main",
"state": 3,
"file": "AGFzbQEAAAABBwFgAn9/AX8DAgEABwgBBG1haW4AAAoJAQcAIAAgAWoL",
"cli_args": null,
"inputs": [25, 75],
"daemon": false,
"encrypted": false,
"proplet_id": "ec182939-7940-4b25-869e-47b245ddec09",
"job_id": "981a9c91-92e2-4518-a0b6-46b8b590d9bb",
"results": "100\n",
"start_time": "2026-03-01T17:06:17.468977763Z",
"finish_time": "2026-03-01T17:06:17.674829758Z",
"created_at": "2026-03-01T17:05:45.493390418Z",
"updated_at": "2026-03-01T17:06:17.674829554Z",
"next_run": "0001-01-01T00:00:00Z"
},
{
"id": "job-task-003",
"name": "main",
"state": 3,
"file": "AGFzbQEAAAABBwFgAn9/AX8DAgEABwgBBG1haW4AAAoJAQcAIAAgAWoL",
"cli_args": null,
"inputs": [200, 300],
"daemon": false,
"encrypted": false,
"proplet_id": "ec182939-7940-4b25-869e-47b245ddec09",
"job_id": "981a9c91-92e2-4518-a0b6-46b8b590d9bb",
"results": "500\n",
"start_time": "2026-03-01T17:06:17.48555379Z",
"finish_time": "2026-03-01T17:06:17.664597381Z",
"created_at": "2026-03-01T17:05:45.493390418Z",
"updated_at": "2026-03-01T17:06:17.664597204Z",
"next_run": "0001-01-01T00:00:00Z"
}
]
}

Results:

- Task 1: 100 + 50 = 150
- Task 2: 25 + 75 = 100
- Task 3: 200 + 300 = 500
Key fields in this example:
| Field | Value | Meaning |
|---|---|---|
| state | 3 | All three tasks completed successfully |
| results | 150, 100, 500 | Output from each main(a, b) call |
| proplet_id | ec182939-... | The single proplet that ran all tasks |
| start_time | 17:06:17.452, .468, .485 | All tasks started within 33ms—parallel execution |
| job_id | 981a9c91-... | All tasks share the same job ID |
Sequential Job
Create a job with sequential execution mode. Tasks run one at a time in order:
curl -X POST "http://localhost:7070/jobs" \
-H "Content-Type: application/json" \
-d '{
"name": "sequential-pipeline",
"execution_mode": "sequential",
"tasks": [
{
"id": "seq-task-001",
"name": "main",
"inputs": [1, 2]
},
{
"id": "seq-task-002",
"name": "main",
"inputs": [3, 4]
},
{
"id": "seq-task-003",
"name": "main",
"inputs": [5, 6]
}
]
}'

Upload WASM to each task and start the job, using the job_id returned by the create response:
curl -X PUT "http://localhost:7070/tasks/seq-task-001/upload" -F "file=@build/addition.wasm"
curl -X PUT "http://localhost:7070/tasks/seq-task-002/upload" -F "file=@build/addition.wasm"
curl -X PUT "http://localhost:7070/tasks/seq-task-003/upload" -F "file=@build/addition.wasm"
curl -X POST "http://localhost:7070/jobs/38c745f1-b5c8-4e72-8c0a-f269ec0637f5/start"

After completion:
{
"job_id": "38c745f1-b5c8-4e72-8c0a-f269ec0637f5",
"tasks": [
{
"id": "seq-task-001",
"name": "main",
"state": 3,
"inputs": [1, 2],
"results": "3\n",
"start_time": "2026-03-01T17:13:42.099469689Z",
"finish_time": "2026-03-01T17:13:42.239309087Z"
},
{
"id": "seq-task-002",
"name": "main",
"state": 3,
"inputs": [3, 4],
"results": "7\n",
"start_time": "2026-03-01T17:13:42.120860495Z",
"finish_time": "2026-03-01T17:13:42.269139758Z"
},
{
"id": "seq-task-003",
"name": "main",
"state": 3,
"inputs": [5, 6],
"results": "11\n",
"start_time": "2026-03-01T17:13:42.153693284Z",
"finish_time": "2026-03-01T17:13:42.287081364Z"
}
]
}

Results:

- Task 1: 1 + 2 = 3
- Task 2: 3 + 4 = 7
- Task 3: 5 + 6 = 11
Execution Mode Comparison
| Mode | Behavior | Start Times (this example) |
|---|---|---|
| parallel | All tasks start simultaneously | .452, .468, .485 (33ms spread) |
| sequential | Tasks start one after another | .099, .120, .153 (staggered) |
With fast-executing tasks like these, the timing difference is subtle. For long-running tasks the distinction matters: sequential mode ensures each task completes before the next starts, so a failure halts the remaining tasks (fail-fast behavior).
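The fail-fast behavior can be sketched as follows. This is a hypothetical illustration, not Propeller's implementation: when a task raises an error, the loop records the failure and never starts the remaining tasks.

```python
def run_sequential(tasks):
    """Run (name, callable) pairs in order, stopping at the first failure."""
    results = []
    for name, task in tasks:
        try:
            results.append((name, task()))
        except Exception as exc:
            results.append((name, f"failed: {exc}"))
            break  # fail fast: skip the remaining tasks
    return results

tasks = [
    ("seq-task-001", lambda: 1 + 2),
    ("seq-task-002", lambda: 1 // 0),   # simulated failure
    ("seq-task-003", lambda: 5 + 6),    # never started
]
print(run_sequential(tasks))
```

Only two entries come back: the successful first task and the failed second; the third task is never invoked, which is the property sequential mode provides for long-running pipelines.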
For a comparison of workflows vs jobs and detailed job configuration options, see Workflow vs Job Comparison.