Running a WASM Task via the Propeller Kubernetes Operator
This example provisions SuperMQ, deploys the Propeller Kubernetes Operator, registers an external proplet, and runs a WebAssembly addition task.
For a complete reference of the operator's architecture, Custom Resource Definitions, configuration options, and MQTT communication patterns, see the Propeller Kubernetes Operator documentation.
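The cluster-creation log below comes from a plain k3d invocation; with no name argument, k3d uses the default cluster name k3s-default:

```shell
# Creates a single-server local cluster named k3s-default.
k3d cluster create
```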
INFO[0000] Created image volume k3d-k3s-default-images
INFO[0000] Starting new tools node...
INFO[0001] Creating node 'k3d-k3s-default-server-0'
INFO[0004] Pulling image 'docker.io/rancher/k3s:v1.31.5-k3s1'
INFO[0005] Pulling image 'ghcr.io/k3d-io/k3d-tools:5.8.3'
INFO[0081] Starting node 'k3d-k3s-default-tools'
INFO[0082] Creating LoadBalancer 'k3d-k3s-default-serverlb'
INFO[0086] Pulling image 'ghcr.io/k3d-io/k3d-proxy:5.8.3'
INFO[0115] Using the k3d-tools node to gather environment information
INFO[0116] HostIP: using network gateway 172.18.0.1 address
INFO[0116] Starting cluster 'k3s-default'
INFO[0116] Starting servers...
INFO[0116] Starting node 'k3d-k3s-default-server-0'
INFO[0127] All agents already running.
INFO[0127] Starting helpers...
INFO[0127] Starting node 'k3d-k3s-default-serverlb'
INFO[0134] Injecting records for hostAliases (incl. host.k3d.internal) and for 2 network members into CoreDNS configmap...
INFO[0136] Cluster 'k3s-default' created successfully!
INFO[0136] You can now use it like this:
kubectl cluster-info
Verify the cluster is reachable:
kubectl cluster-info
Your output should look like this:
Kubernetes control plane is running at https://0.0.0.0:34963
CoreDNS is running at https://0.0.0.0:34963/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://0.0.0.0:34963/api/v1/namespaces/kube-system/services/https:metrics-server:https/proxy
The port (34963 above) is assigned randomly by k3d and will differ on your machine.
Verify the node is Ready:
kubectl get nodes
Your output should look like this:
NAME                       STATUS   ROLES                  AGE   VERSION
k3d-k3s-default-server-0   Ready    control-plane,master   13s   v1.31.5+k3s1
Create a namespace for workload resources (Proplets and Tasks). The operator's own namespace (propeller-k8s-operator-system) is created automatically by make deploy in Step 8:
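A minimal sketch of the namespace command; the namespace name `propeller` here is an assumption, so substitute whatever name you use for workload resources:

```shell
# Namespace for Proplet and Task resources (name is an assumption).
kubectl create namespace propeller
```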
The operator communicates with proplets over MQTT via the Magistrala broker stack. Start the full Propeller Docker stack (broker services + Propeller services) from the propeller repository:
cd propeller
make start-all
Your output should look like this:
Container magistrala-spicedb-db Started
Container magistrala-auth-redis Started
Container magistrala-users-db Started
Container magistrala-channels-db Started
Container magistrala-clients-db Started
Container magistrala-fluxmq-node1 Started
Container magistrala-fluxmq-node2 Started
Container magistrala-fluxmq-node3 Started
Container magistrala-fluxmq-auth Started
Container magistrala-nginx Started
Container magistrala-users Started
Container magistrala-channels Started
Container magistrala-clients Started
Container magistrala-auth Started
Container magistrala-domains Started
Container propeller-manager Starting
Container propeller-proxy Starting
Container propeller-proplet Starting
Container propeller-proplet Started
Container propeller-proxy Started
Container propeller-manager Started
propeller-proplet         Up 15 seconds (healthy)
propeller-proxy           Up 15 seconds
propeller-manager         Up 15 seconds
magistrala-nginx          Up 37 seconds
magistrala-clients        Up 35 seconds
magistrala-channels       Up 35 seconds
magistrala-users          Up 33 seconds
magistrala-auth           Up 30 seconds
magistrala-domains        Up 28 seconds
magistrala-fluxmq-node1   Up 37 seconds
magistrala-fluxmq-node2   Up 37 seconds
magistrala-fluxmq-node3   Up 37 seconds
magistrala-fluxmq-auth    Up 37 seconds
magistrala-spicedb        Up 37 seconds
magistrala-users-db       Up 37 seconds
magistrala-channels-db    Up 37 seconds
magistrala-clients-db     Up 37 seconds
magistrala-auth-db        Up 37 seconds
magistrala-auth-redis     Up 37 seconds
magistrala-jaeger         Up 37 seconds
The propeller-manager, propeller-proplet, and propeller-proxy containers start but will fail to connect until you provision credentials. That is expected at this stage.
The operator and proplet need SuperMQ credentials (a domain, a channel, and two client IDs with keys) to authenticate over MQTT. Use propeller-cli provision to create all these resources through the SuperMQ HTTP API and write the result directly to docker/config.toml.
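Run the provisioning command from the propeller repository root; the exact binary location is an assumption here and depends on how you built the CLI:

```shell
# Launches the interactive provisioning form described below.
propeller-cli provision
```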
The CLI opens an interactive form. Fill in your admin credentials and press Enter to accept the suggested defaults for the optional fields:
? Username: admin
? Password: ••••••••
? Domain name (leave blank to generate): propeller-test
? Domain route (leave blank to generate): propeller-test
? Domain permission: admin
? Manager client name (leave blank to generate): propeller-manager
? Number of proplets: 1
? Channel name (leave blank to generate): propeller-channel
After completing the form, the CLI provisions all resources and writes docker/config.toml. Confirm the file was created:
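One quick way to confirm the file exists:

```shell
# The file should contain the domain, channel, and client credentials just created.
ls -l docker/config.toml
```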
The Docker containers read credentials from environment variables in docker/.env. Update those values to match the new credentials you just provisioned.
Open docker/.env and update the following six lines. Find each one by searching for the variable name and replace the old UUID with your new one:
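A generic way to locate candidate lines; the patterns below are guesses at naming conventions, not the actual variable names from docker/.env, so check the file itself for the six keys:

```shell
# Hypothetical patterns -- adjust to the real variable names in docker/.env.
grep -n -E "CLIENT_ID|CLIENT_KEY|CHANNEL_ID|DOMAIN_ID" docker/.env
```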
INFO Starting Proplet (Rust) - Instance ID: dd8fdbc0-148d-46a2-9df6-ed7b3ec80184
INFO MQTT client created (TLS: false)
INFO Using Wasmtime runtime
INFO Starting MQTT event loop
INFO Starting PropletService
INFO Published discovery message
INFO Subscribed to topic: m/3053156a-1994-4776-9e18-8c5d8883647c/c/c19ded40-eeec-448e-86e9-42f490c766a4/control/manager/start
INFO Subscribed to topic: m/3053156a-1994-4776-9e18-8c5d8883647c/c/c19ded40-eeec-448e-86e9-42f490c766a4/control/manager/stop
INFO Subscribed to topic: m/3053156a-1994-4776-9e18-8c5d8883647c/c/c19ded40-eeec-448e-86e9-42f490c766a4/registry/server
INFO Successfully re-subscribed to topics after reconnection
The proplet is subscribed to your new domain and channel. Note the Instance ID in the first line—this is a randomly generated UUID created at startup. It is distinct from the client_id.
The proplet_id field in alive messages is the proplet's client_id. The operator matches this value against the spec.connectionConfig.clientId field in Proplet CRs to identify which Proplet a message belongs to.
The operator runs as a pod inside the k3d cluster. Deploying it requires four steps: configure credentials, build a Docker image, load it into k3d, and apply the manifests.
Local development workflow: This example builds the operator image locally and imports it directly into k3d, avoiding the need for a container registry. This is the fastest workflow for local development and testing. For production deployments, you would typically push the image to a registry and let Kubernetes pull it normally.
Open config/manager/manager.yaml and replace the placeholder values in the args list with the credentials provisioned in Step 5. Because the operator runs inside k3d, the MQTT address must use host.k3d.internal (the DNS name k3d injects for the host machine's Docker network gateway) rather than localhost:
Also add imagePullPolicy: Never immediately after the image: line so k3d uses the locally-loaded image without trying to pull it from a registry:
image: controller:latest
imagePullPolicy: Never
Replace all UUIDs above with the values from docker/config.toml (written in Step 5).
Registry-based images: If you push the image to a container registry instead of importing it locally, omit the imagePullPolicy: Never line (or set it to IfNotPresent). If your registry requires authentication, create an image pull secret and reference it in the Deployment:
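A sketch of creating such a secret; the secret name `regcred`, the registry host, and the environment variables are placeholders for your own values:

```shell
# Hypothetical names: secret "regcred", registry example.com.
kubectl create secret docker-registry regcred \
  --namespace propeller-k8s-operator-system \
  --docker-server=example.com \
  --docker-username="$REGISTRY_USER" \
  --docker-password="$REGISTRY_PASS"
```

Then reference the secret under `spec.template.spec.imagePullSecrets` in the operator Deployment.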
k3d nodes run their own containerd instance and cannot access images from the host Docker daemon directly. You must import the image explicitly so it is available in the cluster's containerd store:
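Assuming the image was built locally as `controller:latest` and the cluster uses the default name `k3s-default`, the import looks like this:

```shell
# Copies the image from the host Docker daemon into the cluster's containerd store.
k3d image import controller:latest -c k3s-default
```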
INFO[0000] Importing image(s) into cluster 'k3s-default'
INFO[0001] Saving 1 image(s) from runtime...
INFO[0005] Importing images into nodes...
INFO[0013] Successfully imported image(s)
INFO[0015] Successfully imported 1 image(s) into 1 cluster(s)
Skip this step if using a registry: if you pushed the image to a registry accessible from inside the cluster, skip 8.2 and 8.3 entirely and pass the registry image reference directly to make deploy in 8.4.
NAME                                                         READY   STATUS    RESTARTS   AGE
propeller-k8s-operator-controller-manager-5485b6db9d-kvz8d   1/1     Running   0          24s
Check the operator logs to confirm it started and connected to MQTT:
kubectl logs -n propeller-k8s-operator-system \
  deployment/propeller-k8s-operator-controller-manager | head -30
Your output should look like this:
2026-03-04T15:19:49Z INFO mqtt MQTT connection lost
2026-03-04T15:19:49Z INFO setup starting manager
2026-03-04T15:19:49Z INFO controller-runtime.metrics Starting metrics server
2026-03-04T15:19:49Z INFO Starting Controller {"controller": "proplet", ...}
2026-03-04T15:19:49Z INFO Starting workers {"controller": "proplet", ..., "worker count": 1}
2026-03-04T15:19:49Z INFO Starting Controller {"controller": "task", ...}
2026-03-04T15:19:49Z INFO Starting workers {"controller": "task", ..., "worker count": 1}
...
The first log line says MQTT connection lost. This is a mislabeled log in the operator source—it fires from the SetOnConnectHandler callback and actually means the MQTT connection was established. All five controllers start successfully.
Register the running docker proplet as an external Proplet CRD. The spec.connectionConfig.clientId must match the proplet_id the proplet sends in alive messages (verified in step 6 to be the proplet's client_id). See the Proplet spec reference for all available fields.
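A minimal sketch of such a Proplet CR, applied via a heredoc. The apiVersion, metadata, and all field names other than `spec.connectionConfig.clientId` (the one field this guide names) are assumptions; consult the Proplet spec reference for the real schema:

```shell
# Sketch only -- hypothetical apiVersion and field layout.
kubectl apply -f - <<'EOF'
apiVersion: propeller.example.com/v1alpha1   # hypothetical group/version
kind: Proplet
metadata:
  name: docker-proplet
spec:
  connectionConfig:
    clientId: <proplet-client-id-from-config.toml>   # must match proplet_id in alive messages
EOF
```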
Create a Task that targets the docker proplet. This uses the addition WASM module (the same binary used in the addition example)—a minimal module that exports a main function taking two integers and returning their sum. See the Task spec reference for all available fields.
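A minimal sketch of such a Task CR. Only the exported `main` function taking two integers is stated by this guide; the apiVersion, field names, and operands below are illustrative assumptions, so consult the Task spec reference for the real schema:

```shell
# Sketch only -- hypothetical apiVersion, field names, and inputs.
kubectl apply -f - <<'EOF'
apiVersion: propeller.example.com/v1alpha1   # hypothetical group/version
kind: Task
metadata:
  name: addition-task
spec:
  propletId: <proplet-client-id>   # hypothetical field: targets the docker proplet
  function: main                   # exported function in the addition module
  inputs: [40, 2]                  # illustrative operands
  file: <reference to addition.wasm>   # hypothetical field; see Task spec reference
EOF
```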
INFO Task 3732d66f-eb2b-471f-bace-024fa3f878e4 completed successfully. Result: 42
INFO Publishing result for task 3732d66f-eb2b-471f-bace-024fa3f878e4
DEBUG Published to topic: m/3053156a-1994-4776-9e18-8c5d8883647c/c/c19ded40-eeec-448e-86e9-42f490c766a4/control/proplet/results
INFO Successfully published result for task 3732d66f-eb2b-471f-bace-024fa3f878e4
The proplet executed the WASM binary with Wasmtime, published the result to control/proplet/results, and the operator received it and stored it in status.results.
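To read the stored result back out of the cluster, a jsonpath query against the Task works; the resource name is a placeholder, and the field path is assumed from this guide's mention of `status.results`:

```shell
# Prints the stored task result from the Task's status subresource.
kubectl get task <task-name> -o jsonpath='{.status.results}'
```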