The Workload resource represents a scalable containerized service deployed and managed by a container orchestration system such as Kubernetes.

When running locally within the Wing Simulator, either during development or during builds, workloads are implemented using local Docker images.

When running on the cloud, workloads become Kubernetes applications, built and published to an image registry and deployed to a Kubernetes cluster using Helm. We currently only support AWS/EKS, but support for other platforms is planned.

It will also be possible for platforms to implement workloads using any other compatible container orchestration system such as Amazon ECS, or ControlPlane.

⚠️ This resource is still experimental. Please ping the team on the Wing Discord if you encounter any issues or have any questions, and let us know what you think. See the roadmap below for more details about our plans.


For the time being, in order to use this resource you will first need to install @winglibs/containers from npm:

npm i @winglibs/containers

You will also need Docker or OrbStack installed on your system in order for workloads to work in the Wing Simulator.


In your code, just bring containers and define workloads using the containers.Workload class.

Check out a few examples below or jump to the full API Reference.

Using an image from a registry

Let's start with a simple example which defines a workload based on the hashicorp/http-echo image. It is a simple HTTP server listening on port 5678 that responds with a message.

bring containers;

let hello = new containers.Workload(
name: "hello",
image: "hashicorp/http-echo",
port: 5678,
public: true,
args: ["-text=hello, wingnuts!"],
);

In order to test this workload, we can use publicUrl, which resolves to a publicly accessible route into your container. And if you were wondering: yes, this also works on the cloud! Every workload with public: true will have a URL that can be used to access it from the web.

bring http;
bring expect;

test "message is returned in http body" {
let url = hello.publicUrl ?? "FAIL";
let body = http.get(url).body ?? "FAIL";
expect.equal(body, "hello, wingnuts!\n");
}

Building an image from source

Workloads can also be based on an image defined through a Dockerfile within your project. The image is automatically built during compilation and published to a container registry during deployment.

Let's define a workload based on the Docker image built from the Dockerfile in the ./backend directory:

bring containers;

new containers.Workload(
name: "backend",
image: "./backend",
port: 3000,
public: true
);

Under ./backend, create the following Dockerfile:


FROM node:20.8.0-alpine
ADD index.js /app/index.js
ENTRYPOINT [ "node", "/app/index.js" ]


And in index.js, a minimal HTTP server:

const http = require('http');

process.on('SIGINT', () => process.exit(0));

const server = http.createServer((req, res) => {
  console.log(`request received: ${req.method} ${req.url}`);
  res.end('Hello, Wingnuts!');
});

server.listen(3000, () => {
  console.log('listening on port 3000');
});

Defining multiple workloads as microservices

Using internalUrl, it is possible to reach workloads without having to expose them publicly.

Let's combine the last two examples by deploying the http-echo container and calling it from within our backend container:

bring containers;

let echo = new containers.Workload(
name: "echo",
image: "hashicorp/http-echo",
port: 5678,
args: ["-text=hello, wingnuts!"],
) as "echo";

let backend = new containers.Workload(
name: "backend",
image: "./backend",
port: 3000,
public: true,
env: {
ECHO_URL: echo.internalUrl
}
) as "backend";

In the backend/index.js file, we can access the internal URL of the echo workload through process.env.ECHO_URL.
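As an illustration, the backend's handler might forward a request to the echo service and wrap its response. This is a hypothetical sketch, not the linked example's code: the `fetchText` parameter stands in for an HTTP client so the logic can be exercised without a running echo container.

```javascript
// Hypothetical helper for backend/index.js: call the echo workload
// through the ECHO_URL environment variable injected via `env` above.
// `fetchText` abstracts the HTTP client (e.g. fetch/http.get).
async function callEcho(fetchText, echoUrl) {
  const echoed = await fetchText(echoUrl);
  return `echo says: ${echoed.trim()}`;
}

// In the real request handler you would do something like:
//   res.end(await callEcho(httpGet, process.env.ECHO_URL));
```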

Check out the full microservice example here.

API Reference

name: str

This is a required option and must be a unique name for the workload within the application.

In the tf-aws target, this name will be used as the name of the Helm chart and the name of all the resources associated with the workload in your Kubernetes cluster.

image: str

This is another required option. It can either be the name of a publicly available Docker image or a relative path to a Docker build context directory (one that contains a Dockerfile).

port: num?

The internal port on which the container listens. This option is required in order to connect to a server running inside the container.

public: bool?

If this option is enabled, this workload will be accessible from the public internet through the URL returned from publicUrl. When disabled, the container can only be accessed by other workloads in the application via its internalUrl.

When running in sim, the container will be accessible through a localhost port.

When running on tf-aws (EKS), an Ingress resource will be defined for this workload and an ALB (Application Load Balancer) will be allocated. The publicUrl of this workload will contain the fully qualified host name that can be used to access the container from the public internet.

By default, containers are only accessible from within the same application through their internalUrl.

When public is enabled, port must also be set.
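The interaction between these options can be sketched as a small validation function. This is assumed behavior inferred from the rules above, not the library's actual code:

```javascript
// Sketch of the option constraints described above: `name` is
// required, and `public: true` requires `port` to be set.
function validateWorkloadProps(props) {
  if (!props.name) {
    throw new Error('"name" is required');
  }
  if (props.public && props.port === undefined) {
    throw new Error('"port" must be set when "public" is enabled');
  }
  return props;
}
```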

readiness: str?

If this is specified, it is the URL path to send HTTP GET requests to in order to determine that the workload has finished initialization.

When deployed to Kubernetes, this is implemented using a readiness probe.

By default, readiness probes are disabled.
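To pass such a probe, the container must answer HTTP GET requests on the configured path with a 2xx status. A minimal sketch of the container-side logic (the `/ready` path and the handler shape are illustrative assumptions):

```javascript
// Status a readiness GET should receive. Kubernetes treats 2xx as
// "ready"; anything else keeps the pod out of Service rotation.
function readinessStatus(path, initialized) {
  if (path !== '/ready') {
    return 404; // not the probe path (example path, set via `readiness`)
  }
  return initialized ? 200 : 503;
}
```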

replicas: num?

Defines the number of container instances needed for this workload.

When running in the simulator, this option is ignored and there is always a single container.

When running in Kubernetes, this is implemented by setting replicas in the Deployment resource that defines this workload.

By default this is set to 1 replica.

sources: Array<str>?

A list of glob patterns which are used to match the source files of the container. If any of these files change, the image is rebuilt and invalidated. This is only relevant if the image is built from source.

By default, this is all the files under the image directory.

args and env

  • args: Array<str>? (optional): arguments to pass to the entrypoint.
  • env: Map<str>? (optional): environment variables.
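In the simulator, these options roughly correspond to the `-e` flags and trailing arguments of a `docker run` invocation. The mapping below is an assumption for illustration, not the simulator's actual command line:

```javascript
// Translate Workload props into an illustrative `docker run` argv.
function dockerRunArgv(props) {
  const argv = ['docker', 'run', '--rm'];
  if (props.port !== undefined) {
    argv.push('-p', String(props.port));
  }
  for (const [key, value] of Object.entries(props.env ?? {})) {
    argv.push('-e', `${key}=${value}`); // env vars -> -e KEY=VALUE
  }
  argv.push(props.image);
  argv.push(...(props.args ?? [])); // args follow the image name
  return argv;
}
```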

Target-specific details

Simulator (sim)

When executed in the Wing Simulator, the workload is started within a local Docker container.

AWS (tf-aws)

Workloads are deployed to a Kubernetes cluster running on Amazon EKS.

For each application, a Helm chart is synthesized with a Deployment, a Service, and, if the workload is public, an Ingress as well.
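The chart's contents can be pictured schematically as follows. Field names follow standard Kubernetes manifests, but the actual chart emitted by the library will differ in detail:

```javascript
// Schematic of the manifests a workload's chart would contain:
// a Deployment, a Service, and (for public workloads) an Ingress.
function synthManifests(props) {
  const labels = { app: props.name };
  const manifests = [
    {
      apiVersion: 'apps/v1',
      kind: 'Deployment',
      metadata: { name: props.name },
      spec: {
        replicas: props.replicas ?? 1, // `replicas` option, default 1
        selector: { matchLabels: labels },
        template: {
          metadata: { labels },
          spec: {
            containers: [
              {
                name: props.name,
                image: props.image,
                ports: [{ containerPort: props.port }],
              },
            ],
          },
        },
      },
    },
    {
      apiVersion: 'v1',
      kind: 'Service',
      metadata: { name: props.name },
      spec: { selector: labels, ports: [{ port: props.port }] },
    },
  ];
  if (props.public) {
    // Public workloads additionally get an Ingress (backed by an ALB).
    manifests.push({
      apiVersion: 'networking.k8s.io/v1',
      kind: 'Ingress',
      metadata: { name: props.name },
    });
  }
  return manifests;
}
```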

By default, a new Amazon EKS cluster will be provisioned for each Wing application. This might be okay in a situation where your cluster hosts only a single application, but it is very common to share a single cluster across multiple applications.

Creating a new EKS cluster

To share a single EKS cluster across multiple Wing applications, you will first need to create a cluster in your AWS account. If you already have a cluster, jump to Deploying into an existing cluster below.

To create a compatible EKS cluster manually, we recommend using the containers.Cluster resource:


bring containers;
new containers.Cluster("my-wing-cluster");

And provision it using Terraform (this operation could take up to 20 minutes...):

wing compile -t tf-aws eks.main.w
cd target/eks.main.tfaws
terraform init
terraform apply

To connect to our new cluster through kubectl, use update-kubeconfig:

aws eks update-kubeconfig --name my-wing-cluster


$ kubectl get all
NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   ...          <none>        443/TCP   36m

Deploying into an existing EKS cluster

To deploy a workload into the EKS cluster you just created, or into an already existing cluster, you will need to set the following platform values (this can be done using -v X=Y or --values values.yml):

  • eks.cluster_name: The name of the cluster.
  • eks.endpoint: The URL of the Kubernetes API endpoint of the cluster.
  • eks.certificate: The certificate authority of the cluster.

You can use the script to obtain the attributes of your EKS cluster.


$ curl >
$ chmod +x ./


$ ./ CLUSTER-NAME > values.yaml
$ wing compile -t tf-aws --values ./values.yaml main.w

Azure (tf-azure)

Not supported yet.

GCP (tf-gcp)

Not supported yet.


Roadmap

The following is a non-exhaustive list of capabilities we are looking to add to this resource:


Scaling:

  • Constraints
  • Autoscaling

Interoperability:

  • Access cloud.* resources from workloads (e.g. put an object in a bucket, poll a queue)
  • Access something like Redis from a workload (unify VPCs)
  • Access non-public workloads from cloud.Function

Programming model:

  • Allow defining workloads using inflights (cloud.Service)

Operations:

  • Logs
  • Sidecar containers

Networking:

  • SSL
  • Custom domains

Platforms:

  • ECS
  • GKE
  • AKS
  • ControlPlane