Forget managing regions, orchestrators, or artifact registries. Cloudflare Containers are now in public beta, bringing a new level of power and flexibility to the developer platform. You can now run almost any application, from AI and data processing to existing backend services, right alongside your Workers, deployed globally to "Region: Earth" with a familiar `wrangler deploy`.
Let's get straight to it.
The Core: Workers + Durable Objects = Programmable Containers
At its heart, the system is an elegant integration. Your Worker acts as the programmable entrypoint, routing requests to a container-enabled Durable Object. This Durable Object acts as a dedicated, programmable sidecar for each container instance, managing its lifecycle.
To simplify this, Cloudflare provides the `@cloudflare/containers` package with a `Container` class, abstracting away the Durable Object boilerplate.
Here’s the gist of it in your `wrangler.jsonc`:
{
  "containers": [
    {
      // The Durable Object class (defined in your Worker) that manages each container
      "class_name": "MyContainer",
      // Built from this Dockerfile by your local Docker daemon at deploy time
      "image": "./Dockerfile",
      // Upper bound on concurrently running container instances
      "max_instances": 10
    }
  ],
  "durable_objects": {
    "bindings": [
      {
        "name": "MY_CONTAINER",
        "class_name": "MyContainer"
      }
    ]
  },
  "migrations": [
    {
      "tag": "v1",
      "new_sqlite_classes": ["MyContainer"]
    }
  ]
}
And in your Worker, you extend the `Container` class:
import { Container } from "@cloudflare/containers";
export class MyContainer extends Container {
// The port your container listens on
defaultPort = 8080;
// Put the container to sleep after 10s of inactivity
sleepAfter = "10s";
// Pass environment variables directly to the container
envVars = {
MESSAGE: "Hello from my Worker!",
};
override onStart() {
console.log("Container started successfully");
}
}
export default {
async fetch(request: Request, env: Env): Promise<Response> {
const { pathname } = new URL(request.url);
// Route to a unique, stateful container instance for each ID
if (pathname.startsWith("/sandbox/")) {
const sessionId = pathname.split("/")[2];
const id = env.MY_CONTAINER.idFromName(sessionId);
const containerInstance = env.MY_CONTAINER.get(id);
return containerInstance.fetch(request);
}
// Or, load balance across a pool of stateless containers
if (pathname.startsWith("/api/")) {
// A simple random load balancer; more advanced native
// routing is on the roadmap
const containerId = Math.floor(Math.random() * 3);
const id = env.MY_CONTAINER.idFromName(containerId.toString());
const containerInstance = env.MY_CONTAINER.get(id);
return containerInstance.fetch(request);
}
return new Response("Not found", { status: 404 });
},
};
This setup gives you two powerful patterns out of the box:
- Stateful Services: By using `idFromName(uniqueId)`, you can route requests to a specific, long-lived container instance, perfect for things like code sandboxes or user sessions.
- Stateless Services: By routing to a pool of containers, you can handle scalable, stateless workloads like a typical backend API (both patterns are sketched below).
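To make the routing concrete, here is a minimal client-side sketch of both patterns. The `workers.dev` URL and the `/api/health` path are hypothetical stand-ins for wherever you deploy the Worker above and whatever your container image actually serves.

```typescript
// Hypothetical deployment URL -- substitute your own workers.dev subdomain.
const BASE = "https://my-containers.example.workers.dev";

// Stateful: the session ID maps (via idFromName) to one specific Durable Object,
// so repeated calls with the same ID land on the same warm container instance.
const sandbox = await fetch(`${BASE}/sandbox/user-42`);
console.log(await sandbox.text());

// Stateless: /api/ requests are spread at random across the small pool,
// so any instance may answer this one.
const api = await fetch(`${BASE}/api/health`);
console.log(api.status);
```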
Getting Started: From Zero to Global in Minutes
Ready to deploy? The workflow is pure `wrangler`.
Prerequisites:
- Docker must be running locally. Check with `docker info`.
Deploy:
Run the create command to clone a starter template and deploy it.
`npm create cloudflare@latest -- --template=cloudflare/templates/containers-template`
Running `wrangler deploy` will:
- Build your container image using your local Docker daemon.
- Push the image to your private, automatically configured Cloudflare Container Registry.
- Deploy your Worker and provision the container for on-demand startup across the globe.
You can check the status with `wrangler containers list`.
Pay-for-Use Pricing
You are billed only for the time your container is actively running, metered in 10ms increments, and containers automatically scale to zero when idle.
Instance Types:
| Name | Memory | CPU | Disk |
|---|---|---|---|
| `dev` | 256 MiB | 1/16 vCPU | 2 GB |
| `basic` | 1 GiB | 1/4 vCPU | 4 GB |
| `standard` | 4 GiB | 1/2 vCPU | 4 GB |
Billing Rates (includes monthly free tier):
- Memory: $0.0000025 per GiB-second
- CPU: $0.000020 per vCPU-second
- Disk: $0.00000007 per GB-second
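To get a feel for these rates, here is a rough back-of-the-envelope sketch for a single `basic` instance (1 GiB memory, 1/4 vCPU, 4 GB disk) running flat out for an hour, ignoring the monthly free tier:

```typescript
// Rough hourly cost of one "basic" instance running continuously,
// before the monthly free tier is applied.
const seconds = 60 * 60;
const memory = 1 * seconds * 0.0000025;   // 1 GiB of memory  -> $0.009
const cpu = 0.25 * seconds * 0.00002;     // 1/4 vCPU         -> $0.018
const disk = 4 * seconds * 0.00000007;    // 4 GB of disk     -> ~$0.001
console.log(`$${(memory + cpu + disk).toFixed(4)} per hour`); // ~$0.0280
```

Because billing is metered in 10ms slices and instances scale to zero, a container that only wakes up for occasional requests costs a small fraction of that.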
Data Egress (Bandwidth)
Egress is priced regionally, with a generous free tier included monthly with Workers Standard:
- North America and Europe: $0.025 per GB (1 TB included)
- Australia, New Zealand, Taiwan, and Korea: $0.050 per GB (500 GB included)
- Everywhere else: $0.040 per GB (500 GB included)
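Extending the sketch above to egress, and assuming decimal units (1 TB = 1,000 GB) with the included allowance consumed before the per-GB rate kicks in:

```typescript
// Example: 1,200 GB served from North America in a month on Workers Standard.
const egressGB = 1200;
const includedGB = 1000;   // 1 TB included for North America and Europe
const ratePerGB = 0.025;   // $ per GB beyond the included allowance
const billableGB = Math.max(0, egressGB - includedGB);
console.log(`$${(billableGB * ratePerGB).toFixed(2)}`); // $5.00
```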
The Roadmap: What's Next
This is just the beginning. Here's a look at what's coming:
- Global Autoscaling: A simple `autoscale = true` flag in your config will enable latency-aware routing and automatic scaling based on CPU or memory utilization.
- Deeper Integrations: Expect first-party APIs to easily mount R2 buckets, access Hyperdrive, KV, and more directly from your container.
- Enhanced Communication: Soon you'll be able to `exec` shell commands in your container from your Worker and handle requests from the container to the Worker, enabling powerful new patterns.
Cloudflare Containers fundamentally change what's possible on the platform. The friction between your lightweight, globally distributed Worker logic and your heavier, containerized workloads has been eliminated. Go build something.