The Cold Start Nightmare
Our serverless API was fast—when warm. But the moment a new Lambda instance spun up? 1.5-second delays during cold starts. Users noticed. Complaints rolled in.
After weeks of testing, we reduced cold starts from 1500ms to under 150ms—without changing our runtime. Here’s how.
1. The Prime Suspect: Bloated Dependencies
Problem:
Our `node_modules` was 120MB (!), mostly from:
- Unused libraries (leftover from old features)
- Heavy SDKs (AWS SDK v2, the full `lodash`)
Fix:
- Tree-shaking with `esbuild`:

```bash
esbuild src/handler.js --bundle --minify --platform=node --outfile=dist/handler.js
```

- Switched to AWS SDK v3 (modular imports):

```js
import { DynamoDBClient } from "@aws-sdk/client-dynamodb"; // 5KB vs 50MB!
```
Result: 40% smaller deployments → faster Lambda init.
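If you prefer driving esbuild from a build script rather than the CLI, the same flags translate to its JS API. A sketch (the `build.js` filename, the `node18` target, and externalizing `@aws-sdk/*` are assumptions here — the v3 SDK already ships in recent Node.js Lambda runtimes, so it doesn't need to be in the bundle):

```js
// build.js — bundling step via esbuild's JS API, equivalent to the CLI command above
require("esbuild")
  .build({
    entryPoints: ["src/handler.js"],
    bundle: true,
    minify: true,
    platform: "node",
    target: "node18",
    // Assumption: the runtime provides @aws-sdk/*, so keep it out of the bundle
    external: ["@aws-sdk/*"],
    outfile: "dist/handler.js",
  })
  .catch(() => process.exit(1));
```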
2. Lazy-Loading for the Win
Problem:
We were initializing everything at startup:
```js
const db = new Database(); // 🐌 Loaded even if unused!
exports.handler = async () => { ... };
```
Fix:
Delay non-critical work until first execution:
```js
let db;
const getDB = () => db || (db = new Database()); // Initialize on demand

exports.handler = async () => {
  const database = getDB(); // Only loads when needed
  // ...
};
```
Result: Cold starts 30% faster (less to initialize).
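This works because module scope survives across warm invocations: the first request pays the init cost once, and every later request reuses the cached instance. A runnable sketch of the pattern, with a hypothetical `Database` stand-in so the memoization is observable:

```javascript
// Count constructions to show the lazy init runs exactly once.
let initCount = 0;

// Hypothetical stand-in for an expensive client (DB connection, SDK setup, etc.).
class Database {
  constructor() {
    initCount += 1; // represents slow connection/config work
  }
  query() {
    return "ok";
  }
}

let db;
const getDB = () => db || (db = new Database()); // memoized lazy init

// Simulated handler: initialization happens on the first call only.
const handler = async () => getDB().query();
```

One caveat with the `db ||` check: if a falsy value were ever a valid instance you'd want `db ??=` instead, but for a class instance the short-circuit is fine.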
3. Provisioned Concurrency: The Nuclear Option
Problem:
Even after optimizations, sporadic cold starts still happened.
Fix:
Pre-warm Lambdas with provisioned concurrency:
```hcl
resource "aws_lambda_provisioned_concurrency_config" "api" {
  function_name                     = aws_lambda_function.api.function_name
  qualifier                         = "prod" # an alias or published version
  provisioned_concurrent_executions = 5      # Always keep 5 warm
}
```
Cost: ~$15/month (vs. $500+ in lost revenue from slow responses).
Result: Near-zero cold starts for active users.
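One gotcha: provisioned concurrency can't target `$LATEST` — the `qualifier` must be a published version or an alias. A minimal alias sketch (assuming the alias should track the most recently published version):

```hcl
resource "aws_lambda_alias" "prod" {
  name             = "prod"
  function_name    = aws_lambda_function.api.function_name
  function_version = aws_lambda_function.api.version
}
```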
Key Takeaways
✔ Shrink deployments (esbuild + SDK v3)
✔ Lazy-load everything (faster init)
✔ Pre-warm with provisioned concurrency (if budget allows)
Our API now stays fast—even at 3 AM.
What’s your cold-start war story? Let’s swap fixes!