Docker has revolutionized the way developers build, package, and deploy applications. Its convenience and flexibility make it a top choice for containerizing Node.js apps. But while Docker simplifies many aspects of DevOps, it’s not a silver bullet. Poor configurations, misunderstood best practices, and bad habits can turn your containerized Node.js app into a sluggish, resource-hogging mess.
Let’s face it: performance is everything in production. And when things go wrong in Dockerized environments, they can be incredibly hard to debug.
1. Using the Wrong Base Image
The Mistake:
Many developers default to the full node image, such as node:latest, without considering the implications. While node:latest offers a complete development environment, it's bloated for production and contains unnecessary tools.
Why It Hurts:
- Increases container size.
- Slower cold starts.
- Higher memory consumption.
- Inconsistent behavior between environments (e.g., dev vs prod).
The Fix:
Use minimal base images for production:
FROM node:18-alpine
Or go even further with multi-stage builds (more on that soon). Alpine reduces image size drastically (the Alpine Linux base is roughly 5MB) and is ideal when paired with a build step.
Bonus Tip: If Alpine causes native module issues (e.g., packages that expect glibc), try node:18-slim as a middle ground.
2. Not Using Multi-Stage Builds
The Mistake:
Packaging both build and runtime dependencies in the same container. You build the app and run it all in one stage.
Why It Hurts:
- Bloated image size.
- Security risks from leftover build tools.
- Increased attack surface.
The Fix:
Split build and runtime into separate stages:
# Build stage
FROM node:18-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
# Production stage
FROM node:18-alpine
WORKDIR /app
ENV NODE_ENV=production
COPY package*.json ./
# Install production dependencies only; the build stage's
# node_modules also contains devDependencies.
RUN npm ci --omit=dev
COPY --from=build /app/dist ./dist
CMD ["node", "dist/index.js"]
Result: A clean, lightweight, production-ready image.
3. Installing DevDependencies in Production
The Mistake:
Using npm install without pruning dev dependencies when building for production.
Why It Hurts:
- Increases image size.
- Slows down container startup.
- Exposes sensitive dev tools or scripts.
The Fix:
Use npm ci --only=production or NODE_ENV=production to avoid dev dependencies:
ENV NODE_ENV=production
RUN npm ci --only=production
Also, don’t forget to prune:
npm prune --production
With newer versions of npm (v8+) and Yarn, the equivalent flags are:
npm ci --omit=dev
yarn install --production
4. Ignoring .dockerignore
The Mistake:
Forgetting to add a .dockerignore
file, or keeping it empty.
Why It Hurts:
- Uploads unnecessary files to Docker context.
- Slows down builds.
- Can expose sensitive files (like .env, .git, or node_modules).
The Fix:
Create a .dockerignore file:
node_modules
npm-debug.log
Dockerfile
.dockerignore
.git
.env
tests
coverage
Only include what's needed for the build. This speeds up builds and keeps your image clean.
5. Running Node.js as Root
The Mistake:
Relying on the default: Docker runs container processes as root unless you specify otherwise.
Why It Hurts:
- Security risk. If an attacker breaks into your container, they get root access.
- Violates container best practices.
The Fix:
Create a non-root user and use it:
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
USER appuser
Or use the node user, which is included in the official Node images:
USER node
This small change boosts container security significantly.
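As a defence-in-depth measure, the app itself can also detect when it is running as root and warn loudly. A minimal sketch (process.getuid is only available on POSIX platforms, hence the guard):

```javascript
// Returns true when the current process runs as root (UID 0).
// process.getuid is undefined on Windows, so check for it first.
function runningAsRoot() {
  return typeof process.getuid === 'function' && process.getuid() === 0;
}

if (runningAsRoot()) {
  console.warn('Warning: running as root; set USER in your Dockerfile.');
}
```

This is not a substitute for the USER instruction, just an extra signal in your logs.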
6. Missing Health Checks
The Mistake:
Containers are started with no HEALTHCHECK instruction, meaning Docker can't tell if your app is truly healthy.
Why It Hurts:
- Docker assumes the app is healthy as long as the process runs.
- Faulty containers may serve traffic.
- Impacts orchestrators like Kubernetes or Docker Swarm.
The Fix:
Define a health check in your Dockerfile:
HEALTHCHECK --interval=30s --timeout=10s --retries=3 CMD curl -f http://localhost:3000/health || exit 1
(Note: curl isn't included in Alpine-based images by default; install it, or use wget -q --spider instead.)
Then create a lightweight /health route in your app that returns 200 OK.
7. Logging to Console Without a Strategy
The Mistake:
Dumping logs with console.log(), with no structured format or log forwarding.
Why It Hurts:
- Difficult to parse logs in production.
- No timestamps, no severity levels.
- Hard to integrate with log aggregation tools (e.g., ELK, Loki, Datadog).
The Fix:
Use structured logging with a library like pino or winston:
const pino = require('pino');
const logger = pino({ level: 'info' });
logger.info('Server started');
Then configure Docker or your orchestrator to send logs to a centralized logging system.
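If adding a dependency isn't an option, even a hand-rolled JSON logger beats raw console.log. A minimal sketch (the field names are arbitrary; match whatever your aggregator expects):

```javascript
// Build one JSON log line with a timestamp and severity level,
// so aggregators like ELK or Loki can parse it.
function formatEntry(level, msg, extra = {}) {
  return JSON.stringify({
    time: new Date().toISOString(),
    level,
    msg,
    ...extra,
  });
}

const logger = {
  info: (msg, extra) => console.log(formatEntry('info', msg, extra)),
  warn: (msg, extra) => console.log(formatEntry('warn', msg, extra)),
  error: (msg, extra) => console.error(formatEntry('error', msg, extra)),
};

logger.info('Server started', { port: 3000 });
```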
8. Not Using Proper Resource Limits
The Mistake:
Running containers without CPU or memory limits.
Why It Hurts:
- Containers can consume unlimited host resources.
- Risk of OOM (Out Of Memory) crashes.
- No guarantees in multi-container setups.
The Fix:
Set resource limits in Docker Compose or your orchestration tool:
services:
  app:
    image: my-node-app
    deploy:
      resources:
        limits:
          cpus: "0.50"
          memory: "512M"
Or use Docker CLI:
docker run --memory="512m" --cpus="0.5" my-node-app
These settings help maintain predictable performance across deployments.
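Note that a container memory limit does not automatically cap the V8 heap; many teams also pass --max-old-space-size to Node, set somewhat below the container limit. To see how close you run to the limit, the app can report its own usage. A minimal sketch (the 60s interval is an arbitrary choice):

```javascript
// Snapshot of resident set size and V8 heap usage, in MiB.
function memorySnapshot() {
  const toMiB = (bytes) => Math.round(bytes / 1024 / 1024);
  const { rss, heapUsed, heapTotal } = process.memoryUsage();
  return {
    rssMiB: toMiB(rss),
    heapUsedMiB: toMiB(heapUsed),
    heapTotalMiB: toMiB(heapTotal),
  };
}

// In your app: log a snapshot periodically, e.g. every 60s.
// setInterval(() => console.log(JSON.stringify(memorySnapshot())), 60000);
console.log(memorySnapshot());
```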
9. Ignoring Layer Caching in Dockerfiles
The Mistake:
Ordering Dockerfile instructions carelessly — in particular, placing COPY . . before the dependency-install steps.
Why It Hurts:
- Invalidates cache on every build.
- Longer build times.
- Slows down CI/CD pipelines.
The Fix:
Use Dockerfile caching wisely:
COPY package*.json ./
RUN npm ci
COPY . .
Put frequently changing files (e.g., source code) after dependency installation to leverage Docker’s layer cache.
Bonus: Use BuildKit to speed up builds even more:
DOCKER_BUILDKIT=1 docker build .
10. Using Docker in Dev and Production the Same Way
The Mistake:
Using one Dockerfile and one container setup for both development and production.
Why It Hurts:
- Overcomplicates the Dockerfile.
- Pulls in unnecessary tools for production.
- Makes debugging harder.
The Fix:
Use separate Dockerfiles or use build-time arguments to handle differences.
Example:
ARG NODE_ENV
ENV NODE_ENV=$NODE_ENV
RUN if [ "$NODE_ENV" = "development" ]; then \
npm install; \
else \
npm ci --only=production; \
fi
Or better yet, keep two separate files:
Dockerfile.dev
Dockerfile.prod
This ensures clean separation and better performance in production environments.
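On the application side, the same separation usually shows up as environment-driven configuration keyed off NODE_ENV. A minimal sketch (the settings shown are illustrative defaults, not a prescription):

```javascript
// Resolve settings based on NODE_ENV, falling back to development.
function loadConfig(env = process.env.NODE_ENV || 'development') {
  const base = { port: 3000 };
  const byEnv = {
    development: { logLevel: 'debug', prettyLogs: true },
    production: { logLevel: 'info', prettyLogs: false },
  };
  return { env, ...base, ...(byEnv[env] || byEnv.development) };
}

console.log(loadConfig('production'));
```

Pair this with the ARG/ENV pattern above so the image and the app agree on which environment they are in.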
Bonus Tip: Scan and Slim Down Your Images
Docker images can easily grow with outdated dependencies or security flaws. Tools like Dive or docker scout help analyze image layers.
docker scout quickview my-node-app
Or scan for known CVEs:
docker scout cves my-node-app
(The older docker scan command has been retired in favor of Docker Scout.)
Keeping images clean not only helps performance—it also boosts security.
Final Thoughts
Docker is powerful, but it doesn’t absolve us of performance and security best practices. Node.js, being single-threaded and event-driven, needs careful tuning to shine in production—and the way we containerize it matters a lot.
Recap: The 10 Mistakes to Watch Out For
- Using the wrong base image
- Not using multi-stage builds
- Installing devDependencies in production
- Ignoring .dockerignore
- Running as root
- Missing health checks
- Poor logging strategy
- No resource limits
- Inefficient Dockerfile caching
- Mixing dev and prod Docker setups
Share your experiences in the comments, and let's discuss how to tackle them!