bilel salem

Solving Real Deployment Problems with Docker: Shared Networks and Beyond

Docker makes development and deployment easier—but it’s not always plug and play. As your stack grows, simple containers stop being enough. One of the first real-world challenges you’ll run into is inter-container communication, especially when building microservices or replicating production environments locally.

Let’s walk through a common problem and how Docker’s shared networks feature solves it.

The Problem: "It works on my machine"—until containers can't talk to each other

Imagine this: you're working on a simple microservices architecture with a Node.js API, a Redis cache, and a PostgreSQL database. You’ve containerized everything. Locally, you run each container, expose the ports, and it all seems fine—until your API container tries to connect to the database using localhost:5432. It fails.

Why? Because in Docker, each container has its own network namespace. From the API container’s perspective, localhost means itself—not the host or the Postgres container.

Manually exposing ports and referencing host.docker.internal might work in some setups (it resolves out of the box on Docker Desktop, but not on plain Linux without an extra --add-host flag), but it's fragile, doesn't scale, and often breaks across operating systems and cloud environments.
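You can reproduce the failure in two commands. This is just an illustrative sketch using the official postgres image (which ships the pg_isready client):

docker run -d --name db -e POSTGRES_PASSWORD=secret -p 5432:5432 postgres

# From a second container, "localhost" is that container itself,
# so this check fails even though the host can reach port 5432:
docker run --rm postgres pg_isready -h localhost -p 5432
# localhost:5432 - no response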

The Solution: Shared Docker Networks

The clean, scalable fix is using a user-defined Docker network.

When you create a user-defined bridge network, Docker provides built-in DNS resolution (something the default bridge network does not). Containers on the same network can reference each other by container name—no need to hardcode IPs or guess what localhost actually points to.

Example

docker network create app-network
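You can verify the network was created and, later, inspect which containers are attached to it:

docker network ls                    # app-network shows up alongside the defaults
docker network inspect app-network   # shows the subnet and attached containers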

Now you launch your services on the same network:

# the postgres image won't start without a superuser password; the value is a placeholder
docker run -d --name db --network app-network -e POSTGRES_PASSWORD=secret postgres
docker run -d --name redis --network app-network redis
docker run -d --name api --network app-network my-node-api

Inside the API container, you can connect to the database using db:5432 or to Redis using redis:6379. Docker resolves the names automatically. No extra config needed.
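In practice, you'd pass those hostnames to your app through configuration rather than hardcoding them. A minimal sketch, assuming my-node-api reads DATABASE_URL and REDIS_URL from the environment (the variable names are up to your app):

# restart the API with connection strings that use the service names
docker run -d --name api --network app-network \
  -e DATABASE_URL=postgres://postgres:secret@db:5432/postgres \
  -e REDIS_URL=redis://redis:6379 \
  my-node-api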

Why this matters

  • You get environment parity between local dev and production.
  • Containers can communicate securely without exposing ports to the host (see the quick check after this list).
  • Adding new services is plug-and-play—no need to rewire everything.
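Here's that second point in action, assuming the containers from the earlier example are still running. Since db was started without -p, the host can't reach it, but peers on app-network can:

docker port db   # prints nothing: no ports are published to the host

# A peer on the same network resolves the name and reaches the port:
docker run --rm --network app-network postgres pg_isready -h db
# db:5432 - accepting connections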

A Real-World Scenario: Local Dev Environments That Mirror Production

Let’s say you're building a staging environment that mimics your Kubernetes cluster. You spin up 5-6 services in Docker Compose—some internal, some public-facing. Without a shared network, you’ll end up exposing too many ports, managing hostnames manually, and debugging broken links between services.

But with a shared network in Docker Compose:

services:
  api:
    build: ./api
    networks:
      - backend

  db:
    image: postgres
    environment:
      POSTGRES_PASSWORD: secret   # placeholder; the image won't start without one
    networks:
      - backend

networks:
  backend:

It all “just works.” Services talk to each other by name, nothing leaks to the host, and your staging setup behaves like production—without the pain.
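If you want to enforce that internal/public split explicitly, Compose lets you mark a network as internal and publish ports only on the public-facing service. A sketch building on the file above; the gateway service and nginx image are placeholders for whatever fronts your stack:

services:
  gateway:
    image: nginx          # placeholder for your public-facing proxy
    ports:
      - "80:80"           # the only port published to the host
    networks:
      - frontend
      - backend

  api:                    # api and db stay as defined above, attached to backend
    build: ./api
    networks:
      - backend

networks:
  frontend:
  backend:
    internal: true        # containers attached only to this network can't reach or be reached from outside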

Takeaway

Inter-container communication is a common stumbling block for anyone moving beyond single-container setups. Docker's shared networks give you a clean, scalable way to solve this problem.

Don’t hardcode IPs. Don’t expose every port to your host. Just use networks right, and your services will find each other—reliably, predictably, and securely.
