If you don't want a given node to run any containers, you can [drain][1] it:

docker node update swarm-01.local --availability drain

This will move any container running in swarm (cluster) mode to another available node. Containers that are not swarm-aware (started with the docker run command) will continue to run there.
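To check that the drain took effect, and to put the node back into rotation later, you can use the standard docker node subcommands (same example node name as above):

docker node ls                                   # AVAILABILITY column shows Drain
docker node ps swarm-01.local                    # tasks still scheduled on the node
docker node update swarm-01.local --availability active

Switching the node back to active does not pull existing tasks onto it, which leads to your second question.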

The behaviour described in your second question is intentional: it avoids disrupting end users by needlessly shifting (stopping/starting) containers. For reference, see: https://docs.docker.com/engine/swarm/admin_guide/#force-the-swarm-to-rebalance

> When you add a new node to a swarm, or a node reconnects to the swarm after a period of unavailability, the swarm does not automatically give a workload to the idle node. This is a design decision. If the swarm periodically shifted tasks to different nodes for the sake of balance, the clients using those tasks would be disrupted. The goal is to avoid disrupting running services for the sake of balance across the swarm. When new tasks start, or when a node with running tasks becomes unavailable, those tasks are given to less busy nodes. The goal is eventual balance, with minimal disruption to the end user.

[1]: https://docs.docker.com/engine/swarm/manage-nodes/#change-node-availability
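If you do want to spread tasks onto a new or re-activated node right away, the admin guide linked above describes forcing a redistribution with a rolling update (my-service below is a placeholder for your own service name):

docker service update --force my-service

Note that this restarts the service's tasks, so it causes exactly the kind of brief disruption the swarm otherwise tries to avoid.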
