Well, there is no built-in "container to container" communication layer like SSH. In this regard, the containers are as standalone as two different VMs (aside from the networking part in general).

You might go the usual way: install openssh-server in the "receiving" container and configure it for key-based authentication only. You do not need to publish the port to the host; just connect to it over the Docker-internal network. Deploy the ssh private key on the 'caller' container and the public key into .ssh/authorized_keys on the 'receiving' container at container start time (volume mount), so you do not keep the secrets in the image (build time).
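A minimal sketch of how that could be wired up with docker-compose; the service names, images and the ./keys path are placeholders, not part of the original setup:

# docker-compose.yml (sketch, hypothetical names and paths)
services:
  receiver:
    image: your-receiver-image            # assumption: image already has openssh-server installed
    volumes:
      # mount the public key as authorized_keys at start time instead of baking it into the image
      - ./keys/id_ed25519.pub:/root/.ssh/authorized_keys:ro
  caller:
    image: your-caller-image
    volumes:
      # mount the private key read-only on the calling side
      - ./keys/id_ed25519:/root/.ssh/id_ed25519:ro
# both services share the default compose network, so the caller reaches the
# receiver on port 22 via the service name "receiver" without publishing ports

Key-based only usually means setting PasswordAuthentication no in the receiver's /etc/ssh/sshd_config. Depending on the base image you may also need to fix ownership and permissions of the mounted key material, since sshd's StrictModes check is picky about that.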

Probably also create an ssh alias in .ssh/config and disable strict host key checking, since the containers can be rebuilt (which changes their host keys). Then do

ssh <alias> your-command
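A minimal .ssh/config entry for such an alias might look like this; the host name receiver, the user and the key path are placeholders matching the sketch above:

Host receiver
    HostName receiver                 # service name on the Docker network
    User root                         # assumption: sshd permits this user
    IdentityFile /root/.ssh/id_ed25519
    StrictHostKeyChecking no          # containers get rebuilt, host keys change
    UserKnownHostsFile /dev/null      # do not accumulate stale host keys

With that in place, ssh receiver your-command runs the command on the other container.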
