
I have a helper container and an app container.

The helper container handles cloning code via git into a mount shared with the app container.

I need the helper container to check for a package.json or requirements.txt in the cloned code and, if one exists, run npm install or pip install -r requirements.txt, storing the dependencies in the shared mount. The catch is that the npm and/or pip command needs to be run from the app container, to keep the helper container as generic and agnostic as possible.

One solution would be to mount the Docker socket into the helper container and run docker exec <app container> <command>, but what if I have thousands of such apps on a single host? Will there be issues with hundreds of containers all accessing the Docker socket at the same time? And is there a better way to do this, i.e. to get commands run in another container?
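
Something like this, with /var/run/docker.sock mounted into the helper container and the docker CLI installed there (the container name and paths are just placeholders):

# run from the helper container after the clone into the shared mount
if [ -f /shared/code/package.json ]; then
    docker exec app_container sh -c 'cd /shared/code && npm install'
elif [ -f /shared/code/requirements.txt ]; then
    docker exec app_container pip install -r /shared/code/requirements.txt
fi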

4 Comments
  • Your description of the helper container role sounds like it should be the image that you build your app container from. Commented Aug 31, 2016 at 10:32
  • No, it exposes an endpoint that I use as a webhook for gogs internally, which then clones the files to the shared mount. Commented Aug 31, 2016 at 11:00
  • ah ok, running tasks in your containers that would normally be part of an image build, then. It doesn't change your problem; if you were building images you'd still need to trigger a build from the webhook container. You would be running fewer commands if you had hundreds of instances of the same app. Commented Aug 31, 2016 at 11:33
  • See this answer: stackoverflow.com/a/63690421/10534470 Commented Sep 1, 2020 at 15:14

2 Answers


Well, there is no built-in "container to container" communication layer like ssh. In that regard, the containers are as standalone as two different VMs (apart from the shared Docker network).

You could go the usual way: install openssh-server in the "receiving" container and configure it for key-based authentication only. You do not need to publish the port to the host; just connect to it over the Docker-internal network. Deploy the SSH private key in the "caller" container and the public key into .ssh/authorized_keys in the "receiving" container at container start time (via a volume mount), so you do not keep the secrets in the image at build time.

You should probably also create an SSH alias in .ssh/config and set StrictHostKeyChecking to no, since the containers may be rebuilt. Then do

ssh <alias> your-command
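
A rough sketch of how that could look from the caller container, assuming the receiving container is reachable as app on the internal network and the key has been mounted in (the alias name and paths below are made up):

# ~/.ssh/config in the caller container (hypothetical alias and paths)
#   Host app
#       HostName app
#       IdentityFile ~/.ssh/id_ed25519
#       StrictHostKeyChecking no

# e.g. run the dependency install inside the receiving container
ssh app 'cd /shared/code && npm install'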

9 Comments

This is one way to do it. However, it would mean installing openssh and messing around with keys. It does clarify things a bit to think of the containers as VMs; if they were VMs, ssh would without a doubt be my go-to. Any info on the cons of mounting the docker socket?
I heard about the docker-socket way + using docker exec, but I consider this a severe security flaw. If someone takes over your 'caller' server, he will be able to access every single docker container on your host, not only the ones in the current application docker stack. It's full access to every running docker container - way too much.
Makes sense, but what if the socket is mounted only in the helper container? The running app in the app container has no interaction with it beyond sharing a mounted folder. Would this be okay?
If you mount the socket there and the attacker gets access to this docker container, he can use the socket to access every single container on the host - no matter that you only mounted the socket "in this container" - it's the host's socket for all containers.
Thing is, if the attacker hasn't leveraged some bug in the running app container to access the helper container where the socket is mounted, the only other way to access the helper container would be from the host - and I imagine that if the attacker has access to the host, they could do a lot worse and would already have access to all containers.

Found that better way I was looking for :-).

Using supervisord and running its XML-RPC server lets me run something like:

supervisorctl -s http://127.0.0.1:9002 -utheuser -pthepassword start uwsgi

Run from the helper container, this connects to the XML-RPC server listening on port 9002 in the app container and starts a program whose block may look something like:

[program:uwsgi]
directory=/app
command=/usr/sbin/uwsgi --ini /app/app.ini --uid nginx --gid nginx --plugins http,python --limit-as 512
autostart=false
autorestart=unexpected
stdout_logfile=/var/log/uwsgi/stdout.log
stdout_logfile_maxbytes=0
stderr_logfile=/var/log/uwsgi/stderr.log
stderr_logfile_maxbytes=0
exitcodes=0
environment = HOME="/app", USER="nginx"

This is exactly what I needed!

For anyone who finds this: you'll probably need the supervisord.conf in your app container to look something like:

[supervisord]
nodaemon=true

[supervisorctl]

[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface

[inet_http_server]
port=127.0.0.1:9002
username=user
password=password

[program:uwsgi]
directory=/app
command=/usr/sbin/uwsgi --ini /app/app.ini --uid nginx --gid nginx --plugins http,python --limit-as 512
autostart=false
autorestart=unexpected
stdout_logfile=/var/log/uwsgi/stdout.log
stdout_logfile_maxbytes=0
stderr_logfile=/var/log/uwsgi/stderr.log
stderr_logfile_maxbytes=0
exitcodes=0
environment = HOME="/app", USER="nginx"

You can also set up the inet_http_server to listen on a socket, and you can link the containers so the helper can reach the app container at a hostname.
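
For example, with the containers linked and inet_http_server bound to an address the helper can reach (e.g. 0.0.0.0:9002 rather than 127.0.0.1:9002), the helper could trigger the install step through a separate, hypothetical program block such as [program:pip-install] wrapping pip install -r /shared/code/requirements.txt:

# run from the helper container; "app" is the linked hostname of the app container
supervisorctl -s http://app:9002 -utheuser -pthepassword start pip-install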

