
I'm creating an application that will allow users to upload video files that will then be put through some processing.

I have two containers.

  1. Nginx container that serves the website where users can upload their video files.
  2. Video processing container that has FFmpeg and some other processing stuff installed.

What I want to achieve: I need container 1 to be able to run a bash script on container 2.

One possibility as far as I can see is to make them communicate over HTTP via an API. But then I would need to install a web server in container 2 and write an API which seems a bit overkill. I just want to execute a bash script.

Any suggestions?

2 Comments
  • You could maybe just use a shared volume and watch for changes. You can also share /var/run/docker.sock and run docker commands from the container.
  • This is not really a good design. The techniques you'd need to solve this are the same ones as if the two containers were running on physically separate systems.

7 Answers


You have a few options, but the first 2 that come to mind are:

  1. In container 1, install the Docker CLI and bind mount /var/run/docker.sock (you need to specify the bind mount from the host when you start the container). Then, inside the container, you should be able to use docker commands against the bind-mounted socket as if you were executing them from the host (you might also need to chmod the socket inside the container to allow a non-root user to use it). See the sketch after this list.
  2. You could install SSHD on container 2, and then ssh in from container 1 and run your script. The advantage here is that you don't need to make any changes inside the containers to account for the fact that they are running in Docker and not bare metal. The downside is that you will need to add the SSHD setup to your Dockerfile or the startup scripts.
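
A minimal sketch of option 1 (only for a lab/dev setup, see the warning below); the image, container, and script names are just placeholders:

# start container 1 with the host's Docker socket bind mounted
docker run -d --name web \
  -v /var/run/docker.sock:/var/run/docker.sock \
  my-nginx-image

# inside container 1 (with the docker CLI installed there), you talk to the
# host's Docker daemon through the mounted socket and can run the script
# in the processing container:
docker exec video-worker bash /opt/process.sh /videos/input.mp4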

Most of the other ideas I can think of are just variants of option (2), with SSHD replaced by some other tool.

Also be aware that Docker networking can be a little strange (at least on Mac hosts), so make sure the containers are attached to the same Docker network and can communicate over it.
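
For example (container and image names are placeholders):

docker network create app-net
docker run -d --name web    --network app-net my-nginx-image
docker run -d --name worker --network app-net my-ffmpeg-image

# from inside "web", the other container resolves by name, e.g.
#   ssh someuser@worker    or    nc worker 9000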

Warning:

To be completely clear, do not use option 1 outside of a lab or very controlled dev environment. It takes a socket that has full authority over the Docker runtime on the host and grants unchecked access to it from a container. Doing that makes it trivially easy to break out of the Docker sandbox and compromise the host system. About the only place I would consider it acceptable is as part of a full-stack integration test setup that will only be run ad hoc by a developer. It's a hack that can be a useful shortcut in some very specific situations, but the drawbacks cannot be overstated.


6 Comments

I considered option 1 using docker sockets, but it seems a bit dangerous. I also don't like the idea of installing docker inside of docker. I will go with the SSHD option since it seems like it is the best choice. Thanks for your help, greatly appreciated
A lot of folks would agree with you about the socket option. It really depends on what you're doing, but SSHD is definitely the safer option, especially for a prod system. Glad I could help :)
@NicolasBuch I hope you don't mind me asking, but why does it seem dangerous to you? I understand that, if the container is open to the public, then yes, that is very dangerous. But I have a cron container that needs to execute commands in another container, and the cron container itself is not connected to the outside world.
Why not use a container 3 with access to /var/run/docker.sock, with a named pipe shared with container 1? Let container 1 write to the pipe, and let container 3 watch the pipe and execute the command in container 2. You'll have the advantages of your solution 1 without the security issue.
@reinierpost that is really just option (2). The third container is unnecessary; just have the two containers share a pipe and send commands across it. Trying to proxy commands through a third container probably isn't going to offer much protection if someone gains access to either end of the pipe.

I wrote a python package especially for this use-case.

Flask-Shell2HTTP is a Flask extension to convert a command-line tool into a RESTful API in a mere 5 lines of code.

Example Code:

from flask import Flask
from flask_executor import Executor
from flask_shell2http import Shell2HTTP

app = Flask(__name__)
executor = Executor(app)
shell2http = Shell2HTTP(app=app, executor=executor, base_url_prefix="/commands/")

# map HTTP endpoints to shell commands/scripts
shell2http.register_command(endpoint="saythis", command_name="echo")
shell2http.register_command(endpoint="run", command_name="./myscript")

app.run(port=4000)  # matches the port used in the curl example below

These endpoints can then be called easily, like:

$ curl -X POST -H 'Content-Type: application/json' -d '{"args": ["Hello", "World!"]}' http://localhost:4000/commands/saythis

You can use this to create RESTful micro-services that execute pre-defined shell commands/scripts with dynamic arguments asynchronously and fetch the result.
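
For example, the /commands/run endpoint registered above could be called from your web container with dynamic arguments (the worker hostname and the file path below are placeholders for your setup):

$ curl -X POST -H 'Content-Type: application/json' \
    -d '{"args": ["/videos/input.mp4", "--preset", "web"]}' \
    http://worker:4000/commands/run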

It supports file uploads, callback functions, reactive programming and more. I recommend you check out the Examples.

3 Comments

Sounds cool! Isn't there a django version of it?
This is creative, and probably a good solution in a lot of cases. Mainly, anything that uses a base image that includes Python but doesn't have sshd (which is a lot of base images). I would be cautious about this in a production environment, since it appears to expose these endpoints without any sort of authentication. But for a local test setup, this seems pretty easy.
@Z4-tier You can wrap the endpoints in a decorator to implement authentication, logging, etc. basically anything.

Running a docker command from a container is not straightforward and not really a good idea (in my opinion), because:

  1. You'll need to install Docker in the container (and do docker-in-docker stuff).
  2. You'll need to share the Unix socket, which is not a good thing if you don't know exactly what you're doing.

So, this leaves us with two solutions:

  1. Install SSH on your container and execute the command through SSH.
  2. Share a volume and have a process that watches for something to trigger your batch (a sketch follows this list).
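
A rough sketch of solution 2, assuming a volume mounted at /jobs in both containers and a processing script at /opt/process.sh (both placeholders); putting the arguments in the job file lets you pass different parameters per run:

# container 1 (web): drop a job file with the arguments into the shared volume
echo "/videos/input.mp4 --preset web" > /jobs/$(date +%s).job

# container 2 (processing): poll the shared volume and run the batch
while sleep 5; do
  for job in /jobs/*.job; do
    [ -e "$job" ] || continue
    xargs /opt/process.sh < "$job" && rm "$job"
  done
done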

3 Comments

I was considering sharing the docker socket. But I don't like the idea of having to install Docker in my Docker images. It seems a bit dangerous, especially since I don't fully grasp which security vulnerabilities it might open. Watching a folder for files is not going to work either, since I need to run different scripts based on some parameters; I would have to have many different folders for each scenario. So this leaves us with executing commands via SSH. This might be the best solution. I'll give it a try, thanks!
+1 for the shared volume idea. Polling on a touchfile isn't normally my favorite approach, but it does the job and might be the most straightforward solution, depending on the situation.
Also a great project is github.com/msoap/shell2http. It even allows sending a file through HTTP POST, so not even a shared volume is needed.

It was mentioned here before, but a reasonable, semi-hacky option is to install SSH in both containers and then use ssh to execute commands on the other container:

# install SSH, if you don't have it already
sudo apt install openssh-server

# start the ssh service (if the image provides a service wrapper)
sudo service ssh start

# or start the daemon directly, in the background
sudo /usr/sbin/sshd -D &

Assuming you don't want to always be root, you can add a default user (in this case, 'foobob'):

useradd -m --no-log-init --system  --uid 1000 foobob -s /bin/bash -g sudo -G root

#change password
echo 'foobob:foobob' | chpasswd

Do this on both the source and target containers. Now you can execute a command from container_1 to container_2.

# obtain container-id of target container using 'docker ps'
ssh foobob@<container-id> << "EOL"
echo 'hello bob from container 1' > message.txt
EOL

You can automate the password handling with ssh-agent, or you can use something a bit more hacky like sshpass (install it first using sudo apt install sshpass):

sshpass -p 'foobob' ssh foobob@<container-id>
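
If you'd rather not deal with passwords at all, here is a rough sketch of key-based auth, assuming the foobob user from above and that the public key gets into container 2 at image build time or via a shared volume (the script path at the end is a placeholder):

# on container 1: generate a key pair with no passphrase
ssh-keygen -t ed25519 -N '' -f ~/.ssh/id_ed25519

# on container 2: authorize that public key for foobob
mkdir -p /home/foobob/.ssh
cat id_ed25519.pub >> /home/foobob/.ssh/authorized_keys
chown -R foobob /home/foobob/.ssh
chmod 700 /home/foobob/.ssh
chmod 600 /home/foobob/.ssh/authorized_keys

# then, from container 1, no password prompt is needed:
ssh foobob@<container-id> 'bash /opt/process.sh /videos/input.mp4'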

4 Comments

hi, thanks for sharing this solution. I tried it and was wondering where does the message.txt file go on the container 2? I put a verbose flag in the ssh cmd and at the end it showed: Transferred: sent 2836, received 2644 bytes, in 0.1 seconds Bytes per second: sent 30489.4, received 28425.2. Seems like the file was transferred?
This should simply create the text file message.txt in your home directory on "container_2" with the single line of text 'hello bob ...'. There is no file transferred; that report from ssh is only telling you about the text it sent and received as communication overhead.
thanks, i found out my working directory was set to another location so I didn't see it. I checked the home dir and saw the file. I appreciate you sharing this solution!
sshpass should come with a warning (and it does, in the man page): ssh intentionally makes it hard to enter passwords from a script, because that pattern almost invariably leads to insecure password management practices. If possible, it is probably better to use public keys that you build into the image that runs on the target container.

You could write a very basic API using Ncat and GNU sed’s e command.

If needed, install nmap-ncat and GNU sed, then run something like this in the container you want to control:

ncat -lkp 9000 | sed \
  -e '/^cmd1$/e /opt/foo.sh' \
  -e '/^stop$/e kill -s INT 1'

The entrypoint script would look like this:

ncat -lkp 9000 | sed \
  -e '/^cmd1$/e /opt/foo.sh' \
  -e '/^stop$/e kill -s INT 1' &

exec /opt/some/daemon

exec is required to run the daemon with process ID 1, which is needed to stop it gracefully.

And to send commands to this container, use something like

echo stop | nc containername 9000

Note: you can use nc or ncat for sending commands, but on the receiving side the nc from BusyBox does not keep listening for new requests without -e, which would need a different approach.

When also using a restart policy with Docker Compose, this could be used to restart containers (for example, to reload configuration or certificates) without having to give the controlling container access to the Docker socket (/var/run/docker.sock), which is insecure.



Let's assume Linux and say container A needs to tell container B to execute the command foo.sh.

A safe approach would be to create a shared resource that A will update and B will watch.

You can use a file:

  • share a directory, say /run/foo, as a shared volume
  • in A, create a file whenever the command needs to run, e.g. touch /run/foo/please-execute
  • in B, watch for it using something like while sleep 60; do if [ -e /run/foo/please-execute ]; then foo.sh && rm /run/foo/please-execute; fi; done &

If B has the inotify utilities, you can use them to watch for the file, eliminating the polling delay.
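
For example, a sketch with inotify-tools, assuming inotifywait is installed in B:

# block until something is created in /run/foo, then check for the trigger file
while inotifywait -qq -e create /run/foo; do
  if [ -e /run/foo/please-execute ]; then
    foo.sh && rm /run/foo/please-execute
  fi
done &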

Alternatively, you can use a named pipe:

  • create it (mkfifo) inside a volume that is shared by A and B
  • A writes a line to it: e.g., echo >> /run/foo/please-execute
  • B uses something like while read something; do foo.sh; done < /run/foo/please-execute &

Alternatively, add a container C with access to the Docker socket and have it monitor the file/pipe and execute the command in container B. That way, you don't need to modify container B.
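
A rough sketch of that variant, assuming C has the docker CLI installed, /var/run/docker.sock bind mounted, and the same /run/foo volume (names are placeholders):

# container C: block on the pipe, then run the command in B through the socket
while true; do
  read -r _ < /run/foo/please-execute   # blocks until A writes a line
  docker exec container_b foo.sh
done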



I believe

docker exec -it <container_name> <command>

should work, even inside the container.

You could also try to bind mount docker.sock into the container you are trying to execute the command from:

docker run -v /var/run/docker.sock:/var/run/docker.sock ...

4 Comments

Calling docker from inside a container won't work on its own. You need 2 things, 1: the /var/run/docker.sock shared, and 2: a version of docker available in your container. If the container and host are both Linux, some users share their host's docker binary, but the better option is to have a container image with docker installed so that the host can be Windows or Linux.
Fair enough, I did not think about the requirement to have docker installed in the running container.
No problem. Also worth noting you would only need the docker CLI inside the container. Missed that bit :)
Remember that this also allows the container to make arbitrary changes to arbitrary files on the host as root and generally take over the whole system. I would not do this casually.
