172

I am trying to create a shell script for setting up a docker container. My script file looks like:

#!/bin/bash

docker run -t -i -p 5902:5902 --name "mycontainer" --privileged myImage:new /bin/bash

Running this script starts the container and drops into a newly invoked bash.

Now I need to run a script file (test.sh) that is already inside the container from the shell script above (e.g. cd /path/to && ./test.sh). How do I do that?

2
  • 1
    Why not use WORKDIR and CMD? Commented Jul 23, 2015 at 5:33
  • 1
    You probably don't want to be using --privileged here. See: stackoverflow.com/questions/36425230/… Commented May 29, 2018 at 22:26

10 Answers

208

You can run a command in a running container using docker exec [OPTIONS] CONTAINER COMMAND [ARG...]:

docker exec mycontainer /path/to/test.sh

And to run from a bash session:

docker exec -it mycontainer /bin/bash

From there you can run your script.
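Combining this answer with the question's setup, a sketch (container name, image tag, and script path are taken from the question; the `-d` flag is an addition that keeps the container running in the background so `exec` has something to attach to):

```shell
#!/bin/bash
# Start the container detached so it keeps running after this script exits.
docker run -d -t -p 5902:5902 --name "mycontainer" myImage:new /bin/bash

# Run the script that already exists inside the container.
docker exec mycontainer /bin/bash /path/to/test.sh
```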


8 Comments

what if i need to enter into /bin/bash first and then run command inside that bash?
You can also run a local script from the host directly: docker exec -i mycontainer bash < mylocal.sh. This reads the local host script and runs it inside the container. You can do this with other things (like .tgz files piped into tar); it's just using the -i flag to pipe into the container process's stdin.
@Marvin What's the equivalent in PowerShell? The "<" character is not recognized.
I'm not a powershell guru (thankfully) but I wandered around SO and found stackoverflow.com/a/11788475/500902. So, maybe Get-Content mylocal.sh | docker exec -i mycontainer bash. I don't know if that works though.
For me it was: docker exec -i containerID /bin/sh < someLocalScript.sh
163

Assuming that your docker container is up and running, you can run commands as:

docker exec mycontainer /bin/sh -c "cmd1;cmd2;...;cmdn"

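The quoted string is handed to sh as-is, so anything sh can parse fits inside it, including a whole loop or a command list built up in a variable first (container name and commands below are illustrative):

```shell
# A whole for-loop is still one quoted argument to sh -c:
docker exec mycontainer /bin/sh -c 'for f in /var/log/*.log; do echo "$f"; done'

# The command string can also live in a shell variable first:
cmd='echo a; echo b'
docker exec mycontainer /bin/sh -c "$cmd"
```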
5 Comments

I like this answer; you don't have to log into the docker container to execute a command or set of commands. Thank you!
Do you know how to take this a step further and pass the entire command (/bin/sh -c "cmd1; cmd2; ...; cmdn") as the value of a shell variable? I ask because 'docker run' seems to expect a single command and individual unquoted arguments rather than a quoted string.
@meowsqueak: This answer tells you how to run multiple commands inside an already created and running container without logging in that container, which is helpful in automation. However if you want to run multiple commands at the time of container creation (PS: docker run command creates and starts the container), you can achieve that by following answers in this same thread stackoverflow.com/a/41363989/777617
In place of cmd1, I need to pass in an entire for-loop - but it expects ; after the loop statement and do statement. How can I run the entire for-loop as a single command i.e cmd1? Is it possible?
exactly what i needed!
32

I was searching for an answer to this same question and found that ENTRYPOINT in the Dockerfile solved it for me.

Dockerfile

...
ENTRYPOINT /my-script.sh ; /my-script2.sh ; /bin/bash

Now the scripts are executed when I start the container, and I get the bash prompt after the scripts have been executed.
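For context, a minimal Dockerfile sketch around that line (base image and COPY paths are placeholders, not from the answer). The shell form of ENTRYPOINT runs the whole line via /bin/sh -c, which is what makes the `;` chaining work:

```dockerfile
FROM ubuntu:bionic
COPY my-script.sh my-script2.sh /
# Shell-form ENTRYPOINT: executed as /bin/sh -c "...", so ';' chains commands
ENTRYPOINT /my-script.sh ; /my-script2.sh ; /bin/bash
```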

Comments

15

Thomio's answer is helpful, but it expects the script to exist inside the image. If you have a one-off script that you want to run/test inside a container (from the command line or from a script), then you can use:

$ docker run ubuntu:bionic /bin/bash -c '
  echo "Hello there"
  echo "this could be a long script"
  '
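Arguments can follow the quoted script, too; bash -c assigns the first extra word to $0 and the rest to $1, $2, … (the image tag is the answer's, the script contents are illustrative):

```shell
docker run ubuntu:bionic /bin/bash -c 'echo "running as $0 with arg $1"' myscript hello
```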

Comments

14

This command worked for me

cat local_file.sh | docker exec -i container_name bash

Comments

13

In case you don't want (or have) a running container, you can call your script directly with the run command.

Remove the interactive tty arguments (-i -t) and use this:

    $ docker run ubuntu:bionic /bin/bash /path/to/script.sh

This should (untested) also work for scripts in other interpreters:

    $ docker run ubuntu:bionic /usr/bin/python /path/to/script.py
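Note that this still requires the script to exist inside the image. If it only exists on the host, a bind mount gets it in first (paths below are illustrative):

```shell
# Mount the host script read-only, then run it with the container's bash:
docker run -v "$PWD/script.sh:/tmp/script.sh:ro" ubuntu:bionic /bin/bash /tmp/script.sh
```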

Comments

7

You could also mount a local directory into your docker image and source the script from your .bashrc. Don't forget that the script has to consist only of functions, unless you want it to execute on every new shell. (This is outdated; see the update notice below.)

I'm using this solution to be able to update the script outside of the docker instance. This way I don't have to re-run the image when changes occur; I just open a new shell. (Got rid of reopening a shell; see the update notice below.)

Here is how you bind your current directory:

docker run -it -v $PWD:/scripts $my_docker_build /bin/bash

Now your current directory is bound to /scripts of your docker instance.

(Outdated) To save your .bashrc changes commit your working image with this command:

docker commit $container_id $my_docker_build

Update

To solve the issue to open up a new shell for every change I now do the following:

In the Dockerfile itself I add RUN echo "/scripts/bashrc" > /root/.bashrc. Inside that bashrc I export the scripts directory to the PATH. The scripts directory now contains multiple files instead of one. Now I can directly call all scripts without having to open a sub-shell on every change.

BTW, you can define the history file outside of your container too. This way it's no longer necessary to commit on every bash change.
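A sketch of that bind-mount-plus-sourced-functions setup (directory and function names are made up for illustration; $my_docker_build is the answer's image variable):

```shell
# On the host: ./scripts/bashrc contains only function definitions, e.g.
#   greet() { echo "hello from the mounted scripts"; }
# Bind-mount the directory and start a shell:
docker run -it -v "$PWD/scripts:/scripts" "$my_docker_build" /bin/bash
# Inside the container:
#   source /scripts/bashrc   # re-source after editing on the host
#   greet
```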

1 Comment

@zappy The solution from Javier did not solve this problem conveniently for me, but my solution did. I thought it would be interesting for those with a similar problem who don't want to restart their docker image(s) just to update a few functions they need. For example, if you use multiple docker images at once to spin up a dev cluster, you don't want to restart them all the time.
5

This is old, and I don't have enough reputation points to comment. Still, I guess it is worth sharing how one can generalize Marvin's idea to allow parameters.

docker exec -i mycontainer bash -s arg1 arg2 arg3 < mylocal.sh
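Inside mylocal.sh those arguments show up as the usual positional parameters; a sketch of what the script sees (script contents are illustrative):

```shell
# mylocal.sh -- streamed into the container's bash via stdin
echo "first: $1, second: $2, third: $3"

# On the host:
#   docker exec -i mycontainer bash -s arg1 arg2 arg3 < mylocal.sh
```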

Comments

1

Have a look at entry points too. They let you chain multiple commands where a single CMD cannot: https://docs.docker.com/engine/reference/builder/#/entrypoint

Comments

0

If you want to run the same command on multiple containers, you can do this:

for i in c1 dm1 dm2 ds1 ds2 gtm_m gtm_sl; do docker exec -it $i /bin/bash -c "service sshd start"; done

Comments
