
Is there any elegant way to add SSL certificates to images that have come from docker pull?

I'm looking for a simple and reproducible way of adding a file into /etc/ssl/certs and running update-ca-certificates. (This should cover Ubuntu and Debian images.)

I'm using docker on CoreOS, and the CoreOS machine trusts the needed SSL certificates, but the docker containers obviously only have the default.

I've tried using docker run --entrypoint=/bin/bash to then add the cert and run update-ca-certificates, but this seems to permanently override the entry point.

I'm also wondering now, would it be more elegant to just mount /etc/ssl/certs on the container from the host machine's copy? Doing this would implicitly allow the containers to trust the same things as the host.

I'm at work behind an annoying proxy that re-signs everything :(, which breaks SSL and makes containers kind of strange to work with.

  • Have you thought about creating a Dockerfile that would use your image, add the file, and run update-ca-certificates? Or is that not what you are looking for? Commented Sep 26, 2014 at 14:04
  • I have done that for some images. It's not a bad solution. It does require you to rebuild every image with your own layer on top, though. Commented Oct 6, 2014 at 0:07

8 Answers


Mount the certs onto the Docker container using -v:

docker run -v /host/path/to/certs:/container/path/to/certs -d IMAGE_ID "update-ca-certificates"

Note: the -v flag bind-mounts a host path (or named volume) into the container.
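
If you also need the container's normal command to run afterwards, one variant is to chain the update and the command yourself. This is only a sketch: it assumes a Debian/Ubuntu-based image with no conflicting ENTRYPOINT, and my-app stands in for whatever the image normally runs.

# Mount the extra CAs where update-ca-certificates looks on Debian/Ubuntu,
# refresh the trust store, then hand off to the real command (my-app is hypothetical)
docker run -d \
  -v /host/path/to/certs:/usr/local/share/ca-certificates/extra:ro \
  IMAGE_ID sh -c 'update-ca-certificates && exec my-app'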


6 Comments

That's pretty nifty. If the container uses the same style of SSL certs, you wouldn't even need the update-ca-certificates line; the host would have already done it :).
And if we are building in the cloud?
How does this play nicely with the container image's CMD or ENTRYPOINT? Isn't the "update-ca-certificates" either interpreted as an additional argument or replacing the actual command defined in the Dockerfile?
What if '/host/path/to/certs' is a symlink? And what should '/container/path/to/certs' be if WORKDIR is '/usr/src/app'?
This "accepted" answer doesn't actually work for the question asked, unless your container entrypoint is bash and you don't need to pass any commands to the container (a minority of cases). It's mounting the extra root CA folder from the host to guest, then running the CA update command, but that state isn't saved for the next run so it has to be called every time.

I am trying to do something similar. As commented above, I think you would want to build a new image with a custom Dockerfile (using the image you pulled as the base image), ADD your certificate, then RUN update-ca-certificates. This way you will have a consistent state each time you start a container from the new image.

# Dockerfile
FROM some-base-image:0.1
# On Debian/Ubuntu the cert must be a .crt file under
# /usr/local/share/ca-certificates for update-ca-certificates to find it
ADD your_certificate.crt /usr/local/share/ca-certificates/your_certificate.crt
RUN update-ca-certificates

Let's say a docker build against that Dockerfile produced IMAGE_ID. On the next docker run -d [any other options] IMAGE_ID, the container started by that command will have your certificate info. Simple and reproducible.
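
For example (the tag my-image-with-certs is just illustrative):

# Build the new image from the Dockerfile above, then run it as usual
docker build -t my-image-with-certs .
docker run -d my-image-with-certs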

12 Comments

I would be wary of putting certificates into any public container. Someone else could pull your container and extract your private certs.
While that is a very good point, the solution above does not make anything public. This is meant to add your own certificates into an image that is built locally and then used privately. You could then push the resulting image to a public repository, but that would be a bad idea as you said.
Since when are certificates secret?
Since your server needs a private key to match the certificate it is publishing.
@MyUsername112358 this is talking about public keys, not private keys. The cert can't do anything dangerous without the private key.

As was suggested in a comment above, if the certificate store on the host is compatible with the guest, you can just mount it directly.

On a Debian host (and container), I've successfully done:

docker run -v /etc/ssl/certs:/etc/ssl/certs:ro ...

3 Comments

So what to do when building a Docker image on the build server? :/
@Ewoks You could host your certs on some private DNS name, load them inside your Helm charts, and automate creating the volume on your cluster.
Based on the question asked, they aren't compatible. Ubuntu/Debian uses the legacy certificate layout, while Fedora CoreOS (like Arch) uses a modern layout that includes additional files.

You can use a relative path to mount the volume into the container:

docker run -v `pwd`/certs:/container/path/to/certs ...

Note the backticks around pwd, which give you the present working directory ($(pwd) is the more modern form). This assumes you have a certs folder in the directory where docker run is executed. That's handy for local development, and it keeps the certs folder visible in your project.

2 Comments

How do I know which local certs I need to mount into the container? I have so many certs locally. Also, I am on macOS and the container is Linux; how do I make the container run update-ca-certificates without overriding the CMD in the Dockerfile?
The base problem behind the question is that they don't have the certs to mount in the first place.

There isn't really a great way to solve this when you're talking about a CoreOS (Fedora) host and an Ubuntu/Debian guest. Fedora uses the modern standard for organizing the "trust-anchors", while Ubuntu/Debian still uses the older style. The two aren't directly compatible.

Having spent an excessively long time trying to solve the reverse of this problem (a Fedora guest on an Ubuntu host), I can say your options are:

  1. Get the container image to add first-class support for custom certificates added via an environment variable (common on well-crafted containers, but not going to happen for a direct Ubuntu distro image).
  2. Find a way to run a similar host system (usually not a viable option) and mount the host trust-anchors over the guest ones.
  3. Spin your own version of the image that adds the certs or support for specifying them (a long-running fork is usually not maintainable).
  4. Wrap the ENTRYPOINT with a script that adds and runs the CA addition/installation from an optional extra host-mount (very problematic, see below)
  5. Run a/the container once with modified arguments to generate a copy of an updated trust-store in a host-mount, then host-mount that over subsequent runs of the container (do this one).

The very best option is usually to get the container image maintainer (or submit a PR yourself) to add support for loading extra CA certificates from an environment variable, since this is a very common use case among corporate users and self-hosters. However, this usually adds excessive overhead for one-shot containers, which is unacceptable, and the image maintainer may have other good reasons not to do it. It also doesn't solve the problem for you in the meantime.

Changing your host and "forking" the image to spin your own also aren't great options; they're usually non-starters for deployment or maintainability reasons.

Wrapping the ENTRYPOINT is basically an ad-hoc version of modifying the container to support custom certificates, done purely from outside the image. It has all the same potential downsides, plus the drawback that you're doing it from outside the container, but with the benefit that you don't need to wait on an image update. I would not usually recommend this option. The solution is essentially to write a script that you host-mount into the container, which does the CA setup and then runs whatever the ENTRYPOINT and CMD were.

However, there are some major gotchas here. First, you need to customize the script to the specific container you're running so it runs the same entrypoint. With some scripting this can probably be determined automatically, but you need to watch out for well-crafted containers that use an init system to handle the pid 1 problem (https://github.com/Yelp/dumb-init#why-you-need-an-init-system; tl;dr: catching signals like interrupts, and not leaking system resources when force-stopping a container, requires a pid 1 init process). There are a handful of different init systems out there, and you can't wrap an init system.

Additionally, if you're using Docker, you can't override entrypoints with multiple commands from the command line. Containers with init systems like dumb-init take the command actually being run as an argument, so the entrypoint is a list (['/usr/bin/dumb-init', '/usr/bin/my-command']). Docker only allows multi-command entrypoints to be specified via the API, not via the command line, so there's no way to keep the dumb-init command and supply your own script as the second argument.
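
To make that concrete, here is a minimal sketch of such a wrapper. The entrypoint path /usr/local/bin/real-entrypoint and the /extra-cas mount are assumptions; you'd have to look up the real entrypoint with docker inspect for your specific image.

#!/bin/sh
# wrapper.sh -- host-mounted into the container (all paths hypothetical)
# Copy any extra root CAs into the Debian/Ubuntu CA directory
cp /extra-cas/*.crt /usr/local/share/ca-certificates/
# Regenerate the trust store
update-ca-certificates
# Hand off to the image's original entrypoint, preserving arguments
exec /usr/local/bin/real-entrypoint "$@"

It would be invoked along these lines (chmod +x wrapper.sh on the host first, since the mount preserves permissions):

docker run \
  -v "$PWD/wrapper.sh:/wrapper.sh:ro" \
  -v "$PWD/extra-cas:/extra-cas:ro" \
  --entrypoint /wrapper.sh IMAGE_ID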

The "Best" Solution: While long running containers would strongly benefit from option #1 above, your best bet for one-shot containers and for an immediate solution is to generate a host-mount of the guest trust-anchors.
The best way is to generate a host-stored copy of what the updated container trust-anchors should look like, and mount that over the top of your container trust-store. The most compatible way is to do this using the target container image itself, but with an override for the entrypoint, host-mounting a "cache" folder for the trust-anchors in the project workspace associated with the container. However that might not work in cloud and CI situations. An alternative option is to keep a separate container volume around that uses each of the two major trust-anchor styles (modern, e.g. Fedora, Arch, etc, and legacy, e.g. Debian, Ubuntu, etc) and is separately updated semi-regularly from a generic container image of the appropriate type. The resulting container volumes then merely becomes a volume dependency where the proper one is selected based on the target container image type. The gist of how to generate one of these is to host-mount a script that adds the root CAs to the appropriate folder (FYI, legacy trust-anchors will search the root CA folders recursively, but modern will not), runs the trust-anchor update command, and then copies the resulting trust-anchor folders to a host-mount.
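
A sketch of that "generate once, mount thereafter" idea, assuming a Debian/Ubuntu-style guest; TARGET_IMAGE and the host paths are placeholders:

# One-off: build an updated trust store using the target image itself
docker run --rm \
  -v "$PWD/extra-cas:/usr/local/share/ca-certificates/extra:ro" \
  -v "$PWD/trust-cache:/out" \
  --entrypoint sh TARGET_IMAGE \
  -c 'update-ca-certificates && cp -rL /etc/ssl/certs /out/'

# Every subsequent run just mounts the cached store read-only
docker run -v "$PWD/trust-cache/certs:/etc/ssl/certs:ro" TARGET_IMAGE

The cp -rL dereferences the symlinks in /etc/ssl/certs so the cached copy is self-contained.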


Update:

If it's still relevant: most Ubuntu container base images now use cloud-init internally, which has support for a lot of common things, including adding custom root CAs to the container image, i.e. they already support option 1.
https://cloudinit.readthedocs.io/en/latest/topics/examples.html#configure-an-instances-trusted-ca-certificates

I believe you can add a file mount to /etc/cloud/cloud.cfg.d/ containing YAML like that in the example link, and it will get picked up during container boot. You could easily generate that YAML programmatically based on the extra root CA certificates you want.
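
A rough sketch of that approach; the file name is arbitrary, the certificate body is a placeholder to fill in, and it assumes the image actually runs cloud-init at boot:

# Write a cloud-init config fragment containing the extra trusted CA
cat > 99-extra-ca.cfg <<'EOF'
ca_certs:
  trusted:
    - |
      -----BEGIN CERTIFICATE-----
      ...your CA certificate here...
      -----END CERTIFICATE-----
EOF

# Mount it where cloud-init picks up config fragments at boot
docker run \
  -v "$PWD/99-extra-ca.cfg:/etc/cloud/cloud.cfg.d/99-extra-ca.cfg:ro" \
  IMAGE_ID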


EDIT1: Fixed: I reversed which was host and guest from the original question. Also added update about cloud-init. EDIT2: Fixed style typo



This won't directly answer your question, but this is how I solved the same issue.

I was running golang:1.16.4-buster and nothing I tried with certificates worked. I switched to golang:1.17.8-alpine3.15, and it worked from the start without my having to load any certificates. Plus, there's the bonus of a smaller distro.

1 Comment

Almost certainly the specific image was built to read environment variables when you docker run it and to treat a CA-related variable as a path for additional CAs. That implies the original container image was created with extra certificates in mind, and you incidentally had one of those variables set properly.

I've written a script that wraps docker and sets up the host's SSL certificates in the guest.

The bonus is that you don't need to rebuild any containers - it should Just Work.

It's called docker, so you could either copy it somewhere on your $PATH higher than the real docker, or rename it and put it elsewhere.

Do let me know via Github if you have any issues with it!

4 Comments

Your script is just a wrapper around the already-accepted answer. On Stack Overflow you should describe what you're doing in the answer itself and then link to the source that has the implementation, and you should avoid duplicating answers.
As far as the user is concerned, this is a significantly easier way to solve the problem - just drop in the script. I don't think SO answers are required to describe the implementation method of tools used in answers.
Answers on StackOverflow are expected to explain how/why this solves the problem. Also your script doesn't solve the problem. With an Ubuntu/Debian host, the host system's root CA trust store is not compatible with the target Fedora CoreOS container image's trust store.
Do you have a source for that?

For those on Podman looking to get the machine to use their Mac's certificates, follow these steps:

1. podman machine ssh "mkdir -p ~/custom-certs"
2. cat /opt/homebrew/etc/ca-certificates/cert.pem | podman machine ssh "cat > ~/custom-certs/cert.pem"
3. podman machine ssh "sudo cp ~/custom-certs/cert.pem /etc/pki/ca-trust/source/anchors/"
4. podman machine ssh "sudo update-ca-trust || sudo update-ca-certificates"
5. podman machine stop
6. podman machine start

Now any command that pulls images from the registry should work, provided your credentials are baked in. Certificate-related errors shouldn't slow you down. With the architecture set explicitly:

podman pull --arch arm64 docker.io/amazon/aws-glue-libs:glue_libs_4.0.0_image_01
Trying to pull docker.io/amazon/aws-glue-libs:glue_libs_4.0.0_image_01...
Getting image source signatures
Copying blob sha256:4da2d9d9fc0ac9ed44aeae53462e0d9963af208e9eaffabdeb253524d6a818f9

