
You can find many Docker images on the Internet that do all kinds of useful and cool things, but if you download them without any mechanism of trust and authenticity verification, you are simply running arbitrary software on your systems.

  • Where did this image come from?
  • Do you trust its creators? What security policies do they use?
  • Do you have objective cryptographic evidence that the image was really created by these people?
  • Are you sure that nobody tampered with the image after it was uploaded?

Docker will run whatever you ask it to, so encapsulation won’t help here. Even if you use only images built in-house, it makes sense to check whether someone has changed them after creation. The solution ultimately comes down to a classic PKI-based chain of trust.

Best practices

General rule: do not run untested software and/or software obtained from untrusted sources.

Deploy a trust server using one of the Docker registry servers from this list of Docker Security Tools.

Enforce mandatory digital-signature verification for any image that is uploaded to or run on your systems.
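For manual spot checks, recent Docker CLI versions ship a docker trust subcommand; for example, the following prints the signature data for an image (shown here against the official alpine image; exact output varies by version):

# docker trust inspect --pretty alpine:latest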

Examples

Deploying a full-fledged trust server is beyond the scope of this article, but you can start signing your images right now.

If you don’t have a Docker Hub account yet, create one.

Create a directory containing a simple Dockerfile with the following contents:

# cat Dockerfile
FROM alpine:latest

Build the image:

# docker build -t <youruser>/alpineunsigned .

Log in to your Docker Hub account and upload the image:

# docker login
[…]
# docker push <youruser>/alpineunsigned:latest

Enable Docker trust verification:

# export DOCKER_CONTENT_TRUST=1

Now try to pull the image you just uploaded:

# docker pull <youruser>/alpineunsigned

You should get the following error:

Using default tag: latest
Error: remote trust data does not exist for docker.io/<youruser>/alpineunsigned:
notary.docker.io does not have trust data for docker.io/<youruser>/alpineunsigned

With DOCKER_CONTENT_TRUST enabled, rebuild the image; now it will be signed by default:

# docker build --disable-content-trust=false -t <youruser>/alpinesigned:latest .

Now you can push and pull signed containers without any security warnings. The first time you push a trusted image, Docker creates a root key for you. You will also need a repository key for the image. In both cases, you will be asked to choose a passphrase.
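For example, pushing the image built above will trigger the signing flow (the passphrase prompts are omitted here):

# docker push <youruser>/alpinesigned:latest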

Your private keys are saved in the ~/.docker/trust directory; restrict access to them and create a backup copy.
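As a minimal sketch, assuming the default key store location, you could tighten permissions and archive the private keys (adapt the backup destination to your environment):

# chmod -R go-rwx ~/.docker/trust/private
# tar -czf docker-trust-keys-backup.tar.gz -C ~/.docker trust/private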

DOCKER_CONTENT_TRUST is an environment variable, so it will disappear once your terminal session ends. However, trust verification should be implemented at every stage of the pipeline – from building images and pushing them to registries to pulling and running them on servers.
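One common way to make the setting persistent is to export it from your shell’s startup file (a sketch, assuming Bash; other shells have equivalents):

# echo 'export DOCKER_CONTENT_TRUST=1' >> ~/.bashrc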

Resource abuse

In general, containers are far more numerous than virtual machines. They are lightweight, which makes it possible to run many containers even on very modest hardware. That is certainly an advantage, but the flip side is serious competition for host resources. Software bugs, design mistakes, and attacks can all lead to denial of service. To prevent it, you must properly configure resource limits.

Best practices

By default, most containerization systems ship with resource limits disabled. In production, however, configuring them is essential. We recommend adhering to the following principles:

  1. Use the resource-limiting features built into the Linux kernel and/or your containerization system.
  2. Stress-test the system before putting it into production. This involves both synthetic benchmarks and artificially generated “real-world” traffic. Stress testing is vital for determining normal and peak workloads.
  3. Deploy a Docker monitoring and alerting system. In case of resource abuse (malicious or not), you will surely prefer a timely warning to “crashing into the wall at full speed”; a minimal alerting sketch follows this list.
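As a minimal illustration of the alerting idea, the one-liner below prints a warning for any container using more than 90% of its memory limit (the threshold is arbitrary; a production setup would rely on a dedicated monitoring stack):

# docker stats --no-stream --format "{{.Name}} {{.MemPerc}}" \
    | awk '{ sub(/%/, "", $2); if ($2 + 0 > 90) print "ALERT: " $1 " at " $2 "% memory" }'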

Examples

Control groups (cgroups) are a Linux kernel mechanism for restricting the access of processes and containers to system resources. Some limits can be controlled directly from the Docker command line:

# docker run -it --memory=2G --memory-swap=3G ubuntu bash

This command sets a 2 GB limit on the memory available to the container. To verify the limit, you can run a load simulator such as the stress program, available in the Ubuntu repositories.
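If stress is not already present in the container image, install it first (a sketch; it assumes the Ubuntu package repositories are reachable from inside the container):

root@e05a311b401e:/# apt-get update && apt-get install -y stress

Then start workers that together request more memory than the limit allows: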

root@e05a311b401e:/# stress -m 4 --vm-bytes 8G

In the program’s output, you will see a ‘FAILED’ line.

Lines similar to the following should appear in the host’s syslog:

Aug 15 12:09:03 host kernel: [1340695.340552] Memory cgroup out of memory: Kill process 22607 (stress) score 210 or sacrifice child
Aug 15 12:09:03 host kernel: [1340695.340556] Killed process 22607 (stress) total-vm:8396092kB, anon-rss:363184kB, file-rss:176kB, shmem-rss:0kB

With docker stats, you can check the current memory consumption against the configured limits; a quick snapshot command follows. In Kubernetes, the pod definition lets you reserve the resources an application needs for normal operation and also set hard limits, as in the manifest fragment below.
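For a quick snapshot, assuming at least one container is running (--no-stream prints a single sample instead of a live view):

# docker stats --no-stream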

[...]
    - name: wp
      image: wordpress
      resources:
        requests:
          memory: "64Mi"
          cpu: "250m"
        limits:
          memory: "128Mi"
          cpu: "500m"
[...]

Stay tuned for our next articles about Docker security!