03 August 2015

Docker: Initial Setup for jenkins-slave0

For an overview of the workflow and of the architecture in this use-case, see: Docker: a jenkins-slave0 strategy

Architecture #1 and #2: Nodes & Links

I assume you already know how to connect to your SCM, so the only links to be clarified are the ones below:

Nodes and Links

For Architecture #1, the following section, “Setup”, will talk about:

  1. Versions
  2. Shared folders
  3. Direct SSH access to “VM-with-docker” from the outside
  4. Docker installation
  5. nginx configuration
  6. Direct SSH access to “jenkins-slave0” from the outside
  7. Docker registry

Architecture 1: Nodes and Links

For Architecture #2, where the Jenkins VM runs on the same host as “VM-with-docker”, Jenkins connects to the containers via SSH and uses the Docker Plugin. The following sections cover:

  8. Docker REST API via TCP
  9. SSH access to the Docker containers
  10. Jenkins Docker Plugin

Architecture 2: Nodes and Links

There’s also an additional section, “X. Things I don’t cover”.

Setup

I use a generic “myuser” for this procedure, but of course it must be replaced with a real user. In my case, it was “dandriana”.

1. Versions

We’re using CentOS 7.1.1503 on “VM-with-docker”, since Docker needs a 3.10+ Linux kernel.

Debian 8.1 on “Host 2”.

VirtualBox 4.3 installed on “Host 2”.

The following procedure installs Docker 1.7.0 on “VM-with-docker”.

nginx 1.6.2 — Note: Earlier versions, such as the 1.2.1 that ships with Debian 7.8, emit a “411 Length Required” error on POST requests with Transfer-Encoding: chunked. The problem disappears with nginx 1.3.9+.

Jenkins 1.6.23.

Docker Plugin 0.10.2 installed in Jenkins — This version doesn’t support Docker 1.7.1, hence the 1.7.0.

2. Shared folders

We create a custom /var/docker-registry directory on “Host 2” and make it a shared folder:

$ sudo mkdir /var/docker-registry
$ sudo chown myuser:myuser /var/docker-registry
$ VBoxManage sharedfolder add VM-with-docker --automount \
    --name docker-registry --hostpath /var/docker-registry

Then, on “VM-with-docker”:

$ sudo usermod -aG vboxsf myuser

Note: “myuser” may not exist yet on “VM-with-docker”. In that case, see the next section, “3. Direct SSH access to VM-with-docker from the outside”.

Log out of “VM-with-docker” and log back in, so that “myuser” picks up its membership in the “vboxsf” group.

Then, on “VM-with-docker”:

$ sudo ln -s /media/sf_docker-registry /var/docker-registry
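
A quick way to check that the shared folder actually works (a minimal sketch; it assumes the automount has kicked in, which may require a reboot of the VM): write a file on “Host 2” and read it back from “VM-with-docker”.

On “Host 2”:

$ echo "hello from Host 2" > /var/docker-registry/test.txt

On “VM-with-docker”:

$ cat /var/docker-registry/test.txt
hello from Host 2
$ rm /var/docker-registry/test.txt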

3. Direct SSH access to VM-with-docker from the outside

This SSH access is not mandatory, since every Docker command may be passed through its REST API (see below). However, I like to be able to connect to “VM-with-docker” via SSH.

This could be done with port forwarding, but I generally consider the host (“Host 2”) as a barrier from the outside: managing iptables, logging access, and so on. Therefore, I tend to make all connections to the VMs go through services running on the host.

The idea is this:

From my laptop, working as “myuser”, I must be able to run:

$ whoami
myuser
$ hostname
my-laptop
$ ssh docker@<Host 2>

And be immediately connected as “myuser” on “VM-with-docker”:

Login successful.
$ whoami
myuser
$ hostname
VM-with-docker

Here’s my setup:

  • A “docker” user is added to “Host 2”.
  • It has a key pair whose public key allows it to connect via SSH as “myuser” on “VM-with-docker”.
  • My public key from the outside (real user: “myuser”) is added to this “docker” user’s authorized_keys, prefixed with a forced SSH command that opens an SSH connection to “VM-with-docker”.

So, on “Host 2”:

$ sudo useradd -m docker
$ sudo su - docker
$ ssh-keygen -N '' -f /home/docker/.ssh/id_rsa

Note: Default file names (id_rsa / id_rsa.pub) and no passphrase.

$ cat .ssh/id_rsa.pub
ssh-rsa AAAAB3Nza...9nG8yMNCDl docker@<Host 2>

On “VM-with-docker”, connected as “myuser”, add this public key to /home/myuser/.ssh/authorized_keys.
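
For instance, still connected as “myuser” (a minimal sketch; it assumes ~/.ssh may not exist yet):

$ mkdir -p ~/.ssh && chmod 700 ~/.ssh
$ cat >> ~/.ssh/authorized_keys    # paste the docker@<Host 2> public key, then Ctrl-D
$ chmod 600 ~/.ssh/authorized_keys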

Check the connection: As “docker” on “Host 2”, you should be able to do this:

$ ssh -i /home/docker/.ssh/id_rsa myuser@VM-with-docker

Now, from the real world, get your “myuser” public key:

ssh-rsa AAAAB3S31...T8/vkhv9uP myuser@my-laptop

Add it to /home/docker/.ssh/authorized_keys on “Host 2”, but with the following command as a prefix:

command="/home/docker/docker_ssh.sh myuser" ssh-rsa AAAAB3S31...T8/vkhv9uP myuser@my-laptop

And now for the “docker_ssh.sh” script to be placed in /home/docker on “Host 2”:

#!/bin/sh
# Forced command: log who is connecting, then bounce the session to
# "VM-with-docker" as the user passed in ${1} (set in authorized_keys).

PROGRAM=`echo "${0}" | grep -oP "[^/]*$"`
CALLER="${1}"
if [ -z "${CALLER}" ]; then
    echo "*** Unknown SSH User. Exiting." >&2
    exit 1
fi

# The first field of SSH_CONNECTION is the client's IP address.
CLIENT_HOST=`echo "${SSH_CONNECTION}" | grep -oP "^[^\s]*"`
logger "${PROGRAM}: caller=${CALLER} host=${CLIENT_HOST} command=${SSH_ORIGINAL_COMMAND}"
ssh -t -l "${CALLER}" VM-with-docker ${SSH_ORIGINAL_COMMAND}

Don’t forget to make the file executable:

$ chmod +x /home/docker/docker_ssh.sh

Now, you can connect from the outside as “docker” with your “myuser” public key:

$ ssh -i /home/myuser/.ssh/id_rsa docker@<Host 2>

Logs are written to /var/log/syslog on “Host 2”.

4. Docker installation

On “VM-with-docker”, create a /etc/yum.repos.d/docker.repo file with this content:

[dockerrepo]
name=Docker Repository
baseurl=https://yum.dockerproject.org/repo/main/centos/7
enabled=1
gpgcheck=1
gpgkey=https://yum.dockerproject.org/gpg

Now install via yum:

$ sudo yum update
$ sudo yum install docker-engine

Enable and start the service:

$ sudo systemctl enable docker
$ sudo systemctl start docker

Initially, you need sudo to run docker commands. To get rid of that limitation, add your user to the “docker” group:

$ sudo usermod -aG docker myuser

Then log out and log back in, so the change takes effect.

Now, you can try Docker:

$ docker run hello-world
Hello from Docker.
... 

$ docker images -a
REPOSITORY    TAG      IMAGE ID       CREATED         VIRTUAL SIZE
hello-world   latest   91c95931e552   3 months ago    910 B
<none>        <none>   a8219747be10   3 months ago    910 B

$ docker ps -a
CONTAINER ID IMAGE       COMMAND  CREATED   STATUS                    PORTS NAMES
2e5b012c0179 hello-world "/hello" 38 m. ago Exited (0) 38 minutes ago       desp…

Because it makes networking inside the containers a lot easier, we stop the firewall on “VM-with-docker” and restart Docker:

$ sudo systemctl stop firewalld

$ sudo systemctl restart docker

5. nginx configuration

The goal here is to expose Docker’s REST API to the outside. nginx acts as a reverse proxy in front of Docker’s TCP socket (see “8. Docker REST API via TCP” for that part).

Bear in mind that with this setup, the HTTP or HTTPS connection will not be secured by authentication. In my case, I protect the access via restrictions on IP Addresses.

Also, encrypted traffic would be nice, but the Docker Plugin only supports HTTP, not HTTPS. In practice that’s not a problem, since the interesting use case for the Docker Plugin is when Jenkins and Docker are on the same host, so the HTTP traffic with Docker can stay on their internal, host-only network.

That said, in my scenario I wanted HTTPS, so we need an SSL certificate. Let’s create a self-signed one:

$ sudo openssl req -subj '/CN=docker.my.domain/' -x509 -days 365 \
  -batch -nodes -newkey rsa:2048 \
  -keyout /etc/ssl/private/docker.my.domain.key \
  -out    /etc/ssl/certs/docker.my.domain.crt

Here, the “docker.my.domain” CN (Common Name) is a domain name that points to “Host 2”; it will be the server name of the HTTP server provided by nginx.

Note: You may use /etc/ssl/ on Debian 7, but /etc/pki/tls/ on CentOS 7.
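
To double-check the certificate that was just generated (a quick sanity check):

$ openssl x509 -in /etc/ssl/certs/docker.my.domain.crt -noout -subject
subject= /CN=docker.my.domain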

We will make Docker listen on TCP port 2375, and nginx on port 443, the standard HTTPS port.

I use the following configuration:

server {
    listen 443 ssl;
    server_name docker.my.domain;

    allow my.ip.1; # Jenkins IP
    allow my.ip.2; # Workstation IP
    allow my.ip.3; # Some other IP
    deny all;

    client_max_body_size 40m;

    ssl on;
    ssl_certificate     /etc/ssl/certs/docker.my.domain.crt;
    ssl_certificate_key /etc/ssl/private/docker.my.domain.key;

    ssl_session_timeout 5m;

    ssl_protocols SSLv3 TLSv1;
    ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv3:+EXP;
    ssl_prefer_server_ciphers on;

    access_log /var/log/nginx/docker-access.log;
    error_log  /var/log/nginx/docker-error.log;

    location / {
        proxy_headers_hash_bucket_size 256;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
        proxy_pass http://<VM-with-docker>:2375;
    }
}
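
As a reminder of how to enable it (a sketch, assuming the Debian sites-available/sites-enabled layout and that the file was saved as /etc/nginx/sites-available/docker; with a conf.d/ layout, just drop the file there instead):

$ sudo ln -s /etc/nginx/sites-available/docker /etc/nginx/sites-enabled/docker
$ sudo nginx -t        # check the configuration
$ sudo nginx -s reload # reload without dropping connections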

6. Direct SSH access to “jenkins-slave0” from the outside

In the particular case of Architecture #1, when there’s only one container running, it may be interesting to be able to connect to it via SSH from the outside.

The approach is the same as in “3. Direct SSH access to VM-with-docker from the outside”.

From my laptop, working as “myuser”, I want to be able to run:

$ whoami
myuser
$ hostname
my-laptop
$ ssh jenkins_slave0@<Host 2>

And be immediately connected as “jenkins” on the “jenkins-slave0” container:

Login successful.
$ whoami
jenkins
$ hostname
<jenkins-slave0>

This can be done with port forwarding at the “Host 2” host level, but I prefer to have some kind of indirection and some logs.

First, we need to be able to access the container from its Docker host (“VM-with-docker”) without having to search for its local IP.

This is done by port forwarding at the Docker level. We started our container via this command line:

$ docker run -d -p 22000:22 avantage-compris/jenkins-slave0

That means connecting to “VM-with-docker” on port 22000 gives me SSH access to the container on its port 22.

Then, we need to be able to transparently access the “VM-with-docker” environment from “Host 2”. For that, we create a “jenkins_slave0” user on “Host 2”, generate a key pair for it (with no passphrase), and… add its public key to our Docker image via the Dockerfile. Remember the “jenkins.pub” file? This new public key goes next to it, as “jenkins_slave0.pub”.
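
Creating that user and key pair on “Host 2” mirrors what we did for the “docker” user (a sketch, with the same defaults: default file names, no passphrase). The content of id_rsa.pub is what goes into “jenkins_slave0.pub” in the Docker build context:

$ sudo useradd -m jenkins_slave0
$ sudo su - jenkins_slave0
$ mkdir -p /home/jenkins_slave0/.ssh
$ ssh-keygen -N '' -f /home/jenkins_slave0/.ssh/id_rsa
$ cat /home/jenkins_slave0/.ssh/id_rsa.pub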

In case you’re wondering how to update the Dockerfile for that:

# User: jenkins
RUN useradd -m -s /bin/bash jenkins
ADD [ "jenkins.pub", "/home/jenkins/" ]
ADD [ "jenkins_slave0.pub", "/home/jenkins/" ]
RUN mkdir /home/jenkins/.ssh/; \
    cat /home/jenkins/jenkins.pub >> /home/jenkins/.ssh/authorized_keys; \
    cat /home/jenkins/jenkins_slave0.pub >> /home/jenkins/.ssh/authorized_keys; \
    chown -R jenkins:jenkins /home/jenkins/.ssh/; \
    chmod 700 /home/jenkins/.ssh/; \
    chmod 600 /home/jenkins/.ssh/authorized_keys
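
For the record, rebuilding the image and replacing the container could look like this (a sketch: the image name is the one used earlier, the build is run from the directory holding the Dockerfile, and the container ID is whatever docker ps reports):

$ docker build -t avantage-compris/jenkins-slave0 .
$ docker rm -f <old container ID>
$ docker run -d -p 22000:22 avantage-compris/jenkins-slave0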

After updating the Docker image (i.e. updating the Dockerfile and rebuilding the image), removing the old container, and starting a new one (don’t forget the -p 22000:22 option), check the connection from “Host 2”:

$ ssh -i /home/jenkins_slave0/.ssh/id_rsa -l jenkins -p 22000 <VM-with-docker>

Then we need a “jenkins_slave0_ssh.sh” script to be placed in /home/jenkins_slave0 on “Host 2”:

#!/bin/sh
# Forced command: log who is connecting, then bounce the session to the
# "jenkins-slave0" container via port 22000 on "VM-with-docker".

PROGRAM=`echo "${0}" | grep -oP "[^/]*$"`
CALLER="${1}"
if [ -z "${CALLER}" ]; then
    echo "*** Unknown SSH User. Exiting." >&2
    exit 1
fi

# The first field of SSH_CONNECTION is the client's IP address.
CLIENT_HOST=`echo "${SSH_CONNECTION}" | grep -oP "^[^\s]*"`
logger "${PROGRAM}: caller=${CALLER} host=${CLIENT_HOST} command=${SSH_ORIGINAL_COMMAND}"
ssh -t -l "${CALLER}" -p 22000 VM-with-docker ${SSH_ORIGINAL_COMMAND}

It’s not exactly the same file as the “docker_ssh.sh” we saw before, because now we use port 22000: this time we want to end up connected to the Docker container, not to the Docker host.

Don’t forget to make the file executable:

$ chmod +x /home/jenkins_slave0/jenkins_slave0_ssh.sh

Last but not least, add the real user “myuser”’s public key to /home/jenkins_slave0/.ssh/authorized_keys on “Host 2”, with the same kind of command= prefix as in section 3, this time pointing to jenkins_slave0_ssh.sh with “jenkins” as the argument.

Now, you can connect from the outside as “jenkins_slave0” with your “myuser” public key:

$ ssh -i /home/myuser/.ssh/id_rsa jenkins_slave0@<Host 2>

Logs are written to /var/log/syslog on “Host 2”.

7. Docker registry

For our scenario, this step is optional.

The Docker registry is actually a container, launched from the “registry:2” image.

To start it:

$ docker run -d -p 5000:5000 --restart=always \
    -e REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY=/var/lib/docker-registry \
    -v /var/docker-registry:/var/lib/docker-registry \
    --name registry registry:2
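
To check that the registry works, you can tag an existing image against it, push it, and list the catalog (a sketch; “localhost:5000” is valid when run on “VM-with-docker” itself, and if Docker complains about an insecure registry you may need the --insecure-registry daemon option):

$ docker tag hello-world localhost:5000/hello-world
$ docker push localhost:5000/hello-world
$ curl http://localhost:5000/v2/_catalog
{"repositories":["hello-world"]}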

8. Docker REST API via TCP

This API is absolutely needed for the Docker Plugin.

Because we need Docker to listen on a TCP socket, we modify /usr/lib/systemd/system/docker.service on “VM-with-docker”.

Change:

ExecStart=/usr/bin/docker -d -H fd://

into:

ExecStart=/usr/bin/docker -d -H fd:// -H tcp://<VM IP Address>:2375

where “VM IP Address” is the address of “VM-with-docker”, for instance “192.168.56.101”.

Note: 2375 is an arbitrary port; I chose it because it appears a lot in samples and tutorials.

Then reload the systemd configuration and restart Docker:

$ sudo systemctl daemon-reload
$ sudo systemctl restart docker

If you use firewalld, you should allow connections on port :2375 for the host-only network where the VM can be found. For instance:

$ sudo firewall-cmd --permanent --zone=internal --add-source=192.168.56.0/24
$ sudo firewall-cmd --permanent --zone=internal --add-port=2375/tcp
$ sudo firewall-cmd --reload
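
To double-check the zone afterwards (it should list the 192.168.56.0/24 source and the 2375/tcp port):

$ sudo firewall-cmd --zone=internal --list-all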

Then, try to connect from “Host 2”:

$ curl http://<VM IP Address>:2375/info

This URL returns a JSON document equivalent to “docker info”.
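
The rest of the remote API is reachable the same way. For instance, the equivalent of “docker ps”:

$ curl http://<VM IP Address>:2375/containers/json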

Note that it’s not secured at all: No SSL, no authentication. That’s why in my scenario I put nginx between this entry point and the internet, plus restrictions on IP Addresses.

If you use nginx as an HTTPS reverse proxy (see “5. nginx configuration”), check that nginx is correctly configured by running the following from a machine allowed to access it (you may have to declare “<Host 2> docker.my.domain” in your /etc/hosts file):

$ curl --insecure https://docker.my.domain/info

Where “docker.my.domain” points to “Host 2”, and is exactly the CN (Common Name) in the SSL certificate used by nginx.

9. SSH access to the Docker containers

In the case of the Docker Plugin, where Docker containers are started and their port 22 is mapped to random ports on “VM-with-docker” (-p 32768:22, -p 32769:22, etc.), the SSH access cannot be opened to the outside.

In that case, running on the same host “Host 2”, Jenkins just connects directly to “VM-with-docker” on ports 32768, 32769, etc.
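
To see which host port a given container’s port 22 was mapped to, “docker port” does the job (the container ID is whatever docker ps reports; the port shown here is just an example):

$ docker port <container ID> 22
0.0.0.0:32768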

In the case of a single container started manually with -p 22000:22 that you want to access from the outside (in which case you cannot use the Docker Plugin), see “6. Direct SSH access to “jenkins-slave0” from the outside”.

10. Jenkins Docker Plugin

In the Jenkins administration console, add a new cloud and select “Docker”.

Jenkins Config: Add a new cloud: Docker

Then enter the URL where Docker can be accessed. If you configured the REST API (see 8. Docker REST API via TCP), the URL is of the form: http://<VM-with-docker>:2375

Please note the Docker Plugin (0.10.2) doesn’t support HTTPS yet.

Also, it cannot work with Docker > 1.7.0.

The “Docker Version” field means “API Version”. Enter: 1.19

“Container Cap” is the maximum number of containers you allow this cloud to create.

Jenkins Docker configuration

A button allows you to test the connection to the API. It displays the version of Docker:

Jenkins Docker Version 1.7.0

There are no credentials to access Docker itself.

Then, we add a Docker Template: which image Jenkins will use, and the SSH credentials to log in to the container (they must match the “jenkins.pub” copied in by the Dockerfile; see: Docker: a jenkins-slave0 strategy).

“Instance Capacity” is the maximum number of containers you allow this cloud to create with this image.

Jenkins Docker Template configuration

Labels: With labels, you can force a job to be run on a container of a specific Docker image. Here’s my example with a “toto” label.

Label toto

Credentials: This is the private key associated with the “jenkins.pub” public key you added to the Docker image (see the Dockerfile at: Docker: a jenkins-slave0 strategy).

Credentials

The plugin allows you to see what Docker containers are started and what images are available:

  • Go to: Manage Jenkins, Docker

Jenkins running containers

  • In the list, click on a Docker server

Jenkins running containers

Jenkins gives you a view of all current slaves, including Docker containers currently used for jobs:

  • Go to: Manage Jenkins, Manage Nodes

Jenkins nodes

That’s it.

X. Things I don’t cover

  • Configuration files on “Host 2” and “VM-with-docker”, such as authorized_keys, id_rsa.pub, and docker_ssh.sh, are synced with a git repository. Simple files, such as docker_ssh.sh, are pulled directly from the git repository, with symbolic links pointing to them. Complicated files, such as authorized_keys (which requires specific permissions), are not pulled from the git repository; instead, a copy exists in the repository, and the two files (the one actually used by the system and the one stored in git) are compared from time to time.

  • nginx itself is installed inside a VM, with port forwarding from “Host 2”. (Note I didn’t put nginx inside a Docker container.)

  • iptables configuration.
