3.3 Docker Datacenter LAB – Installing and Configuring Docker DC – Part IV

Howdy stranger. This is the 4th instalment of the blog series where I am building a [smaller enterprise version of a] Docker DataCenter. All the prep work has been completed, so now I am ready to start working on Docker Datacenter and its components. So let's jump to it.
Going back to what Docker DataCenter is: an integrated solution of open source software for running your own Containers as a Service infrastructure, aka CaaS. The major components are Docker Trusted Registry [DTR], Universal Control Plane [UCP] and the CS Docker Engine. DTR is an on-prem registry server used to store your images, which are in a way the blueprints for running containers. It is possible to configure DTR to use 3rd party pluggable storage drivers; I am going to utilize OpenStack Swift, for example, to store my images. It also has a nice GUI and built-in metrics and logging capabilities. Next we have UCP, the enterprise citizen for deployment and management of CaaS, with features like LDAP/AD integration and High Availability [HA]. And then there is the CS Docker Engine, the glue that makes all of this possible. One of the newer features is Docker Content Trust, which gives you the ability to verify both the integrity and the publisher of all the data received from a registry over any channel. Another component under the covers is Docker Swarm, the clustering software that uses the RAFT protocol for communication and basically makes all of your UCP worker nodes look like a single virtual UCP server.
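
Docker Content Trust, by the way, is switched on from the client side with a single environment variable. A minimal sketch, assuming the tag you pull actually has signing data published for it [the image name below is just a placeholder]:

# enable content trust for this shell; pulls and pushes now require signed tags
export DOCKER_CONTENT_TRUST=1
# this pull only succeeds if the tag carries valid signature data
docker pull myregistry.example.com/admin/signed-image:latest
# switch it back off when done experimenting
unset DOCKER_CONTENT_TRUST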

The secrets of RAFT explained here – http://thesecretlivesofdata.com/raft/.

Worth noting is that neither DTR nor UCP offers built-in external load balancing. So when building out the HA components, an important task in the enterprise is to put a scalable and redundant north-side load balancer in front of them to serve as the single intake point. One of my next posts will actually show how to configure a load balancer for DTR/UCP.


Process, can I checkmark everything 🙁

  • Install and configure 8x RHEL7.2 instances on OpenStack [done in previous posts]
  • Install CS Engine on ALL 8 instances, referred to as nodes going forward. [done in previous posts]
  • On nodes 1-8 install UCP
    • ucp install
      • # installs the UCP manager
      • [node 1]
    • ucp join –replica
      • # builds UCP replicas
      • [nodes 2-3]
    • ucp join
      • # adds UCP workers
      • [nodes 4-8]
  • On nodes 4-6 install DTR
    • dtr install
      • # builds DTR master
      • [must be part of UCP cluster]
      • [node 4]
    • dtr join
      • # builds replicas
      • [must be part of UCP cluster]
      • [node 5-6]
  • On UCP/DTR configure AD for Auth.
  • Exchange certificates and build trust between components.
  • Build Load Balancers and update DNS
As I stated before, all of the process in this post is fully automated, but for clarity here are the manual steps.

UCP = Universal Control Plane, but HA please

Let's install UCP on docker-lab-1 [to reuse this in your environment just change the VARIABLES; the commands stay the same]:

# Install UCP on docker-lab-1
# docker_subscription.lic is your downloaded enterprise license
docker run --rm -it --name ucp \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /home/cloud-user/docker_subscription.lic:/home/cloud-user/docker_subscription.lic \
  docker/ucp install -i \
    --host-address docker-lab-1
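
Once the installer finishes, I like to sanity check that the controller actually answers before moving on. A quick check, assuming UCP is listening on the default HTTPS port [the _ping endpoint is what UCP itself uses for health checks]:

# quick health check of the new controller
curl -k https://docker-lab-1/_ping

# the UCP system containers should now be up and running
docker ps --filter name=ucp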

# Backup UCP config
# ...creates a tar file with certs and keys
# ...that I will use to build 2 replicas
# ...on nodes 2 and 3
docker run --rm -i --name ucp \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /home/cloud-user/docker_subscription.lic:/home/cloud-user/docker_subscription.lic \
  docker/ucp backup \
    --interactive \
    --root-ca-only \
    --passphrase ${PASSPHRASE} > ~/ucp1-backup.tar
# Copy the created ucp1-backup.tar file to docker-lab-2 and docker-lab-3
scp -i docker-key1.pem ucp1-backup.tar cloud-user@docker-lab-2:
scp -i docker-key1.pem ucp1-backup.tar cloud-user@docker-lab-3:
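
For completeness, the ${PASSPHRASE} referenced above has to be exported before running the backup, and the exact same value is needed again on the replicas during restore. Something along these lines, with an obviously made-up value:

# the passphrase encrypts the backup; keep it around for the restore step
export PASSPHRASE='pick-something-long-and-random'
# sanity check that the backup is non-empty before copying it around
ls -lh ~/ucp1-backup.tar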

 

I'll join nodes 2 and 3 as controller replicas.
[Change the exported VARIABLES to match your environment; the commands stay the same.]
# join the Swarm cluster as a replica
# ...HINT: one way to get the fingerprint is from the GUI, clicking Add Node under the Nodes menu
docker run --rm -it --name ucp \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /home/cloud-user/docker_subscription.lic:/home/cloud-user/docker_subscription.lic \
  -v ${BACKUP_PATH}/ucp1-backup.tar:${BACKUP_PATH}/ucp1-backup.tar \
  docker/ucp join \
    --admin-username ${ADMIN_USERNAME} \
    --admin-password ${ADMIN_PASSWORD} \
    --host-address ${HOST_ADDRESS} \
    --url ${URL} \
    --fingerprint ${FINGERPRINT} \
    --replica \
    --passphrase ${PASSPHRASE}

# Restore keys and certs to the other controllers
# ...in case the primary node goes offline
# HINT! ...download the client bundle, or look below for how to get it from the CLI
# HINT! ...run eval $(< env.sh) and capture/script your node IDs
export ID=...
docker run --rm -i --name ucp \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker/ucp restore \
    --root-ca-only \
    --id ${ID} \
    --passphrase ${PASSPHRASE} < ${BACKUP_PATH}/ucp1-backup.tar
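
If you prefer to stay on the CLI, both values mentioned in the hints above can be pulled from the controller itself. A sketch, assuming the docker/ucp image in use still ships the fingerprint and id subcommands [they did in the UCP 1.x tooling]:

# print the fingerprint of the controller CA [run against docker-lab-1]
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock docker/ucp fingerprint

# print the ID of the UCP components running on this engine [run on the node being restored]
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock docker/ucp id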

 

To finish the UCP HA setup I'll have each node advertise its state in the cluster:
# Run on each node, one at a time
sudo docker run --rm -it \
  --name ucp \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker/ucp engine-discovery \
  --update && \
  sudo systemctl daemon-reload && \
  sudo systemctl restart docker && \
  sudo systemctl status -l docker
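
Engine discovery writes the cluster-store settings into the engine configuration, which is why the daemon restart is needed. After the restart it is worth confirming the engine picked the settings up; a hedged check, assuming the default config location of /etc/docker/daemon.json:

# the discovery / cluster-store settings should now be in the daemon config
sudo cat /etc/docker/daemon.json

# the engine should report a cluster store once it is back up
docker info | grep -i cluster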

 

I’ll add all remaining nodes to the cluster.

 

Execute on nodes 4 to 8.
[Change the exported VARIABLES to match your environment; the commands stay the same.]
docker run --rm -it --name ucp \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker/ucp join \
    --url ${URL} \
    --admin-username ${ADMIN_USERNAME} \
    --admin-password ${ADMIN_PASSWORD} \
    --fingerprint ${FINGERPRINT} \
    --san ${SAN}
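
For reference, the values I feed into those VARIABLES look roughly like this in my lab [all placeholders, adjust to your own environment]:

# illustrative values only
export URL=https://docker-lab-1
export ADMIN_USERNAME=admin
export ADMIN_PASSWORD=MYPASSWORD
export FINGERPRINT='the CA fingerprint shown in the GUI Add Node screen'
export SAN=docker-lab-4.example.com   # extra DNS name to put into the node certificate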

Bring Registry to the house, yes behind the firewall

Let me first get the client bundle using the CLI, as the cloud-user:
[Change the exported VARIABLES to match your environment; the commands stay the same.]
export URL=https://docker-lab-1
# get the root CA
# save it as a backup because the client bundle will overwrite the ucp-ca.pem file
curl -k ${URL}/ca > ucp-ca-backup.pem

# get the auth_token
AUTH_TOKEN=$(curl -sk -d '{"username":"admin","password":"MYPASSWORD"}' ${URL}/auth/login | \
awk -F: ' { print $2 }' | sed -e "s/\"//g" -e "s/}//g")
 
# download the UCP client bundle
curl -k -H "Authorization: Bearer $AUTH_TOKEN" ${URL}/api/clientbundle -o bundle.zip

# unzip the bundle
unzip bundle.zip

# apply the environment
eval $(< env.sh)
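
The env.sh from the bundle simply points the local Docker client at the UCP controller over TLS, so after sourcing it every docker command talks to the cluster instead of the local engine. Roughly speaking it exports something like the following [illustrative only, the real file ships with the bundle]:

# what env.sh effectively sets [illustrative]
export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH=$(pwd)
export DOCKER_HOST=tcp://docker-lab-1:443

# confirm the client now talks to UCP and not the local engine
docker info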

 

Now, as with UCP, I'll first build the DTR master and then add two replicas.
Note that here I am using the join subcommand, not restore as was the case with UCP.
Installing DTR on node 4:
# on docker-lab-4
docker run -it --rm \
  docker/dtr install \
    --ucp-url ${UCP_URL} \
    --ucp-node ${UCP_NODE} \
    --dtr-external-url ${DTR_EXTERNAL_URL} \
    --ucp-username ${UCP_USERNAME} \
    --ucp-password ${UCP_PASSWORD} \
    --ucp-ca "$(cat ucp-ca.pem)"
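
Again, only the VARIABLES need adapting. In my lab they would look something like this [placeholder values; --ucp-node is the name UCP knows the target node by]:

# illustrative values for the DTR install above
export UCP_URL=https://docker-lab-1:443
export UCP_NODE=docker-lab-4                   # the UCP node DTR gets installed on
export DTR_EXTERNAL_URL=https://docker-lab-4   # how clients will reach this DTR
export UCP_USERNAME=admin
export UCP_PASSWORD=MYPASSWORD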

 

Building 2 DTR replicas, on nodes 5 and 6:
# on docker-lab-5 and docker-lab-6
docker run -it --rm \
  docker/dtr join \
    --ucp-url ${UCP_URL} \
    --ucp-node ${UCP_NODE} \
    --existing-replica-id ${EXISTING_REPLICA_ID} \
    --ucp-username ${UCP_USERNAME} \
    --ucp-password ${UCP_PASSWORD} \
    --ucp-ca "$(cat ucp-ca.pem)"
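
The ${EXISTING_REPLICA_ID} is the 12-character ID that the install step printed for the first replica on docker-lab-4. If you did not note it down, it is also baked into the DTR container names, so something like this should recover it [a hedged one-liner; dtr-<component>-<replica-id> is the naming convention the DTR 2.x releases I am using follow]:

# run on docker-lab-4: extract the replica ID from the DTR container names
docker ps --format '{{.Names}}' | grep '^dtr-nginx-' | sed 's/^dtr-nginx-//'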

 

From each Docker client [all UCP nodes and all clients accessing the trusted registry] I should be able to test push/pull of images.
This should fail initially, as I have not updated the certificates yet.

 

Before configuring the Docker Engine, I'll try to log in to the trusted registry:
docker login -u admin -p MYPASSWORD docker-lab-4
Error response from daemon: Get https://docker-lab-4/v1/users/: x509: certificate signed by unknown authority

 

Configuring the Docker Engine on nodes 1-8:
sudo curl -k https://docker-lab-4/ca -o /etc/pki/ca-trust/source/anchors/dtr.crt
sudo update-ca-trust
sudo /bin/systemctl restart docker
docker login -u admin -p MYPASSWORD docker-lab-4
Login Succeeded

 

Done. Oh Del Boy, no I am not. So let me check my pulse and recap what I've done so far in this post.
I have 3 UCP controllers in HA, 2 dedicated UCP worker nodes and 3 DTR [Docker Trusted Registry] replicas in HA deployed.
Now, doing the remaining steps that I planned for this LAB via the CLI is somewhat complicated with the current version of the software. I would have to edit the YAML file that defines, for example, the Docker Registry, modify it to use Swift on my local OpenStack deployment, and then update the running configuration with those changes; the CLI options are not there yet. Some of this is actually being addressed in upcoming DDC releases, but for now I will point you/myself in the right direction.

 

What is left to be done:
  • Integrate UCP with DTR
    • https://docs.docker.com/ucp/configuration/dtr-integration/
    • Securely store/manage the Docker images stored in local repo.
    • Note! Get UCP cluster CA certificate
      # run from node 1
      docker run --rm --name ucp \
        -v /var/run/docker.sock:/var/run/docker.sock \
        docker/ucp dump-certs \
          --cluster --ca > ucp-cluster-ca.pem
    • all UCP nodes [1-8] must have the DTR certificate of each DTR replica [see the loop sketch after this list]
      sudo mkdir -p /etc/docker/certs.d/docker-lab-4
      sudo cp dtr-ca-4.pem /etc/docker/certs.d/docker-lab-4/ca.crt   # the engine expects the CA as ca.crt in certs.d
  • Build Load Balancer for UCP and DTR
  • Update DNS with FQDN for LB for UCP, and DTR
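
As promised above, here is a rough loop for distributing the DTR CA certificates to every node. It assumes ssh access with the same docker-key1.pem key used earlier and that each DTR replica serves its CA on /ca like the master does; adjust names to your environment:

# grab the CA from each DTR replica, then push it to every node in the cluster
for dtr in docker-lab-4 docker-lab-5 docker-lab-6; do
  curl -ks https://${dtr}/ca -o dtr-ca-${dtr##*-}.pem
done

for node in docker-lab-{1..8}; do
  for dtr in docker-lab-4 docker-lab-5 docker-lab-6; do
    ssh -i docker-key1.pem cloud-user@${node} "sudo mkdir -p /etc/docker/certs.d/${dtr}"
    scp -i docker-key1.pem dtr-ca-${dtr##*-}.pem cloud-user@${node}:/tmp/
    ssh -i docker-key1.pem cloud-user@${node} \
      "sudo cp /tmp/dtr-ca-${dtr##*-}.pem /etc/docker/certs.d/${dtr}/ca.crt"
  done
done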

 

Test Registry

# from every UCP cluster node
eval $(< env.sh)
docker info
docker pull hello-world
# ...where X is an incrementing tag number
docker tag hello-world:latest docker-lab-4/admin/hello-world:X
docker push docker-lab-4/admin/hello-world:X
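
To close the loop, the push can be verified by removing the local copy and pulling the tag back from DTR [same placeholder tag X as above]:

# remove the local copies, then pull the image back from the trusted registry
docker rmi docker-lab-4/admin/hello-world:X hello-world:latest
docker pull docker-lab-4/admin/hello-world:X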

 

What now

The major shortcoming of the above configuration is the lack of a load balancing and proxy solution for UCP and DTR. When UCP is deployed in HA, a controller failure will not have any impact on your UCP cluster, either from the UCP GUI or the Docker client perspective. But let's assume that you are not using slow DNS round-robin service discovery and you have statically mapped a DNS record to the primary UCP controller. If that primary controller goes down, the UCP service will be unavailable until you manually intervene. For that reason, it is recommended [read: mandatory, or at least strongly desired] to deploy load balancer(s) to tackle the issue. The same applies to DTR. More about that, and testing, in the next post...
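
To give a flavour of what that next post will cover, a bare-bones TCP pass-through in front of the UCP controllers might look something like the HAProxy fragment below [hypothetical config with my lab hostnames; TLS stays terminated on the UCP nodes themselves and the health check uses UCP's own _ping endpoint]:

# hypothetical haproxy.cfg fragment for a UCP pass-through load balancer
cat > ucp-haproxy.cfg <<'EOF'
frontend ucp_443
    mode tcp
    bind *:443
    default_backend ucp_controllers

backend ucp_controllers
    mode tcp
    option httpchk GET /_ping
    server docker-lab-1 docker-lab-1:443 check check-ssl verify none
    server docker-lab-2 docker-lab-2:443 check check-ssl verify none
    server docker-lab-3 docker-lab-3:443 check check-ssl verify none
EOF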
