3.4 Docker Datacenter LAB – Configuring Docker DC part V – Load Balancer 4 UCP

/Load Balancing UCP service/

This is the fifth instalment in the series Installing and Configuring Docker DC [for Ops people].

Continuing the Docker story, a belated instalment I may say, the focus is on bringing a Load Balancer [LB] for UCP into the spotlight. I decided to build a single HAProxy virtual server to load balance the 3x UCP servers, so the service is not tied to any particular UCP IP but to the proxy instead. The proxy server's role is to make sure that an incoming request is redirected to the proper [read: next available in line] UCP server.

What constitutes a down service can be anything, from hardware failures and software issues to undesired OS or OpenStack changes on instances. As I have built 3 UCP servers, my configuration can tolerate a single UCP controller failure. If the proxy cannot reach a UCP server's IP, it will pass the request to the next available server. If a more resilient [read: enterprise] configuration is needed, one can add/build UCP server[s] on the fly by provisioning a new OpenStack instance. This should of course be automated, say with HEAT [Autoscaling feature], Puppet, or Ansible. If HEAT is used, set up min and max thresholds for each service and let Ceilometer monitor the rest for you, for example. With some scripting, a YAML file can contain all it needs to build a new instance of UCP. But that's beyond the scope of this post.
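Just to make the min/max threshold idea concrete, here is a rough sketch of what such a Heat resource could look like. Everything in it is illustrative, not a tested template: the resource name, image and flavor are made up, and a real setup would add scaling policies and Ceilometer alarms.

```yaml
# Hypothetical sketch only -- names, image and flavor are placeholders.
resources:
  ucp_group:
    type: OS::Heat::AutoScalingGroup
    properties:
      min_size: 3          # never fewer than 3 UCP controllers
      max_size: 5          # cap the fleet at 5 instances
      resource:
        type: OS::Nova::Server
        properties:
          image: rhel-7        # assumed image name
          flavor: m1.medium    # assumed flavor
```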

Also, when talking about HA, the obvious question is what happens if, or better when, my HAProxy goes down.

 

/SERVICE DOWN! PLEASE TRY AGAIN LATER./

When the service is considered critical, in this case HAProxy for UCP, we have to make sure that not only the UCP servers are configured for high availability but the load balancers as well. There is a difference in how a load-balancing service works compared to cluster software. Not to go into details, but let's just clarify that a load balancer's only job is to make sure a server is available before redirecting a client request. Clustering software, on the other hand, has a full set of features to maintain the life cycle and recovery of the underlying service[s] behind a VIP [virtual or service IP] and keep it in an active-passive configuration.

We can build a second HAProxy server and set up a clustered service with Pacemaker or InfoScale Availability to provide HA, with a new VIP as the service entry point. Just for completeness, you can use any clustering software, but for production use Pacemaker or Veritas InfoScale Availability, aka Veritas Cluster Server.

InfoScale Availability provides high availability and disaster recovery over any distance for your critical business services, including individual databases, custom applications, and complex multi-tier applications across physical and virtual environments.

As I have a single HAProxy, I do not have to worry about this for now.

/I see multiple UCP servers, which one to use?/

 

I have created a new instance, docker-lb1, for UCP load balancing and external service discovery. Specs: 2GB of memory, 2 VCPUs and a 10GB root disk.

Word of caution! As my VMs run in OpenStack I do not have to worry about a host firewall, but I have to make sure that ports 80 and 443 are open for ingress/egress traffic [security group rules].

Quick setup is below.

Red Hat subscription:

subscription-manager register --auto-attach
subscription-manager repos --disable=*
subscription-manager repos --enable=rhel-7-server-rpms
yum update -y

Docker UCP server certificates are created under:

/var/lib/docker/volumes/ucp-controller-server-certs/

Dump the certificate and key into a single file so HAProxy can recognize it:

cat server.crt server.key > server.pem
cp server.pem /etc/ssl/private/

Another option is to create a self-signed certificate or use an externally signed certificate.

Example of creating a self-signed certificate on the HAProxy server:

openssl req -new -newkey rsa:2048 -x509 -days 365 -nodes -keyout server.key -out server.crt
cat server.crt server.key > server.pem
cp server.pem /etc/ssl/private/
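If you script the certificate creation, the interactive prompts can be skipped with -subj. A minimal non-interactive sketch; the CN value is a made-up placeholder for your HAProxy/UCP hostname:

```shell
# Generate key + self-signed cert without prompts; CN is an example value.
openssl req -new -newkey rsa:2048 -x509 -days 365 -nodes \
  -keyout server.key -out server.crt -subj "/CN=ucp.example.local"
# Bundle for HAProxy and sanity-check the result.
cat server.crt server.key > server.pem
openssl x509 -in server.pem -noout -subject
```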

 

/etc/haproxy/haproxy.cfg

This is part of my HAProxy configuration.

frontend  myhttps
    bind *:80
    redirect scheme https if !{ ssl_fc }
    bind *:443 ssl crt /etc/ssl/private/server.pem
    monitor-uri   /_ping
    option httpclose
    option forwardfor
    default_backend ucp
backend ucp
    mode http
    balance roundrobin
    server docker-lab-1 10.236.13.26:443 check port 443 ssl
    server docker-lab-2 10.236.13.27:443 check port 443 ssl
    server docker-lab-3 10.236.13.28:443 check port 443 ssl
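To watch the round-robin and the health checks in action, HAProxy's built-in stats page can be enabled. A hedged fragment for haproxy.cfg; the port and URI here are my own choices, pick your own and restrict access appropriately:

```text
# Optional: web UI showing backend state (UP/DOWN) and traffic per server.
listen stats
    bind *:8080
    mode http
    stats enable
    stats uri /haproxy-stats
    stats refresh 10s
```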

Enable and start the HAProxy service:

systemctl enable haproxy && systemctl start haproxy
systemctl status -l haproxy
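The monitor-uri line in the frontend makes HAProxy itself answer GET /_ping with 200 OK, which gives any external health check a cheap target. A rough simulation of what such a probe sees, using a throwaway local web server in place of the real proxy; the port and paths are made up for the demo:

```shell
# Stand-in for the proxy: serve a /_ping file from a scratch directory.
mkdir -p /tmp/ping-demo
printf 'OK' > /tmp/ping-demo/_ping
(cd /tmp/ping-demo && exec python3 -m http.server 8099) >/dev/null 2>&1 &
SRV=$!
sleep 1
# What a health probe would do against the real proxy: GET /_ping.
curl -s http://127.0.0.1:8099/_ping > /tmp/ping-demo/response
kill "$SRV" 2>/dev/null
cat /tmp/ping-demo/response
```

Against the real setup the probe would simply be curl against the HAProxy address and the /_ping URI over HTTPS.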

/Test-tweak-test/

Rebooting docker-lab-1.

Service before the failure,

[screenshot]

during,

[screenshot]
[client request redirected to the next available node]
[screenshot]

and after the reboot,

[screenshot]

By using the HAProxy IP I have prevented service disruption.

/FQDN? Of course./

One last UCP configuration improvement is to make a DNS entry in AD [Active Directory] so that external service discovery uses an FQDN instead of an IP.

OK. The next and last post in this series, Configuring Docker DC, will talk about the load balancer for DTR [Docker Trusted Registry]. The story is similar for DTR, but as the DTR software has changed over the past few days, some testing is needed. Bake time may be longer than usual…
