Docker Datacenter LAB – Prepare OpenStack – Part II

This is part two of the work started in the previous blog post, Docker Datacenter LAB – Prepare OpenStack.


I want a fancy new project, with multi-domain support please

OK, I have modified the OpenStack configuration to use Keystone v3 authentication.


As a result of the above changes I am now using an external AD for authentication.
Next, I created a new project called docker and a new docker user in AD, and assigned it admin and _member_ privileges.
All of the commands below are executed as the docker [AD] user.
$ openstack project create docker --domain highvail
$ openstack user list --domain highvail
| ID                                                               | Name           |
| 0404330166e28bc1792055b4f76942a7985295d84bc96b978e9e4e7b912595b1 | user1          |
| b75712a4ac26b23cfbc43cc9b84e48e2a1a7b5fe543cbd74279c2f1b622a364b | user2          |
| 794021875da0238cf4badcae748e3be25c4c12f2aa166772c04b96e82f637961 | user3          |
| 9b6df39f88ab6e1f5ffc7886ed5773bca01e6266b34c69864a97e1b1d57a71f4 | svc-user       |
| 4caa389088f8d185aedcfe73a26f0953e2b8e174f41c3e439b21334f86c6f4bb | domain-admin   |
| 1ad3b204e40de3eb27e073dd619c5223af5dc2ee0f2216d3fab1879fe150e2c5 | docker         |
It is important to give the docker and domain-admin users the heat_stack_owner role.
If not, the later automation using HEAT templates will fail!
$ openstack role add --user-domain highvail --project docker --user docker heat_stack_owner
$ openstack role add --user-domain highvail --project docker --user domain-admin heat_stack_owner
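To double-check that the roles landed where expected, the assignments can be read back with the same CLI (a quick sketch; exact flags per the openstack client docs):

```
$ openstack role assignment list --user docker --user-domain highvail --project docker --names
```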


Can I come in? Access and security todos

A bit more OpenStack housekeeping is needed to make sure that I can communicate safely with the Docker DC VMs. This means updating firewall rules, creating access keys, and optionally creating an [external; floating network] pool of IP addresses for external access. [In my case the heat template creates the VM and automates the create-and-associate process for floating IPs.]

Here’s how to create the security groups and rules:

$ nova secgroup-create ICMP_rules "Rules for ICMP Traffic"
$ nova secgroup-add-rule ICMP_rules icmp -1 -1

$ nova secgroup-create SSH_rules "Rules for SSH Traffic"
$ nova secgroup-add-rule SSH_rules tcp 22 22

$ nova secgroup-create Web_rules "Rules for Web Traffic"
$ nova secgroup-add-rule Web_rules tcp 443 443
$ nova secgroup-add-rule Web_rules tcp 80 80

$ nova secgroup-create DDC_rules "Rules for Docker DC"
$ nova secgroup-add-rule DDC_rules tcp 123 123
$ nova secgroup-add-rule DDC_rules tcp 2375 2376
$ nova secgroup-add-rule DDC_rules tcp 3128 3128
$ nova secgroup-add-rule DDC_rules tcp 4789 4789
$ nova secgroup-add-rule DDC_rules udp 4789 4789
$ nova secgroup-add-rule DDC_rules tcp 7946 7946
$ nova secgroup-add-rule DDC_rules udp 7946 7946
$ nova secgroup-add-rule DDC_rules tcp 12376 12386
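Side note: the nova secgroup-* commands are deprecated in newer clients in favour of the unified openstack CLI. For reference, the DDC group could be built like this instead (a sketch against the same cloud; syntax per the openstack client docs):

```
$ openstack security group create DDC_rules --description "Rules for Docker DC"
$ openstack security group rule create DDC_rules --protocol tcp --dst-port 2375:2376
$ openstack security group rule create DDC_rules --protocol udp --dst-port 4789:4789
```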


The private/public key-pair for accessing the VMs:
$ nova keypair-add docker-key1 > ~/docker-key1.pem
$ nova keypair-add docker-key1-spare > ~/docker-key1-spare.pem
$ nova keypair-list
| Name              | Type | Fingerprint                                     |
| docker-key1       | ssh  | 8b:e8:7d:e2:b9:bb:db:45:be:47:e4:2d:33:29:c2:75 |
| docker-key1-spare | ssh  | 14:c8:54:dc:c0:1c:6a:e1:39:f8:7a:89:f6:6c:69:51 |
$ chmod 600 ~/docker-key*.pem
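ssh refuses private keys with loose permissions, which is why the chmod 600 above matters. A quick local illustration with a throwaway file (demo-key.pem is a made-up stand-in, not one of the lab keys):

```shell
# Create an empty stand-in for a .pem key and lock it down like the lab keys
touch demo-key.pem
chmod 600 demo-key.pem
# Print the octal mode: 600 = owner read/write only
stat -c '%a' demo-key.pem   # prints 600
```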


Build me some VMs, please automate something

Let's create a flavor that all Docker VMs are going to use.
The command below creates a flavor with 4GB of memory, 2 VCPUs, and a 0GB root disk.
[NOTE! The 0GB disk size is a workaround for a Red Hat bug where nova increases the 'local disk quota' instead of NFS.]
openstack flavor create --id 10 --ram 4096 --disk 0 --ephemeral 0 --vcpus 2 --public docker.standard
| Field                      | Value           |
| OS-FLV-DISABLED:disabled   | False           |
| OS-FLV-EXT-DATA:ephemeral  | 0               |
| disk                       | 0               |
| id                         | 10              |
| name                       | docker.standard |
| os-flavor-access:is_public | True            |
| ram                        | 4096            |
| rxtx_factor                | 1.0             |
| swap                       |                 |
| vcpus                      | 2               |


I have named the instances as follows:
docker-lab-1 [role: UCP]
docker-lab-2 [role: UCP]
docker-lab-3 [role: UCP]
docker-lab-4 [role: DTR]
docker-lab-5 [role: DTR]
docker-lab-6 [role: DTR]
docker-lab-7 [role: worker]
docker-lab-8 [role: worker]


I have generic YAML templates that automate the whole installation.

As I redo the lab on a regular basis, there is a HEAT template for the external as well as the internal networks.
To create the external and internal VLAN networks I run:
$ heat stack-create -f external_network.yaml stack-nova
$ heat stack-create -f internal_network_vlan1701_v3.yaml stack-vlan1701


A stripped-down [read: simplified] VM template used to create all Docker VMs:

heat_template_version: 2015-10-15

description: >
  Template to deploy a docker vm on an internal vlan and
  connected to the external network,
  and configure, install and run docker engine - v3 - HighVail Systems Inc.

parameters:
  vm_name:
    type: string
    label: VM Name
    description: VM Name
    default: vm
  key_name:
    type: string
    label: Key Name
    description: Name of key-pair to be used for compute instance
    default: docker-key1
  image:
    type: string
    label: Image ID
    description: Image to be used for compute instance
    default: rhel7.2mod
  flavor:
    type: string
    label: Instance Type
    description: Type of instance (flavor) to be used
    default: docker.standard
  external_network:
    type: string
    label: External Network
    description: External network
    default: nova
  internal_network:
    type: string
    label: Network ID
    description: Network ID used when creating a subnet
  vol_size:
    type: number
    label: Volume Size
    description: Size of the my-server volume
    default: 20
  network_name:
    type: string
    label: Network Name
    description: Network Name
    default: vlanxxxx
  internal_subnet:
    type: string
    label: Subnet Name
    description: Subnet Name
    default: vlanxxxx

resources:
  my_server:
    type: OS::Nova::Server
    properties:
      name: { get_param: vm_name }
      key_name: { get_param: key_name }
      flavor: { get_param: flavor }
      networks:
        - port: { get_resource: my_server_port }
      block_device_mapping:
        - delete_on_termination: true
          device_name: vda
          volume_id: { get_resource: my_server_vol }
      user_data_format: RAW
      user_data: |
        #!/bin/sh -v
        cat > ~/ << EOF

        #create docker.repo file
        sudo tee -a /etc/yum.repos.d/docker.repo <<'EOF2'
        name=Docker Repository
        EOF2

        sudo subscription-manager register \
        --username=XXX@XXX.XXX \
        --password=XXX
        sudo subscription-manager repos --disable=*
        sudo subscription-manager repos --enable=rhel-7-server-rpms
        EOF

        #fix 'error: sudo require tty...'
        sed -i 's/Defaults    requiretty/Defaults    !requiretty/g' /etc/sudoers

        #install docker engine
        chmod +x ~/
        sudo ~/

        #enable and start service
        sudo systemctl enable docker.service
        sudo systemctl start docker.service

        #verify engine is running
        sudo docker info

  my_server_port:
    type: OS::Neutron::Port
    properties:
      network: { get_param: internal_network }
      fixed_ips:
        - subnet: { get_param: internal_subnet }
      security_groups: [ 'ICMP_rules', 'SSH_rules', 'DDC_rules', 'Web_rules' ]

  my_server_floating_ip:
    type: OS::Neutron::FloatingIP
    properties:
      floating_network: { get_param: external_network }

  my_server_floating_ip_assoc:
    type: OS::Neutron::FloatingIPAssociation
    properties:
      floatingip_id: { get_resource: my_server_floating_ip }
      port_id: { get_resource: my_server_port }

  my_server_vol:
    type: OS::Cinder::Volume
    properties:
      size: { get_param: vol_size }
      image: { get_param: image }

  my_server_vol2:
    type: OS::Cinder::Volume
    properties:
      size: 20
      description: Volume for stack

  my_server_vol2_attach:
    type: OS::Cinder::VolumeAttachment
    properties:
      volume_id: { get_resource: my_server_vol2 }
      instance_uuid: { get_resource: my_server }

Note! If you want to give it a try, replace XXX with your settings.
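The docker.repo heredoc in the template survives only in part; for reference, a CS-engine repo file of that era looked roughly like the fragment below. The baseurl/gpgkey paths are illustrative placeholders from memory — check Docker's install docs for the exact values for your version.

```ini
# Illustrative example only -- verify the URLs against Docker's docs
[dockerrepo]
name=Docker Repository
baseurl=https://packages.docker.com/1.12/yum/repo/main/centos/7
enabled=1
gpgcheck=1
gpgkey=https://packages.docker.com/1.12/yum/gpg
```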


The above YAML template is a stripped-down version of our production code.
What the above YAML template does is:
  • creates the docker.repo file and pulls the Docker bits,
  • registers the system with RHN [Red Hat Subscription Service],
  • enables and starts the CS Docker Engine,
  • uses the default devicemapper settings and a loopback device [not production-ready, but good for testing].


We have some good, tasty, sweet-and-sour sauce [read: cloud-init code] injected into the production YAML mould.

What the production version of the YAML template does is:

  • creates the docker.repo file and pulls the Docker bits,
  • registers the system with RHN [Red Hat Subscription Service],
  • updates and pre-configures the OS and pulls additional required packages,
  • re-configures the Docker service to use a direct-LVM device on a second attachment point,
  • enables and starts the CS Docker Engine.


Let’s finish the work by creating three instances for UCP, three instances for DTR, and two worker nodes.
for i in $(seq 1 8); do heat stack-create \
-P vm_name=docker-lab-${i} \
-f scripts/docker-vm-vlan1701-v2.yaml stack-vm${i}; done
So, voilà.
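Before moving on, it is worth confirming that all eight stacks reached CREATE_COMPLETE (a sketch using the same heat client; grep will also match stack_status_reason, which is harmless here):

```
$ heat stack-list
$ for i in $(seq 1 8); do heat stack-show stack-vm${i} | grep stack_status; done
```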


The Docker Datacenter LAB is almost ready.
What is left is to install the software according to each node's role and start playing with the DEV workflow.
Can’t wait to put my developer hat on.
That’s coming next. Again, bake time: 1-2 days.
