Docker Datacenter LAB – Prepare OpenStack – Part I

As promised, here is part one of the OpenStack setup for Docker DC. This is where it gets a bit technical…

Please tell me it’s not server-less

Here is the hardware used for the LAB:
  • CISCO UCS Mini
    • 1x blade – controller
    • 1x blade – compute
  • Cisco UCS C-Series servers
  • Cisco Nexus 3000 switching
  • NetApp FAS2240 storage controllers
Take a look at the “Red Hat OpenStack Architecture on CISCO UCS Platform” reference architecture document for the overall process. Google it!
Of course, in our case we made a number of modifications.
If you are interested in talking to me about this or other scenarios, please visit http://www.highvail.com/.

Does it look good?

The OpenStack architectural view:
[screenshot: OpenStack architecture diagram]


OOO as in under and over…cloud

The Red Hat OpenStack Platform 8 is installed using the automated installer called OSP Director [TripleO], and consists of one controller and one compute node. I will not go into the full process of setting up OSP 8; suffice it to say that the Red Hat documentation on the topic is very good. In the meantime, version 9 [Mitaka] of the Red Hat OpenStack Platform has been released. The process, however, stays the same.
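
For context, the undercloud bring-up that the documentation walks through boils down to a handful of commands on the director node. A minimal sketch, assuming the standard stack user and the OSP 8 package names (your subscription and repo setup will vary):
# As the 'stack' user on the director node:
sudo yum install -y python-tripleoclient
cp /usr/share/instack-undercloud/undercloud.conf.sample ~/undercloud.conf
# edit ~/undercloud.conf (provisioning network, DHCP and introspection ranges)
openstack undercloud install
source ~/stackrc   # load the undercloud credentials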


I will show the final steps needed to set up the overcloud where my Docker DC is going to [temporarily] live.
It is, I would say, obvious to an astute reader that for the command below to work, a number of YAML configuration files must be created that describe OpenStack resources (a trimmed example follows the list), such as:
  • compute
  • control
  • storage
  • network
  • overlay network, etc…
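
To give a flavour of what those files contain, here is a minimal sketch of a trimmed network-environment.yaml, written heredoc-style like the NFS file later in this post. The parameter names come from the stock tripleo-heat-templates samples, but the subnet, pool, and gateway values are placeholders, not my lab's real addressing; only VLAN 1612 matches the deploy command below:
cat > /home/stack/templates/network-environment.yaml << EOF
# Trimmed sketch: placeholder subnet, pool, and gateway values
parameter_defaults:
  ExternalNetCidr: 192.0.2.0/24
  ExternalAllocationPools: [{'start': '192.0.2.10', 'end': '192.0.2.50'}]
  ExternalNetworkVlanID: 1612
  ExternalInterfaceDefaultRoute: 192.0.2.1
EOF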


After the templates are created on the already deployed undercloud, the final command used for creating the overcloud, run from the OSP Director, is:
openstack overcloud deploy \
--templates \
-e /home/stack/templates/openstack-tripleo-heat-templates/overcloud-resource-registry-puppet.yaml \
-e /home/stack/templates/network-isolation.yaml \
-e /home/stack/templates/network-environment.yaml \
-e /home/stack/templates/storage-environment-v2.yaml \
--control-scale 1 \
--compute-scale 1 \
--ceph-storage-scale 0 \
--block-storage-scale 0 \
--swift-storage-scale 0 \
--control-flavor control \
--compute-flavor compute \
--neutron-tunnel-types vlan \
--neutron-network-type vlan \
--neutron-flat-networks datacentre,physnet-tenant \
--neutron-bridge-mappings datacentre:br-ex,physnet-tenant:br-tenant \
--neutron-network-vlan-ranges datacentre:1612:1612,physnet-tenant:1701:1720 \
--neutron-disable-tunneling \
--timeout 300 \
--log overcloud.log \
--verbose && sudo openstack-service stop && sudo openstack-service start
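
Once the deploy returns, it is worth confirming the stack actually finished before moving on. A quick sanity check from the director, assuming the standard stackrc:
source ~/stackrc
heat stack-list    # the overcloud stack should show CREATE_COMPLETE
nova list          # controller and compute nodes with their provisioning IPs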


From the above, a few things are important in our lab setup:
  • the initial overcloud [the cloud deployed using OSP Director] uses local storage for instances,
  • the OVS network overlay is VLAN [Cisco does not support VXLAN as of this writing],
  • there are two flat networks defined in UCSM, one for tenants [physnet-tenant] and the other public [datacentre],
  • the VLAN ranges must be satisfied for both networks; the sketch after this list shows how the networks are then created on the overcloud.
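
With those constraints in mind, the networks get created on the overcloud roughly as sketched below. The network names and the subnet are illustrative placeholders; only the physnet names and the VLAN range come from the deploy command above:
source ~/overcloudrc
# External/provider network on the 'datacentre' physnet
neutron net-create public --router:external \
  --provider:network_type flat --provider:physical_network datacentre
neutron subnet-create public 192.0.2.0/24 --name public-subnet --disable-dhcp
# Tenant network: Neutron allocates a VLAN from the 1701:1720 range on physnet-tenant
neutron net-create tenant-net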

Reconfiguring storage? Yes, not all is perfect

The initial deployment of OSP uses local disk.
In order to use NFS, two additional steps are needed.
In my case I want to use the NetApp Cinder driver. Some pointers can be found here:


On the deployed and running controller, as root, I created a file that points to my NFS share:
cat > /etc/cinder/cinder-nfs.conf << EOF
10.237.4.100:/OSP
EOF
chgrp cinder /etc/cinder/cinder-nfs.conf
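
Before redeploying, it does not hurt to confirm the controller can actually reach and write to the export. A quick check, assuming nfs-utils is installed:
showmount -e 10.237.4.100          # the /OSP export should be listed
mkdir -p /mnt/nfstest
mount -t nfs 10.237.4.100:/OSP /mnt/nfstest
touch /mnt/nfstest/write-test && rm /mnt/nfstest/write-test   # confirm read-write access
umount /mnt/nfstest && rmdir /mnt/nfstest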


Update the overcloud with the NFS changes.
My /home/stack/templates/cinder-netapp-config-v2.yaml file:
# A Heat environment file which can be used to enable
# a Cinder NetApp backend, configured via puppet
resource_registry:
  OS::TripleO::ControllerExtraConfigPre: /usr/share/openstack-tripleo-heat-templates/puppet/extraconfig/pre_deploy/controller/cinder-netapp.yaml
parameter_defaults:
  CinderEnableNetappBackend: true
  CinderNetappBackendName: 'tripleo_netapp'
  CinderNetappLogin: 'admin'
  CinderNetappPassword: 'XXX'
  CinderNetappServerHostname: 'XXX.XXX.XXX.XXX'
  CinderNetappServerPort: '80'
  CinderNetappSizeMultiplier: '1.2'
  CinderNetappStorageFamily: 'ontap_cluster'
  CinderNetappStorageProtocol: 'nfs'
  CinderNetappTransportType: 'http'
  CinderNetappVfiler: ''
  CinderNetappVolumeList: ''
  CinderNetappVserver: 'OSPsvm'
  CinderNetappPartnerBackendName: ''
  CinderNetappNfsShares: ''
  CinderNetappNfsSharesConfig: '/etc/cinder/cinder-nfs.conf'
  CinderNetappNfsMountOptions: 'rw,sync'
  CinderNetappCopyOffloadToolPath: ''
  CinderNetappControllerIps: ''
  CinderNetappSaPassword: ''
  CinderNetappStoragePools: ''
  CinderNetappEseriesHostType: 'linux_dm_mp'
  CinderNetappWebservicePath: '/devmgr/v2'
Note: Replace the XXX placeholders above with your own password and IP!
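
For orientation, the parameters above are rendered by puppet into a backend stanza in /etc/cinder/cinder.conf on the controller, roughly like the sketch below. The option names are the standard NetApp Cinder driver ones; the actual rendered file may differ in detail:
[tripleo_netapp]
volume_backend_name=tripleo_netapp
volume_driver=cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family=ontap_cluster
netapp_storage_protocol=nfs
netapp_server_hostname=XXX.XXX.XXX.XXX
netapp_server_port=80
netapp_login=admin
netapp_password=XXX
netapp_vserver=OSPsvm
nfs_shares_config=/etc/cinder/cinder-nfs.conf
nfs_mount_options=rw,sync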


You must include all your previous YAML files plus the new additions if you want the configuration to remain as before. I ran the command below on my OSP Director:
openstack overcloud deploy \
--templates \
-e /home/stack/templates/openstack-tripleo-heat-templates/overcloud-resource-registry-puppet.yaml \
-e /home/stack/templates/network-isolation.yaml \
-e /home/stack/templates/network-environment.yaml \
-e /home/stack/templates/storage-environment-v2.yaml \
-e /home/stack/templates/cinder-netapp-config-v2.yaml \
--log overcloud.log \
--verbose && sudo openstack-service stop && sudo openstack-service start
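
After the stack update completes, a quick way to confirm the NetApp backend took is to create a throwaway volume and check where it landed; using the clients on the overcloud, as admin:
source ~/overcloudrc
cinder service-list                            # a cinder-volume host ending in @tripleo_netapp should be 'up'
cinder create --display-name nfs-test 1        # throwaway 1 GB test volume
cinder show nfs-test | grep os-vol-host-attr   # the host attribute should point at the NetApp backend
cinder delete nfs-test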

Tired already?

In the next post I will go over the finishing touches for the overcloud: preparing the project, users, permissions, and access, and finally the Heat templates that will fully automate the build of all 8 instances hosting the components of the Docker Datacenter. That post should be baked no later than a day or two from now. Let the above simmer a little….
