Docker ‘ubuntu-ansible’ update

I have been working extensively with Ansible and the vSphere modules recently. As part of that work, I’ve extended the functionality of the Docker container I use for all of my development work. The container can be downloaded from Docker Hub and consumed by anyone — there is no proprietary information within the container.

The updated version includes two vSAN Python modules required for an updated vSAN Ansible module I am working on. In addition, the container now pulls the upstream NSX-v Ansible module from VMware, instead of my cloned repo on GitHub.com/virtualelephant, because all of the code I’ve written for NSX-v is now in the upstream module.

The full Dockerfile can be obtained on GitHub.

# Dockerfile for creating an Ansible Control Server with
# the VMware modules necessary to build a complete Kubernetes
# stack.
# Blog details available: http://virtualelephant.com

FROM ubuntu:artful
MAINTAINER Chris Mutchler <chris@virtualelephant.com>

RUN \
  apt-get -y update && \
  apt-get -y dist-upgrade && \
  apt-get -y install software-properties-common python-software-properties vim && \
  apt-add-repository ppa:ansible/ansible

# Install packages needed for NSX modules in Ansible
RUN \
  apt-get -y update && \
  apt-get -y install ansible python-pip python-dev libxml2 libxml2-dev libxslt1-dev zlib1g-dev npm git && \
  pip install --upgrade pyvmomi && \
  pip install pysphere && \
  pip install nsxramlclient && \
  npm install -g https://github.com/yfauser/raml2html && \
  npm install -g raml-fleece

# Get NSXRAML
# Add additional Ansible modules for NSX and VM folders
RUN \
  git clone -b 6.4 https://github.com/vmware/nsxraml.git /opt/nsxraml && \
  git clone https://github.com/vmware/nsxansible && \
  git clone https://github.com/vmware/ansible-modules-extras-gpl3.git && \
  rm -rf nsxansible/library/__init__.py && \
  cp nsxansible/library/*.py /usr/lib/python2.7/dist-packages/ansible/modules/cloud/vmware/ && \
  git clone https://github.com/openshift/openshift-ansible-contrib && \
  /bin/cp openshift-ansible-contrib/reference-architecture/vmware-ansible/playbooks/library/vmware*.py /usr/lib/python2.7/dist-packages/ansible/modules/cloud/vmware/

# Add vSAN Python API modules - must be done after pyVmomi installation
COPY vsanmgmtObjects.py /usr/lib/python2.7/
COPY vsanapiutils.py /usr/lib/python2.7/

# Setup container to properly use SSH bastion host for Ansible
RUN mkdir /root/.ssh
RUN chmod 740 /root/.ssh
COPY config /root/.ssh/config
COPY ansible.cfg /etc/ansible/

# Edit MOTD to give container consumer info
COPY motd /etc/motd
RUN echo '[ ! -z "$TERM" -a -r /etc/motd ] && cat /etc/issue && cat /etc/motd' >> /etc/bash.bashrc
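If you would rather build the image locally than pull it from Docker Hub, a standard build from a directory containing the Dockerfile and the files it copies in (vsanmgmtObjects.py, vsanapiutils.py, config, ansible.cfg and motd) looks like this:

$ docker build -t virtualelephant/ubuntu-ansible .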

I am still mounting a local volume that contains the Ansible playbooks. For reference, I run the container with the following command:

$ docker run -it --rm --name ansible-sddc -v /PATH/TO/ANSIBLE:/opt/ansible virtualelephant/ubuntu-ansible
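Once inside the container, the playbooks from the mounted volume are available under /opt/ansible. A run looks something like the following, where hosts and site.yml are placeholders for whatever inventory and playbook live in your volume:

$ cd /opt/ansible
$ ansible-playbook -i hosts site.yml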

If you run into any issues with the Docker container, please let me know on Twitter. Enjoy!

Docker for Ansible + VMware NSX Automation

I am writing this as I sit and watch the annual viewing of The Hobbit and The Lord of the Rings trilogy over the Christmas holiday. The next couple of weeks should hopefully provide the time necessary to complete the Infrastructure-as-Code project I undertook last month. I spoke previously about how Ansible is being used to provide the automation layer for the deployment and configuration of the SDDC Kubernetes stack. As part of the bootstrapping effort, I have decided to create a Docker image with the necessary components to perform the initial virtual machine deployment and NSX configuration.

The Dockerfile for the Ubuntu-based Docker container is hosted both on Docker Hub and within the GitHub repository for the larger Infrastructure-as-Code project.

When the Docker container is launched, it includes the necessary components to interact with the VMware stack, including additional modules for VM folders, resource pools and VMware NSX.

To launch the container, I am running it with the following options to include the local copies of the Infrastructure-as-Code project.

$ docker run -it --name ansible -v /Users/cmutchler/github/vsphere-kubernetes/ansible/:/opt/ansible virtualelephant/ubuntu-ansible

The Docker container is a bit on the larger side, but it is designed to run locally on a laptop or desktop. The image includes the required Python and NSX bits so that the additional GitHub repositories that are cloned into the image will operate correctly. The OpenShift project includes additional modules for interacting with vSphere folders and resource pools, while the NSX modules from the VMware GitHub repository include the necessary bits for leveraging Ansible with NSX.
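A quick way to verify that the extra modules landed in the Ansible module path is to list them from inside the container (the exact module names will vary with the upstream repos):

$ ansible-doc -l | grep -iE 'nsx|vmware'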

Once running, the Docker container is able to bootstrap the deployment of the Infrastructure-as-Code project using the Ansible playbooks I’ve published on GitHub. Enjoy!

OpenStack Client Docker Container

OpenStack has been my world for the past 8 months. It started out with a work project to design and deploy a large-scale VMware Integrated OpenStack environment for internal use. It then became the design I would submit for my VCDX Defense and spend a couple hundred hours poring over and documenting. Since then it has included helping others get “up-to-speed” on how to operationalize OpenStack. One of the necessary tools is the ability to execute commands against an OpenStack environment from anywhere.

The easiest way to do that?

A short-lived Docker container with the clients installed!

The Dockerfile is short and to the point: it uses ubuntu:latest as the base and simply adds the OpenStack clients.

# Docker container with the latest OpenStack clients

FROM ubuntu:latest

MAINTAINER chris@virtualelephant.com

RUN apt-get -y update && apt-get -y upgrade

RUN apt-get -y install python-openstackclient vim

Follow that up with a quick Docker command to launch the instance, and I’m ready to troubleshoot whatever issue may require my attention.

$ docker run -it chrismutchler/vio-client
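From there, it is just a matter of sourcing the openrc file for the environment (downloadable from the Horizon dashboard; the filename below is a placeholder) and running whatever commands the troubleshooting calls for:

$ source openrc.sh
$ openstack server list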

While I am not a developer, I find creating these small Docker containers really fun. The ability to quickly spin up a container on my laptop, or whatever VM I find myself on at the time, is priceless.

The repo can be seen at hub.docker.com/chrismutchler/vio-client.

If you need an OpenStack Client Docker container, I hope you’ll give this one a try. Enjoy!

Bind Docker Container for vPod Lab

I am currently working on building out a vPod nested ESXi lab environment that will be deployed through OpenStack’s Heat orchestration service. As I worked out the vPod application components, I realized that I wanted to include a single Linux VM that would run various services inside Docker containers.

I needed a Bind Docker container!

It seems like everything in a VMware SDDC environment needs both the forward and reverse records working properly — so I started here. The Docker container is completely self-contained — all external zone data is stored in S3 and downloaded when the container is built.

https://hub.docker.com/r/chrismutchler/vpod-bind/

The Dockerfile for the container contains the following code:

# Designed to be used in conjunction with a nested ESXi
# virtual lab environment deployed through an OpenStack
# Heat template.

FROM ubuntu:latest

MAINTAINER chris@virtualelephant.com

RUN apt-get -y update && apt-get -y install bind9 dnsutils curl

RUN curl https://s3-us-west-1.amazonaws.com/virtualelephant-vpod-bind/db.192.168 -o /etc/bind/db.192.168 && \
    curl https://s3-us-west-1.amazonaws.com/virtualelephant-vpod-bind/db.vsphere.local -o /etc/bind/db.vsphere.local && \
    curl https://s3-us-west-1.amazonaws.com/virtualelephant-vpod-bind/named.conf.options -o /etc/bind/named.conf.options && \
    curl https://s3-us-west-1.amazonaws.com/virtualelephant-vpod-bind/named.conf.local -o /etc/bind/named.conf.local

EXPOSE 53

CMD ["/usr/sbin/named", "-g", "-c", "/etc/bind/named.conf", "-u", "bind"]

To start the container, I set up the Ubuntu VM to execute the following command when it is deployed inside OpenStack.

# docker run -d -p 53:53 -p 53:53/udp chrismutchler/vpod-bind
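A couple of quick dig queries verify that both the forward and reverse zones are answering; the hostname and address below are placeholders from the vsphere.local zone data:

$ dig @localhost vcenter.vsphere.local +short
$ dig @localhost -x 192.168.0.10 +short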

Once running, the container provides the critical DNS service inside the vPod ESXi environment. From here it is on to building out the Heat template that will leverage the container.

Enjoy!

Updating Apache Mesos Chef Recipes


The recent update to v2.3.1 of VMware Big Data Extensions required a few changes on the management server to enable the proper deployment of Apache Mesos, Mesosphere Marathon and Docker. In order to have BDE deploy the latest code, the Apache Mesos Chef recipes needed a few updates. I prefer to use the Mesosphere CentOS repo for installing the packages, as it allows the clusters to be kept up-to-date with the proper packages. To do so, the following file should be created in /opt/serengeti/www/yum.

mesos.repo

[mesosphere]
name=Mesosphere Packages for EL 6 - $basearch
baseurl=http://repos.mesosphere.io/el/6/$basearch/
enabled=1
gpgcheck=1
gpgkey=https://bde.local.domain/yum/RPM-GPG-KEY-mesosphere

[mesosphere-noarch]
name=Mesosphere Packages for EL 6 - noarch
baseurl=http://repos.mesosphere.io/el/6/noarch/
enabled=1
gpgcheck=1
gpgkey=https://bde.local.domain/yum/RPM-GPG-KEY-mesosphere

[mesosphere-source]
name=Mesosphere Packages for EL 6 - $basearch - Source
baseurl=http://repos.mesosphere.io/el/6/SRPMS/
enabled=0
gpgcheck=1
gpgkey=https://bde.local.domain/yum/RPM-GPG-KEY-mesosphere

[cloudera-cdh4]
name=Cloudera's Distribution for Hadoop, Version 4
baseurl=http://archive.cloudera.com/cdh4/redhat/6/x86_64/cdh/4/
gpgkey=http://archive.cloudera.com/cdh4/redhat/6/x86_64/cdh/RPM-GPG-KEY-cloudera
gpgcheck=1

The file adds an extra repo for Zookeeper that Cloudera provides. These repos were necessary in order for the updated Chef recipes to properly install those packages.
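Note that the gpgkey entries point back at the BDE management server itself, so the Mesosphere GPG key needs to be copied into the web root alongside mesos.repo. Assuming the key has already been downloaded from Mesosphere, it is a single copy:

# cp RPM-GPG-KEY-mesosphere /opt/serengeti/www/yum/

With the repo and key in place, the remaining changes are strictly within the Chef recipes on the BDE management server.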

/opt/serengeti/chef/cookbooks/mesos/recipes/install.rb

 69 when 'rhel', 'centos', 'amazon', 'scientific'
 70   %w( unzip libcurl ).each do |pkg|
 71     yum_package pkg do
 72       action :install
 73     end
 74   end
 75 
 76   template "/etc/pki/rpm-gpg/RPM-GPG-KEY-mesosphere" do
 77     source "RPM-GPG-KEY-mesosphere.erb"
 78   end
 79 
 80   package 'mesos'
 81   package 'chronos' if node.role?('mesos_chronos')
 82   package 'marathon' if node.role?('mesos_marathon')
 83 end

100 if distro == 'debian'
101   bash 'reload-configuration-debian' do
102     user 'root'
103     code <<-EOH
104     update-rc.d -f mesos-master remove
105     EOH
106     not_if { ::File.exist? '/usr/sbin/mesos-master' }
107   end
108 else
109   bash 'reload-configuration' do
110     user 'root'
111     code <<-EOH
112     initctl reload-configuration
113     EOH
114     not_if { ::File.exist? '/usr/sbin/mesos-master' }
115   end
116 end

Update:

A colleague pointed out the addition for the RPM-GPG-KEY-mesosphere.erb file was not as clear as it should have been. Make sure you add lines 76-78 AND copy the GPG key into /opt/serengeti/chef/cookbooks/mesos/templates/default with a .erb file extension.
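Assuming the key is sitting in the current directory on the management server, the copy looks like this:

# cp RPM-GPG-KEY-mesosphere /opt/serengeti/chef/cookbooks/mesos/templates/default/RPM-GPG-KEY-mesosphere.erb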

Several of the template files contain an old path for the executables and need to be updated. The Chronos configuration file needs the following change.

/opt/serengeti/chef/cookbooks/mesos/templates/default/chronos.conf.erb

  9 exec /usr/bin/chronos

/opt/serengeti/chef/cookbooks/mesos/templates/default/mesos-slave.conf.erb

  6 exec /usr/bin/mesos-init-wrapper slave

/opt/serengeti/chef/cookbooks/mesos/templates/default/marathon.conf.erb

  9 exec /usr/bin/marathon

The final step was to update the Chef server so that the changes take effect, which is done by executing the knife cookbook upload -a command. When all of the outlined steps have been completed, the Apache Mesos Chef recipes on the Big Data Extensions management server will be all set to deploy the latest code for Apache Mesos, Mesosphere Marathon, Chronos and Docker on CentOS.
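For reference, that command is run on the management server:

# knife cookbook upload -a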

There is a series of posts coming to show how to tie VMware NSX together with Apache Mesos, Apache Hadoop and VMware Big Data Extensions.

Enjoy!