Updated Ansible Control Server Docker Container

The Docker container I built earlier this year, when I embarked on the Infrastructure-as-Code project, has become the base container for the internal project to automate the SDDC using Ansible. As such, most of the recent updates I have made to the container have only been published internally. I decided to spend a few minutes updating the public container to take advantage of some of those improvements and changes.

An important note: the container is not what I would call lightweight. It is intended to be used as a development container, providing a base level of libraries and binaries for running Ansible against a vSphere, vCenter or NSX-v endpoint.

The first major change I’ve made is to move where the repo lives in GitHub. I’ve broken the repository out of the virtualelephant/vsphere-kubernetes repo and placed it in the virtualelephant/containers repo.

Running the container

The default CMD of the container will display the installed version of Ansible and the default version of Python.
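For reference, a minimal sketch of what such a CMD can look like (the exact line in the published Dockerfile may differ):

CMD ["/bin/sh", "-c", "ansible --version && python --version"]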

The container continues to clone several useful community Ansible modules, including vmware/nsxansible and openshift/openshift-ansible-contrib. I have modified the Dockerfile to copy these modules into the directory /opt/ansible/modules, and the ansible.cfg file has been modified to leverage the new module location.
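The relevant ansible.cfg entry is a one-liner; a minimal sketch, assuming only the module path is being overridden:

[defaults]
library = /opt/ansible/modules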

Another change is how the container pulls the nsxraml spec and makes it available. The container currently pulls down both the NSX-v 6.3 and 6.4 branches of the nsxraml spec and places them in /opt/nsxraml. The specs should be backwards compatible; however, it is possible a future version will not be. Therefore, I have created a symlink in the container that always points to the most recent version of the RAML spec, while leaving the other branches in place in case a consumer of the container requires them.
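A sketch of how that layout and symlink can be produced in a Dockerfile (the per-branch directory layout here is my assumption):

RUN git clone -b 6.3 https://github.com/vmware/nsxraml.git /opt/nsxraml/6.3 && \
    git clone -b 6.4 https://github.com/vmware/nsxraml.git /opt/nsxraml/6.4 && \
    ln -s /opt/nsxraml/6.4 /opt/nsxraml/current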

How is this leveraged?

Well, within my Ansible dictionary variable for the nsxmanager_spec, I always point the RAML file to /opt/nsxraml/current/nsxvapi.raml.
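For example, a playbook variable along these lines (host and credentials are placeholder values):

nsxmanager_spec:
  raml_file: '/opt/nsxraml/current/nsxvapi.raml'
  host: 'nsxmanager.example.local'
  user: 'admin'
  password: 'VMware1!'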

Finally, the container includes clean-up of the git repositories to reduce its size.
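A sketch of what that clean-up can look like at the end of the Dockerfile (the exact paths depend on where the clones land during the build):

RUN rm -rf nsxansible ansible-modules-extras-gpl3 openshift-ansible-contrib && \
    rm -rf /opt/nsxraml/.git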

Learn More at VMworld

If you are going to be at VMworld, be sure to attend VMware {code} session CODE5542U on Monday afternoon. I will be talking more about the internal Ansible project and will have some exciting news regarding new Ansible modules available to VMware users!

Otherwise, feel free to pull the container or the repo and leverage it based on your needs!

Enjoy!

Docker ‘ubuntu-ansible’ update

I have been working with Ansible and all of the vSphere modules an enormous amount recently. As part of that work, I’ve extended the functionality of the Docker container I use for all of my development work. The container can be downloaded from Docker Hub and consumed by anyone — there is no proprietary information within the container.

The updated version includes two vSAN Python modules required for an updated vSAN Ansible module I am working on. In addition, the container now pulls the upstream NSX-v Ansible module from VMware, instead of my cloned repo at github.com/virtualelephant, because all of the code I’ve written for NSX-v is now in the upstream module.
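To illustrate why those two files matter: with vsanapiutils and vsanmgmtObjects on the Python path, a module can retrieve the vSAN managed objects from a vCenter connection. A minimal sketch, with placeholder hostname and credentials:

import ssl
from pyVim.connect import SmartConnect
import vsanapiutils

# Connect to vCenter (placeholder host and credentials)
context = ssl._create_unverified_context()
si = SmartConnect(host='vcenter.example.local', user='administrator@vsphere.local',
                  pwd='VMware1!', sslContext=context)

# Retrieve the vSAN managed objects and grab the cluster health system
vcMos = vsanapiutils.GetVsanVcMos(si._stub, context=context)
healthSystem = vcMos['vsan-cluster-health-system']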

The full Dockerfile can be obtained on GitHub.

# Dockerfile for creating an Ansible Control Server with
# the VMware modules necessary to build a complete Kubernetes
# stack.
# Blog details available: http://virtualelephant.com

FROM ubuntu:artful
MAINTAINER Chris Mutchler <chris@virtualelephant.com>

RUN \
  apt-get -y update && \
  apt-get -y dist-upgrade && \
  apt-get -y install software-properties-common python-software-properties vim && \
  apt-add-repository ppa:ansible/ansible

# Install packages needed for NSX modules in Ansible
RUN \
  apt-get -y update && \
  apt-get -y install ansible python-pip python-dev libxml2 libxml2-dev libxslt1-dev zlib1g-dev npm git && \
  pip install --upgrade pyvmomi && \
  pip install pysphere && \
  pip install nsxramlclient && \
  npm install -g https://github.com/yfauser/raml2html && \
  npm install -g raml-fleece

# Get the NSX RAML spec and add additional Ansible modules for NSX and VM folders
RUN \
  git clone -b 6.4 https://github.com/vmware/nsxraml.git /opt/nsxraml && \
  git clone https://github.com/vmware/nsxansible && \
  git clone https://github.com/vmware/ansible-modules-extras-gpl3.git && \
  rm -rf nsxansible/library/__init__.py && \
  cp nsxansible/library/*.py /usr/lib/python2.7/dist-packages/ansible/modules/cloud/vmware/ && \
  git clone https://github.com/openshift/openshift-ansible-contrib && \
  /bin/cp openshift-ansible-contrib/reference-architecture/vmware-ansible/playbooks/library/vmware*.py /usr/lib/python2.7/dist-packages/ansible/modules/cloud/vmware/

# Add vSAN Python API modules - must be done after pyVmomi installation
COPY vsanmgmtObjects.py /usr/lib/python2.7/
COPY vsanapiutils.py /usr/lib/python2.7/

# Setup container to properly use SSH bastion host for Ansible
RUN mkdir /root/.ssh
RUN chmod 740 /root/.ssh
COPY config /root/.ssh/config
COPY ansible.cfg /etc/ansible/

# Edit MOTD to give container consumer info
COPY motd /etc/motd
RUN echo '[ ! -z "$TERM" -a -r /etc/motd ] && cat /etc/issue && cat /etc/motd' >> /etc/bash.bashrc

I am still mounting a local volume that contains the Ansible playbooks. For reference, I run the container with the following command:

$ docker run -it --rm --name ansible-sddc -v /PATH/TO/ANSIBLE:/opt/ansible virtualelephant/ubuntu-ansible

If you run into any issues with the Docker container, please let me know on Twitter. Enjoy!

Docker for Ansible + VMware NSX Automation

I am writing this as I sit and watch the annual viewing of The Hobbit and The Lord of the Rings trilogy over the Christmas holiday. The next couple of weeks should hopefully provide the time necessary to complete the Infrastructure-as-Code project I undertook last month. As part of that project, I previously spoke about how Ansible is being used to provide the automation layer for the deployment and configuration of the SDDC Kubernetes stack. As part of the bootstrapping effort, I have decided to create a Docker image with the necessary components to perform the initial virtual machine deployment and NSX configuration.

The Dockerfile for the Ubuntu-based Docker container is hosted both on Docker Hub and within the Github repository for the larger Infrastructure-as-Code project.

When the Docker container is launched, it includes the necessary components to interact with the VMware stack, including additional modules for VM folders, resource pools and VMware NSX.
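As an example of what those modules enable, a task like the following can create a VM folder. This is a sketch assuming the vmware_folder module from the cloned repos; the variable names are mine, and parameter names may differ slightly between the contrib version and what later shipped upstream:

- name: Create a VM folder for the Kubernetes nodes
  vmware_folder:
    hostname: "{{ vcenter_host }}"
    username: "{{ vcenter_user }}"
    password: "{{ vcenter_password }}"
    datacenter: "{{ datacenter_name }}"
    folder: kubernetes
    state: present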

To launch the container, I run it with the following options to mount my local copy of the Infrastructure-as-Code project.

$ docker run -it --name ansible -v /Users/cmutchler/github/vsphere-kubernetes/ansible/:/opt/ansible virtualelephant/ubuntu-ansible

The Docker container is a bit on the larger side, but it is designed to run locally on a laptop or desktop. The image includes the required Python and NSX bits so that the additional Github repositories cloned into the image operate correctly. The OpenShift project includes additional modules for interacting with vSphere folders and resource pools, while the NSX modules from the VMware Github repository include the necessary bits for leveraging Ansible with NSX.

Once running, the Docker container is then able to bootstrap the deployment of the Infrastructure-as-Code project using the Ansible playbooks I’ve published on Github. Enjoy!

OpenStack Client Docker Container

OpenStack has been my world for the past 8 months. It started out with a work project to design and deploy a large-scale VMware Integrated OpenStack environment for internal use. It then became the design I would submit for my VCDX Defense and spend a couple hundred hours poring over and documenting. Since then it has included helping others get “up-to-speed” on how to operationalize OpenStack. One of the necessary tools is the ability to execute commands against an OpenStack environment from anywhere.

The easiest way to do that?

A short-lived Docker container with the clients installed!

The Dockerfile is short and to the point — it uses ubuntu:latest as the base and simply adds the OpenStack clients.

# Docker container with the latest OpenStack clients
FROM ubuntu:latest
MAINTAINER chris@virtualelephant.com

RUN apt-get -y update && apt-get -y upgrade
RUN apt-get -y install python-openstackclient vim

Follow that up with a quick Docker command to launch the instance, and I’m ready to troubleshoot whatever issue may require my attention.

$ docker run -it chrismutchler/vio-client
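Once inside the container, export the usual OS_* environment variables (or paste in an openrc file) and the client is ready to go; the endpoint and credentials below are placeholders:

$ export OS_AUTH_URL=https://vio.example.local:5000/v3
$ export OS_USERNAME=admin OS_PASSWORD=secret OS_PROJECT_NAME=admin
$ export OS_USER_DOMAIN_NAME=Default OS_PROJECT_DOMAIN_NAME=Default
$ openstack server list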

While I am not a developer, I find creating these small, single-purpose Docker containers really fun. The ability to quickly spin up a container on my laptop, or whatever VM I find myself on at the time, is priceless.

The repo can be seen on hub.docker.com/chrismutchler/vio-client.

If you need an OpenStack Client Docker container, I hope you’ll give this one a try. Enjoy!

Bind Docker Container for vPod Lab

I am currently working on building out a vPod nested ESXi lab environment that will be deployed through OpenStack’s Heat orchestration service. As I worked out the vPod application components, I realized that I wanted to include a single Linux VM that would run various services inside Docker containers.

I needed a Bind Docker container!

It seems like everything in a VMware SDDC environment needs both the forward and reverse DNS records working properly — so I started here. The Docker container is completely self-contained — all external zone data is stored in S3 and downloaded when the container is built.

https://hub.docker.com/r/chrismutchler/vpod-bind/

The Dockerfile for the container contains the following code:

# Designed to be used in conjunction with a nested ESXi
# virtual lab environment deployed through an OpenStack
# Heat template.
FROM ubuntu:latest
MAINTAINER chris@virtualelephant.com

RUN apt-get -y update && apt-get -y install bind9 dnsutils curl

RUN curl https://s3-us-west-1.amazonaws.com/virtualelephant-vpod-bind/db.192.168 -o /etc/bind/db.192.168 && \
    curl https://s3-us-west-1.amazonaws.com/virtualelephant-vpod-bind/db.vsphere.local -o /etc/bind/db.vsphere.local && \
    curl https://s3-us-west-1.amazonaws.com/virtualelephant-vpod-bind/named.conf.options -o /etc/bind/named.conf.options && \
    curl https://s3-us-west-1.amazonaws.com/virtualelephant-vpod-bind/named.conf.local -o /etc/bind/named.conf.local

EXPOSE 53

CMD ["/usr/sbin/named", "-g", "-c", "/etc/bind/named.conf", "-u", "bind"]

To start the container, I set up the Ubuntu VM to execute the following command when it is deployed inside OpenStack.

# docker run -d -p 53:53 -p 53:53/udp chrismutchler/vpod-bind
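When it comes time to wire this into Heat, that command can ride in through user_data; a hypothetical resource sketch, with placeholder image and flavor names:

bind_server:
  type: OS::Nova::Server
  properties:
    image: ubuntu-1604-docker
    flavor: m1.small
    user_data_format: RAW
    user_data: |
      #!/bin/bash
      docker run -d -p 53:53 -p 53:53/udp chrismutchler/vpod-bind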

Once running, the container provides the critical DNS service inside the vPod ESXi environment. From here it is on to building out the Heat template that will leverage the container.

Enjoy!