Updated Ansible Control Server Docker Container

The Docker container I built earlier this year, when I embarked on the Infrastructure-as-Code project, has been adopted as the base container for an internal project to automate the SDDC using Ansible. As a result, most of the recent updates I have made to the container have only been published internally. I decided to spend a few minutes updating the public container to take advantage of some of those improvements and changes.

An important note: the container is not what I would call lightweight. It is intended to be used as a development container, providing a base level of libraries and binaries for running Ansible against a vSphere, vCenter or NSX-v endpoint.

The first major change I’ve made is to move where the repo lives on GitHub. I’ve broken the repository out of the virtualelephant/vsphere-kubernetes repo and placed it in the virtualelephant/containers repo.

Running the container

The default CMD of the container will display the installed version of Ansible and the default version of Python.

The container continues to clone several useful community Ansible modules, including vmware/nsxansible and openshift/openshift-ansible-contrib. I have modified the Dockerfile to copy these modules into the directory /opt/ansible/modules, and the ansible.cfg file has been updated to point at the new module location.
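For reference, pointing Ansible at the new module location only takes the library setting in the [defaults] section of ansible.cfg. A minimal sketch (the file shipped in the container may contain additional settings beyond this):

[defaults]
library = /opt/ansible/modules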

Another change is how the container pulls the nsxraml spec and makes it available. The container currently pulls down both the NSX-v 6.3 and 6.4 branches of the nsxraml spec and places them in /opt/nsxraml. The specs should be backwards compatible; however, it is possible a future version will not be. Therefore, I have created a symlink in the container that always points to the most recent version of the RAML spec, while leaving the other branches in place in case a consumer of the container requires them.

How is this leveraged?

Well, within my Ansible dictionary variable for the nsxmanager_spec, I always point the RAML file to /opt/nsxraml/current/nsxvapi.raml.
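As a rough sketch, the dictionary ends up looking something like this (the NSX Manager hostname and credentials below are placeholders, not values from my environment):

nsxmanager_spec:
  raml_file: '/opt/nsxraml/current/nsxvapi.raml'
  host: 'nsxmanager.example.local'
  user: 'admin'
  password: 'VMware1!'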

Finally, the container cleans up the git repositories to reduce its size.

Learn More at VMworld

If you are going to be at VMworld, be sure to catch the VMware {code} session CODE5542U on Monday afternoon. I will be talking more about the internal Ansible project and will have some exciting news regarding new Ansible modules available to VMware users!

Otherwise, feel free to pull the container or the repo and leverage it based on your needs!

Enjoy!

Ansible RabbitMQ Playbooks

I am working on an AMQP Message Broker service architecture, using RabbitMQ, at work right now. As part of the design work, I have spent a bit of time in my vSphere lab standing up the cluster to work out all of the configuration, RBAC, policies and various other settings the solution will require. If you haven’t been able to tell lately, my automation tool of choice is Ansible for all the things — I just cannot get enough of it!

Once again, Ansible did not let me down: it provides a set of built-in modules for managing RabbitMQ. I found several examples of using the modules to configure a RabbitMQ node and based my work on those. The reason I wrote my own, rather than simply cloning someone else’s work, was so that I could write the playbooks (and eventually roles) against the service architecture specifications I am documenting for the work project.

I have created a new project space on GitHub to host the RabbitMQ playbooks and you are welcome to clone or fork the code based on your needs.

There are currently two playbooks — one for deploying an Ubuntu template into a vSphere environment and one for installing and configuring RabbitMQ on the deployed nodes. I kept the two playbooks separate so that if you want to install RabbitMQ on bare metal or in AWS, the second playbook can be used standalone. If you are installing RabbitMQ in a vSphere environment, the create_vms.yml playbook can be used.
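To give a feel for the vSphere half, here is a stripped-down sketch of the kind of task create_vms.yml is built around, using the built-in vmware_guest module. The vCenter connection details, template name and node list are illustrative variables, not the exact ones from my playbook:

- name: Deploy RabbitMQ nodes from the Ubuntu template
  vmware_guest:
    hostname: "{{ vcenter_hostname }}"
    username: "{{ vcenter_username }}"
    password: "{{ vcenter_password }}"
    validate_certs: no
    datacenter: "{{ datacenter }}"
    cluster: "{{ cluster }}"
    name: "{{ item }}"
    template: "{{ ubuntu_template }}"
    state: poweredon
  with_items: "{{ rabbitmq_nodes }}"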

The rabbitmq.yml Ansible playbook reads in a set of variables from rabbitmq-vars.yml and then works through the installation steps. I use the official repositories for all of the RabbitMQ and Erlang packages.
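The skeleton of the play is roughly the following (the host group name and privilege escalation details are illustrative); the installation and configuration tasks described below hang off of this play:

---
- hosts: rabbitmq
  become: true
  vars_files:
    - rabbitmq-vars.yml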

Note: If you are not using the 16.04 Xenial release, you can change the playbook to use the Ubuntu distribution you are running in your environment. I have been sticking with Ubuntu 16.04 LTS mostly because the open-vm-tools package fully supports dynamic configuration of the network interfaces through Ansible. If/when 17.10 or 18.04 fully supports this configuration through Ansible, I will upgrade my template.

The first part of the playbook adds the official repositories for RabbitMQ and Erlang, then performs the installation of the RabbitMQ package on the hosts.
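A condensed sketch of those tasks, using the apt_repository and apt modules; the erlang_repository and rabbitmq_repository variables stand in for the official deb lines (and matching signing keys), which you should take from the RabbitMQ documentation rather than from this example:

- name: Add the Erlang and RabbitMQ apt repositories
  apt_repository:
    repo: "{{ item }}"
    state: present
  with_items:
    - "{{ erlang_repository }}"
    - "{{ rabbitmq_repository }}"

- name: Install the RabbitMQ server package
  apt:
    name: rabbitmq-server
    state: present
    update_cache: yes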

The next part is a good example of how to use the built-in RabbitMQ modules Ansible includes as part of the core distribution. The playbook starts the plugins needed for RabbitMQ, adds a new administrator user and removes the default RabbitMQ user.
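A minimal sketch of that pattern, using the rabbitmq_plugin and rabbitmq_user modules (the user name, tags and password variable are examples rather than the values from my service architecture):

- name: Enable the RabbitMQ management plugin
  rabbitmq_plugin:
    names: rabbitmq_management
    state: enabled

- name: Add an administrator user
  rabbitmq_user:
    user: admin
    password: "{{ rabbitmq_admin_password }}"
    vhost: /
    tags: administrator
    configure_priv: .*
    read_priv: .*
    write_priv: .*
    state: present

- name: Remove the default guest user
  rabbitmq_user:
    user: guest
    state: absent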

As I finalize the AMQP Message Broker service architecture, the Ansible playbooks will more fully represent the specifications within the documentation. I hope to publicize the service architecture when it is complete in the coming week.

Enjoy!

Docker ‘ubuntu-ansible’ update

I have been working with Ansible and all of the vSphere modules an enormous amount recently. As part of that work, I’ve extended the functionality of the Docker container I use for all of my development work. The container can be downloaded from Docker Hub and consumed by anyone — there is no proprietary information within the container.

The updated version includes two vSAN Python modules required for an updated vSAN Ansible module I am working on. In addition, the container now pulls the upstream NSX-v Ansible module from VMware, instead of my cloned repo on GitHub.com/virtualelephant. The reason is that all of the code I’ve written for NSX-v is now in the upstream module.

The full Dockerfile can be obtained on GitHub.

# Dockerfile for creating an Ansible Control Server with
# the VMware modules necessary to build a complete Kubernetes
# stack.
# Blog details available: http://virtualelphant.com

FROM ubuntu:artful
MAINTAINER Chris Mutchler <chris@virtualelephant.com>

RUN \
  apt-get -y update && \
  apt-get -y dist-upgrade && \
  apt-get -y install software-properties-common python-software-properties vim && \
  apt-add-repository ppa:ansible/ansible

# Install packages needed for NSX modules in Ansible
RUN \
  apt-get -y update && \
  apt-get -y install ansible python-pip python-dev libxml2 libxml2-dev libxslt1-dev zlib1g-dev npm git && \
  pip install --upgrade pyvmomi && \
  pip install pysphere && \
  pip install nsxramlclient && \
  npm install -g https://github.com/yfauser/raml2html && \
  npm install -g raml-fleece

# Get NSXRAML

# Add additional Ansible modules for NSX and VM folders
RUN \
  git clone -b 6.4 https://github.com/vmware/nsxraml.git /opt/nsxraml && \
  git clone https://github.com/vmware/nsxansible && \
  git clone https://github.com/vmware/ansible-modules-extras-gpl3.git && \
  rm -rf nsxansible/library/__init__.py && \
  cp nsxansible/library/*.py /usr/lib/python2.7/dist-packages/ansible/modules/cloud/vmware/ && \
  git clone https://github.com/openshift/openshift-ansible-contrib && \
  /bin/cp openshift-ansible-contrib/reference-architecture/vmware-ansible/playbooks/library/vmware*.py /usr/lib/python2.7/dist-packages/ansible/modules/cloud/vmware/

# Add vSAN Python API modules - must be done after pyVmomi installation
COPY vsanmgmtObjects.py /usr/lib/python2.7/
COPY vsanapiutils.py /usr/lib/python2.7/

# Setup container to properly use SSH bastion host for Ansible
RUN mkdir /root/.ssh
RUN chmod 740 /root/.ssh
COPY config /root/.ssh/config
COPY ansible.cfg /etc/ansible/

# Edit MOTD to give container consumer info
COPY motd /etc/motd
RUN echo '[ ! -z "$TERM" -a -r /etc/motd ] && cat /etc/issue && cat /etc/motd' >> /etc/bash.bashrc

I am still mounting a local volume that contains the Ansible playbooks. For reference, I run the container with the following command:

$ docker run -it --rm --name ansible-sddc -v /PATH/TO/ANSIBLE:/opt/ansible virtualelephant/ubuntu-ansible

If you run into any issues with the Docker container, please let me know on Twitter. Enjoy!

Infrastructure-as-Code: Project Update

The Infrastructure-as-Code project is progressing along rather well. When I set out on the project in November of 2017, I wanted to use it as a means to learn several new technologies — Ansible, Python, CoreOS and Kubernetes. The initial stages of the project focused on understanding how CoreOS works and how to automate the installation of a CoreOS VM within a vSphere environment. Once that was complete, I moved on to automating the initial deployment of the environment and supporting components. This is where the bulk of my time has been spent over the past several weeks.

As previous posts have shown, Ansible is a powerful automation framework within a vSphere environment. The challenge has been leveraging the existing, publicly available modules to perform all of the actions required to completely automate the deployment. The Ansible NSX modules available on GitHub are a good starting point, but they do not provide all of the desired functionality.

That missing functionality led me to fork the project into my own repo and, shortly after adding the necessary DHCP functionality, submit my very first pull request on GitHub.

The process of adding the desired functionality has become a bit of a rabbit-hole. Even still, I am enjoying the process of working through all the tasks and seeing the pieces begin to come together.

Thus far, I have the following working through Ansible (with a sample task sketched after the list):

  • NSX logical switch and NSX edge deployment.
  • DHCP and firewall policy configuration on NSX edge.
  • Ubuntu SSH bastion host deployment and configuration.
  • Ubuntu DNS hosts deployment and configuration.
  • Ubuntu-based Kubernetes master nodes deployment (static number).
  • CoreOS-based Kubernetes minion nodes deployment (dynamic number).
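For example, the logical switch portion boils down to a task along these lines, using the nsx_logical_switch module from the vmware/nsxansible repo (the transport zone and switch names are placeholders, not the values from my environment):

- name: Create the Kubernetes logical switch
  nsx_logical_switch:
    nsxmanager_spec: "{{ nsxmanager_spec }}"
    name: "k8s-transit-ls"
    transportzone: "tz-local"
    description: "Logical switch for the Kubernetes environment"
    state: present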

In addition to the Ansible playbooks I’ve written to automate the project, creating a Docker image specifically to act as the Ansible Control Server, with all of the required third-party modules, has really helped streamline the project and should make it something I can ‘release’ for others to use to duplicate my efforts.

The remaining work before the project is complete:

  • Add DNAT/SNAT configuration functionality to Ansible for NSX edge (testing in progress).
  • Update CoreOS nodes to use logical switch DVS port group.
  • Kubernetes configuration through Ansible.

I’ve really enjoyed all the challenges and new technologies (to me) the project has allowed me to learn. I am also looking forward to being able to contribute back to the Ansible community with additional capabilities for NSX modules.

 

Docker for Ansible + VMware NSX Automation

I am writing this as I sit and watch the annual viewing of The Hobbit and The Lord of the Rings trilogies over the Christmas holiday. The next couple of weeks should provide the time necessary to hopefully complete the Infrastructure-as-Code project I undertook last month. As part of that project, I spoke previously about how Ansible is being used to provide the automation layer for the deployment and configuration of the SDDC Kubernetes stack. As part of the bootstrapping effort, I have decided to create a Docker image with the necessary components to perform the initial virtual machine deployment and NSX configuration.

The Dockerfile for the Ubuntu-based Docker container is hosted both on Docker Hub and within the Github repository for the larger Infrastructure-as-Code project.

When the Docker container is launched, it includes the necessary components to interact with the VMware stack, including additional modules for VM folders, resource pools and VMware NSX.

To launch the container, I am running it with the following options to include the local copies of the Infrastructure-as-Code project.

$ docker run -it --name ansible -v /Users/cmutchler/github/vsphere-kubernetes/ansible/:/opt/ansible virtualelephant/ubuntu-ansible

The Docker container is a bit on the larger side, but it is designed to run locally on a laptop or desktop. The image includes the required Python and NSX bits so that the additional GitHub repositories cloned into the image will operate correctly. The OpenShift project includes additional modules for interacting with vSphere folders and resource pools, while the NSX modules from the VMware GitHub repository include the necessary bits for leveraging Ansible with NSX.

Once running, the Docker container is then able to bootstrap the deployment of the Infrastructure-as-Code project using the Ansible playbooks I’ve published on Github. Enjoy!