Ansible RabbitMQ Playbooks

I am working on an AMQP Message Broker service architecture, using RabbitMQ, at work right now. As part of the design work, I have spent a bit of time in my vSphere lab standing up the cluster to work out all of the configuration, RBAC, policies and various other settings that the solution will require. If you haven't been able to tell lately, my automation tool of choice is Ansible for all the things — I just cannot get enough of it!

Once again, Ansible did not let me down: it provides a set of built-in modules for managing RabbitMQ. I found several examples of using the modules to configure a RabbitMQ node and based my work on those. The reason I wrote my own, rather than just git cloning someone else's work, was so that I could write the playbooks (and eventually roles) against the service architecture specifications I am documenting for the work project.

I have created a new project space on GitHub to host the RabbitMQ playbooks and you are welcome to clone or fork the code based on your needs.

There are currently two playbooks — one for deploying an Ubuntu template into a vSphere environment and one for installing and configuring RabbitMQ on the deployed nodes. I kept the two playbooks separate so that if you want to install RabbitMQ on a bare-metal or AWS environment, the second playbook can be used standalone. If you are installing RabbitMQ in a vSphere environment, the create_vms.yml playbook can be used.

The rabbitmq.yml Ansible playbook will read in a set of variables from rabbitmq-vars.yml and then go through the installation steps. I use the official repositories for all of the RabbitMQ and Erlang packages.
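For illustration, a minimal rabbitmq-vars.yml might look something like the following sketch. The variable names here are hypothetical placeholders, not necessarily the ones the playbook actually uses.

# Hypothetical rabbitmq-vars.yml; variable names are illustrative only
rabbitmq_admin_user: admin
rabbitmq_admin_password: "{{ vault_rabbitmq_admin_password }}"
rabbitmq_plugins:
  - rabbitmq_management
ubuntu_release: xenial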

Note: If you are not using the 16.04 Xenial release, you can change the playbook to use the distribution of Ubuntu you are running inside your environment. I have been sticking with Ubuntu 16.04 LTS mostly because the open-vm-tools package fully supports dynamic configuration of the network interfaces through Ansible. If/when 17.10 or 18.04 fully support this configuration through Ansible, I will upgrade my template.

The first part of the playbook adds the official repositories for RabbitMQ and Erlang, then performs the installation of the RabbitMQ package on the hosts.
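As a rough sketch, those tasks can be expressed with the apt_repository and apt modules. The repository lines below are examples of the official repos as they existed around this time, so verify them against the current RabbitMQ and Erlang documentation; they may not match the playbook exactly.

# Hedged sketch of the repository and install tasks; the repo URLs
# are examples and may not match the playbook exactly
- name: Add the Erlang Solutions apt repository
  apt_repository:
    repo: "deb https://packages.erlang-solutions.com/ubuntu xenial contrib"
    state: present

- name: Add the official RabbitMQ apt repository
  apt_repository:
    repo: "deb https://dl.bintray.com/rabbitmq/debian xenial main"
    state: present

- name: Install the RabbitMQ server package
  apt:
    name: rabbitmq-server
    state: present
    update_cache: yes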

The next part is a good example of how to use the built-in RabbitMQ modules Ansible includes as part of the core distribution. The playbook enables the plugins needed for RabbitMQ, adds a new administrator user and removes the default RabbitMQ user.
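In rough form, those tasks look like the following; the administrator credentials are placeholder variables.

# Sketch of the built-in rabbitmq_plugin and rabbitmq_user modules
- name: Enable the RabbitMQ management plugin
  rabbitmq_plugin:
    names: rabbitmq_management
    state: enabled

# The user and password variables are placeholders
- name: Add an administrator account
  rabbitmq_user:
    user: "{{ rabbitmq_admin_user }}"
    password: "{{ rabbitmq_admin_password }}"
    vhost: /
    tags: administrator
    configure_priv: .*
    read_priv: .*
    write_priv: .*
    state: present

- name: Remove the default guest user
  rabbitmq_user:
    user: guest
    state: absent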

As I finalize the AMQP Message Broker service architecture, the Ansible playbooks will more fully represent the specifications within the documentation. I hope to publicize the service architecture when it is complete in the coming week.

Enjoy!

Backup vCenter and NSX to AWS S3

As I go deeper into the Ansible rabbit-hole, I am beginning to look for ways to manage upgrade operations through Ansible playbooks. As part of that journey, I wanted to begin backing up my VCSA and NSX-v VM appliances using their built-in methods prior to executing playbooks to perform the upgrades. Both appliances allow FTP, SFTP or SCP connections through their management interfaces for backing up the configuration data — all that is needed is an endpoint.

I wondered if it would be possible to back up these items to S3 using my AWS account. A quick search through my AWS portal showed me that I could use the AWS Storage Gateway, set up an S3 bucket for backups and mount the partition on a Linux VM for the vSphere appliances to use as an endpoint. With minimal effort, I was able to configure both appliances to back up to the local Linux VM and see that data replicated into S3 in a matter of minutes.

Fortunately, AWS has outstanding documentation for deploying the Storage Gateway within a vSphere environment (here). Once the Storage Gateway is deployed and the S3 bucket and file share have been created, you can mount the share on a Linux VM.

linux-vm$ mount -t nfs -o nolock 10.180.138.20:/usa1-2-lab-backups /opt
linux-vm$ mkdir -p /opt/vcsa
linux-vm$ mkdir -p /opt/nsxv

I created separate backup locations on the NFS mount point to the Storage Gateway — one for the VCSA and one for NSX-v. At this point, it is just a matter of configuring the two appliances to use the endpoint.
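If you want the mount and directories managed as code, the same steps can be expressed with Ansible's mount and file modules. This is a minimal sketch using the share and directories from the commands above; the host group name is a placeholder.

- hosts: backup-endpoint
  become: true
  tasks:
    # Persist the Storage Gateway NFS share in /etc/fstab and mount it
    - name: Mount the Storage Gateway file share
      mount:
        path: /opt
        src: "10.180.138.20:/usa1-2-lab-backups"
        fstype: nfs
        opts: nolock
        state: mounted

    # One backup directory per appliance, matching the mkdir commands above
    - name: Create the per-appliance backup directories
      file:
        path: "/opt/{{ item }}"
        state: directory
      with_items:
        - vcsa
        - nsxv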

For the VCSA, log into port 5480 over HTTPS and select the Backup option on the left-hand menu.

From there you can configure the backup schedule and then manually run a backup job using those same details.

Similarly, the NSX-v Manager has a Backup and Restore area inside its management interface where you can configure the endpoint. NSX-v only supports FTP or SFTP today, but I was able to use the endpoint over SFTP.

Once the backup location is configured, you can execute a backup job through the admin interface.

From there it was just a matter of verifying the data was being sent and replicated to the S3 bucket I created in AWS.

That is all there is to it! Backing up the appliance data to an AWS S3 bucket using the Storage Gateway is nice and easy. Now I can begin working on the Ansible playbooks to upgrade the VCSA through the API, knowing the data is backed up to the cloud!

Enjoy!

Docker ‘ubuntu-ansible’ update

I have been working with Ansible and all of the vSphere modules an enormous amount recently. As part of that work, I’ve extended the functionality of the Docker container I use for all of my development work. The container can be downloaded from Docker Hub and consumed by anyone — there is no proprietary information within the container.

The updated version includes two vSAN Python modules required for an updated vSAN Ansible module I am working on. In addition, the container now pulls the upstream NSX-v Ansible module from VMware, instead of my cloned repo on GitHub.com/virtualelephant. The reason is that all of the code I've written for NSX-v is now in the upstream module.

The full Dockerfile can be obtained on GitHub.

# Dockerfile for creating an Ansible Control Server with
# the VMware modules necessary to build a complete Kubernetes
# stack.
# Blog details available: http://virtualelephant.com

FROM ubuntu:artful
MAINTAINER Chris Mutchler <chris@virtualelephant.com>

RUN \
  apt-get -y update && \
  apt-get -y dist-upgrade && \
  apt-get -y install software-properties-common python-software-properties vim && \
  apt-add-repository ppa:ansible/ansible

# Install packages needed for NSX modules in Ansible
RUN \
  apt-get -y update && \
  apt-get -y install ansible python-pip python-dev libxml2 libxml2-dev libxslt1-dev zlib1g-dev npm git && \
  pip install --upgrade pyvmomi && \
  pip install pysphere && \
  pip install nsxramlclient && \
  npm install -g https://github.com/yfauser/raml2html && \
  npm install -g raml-fleece

# Get NSXRAML

# Add additional Ansible modules for NSX and VM folders
RUN \
  git clone -b 6.4 https://github.com/vmware/nsxraml.git /opt/nsxraml && \
  git clone https://github.com/vmware/nsxansible && \
  git clone https://github.com/vmware/ansible-modules-extras-gpl3.git && \
  rm -rf nsxansible/library/__init__.py && \
  cp nsxansible/library/*.py /usr/lib/python2.7/dist-packages/ansible/modules/cloud/vmware/ && \
  git clone https://github.com/openshift/openshift-ansible-contrib && \
  /bin/cp openshift-ansible-contrib/reference-architecture/vmware-ansible/playbooks/library/vmware*.py /usr/lib/python2.7/dist-packages/ansible/modules/cloud/vmware/

# Add vSAN Python API modules - must be done after pyVmomi installation
COPY vsanmgmtObjects.py /usr/lib/python2.7/
COPY vsanapiutils.py /usr/lib/python2.7/

# Setup container to properly use SSH bastion host for Ansible
RUN mkdir /root/.ssh
RUN chmod 740 /root/.ssh
COPY config /root/.ssh/config
COPY ansible.cfg /etc/ansible/

# Edit MOTD to give container consumer info
COPY motd /etc/motd
RUN echo '[ ! -z "$TERM" -a -r /etc/motd ] && cat /etc/issue && cat /etc/motd' >> /etc/bash.bashrc

I am still mounting a local volume that contains the Ansible playbooks. For reference, I run the container with the following command:

$ docker run -it --rm --name ansible-sddc -v /PATH/TO/ANSIBLE:/opt/ansible virtualelephant/ubuntu-ansible

If you run into any issues with the Docker container, please let me know on Twitter. Enjoy!

Upcoming VMUG Conferences

Speaking to customers and presenting in public are things I have come to really enjoy over the past few years. The opportunity to share experiences, issues and resolutions really resonates with me, so I am grateful to have presented in several VMUG webinars over the past two years. Now I have the opportunity to speak, in person, at two upcoming VMUG conferences — Phoenix, AZ in June and Indianapolis, IN in July.

I will be speaking on vSphere 6.7, including NSX, and how the VMware internal private cloud team leverages the SDDC to provide a variety of workload capacities to the internal R&D teams. I will be covering best practices and lessons learned for the SDDC stack with 6.7, and how to upgrade successfully to the latest releases from vSphere 6.0 and vSphere 6.5.

Phoenix VMUG

June 21, 2018

12PM-4:30PM MDT

Register here.


Indianapolis VMUG

July 10, 2018

Register here.

I look forward to seeing you there!

Deploying an SDDC with Ansible

The small effort I started at the end of last year using Ansible to deploy NSX components has snowballed a bit and found its way into a project at work. As we are working to deploy a new HCI architecture internally, one of the efforts we are embarking on is a fully automated, infrastructure-as-code architecture design. There are several components that are working in conjunction with one another to be able to accomplish that task, but the part I am going to talk about today is automation through Ansible.

As many of you have seen, I've recently been automating NSX component delivery and configuration using the open source VMware NSX Ansible modules. I've been fortunate enough to put my meager coding skills to work and enhance those modules this year — adding new capabilities exposed through the API for NSX Edge configuration. In addition to the NSX Ansible modules, there are a multitude of upstream Ansible modules for VMware components. The first step was evaluating what the current upstream modules were capable of performing and putting together a small demo for my colleagues to observe both the power of Ansible and the ease of use.
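To give a flavor of what the NSX modules look like in a playbook, here is a hedged sketch of creating a logical switch with the upstream nsxansible module. The connection details and object names are placeholders, loosely based on the module's documented interface.

# Hypothetical task using the upstream nsx_logical_switch module;
# hostnames, credentials and names are placeholders
- name: Create an application logical switch
  nsx_logical_switch:
    nsxmanager_spec:
      raml_file: '/opt/nsxraml/nsx_api.raml'
      host: 'nsxmanager.example.com'
      user: 'admin'
      password: "{{ nsx_manager_password }}"
    name: app-tier-ls
    transportzone: "tz-01"
    controlplanemode: "UNICAST_MODE"
    description: "App tier logical switch"
    state: present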

My initial impression of Ansible is that it is probably the most user-friendly of the configuration management/automation tools currently available. And for the VMware SDDC components, it appears to be rather robust. I have identified a few holes, but nothing insurmountable — the great thing is that if something is exposed via an API, creating an Ansible module to leverage that API is rather simple.

The Ansible playbooks are a first step; I really want to convert most of them into Ansible roles. I've started committing the code in my GitHub space. You can download the playbooks and start using them if you'd like.

https://github.com/virtualelephant/vsphere-sddc

I currently have playbooks for creating a datacenter, cluster, adding hosts, configuring several advanced settings on each ESXi host, creating a DVS with port groups and performing a few other configuration tasks. The bit that I want to work out next is deployment of the vCenter server through Ansible. It’s currently a work in progress, but it has been a fun effort thus far.
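To give an idea of what those playbooks contain, here is a condensed sketch using the upstream vmware_datacenter, vmware_cluster and vmware_host modules. The vCenter details, object names and credentials are all placeholders.

- hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    # Create the datacenter object in vCenter
    - name: Create the datacenter
      vmware_datacenter:
        hostname: "{{ vcenter_hostname }}"
        username: "{{ vcenter_username }}"
        password: "{{ vcenter_password }}"
        datacenter_name: "sddc-dc01"
        state: present

    # Create the cluster with HA and DRS enabled
    - name: Create the cluster
      vmware_cluster:
        hostname: "{{ vcenter_hostname }}"
        username: "{{ vcenter_username }}"
        password: "{{ vcenter_password }}"
        datacenter_name: "sddc-dc01"
        cluster_name: "sddc-cluster01"
        enable_ha: yes
        enable_drs: yes

    # Join an ESXi host to the new cluster
    - name: Add an ESXi host to the cluster
      vmware_host:
        hostname: "{{ vcenter_hostname }}"
        username: "{{ vcenter_username }}"
        password: "{{ vcenter_password }}"
        datacenter_name: "sddc-dc01"
        cluster_name: "sddc-cluster01"
        esxi_hostname: "esxi01.lab.local"
        esxi_username: root
        esxi_password: "{{ esxi_password }}"
        state: present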

Enjoy!