NSX Ansible Module Update – nsx_manager_roles

As the previous post discussed, using the API directly through Ansible was an adequate initial step for configuring a user's role within the NSX Manager. The next logical step was to include this functionality directly inside the upstream NSX Ansible module.

After an initial commit yesterday afternoon, I received some feedback from the community and posted a new update to the module today that is better in line with Ansible best practices and idempotency.

The module is now available in the master branch on GitHub.

The module can be used inside an Ansible role or playbook with the following code:

---
- hosts: all
  connection: local
  gather_facts: False

  tasks:
    - name: Configure NSX Manager roles
      nsx_manager_roles:
        nsxmanager_spec: "{{ nsxmanager_spec }}"
        state: present
        name: "{{ nsx_uid }}"
        is_group: true
        role_type: "{{ nsx_role }}"
      register: add_nsx_role

The Ansible task can specify a state of present to add a new user and assign a role, modify to change the role of an existing user or group, or absent to remove a user or group.
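
For example, a task like the following could later remove the same group. This is only a sketch that reuses the variables from the playbook above; check the module documentation in the repository for the exact parameters expected when state is absent.

- name: Remove NSX Manager role assignment
  nsx_manager_roles:
    nsxmanager_spec: "{{ nsxmanager_spec }}"
    state: absent
    name: "{{ nsx_uid }}"
    is_group: true
  register: remove_nsx_role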

If you run into any issues using the module, please reach out to me on Twitter or comment directly in the Issues section of the NSX Ansible repository on GitHub.

Enjoy!

NSX Roles Automation through Ansible

This is a bit of a quick hit.

Yesterday, while working with Ansible to fully deploy and configure an NSX-v Manager, we worked out a method to add a user or group of users and assign the appropriate role. The current NSX Ansible module does not support this functionality, so the role we are executing relies on the URI module.

- name: Set NSX Permissions
  uri:
    url: "https://{{ nsxmanager_spec.host }}/api/2.0/services/usermgmt/role/global_admins@virtualelephant.com?isGroup=true"
    method: POST
    url_username: "{{ nsxmanager_spec.user }}"
    url_password: "{{ nsxmanager_spec.password }}"
    headers:
      Content-Type: "application/xml"
      Accept: "application/xml"
    body: "<accessControlEntry><role>enterprise_admin</role></accessControlEntry>"
    body_format: raw
    force_basic_auth: yes
    validate_certs: no
    use_proxy: no
    return_content: yes
    status_code: 204
  tags: nsx_permissions
  delegate_to: localhost

The API call does not return the typical 200 status on success, so the task above tells Ansible to expect a 204 status instead.

I am currently working on adding this functionality to the NSX Ansible module published on GitHub. Time (and testing) allowing, the code will be available in the coming days.

Enjoy!

Ansible RabbitMQ Playbooks

I am working on an AMQP Message Broker service architecture, using RabbitMQ, at work right now. As part of the design work, I have spent a bit of time in my vSphere lab standing up the cluster to work out all the configuration, RBAC, policies and other various settings that will be required by the solution. If you haven’t been able to tell lately, my automation tool of choice is Ansible for all the things — I just cannot get enough of it!

Once again, Ansible did not let me down and provides a set of built-in modules for managing RabbitMQ. I found several examples of using the modules to configure a RabbitMQ node and based my work on those. The reason I wrote my own, rather than just cloning someone else's work, was so that I could write the playbooks (and eventually roles) based on the service architecture specifications I am documenting for the work project.

I have created a new project space on GitHub to host the RabbitMQ playbooks and you are welcome to clone or fork the code based on your needs.

There are currently two playbooks: one for deploying an Ubuntu template into a vSphere environment and one for installing and configuring RabbitMQ on the deployed nodes. I kept the two playbooks separate so that if you want to install RabbitMQ in a bare-metal or AWS environment, the second playbook can be used on its own. If you are choosing to install RabbitMQ in a vSphere environment, the create_vms.yml playbook can be used.

The rabbitmq.yml Ansible playbook will read in a set of environment variables from rabbitmq-vars.yml and then go through the installation steps. I use official repositories for all of the RabbitMQ and Erlang packages.
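
For a rough idea of the shape of that file, a minimal variables file might look like the sketch below. The variable names are illustrative placeholders only; the authoritative list is in rabbitmq-vars.yml in the repository.

---
# Illustrative placeholders -- not necessarily the variable names
# used in the repository.
rabbitmq_admin_user: admin
rabbitmq_admin_password: "ChangeMe123"
rabbitmq_vhost: "/"
ubuntu_release: xenial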

Note: If you are not using the 16.04 Xenial release, you can change the playbook to use the Ubuntu distribution you are running in your environment. I have been sticking with Ubuntu 16.04 LTS mostly because the open-vm-tools package fully supports dynamic configuration of the network interfaces through Ansible. If/when 17.10 or 18.04 fully supports this configuration through Ansible, I will upgrade my template.

The first part of the playbook adds the official repositories for RabbitMQ and Erlang, then performs the installation of the RabbitMQ package on the hosts.
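
A minimal sketch of those tasks, assuming Ubuntu 16.04 and using placeholder repository and key URLs (the playbook in the repository is the authoritative version), looks something like this:

- name: Add RabbitMQ signing key
  apt_key:
    url: "https://www.rabbitmq.com/rabbitmq-release-signing-key.asc"
    state: present

- name: Add RabbitMQ and Erlang apt repositories
  apt_repository:
    repo: "{{ item }}"
    state: present
  with_items:
    # Placeholder repository lines -- substitute the official
    # RabbitMQ and Erlang repositories for your release.
    - "deb https://dl.bintray.com/rabbitmq/debian xenial main"
    - "deb https://packages.erlang-solutions.com/ubuntu xenial contrib"

- name: Install the RabbitMQ server package
  apt:
    name: rabbitmq-server
    state: present
    update_cache: yes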

The next part is a good example of how to use the built-in RabbitMQ modules Ansible includes as part of the core distribution. The playbook enables the plugins needed for RabbitMQ, adds a new administrator user and removes the default RabbitMQ user.
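
A sketch of that section, using the built-in rabbitmq_plugin and rabbitmq_user modules, might look like the following. The administrator name, password variables and privilege patterns are placeholders, not the exact values from my playbook.

- name: Enable the RabbitMQ management plugin
  rabbitmq_plugin:
    names: rabbitmq_management
    state: enabled

- name: Add an administrator user
  rabbitmq_user:
    user: "{{ rabbitmq_admin_user }}"
    password: "{{ rabbitmq_admin_password }}"
    vhost: "/"
    configure_priv: ".*"
    read_priv: ".*"
    write_priv: ".*"
    tags: administrator
    state: present

- name: Remove the default guest user
  rabbitmq_user:
    user: guest
    state: absent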

As I finalize the AMQP Message Broker service architecture, the Ansible playbooks will more fully represent the specifications within the documentation. I hope to publicize the service architecture when it is complete in the coming week.

Enjoy!

Docker ‘ubuntu-ansible’ update

I have been working with Ansible and all of the vSphere modules an enormous amount recently. As part of that work, I’ve extended the functionality of the Docker container I use for all of my development work. The container can be downloaded from Docker Hub and consumed by anyone — there is no proprietary information within the container.

The updated version includes two vSAN Python modules required for an updated vSAN Ansible module I am working on. In addition, the container now pulls the upstream NSX-v Ansible module from VMware, instead of my cloned repo on GitHub.com/virtualelephant, since all of the code I've written for NSX-v is now in the upstream module.

The full Dockerfile can be obtained on GitHub.

# Dockerfile for creating an Ansible Control Server with
# the VMware modules necessary to build a complete Kubernetes
# stack.
# Blog details available: http://virtualelephant.com

FROM ubuntu:artful
MAINTAINER Chris Mutchler <chris@virtualelephant.com>

RUN \
  apt-get -y update && \
  apt-get -y dist-upgrade && \
  apt-get -y install software-properties-common python-software-properties vim && \
  apt-add-repository ppa:ansible/ansible

# Install packages needed for NSX modules in Ansible
RUN \
  apt-get -y update && \
  apt-get -y install ansible python-pip python-dev libxml2 libxml2-dev libxslt1-dev zlib1g-dev npm git && \
  pip install --upgrade pyvmomi && \
  pip install pysphere && \
  pip install nsxramlclient && \
  npm install -g https://github.com/yfauser/raml2html && \
  npm install -g raml-fleece

# Get NSXRAML

# Add additional Ansible modules for NSX and VM folders
RUN \
  git clone -b 6.4 https://github.com/vmware/nsxraml.git /opt/nsxraml && \
  git clone https://github.com/vmware/nsxansible && \
  git clone https://github.com/vmware/ansible-modules-extras-gpl3.git && \
  rm -rf nsxansible/library/__init__.py && \
  cp nsxansible/library/*.py /usr/lib/python2.7/dist-packages/ansible/modules/cloud/vmware/ && \
  git clone https://github.com/openshift/openshift-ansible-contrib && \
  /bin/cp openshift-ansible-contrib/reference-architecture/vmware-ansible/playbooks/library/vmware*.py /usr/lib/python2.7/dist-packages/ansible/modules/cloud/vmware/

# Add vSAN Python API modules - must be done after pyVmomi installation
COPY vsanmgmtObjects.py /usr/lib/python2.7/
COPY vsanapiutils.py /usr/lib/python2.7/

# Setup container to properly use SSH bastion host for Ansible
RUN mkdir /root/.ssh
RUN chmod 740 /root/.ssh
COPY config /root/.ssh/config
COPY ansible.cfg /etc/ansible/

# Edit MOTD to give container consumer info
COPY motd /etc/motd
RUN echo '[ ! -z "$TERM" -a -r /etc/motd ] && cat /etc/issue && cat /etc/motd' >> /etc/bash.bashrc

I am still mounting a local volume that contains the Ansible playbooks within it. For reference, I run the container with the following command:

$ docker run -it --rm --name ansible-sddc -v /PATH/TO/ANSIBLE:/opt/ansible virtualelephant/ubuntu-ansible

If you run into any issues with the Docker container, please let me know on Twitter. Enjoy!

Deploying an SDDC with Ansible

The small effort I started at the end of last year using Ansible to deploy NSX components has snowballed a bit and found its way into a project at work. As we are working to deploy a new HCI architecture internally, one of the efforts we are embarking on is a fully automated, infrastructure-as-code architecture design. There are several components that are working in conjunction with one another to be able to accomplish that task, but the part I am going to talk about today is automation through Ansible.

As many of you have seen, I've recently been automating NSX component delivery and configuration using the open source VMware NSX Ansible modules. I've been fortunate enough to put my meager coding skills to work and enhance those modules this year, adding new capabilities exposed through the API for NSX Edge configuration. In addition to the NSX Ansible modules, there are a multitude of upstream Ansible modules for VMware components. The first step was evaluating what the current upstream modules were capable of performing and putting together a small demo for my colleagues to observe both the power of Ansible and its ease of use.

My initial impression of Ansible is that it is probably the most user-friendly of the configuration management and automation tools currently available, and for the VMware SDDC components it appears to be rather robust. I have identified a few holes, but nothing insurmountable; the great thing is that if something is exposed via an API, creating an Ansible module to leverage that API is fairly straightforward.

The Ansible playbooks are a first step; I really want to convert most of them into Ansible roles. I've started committing the code to my GitHub space, and you can download the playbooks and start using them if you'd like.

https://github.com/virtualelephant/vsphere-sddc

I currently have playbooks for creating a datacenter and cluster, adding hosts, configuring several advanced settings on each ESXi host, creating a DVS with port groups, and performing a few other configuration tasks. The bit that I want to work out next is deployment of the vCenter server through Ansible. It is currently a work in progress, but it has been a fun effort thus far.
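
As a rough sketch of what the first few of those tasks look like, using the upstream vmware_datacenter, vmware_cluster and vmware_host modules (the variable names here are placeholders rather than the exact ones used in the vsphere-sddc repository):

- name: Create the datacenter
  vmware_datacenter:
    hostname: "{{ vcenter_host }}"
    username: "{{ vcenter_user }}"
    password: "{{ vcenter_password }}"
    datacenter_name: "{{ datacenter }}"
    validate_certs: no
    state: present

- name: Create the cluster with HA, DRS and vSAN enabled
  vmware_cluster:
    hostname: "{{ vcenter_host }}"
    username: "{{ vcenter_user }}"
    password: "{{ vcenter_password }}"
    datacenter_name: "{{ datacenter }}"
    cluster_name: "{{ cluster }}"
    enable_ha: yes
    enable_drs: yes
    enable_vsan: yes
    validate_certs: no

- name: Add ESXi hosts to the cluster
  vmware_host:
    hostname: "{{ vcenter_host }}"
    username: "{{ vcenter_user }}"
    password: "{{ vcenter_password }}"
    datacenter_name: "{{ datacenter }}"
    cluster_name: "{{ cluster }}"
    esxi_hostname: "{{ item }}"
    esxi_username: root
    esxi_password: "{{ esxi_password }}"
    state: present
    validate_certs: no
  with_items: "{{ esxi_hosts }}"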

Enjoy!