This is the first post in what I plan to be a sporadic yet ongoing series highlighting certain aspects of a VCDX / Architect skillset. These VCDX Quick Hits will cover a range of topics and key in on certain aspects of the VCDX blueprint. It is my hope they will trigger some level of critical thinking on the reader's part and help them improve their skillset.

The idea for this post came after listening to a post-mortem call for a recent incident that occurred at work. The incident itself was a lower priority Severity 2 incident, meaning it only impacted a small subset of customers in a small failure domain (a single vCenter Server). As architects, we know monitoring is a key component of any architecture design — whether it is intended for a VCDX submission or not.

In IT Architect: Foundation in the Art of Infrastructure Design (Amazon link), the authors state:

“A good monitoring solution will identify key metrics of both the physical and virtual infrastructure across all key resources: compute, storage, and networking.”

The post-mortem call got me thinking about maturity within our monitoring solutions and improving our architecture designs by striving to understand the components better earlier in the design and pilot phases.

It is common practice to identify the components and services of an architecture we have designed, or are responsible for, and to outline which of them are key to supporting the service offering. When I wrote the VMware Integrated OpenStack design documentation, which later became the basis for my VCDX defense, I identified specific OpenStack services which needed to be monitored. The following screen capture shows how I captured the services within the documentation.

As you can see from the above graphic, I gave each service definition a unique ID, documented the component/service, documented where the service should be running, and provided a brief description of the component/service. The information was used to create the Sprint story for the monitoring team to create the alert definitions within the monitoring solution.

All good, right?

The short answer is, not really. What I provided in my design was adequate for an early service offering, but left room for further maturity. Going back to the post-mortem call, this is where additional maturity in the architecture design would have helped reduce the MTTR of the incident.

During the incident, two processes running on a single appliance were being monitored to determine if they were running. Just like my VMware Integrated OpenStack design, these services had been identified and were being monitored per the architecture specification. However, what was not documented was the dependency between the two processes. In this case, process B was dependent on process A, and although process A was running, it was not properly responding to the queries from process B. As a result, the monitoring system believed everything was running correctly — which it was, from an alert definition perspective — and the incident was not discovered immediately. Once process A was restarted, it began responding to the queries from process B and service was restored.

So what could have been done?

First, the architecture design could have specified an alert definition for the key services (or processes) that went beyond simply checking whether the service is running.

Second, the architecture design could have captured the inter-dependencies between these two processes and specified a more detailed alert definition. In this case, a log entry was written each time process A did not correctly respond to process B. Having an alert definition for this entry in the logs would have allowed the monitoring system to generate an alert.
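As a rough illustration, a log-based check for that failure signature could be as simple as a script the monitoring system runs on a schedule. The log path and failure message below are placeholders, not the actual entries from the incident:

#!/bin/bash
# Hypothetical log-based alert check. Substitute the log location and the
# message process A actually writes when it fails to answer process B.
LOG="/var/log/process-a/process-a.log"
PATTERN="failed to respond to query"

if tail -n 500 "$LOG" | grep -q "$PATTERN"; then
  echo "CRITICAL: process A is not responding to process B queries"
  exit 2   # non-zero exit tells the monitoring wrapper to raise an alert
fi

echo "OK: no response failures in recent log entries"
exit 0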

Third, the architecture design could have incorporated canary testing to provide a more mature monitoring solution. It may be necessary to clarify what I mean when I use the term canary testing.

“Well into the 20th century, coal miners brought canaries into coal mines as an early-warning signal for toxic gases, primarily carbon monoxide. The birds, being more sensitive, would become sick before the miners, who would then have a chance to escape or put on protective respirators.” (Wikipedia link)

Canary testing would then imply a method of checking the service for issues before a customer discovers them. Canary testing should include common platform operations a customer would typically perform — this can also be thought of as end-to-end testing.

For example, a VMware Integrated OpenStack service offering with NSX would need to ensure not only that the NSX Manager is online, but also that the OpenStack Neutron service is able to communicate with it. A good test could be to make an OpenStack Neutron API call to deploy an NSX Edge Services Gateway, or to create a new tenant network (NSX logical switch).
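A minimal sketch of such a canary, assuming the python-openstackclient CLI and a dedicated canary tenant, might look like the following — create a throwaway tenant network through Neutron and delete it again, alerting if either call fails:

#!/bin/bash
# Hypothetical canary test: exercise the Neutron API end-to-end by creating
# and deleting a throwaway tenant network. If either call fails, Neutron (or
# its path to NSX Manager) is degraded even though the individual services
# may still report as "running".
set -euo pipefail

source /path/to/canary-openrc    # credentials for a dedicated canary tenant

CANARY_NET="canary-net-$(date +%s)"
openstack network create "$CANARY_NET" > /dev/null
openstack network delete "$CANARY_NET"

echo "OK: Neutron created and deleted ${CANARY_NET}"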

There are likely numerous ways a customer will interact with your service offering, and defining these additional tests within the architecture design itself is something I challenge you to consider.


As the project moves into the next phase, it is beginning to rely on Ansible for deploying the individual components that will define the environment. This installment of the series covers the use of Ansible with VMware NSX. VMware has provided, on GitHub, a set of Ansible modules for integrating with NSX. The modules make it easy to create NSX Logical Switches, NSX Distributed Logical Routers, NSX Edge Services Gateways (ESGs) and many other components.

The GitHub repository can be found here.

Step 1: Installing Ansible NSX Modules

In order to support the Ansible NSX modules, it was necessary to install several supporting packages on the Ubuntu Ansible Control Server (ACS).

$ sudo apt-get install python-dev libxml2 libxml2-dev libxslt1-dev zlib1g-dev npm
$ sudo pip install nsxramlclient
$ sudo npm install -g https://github.com/yfauser/raml2html
$ sudo npm install -g https://github.com/yfauser/raml2postman
$ sudo npm install -g raml-fleece

In addition to the Ansible NSX modules, the ACS will also require the NSX for vSphere RAML repository. The RAML specification describes the NSX for vSphere API. The repo will need to be cloned to a local directory on the ACS before an Ansible Playbook can be executed against NSX.
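Assuming the RAML specification still lives in the vmware/nsxraml repository on GitHub, cloning it to the ACS looks like this — the destination path is arbitrary, but it must match the raml_file value in the answer file used later:

# adjust the URL and destination if the repository has moved in your environment
$ git clone https://github.com/vmware/nsxraml.git ~/nsxraml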

Now that all of the prerequisites are met, the Ansible playbook for creating the NSX components can be written.

Step 2: Ansible Playbook for NSX

The first thing to know is that the GitHub repo for the NSX modules includes many great examples within the test_*.yml files, which I leveraged to create the playbook below. To understand what the Ansible Playbook has been written to create, let’s first review the logical network design for the Infrastructure-as-Code project.

 

The design calls for three layers of NSX virtual networking to exist — the NSX ECMP Edges, the Distributed Logical Router (DLR) and the Edge Services Gateway (ESG) for the tenant. The Ansible Playbook below assumes the ECMP Edges and DLR already exist. The playbook will focus on creating the HA Edge for the tenant and configuring the component services (SNAT/DNAT, DHCP, routing).

The playbook I’ve written, which creates the k8s_internal logical switch and the NSX HA Edge (aka ESG), collapses much of the example content into a single playbook. It can be found in the Virtual Elephant GitHub repository for the Infrastructure-as-Code project.

As I’ve stated, this project is mostly about providing me a detailed game plan for learning several new (to me) technologies, including Ansible. The NSX playbook is the first time I’ve used an answer file to obfuscate several of the sensitive variables needed specifically for my environment. The nsxanswer.yml file includes the variables required for connecting to the NSX Manager, which is the component Ansible communicates with to create the logical switch and ESG.

Ansible Answer File: nsxanswer.yml (link)

nsxmanager_spec:
        raml_file: '/HOMEDIR/nsxraml/nsxvapi.raml'
        host: 'usa1-2-nsxv'
        user: 'admin'
        password: 'PASSWORD'

The nsxvapi.raml file is the API specification file that we cloned in step 1 from the GitHub repository. The path should be modified for your local environment, as should the password: variable line for the NSX Manager.

Ansible Playbook: nsx.yml (link)

---
- hosts: localhost
  connection: local
  gather_facts: False
  vars_files:
    - nsxanswer.yml
  vars_prompt:
  - name: "vcenter_pass"
    prompt: "Enter vCenter password"
    private: yes
  vars:
    vcenter: "usa1-2-vcenter"
    datacenter: "Lab-Datacenter"
    datastore: "vsanDatastore"
    cluster: "Cluster01"
    vcenter_user: "administrator@vsphere.local"
    switch_name: "{{ switch }}"
    uplink_pg: "{{ uplink }}"
    ext_ip: "{{ vip }}"
    tz: "tzone"

  tasks:
  - name: NSX Logical Switch creation
    nsx_logical_switch:
      nsxmanager_spec: "{{ nsxmanager_spec }}"
      state: present
      transportzone: "{{ tz }}"
      name: "{{ switch_name }}"
      controlplanemode: "UNICAST_MODE"
      description: "Kubernetes Infra-as-Code Tenant Logical Switch"
    register: create_logical_switch

  - name: Gather MOID for datastore for ESG creation
    vcenter_gather_moids:
      hostname: "{{ vcenter }}"
      username: "{{ vcenter_user }}"
      password: "{{ vcenter_pass }}"
      datacenter_name: "{{ datacenter }}"
      datastore_name: "{{ datastore }}"
      validate_certs: False
    register: gather_moids_ds
    tags: esg_create

  - name: Gather MOID for cluster for ESG creation
    vcenter_gather_moids:
      hostname: "{{ vcenter }}"
      username: "{{ vcenter_user }}"
      password: "{{ vcenter_pass }}"
      datacenter_name: "{{ datacenter }}"
      cluster_name: "{{ cluster }}"
      validate_certs: False
    register: gather_moids_cl
    tags: esg_create

  - name: Gather MOID for uplink
    vcenter_gather_moids:
      hostname: "{{ vcenter }}"
      username: "{{ vcenter_user }}"
      password: "{{ vcenter_pass }}"
      datacenter_name: "{{ datacenter }}"
      portgroup_name: "{{ uplink_pg }}"
      validate_certs: False
    register: gather_moids_upl_pg
    tags: esg_create

  - name: NSX Edge creation
    nsx_edge_router:
      nsxmanager_spec: "{{ nsxmanager_spec }}"
      state: present
      name: "{{ switch_name }}-edge"
      description: "Kubernetes Infra-as-Code Tenant Edge"
      resourcepool_moid: "{{ gather_moids_cl.object_id }}"
      datastore_moid: "{{ gather_moids_ds.object_id }}"
      datacenter_moid: "{{ gather_moids_cl.datacenter_moid }}"
      interfaces:
        vnic0: {ip: "{{ ext_ip }}", prefix_len: 26, portgroup_id: "{{ gather_moids_upl_pg.object_id }}", name: 'uplink0', iftype: 'uplink', fence_param: 'ethernet0.filter1.param1=1'}
        vnic1: {ip: '192.168.0.1', prefix_len: 20, portgroup_id: "{{ switch_name }}", name: 'int0', iftype: 'internal', fence_param: 'ethernet0.filter1.param1=1'}
      default_gateway: "{{ gateway }}"
      remote_access: 'true'
      username: 'admin'
      password: "{{ nsx_admin_pass }}"
      firewall: 'false'
      ha_enabled: 'true'
    register: create_esg
    tags: esg_create

The playbook expects three extra variables to be provided on the CLI when it is executed — switch, uplink and vip. The switch variable defines the name of the logical switch, the uplink variable defines the uplink VXLAN portgroup the tenant ESG will connect to, and the vip variable is the external VIP to be assigned from the network block. At the time of this writing, these sorts of variables continue to be command-line based, but they will likely be moved to a single Ansible answer file as the project matures. Having a single answer file for the entire set of playbooks should simplify the adoption of the Infrastructure-as-Code project into other vSphere environments.
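For reference, an invocation looks roughly like the following. The uplink and vip values are placeholders for my lab (only k8s_internal comes from the playbook above), the playbook will prompt for the vCenter password, and the gateway and nsx_admin_pass variables referenced in the ESG task need to be supplied the same way or added to an answer file:

# switch/uplink/vip values below are placeholders for your environment
$ ansible-playbook nsx.yml --extra-vars "switch=k8s_internal uplink=uplink-pg vip=10.0.0.10"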

Now that Ansible playbooks exist for creating the NSX components and the VMs for the Kubernetes cluster, the next step will be to begin configuring the software within CoreOS to run Kubernetes.

Stay tuned.


The series so far has covered the high level design of the project, how to bootstrap CoreOS and understanding how Ignition works to configure a CoreOS node. The next stage of the project will begin to leverage Ansible to fully automate and orchestrate the instantiation of the environment. Ansible will initially be used to deploy the blank VMs and gather the IP addresses and FQDNs of each node created.

Ansible is one of the new technologies that I am using the Infrastructure-as-Code project to learn. My familiarity with Chef was helpful, but I still wanted to get a good primer on Ansible before proceeding. Fortunately, Pluralsight is a great training tool and the Hands-on Ansible course by Aaron Paxon was just the thing to start with. Once I worked through the video series, I dived right into writing the Ansible playbook to deploy the virtual machines for CoreOS to install. I quickly learned there were a few extras I needed on my Ansible control server before it would all function properly.

Step 1: Configure Ansible Control Server

As I stated before, I have deployed an Ubuntu Server 17.10 node within the environment where tftpd-hpa is running for the CoreOS PXEBOOT system. The node is also being leveraged as the Ansible control server (ACS). The ACS node required a few additional packages in order to run the latest version of Ansible and include the needed VMware modules.

Out of the box, the Ubuntu repositories only include Ansible v2.3.1.0 — which is not from the latest 2.4 branch.

There are several VMware module updates in Ansible 2.4 that I wanted to leverage, so I needed to first update Ansible on the Ubuntu ACS.

$ sudo apt-add-repository ppa:ansible/ansible
$ sudo apt-get update
$ sudo apt-get upgrade

If you have not yet installed Ansible on the local system, run the following command:

$ sudo apt-get install ansible

If you need to upgrade Ansible from the Ubuntu package to the new PPA repository package, run the following command:

$ sudo apt-get upgrade ansible

Now the Ubuntu ACS is running Ansible v2.4.1.0.
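A quick check confirms the upgrade took effect:

$ ansible --version
ansible 2.4.1.0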

In addition to having Ansible and Python installed, there are additional Python packages we need in order for all of the VMware Ansible modules to work correctly.

$ sudo apt-get install python-pip
$ sudo pip install --upgrade pyvmomi
$ sudo pip install pysphere
$ sudo pip list | grep pyvmomi

Note: Make sure pyvmomi is running a 6.5.x version to have all the latest code.

The final piece I needed to configure was an additional Ansible module that allows new VM folders to be created. There is a third-party module, called vmware_folder, which includes the needed functionality. After cloning the openshift-ansible-contrib repo, I copied the vmware_folder.py file into the ACS directory /usr/lib/python2.7/dist-packages/ansible/modules/cloud/vmware.

The file can be found on GitHub at the following link.
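If you want to follow the same approach, the clone-and-copy steps look roughly like this. The module's location inside the repository can change between versions, so locate it first rather than assuming a fixed path:

$ git clone https://github.com/openshift/openshift-ansible-contrib.git
$ find openshift-ansible-contrib -name vmware_folder.py
$ sudo cp $(find openshift-ansible-contrib -name vmware_folder.py | head -1) \
    /usr/lib/python2.7/dist-packages/ansible/modules/cloud/vmware/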

The Ubuntu ACS node now possesses all of the necessary pieces to get started with the environment deployment.

Step 2: Ansible Playbook for deployment

The starting point for the project is to write the Ansible playbook that will deploy the virtual machines and power them on — thus allowing the PXEBOOT system to download and install CoreOS onto each node. Ansible has several VMware modules that will be leveraged as the project progresses.

The Infrastructure-as-Code project source code is hosted on GitHub and is available for download and use. The project is currently under development and is being written in stages. By the end of the series, the entire instantiation of the environment will be fully automated. As the series progresses, the playbooks will get built out and become more complete.

The main.yml Ansible playbook currently includes two tasks — one for creating the VM folder and a second for deployment of the VMs. It uses a blank VM template that already exists on the vCenter Server.

When the playbook is run from the ACS, it will deploy a dynamic number of nodes, create a new VM folder and allow the user to specify a VM-name prefix.
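A hypothetical run might look like the following — the variable names shown are placeholders; the actual names are defined in the main.yml playbook in the GitHub repository:

# node_count and vm_prefix are placeholder variable names, not the playbook's actual ones
$ ansible-playbook main.yml --extra-vars "node_count=3 vm_prefix=k8s-node"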

When the deployment is complete, the VMs will be powered on and booting CoreOS. Depending on the download speeds in the environment, the over/under for the CoreOS nodes to be fully online is roughly 10 minutes right now.

The environment is now deployed and ready for Kubernetes! Next week, the series will focus on using Ansible for installing and configuring Kubernetes on the nodes post-deployment. As always, feel free to reach out to me over Twitter if you have questions or comments.

[Introduction] [Part 1 – Bootstrap CoreOS with Ignition] [Part 2 – Understanding CoreOS Ignition] [Part 3 – Getting started with Ansible]


The previous post introduced the Ignition file that is being used to configure the CoreOS nodes that will eventually run Kubernetes. The Ignition file is a JSON-formatted flat file that needs to include certain information and is particularly sensitive to formatting errors. In an effort to help users of Ignition, the CoreOS team has provided a Config Validator and a Config Transpiler binary for taking a YAML configuration file and converting it into the JSON format Ignition expects.

This post will review how to use the Config Transpiler to generate a valid JSON file for use by Ignition. After demonstrating its use, I will cover the stateful-config.ign Ignition file being used to configure the CoreOS nodes within the environment.

Step 1: CoreOS Config Transpiler

The CoreOS Config Transpiler is delivered as a binary that can be downloaded to a local system and used to generate a working JSON file for Ignition. After downloading the binary to my Mac OS laptop, I began by writing one section of the stateful-config.ign file at a time and running it through the Config Validator to verify it had correct syntax. Generally, when working on a project of this magnitude, I will write small pieces of code and test them before moving on to the next part. This helps me when there are issues, as the Config Validator is not the most verbose tool when there is a misconfiguration. Building small blocks of code allows me to build the larger picture slowly and have confidence in the parts that are working.

One piece, which will be covered in greater detail later in the post, was to install Python on CoreOS. For that portion, I decided to have Ignition write a script file to the local filesystem when it boots. To accomplish this, I built the following YAML file:

storage:
  files:
    - path: /home/deploy/install_python.sh
      filesystem: root
      mode: 0644
      contents:
        inline: |
          #!/usr/bin/bash
          sudo mkdir -p /opt/bin
          cd /opt
          sudo wget http://192.168.0.2:8080/ActivePython-2.7.13.2715-linux-x86_64-glibc-2.12-402695.tar.gz
          sudo tar -zxf ActivePython-2.7.13.2715-linux-x86_64-glibc-2.12-402695.tar.gz
          sudo mv ActivePython-2.7.13.2715-linux-x86_64-glibc-2.12-402695 apy
          sudo /opt/apy/install.sh -I /opt/python
          sudo ln -sf /opt/python/bin/easy_install /opt/bin/easy_install
          sudo ln -sf /opt/python/bin/pip /opt/bin/pip
          sudo ln -sf /opt/python/bin/python /opt/bin/python
          sudo ln -sf /opt/python/bin/python /opt/bin/python2
          sudo ln -sf /opt/python/bin/virtualenv /opt/bin/virtualenv
          sudo rm -rf /opt/ActivePython-2.7.13.2715-linux-x86_64-glibc-2.12-402695.tar.gz

Once the YAML file was written, I used the CoreOS Config Transpiler to generate the JSON output. The screenshot below shows how to run the binary to produce the JSON output, which is written to the terminal.
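In case the screenshot does not come through, the invocation looks roughly like this — the binary is typically named ct, and the exact flag spelling may vary between releases:

# snippet.yaml is the YAML file shown above; check the ct usage output for your release
$ ./ct -in-file snippet.yaml -pretty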

From there, you can copy the entire output into an Ignition JSON file, or copy-and-paste just the bits that are needed to be added to an existing Ignition JSON file.

You’ve likely noticed there are lots of special characters in the JSON output that are necessary to write the script that will install Python, as described by the YAML file. In addition to that, the output is also one big blob of text — it does not have whitespace formatting, so you’ll need to decide how you want to format your own Ignition file. I personally prefer to take the time to properly format it in a reader-friendly way, as can be seen in the stateful-config.ign file.

Step 2: Understanding the PXEBOOT CoreOS Ignition File

pxeboot-config.ign (S3 download link)

The Ignition file can include a great number of configuration items within it. The Ignition specification includes sections for networking, storage, filesystems, systemd drop-ins and users. The pxeboot-config.ign Ignition file is much smaller compared to the one used when the stateful installation of CoreOS is performed. There is one section I want to highlight independently since it is crucial for it to be in place before the installation can begin.

 

The storage section includes a portion where fdisk is used to create a partition table on the local disk within the CoreOS virtual machine. The code included in this file will work regardless of what size disk is attached to the virtual machine. Right now I am creating a 50 GB disk on my vSAN datastore; however, if I change the VM specification later to be larger or smaller, this bit of code will continue to work without modification.

The final part of the storage section then formats the partition using ext4 as the filesystem format. Ignition supports other filesystem types, such as xfs, if you choose to use a different format.

Step 3: Understanding the Stateful CoreOS Ignition File

stateful-config.ign (S3 download link)

Now we will go through each section of code included in the stateful-config.ign file I am using when the stateful installation of CoreOS is performed on one of the deployed nodes. At a minimum, an Ignition file should include at least one user, with an associated SSH key to allow for remote logins to be successful.

There are many examples available from the CoreOS site itself and these were used as reference points when I was building this Ignition file.

Now I will go through each section and describe what actions will be performed when the file is run.

Lines 1-5 define the Ignition version that is to be used — like an OpenStack Heat template, the version will unlock certain features contained in the specification.

The storage section of the Ignition file is where local files can be created (shell scripts, flat files, etc.) and where storage devices are formatted. Lines 7-17 define the first file that needs to be created on the local filesystem. The file itself — /etc/motd — is a simple flat file that I wanted to write so that I would know the stateful installation had been performed on the local node. The contents section requires special formatting, and this is where the Config Transpiler is helpful. As shown above, a YAML file can be created and the Config Transpiler used to convert it into the correctly formatted JSON code. The YAML file snippet looked like:

storage:
  files:
  - path: /etc/motd
    filesystem: root
    mode: 0644
    contents:
      inline: |
        Stateful CoreOS Installation.

Lines 18-28 create the /home/deploy/install_python.sh shell script that will be used later to actually perform the installation. Remember, the storage section in the Ignition file is not executing any files, it is merely creating them.

Lines 29-41 define another shell script, /home/deploy/gethost.sh, that will be used to assign the FQDN as the hostname of the CoreOS node. This is an important piece since each node will be receiving a DHCP address, and as we get further into the automation/orchestration with Ansible, it will be necessary to know exactly which FQDNs exist within the environment.
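Decoded from the URL-encoded data source in the Ignition file, the gethost.sh script looks like this — it performs a reverse lookup on the node's DHCP address and sets the resulting name as the hostname:

#!/bin/bash
IP=$(/usr/bin/ifconfig ens192 | /usr/bin/awk '/inet\s/ {print $2}' | /usr/bin/xargs host | /usr/bin/awk '{print $5}' | /usr/bin/sed s'/.$//')
HOSTNAME=$IP
/usr/bin/sudo /usr/bin/hostnamectl set-hostname $HOSTNAME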

Line 41 closes off the storage section of the Ignition file. The next section is for systemd units and drop-ins.

Line 42 tells Ignition we are now going to be providing definitions we expect systemd to use during the boot process. This is where Ignition shows some of its robustness — it allows us to create systemd units early enough in the boot process to affect how the system will run when it is brought online fully.

Lines 44-48 define the first systemd unit. Using the /home/deploy/gethost.sh shell script that was defined in the storage section, the Ignition file creates the /etc/systemd/system/set-hostname.service file that will be run during the boot process. The formatting of the contents section here is less strict than the contents section inside a files entry (above). Here we can simply type the characters, including spaces, and use the familiar ‘\n’ syntax for newlines.

As you can see, the unit above creates the /etc/systemd/system/set-hostname.service file with the following contents:

[Unit]
Description=Use FQDN to set hostname.
After=network-online.target
Wants=network-online.target

[Service]
Type=oneshot
ExecStartPre=/usr/bin/chmod 755 /home/deploy/gethost.sh
ExecStartPre=/usr/bin/chown deploy:deploy /home/deploy/gethost.sh
ExecStart=/home/deploy/gethost.sh

[Install]
WantedBy=multi-user.target

Lines 49-53 take the Python installation script Ignition created and create a systemd unit for it as well. I confess this may not be the most ideal method for installing Python, but it works.

The /etc/systemd/system/env-python.service file is created with the following contents:

[Unit]
Description=Install Python for Ansible.
After=network-online.target
Wants=network-online.target

[Service]
Type=oneshot
ExecStartPre=/usr/bin/chmod 755 /home/deploy/install_python.sh
ExecStartPre=/usr/bin/chown deploy:deploy /home/deploy/install_python.sh
ExecStart=/home/deploy/install_python.sh

[Install]
WantedBy=multi-user.target

There is a systemd caveat I want to go over that was instrumental in being able to deliver a functional Ignition file. As I worked through setting the hostname — which should be a relatively easy task — I ran into all sorts of issues. After working through the script and adding debugging messages to it, I was able to determine the systemd unit was being run before the network was fully online — resulting in the script's inability to successfully query a DNS server to resolve the FQDN. After reading through more blog posts and GitHub pages, I came across the syntax for making sure my systemd services were not executed until after the network was fully online.

The two key lines here are:

After=network-online.target
Wants=network-online.target

This instructs systemd not to execute this unit until after the network is confirmed to be online. There is another systemd target — network.target — but it does not guarantee the network is actually fully online. Instead, the network.target unit is reached after the interface is configured, not necessarily after all of the networking components are fully operational. Using the network-online.target unit ensured the two shell scripts I needed systemd to execute were able to leverage a functioning network.

Lines 54-59 define the last systemd unit in my Ignition file, which tells CoreOS to start the etcd2 service. The configuration of etcd2 will be performed by Ansible and covered in a later post.

 

The final portion of the Ignition file defines the users the CoreOS system should have when it is fully configured. In the file I have configured a single user, deploy, and assigned an SSH key that can be used to log into the CoreOS node. The code also defines the user to be part of the sudo and docker groups, which are predefined in the operating system.

Feel free to reach out over Twitter if you have any questions or comments.

[Introduction] [Part 1 – Bootstrap CoreOS with Ignition] [Part 2 – Understanding CoreOS Ignition] [Part 3 – Getting started with Ansible]


The first post in the series went over the design goals and the logical diagram of the Kubernetes environment. This post will cover the steps necessary to PXEBOOT a CoreOS node, install the version of CoreOS that includes VMware Tools and perform an initial configuration of the CoreOS node with Ignition.

After determining what the Infrastructure-as-Code project would be working to accomplish, I broke the project down into several pieces. The decision was made to start off by learning how CoreOS works and how to install and configure it in a manner that would allow the deployment of 1, 5, 100 or 1000 nodes — with each node operating in the same way every single time. As familiar as I am with Big Data Extensions and how the management server deploys things with Chef, I decided to go in a different direction. I did not want to use a template VM that is copied over and over again — instead I chose to use a PXEBOOT server for performing the initial installation of CoreOS.

In this post, I will detail how to configure an Ubuntu node to act as the PXEBOOT server, how to perform a CoreOS stateful installation and provide the necessary Ignition files for accomplishing these tasks.

Step 1: Ubuntu 17.10 PXEBOOT Node

I am using an Ubuntu Server 17.10 virtual machine as my beachhead node where I am running the tftpd and bind9 services for the entire micro-segmented network. It had been a few years since I had to set up a PXEBOOT server, and I needed a refresher course when I set out to work on this project. After getting a base install with sshd running on an Ubuntu Server 17.10 node, the following steps were required to configure tftpd-hpa and get the PXE images in place.

Configure a PXEBOOT Linux server:

$ sudo apt-get -y install tftpd-hpa syslinux pxelinux initramfs-tools
$ sudo vim /etc/default/tftpd-hpa

# /etc/default/tftpd-hpa
TFTP_USERNAME="tftp"
TFTP_DIRECTORY="/var/lib/tftpboot"
TFTP_ADDRESS=":69"
TFTP_OPTIONS="--secure"
RUN_DAEMON="yes"
OPTIONS="-l -s /var/lib/tftpboot"

$ sudo mkdir -p /var/lib/tftpboot/pxelinux.cfg
$ sudo vim /var/lib/tftpboot/pxelinux.cfg/default

default coreos
prompt 1
timeout 15
display boot.msg

label coreos
  menu default
  kernel coreos_production_pxe.vmlinuz
  initrd coreos_production_pxe_image.cpio.gz
  append coreos.first_boot=1 coreos.config.url=https://s3-us-west-1.amazonaws.com/s3-kube-coreos/pxe-config.ign cloud-config-url=https://s3-us-west-1.amazonaws.com/s3-kube-coreos/cloud-control.sh

Next, it is necessary to download the CoreOS boot files:

$ cd /var/lib/tftpboot
$ sudo wget https://stable.release.core-os.net/amd64-usr/current/coreos_production_pxe.vmlinuz
$ sudo wget https://stable.release.core-os.net/amd64-usr/current/coreos_production_pxe.vmlinuz.sig
$ sudo wget https://stable.release.core-os.net/amd64-usr/current/coreos_production_pxe_image.cpio.gz
$ sudo wget https://stable.release.core-os.net/amd64-usr/current/coreos_production_pxe_image.cpio.gz.sig
$ gpg --verify coreos_production_pxe.vmlinuz.sig
$ gpg --verify coreos_production_pxe_image.cpio.gz.sig

After the CoreOS images are downloaded, a restart of the tftpd-hpa service should be all that is required for this step.
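On Ubuntu 17.10 that is simply:

$ sudo systemctl restart tftpd-hpa
$ sudo systemctl status tftpd-hpa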

Step 2: CoreOS Ignition

CoreOS replaced the previous coreos-cloudinit tooling with Ignition to provide first-boot configuration of the operating system. Ignition is designed to run early in the boot process so the user space can be modified before many of the operating system services start. Whereas it used to be possible to use a YAML configuration file, Ignition now relies on a JSON file to define what actions (partitioning, user creation, file creation, etc.) are to occur during the first boot of the system. Creating the JSON file and understanding how systemd interacts with other services was the biggest initial challenge in adopting CoreOS for me.

If you are new to Ignition, I highly suggest reading the official CoreOS Ignition documentation before going any further.

A major challenge I faced was the inconsistent examples available on the Internet. Even using an Ignition file a co-worker provided to me proved to be difficult as it seemingly did not work as expected. Through much trial and error — I must have used up an entire /24 DHCP scope booting test VMs — I was able to get the following two Ignition files working.

The first Ignition file is used during the PXEBOOT process — it configures just enough of the system to perform the stateful installation.

pxe-config.ign (S3 download link)

{
  "ignition": {
    "version": "2.1.0",
    "config": {}
  },
  "storage": {
    "disks": [{
      "device": "/dev/sda",
      "wipeTable": true,
      "partitions": [{
        "label": "ROOT",
        "number": 0,
        "size": 0,
        "start": 0
      }]
    }],
  "filesystems": [{
    "mount": {
      "device": "/dev/sda1",
      "format": "ext4",
      "wipeFilesystem": true,
      "options": [ "-L", "ROOT" ]
     }
   }]
  },
  "systemd": {
     "units": [
       {
         "contents": "[Unit]\nDescription=Set hostname to DHCP FQDN\n\n[Service]\nType=oneshot\nExecStart=/bin/sh -c \"IP=$(ip add show ens192 | awk '/inet/ {print $2}' | cut -d/ -f1 |cut -d. -f4 | head -1) ; sudo hostnamectl set-hostname dhcp-coreos$IP\"\n",
         "enabled": true,
         "name": "set-hostname.service"
       },
       {
         "name": "etcd2.service",
         "enabled": true
       }
     ]
  },
  "networkd": {},
  "passwd": {
    "users": [
      {
        "name": "deploy",
        "sshAuthorizedKeys": [
          "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAEAQCsNebg9k312OhcZlC+JM8daEyT5XpFEb1gnUgEHms+/yft6rlr+Y/BOXC9r/0UR2VB41tpx9B8ZZADHa/I8cZKctRP4idwKWlPJlxqPohVWwgGk9oYyDY4612bO9gYQros9XKDI+IZMc0xOrdm7D7dowzheez77OQeZFKtef0w61LdBTQn4JXAK0DhuldGwvoH7SDkEMk1FH3U45DSljgMOAwbxnr6Gy2embr6qHo15zrGs0OyHFY0YZXCZ1xvhNYuBm8/H06JZnI2qPBGWaRwDNky6MXEtWBUSXjuMsIApGg1nR3hjZbwtN3uH0/VMH/uk7m9mvZXpeu/ktAn70IP/8wq4HjN6pXGY9gsvA2qQULNAI8t5wYuwSa/cm/aWC0Z8rgS6wE04j5i5jLlLpVNHvmBrc3BxKO5AV9k/19TQLSnqbmT9aU7mC8CvguHsy2g5nagqzUwHfpbOS64kYcgISu2LjYdOCRpr9NSzeR3N3l+3yG+QfNE73x9yPifd9aE21Mc3JLIwq+Qo0ZmKrgAu615Y2r7bcEx4wt7SF98rvAC8IZDbMNukSUMR3LPRaQq00OGUxVPdHdxwyLaH4UZ3wb43tFfaDreYAy1SeX1cTHjZ01MAHk2P5mhGPxeUh7LW7w+57GoeFY+aF9SEyrdqpd6DhUC15pJT9Tje/sxTOXUCVWyGgsyxi4ygeZ3ZUb0oUwQ2bnnnzNSXHl+qx722w9saE+LNuZOsnTY26+1TVaYKNczQwGsnjyZdF3VslsQskZ5cld5AeHkPrkrsISjhCAPxP7hOLJRhY2gZk/FqwycZdjARz75MNegidQFNN7MuGaN+F9YinQIHsbReoGHyaKN40tyThs9RwZr7lOPgngjhEddEuaAgre7k4sln9x3PRlNzGX5kPVK+7ccQMWI3DgvMUxkUtV5Di+BNzhtUVN8D8yNjajAf3zk7gEgWdeSNse+GUCwQWt0VCwDIfA1RhfWnyMwukgxqmQe7m5jM4YjLyR7AFe2CeB08jOES9s+N44kWOlrnG3Mf41W2oZ6FbiFcB7+YHGNxnlxK+0QluP17rISgUmnCkEgwGbyisXMrNHTaGfApxd4CertVab0wOvtDNnH4x7ejEiNHiN1crOzpMtnSVnrRi+M+f9w3ChCsirc+3H8tbpSOssI7D3p1eWZlF6z1OSb9pp4+JYwlmAisyz/vZyjC7vtEXsJt3e4JLM1ef62mZTcKHP8xWP3k78hPB5twzSwhMVtZCB/MIT3pg7DA90fbhBkHZIVczgBjN9tOJilHPTuBeuKNzWD0Rhi0CSdzohDYVsO/PKA5ZyEncx83Y9pc4zpcrxgdU2H5NdqkLW9yw7O5gvau7jj cmutchler@cmutchler-MBP.local"
          ],
          "groups": [ "sudo", "docker" ]
     }
   ]
  }
}

During the initial launch of the virtual machine, the PXEBOOT server tells the system to download the Ignition file and a cloud-config script. The cloud-config-url kernel parameter points to a shell script I’ve written that installs CoreOS to the /dev/sda disk attached to the VM.

cloud-config-url=https://s3-us-west-1.amazonaws.com/s3-kube-coreos/cloud-control.sh

cloud-control.sh (S3 download link)

#!/bin/bash

wget https://s3-us-west-1.amazonaws.com/s3-kube-coreos/stateful-config.ign
sudo coreos-install -d /dev/sda -i stateful-config.ign -C stable -V current -o vmware_raw
sudo reboot

As you can see, the cloud-control.sh script downloads a second Ignition file from S3 and uses it when performing the stateful install of CoreOS. The vmware_raw version of CoreOS includes VMware Tools — this will play an important role as we continue to automate the entire stack.

stateful-config.ign (S3 download link)

{
  "ignition": {
    "version": "2.1.0",
    "config": {}
  },
  "storage": {
    "files": [
      {
        "filesystem": "root",
        "group": {},
        "path": "/etc/motd",
        "user": {},
        "contents": {
          "source": "data:,Stateful%20CoreOS%20Installation.%0A",
          "verification": {}
        }
      },
      {
        "filesystem": "root",
        "group": {},
        "path": "/home/deploy/install_python.sh",
        "user": {},
        "contents": {
          "source":"data:,%23!%2Fusr%2Fbin%2Fbash%0Asudo%20mkdir%20-p%20%2Fopt%2Fbin%0Acd%20%2Fopt%0Asudo%20wget%20http%3A%2F%2F192.168.0.2%3A8080%2FActivePython-2.7.13.2715-linux-x86_64-glibc-2.12-402695.tar.gz%0Asudo%20tar%20-zxf%20ActivePython-2.7.13.2715-linux-x86_64-glibc-2.12-402695.tar.gz%0Asudo%20mv%20ActivePython-2.7.13.2715-linux-x86_64-glibc-2.12-402695%20apy%0Asudo%20%2Fopt%2Fapy%2Finstall.sh%20-I%20%2Fopt%2Fpython%0Asudo%20ln%20-sf%20%2Fopt%2Fpython%2Fbin%2Feasy_install%20%2Fopt%2Fbin%2Feasy_install%0Asudo%20ln%20-sf%20%2Fopt%2Fpython%2Fbin%2Fpip%20%2Fopt%2Fbin%2Fpip%0Asudo%20ln%20-sf%20%2Fopt%2Fpython%2Fbin%2Fpython%20%2Fopt%2Fbin%2Fpython%0Asudo%20ln%20-sf%20%2Fopt%2Fpython%2Fbin%2Fpython%20%2Fopt%2Fbin%2Fpython2%0Asudo%20ln%20-sf%20%2Fopt%2Fpython%2Fbin%2Fvirtualenv%20%2Fopt%2Fbin%2Fvirtualenv%0Asudo%20rm%20-rf%20%2Fopt%2FActivePython-2.7.13.2715-linux-x86_64-glibc-2.12-402695.tar.gz%0A",
          "verification": {},
          "mode": 420
        }
      },
      {
        "filesystem": "root",
        "group": {},
        "path": "/home/deploy/gethost.sh",
        "user": {},
        "contents": {
          "source":"data:,%23!%2Fbin%2Fbash%0AIP%3D%24(%2Fusr%2Fbin%2Fifconfig%20ens192%20%7C%20%2Fusr%2Fbin%2Fawk%20'%2Finet%5Cs%2F%20%7Bprint%20%242%7D'%20%7C%20%2Fusr%2Fbin%2Fxargs%20host%20%7C%20%2Fusr%2Fbin%2Fawk%20'%7Bprint%20%245%7D'%20%7C%20%2Fusr%2Fbin%2Fsed%20s'%2F.%24%2F%2F')%0AHOSTNAME%3D%24IP%0A%2Fusr%2Fbin%2Fsudo%20%2Fusr%2Fbin%2Fhostnamectl%20set-hostname%20%24HOSTNAME%0A",
          "verification": {},
          "mode": 493
        }
      }
    ]
  },
  "systemd": {
    "units": [
      {
        "contents": "[Unit]\nDescription=Use FQDN to set hostname.\nAfter=network-online.target\nWants=network-online.target\n\n[Service]\nType=oneshot\nExecStartPre=/usr/bin/chmod 755 /home/deploy/gethost.sh\nExecStartPre=/usr/bin/chown deploy:deploy /home/deploy/gethost.sh\nExecStart=/home/deploy/gethost.sh\n\n[Install]\nWantedBy=multi-user.target\n",
        "enabled": true,
        "name": "set-hostname.service"
      },
      {
        "contents": "[Unit]\nDescription=Install Python for Ansible.\nAfter=network-online.target\nWants=network-online.target\n\n[Service]\nType=oneshot\nExecStartPre=/usr/bin/chmod 755 /home/deploy/install_python.sh\nExecStartPre=/usr/bin/chown deploy:deploy /home/deploy/install_python.sh\nExecStart=/home/deploy/install_python.sh\n\n[Install]\nWantedBy=multi-user.target\n",
        "enabled": true,
        "name": "env-python.service"
      },
      {
        "name": "etcd2.service",
        "enabled": true
      }
    ]
  },
  "networkd": {},
  "passwd": {
    "users": [
      {
        "name": "deploy",
        "sshAuthorizedKeys": [
          "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAEAQCsNebg9k312OhcZlC+JM8daEyT5XpFEb1gnUgEHms+/yft6rlr+Y/BOXC9r/0UR2VB41tpx9B8ZZADHa/I8cZKctRP4idwKWlPJlxqPohVWwgGk9oYyDY4612bO9gYQros9XKDI+IZMc0xOrdm7D7dowzheez77OQeZFKtef0w61LdBTQn4JXAK0DhuldGwvoH7SDkEMk1FH3U45DSljgMOAwbxnr6Gy2embr6qHo15zrGs0OyHFY0YZXCZ1xvhNYuBm8/H06JZnI2qPBGWaRwDNky6MXEtWBUSXjuMsIApGg1nR3hjZbwtN3uH0/VMH/uk7m9mvZXpeu/ktAn70IP/8wq4HjN6pXGY9gsvA2qQULNAI8t5wYuwSa/cm/aWC0Z8rgS6wE04j5i5jLlLpVNHvmBrc3BxKO5AV9k/19TQLSnqbmT9aU7mC8CvguHsy2g5nagqzUwHfpbOS64kYcgISu2LjYdOCRpr9NSzeR3N3l+3yG+QfNE73x9yPifd9aE21Mc3JLIwq+Qo0ZmKrgAu615Y2r7bcEx4wt7SF98rvAC8IZDbMNukSUMR3LPRaQq00OGUxVPdHdxwyLaH4UZ3wb43tFfaDreYAy1SeX1cTHjZ01MAHk2P5mhGPxeUh7LW7w+57GoeFY+aF9SEyrdqpd6DhUC15pJT9Tje/sxTOXUCVWyGgsyxi4ygeZ3ZUb0oUwQ2bnnnzNSXHl+qx722w9saE+LNuZOsnTY26+1TVaYKNczQwGsnjyZdF3VslsQskZ5cld5AeHkPrkrsISjhCAPxP7hOLJRhY2gZk/FqwycZdjARz75MNegidQFNN7MuGaN+F9YinQIHsbReoGHyaKN40tyThs9RwZr7lOPgngjhEddEuaAgre7k4sln9x3PRlNzGX5kPVK+7ccQMWI3DgvMUxkUtV5Di+BNzhtUVN8D8yNjajAf3zk7gEgWdeSNse+GUCwQWt0VCwDIfA1RhfWnyMwukgxqmQe7m5jM4YjLyR7AFe2CeB08jOES9s+N44kWOlrnG3Mf41W2oZ6FbiFcB7+YHGNxnlxK+0QluP17rISgUmnCkEgwGbyisXMrNHTaGfApxd4CertVab0wOvtDNnH4x7ejEiNHiN1crOzpMtnSVnrRi+M+f9w3ChCsirc+3H8tbpSOssI7D3p1eWZlF6z1OSb9pp4+JYwlmAisyz/vZyjC7vtEXsJt3e4JLM1ef62mZTcKHP8xWP3k78hPB5twzSwhMVtZCB/MIT3pg7DA90fbhBkHZIVczgBjN9tOJilHPTuBeuKNzWD0Rhi0CSdzohDYVsO/PKA5ZyEncx83Y9pc4zpcrxgdU2H5NdqkLW9yw7O5gvau7jj cmutchler@cmutchler-MBP.local"
        ],
        "groups": [ "sudo", "docker" ]
      }
    ]
  }
}

Step 3: Configure DHCP on NSX Edge

The last piece before a virtual machine can be booted is to configure the NSX Edge: it needs DHCP services configured and set up to point booting clients at the Ubuntu PXEBOOT server. I plan to automate this piece with Ansible in a future article; for now, I will simply show how it needs to be configured in the UI.

Step 4: Booting a VM

Everything should be in place now to boot the first VM. To be fair, I booted the “first” VM about 150 times as I worked through all of the Ignition iterations to get everything working as I intended. For my lab virtual machines, I am configuring the nodes with the following specifications:

  • 2 vCPU
  • 8 GB RAM
  • 50 GB hard disk

After powering on the VM and watching it go through the boot process, I found it takes about 5 minutes for the node to perform the stateful installation and become available over SSH.

The next post will go through the stateful-config.ign Ignition file in detail, reviewing all the actions it is performing. I hope you are enjoying the series! Find me on Twitter if you have questions or comments.

[Introduction] [Part 1 – Bootstrap CoreOS with Ignition] [Part 2 – Understanding CoreOS Ignition] [Part 3 – Getting started with Ansible]
