Tag: CoreOS

The series so far has covered the high-level design of the project, how to bootstrap CoreOS and how Ignition is used to configure a CoreOS node. The next stage of the project will begin to leverage Ansible to fully automate and orchestrate the instantiation of the environment. Ansible will initially be used to deploy the blank VMs and gather the IP addresses and FQDNs of each node created.

Ansible is one of the new technologies I am learning through the Infrastructure-as-Code project. My familiarity with Chef was helpful, but I still wanted a good primer on Ansible before proceeding. Fortunately, Pluralsight is a great training tool, and the Hands-on Ansible course by Aaron Paxon was just the thing to start with. Once I worked through the video series, I dived right into writing the Ansible playbook to deploy the virtual machines that CoreOS will be installed on. I quickly learned there were a few extras I needed on my Ansible control server before it would all function properly.

Step 1: Configure Ansible Control Server

As I stated before, I have deployed an Ubuntu Server 17.10 node within the environment, where tftpd-hpa is running for the CoreOS PXEBOOT system. The node is also being leveraged as the Ansible control server (ACS). The ACS node required a few additional packages in order to run the latest version of Ansible and include the VMware modules needed.

Out of the box, the Ubuntu repositories only include Ansible v2.3.1.0, which is not from the latest 2.4 branch.

There are several VMware module updates in Ansible 2.4 that I wanted to leverage, so I needed to first update Ansible on the Ubuntu ACS.

$ sudo apt-add-repository ppa:ansible/ansible
$ sudo apt-get update
$ sudo apt-get upgrade

If you have not yet installed Ansible on the local system, run the following command:

$ sudo apt-get install ansible

If you need to upgrade Ansible from the Ubuntu package to the new PPA repository package, run the following command:

$ sudo apt-get upgrade ansible

Now the Ubuntu ACS is running Ansible v2.4.1.0.

In addition to Ansible and Python themselves, a few more Python packages are needed for the VMware Ansible modules to work correctly.

$ sudo apt-get install python-pip
$ sudo pip install --upgrade pyvmomi
$ sudo pip install pysphere
$ sudo pip list | grep pyvmomi

Note: Make sure pyvmomi is running a 6.5.x version to have all the latest code.

The final piece I needed to configure was an additional Ansible module that allows new VM folders to be created. There is a 3rd-party module, called vmware_folder, which includes the needed functionality. After cloning the openshift-ansible-contrib repo, I copied the vmware_folder.py file into the ACS directory /usr/lib/python2.7/dist-packages/ansible/modules/cloud/vmware.

The file can be found on GitHub at the following link.
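A rough sketch of those steps is below; the in-repo location of vmware_folder.py can change between releases of the openshift/openshift-ansible-contrib repository, so I locate the file with find rather than assuming a path:

$ git clone https://github.com/openshift/openshift-ansible-contrib.git
$ find openshift-ansible-contrib -name vmware_folder.py
$ sudo cp <path reported by find> /usr/lib/python2.7/dist-packages/ansible/modules/cloud/vmware/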

The Ubuntu ACS node now possesses all of the necessary pieces to get started with the environment deployment.

Step 2: Ansible Playbook for Deployment

The starting point for the project is to write the Ansible playbook that will deploy the virtual machines and power them on — thus allowing the PXEBOOT system to download and install CoreOS onto each node. Ansible has several VMware modules that will be leveraged as the project progresses.

The Infrastructure-as-Code project source code is hosted on GitHub and is available for download and use. The project is currently under development and is being written in stages. By the end of the series, the entire instantiation of the environment will be fully automated. As the series progresses, the playbooks will get built out and become more complete.

The main.yml Ansible playbook currently includes two tasks — one for creating the VM folder and a second for deployment of the VMs. It uses a blank VM template that already exists on the vCenter Server.

When the playbook is run from the ACS, it will deploy a dynamic number of nodes, create a new VM folder and allow the user to specify a VM-name prefix.
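To give a sense of the structure, below is a minimal sketch of what such a playbook can look like. This is not the project's actual main.yml: the variable names (vcenter_hostname, vm_folder, vm_template, node_count and so on) are placeholders I have made up, and the vmware_folder arguments reflect the third-party module from openshift-ansible-contrib, so check the module source for the exact parameters it accepts.

---
- hosts: localhost
  connection: local
  gather_facts: false
  vars:
    vm_prefix: "kube-coreos"
    node_count: 4
  tasks:
    # Create the VM folder the nodes will be placed into (3rd-party vmware_folder module).
    - name: Create VM folder
      vmware_folder:
        hostname: "{{ vcenter_hostname }}"
        username: "{{ vcenter_username }}"
        password: "{{ vcenter_password }}"
        datacenter: "{{ vcenter_datacenter }}"
        cluster: "{{ vcenter_cluster }}"
        folder: "{{ vm_folder }}"
        state: present

    # Clone the blank template once per node and power each VM on so it PXE boots.
    - name: Deploy blank VMs from the template
      vmware_guest:
        hostname: "{{ vcenter_hostname }}"
        username: "{{ vcenter_username }}"
        password: "{{ vcenter_password }}"
        validate_certs: no
        datacenter: "{{ vcenter_datacenter }}"
        cluster: "{{ vcenter_cluster }}"
        folder: "{{ vm_folder }}"
        name: "{{ vm_prefix }}-{{ item }}"
        template: "{{ vm_template }}"
        state: poweredon
        wait_for_ip_address: yes
      with_sequence: count={{ node_count }}

Because the template is blank, powering the clones on drops each node straight into the PXEBOOT and Ignition workflow covered earlier in the series.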

When the deployment is complete, the VMs will be powered on and booting CoreOS. Depending on the download speeds in the environment, the over/under for the CoreOS nodes to be fully online is roughly 10 minutes right now.

The environment is now deployed and ready for Kubernetes! Next week, the series will focus on using Ansible for installing and configuring Kubernetes on the nodes post-deployment. As always, feel free to reach out to me over Twitter if you have questions or comments.

[Introduction] [Part 1 – Bootstrap CoreOS with Ignition] [Part 2 – Understanding CoreOS Ignition] [Part 3 – Getting started with Ansible]

Read More

The previous post introduced the Ignition file that is being used to configure the CoreOS nodes that will eventually run Kubernetes. The Ignition file is a JSON-formatted flat file that needs to include certain information and is particularly sensitive to formatting mistakes. To help users of Ignition, the CoreOS team has provided a Config Validator and a Config Transpiler binary that takes a YAML configuration (similar in spirit to the old coreos-cloudinit files) and converts it into the JSON format.

This post will review how to use the Config Transpiler to generate a valid JSON file for use by Ignition. After demonstrating its use, I will cover the stateful-config.ign Ignition file being used to configure the CoreOS nodes within the environment.

Step 1: CoreOS Config Transpiler

The CoreOS Config Transpiler is delivered as a binary that can be downloaded to a local system and used to generate a working JSON file for Ignition. After downloading the binary to my macOS laptop, I began by writing the stateful-config.ign file one section at a time and running each piece through the Config Validator to verify the syntax was correct. Generally, when working on a project of this magnitude, I will write small pieces of code and test them before moving on to the next part. This helps when there are issues, as the Config Validator is not the most verbose tool when there is a misconfiguration. Building small blocks of code lets me assemble the larger picture slowly and have confidence in the parts that are already working.

One piece, which will be covered in greater detail later in the post, was installing Python on CoreOS. For that portion, I decided to have Ignition write a script file to the local filesystem when the node boots. To accomplish this, I built the following YAML file:

storage:
  files:
    - path: /home/deploy/install-python.sh
      filesystem: root
      mode: 0644
      contents:
        inline: |
          #!/usr/bin/bash
          sudo mkdir -p /opt/bin
          cd /opt
          sudo wget http://192.168.0.2:8080/ActivePython-2.7.13.2715-linux-x86_64-glibc-2.12-402695.tar.gz
          sudo tar -zxf ActivePython-2.7.13.2715-linux-x86_64-glibc-2.12-402695.tar.gz
          sudo mv ActivePython-2.7.13.2715-linux-x86_64-glibc-2.12-402695 apy
          sudo /opt/apy/install.sh -I /opt/python
          sudo ln -sf /opt/python/bin/easy_install /opt/bin/easy_install
          sudo ln -sf /opt/python/bin/pip /opt/bin/pip
          sudo ln -sf /opt/python/bin/python /opt/bin/python
          sudo ln -sf /opt/python/bin/python /opt/bin/python2
          sudo ln -sf /opt/python/bin/virtualenv /opt/bin/virtualenv
          sudo rm -rf /opt/ActivePython-2.7.13.2715-linux-x86_64-glibc-2.12-402695.tar.gz

Once the YAML file was written, I used the CoreOS Config Transpiler to generate the JSON output. Running the binary against the YAML file produces the JSON, which is written to the terminal.
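In place of a screenshot, this is roughly what the invocation looks like in a terminal. I am assuming the downloaded binary is named ct and that the YAML above was saved as install-python.yml; the transpiler reads the YAML on stdin and prints the resulting JSON to stdout (check ct -help for the flags available in your release):

$ ./ct < install-python.yml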

From there, you can copy the entire output into an Ignition JSON file, or copy-and-paste just the bits that are needed to be added to an existing Ignition JSON file.

You’ve likely noticed there are lots of special characters in the JSON output that are necessary to write the script that will install Python, as described by the YAML file. In addition to that, the output is also one big blob of text — it does not have whitespace formatting, so you’ll need to decide how you want to format your own Ignition file. I personally prefer to take the time to properly format it in a reader-friendly way, as can be seen in the stateful-config.ign file.

Step 2: Understanding the PXEBOOT CoreOS Ignition File

pxeboot-config.ign (S3 download link)

The Ignition file can include a great number of configuration items within it. The Ignition specification includes sections for networking, storage, filesystems, systemd drop-ins and users. The pxeboot-config.ign Ignition file is much smaller than the one used when the stateful installation of CoreOS is performed. There is one section I want to highlight independently, since it is crucial for it to be in place before the installation can begin.
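For reference, the storage section in question looks like this (excerpted from the PXEBOOT Ignition file reproduced in full in Part 1):

  "storage": {
    "disks": [{
      "device": "/dev/sda",
      "wipeTable": true,
      "partitions": [{
        "label": "ROOT",
        "number": 0,
        "size": 0,
        "start": 0
      }]
    }],
    "filesystems": [{
      "mount": {
        "device": "/dev/sda1",
        "format": "ext4",
        "wipeFilesystem": true,
        "options": [ "-L", "ROOT" ]
      }
    }]
  }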


The storage section includes a portion where the local disk within the CoreOS virtual machine is wiped and given a new partition table. Because the single ROOT partition is defined with size and start values of 0, the partition simply fills whatever disk is attached. Right now I am creating a 50 GB disk on my vSAN datastore; if I later change the VM specification to be larger or smaller, this bit of code will continue to work without modification.

The final part of the storage section then formats the partition using ext4 as the filesystem format. Ignition supports other filesystem types, such as xfs, if you choose to use a different format.

Step 3: Understanding the Stateful CoreOS Ignition File

stateful-config.ign (S3 download link)

Now we will go through each section of code included in the stateful-config.ign file I am using when the stateful installation of CoreOS is performed on one of the deployed nodes. At a minimum, an Ignition file should include at least one user, with an associated SSH key to allow for remote logins to be successful.

There are many examples available from the CoreOS site itself and these were used as reference points when I was building this Ignition file.

Now I will go through each section and describe what actions will be performed when the file is run.

Lines 1-5 define the Ignition version that is to be used — like an OpenStack Heat template, the version will unlock certain features contained in the specification.

The storage section of the Ignition file is where local files can be created (shell scripts, flat files, etc) and where storage devices are formatted. Lines 7-17 define the first file that needs to be created on the local filesystem. The file itself — /etc/motd — is a simple flat file that I wanted to write so that I would know the stateful installation had been performed on the local node. The contents section requires special formatting and this is where the Config Transpiler is helpful. As shown above, a YAML file can be created and the Config Transpiler used to convert it into the correctly formatted JSON code. The YAML file snippet looked like:

storage:
  files:
  - path: /etc/motd
    filesystem: root
    mode: 0644
    contents:
      inline: |
        Stateful CoreOS Installation.

Lines 18-28 create the /home/deploy/install_python.sh shell script that will be used later to actually perform the installation. Remember, the storage section in the Ignition file does not execute any files; it merely creates them.

Lines 29-41 define another shell script, /home/deploy/gethost.sh, which will be used to assign the FQDN as the hostname of the CoreOS node. This is an important piece: each node receives a DHCP address, and as we get further into the automation and orchestration with Ansible, it will be necessary to know exactly which FQDNs exist within the environment.
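Decoding the URL-encoded contents value from the Ignition file, the gethost.sh script looks like this:

#!/bin/bash
IP=$(/usr/bin/ifconfig ens192 | /usr/bin/awk '/inet\s/ {print $2}' | /usr/bin/xargs host | /usr/bin/awk '{print $5}' | /usr/bin/sed s'/.$//')
HOSTNAME=$IP
/usr/bin/sudo /usr/bin/hostnamectl set-hostname $HOSTNAME

It performs a reverse DNS lookup on the address assigned to ens192, trims the trailing dot from the returned FQDN, and sets that value as the node's hostname.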

Line 41 closes off the storage section of the Ignition file. The next section is for systemd units and drop-ins.

Line 42 tells Ignition we are now going to be providing definitions we expect systemd to use during the boot process. This is where Ignition shows some of its robustness — it allows us to create systemd units early enough in the boot process to affect how the system will run when it is brought online fully.

Lines 44-48 define the first systemd unit. Using the /home/deploy/gethost.sh shell script that was defined in the storage section, the Ignition file creates the /etc/systemd/system/set-hostname.service file that will be run during the boot process. The formatting of the contents value here is less strict than the contents of a files entry (above): we can simply type the characters, including spaces, and use the familiar '\n' syntax for newlines.

As you can see, the unit above creates the /etc/systemd/system/set-hostname.service file with the following contents:

[Unit]
Description=Use FQDN to set hostname.
After=network-online.target
Wants=network-online.target

[Service]
Type=oneshot
ExecStartPre=/usr/bin/chmod 755 /home/deploy/gethost.sh
ExecStartPre=/usr/bin/chown deploy:deploy /home/deploy/gethost.sh
ExecStart=/home/deploy/gethost.sh

[Install]
WantedBy=multi-user.target

Lines 49-53 take the Python installation script Ignition created and create a systemd unit for it as well. I confess this may not be the most ideal method for installing Python, but it works.

The /etc/systemd/system/env-python.service file is created with the following contents:

[Unit]
Description=Install Python for Ansible.
After=network-online.target
Wants=network-online.target

[Service]
Type=oneshot
ExecStartPre=/usr/bin/chmod 755 /home/deploy/install_python.sh
ExecStartPre=/usr/bin/chown deploy:deploy /home/deploy/install_python.sh
ExecStart=/home/deploy/install_python.sh

[Install]
WantedBy=multi-user.target

There is a systemd caveat I want to go over that was instrumental in being able to deliver a functional Ignition file. As I worked through setting the hostname, which should be a relatively easy task, I ran into all sorts of issues. After adding debugging messages to the shell script, I was able to determine that the systemd unit was being run before the network was fully online, resulting in the script's inability to successfully query a DNS server to resolve the FQDN. After reading through more blog posts and GitHub pages, I came across the syntax for making sure my systemd services were not executed until after the network was fully online.

The two key lines here are:

After=network-online.target
Wants=network-online.target

This instructs systemd not to execute the unit until after the network is confirmed to be online. There is another systemd target, network.target, but it does not guarantee the network is actually fully online. Instead, network.target is reached once the interfaces have been configured, not necessarily after all of the networking components are fully operational. Using the network-online.target unit ensured the two shell scripts I needed systemd to execute could rely on a functioning network.

Lines 54-59 define the last systemd unit in my Ignition file, which tells CoreOS to start the etcd2 service. The configuration of etcd2 will be performed by Ansible and covered in a later post.
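In the JSON this is the simplest unit of the three: no contents are supplied, so Ignition simply enables the etcd2 unit that ships with CoreOS.

{
  "name": "etcd2.service",
  "enabled": true
}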


The final portion of the Ignition file defines the users the CoreOS system should have when it is fully configured. In the file I have configured a single user, deploy, and assigned an SSH key that can be used to log into the CoreOS node. The code also places the user in the sudo and docker groups, which are predefined in the operating system.
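If you are generating this section with the Config Transpiler, the YAML equivalent looks roughly like the snippet below (the public key is truncated here; substitute your own):

passwd:
  users:
    - name: deploy
      ssh_authorized_keys:
        - "ssh-rsa AAAA...truncated... cmutchler@cmutchler-MBP.local"
      groups:
        - sudo
        - docker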

Feel free to reach out over Twitter if you have any questions or comments.

[Introduction] [Part 1 – Bootstrap CoreOS with Ignition] [Part 2 – Understanding CoreOS Ignition] [Part 3 – Getting started with Ansible]

Read More

The first post in the series went over the design goals and the logical diagram of the Kubernetes environment. This post will include the necessary steps to PXEBOOT a CoreOS node, install the version of CoreOS that includes VMware Tools, and perform an initial configuration of the CoreOS node with Ignition.

After determining what the Infrastructure-as-Code project would be working to accomplish, I broke the project down into several pieces. The decision was made to start off with learning how CoreOS worked, how to install and configure it in a manner that would allow the deployment of 1, 5, 100 or 1000 nodes — with each node operating in the same way every single time. As familiar as I am with Big Data Extensions and how the management server deploys things with Chef, I decided to go in a different direction. I did not want to use a template VM that is copied over and over again — instead I chose to use a PXEBOOT server for performing the initial installation of CoreOS.

In this post, I will detail how to configure an Ubuntu node to act as the PXEBOOT server, how to perform a CoreOS stateful installation and provide the necessary Ignition files for accomplishing these tasks.

Step 1: Ubuntu 17.10 PXEBOOT Node

I am using an Ubuntu Server 17.10 virtual machine as my beachhead node, where I am running the tftpd and bind9 services for the entire micro-segmented network. It had been a few years since I had to set up a PXEBOOT server, and I needed a refresher course when I set out to work on this project. After getting a base install with sshd running on an Ubuntu Server 17.10 node, the following steps were required to configure tftpd-hpa and get the PXE images in place.

Configure a PXEBOOT Linux server:

$ sudo apt-get -y install tftpd-hpa syslinux pxelinux initramfs-tools
$ sudo vim /etc/default/tftpd-hpa

# /etc/default/tftpd-hpa
TFTP_USERNAME="tftp"
TFTP_DIRECTORY="/var/lib/tftpboot"
TFTP_ADDRESS=":69"
TFTP_OPTIONS="--secure"
RUN_DAEMON="yes"
OPTIONS="-l -s /var/lib/tftpboot"

$ sudo mkdir -p /var/lib/tftpboot/pxelinux.cfg
$ sudo vim /var/lib/tftpboot/pxelinux.cfg/default

default coreos
prompt 1
timeout 15
display boot.msg

label coreos
  menu default
  kernel coreos_production_pxe.vmlinuz
  initrd coreos_production_pxe_image.cpio.gz
  append coreos.first_boot=1 coreos.config.url=https://s3-us-west-1.amazonaws.com/s3-kube-coreos/pxe-config.ign cloud-config-url=https://s3-us-west-1.amazonaws.com/s3-kube-coreos/cloud-control.sh

Next, it is necessary to download the CoreOS boot files:

$ cd /var/lib/tftpboot
$ sudo wget https://stable.release.core-os.net/amd64-usr/current/coreos_production_pxe.vmlinuz
$ sudo wget https://stable.release.core-os.net/amd64-usr/current/coreos_production_pxe.vmlinuz.sig
$ sudo wget https://stable.release.core-os.net/amd64-usr/current/coreos_production_pxe_image.cpio.gz
$ sudo wget https://stable.release.core-os.net/amd64-usr/current/coreos_production_pxe_image.cpio.gz.sig
$ gpg --verify coreos_production_pxe.vmlinuz.sig
$ gpg --verify coreos_production_pxe_image.cpio.gz.sig

After the CoreOS images are downloaded, a restart of the tftpd-hpa service should be all that is required for this step.
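On Ubuntu 17.10, that restart is handled through systemd:

$ sudo systemctl restart tftpd-hpa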

Step 2: CoreOS Ignition

CoreOS replaced the earlier coreos-cloudinit tooling with Ignition to provide first-boot provisioning for the operating system. Ignition is designed to run early in the boot process, allowing user space to be modified before most of the operating system services are started. Whereas it used to be possible to use a YAML configuration file, Ignition now relies on a JSON file to define what actions (partitioning, user creation, file creation, etc.) are to occur during the first boot of the system. Creating the JSON file and understanding how systemd interacts with other services was my biggest initial challenge in adopting CoreOS.

If you are new to Ignition, I highly suggest reading through the official CoreOS Ignition documentation before proceeding.

A major challenge I faced was the inconsistent examples available on the Internet. Even using an Ignition file a co-worker provided to me proved to be difficult as it seemingly did not work as expected. Through much trial and error — I must have used up an entire /24 DHCP scope booting test VMs — I was able to get the following two Ignition files working.

The first Ignition file is used during the PXEBOOT process — it configures just enough of the system to perform the stateful installation.

pxe-config.ign (S3 download link)

{
  "ignition": {
    "version": "2.1.0",
    "config": {}
  },
  "storage": {
    "disks": [{
      "device": "/dev/sda",
      "wipeTable": true,
      "partitions": [{
        "label": "ROOT",
        "number": 0,
        "size": 0,
        "start": 0
      }]
    }],
  "filesystems": [{
    "mount": {
      "device": "/dev/sda1",
      "format": "ext4",
      "wipeFilesystem": true,
      "options": [ "-L", "ROOT" ]
     }
   }]
  },
  "systemd": {
     "units": [
       {
         "contents": "[Unit]\nDescription=Set hostname to DHCP FQDN\n\n[Service]\nType=oneshot\nExecStart=/bin/sh -c \"IP=$(ip add show ens192 | awk '/inet/ {print $2}' | cut -d/ -f1 |cut -d. -f4 | head -1) ; sudo hostnamectl set-hostname dhcp-coreos$IP\"\n",
         "enabled": true,
         "name": "set-hostname.service"
       },
       {
         "name": "etcd2.service",
         "enabled": true
       }
     ]
  },
  "networkd": {},
  "passwd": {
    "users": [
      {
        "name": "deploy",
        "sshAuthorizedKeys": [
          "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAEAQCsNebg9k312OhcZlC+JM8daEyT5XpFEb1gnUgEHms+/yft6rlr+Y/BOXC9r/0UR2VB41tpx9B8ZZADHa/I8cZKctRP4idwKWlPJlxqPohVWwgGk9oYyDY4612bO9gYQros9XKDI+IZMc0xOrdm7D7dowzheez77OQeZFKtef0w61LdBTQn4JXAK0DhuldGwvoH7SDkEMk1FH3U45DSljgMOAwbxnr6Gy2embr6qHo15zrGs0OyHFY0YZXCZ1xvhNYuBm8/H06JZnI2qPBGWaRwDNky6MXEtWBUSXjuMsIApGg1nR3hjZbwtN3uH0/VMH/uk7m9mvZXpeu/ktAn70IP/8wq4HjN6pXGY9gsvA2qQULNAI8t5wYuwSa/cm/aWC0Z8rgS6wE04j5i5jLlLpVNHvmBrc3BxKO5AV9k/19TQLSnqbmT9aU7mC8CvguHsy2g5nagqzUwHfpbOS64kYcgISu2LjYdOCRpr9NSzeR3N3l+3yG+QfNE73x9yPifd9aE21Mc3JLIwq+Qo0ZmKrgAu615Y2r7bcEx4wt7SF98rvAC8IZDbMNukSUMR3LPRaQq00OGUxVPdHdxwyLaH4UZ3wb43tFfaDreYAy1SeX1cTHjZ01MAHk2P5mhGPxeUh7LW7w+57GoeFY+aF9SEyrdqpd6DhUC15pJT9Tje/sxTOXUCVWyGgsyxi4ygeZ3ZUb0oUwQ2bnnnzNSXHl+qx722w9saE+LNuZOsnTY26+1TVaYKNczQwGsnjyZdF3VslsQskZ5cld5AeHkPrkrsISjhCAPxP7hOLJRhY2gZk/FqwycZdjARz75MNegidQFNN7MuGaN+F9YinQIHsbReoGHyaKN40tyThs9RwZr7lOPgngjhEddEuaAgre7k4sln9x3PRlNzGX5kPVK+7ccQMWI3DgvMUxkUtV5Di+BNzhtUVN8D8yNjajAf3zk7gEgWdeSNse+GUCwQWt0VCwDIfA1RhfWnyMwukgxqmQe7m5jM4YjLyR7AFe2CeB08jOES9s+N44kWOlrnG3Mf41W2oZ6FbiFcB7+YHGNxnlxK+0QluP17rISgUmnCkEgwGbyisXMrNHTaGfApxd4CertVab0wOvtDNnH4x7ejEiNHiN1crOzpMtnSVnrRi+M+f9w3ChCsirc+3H8tbpSOssI7D3p1eWZlF6z1OSb9pp4+JYwlmAisyz/vZyjC7vtEXsJt3e4JLM1ef62mZTcKHP8xWP3k78hPB5twzSwhMVtZCB/MIT3pg7DA90fbhBkHZIVczgBjN9tOJilHPTuBeuKNzWD0Rhi0CSdzohDYVsO/PKA5ZyEncx83Y9pc4zpcrxgdU2H5NdqkLW9yw7O5gvau7jj cmutchler@cmutchler-MBP.local"
          ],
          "groups": [ "sudo", "docker" ]
     }
   ]
  }
}

During the initial launch of the virtual machine, the kernel parameters supplied by the PXEBOOT server tell the system to download the Ignition file and a cloud-config URL. In this case the cloud-config-url parameter points directly at a shell script I have written, which installs CoreOS onto the /dev/sda disk attached to the VM.

cloud-config-url=https://s3-us-west-1.amazonaws.com/s3-kube-coreos/cloud-control.sh

cloud-control.sh (S3 download link)

#!/bin/bash

wget https://s3-us-west-1.amazonaws.com/s3-kube-coreos/stateful-config.ign
sudo coreos-install -d /dev/sda -i stateful-config.ign -C stable -V current -o vmware_raw
sudo reboot

As you can see, the cloud-control.sh script downloads a second Ignition file from S3 and passes it to coreos-install for the stateful install of CoreOS. The vmware_raw OEM image of CoreOS includes VMware Tools, which will play an important role as we continue to automate the entire stack.

stateful-config.ign (S3 download link)

{
  "ignition": {
    "version": "2.1.0",
    "config": {}
  },
  "storage": {
    "files": [
      {
        "filesystem": "root",
        "group": {},
        "path": "/etc/motd",
        "user": {},
        "contents": {
          "source": "data:,Stateful%20CoreOS%20Installation.%0A",
          "verification": {}
        }
      },
      {
        "filesystem": "root",
        "group": {},
        "path": "/home/deploy/install_python.sh",
        "user": {},
        "contents": {
          "source":"data:,%23!%2Fusr%2Fbin%2Fbash%0Asudo%20mkdir%20-p%20%2Fopt%2Fbin%0Acd%20%2Fopt%0Asudo%20wget%20http%3A%2F%2F192.168.0.2%3A8080%2FActivePython-2.7.13.2715-linux-x86_64-glibc-2.12-402695.tar.gz%0Asudo%20tar%20-zxf%20ActivePython-2.7.13.2715-linux-x86_64-glibc-2.12-402695.tar.gz%0Asudo%20mv%20ActivePython-2.7.13.2715-linux-x86_64-glibc-2.12-402695%20apy%0Asudo%20%2Fopt%2Fapy%2Finstall.sh%20-I%20%2Fopt%2Fpython%0Asudo%20ln%20-sf%20%2Fopt%2Fpython%2Fbin%2Feasy_install%20%2Fopt%2Fbin%2Feasy_install%0Asudo%20ln%20-sf%20%2Fopt%2Fpython%2Fbin%2Fpip%20%2Fopt%2Fbin%2Fpip%0Asudo%20ln%20-sf%20%2Fopt%2Fpython%2Fbin%2Fpython%20%2Fopt%2Fbin%2Fpython%0Asudo%20ln%20-sf%20%2Fopt%2Fpython%2Fbin%2Fpython%20%2Fopt%2Fbin%2Fpython2%0Asudo%20ln%20-sf%20%2Fopt%2Fpython%2Fbin%2Fvirtualenv%20%2Fopt%2Fbin%2Fvirtualenv%0Asudo%20rm%20-rf%20%2Fopt%2FActivePython-2.7.13.2715-linux-x86_64-glibc-2.12-402695.tar.gz%0A",
          "verification": {},
          "mode": 420
        }
      },
      {
        "filesystem": "root",
        "group": {},
        "path": "/home/deploy/gethost.sh",
        "user": {},
        "contents": {
          "source":"data:,%23!%2Fbin%2Fbash%0AIP%3D%24(%2Fusr%2Fbin%2Fifconfig%20ens192%20%7C%20%2Fusr%2Fbin%2Fawk%20'%2Finet%5Cs%2F%20%7Bprint%20%242%7D'%20%7C%20%2Fusr%2Fbin%2Fxargs%20host%20%7C%20%2Fusr%2Fbin%2Fawk%20'%7Bprint%20%245%7D'%20%7C%20%2Fusr%2Fbin%2Fsed%20s'%2F.%24%2F%2F')%0AHOSTNAME%3D%24IP%0A%2Fusr%2Fbin%2Fsudo%20%2Fusr%2Fbin%2Fhostnamectl%20set-hostname%20%24HOSTNAME%0A",
          "verification": {},
          "mode": 493
        }
      }
    ]
  },
  "systemd": {
    "units": [
      {
        "contents": "[Unit]\nDescription=Use FQDN to set hostname.\nAfter=network-online.target\nWants=network-online.target\n\n[Service]\nType=oneshot\nExecStartPre=/usr/bin/chmod 755 /home/deploy/gethost.sh\nExecStartPre=/usr/bin/chown deploy:deploy /home/deploy/gethost.sh\nExecStart=/home/deploy/gethost.sh\n\n[Install]\nWantedBy=multi-user.target\n",
        "enabled": true,
        "name": "set-hostname.service"
      },
      {
        "contents": "[Unit]\nDescription=Install Python for Ansible.\nAfter=network-online.target\nWants=network-online.target\n\n[Service]\nType=oneshot\nExecStartPre=/usr/bin/chmod 755 /home/deploy/install_python.sh\nExecStartPre=/usr/bin/chown deploy:deploy /home/deploy/install_python.sh\nExecStart=/home/deploy/install_python.sh\n\n[Install]\nWantedBy=multi-user.target\n",
        "enabled": true,
        "name": "env-python.service"
      },
      {
        "name": "etcd2.service",
        "enabled": true
      }
    ]
  },
  "networkd": {},
  "passwd": {
    "users": [
      {
        "name": "deploy",
        "sshAuthorizedKeys": [
          "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAEAQCsNebg9k312OhcZlC+JM8daEyT5XpFEb1gnUgEHms+/yft6rlr+Y/BOXC9r/0UR2VB41tpx9B8ZZADHa/I8cZKctRP4idwKWlPJlxqPohVWwgGk9oYyDY4612bO9gYQros9XKDI+IZMc0xOrdm7D7dowzheez77OQeZFKtef0w61LdBTQn4JXAK0DhuldGwvoH7SDkEMk1FH3U45DSljgMOAwbxnr6Gy2embr6qHo15zrGs0OyHFY0YZXCZ1xvhNYuBm8/H06JZnI2qPBGWaRwDNky6MXEtWBUSXjuMsIApGg1nR3hjZbwtN3uH0/VMH/uk7m9mvZXpeu/ktAn70IP/8wq4HjN6pXGY9gsvA2qQULNAI8t5wYuwSa/cm/aWC0Z8rgS6wE04j5i5jLlLpVNHvmBrc3BxKO5AV9k/19TQLSnqbmT9aU7mC8CvguHsy2g5nagqzUwHfpbOS64kYcgISu2LjYdOCRpr9NSzeR3N3l+3yG+QfNE73x9yPifd9aE21Mc3JLIwq+Qo0ZmKrgAu615Y2r7bcEx4wt7SF98rvAC8IZDbMNukSUMR3LPRaQq00OGUxVPdHdxwyLaH4UZ3wb43tFfaDreYAy1SeX1cTHjZ01MAHk2P5mhGPxeUh7LW7w+57GoeFY+aF9SEyrdqpd6DhUC15pJT9Tje/sxTOXUCVWyGgsyxi4ygeZ3ZUb0oUwQ2bnnnzNSXHl+qx722w9saE+LNuZOsnTY26+1TVaYKNczQwGsnjyZdF3VslsQskZ5cld5AeHkPrkrsISjhCAPxP7hOLJRhY2gZk/FqwycZdjARz75MNegidQFNN7MuGaN+F9YinQIHsbReoGHyaKN40tyThs9RwZr7lOPgngjhEddEuaAgre7k4sln9x3PRlNzGX5kPVK+7ccQMWI3DgvMUxkUtV5Di+BNzhtUVN8D8yNjajAf3zk7gEgWdeSNse+GUCwQWt0VCwDIfA1RhfWnyMwukgxqmQe7m5jM4YjLyR7AFe2CeB08jOES9s+N44kWOlrnG3Mf41W2oZ6FbiFcB7+YHGNxnlxK+0QluP17rISgUmnCkEgwGbyisXMrNHTaGfApxd4CertVab0wOvtDNnH4x7ejEiNHiN1crOzpMtnSVnrRi+M+f9w3ChCsirc+3H8tbpSOssI7D3p1eWZlF6z1OSb9pp4+JYwlmAisyz/vZyjC7vtEXsJt3e4JLM1ef62mZTcKHP8xWP3k78hPB5twzSwhMVtZCB/MIT3pg7DA90fbhBkHZIVczgBjN9tOJilHPTuBeuKNzWD0Rhi0CSdzohDYVsO/PKA5ZyEncx83Y9pc4zpcrxgdU2H5NdqkLW9yw7O5gvau7jj cmutchler@cmutchler-MBP.local"
        ],
        "groups": [ "sudo", "docker" ]
      }
    ]
  }
}

Step 3: Configure DHCP on NSX Edge

The last piece before a virtual machine can be booted is to configure DHCP services on the NSX Edge so that booting nodes are directed to the Ubuntu PXEBOOT server. I plan to automate this piece through Ansible in a future article; for now I will simply show how it needs to be configured in the UI.

Step 4: Booting a VM

Everything should be in place now to boot the first VM. To be fair, I booted the “first” VM about 150 times as I worked through all of the Ignition iterations to get everything working as I intended. For my lab virtual machines, I am configuring the nodes with the following specifications:

  • 2 vCPU
  • 8 GB RAM
  • 50 GB hard disk

After powering on the VM and watching it go through the boot process, it takes about 5 minutes for it to perform the stateful installation and become available over SSH.

The next post will go through the stateful-config.ign Ignition file in detail, reviewing all the actions it is performing. I hope you are enjoying the series! Find me on Twitter if you have questions or comments.

[Introduction] [Part 1 – Bootstrap CoreOS with Ignition] [Part 2 – Understanding CoreOS Ignition] [Part 3 – Getting started with Ansible]

Read More

In an effort to get caught up with the Cloud Native space, I am embarking on building a completely dynamic Kubernetes environment entirely through code. To accomplish this, I am using (and learning) several technologies, including:

  • Container Linux (CoreOS) for the Kubernetes nodes.
  • Ignition for configuring CoreOS.
  • Ansible for automation and orchestration.
  • Kubernetes for container orchestration.
  • VMware NSX for micro-segmentation, load balancing and DHCP.

There are a lot of great articles on the Internet around Kubernetes, CoreOS and other Cloud Native technologies. If you are unfamiliar with Kubernetes, I highly encourage you to read the articles written by Hany Michaels (Kubernetes Introduction for VMware Users and Kubernetes in the Enterprise – The Design Guide). These are especially useful if you already have a background in VMware technologies and are just getting started in the Cloud Native space. Mr. Michaels does an excellent job comparing concepts you are already familiar with and aligning them with Kubernetes components.

Moving on, the vision I have for this Infrastructure-as-Code project is to build a Kubernetes cluster leveraging my vSphere lab with the SDDC stack (vSphere, vCenter, vSAN and NSX). I want to codify it in a way that an environment can be stood up or torn down in a matter of minutes without having to interact with any user-interface. I am also hopeful the lessons learned whilst working on this project will be applicable to other cloud native technologies, including Mesos and Cloud Foundry environments.

Logically, the project will create the following within my vSphere lab environment:

[Logical diagram of the Kubernetes environment within the vSphere lab]

I will cover the NSX components in a future post, but essentially each Kubernetes environment will be attached to an HA pair of NSX Edges. The ECMP Edges and Distributed Logical Router are already in place, as they are providing upstream network connectivity for my vSphere lab. The project will focus on the internal network (VXLAN-backed), attached to the NSX HA Edge devices, which will provide the inter-node network connectivity. The NSX Edge is configured to provide firewall, routing and DHCP services to all components inside its network space.

The plan for the project and the blog series is to document every facet of development and execution of the components, with the end goal that anyone reading the series can understand how all the pieces interrelate with one another. The series will kick off with the following posts:

  • Bootstrapping CoreOS with Ignition
  • Understanding Ignition files
  • Using Ansible with Ignition
  • Building Kubernetes cluster with Ansible
  • Deploying NSX components using Ansible
  • Deploying full stack using Ansible

If time allows, I may also embark on migrating from NSX-V to NSX-T for providing some of the tenant software-defined networking.

I hope you enjoy the series!

[Introduction] [Part 1 – Bootstrap CoreOS with Ignition] [Part 2 – Understanding CoreOS Ignition] [Part 3 – Getting started with Ansible]


Read More

I had the opportunity to attend CoreOS Fest 2017 in San Francisco for a day this past week. There are lots of exciting things happening in the cloud native space, and CoreOS, with its heavy involvement in Kubernetes, is at the forefront of much of the innovation. The conference itself was on the smaller side, but the number of sessions focused on emerging technology was impressive, and I will be excited to see how it grows over the coming years. While there, I was able to attend the session by one of Adobe's Principal Architects, Frans van Rooyen. (Frans and I worked together from 2012 to 2014 at Adobe.)

In his session, he spoke about several fundamental architecture principles and how they have been applied in the new multi-cloud initiative at Adobe. The platform they have built over the past two years can be deployed inside a data center, inside AWS, inside Azure and even locally on a developer's laptop, while providing the same experience to the developer or operations engineer.

The platform is based on CoreOS and uses the Ignition project to provide the same level of provisioning regardless of which cloud platform the workload is deployed on. I had not heard of Ignition before, or how it delivers that level of provisioning, and it is a technology I will now be investigating further. If you are interested in learning more, I encourage you to reach out to Frans over Twitter.

Frans has also spoken about the multi-cloud platform at MesosCon, focusing on the inclusion of Apache Mesos; the session can be watched on YouTube.


Read More