Infrastructure-as-Code: Bootstrap CoreOS with Ignition

The first post in the series went over the design goals and the logical diagram of the Kubernetes environment. This post covers the steps required to PXEBOOT a CoreOS node, install the VMware Tools-enabled build of CoreOS, and perform an initial configuration of the node with Ignition.

After determining what the Infrastructure-as-Code project would accomplish, I broke it down into several pieces. I decided to start by learning how CoreOS works and how to install and configure it in a manner that allows the deployment of 1, 5, 100 or 1000 nodes — with each node operating the same way every single time. As familiar as I am with Big Data Extensions and how its management server deploys nodes with Chef, I decided to go in a different direction. I did not want to use a template VM that is copied over and over again — instead I chose to use a PXEBOOT server to perform the initial installation of CoreOS.

In this post, I will detail how to configure an Ubuntu node to act as the PXEBOOT server, how to perform a stateful CoreOS installation, and provide the Ignition files necessary to accomplish these tasks.

Step 1: Ubuntu 17.10 PXEBOOT Node

I am using an Ubuntu Server 17.10 virtual machine as my beachhead node, running the tftpd and bind9 services for the entire micro-segmented network. It had been a few years since I last set up a PXEBOOT server, so I needed a refresher when I started on this project. After getting a base install with sshd running on an Ubuntu Server 17.10 node, the following steps were required to configure tftpd-hpa and put the PXE images in place.

Configure a PXEBOOT Linux server:

$ sudo apt-get -y install tftpd-hpa syslinux pxelinux initramfs-tools
$ sudo vim /etc/default/tftpd-hpa

# /etc/default/tftpd-hpa
TFTP_USERNAME="tftp"
TFTP_DIRECTORY="/var/lib/tftpboot"
TFTP_ADDRESS=":69"
TFTP_OPTIONS="--secure"
RUN_DAEMON="yes"
OPTIONS="-l -s /var/lib/tftpboot"

$ sudo mkdir -p /var/lib/tftpboot/pxelinux.cfg
$ sudo vim /var/lib/tftpboot/pxelinux.cfg/default

default coreos
prompt 1
timeout 15
display boot.msg

label coreos
  menu default
  kernel coreos_production_pxe.vmlinuz
  initrd coreos_production_pxe_image.cpio.gz
  append coreos.first_boot=1 coreos.config.url=https://s3-us-west-1.amazonaws.com/s3-kube-coreos/pxe-config.ign cloud-config-url=https://s3-us-west-1.amazonaws.com/s3-kube-coreos/cloud-control.sh
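
One step that is easy to overlook: the PXELINUX bootloader and its library module also need to be present in the TFTP root. On Ubuntu 17.10 the pxelinux and syslinux packages typically place them at the paths shown below — verify the locations on your install if the copies fail.

$ sudo cp /usr/lib/PXELINUX/pxelinux.0 /var/lib/tftpboot/
$ sudo cp /usr/lib/syslinux/modules/bios/ldlinux.c32 /var/lib/tftpboot/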

Next, it is necessary to download the CoreOS boot files:

$ cd /var/lib/tftpboot
$ sudo wget https://stable.release.core-os.net/amd64-usr/current/coreos_production_pxe.vmlinuz
$ sudo wget https://stable.release.core-os.net/amd64-usr/current/coreos_production_pxe.vmlinuz.sig
$ sudo wget https://stable.release.core-os.net/amd64-usr/current/coreos_production_pxe_image.cpio.gz
$ sudo wget https://stable.release.core-os.net/amd64-usr/current/coreos_production_pxe_image.cpio.gz.sig
$ gpg --verify coreos_production_pxe.vmlinuz.sig
$ gpg --verify coreos_production_pxe_image.cpio.gz.sig

After the CoreOS images are downloaded, a restart of the tftpd-hpa service should be all that is required for this step.
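
For reference, on a systemd-based Ubuntu install that is simply:

$ sudo systemctl restart tftpd-hpa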

Step 2: CoreOS Ignition

CoreOS replaced the previous coreos-cloudinit orchestration with Ignition. Ignition is designed to run early in the boot process so that the user space can be modified before most of the operating system services start. Whereas it used to be possible to use a YAML configuration file, Ignition relies on a JSON file to define what actions (partitioning, user creation, file creation, etc.) occur during the first boot of the system. Creating the JSON file and understanding how systemd interacts with the other services was my biggest initial challenge in adopting CoreOS.
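
For those who prefer authoring in YAML, CoreOS also provides the Container Linux Config Transpiler (ct) to convert a YAML config into the JSON Ignition format. A minimal invocation — assuming ct is on your PATH and node-config.yaml is a hypothetical hand-written config — looks like this:

$ ct < node-config.yaml > node-config.ign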

If you are new to Ignition, I highly suggest reading the official Ignition documentation and examples on the CoreOS site before going any further.

A major challenge I faced was the inconsistency of the examples available on the Internet. Even an Ignition file a co-worker provided proved difficult to use, as it did not work as expected. Through much trial and error — I must have used up an entire /24 DHCP scope booting test VMs — I was able to get the following two Ignition files working.
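
Before handing a new file to a node, it is worth at least confirming it parses as valid JSON — a quick local check, assuming jq is installed:

$ jq . pxe-config.ign > /dev/null && echo "JSON OK"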

The first Ignition file is used during the PXEBOOT process — it configures just enough of the system to perform the stateful installation.

pxe-config.ign (S3 download link)

{
  "ignition": {
    "version": "2.1.0",
    "config": {}
  },
  "storage": {
    "disks": [{
      "device": "/dev/sda",
      "wipeTable": true,
      "partitions": [{
        "label": "ROOT",
        "number": 0,
        "size": 0,
        "start": 0
      }]
    }],
  "filesystems": [{
    "mount": {
      "device": "/dev/sda1",
      "format": "ext4",
      "wipeFilesystem": true,
      "options": [ "-L", "ROOT" ]
     }
   }]
  },
  "systemd": {
     "units": [
       {
         "contents": "[Unit]\nDescription=Set hostname to DHCP FQDN\n\n[Service]\nType=oneshot\nExecStart=/bin/sh -c \"IP=$(ip add show ens192 | awk '/inet/ {print $2}' | cut -d/ -f1 |cut -d. -f4 | head -1) ; sudo hostnamectl set-hostname dhcp-coreos$IP\"\n",
         "enabled": true,
         "name": "set-hostname.service"
       },
       {
         "name": "etcd2.service",
         "enabled": true
       }
     ]
  },
  "networkd": {},
  "passwd": {
    "users": [
      {
        "name": "deploy",
        "sshAuthorizedKeys": [
          "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAEAQCsNebg9k312OhcZlC+JM8daEyT5XpFEb1gnUgEHms+/yft6rlr+Y/BOXC9r/0UR2VB41tpx9B8ZZADHa/I8cZKctRP4idwKWlPJlxqPohVWwgGk9oYyDY4612bO9gYQros9XKDI+IZMc0xOrdm7D7dowzheez77OQeZFKtef0w61LdBTQn4JXAK0DhuldGwvoH7SDkEMk1FH3U45DSljgMOAwbxnr6Gy2embr6qHo15zrGs0OyHFY0YZXCZ1xvhNYuBm8/H06JZnI2qPBGWaRwDNky6MXEtWBUSXjuMsIApGg1nR3hjZbwtN3uH0/VMH/uk7m9mvZXpeu/ktAn70IP/8wq4HjN6pXGY9gsvA2qQULNAI8t5wYuwSa/cm/aWC0Z8rgS6wE04j5i5jLlLpVNHvmBrc3BxKO5AV9k/19TQLSnqbmT9aU7mC8CvguHsy2g5nagqzUwHfpbOS64kYcgISu2LjYdOCRpr9NSzeR3N3l+3yG+QfNE73x9yPifd9aE21Mc3JLIwq+Qo0ZmKrgAu615Y2r7bcEx4wt7SF98rvAC8IZDbMNukSUMR3LPRaQq00OGUxVPdHdxwyLaH4UZ3wb43tFfaDreYAy1SeX1cTHjZ01MAHk2P5mhGPxeUh7LW7w+57GoeFY+aF9SEyrdqpd6DhUC15pJT9Tje/sxTOXUCVWyGgsyxi4ygeZ3ZUb0oUwQ2bnnnzNSXHl+qx722w9saE+LNuZOsnTY26+1TVaYKNczQwGsnjyZdF3VslsQskZ5cld5AeHkPrkrsISjhCAPxP7hOLJRhY2gZk/FqwycZdjARz75MNegidQFNN7MuGaN+F9YinQIHsbReoGHyaKN40tyThs9RwZr7lOPgngjhEddEuaAgre7k4sln9x3PRlNzGX5kPVK+7ccQMWI3DgvMUxkUtV5Di+BNzhtUVN8D8yNjajAf3zk7gEgWdeSNse+GUCwQWt0VCwDIfA1RhfWnyMwukgxqmQe7m5jM4YjLyR7AFe2CeB08jOES9s+N44kWOlrnG3Mf41W2oZ6FbiFcB7+YHGNxnlxK+0QluP17rISgUmnCkEgwGbyisXMrNHTaGfApxd4CertVab0wOvtDNnH4x7ejEiNHiN1crOzpMtnSVnrRi+M+f9w3ChCsirc+3H8tbpSOssI7D3p1eWZlF6z1OSb9pp4+JYwlmAisyz/vZyjC7vtEXsJt3e4JLM1ef62mZTcKHP8xWP3k78hPB5twzSwhMVtZCB/MIT3pg7DA90fbhBkHZIVczgBjN9tOJilHPTuBeuKNzWD0Rhi0CSdzohDYVsO/PKA5ZyEncx83Y9pc4zpcrxgdU2H5NdqkLW9yw7O5gvau7jj cmutchler@cmutchler-MBP.local"
          ],
          "groups": [ "sudo", "docker" ]
     }
   ]
  }
}

During the initial launch of the virtual machine, the kernel parameters passed by the PXEBOOT server tell the system to download an Ignition file and a cloud-config URL. That cloud-config URL points to a shell script I’ve written, which installs CoreOS to the /dev/sda disk attached to the VM.

cloud-config-url=https://s3-us-west-1.amazonaws.com/s3-kube-coreos/cloud-control.sh

cloud-control.sh (S3 download link)

#!/bin/bash

# Fetch the Ignition config for the stateful install, then install CoreOS
# (stable channel, VMware OEM image) to the local disk and reboot into it.
wget https://s3-us-west-1.amazonaws.com/s3-kube-coreos/stateful-config.ign
sudo coreos-install -d /dev/sda -i stateful-config.ign -C stable -V current -o vmware_raw
sudo reboot

As you can see, the cloud-control.sh script downloads a second Ignition file from S3 and uses it when performing the stateful install of CoreOS. The vmware_raw OEM image of CoreOS includes VMware Tools — this will play an important role as we continue to automate the entire stack.

stateful-config.ign (S3 download link)

{
  "ignition": {
    "version": "2.1.0",
    "config": {}
  },
  "storage": {
    "files": [
      {
        "filesystem": "root",
        "group": {},
        "path": "/etc/motd",
        "user": {},
        "contents": {
          "source": "data:,Stateful%20CoreOS%20Installation.%0A",
          "verification": {}
        }
      },
      {
        "filesystem": "root",
        "group": {},
        "path": "/home/deploy/install_python.sh",
        "user": {},
        "contents": {
          "source":"data:,%23!%2Fusr%2Fbin%2Fbash%0Asudo%20mkdir%20-p%20%2Fopt%2Fbin%0Acd%20%2Fopt%0Asudo%20wget%20http%3A%2F%2F192.168.0.2%3A8080%2FActivePython-2.7.13.2715-linux-x86_64-glibc-2.12-402695.tar.gz%0Asudo%20tar%20-zxf%20ActivePython-2.7.13.2715-linux-x86_64-glibc-2.12-402695.tar.gz%0Asudo%20mv%20ActivePython-2.7.13.2715-linux-x86_64-glibc-2.12-402695%20apy%0Asudo%20%2Fopt%2Fapy%2Finstall.sh%20-I%20%2Fopt%2Fpython%0Asudo%20ln%20-sf%20%2Fopt%2Fpython%2Fbin%2Feasy_install%20%2Fopt%2Fbin%2Feasy_install%0Asudo%20ln%20-sf%20%2Fopt%2Fpython%2Fbin%2Fpip%20%2Fopt%2Fbin%2Fpip%0Asudo%20ln%20-sf%20%2Fopt%2Fpython%2Fbin%2Fpython%20%2Fopt%2Fbin%2Fpython%0Asudo%20ln%20-sf%20%2Fopt%2Fpython%2Fbin%2Fpython%20%2Fopt%2Fbin%2Fpython2%0Asudo%20ln%20-sf%20%2Fopt%2Fpython%2Fbin%2Fvirtualenv%20%2Fopt%2Fbin%2Fvirtualenv%0Asudo%20rm%20-rf%20%2Fopt%2FActivePython-2.7.13.2715-linux-x86_64-glibc-2.12-402695.tar.gz%0A",
          "verification": {},
          "mode": 420
        }
      },
      {
        "filesystem": "root",
        "group": {},
        "path": "/home/deploy/gethost.sh",
        "user": {},
        "contents": {
          "source":"data:,%23!%2Fbin%2Fbash%0AIP%3D%24(%2Fusr%2Fbin%2Fifconfig%20ens192%20%7C%20%2Fusr%2Fbin%2Fawk%20'%2Finet%5Cs%2F%20%7Bprint%20%242%7D'%20%7C%20%2Fusr%2Fbin%2Fxargs%20host%20%7C%20%2Fusr%2Fbin%2Fawk%20'%7Bprint%20%245%7D'%20%7C%20%2Fusr%2Fbin%2Fsed%20s'%2F.%24%2F%2F')%0AHOSTNAME%3D%24IP%0A%2Fusr%2Fbin%2Fsudo%20%2Fusr%2Fbin%2Fhostnamectl%20set-hostname%20%24HOSTNAME%0A",
          "verification": {},
          "mode": 493
        }
      }
    ]
  },
  "systemd": {
    "units": [
      {
        "contents": "[Unit]\nDescription=Use FQDN to set hostname.\nAfter=network-online.target\nWants=network-online.target\n\n[Service]\nType=oneshot\nExecStartPre=/usr/bin/chmod 755 /home/deploy/gethost.sh\nExecStartPre=/usr/bin/chown deploy:deploy /home/deploy/gethost.sh\nExecStart=/home/deploy/gethost.sh\n\n[Install]\nWantedBy=multi-user.target\n",
        "enabled": true,
        "name": "set-hostname.service"
      },
      {
        "contents": "[Unit]\nDescription=Install Python for Ansible.\nAfter=network-online.target\nWants=network-online.target\n\n[Service]\nType=oneshot\nExecStartPre=/usr/bin/chmod 755 /home/deploy/install_python.sh\nExecStartPre=/usr/bin/chown deploy:deploy /home/deploy/install_python.sh\nExecStart=/home/deploy/install_python.sh\n\n[Install]\nWantedBy=multi-user.target\n",
        "enabled": true,
        "name": "env-python.service"
      },
      {
        "name": "etcd2.service",
        "enabled": true
      }
    ]
  },
  "networkd": {},
  "passwd": {
    "users": [
      {
        "name": "deploy",
        "sshAuthorizedKeys": [
          "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAEAQCsNebg9k312OhcZlC+JM8daEyT5XpFEb1gnUgEHms+/yft6rlr+Y/BOXC9r/0UR2VB41tpx9B8ZZADHa/I8cZKctRP4idwKWlPJlxqPohVWwgGk9oYyDY4612bO9gYQros9XKDI+IZMc0xOrdm7D7dowzheez77OQeZFKtef0w61LdBTQn4JXAK0DhuldGwvoH7SDkEMk1FH3U45DSljgMOAwbxnr6Gy2embr6qHo15zrGs0OyHFY0YZXCZ1xvhNYuBm8/H06JZnI2qPBGWaRwDNky6MXEtWBUSXjuMsIApGg1nR3hjZbwtN3uH0/VMH/uk7m9mvZXpeu/ktAn70IP/8wq4HjN6pXGY9gsvA2qQULNAI8t5wYuwSa/cm/aWC0Z8rgS6wE04j5i5jLlLpVNHvmBrc3BxKO5AV9k/19TQLSnqbmT9aU7mC8CvguHsy2g5nagqzUwHfpbOS64kYcgISu2LjYdOCRpr9NSzeR3N3l+3yG+QfNE73x9yPifd9aE21Mc3JLIwq+Qo0ZmKrgAu615Y2r7bcEx4wt7SF98rvAC8IZDbMNukSUMR3LPRaQq00OGUxVPdHdxwyLaH4UZ3wb43tFfaDreYAy1SeX1cTHjZ01MAHk2P5mhGPxeUh7LW7w+57GoeFY+aF9SEyrdqpd6DhUC15pJT9Tje/sxTOXUCVWyGgsyxi4ygeZ3ZUb0oUwQ2bnnnzNSXHl+qx722w9saE+LNuZOsnTY26+1TVaYKNczQwGsnjyZdF3VslsQskZ5cld5AeHkPrkrsISjhCAPxP7hOLJRhY2gZk/FqwycZdjARz75MNegidQFNN7MuGaN+F9YinQIHsbReoGHyaKN40tyThs9RwZr7lOPgngjhEddEuaAgre7k4sln9x3PRlNzGX5kPVK+7ccQMWI3DgvMUxkUtV5Di+BNzhtUVN8D8yNjajAf3zk7gEgWdeSNse+GUCwQWt0VCwDIfA1RhfWnyMwukgxqmQe7m5jM4YjLyR7AFe2CeB08jOES9s+N44kWOlrnG3Mf41W2oZ6FbiFcB7+YHGNxnlxK+0QluP17rISgUmnCkEgwGbyisXMrNHTaGfApxd4CertVab0wOvtDNnH4x7ejEiNHiN1crOzpMtnSVnrRi+M+f9w3ChCsirc+3H8tbpSOssI7D3p1eWZlF6z1OSb9pp4+JYwlmAisyz/vZyjC7vtEXsJt3e4JLM1ef62mZTcKHP8xWP3k78hPB5twzSwhMVtZCB/MIT3pg7DA90fbhBkHZIVczgBjN9tOJilHPTuBeuKNzWD0Rhi0CSdzohDYVsO/PKA5ZyEncx83Y9pc4zpcrxgdU2H5NdqkLW9yw7O5gvau7jj cmutchler@cmutchler-MBP.local"
        ],
        "groups": [ "sudo", "docker" ]
      }
    ]
  }
}
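
The embedded scripts above are plain data: URLs with percent-encoded bodies. Rather than encoding them by hand, a one-liner like the following does the conversion — a sketch that assumes python3 is available wherever you assemble the Ignition file:

$ python3 -c 'import sys,urllib.parse; print("data:," + urllib.parse.quote(sys.stdin.read(), safe=""))' < gethost.sh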

Step 3: Configure DHCP on NSX Edge

The last piece before a virtual machine can be booted is the NSX Edge: its DHCP service needs to be configured and set up to point booting nodes at the Ubuntu server. I plan to automate this piece through Ansible in a future article; for now I will simply show you how it needs to be configured in the UI.

Step 4: Booting a VM

Everything should be in place now to boot the first VM. To be fair, I booted the “first” VM about 150 times as I worked through all of the Ignition iterations to get everything working as I intended. For my lab virtual machines, I am configuring the nodes with the following specifications:

  • 2 vCPU
  • 8 GB RAM
  • 50 GB hard disk

After powering on the VM and letting it go through the boot process, it takes about 5 minutes for the node to perform the stateful installation and become available over SSH.
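
At that point the node can be reached with the deploy user and the SSH key injected by Ignition — substitute whatever address your DHCP scope handed out:

$ ssh deploy@<node-ip>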

The next post will go through the stateful-config.ign Ignition file in detail, reviewing all the actions it is performing. I hope you are enjoying the series! Find me on Twitter if you have questions or comments.

[Introduction] [Part 1 – Bootstrap CoreOS with Ignition] [Part 2 – Understanding CoreOS Ignition] [Part 3 – Getting started with Ansible]

Infrastructure-as-Code: Project Overview

In an effort to get caught up with the Cloud Native space, I am embarking on a project to build a completely dynamic Kubernetes environment entirely through code. To accomplish this, I am using (and learning) several technologies, including:

  • Container OS (CoreOS) for the Kubernetes nodes.
  • Ignition for configuring CoreOS.
  • Ansible for automation and orchestration.
  • Kubernetes
  • VMware NSX for micro-segmentation, load balancing and DHCP.

There are a lot of great articles on the Internet around Kubernetes, CoreOS and other Cloud Native technologies. If you are unfamiliar with Kubernetes, I highly encourage you to read the articles written by Hany Michaels (Kubernetes Introduction for VMware Users and Kubernetes in the Enterprise – The Design Guide). These are especially useful if you already have a background in VMware technologies and are just getting started in the Cloud Native space. Mr. Michaels does an excellent job comparing concepts you are already familiar with and aligning them with Kubernetes components.

Moving on, the vision I have for this Infrastructure-as-Code project is to build a Kubernetes cluster leveraging my vSphere lab with the SDDC stack (vSphere, vCenter, vSAN and NSX). I want to codify it in a way that an environment can be stood up or torn down in a matter of minutes without having to interact with any user interface. I am also hopeful the lessons learned while working on this project will be applicable to other cloud native technologies, including Mesos and Cloud Foundry environments.

Logically, the project will create the following within my vSphere lab environment:

[Logical diagram: Kubernetes nodes on an internal VXLAN-backed network behind an HA pair of NSX Edges]

I will cover the NSX components in a future post, but essentially each Kubernetes environment will be attached to an HA pair of NSX Edges. The ECMP Edges and Distributed Logical Router are already in place, as they provide upstream network connectivity for my vSphere lab. The project will focus on the internal, VXLAN-backed network attached to the NSX HA Edge devices, which provides the inter-node network connectivity. The NSX Edge is configured to provide firewall, routing and DHCP services to all components inside its network space.

The plan for the project and the blog series is to document every facet of the development and execution of the components, with the end goal that anyone reading the series can understand how all the pieces interrelate. The series will kick off with the following posts:

  • Bootstrapping CoreOS with Ignition
  • Understanding Ignition files
  • Using Ansible with Ignition
  • Building a Kubernetes cluster with Ansible
  • Deploying NSX components using Ansible
  • Deploying the full stack using Ansible

If time allows, I may also embark on migrating from NSX-V to NSX-T for providing some of the tenant software-defined networking.

I hope you enjoy the series!

[Introduction] [Part 1 – Bootstrap CoreOS with Ignition] [Part 2 – Understanding CoreOS Ignition] [Part 3 – Getting started with Ansible]

 

CoreOS Fest 2017 Synopsis

I had the opportunity to attend CoreOS Fest 2017 in San Francisco for a day this past week. There are lots of exciting things happening in the cloud native space, and CoreOS, with its heavy involvement in Kubernetes, is at the forefront of much of the innovation. The conference itself was on the smaller side, but the number of sessions focused on emerging technology was impressive — I will be excited to see how it grows over the coming years. While there, I was able to attend the session by one of Adobe’s Principal Architects — Frans van Rooyen. (Frans and I worked together at Adobe from 2012 to 2014.)

In his session, he spoke about several fundamental architecture principles and how they have been applied in the new multi-cloud initiative at Adobe. The platform they have built over the past two years can be deployed inside a data center, inside AWS, inside Azure and even locally on a developer’s laptop — while providing the same experience to the developer or operations engineer.

The platform is based on CoreOS and uses the Ignition project to provide the same level of provisioning regardless of which cloud platform the workload is deployed on. I hadn’t heard of Ignition or how it operates before this session, and it is now a technology I will be investigating further. If you are interested in learning more, I encourage you to reach out to Frans on Twitter.

Frans has also spoken about the multi-cloud platform at Mesoscon, focusing on the inclusion of Apache Mesos — the session can be watched on YouTube.

 

 

OpenStack Client Docker Container

OpenStack has been my world for the past 8 months. It started out with a work project to design and deploy a large-scale VMware Integrated OpenStack environment for internal use. It then became the design I would submit for my VCDX defense and spend a couple hundred hours poring over and documenting. Since then it has included helping others get up to speed on how to operationalize OpenStack. One of the necessary tools is the ability to execute commands against an OpenStack environment from anywhere.

The easiest way to do that?

A short-lived Docker container with the clients installed!

The container is short and to the point — it uses ubuntu:latest as the base and simply adds the OpenStack clients.

# Docker container with the latest OpenStack clients

FROM ubuntu:latest

MAINTAINER chris@virtualelephant.com

RUN apt-get -y update && apt-get -y upgrade

RUN apt-get -y install python-openstackclient vim
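
Building the image is a single command run from the directory containing the Dockerfile (the tag below matches my repository; substitute your own):

$ docker build -t chrismutchler/vio-client .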

Follow that up with a quick Docker command to launch the instance, and I’m ready to troubleshoot whatever issue may require my attention.

$ docker run -it chrismutchler/vio-client
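
Once inside the container, the workflow is the same as any other OpenStack client install — source an RC file for the target environment (the file name here is just an example) and start issuing commands:

$ source admin-openrc.sh
$ openstack server list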

While I am not a developer, I find creating these small Docker containers really fun. The ability to quickly spin up a container on my laptop, or whatever VM I find myself on at the time, is priceless.

The repo can be seen on hub.docker.com/chrismutchler/vio-client.

If you need an OpenStack client Docker container, I hope you’ll give this one a try. Enjoy!

Bind Docker Container for vPod Lab

I am currently working on building out a vPod nested ESXi lab environment that will be deployed through OpenStack’s Heat orchestration service. As I worked out the vPod application components, I realized that I wanted to include a single Linux VM that would run various services inside Docker containers.

I needed a Bind Docker container!

It seems like everything in a VMware SDDC environment needs both forward and reverse DNS records working properly — so I started here. The Docker container is completely self-contained — all external zone data is stored in S3 and downloaded when the container is built.

https://hub.docker.com/r/chrismutchler/vpod-bind/

The Dockerfile for the container contains the following code:

# Designed to be used in conjunction with a nested ESXi
# virtual lab environment deployed through an OpenStack
# Heat template.

FROM ubuntu:latest

MAINTAINER chris@virtualelephant.com

RUN apt-get -y update && apt-get -y install bind9 dnsutils curl

RUN curl https://s3-us-west-1.amazonaws.com/virtualelephant-vpod-bind/db.192.168 -o /etc/bind/db.192.168 && \
    curl https://s3-us-west-1.amazonaws.com/virtualelephant-vpod-bind/db.vsphere.local -o /etc/bind/db.vsphere.local && \
    curl https://s3-us-west-1.amazonaws.com/virtualelephant-vpod-bind/named.conf.options -o /etc/bind/named.conf.options && \
    curl https://s3-us-west-1.amazonaws.com/virtualelephant-vpod-bind/named.conf.local -o /etc/bind/named.conf.local

EXPOSE 53/tcp 53/udp

CMD ["/usr/sbin/named", "-g", "-c", "/etc/bind/named.conf", "-u", "bind"]

To start the container, I set up the Ubuntu VM to execute the following command when it is deployed inside OpenStack.

# docker run -d -p 53:53 -p 53:53/udp chrismutchler/vpod-bind
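
A quick query from another VM is an easy sanity check that the zones loaded — the record name below is a hypothetical entry from db.vsphere.local, and 192.168.0.10 stands in for the Docker host’s address:

$ dig @192.168.0.10 esxi-01.vsphere.local +short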

Once running, the container provides the critical DNS service inside the vPod ESXi environment. From here it is on to building out the Heat template that will leverage the container.

Enjoy!