Author: Chris

In an effort to catch up with the Cloud Native space, I am embarking on a project to build a completely dynamic Kubernetes environment entirely through code. To accomplish this, I am using (and learning) several technologies, including:

  • Container Linux (CoreOS) for the Kubernetes nodes.
  • Ignition for configuring CoreOS.
  • Ansible for automation and orchestration.
  • Kubernetes for container orchestration.
  • VMware NSX for micro-segmentation, load balancing and DHCP.

There are a lot of great articles on the Internet around Kubernetes, CoreOS and other Cloud Native technologies. If you are unfamiliar with Kubernetes, I highly encourage you to read the articles written by Hany Michaels (Kubernetes Introduction for VMware Users and Kubernetes in the Enterprise – The Design Guide). These are especially useful if you already have a background in VMware technologies and are just getting started in the Cloud Native space. Mr. Michaels does an excellent job comparing concepts you are already familiar with and aligning them with Kubernetes components.

Moving on, the vision I have for this Infrastructure-as-Code project is to build a Kubernetes cluster leveraging my vSphere lab with the SDDC stack (vSphere, vCenter, vSAN and NSX). I want to codify it in a way that an environment can be stood up or torn down in a matter of minutes without having to interact with any user-interface. I am also hopeful the lessons learned whilst working on this project will be applicable to other cloud native technologies, including Mesos and Cloud Foundry environments.

Logically, the project will create the following within my vSphere lab environment:

[Diagram: logical layout of the Kubernetes environments within the vSphere lab]

I will cover the NSX components in a future post, but essentially each Kubernetes environment will be attached to an HA pair of NSX Edges. The ECMP Edges and Distributed Logical Router are already in place, as they provide upstream network connectivity for my vSphere lab. The project will focus on the internal VXLAN-backed network, attached to the NSX HA Edge devices, which will provide inter-node network connectivity. The NSX Edge is configured to provide firewall, routing and DHCP services to all components inside its network space.

The plan for the project and the blog series is to document every facet of development and execution of the components, with the end goal being that anyone reading the series can understand how all the pieces interrelate with one another. The series will kick off with the following posts:

  • Bootstrapping CoreOS with Ignition
  • Understanding Ignition files
  • Using Ansible with Ignition
  • Building a Kubernetes cluster with Ansible
  • Deploying NSX components using Ansible
  • Deploying the full stack using Ansible

If time allows, I may also embark on migrating from NSX-V to NSX-T for providing some of the tenant software-defined networking.

I hope you enjoy the series!

[Introduction] [Part 1 – Bootstrap CoreOS with Ignition] [Part 2 – Understanding CoreOS Ignition] [Part 3 – Getting started with Ansible]

 

Read More

I had the opportunity to attend CoreOS Fest 2017 in San Francisco for a day this past week. There are lots of exciting things happening in the cloud native space, and CoreOS, with its heavy influence on Kubernetes, is at the forefront of much of the innovation. The conference itself was on the smaller side, but the number of sessions focused on emerging technology was impressive — I will be excited to see how it grows over the coming years. While there, I was able to attend the session by one of Adobe's Principal Architects — Frans van Rooyen. (Frans and I worked together at Adobe from 2012 to 2014.)

In his session, he spoke about several fundamental architecture principles and how they have been applied in the new multi-cloud initiative at Adobe. The platform they have built over the past two years is capable of being deployed inside a data center, inside AWS, inside Azure and even locally on a developer's laptop — while providing the same experience to the developer or operations engineer.

The platform is based on CoreOS and uses the Ignition project to provide the same level of provisioning regardless of which cloud platform the workload is deployed on. I had not previously heard of Ignition or how it operates, and it is a technology I will be investigating further. If you are interested in learning more, I encourage you to reach out to Frans on Twitter.
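From what I have read since, Ignition consumes a declarative JSON config at first boot and applies it before the OS is fully up. As a rough sketch (my own minimal example, not from Frans' session — the unit name and config version here are assumptions on my part), a config that enables a systemd unit looks something like this:

```shell
# Write a minimal, hypothetical Ignition v2.1 config that enables the
# etcd-member systemd unit on first boot. Ignition configs are plain JSON.
cat > example.ign <<'EOF'
{
  "ignition": { "version": "2.1.0" },
  "systemd": {
    "units": [
      { "name": "etcd-member.service", "enabled": true }
    ]
  }
}
EOF

# Because the config is plain JSON, any JSON tool can sanity-check the syntax:
python3 -m json.tool example.ign
```

Note this only proves the file parses as JSON; CoreOS provides its own tooling for real schema validation, which I plan to dig into in a later post.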

Frans has also spoken about the multi-cloud platform at Mesoscon, focusing on the inclusion of Apache Mesos — the session can be watched on YouTube.


Read More

In preparing for my recent VCDX Defense, I read a great many articles and a few books to better understand how to properly document and justify the design decisions I was making. One book in particular provided valuable insight that has helped me not just with the VCDX certification, but also in becoming a better Infrastructure Architect.

In IT Architect: Foundation in the Art of Infrastructure Design (Amazon link), the authors state:

“Design Decisions will support the project requirements directly or indirectly…When a specific technology is required to meet a design goal, justification is important and should be provided. With each design decision there is a direct, intended impact, but there are also other areas that may be affected…These options and their respective value can add quality to the design you make and provide insight into why you took a specific path.”

As I thought through the impact of each design decision, I tried to identify several key points, including:

  • Justification
  • Impact
  • Decision Risks
  • Risk Mitigation
  • Requirements Achieved

After I had identified each of those key points, and in some cases multiple points per category, I made sure they were properly documented. The book provided an example table to draw inspiration from, as did Derek Seaman in a blog article. I modified the examples to fit my writing style and then included a specific table for each design decision at the end of each major section or heading within my architecture documentation.

Here is an example of the table and categories, showing the reasoning behind a set of design decisions from my VMware Integrated OpenStack VCDX Architecture document:

[Image: sample design decision table from the VCDX Architecture document]

Now, when I need to revisit a design decision or another architect is reviewing the decisions within the design, there is additional information to provide insight into the thought process. It also helps to highlight what impact the decision has on the architecture as a whole.

Beyond the table and the relevant information for the design decision, it may be necessary to highlight the alternatives that were considered. As we know, there are usually multiple ways to meet a requirement — “showing your work” and being able to explain why you chose to do X versus Y in the VCDX Defense is an important aspect of the process. I found doing so within my documentation useful and you may find that to be true also.

Enjoy!

The opinions expressed in this article are entirely my own and based solely on my own VCDX certification experience. They may or may not reflect the opinions of other VCDX certification holders or the VMware VCDX program itself.


Arrasjid, John Y., Mark Gabryjelski, and Chris McCain. “Chapter 2, Design Decisions.” IT Architect: Foundation in the Art of Infrastructure Design; a Practical Guide for IT Architects. Upper Saddle River, NJ: IT Architect Resource, 2016. 49. Print.

Read More

During the process of writing the documentation necessary for the VCDX certification, I read several books and a fair number of blog articles. One article in particular that I found helpful was from Derek Seaman’s blog.

Sample VCDX-DCV Architecture Outline

In the spirit of paying it forward, I am going to share my own table of contents for others to use as a starting point. No two will be the same and some of the things I included may not be necessary in your own design — you may even feel there are sections that are missing from my own. If nothing else, I hope it can be a starting point for you in the journey towards earning the VCDX certification.

Enjoy!

The opinions expressed in this article are entirely my own and based solely on my own VCDX certification experience. They may or may not reflect the opinions of other VCDX certification holders or the VMware VCDX program itself.

Read More

OpenStack has been my world for the past 8 months. It started out with a work project to design and deploy a large-scale VMware Integrated OpenStack environment for internal use. It then became the design I would submit for my VCDX Defense and spend a couple hundred hours poring over and documenting. Since then it has included helping others get “up-to-speed” on how to operationalize OpenStack. One of the necessary tools is the ability to execute commands against an OpenStack environment from anywhere.

The easiest way to do that?

A short-lived Docker container with the clients installed!

The Dockerfile is short and to the point — it uses ubuntu:latest as the base image and simply adds the OpenStack clients.

# Docker container with the latest OpenStack clients
FROM ubuntu:latest

MAINTAINER chris@virtualelephant.com

RUN apt-get -y update && apt-get -y upgrade
RUN apt-get -y install python-openstackclient vim

Follow that up with a quick Docker command to launch the instance, and I’m ready to troubleshoot whatever issue may require my attention.

$ docker run -it chrismutchler/vio-client

While I am not a developer, I find creating these small, single-purpose Docker containers really fun. The ability to quickly spin up a container on my laptop, or on whatever VM I find myself on at the time, is priceless.
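To actually run commands against an environment, the container needs credentials. A sketch of how I pass them in — the endpoint, user and project values below are placeholders, and the variable names are the standard ones from an OpenStack openrc file:

```shell
# Hypothetical invocation: inject OpenStack credentials via the standard
# openrc environment variables, then run a client command inside the
# short-lived container. All auth values below are placeholders.
docker run -it --rm \
  -e OS_AUTH_URL=https://keystone.example.com:5000/v3 \
  -e OS_USERNAME=admin \
  -e OS_PASSWORD=changeme \
  -e OS_PROJECT_NAME=admin \
  -e OS_USER_DOMAIN_NAME=Default \
  -e OS_PROJECT_DOMAIN_NAME=Default \
  chrismutchler/vio-client \
  openstack server list
```

With --rm the container cleans up after itself, which fits the short-lived, troubleshoot-and-exit workflow described above.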

The repo can be seen on hub.docker.com/chrismutchler/vio-client.

If you need an OpenStack client Docker container, I hope you'll give this one a try. Enjoy!

Read More