“Nothing in the world is worth having or worth doing unless it means effort, pain, difficulty… I have never in my life envied a human being who led an easy life. I have envied a great many people who led difficult lives and led them well.” -Theodore Roosevelt

That quote from Theodore Roosevelt sums up the VCDX certification rather well. The VCDX certification takes a great deal of effort, pain and difficulty to accomplish. My personal journey included multiple defense attempts — much to my dismay and, ultimately, to my benefit. Fortunately, it was all worth it!

I am VCDX #257!

The VCDX certification requires a significant amount of time to earn. If I had to estimate it, I would say I spent well over 200 hours working on my design documentation, defense presentation, mock defenses, Q&A sessions and general research. The submitted design was also an actual work project, so some of that time investment was for my job — an added benefit not all candidates have.

The one lesson I would share with others thinking about or pursuing their own VCDX certification is this: be careful whose advice you seek or take. If they have not been a panelist in the past, their view into what to do (or not to do) is going to be mostly opinion. The VCDX program held a Q&A call the Friday before the defenses began in May.

On the call were Joe Silvagi, Simon Long and Karl Childs — all three are heavily involved in the program. The most frequent questions asked by the candidates started with the phrase, “My mentor says” or “The community says”. In nearly every instance, Joe's response was along the lines of, “That isn't right.”

Attend one (or more) VCDX workshops before submitting so you can ask questions, and reach out to the people running the workshops for trustworthy answers.

That’s all the advice I have to give.

There is an African proverb, displayed outside one of the VMware conference rooms, that says:

“If you want to go fast, go alone. If you want to go far, go together.”

This is true of the VCDX certification. I got to this point not because I went alone, but because I went with others.

My wife – No one on this earth has supported me more. She endured countless late nights over the past two decades as I strove to advance my career. This is as much her certification as it is mine.

Rich Steck (Adobe) – He mentored me during one of the most difficult years in my career. He challenged me to figure out where I wanted to go and to find paths to get there. Most importantly, he listened.

Frans van Rooyen (Adobe) – Already a brilliant cloud architect in his own right, he mentored me in my role as a Compute Platform Engineer for two years. He let me constantly challenge the decisions we were making (on the fly) as we built a rather large private cloud across the globe. He introduced me to VMware technologies and helped me gain the skills I would need to land my dream job at VMware two short years later.

Andrew Nelson (VMware) – While at Adobe, Frans introduced me to Andrew. Andy and I spoke at VMworld together in San Francisco and Barcelona in 2014. We briefly worked on a book together, during which time he told me that if I wanted a job at VMware, I would be surprised how quickly it would happen. I had an offer for my current role barely a month later.

OneCloud Architecture Team (VMware) – My dream job came with the opportunity to work with three double-VCDX holders. On the first architecture review board call I attended, they tore into another architect over his vRA design, and at that moment I knew I would have to step up my game significantly to play with them. What a blessing it has been to work with them for the past two years — each of them has helped me grow my skills as an architect immensely. They taught me to critically challenge a design decision, not for the sake of arguing, but to understand the rationale behind it.

Their support continued from afar as I went through the process of submitting and defending my design for my own certification. When I got the email saying I was now VCDX #257, they were right there celebrating my success with me.

Thank you to each of you for helping me realize my dreams and earn the VCDX certification!

 

Read More

I am currently working on building out a vPod nested ESXi lab environment that will be deployed through OpenStack’s Heat orchestration service. As I worked out the vPod application components, I realized that I wanted to include a single Linux VM that would run various services inside Docker containers.

I needed a Bind Docker container!

It seems like everything in a VMware SDDC environment needs both the forward and reverse records working properly — so I started here. The Docker container is completely self-contained — all external zone data is stored in S3 and downloaded when the container is built.

https://hub.docker.com/r/chrismutchler/vpod-bind/

The Dockerfile for the container contains the following code:

# Designed to be used in conjunction with a nested ESXi
# virtual lab environment deployed through an OpenStack
# Heat template.

FROM ubuntu:latest

MAINTAINER chris@virtualelephant.com

# Install BIND, DNS utilities and curl
RUN apt-get -y update && apt-get -y install bind9 dnsutils curl

# Pull the zone files and BIND configuration from S3 at build time
RUN curl https://s3-us-west-1.amazonaws.com/virtualelephant-vpod-bind/db.192.168 -o /etc/bind/db.192.168 && \
    curl https://s3-us-west-1.amazonaws.com/virtualelephant-vpod-bind/db.vsphere.local -o /etc/bind/db.vsphere.local && \
    curl https://s3-us-west-1.amazonaws.com/virtualelephant-vpod-bind/named.conf.options -o /etc/bind/named.conf.options && \
    curl https://s3-us-west-1.amazonaws.com/virtualelephant-vpod-bind/named.conf.local -o /etc/bind/named.conf.local

EXPOSE 53

CMD ["/usr/sbin/named", "-g", "-c", "/etc/bind/named.conf", "-u", "bind"]
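If you would rather build the image yourself than pull it down from Docker Hub, a standard docker build from the directory containing the Dockerfile is all that is needed (the tag below simply mirrors the Docker Hub repository name):

$ docker build -t chrismutchler/vpod-bind .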

To start the container, I set up the Ubuntu VM to execute the following command when it is deployed inside OpenStack.

# docker run -d -p 53:53 -p 53:53/udp chrismutchler/vpod-bind
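To verify the container is answering queries, a couple of quick dig lookups against it will exercise both the forward and reverse zones. The host IP, hostname, and address below are placeholders from my lab, so substitute records that exist in your own zone files:

$ dig @192.168.0.2 vcenter.vsphere.local +short
$ dig @192.168.0.2 -x 192.168.0.10 +short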

With the container running, it is able to provide the critical DNS service inside the vPod ESXi environment. From here it is on to building out the Heat template that will leverage the container.

Enjoy!

Read More

I am currently pursuing my VCDX certification and the design I have submitted is based on VMware Cloud Foundation and VMware Integrated OpenStack. As part of the required documentation, I included a deployment guide — unfortunately, it is not as simple as laying down the SDDC components and the VIO vApp for the deployment.

This blog post will cover a couple items that are needed to get the two pieces playing together.


Shared Edge & Workload Cluster

The VCF architecture currently has a limitation that a vCenter Server can only manage a single vSphere cluster — it is a 1:1 relationship. VMware Integrated OpenStack, however, requires either three clusters in a single vCenter Server, or a management cluster in one vCenter Server instance and two clusters in a second vCenter Server. Neither of these layouts fits within the VCF 1:1 limitation out of the box.

In order to make it work, we are going to use a two vCenter Server deployment of VMware Integrated OpenStack and modify the OMS server to combine the NSX Edge and Workload Clusters into one. We do this by editing a single configuration file and restarting the oms service running on the VIO vApp Management (OMS) VM.

$ cd /opt/vmware/vio/etc
$ sudo vim moms.properties

Add the following line to the end of the file:
oms.allow_shared_edge_cluster = true

$ sudo restart oms
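If you want to double-check the change, a quick grep will confirm the flag is present in the file:

$ sudo grep allow_shared_edge_cluster /opt/vmware/vio/etc/moms.properties
oms.allow_shared_edge_cluster = true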

VMware Integrated OpenStack can now be deployed on top of VMware Cloud Foundation.


VXLAN-backed External Network

This one is a bit trickier and is an obstacle whether or not you are using VMware Cloud Foundation as the infrastructure layer.

Logically, the end result is for the OpenStack external network to attach to a VXLAN port group created by NSX. The NSX logical switch is attached to an internal interface on an NSX Distributed Logical Router.

The following is the logical diagram for the architecture.

[Figure: logical diagram of the OpenStack external network architecture]

The issue is that during the deployment of an OpenStack instance using VMware Integrated OpenStack, you have to specify an external network. However, VMware Integrated OpenStack will not allow a vSphere Administrator to select a VXLAN port group during the deployment. I got around this by creating a non-VXLAN port group on the DVS used only for the deployment.

Once the OpenStack deployment is complete, I needed to attach the actual VXLAN-backed port group as the external network.

SSH to the OMS server
$ ssh -l viouser oms.domain.local

SSH to an OpenStack controller VM
$ ssh controller01
$ sudo cp /root/cloudadmin_v3.rc .
$ source cloudadmin_v3.rc
$ neutron

(neutron) net-list
(neutron) net-create --provider:network_type=portgroup --provider:physical_network=virtualwire-XX vio-external-network
(neutron) net-list

The network will now appear in the OpenStack network list. Go ahead and create your subnet for the external IP addresses, based on the network assignment in your environment.
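As an example, from the same neutron shell the subnet creation might look like the following. The CIDR, gateway, and allocation pool here are placeholders, so substitute the values assigned to your environment:

(neutron) subnet-create vio-external-network 192.168.100.0/24 --name vio-external-subnet --gateway 192.168.100.1 --allocation-pool start=192.168.100.50,end=192.168.100.200 --disable-dhcp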

If you have questions or issues with implementing these changes in your environment, please reach out.

Read More

 

Note: The following blog post is only relevant to VMware Integrated OpenStack deployments.

In a pilot environment running VMware Integrated OpenStack (VIO) v3.0, one of the ESXi management nodes experienced a network isolation event. As a result, vSphere HA responded accordingly and began restarting the VMs from the isolated ESXi node onto other ESXi nodes. The isolated ESXi node happened to be running the secondary VIO controller VM. When the VIO controller VM was restarted through vSphere HA on the new ESXi node, the operating system came online quickly. However, the vRealize Operations dashboard for OpenStack still reported the services in a critical state.

[Figure: vRealize Operations OpenStack Services dashboard]

When I logged into the secondary OpenStack controller VM, I noticed there were no OpenStack services running. That’s not good.

Digging in a bit deeper, I logged into the VIO management VM and ran a viocli command to check the status of the environment and see what other issues might exist.
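For reference, viocli provides a deployment status check that can be run from the management server (the syntax here assumes the VIO 3.x tooling):

$ sudo viocli deployment status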

It just so happens that this is expected behavior in the current VIO release when a vSphere HA event occurs. I wouldn't classify that as ideal at all, but it's software and sometimes we have to work around limitations.

In order to restart the services on the secondary controller VM, there are two options:

  1. Restart the entire OpenStack management stack.
  2. Restart just the affected controller VM.

Both require use of the vSphere Web Client with the corresponding VIO plugin.


Select the broken management VM, open the ‘All Actions’ drop-down menu at the top, and then select ‘Restart services’. A small pop-up window will appear to verify this is the action you wish to take.

Once the services are restarted, the entire OpenStack management stack should once again be functional.

One caveat worth noting: if the services have been stopped on the database nodes, restarting a single management VM through the UI may not re-establish the stack, and a complete restart of the entire stack may still be required.

The vRealize Operations dashboards can play an integral role in a VMware Integrated OpenStack environment, allowing the services to be monitored remotely.

Read More

VMware Integrated OpenStack (VIO) enables SSL encryption by default and is installed with a self-signed certificate. To replace it with a certificate from a trusted CA, the VIO management VM includes command-line tools for the vSphere administrator.

The first step is to generate the CSR for the environment.

$ sudo viocli deployment cert-req-create

The workflow will ask for some details and then output the CSR, which you can provide to your trusted CA of choice.

After you receive your signed certificate, concatenate all of the CRT files into a single file.

$ cat intermediate1.crt intermediate2.crt root.crt server.crt >> /path/certificate.crt

The final step is to push the new certificate out to the VIO Load Balancers running in the environment.

$ sudo viocli deployment cert-update -p -f /path/certificate.crt


Once completed, you can verify that the new certificate was installed properly by logging into the Load Balancer VMs.

$ ssh usa1-2-violb1
$ cd /etc/ssl
$ sudo cat vio.pem
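Rather than eyeballing the raw PEM, openssl can decode the certificate and confirm the subject, issuer, and validity dates. This assumes the certificate lives at /etc/ssl/vio.pem as shown above:

$ sudo openssl x509 -in /etc/ssl/vio.pem -noout -subject -issuer -dates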

Enjoy!

Read More