I had the opportunity to attend CoreOS Fest 2017 in San Francisco for a day this past week. There are lots of exciting things happening in the cloud-native space, and CoreOS, with its heavy involvement in Kubernetes, is at the forefront of much of the innovation. The conference itself was on the smaller side, but the number of sessions focused on emerging technology was impressive; I will be excited to see how it grows over the coming years. While there, I was able to attend a session by one of Adobe's Principal Architects, Frans van Rooyen. (Frans and I worked together at Adobe from 2012 to 2014.)
In his session, he spoke about several fundamental architecture principles and how they have been applied in the new multi-cloud initiative at Adobe. The platform they have built over the past two years can be deployed inside a data center, inside AWS, inside Azure, and even locally on a developer's laptop, while providing the same experience to the developer or operations engineer.
The platform is based on CoreOS and uses the Ignition project to provide the same level of provisioning regardless of which cloud platform the workload is deployed on. I had not heard of Ignition before, or how it performs that level of provisioning, and it is a technology I will be investigating further. If you are interested in learning more, I encourage you to reach out to Frans on Twitter.
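To give a flavor of what makes Ignition platform-agnostic (this is a generic illustration, not Adobe's configuration): Ignition reads a declarative JSON config on a machine's first boot and provisions disks, files, and systemd units the same way on any platform. A minimal config that writes a hostname file might look like this, using the 2.0 config spec that was current at the time; the hostname value is a placeholder:

```json
{
  "ignition": { "version": "2.0.0" },
  "storage": {
    "files": [
      {
        "filesystem": "root",
        "path": "/etc/hostname",
        "mode": 420,
        "contents": { "source": "data:,app-node-01" }
      }
    ]
  }
}
```

Because the same config is interpreted identically whether the instance boots in AWS, Azure, a data center, or a local VM, the resulting machine looks the same everywhere.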
Frans has also spoken about the multi-cloud platform at Mesoscon, focusing on the inclusion of Apache Mesos — the session can be watched on YouTube.
I am currently pursuing my VCDX certification and the design I have submitted is based on VMware Cloud Foundation and VMware Integrated OpenStack. As part of the required documentation, I included a deployment guide — unfortunately, it is not as simple as laying down the SDDC components and the VIO vApp for the deployment.
This blog post will cover a couple items that are needed to get the two pieces playing together.
Shared Edge & Workload Cluster
The VCF architecture currently has a limitation that each vCenter Server can manage only a single vSphere cluster; it is a 1:1 relationship. VMware Integrated OpenStack, on the other hand, requires either three clusters in a single vCenter Server, or a management cluster in one vCenter Server instance and two clusters (Edge and Workload) in a second vCenter Server. Neither of these options fits within the VCF limitation as-is.
In order to make it work, we are going to use the two vCenter Server deployment model of VMware Integrated OpenStack and modify the OMS server to combine the NSX Edge and Workload clusters into one. We do this by editing a single configuration file and restarting the oms service running on the VIO vApp Management (OMS) VM.
$ cd /opt/vmware/vio/etc
$ sudo vim moms.properties
Add the following line to the end of the file:
oms.allow_shared_edge_cluster = true
$ sudo restart oms
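If you prefer to script the same change rather than edit the file interactively, the append can be made idempotent; the path and property below are the same ones used in the steps above:

```shell
# Append the property only if it is not already present, then restart oms.
CONF=/opt/vmware/vio/etc/moms.properties
grep -q '^oms.allow_shared_edge_cluster' "$CONF" || \
  echo 'oms.allow_shared_edge_cluster = true' | sudo tee -a "$CONF" > /dev/null
sudo restart oms
```

The `grep -q` guard means re-running the script will not duplicate the line.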
VMware Integrated OpenStack can now be deployed on top of VMware Cloud Foundation.
VXLAN-backed External Network
This one is a bit trickier and is an obstacle whether or not you are using VMware Cloud Foundation as the infrastructure layer.
Logically, the end result is for the OpenStack external network to attach to a VXLAN port group created by NSX. The NSX logical switch is attached to the internal interface of an NSX Distributed Logical Router.
The following is the logical diagram for the architecture.
The issue is that, during the deployment of an OpenStack instance using VMware Integrated OpenStack, you have to specify an external network; however, VMware Integrated OpenStack will not allow a vSphere Administrator to select a VXLAN port group during the deployment. I worked around this by creating a non-VXLAN port group on the DVS that is used only for the deployment.
Once the OpenStack deployment is complete, I needed to attach the actual VXLAN-backed port group as the external network.
SSH to the OMS server
$ ssh -l viouser oms.domain.local
SSH to an OpenStack controller VM
$ ssh controller01
$ sudo cp /root/cloudadmin_v3.rc .
$ source cloudadmin_v3.rc
$ neutron
(neutron) net-create --provider:network_type=portgroup --provider:physical_network=virtualwire-XX vio-external-network
The network will now appear in the OpenStack network list. Go ahead and create your subnet for the external IP addresses, based on the network assignment in your environment.
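As an illustration (the CIDR, gateway, and allocation pool below are placeholders; substitute the assignments from your environment), the subnet can be created with the neutron CLI from the same controller session:

```shell
# Placeholder addressing; DHCP is disabled because this is an external network.
neutron subnet-create vio-external-network 192.0.2.0/24 \
  --name vio-external-subnet \
  --gateway 192.0.2.1 \
  --disable-dhcp \
  --allocation-pool start=192.0.2.50,end=192.0.2.200
```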
If you have questions or issues with implementing these changes in your environment, please reach out.
The vDM30in30 challenge is complete! It was an amazing challenge to post almost every single day over the course of the month of November. I feel good about the quality of posts I put out, especially some of the VCDX related posts.
I found that I posted about OpenStack and VMware NSX far more than I had initially intended. I also did not have any posts related to Photon Platform, Mesosphere or Docker, which is a bit disappointing. However, I learned that most of the posts came from what I was involved in at work during the day.
There are a few outstanding topics I still want to write up before the end of the year, including:
VMware Integrated Containers
Network Link-State Tracking
NSX ECMP Edges with OSPF Dynamic Routing
Having previously worked in Adobe's Digital Marketing business unit, I've learned how important analytics are. Looking at the site's analytics for the month, both visits and post reads increased significantly. The top 5 posts over the course of the month were:
VMware has a lot of products that compose the SDDC stack. Out of all of them, after the foundational ESXi, vRealize Log Insight has become my absolute favorite product. It is one of the first things I deploy inside any new environment, as soon as vCenter Server is online and sometimes before. Its ability to parse, collect, and search log messages throughout the stack, while providing easy-to-use dashboards, makes it so much more powerful than some of its competitors.
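As a small example of how quickly it becomes useful (the FQDN below is a placeholder for your Log Insight instance), pointing an ESXi host's syslog at Log Insight takes only a few esxcli commands:

```shell
# Forward this ESXi host's logs to a Log Insight instance (placeholder FQDN),
# reload the syslog daemon, and allow outbound syslog through the host firewall.
esxcli system syslog config set --loghost='udp://loginsight.lab.local:514'
esxcli system syslog reload
esxcli network firewall ruleset set --ruleset-id=syslog --enabled=true
```

Within a minute or two the host's log stream shows up in the Log Insight interactive analytics view.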
Steve Flanders blogs about several VMware-related topics, and the Log Insight posts are the highlight of his site. Oftentimes his site is my first stop when I have a question — even before the documentation.
Steve has a quick reference for his Log Insight posts, which can be found here.
The power of Log Insight can easily be realized within any environment, and it becomes even more powerful with the installation of Content Packs. Content Packs can be downloaded directly through the UI and include integrations for both VMware products and third-party products.
Some of the Content Packs I find myself relying on include:
Synology (for the home lab users out there)
There are dozens more, and it is even possible to write your own Content Packs. The VMware Developer Center provides information on how to do so.
If you are thinking about using Log Insight, or just looking for new information on how to better utilize it in your environment, I highly encourage you to head over to SFlanders.net and check out all the resources there. Also, give Steve a follow on Twitter.
The VMware Cloud Foundation (VCF) platform automates the deployment and lifecycle management of the SDDC. The deployment will help an organization go from installation of the physical rack to a ready-for-deployment vSphere environment in a matter of hours. The VCF platform includes the following VMware products:
VMware vSphere Hypervisor (ESXi)
VMware vCenter Server
VMware NSX
VMware Virtual SAN
VMware vRealize Operations
VMware vRealize Log Insight
As the previous post mentioned, there are several management components VCF relies upon for its automation and workflow framework. After the initial deployment is complete, a vSphere Administrator will still need to perform several tasks to fully configure the environment and make it ready for a production workload. Some of those steps include:
Configuring LDAP or Active Directory authentication sources.
Creating local accounts.
Configuring the network uplinks on the physical network equipment.
Configuring NSX and/or the Virtual Distributed Switch for upstream network connectivity.
Configuring a jump host for accessing the OOB network where the iDRAC interfaces exist.
Multiple jump hosts will be required, one for each physical rack, since the OOB network is duplicated within each rack.
Configuring NIOC (Network I/O Control).
Configuring the Resource Pools VCF creates; no reservations or shares exist after the initial deployment.
Configuring Log Insight Content Packs, where necessary.
Configuring vRealize Operations.
Adjusting the Virtual SAN storage policies per your environment's requirements.
A few key points to remember:
Do not modify the cluster structure outside the VRM workflows — which means no creating new clusters or splitting existing clusters up.
Do not modify the names of any of the management virtual machines.
Do not modify the name of the Virtual Distributed Switches.
Do not modify the pre-configured portgroup names.
All expansion of hosts/capacity needs to be initiated from the VRM interface.
The management cluster initially deploys with only 3 nodes, barely enough for any true fault tolerance with Virtual SAN. I highly encourage you to expand it to the VMware-recommended best practice of 4 hosts.
Upgrades always occur in the management cluster first, then the workload domains — which I personally believe to be a bit backwards.
The VCF product is a great first step along the path of fully automated deployments and lifecycle management. The biggest challenge to adopting it will be balancing the line between what VCF manages and what a typical vSphere Administrator is going to be used to doing. Operationally it will take some adjustment, especially when using the lifecycle management workflows for the first time.