Tag: vDM30in30

The vDM30in30 challenge is complete! It was quite an experience posting almost every single day over the course of November. I feel good about the quality of the posts I put out, especially some of the VCDX-related ones.

I found that I posted about OpenStack and VMware NSX far more than I had initially intended, and I did not have any posts related to Photon Platform, Mesosphere or Docker, which is a bit disappointing. The lesson learned is that most of my posts came from whatever I was involved in at work during the day.

There are a few outstanding topics I still want to write up before the end of the year, including:

  • VMware Integrated Containers
  • Network Link-State Tracking
  • NSX ECMP Edges with OSPF Dynamic Routing

Having previously worked at Adobe in the Digital Marketing business unit, I've learned how important analytics are. Looking at the numbers for the month, the challenge drove a significant increase in visits and post reads for the site. The top 5 posts over the course of the month were:

  1. IGMP, Multicast and learning a lesson – 128 views
  2. Multi-tenant OpenStack with NSX – Part 1 – 91 views
  3. NSX DLR Designated Instance – 89 views
  4. OpenStack Alert Definitions in vRealize Operations – 83 views
  5. VMware Storage I/O Control (SIOC) Overview – 78 views

Well, until next year. Enjoy.


OpenStack Neutron likes to use some pretty awesome reference IDs for its tenant network objects. You know, helpful strings like ec43c520-bfc6-43d5-ba2b-d13b4ef5a760. The first time I saw one, I said to myself, "that is going to be a nightmare when trying to troubleshoot an issue."

[Screenshot: a Neutron tenant network and its reference ID]

Fortunately, VMware NSX also uses a similar character string when it creates logical switches. If NSX is being used in conjunction with OpenStack Neutron, magic happens. The logical switch is created with a string like vxw-dvs-9-virtualwire-27-sid-10009-ec43c520-bfc6-43d5-ba2b-d13b4ef5a760.

[Screenshot: the NSX logical switch, named with the same reference ID]

A keen eye will have noticed the OpenStack Neutron reference ID is included in the NSX logical switch name!

From there you can reference the NSX Edge virtual machines and see which interface the NSX logical switch is attached to. This tidbit of information proved useful today when I was troubleshooting an issue for a developer and is a piece of information going into my VCDX SOP document.

[Screenshot: the NSX Edge interface attached to the logical switch]
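To tie it together, here is a quick sketch of the lookup from the CLI. The network name web-tier is hypothetical, and the ID is the example from above:

$ neutron net-show web-tier -F id   # grab the Neutron reference ID
# id: ec43c520-bfc6-43d5-ba2b-d13b4ef5a760
# Then search the vSphere networking inventory for a portgroup whose name
# contains that ID, e.g.:
#   vxw-dvs-9-virtualwire-27-sid-10009-ec43c520-bfc6-43d5-ba2b-d13b4ef5a760
# From there, check the NSX Edge interfaces for an attachment to that
# logical switch.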

Enjoy!


Caution, this post is highly opinionated.

I am deep into the process of completing my VCDX design documentation and application for (hopefully) a Q2 2017 defense. As it happens, a short conversation took place on Twitter today regarding a post on the VMware Communities site for the VMware Validated Design for SDDC 3.x, including a new design decision checklist.

[Screenshot: the Twitter conversation]

The latest version of the VMware Validated Design (VVD) is a pretty awesome product for customers to reference when starting out on their private cloud journey. That being said, it is by no means a VCDX design or a set of materials that could simply be re-purposed for a VCDX design.

Why? Because there are no customer requirements.

For the same reason hypothetical (or fake) designs are often discouraged by people in the VCDX community, the VVD suffers from this issue as well. In a vacuum you can make any decision you want, because there are no ramifications from your design decisions. In the real world this is simply not the case.

Taking a look at the Design Decisions Checklist, it walks through the more than 200 design decisions made in the course of developing the reference architecture. The checklist does a good job of laying out the fields each design decision covers, like:

  • Design Decision
  • Design Justification
  • Design Implication

Good material. But if you have read my other post on design decisions, which you may or may not agree with, you know it argues that a design justification is made based on a requirement.

Let’s take a look at just one of the design decisions made by the VVD product and highlighted in the checklist.

[Screenshot: a design decision from the VVD checklist]

The decision is to limit a single compute pod to a single physical rack, as in no cross-rack clusters. That sounds like a reasonable decision, especially if the environment has a restriction on L2 boundaries or some other requirement. But what if I have a customer requirement that says a compute node must be able to join any compute pod (cluster) regardless of its physical rack location within the data center?

Should I ignore that requirement because the VVD says to do otherwise?

Of course not.

My issue with the Twitter conversation is two-fold:

  1. The VVD design decisions are not in fact design decisions, but design recommendations. They can be used to help a company, group or architect determine, based on their requirements, which of these "decisions" should be leveraged within their environment. They are not hard-and-fast decisions that must be adhered to.
  2. From a VCDX perspective, blindly assuming you could copy/paste any of these design decisions and use them in a VCDX defense is naive. You must have a justification for every design decision made and it has to map back to a customer requirement, risk or constraint.

I also do not think that is what the original commenter was saying when he initially responded to the tweet about the checklist. I do think, though, that some people may actually believe they can just take the VVD, wrap it in a bow and call it good.

My suggestion is to take the VVD design documentation and consider it reference material, just like the many other great books and online resources available to the community. It won’t work for everyone, because every design has different requirements, constraints and risks. Take the bits that work for you and expand upon them. Most importantly, understand why you are using or making that design decision.

Let me know what you think on Twitter.

Again, this post is highly opinionated, from my own limited perspective. Do not mistake it for the opinion of VMware or any VCDX-certified individuals.


VMware has a lot of products that compose the SDDC stack. Out of all of them (after the foundational ESXi), vRealize Log Insight has become my absolute favorite. It is one of the first things I deploy inside any new environment, as soon as vCenter Server is online and sometimes before. Its ability to parse, collect and search log messages throughout the stack, while providing easy-to-use dashboards, makes it so much more powerful than some of its competitors.
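As a quick example of how little it takes to start collecting, an ESXi host can be pointed at a Log Insight instance with a couple of esxcli commands. The hostname below is a placeholder for your own deployment:

$ esxcli system syslog config set --loghost='udp://loginsight.lab.local:514'   # hypothetical FQDN
$ esxcli system syslog reload   # apply the new syslog configuration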

Steve Flanders blogs about several VMware-related topics, and the Log Insight posts are the highlight of his site. Oftentimes his site is my first stop when I have a question — even before the documentation.

Steve maintains a quick reference page for his Log Insight posts on his site.

The power of Log Insight can easily be realized within any environment, and it becomes even greater with the installation of Content Packs. Content Packs can be downloaded directly through the UI and include components for both VMware products and third-party products.

Some of the Content Packs I find myself relying on include:

  • VMware NSX
  • OpenStack
  • Synology (for the home lab users out there)
  • Nginx
  • vRealize Operations

There are dozens more, and it is even possible to write your own Content Packs. The VMware Developer Center provides information on how to do so.

If you are thinking about using Log Insight, or just looking for new information on how to better utilize it in your environment, I highly encourage you to head over to SFlanders.net and check out all the resources there. Also, give Steve a follow on Twitter.

Enjoy!


Continuing the OpenStack + NSX series (Part 1, Part 2 and Part 3) on deploying a multi-tenant OpenStack environment that relies upon NSX, this post will cover the details of the deployment and configuration.

A couple of options have been discussed throughout the series, including a logical design that relies on an NSX DLR without ECMP Edges:

[Diagram: logical design with an NSX DLR, without ECMP Edges]

Or a logical virtual network design with a DLR and ECMP Edges:

[Diagram: logical design with a DLR and ECMP Edges]

Regardless of which virtual network design you choose, the NSX Distributed Logical Router and its tie-in to OpenStack will need to be configured. In the course of building out a few VMware Integrated OpenStack labs, proofs-of-concept and pilot environments, I've learned a few things.

Rather than go through all 30+ steps to implement the entire stack, I want to simply highlight a few key points. When you configure the DLR, you should end up with two interfaces: an uplink to either the ECMP layer or the physical VLAN, and an internal interface to the OpenStack external VXLAN network.

[Screenshot: the DLR interface configuration]

Once the DLR is deployed, you can log into any of the ESXi hosts within the NSX transport zone and verify the routes are properly in place with a few simple CLI commands.

[Screenshot: verifying the DLR routes on an ESXi host]
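For reference, the verification typically looks something like the following on an ESXi host with the NSX VIBs installed. The instance name default+edge-1 is a placeholder, and the exact flags can vary by NSX version:

$ net-vdr --instance -l   # list the DLR (VDR) instances present on this host
$ net-vdr --route -l default+edge-1   # dump the route table for the instance
# Confirm the OpenStack external subnet and the default route appear with
# the expected next hops.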

The NSX components are now ready to be tied into OpenStack. I prefer the API method via the neutron CLI: log into the VIO management server and from there into either of the Controller VMs.

[Screenshots: creating the external network with the neutron CLI]

Key points to remember here:

  • The physical_network parameter is just the virtualwire-XX string from the NSX-created portgroup.
  • The name for the network must exactly match the NSX Logical Switch that was created for the OpenStack external network.

The commands I used here to create the network inside OpenStack:

$ source <cloudadmin_v3>   # load the cloud admin credentials
$ neutron net-list   # note the existing networks
$ neutron net-create --provider:network_type=portgroup --provider:physical_network=virtualwire-XX nsx_logical_switch_name
$ neutron net-list   # verify the new external network appears

All that remains is adding a subnet to the external network inside OpenStack, which can be performed through the neutron CLI or the Horizon UI, as sketched below. All in all it is a pretty easy implementation; just make sure you reference the proper object names in NSX when creating the OpenStack network objects.
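A minimal sketch of that last step with the neutron CLI, using example addressing (192.0.2.0/24 is a documentation range; substitute your real external subnet):

$ neutron subnet-create --name external-subnet --disable-dhcp \
    --allocation-pool start=192.0.2.10,end=192.0.2.100 \
    --gateway 192.0.2.1 nsx_logical_switch_name 192.0.2.0/24
# DHCP is disabled here because the external network is typically consumed
# for floating IP allocation, not instance DHCP.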

Enjoy!
