IGMP, Multicast and learning a lesson

In a recent conversation, it became clear to me that my knowledge of the inner workings of VXLAN and VSAN was not as deep as it could be. Since I am also studying for my VCAP exams, I knew additional time educating myself on these two technologies was a necessity. As a result, I’ve spent the last day diving into the IGMP protocol, multicast traffic and how they are utilized within both VXLAN and VMware VSAN. I wanted to capture what I’ve learned in a blog post, as much for myself as for anyone else who might be interested in the subject. Writing down what I’ve learned is one way I can absorb and retain information long-term.

IGMP

IGMP is a layer 3 network protocol used to establish multicast group memberships. It is encapsulated within an IP packet and does not use a transport layer, similar to ICMP. It is also used to register a router for receiving multicast traffic. There are two important pieces within the IGMP protocol that VXLAN and VSAN take advantage of: the IGMP Querier and IGMP Snooping. Without these two pieces, multicast would act as little more than a broadcast transmission and lack the efficiency required.
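
To make the membership mechanics concrete, here is a minimal Python sketch (the group address and port are made up for illustration) that joins a multicast group using the standard socket API. The act of joining is what causes the host’s IP stack to send an IGMP Membership Report, which the Querier and any snooping switches use to build their membership tables.

```python
import socket
import struct

GROUP = "239.1.2.3"  # hypothetical multicast group address
PORT = 5000          # hypothetical UDP port

# Create a UDP socket and bind to the multicast port.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# Joining the group is what triggers the kernel to send an IGMP
# Membership Report; the Querier and any snooping switches see it
# and begin forwarding the group's traffic toward this host.
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

print(f"Joined {GROUP}; waiting for a multicast datagram...")
data, sender = sock.recvfrom(1500)
print(f"Received {len(data)} bytes from {sender}")
```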

IGMP Querier

The IGMP Querier is the router or switch that acts as the master for the IGMP filter lists. It checks and tracks membership by sending queries at a timed interval.

IGMP Snooping

On a layer 2 switch, IGMP Snooping allows for passive monitoring of the IGMP packets sent between routers and hosts. The switch uses what it observes to forward multicast frames only to the ports that have joined a given group, and because the monitoring is passive, it adds no additional traffic to the wire. This makes multicast traffic passing through the network far more efficient.

VXLAN

That’s IGMP in a nutshell — so how is it used in VXLAN?

In order for VXLAN to act as an overlay network, multicast traffic is used to enable the L2-over-L3 connectivity, effectively spanning the entire logical network VXLAN has defined. When a virtual machine connects to a VXLAN logical network, it behaves as though everything is within a single broadcast domain. The ESXi hosts configured for VXLAN register themselves as VTEPs, and only the VTEPs that register with a given VXLAN logical network participate in its multicast traffic. This is accomplished through IGMP Snooping and the IGMP Querier. If you have 1000 ESXi hosts configured for VXLAN, but only a subset (say 100) of the hosts participate in a specific VXLAN logical network, you would not want to send that network’s multicast traffic out to all 1000 ESXi hosts; doing so would unnecessarily increase the multicast traffic on the network.
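
To give a feel for what that L2-over-L3 framing looks like, here is a rough Python sketch of the encapsulation a VTEP performs per RFC 7348. Everything here is illustrative: the VNI, the multicast group and the inner frame are invented, and a real VTEP does this in the hypervisor kernel, not in a user-space script. The point is that broadcast and unknown traffic for a segment is wrapped in UDP and sent to the segment’s multicast group, so only VTEPs that joined that group via IGMP ever receive it.

```python
import socket
import struct

VNI = 5001           # hypothetical VXLAN Network Identifier
GROUP = "239.1.1.1"  # hypothetical multicast group for this logical network
VXLAN_PORT = 4789    # IANA-assigned VXLAN UDP port

def vxlan_encapsulate(inner_frame: bytes, vni: int) -> bytes:
    """Prepend the 8-byte VXLAN header to an inner Ethernet frame.

    First 32-bit word: flags, with the I bit (0x08 in the first byte)
    set to mark a valid VNI. Second word: the 24-bit VNI in the upper
    three bytes.
    """
    header = struct.pack("!II", 0x08 << 24, vni << 8)
    return header + inner_frame

# Placeholder for an inner L2 broadcast frame (e.g. an ARP request):
# broadcast destination MAC, zeroed source MAC, ARP EtherType.
inner_frame = b"\xff" * 6 + b"\x00" * 6 + b"\x08\x06" + b"payload"

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 16)
sock.sendto(vxlan_encapsulate(inner_frame, VNI), (GROUP, VXLAN_PORT))
```

On the receiving side, each VTEP that joined the group strips the 8-byte header and delivers the inner frame into its local copy of the VXLAN segment.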

There is a really good VMware Blog 4-part series on VXLAN and how it operates here.

VMware VSAN

The implementation for VSAN is very similar to that of VXLAN. A VSAN cluster requires a method for learning which ESXi hosts are adjacent to each other and participating in the cluster. VMware uses layer 2 multicast traffic for this host discovery within VSAN.

Once again, the IGMP Querier and IGMP Snooping play a beneficial role. VMware states that implementing multicast flooding is not a best practice. By leveraging both IGMP Snooping and an IGMP Querier, VSAN is able to understand which hosts want to participate in the multicast group. This is particularly beneficial when multiple network devices exist on the same VLAN that VSAN is operating on.

If you have multiple VSAN clusters operating on the same VLAN, it is recommended you change the multicast group addresses so they are not identical. This prevents one VSAN cluster from receiving another cluster’s traffic, and it can also help prevent the Misconfiguration detected error under the Network status section of a VSAN cluster.
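
As a rough illustration of the isolation that distinct group addresses provide, the toy Python sketch below (the group addresses, port and heartbeat payloads are all invented; VSAN’s actual discovery traffic looks nothing like this) joins only cluster A’s group. A datagram sent to cluster B’s group is never delivered to it, because the host holds no membership in that group.

```python
import socket
import struct

CLUSTER_A_GROUP = "239.10.10.10"  # hypothetical group for cluster A
CLUSTER_B_GROUP = "239.20.20.20"  # hypothetical group for cluster B
PORT = 23451                      # hypothetical port

# Receiver joined only to cluster A's group.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
rx.bind(("", PORT))
mreq = struct.pack("4s4s", socket.inet_aton(CLUSTER_A_GROUP),
                   socket.inet_aton("0.0.0.0"))
rx.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

# This datagram goes to cluster B's group; the receiver never sees it
# because no socket on this host joined that group.
tx.sendto(b"cluster-b-heartbeat", (CLUSTER_B_GROUP, PORT))

# This one goes to cluster A's group and is delivered (multicast
# loopback is enabled by default on most systems).
tx.sendto(b"cluster-a-heartbeat", (CLUSTER_A_GROUP, PORT))

data, sender = rx.recvfrom(1500)
print(f"received {data!r} from {sender}")  # only the cluster A heartbeat
```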

For a better understanding of how VSAN operates, please check out the VMware blog entry here.

For a seasoned network professional, I highly doubt any of this was new or mind-blowing. For someone who does not generally dive into the various network protocols, but should probably start doing so, this was both a good refresher on IGMP and a help in understanding VXLAN and VSAN a bit better.

Did I get something wrong? Let me know on Twitter.

Thoughts on the VMware Project Photon Announcement

VMware announced a new open source project called Project Photon today. The full announcement can be seen here. Essentially, Project Photon is a lightweight Linux operating system built to support Docker and rkt (formerly Rocket) containers. The footprint is less than 400MB, and it can run containers immediately upon instantiation. I had heard rumors the announcement today was going to include some sort of OS, but I was not very excited about it until I started reading the material being released prior to the launch event in a few hours.

Having seen the demo and read the material, my mind went into overdrive over the possibilities both open source projects offer organizations that are venturing down the Cloud Native Apps (or Platform 3) road. I believe VMware has a huge opportunity here to cement themselves as the foundation for running robust, secure and enterprise-ready Cloud Native Applications. If you think about the performance gains vSphere 6.0 has provided, and then look at how they are playing in the OpenStack space with both VIO and NSX, the choice becomes obvious.

The area of focus now needs to be on tying all of the pieces together to offer organizations an enterprise-class, end-to-end Platform-as-a-Service solution. This is where, I believe, the VMware Big Data Extensions framework should play an enormous part. The framework already allows deployment of Hadoop, Mesos and Kubernetes clusters. Partner the framework with Project Photon and you now have a minimal-installation VM that can be launched within seconds with VMFork. From there, the resource plugin Virtual Elephant launched today could be mainstreamed (and improved) to allow for the entire deployment of a Mesos stack, backed by Project Photon, through the open source Heat API that OpenStack offers.
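
To make that idea a little more concrete, here is a purely hypothetical sketch of driving such a deployment through Heat from Python using python-heatclient. Everything in it is a placeholder: the endpoint, token, image name and network are invented, the Virtual Elephant plugin’s actual resource types are not shown, and a real Mesos-on-Photon stack would need far more than a single OS::Nova::Server resource.

```python
from heatclient import client as heat_client

HEAT_URL = "http://openstack.example.com:8004/v1/tenant-id"  # placeholder endpoint
TOKEN = "keystone-token-goes-here"                           # placeholder auth token

# A minimal HOT template that boots one server from a (hypothetical)
# Photon image; a real template would define the full Mesos cluster.
template = {
    "heat_template_version": "2013-05-23",
    "description": "Illustrative sketch: boot a single Project Photon node",
    "resources": {
        "photon_node": {
            "type": "OS::Nova::Server",
            "properties": {
                "name": "photon-mesos-node",
                "image": "project-photon",             # hypothetical Glance image
                "flavor": "m1.small",
                "networks": [{"network": "private"}],  # hypothetical network
            },
        }
    },
}

heat = heat_client.Client("1", endpoint=HEAT_URL, token=TOKEN)
heat.stacks.create(stack_name="mesos-photon-sketch", template=template)
```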

Epic win!

There is still work VMware could do with the Big Data Extensions framework to improve its capabilities, especially with newcomers like SequenceIQ and their Cloudbreak offering stiff competition. Expanding BDE to deploy clusters not only within an internal vSphere environment, but also to the major public cloud environments, including their own vCloud Air, will be key going forward. The code for BDE is already open source; by launching these two new open source projects, VMware is showing the open source community it is serious.

This is a really exciting time in virtualization and I just got even more excited today!