Tag: VCF


The VMware Cloud Foundation (VCF) platform automates the deployment and lifecycle management of the SDDC. The deployment helps an organization go from installation of the physical rack to a vSphere environment ready for workloads in a matter of hours. The VCF platform includes the following VMware products:

  • VMware vSphere Hypervisor
  • VMware vCenter Server
  • VMware NSX-v
  • VMware vRealize Operations
  • VMware vRealize Log Insight

As the previous post mentioned, there are several management components VCF relies upon for its automation and workflow framework. After the initial deployment is complete, a vSphere Administrator will still need to perform several tasks to fully configure the environment and make it ready for production workloads. Some of those steps include:

  • Configuring LDAP or Active Directory authentication sources.
  • Creating local accounts.
  • Configuring the network uplinks on the physical network equipment.
  • Configuring NSX and/or the Virtual Distributed Switch for upstream network connectivity.
  • Configuring a jump host for accessing the OOB network where the iDRAC interfaces exist.
    • Multiple jump hosts will be required, one for each physical rack, since the OOB network is duplicated within each rack.
  • Network I/O Control (NIOC) will need to be configured.
  • Proper configuration of the Resource Pools VCF creates will need to be completed, since no reservations or shares exist after the initial deployment (see the sketch after this list).
  • Log Insight content packs, where necessary, will need to be configured.
  • vRealize Operations will need to be configured.
  • DNS integration.
  • Adjusting the Virtual SAN storage policies per your environment's requirements.
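
To make the Resource Pool item above concrete, here is a minimal sketch of setting shares and reservations on one of the pools VCF creates. The post does not prescribe any tooling, so this assumes pyVmomi; the vCenter address, credentials, pool name and the reservation/share values are all placeholders to adjust for your environment.

```python
# Minimal pyVmomi sketch (assumptions: the vCenter address, credentials, and the
# pool name "Prod-Workloads" are placeholders; the reservation values are
# examples only, not recommendations).
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="changeme",
                  sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ResourcePool], True)
    pool = next(p for p in view.view if p.name == "Prod-Workloads")
    view.DestroyView()

    spec = vim.ResourceConfigSpec()
    spec.cpuAllocation = vim.ResourceAllocationInfo(
        reservation=10000,                 # MHz reserved for this pool
        expandableReservation=True,
        limit=-1,                          # no upper limit
        shares=vim.SharesInfo(level=vim.SharesInfo.Level.high,
                              shares=0))   # shares value ignored unless level is 'custom'
    spec.memoryAllocation = vim.ResourceAllocationInfo(
        reservation=16384,                 # MB reserved for this pool
        expandableReservation=True,
        limit=-1,
        shares=vim.SharesInfo(level=vim.SharesInfo.Level.normal, shares=0))
    pool.UpdateConfig(config=spec)         # name is omitted, so it stays unchanged
finally:
    Disconnect(si)
```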

A few key points to remember:

  • Do not modify the cluster structure outside the VRM workflows — which means no creating new clusters or splitting existing clusters up.
  • Do not modify the names of any of the management virtual machines.
  • Do not modify the name of the Virtual Distributed Switches.
  • Do not modify the pre-configured portgroup names.
  • All expansion of hosts/capacity needs to be initiated from the VRM interface.
  • The management cluster will only deploy initially with 3 nodes, barely enough for any true fault tolerance with Virtual SAN. I highly encourage you to expand it to the recommended VMware best practice of 4 hosts.
  • Upgrades always occur in the management cluster first, then the workload domains — which I personally believe to be a bit backwards.

The VCF product is a great first step along the path of fully automated deployments and lifecycle management. The biggest challenge to adopting it will be balancing the line between what VCF manages and what a typical vSphere Administrator is going to be used to doing. Operationally it will take some adjustment, especially when using the lifecycle management workflows for the first time.

Happy Thanksgiving!



The VMware Cloud Foundation platform includes a management stack made up of several virtual appliances that handle the automation and SDDC lifecycle management of the platform. These virtual appliances are deployed inside the management cluster before the VCF rack is shipped to the customer.

The VCF management stack includes:

  • vRack Manager (vRM): 4 vCPU, 12 GB memory, Disk1 100 GB, Disk2 50 GB
  • vRack-ISVM[1-3]: 4 vCPU, 4 GB memory, Disk1 20 GB
  • vRack-LCM_Repository: 4 vCPU, 4 GB memory, Disk1 200 GB, Disk2 900 GB
  • vRack-LCM_Backup_Repository: 4 vCPU, 4 GB memory, Disk1 30 GB

The following diagram is a logical representation of the VCF management stack.

[Diagram: VMware Cloud Foundation management stack]

The platform creates a private, non-routable VLAN within the network for communication between the management virtual appliances. This network is used for all of the backend network communication.

These VMs are managed by the VCF platform and should not be modified in any manner. Doing so will have dire consequences for the platform and may prevent it from operating as intended.

The vRack Manager is the primary interface the user will deal with. The GUI's HTTP server runs on this virtual appliance, along with all of the workflows for creating, modifying and deleting workload domains within VCF. I have seen the VRM become entirely unresponsive, or the GUI become painfully slow, when interacting with it; a reboot of the virtual appliance has solved these issues when they have occurred.

In addition to the VCF management stack, the first rack of VCF hardware will also include vCenter Server, a Platform Services Controller, NSX Manager, a set of NSX Controllers, vRealize Operations virtual appliance and a vRealize Log Insight virtual appliance. It is important to take these virtual appliances into consideration when performing sizing calculations for the environment.
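
As a quick illustration of that sizing point, the small Python sketch below tallies the footprint of the management-stack appliances listed earlier. The vCenter Server, Platform Services Controller, NSX, vRealize Operations and vRealize Log Insight appliances are intentionally left out; their sizes vary by deployment and would need to be added from your own build.

```python
# Rough tally of the VCF management-stack footprint from the appliance specs
# listed earlier (memory and disk in GB). Figures for vCenter, PSC, NSX,
# vRealize Operations and Log Insight are not included and must be added
# separately when sizing the management cluster.
appliances = {
    "vRack Manager":               {"count": 1, "vcpu": 4, "mem_gb": 12, "disk_gb": 100 + 50},
    "vRack-ISVM":                  {"count": 3, "vcpu": 4, "mem_gb": 4,  "disk_gb": 20},
    "vRack-LCM_Repository":        {"count": 1, "vcpu": 4, "mem_gb": 4,  "disk_gb": 200 + 900},
    "vRack-LCM_Backup_Repository": {"count": 1, "vcpu": 4, "mem_gb": 4,  "disk_gb": 30},
}

total_vcpu = sum(a["count"] * a["vcpu"] for a in appliances.values())
total_mem  = sum(a["count"] * a["mem_gb"] for a in appliances.values())
total_disk = sum(a["count"] * a["disk_gb"] for a in appliances.values())

print(f"Management stack: {total_vcpu} vCPU, {total_mem} GB memory, {total_disk} GB disk")
# -> Management stack: 24 vCPU, 32 GB memory, 1340 GB disk (before Virtual SAN overhead)
```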

This management stack only exists in the first management cluster created on the first physical rack of VCF. The management clusters created in subsequent physical racks will leverage these same virtual appliances for their operations, and will only include a VRM, vCenter Server, NSX Manager and NSX Controllers.

Update: VMware released a VMware Cloud Foundation Architecture Poster, which can be downloaded here in PDF form.

Note – this information is accurate as of November 2016 and may be subject to change based on future VMware Cloud Foundation platform changes.



The following post is entirely my opinion and does not reflect the opinions of my employer.

VMware Cloud Foundation is the latest iteration of what started as EVO:Rack and became EVO SDDC. With another big emphasis at VMworld 2016, VMware Cloud Foundation (VCF) became generally available to external customers. In my role at VMware, I have been able to see the sausage being made and have been heavily involved in the VCF product. As such, with the product going GA, now seems like a good time to talk about some of the product's strengths and weaknesses. If you are unfamiliar with the VCF product, I encourage you to read the publicly available white paper published in August 2016.

Overview

VMware Cloud Foundation started out as an entirely hardware-based platform for private cloud, but has now evolved to also include a software-based platform that can be used to provide a hybrid cloud offering (public + private). In the interest of full disclosure, my experience has been entirely with the hardware-based platform running on several Dell hardware platforms. The most interesting piece of the VCF platform was the initial discussion of VMware trying to solve the holy grail of running a private cloud: updating the entire VMware software stack. I believe every vSphere administrator has seen how difficult it is to upgrade all of the vSphere components running inside the software stack, and it has only gotten more difficult as new technologies like NSX have been added. The VCF platform works to solve these complexities by including an SDDC software lifecycle management component.

The hardware stack includes several options for hardware vendors, the common thread being hyper-converged infrastructure (HCI) nodes listed on the vSphere and Virtual SAN HCLs. The network stack includes a leaf-spine topology with some minor modifications to allow Virtual SAN clusters to span multiple racks.

The software stack includes what I would consider the full vSphere SDDC stack — vSphere (ESXi & vCenter), Virtual SAN, NSX, vRealize Operations and vRealize Log Insight. The SDDC Manager is a new component, unique to the VCF platform, that provides a unified interface for executing the workflows for deployment, installation and upgrades of the SDDC components. The SDDC Manager will become an interface for the vSphere administrator during the deployment of new hardware and upgrades of the VMware software.

VCF Strengths

So let's talk about the strengths; everyone likes those, right?

  • Deployment of a full rack of 16-32 servers takes less than 3 hours. We are talking about the installation of ESXi on every node, plus the SDDC stack deployed entirely on both the management (control) and workload (data) domains. It also includes vRealize Operations and vRealize Log Insight deployed and configured for the management and workload domains.
  • Automated upgrades of vSphere and NSX using certified VCF builds. Depending on the cluster sizes and the storage policies for Virtual SAN, these upgrades complete within a limited number of hours. The holy grail seems to have been solved by the SDDC Manager software lifecycle piece.
  • Extensive logging that is automatically uploaded to the running vRealize Log Insight instance, so that if/when issues occur they can be searched and correlated with relative ease.

VCF Weaknesses

No product, from any company, is ever perfect. So what weaknesses have I seen?

  • Every physical rack requires a management domain, wasting nodes on subsequent racks added to the environment. These management clusters, which because of Virtual SAN should be sized at 4 nodes, are very underutilized and leave a lot of capacity on the table.
  • NSX is included in the hardware stack, but no configuration beyond the deployment of the NSX Manager and a set of NSX Controller nodes is performed. It will be left up to a customer to understand and implement NSX themselves post-deployment.
  • Related to the above point, no uplink configuration is performed within the physical network. The environment will need to be manually connected to a customer's existing network infrastructure.
  • ESXi nodes are registered in vCenter using a DHCP-assigned IP address, not an FQDN that corresponds to the physical rack location of the server. This becomes important when hardware failures occur and a vSphere administrator has to log into the remote access module (see the sketch after this list).
  • An external jump host is required to perform much of the initial deployment and post-deployment operational control of the environment. Each rack has a management switch that re-uses the same out-of-band (OOB) IP addresses for every ESXi node instead of unique IPs across racks.
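
As an illustration of the DHCP-address point above, a short inventory report can at least map each vCenter host entry back to physical hardware. The sketch below again assumes pyVmomi with a placeholder vCenter address and credentials; it prints each host's vCenter name next to the vendor, model and any service-tag identifiers the host reports.

```python
# Minimal pyVmomi sketch: print each ESXi host's vCenter name (a DHCP-assigned
# IP in a VCF deployment) alongside hardware identifiers, to help locate the
# physical node. vCenter address and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="changeme",
                  sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        info = host.hardware.systemInfo
        # On Dell nodes the service tag typically appears in otherIdentifyingInfo.
        tags = [i.identifierValue for i in (info.otherIdentifyingInfo or [])
                if i.identifierType.key in ("ServiceTag", "SerialNumberTag")]
        print(host.name, info.vendor, info.model, ",".join(tags))
    view.DestroyView()
finally:
    Disconnect(si)
```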

Conclusion

The strengths and weaknesses above do not simply cancel one another out. The fact that the VCF team solved the problems of deployment and upgrades of the main SDDC components is a HUGE WIN for VMware! I think the hardest challenge is going to be convincing existing enterprise customers to incorporate their existing VMware infrastructure with the VCF platform. I also do not think any of the current weaknesses are issues that cannot or will not be solved in future iterations of the platform.

I am looking forward to seeing how the VCF platform continues to evolve.
