Let’s be honest, debugging error messages generated by VMware Big Data Extensions (BDE) can be painstaking, tedious and tiresome. Having recently begun to rely on VMware Log Insight more and more, I determined the deployment of BDE v2.3.1 would leverage the Log Insight agent. During the actual vApp deployment of BDE, you may have noticed an option to specify a remote syslog server. I chose to leave the option blank during the deployment, instead choosing to install and configure the Log Insight agent post-installation.

The Linux Log Insight agent can be downloaded from the very bottom of the Administration->Agents screen.

 

The download link will include the IP address of the Log Insight server and is used during the RPM deployment of the agent.

Management Server Installation

The BDE management server is currently running CentOS 6.7. After copying the agent to the management server, the following commands install the agent, enable it at boot, and start the service.

# rpm -ihv VMware-Log-Insight-Agent-3.0.0-2985111.noarch_192.168.1.2.rpm
# chkconfig liagentd --list
# chkconfig liagentd on
# service liagentd restart

After the installation is complete, the BDE management server should appear in the list of servers with installed agents.


Next, edit the /etc/liagent.ini configuration file so the agent sends the Serengeti log files to the Log Insight server. Steve Flanders has an article from 2014 that describes the process of adding custom log files for the agent to parse.

[filelog|syslog]
directory=/opt/serengeti/logs
include=serengeti.log;ironfan.log
event_marker=\[\d{4}-\d{2}-\d{2}T

BDE Template Installation

Having the Log Insight agent installed on the BDE management server is helpful, but I determined having the agent installed natively during all deployments of Hadoop and Apache Mesos clusters would be even more helpful. There were two options for installing the agent during deployments:

  1. Create a Chef cookbook and include the Log Insight agent RPM file on the management server repo.
  2. Install and configure the agent on the template VM itself.

I opted for option #2, merely because it would be the quickest way initially. Admittedly, using a Chef recipe would have been the better long-term and more “DevOps-y” way to perform the installation. I may reconsider my choice in the future.

Just like the management server installation process, copy the Log Insight agent RPM file onto the template and install it using the same steps. No modifications to the /etc/liagent.ini file are necessary for the VM template node because the Serengeti logs don’t exist.

Be sure to delete the snapshot on the VM template node so the changes take effect.

Now all of the VMs deployed through VMware Big Data Extensions immediately send log updates to Log Insight, and the logs for every deployment are captured as well. Having the logs accessible through Log Insight lets you parse them with the full power of the Log Insight UI and its filtering capabilities.

Enjoy.


 

Version 2.3.1 of VMware Big Data Extensions was released on March 29, 2016. The latest version includes the fix for the glibc vulnerability disclosed in February. The current branch saw many new features included back in December when v2.3 was released, including an updated CentOS 6.7 template and support for multiple VM templates within the BDE vApp. The full release notes for the 2.3 branch can be viewed on the VMware site.

I’ve been anxious to upgrade my lab environment to 2.3 for the past several months, however time has been extremely limited due to a heavy workload and family life. Fortunately, the Bay Area experienced a rather rainy weekend and with all the little league baseball games getting cancelled, I was able to sit down and deploy the latest version into my vSphere 6.0 lab.

One of the major improvements made to VMware Big Data Extensions (BDE) is the administrative HTTPS interface running on port 5480. Once the vApp is powered on and you have changed the default random password, point your browser to the interface and log in. From there, you will be greeted with a summary screen where you can see the status of the running services. When the BDE management server is initializing, you can monitor the status of the initialization and see any error messages (if they occur).

[Screenshot: BDE management server Summary screen]

Clicking the ‘Details…’ link to the right of Initialization Status will load the following pop-up that allows you to watch the progress of the management server.

[Screenshot: Initialization Status details pop-up]

Once all of the initialization steps complete successfully, the Summary screen can be refreshed and it should show all of the services operational.

[Screenshot: Summary screen showing all services operational]

At this point, log out and back into the vSphere Web Client to see the Big Data Extensions icon and begin managing the vApp.

The BDE management server is missing two key packages which will prevent a deployment from being successful — mailx and wsdl4j. The BDE documentation includes the following instructions for adding these packages to the management server:

The wsdl4j and mailx RPM packages are not embedded within Big Data Extensions due to licensing agreements. For this reason you must install them within the internal Yum repository of the Serengeti Management Server.

In order to install these packages properly, you will need to execute the following commands on the BDE management server.

# su - serengeti
$ umask 022 
$ cd /opt/serengeti/www/yum/repos/centos/6/base/RPMS/
$ wget http://mirror.centos.org/centos/6/os/x86_64/Packages/mailx-12.4-8.el6_6.x86_64.rpm
$ wget http://mirror.centos.org/centos/6/os/x86_64/Packages/wsdl4j-1.5.2-7.8.el6.noarch.rpm
$ createrepo ..

After verifying the proper VMFS datastores and Networks are configured within the BDE application, I always like to perform a test deployment of a basic Hadoop cluster. Doing so allows me to be sure everything is working as expected before I begin modifying the BDE management server. A test deployment is also a good way to see if anything in the BDE workflow has changed — it just so happens there is now a nifty new drop-down menu for selecting the VM template that should be used for the deployment.

[Screenshot: VM template selection drop-down menu]

A successful installation of a basic Hadoop cluster means the VMware Big Data Extensions application is ready for consumption and modification to support the Cloud Native Applications (Marathon, Mesos, Kubernetes, etc) I require in my lab environment.

Enjoy.


VMware released the latest version of Big Data Extensions during Hadoop Summit on June 4, 2015. Included in the release notes, there are two features that have me really excited for this version.

Resize Hadoop Clusters on Demand. You can reduce the number of virtual machines in a running Hadoop cluster, allowing you to manage resources in your VMware ESXi and vCenter Server environments. The virtual machines are deleted, releasing all resources such as memory, CPU, and I/O.

Increase Cloning Performance and Resource Usage of Virtual Machines. You can rapidly clone and deploy virtual machines using Instant Clone, a feature of vSphere 6.0. Using Instant Clone, a parent virtual machine is forked, and then a child virtual machine (or instant clone) is created. The child virtual machine leverages the storage and memory of the parent, reducing resource usage.

These are really outstanding features, especially the ability to use the Instant Clone (aka VMFork) functionality introduced in vSphere 6.0. The Instant Clone technology has very interesting implications when considered with the work in the Cloud Native Application space. Deploying a very small Photon VM to immediately launch a Docker container workload in a matter of seconds will add a huge benefit to running virtualized Apache Mesos clusters (with Marathon) that have been deployed using the BDE framework.

I had hoped the functionality from the BDE Fling, which included recipes for Mesos and Kubernetes, would be incorporated into the official VMware release. On a positive note, the cookbooks for Mesos, Marathon and Kubernetes are present on the management server. It will just take a little effort to unlock those features.

06.27.2015 UPDATE: I have confirmed all of the cookbooks for Mesos, Docker and Kubernetes are present on the BDE v2.2 management server. I will have a post shortly describing how to unlock them for use.

Overall, a big release from the VMware team and it appears they are on the right track to increase the functionality of the BDE framework!


A week ago Monday, I announced the release of the Heat resource plugin for VMware OpenStack and VMware Big Data Extensions. As I worked through the final issues before the plugin and the initial announcement post were released, I realized it may be helpful to describe how the OpenStack Heat resource plugin works. The initial code for the plugin was rather simplistic and based closely on the Python scripts I wrote to test the Big Data Extensions REST API. Once testing of the resource plugin began, it became evident far more code would be necessary to be successful. This blog post will describe the workflows inside the resource plugin, the challenges I encountered, and where improvements could be made in the future. My ultimate hope for the resource plugin is for VMware to take the framework and develop a supported OpenStack Heat resource plugin for everyone to consume.

Basic Workflow

The basic workflow for the resource plugin is not as simple as I had hoped. Due to the integration of VMware OpenStack with VMware NSX-v, several additional API calls were required before the Big Data Extensions instances would be accepted into the fold and allowed to communicate over the network.

[Figure: BDE Heat resource plugin workflow]

You can see from the figure above the number of API calls required to create a new cluster and allow it to communicate with the Big Data Extensions management server.
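Condensed into code, the create workflow looks roughly like the sketch below. Note that only _open_connection() and _create_nsx_ports() are actual method names from the plugin; the other names are placeholders I am using to label the corresponding blocks of logic.

# Rough sketch of the create workflow; only _open_connection() and
# _create_nsx_ports() are real method names, the rest are placeholders.
def handle_create(self):
    # 1. Authenticate to the BDE REST API over a requests session
    curr = self._open_connection()

    # 2. Resolve the NSX-v portgroup name through the vCenter MOB and
    #    register it with BDE as an available network
    self._register_network(curr)

    # 3. Ask BDE to create the cluster through its REST API
    self._create_cluster(curr)

    # 4. Create a Neutron port for each node's MAC address so NSX-v
    #    will pass the node's traffic
    self._create_nsx_ports()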

Properties

The resource plugin requires several properties in order to operate correctly. These properties can then be specified within the JSON Heat template. The following schema is set up:

 48     properties_schema = {
 49         BDE_ENDPOINT: properties.Schema(
 50             properties.Schema.STRING,
 51             required=True,
 52             default='bde.localdomain'
 53         ),
 54         VCM_SERVER: properties.Schema(
 55             properties.Schema.STRING,
 56             required=True,
 57             default='vcenter.localdomain'
 58         ),
 59         USERNAME: properties.Schema(
 60             properties.Schema.STRING,
 61             required=True,
 62             default='administrator@vsphere.local'
 63         ),
 64         PASSWORD: properties.Schema(
 65             properties.Schema.STRING,
 66             required=True,
 67             default='password'
 68         ),
 69         CLUSTER_NAME: properties.Schema(
 70             properties.Schema.STRING,
 71             required=True
 72         ),
 73         CLUSTER_TYPE: properties.Schema(
 74             properties.Schema.STRING,
 75             required=True
 76         ),
 77         NETWORK: properties.Schema(
 78             properties.Schema.STRING,
 79             required=True
 80         ),
 81         CLUSTER_PASSWORD: properties.Schema(
 82             properties.Schema.STRING,
 83             required=False
 84         ),
 85         CLUSTER_RP: properties.Schema(
 86             properties.Schema.STRING,
 87             required=True,
 88             default='openstackRP'
 89         ),
 90         VIO_CONFIG: properties.Schema(
 91             properties.Schema.STRING,
 92             required=True,
 93             default='/usr/local/bin/etc/vio.config'
 94         ),
 95         BDE_CONFIG: properties.Schema(
 96             properties.Schema.STRING,
 97             required=False,
 98             default='/usr/local/bin/etc/bde.config'
 99         ),
100         SECURITY_GROUP: properties.Schema(
101             properties.Schema.STRING,
102             required=False,
103             default='9d3ecec8-e0e3-4088-8c71-8c35cd67dd8b'
104         ),
105         SUBNET: properties.Schema(
106             properties.Schema.STRING,
107             required=True
108         )
109     }
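To make the schema concrete, a stripped-down JSON Heat template consuming the plugin might look like the example below. Both the resource type string (taken from the logger tags in the code) and the exact property key names are assumptions on my part, so adjust them to match how the plugin is registered in your environment.

{
  "resources": {
    "mesos_cluster": {
      "type": "VirtualElephant::VMware::BDE",
      "properties": {
        "bde_endpoint": "bde.localdomain",
        "vcm_server": "vcenter.localdomain",
        "username": "administrator@vsphere.local",
        "password": "password",
        "cluster_name": "mesos-cluster-01",
        "cluster_type": "mesos",
        "network": "heat-network-01",
        "subnet": "heat-subnet-01",
        "cluster_rp": "openstackRP"
      }
    }
  }
}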

NSX-v Integration

The integration with NSX-v was a multi-step process and honestly took me nearly a full week of testing to sort out. I simply could not have worked through all of these steps were it not for the awesome VIO and BDE engineering teams at VMware who offered guidance, suggestions and screenshots on where NSX-v was hooked into the instances (or needed to be). I am hoping to see them at VMworld this year so that I can buy them dinner for all the help they provided.

Before we go any further, let me state that writing code in Python is not yet a strength of mine. In fact, this project was really my first significant foray into Python, so forgive me if the code is horrid looking.

Identifying the vDS Portgroup

The first task was to identify the UUID of the vDS Portgroup and have the resource plugin make that Portgroup available to Big Data Extensions.

289         # determine actual NSX portgroup created
290         # hack - regex in Python is not a strength
291         mob_string = '/mob/?moid=datacenter-2'
292         curl_cmd = 'curl -k -u ' + bde_user + ':' + bde_pass + ' ' + prefix + vcm_server + mob_string
293         grep_cmd = " | grep -oP '(?<=\(vxw).*(?=" + network + "\))' | grep -oE '[^\(]+$'"
294         awk_cmd = " | awk '{print $0 \"" + network + "\"}'"
295         full_cmd = curl_cmd + grep_cmd + awk_cmd
296 
297         p = subprocess.Popen(full_cmd, stdout=subprocess.PIPE, shell=True)
298         (net_uid, err) = p.communicate()
299 
300         # Check to see if network_id is as we expect it
301         if 'vxw' in net_uid:
302             network_id = net_uid
303         else:
304             network_id = "vxw" + net_uid
305 
306         network_id = network_id.rstrip('\n')
307 
308         # Authenticate in a requests.session to the BDE server
309         curr = self._open_connection()
310 
311         # Should check to see if network already exists as available network
312         # This logs a big fat error message in /opt/serengeti/logs/serengeti.log
313         # when the network doesn't exist.
314         header = {'content-type': 'application/json'}
315         api_call = '/serengeti/api/network/' + network
316         url = prefix + bde_server + port + api_call
317         r = curr.get(url, headers=header, verify=False)
318 
319         # Add new network to BDE as an available network if check fails
320         payload = {"name" : network, "portGroup" : network_id, "isDhcp" : "true"}
321         api_call = '/serengeti/api/networks'
322         url = prefix + bde_server + port + api_call
323         r = curr.post(url, data=json.dumps(payload), headers=header, verify=False)
324         logger.info(_("VirtualElephant::VMware::BDE - Network creation status code %s") % r.json)

As the code shows, determining the actual name of the vDS Portgroup required taking the UUID, which was passed into the resource plugin as a reference from the JSON file, and parsing HTML output from a call to the MOB. Depending on how many Portgroups existed within the vCenter, it also meant dealing with additional HTML code that hid the complete list of Portgroups from view unless a user clicked a hyperlink. There is probably a better way of grep’ing and awk’ing in Python, but old habits from decades of writing Perl code are shining through there. You may also notice the MOB link is hard-coded (line 291); this has to be changed soon, as it is not a sustainable method for using the resource plugin in an enterprise environment.
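For what it is worth, the same extraction could be done without shelling out at all. Below is an untested sketch using requests and re that makes the same assumptions as the pipeline above (the hard-coded MOB path, and the portgroup appearing in the MOB HTML as "(vxw...<network>)"):

# Untested sketch: replace the curl | grep | awk pipeline with requests + re.
import re
import requests

mob_url = prefix + vcm_server + '/mob/?moid=datacenter-2'
resp = requests.get(mob_url, auth=(bde_user, bde_pass), verify=False)

# Capture everything between '(vxw' and ')', ending with the network name,
# then prepend 'vxw' to rebuild the full portgroup name
match = re.search(r'\(vxw([^)]*' + re.escape(network) + r')\)', resp.text)
network_id = ('vxw' + match.group(1)) if match else None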

When writing the plugin, I had hoped this was all that would be needed for the BDE nodes to tie into the NSX portgroup and pass traffic. Unfortunately, that was not the case, and I also had to figure out how to add the MAC addresses of every node, through Neutron, into NSX-v.

Creating Neutron ports for NSX-v

The _create_nsx_ports() method is called after the BDE create cluster command is sent to the management server. The method is defined in the code on line 139:

139     def _create_nsx_ports(self):

After getting the necessary OpenStack Neutron credential information, the method starts by gathering the node list from BDE in order to match the node name in BDE and the corresponding name in vCenter. This is necessary because BDE relies on vCenter to track the MAC addresses of the nodes created.

192         # Get the node names for the cluster from BDE
193         curr = self._open_connection()
194         header = {'content-type': 'application/json'}
195         api_call = '/serengeti/api/cluster/' + cluster_name
196         url = prefix + bde_server + port + api_call
197         r = curr.get(url, headers=header, verify=False)
198         raw_json = json.loads(r.text)
199         cluster_data = raw_json["nodeGroups"]

I then chose to iterate through the cluster_data variable and work through each node serially to gather the MAC addresses.

201         # Open connect to the vSphere API
202         si = SmartConnect(host=vcm_server, user=admin_user, pwd=admin_pass, port=443)
203         search_index = si.content.searchIndex
204         root_folder = si.content.rootFolder
205         for ng in cluster_data:
206             nodes = ng["instances"]
207             for node in nodes:
208                 logger.info(_("VirtualElephant::VMware::BDE - Creating NSX port for %s") % node.get("name"))
209                 vm_name = node.get("name")
210                 vm_moId = node.get("moId")
211                 port_name = vm_name + "-port0"
212 
213                 # moId is not in format we need to match
214                 (x,y,z) = vm_moId.split(":")
215                 vm_moId = "'vim." + y + ":" + z + "'"
216 
217                 # Go through each DC one at a time, in case there are multiple in vCenter
218                 for dc in root_folder.childEntity:
219                     content = si.content
220                     objView = content.viewManager.CreateContainerView(dc, [vim.VirtualMachine], True)
221                     vm_list = objView.view
222                     objView.Destroy()
223 
224                     for instance in vm_list:
225                         # convert object to string so we can search
226                         i = str(instance.summary.vm)
227                         if vm_moId in i:
228                             # Matched the VM in BDE and vCenter
229                             logger.info(_("VirtualElephant::VMware::BDE - Match found for BDE node %s") % instance)

Once the match is found, we get the MAC address:

230                             for device in instance.config.hardware.device:
231                                 if isinstance(device, vim.vm.device.VirtualEthernetCard):
232                                     mac_address = str(device.macAddress)
233                                     logger.info(_("VirtualElephant::VMware::BDE - Found MAC address %s") % mac_address)

The MAC address can now be used to create the Neutron port, registering it properly with NSX-v and enabling the BDE nodes to access the network.

250                 # Create a new port through Neutron
251                 neutron = client.Client('2.0',
252                                         username=os_username,
253                                         password=os_password,
254                                         auth_url=os_auth_url,
255                                         tenant_name=os_tenant_name,
256                                         endpoint_url=os_url,
257                                         token=os_token)
258                 port_info = {
259                                 "port": {
260                                         "admin_state_up": True,
261                                         "device_id": vm_name,
262                                         "name": port_name,
263                                         "mac_address": mac_address,
264                                         "network_id": network_id
265                                 }
266                             }
267                 logger.info(_("VirtualElephant::VMware::BDE - Neutron port string %s") % port_info)
268 
269                 response = neutron.create_port(body=port_info)
270                 logger.info(_("VirtualElephant::VMware::BDE - NSX port creation response - %s") % response)

Conclusion

There remain some timing issues with creating the Neutron ports and getting the BDE nodes a DHCP IP address. Depending on how quickly the virtual machines are reconfigured and begin loading the OS, the Neutron ports may or may not be created in time to enable network traffic. If the OS has already reached the point where it is trying to get a DHCP address, the virtual machine has to be reset or rebooted to acquire one. The issue here is knowing which stage of the BDE deployment process the VM is in. This is the issue I continue to work through and I have not found a resolution yet.
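One mitigation I have considered is polling the cluster status through the BDE REST API and power-cycling any node once the deployment stalls. A rough, untested sketch of the idea follows; the 'RUNNING' status string and the timeout values are assumptions:

# Untested sketch: poll BDE until the cluster converges; if it never does,
# assume the nodes missed their DHCP window and reset them through pyVmomi.
import time

def wait_for_cluster(curr, cluster_name, node_vms, timeout=900, poll=30):
    url = prefix + bde_server + port + '/serengeti/api/cluster/' + cluster_name
    deadline = time.time() + timeout
    while time.time() < deadline:
        status = curr.get(url, verify=False).json().get('status')
        if status == 'RUNNING':  # assumed value for a fully deployed cluster
            return True
        time.sleep(poll)
    # Cluster never converged; reset each node so it re-requests an address
    for vm in node_vms:
        vm.ResetVM_Task()
    return False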

If you have ideas, please reach out and let me know. I am hoping to solve it through the resource plugin, but ultimately I believe a more elegant solution will involve BDE handling the NSX-v updates itself.


The opportunities for VMware with Project Photon and Project Lightwave are significant. The press release stated:

Designed to help enterprise developers securely build, deploy and manage cloud-native applications, these new open source projects will integrate into VMware’s unified platform for the hybrid cloud — creating a consistent environment across the private and public cloud to support cloud-native and traditional applications. By open sourcing these projects, VMware will work with a broad ecosystem of partners and the developer community to drive common standards, security and interoperability within the cloud-native application market — leading to improved technology and greater customer choice.

What I always find interesting is the lack of discussion around the orchestration and automation of the supporting applications. The orchestration layer does not miraculously appear within a private cloud environment for the developers to consume. The pieces have to be in place in order for developers to consume the services a Mesos cluster offers them. For me, the choice is pretty obvious — expand what the Big Data Extensions framework is capable of providing. I alluded to this thought on Monday when the announcement was made.

Building on that thought, and after seeing a diagram of VMware’s vision for how all the pieces tie together, I worked on a logical diagram of what the entire architecture could look like. I believe it looks something like this:

[Figure: Cloud Native Applications architecture]

 

In this environment, Project Photon and Project Lightwave can be leveraged beyond just ESXi. By enhancing the deployment options for BDE to include ESXi on vCloud Air (not shown above), KVM, and physical hardware (through Ironic), the story changes slightly. It now sounds something like this:

For a developer, you choose which Cloud Native application orchestration layer (Mesos, Marathon, Chronos, CloudFoundry, etc.) you would like and communicate with it over the provided API. For operations, the tenants within the private cloud environment can be deployed using the OpenStack API (with Heat templates). For both sides, SDLC consistency is maintained throughout the development process to production.

Simplicity is achieved by interacting with only two APIs: one for operations and one for development. There is a large amount of work to do here. First, I need to continue improving the OpenStack resource plugin to be production-ready. Second, testing of Project Photon inside BDE needs to take place; I imagine there will be some work to have it integrated correctly with the Chef server. Third, the deployment mechanism inside BDE needs to be enhanced to support other options. If the first two were a heavy lift, the last one is going to take a small army, but it is a challenge I am ready to take on!

Ultimately, I feel the gaps in OpenStack around Platform-as-a-Service orchestration can be solved through integrating Big Data Extensions. The framework is more robust and mature when compared to the Sahara offering. The potential is there; it just needs to be executed on.
