VMware BDE + Zookeeper: The unknown cluster option

If you have taken a look underneath the covers of VMware Big Data Extensions, then you have probably seen the Zookeeper Chef cookbooks that are part of every default cluster deployment. The Zookeeper role is built in and was a critical part of being able to develop the Mesosphere cluster option so quickly using BDE; no need to reinvent the wheel. The only piece missing for deploying a Zookeeper-only cluster is the JSON specification file.

I took a few minutes and put together a quick JSON specification file that deploys a standalone Zookeeper cluster, which any application could then utilize as a service layer. As with the Mesosphere cluster, I started with the basic cluster JSON file found in the /opt/serengeti/samples directory.

// This is a cluster spec for creating a Zookeeper cluster without installing any hadoop stuff.
{
  "nodeGroups":[
    {
      "name": "master",
      "roles": [
        "zookeeper"
      ],
      "instanceNum": 5,
      "cpuNum": 2,
      "memCapacityMB": 3768,
      "storage": {
        "type": "SHARED",
        "sizeGB": 50
      },
      "haFlag": "on"
    }
  ]
}

A quick entry in the /opt/serengeti/conf/serengeti.properties file, the /opt/serengeti/www/specs/map file and the /opt/serengeti/www/manifest file is all that is needed. Restart Tomcat on the management server and you are off to the races!
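For illustration, an entry in the specs map file might look something like the following. The field names and every value here are assumptions modeled on the default entries that ship with BDE; adjust them to match your environment and the path where you saved the spec file:

```json
[
  {
    "vendor": "Zookeeper",
    "version": "3.4.x",
    "type": "Zookeeper Only Cluster",
    "appManager": "Default",
    "path": "zookeeper/spec.json"
  }
]
```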

The unknown cluster option is now available with very little modification to your BDE environment.


BDE + Mesosphere cluster code on GitHub

I have uploaded the necessary files to begin including the option for deploying a Mesosphere cluster with VMware Big Data Extensions v2.1. You can download the tarball or clone the repo from GitHub:


As I work on extensions for other clustering technologies, I will make them available via GitHub as well. To include this in your deployment, extract the archive directly into the /opt/serengeti folder, but be aware it will replace the default map and manifest files. After the files are extracted (as the serengeti user), simply run two commands on the BDE management server:

# knife cookbook upload -a
# service tomcat restart

If you have any questions, feel free to reach out to me over Twitter.

Apache Mesos Clusters – Part 3

This post includes the final pieces necessary to get a Mesosphere stack deployed through Big Data Extensions within a VMware environment. I’ve included the Chef cookbooks and commands required for tying all of the pieces together for a cluster deployment. The wonderful thing about the framework is its extensibility: once I had Mesos deploying, it became very clear how simple it is to extend the framework even further. Look for future posts.

The idea that you can now turn a large cluster of VMs into a single Mesos cluster for use by a product, engineering team or operations team opens up an entirely new world within our environments. This is a very exciting place to be investing time.

Chef Roles

Big Data Extensions uses role definitions within the framework, so the first step was to create a new role for Mesos. If you remember from Part 2, we defined the role in the JSON file and called it ‘mesos’.

The role files can be found in /opt/serengeti/chef/roles. I created the roles for both mesos_master and mesos_worker through the command line interface:
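As a sketch, a JSON role definition for the master might look like the following. The run_list recipe name is an assumption; substitute the actual recipes from the Mesos cookbooks:

```json
{
  "name": "mesos_master",
  "description": "Role for Apache Mesos master nodes",
  "json_class": "Chef::Role",
  "chef_type": "role",
  "run_list": [
    "recipe[mesos::master]"
  ]
}
```

A matching mesos_worker role would follow the same pattern, with the worker recipe in its run_list.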

Continue reading “Apache Mesos Clusters – Part 3”

Apache Mesos Clusters – Part 2

Building Mesosphere & Apache Mesos into BDE:

After playing with Mesosphere in AWS for the week, getting familiar with the packages and the deployment process, the real work has begun: getting the Mesosphere stack (Apache Mesos, Apache Zookeeper, Mesosphere Marathon, Chronos and HAProxy) deployed through VMware Big Data Extensions. Fortunately, BDE v2.1 includes some example JSON cluster definition files that can be used for deploying different types of clusters, and these are perfect for modification in this use case.

The example files are located in the directory /opt/serengeti/samples. I used the basic_cluster.json file as my template and adapted it to match the Mesosphere stack deployed in AWS, with some slight changes. I chose to have a base Mesos cluster include 3 master nodes and 6 worker nodes. The master nodes are allocated 2 vCPU, 8GB RAM and 50GB of disk space; the worker nodes are allocated 2 vCPU, 8GB RAM and 100GB of disk space.
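Following the same format as the basic cluster sample, the nodeGroups portion of that layout might look roughly like this. The role names are a sketch (the actual role names come later in the series), and the values simply mirror the sizing just described:

```json
{
  "nodeGroups": [
    {
      "name": "master",
      "roles": [ "mesos_master" ],
      "instanceNum": 3,
      "cpuNum": 2,
      "memCapacityMB": 8192,
      "storage": { "type": "SHARED", "sizeGB": 50 },
      "haFlag": "on"
    },
    {
      "name": "worker",
      "roles": [ "mesos_worker" ],
      "instanceNum": 6,
      "cpuNum": 2,
      "memCapacityMB": 8192,
      "storage": { "type": "SHARED", "sizeGB": 100 },
      "haFlag": "off"
    }
  ]
}
```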

The remainder of the post will go through all the various pieces that are necessary to utilize the Big Data Extensions framework to offer the Mesosphere stack within a VMware virtual environment.

Continue reading “Apache Mesos Clusters – Part 2”

Apache Mesos Clusters – Part 1

I watched a webinar today from Ken Sipe (@kensipe) of Mesosphere on Mesos, Marathon and Chronos. The topics covered included how Mesos works and how to configure and stand up a Mesos cluster in various public cloud offerings. If you are unfamiliar with Mesos, I would direct you to Mesosphere and the Apache Mesos Project.

The basic explanation from the Apache Mesos Project page states:

Apache Mesos abstracts CPU, memory, storage, and other compute resources away from machines (physical or virtual), enabling fault-tolerant and elastic distributed systems to easily be built and run effectively.

Think of turning an entire datacenter of compute resources into a single pool to be consumed. Instead of carving out individual pieces of compute, Mesos handles the scheduling and helps you scale an application across all of the resources available to it.

So how quickly can you deploy a cluster and begin using Mesos?

Continue reading “Apache Mesos Clusters – Part 1”