Category: Cloud Native Apps

I had the opportunity to attend CoreOS Fest 2017 in San Francisco for a day this past week. There are lots of exciting things happening in the cloud native space, and CoreOS, with its heavy involvement in Kubernetes, is at the forefront of much of the innovation. The conference itself was on the smaller side, but the number of sessions focused on emerging technology was impressive — I will be excited to see how it grows over the coming years. While there, I was able to attend the session by one of Adobe’s Principal Architects — Frans van Rooyen. (Frans and I worked together at Adobe from 2012 to 2014.)

In his session, he spoke about several fundamental architecture principles and how they have been applied in the new multi-cloud initiative at Adobe. The platform his team has built over the past two years can be deployed inside a data center, inside AWS, inside Azure, and even locally on a developer’s laptop — while providing the same experience to the developer or operations engineer.

The platform is based on CoreOS and uses the Ignition project to provide the same level of provisioning regardless of which cloud platform the workload is deployed on. I hadn’t heard of Ignition before, or how it manages to provide that level of provisioning, so it is a technology I will be investigating further. If you are interested in learning more, I encourage you to reach out to Frans on Twitter.
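From what I have read since, Ignition runs on a CoreOS machine’s first boot and applies a declarative JSON config to partition disks, write files, and set up systemd units. A minimal sketch of what such a config can look like (the hostname value is purely my own illustration, not anything from the session):

{
  "ignition": { "version": "2.0.0" },
  "storage": {
    "files": [{
      "filesystem": "root",
      "path": "/etc/hostname",
      "mode": 420,
      "contents": { "source": "data:,lab-node-1" }
    }]
  }
}

The same config can be handed to the machine through each cloud’s metadata service, which is presumably part of how the platform keeps provisioning identical across providers.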

Frans has also spoken about the multi-cloud platform at MesosCon, focusing on the inclusion of Apache Mesos — the session can be watched on YouTube.


OpenStack has been my world for the past 8 months. It started out with a work project to design and deploy a large-scale VMware Integrated OpenStack environment for internal use. It then became the design I would submit for my VCDX defense and spend a couple hundred hours poring over and documenting. Since then, it has included helping others get up to speed on how to operationalize OpenStack. One of the necessary tools is the ability to execute commands against an OpenStack environment from anywhere.

The easiest way to do that?

A short-lived Docker container with the clients installed!

The container is short and to the point — it uses ubuntu:latest as the base image and simply adds the OpenStack clients.

# Docker container with the latest OpenStack clients
FROM ubuntu:latest

MAINTAINER chris@virtualelephant.com

# Bring the base image up to date, then install the OpenStack
# CLI (plus vim for editing files inside the container)
RUN apt-get -y update && apt-get -y upgrade
RUN apt-get -y install python-openstackclient vim
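If you would rather build the image locally than pull it from Docker Hub, a standard build from the directory containing the Dockerfile is all it takes:

$ docker build -t chrismutchler/vio-client .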

Follow that up with a quick Docker command to launch the instance, and I’m ready to troubleshoot whatever issue may require my attention.

$ docker run -it chrismutchler/vio-client
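Since the image does not bake in any credentials, the client has to be told where and how to authenticate. One way is to pass the standard OS_* environment variables straight into the container; this is a sketch with a hypothetical endpoint and credentials, so substitute your own:

$ docker run -it \
    -e OS_AUTH_URL=https://openstack.example.com:5000/v2.0 \
    -e OS_USERNAME=admin \
    -e OS_PASSWORD=secret \
    -e OS_TENANT_NAME=admin \
    chrismutchler/vio-client \
    openstack server list

A stripped-down RC file (plain VAR=value lines, no export statements) can also be passed with docker run’s --env-file option to save the typing.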

While I am not a developer, I find creating these small, single-purpose Docker containers really fun. The ability to quickly spin up a container on my laptop, or whatever VM I find myself on at the time, is priceless.

The repo can be seen on hub.docker.com/chrismutchler/vio-client.

If you need an OpenStack client Docker container, I hope you’ll give this one a try. Enjoy!


I am currently working on building out a vPod nested ESXi lab environment that will be deployed through OpenStack’s Heat orchestration service. As I worked out the vPod application components, I realized that I wanted to include a single Linux VM that would run various services inside Docker containers.

I needed a Bind Docker container!

It seems like everything in a VMware SDDC environment needs both the forward and reverse DNS records working properly — so I started here. The Docker container is completely self-contained — all external zone data is stored in S3 and downloaded when the container image is built.

https://hub.docker.com/r/chrismutchler/vpod-bind/

The Dockerfile for the container contains the following code:

# Designed to be used in conjunction with a nested ESXi
# virtual lab environment deployed through an OpenStack
# Heat template.
FROM ubuntu:latest

MAINTAINER chris@virtualelephant.com

RUN apt-get -y update && apt-get -y install bind9 dnsutils curl

# Pull the zone files and BIND configuration down from S3
RUN curl https://s3-us-west-1.amazonaws.com/virtualelephant-vpod-bind/db.192.168 -o /etc/bind/db.192.168 && \
    curl https://s3-us-west-1.amazonaws.com/virtualelephant-vpod-bind/db.vsphere.local -o /etc/bind/db.vsphere.local && \
    curl https://s3-us-west-1.amazonaws.com/virtualelephant-vpod-bind/named.conf.options -o /etc/bind/named.conf.options && \
    curl https://s3-us-west-1.amazonaws.com/virtualelephant-vpod-bind/named.conf.local -o /etc/bind/named.conf.local

# DNS needs both TCP and UDP
EXPOSE 53/tcp 53/udp

# Run named in the foreground (-g) as the bind user
CMD ["/usr/sbin/named", "-g", "-c", "/etc/bind/named.conf", "-u", "bind"]

To start the container, I set up the Ubuntu VM to execute the following command when it is deployed inside OpenStack.

# docker run -d -p 53:53 -p 53:53/udp chrismutchler/vpod-bind
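A quick sanity check from the VM confirms the container is answering queries. The record names here are just examples; substitute hosts from your own zone data:

$ dig @localhost vcenter.vsphere.local +short
$ dig @localhost -x 192.168.0.10 +short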

Once running, the container provides the critical DNS service inside the vPod ESXi environment. From here it is on to building out the Heat template that will leverage the container.

Enjoy!


Apache Mesos Chef Recipes

The recent update to v2.3.1 of VMware Big Data Extensions required a few updates on the management server to enable the proper deployment of Apache Mesos, Mesosphere Marathon and Docker. In order for BDE to deploy the latest code, the Apache Mesos Chef recipes needed a few updates. I prefer to use the Mesosphere CentOS repo for installing the packages, as it allows for keeping the clusters up-to-date with the proper packages. To do so, the following file should be created in /opt/serengeti/www/yum.

mesos.repo

[mesosphere]
name=Mesosphere Packages for EL 6 - $basearch
baseurl=http://repos.mesosphere.io/el/6/$basearch/
enabled=1
gpgcheck=1
gpgkey=https://bde.local.domain/yum/RPM-GPG-KEY-mesosphere

[mesosphere-noarch]
name=Mesosphere Packages for EL 6 - noarch
baseurl=http://repos.mesosphere.io/el/6/noarch/
enabled=1
gpgcheck=1
gpgkey=https://bde.local.domain/yum/RPM-GPG-KEY-mesosphere

[mesosphere-source]
name=Mesosphere Packages for EL 6 - $basearch - Source
baseurl=http://repos.mesosphere.io/el/6/SRPMS/
enabled=0
gpgcheck=1
gpgkey=https://bde.local.domain/yum/RPM-GPG-KEY-mesosphere

[cloudera-cdh4]
name=Cloudera's Distribution for Hadoop, Version 4
baseurl=http://archive.cloudera.com/cdh4/redhat/6/x86_64/cdh/4/
gpgkey=http://archive.cloudera.com/cdh4/redhat/6/x86_64/cdh/RPM-GPG-KEY-cloudera
gpgcheck=1

The file also adds an extra repo for the Zookeeper packages that Cloudera provides. These repos were necessary in order for the updated Chef recipes to properly install those packages. The remaining changes are strictly within the Chef recipes on the BDE management server.

/opt/serengeti/chef/cookbooks/mesos/recipes/install.rb

 69 when 'rhel', 'centos', 'amazon', 'scientific'
 70   %w( unzip libcurl ).each do |pkg|
 71     yum_package pkg do
 72       action :install
 73     end
 74   end
 75 
 76   template "/etc/pki/rpm-gpg/RPM-GPG-KEY-mesosphere" do
 77     source "RPM-GPG-KEY-mesosphere.erb"
 78   end
 79 
 80   package 'mesos'
 81   package 'chronos' if node.role?('mesos_chronos')
 82   package 'marathon' if node.role?('mesos_marathon')
 83 end

100 if distro == 'debian'
101   bash 'reload-configuration-debian' do
102     user 'root'
103     code <<-EOH
104     update-rc.d -f mesos-master remove
105     EOH
106     not_if { ::File.exist? '/usr/sbin/mesos-master' }
107   end
108 else
109   bash 'reload-configuration' do
110     user 'root'
111     code <<-EOH
112     initctl reload-configuration
113     EOH
114     not_if { ::File.exist? '/usr/sbin/mesos-master' }
115   end
116 end

Update:

A colleague pointed out that the addition of the RPM-GPG-KEY-mesosphere.erb file was not as clear as it should have been. Make sure you add lines 76-78 above AND copy the GPG key into /opt/serengeti/chef/cookbooks/mesos/templates/default with a .erb file extension.
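In other words, something along these lines on the management server (the download URL is Mesosphere's published key location; verify it for your environment):

# curl -o /opt/serengeti/www/yum/RPM-GPG-KEY-mesosphere http://repos.mesosphere.io/el/RPM-GPG-KEY-mesosphere
# cp /opt/serengeti/www/yum/RPM-GPG-KEY-mesosphere /opt/serengeti/chef/cookbooks/mesos/templates/default/RPM-GPG-KEY-mesosphere.erb

Placing a copy in /opt/serengeti/www/yum also makes the key available at the https://bde.local.domain/yum/RPM-GPG-KEY-mesosphere URL referenced in the repo file above.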

/opt/serengeti/chef/cookbooks/mesos/templates/default/chronos.conf.erb

Several of the template files contain an old path for the executables and need to be updated. The Chronos configuration file needs the following change.

  9 exec /usr/bin/chronos

/opt/serengeti/chef/cookbooks/mesos/templates/default/mesos-slave.conf.erb

  6 exec /usr/bin/mesos-init-wrapper slave

/opt/serengeti/chef/cookbooks/mesos/templates/default/marathon.conf.erb

  9 exec /usr/bin/marathon

The final step is to upload the modified cookbooks to the Chef server so the changes take effect, using the knife cookbook upload -a command. When all of the outlined steps have been completed, the Apache Mesos Chef recipes on the Big Data Extensions management server will be all set to deploy the latest code for Apache Mesos, Mesosphere Marathon, Chronos and Docker on CentOS.
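For reference, that is run on the management server:

# knife cookbook upload -a

The -a flag uploads every cookbook in the cookbook path, which conveniently picks up the mesos cookbook changes along with anything else that was modified.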

There is a series of posts coming that will show how to tie VMware NSX together with Apache Mesos, Apache Hadoop and VMware Big Data Extensions.

Enjoy!


Version 2.3.1 of VMware Big Data Extensions was released on March 29, 2016. The latest version includes the fix for the glibc vulnerability disclosed in February. The 2.3 branch saw many new features arrive back in December when v2.3 was released, including an updated CentOS 6.7 template and support for multiple VM templates within the BDE vApp. The full release notes for the 2.3 branch can be viewed on the VMware site.

I’ve been anxious to upgrade my lab environment to 2.3 for the past several months; however, time has been extremely limited due to a heavy workload and family life. Fortunately, the Bay Area experienced a rather rainy weekend, and with all the little league baseball games getting cancelled, I was able to sit down and deploy the latest version into my vSphere 6.0 lab.

One of the major improvements made to VMware Big Data Extensions (BDE) is the administrative HTTPS interface running on port 5480. Once the vApp is powered on and you have changed the default random password, point your browser to the interface and log in. From there, you will be greeted with a summary screen where you can see the status of the running services. While the BDE management server is initializing, you can monitor the status of the initialization and see any error messages (if they occur).

[Screenshot: BDE_2.3.1_Mgmt_screen_1]

Clicking the ‘Details…’ link to the right of Initialization Status will load the following pop-up that allows you to watch the progress of the management server.

[Screenshot: BDE_2.3.1_Mgmt_screen_2]

Once all of the initialization steps complete successfully, the Summary screen can be refreshed and it should show all of the services operational.

[Screenshot: BDE_2.3.1_Mgmt_screen_3]

At this point, log out and back into the vSphere Web Client to see the Big Data Extensions icon and begin managing the vApp.

The BDE management server is missing two key packages which will prevent a deployment from being successful — mailx and wsdl4j. The BDE documentation includes the following instructions for adding these packages to the management server:

The wsdl4j and mailx RPM packages are not embedded within Big Data Extensions due to licensing agreements. For this reason you must install them within the internal Yum repository of the Serengeti Management Server.

In order to install these packages properly, you will need to execute the following commands on the BDE management server.

# su - serengeti
$ umask 022 
$ cd /opt/serengeti/www/yum/repos/centos/6/base/RPMS/
$ wget http://mirror.centos.org/centos/6/os/x86_64/Packages/mailx-12.4-8.el6_6.x86_64.rpm
$ wget http://mirror.centos.org/centos/6/os/x86_64/Packages/wsdl4j-1.5.2-7.8.el6.noarch.rpm
$ createrepo ..
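If everything worked, createrepo will have regenerated the repository metadata one level up from the RPMS directory; a quick look for repomd.xml confirms it:

$ ls /opt/serengeti/www/yum/repos/centos/6/base/repodata/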

After verifying the proper VMFS datastores and networks are configured within the BDE application, I always like to perform a test deployment of a basic Hadoop cluster. Doing so allows me to be sure everything is working as expected before I begin modifying the BDE management server. A test deployment is also a good way to see if anything in the BDE workflow has changed — it just so happens there is now a nifty new drop-down menu for selecting the VM template that should be used for the deployment.

[Screenshot: BDE_2.3.1_Dropdown]

A successful installation of a basic Hadoop cluster means the VMware Big Data Extensions application is ready for consumption and for modification to support the Cloud Native Applications (Marathon, Mesos, Kubernetes, etc.) I require in my lab environment.

Enjoy.
