The conference wrapped up just over two weeks ago, and since then I’ve had the opportunity to go through my notes, reflect on the sessions I attended and summarize the insights I gained while there.
My biggest takeaway from VMworld 2014, compared to last year, was seeing how the lessons learned in 2013 were applied in 2014. The key insight in 2013 was that many other partners and customers of VMware were facing the same challenges around standardization, automation and self-service. It was helpful to learn that the things we were trying to accomplish within our department at Adobe were not unique to us.
This year, 2014, I learned that we have solved many of last year’s challenges and now have great insight to offer the community. As we build on the standardization, automation and self-service phases of our comprehensive IaaS and PaaS offerings, we are doing what we can to share that information with the broader community.
All of that is wonderful, but what are the next steps for our team, the market and others in the virtualization space? We heard a lot at the conference about OpenStack, Docker, VSAN and other emerging technologies. The focus I personally have for the next year is going to revolve around further implementation of the Hadoop ecosystem, using VMware technologies, and building out larger, comprehensive PaaS offerings.
There are many questions to be answered around how OpenStack and Docker play in this space. I am looking forward to the challenges ahead as we work with our engineering teams.
Should be an exciting year!
Yesterday was another great day at VMworld 2014 in San Francisco. My big takeaway revolved around Docker and VMware integration. There is a great article over on the Office of the CTO blog covering this exact topic. Two key takeaways from the Docker CEO’s portion of the keynote (paraphrased):
- Use VMs for security and consistency and use Docker for speed of deployment.
- Docker + VMware gets you the best of both worlds when utilized together.
There are some exciting things, like Project Fargo, going on in the space right now that should enable Operations teams to incorporate Docker into their existing environments to give their applications the flexibility next-generation apps and engineering teams are starting to require.
Beyond the sessions, the CTO party last night was really amazing! Lots of networking and conversations were taking place, and I was able to gain some good insight into how Mesos could be used to replace YARN. I am excited to follow up on several of the conversations from last night.
With VMworld 2014 in the United States fast approaching, I have been working on building out my schedule based on my personal objectives and checking the popular blogger sites for their recommendations. In that spirit, I thought I would share the sessions I am most excited about this year in San Francisco.
Last year was my first year at VMworld, and I focused on the Hands-on Labs (HOLs) and general sessions to better understand the VMware ecosystem. This year I am focused on three primary topics:
- VMware NSX
- OpenStack|Docker|Containers with VMware
- VMware VSAN
Here are the sessions I am focused on:
- SEC1746 NSX Distributed Firewall Deep Dive
- NET1966 Operational Best Practices for VMware NSX
- NET1949 VMware NSX for Docker, Containers & Mesos
- SDDC3350 VMware and Docker — Better Together
- SDDC2370 Why OpenStack Runs Best with the vCloud Suite
- STO1279 Virtual SAN Architecture Deep Dive
- STO1424 Massively Scaling Virtual SAN implementations
In addition to that, I am also excited for my own sessions at VMworld this year around Hadoop, VMware BDE and building Hadoop-as-a-Service!
- VAPP1428 Hadoop-as-a-Service: Utilizing VMware Cloud Automation Center and Big Data Extensions at Adobe (Monday & Wednesday sessions)
Excited for the week to get kicked off and see all the exciting things coming to our virtualized world.
Not specifically related to Hadoop or Big Data Extensions, but I came across this bug tonight. There is a KB article on the VMware website (here), but the syntax it lists is incorrect.
The error I was seeing on the VM console was “vmsvc [warning] [guestinfo] RecordRoutingInfo: Unable to collect IPv4 routing table” immediately after it brought eth0 online. The workaround to fix the issue, beyond upgrading arping in the OS, is to add the following line in the virtual machine .vmx file:
rtc.diffFromUTC = "0"
The quotes are missing from the VMware knowledge base article and are indeed necessary to fix the issue and get the virtual machine past this point in the boot process.
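As a sketch, the edit can be scripted so the line is only appended when it is not already set. The sample .vmx contents and paths below are hypothetical, and .vmx files should only be edited while the virtual machine is powered off:

```python
import tempfile, os

# Hypothetical example .vmx contents; on ESXi the real file lives under /vmfs/volumes/....
sample_vmx = 'ethernet0.present = "TRUE"\nguestOS = "centos-64"\n'
workaround = 'rtc.diffFromUTC = "0"'

def add_workaround(path):
    """Append the rtc.diffFromUTC line only if the key is not already present."""
    with open(path) as f:
        lines = f.read().splitlines()
    if not any(l.strip().startswith("rtc.diffFromUTC") for l in lines):
        with open(path, "a") as f:
            f.write(workaround + "\n")

# Demo against a temp file standing in for the real .vmx file.
path = os.path.join(tempfile.mkdtemp(), "myvm.vmx")
with open(path, "w") as f:
    f.write(sample_vmx)

add_workaround(path)
add_workaround(path)  # second call is a no-op: the key already exists

print(open(path).read().count("rtc.diffFromUTC"))  # 1
```

Guarding the append keeps the script safe to re-run; a duplicate key in a .vmx file is at best ignored and at worst confusing to debug.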
Working on a specific use-case at work has required that I modify the Chef recipe templates for mapred-site.xml and yarn-site.xml to configure the memory allocations correctly. The container sizes themselves will depend on the size of the VMs you are creating. BDE ships with some generic defaults, but since every workload is different, it is necessary to tune these parameters just as you would on a physical Hadoop cluster.
The virtual machines within this compute-only (Isilon-backed HDFS + NameNode) cluster utilized the ‘Medium’ sized node within BDE. That means:
- 2 vCPU
- 7.5GB RAM
- 100GB drives
The specific YARN and MapReduce settings I used to take advantage of the total memory allocated to the cluster were:
From the mapred-site.xml template (elided lines marked with `...`):

```xml
<% else %>
  <!-- <property> -->
  <!--   <name>mapred.child.ulimit</name> -->
  <!--   <value><%= node[:hadoop][:java_child_ulimit] %></value> -->
  <!-- </property> -->
  ...
  <description>MapReduce map memory, in MB</description>
  ...
  <description>MapReduce map java options</description>
  ...
  <description>MapReduce reduce memory, in MB</description>
  ...
  <description>MapReduce reduce java options</description>
  ...
  <description>MapReduce task IO sort, in MB</description>
  ...
<% end %>
```
From the yarn-site.xml template (elided lines marked with `...`):

```xml
  <description>Amount of physical memory, in MB, that can be allocated
    for containers.</description>
  ...
  <!-- <value><%= node[:yarn][:nm_resource_mem] %></value> -->
  ...
  <description>The amount of memory the MR AppMaster needs.</description>
  ...
  <!-- <value><%= node[:yarn][:am_resource_mem] %></value> -->
  ...
  <description>Scheduler minimum memory, in MB, that can be allocated.</description>
  ...
  <description>Scheduler maximum memory, in MB, that can be allocated.</description>
  ...
  <description>Application master options</description>
  ...
  <description>Disable the vmem check that is turned on by default in Yarn.</description>
```
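The excerpt above keeps only the descriptions and a couple of commented-out values, so for reference, here is a sketch of the standard Hadoop 2 yarn-site.xml memory properties those descriptions correspond to. The property names are the stock YARN ones; the values are illustrative assumptions for a ~7.5GB node, not my exact rendered template output:

```xml
<!-- Illustrative values for a ~7.5GB 'Medium' BDE node; tune per workload. -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>6144</value> <!-- leaves ~1.5GB for the OS and NodeManager daemons -->
</property>
<property>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>1024</value>
</property>
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>6144</value>
</property>
<property>
  <name>yarn.app.mapreduce.am.resource.mb</name>
  <value>1024</value>
</property>
<property>
  <name>yarn.app.mapreduce.am.command-opts</name>
  <value>-Xmx819m</value> <!-- ~80% of the AM container -->
</property>
<property>
  <name>yarn.nodemanager.vmem-check-enabled</name>
  <value>false</value> <!-- disable the vmem check that is on by default -->
</property>
```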
Again, mileage will vary depending on your Hadoop workload, but these configuration settings should allow you to utilize the majority of the memory resources within a cluster deployed with the ‘Medium’ sized nodes within BDE.
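As a rough sanity check on how allocations like these fit a 7.5GB ‘Medium’ node, the container math can be sketched in a few lines. The reserved headroom and container sizes here are illustrative assumptions, not the exact values from my templates:

```python
# Illustrative YARN container math for a ~7.5GB BDE 'Medium' node.
node_ram_mb = 7680           # 7.5GB of RAM on the node
reserved_mb = 1536           # assumed headroom for OS + DataNode/NodeManager daemons
nm_resource_mb = node_ram_mb - reserved_mb   # memory YARN may hand out

map_container_mb = 1024      # assumed map container size
reduce_container_mb = 2048   # assumed reduce container size

# JVM heap (-Xmx) is conventionally ~80% of the container,
# leaving room for non-heap JVM overhead.
map_heap_mb = int(map_container_mb * 0.8)
reduce_heap_mb = int(reduce_container_mb * 0.8)

# How many map containers can run on one node at once:
max_map_containers = nm_resource_mb // map_container_mb

print(nm_resource_mb, map_heap_mb, max_map_containers)  # 6144 819 6
```

If the containers are sized so they divide evenly into the NodeManager allocation, no memory is stranded on the node; that is the main reason to do this arithmetic before editing the templates.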
I used the following articles as guidelines when tuning my cluster, along with trial and error.