Apache Flume node in VMware vSphere BDE – Part 1

Apache Flume is an open-source project that helps Hadoop users ingest data into HDFS. It provides a reliable mechanism for moving raw data into HDFS, where the Hadoop cluster can consume it. New Hadoop users often ask, “How can I get my data into HDFS so I can begin taking advantage of it?” A Flume node within the Hadoop cluster is a good first step toward answering that question.
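
For context, a Flume agent is defined in a simple properties file that wires a source to a sink through a channel. Below is a minimal sketch of such a file; the agent name, log path, and NameNode address are hypothetical placeholders, not values taken from any BDE deployment:

agent1.sources = src1
agent1.channels = ch1
agent1.sinks = sink1

# tail an application log (hypothetical path)
agent1.sources.src1.type = exec
agent1.sources.src1.command = tail -F /var/log/app.log
agent1.sources.src1.channels = ch1

# buffer events in memory between source and sink
agent1.channels.ch1.type = memory
agent1.channels.ch1.capacity = 10000

# write events into HDFS, bucketed by day
agent1.sinks.sink1.type = hdfs
agent1.sinks.sink1.channel = ch1
agent1.sinks.sink1.hdfs.path = hdfs://namenode:8020/flume/events/%Y-%m-%d
agent1.sinks.sink1.hdfs.fileType = DataStream
agent1.sinks.sink1.hdfs.useLocalTimeStamp = true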

There is a white paper, written by VMware, that describes how to include an Apache Flume node within a BDE-deployed Hadoop cluster. The steps and use case described in the white paper are adequate for deploying a node and making it available to the cluster. However, as I began thinking about how to offer this as part of a Hadoop-as-a-Service offering, I realized that the Flume node needed to be deployed through BDE at cluster-deployment time, not afterwards. I certainly did not want to walk through the manual steps to configure Flume when all of that information is already available to BDE at the time of the cluster deployment.


Performance Tuning for Hadoop Clusters

As I stated previously, the session I learned the most from at Hadoop Summit covered performance tuning the OS to ensure the cluster gets the most from its infrastructure (slides can be found here). To apply those recommendations, I modified the Chef recipes on the BDE management server so that the updates are installed on all newly deployed clusters.

  • Disable swappiness and increase the proc and file limits in /opt/serengeti/cookbooks/cookbooks/hadoop_cluster/recipes/dedicated_server_tuning.rb (a quick way to verify these settings on a deployed node follows this list):

    # process and file-handle limits; these variables are consumed
    # further down in the same recipe
    ulimit_hard_nofile = 32768
    ulimit_soft_nofile = 32768
    ulimit_hard_nproc  = 32768
    ulimit_soft_nproc  = 32768

    # disable swapping and transparent huge pages, both of which hurt Hadoop
    vm_swappiness = 0
    redhat_transparent_hugepage = "never"
    vm_swappiness_line = "vm.swappiness = 0"

    # write a value into a /proc or /sys tunable, skipping if it is already set
    def set_proc_sys_limit desc, proc_path, limit
      bash desc do
        not_if{ File.exists?(proc_path) && (File.read(proc_path).chomp.strip == limit.to_s) }
        code  "echo #{limit} > #{proc_path}"
      end
    end

    # append a line to a config file, skipping if it is already present
    # (the original used > here, which would have clobbered the file)
    def set_swap_sys_limit desc, file_path, limit
      bash desc do
        not_if{ File.exists?(file_path) && File.read(file_path).include?(limit.to_s) }
        code  "echo '#{limit}' >> #{file_path}"
      end
    end

    # overcommit_memory and overcommit_ratio are defined elsewhere in this recipe
    set_proc_sys_limit "VM overcommit memory", '/proc/sys/vm/overcommit_memory', overcommit_memory
    set_proc_sys_limit "VM overcommit ratio",  '/proc/sys/vm/overcommit_ratio',  overcommit_ratio
    set_proc_sys_limit "VM swappiness", '/proc/sys/vm/swappiness', vm_swappiness
    set_proc_sys_limit "Redhat transparent hugepage defrag", '/sys/kernel/mm/redhat_transparent_hugepage/defrag', redhat_transparent_hugepage
    set_proc_sys_limit "Redhat transparent hugepage enable", '/sys/kernel/mm/redhat_transparent_hugepage/enabled', redhat_transparent_hugepage

    set_swap_sys_limit "SYSCTL swappiness setting", '/etc/sysctl.conf', vm_swappiness_line
  • Remove root-reserved space from the filesystems in /opt/serengeti/cookbooks/cookbooks/hadoop_common/libraries/default.rb (the -m 0 flag below is what drops the default 5% root reservation):

function format_disk_internal()
{
  # parse the kernel version, e.g. "2.6.32" from "2.6.32-358.el6.x86_64"
  kernel=`uname -r | cut -d'-' -f1`
  first=`echo $kernel | cut -d '.' -f1`
  second=`echo $kernel | cut -d '.' -f2`
  third=`echo $kernel | cut -d '.' -f3`
  num=$(( first * 10000 + second * 100 + third ))

  # we cannot use [[ "$kernel" < "2.6.28" ]] here because the Linux kernel
  # has versions like "2.6.5", which a string comparison would misorder
  if [ $num -lt 20628 ];
  then
    # ext4 is only reliable on kernels >= 2.6.28; fall back to ext3 below that.
    # -m 0 sets the root-reserved blocks to 0% so Hadoop can use the full disk
    mkfs -t ext3 -b 4096 -m 0 $1;
  else
    mkfs -t ext4 -b 4096 -m 0 $1;
  fi;
}
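
After a new cluster is deployed with these recipes, it is worth spot-checking a worker node to confirm the settings actually landed. A minimal sketch, assuming a Linux node and a data disk at /dev/sdb1 (the device name is hypothetical):

cat /proc/sys/vm/swappiness                               # expect 0
cat /sys/kernel/mm/redhat_transparent_hugepage/enabled    # expect [never]
ulimit -Hn                                                # expect 32768
grep vm.swappiness /etc/sysctl.conf                       # expect vm.swappiness = 0
tune2fs -l /dev/sdb1 | grep -i "reserved block count"     # expect 0 on a freshly formatted disk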
Once the recipes have been updated, upload them to the Chef server so the changes take effect:
# knife cookbook upload -a
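To confirm the upload, the standard knife tooling can list the cookbooks the Chef server now holds (the versions shown will vary by BDE release):
# knife cookbook list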
With the cookbooks uploaded, BDE will include several of these commonly missed performance enhancements in every new Hadoop cluster it deploys. There are several more configuration changes that can be made, which I will cover in a future post.