VMware OpenStack Heat Resource Plugin Code Review

The post a week ago Monday announced the release of the Heat resource plugin for VMware OpenStack and VMware Big Data Extensions. As I worked through the final issues before releasing the plugin and writing the initial blog post announcing it, I realized it might be helpful to describe how the OpenStack Heat resource plugin works. The initial code for the plugin was rather simplistic and based closely on the Python scripts I wrote to test the Big Data Extensions REST API. Once testing of the resource plugin began, it became evident that far more code would be necessary for it to be successful. This blog post describes the workflows inside the resource plugin, the challenges I encountered, and where improvements could be made in the future. My ultimate hope for the resource plugin is for VMware to take the framework and develop a supported OpenStack Heat resource plugin for everyone to consume.

Basic Workflow

The basic workflow for the resource plugin is not as simple as I had hoped. Due to the integration of VMware OpenStack with VMware NSX-v, several additional API calls were required before the Big Data Extensions instances would be accepted into the fold and allowed to communicate over the network.

[Figure: BDE resource plugin workflow]

You can see from the figure above the number of API calls required to create a new cluster and allow it to communicate with the Big Data Extensions management server.
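
To make that flow easier to follow, here is a heavily simplified sketch of what the create path looks like. This is an outline rather than the shipping code; apart from _open_connection() and _create_nsx_ports(), which are discussed below, the helper names are placeholders I am using purely for illustration.

from heat.engine import resource


class BigDataExtensions(resource.Resource):
    """Outline only: the real plugin also defines the properties_schema shown
    in the next section and the helper methods referenced here."""

    def handle_create(self):
        # 1. Resolve the NSX-v vDS Portgroup behind the Neutron network and
        #    register it with BDE as an available network (covered below).
        self._register_bde_network()

        # 2. Ask the BDE (Serengeti) management server to create the cluster
        #    from its pre-defined JSON specification via the REST API.
        bde_session = self._open_connection()
        self._create_cluster(bde_session)

        # 3. Create a Neutron port for each node's MAC address so NSX-v
        #    passes traffic for the new instances.
        self._create_nsx_ports()

    # _register_bde_network() and _create_cluster() are placeholder names for
    # illustration; _open_connection() and _create_nsx_ports() are the real
    # methods discussed later in this post.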

Properties

The resource plugin requires several properties in order to operate properly. These properties can then be specified within the JSON Heat template. The following properties are set up:

 48     properties_schema = {
 49         BDE_ENDPOINT: properties.Schema(
 50             properties.Schema.STRING,
 51             required=True,
 52             default='bde.localdomain'
 53         ),
 54         VCM_SERVER: properties.Schema(
 55             properties.Schema.STRING,
 56             required=True,
 57             default='vcenter.localdomain'
 58         ),
 59         USERNAME: properties.Schema(
 60             properties.Schema.STRING,
 61             required=True,
 62             default='administrator@vsphere.local'
 63         ),
 64         PASSWORD: properties.Schema(
 65             properties.Schema.STRING,
 66             required=True,
 67             default='password'
 68         ),
 69         CLUSTER_NAME: properties.Schema(
 70             properties.Schema.STRING,
 71             required=True
 72         ),
 73         CLUSTER_TYPE: properties.Schema(
 74             properties.Schema.STRING,
 75             required=True
 76         ),
 77         NETWORK: properties.Schema(
 78             properties.Schema.STRING,
 79             required=True
 80         ),
 81         CLUSTER_PASSWORD: properties.Schema(
 82             properties.Schema.STRING,
 83             required=False
 84         ),
 85         CLUSTER_RP: properties.Schema(
 86             properties.Schema.STRING,
 87             required=True,
 88             default='openstackRP'
 89         ),
 90         VIO_CONFIG: properties.Schema(
 91             properties.Schema.STRING,
 92             required=True,
 93             default='/usr/local/bin/etc/vio.config'
 94         ),
 95         BDE_CONFIG: properties.Schema(
 96             properties.Schema.STRING,
 97             required=False,
 98             default='/usr/local/bin/etc/bde.config'
 99         ),
100         SECURITY_GROUP: properties.Schema(
101             properties.Schema.STRING,
102             required=False,
103             default='9d3ecec8-e0e3-4088-8c71-8c35cd67dd8b'
104         ),
105         SUBNET: properties.Schema(
106             properties.Schema.STRING,
107             required=True
108         )
109     }

NSX-v Integration

The integration with NSX-v was a multi-step process and honestly took me nearly a full week of testing to sort out. I simply could not have worked through all of these steps were it not for the awesome VIO and BDE engineering teams at VMware who offered guidance, suggestions and screenshots on where NSX-v was hooked into the instances (or needed to be). I am hoping to see them at VMworld this year so that I can buy them dinner for all the help they provided.

Before we go any further, let me state that writing code in Python is not yet a strength of mine. In fact, this project was really my first significant foray into Python, so forgive me if the code is horrid looking.

Identifying the vDS Portgroup

The first piece that had to be identified was the vDS Portgroup that NSX-v created for the Neutron network, so that the resource plugin could make that Portgroup available to Big Data Extensions.

289         # determine actual NSX portgroup created
290         # hack - regex in Python is not a strength
291         mob_string = '/mob/?moid=datacenter-2'
292         curl_cmd = 'curl -k -u ' + bde_user + ':' + bde_pass + ' ' + prefix + vcm_server + mob_string
293         grep_cmd = " | grep -oP '(?<=\(vxw).*(?=" + network + "\))' | grep -oE '[^\(]+$'"
294         awk_cmd = " | awk '{print $0 \"" + network + "\"}'"
295         full_cmd = curl_cmd + grep_cmd + awk_cmd
296 
297         p = subprocess.Popen(full_cmd, stdout=subprocess.PIPE, shell=True)
298         (net_uid, err) = p.communicate()
299 
300         # Check to see if network_id is as we expect it
301         if 'vxw' in net_uid:
302             network_id = net_uid
303         else:
304             network_id = "vxw" + net_uid
305 
306         network_id = network_id.rstrip('\n')
307 
308         # Authenticate in a requests.session to the BDE server
309         curr = self._open_connection()
310 
311         # Should check to see if network already exists as available network
312         # This logs a big fat error message in /opt/serengeti/logs/serengeti.log
313         # when the network doesn't exist.
314         header = {'content-type': 'application/json'}
315         api_call = '/serengeti/api/network/' + network
316         url = prefix + bde_server + port + api_call
317         r = curr.get(url, headers=header, verify=False)
318 
319         # Add new network to BDE as an available network if check fails
320         payload = {"name" : network, "portGroup" : network_id, "isDhcp" : "true"}
321         api_call = '/serengeti/api/networks'
322         url = prefix + bde_server + port + api_call
323         r = curr.post(url, data=json.dumps(payload), headers=header, verify=False)
324         logger.info(_("VirtualElephant::VMware::BDE - Network creation status code %s") % r.status_code)

As the code shows, determining the actual name of the vDS Portgroup required taking the network UUID, which was passed into the resource plugin as a reference from the JSON file, and parsing the HTML output from a call to the MOB. Depending on how many Portgroups existed within the vCenter, it also meant dealing with additional HTML that hid the complete list of Portgroups from view unless a user clicked a hyperlink. There is probably a better way of grep'ing and awk'ing in Python, but old habits from decades of writing Perl code are shining through here. You may also notice the MOB link is hard-coded (line 291); this has to change soon, as it is not a sustainable method for using the resource plugin in an enterprise environment.
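
For what it is worth, the same lookup could be done natively in Python with the requests and re modules, avoiding the shell pipeline entirely. The sketch below is untested; it assumes the datacenter MOB page lists the Portgroup in the same (vxw...<network>) form the grep expression above matches, and it still hard-codes the MOB path, so it carries the same limitation just mentioned.

import re
import requests

def lookup_portgroup_name(vcm_server, user, password, network):
    """Untested sketch: scrape the MOB page for the vxw Portgroup name that
    NSX-v created for the given Neutron network UUID."""
    url = 'https://' + vcm_server + '/mob/?moid=datacenter-2'   # still hard-coded
    resp = requests.get(url, auth=(user, password), verify=False)

    # Same intent as the grep/awk pipeline above: pull out 'vxw...<network>'
    match = re.search(r'\((vxw[^)]*' + re.escape(network) + r')\)', resp.text)
    return match.group(1) if match else None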

When writing the plugin, I had hoped this was all that would be needed for the BDE nodes to tie into the NSX Portgroup and pass traffic. Unfortunately, that was not the case, and I also had to figure out how to add the MAC address of every node into NSX-v through Neutron.

Creating Neutron ports for NSX-v

The _create_nsx_ports method is called after the BDE create cluster command is sent to the management server. The method is defined in the code on line 139:

139     def _create_nsx_ports(self):

After getting the necessary OpenStack Neutron credential information, the method starts by gathering the node list from BDE in order to match each node name in BDE with the corresponding virtual machine in vCenter. This is necessary because BDE relies on vCenter to track the MAC addresses of the nodes it creates.

192         # Get the node names for the cluster from BDE
193         curr = self._open_connection()
194         header = {'content-type': 'application/json'}
195         api_call = '/serengeti/api/cluster/' + cluster_name
196         url = prefix + bde_server + port + api_call
197         r = curr.get(url, headers=header, verify=False)
198         raw_json = json.loads(r.text)
199         cluster_data = raw_json["nodeGroups"]

I then iterate through the cluster_data variable, working through each node group serially to gather the MAC addresses.

201         # Open a connection to the vSphere API
202         si = SmartConnect(host=vcm_server, user=admin_user, pwd=admin_pass, port=443)
203         search_index = si.content.searchIndex
204         root_folder = si.content.rootFolder
205         for ng in cluster_data:
206             nodes = ng["instances"]
207             for node in nodes:
208                 logger.info(_("VirtualElephant::VMware::BDE - Creating NSX port for %s") % node.get("name"))
209                 vm_name = node.get("name")
210                 vm_moId = node.get("moId")
211                 port_name = vm_name + "-port0"
212 
213                 # moId is not in the format we need to match
214                 (x,y,z) = vm_moId.split(":")
215                 vm_moId = "'vim." + y + ":" + z + "'"
216 
217                 # Go through each DC one at a time, in case there are multiple in vCenter
218                 for dc in root_folder.childEntity:
219                     content = si.content
220                     objView = content.viewManager.CreateContainerView(dc, [vim.VirtualMachine], True)
221                     vm_list = objView.view
222                     objView.Destroy()
223 
224                     for instance in vm_list:
225                         # convert object to string so we can search
226                         i = str(instance.summary.vm)
227                         if vm_moId in i:
228                             # Matched the VM in BDE and vCenter
229                             logger.info(_("VirtualElephant::VMware::BDE - Match found for BDE node %s") % instance)

Once the match is found, we get the MAC address:

230                             for device in instance.config.hardware.device:
231                                 if isinstance(device, vim.vm.device.VirtualEthernetCard):
232                                     mac_address = str(device.macAddress)
233                                     logger.info(_("VirtualElephant::VMware::BDE - Found MAC address %s") % mac_address)

Now the MAC address can be used to create the Neutron port, adding it properly to NSX-v and enabling the BDE nodes to access the network.

250                 # Create a new port through Neutron
251                 neutron = client.Client('2.0',
252                                         username=os_username,
253                                         password=os_password,
254                                         auth_url=os_auth_url,
255                                         tenant_name=os_tenant_name,
256                                         endpoint_url=os_url,
257                                         token=os_token)
258                 port_info = {
259                                 "port": {
260                                         "admin_state_up": True,
261                                         "device_id": vm_name,
262                                         "name": port_name,
263                                         "mac_address": mac_address,
264                                         "network_id": network_id
265                                 }
266                             }
267                 logger.info(_("VirtualElephant::VMware::BDE - Neutron port string %s") % port_info)
268 
269                 response = neutron.create_port(body=port_info)
270                 logger.info(_("VirtualElephant::VMware::BDE - NSX port creation response - %s") % response)

Conclusion

There remain some timing issues with creating the Neutron ports and getting the BDE nodes to acquire a DHCP IP address. Depending on how quickly the virtual machines are reconfigured and begin loading the OS, the Neutron ports may or may not be in place in time to enable the network traffic. If the OS has already reached the point where it is trying to get a DHCP address, then the virtual machine has to be reset or rebooted to get an address. The issue here is knowing which stage of the BDE deployment process the VM is in. This is the issue I continue to work through, and I have not found a resolution yet.

If you have ideas, please reach out and let me know. I am hoping to solve it through the resource plugin, but ultimately I believe a more elegant solution will involve BDE handling the NSX-v updates itself.
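
In the meantime, one idea I have been toying with (untested, and only a partial answer) is to check the guest state with the same pyVmomi session right after each port is created, and reset the VM if the guest is already up and has therefore likely already tried DHCP. A rough sketch, assuming instance is the pyVmomi VirtualMachine object matched in _create_nsx_ports() and that VMware Tools is running in the BDE node template:

def reset_if_guest_already_booted(instance):
    """Untested sketch: call right after the Neutron port is created. If the
    guest is already running (and has likely already attempted DHCP), reset
    the VM so the OS retries and picks up an address."""
    # guestHeartbeatStatus moves off 'gray' once VMware Tools reports the guest is up
    if str(instance.guestHeartbeatStatus) in ('green', 'yellow'):
        return instance.ResetVM_Task()
    return None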

Building on Project Photon & Project Lightwave

The opportunities for VMware with Project Photon and Project Lightwave are significant. The press release stated:

Designed to help enterprise developers securely build, deploy and manage cloud-native applications, these new open source projects will integrate into VMware’s unified platform for the hybrid cloud — creating a consistent environment across the private and public cloud to support cloud-native and traditional applications. By open sourcing these projects, VMware will work with a broad ecosystem of partners and the developer community to drive common standards, security and interoperability within the cloud-native application market — leading to improved technology and greater customer choice.

What I always find interesting is the lack of discussion around the orchestration and automation of the supporting applications. The orchestration layer does not miraculously appear within a private cloud environment for the developers to consume. The pieces have to be in place in order for developers to consume the services a Mesos cluster offers them. For me, the choice is pretty obvious — expand what the Big Data Extensions framework is capable of providing. I alluded to this thought on Monday when the announcement was made.

Building on that thought, and after seeing a diagram of VMware's vision for how all the pieces tie together, I worked on a logical diagram of what the entire architecture could look like. I believe it looks something like this:

[Figure: Cloud Native Apps (CNA) architecture diagram]

In this environment, Project Photon and Project Lightwave can be leveraged beyond just ESXi. By enhancing the deployment options for BDE to include ESXi on vCloud Air (not shown above), KVM and physical hardware (through Ironic), the story changes slightly. The story now sounds something like this:

For a developer, you choose which Cloud Native application orchestration layer (Mesos, Marathon, Chronos, Cloud Foundry, etc.) you would like and communicate with it over the provided API. For operations, the tenants within the private cloud environment can be deployed using the OpenStack API (with Heat templates). For both sides, SDLC consistency is maintained from development through to production.

Simplicity is achieved by interacting with only two APIs: one for operations and one for development. There is a large amount of work to do here. First, I need to continue improving the OpenStack resource plugin to be production-ready. Second, testing of Project Photon inside BDE needs to take place; I imagine there will be some work to integrate it correctly with the Chef server. Third, the deployment mechanism inside BDE needs to be enhanced to support other options. If the first two are a heavy lift, the last one is going to take a small army, but it is a challenge I am ready to take on!

Ultimately, I feel the gaps in OpenStack around Platform-as-a-Service orchestration can be solved through integrating Big Data Extensions. The framework is more robust and mature than the Sahara offering. The potential is there; it just needs to be executed on.

Thoughts on the VMware Project Photon Announcement

[Figure: Project Photon]

VMware announced a new open source project called Project Photon today. The full announcement can be seen here. Essentially, Project Photon is a lightweight Linux operating system built to support Docker and rkt (formerly Rocket) containers. The footprint is less than 400MB, and it can run containers immediately upon instantiation. I had heard rumors that the announcement today was going to include some sort of OS, but I was not very excited about it until I started reading the material being released prior to the launch event in a few hours.

Having seen the demo and read the material, my mind went into overdrive over the possibilities both open source projects offer organizations that are venturing down the Cloud Native Apps (or Platform 3) road. I believe VMware has a huge opportunity here to cement themselves as the foundation for running robust, secure and enterprise-ready Cloud Native Applications. If you think about the performance gains vSphere 6.0 has provided, and then look at how they are playing in the OpenStack space with both VIO and NSX, the choice becomes obvious.

The area of focus now needs to be on tying all of the pieces together to offer organizations an enterprise-class end-to-end Platform-as-a-Service solution. This is where, I believe, the VMware Big Data Extensions framework should play an enormous part. The framework already allows deployment of Hadoop, Mesos and Kubernetes clusters. Partner the framework with Project Photon and you now have a minimal installation VM that can be launched within seconds with VMFork. From there, the resource plugin Virtual Elephant launched today could be mainstreamed (and improved) to allow for the entire deployment of a Mesos stack, backed by Project Photon, through the open source API OpenStack offers with Heat.

Epic win!

There is still work VMware could do with the Big Data Extensions framework to improve its capabilities, especially with newcomers like SequenceIQ and their Cloudbreak offering providing stiff competition. Expanding BDE to deploy clusters not only within an internal vSphere environment but also to the major public cloud environments, including VMware's own vCloud Air, will be key going forward. The code for BDE is already an open source project; by launching these two new open source projects, VMware is showing the open source community it is serious.

This is a really exciting time in virtualization and I just got even more excited today!

OpenStack Resource Plugin for VMware Big Data Extensions

The final challenge in offering a full-featured Platform 3 private cloud utilizing OpenStack and VMware technologies has been the Platform-as-a-Service layer. VMware has many tools for helping individuals and companies offer a Platform-as-a-Service layer — vRealize Automation, vRealize Orchestrator and VMware Big Data Extensions. However, with the release of VMware Integrated OpenStack — and OpenStack in general — there is a disconnect between the Infrastructure-as-a-Service and Platform-as-a-Service layers. OpenStack has a few projects — like Sahara — to bridge the gap, but they are immature when compared to VMware Big Data Extensions. The challenge for me became figuring out a method for integrating the two so that an OpenStack offering built on top of VMware vSphere technologies could offer up a robust Platform-as-a-Service offering.

It has taken a fair bit of time, effort and testing, but I am pleased to announce the alpha release of the Virtual Elephant Big Data Extensions resource plugin for OpenStack Heat. The resource plugin enables an OpenStack deployment to utilize and deploy any cluster application the VMware Big Data Extensions management server is configured to deploy. The resource plugin accomplishes this by making REST API calls to the VMware Big Data Extensions management server to deploy the pre-defined JSON cluster specification files. The addition of the resource plugin to an OpenStack environment greatly expands the capabilities of the environment, without requiring an infrastructure engineer or architect to start from scratch.
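
To give a feel for what those REST calls look like, here is a trimmed-down sketch using the Python requests library. The login endpoint shown is an assumption on my part (the management server fronts the Serengeti API with a Spring Security form login, which may differ in your build); the cluster status call mirrors the one used inside the plugin, and bde.localdomain with port 8443 matches the defaults used elsewhere in this post.

import json
import requests

BDE_URL = 'https://bde.localdomain:8443'

def open_connection(username, password):
    """Sketch of what the plugin's _open_connection() has to do. The login
    endpoint below is assumed (Spring Security form login) and may differ."""
    session = requests.Session()
    session.post(BDE_URL + '/serengeti/j_spring_security_check',
                 data={'j_username': username, 'j_password': password},
                 verify=False)
    return session

def get_cluster(session, cluster_name):
    """Fetch a cluster definition/status; mirrors the GET used in the plugin."""
    header = {'content-type': 'application/json'}
    r = session.get(BDE_URL + '/serengeti/api/cluster/' + cluster_name,
                    headers=header, verify=False)
    return json.loads(r.text)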

The resource plugin itself requires several modifications to be made to the VMware Big Data Extensions management server. One challenge I encountered initially was the lack of functionality built into the REST API. I received assistance from one of the VMware Big Data Extensions engineers, Jesse, who modified several of the Java JAR files to add the features necessary. Writing the resource plugin would have been much more difficult were it not for several people at VMware, including Jesse, Andy and several of the VIO engineering team, who assisted me in my efforts. A big THANK YOU to each of them!

Disclaimer: As stated, the resource plugin is considered in an alpha state. It can be rather temperamental, but I wanted to get the code out there and (hopefully) get others excited for the possibilities.

Environment Notes

Obviously, there may be some differences between my OpenStack environment and your own. The work I have done has been focused around VMware technologies, including Big Data Extensions. The other technologies the resource plugin relies upon are VMware Integrated OpenStack and VMware NSX. That is not to say the resource plugin will not work if you are using other OpenStack technologies; I mention it so that there is no misunderstanding about the environment for which the plugin has been written.

Fortunately, the entire environment I designed the resource plugin for can be referenced on the OpenStack website, as it is the official reference architecture for VMware Integrated OpenStack that the OpenStack Foundation has adopted.

http://www.openstack.org/enterprise/virtualization-integration

A final note before I discuss the installation and configuration required for the resource plugin: this level of modification to VMware Big Data Extensions will most likely put it into an unsupported state if you have issues and try to contact VMware support.

Installation Guide

In order to begin using the resource plugin, the VMware Big Data Extensions management server will need to be modified. Depending on how many of the additional cluster deployments you have integrated from the Virtual Elephant site, additional steps may be required to enable deployments of every cluster type. The resource plugin, REST API test scripts and the updated Java files can be downloaded from the Virtual Elephant GitHub site. Once you have checked out the repository, perform the following steps within your environment.

Note: I am using VMware Integrated OpenStack and the paths reflect that environment. You may need to adjust the commands for your implementation.

Copy the resource plugin (BigDataExtensions.py) to the OpenStack controller(s):

$ scp plugin/BigDataExtensions.py user@controller1.localdomain:/usr/lib/heat
$ ssh user@controller1.localdomain "service heat-engine restart"
$ ssh user@controller1.localdomain "grep VirtualElephant /var/log/heat/heat-engine.log"
$ scp plugin/BigDataExtensions.py user@controller2.localdomain:/usr/lib/heat
$ ssh user@controller2.localdomain "service heat-engine restart"
$ ssh user@controller2.localdomain "grep VirtualElephant /var/log/heat/heat-engine.log"

Copy the VIO config file to the OpenStack controller(s) and modify it for your environment:
$ scp plugin/vio.config user@controller1.localdomain:/usr/local/etc/
$ scp plugin/vio.config user@controller2.localdomain:/usr/local/etc/

Copy the updated Java files to the Big Data Extensions management server:

$ scp java/cluster-mgmt-2.1.1.jar user@bde.localdomain:/opt/serengeti/tomcat6/webapps/serengeti/WEB-INF/lib/
$ scp java/commons-serengeti-2.1.1.jar user@bde.localdomain:/opt/serengeti/tomcat6/webapps/serengeti/WEB-INF/lib/
$ scp java/commons-serengeti-2.1.1.jar user@bde.localdomain:/opt/serengeti/cli/conf/
$ ssh user@bde.localdomain "service tomcat restart"

If using VMware Integrated OpenStack, the curl package is required:

$ ssh user@controller1.localdomain "apt-get -y install curl"
$ ssh user@controller2.localdomain "apt-get -y install curl"

If you do not have a resource pool definition on the Big Data Extensions management server for the OpenStack compute cluster, you will need to create it now.

$ ssh root@bde.localdomain
# java -jar /opt/serengeti/cli/serengeti-cli-2.1.1.jar
serengeti> connect --host bde.localdomain:8443
serengeti> resourcepool list
serengeti> resourcepool add --name openstackRP --vccluster VIO-CLUSTER-1

Note: If you use the resource pool name ‘openstackRP’, no further modifications to the JSON file are required. That value is the default for the resource plugin variable CLUSTER_RP, but it can be overridden in the JSON file.

At this point, the OpenStack controller(s) where Heat is running have the resource plugin installed, and you should have seen an entry stating it was registered when you restarted the heat-engine service. In addition, the Big Data Extensions management server has the required updates that allow the REST API to support the resource plugin. The next step before the plugin can be consumed is to copy or create JSON files for the cluster types you intend to support within the environment. The GitHub repository includes an example JSON file that can be used. One of the updates to the management server included logic to look in the /opt/serengeti/conf directory for these JSON files.

Copy the example mesos-default-template-spec.json file:
$ scp json/mesos-default-template-spec.json user@bde.localdomain:/opt/serengeti/conf/

Heat Template Configuration

When creating the JSON (or YAML) template for OpenStack Heat to consume, there are several key parameters that will be required. As this is the initial release of the resource plugin, there are additional changes planned for the future, including a text configuration file placed on the controllers to hide several of these parameters.

Sample JSON entry with required parameters:

 68         "Mesosphere-Tenant-0" : {
 69             "Type" : "VirtualElephant::VMware::BDE",
 70             "Properties" : {
 71                 "bde_endpoint" : "bde.localdomain",
 72                 "vcm_server" : "vcenter.localdomain",
 73                 "username" : "administrator@vsphere.local",
 74                 "password" : "password",
 75                 "cluster_name" : "mesosphere_tenant_01",
 76                 "cluster_type" : "mesos", 
 77                 "cluster_net" : { "Ref" : "mesos_network_01" }
 78             }
 79         }

You can see from the example above why parameters like 'bde_endpoint', 'vcm_server', 'username' and 'password' should be hidden from the consumers of the OpenStack Heat orchestration.

Once you have a JSON file defined, it can be deployed using OpenStack Heat, either through the user interface or the API. The deployment will then proceed, and you can view the topology of the stack within your environment. If you use the JSON provided in GitHub (mesosphere_stack.json), it will look like the graphic below.

[Figure: OpenStack topology view of the Mesosphere stack]
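
From the command line, launching and inspecting the same stack looks something like the following. This assumes the legacy heat CLI; the exact client syntax varies by OpenStack release.

$ heat stack-create mesosphere-stack -f mesosphere_stack.json
$ heat stack-list
$ heat resource-list mesosphere-stack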

Congratulations — you have now extended your OpenStack environment to be able to support robust cluster deployments using the VMware Big Data Extensions framework!

Future Enhancements

The resource plugin is not yet fully baked, and there are several features I would still like to implement in the future. Currently, the resource plugin has the necessary code to deploy and delete clusters when initiated through OpenStack Heat. Features I will be working on extending in the future include:

  1. Report cluster deployment status back to OpenStack; currently it is fire-and-forget.
  2. Assign floating IPs to the cluster.
  3. Ability to scale out clusters deployed with OpenStack Heat.
  4. Enhance the Big Data Extensions REST API to utilize the JSON specification files in /opt/serengeti/www/specs/ versus the segregated JSON files it uses today.
  5. Support for prescribed VM template sizes (e.g. m1.small, m1.medium, m1.large).
  6. Enhanced error detection for REST API calls made within the resource plugin.
  7. Cluster password support.
  8. Check and abide by the Big Data Extensions naming schema for cluster names.
  9. Incorporate OpenStack key pairs with cluster nodes.

Closing Notes

There is always more work to be done on a project such as this, but I am excited to have the offering available at this time — even in its limited alpha state. Being able to bridge the gap between the Infrastructure-as-a-Service and Platform-as-a-Service layers is a key requirement for private cloud providers. The challenges I have faced (along with my coworkers) supporting our current environment and designing/implementing our next-generation private cloud have brought this reality to the forefront. In order to provide an AWS-like service offering, bridging the gap between the layers was an absolute necessity and I am extremely grateful for the support I have received from my peers in helping to solve this problem.

Look for an upcoming post going through the resource plugin code, highlighting the integration that was necessary between Big Data Extensions and NSX-v. In the meantime, reach out to me on Twitter (@chrismutchler) if you have questions or comments on the resource plugin, the OpenStack implementation or VMware Big Data Extensions.

Sneak-Peek: OpenStack Deployments of Apache Mesos Cluster with VMware Big Data Extensions

The work is not yet complete, but I have made a significant amount of progress integrating VMware Big Data Extensions with OpenStack by allowing deployments to occur through the Heat API. The primary objective is to allow a developer (end-user) to deploy any cluster BDE supports through a Heat template within a micro-segmented network. The resource plugin for Heat then hides the fact that the deployment itself is being handled by VMware Big Data Extensions.

This allows a small piece of JSON to be inserted into the OpenStack Heat template that looks like this:

 50         "Mesosphere-Cell-0" : {
 51             "Type" : "VirtualElephant::VMware::BDE",
 52             "Properties" : {
 53                 "bde_endpoint" : "bde.localdomain",
 54                 "username" : "administrator@vsphere.local",
 55                 "password" : "password",
 56                 "cluster_name" : "mesos_heat_api_11",
 57                 "cluster_type" : "mesos",
 58                 "cluster_net" : "mgmtNetwork"
 59             }
 60         }

A topology view of the stack then looks like this:

[Figure: Topology view of the Mesos stack in OpenStack Heat]

As I said, it is not fully complete right now; you'll notice the Mesos cell is off by itself on the right side of the screen capture. The code to attach it to the micro-segmented network created in the JSON has not been written yet. But after struggling with Python the last few days (Perl is my preferred language) and working through issues with Heat itself, I made significant progress and wanted to share it with everyone.

As soon as it is ready, I’ll be posting all the code in my GitHub repo and sharing all the pieces that went into writing the resource plugin for Heat.