
Nicira (VMWare) NVP/NSX: A Python API and Toolkit


Over the past few years at EMC we have regularly used NVP/NSX in some of our lab environments, and over the years that has meant upgrading, manipulating, and overhauling our network architecture with NVP/NSX every so often. We have been using a Python library developed internally by myself and Patrick Mullaney, with some help from Erik Smith in the early days, and I wanted to share some of its tooling.


*Disclaimer: By no means does EMC make any representation or take on any obligation with regard to the use of this library, NVP, or NSX in any way, shape, or form. This post is the thoughts of the author and the author alone. The examples and API calls have mainly been tested against NVP up to 3.2, before it became NSX; most calls should still work against the NSX API, except for any net-new API endpoints that have not been added. Another note: this API is not fully featured. It is merely a tool that we use in the lab for what we need, and it can be extended and expanded however you like. (Update 4/14/15: see the Open-source section toward the bottom for more information.) I'll be working on open-sourcing this library and will try to fill in some of the missing features as I go along. There are two main uses for this Python library:

1. Manage NVP/NSX Infrastructure

Managing, automating, and orchestrating the setup of NVP/NSX components was a must for us. We wanted to be able to spin environments up and down on the fly, and to manage upgrading new components when we wanted to upgrade.

The library allows you to remotely set up Hypervisor Nodes, Gateway Nodes, Service Nodes, etc. (examples below).

2. Python bindings to NVP/NSX REST API

Having Python bindings for some of the investigative projects we have been working on was our first motivation: A) because we develop in and are familiar with Python, and B) we have been working with OpenStack, so it just made sense.

With the library you can list networks, attach ports, query nodes, etc. (Examples are given in the test.py example below.)


Manage NVP/NSX Infrastructure

Let me preface this by saying this isn't a complete M&O / DevOps / bare-metal provisioning service for NVP or NSX components. It does, however, manage setting up much of the logical state with as few hands-on CLI commands as possible, like setting up OVS certificates, creating integration bridges, and contacting the Control Cluster to register the node as a Hypervisor Node, Service Node, etc. Open vSwitch is the only thing that does need to be installed, along with any other Nicira debs that come with their software switch / hypervisor node offering. You just need to install the debs and then let the configuration tool take over. (I am running this on Ubuntu 14.04 in this demo.)

sudo dpkg --purge openvswitch-pki 
sudo dpkg -i openvswitch-datapath-dkms_1.11.0*.deb 
sudo dpkg -i openvswitch-common_1.11.0*.deb openvswitch-switch_1.11.0*.deb 
sudo dpkg -i nicira-ovs-hypervisor-node_1.11.0*.deb

Once OVS is installed on the nodes that you will be adding to your NVP/NSX architecture, you can use the library and CLI tools to set up, connect, and register the virtual network components. This is done by describing the infrastructure components in JSON form from a single control host, or even an NVP/NSX Linux host. We thought about using YAML here, but after everything we chose JSON; sorry if you're a YAML fan. The first thing the tooling needs is some basic control cluster information: the IP of the controller, the username and password, and the port to use for authentication. Next, you can describe different types of nodes for setup and configuration. These nodes can be:

SERVICE, GATEWAY, or COMPUTE
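
As a rough illustration of these JSON descriptions, here is a minimal sketch for the control cluster information and a COMPUTE (hypervisor) node, generated from Python. The key names and values are hypothetical assumptions for illustration only; the real schema is defined by the library itself.

import json

# Hypothetical sketch of the JSON descriptions the toolkit consumes.
# All key names and values here are illustrative assumptions.
control_cluster = {
    "ip": "10.0.0.10",        # NVP/NSX controller IP
    "port": 443,              # port used for API authentication
    "username": "admin",
    "password": "secret"
}

compute_node = {
    "name": "ubuntu-hv-01",             # the name referenced when provisioning
    "type": "COMPUTE",
    "data_network_interface": "eth1",   # SERVICE/GATEWAY nodes carry the
                                        # mgmt_rendezvous_* flags instead
    "integration_bridge": "br-int",
    "transport_zone": "My_Transport_Zone"
}

with open("nodes.json", "w") as f:
    json.dump({"cluster": control_cluster, "nodes": [compute_node]}, f, indent=2)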

The difference with a SERVICE or GATEWAY node is that it will have:

"mgmt_rendezvous_server" : true|false 
"mgmt_rendezvous_client" : true|false

respectively, instead of a "data network interface". Service and Gateway nodes need this metadata in their configuration JSON to be created correctly. I don't have examples of using this (sorry) and am focusing on Hypervisor Nodes, since that is what we find ourselves creating and reconfiguring most often; the sketch above gives the rough idea of such a config. Once this configuration is complete, you can reference the "name" of the compute node, and the toolkit will provision the COMPUTE node into the NVP/NSX system/cluster. If the hypervisor node is remote, the toolkit will use remote sudo over SSH, so you will need to enter a user/pass when prompted. The command runs through the whole process needed to set up the node, and at the end of it you should have a working hypervisor node ready to go (scroll down; you can also verify the setup by looking at the UI). When you run it remotely, you'll see output similar to the below, running through the sequence of setting up the hypervisor node: keys, OVS calls, and contacting the NVP/NSX cluster to register the node using the PKI.

Sending... rm -f /etc/openvswitch/vswitchd.cacert
Sending... mkdir -p /etc/openvswitch
Sending... ovs-pki init --force
Sending... ovs-pki req+sign ovsclient controller --force
Sending... ovs-vsctl -- --bootstrap set-ssl /etc/openvswitch/ovsclient-privkey.pem /etc/openvswitch/ovsclient-cert.pem /etc/openvswitch/vswitchd.cacert
Sending... cat /etc/openvswitch/ovsclient-cert.pem
printing status
0
[sudo] password for labadmin:
Certificate:
    Data:
        Version: 1 (0x0)
        Serial Number: 12 (0xc)
    Signature Algorithm: md5WithRSAEncryption
        Issuer: C=US, ST=CA, O=Open vSwitch, OU=controllerca, CN=OVS controllerca CA Certificate (2014 Oct 10 10:56:26)
        Validity
            Not Before: Oct 10 18:15:57 2014 GMT
            Not After : Oct  7 18:15:57 2024 GMT
        Subject: C=US, ST=CA, O=Open vSwitch, OU=Open vSwitch certifier, CN=ovsclient id:4caa4f75-f7b5-4c23-8275-9e2aa5b43221
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                Public-Key: (2048 bit)
                Modulus:
                    00:c2:b9:7d:9f:1d:00:78:4b:c0:0d:8a:52:d5:61:
                    12:02:d3:01:7d:ea:f6:22:d4:7d:af:6f:c1:91:40:
                    7f:e1:a1:a1:3d:2d:3f:38:f6:37:f6:83:85:3c:62:
                    b4:cc:60:40:d9:d3:61:8d:26:96:94:47:57:2d:fa:
                    53:ce:48:84:4c:a2:01:84:d8:11:61:de:50:f9:b5:
                    ff:9c:4b:6e:9c:df:84:48:f1:44:ec:e0:fd:e4:a1:
                    b6:0b:5c:23:59:5c:1d:cf:46:44:19:14:1c:92:a1:
                    28:52:19:ab:b8:5e:23:17:7a:b8:51:af:bc:48:1c:
                    d2:d8:58:67:61:a3:e6:51:f5:a0:57:a9:16:36:e8:
                    16:35:f6:20:3c:51:f8:4c:82:51:74:8b:48:90:e4:
                    dc:7a:44:f2:2b:d7:68:81:f6:9e:df:15:14:80:27:
                    77:e1:24:36:ac:fd:79:c2:03:64:1a:c0:4a:5b:b7:
                    dd:d3:fb:ca:20:13:f7:09:9e:03:f8:b0:fe:14:e7:
                    c9:7e:aa:1d:79:c3:c1:c2:a6:c3:68:cf:ff:ec:a4:
                    9a:5f:d4:f4:df:c2:e6:1d:a2:63:68:f2:d1:1d:00:
                    19:18:18:93:72:37:9e:b9:a4:2b:23:fd:83:ab:40:
                    52:0d:2e:9c:08:82:50:c0:1b:ec:e9:40:fc:1d:74:
                    1d:f3
                Exponent: 65537 (0x10001)
    Signature Algorithm: md5WithRSAEncryption
         85:fe:4b:97:77:ce:82:32:67:fc:74:12:1e:d4:a5:80:a3:71:
         a5:be:31:6c:be:89:71:fb:18:9f:5f:37:eb:e7:86:b8:8d:2e:
         0c:d7:3d:42:4e:14:2c:63:4e:9b:47:7e:ca:34:8d:8e:95:e8:
         6c:35:b4:7c:fc:fe:32:5b:8c:49:64:78:ee:14:07:ba:6d:ea:
         60:e4:90:e5:c0:29:c2:bb:52:38:2e:b5:38:c0:05:c2:a2:c9:
         8b:d0:08:27:57:ec:aa:14:51:e2:a1:16:f6:cf:25:54:1f:64:
         c8:37:f0:31:22:c9:cb:9d:a1:5a:76:b8:3e:77:95:19:5e:de:
         78:ce:56:0d:60:0a:b2:e6:b8:f5:3a:30:63:68:24:27:e2:95:
         dc:09:9e:30:ff:ea:51:14:9c:55:bd:31:3c:0c:a2:26:8e:71:
         fc:ec:85:56:a7:9c:e0:27:1b:ad:d4:e8:35:f7:da:ee:2a:55:
         90:9b:bd:d1:db:b9:b8:f9:3a:f6:95:94:c2:34:32:ea:27:3f:
         f8:46:c0:40:c2:0c:32:45:0d:82:14:c2:f6:a8:3e:28:33:9b:
         64:79:c0:2e:06:7a:1b:a3:56:9e:16:70:a0:3c:57:95:cf:e1:
         b2:7f:97:42:c9:82:f0:3c:1e:77:07:86:60:c8:00:a6:c8:96:
         94:26:94:e3
-----BEGIN CERTIFICATE-----
MIIDjTCCAnUCAQwwDQYJKoZIhvcNAQEEBQAwgYkxCzAJBgNVBAYTAlVTMQswCQYD
VQQIEwJDQTEVMBMGA1UEChMMT3BlbiB2U3dpdGNoMRUwEwYDVQQLEwxjb250cm9s
bGVyY2ExPzA9BgNVBAMTNk9WUyBjb250cm9sbGVyY2EgQ0EgQ2VydGlmaWNhdGUg
KDIwMTQgT2N0IDEwIDEwOjU2OjI2KTAeFw0xNDEwMTAxODE1NTdaFw0yNDEwMDcx
ODE1NTdaMIGOMQswCQYDVQQGEwJVUzELMAkGA1UECBMCQ0ExFTATBgNVBAoTDE9w
ZW4gdlN3aXRjaDEfMB0GA1UECxMWT3BlbiB2U3dpdGNoIGNlcnRpZmllcjE6MDgG
A1UEAxMxb3ZzY2xpZW50IGlkOjRjYWE0Zjc1LWY3YjUtNGMyMy04Mjc1LTllMmFh
NWI0MzIyMTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMK5fZ8dAHhL
wA2KUtVhEgLTAX3q9iLUfa9vwZFAf+GhoT0tPzj2N/aDhTxitMxgQNnTYY0mlpRH
Vy36U85IhEyiAYTYEWHeUPm1/5xLbpzfhEjxROzg/eShtgtcI1lcHc9GRBkUHJKh
KFIZq7heIxd6uFGvvEgc0thYZ2Gj5lH1oFepFjboFjX2IDxR+EyCUXSLSJDk3HpE
8ivXaIH2nt8VFIAnd+EkNqz9ecIDZBrASlu33dP7yiAT9wmeA/iw/hTnyX6qHXnD
wcKmw2jP/+ykml/U9N/C5h2iY2jy0R0AGRgYk3I3nrmkKyP9g6tAUg0unAiCUMAb
7OlA/B10HfMCAwEAATANBgkqhkiG9w0BAQQFAAOCAQEAhf5Ll3fOgjJn/HQSHtSl
gKNxpb4xbL6JcfsYn1836+eGuI0uDNc9Qk4ULGNOm0d+yjSNjpXobDW0fPz+MluM
SWR47hQHum3qYOSQ5cApwrtSOC61OMAFwqLJi9AIJ1fsqhRR4qEW9s8lVB9kyDfw
MSLJy52hWna4PneVGV7eeM5WDWAKsua49TowY2gkJ+KV3AmeMP/qURScVb0xPAyi
Jo5x/OyFVqec4CcbrdToNffa7ipVkJu90du5uPk69pWUwjQy6ic/+EbAQMIMMkUN
ghTC9qg+KDObZHnALgZ6G6NWnhZwoDxXlc/hsn+XQsmC8DwedweGYMgApsiWlCaU
4w==
-----END CERTIFICATE-----
Sending... ovs-vsctl set-manager ssl:10.*.*.*
Sending... ovs-vsctl br-set-external-id br-int bridge-id br-int
Sending... ovs-vsctl -- --may-exist add-br br-int
Sending... ovs-vsctl -- --may-exist add-br br-eth1
Sending... ovs-vsctl -- --may-exist add-port br-eth1 eth1

After setup, you can verify that the node is configured by logging into it (if it's remote) and running the ovs-vsctl show command. This will show you all the configuration that was done to set up the hypervisor node: managers and controllers are set up and connected, and bridge interfaces are created and ready to connect tunnels. We can also verify that the hypervisor node is set up correctly by looking at the NSX/NVP Manager dashboard, which shows that the Ubuntu node is now connected, using the correct Transport Zone, and is up and ready to go. That's the end of what I wanted to show as far as remote configuration of NVP/NSX components goes. We use this a lot when setting up our OpenStack environments and when we add or remove Compute nodes that need to talk on the Neutron/virtual network. Again, there are some tweaks and cleanups I need to address, but hopefully I can have this available on a public repo soon.

Python bindings to NVP/NSX REST API

Now I want to get into using the toolkit's NVP/NSX Python API. This is an API written in Python that wraps the REST APIs exposed by NVP/NSX. The main class of the API takes service libraries as arguments and calls the init method on each of them to instantiate their provided functions. For instance, "ControlServices" focuses on control cluster API calls rather than the logical virtual network components.

from nvp2 import NVPClient
from transport import Transport
from network_services import NetworkServices
from control_services import ControlServices
import logging

class NVPApi(Transport, NetworkServices, ControlServices):
    # Shared, authenticated NVP/NSX client and logger used by every service mixin
    client = NVPClient()
    log = logging

    def __init__(self, debug=False, verbose=False):
        self.client.login()
        if verbose:
            self.log.basicConfig(level=self.log.INFO)
        if debug:
            self.log.basicConfig(level=self.log.DEBUG)
        # Initialize each service library with the shared client and logger
        Transport.__init__(self, self.client, self.log)
        NetworkServices.__init__(self, self.client, self.log)
        ControlServices.__init__(self, self.client, self.log)

Some examples of how to instantiate the library class and use the provided service functions for NVP/NSX are below. This is not a full-featured list, and for the most part they target NVP 3.2 and below. We are in the process of making sure it works across the board; so far the NVP/NSX APIs have been pretty good about backward compatibility.

from api.nvp_api import NVPApi

# Instantiate an API object
# api = NVPApi()
api = NVPApi(debug=True)
# api = NVPApi(debug=True, verbose=True)

# Print the available functions of the API
# print dir(api)

# See if an existing "Hypervisor Node" is registered
print api.tnode_exists("My_Hypervisor_Node")

# Check for existing Transport Zones
print api.get_transport_zones()

# Check that a specific Transport Zone exists
print api.check_transport_exists("My_Transport_Zone")

# Check Control Cluster nodes and their roles
nodes = api.get_control_cluster()
for node in nodes:
    print "\n"
    print node['display_name']
    for role in node['roles']:
        print "%s" % role['role']
        print "%s\n" % role['listen_addr']
print "\n"

# Check stats for an interface on a transport node
stats = api.get_interface_statistics("80d9cb27-432c-43dc-9a6b-15d4c45005ee",
                                     "breth2+")
print "Interface %s" % ("breth2")
for stat, val in stats.iteritems():
    print "%s -- %s" % (stat, val)
print "\n"

I hope you enjoyed reading through some of this; it has really gone from a bunch of scripts no one could understand to something halfway useful. There are definitely better ways to do this, and by no means is this a great solution; it's just one we had lying around that I still use from time to time. Managing the JSON state/descriptions of the nodes was the hardest part once multiple people started using this. We wound up managing the files with Puppet and also using Puppet to install the base Open vSwitch software for new NVP/NSX components on Ubuntu servers. Happy Friday, Happy Coding, Happy Halloween!

(Update): Open-source

I wanted to update the post with information about the library and how to access it. The great folks at EMC Code ( http://emccode.github.io/ ) have added this project to the #DevHigh5 program, and it is available on their site. Look at the right side, click the DevHigh5 tag, and look for Nicira NVP/NSX Python under the projects.


Information can be seen by hovering over the project.


The code can be viewed on GitHub and on PyPI, and it can be installed via pip install nvpnsxapi.


https://pypi.python.org/pypi/nvpnsxapi

Enjoy 🙂


Linux Containers: Parallels, LXC, OpenVZ, Docker and More

The State of Containers

Why should we care?


Background

What’s a container?

            A container (Linux container) at its core is an allocation, partitioning, and assignment of host (compute) resources such as CPU shares, network I/O, bandwidth, block I/O, and memory (RAM), so that kernel-level constructs may jail off, isolate, or "contain" these protected resources, letting specific running services (processes) and namespaces use them exclusively without interfering with the rest of the system. These processes could be lightweight Linux hosts based on a Linux image, multiple web servers and applications, a single subsystem like a database backend, or a single process such as echo "Hello", with little to no overhead.

            Commonly known as "operating system-level virtualization" or "OS virtual environments", containers differ from hypervisor-level virtualization. The main difference is that the container model eliminates the hypervisor layer and the redundant OS kernels, binaries, and libraries typically needed to run workloads in a VM.

Hypervisor-based


Some of the main business drivers and strategic reasons to use containers are:

  • Ability to easily run and accommodate legacy applications
  • Performance benefits of running on bare-metal, no overhead of hypervisor
  • Higher density and utilization for resources in the datacenter
  • Adoption of new technologies is accelerated by putting them in isolated, secure containers
  • Reduce "shipping" pains; code is easily streamlined to customers, fast.

Container-based


            Containers have been around for over 15 years, so why the influx of attention now? As compute hardware architectures become more elastic, potent, and dense, it becomes possible to run many applications at scale while lowering TCO, eliminating the redundant kernel and guest OS code typically used in a hypervisor-based deployment. That alone is attractive, but it also brings benefits such as eliminating performance penalties, increasing visibility, and decreasing the difficulty of debugging and management.

(image credit: Jerome Petazzoni from dotCloud)

http://www.socallinuxexpo.org/sites/default/files/presentations/Jerome-Scale11x%20LXC%20Talk.pdf

            Because containers share the host kernel, binaries, and libraries, they can be packed even more densely than typical hypervisor environments can pack VMs.

(image credit: Jerome Petazzoni from dotCloud)

http://www.socallinuxexpo.org/sites/default/files/presentations/Jerome-Scale11x%20LXC%20Talk.pdf

Solutions and Products

            Companies such as RedHat, Sun, Canonical, IBM, HP, Docker, and others have adopted or built slightly different solutions to Linux containers. Below is a brief overview of the different solutions that deal with containers and/or operating system-level virtualization.

Container Solutions

  • LXC (Linux Containers)

o   0.1.0 released in 2008

o   Works with general vanilla Linux kernels off the shelf.

o   GNU GPLv2 License

o   Used as a “container engine” in Docker

o   Google App Engine utilizes an LXC-like technology

o   Parallels Virtuozzo utilizes LXC

o   Rackspace Cloud Databases utilize LXC

o   Heroku (Application Deployment Platform) utilizes LXC

  • Docker

o   Developed by Docker Inc. (formerly dotCloud)

o   Apache 2.0 License

o   Docker is really an orchestration solution built on top of the Linux kernel: namespaces, cgroups, chroot, and file system constructs. Docker originally chose LXC as the "engine" but recently developed their own solution called "libcontainer".

o   Solutions:

  • “Decker” – Modified version of the engine works with Cloud Foundry to deploy application workloads
  • Openshift
  • AWS Elastic Beanstalk Containers
  • Openstack Solum
  • Openstack Nova

  • OpenVZ

o   Supported by Parallels Inc. (started back in 1999 as SWsoft, which became Parallels in 2004)

o   Shares many of the same developers as LXC, but was developed earlier on; LXC is a derivation of OpenVZ for the mainline kernel.

o   GNU GPL v2 License

o   Runs on a patched Linux kernel (specific kernel) or 3.x with reduced feature set

o   Live migration abilities (checkpointing) via CRIU (criu.org)

o   Rackspace Cloud Databases also utilize OpenVZ

  • Warden

o   Developed by Cloud Foundry as an orchestration layer to create application containers. They initially said working with LXC was "too troublesome".

o   (Comparison) Warden and Docker both orchestrate containers by controlling subsystems like Linux cgroups, namespaces, and security.

  • Solaris Containers

o   A "non-Linux" containerization mechanism, differing from "true" Linux systems based on the mainline kernel

o   Utilizes “Zones” as a construct for partitioning system resources. Zones are an enhanced chroot mechanism that adds additional features like ones included in ZFS that allow snapshotting and cloning.

o   Zones are commonly compared to FreeBSD Jails

  • (Free) BSD Jails

o   Also a "non-Linux" containerization mechanism, differing from "true" Linux systems based on the mainline kernel

o   Also an “enhanced chroot”-like mechanism where not only does it use chroot to segregate the file system but it also does the same for users, processes and networks.

  • Linux V-Server

o   GNU GPL v2

o   Patched kernel to enable os-level virtualization

o   Partitions of CPU, Memory, Network, Filesystem are called “Security Contexts” which uses a chroot-like mechanism

o   Utilizes CoW(Copy on Write) file systems to save storage space.

  • Workload Partitions

o   AIX implementation that provides resource isolation like container technologies do in Linux

  • Parallels (SWsoft) Virtuozzo Containers

o   Originally developed by SWsoft, Parallels utilizes Linux namespace and cgroup technologies in the kernel to provide isolation.

o   Virtuozzo Containers became OpenVZ, which then became LXC for the mainline Linux kernel.

  • HP-UX Containers

o   HP’s Unix variant of containers. Like AIX WPARS this is a container technology tailored toward Unix platforms.

  • WPARS

o   Developed by IBM, this container technology is aimed at the AIX (Unix) based server OS platform.

o   Provides os-level environment isolation like other container models do.

o   Live application mobility (migration)

  • iCore Virtual Accounts

o   A Free Windows XP container solution. Provides os-level isolated computing environments for XP

  • Sandboxie

o   Developed by Invincea for Windows XP

o   “Sandboxes”, like a container, are created for isolated environments.

Related tools and mechanisms

  • Sysjail

o   A userspace virtualization tool developed for OpenBSD systems, much like the FreeBSD "jail"

  • Chroot

o   Kernel-level function that allows a program to run on a host system within its own root filesystem.

  • Cgroups

o   Developed in 2006, used initially by Google Search

o   Unified in linux kernel by 2013.

  • Namespaces

o   Construct that allows partitioning and isolation of different resources so that they are only available to the processes in the container. Namespaces are Network (NET), UTS (hostname), PID (process ID), MNT (mount), IPC, and User (security separation)

  • Libcontainer

o   Written in the Go programming language and developed by dotCloud/Docker, it is a native Go implementation of LXC-like control over cgroups and namespaces.

  • Libct

o   Container library developed by engineers at Parallels

  • LPARs (Logical Partitions)

o   (Not Linux related, and not part of the current container "hype".) An LPAR is essentially a partitioned set of network, compute, security, and storage resources that can run processes and virtual machines. The difference is that LPARs need an OS image.

Note: Kernel namespaces and cgroups became the de facto standard for creating Linux containers and are used by most of the companies that have containerized technology: LXC, Docker, ZeroVM, Parallels, etc. 2013 was the first year that a mainline Linux kernel supported OpenVZ with no patches; this was an example of kernel unification, and the communities have since seen a boom in container technologies.
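
Since cgroups (together with namespaces) do the heavy lifting for nearly all of these solutions, here is a minimal sketch of the kind of kernel interface they expose. This is only an illustration: it assumes a cgroup v1 layout with the memory controller mounted at /sys/fs/cgroup/memory, it must run as root, and the group name and limit are made up.

import os

CG = "/sys/fs/cgroup/memory/demo"            # assumed cgroup v1 mount point

os.makedirs(CG)                              # creating the directory creates the cgroup
with open(os.path.join(CG, "memory.limit_in_bytes"), "w") as f:
    f.write(str(64 * 1024 * 1024))           # cap the group at 64 MB of RAM
with open(os.path.join(CG, "cgroup.procs"), "w") as f:
    f.write(str(os.getpid()))                # move this process into the group

# From here on, this process and its children are confined to the 64 MB limit.
# Container engines (LXC, libcontainer, OpenVZ, and friends) layer namespaces,
# chroot, and image management on top of exactly this kind of interface.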

Different Containerization Models

            The models in which containers and containerization are formed share a common denominator: they all need a shared kernel, and in some way they have all made adaptations to the Linux kernel to provide constructs like "security contexts", "jails", "containers", "sandboxes", "zones", "virtual environments", etc. At a low level the model remains consistent, partitioning host resources into smaller isolated environments; it is when we look at how they are delivered that the different use-case models emerge.

Low-level Model

"Jails or Zones" with a Patched Linux Kernel

  • Proprietary solutions were usually based on a patched Linux kernel; OpenVZ, Parallels, and the Unix-based solutions started this way. Once cgroups and namespaces were adopted into the kernel, they became the common way to bring a containerized solution to market.

“Cgroups and Namespaces”

  • The de facto standard for creating Linux-based, isolated, OS-level containers.

High-level Container Orchestration and Delivery Models

            The common way containers are consumed is through some orchestration mechanism, usually a portal or tool. This tool communicates with a service layer, and the requirements (YAML, a package list, or built images) get forwarded to a backend container engine. Whether that is LXC, Docker, OpenVZ, or something else is up to the provider.


IaaS

            In this model containers are consumed the way VMs are: they can be requested with attributes like CPU, network, and storage options. From the consumer's point of view, it looks exactly like a VM.

            A few examples of this model are OpenStack and Docker itself. The "Docker way" uses a userspace daemon that takes CLI or RESTful requests from a client. The daemon, which sits on the compute resources, utilizes a "container engine" like libcontainer or LXC to build the isolated environment based on the Linux image provided (from the Docker Registry or Glance). Note that the Docker Registry [4] is a platform for uploading and storing pre-built Docker images specific to an application, say a Fedora image with Apache installed and configured.
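
To make that daemon/REST model concrete, here is a hedged sketch of a client driving the Docker Remote API directly with Python's requests library. It assumes the daemon has been configured to listen on tcp://127.0.0.1:2375 (by default it listens on a unix socket) and that an "ubuntu" image is already pulled; payload fields vary between API versions, so treat this as a sketch rather than a reference.

import requests

DOCKER = "http://127.0.0.1:2375"   # assumed TCP endpoint for the Docker daemon

# Ask the daemon to build an isolated environment from an image...
resp = requests.post(DOCKER + "/containers/create",
                     json={"Image": "ubuntu",
                           "Cmd": ["echo", "hello from a container"]})
container_id = resp.json()["Id"]

# ...then start it. The daemon hands the actual isolation work down to its
# container engine (libcontainer or LXC).
requests.post(DOCKER + "/containers/%s/start" % container_id)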

            OpenStack takes advantage of this Docker model and provides a way for Nova to integrate with Docker via a single driver to provide IaaS to consumers. It also integrates with OpenStack Heat.

PaaS

            This is one of the major "best-fit" use cases for containers. Containers offer the agility, consistency, and efficiency PaaS platforms need; they can be spun up, torn down, and changed in seconds. This lends itself to platforms like OpenShift, Heroku, Cloud Foundry, and OpenStack Solum. Applications can be imported and recognized while containers provide easily customizable computing environments on the fly for different types of workloads. The consumer does not interact with the container in this model; rather, the provider takes advantage of the container technology itself.

SaaS

            Containers lend themselves very well to sharing software; they can easily be used to provide a software service on demand. An example of this case is http://www.memcachedasaservice.com/ [1], which uses containers to provide a memcached-based service to the consumer. The benefit here is that you can provide these services in a largely distributed and scalable way, while also allowing the provider to densely utilize its resources.

Pros & Cons of different containerization models

Model | Pros | Cons
IaaS | Fast, dense, bare-metal performance | Limited options; no Windows VMs; lacks some security features compared to VMs (this can be argued, though)
PaaS | Efficient, flexible, dense, easy to manage | Limited to Linux
SaaS | Flexible, easy to manage | Limited to Linux

Note: although this says "limited to Linux containers", there has been some talk about getting container orchestration solutions to speak a common language across lightweight virtualization solutions, so that describing a container could be common and containers could be deployed to Linux or Windows backends. Libct is one effort to unify container solutions in this way.

Types of apps and workloads, and which model works best

(Top use cases for containers; PaaS seems to stick out as the best fit)

HPC Workloads

            Containers do not have the overhead of a hypervisor layer, and because of this they get the performance of the host they run on. Thousands of containers can be spun up in an instant to run distributed operations with power and scale.

Public and Private Clouds

            Containers lend themselves well to cloud-based solutions because of their density, flexibility, and speed. OpenStack, Google Compute Engine, and Tutum are all using containers in this space.

PaaS & Managed Services

            Probably one of the best use cases for containers in the market today. Providing PaaS involves a lot of orchestration and flexibility of the underlying service, and containers are a clear winner in this space. Cloud Foundry, OpenShift, AWS Elastic Beanstalk, and OpenStack Solum are PaaS solutions based on containers.

SaaS, Application Deployment

            A close second to PaaS as a best-fit model: containers also lend themselves well to SaaS architectures, as they can provide isolated, customizable environments for different software services independent of the host they run on. Memcached as a Service and Rackspace Cloud Databases are good examples of this.

Development and Test/QA

            One of the initial use cases for containers was to give developers the freedom to run unit tests, try new code, and run experiments in an isolated manner. Containers are still widely used for this purpose today, and some teams have their CI system built together with container technology to run isolated test jobs on new code.

An aside on Containers in the real world

Openstack + Containers

Nova

            Since the Havana release, OpenStack Nova has supported (in some form) using Docker containers as an alternative or side-by-side to VMs. Originally the OpenStack driver talked directly to a host, but in the Icehouse release OpenStack Heat does the driving while the container engine is set up and run inside of a cloud instance. The Nova driver is now part of stackforge and will possibly try to rejoin the Nova code base in Juno.

http://blog.docker.com/tag/openstack-2/

Solum

            OpenStack Solum is a PaaS incubation project, currently part of stackforge, that uses Docker in a similar way to OpenShift and Cloud Foundry to orchestrate applications. Containers are used in the background of this project to build specialized workloads for the consumer.

https://wiki.openstack.org/wiki/Solum

Trove

            The DBaaS (Database as a Service) OpenStack project is also using containers to deliver multi-tenant databases on demand within the OpenStack architecture.

CloudFoundry + Containers

            The Cloud Foundry Platform as a Service utilizes both LXC and Docker technology under the covers. Cloud Foundry had originally chosen LXC and built a tool called "Warden" on top of it to manage containers, because they didn't like using LXC outright. Docker containers also have something called a Dockerfile, which in short is a list of actions to be taken on the containerized environment once it is built, from package management and installation to the startup and management of services. Much like a DevOps tool, this can be very powerful. This was a driving factor for the adapted version of Docker called "Decker", which implements the Droplet Execution Agent's API. CF now lets you deploy Docker- and LXC-based containers (droplets) using CF's tooling.

Openshift + Containers

            OpenShift (by Red Hat), much like the Cloud Foundry Droplet, provides something called Gears in its PaaS offering. Gears are native containers built from cgroups and namespaces that run the workloads. OpenShift recently [2] adopted Docker technology to deploy Gears, which allowed them to take advantage of Docker inside their Cartridge and Gear system: Docker images with metadata act as Cartridges, and Docker containers act as Gears based on the Cartridge. Red Hat chose the container model because they could "achieve a higher density of applications per host OS and enable those applications to be deployed much more quickly than with a traditional VM-based approach".

AWS + Containers

            Amazon Elastic Beanstalk allows developers to load their applications into AWS while providing them flexibility and management within the PaaS. Elastic Beanstalk recently [3] adopted Docker so that developers can package or "build" Docker images (templates for the application) and deploy them into AWS with support from Elastic Beanstalk.

Google + Containers

Not Google+, but rather Google using Linux containers. I don't have much detail on the implementation here, but I've heard Google uses Linux containers both originally for Google Search and now in its cloud compute engine. If anyone has more detail here, please comment 🙂

Legacy Code Support

            Containers are also used to run legacy applications within the datacenter. Even when a hardware refresh occurs, containers can carry the older libraries and images needed for legacy applications to run on modern hardware.

New Technology Adoption

            Containers also offer a path to early adoption of new software. They provide secure, isolated environments that let developers run, test, and evaluate new applications and software.

State of Security and Containers

            For a truly secure container, the root user in the container can be mapped to the "nobody" user/group; if this user breaks out of the container, it does not affect the "root" user on the host, because "nobody" has very few privileges. Therefore:

  • Root on the container is not Root on the host.

            Not all container technologies utilize this security model, but they do implement SELinux, GRSEC, and AppArmor, which help. Safely running workloads as non-root users will be the best way to help blur the lines between the security of VMs and containers.

            Other attack surfaces for containers range, from less likely to more likely, from the Linux kernel constructs like cgroups and namespaces themselves, to the client or daemon responsible for responding to requests for host resources, which could be an API, a WebSocket, or a Unix socket. Designed correctly, and used correctly, containers can be a secure solution.

There is a lot more on the security topic; you can start (here) for a good introduction.

Thanks for Reading

Please feel free to correct the history or any fact that I may have looked over too quickly or didn't get right; I'd be happy to change it and get it correct. Containers are gaining ground in today's cloud infrastructures and there are a lot of interesting things going on, so keep your eyes and ears open, because I'm sure you will hear more about them in the coming years.

Something I didn't touch on in this post was how storage works with containers. There are many different options, each with pros and cons. Whether container solutions use aufs, btrfs, xfs, device mapper, copy-on-write mechanisms, etc., the main point is that they work at the file layer, not the block layer. While you could export iSCSI/FC volumes to the hosts and use something like --volumes-from in Docker for persistence, this is outside the direct scope of how containers maintain a low profile on the host. If you want more info or another post on this, I can certainly do so.

Also, in coming posts I think I will try to put together some technical tutorials and demos around the container subject. Keep posted; I will most likely be using Docker or LXC directly for the demos!

Resources

docker architecture

References

[1] http://www.memcachedasaservice.com/,
    http://www.slideshare.net/julienbarbier42/building-a-saas-using-docker
[2] https://www.openshift.com/blogs/the-future-of-openshift-and-docker-containers
[3] http://aws.amazon.com/about-aws/whats-new/2014/04/23/aws-elastic-beanstalk-adds-docker-support/
[4] https://registry.hub.docker.com/

Puppet, Chef, Orchestration and DevOps:

The world of IT systems administration, DevOps, and orchestration of everything from bare-metal resources to virtual applications needs to become fully automated, with custom hooks for scale-out HPC workloads, cloud environments, and single deployments for SMBs. Automation via one-off specialized scripts for network and compute needs to become a thing of the past.

In steps the tooling:

I personally have had the opportunity to work with Foreman, Heat, Puppet, and (somewhat) Chef. The others below also offer great ways to automate from bare metal all the way to fully virtual "stacks" of network, compute, and storage.

Puppet and Chef share a space in the IT automation industry and both succeed in their vision. Whether you decide to use Puppet or Chef depends on your alignment and what you're trying to accomplish. I've heard that Chef's master node scales better, but I have personally never tested this theory. Both try to accomplish versioned package management and configuration control over a subset of nodes in an IT environment. Razor bare-metal provisioning was developed as a venture between EMC and Puppet Labs and offers a path to full automation between the two. Not to say that any other DevOps tool or bare-metal provisioning workflow can't be substituted; I may just be a bit biased.

Into the IT wormhole 

(brief notes)

Puppet

  • Client-server based. Puppetmaster and Puppet Clients. Declarative Language for “write once deploy many”.
  • Has open-source Openstack Packages for on-demand Openstack Delivery/Configuration Version control.
  • Integrates with Openstack Heat/TripleO for managing packages and configurations.
  • Deployment of monitoring tools, security tools all possible within private cloud.
  • Integrates well with Razor

Chef

  • Client-server based orchestration management “infrastructure as code” for deploying applications, version control, config files.
  • Written in Ruby. "Cookbooks" (CBs) can be written to deploy OpenStack ("Chef for OpenStack") components, with the potential to deploy security, monitoring, etc.
  • Github.com/opscode/openstack-chef-repo (Grizzly, Nicira Plugin, KVM, LXC)
  • Ceilometer, Quantum Cookbooks (By Dreamhost)
  • NVP, OVS Cookbooks (By Nicira)
  • Chef agent for Arista switches, “kind of SDN”
  • Roles and recipes; a role could be "all-in-one Devstack", "Controller Node", or "Base Node"

Juju

  • Like Heat, Juju deploys and manages services and applications within a cloud provider. Juju can deploy OpenStack components (e.g. Glance) or deploy applications (e.g. WordPress) on top of existing clouds.
  • Juju.ubuntu.org

To integrate with OpenStack you must specify these options:

openstack:
  type: openstack_s3
  control-bucket:
  admin-secret:
  auth-url: https://yourkeystoneurl:443/v2.0/
  default-series: precise
  juju-origin: ppa
  ssl-hostname-verification: True
  default-image-id: bb636e4f-79d7-4d6b-b13b-c7d53419fd5a
  default-instance-type: m1.small

Heat

  • Heat is an orchestration tool for managing “stacks” or applications deployed on the cloud. Heat can orchestrate ports, routers, instances, Floating IPs, Private Networks etc.
  • Packaging can also be installed via Heat templates to do things like “deploy a stack and make it a 4 node WordPress cluster.”
  • Provides OpenStack-like CLI and Database show, list, create methods for interactions.

TripleO

  • Dynamic “Cloud on Cloud” version control of your cloud.
  • Needs a "seed" cloud stack to provision two HA Nova bare-metal (Ironic) servers; these bare-metal stacks will then provision an "overcloud" via Heat, and the bare-metal servers learn about available nodes through node enrollment by MAC address.
  • Integrates well with Puppet/Chef for Package Management/Configuration if you did not want to use Heat.
  • Comes with a set of tools, os-apply-config, os-refresh-config, diskimage-builder.
    • diskimage-builder is used to build custom images with a notion of "elements"; these elements can be anything from a service, an application, a database, etc. (e.g. Glance, MySQL), and you can add them to the image you build. Quite a useful tool by itself, actually.
    • You can build a base ubuntu qcow image that works with Openstack and Glance (Grizzly) by using the command:

 disk-image-create vm base -o base -a i386

Razor

  • A specialized microkernel is used to PXE boot nodes; it checks in with Razor to provide an inventory of the system, and user-created policies then apply a configuration to the node
  • Able to hand off to DevOps tools (Chef, Puppet)

Pxe_dust

  • Complete solution for pxe booting, not really a package mgmt. or config solution.
  • Chef has a pxe_dust recipe, so AFAIK it is interoperable with Chef.

Crowbar

  • Hardware provisioning and application mgmt. (by Dell/SUSE)
  • Crowbar.github.com
  • Features
    • server discovery (crowbar_machines -U crowbar -P crowbar list)
    • firmware upgrades
    • operating system installation via PXE Boot.
    • application deployment via Chef. (e.g. openstack)

Cobbler

Cobbler is a Linux installation server that allows for rapid setup of network installation environments. It glues together and automates many associated Linux tasks so you do not have to hop between many various commands and applications when deploying new systems, and, in some cases, changing existing ones. Cobbler can help with provisioning, managing DNS and DHCP, package updates, power management, configuration management orchestration, and much more. With a simple series of commands, network installs can be configured for PXE, reinstallation, media-based net-installs, and virtualized installs (supporting Xen, qemu, KVM, and some variants of VMware). Cobbler uses a helper program called ‘koan’ (which interacts with Cobbler) for reinstallation and virtualization support.

Foreman

Through deep integration with configuration management, DHCP, DNS, TFTP, and PXE-based unattended installations, Foreman manages every stage of the lifecycle of your physical or virtual servers. Foreman provides comprehensive, auditable interaction facilities including a web frontend and a robust, RESTful API.

  • theforeman.org
  • Foreman has tight integration with Puppet Labs as well; Foreman integrates the Puppet manifests directory into its Web UI, which makes for a nice management dashboard for provisioning applications.

xcat

xCAT’s purpose is to enable you to manage large numbers of servers used for any type of technical computing (HPC clusters, clouds, render farms, web farms, online gaming infrastructure, financial services, datacenters, etc.). xCAT is known for its exceptional scaling, for its wide variety of supported hardware, operating systems, and virtualization platforms, and for its complete day 0 setup capabilities.

  • Allows for a stateless boot (booting off a downloaded RAM-disk image from the xCAT management node) with an available scratch disk for persistent data on reboot. Statelite files (an NFS-mounted filesystem) allow for another form of reboot persistence. Though, in both cases, no stateful information should be kept on either.
  • Developed by IBM, Power and Z Support.

Quality of Service using BigSwitch’s Floodlight Controller

So I wanted to tackle something traditional networks can do, but using OpenFlow and SDN. I came to the conclusion that the open-source controller made by BigSwitch, "Floodlight", fit the ticket. Before I deep-dive into some of the progress I've made in this area, I want to make sure the audience is aware of a few outstanding issues regarding OpenFlow and QoS.

QoS References:

  • OpenFlow (1.0) supports setting the network type of service bits and enqueuing packets. This does not however mean that every switch will support these actions.
  • Queuing Methods:
    Some OpenFlow implementations do NOT support attaching queuing structures to specific ports; in turn, the "enqueue:port:queue" action in OpenFlow 1.0 is optional, resulting in failure on some switches.

So, now that some of the background is out of the way: my ultimate goal was to be able to change the PHBs of flows within the network. I chose to use an OpenStack-like example, assuming that QoS will be applied to a "fabric" of OVS switches that support queuing. The example below will show you how Floodlight can be used to push basic QoS state into the network.

  • OVS 1.4.3, using ovs-vsctl to set up the queues.

Parts of the application:

QoS Module:

  • Allows the QoS service and policies to be managed on the controller and applied to the network

QoSPusher & QoSPath

        

  • Python application used to manage QoS from the command line
  • QoSPath is a Python application that utilizes circuitpusher.py to push QoS state along a specific circuit in a network.

Example

Network

Mininet Topo Used
sudo mn --topo linear,4 --switch ovsk --controller=remote,ip= --ipbase=10.0.0.0/8

Enable QoS on the controller:

Visit the tools section and click on Quality of Service.

Validate that QoS has been enabled.

From the topology above, we want to rate-limit traffic from host 10.0.0.1 to only 2 Mbps. The links suggest we need to place two flows, one in switch 00:00:00:00:00:00:00:01 and another in 00:00:00:00:00:00:00:02, that enqueue packets matching Host 1 to the rate-limited queue.

./qospusher.py add policy ' {"name": "Enqueue 2:2 s1", "protocol":"6","eth-type": "0x0800", "ingress-port": "1","ip-src":"10.0.0.1", "sw": "00:00:00:00:00:00:00:01","queue":"2","enqueue-port":"2"}' 127.0.0.1
QoSHTTPHelper
Trying to connect to 127.0.0.1...
Trying server...
Connected to: 127.0.0.1:8080
Connection Succesful
Trying to add policy {"name": "Enqueue 2:2 s1", "protocol":"6","eth-type": "0x0800", "ingress-port": "1","ip-src":"10.0.0.1", "sw": "00:00:00:00:00:00:00:01","queue":"2","enqueue-port":"2"}
[CONTROLLER]: {"status" : "Trying to Policy: Enqueue 2:2 s1"}
Writing policy to qos.state.json
{
"services": [],
"policies": [
" {\"name\": \"Enqueue 2:2 s1\", \"protocol\":\"6\",\"eth-type\": \"0x0800\", \"ingress-port\": \"1\",\"ip-src\":\"10.0.0.1\", \"sw\": \"00:00:00:00:00:00:00:01\",\"queue\":\"2\",\"enqueue-port\":\"2\"}"
]
}
Closed connection successfully

./qospusher.py add policy ' {"name": "Enqueue 1:2 s2", "protocol":"6","eth-type": "0x0800", "ingress-port": "1","ip-src":"10.0.0.1", "sw": "00:00:00:00:00:00:00:02","queue":"2","enqueue-port":"1"}' 127.0.0.1
QoSHTTPHelper
Trying to connect to 127.0.0.1...
Trying server...
Connected to: 127.0.0.1:8080
Connection Succesful
Trying to add policy {"name": "Enqueue 1:2 s2", "protocol":"6","eth-type": "0x0800", "ingress-port": "1","ip-src":"10.0.0.1", "sw": "00:00:00:00:00:00:00:02","queue":"2","enqueue-port":"1"}
[CONTROLLER]: {"status" : "Trying to Policy: Enqueue 1:2 s2"}
Writing policy to qos.state.json
{
"services": [],
"policies": [
" {\"name\": \"Enqueue 2:2 s1\", \"protocol\":\"6\",\"eth-type\": \"0x0800\", \"ingress-port\": \"1\",\"ip-src\":\"10.0.0.1\", \"sw\": \"00:00:00:00:00:00:00:01\",\"queue\":\"2\",\"enqueue-port\":\"2\"}",
" {\"name\": \"Enqueue 1:2 s2\", \"protocol\":\"6\",\"eth-type\": \"0x0800\", \"ingress-port\": \"1\",\"ip-src\":\"10.0.0.1\", \"sw\": \"00:00:00:00:00:00:00:02\",\"queue\":\"2\",\"enqueue-port\":\"1\"}"
]
}
Closed connection successfully

Take a look in the browser to make sure the policy was accepted.

Verify the flows work using iperf, from h1 -> h2.

iperf shows that the bandwidth is limited to ~2 Mbps. See below for the counterpart iperf test verifying h2 -> h1.

Verify the opposite direction is unchanged (getting the ~30 Mbps benchmark).

The setup of the queues on OVS was left out of this example, but the basic outline is as follows (a rough sketch of the corresponding ovs-vsctl invocation appears after this list):

  • Give 10 Gbps of bandwidth to the port (that's what it supports)
  • Add a QoS record with 3 queues on it
  • 1st queue, q0, is the default; give it a max of 10 Gbps
  • 2nd queue is q1, rate-limited to 20 Mbps
  • 3rd queue is q2, rate-limited to 2 Mbps.
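
For reference, here is a rough sketch of that queue setup, wrapped in a small Python helper to stay consistent with the rest of the tooling. The port name (eth1) and the exact rates are assumptions, and the transaction follows the standard ovs-vsctl QoS/queue creation form; double-check it against your OVS version before using it.

import subprocess

# Hedged sketch: create an HTB QoS record with three queues on port eth1.
# q0 is the default (10 Gbps), q1 is limited to 20 Mbps, q2 to 2 Mbps.
cmd = [
    "ovs-vsctl",
    "--", "set", "port", "eth1", "qos=@newqos",
    "--", "--id=@newqos", "create", "qos", "type=linux-htb",
          "other-config:max-rate=10000000000", "queues:0=@q0,1=@q1,2=@q2",
    "--", "--id=@q0", "create", "queue", "other-config:max-rate=10000000000",
    "--", "--id=@q1", "create", "queue", "other-config:max-rate=20000000",
    "--", "--id=@q2", "create", "queue", "other-config:max-rate=2000000",
]
subprocess.check_call(cmd)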

I will be coming out with a video on this soon, as well as a community version of it once it is more fully fleshed out. Ultimately QoS and OpenFlow are still in their infancy; this will mature as the later specs become adopted by hardware and virtual switches. The improvement and adoption of OF-Config will also play a major role in this realm. But this serves as a simple illustration of how it may work; integrating OF-Config would be an exciting feature.

R

vagrant development environment

image courtesy of vagrantup.com

vagrantup.com is home to a powerful and neat new tool built on top of Oracle's VirtualBox.

If you're a developer and you haven't heard of Vagrant, then it is time for you to get familiar with this tool. Vagrant appeals to developers because of its easily deployed VMs that are configurable to your individual, team, group, or company's needs. Running through their tutorial on how to get started with Vagrant is a good way to see the basic gist of what Vagrant is. After you run through the tutorial, if you're a web developer you will immediately see its use cases.

Because Vagrant is built on top of Oracle's VirtualBox it follows the "standing on the shoulders of giants" cliché, but this doesn't hurt Vagrant at all, since VirtualBox isn't going anywhere and is well-known virtualization software in the development realm. Vagrant appeals to developers and to enterprise development teams because of its power to centralize and control development environments.