
Linux Containers: Parallels, LXC, OpenVZ, Docker and More

The State of Containers

Why should we care?


Background

What’s a container?

            A container (Linux container) is, at its core, an allocation, partitioning, and assignment of host (compute) resources such as CPU shares, network I/O, bandwidth, block I/O, and memory (RAM), so that kernel-level constructs can jail off, isolate, or “contain” these protected resources and dedicate them to specific running services (processes) and namespaces without interfering with the rest of the system. These processes can range from lightweight Linux hosts based on a Linux image, to multiple web servers and applications, to a single subsystem such as a database backend, down to a single process such as ‘echo “Hello”’, all with little to no overhead.
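
As a rough, illustrative sketch of that last point (the tools, flags, and image names here are my own examples, not from this post), a single process can be launched inside its own container straight from the command line:

docker run --rm busybox echo "Hello"    # one process in its own namespaces/cgroups, minimal overhead
lxc-execute -n hello -- echo "Hello"    # roughly the same idea with plain LXC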

            Commonly known as “operating system-level virtualization” or “OS virtual environments,” containers differ from hypervisor-level virtualization. The main difference is that the container model eliminates the hypervisor layer and the redundant OS kernels, binaries, and libraries typically needed to run workloads in VMs.

Hypervisor-based


Some of the main business drivers and strategic reasons to use containers are:

  • Ability to easily run and accommodate legacy applications
  • Bare-metal performance, with no hypervisor overhead
  • Higher density and utilization of datacenter resources
  • Faster adoption of new technologies, which can be trialed in isolated, secure containers
  • Reduced “shipping” pains; code is streamlined to customers quickly

Container-based


            Containers have been around for over 15 years, so why the recent influx of attention? As compute hardware becomes more elastic, potent, and dense, it becomes possible to run many applications at scale while lowering TCO by eliminating the redundant kernel and guest OS code typically used in a hypervisor-based deployment. That alone is attractive, but containers also eliminate performance penalties, increase visibility, and make debugging and management easier.

(Image credit: Jerome Petazzoni, dotCloud: http://www.socallinuxexpo.org/sites/default/files/presentations/Jerome-Scale11x%20LXC%20Talk.pdf)

            Because containers share the host kernel, binaries, and libraries, they can be packed even more densely than typical hypervisor environments can pack VMs.


Solutions and Products

            Companies such as Red Hat, Sun, Canonical, IBM, HP, Docker, and others have adopted or produced slightly different solutions for Linux containers. Below is a brief overview of the different solutions that deal with containers and/or operating system-level virtualization.

Container Solutions

  • LXC (Linux Containers)

o   0.1.0 released in 2008

o   Works with general vanilla Linux kernels off the shelf.

o   GNU GPLv2 License

o   Used as a “container engine” in Docker

o   Google App Engine utilizes an LXC-like technology

o   Parallels Virtuozzo utilizes LXC

o   Rackspace Cloud Databases utilize LXC

o   Heroku (Application Deployment Platform) utilizes LXC

  • Docker

o   Developed by Docker Inc. (formerly dotCloud)

o   Apache 2.0 License

o   Docker is really an orchestration solution built on top of Linux kernel constructs: namespaces, cgroups, chroot, and file system mechanisms. Docker originally chose LXC as the “engine” but recently developed its own solution called “libcontainer”

o   Solutions:

  • “Decker”: a modified version of the engine that works with Cloud Foundry to deploy application workloads
  • Openshift
  • AWS Elastic Beanstalk Containers
  • Openstack Solum
  • Openstack Nova
  • OpenVZ

o   Supported by Parallels Inc. (founded as SWsoft in 1999; became Parallels in 2004)

o   Shares many of the same developers as LXC but was developed earlier; LXC is a derivation of the OpenVZ work for the mainline kernel.

o   GNU GPL v2 License

o   Runs on a patched Linux kernel (specific kernel versions) or on a 3.x kernel with a reduced feature set

o   Live migration abilities via checkpointing (CRIU, criu.org)

o   Rackspace Cloud Databases also utilize OpenVZ

  • Warden

o   Developed by Cloud Foundry as an orchestration layer to create application containers. Initially said working with LXC was “too troublesome”.

o   (Comparison) Warden and Docker both orchestrate containers by controlling subsystems like Linux cgroups, namespaces, and security.

  • Solaris Containers

o   A non-Linux containerization mechanism; differs from “true” Linux systems built on the mainline kernel

o   Utilizes “zones” as the construct for partitioning system resources. Zones are an enhanced chroot mechanism with additional features, such as ZFS integration, that allow snapshotting and cloning.

o   Zones are commonly compared to FreeBSD Jails

  • (Free) BSD Jails

o   Also a non-Linux containerization mechanism; differs from “true” Linux systems built on the mainline kernel

o   Also an “enhanced chroot”-like mechanism: not only does it use chroot to segregate the file system, it also segregates users, processes, and networks.

  • Linux V-Server

o   GNU GPL v2

o   Patched kernel to enable os-level virtualization

o   Partitions of CPU, memory, network, and filesystem are called “security contexts,” which use a chroot-like mechanism

o   Utilizes CoW (copy-on-write) file systems to save storage space.

  • Workload Partitions

o   AIX implementation that provides resource isolation like container technologies do in Linux

  • Parallels (SWsoft) Virtuozzo Containers

o   Originally developed by SWsoft, Parallels Virtuozzo utilizes Linux namespaces and cgroups in the kernel to provide isolation.

o   Virtuozzo Containers became OpenVZ, which in turn fed into LXC for the mainline Linux kernel.

  • HP-UX Containers

o   HP’s Unix variant of containers. Like AIX WPARs, this is a container technology tailored toward Unix platforms.

  • WPARS

o   Developed by IBM, this container technology is aimed at the AIX (Unix) based server OS platform.

o   Provides os-level environment isolation like other container models do.

o   Live application mobility (migration)

  • iCore Virtual Accounts

o   A Free Windows XP container solution. Provides os-level isolated computing environments for XP

  • Sandboxie

o   Developed by Invincea for Windows XP

o   “Sandboxes”, like a container, are created for isolated environments.

Related tools and mechanisms

  • Sysjail

o   A userspace virtualization tool developed for OpenBSD systems, much like the FreeBSD “jail”

  • Chroot

o   Kernel-level function that allows a program to run on a host system within its own root filesystem.

  • Cgroups

o   Developed in 2006, used initially by Google Search

o   Unified in linux kernel by 2013.

  • Namespaces

o   Construct that allows partitioning and isolation of different resources so that they are only available to the processes in the container. Namespaces include NET (network), UTS (hostname), PID (process IDs), MNT (mount), IPC, and USER (security separation); a combined namespaces/cgroups sketch follows this list.

  • Libcontainer

o   Written in the Go programming language and developed by dotCloud/Docker; a native Go implementation of LXC-like control over cgroups and namespaces.

  • Libct

o   Container library developed by engineers at Parallels

  • LPARs (Logical Partitions)

o   (Not Linux related and not part of the container “hype,” but an LPAR is essentially a partitioned set of network, compute, security, and storage resources that can run processes and virtual machines. The difference is that LPARs need an OS image.)
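
As a minimal sketch of how namespaces and cgroups combine to make a “container” (the names, limits, and cgroup-v1 paths below are illustrative, not from this post):

# start a shell in fresh UTS, PID, mount, and network namespaces
sudo unshare --uts --pid --mount --net --fork /bin/bash

# from another shell: create a memory cgroup, cap it at 256 MB, and move that shell into it
sudo mkdir /sys/fs/cgroup/memory/demo
echo 268435456 | sudo tee /sys/fs/cgroup/memory/demo/memory.limit_in_bytes
# (substitute the PID of the unshared shell for TARGET_PID)
echo "$TARGET_PID" | sudo tee /sys/fs/cgroup/memory/demo/tasks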

Note: kernel namespaces and cgroups became the de facto standard for creating Linux containers and are used by most of the companies with containerized technology: LXC, Docker, ZeroVM, Parallels, etc. 2013 was the first year that OpenVZ worked on an unpatched Linux kernel; this was an example of kernel unification, and the community has since seen a boom in container technologies.

Different Containerization Models

            The models in which containers are formed share a common denominator: they all need a shared kernel, and in some way they have all made adaptations to the Linux kernel to provide constructs like “security contexts,” “jails,” “containers,” “sandboxes,” “zones,” “virtual environments,” etc. At a low level the model remains consistent, partitioning host resources into smaller isolated environments; it is in how they are delivered that the different use-case models emerge.

Low-level Model

“Jails or Zones” with a Patched Linux Kernel

  • Proprietary solutions usually based on a patched Linux kernel; OpenVZ, Parallels, and the Unix-based solutions started this way. Once cgroups and namespaces were adopted into the mainline kernel, that became the more common way to bring a containerized solution to market.

“Cgroups and Namespaces”

  • De facto standard for creating Linux-based, isolated, OS-level containers.

High-level Container Orchestration and Delivery Models

            The common way containers are consumed is through some orchestration mechanism, usually a portal or tool. This tool communicates with a service layer, and the requirements (YAML, a package list, or built images) are forwarded to a backend container engine, whether that is LXC, Docker, OpenVZ, or another engine chosen by the provider.


IaaS

            In this model containers are consumed the way VMs are: they can be requested with attributes like CPU, network, and storage options. From the consumer’s point of view, it looks exactly like a VM.

            A few examples of this model are OpenStack and Docker itself. The “Docker way” uses a userspace daemon that takes CLI or RESTful requests from a client. The daemon, which sits on the compute resources, utilizes a “container engine” like libcontainer or LXC to build the isolated environment based on a provided Linux image (from the Docker Registry or Glance). Note that the Docker Registry [4] is a platform for uploading and storing pre-built Docker images specific to an application, say a Fedora image with Apache installed and configured.
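
A rough sketch of that workflow from the client side (the image choice is illustrative): the CLI talks to the Docker daemon, which pulls the image and builds the isolated environment:

docker pull fedora                  # fetch a pre-built image from the Docker Registry
docker run -i -t fedora /bin/bash   # ask the daemon to start an isolated container from it
docker ps                           # the daemon tracks the running containers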

            OpenStack takes advantage of this Docker model and provides a way for Nova to integrate with Docker via a single driver to provide IaaS to consumers. It also integrates with OpenStack Heat.

PaaS

            This is one of the major “best-fit” use cases for containers. Containers offer the agility, consistency, and efficiency PaaS platforms need; they can be spun up, torn down, or changed in seconds. This lends itself to platforms like OpenShift, Heroku, Cloud Foundry, and OpenStack Solum. Applications can be imported and recognized while containers provide easily customizable computing environments on the fly for different types of workloads. The consumer does not interact with the container in this model; rather, the provider takes advantage of the container technology itself.

SaaS

            Containers lend themselves very well to sharing software; they can easily be used to provide a software service on demand. An example of this case is http://www.memcachedasaservice.com/ [1], which uses containers to provide a memcached-based service inside a container to the consumer. The benefit is that these services can be provided in a largely distributed and scalable way while also allowing the provider to densely utilize its resources.
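
A hedged sketch of that pattern (the image name and port mappings are illustrative, not taken from that service): each consumer gets its own memcached instance in its own container, packed densely on shared hosts:

docker run -d -p 11211:11211 --name tenant1-cache memcached
docker run -d -p 11212:11211 --name tenant2-cache memcached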

Pros & Cons of different containerization models

Model | Pros | Cons
IaaS  | Fast, dense, bare-metal performance | Limited options; no Windows VMs; lacks some security features compared to VMs (though this can be argued)
PaaS  | Efficient, flexible, dense, easy to manage | Limited to Linux
SaaS  | Flexible, easy to manage | Limited to Linux

Note: although this says “limited to Linux containers,” there has been some talk about getting container orchestration solutions to speak a common language across lightweight virtualization solutions, so that describing a container could be standardized and containers could be deployed to both Linux and Windows solutions. Libct is one effort to unify container solutions.

Type of apps and workloads, what model works best

(Top use cases for containers; PaaS sticks out as the best fit)

HPC Workloads

            Containers do not have the overhead of a hypervisor layer, so they gain the performance of the host they run on. Thousands of containers can be spun up in an instant to run distributed operations with power and scale.

Public and Private Clouds

            Containers lend themselves well to cloud-based solutions because of their density, flexibility, and speed. OpenStack, Google Compute Engine, and Tutum are all using containers in this space.

PaaS & Managed Services

            Probably one of the best use cases for containers in the market today. Providing PaaS involves a lot of orchestration and flexibility of the underlying service, and containers are a clear winner in this space. Cloud Foundry, OpenShift, AWS Elastic Beanstalk, and OpenStack Solum are PaaS solutions based on containers.

SaaS, Application Deployment

            A close second to PaaS as a best-fit model, containers also lend themselves well to SaaS architectures, since they can provide isolated, customizable environments for different software services independent of the host they run on. Memcached as a Service and Rackspace Cloud Databases are good examples of this.

Development and Test/QA

            One of the initial use cases for containers was to give developers the freedom to run unit tests, try new code, and run experiments in an isolated manner. Containers today are still widely used for this purpose, and some teams build their CI systems together with container technology to run isolated test jobs on new code.

An aside on Containers in the real world

Openstack + Containers

Nova

            Since the Havana release, OpenStack Nova has supported (in some way) using Docker containers as an alternative or side-by-side to VMs. Originally the OpenStack driver delivered containers directly to a host, but in the IceHouse release OpenStack Heat does the driving while the container engine is set up and run inside a cloud instance. The Nova driver is now part of stackforge and will possibly try to rejoin the Nova code base in Juno.

http://blog.docker.com/tag/openstack-2/

Solum

            OpenStack Solum is a PaaS incubation project, currently part of stackforge, that uses Docker in a similar way to OpenShift and Cloud Foundry to orchestrate applications. Containers are used in the background of this project to build specialized workloads for the consumer.

https://wiki.openstack.org/wiki/Solum

Trove

            The DBaaS (Database as a Service) OpenStack project is also using containers to deliver multi-tenant databases on demand within the OpenStack architecture.

CloudFoundry + Containers

            Cloud Foundry’s Platform as a Service utilizes both LXC and Docker technology under the covers. Cloud Foundry originally chose LXC and built a tool called “Warden” on top of it to manage containers because they didn’t like using LXC outright. Docker containers also have something called a Dockerfile, which in short is a list of actions to be taken on the containerized environment once it’s built, from package management and installation to the startup and management of services. Much like a DevOps tool, this can be very powerful, and it was a driving factor for the adopted version of Docker called “Decker,” which implements the Droplet Execution Agent’s API. CF now lets you deploy Docker- and LXC-based containers (droplets) using CF’s tooling.
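
For readers who haven’t seen one, here is a minimal, illustrative Dockerfile sketch (the base image and package are assumptions of mine, not CF’s tooling): it installs a package and declares the service to start:

cat > Dockerfile <<'EOF'
# base image to build on
FROM fedora
# package management / installation step
RUN yum -y install httpd
# port the service listens on
EXPOSE 80
# service started when the container runs
CMD ["/usr/sbin/httpd", "-DFOREGROUND"]
EOF
docker build -t web-example .            # build an image from the Dockerfile
docker run -d -p 8080:80 web-example     # map host port 8080 to the container's port 80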

Openshift+ Containers

            OpenShift (by Red Hat), much like Cloud Foundry’s droplets, provides something called Gears in its PaaS offering. Gears are native containers built from cgroups and namespaces that run the workloads. OpenShift recently [2] adopted Docker technology to deploy gears, allowing them to take advantage of Docker inside their Cartridge and Gear system by using Docker images with metadata as Cartridges and Docker containers as Gears based on those Cartridges. Red Hat chose the container model because they could “achieve a higher density of applications per host OS and enable those applications to be deployed much more quickly than with a traditional VM-based approach”.

AWS+ Containers

            Amazon Elastic Beanstalk allows developers to load their applications into AWS while providing them flexibility and management within the PaaS. Elastic Beanstalk recently [3] adopted Docker so that developers can package or “build” Docker images (templates for the application) and deploy them into AWS with support from Elastic Beanstalk.

Google + Containers

Not Google+, but rather Google using Linux containers. I don’t have much detail on the implementation here, but I’ve heard Google uses Linux containers both for Google Search (originally) and now in its Compute Engine. If anyone has more detail here, please comment 🙂

Legacy Code Support

            Containers are also used to run legacy applications within the datacenter. Even when a hardware refresh occurs, containers can carry older libraries and images so that legacy applications run on modern hardware.

New Technology Adoption

            Containers also offer a path to early adoption of new software. They provide secure, isolated environments that let developers run, test, and evaluate new applications and software.

State of Security and Containers

            For a truly secure container, the root user inside the container can be mapped to the “nobody” user/group (or another unprivileged user) on the host; if that user breaks out of the container it does not affect the host’s “root” user, because “nobody” has very few privileges (a sketch of this mapping follows the bullet below). Therefore:

  • Root in the container is not root on the host.
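
A hedged sketch of what that mapping can look like with LXC user namespaces; the user name, ID range, and config path are illustrative, and the exact key names depend on the LXC version:

# show the unprivileged UID/GID ranges delegated to this user (format: user:start:count)
cat /etc/subuid /etc/subgid
# map container root (UID/GID 0) onto that unprivileged host range, so "root" inside
# the container is an ordinary, unprivileged user on the host
cat >> ~/.config/lxc/default.conf <<'EOF'
lxc.id_map = u 0 100000 65536
lxc.id_map = g 0 100000 65536
EOF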

            Not all container technologies utilize this security model, but many implement SELinux, GRSEC, and AppArmor, which help. Safely running workloads as non-root users is the best way to blur the line between the security of VMs and containers.

            Other attack surfaces for containers range (from less likely to more likely) from the Linux kernel constructs themselves, such as cgroups and namespaces, to the client or daemon responsible for responding to requests for host resources, which could be an API, a WebSocket, or a Unix socket. Designed correctly and used correctly, containers can be a secure solution.

There is a lot more on the security topic; you can start (here) for a good introduction.

Thanks for Reading

Please feel free to correct the history or any fact that I may have looked over too quickly or didn’t get right; I’d be happy to change it and get it correct. Containers are gaining ground in today’s cloud infrastructures and there are a lot of interesting things going on, so keep your eyes and ears open, because I’m sure you will hear more about them in the coming years.

Something I didn’t touch on in this post is how storage works with containers. There are many different options, each with pros and cons. Whether container solutions use aufs, btrfs, xfs, device mapper, or other copy-on-write mechanisms, the main point is that they work at the file layer, not the block layer. While you could export iSCSI/FC volumes to the hosts and use something like --volumes-from in Docker for persistence, that is outside the direct scope of how containers maintain a low profile on the host. If you want more info or another post on this, I can certainly do so.

Also, in coming posts I think I will try to put together some technical tutorials and demos around containers, so stay posted; I will most likely be using Docker or LXC directly for the demos!

Resources

docker architecture

References

[1] http://www.memcachedasaservice.com/,
    http://www.slideshare.net/julienbarbier42/building-a-saas-using-docker
[2] https://www.openshift.com/blogs/the-future-of-openshift-and-docker-containers
[3] http://aws.amazon.com/about-aws/whats-new/2014/04/23/aws-elastic-beanstalk-adds-docker-support/
[4] https://registry.hub.docker.com/

Optimizing the Cloud: Nova/KVM

Sources: http://www.slideshare.net/openstackindia/openstack-nova-and-kvm-optimisation, http://www.linux-kvm.org/page/KSM, http://pic.dhe.ibm.com/infocenter/lnxinfo/v3r0m0/index.jsp?topic=%2Fliaat%2Fliaattunkickoff.htm

Compute nodes represent a potential bottleneck in an OpenStack cloud environment. Because the compute nodes run the VMs and applications, the workloads fall on the I/O within the hypervisor: everything from local file system I/O to RAM and CPU resources can affect the efficiency of your cloud.

One thing to consider when provisioning physical machines is which guest/VM flavors you are going to allow to be deployed on a given machine. Flavors that eat up 2 VCPUs and 32 GB of RAM may not be well suited to a machine with only 64 GB of RAM and 8 CPU cores.

The notion of tenancy is also important to keep track of; tenant size and activity factor into how your environment’s resources are used. Tenants consume images, snapshots, volumes, and disk space. Consider how many tenants will consume your cloud and adjust your resources accordingly. If you plan to take advantage of overprovisioning, think about thin provisioning and the potential performance hits.

Using KVM:

KVM is a well-supported hypervisor for Nova and has its own ways to increase performance. KVM isn’t the only hypervisor to choose; Hyper-V, Xen, and VMware are also supported in OpenStack, but KVM is a powerful competitor. Tuning your hypervisor is just as important as tuning your cloud resources and environment, so here are some things to consider (hedged example commands follow the list):

  • I/O Scheduler: cfq vs deadline
  • Huge Pages
  • KSM (Kernel Same-page Merging)
    • A de-duplication feature that saves on memory usage and helps scale VMs per hypervisor (compute) node
  • Hyper-threading
  • Guest FS location (on hypervisor block devices)
  • Disable Zone Reclaim
  • Swappiness
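
A hedged sketch of a few of these knobs (the device name and values are purely illustrative, not recommendations from this post):

# switch the I/O scheduler on the disk backing guest filesystems from cfq to deadline
echo deadline > /sys/block/sda/queue/scheduler
# reserve huge pages that can back guest memory
sysctl -w vm.nr_hugepages=2048
# discourage the host from swapping out guest RAM
sysctl -w vm.swappiness=10
# disable NUMA zone reclaim
sysctl -w vm.zone_reclaim_mode=0
# enable KSM (kernel same-page merging)
echo 1 > /sys/kernel/mm/ksm/run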

Personally, running Grizzly, we can see that KSM is working:

cat /sys/kernel/mm/ksm/pages_sharing

1099262

Running Ubuntu with the 3.2.0-23 kernel and qemu-kvm 1.0. This should mean that running multiple VMs will have better performance.

Puppet, Chef, Orchestration and DevOps:

The world of IT systems administration, DevOps, and orchestration, from bare-metal resources to virtual applications, is starting to need full automation, with custom hooks for everything from scale-out HPC workloads and cloud environments to single deployments for SMBs. One-off specialized scripts for network and compute automation need to become a thing of the past.

In steps:

I personally have had the opportunity to work with Foreman, Heat, Puppet, and (somewhat) Chef. The others also offer great ways to automate from bare metal all the way to fully virtual “stacks” of network, compute, and storage.

Puppet and Chef share a space in the IT automation industry and both succeed in their vision. Whether you decide to use Puppet or Chef depends on your alignment and what you’re trying to accomplish. I’ve heard that Chef’s master node scales better, but I have personally never tested this theory. Both aim to provide version control of package management and configuration across a set of nodes in an IT environment. Razor, a bare-metal provisioning tool, was developed as a venture between EMC and Puppet Labs and offers a path to full automation between the two. That’s not to say other DevOps tools or bare-metal provisioning workflows can’t be substituted; I may just be a bit biased.

Into the IT wormhole 

(brief notes)

Puppet

  • Client-server based: Puppetmaster and Puppet clients. Declarative language for “write once, deploy many” (a minimal manifest sketch follows this list).
  • Has open-source Openstack Packages for on-demand Openstack Delivery/Configuration Version control.
  • Integrates with Openstack Heat/TripleO for managing packages and configurations.
  • Deployment of monitoring tools, security tools all possible within private cloud.
  • Integrates well with Razor
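
As a minimal, illustrative sketch of that declarative style (the package and service names are assumptions, not taken from this post), a manifest can be applied locally with the puppet CLI:

cat > ntp.pp <<'EOF'
# declare desired state: the ntp package installed and its service running
package { 'ntp':
  ensure => installed,
}
service { 'ntp':
  ensure  => running,
  require => Package['ntp'],
}
EOF
puppet apply ntp.pp    # converge this node to the declared state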

Chef

  • Client-server based orchestration management “infrastructure as code” for deploying applications, version control, config files.
  • Written in Ruby. “Cookbooks” (CBs) can be written to deploy OpenStack components (“Chef for OpenStack”), with the potential to deploy security, monitoring, etc.
  • Github.com/opscode/openstack-chef-repo (Grizzly, Nicira Plugin, KVM, LXC)
  • Ceilometer, Quantum Cookbooks (By Dreamhost)
  • NVP, OVS Cookbooks (By Nicira)
  • Chef agent for Arista switches, “kind of SDN”
  • Roles and recipes; a role could be “all-in-one devstack,” “controller node,” or “base node”

Juju

  • Like Heat, Juju deploys and manages services and applications within a cloud provider. Juju can deploy OpenStack components (e.g. Glance) or deploy applications (e.g. WordPress) on top of existing clouds.
  • Juju.ubuntu.org

To integrate with OpenStack you must specify these options in your Juju environment configuration:

openstack:
  type: openstack_s3
  control-bucket:
  admin-secret:
  auth-url: https://yourkeystoneurl:443/v2.0/
  default-series: precise
  juju-origin: ppa
  ssl-hostname-verification: True
  default-image-id: bb636e4f-79d7-4d6b-b13b-c7d53419fd5a
  default-instance-type: m1.small

Heat

  • Heat is an orchestration tool for managing “stacks,” or applications deployed on the cloud. Heat can orchestrate ports, routers, instances, floating IPs, private networks, etc. (a minimal template sketch follows this list).
  • Packages can also be installed via Heat templates to do things like “deploy a stack and make it a 4-node WordPress cluster.”
  • Provides OpenStack-style CLI and database show, list, and create methods for interaction.
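
A minimal, illustrative HOT template sketch (the image and flavor names are assumptions of mine); it declares a single Nova server and launches the stack with the heat CLI:

cat > single-server.yaml <<'EOF'
heat_template_version: 2013-05-23
description: Minimal stack with one Nova server
resources:
  server:
    type: OS::Nova::Server
    properties:
      image: fedora-20
      flavor: m1.small
EOF
heat stack-create -f single-server.yaml demo_stack    # Heat builds the stack from the template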

TripleO

  • Dynamic “Cloud on Cloud” version control of your cloud.
  • You need a “seed” cloud stack to provision two HA Nova bare-metal (Ironic) servers; these bare-metal stacks then provision an “overcloud” via Heat. The bare-metal servers learn about available nodes through node enrollment by MAC address.
  • Integrates well with Puppet/Chef for Package Management/Configuration if you did not want to use Heat.
  • Comes with a set of tools, os-apply-config, os-refresh-config, diskimage-builder.
    • diskimage-builder is used to build custom images with a notion of “elements”; these elements can be anything from a service to an application to a database (e.g. Glance, MySQL), and you can add them to the image you build. Quite a useful tool by itself, actually.
    • You can build a base ubuntu qcow image that works with Openstack and Glance (Grizzly) by using the command:

 disk-image-create vm base -o base -a i386

Razor

  • A specialized microkernel is PXE-booted on the node and checks in with Razor to provide an inventory of the system; user-created policies then apply a configuration to the node
  • Able to hand off to DevOps tools (Chef, Puppet)

Pxe_dust

  • A complete solution for PXE booting; not really a package-management or configuration solution.
  • Chef has a pxe_dust recipe, so AFAIK it is interoperable with Chef.

Crowbar

  • Hardware provisioning and application mgmt. (by Dell/SUSE)
  • Crowbar.github.com
  • Features
    • server discovery (crowbar_machines -U crowbar -P crowbar list)
    • firmware upgrades
    • operating system installation via PXE Boot.
    • application deployment via Chef. (e.g. openstack)

Cobbler

Cobbler is a Linux installation server that allows for rapid setup of network installation environments. It glues together and automates many associated Linux tasks so you do not have to hop between many various commands and applications when deploying new systems, and, in some cases, changing existing ones. Cobbler can help with provisioning, managing DNS and DHCP, package updates, power management, configuration management orchestration, and much more. With a simple series of commands, network installs can be configured for PXE, reinstallation, media-based net-installs, and virtualized installs (supporting Xen, qemu, KVM, and some variants of VMware). Cobbler uses a helper program called ‘koan’ (which interacts with Cobbler) for reinstallation and virtualization support.

Foreman

Through deep integration with configuration management, DHCP, DNS, TFTP, and PXE-based unattended installations, Foreman manages every stage of the lifecycle of your physical or virtual servers. The Foreman provides comprehensive, auditable interaction facilities including a web frontend and robust, RESTful API

  • Theforeman.org
  • Foreman has tight integration with Puppet Labs as well; Foreman integrates the Puppet manifests directory into its web UI, which makes for a nice management dashboard for provisioning applications.

xcat

xCAT’s purpose is to enable you to manage large numbers of servers used for any type of technical computing (HPC clusters, clouds, render farms, web farms, online gaming infrastructure, financial services, datacenters, etc.). xCAT is known for its exceptional scaling, for its wide variety of supported hardware, operating systems, and virtualization platforms, and for its complete day 0 setup capabilities.

  • Allows for a stateless boot (booting off a downloaded RAM-disk image from the xCAT management node) with an available scratch disk for persistent data across reboots. Satellite files (an NFS-mounted filesystem) allow for another form of reboot persistence. In both cases, though, no stateful information should be kept on either.
  • Developed by IBM, Power and Z Support.

QoS Management using BigSwitch Floodlight: Code Release and Video

Hi everyone,

I wanted to share an update on some of the work I’ve been doing around QoS and the BigSwitch Floodlight Controller.

(read below for more information about this application)

I released a video here: (Also available below)

You can also find out more about the project on the floodlight mailing list here:

https://groups.google.com/a/openflowhub.org/forum/?fromgroups=#!topic/floodlight-dev/nBCpAZDdmnc

A direct git link is available here:

https://github.com/wallnerryan/floodlight-qos-beta.git

Enjoy,

R

Quality of Service using BigSwitch’s Floodlight Controller

So I wanted to tackle something traditional networks can do, but using OpenFlow and SDN. I came to the conclusion that the open-source controller made by BigSwitch, “Floodlight,” fit the ticket. Before I deep-dive into some of the progress I’ve made in this area, I want to make sure the audience is aware of a few outstanding issues regarding OpenFlow and QoS.

QoS References:

  • OpenFlow (1.0) supports setting the network type-of-service bits and enqueuing packets. This does not, however, mean that every switch will support these actions.
  • Queuing Methods:
    Some OpenFlow implementations do NOT support queuing structures attached to specific ports, and in turn the “enqueue:port:queue” action in OpenFlow 1.0 is optional, resulting in failure on some switches.

So, now that some of the background is out of the way: my ultimate goal was to be able to change the PHBs (per-hop behaviors) of flows within the network. I chose an OpenStack-like example, assuming that QoS will be applied to a “fabric” of OVS switches that support queuing. The example below shows how Floodlight can be used to push basic QoS state into the network.

  • OVS 1.4.3, using ovs-vsctl to set up queues.

Parts of the application:

QoS Module:

  • Allows the QoS service and policies to be managed on the controller and applied to the network

QoSPusher & QoSPath

        

  • Python application used to manage QoS from the command line
  • QoSPath is a python application that utilizes circuitpusher.py to push QoS state along a specific circuit in a network.

Example

Network

Mininet Topo Used
sudo mn --topo linear,4 --switch ovsk --controller=remote,ip= --ipbase=10.0.0.0/8

Enable QoS on the controller:

Visit the tools section and click on Quality of Service.

Validate that QoS has been enabled.

From the topology above, we want to rate-limit traffic from host 10.0.0.1 to only 2 Mbps. The links suggest we need to place two flows, one in switch 00:00:00:00:00:00:00:01 and another in 00:00:00:00:00:00:00:02, that enqueue packets matching Host 1 to the rate-limited queue.

./qospusher.py add policy ' {"name": "Enqueue 2:2 s1", "protocol":"6","eth-type": "0x0800", "ingress-port": "1","ip-src":"10.0.0.1", "sw": "00:00:00:00:00:00:00:01","queue":"2","enqueue-port":"2"}' 127.0.0.1
QoSHTTPHelper
Trying to connect to 127.0.0.1...
Trying server...
Connected to: 127.0.0.1:8080
Connection Succesful
Trying to add policy {"name": "Enqueue 2:2 s1", "protocol":"6","eth-type": "0x0800", "ingress-port": "1","ip-src":"10.0.0.1", "sw": "00:00:00:00:00:00:00:01","queue":"2","enqueue-port":"2"}
[CONTROLLER]: {"status" : "Trying to Policy: Enqueue 2:2 s1"}
Writing policy to qos.state.json
{
"services": [],
"policies": [
" {\"name\": \"Enqueue 2:2 s1\", \"protocol\":\"6\",\"eth-type\": \"0x0800\", \"ingress-port\": \"1\",\"ip-src\":\"10.0.0.1\", \"sw\": \"00:00:00:00:00:00:00:01\",\"queue\":\"2\",\"enqueue-port\":\"2\"}"
]
}
Closed connection successfully

./qospusher.py add policy ' {"name": "Enqueue 1:2 s2", "protocol":"6","eth-type": "0x0800", "ingress-port": "1","ip-src":"10.0.0.1", "sw": "00:00:00:00:00:00:00:02","queue":"2","enqueue-port":"1"}' 127.0.0.1
QoSHTTPHelper
Trying to connect to 127.0.0.1...
Trying server...
Connected to: 127.0.0.1:8080
Connection Succesful
Trying to add policy {"name": "Enqueue 1:2 s2", "protocol":"6","eth-type": "0x0800", "ingress-port": "1","ip-src":"10.0.0.1", "sw": "00:00:00:00:00:00:00:02","queue":"2","enqueue-port":"1"}
[CONTROLLER]: {"status" : "Trying to Policy: Enqueue 1:2 s2"}
Writing policy to qos.state.json
{
"services": [],
"policies": [
" {\"name\": \"Enqueue 2:2 s1\", \"protocol\":\"6\",\"eth-type\": \"0x0800\", \"ingress-port\": \"1\",\"ip-src\":\"10.0.0.1\", \"sw\": \"00:00:00:00:00:00:00:01\",\"queue\":\"2\",\"enqueue-port\":\"2\"}",
" {\"name\": \"Enqueue 1:2 s2\", \"protocol\":\"6\",\"eth-type\": \"0x0800\", \"ingress-port\": \"1\",\"ip-src\":\"10.0.0.1\", \"sw\": \"00:00:00:00:00:00:00:02\",\"queue\":\"2\",\"enqueue-port\":\"1\"}"
]
}
Closed connection successfully

Take a look in the Browser to make sure it was taken

Verify the flows work, using iperf, from h1 -> h2

Iperf shows that the bandwidth is limited to ~2 Mbps. See below for the counter iperf test to verify h2 -> h1.

Verify the opposite direction is unchanged (getting a ~30 Mbps benchmark).

The setup of the queues on OVS was left out of this example, but the basic setup is as follows (a hedged ovs-vsctl sketch follows the list):

  • Give 10 Gbps of bandwidth to the port (that’s what it supports)
  • Add a QoS record with 3 queues on it
  • 1st queue, q0, is the default; give it a max of 10 Gbps
  • 2nd queue, q1, is rate-limited to 20 Mbps
  • 3rd queue, q2, is rate-limited to 2 Mbps
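
For completeness, a hedged sketch of that queue setup with ovs-vsctl (the port name s1-eth2 is illustrative; rates are in bits per second):

ovs-vsctl -- set port s1-eth2 qos=@newqos -- \
  --id=@newqos create qos type=linux-htb other-config:max-rate=10000000000 \
      queues:0=@q0 queues:1=@q1 queues:2=@q2 -- \
  --id=@q0 create queue other-config:max-rate=10000000000 -- \
  --id=@q1 create queue other-config:max-rate=20000000 -- \
  --id=@q2 create queue other-config:max-rate=2000000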

I will be coming out with a video on this soon, as well as a community version once it is more fully fleshed out. Ultimately, QoS and OpenFlow are still in their infancy; they will mature as the later specs become adopted by hardware and virtual switches. The improvement and adoption of OFConfig will also play a major role in this realm. But this serves as a simple implementation of how it may work. Integrating OFConfig would be an exciting feature.

R