Category Archives: development

Piping AWS output to Ansible Inventory


I’ve had the opportunity to work with a few different infrastructure automation tools such as Puppet, Chef, Heat and CloudFormation, but Ansible has a simplicity to it that I like, although I admit I have a strong preference for Puppet because I’ve used it extensively and have had good success with it.

In one of my previous projects I was creating a repeatable solution to create a Docker Swarm cluster (before SwarmKit) with Consul and Flocker. I wanted this to be completely scripted, so I climbed on the shoulders of AWS, Ansible and Docker Machine.

The script would do 4 things.

  1. Initialize a security group in an existing VPC and create rules for the given setup.
  2. Create machines for Consul and Swarm using Docker Machine
  3. Use AWS CLI to output the machines and pipe them to a python script that processes the JSON output and creates an Ansible inventory.
  4. Use the inventory to call Ansible to run something.

This flow can actually be used fairly reliably, not only for what I used it for, but to automate a lot of things, even to expand an existing deployment.
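As a rough sketch, the whole flow can be wrapped in one driver script. Everything below is illustrative only: the security group, machine and playbook names are placeholders, and steps 3 and 4 are exactly what the rest of this post walks through.

#!/bin/bash
set -e

# 1. Security group and rules in the existing VPC (IDs are placeholders)
aws ec2 create-security-group --group-name my-prefix-sg \
   --description "swarm/consul/flocker" --vpc-id vpc-xxxxxxxx

# 2. Create the Consul and Swarm machines with Docker Machine
docker-machine create -d amazonec2 my-prefix-consul
docker-machine create -d amazonec2 my-prefix-swarm-master

# 3. Pipe the AWS CLI output into the inventory script
aws ec2 describe-instances \
   --filter Name=tag:Name,Values=my-prefix* \
   Name=instance-state-code,Values=16 --output=json | \
   python create_flocker_inventory.py

# 4. Run the playbook against the generated inventory
ansible-playbook -i ./ansible_inventory ./aws-flocker-installer.yml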

An example of this workflow can be found here.

I’m going to focus on steps #3 and #4 here. First, we use the AWS CLI to output machine information and pass it to a script.

# List only running my-prefix* nodes
$ aws ec2 describe-instances \
   --filter Name=tag:Name,Values=my-prefix* \
   Name=instance-state-code,Values=16 --output=json | \
   python create_flocker_inventory.py

We use the instance-state-code of 16 because it corresponds to running instances. You can find more codes here: http://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_InstanceState.html. Then we choose JSON output using --output=json.

Next, the important piece is the pipe ( `|` ). It passes the output of the command on the left of the | to the command on the right, which is create_flocker_inventory.py, so the AWS CLI output becomes the input to the script.


So what does the Python script do with the output? Below is the script I used to process the JSON. It first sets up an _AGENT_YML variable containing a YAML configuration template. The main() function then takes the JSON parsed by json.load() at script startup, builds a list of dictionaries representing the instances, and writes each instance into an Ansible inventory file called “ansible_inventory”. After that, an “agent.yml” is written to a file along with some secrets pulled from the environment.

import os
import json
import sys


_AGENT_YML = """
version: 1
control-service:
  hostname: %s
  port: 4524
dataset:
  backend: aws
  access_key_id: %s
  secret_access_key: %s
  region: %s
  zone: %s
"""

def main(input_data):
    instances = [
        {
            u'ip': i[u'Instances'][0][u'PublicIpAddress'],
            u'name': i[u'Instances'][0][u'KeyName']
        }
        for i in input_data[u'Reservations']
    ]

    with open('./ansible_inventory', 'w') as inventory_output:
        inventory_output.write('[flocker_control_service]\n')
        inventory_output.write(instances[0][u'ip'] + '\n')
        inventory_output.write('\n')
        inventory_output.write('[flocker_agents]\n')
        for instance in instances:
            inventory_output.write(instance[u'ip'] + '\n')
        inventory_output.write('\n')
        inventory_output.write('[flocker_docker_plugin]\n')
        for instance in instances:
            inventory_output.write(instance[u'ip'] + '\n')
        inventory_output.write('\n')
        inventory_output.write('[nodes:children]\n')
        inventory_output.write('flocker_control_service\n')
        inventory_output.write('flocker_agents\n')
        inventory_output.write('flocker_docker_plugin\n')

    with open('./agent.yml', 'w') as agent_yml:
        agent_yml.write(_AGENT_YML % (instances[0][u'ip'], os.environ['AWS_ACCESS_KEY_ID'], os.environ['AWS_SECRET_ACCESS_KEY'], os.environ['MY_AWS_DEFAULT_REGION'], os.environ['MY_AWS_DEFAULT_REGION'] + os.environ['MY_AWS_ZONE']))


if __name__ == '__main__':
    if sys.stdin.isatty():
        raise SystemExit("Must pipe input into this script.")
    stdin_json = json.load(sys.stdin)
    main(stdin_json)
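For reference, with two running instances the script above produces an ansible_inventory file shaped like this (the addresses are made up):

[flocker_control_service]
54.10.0.11

[flocker_agents]
54.10.0.11
54.10.0.12

[flocker_docker_plugin]
54.10.0.11
54.10.0.12

[nodes:children]
flocker_control_service
flocker_agents
flocker_docker_plugin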

After the script processes the JSON from the AWS CLI, all that remains is to run Ansible with our newly created inventory. In this case, we pass the inventory and configuration along with the Ansible playbook we want for our installation.

$ ANSIBLE_HOST_KEY_CHECKING=false ansible-playbook \
 --key-file ${AWS_SSH_KEYPATH} \
 -i ./ansible_inventory \
 ./aws-flocker-installer.yml \
 --extra-vars "flocker_agent_yml_path=${PWD}/agent.yml"

 

Conclusion

Overall, this flow can be used with other cloud CLI tools such as Azure or GCE that can output instance state you can pipe to a script for further processing. It may not be the most elegant approach, but if you want to get a semi-complex environment up and running repeatably for development, the “pre-setup, get output, process output, install, configure” flow outlined above has worked pretty well.

Docker-based FIO I/O benchmarking


What is FIO?

fio is a tool that will spawn a number of threads or processes doing a particular type of I/O action as specified by the user. The typical use of fio is to write a job file matching the I/O load one wants to simulate. – (https://linux.die.net/man/1/fio)

fio can be a great tool for measuring the I/O of a specific application workload on a particular device or file. It is a detailed benchmarking tool with many options and is widely used for today’s workloads. I personally came across the tool while working at EMC, when I needed to benchmark the disk I/O of applications running in different Linux container runtimes. This leads me to my next topic.

Why Docker-based fio-tools

One of the projects I was working on used Docker on AWS and various private cloud deployments, and we wanted to see how workloads performed in these different cloud environments inside Docker containers with various CPU, memory and disk I/O limits on various block, flash, or DAS-based storage devices.

The way we wanted to do this was to containerize fio and let users pass the workload configuration and target disk to the container doing the testing.

The first part of this was to containerize fio with the option to pass in job files by pathname or by a URL such as a raw GitHub Gist.
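For example, a minimal job file (the values are purely illustrative) dropped into the mounted /tmp/fio-data directory might look like this, written here with a shell heredoc:

cat > /tmp/fio-data/job.fio <<'EOF'
; 16k random reads against whatever is mounted at /tmp/fio-data
[global]
ioengine=libaio
direct=1
runtime=60
time_based
directory=/tmp/fio-data
write_bw_log=fiotest
write_iops_log=fiotest

[random-read-16k]
rw=randread
bs=16k
size=1G
iodepth=16
EOF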

The Dockerfile (below) is based on Ubuntu 14.10, which admittedly could be smaller, but it lets us easily install fio and add a CMD script called run.sh.

FROM ubuntu:14.10
MAINTAINER <Ryan Wallner ryan.wallner@clusterhq.com>

RUN sed -i -e 's/archive.ubuntu.com/old-releases.ubuntu.com/g' /etc/apt/sources.list
RUN apt-get -y update && apt-get -y install fio wget

VOLUME /tmp/fio-data
ADD run.sh /opt/run.sh
RUN chmod +x /opt/run.sh
WORKDIR /tmp/fio-data
CMD ["/opt/run.sh"]
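If you want to build the image yourself rather than pull the public one, it is a plain docker build (the tag name is up to you):

docker build -t wallnerryan/fio-tool .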

What does run.sh do? The script does a few things. It checks that you are passing a JOBFILE name (a fio job), which, without REMOTEFILES, is expected to already exist in `/tmp/fio-data`. It also cleans up the fio-data directory by moving the job files out and back in while removing any old graphs or output. If the user passes REMOTEFILES, the job files are downloaded from the internet with wget before being used.

#!/bin/bash

[ -z "$JOBFILES" ] && echo "Need to set JOBFILES" && exit 1;
echo "Running $JOBFILES"

# We really want no old data in here except the fio script
mv /tmp/fio-data/*.fio /tmp/
rm -rf /tmp/fio-data/*
mv /tmp/*fio /tmp/fio-data/

if [ ! -z "$REMOTEFILES" ]; then
 # We really want no old data in here
 rm -rf /tmp/fio-data/*
 IFS=' '
 echo "Gathering remote files..."
 for file in $REMOTEFILES; do
   wget --directory-prefix=/tmp/fio-data/ "$file"
 done 
fi

fio $JOBFILES

There are two other Dockerfiles aimed at two other operations: 1. producing graphs of the output data with fio2gnuplot, and 2. serving the graphs and output from a Python SimpleHTTPServer on port 8000.

All Dockerfiles and examples can be found here (https://github.com/wallnerryan/fio-tools). The repository also includes an all-in-one image, called fiotools-aio, that will run the job, generate the graphs and serve them in one go.

How to use it

  1. Build the images or use the public images
  2. Create a Fio Jobfile
  3. Run the fio-tool image
docker run -v /tmp/fio-data:/tmp/fio-data \
-e JOBFILES= \
wallnerryan/fio-tool

If your file is a remote raw text file, you can use REMOTEFILES

docker run -v /tmp/fio-data:/tmp/fio-data \
-e REMOTEFILES="http://url.com/.fio" \
-e JOBFILES= wallnerryan/fio-tool

Run the fio-genplots script

docker run -v /tmp/fio-data:/tmp/fio-data wallnerryan/fio-genplots \
<fio2gnuplot options>

Serve your Graph Images and Log Files

docker run -p 8000:8000 -d -v /tmp/fio-data:/tmp/fio-data \
wallnerryan/fio-plotserve

Easiest way: run the “all in one” image. (It will automatically produce IOPS and bandwidth graphs and serve them.)

docker run -p 8000:8000 -v /tmp/fio-data \
-e REMOTEFILES="http://url.com/.fio" \
-e JOBFILES=<your-fio-jobfile> \
-e PLOTNAME=MyTest \
-d --name MyFioTest wallnerryan/fiotools-aio

Other Examples

Important

  • Your fio job file should reference the mount or disk you want to run the job against. In the job file this looks something like directory=/my/mounted/volume to test against Docker volumes.
  • If you want to run more than one all-in-one job, just use -v /tmp/fio-data instead of -v /tmp/fio-data:/tmp/fio-data. The latter is only needed when you run the individual tool images separately.

To use with docker and docker volumes

docker run \
-e REMOTEFILES="https://gist.githubusercontent.com/wallnerryan/fd0146ee3122278d7b5f/raw/cdd8de476abbecb5fb5c56239ab9b6eb3cec3ed5/job.fio" \
-v /tmp/fio-data:/tmp/fio-data \
--volume-driver flocker \
-v myvol1:/myvol \
-e JOBFILES=job.fio wallnerryan/fio-tool

To produce graphs, run the fio-genplots container with -t <name of your graph> -p <pattern of your log files>

Produce Bandwidth Graphs

docker run -v /tmp/fio-data:/tmp/fio-data wallnerryan/fio-genplots \
-t My16kAWSRandomReadTest -b -g -p *_bw*

Produce IOPS graphs

docker run -v /tmp/fio-data:/tmp/fio-data wallnerryan/fio-genplots \
-t My16kAWSRandomReadTest -i -g -p *_iops*

Simply serve them on port 8000

docker run -p 8000:8000 -d \
-v /tmp/fio-data:/tmp/fio-data \
wallnerryan/fio-plotserve

To use the all-in-one image

docker run \
-p 8000:8000 \
-v /tmp/fio-data \
-e REMOTEFILES="https://gist.githubusercontent.com/wallnerryan/fd0146ee3122278d7b5f/raw/006ff707bc1a4aae570b33f4f4cd7729f7d88f43/job.fio" \
-e JOBFILES=job.fio \
-e PLOTNAME=MyTest \
--volume-driver flocker \
-v myvol1:/myvol \
-d \
--name MyTest wallnerryan/fiotools-aio

To use with docker-machine/boot2docker/DockerForMac

You can use a remote fio configuration file via the REMOTEFILES env variable.

docker run \
-e REMOTEFILES="https://gist.githubusercontent.com/wallnerryan/fd0146ee3122278d7b5f/raw/d089b6321746fe2928ce3f89fe64b437d1f669df/job.fio" \
-e JOBFILES=job.fio \
-v /Users/wallnerryan/Desktop/fio:/tmp/fio-data \
wallnerryan/fio-tool

(or) If you have a directory that already has job files in it. *NOTE*: you must be using a shared folder such as Docker > Preferences > File Sharing.

docker run -v /Users/wallnerryan/Desktop/fio:/tmp/fio-data \
-e JOBFILES=job.fio wallnerryan/fio-tool

To produce graphs, run the fio-genplots container with your -t and -p options.

docker run \
-v /Users/wallnerryan/Desktop/fio:/tmp/fio-data wallnerryan/fio-genplots \
-t My16kAWSRandomReadTest -b -g -p *_bw*

Simply serve them on port 8000

docker run -v /Users/wallnerryan/Desktop/fio:/tmp/fio-data \
-d -p 8000:8000 wallnerryan/fio-plotserve

Notes

  • The fio-tools container will clean up the /tmp/fio-data volume by default when you re-run it.
  • If you want to save any data, copy this data out or save the files locally.

How to get graphs

  • When you serve on port 8000, you will see a list of all logs and plots created; click the .png files to view the graphs.


 

Testing and building with codefresh

As a side note, I recently added this repository to build on Codefresh. Right now it builds the fiotools-aio Dockerfile, which I find most useful, and moves on, but it was an easy experience that I wanted to add to the end of this post.

Navigate to https://g.codefresh.io/repositories? or create a free account by logging into Codefresh with your GitHub account. Logging in with GitHub gives Codefresh access to the repositories you grant, which is where the fio-tools images live.

I added the repository as a build and configured it like so.

(Screenshot: Codefresh build configuration for the repository)

This will automatically build my Dockerfile and run any integration and unit tests I have configured in Codefresh. Right now I have none, but I will soon add a simple job to run against a file as an integration test with a Codefresh composition.

Conclusion

Over my time using both native Linux tools and Docker-based or containerized tools, I have found there is a need for both. In fact, when testing container-native application workloads it is sometimes best to gather metrics or benchmarks from the point of view of the application itself, which is why we chose to run fio as a microservice.

Hopefully this was an enjoyable read and thanks for stopping by!

Ryan

Service Oriented Architecture vs Modern Microservices: What’s the difference?

 

Images thanks to http://martinfowler.com/articles/microservices.html and https://en.wikipedia.org/wiki/Service-oriented_architecture

I’ve been researching and working in the area of modern microservices for the past three to four years and have always seen a strong relationship between modern microservices, with tools and cultures like Docker and DevOps, and Service-Oriented Architecture (SOA) and design. I traced SOA’s roots back to Gartner Research in 1996 [2], or at least that is what I could find; feel free to correct me here if I haven’t pegged this. More importantly, in this post I will briefly explore SOA concepts and design and how they relate to modern microservices.

Microservice Architectures (MSA) (credit to meetups and conversations with folks at them) are typically RESTful and based on HTTP/JSON. MSA is an architectural style, not a “thing” to conform to exactly; in other words, I view it as more of a guideline. MSAs are made up of multiple code bases, and each microservice (MS) can have its own implementation language. Because of this, MSAs typically have better readability and simpler deployments for each MS, which in turn leads to better release cycles as long as the organization surrounding the MS teams is put together effectively (more on that later). An MSA doesn’t NEED to be a polyglot but will often naturally become one, because teams may be more familiar with one language over another, which helps delivery time; as long as the interfaces between microservices are defined correctly, the language truly doesn’t matter most of the time. It also enables scaling at a finer level instead of worrying about the whole monolith, which is more agile. Scaling 100 lines of Golang that do one thing well is much easier when you don’t have to worry about other parts of the monolith that you don’t need or don’t want to scale. In most modern MSAs, the REST interfaces mentioned earlier can be considered the “contract” between microservices. These contracts should be as self-describing as they can be, meaning they use formats like JSON, which is human readable and well organized.

Overall, an MSA doesn’t just have technical benefits; it can also mean fewer reviews and approvals because of the smaller context boundaries of each microservice team. It can also help with hiring and on-boarding, because you don’t have to be so strict about language preference: instead of retooling, you can absorb new developers into a polyglot environment.

The motivation for SOA, from what I have learned, is typically business-transformation oriented, which shouldn’t be surprising: enterprises undertook SOA transformations on large budgets. The motivation is different with modern MSA; now it is quick ROI and better technology to help scale, using DevOps practices and platforms.

Some things to consider while designing your modern MSA that I’ve heard and stuck with me:

  • Do not create too many services/microservices
  • Try not to manage your own infrastructure if you can
  • Don’t create too many dependencies (e.g. 1 calls 2 calls 3 calls 4 calls 5 ……)
  • Circuit Breaker Pattern, a control point between microservices.
  • Bulkhead: do not allow one problem to affect the entire boat. Each microservice has its own data service / database / connection pool, so one service does not take down the whole system or other microservices.
  • Chaos testing (Add it to your test suite!)  Example: Chaos Monkey
  • You can do microservices with or without service discovery / catalog. Does it over complicate things?

The referenced text [1] that I use for comparison and similar concepts in this post covers a vast number of important topics related to Service-Oriented Architecture, including the overall challenges of SOA, service reuse, deployment efficiency, integration of applications and data, agility, flexibility, alignment, reference architectures, common semantics, semantic pitfalls, legacy application integration, governance, security, service discovery, inventory and registration, best practices, and more. This post does not go into depth on each of those; instead it looks at some of the similarities and differences between SOA and modern microservices.

Service-Oriented Architecture:

Some of the concepts of SOA that I’d like to mention (not fully encompassing):

  • Technologies widely used were SOAP, XML, WSDL, XSD and lots of Java
  • SOAs typically had a Service Bus or ESB (Enterprise Service Bus), a complex middleware aimed at providing access to and masking of interfaces.
  • Identification and Inventory
  • Value chain and business model is more about changing the entire business process

Modern Microservices:

  • Technologies widely used are JSON, REST/HTTP and Polyglot services.
  • Communication is done over HTTP and the interfaces are abstracted using RESTful contracts.
  • Service Discovery
  • Value chain and business model is about efficiencies, small teams and DevOps practices while eliminating silos.

The Bulkhead Analogy

I want to spend a little time on one of the analogies about modern microservices that stuck with me. This is the bulkhead analogy, which I cannot for the life of me remember where I heard, nor can I google a definitive author, so credit to whoever you are.

The bulkhead analogy is pretty simple, actually, but makes a powerful statement about microservice design. The analogy is that an MSA, like a large ship, is made up of many containers (or in the ship’s case, bulkheads) that have boundaries between them and hold different components of the ship such as engines, cargo, pumps, etc. In an MSA, these containers hold different functions or processes that do something: whether it handles auth requests, connects to a DB, serves a lookup or a transformation mechanism, it doesn’t matter; in both cases you want all containers undamaged for everything to run as well as it can.

The bulkhead analogy goes further to say that if a container is damaged and takes on water, the entire ship should not sink due to one or a few failures. Applied to an MSA, a few broken microservices should not be designed in a way where their failure would take down your entire application or business process. In essence, it designs the bulkheads, or containers, to take damage and remain afloat, or “running”.

Again, this analogy is quite simple, but when designing your MSA it’s important to think about these details, and it is why doing things like proper RESTful design and chaos testing are worth your time in the long run.

Similarities and differences of the two architectures / architecture styles:

Given the little glimpse of information I’ve provided above about service oriented architectures and microservices architectures I want to spend a little time talking about the obvious similarities and differences.

Similarities

Both SOA and MSA do the following:

  • Code or service reuse
  • Loose coupling of services
  • Extensibility of the system as a whole
  • Well-defined, self-contained services or functions that overall help the business process or system
  • Services Registries/Catalogs to discover services

Differences

Some of the differences that stick out to me are:

  • Focus: instead of many services composing one important business process, MSA focuses on allowing one thing (a containerized process) to do one thing and do it well. This allows tighter context boundaries for microservices.
  • SOA leans toward SOAP, XML and WSDL, while MSA favors JSON, REST and polyglot services. This is one of the major differences to me; even though it is just a technology difference, this RESTful, polyglot paradigm lets MSAs thrive with today’s developers.
  • The value chain and business model is more DevOps centric, allowing the focus to be on loosely coupled teams that break down silos and can focus on faster release cycles and CI/CD of their services, whereas SOA teams typically still had one monolithic view of the ESB and services without the DevOps focus.

Conclusion

Overall, this post was a high-level overview of what I think are some of the concepts and major differences between traditional SOA and modern microservices, stemming from a course I took during my masters that explored SOA while I was in industry working on microservices. The main point I would make is that SOA and MSA are very similar, with MSA in a way being SOA’s offspring, using modern tooling and architectural approaches suited to today’s scalable data center.

Note: by no means did I cover SOA or MSA in enough depth to do them real justice, so I suggest looking into some of the topics discussed here or reading through the references below if you’re interested.

Cheers!

[1] Rosen, Michael. “Applied SOA: Service-Oriented Architecture and Design Strategies.” Wiley Publishing Inc., 2008.

[2] Gartner Research, “‘Service Oriented’ Architectures, Part 1.” www.gartner.com/doc/code/29201

[3] “SOA fundamentals in a nutshell” Aka Sniv February 2015 http://www.ibm.com/developerworks/webservices/tutorials/ws-soa-ibmcertified/ws-soa-ibmcertified.html

#DockerCon Production and Persistence for Containers


It has been a crazy past few months leading up to DockerCon in San Francisco and I wanted to share some thoughts about current events and some of the work we have been participating in and around the Docker community and ecosystem now that we’re post-conference this week.

I have been working in open-source communities for more than five years now, across technology domains including software-defined networking, infrastructure & platform as a service, and container technologies. Working on projects in the OpenFlow/SDN, OpenStack, and container communities has had its ups and downs, but Docker is arguably one of the hottest tech communities out there right now.

There are so many developments going on in this ecosystem around pluggable architectures, logging, monitoring, migrations, networking and enabling stateful services in containers. Before I talk more in depth about persistence and some of the work my team and partners have been up to I want to highlight some of the major takeaways from the conference and the community right now.

The theme at @DockerCon was “docker in production”, and by this I mean Docker is ready to be used in production. Depending on who you ask and how Docker is being used, production containers with microservices can either be “hell”, as Bryan Cantrill put it (if you haven’t seen Bryan’s talks on the Unix philosophy and production debugging, I highly recommend any of his sessions, especially the ones from the recent O’Reilly conference and this past week’s DockerCon), or they can really help your applications break down into their bounded domains, with highly manageable and efficient teams going through the CI/CD build/ship/deploy process well. Netflix OSS [http://netflix.github.io/] is always a great example of this done well, and many talks by Adrian Cockcroft dig into this sufficiently. You can also see my last post [https://aucouranton.com/2015/04/10/what-would2-microservices-do/] about microservices, which will help with some context here.

So, is Docker ready for production? Below is a list of what you need to have a handle on before production use. I might add that these are also big technical topics at DockerCon, and there are many projects actively solving problems in these spaces; I’ll try to list a few as I go through.

• Networking

Docker’s “acqui-hire” of the Socketplane startup culminated in libnetwork [https://github.com/docker/libnetwork], which means you can connect your Docker daemons from host to host and allow containers to easily send IP traffic over layer 2 networks. Libnetwork is maintained outside of the main Docker daemon; it abstracts the networking nicely and, most importantly, hides most of the network knowledge from the end user and makes things “just work”.

• Security

Docker is hardening the registry and images as well as the Docker engine itself. I got to chat with Eric Windisch from Docker about what he has been focusing on with Docker security. The Docker engine’s security and hardening is at the forefront because any vulnerability there means the rest of your containers could be compromised. There is a lot of work going on around basic source hardening as well as other techniques using AppArmor and SELinux. I am looking forward to seeing how the security aspects of Docker unfold with other projects like Lightwave from VMware.

• Logging & Monitoring & Manageability

Containers are great, but once you’re running thousands to tens of thousands of them in production, great tooling to help debug, audit, troubleshoot and manage them is a necessity. There seems to be lots of great tooling coming out to help manage containers. First, Docker talked about “Project Orca”, an opinionated container stack that aims at combining Docker Engine, Docker Swarm, networking, a GUI, Docker Compose and security, plus tools for installation, deployment and configuration. This of course isn’t always the way Docker will be run, but it would be nice to have a way to get all of this up and running quickly with manageability. Other tools like Loggly, cAdvisor, Ruxit, Datadog, Logentries and others are all competing to be the best option here, and quite frankly that’s great!

• Pluggability

Docker has given power to its ecosystem by making it clear it wants a pluggable, extensible toolset that allows different plugins to work with its network, auth, and storage stacks. Similar to OpenStack, this lets customers and users, say, plug in an Open vSwitch network driver along with Lightwave for auth and EMC ScaleIO for persistent storage. Pretty cool stuff considering Docker is only just over two years old!

• Stateful / Persistent services

This last bullet is near and dear to my employer EMC, and we have done some really awesome work by partnering up with ClusterHQ (the Data Container People) [https://clusterhq.com/], whose open-source Flocker project can manage volumes for your containers and enables mobility and HA for those volumes when you want to start moving around or recovering your containerized applications. Really cool stuff.

Docker’s announcement of native volume extensions/plugins for Docker [https://github.com/docker/docker/blob/master/experimental/plugins_volume.md] was a popular one at DockerCon. Even though it’s experimental in the 1.7.0 release, the pluggable approach to persistence allows different options for managing your stateful apps.

I mentioned before that we partnered up with ClusterHQ to deliver some drivers that work with Flocker [https://github.com/ClusterHQ/flocker]. Flocker itself can also work with Docker’s new volume extensions via the --volume-driver=flocker flag if you use the Flocker-Docker plugin [https://github.com/ClusterHQ/flocker-docker-plugin].
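As a quick illustration of that flag (the volume and image names here are hypothetical), a stateful container backed by a Flocker-managed volume looks something like:

docker run -d --name demo-mongo \
 --volume-driver=flocker \
 -v demo-volume:/data/db \
 mongo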

EMC integrated and open-sourced two drivers (below); some more information on the partnership can be found here:
https://clusterhq.com/2015/06/17/emc-partnership/

EMC ScaleIO: https://github.com/emccorp/scaleio-flocker-driver
EMC XtremIO: https://github.com/emccorp/xtremio-flocker-driver

Below are some videos showing the integrations.

ScaleIO
https://www.youtube.com/watch?v=ufSbF0-pk_Q&list=PLbssOJyyvHuWiBQAg9EFWH570timj2fxt&index=8

XtremIO
https://www.youtube.com/watch?v=QKwNWTEE6hA&list=PLbssOJyyvHuWiBQAg9EFWH570timj2fxt&index=7

 

We had the opportunity to host a meetup at Pivotal Labs in San Francisco to showcase the partnership and drivers. At the Pivotal Labs office we had a number of people come out for a few live demos, some beer, great food and conversation. Here is a gist for the ScaleIO demo we ran at the meetup, showcasing Flocker + ScaleIO running on Amazon AWS and deploying a MEAN-stack application that ingested Twitter data and placed it into MongoDB using Node and Express.
https://gist.github.com/wallnerryan/7ccc5455840b76c07a70

The slides from the meetup are also available http://slides.com/ryanwallner/persistence-docker-chq-emc/live#/


At DockerCon, Clint Kitson and I, along with some ECS folks, had a packed house for our partner tutorial session showcasing the ClusterHQ, RexRay and ECS announcements. The room was past standing room only, and attendees started to fill the floor. We had hoped to have a bit more time to let the folks with laptops actually hack on some of the work we did, but pressed into 40 minutes we did what we could!

EMC also integrates through a native Go implementation called RexRay, a way to manage your persistent volumes but without the volume mobility Flocker gives you. RexRay is really cool in that it is working on letting you use multiple backends at the same time, say EC2 EBS volumes as well as EMC ScaleIO.

More information here:

RexRay

https://github.com/emccode/rexray

Video

https://www.youtube.com/watch?v=rF8Bc3HZnAU&list=PLbssOJyyvHuWiBQAg9EFWH570timj2fxt&index=11

In all, persistence and containers are here to stay, and there are a number of reasons why, plus some items to keep an eye on. First, the stateless, 12-factor app was all the rage, but it’s not realistic; people are realizing state exists, and running stateful containers like databases is actually important to the microservices world. All containers have state: even a “stateless” container has in-memory state, like running programs and open sockets, that may need to be dealt with in certain use cases like live migration. If you haven’t seen the live migration of a real-time Quake session moving between cities across oceans, please watch it, it’s great! (I’ll post the video once I find it.)

Data is becoming a first-class citizen in these containerized environments. As more workloads get mapped to container architectures, we see the importance of data consistency, integrity and availability come into play for services that produce state and need it to persist. Enter enterprise storage into the mix. Over the week and weekend we saw a number of companies and announcements around this, including some of our own at EMC. A few offerings that caught my eye are:

  • A scalable distributed database you install on your application servers

  • An open-source container volume manager that gives you the ability to containerize databases and other stateful applications and move them around without worrying about managing your backend

  • RedHat

Red Hat announced it has its own integration of persistent storage for containers using its RHS (Red Hat Storage).

http://redhatstorage.redhat.com/2015/06/22/red-hat-drives-deeper-integration-of-persistent-storage-for-containerized-environments/

  • Nutanix

Nutanix talks about a “Volume API” for its platform that helps you provision persistence for containers on its platform.

http://itbloodpressure.com/2015/06/22/nutanix-volume-api-and-containers-dockercon/

  • Portworx

A seemingly new startup that competes with the likes of ClusterHQ. The announcement below talks about how “some” of its platform will be open source. http://venturebeat.com/2015/06/22/container-storage-startup-portworx-puts-away-8-5m/

  • Kubernetes Support for storage

Kubernetes updated some docs on github that reflect being able to use a Google Cloud persistent disk with k8s.

https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/volumes.md#gcepersistentdisk

All in all it was a great DockerCon full of fun events, great people and innovative technology. Hopefully we’ll see you out in Barcelona in November!

Exploring Powerstrip from ClusterHQ: A Socketplane Adapter for Docker


sources: http://socketplane.io, https://github.com/ClusterHQ/powerstrip, http://clusterhq.com

Over the past few months, one of the areas worth exploring within the container ecosystem is how it works with external services and applications. I currently work in EMC CTO Advanced Development, so naturally my interest leans toward data services, but because my background working with SDN controllers and architectures is still one of my strongest interests, I figured I would get to know Powerstrip by working with Socketplane’s tech release.

*Disclaimer:

This is not the official integration of Powerstrip with Socketplane merged over the last week or so. I was working on this in a rat hole, and it works a little differently than the one Socketplane merged recently.

What is Powerstrip?

Powerstrip is a simple proxy for Docker requests and responses to and from the Docker client/daemon that allows you to plug in “adapters” that can ingest a Docker request, perform an action, modification, service setup, etc., and output a response that is then returned to Docker. There is a good explanation on ClusterHQ’s GitHub page for the project.

Powerstrip is really a prototype tool for Docker plugins, and a more formal discussion, issues, and hopefully a future implementation of Docker plugins will come out of such efforts, streamlining the development of new plugins and services for the container ecosystem.

Using a plugin or adapter architecture, one could imagine plugging in storage services, networking services, metadata services, and much more. This is exactly what is happening: Weave and Flocker both have adapters, and Socketplane support was added recently.

Example Implementation in Golang

I decided to explore using Golang because, at the time, I did not see an implementation of the PowerStripProtocol in Go. What is the PowerStripProtocol?

The Powerstrip protocol is a JSON schema that Powerstrip understands so that it can hook its adapters into Docker. There are a few basic objects within the schema that Powerstrip needs to understand, and it varies slightly between pre-hook and post-hook requests and responses.

Pre-Hook

The schema below is what PowerStripProtocolVersion: 1 implements; it needs to have the pre-hook Type as well as a ClientRequest.

{
    PowerstripProtocolVersion: 1,
    Type: "pre-hook",
    ClientRequest: {
        Method: "POST",
        Request: "/v1.16/container/create",
        Body: "{ ... }" or null
    }
}

Below is what your adapter should respond with, a ModifiedClientRequest

{
    PowerstripProtocolVersion: 1,
    ModifiedClientRequest: {
        Method: "POST",
        Request: "/v1.16/container/create",
        Body: "{ ... }" or null
    }
}

Post-Hook

The schema below is what PowerStripProtocolVersion: 1 implements; it needs to have the post-hook Type as well as a ClientRequest and a ServerResponse. We add ServerResponse here because post-hooks are processed after Docker, so they already have a response.

{
    PowerstripProtocolVersion: 1,
    Type: "post-hook",
    ClientRequest: {
        Method: "POST",
        Request: "/v1.16/containers/create",
        Body: "{ ... }"
    }
    ServerResponse: {
        ContentType: "text/plain",
        Body: "{ ... }" response string
                        or null (if it was a GET request),
        Code: 404
    }
}

Below is what your adapter should respond with, a ModifiedServerResponse

{
    PowerstripProtocolVersion: 1,
    ModifiedServerResponse: {
        ContentType: "application/json",
        Body: "{ ... }",
        Code: 200
    }
}

Golang Implementation of the PowerStripProtocol

What this looks like in Golang is below. (I’ll try to open-source this soon, but it’s pretty basic :] ). Notice we implement the main PowerStripProtocol in a Go struct, but the JSON tags and options contain omitempty for certain fields, particularly ServerResponse. This is because we always get a ClientRequest in pre- or post-hooks, but not always a ServerResponse.

(Image: Go structs implementing the PowerStripProtocol)

We can implement these Go structs to create builders, which may be generic or serve a certain purpose, like catching pre-hook container create calls from Docker and setting up socketplane networks, as you will see later. Below are general function heads that return a marshaled []byte of the Go struct to gorest.ResponseBuilder.Write().

(Images: pre-hook and post-hook response builder functions)

Putting it all together

Powerstrip suggests that adapters be created as Docker containers themselves, so the first step was to create a Dockerfile that built an environment that could run the Go adapter.

Dockerfile Snippets

First, we need a Go environment inside the container, which can be set up like the following. We also need a couple of packages, so we include the “go get” lines for them.

(Image: Dockerfile snippet setting up the Go environment and “go get” dependencies)

Next we need to make our script (ADDed earlier in the Dockerfile) executable and use it as an ENTRYPOINT. This script takes commands like run, launch, version, etc.

(Image: Dockerfile snippet making run.sh executable and setting it as the ENTRYPOINT)

Our Go-based socketplane adapter is laid out like the below. (Mind the certs directory, this was something extra to get it working with a firewall).

(Image: layout of the adapter source tree)

“powerstrip/” owns the protocol code; the actions are Create.go and Start.go (for the pre-hook create and post-hook start), which get their ClientRequests from:

  • POST /*/containers/create

And

  • POST /*/containers/*/start

“adapter/” is the main adapter that processes the top-level request and figures out whether it is a pre-hook or post-hook and which URL it matches. It uses a switch on Type to do this, then sends the request on its way to the correct action within “action/”.

“actions/” contains the Start and Create actions that process the two pre-hook and post-hook calls mentioned above. The create hook does most of the work, and I’ll explain it a little further down in the post.


Now we can run “docker build -t powerstrip-socketplane .” in this directory to build the image. Then we use this image to start the adapter like below. Keep in mind the script is actually using the “unattended nopowerstrip” options for socketplane, since we’re using our own separate Powerstrip here.

docker run -d --name powerstrip-socketplane \
 --expose 80 \
 --privileged \ 
 --net=host \
 -e BOOTSTRAP=true \
 -v /var/run/:/var/run/ \
 -v /usr/bin/docker:/usr/bin/docker \
 powerstrip-socketplane launch

Once it is up and running, we can use a simple ping REST URL to test whether it’s up. It should respond “pong” if everything is running.

$curl http://localhost/v1/ping
pong

Now we need to create our YAML file for PowerStrip and start our Powerstrip container.

(Images: the Powerstrip YAML configuration file and the docker run command used to start the Powerstrip container)

If all is well, you should see a few containers running that look something like this:

dddd151d4076        socketplane/socketplane:latest   "socketplane --iface   About an hour ago   Up About an hour                             romantic_babbage

6b7a63ce419a        clusterhq/powerstrip:v0.0.1      "twistd -noy powerst   About an hour ago   Up About an hour    0.0.0.0:2375->2375/tcp   powerstrip
d698047800b1        powerstrip-socketplane:latest    "/opt/run.sh launch"   2 hours ago         Up About an hour                             powerstrip-socketplane

The adapter will automatically spawn a socketplane/socketplane:latest container because it installs socketplane and brings up the socketplane software.

Once this is up, we need to update our DOCKER_HOST environment variable, and then we are ready to start issuing commands to Docker; our adapter will catch the requests. A few examples are below.

export DOCKER_HOST=tcp://127.0.0.1:2375

Next we create some containers with a SOCKETPLANE_CIDR env variable; the adapter will automatically catch this and process the networking information for you.

docker create --name powerstrip-test1 -e SOCKETPLANE_CIDR="10.0.6.1/24" ubuntu /bin/sh -c "while true; do echo hello world; sleep 1; done"
docker create --name powerstrip-test2 -e SOCKETPLANE_CIDR="10.0.6.1/24" ubuntu /bin/sh -c "while true; do echo hello world; sleep 1; done"

Next, start the containers.

docker start powerstrip-test1

docker start powerstrip-test2

If you issue an ifconfig on either one of these containers, you will see that it owns an ovs<uuid> port that connects it to the virtual network.

sudo docker exec powerstrip-test2 ifconfig
ovs23b79cb Link encap:Ethernet  HWaddr 02:42:0a:00:06:02

          inet addr:10.0.6.2  Bcast:0.0.0.0  Mask:255.255.255.0

          inet6 addr: fe80::a433:95ff:fe8f:c8d6/64 Scope:Link

          UP BROADCAST RUNNING  MTU:1440  Metric:1

          RX packets:12 errors:0 dropped:0 overruns:0 frame:0

          TX packets:9 errors:0 dropped:0 overruns:0 carrier:0

          collisions:0 txqueuelen:0

          RX bytes:956 (956.0 B)  TX bytes:726 (726.0 B)

We can issue a ping to test connectivity over the newly created VXLAN networks. (powerstrip-test1=10.0.6.2, and powerstrip-test2=10.0.6.3)

$sudo docker exec powerstrip-test2 ping 10.0.6.2

PING 10.0.6.2 (10.0.6.2) 56(84) bytes of data.

64 bytes from 10.0.6.2: icmp_seq=1 ttl=64 time=0.566 ms

64 bytes from 10.0.6.2: icmp_seq=2 ttl=64 time=0.058 ms

64 bytes from 10.0.6.2: icmp_seq=3 ttl=64 time=0.054 ms

So what’s really going on under the covers?

In my implementation of the Powerstrip adapter, the adapter does the following things:

  • Adapter recognizes a Pre-Hook POST /containers/create call and forwards it to PreHookContainersCreate
  • PreHookContainersCreate checks the client request Body for the env variable SOCKETPLANE_CIDR; if it isn’t there, it returns the request like a normal Docker request. If it is, the adapter probes socketplane to see whether the network exists and creates it if it doesn’t.
  • In either case, a “network-only container” is created and connected to the OVS VXLAN L2 domain; the adapter then modifies the response body in the ModifiedClientRequest so that the NetworkMode gets changed to --net=container:<new-network-only-container>.
  • Then, upon start, the network is up and the container boots like normal with the correct network namespace connected to the socketplane network.

Here is a brief architecture to how it works.

(Diagram: architecture of the Powerstrip socketplane adapter)

Thanks for reading, please comment or email me with any questions.

Cheers!

Docker Virtual Networking with Socketplane.io


ImgCred: http://openvswitch.org/, http://socketplane.io/, https://consul.io/, https://docker.com/

Containers have no doubt been a hyped technology in 2014 and now moving into 2015. Containers have been around for a while now (See my other post on a high-level overview of the timeline) and will be a major technology to think about for the developer as well as within the datacenter moving forward.

Today I want to take the time to go over Socketplane.io’s first preview of the technology they have been working on since announcing their company in mid-October. Socketplane.io is “driving DevOps Defined Networking by enabling distributed security, application services and orchestration for Docker and Linux containers” and is backed by some great tech talent: Brent Salisbury, Dave Tucker, Madhu Venugopal and John M. Willis, who all bring leading-edge network and ops skills into one great effort. I had the pleasure of meeting up with Brent and Madhu at ONS last year, did some work with Brent way back when I was working on Floodlight, and am very excited for the future of Socketplane.io.

What’s behind Socketplane.io and What is the current preview technology?

The current tech preview released on GitHub lets you get a taste of multi-host networking between Docker hosts using Open vSwitch and Consul as core enablers, building VXLAN tunnels between hosts to connect Docker containers on the same virtual (logical) network with no remote/external SDN controller needed. The flows are programmed via OVSDB into the software switch, so the user experience and maintenance are smooth with the fewest moving parts. Users interact with a wrapper CLI for docker called “socketplane” that also controls how socketplane’s virtual networks are created, deleted and manipulated. Socketplane’s preview uses this wrapper, but if you’re following Docker’s plugin trend then you know they hope to provide virtual network services that way in the future (keep posted on this). I’d also love to see this tech become portable to other container technologies such as LXD or Rocket. Enough text, let’s get into using Socketplane.io.

Walkthrough

First let’s look at the components of what we will be setting up in this post. Below you will see two nodes, socketplane node1 and socketplane node2, which we will set up with Vagrant and VirtualBox using Socketplane’s included Vagrantfile. On these two nodes, when socketplane starts up it will install OVS and Docker and start a socketplane container that runs Consul for managing network state (one socketplane container will be the master; more on this later). Then we can create networks, create containers and play with some applications. I will cover this in detail, show how the hosts are connected via VXLAN, and demo a sample web application across hosts.

(Diagram: two socketplane nodes, each running Docker, OVS and a socketplane/Consul container, connected over VXLAN)

Setup Socketplane.io’s preview.

First install VirtualBox and Vagrant (I don’t cover this, but use the links), then check out the repo.


Set an environment variable named SOCKETPLANE_NODES that tells the installation how many nodes to set up in your local environment. I chose 3. Then run “vagrant up” in the source directory.
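The commands boil down to roughly the following (the repository URL is from memory, so double-check it against the project page):

git clone https://github.com/socketplane/socketplane
cd socketplane
export SOCKETPLANE_NODES=3
vagrant up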


After a few (or ten) minutes you should be all set to test out socketplane, thanks to the easy Vagrant setup provided by the socketplane guys. (There are also manual install instructions on their GitHub page if you fancy setting this up on bare metal or something.) You should see 3 nodes in VirtualBox after this, or you can run “vagrant status”.


Now we can SSH into one of our socketplane nodes. Let’s SSH into node1.


Now you’re SSHed into one of the socketplane nodes. We can issue a “sudo socketplane” command and see the available options the CLI tool gives us.


The commands used to run, start, stop, remove, etc. containers are “socketplane run | start | stop | rm | attach”, and these are used just like “docker run | start | stop | rm | attach”.

Socketplane sets up a “default” network that (for me) has a 10.1.0.0/16 subnet, and if you run “socketplane network list” you should see this network. To see how we can create virtual networks (vnets), we can issue “socketplane network create foo4 10.6.0.0/16”.
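In plain commands, that is:

sudo socketplane network create foo4 10.6.0.0/16
sudo socketplane network list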


This will create a vnet named foo4 along with a VLAN for VXLAN and a default gateway at the .1 address. Now we can see both our default network and our “foo4” network in the list command.


If we look at our Open vSwitch configuration now using the “ovs-vsctl show” command, we will also see a new port named foo4 that acts as our gateway so we can talk to the rest of the nodes on this virtual network. You should also see the VXLAN endpoints that align with the eth1 interfaces on the socketplane nodes.

Great, now we are all set up to run some containers that connect over the virtual network we just created. On socketplane-1, issue “sudo socketplane run -n foo4 -it ubuntu:14.10 /bin/bash”; this will start an Ubuntu container on socketplane-1 and connect it to the foo4 network.
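As a plain command, run on each host, that is:

# on socketplane-1, then again on socketplane-2
sudo socketplane run -n foo4 -it ubuntu:14.10 /bin/bash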


You can Ctrl-P + Ctrl-Q to exit the container and leave the tty open. If you issue an “ovs-vsctl show” command again, you will see an ovs<uuid> port added to the docker0-ovs bridge. This connects the container to the bridge, allowing it to communicate over the vnet. Let’s create another container, but this time on our socketplane-2 host. So exit out, SSH into socketplane-2 and issue the same command. We should then be able to ping between our two containers on different hosts using the same vnet.


Awesome, we can ping our first container from our second without having to set up any network information on the second host. This is because the network state is propagated across the cluster, so when we reference “foo4” on any of the nodes it uses the same network information. If you Ctrl-P + Ctrl-Q while running ping, we can also see the flows that are in our switch. We just need to use appctl and reference our docker0-ovs integration bridge.
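The command is likely something along these lines (the exact appctl subcommand is from memory):

sudo ovs-appctl bridge/dump-flows docker0-ovs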


As we can see, the flows show the VXLAN rules that add the tunnel header and forward traffic to the destination VXLAN endpoint, and that pop the VLAN (action:pop_vlan) off the encapsulation on ingress to our containers.

To show a more useful example, we can start a simple web application on socketplane-2 and access it over our vnet from socketplane-1 without having to use the Docker host IP or NAT. See below.

First start an image named tutum/hello-world, add it to the foo4 network, expose port 80 at runtime and give it the name “web”. Use the “web” name with the socketplane info command to get its IP address.
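Assuming socketplane run passes the usual docker run flags straight through, the commands look roughly like:

sudo socketplane run -n foo4 --name web -p 80 -d tutum/hello-world
sudo socketplane info web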


Next, log out, SSH to socketplane-1, run an image called tutum/curl (a simple curl tool) and run curl <IP-Address>; you should get back a response from the simple “web” container we just set up.


This is great! No more accessing pages based on host addresses and NAT. Although a simple use case, this shows some of the benefits of running a virtual network across many Docker hosts.

A little extra

So I mentioned before that socketplane runs Consul in a separate container; you can see the Consul logs by issuing “sudo socketplane agent logs” on any node. For some more fun, and to poke around at some things, we are going to use nsenter. First find the socketplane Docker container, then follow the commands to get into it.
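One way to do that, using standard docker and nsenter options (nothing socketplane-specific):

# find the socketplane service container and its PID
docker ps | grep socketplane/socketplane
PID=$(docker inspect --format '{{.State.Pid}}' <socketplane-container-id>)
# enter its namespaces
sudo nsenter --target $PID --mount --uts --ipc --net --pid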


Now that you’re in the socketplane container, we can issue an “ip link” and see that socketplane uses host networking to run Consul on the host network so the Consul cluster can communicate. We can confirm this by looking at the script used to start the service container.

See line 5 of this snippet: docker runs socketplane with host networking.

(Snippet: the socketplane service container run with host networking)

You can query Consul from the socketplane-* hosts or from inside the socketplane container, and you should get a response back showing that it is listening on 8500.
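For example, using Consul’s standard HTTP API (these endpoints are part of Consul itself, not socketplane):

curl http://localhost:8500/v1/status/leader
curl http://localhost:8500/v1/catalog/nodes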


You can issue “consul members” on the socketplane hosts to see the cluster as well.


You can also play around with consul via the python-consul library to see information stored in Consul.


Conclusion

Overall this is a great upgrade to the Docker ecosystem. We have seen other software products like Weave, Flannel, Flocker and others I’m probably missing address a clustered Docker setup with some type of networking overlay or communications proxy to connect multi-hosted containers. Socketplane’s preview is completely open source and is developed on GitHub; if you’re interested in walking through the code, working on bugs or suggesting and adding features, visit the Git page. Overall I like the OVS integration a lot, mainly because I am a proponent of the software switch and pushing intelligence to the edge. I’d love to see optional DPDK integration for performance in the near future, as well as more features that enable firewalling between vnets and other things. I’m sure it’s probably on the horizon, and I am eagerly looking forward to what Socketplane.io has in store for containers.

cheers!

Docker Remote Host Management with Openstack


So I decided to participate in #DockerGlobalHackday and oh boy was it a learning experience. First off, the hackday started off with great presentations from some of the hackers and docker contributors. One that caught my eye was Host Management (https://www.youtube.com/watch?v=lZGmvGw-mWc) and (https://github.com/docker/docker/issues/8681)

Ben Firshman and contributors thought up and created this feature for Docker that lets you provision remote daemons on demand on cloud providers. It had me thinking that maybe I should hack on a driver for a local OpenStack deployment. So I did, and this is my DockerHackDayHack.

https://github.com/wallnerryan/docker/tree/host-managment-openstack 

https://github.com/bfirsh/docker/pull/13

*Note: the code is raw, very raw; I hadn’t coded in Go until this hackday 🙂 which I guess is what it is good for.

*Note: this code was originally developed using DevStack with a flat network, so there is some rough-edged code for supporting an out-of-the-box DevStack with nova-network, but it probably won’t work 🙂 I’ll make an update on this soon.

*Note: the working example was tested on OpenStack Icehouse with Neutron networking. Neutron has one public and one private network; the public network is where the floating IP for the Docker daemon comes from.

Here are the options now for host-management:

./bundles/1.3.0-dev/binary/docker-1.3.0-dev hosts create
(Image: help output for “hosts create” listing the added --openstack-* options)

Notice the options with the “--openstack-” prefix; this is what was added. If you’re using Neutron networking, then the network for floating IPs is needed. The image can be any Ubuntu or Debian based cloud image, but it must support cloud-init / the metadata service, which is how the Docker installation is injected. Below is an example of how to kick off a new Docker OpenStack daemon (beware, the command is quite long with the OpenStack options; replace X.X.X.X with your Keystone endpoint, as well as the UUIDs of your OpenStack resources). It also includes --openstack-nameserver, which is not required, but in my case it injects a nameserver line into the image’s resolv.conf using cloud-init / the metadata service.

In the future I plan on making this so we don’t need as many UUIDs; rather, the driver will take text as input and look up the relevant UUIDs to use (limited time to hack on this).

#./bundles/1.3.0-dev/binary/docker-1.3.0-dev hosts create -d openstack \
 --openstack-image-id="d4f62660-3f03-45b7-9f63-165814fea55e" \
 --openstack-auth-endpoint="http://X.X.X.X:5000/v2.0" \
 --openstack-floating-net="4a3beafb-2ecf-42ca-8de3-232e0d137931" \
 --openstack-username="admin" \
 --openstack-password="danger" \
 --openstack-tenant-id="daad3fe7f60e42ea9a4e881c7343daef" \
 --openstack-keypair="keypair1" \
 --openstack-region-name="regionOne" \
 --openstack-net-id="1664ddb9-8a14-48cd-9bee-a3d4f2fe16a0" \
 --openstack-flavor="2" \
 --openstack-nameserver="10.254.66.23" \
 --openstack-secgroup-id="e3eb2dc6-4e67-4421-bce2-7d97e3fda356" \
 openstack-dockerhost-1

The result you will see is: (with maybe some errors if your security groups are already setup)

#[2014-11-03T08:31:51.524125904-08:00] [info] Creating server.
#[2014-11-03T08:32:16.698970304-08:00] [info] Server created successfully.
#%!(EXTRA string=63323227-1c1e-40f6-9c25-78196010936b)[2014-11-03T08:32:17.888292214-08:00] [info] Created Floating Ip
#[2014-11-03T08:32:18.439105902-08:00] [info] “openstack-dockerhost-1” has been created and is now the active host. Docker commands #will now run against that host
If you specified an Ubuntu image (I just downloaded the Ubuntu Cloud Image from here), the Docker daemon will be deployed on an Ubuntu server in OpenStack and will start on port 2375.
The instance will look something like this in the OpenStack dashboard.
Here is the floating ip association with the Daemon Host.
And it’s fully functional; see some other Docker hosts commands below. Here is a video of the deployment (https://www.youtube.com/watch?v=aBG3uL8g124).
(View Hosts)
./bundles/1.3.0-dev/binary/docker-1.3.0-dev hosts
You can make either the local Unix socket or the OpenStack node the active daemon and use it like any other Docker client. The “hosts” command can run locally on your laptop while your containers and daemon run in OpenStack. One could see this feature replacing something like Boot2Docker.
(docker ps) – Shows containers running in your openstack deployed docker daemon
./bundles/1.3.0-dev/binary/docker-1.3.0-dev ps -a
(View Remote Info)
./bundles/1.3.0-dev/binary/docker-1.3.0-dev info
Thanks to the Docker community for putting these events together! Pretty cool! Happy Monday and happy Dockering.
P.S. I also used a Packer/VirtualBox setup for DevStack in the beginning. Here is the Packer config and the preseed.cfg. Just download DevStack and run it on there.
{
 "variables": {
 "ssh_name": "yourname",
 "ssh_pass": "password",
 "hostname": "packer-ubuntu-1204"
 },

 "builders": [{
 "type": "virtualbox-iso",
 "guest_os_type": "Ubuntu_64",

 "vboxmanage": [
 ["modifyvm", "{{.Name}}", "--vram", "32"],
 ["modifyvm", "{{.Name}}", "--memory", "2048"],
 ["modifyvm", "{{.Name}}","--natpf1", "web,tcp,,8080,,80"],
 ["modifyvm", "{{.Name}}","--natpf1", "fivethousand,tcp,,5000,,5000"],
 ["modifyvm", "{{.Name}}","--natpf1", "ninesixninesix,tcp,,9696,,9696"],
 ["modifyvm", "{{.Name}}","--natpf1", "eightsevensevenfour,tcp,,8774,,8774"],
 ["modifyvm", "{{.Name}}","--natpf1", "threefivethreefiveseven,tcp,,35357,,35357"]
 ],

 "disk_size" : 10000,

 "iso_url": "http://releases.ubuntu.com/precise/ubuntu-12.04.4-server-amd64.iso",
 "iso_checksum": "e83adb9af4ec0a039e6a5c6e145a34de",
 "iso_checksum_type": "md5",

 "http_directory" : "ubuntu_64",
 "http_port_min" : 9001,
 "http_port_max" : 9001,

 "ssh_username": "{{user `ssh_name`}}",
 "ssh_password": "{{user `ssh_pass`}}",
 "ssh_wait_timeout": "20m",

 "shutdown_command": "echo {{user `ssh_pass`}} | sudo -S shutdown -P now",

 "boot_command" : [
 "<esc><esc><enter><wait>",
 "/install/vmlinuz noapic ",
 "preseed/url=http://{{ .HTTPIP }}:{{ .HTTPPort }}/preseed.cfg ",
 "debian-installer=en_US auto locale=en_US kbd-chooser/method=us ",
 "hostname={{user `hostname`}} ",
 "fb=false debconf/frontend=noninteractive ",
 "keyboard-configuration/modelcode=SKIP keyboard-configuration/layout=USA ",
 "keyboard-configuration/variant=USA console-setup/ask_detect=false ",
 "initrd=/install/initrd.gz -- <enter>"
 ]
 }]
}
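To build the box from the template above (assuming you save it as devstack.json and the preseed below as ubuntu_64/preseed.cfg, to match the http_directory setting), the invocation is just:

packer build devstack.json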


(Preseed.cfg Starts HERE)
# Some inspiration:
# * https://github.com/chrisroberts/vagrant-boxes/blob/master/definitions/precise-64/preseed.cfg
# * https://github.com/cal/vagrant-ubuntu-precise-64/blob/master/preseed.cfg

# English plx
d-i debian-installer/language string en
d-i debian-installer/locale string en_US.UTF-8
d-i localechooser/preferred-locale string en_US.UTF-8
d-i localechooser/supported-locales en_US.UTF-8

# Including keyboards
d-i console-setup/ask_detect boolean false
d-i keyboard-configuration/layout select USA
d-i keyboard-configuration/variant select USA
d-i keyboard-configuration/modelcode string pc105


# Just roll with it
d-i netcfg/get_hostname string this-host
d-i netcfg/get_domain string this-host

d-i time/zone string UTC
d-i clock-setup/utc-auto boolean true
d-i clock-setup/utc boolean true


# Choices: Dialog, Readline, Gnome, Kde, Editor, Noninteractive
d-i debconf debconf/frontend select Noninteractive

d-i pkgsel/install-language-support boolean false
tasksel tasksel/first multiselect standard, ubuntu-server


# Stuck between a rock and a HDD place
d-i partman-auto/method string lvm
d-i partman-lvm/confirm boolean true
d-i partman-lvm/device_remove_lvm boolean true
d-i partman-auto/choose_recipe select atomic

d-i partman/confirm_write_new_label boolean true
d-i partman/confirm_nooverwrite boolean true
d-i partman/choose_partition select finish
d-i partman/confirm boolean true

# Write the changes to disks and configure LVM?
d-i partman-lvm/confirm boolean true
d-i partman-lvm/confirm_nooverwrite boolean true
d-i partman-auto-lvm/guided_size string max

# No proxy, plx
d-i mirror/http/proxy string

# Default user, change
d-i passwd/user-fullname string yourname
d-i passwd/username string yourname
d-i passwd/user-password password password
d-i passwd/user-password-again password password
d-i user-setup/encrypt-home boolean false
d-i user-setup/allow-password-weak boolean true

# No language support packages.
d-i pkgsel/install-language-support boolean false

# Individual additional packages to install
d-i pkgsel/include string build-essential ssh

#For the update
d-i pkgsel/update-policy select none

# Whether to upgrade packages after debootstrap.
# Allowed values: none, safe-upgrade, full-upgrade
d-i pkgsel/upgrade select safe-upgrade

# Go grub, go!
d-i grub-installer/only_debian boolean true

d-i finish-install/reboot_in_progress note