OpenStack + Docker + OpenContrail

Docker is a tool that simplifies the process of building container images. One of the issues with OpenStack is that building Glance images is an off-line process: it is often difficult to track the contents of the images, how they were created and what software they contain. Docker also does not depend on virtualization; it creates Linux container images that can be run directly by the host OS. This makes much more efficient use of memory and delivers better performance. It is a very attractive solution for data-center operators that run a private infrastructure serving in-house developed applications.

In order to run Docker as an OpenStack “hypervisor”, start with devstack on Ubuntu 12.04 LTS. devstack includes a Docker installer that adds a Debian repository with the latest version of the Docker packages.

After cloning the devstack repository one can issue the command:


tools/docker/install_docker.sh
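
To confirm that the Docker daemon is usable before moving on, a quick sanity check along these lines can help (the busybox image is only an example):

sudo docker version
sudo docker run busybox echo "docker works"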

For OpenContrail there isn’t yet a similar install tool. I built the OpenContrail packages from source and installed them manually, modifying the configuration files in order to have config, control and compute-node components all running locally.

Next, I edited the devstack localrc file to have the following settings:

VIRT_DRIVER=docker

disable_service n-net
enable_service neutron
enable_service q-svc
Q_PLUGIN=contrail

NEUTRON_REPO=https://github.com/Juniper/neutron.git
NEUTRON_BRANCH=contrail/havana

I also added the following file to devstack; devstack looks for the per-plugin neutron hooks under lib/neutron_plugins/, in a file named after Q_PLUGIN:

function has_neutron_plugin_security_group() {
    return 1
}

function neutron_plugin_configure_common() {
    Q_PLUGIN_CONF_PATH=etc/neutron/plugins/juniper/contrail
    Q_PLUGIN_CONF_FILENAME=ContrailPlugin.ini
    Q_DB_NAME=neutron
    Q_PLUGIN_CLASS=neutron.plugins.juniper.contrail.contrailplugin.ContrailPlugin
}

function neutron_plugin_configure_debug_command() {
    :
}

function neutron_plugin_create_nova_conf() {
    NOVA_VIF_DRIVER=nova_contrail_vif.contrailvif.VRouterVIFDriver
}

function neutron_plugin_configure_service() {
    iniset $NEUTRON_CONF quotas quota_driver neutron.quota.ConfDriver
}

function neutron_plugin_setup_interface_driver() {
    :
}

function neutron_plugin_check_adv_test_requirements() {
    return 0
}

function is_neutron_ovs_base_plugin() {
    return 1
}

Unfortunately, the Docker driver was moved out from the main Nova code to stackforge. This requires the following change:

diff --git a/lib/nova_plugins/hypervisor-docker b/lib/nova_plugins/hypervisor-docker
index cdbc4d1..b4c1db9 100644
--- a/lib/nova_plugins/hypervisor-docker
+++ b/lib/nova_plugins/hypervisor-docker
@@ -53,7 +53,8 @@ function cleanup_nova_hypervisor {
 
 # configure_nova_hypervisor - Set config files, create data dirs, etc
 function configure_nova_hypervisor {
-    iniset $NOVA_CONF DEFAULT compute_driver docker.DockerDriver
+    iniset $NOVA_CONF DEFAULT compute_driver novadocker.virt.docker.driver.DockerDriver
     iniset $GLANCE_API_CONF DEFAULT container_formats ami,ari,aki,bare,ovf,docker
 }

The next step is to install the nova-docker package. The master branch is available at https://github.com/stackforge/nova-docker. The fork that contains the OpenContrail vif_driver is currently at https://github.com/pedro-r-marques/nova-docker.git in the branch “opencontrail”.
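
A minimal way to fetch and install that fork, assuming pip is available on the host, looks roughly like this:

git clone -b opencontrail https://github.com/pedro-r-marques/nova-docker.git
cd nova-docker
sudo pip install .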

Before executing the nova-docker driver, I had to create an extra rootwrap config file, so that nova-rootwrap allows the ln -sf command the driver uses:

[Filters]
# novadocker/virt/docker/driver.py: 'ln', '-sf'
ln: CommandFilter, /bin/ln, root
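
Assuming rootwrap.conf points its filters_path at /etc/nova/rootwrap.d (the devstack default), the file can simply be dropped there; the file name below is just illustrative:

sudo cp docker.filters /etc/nova/rootwrap.d/docker.filters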

There is an additional change that needs to be performed: the nova configuration file requires the following lines in order to execute the OpenContrail vif_driver rather than the default one:

[docker]
vif_driver = novadocker.virt.docker.opencontrail.OpenContrailVIFDriver
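
If you prefer to let devstack write this setting for you, the same thing can be done from configure_nova_hypervisor with iniset, mirroring the diff above (this is a sketch, not part of the upstream file):

iniset $NOVA_CONF docker vif_driver novadocker.virt.docker.opencontrail.OpenContrailVIFDriver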

After these steps, you can execute stack.sh and boot an instance. The stack.sh script creates a network called “private”. In order to start a docker container via nova, one can issue the command:

nova boot --image {image-uuid} --nic net-id={network-uuid} --flavor 1 {instance-name}
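
The UUID of the “private” network can be retrieved with the neutron CLI, for example:

neutron net-show private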

And that’s all folks!

If we take a peek at the vif_driver code referenced above, it is really striking how few lines of code are involved.
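
Conceptually, the per-container plumbing boils down to very little. The shell equivalent is roughly the following sketch; the interface names, namespace id and address are made up, and the real driver registers the port with the vRouter agent through its API rather than by shelling out:

# create a veth pair and move one end into the container's network namespace
ip link add veth-host type veth peer name veth-guest
ip link set veth-guest netns $CONTAINER_NETNS
# configure the guest end and bring both ends up
ip netns exec $CONTAINER_NETNS ip addr add 10.0.0.3/24 dev veth-guest
ip netns exec $CONTAINER_NETNS ip link set veth-guest up
ip link set veth-host up
# ... hand veth-host over to the vRouter and announce the port to the agent ...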

There is some additional work that needs to be done in the backend: the compute-node API that is used for nova, neutron, docker and netns provisioning needs to be extracted into a single library. That needs to be sorted out; more than anything else, we need a simple tool to install OpenContrail from a PPA.

But in my mind, Docker + OpenContrail are a great combination for clusters built to host internally developed applications, as is the case for SaaS providers. Entire application stacks can be deployed in minutes, at scale.

The only piece missing is a compute scheduler designed to manage the instantaneous load of an application, rather than virtual machines that come in fixed “flavor” increments of memory consumption.

Network Namespace Provisioning

I’d written previously on how to use OpenContrail with Linux network namespaces. I managed to find the cycles to put together a configuration wrapper that can be used as pre-start and post-stop scripts when starting a daemon out of init.d. The scripts are in a Python package available on GitHub.

As in the previous post, the test application I used was the Apache web server. But most Linux services follow a rather similar pattern when it comes to their init scripts.

I started by installing two bare-metal servers with the OpenContrail community packages; one server running the configuration service, and both of them running the control-node and compute-node components.

For this exercise, the objective was to be able to select the routing of the outbound traffic for a specific application. I started by creating two virtual-networks: one used for incoming traffic and a separate one used for the outbound traffic of that specific application. The script network_manage.py can be used for this purpose; it can create and delete virtual-networks as well as add and delete external route targets.

After creating the inbound and the app-specific outbound networks, one can use the netns-daemon-start script to create a Linux network namespace. A network namespace contains its own set of interfaces and its own routing table; one or more applications can use the namespace. Using this mechanism, an application can be bound to one or more virtual-networks.

When the netns-daemon-start script is given both a “--network” and an “--outbound” parameter, it creates two virtual interfaces, with the default route being added only to the “outbound” virtual-network. This makes it such that all traffic exits through this network.
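
The effect on the namespace routing table is easy to picture. With made-up names and prefixes, it ends up looking roughly like this, with only the outbound interface carrying the default route:

$ sudo ip netns exec app-ns ip route
default via 192.168.2.1 dev veth-out
192.168.1.0/24 dev veth-in  proto kernel  scope link  src 192.168.1.10
192.168.2.0/24 dev veth-out  proto kernel  scope link  src 192.168.2.10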

The script hides a couple of tricks, such as disabling the default RPF check in Linux for the “inbound” interface, as well as configuring the interface address via “ip addr add” rather than using the DHCP client. Currently it is not possible to control whether or not to advertise the default route via DHCP in a virtual network, despite the fact that I’d previously written a post on how to implement it.
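
Both tricks are plain sysctl/iproute2 operations; roughly the following, with the interface name and address again being illustrative:

# relax reverse-path filtering on the inbound interface
sudo ip netns exec app-ns sysctl -w net.ipv4.conf.veth-in.rp_filter=0
# assign the address handed out by the API server instead of running a DHCP client
sudo ip netns exec app-ns ip addr add 192.168.1.10/24 dev veth-in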

Once the application is bound to the virtual-networks, the network_manage.py script above allows one to selectively import routes from other virtual-networks. The command “network_manage.py --import-only --rtarget x:y rtarget_add <network>” can be used to control which routes are imported into the outbound VRF.

Please note that the way the script is currently implemented is a bit of a “hack”: it changes the routing-instance directly, given that at the moment there is no way to specify that a route-target should be import-only at the routing-table level. Hopefully this will be fixed soon.

While the package above lacks support for several features typically available in an OpenContrail OpenStack cluster (floating-ip comes to mind), it is capable of attaching a specific application directly to one or more virtual-networks. Fine-grained control of outbound application traffic is something that I see more and more people interested in.

Enjoy!