Within an OpenStack cluster that uses the OpenContrail Neutron plugin, all network ports are associated with a virtual-network. Virtual-networks are implemented as an overlay, using the control-plane protocol defined in RFC 4364 (BGP/MPLS IP VPNs).
There needs to be a gateway between the virtualized data-center network and the non-virtualized external network. For production environments it is desirable to use an L3VPN capable router such as a Juniper MX-series device. For test environments it is often preferable to be able to use a software gateway.
The OpenContrail software itself can be used as a software gateway. I recently configured a system to run the contrail configuration, control-plane and vrouter components to serve as a gateway.
One needs to run the following processes:
- api-server
- schema-transformer
- control-node
- vrouter agent
Before running the OpenContrail daemons it is necessary to install several infrastructure packages. On an Ubuntu 12.04 LTS system one can use the following script:
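A minimal version of such a script might look like the sketch below; package names are for Ubuntu 12.04, and the cassandra package typically has to come from the Apache Cassandra apt repository rather than the stock archive:

```shell
#!/bin/sh
# Install the infrastructure packages required by the contrail api-server.
apt-get update
# Java runtime, required by zookeeper, cassandra and irond.
apt-get install -y openjdk-7-jre-headless
apt-get install -y zookeeperd
# Note: cassandra is not in the stock 12.04 archive; add the Apache
# Cassandra apt repository first.
apt-get install -y cassandra
apt-get install -y redis-server
```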
This installs zookeeper, cassandra and redis which are requirements for the contrail api-server. In addition to these packages, it is also necessary to install the IF-MAP server (“irond”). This can be retrieved from http://trust.f4.hs-hannover.de/download/iron/archive/irond-0.3.0-bin.zip
“irond” authentication is based on the file basicauthuser.properties in the working directory where the application is started. This file must contain the following entries:
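The file pairs each IF-MAP client username with its password, one per line. Assuming the control-node credentials used later in this article (control-user / control-user-passwd), and hypothetical credentials for the api-server and schema-transformer, it could look like:

```
# <username>:<password>, one IF-MAP client per line
api-server:api-server
schema-transformer:schema-transformer
control-user:control-user-passwd
```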
The next step is to install the OpenContrail binaries, either from the binary distribution or by compiling from source.
We can start by launching the api-server. It requires the following configuration (usually /etc/contrail/api_server.conf):
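The exact option names vary between OpenContrail releases; the following is a sketch of a minimal configuration, assuming cassandra, zookeeper and irond all run on the local host (the listen port is supplied on the command line below, so it is not set here):

```
[DEFAULTS]
ifmap_server_ip=127.0.0.1
ifmap_server_port=8443
ifmap_username=api-server
ifmap_password=api-server
cassandra_server_list=127.0.0.1:9160
zk_server_ip=127.0.0.1:2181
listen_ip_addr=0.0.0.0
multi_tenancy=False
```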
The command line arguments for the api-server should be:
python /vnc_cfg_api_server.py --conf_file /etc/contrail/api_server.conf --listen_port 8082 --worker_id 0
The schema-transformer requires the following configuration file (/etc/contrail/schema_transformer.conf):
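Again a sketch, with the same local-host assumptions as the api-server configuration and the IF-MAP credentials as placeholders:

```
[DEFAULTS]
ifmap_server_ip=127.0.0.1
ifmap_server_port=8443
ifmap_username=schema-transformer
ifmap_password=schema-transformer
api_server_ip=127.0.0.1
api_server_port=8082
cassandra_server_list=127.0.0.1:9160
zk_server_ip=127.0.0.1
```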
And a command line of:
python /schema_transformer/to_bgp.py --conf_file /etc/contrail/schema_transformer.conf
The control-node process requires the following command-line arguments:
control-node --map-user control-user --map-password control-user-passwd --hostname $(hostname) --bgp-port 179 --map-server-url https://localhost:8443
The steps above provision and start the minimum set of configuration and control-node components. The vrouter agent must be configured so that both a vhost interface and a gateway interface are defined. In the following example the physical interface eth4.50 connects to the underlay. The vgw0 interface will be attached to the routing-instance that corresponds to a network called “Public”.
In /etc/network/interfaces we have:
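A sketch of the relevant stanzas; the 10.0.50.0/24 addressing on vhost0 is an assumption, substitute the real underlay addressing:

```
# Physical VLAN interface facing the underlay; the vrouter owns it,
# so it carries no IP address itself.
auto eth4.50
iface eth4.50 inet manual

# vhost0 carries the host's own address on top of the vrouter.
auto vhost0
iface vhost0 inet static
    address 10.0.50.10
    netmask 255.255.255.0

# Gateway interface to be attached to the "Public" routing-instance.
auto vgw0
iface vgw0 inet manual
```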
The network interfaces can be brought up with the following sequence of commands (as root):
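A sketch of that sequence, assuming the vrouter kernel module has been built for the running kernel and that $MAC holds the MAC address of eth4.50:

```shell
# Load the vrouter kernel module.
insmod vrouter.ko
# Create the vhost interface, reusing the physical MAC.
vif --create vhost0 --mac $MAC
# Attach the physical interface and vhost0 in VRF 0, cross-connected.
vif --add eth4.50 --mac $MAC --vrf 0 --vhost-phys --type physical
vif --add vhost0 --mac $MAC --vrf 0 --type vhost --xconnect eth4.50
# Bring up vhost0 with the address from /etc/network/interfaces.
ifup vhost0
```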
After this the contrail agent must be started with the following command line:
vnswad --config-file /etc/contrail/agent.conf
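In early OpenContrail releases the agent reads an XML configuration file. The sketch below shows the general shape of /etc/contrail/agent.conf, but the element names, the addresses and the Public subnet are assumptions to be checked against the installed release:

```xml
<?xml version="1.0" encoding="utf-8"?>
<config>
  <agent>
    <!-- vhost0 carries the host address set in /etc/network/interfaces -->
    <vhost>
      <name>vhost0</name>
      <ip-address>10.0.50.10/24</ip-address>
      <gateway>10.0.50.1</gateway>
    </vhost>
    <!-- physical interface attached to the underlay -->
    <eth-port>
      <name>eth4.50</name>
    </eth-port>
    <!-- control-node this agent registers with -->
    <xmpp-server>
      <ip-address>127.0.0.1</ip-address>
    </xmpp-server>
    <!-- simple gateway: bind vgw0 to the "Public" routing-instance -->
    <gateway virtual-network="default-domain:default-project:Public">
      <interface>vgw0</interface>
      <subnet>192.0.2.0/24</subnet>
    </gateway>
  </agent>
</config>
```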
Once these steps are complete all the components should be fully functional.
Finally, we need to write a small script that configures the system. The following is an example:
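A sketch of such a script, using the OpenContrail vnc_api Python bindings. The peer name, address and AS numbers are the ones listed below; the API server address and the fq-names are assumptions for illustration:

```python
#!/usr/bin/env python
# Sketch: provision the gateway through the contrail api-server.
from vnc_api import vnc_api

vnc = vnc_api.VncApi(api_server_host='127.0.0.1')

# Create the Public network and associate the route-target 64514:1.
project = vnc.project_read(fq_name=['default-domain', 'default-project'])
vn = vnc_api.VirtualNetwork('Public', parent_obj=project)
vn.set_route_target_list(vnc_api.RouteTargetList(['target:64514:1']))
vnc.virtual_network_create(vn)

# Set the local autonomous-system to 64513.
gsc = vnc.global_system_config_read(
    fq_name=['default-global-system-config'])
gsc.set_autonomous_system(64513)
vnc.global_system_config_update(gsc)

# Create a BGP peering session with the remote control-node.
rt_inst = vnc.routing_instance_read(
    fq_name=['default-domain', 'default-project',
             'ip-fabric', '__default__'])
params = vnc_api.BgpRouterParams(
    address='10.34.2.21', identifier='10.34.2.21',
    autonomous_system=64512, vendor='contrail',
    address_families=vnc_api.AddressFamilies(['inet-vpn']))
peer = vnc_api.BgpRouter('d-tnnclc-0000', rt_inst,
                         bgp_router_parameters=params)
# (A complete script would also add a bgp-peering reference between
#  the local bgp-router object and this peer.)
vnc.bgp_router_create(peer)
```

The script must run on a host that can reach the api-server's REST port (8082).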
In the example above, the script:
- Creates the Public network and associates with it the route-target 64514:1
- Sets the local autonomous-system to 64513
- Creates a peering session with the control-node running at “d-tnnclc-0000” (10.34.2.21).
Next, the control-node on the remote peer system (AS 64512) must also be configured with a peering session back to the gateway.
The sequence of steps above may seem daunting at first, but in reality we just built an L3VPN gateway from scratch, provisioned the software processes, and configured it with a small Python script. The latter is verbose but no more complex than configuring an L3VPN gateway to exchange traffic between a VRF and its master routing instance.