Tuesday, 12 August 2014

On OpenStack networking with a provider network ...

I've been playing around with OpenStack recently. It's a proof of concept development right now which will hopefully turn into something good over the next few months and years. It's a bit of a hybrid HPC cloud type system (dynamic provisioning of large scale VMs for data processing); more details on that later in the year!

For the PoC, I've got a bunch of ports available on a properly routed public net block and a couple of machines behind that to provide the tin for the VMs. Ultimately these will be attached to 10 GbE switches with public network connectivity; I'm not sure if these will use IP assignment and the Neutron L3 agent, or if they'll be direct external networks. For this PoC we don't need to use per-tenant networks; perhaps when we get a bit further towards a production deployment, then maybe. To be honest, I'm not sure why Neutron L3 routing is necessary when I have real routers and need to provide relatively high bandwidth external connections into the kit for large data transfers.

To be clear, in the PoC these are flat provider networks, i.e. no segregation of the network into VLANs, and a real router connected to the public internet. When we move towards production, I'm expecting (hoping maybe!) to get a class C net block for the VMs; maybe we'll chop that up a bit into smaller tenant networks, but I'm not sure right now.

The kit is running Scientific Linux 6.5, and I figured packstack/Red Hat RDO was a good place to start.

Packstack has a whole bunch of features and config file options, some of which aren't actually implemented.

Now whilst I want my kit running the VMs to be publicly connected, I also don't want those boxes to have public IP addresses. So I have one NIC connected to a management network, and a second NIC connected to the public switch. The public switch side is outside of my control at present (it's provided by our networks team), so VLAN tagging etc. is out for the PoC system.

In my packstack-answers file, I set:
CONFIG_NEUTRON_L3_EXT_BRIDGE=br-eth1
CONFIG_NEUTRON_ML2_MECHANISM_DRIVERS=openvswitch
CONFIG_NEUTRON_ML2_VLAN_RANGES=physnet1:1:1
CONFIG_NEUTRON_L2_AGENT=openvswitch
CONFIG_NEUTRON_LB_INTERFACE_MAPPINGS=physnet1:br-eth1
CONFIG_NEUTRON_OVS_VLAN_RANGES=physnet1:1:1
CONFIG_NEUTRON_OVS_BRIDGE_IFACES=br-eth1:eth1
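
For completeness, the usual packstack workflow is to generate an answer file, edit in the options above, and then run against it; the file path here is just an example:

packstack --gen-answer-file=/root/packstack-answers.txt
# edit /root/packstack-answers.txt and set the CONFIG_NEUTRON_* options above
packstack --answer-file=/root/packstack-answers.txt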

Now one might expect this to attach eth1 to the openvswitch bridge br-eth1, but this doesn't happen by magic and needs configuring.

Create/edit /etc/sysconfig/network-scripts/ifcfg-br-eth1:
DEVICE=br-eth1
DEVICETYPE=ovs
TYPE=OVSBridge
ONBOOT=yes
OVSBOOTPROTO=none

And since eth1 is the interface connected to the public network, change ifcfg-eth1 to be:
DEVICE=eth1
BOOTPROTO=static
NM_CONTROLLED=no
ONBOOT=yes
TYPE=OVSPort
DEVICETYPE=ovs
OVS_BRIDGE=br-eth1
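
Once both files are in place, a quick way to check the wiring (assuming the openvswitch service is running so the OVS ifup scripts work) is:

ifup br-eth1
ifup eth1
ovs-vsctl show

ovs-vsctl show should list eth1 as a port on br-eth1, and once the Neutron OVS agent is running you should also see the ports it creates to link br-eth1 to br-int.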

Note that I don't have an IP address assigned on br-eth1. This means services like sshd on the hosting node aren't listening on the public interface, so my bare metal tin is relatively safe from the outside world. It is, however, perfectly possible to run VMs which are bound to this br-eth1 public bridge and have public facing services. The only exception to this is the box running the Neutron networking services: whilst we don't actually use the L3 agent for routing, it does need to be 'up' so that things like DHCP work on the network.

As we won't be using the L3 agent, we need to ensure that the DHCP server provides a route to the metadata server (which provides configuration to the VM instances, things like ssh key pairs). Edit /etc/neutron/dhcp_agent.ini and set:
enable_isolated_metadata = True

Then restart the neutron-dhcp-agent service (note that if you re-run packstack at any point, this setting will revert to false).
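
If you'd rather script that change, something like this works, assuming the openstack-utils package (which provides openstack-config) is installed:

openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT enable_isolated_metadata True
service neutron-dhcp-agent restart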

(At this point I should note that in dhcp_agent.ini, use_namespaces = true. I've seen the DHCP agent fail to bind correctly, for example after a reboot, and the solution seems to be to set this to false, restart the service, then set it back to true and restart the service again.)
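
A quick way to see whether the DHCP agent has bound properly is to look for the qdhcp namespace for the network and the addresses inside it (substitute the network UUID that neutron net-list reports):

ip netns list
ip netns exec qdhcp-<network-uuid> ip addr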

The /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini should contain something like:
[OVS]
enable_tunneling=False
integration_bridge=br-int

bridge_mappings=physnet1:br-eth1
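
If you edit bridge_mappings by hand rather than via packstack, restart the OVS agent so it picks the mapping up (service name as shipped by RDO on EL6) and check it registers:

service neutron-openvswitch-agent restart
neutron agent-list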

At this point we want to create the flat network. I've swapped out my real IP addresses here for 10.10.10.0/24; yes, this isn't a public net block, but swap it for your own real public class C.
neutron net-create --provider:physical_network=physnet1 --provider:network_type=flat --shared SHAREDNET
neutron subnet-create SHAREDNET 10.10.10.0/24 --name NET10 --no-gateway --host-route destination=0.0.0.0/0,nexthop=10.10.10.1 --allocation-pool start=10.10.10.20,end=10.10.10.250 --dns-nameservers list=true 10.20.0.125

Here 10.10.10.1 is your router's IP address, and 10.20.0.125 is your DNS server (or list of servers).

Note that we need to use --no-gateway; if you don't, the route for the metadata server (the 169.254 address) won't get injected by the DHCP server, even if you set enable_isolated_metadata=true as described above.
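
A quick sanity check of what was created; the subnet should show an empty gateway_ip, the allocation pool and the 0.0.0.0/0 host route:

neutron net-show SHAREDNET
neutron subnet-show NET10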

And that should be that: start a VM on NET10 and it should talk directly on the eth1 public interface.
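
For example, something along these lines; the cirros image, m1.small flavour and mykey keypair are just placeholders for whatever you have loaded:

neutron net-list
nova boot --flavor m1.small --image cirros --key-name mykey --nic net-id=<UUID of SHAREDNET> test-vm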

Of course at this point, br-eth1 could contain an Ethernet bond underneath, or different physical interfaces.
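
As a sketch, with eth2 being a hypothetical second NIC, moving the running bridge over to a bond is just a couple of ovs-vsctl commands (for persistence you'd mirror this in the ifcfg files):

ovs-vsctl del-port br-eth1 eth1
ovs-vsctl add-bond br-eth1 bond0 eth1 eth2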

With reference and thanks to the docs at https://developer.rackspace.com/blog/neutron-networking-simple-flat-network/. Once the PoC is a bit more developed, I'll take a further look at this networking config, in particular possibly chopping up the real network with VLAN provider networks (https://developer.rackspace.com/blog/neutron-networking-vlan-provider-networks/).
