So we've implemented saltstack for config management. We don't just use xcat for the OpenStack install; it also provisions a general HPC cluster and a few other supporting systems. I'm fairly agnostic about the choice of config management tool, but one of my colleagues is familiar with salt, so that's the way we've gone.
It's taken a couple of weeks to butcher the legacy xcat configs into something approaching manageability with salt, which is why there's been a lack of blog posts for a while.
Anyway, I now have a nice data structure in salt pillar data which I can use to configure my OpenStack services as well as to build the configs for keepalived and haproxy. For example, I have something along these lines:
glance:
  dbusers:
    glance:
      password: DBPASSWORD
      hosts:
        - localhost
        - "%.climb.cluster"
      grants:
        "%.climb.cluster": "glance.*"
  databases:
    - glance
  osusers:
    glance: OPENSTACKPASSWORD
  hosts:
    server1: glance-1.climb.cluster
    server2: glance-2.climb.cluster
  vipname: climb-glance.climb.cluster
  realport:
    glanceregistry: 9191
    glanceapi: 9292
  backendport:
    glanceregistry: 9191
    glanceapi: 9292
One thing to note is that I prefer to avoid hard-coding IP addresses in the config files wherever possible (e.g. the keystone auth url can use a name), so we use vipname to specify the name that clients should use. The hosts: section defines the internal addresses that the service should bind to. This is important in cases where haproxy is running on the same nodes as those providing the service.
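To make that concrete, here's a rough sketch (not our actual template) of how an haproxy listener for the glance API could be rendered from that pillar subtree; the pillar keys are the ones shown above, but the file layout and the Jinja variable name are just assumptions for illustration:
{% set glance = pillar['climb']['openstackconfig']['glance'] %}
# haproxy binds on the VIP name; each backend service binds to its own
# host-specific name from hosts:, so the same port can be reused when
# haproxy and the service share a node
listen glance-api
    bind {{ glance['vipname'] }}:{{ glance['realport']['glanceapi'] }}
    balance roundrobin
{% for server, hostname in glance['hosts'].items() %}
    server {{ server }} {{ hostname }}:{{ glance['backendport']['glanceapi'] }} check
{% endfor %}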
Once we've specified this outline config, we then have templates in salt along the following lines:
{% set OS_CONFIG="/etc/glance/glance-registry.conf" %}
{% set OS_CONFIGAPI="/etc/glance/glance-api.conf" %}
{% set OS_CONFIGCACHE="/etc/glance/glance-cache.conf" %}
{% set OS_GLANCE=pillar['climb']['openstackconfig']['glance']['osusers'].keys()[0] %}
{% set OS_AUTHINT='http://' + pillar['climb']['openstackconfig']['keystone']['vipname'] + ':' + pillar['climb']['openstackconfig']['keystone']['realport']['public']|string + '/v2.0/' %}
{% set OS_AUTHSRV=pillar['climb']['openstackconfig']['keystone']['vipname'] %}
{% set OS_AUTHPORT=pillar['climb']['openstackconfig']['keystone']['realport']['admin']|string %}
{% set OS_GLANCE_URL='http://' + pillar['climb']['openstackconfig']['glance']['vipname'] + ':' + pillar['climb']['openstackconfig']['glance']['realport']['glanceapi']|string %}
# create the database if it doesn't exist
glance-db-init:
  cmd:
    - run
    - name: glance-manage db_sync
    - runas: glance
    - unless: echo 'select * from images' | mysql {{ pillar['climb']['openstackconfig']['glance']['databases'][0] }}
    - require:
      - pkg: openstack-glance-pkg

# ensure the openstack user for the service is present
openstack-user.climb.glance:
  keystone.user_present:
    - name: {{ OS_GLANCE }}
    - password: "{{ pillar['climb']['openstackconfig']['glance']['osusers'][OS_GLANCE] }}"
    - email: "{{ pillar['climb']['openstackconfig']['config']['adminmail'] }}"
    - tenant: {{ pillar['climb']['openstackconfig']['config']['tenant']['service'] }}
    - roles:
      - {{ pillar['climb']['openstackconfig']['config']['tenant']['service'] }}:
        - {{ pillar['climb']['openstackconfig']['config']['role'][pillar['climb']['openstackconfig']['config']['tenant']['service']] }}

# ensure the endpoint is present
openstack-endpoint.climb.glance:
  keystone.endpoint_present:
    - name: glance
    - publicurl: "{{ OS_GLANCE_URL }}"
    - adminurl: "{{ OS_GLANCE_URL }}"
    - internalurl: "{{ OS_GLANCE_URL }}"
    - region: {{ pillar['climb']['openstackconfig']['config']['region'] }}

# define the database config to use
openstack-config.climb.glance.database_connection:
  openstack_config.present:
    - name: connection
    - filename: {{ OS_CONFIG }}
    - section: database
    - value: "mysql://{{ pillar['climb']['openstackconfig']['glance']['dbusers'].keys()[0] }}:{{ pillar['climb']['openstackconfig']['glance']['dbusers'][pillar['climb']['openstackconfig']['glance']['dbusers'].keys()[0]]['password'] }}@{{ pillar['climb']['openstackconfig']['database']['vipname'] }}/{{ pillar['climb']['openstackconfig']['glance']['databases'][0] }}"
    - require:
      - pkg: openstack-glance-pkg
    - require_in:
      - service: openstack-glance-registry
      - cmd: glance-db-init
We of course set a lot more properties in the glance registry and api configs, but as far as possible we use the salt state openstack_config.present to abstract this. I've only put a bit of sample config here.
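As a rough sketch of what that abstraction can look like (the apiconfig pillar key below is hypothetical, not part of the pillar example above), the remaining options could be driven from pillar with a loop rather than one hand-written state per option:
{# hypothetical pillar layout: glance: apiconfig: {DEFAULT: {workers: 4}, ...} #}
{% for section, options in pillar['climb']['openstackconfig']['glance'].get('apiconfig', {}).items() %}
{% for option, value in options.items() %}
openstack-config.climb.glance.api.{{ section }}.{{ option }}:
  openstack_config.present:
    - name: {{ option }}
    - filename: {{ OS_CONFIGAPI }}
    - section: {{ section }}
    - value: "{{ value }}"
    - require:
      - pkg: openstack-glance-pkg
{% endfor %}
{% endfor %}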
Whilst ideally I'd like to be able to build the whole OpenStack cluster from salt, it's not entirely possible. For example we have a GPFS file-system sitting under it, and getting salt to set up the GPFS file-system is kinda scary; similarly, bringing up the HA MariaDB database or the swift ring is scary. So my compromise is that there are a few bits and pieces that need setting up by hand in a kind of 'chicken/egg' situation, but salt can be used to re-provision any node in the OpenStack cluster (assuming we haven't lost everything), and that re-provision should leave the node in a fully working state. Basically this means a bit of manual intervention, e.g. setting up the swift rings, but the resulting files get copied back into salt and pushed out from there.
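For example, the 'copy back to salt and push out' step for the swift rings might look something like this file.managed sketch; the salt:// source path and the file ownership here are assumptions, not our actual layout:
# push the hand-built swift ring files out from the salt fileserver
{% for ring in ['account.ring.gz', 'container.ring.gz', 'object.ring.gz'] %}
swift-ring-{{ ring }}:
  file.managed:
    - name: /etc/swift/{{ ring }}
    - source: salt://climb/swift/{{ ring }}
    - user: swift
    - group: swift
    - mode: 640
{% endfor %}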
One thing I will say about jinja templates with salt is that I feel {{ overload }}...
It will be interesting to see how it performs when we have all the HPC and OpenStack nodes running via salt.