Salt version 0.11.0 introduced the reactor system. The premise behind the reactor system is that with Salt's events and the ability to execute commands, a logic engine could be put in place to allow events to trigger actions, or more accurately, reactions.
This system binds sls files to event tags on the master. These sls files then define reactions. This means that the reactor system has two parts. First, the reactor option needs to be set in the master configuration file. The reactor option allows for event tags to be associated with sls reaction files. Second, these reaction files use highdata (like the state system) to define reactions to be executed.
A basic understanding of the event system is required to understand reactors. The event system is a local ZeroMQ PUB interface which fires salt events. This event bus is an open system used for sending information notifying Salt and other systems about operations.
The event system fires events with a very specific structure. Every event has a tag. Event tags allow for fast top-level filtering of events. In addition to the tag, each event has a data structure. This data structure is a dict which contains information about the event.
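For illustration only, an event on the bus might look roughly like the following; the tag comes first, followed by the data dict (the exact fields vary by event type and Salt version, and the values shown here are placeholders):

salt/minion/web1/start {
    "_stamp": "2015-02-13T00:20:00.000000",
    "id": "web1",
    "data": "Minion web1 started at ..."
}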
Reactor SLS files and event tags are associated in the master config file. By default this is /etc/salt/master, or /etc/salt/master.d/reactor.conf.
New in version 2014.7.0: Added Reactor support for salt:// file paths.
In the master config, the 'reactor:' section is a list of event tags to be matched, and each event tag has a list of reactor SLS files to be run.
reactor:                            # Master config section "reactor"
  - 'salt/minion/*/start':          # Match tag "salt/minion/*/start"
    - /srv/reactor/start.sls        # Things to do when a minion starts
    - /srv/reactor/monitor.sls      # Other things to do
  - 'salt/cloud/*/destroyed':       # Globs can be used to match tags
    - /srv/reactor/destroy/*.sls    # Globs can be used to match file names
  - 'myco/custom/event/tag':        # React to custom event tags
    - salt://reactor/mycustom.sls   # Put reactor files under file_roots
Reactor sls files are similar to state and pillar sls files. They are by default yaml + Jinja templates and are passed familiar context variables. They differ because of the addition of the tag and data variables.
The tag variable is just the tag in the fired event.
The data variable is the event's data dict.
Here is a simple reactor sls:
{% if data['id'] == 'mysql1' %}
highstate_run:
  local.state.apply:
    - tgt: mysql1
{% endif %}
This simple reactor file uses Jinja to further refine the reaction to be made. If the id in the event data is mysql1 (in other words, if the name of the minion is mysql1) then the following reaction is defined. The same data structure and compiler used for the state system is used for the reactor system. The only difference is that the data is matched up to the salt command API and the runner system. In this example, a command is published to the mysql1 minion with a function of state.apply. Similarly, a runner can be called:
{% if data['data']['overstate'] == 'refresh' %}
overstate_run:
  runner.state.orchestrate
{% endif %}
This example will execute the state.orchestrate runner. To fire an event from a minion, call event.send:
salt-call event.send 'foo' '{overstate: refresh}'
After this is called, any reactor sls files matching event tag foo will execute with {{ data['data']['overstate'] }} equal to 'refresh'. See salt.modules.event for more information.
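As a sketch of how those pieces fit together (the file path is illustrative), the master config would map the foo tag to a reactor file:

reactor:
  - 'foo':                        # match the custom tag fired above
    - /srv/reactor/foo.sls        # illustrative path

and /srv/reactor/foo.sls could then contain the overstate_run reaction shown earlier, keying off {{ data['data']['overstate'] }}.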
The best way to see exactly what events are fired and what data is available in each event is to use the state.event runner.
Example usage:
salt-run state.event pretty=True
Example output:
salt/job/20150213001905721678/new {
    "_stamp": "2015-02-13T00:19:05.724583",
    "arg": [],
    "fun": "test.ping",
    "jid": "20150213001905721678",
    "minions": [
        "jerry"
    ],
    "tgt": "*",
    "tgt_type": "glob",
    "user": "root"
}
salt/job/20150213001910749506/ret/jerry {
    "_stamp": "2015-02-13T00:19:11.136730",
    "cmd": "_return",
    "fun": "saltutil.find_job",
    "fun_args": [
        "20150213001905721678"
    ],
    "id": "jerry",
    "jid": "20150213001910749506",
    "retcode": 0,
    "return": {},
    "success": true
}
The best window into the Reactor is to run the master in the foreground with debug logging enabled. The output will include when the master sees the event, what the master does in response to that event, and it will also include the rendered SLS file (or any errors generated while rendering the SLS file).
Stop the master.
Start the master manually:
salt-master -l debug
Look for log entries in the form:
[DEBUG ] Gathering reactors for tag foo/bar
[DEBUG ] Compiling reactions for tag foo/bar
[DEBUG ] Rendered data from file: /path/to/the/reactor_file.sls:
<... Rendered output appears here. ...>
The rendered output is the result of the Jinja parsing and is a good way to view the result of referencing Jinja variables. If the result is empty then Jinja produced an empty result and the Reactor will ignore it.
The following describes the structure of Reactor formulas: i.e., when to use arg and kwarg and when to specify the function arguments directly.
While the reactor system uses the same basic data structure as the state system, the functions called using that data structure are different from the ones called via Salt's state system. The Reactor can call Runner modules using the runner prefix, Wheel modules using the wheel prefix, and can also cause minions to run Execution modules using the local prefix.
Changed in version 2014.7.0: The cmd prefix was renamed to local for consistency with other parts of Salt. A backward-compatible alias was added for cmd.
The Reactor runs on the master and calls functions that exist on the master. In the case of Runner and Wheel functions the Reactor can just call those functions directly since they exist on the master and are run on the master.
In the case of functions that exist on minions and are run on minions, the Reactor still needs to call a function on the master in order to send the necessary data to the minion so the minion can execute that function.
The Reactor calls functions exposed in Salt's Python API documentation, and thus the structure of Reactor files very transparently reflects the function signatures of those functions.
The Reactor sends commands down to minions in the exact same way Salt's CLI interface does. It calls a function locally on the master that sends the name of the function as well as a list of any arguments and a dictionary of any keyword arguments that the minion should use to execute that function.
Specifically, the Reactor calls the async version of this function. You can see that function has 'arg' and 'kwarg' parameters, which are both values that are sent down to the minion.
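As an illustrative sketch (the target and package here are placeholders), arg carries the positional arguments and kwarg the keyword arguments that the minion passes to the execution function:

# Illustrative example -- target and package are placeholders
install_editor:
  local.pkg.install:
    - tgt: 'web*'
    - arg:
      - vim
    - kwarg:
        refresh: True

This would be roughly equivalent to running salt 'web*' pkg.install vim refresh=True from the command line.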
Executing remote commands maps to the LocalClient interface which is used by the salt command. This interface more specifically maps to the cmd_async method inside of the LocalClient class. This means that the arguments passed are being passed to the cmd_async method, not the remote method. A function field that starts with local uses the LocalClient subsystem. The result is, to execute a remote command, a reactor formula would look like this:
clean_tmp:
  local.cmd.run:
    - tgt: '*'
    - arg:
      - rm -rf /tmp/*
The arg option takes a list of arguments as they would be presented on the command line, so the above declaration is the same as running this salt command:
salt '*' cmd.run 'rm -rf /tmp/*'
Use the expr_form argument to specify a matcher:
clean_tmp:
  local.cmd.run:
    - tgt: 'os:Ubuntu'
    - expr_form: grain
    - arg:
      - rm -rf /tmp/*

clean_tmp:
  local.cmd.run:
    - tgt: 'G@roles:hbase_master'
    - expr_form: compound
    - arg:
      - rm -rf /tmp/*
Any other parameters in the LocalClient().cmd() method can be specified as well.
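For example, a sketch passing a returner and a timeout through to LocalClient (the smtp returner is only illustrative and must be configured on the minions for results to be delivered):

clean_tmp:
  local.cmd.run:
    - tgt: '*'
    - arg:
      - rm -rf /tmp/*
    - ret: smtp      # illustrative returner; requires minion-side configuration
    - timeout: 60    # seconds, passed through to LocalClient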
Calling Runner modules and Wheel modules from the Reactor uses a more direct syntax since the function is being executed locally instead of sending a command to a remote system to be executed there. There are no 'arg' or 'kwarg' parameters (unless the Runner function or Wheel function accepts a parameter with either of those names).
For example:
clear_the_grains_cache_for_all_minions:
  runner.cache.clear_grains
If the runner takes arguments then they can be specified as well:
spin_up_more_web_machines:
  runner.cloud.profile:
    - prof: centos_6
    - instances:
      - web11       # These VM names would be generated via Jinja in a
      - web12       # real-world example.
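Wheel modules use the same direct syntax. For example, a sketch that accepts a pending minion key (the key name here is only illustrative):

accept_new_web_minion:
  wheel.key.accept:
    - match: web13    # illustrative key name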
An interesting trick to pass data from the Reactor script to state.apply is to pass it as inline Pillar data since both functions take a keyword argument named pillar.
The following example uses Salt's Reactor to listen for the event that is fired when the key for a new minion is accepted on the master using salt-key.
/etc/salt/master.d/reactor.conf:
reactor:
  - 'salt/key':
    - /srv/salt/haproxy/react_new_minion.sls
The Reactor then fires a state.apply command targeted to the HAProxy servers and passes the ID of the new minion from the event to the state file via inline Pillar.
/srv/salt/haproxy/react_new_minion.sls:
{% if data['act'] == 'accept' and data['id'].startswith('web') %}
add_new_minion_to_pool:
  local.state.apply:
    - tgt: 'haproxy*'
    - arg:
      - haproxy.refresh_pool
    - kwarg:
        pillar:
          new_minion: {{ data['id'] }}
{% endif %}
The above command is equivalent to the following command at the CLI:
salt 'haproxy*' state.apply haproxy.refresh_pool 'pillar={new_minion: minionid}'
This works with Orchestrate files as well:
call_some_orchestrate_file:
  runner.state.orchestrate:
    - mods: some_orchestrate_file
    - pillar:
        stuff: things
Which is equivalent to the following command at the CLI:
salt-run state.orchestrate some_orchestrate_file pillar='{stuff: things}'
Finally, that data is available in the state file using the normal Pillar lookup syntax. The following example is grabbing web server names and IP addresses from Salt Mine. If this state is invoked from the Reactor then the custom Pillar value from above will be available and the new minion will be added to the pool but with the disabled flag so that HAProxy won't yet direct traffic to it.
/srv/salt/haproxy/refresh_pool.sls:
{% set new_minion = salt['pillar.get']('new_minion') %}

listen web *:80
    balance source
    {% for server,ip in salt['mine.get']('web*', 'network.interfaces', ['eth0']).items() %}
    {% if server == new_minion %}
    server {{ server }} {{ ip }}:80 disabled
    {% else %}
    server {{ server }} {{ ip }}:80 check
    {% endif %}
    {% endfor %}
In this example, we're going to assume that we have a group of servers that will come online at random and need to have keys automatically accepted. We'll also add that we don't want all servers being automatically accepted. For this example, we'll assume that all hosts that have an id that starts with 'ink' will be automatically accepted and have state.apply executed. On top of this, we're going to add that a host coming up that was replaced (meaning a new key) will also be accepted.
Our master configuration will be rather simple. All minions that attempt to authenticate will match the salt/auth tag. When the minion key is accepted, we get a more refined tag that includes the minion id, which we can use for matching.
/etc/salt/master.d/reactor.conf:
reactor:
  - 'salt/auth':
    - /srv/reactor/auth-pending.sls
  - 'salt/minion/ink*/start':
    - /srv/reactor/auth-complete.sls
In this sls file, we say that if the key was rejected we will delete the key on the master, and then also tell the master to ssh into the minion and restart the minion process, since a minion process will die if the key is rejected.
We also say that if the key is pending and the id starts with ink we will accept the key. A minion that is waiting on a pending key will retry authentication every ten seconds by default.
/srv/reactor/auth-pending.sls:
{# Ink server failed to authenticate -- remove accepted key #}
{% if not data['result'] and data['id'].startswith('ink') %}
minion_remove:
  wheel.key.delete:
    - match: {{ data['id'] }}
minion_rejoin:
  local.cmd.run:
    - tgt: salt-master.domain.tld
    - arg:
      - ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no "{{ data['id'] }}" 'sleep 10 && /etc/init.d/salt-minion restart'
{% endif %}

{# Ink server is sending new key -- accept this key #}
{% if 'act' in data and data['act'] == 'pend' and data['id'].startswith('ink') %}
minion_add:
  wheel.key.accept:
    - match: {{ data['id'] }}
{% endif %}
No if statements are needed here because we already limited this action to just Ink servers in the master configuration.
/srv/reactor/auth-complete.sls:
{# When an Ink server connects, run state.apply. #}
highstate_run:
  local.state.apply:
    - tgt: {{ data['id'] }}
    - ret: smtp
The above will also return the highstate result data using the smtp_return returner (use the returner's virtual name, just as when using --return from the command line). The returner needs to be configured on the minion for this to work. See the salt.returners.smtp_return documentation for that.
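A minimal minion-side configuration for that returner might look roughly like the following (option names per the salt.returners.smtp_return documentation; all values are placeholders, and additional options such as credentials are usually needed):

smtp.from: salt@example.com       # placeholder sender address
smtp.to: admin@example.com        # placeholder recipient
smtp.host: mail.example.com       # placeholder mail server
smtp.subject: Salt highstate results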
Salt will sync all custom types (by running a saltutil.sync_all) on every highstate. However, there is a chicken-and-egg issue where, on the initial highstate, a minion will not yet have these custom types synced when the top file is first compiled. This can be worked around with a simple reactor which watches for minion_start events, which each minion fires when it first starts up and connects to the master.
On the master, create /srv/reactor/sync_grains.sls with the following contents:
sync_grains:
  local.saltutil.sync_grains:
    - tgt: {{ data['id'] }}
And in the master config file, add the following reactor configuration:
reactor:
  - 'minion_start':
    - /srv/reactor/sync_grains.sls
This will cause the master to instruct each minion to sync its custom grains when it starts, making these grains available when the initial highstate is executed.
Other types can be synced by replacing local.saltutil.sync_grains with local.saltutil.sync_modules, local.saltutil.sync_all, or whatever else suits the intended use case.
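For example, a variant of the same reactor file that syncs every custom type rather than only grains:

sync_everything:
  local.saltutil.sync_all:      # sync all custom types on minion start
    - tgt: {{ data['id'] }}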