Merge branch '2017.7' into improve-async-operation-handling-in-kubernetes-module

This commit is contained in:
Jochen Breuer 2017-09-19 10:44:31 +02:00 committed by GitHub
commit d1b5ec098c
66 changed files with 1967 additions and 938 deletions


@ -1,5 +1,17 @@
{
"alwaysNotifyForPaths": [
{
"name": "ryan-lane",
"files": ["salt/**/*boto*.py"],
"skipTeamPrs": false
},
{
"name": "tkwilliams",
"files": ["salt/**/*boto*.py"],
"skipTeamPrs": false
}
],
"skipTitle": "Merge forward",
"userBlacklist": ["cvrebert", "markusgattol", "olliewalsh"]
"userBlacklist": ["cvrebert", "markusgattol", "olliewalsh", "basepi"]
}


@ -373,7 +373,7 @@
# interface: eth0
# cidr: '10.0.0.0/8'
# The number of seconds a mine update runs.
# The number of minutes between mine updates.
#mine_interval: 60
# Windows platforms lack posix IPC and must rely on slower TCP based inter-


@ -4091,7 +4091,9 @@ information.
.. code-block:: yaml
reactor: []
reactor:
- 'salt/minion/*/start':
- salt://reactor/startup_tasks.sls
.. conf_master:: reactor_refresh_interval


@ -674,7 +674,7 @@ Note these can be defined in the pillar for a minion as well.
Default: ``60``
The number of seconds a mine update runs.
The number of minutes between mine updates.
.. code-block:: yaml


@ -118,3 +118,53 @@ has to be closed after every command.
.. code-block:: yaml
proxy_always_alive: False
``proxy_merge_pillar_in_opts``
------------------------------
.. versionadded:: 2017.7.3
Default: ``False``.
Whether the pillar data is to be merged into the proxy configuration options.
As multiple proxies can run on the same server, we may need different
configuration options for each, while there's one single configuration file.
The solution is merging the pillar data of each proxy minion into the opts.
.. code-block:: yaml
proxy_merge_pillar_in_opts: True
``proxy_deep_merge_pillar_in_opts``
-----------------------------------
.. versionadded:: 2017.7.3
Default: ``False``.
Deep merge of pillar data into configuration opts.
This option is evaluated only when :conf_proxy:`proxy_merge_pillar_in_opts` is
enabled.
``proxy_merge_pillar_in_opts_strategy``
---------------------------------------
.. versionadded:: 2017.7.3
Default: ``smart``.
The strategy used when merging pillar configuration into opts.
This option is evaluated only when :conf_proxy:`proxy_merge_pillar_in_opts` is
enabled.
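For example, to request a recursive merge instead of the default ``smart``
merge (a sketch; the accepted strategy names are assumed to follow the merge
strategies used elsewhere in Salt, such as ``recurse`` or ``overwrite``):

.. code-block:: yaml

    proxy_merge_pillar_in_opts_strategy: recurse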
``proxy_mines_pillar``
----------------------
.. versionadded:: 2017.7.3
Default: ``True``.
Allow enabling mine details using pillar data. This evaluates the mine
configuration under the pillar, for the following regular minion options that
are equally available on the proxy minion: :conf_minion:`mine_interval`,
and :conf_minion:`mine_functions`.
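As a sketch, the pillar assigned to a proxy minion could then drive its mine
configuration like this (the function name is illustrative; any execution
function available on the proxy would work):

.. code-block:: yaml

    mine_interval: 30
    mine_functions:
      network.ip_addrs: []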


@ -1,6 +1,12 @@
salt.runners.mattermost module
==============================
**Note for 2017.7 releases!**
Due to the `salt.runners.config <https://github.com/saltstack/salt/blob/develop/salt/runners/config.py>`_ module not being available in this release series, importing the `salt.runners.config <https://github.com/saltstack/salt/blob/develop/salt/runners/config.py>`_ module from the develop branch is required to make this module work.
Ref: `Mattermost runner failing to retrieve config values due to unavailable config runner #43479 <https://github.com/saltstack/salt/issues/43479>`_
.. automodule:: salt.runners.mattermost
:members:
:undoc-members:


@ -253,9 +253,8 @@ in ``/etc/salt/master.d/reactor.conf``:
.. note::
You can have only one top level ``reactor`` section, so if one already
exists, add this code to the existing section. See :ref:`Understanding the
Structure of Reactor Formulas <reactor-structure>` to learn more about
reactor SLS syntax.
exists, add this code to the existing section. See :ref:`here
<reactor-sls>` to learn more about reactor SLS syntax.
Start the Salt Master in Debug Mode


@ -263,9 +263,17 @@ against that branch.
Release Branches
----------------
For each release a branch will be created when we are ready to tag. The branch will be the same name as the tag minus the v. For example, the v2017.7.1 release was created from the 2017.7.1 branch. This branching strategy will allow for more stability when there is a need for a re-tag during the testing phase of our releases.
For each release, a branch will be created when the SaltStack release team is
ready to tag. The release branch is created from the parent branch and will be
the same name as the tag minus the ``v``. For example, the ``2017.7.1`` release
branch was created from the ``2017.7`` parent branch and the ``v2017.7.1``
release was tagged at the ``HEAD`` of the ``2017.7.1`` branch. This branching
strategy will allow for more stability when there is a need for a re-tag during
the testing phase of the release process.
Once the branch is created, the fixes required for a given release, as determined by the SaltStack release team, will be added to this branch. All commits in this branch will be merged forward into the parent branch as well.
Once the release branch is created, the fixes required for a given release, as
determined by the SaltStack release team, will be added to this branch. All
commits in this branch will be merged forward into the parent branch as well.
Keeping Salt Forks in Sync
==========================


@ -27,7 +27,12 @@ Salt engines are configured under an ``engines`` top-level section in your Salt
port: 5959
proto: tcp
Salt engines must be in the Salt path, or you can add the ``engines_dirs`` option in your Salt master configuration with a list of directories under which Salt attempts to find Salt engines.
Salt engines must be in the Salt path, or you can add the ``engines_dirs`` option in your Salt master configuration with a list of directories under which Salt attempts to find Salt engines. This option should be formatted as a list of directories to search, such as:
.. code-block:: yaml
engines_dirs:
- /home/bob/engines
Writing an Engine
=================


@ -27,9 +27,9 @@ event bus is an open system used for sending information notifying Salt and
other systems about operations.
The event system fires events with a very specific criteria. Every event has a
:strong:`tag`. Event tags allow for fast top level filtering of events. In
addition to the tag, each event has a data structure. This data structure is a
dict, which contains information about the event.
**tag**. Event tags allow for fast top-level filtering of events. In addition
to the tag, each event has a data structure. This data structure is a
dictionary, which contains information about the event.
.. _reactor-mapping-events:
@ -65,15 +65,12 @@ and each event tag has a list of reactor SLS files to be run.
the :ref:`querystring syntax <querystring-syntax>` (e.g.
``salt://reactor/mycustom.sls?saltenv=reactor``).
Reactor sls files are similar to state and pillar sls files. They are
by default yaml + Jinja templates and are passed familiar context variables.
Reactor SLS files are similar to State and Pillar SLS files. They are by
default YAML + Jinja templates and are passed familiar context variables.
Click :ref:`here <reactor-jinja-context>` for more detailed information on the
variables available in Jinja templating.
They differ because of the addition of the ``tag`` and ``data`` variables.
- The ``tag`` variable is just the tag in the fired event.
- The ``data`` variable is the event's data dict.
Here is a simple reactor sls:
Here is the SLS for a simple reaction:
.. code-block:: jinja
@ -90,71 +87,278 @@ data structure and compiler used for the state system is used for the reactor
system. The only difference is that the data is matched up to the salt command
API and the runner system. In this example, a command is published to the
``mysql1`` minion with a function of :py:func:`state.apply
<salt.modules.state.apply_>`. Similarly, a runner can be called:
<salt.modules.state.apply_>`, which performs a :ref:`highstate
<running-highstate>`. Similarly, a runner can be called:
.. code-block:: jinja
{% if data['data']['custom_var'] == 'runit' %}
call_runit_orch:
runner.state.orchestrate:
- mods: _orch.runit
- args:
- mods: orchestrate.runit
{% endif %}
This example will execute the state.orchestrate runner and initiate an execution
of the runit orchestrator located at ``/srv/salt/_orch/runit.sls``. Using
``_orch/`` is any arbitrary path but it is recommended to avoid using "orchestrate"
as this is most likely to cause confusion.
of the ``runit`` orchestrator located at ``/srv/salt/orchestrate/runit.sls``.
Writing SLS Files
-----------------
Types of Reactions
==================
Reactor SLS files are stored in the same location as State SLS files. This means
that both ``file_roots`` and ``gitfs_remotes`` impact what SLS files are
available to the reactor and orchestrator.
============================== ==================================================================================
Name Description
============================== ==================================================================================
:ref:`local <reactor-local>` Runs a :ref:`remote-execution function <all-salt.modules>` on targeted minions
:ref:`runner <reactor-runner>` Executes a :ref:`runner function <all-salt.runners>`
:ref:`wheel <reactor-wheel>` Executes a :ref:`wheel function <all-salt.wheel>` on the master
:ref:`caller <reactor-caller>` Runs a :ref:`remote-execution function <all-salt.modules>` on a masterless minion
============================== ==================================================================================
It is recommended to keep reactor and orchestrator SLS files in their own uniquely
named subdirectories such as ``_orch/``, ``orch/``, ``_orchestrate/``, ``react/``,
``_reactor/``, etc. Keeping a unique name helps prevent confusion when trying to
read through this a few years down the road.
.. note::
The ``local`` and ``caller`` reaction types will be renamed for the Oxygen
release. These reaction types were named after Salt's internal client
interfaces, and are not intuitively named. Both ``local`` and ``caller``
will continue to work in Reactor SLS files, but for the Oxygen release the
documentation will be updated to reflect the new preferred naming.
The Goal of Writing Reactor SLS Files
=====================================
Where to Put Reactor SLS Files
==============================
Reactor SLS files share the familiar syntax from Salt States but there are
important differences. The goal of a Reactor file is to process a Salt event as
quickly as possible and then to optionally start a **new** process in response.
Reactor SLS files can come both from files local to the master, and from any of
the backends enabled via the :conf_master:`fileserver_backend` config option. Files
placed in the Salt fileserver can be referenced using a ``salt://`` URL, just
like they can in State SLS files.
1. The Salt Reactor watches Salt's event bus for new events.
2. The event tag is matched against the list of event tags under the
``reactor`` section in the Salt Master config.
3. The SLS files for any matches are rendered into a data structure that
represents one or more function calls.
4. That data structure is given to a pool of worker threads for execution.
It is recommended to place reactor and orchestrator SLS files in their own
uniquely-named subdirectories such as ``orch/``, ``orchestrate/``, ``react/``,
``reactor/``, etc., to keep them organized.
.. _reactor-sls:
Writing Reactor SLS
===================
The different reaction types were developed separately and have historically
had different methods for passing arguments. For the 2017.7.2 release a new,
unified configuration schema has been introduced, which applies to all reaction
types.
The old config schema will continue to be supported, and there is no plan to
deprecate it at this time.
.. _reactor-local:
Local Reactions
---------------
A ``local`` reaction runs a :ref:`remote-execution function <all-salt.modules>`
on the targeted minions.
The old config schema required the positional and keyword arguments to be
manually separated by the user under ``arg`` and ``kwarg`` parameters. However,
this is not very user-friendly, as it forces the user to distinguish which type
of argument is which, and make sure that positional arguments are ordered
properly. Therefore, the new config schema is recommended if the master is
running a supported release.
The below two examples are equivalent:
+---------------------------------+-----------------------------+
| Supported in 2017.7.2 and later | Supported in all releases |
+=================================+=============================+
| :: | :: |
| | |
| install_zsh: | install_zsh: |
| local.state.single: | local.state.single: |
| - tgt: 'kernel:Linux' | - tgt: 'kernel:Linux' |
| - tgt_type: grain | - tgt_type: grain |
| - args: | - arg: |
| - fun: pkg.installed | - pkg.installed |
| - name: zsh | - zsh |
| - fromrepo: updates | - kwarg: |
| | fromrepo: updates |
+---------------------------------+-----------------------------+
This reaction would be equivalent to running the following Salt command:
.. code-block:: bash
salt -G 'kernel:Linux' state.single pkg.installed name=zsh fromrepo=updates
.. note::
Any other parameters in the :py:meth:`LocalClient().cmd_async()
<salt.client.LocalClient.cmd_async>` method can be passed at the same
indentation level as ``tgt``.
.. note::
``tgt_type`` is only required when the target expression defined in ``tgt``
uses a :ref:`target type <targeting>` other than a minion ID glob.
The ``tgt_type`` argument was named ``expr_form`` in releases prior to
2017.7.0.
.. _reactor-runner:
Runner Reactions
----------------
Runner reactions execute :ref:`runner functions <all-salt.runners>` locally on
the master.
The old config schema called for passing arguments to the reaction directly
under the name of the runner function. However, this can cause unpredictable
interactions with the Reactor system's internal arguments. It is also possible
to pass positional and keyword arguments under ``arg`` and ``kwarg`` like above
in :ref:`local reactions <reactor-local>`, but as noted above this is not very
user-friendly. Therefore, the new config schema is recommended if the master
is running a supported release.
The below two examples are equivalent:
+-------------------------------------------------+-------------------------------------------------+
| Supported in 2017.7.2 and later | Supported in all releases |
+=================================================+=================================================+
| :: | :: |
| | |
| deploy_app: | deploy_app: |
| runner.state.orchestrate: | runner.state.orchestrate: |
| - args: | - mods: orchestrate.deploy_app |
| - mods: orchestrate.deploy_app | - kwarg: |
| - pillar: | pillar: |
| event_tag: {{ tag }} | event_tag: {{ tag }} |
| event_data: {{ data['data']|json }} | event_data: {{ data['data']|json }} |
+-------------------------------------------------+-------------------------------------------------+
Assuming that the event tag is ``foo``, and the data passed to the event is
``{'bar': 'baz'}``, then this reaction is equivalent to running the following
Salt command:
.. code-block:: bash
salt-run state.orchestrate mods=orchestrate.deploy_app pillar='{"event_tag": "foo", "event_data": {"bar": "baz"}}'
.. _reactor-wheel:
Wheel Reactions
---------------
Wheel reactions run :ref:`wheel functions <all-salt.wheel>` locally on the
master.
Like :ref:`runner reactions <reactor-runner>`, the old config schema called for
wheel reactions to have arguments passed directly under the name of the
:ref:`wheel function <all-salt.wheel>` (or in ``arg`` or ``kwarg`` parameters).
The below two examples are equivalent:
+-----------------------------------+---------------------------------+
| Supported in 2017.7.2 and later | Supported in all releases |
+===================================+=================================+
| :: | :: |
| | |
| remove_key: | remove_key: |
| wheel.key.delete: | wheel.key.delete: |
| - args: | - match: {{ data['id'] }} |
| - match: {{ data['id'] }} | |
+-----------------------------------+---------------------------------+
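Assuming the event data contains an ``id`` of ``web1``, this reaction would be
roughly equivalent to deleting that key from the CLI with ``salt-key`` (the
``-y`` flag skips the confirmation prompt):

.. code-block:: bash

    salt-key -yd web1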
.. _reactor-caller:
Caller Reactions
----------------
Caller reactions run :ref:`remote-execution functions <all-salt.modules>` on a
minion daemon's Reactor system. To run a Reactor on the minion, it is necessary
to configure the :mod:`Reactor Engine <salt.engines.reactor>` in the minion
config file, and then setup your watched events in a ``reactor`` section in the
minion config file as well.
.. note:: Masterless Minions use this Reactor
This is the only way to run the Reactor if you use masterless minions.
Both the old and new config schemas involve passing arguments under an ``args``
parameter. However, the old config schema only supports positional arguments.
Therefore, the new config schema is recommended if the masterless minion is
running a supported release.
The below two examples are equivalent:
+---------------------------------+---------------------------+
| Supported in 2017.7.2 and later | Supported in all releases |
+=================================+===========================+
| :: | :: |
| | |
| touch_file: | touch_file: |
| caller.file.touch: | caller.file.touch: |
| - args: | - args: |
| - name: /tmp/foo | - /tmp/foo |
+---------------------------------+---------------------------+
This reaction is equivalent to running the following Salt command:
.. code-block:: bash
salt-call file.touch name=/tmp/foo
Best Practices for Writing Reactor SLS Files
============================================
The Reactor works as follows:
1. The Salt Reactor watches Salt's event bus for new events.
2. Each event's tag is matched against the list of event tags configured under
the :conf_master:`reactor` section in the Salt Master config.
3. The SLS files for any matches are rendered into a data structure that
represents one or more function calls.
4. That data structure is given to a pool of worker threads for execution.
Matching and rendering Reactor SLS files is done sequentially in a single
process. Complex Jinja that calls out to slow Execution or Runner modules slows
down the rendering and causes other reactions to pile up behind the current
one. The worker pool is designed to handle complex and long-running processes
such as Salt Orchestrate.
process. For that reason, reactor SLS files should contain few individual
reactions (one, if at all possible). Also, keep in mind that reactions are
fired asynchronously (with the exception of :ref:`caller <reactor-caller>`) and
do *not* support :ref:`requisites <requisites>`.
tl;dr: Rendering Reactor SLS files MUST be simple and quick. The new process
started by the worker threads can be long-running. Using the reactor to fire
an orchestrate runner would be ideal.
Complex Jinja templating that calls out to slow :ref:`remote-execution
<all-salt.modules>` or :ref:`runner <all-salt.runners>` functions slows down
the rendering and causes other reactions to pile up behind the current one. The
worker pool is designed to handle complex and long-running processes like
:ref:`orchestration <orchestrate-runner>` jobs.
Therefore, when complex tasks are in order, :ref:`orchestration
<orchestrate-runner>` is a natural fit. Orchestration SLS files can be more
complex, and use requisites. Performing a complex task using orchestration lets
the Reactor system fire off the orchestration job and proceed with processing
other reactions.
.. _reactor-jinja-context:
Jinja Context
-------------
=============
Reactor files only have access to a minimal Jinja context. ``grains`` and
``pillar`` are not available. The ``salt`` object is available for calling
Runner and Execution modules but it should be used sparingly and only for quick
tasks for the reasons mentioned above.
Reactor SLS files only have access to a minimal Jinja context. ``grains`` and
``pillar`` are *not* available. The ``salt`` object is available for calling
:ref:`remote-execution <all-salt.modules>` or :ref:`runner <all-salt.runners>`
functions, but it should be used sparingly and only for quick tasks for the
reasons mentioned above.
In addition to the ``salt`` object, the following variables are available in
the Jinja context:
- ``tag`` - the tag from the event that triggered execution of the Reactor SLS
file
- ``data`` - the event's data dictionary
The ``data`` dict will contain an ``id`` key containing the minion ID, if the
event was fired from a minion, and a ``data`` key containing the data passed to
the event.
Advanced State System Capabilities
----------------------------------
==================================
Reactor SLS files, by design, do not support Requisites, ordering,
``onlyif``/``unless`` conditionals and most other powerful constructs from
Salt's State system.
Reactor SLS files, by design, do not support :ref:`requisites <requisites>`,
ordering, ``onlyif``/``unless`` conditionals and most other powerful constructs
from Salt's State system.
Complex Master-side operations are best performed by Salt's Orchestrate system
so using the Reactor to kick off an Orchestrate run is a very common pairing.
@ -166,7 +370,7 @@ For example:
# /etc/salt/master.d/reactor.conf
# A custom event containing: {"foo": "Foo!", "bar": "bar*", "baz": "Baz!"}
reactor:
- myco/custom/event:
- my/custom/event:
- /srv/reactor/some_event.sls
.. code-block:: jinja
@ -174,15 +378,15 @@ For example:
# /srv/reactor/some_event.sls
invoke_orchestrate_file:
runner.state.orchestrate:
- mods: _orch.do_complex_thing # /srv/salt/_orch/do_complex_thing.sls
- kwarg:
pillar:
event_tag: {{ tag }}
event_data: {{ data|json() }}
- args:
- mods: orchestrate.do_complex_thing
- pillar:
event_tag: {{ tag }}
event_data: {{ data|json }}
.. code-block:: jinja
# /srv/salt/_orch/do_complex_thing.sls
# /srv/salt/orchestrate/do_complex_thing.sls
{% set tag = salt.pillar.get('event_tag') %}
{% set data = salt.pillar.get('event_data') %}
@ -209,7 +413,7 @@ For example:
.. _beacons-and-reactors:
Beacons and Reactors
--------------------
====================
An event initiated by a beacon, when it arrives at the master, will be wrapped
inside a second event, such that the data object containing the beacon
@ -219,27 +423,52 @@ For example, to access the ``id`` field of the beacon event in a reactor file,
you will need to reference ``{{ data['data']['id'] }}`` rather than ``{{
data['id'] }}`` as for events initiated directly on the event bus.
Similarly, the data dictionary attached to the event would be located in
``{{ data['data']['data'] }}`` instead of ``{{ data['data'] }}``.
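For instance, a reactor SLS file triggered by a beacon event could unpack the
wrapped fields like this (a sketch; the exact keys depend on the beacon that
fired the event):

.. code-block:: jinja

    {% set beacon_minion = data['data']['id'] %}
    {% set beacon_data = data['data']['data'] %}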
See the :ref:`beacon documentation <beacon-example>` for examples.
Fire an event
=============
Manually Firing an Event
========================
To fire an event from a minion call ``event.send``
From the Master
---------------
Use the :py:func:`event.send <salt.runners.event.send>` runner:
.. code-block:: bash
salt-call event.send 'foo' '{orchestrate: refresh}'
salt-run event.send foo '{orchestrate: refresh}'
After this is called, any reactor sls files matching event tag ``foo`` will
execute with ``{{ data['data']['orchestrate'] }}`` equal to ``'refresh'``.
From the Minion
---------------
See :py:mod:`salt.modules.event` for more information.
To fire an event to the master from a minion, call :py:func:`event.send
<salt.modules.event.send>`:
Knowing what event is being fired
=================================
.. code-block:: bash
The best way to see exactly what events are fired and what data is available in
each event is to use the :py:func:`state.event runner
salt-call event.send foo '{orchestrate: refresh}'
To fire an event to the minion's local event bus, call :py:func:`event.fire
<salt.modules.event.fire>`:
.. code-block:: bash
salt-call event.fire '{orchestrate: refresh}' foo
Referencing Data Passed in Events
---------------------------------
Assuming any of the above examples, any reactor SLS files triggered by watching
the event tag ``foo`` will execute with ``{{ data['data']['orchestrate'] }}``
equal to ``'refresh'``.
Getting Information About Events
================================
The best way to see exactly what events have been fired and what data is
available in each event is to use the :py:func:`state.event runner
<salt.runners.state.event>`.
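For example, to watch the event bus in real time with pretty-printed output:

.. code-block:: bash

    salt-run state.event pretty=True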
.. seealso:: :ref:`Common Salt Events <event-master_events>`
@ -308,156 +537,10 @@ rendered SLS file (or any errors generated while rendering the SLS file).
view the result of referencing Jinja variables. If the result is empty then
Jinja produced an empty result and the Reactor will ignore it.
.. _reactor-structure:
Passing Event Data to Minions or Orchestration as Pillar
--------------------------------------------------------
Understanding the Structure of Reactor Formulas
===============================================
**I.e., when to use `arg` and `kwarg` and when to specify the function
arguments directly.**
While the reactor system uses the same basic data structure as the state
system, the functions that will be called using that data structure are
different functions than are called via Salt's state system. The Reactor can
call Runner modules using the `runner` prefix, Wheel modules using the `wheel`
prefix, and can also cause minions to run Execution modules using the `local`
prefix.
.. versionchanged:: 2014.7.0
The ``cmd`` prefix was renamed to ``local`` for consistency with other
parts of Salt. A backward-compatible alias was added for ``cmd``.
The Reactor runs on the master and calls functions that exist on the master. In
the case of Runner and Wheel functions the Reactor can just call those
functions directly since they exist on the master and are run on the master.
In the case of functions that exist on minions and are run on minions, the
Reactor still needs to call a function on the master in order to send the
necessary data to the minion so the minion can execute that function.
The Reactor calls functions exposed in :ref:`Salt's Python API documentation
<client-apis>`, and thus the structure of Reactor files very transparently
reflects the function signatures of those functions.
Calling Execution modules on Minions
------------------------------------
The Reactor sends commands down to minions in the exact same way Salt's CLI
interface does. It calls a function locally on the master that sends the name
of the function as well as a list of any arguments and a dictionary of any
keyword arguments that the minion should use to execute that function.
Specifically, the Reactor calls the async version of :py:meth:`this function
<salt.client.LocalClient.cmd>`. You can see that function has 'arg' and 'kwarg'
parameters which are both values that are sent down to the minion.
Executing remote commands maps to the :strong:`LocalClient` interface which is
used by the :strong:`salt` command. This interface more specifically maps to
the :strong:`cmd_async` method inside of the :strong:`LocalClient` class. This
means that the arguments passed are being passed to the :strong:`cmd_async`
method, not the remote method. A field starts with :strong:`local` to use the
:strong:`LocalClient` subsystem. The result is, to execute a remote command,
a reactor formula would look like this:
.. code-block:: yaml
clean_tmp:
local.cmd.run:
- tgt: '*'
- arg:
- rm -rf /tmp/*
The ``arg`` option takes a list of arguments as they would be presented on the
command line, so the above declaration is the same as running this salt
command:
.. code-block:: bash
salt '*' cmd.run 'rm -rf /tmp/*'
Use the ``tgt_type`` argument to specify a matcher:
.. code-block:: yaml
clean_tmp:
local.cmd.run:
- tgt: 'os:Ubuntu'
- tgt_type: grain
- arg:
- rm -rf /tmp/*
clean_tmp:
local.cmd.run:
- tgt: 'G@roles:hbase_master'
- tgt_type: compound
- arg:
- rm -rf /tmp/*
.. note::
The ``tgt_type`` argument was named ``expr_form`` in releases prior to
2017.7.0 (2016.11.x and earlier).
Any other parameters in the :py:meth:`LocalClient().cmd()
<salt.client.LocalClient.cmd>` method can be specified as well.
Executing Reactors from the Minion
----------------------------------
The minion can be set up to use the Reactor via a reactor engine. This just
sets up and listens to the minion's event bus, instead of to the master's.
The biggest difference is that you have to use the caller method on the
Reactor, which is the equivalent of salt-call, to run your commands.
:mod:`Reactor Engine setup <salt.engines.reactor>`
.. code-block:: yaml
clean_tmp:
caller.cmd.run:
- arg:
- rm -rf /tmp/*
.. note:: Masterless Minions use this Reactor
This is the only way to run the Reactor if you use masterless minions.
Calling Runner modules and Wheel modules
----------------------------------------
Calling Runner modules and Wheel modules from the Reactor uses a more direct
syntax since the function is being executed locally instead of sending a
command to a remote system to be executed there. There are no 'arg' or 'kwarg'
parameters (unless the Runner function or Wheel function accepts a parameter
with either of those names).
For example:
.. code-block:: yaml
clear_the_grains_cache_for_all_minions:
runner.cache.clear_grains
If :py:func:`the runner takes arguments <salt.runners.cloud.profile>`, then
they must be specified as keyword arguments.
.. code-block:: yaml
spin_up_more_web_machines:
runner.cloud.profile:
- prof: centos_6
- instances:
- web11 # These VM names would be generated via Jinja in a
- web12 # real-world example.
To determine the proper names for the arguments, check the documentation
or source code for the runner function you wish to call.
Passing event data to Minions or Orchestrate as Pillar
------------------------------------------------------
An interesting trick to pass data from the Reactor script to
An interesting trick to pass data from the Reactor SLS file to
:py:func:`state.apply <salt.modules.state.apply_>` is to pass it as inline
Pillar data since both functions take a keyword argument named ``pillar``.
@ -484,10 +567,9 @@ from the event to the state file via inline Pillar.
add_new_minion_to_pool:
local.state.apply:
- tgt: 'haproxy*'
- arg:
- haproxy.refresh_pool
- kwarg:
pillar:
- args:
- mods: haproxy.refresh_pool
- pillar:
new_minion: {{ data['id'] }}
{% endif %}
@ -503,17 +585,16 @@ This works with Orchestrate files as well:
call_some_orchestrate_file:
runner.state.orchestrate:
- mods: _orch.some_orchestrate_file
- pillar:
stuff: things
- args:
- mods: orchestrate.some_orchestrate_file
- pillar:
stuff: things
Which is equivalent to the following command at the CLI:
.. code-block:: bash
salt-run state.orchestrate _orch.some_orchestrate_file pillar='{stuff: things}'
This expects to find a file at /srv/salt/_orch/some_orchestrate_file.sls.
salt-run state.orchestrate orchestrate.some_orchestrate_file pillar='{stuff: things}'
Finally, that data is available in the state file using the normal Pillar
lookup syntax. The following example is grabbing web server names and IP
@ -564,7 +645,7 @@ includes the minion id, which we can use for matching.
- 'salt/minion/ink*/start':
- /srv/reactor/auth-complete.sls
In this sls file, we say that if the key was rejected we will delete the key on
In this SLS file, we say that if the key was rejected we will delete the key on
the master and then also tell the master to ssh in to the minion and tell it to
restart the minion, since a minion process will die if the key is rejected.
@ -580,19 +661,21 @@ authentication every ten seconds by default.
{% if not data['result'] and data['id'].startswith('ink') %}
minion_remove:
wheel.key.delete:
- match: {{ data['id'] }}
- args:
- match: {{ data['id'] }}
minion_rejoin:
local.cmd.run:
- tgt: salt-master.domain.tld
- arg:
- ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no "{{ data['id'] }}" 'sleep 10 && /etc/init.d/salt-minion restart'
- args:
- cmd: ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no "{{ data['id'] }}" 'sleep 10 && /etc/init.d/salt-minion restart'
{% endif %}
{# Ink server is sending new key -- accept this key #}
{% if 'act' in data and data['act'] == 'pend' and data['id'].startswith('ink') %}
minion_add:
wheel.key.accept:
- match: {{ data['id'] }}
- args:
- match: {{ data['id'] }}
{% endif %}
No if statements are needed here because we already limited this action to just
@ -481,11 +481,17 @@ Alternatively the ``uninstaller`` can also simply repeat the URL of the msi file
:param bool allusers: This parameter is specific to `.msi` installations. It
tells `msiexec` to install the software for all users. The default is True.
:param bool cache_dir: If true, the entire directory where the installer resides
will be recursively cached. This is useful for installers that depend on
other files in the same directory for installation.
:param bool cache_dir: If true when installer URL begins with salt://, the
entire directory where the installer resides will be recursively cached.
This is useful for installers that depend on other files in the same
directory for installation.
.. note:: Only applies to ``salt://`` installer URLs.
:param str cache_file:
When installer URL begins with salt://, this indicates a single file to copy
down for use with the installer. Copied to the same location as the
installer. Use this over ``cache_dir`` if there are many files in the
directory and you only need a specific file and don't want to cache
additional files that may reside in the installer directory.
Here's an example for a software package that has dependent files:
@ -35,7 +35,8 @@ _salt_get_keys(){
}
_salt(){
local _salt_cache_functions=${SALT_COMP_CACHE_FUNCTIONS:='~/.cache/salt-comp-cache_functions'}
CACHE_DIR="$HOME/.cache/salt-comp-cache_functions"
local _salt_cache_functions=${SALT_COMP_CACHE_FUNCTIONS:=$CACHE_DIR}
local _salt_cache_timeout=${SALT_COMP_CACHE_TIMEOUT:='last hour'}
if [ ! -d "$(dirname ${_salt_cache_functions})" ]; then
@ -44,7 +44,7 @@ ${StrStrAdv}
!define CPUARCH "x86"
!endif
; Part of the Trim function for Strings
# Part of the Trim function for Strings
!define Trim "!insertmacro Trim"
!macro Trim ResultVar String
Push "${String}"
@ -61,27 +61,27 @@ ${StrStrAdv}
!define MUI_UNICON "salt.ico"
!define MUI_WELCOMEFINISHPAGE_BITMAP "panel.bmp"
; Welcome page
# Welcome page
!insertmacro MUI_PAGE_WELCOME
; License page
# License page
!insertmacro MUI_PAGE_LICENSE "LICENSE.txt"
; Configure Minion page
# Configure Minion page
Page custom pageMinionConfig pageMinionConfig_Leave
; Instfiles page
# Instfiles page
!insertmacro MUI_PAGE_INSTFILES
; Finish page (Customized)
# Finish page (Customized)
!define MUI_PAGE_CUSTOMFUNCTION_SHOW pageFinish_Show
!define MUI_PAGE_CUSTOMFUNCTION_LEAVE pageFinish_Leave
!insertmacro MUI_PAGE_FINISH
; Uninstaller pages
# Uninstaller pages
!insertmacro MUI_UNPAGE_INSTFILES
; Language files
# Language files
!insertmacro MUI_LANGUAGE "English"
@ -201,8 +201,8 @@ ShowInstDetails show
ShowUnInstDetails show
; Check and install Visual C++ redist packages
; See http://blogs.msdn.com/b/astebner/archive/2009/01/29/9384143.aspx for more info
# Check and install Visual C++ redist packages
# See http://blogs.msdn.com/b/astebner/archive/2009/01/29/9384143.aspx for more info
Section -Prerequisites
Var /GLOBAL VcRedistName
@ -211,12 +211,12 @@ Section -Prerequisites
Var /Global CheckVcRedist
StrCpy $CheckVcRedist "False"
; Visual C++ 2015 redist packages
# Visual C++ 2015 redist packages
!define PY3_VC_REDIST_NAME "VC_Redist_2015"
!define PY3_VC_REDIST_X64_GUID "{50A2BC33-C9CD-3BF1-A8FF-53C10A0B183C}"
!define PY3_VC_REDIST_X86_GUID "{BBF2AC74-720C-3CB3-8291-5E34039232FA}"
; Visual C++ 2008 SP1 MFC Security Update redist packages
# Visual C++ 2008 SP1 MFC Security Update redist packages
!define PY2_VC_REDIST_NAME "VC_Redist_2008_SP1_MFC"
!define PY2_VC_REDIST_X64_GUID "{5FCE6D76-F5DC-37AB-B2B8-22AB8CEDB1D4}"
!define PY2_VC_REDIST_X86_GUID "{9BE518E6-ECC6-35A9-88E4-87755C07200F}"
@ -239,7 +239,7 @@ Section -Prerequisites
StrCpy $VcRedistGuid ${PY2_VC_REDIST_X86_GUID}
${EndIf}
; VCRedist 2008 only needed on Windows Server 2008R2/Windows 7 and below
# VCRedist 2008 only needed on Windows Server 2008R2/Windows 7 and below
${If} ${AtMostWin2008R2}
StrCpy $CheckVcRedist "True"
${EndIf}
@ -255,20 +255,41 @@ Section -Prerequisites
"$VcRedistName is currently not installed. Would you like to install?" \
/SD IDYES IDNO endVcRedist
ClearErrors
; The Correct version of VCRedist is copied over by "build_pkg.bat"
# The Correct version of VCRedist is copied over by "build_pkg.bat"
SetOutPath "$INSTDIR\"
File "..\prereqs\vcredist.exe"
; /passive used by 2015 installer
; /qb! used by 2008 installer
; It just ignores the unrecognized switches...
ExecWait "$INSTDIR\vcredist.exe /qb! /passive"
IfErrors 0 endVcRedist
# If an output variable is specified ($0 in the case below),
# ExecWait sets the variable with the exit code (and only sets the
# error flag if an error occurs; if an error occurs, the contents
# of the user variable are undefined).
# http://nsis.sourceforge.net/Reference/ExecWait
# /passive used by 2015 installer
# /qb! used by 2008 installer
# It just ignores the unrecognized switches...
ClearErrors
ExecWait '"$INSTDIR\vcredist.exe" /qb! /passive /norestart' $0
IfErrors 0 CheckVcRedistErrorCode
MessageBox MB_OK \
"$VcRedistName failed to install. Try installing the package manually." \
/SD IDOK
Goto endVcRedist
CheckVcRedistErrorCode:
# Check for Reboot Error Code (3010)
${If} $0 == 3010
MessageBox MB_OK \
"$VcRedistName installed but requires a restart to complete." \
/SD IDOK
# Check for any other errors
${ElseIfNot} $0 == 0
MessageBox MB_OK \
"$VcRedistName failed with ErrorCode: $0. Try installing the package manually." \
/SD IDOK
${EndIf}
endVcRedist:
${EndIf}
${EndIf}
@ -294,12 +315,12 @@ Function .onInit
Call parseCommandLineSwitches
; Check for existing installation
# Check for existing installation
ReadRegStr $R0 HKLM \
"Software\Microsoft\Windows\CurrentVersion\Uninstall\${PRODUCT_NAME}" \
"UninstallString"
StrCmp $R0 "" checkOther
; Found existing installation, prompt to uninstall
# Found existing installation, prompt to uninstall
MessageBox MB_OKCANCEL|MB_ICONEXCLAMATION \
"${PRODUCT_NAME} is already installed.$\n$\n\
Click `OK` to remove the existing installation." \
@ -307,12 +328,12 @@ Function .onInit
Abort
checkOther:
; Check for existing installation of full salt
# Check for existing installation of full salt
ReadRegStr $R0 HKLM \
"Software\Microsoft\Windows\CurrentVersion\Uninstall\${PRODUCT_NAME_OTHER}" \
"UninstallString"
StrCmp $R0 "" skipUninstall
; Found existing installation, prompt to uninstall
# Found existing installation, prompt to uninstall
MessageBox MB_OKCANCEL|MB_ICONEXCLAMATION \
"${PRODUCT_NAME_OTHER} is already installed.$\n$\n\
Click `OK` to remove the existing installation." \
@ -321,22 +342,22 @@ Function .onInit
uninst:
; Get current Silent status
# Get current Silent status
StrCpy $R0 0
${If} ${Silent}
StrCpy $R0 1
${EndIf}
; Turn on Silent mode
# Turn on Silent mode
SetSilent silent
; Don't remove all directories
# Don't remove all directories
StrCpy $DeleteInstallDir 0
; Uninstall silently
# Uninstall silently
Call uninstallSalt
; Set it back to Normal mode, if that's what it was before
# Set it back to Normal mode, if that's what it was before
${If} $R0 == 0
SetSilent normal
${EndIf}
@ -350,7 +371,7 @@ Section -Post
WriteUninstaller "$INSTDIR\uninst.exe"
; Uninstall Registry Entries
# Uninstall Registry Entries
WriteRegStr ${PRODUCT_UNINST_ROOT_KEY} "${PRODUCT_UNINST_KEY}" \
"DisplayName" "$(^Name)"
WriteRegStr ${PRODUCT_UNINST_ROOT_KEY} "${PRODUCT_UNINST_KEY}" \
@ -366,19 +387,19 @@ Section -Post
WriteRegStr HKLM "SYSTEM\CurrentControlSet\services\salt-minion" \
"DependOnService" "nsi"
; Set the estimated size
# Set the estimated size
${GetSize} "$INSTDIR\bin" "/S=OK" $0 $1 $2
IntFmt $0 "0x%08X" $0
WriteRegDWORD ${PRODUCT_UNINST_ROOT_KEY} "${PRODUCT_UNINST_KEY}" \
"EstimatedSize" "$0"
; Commandline Registry Entries
# Commandline Registry Entries
WriteRegStr HKLM "${PRODUCT_CALL_REGKEY}" "" "$INSTDIR\salt-call.bat"
WriteRegStr HKLM "${PRODUCT_CALL_REGKEY}" "Path" "$INSTDIR\bin\"
WriteRegStr HKLM "${PRODUCT_MINION_REGKEY}" "" "$INSTDIR\salt-minion.bat"
WriteRegStr HKLM "${PRODUCT_MINION_REGKEY}" "Path" "$INSTDIR\bin\"
; Register the Salt-Minion Service
# Register the Salt-Minion Service
nsExec::Exec "nssm.exe install salt-minion $INSTDIR\bin\python.exe -E -s $INSTDIR\bin\Scripts\salt-minion -c $INSTDIR\conf -l quiet"
nsExec::Exec "nssm.exe set salt-minion Description Salt Minion from saltstack.com"
nsExec::Exec "nssm.exe set salt-minion Start SERVICE_AUTO_START"
@ -398,12 +419,12 @@ SectionEnd
Function .onInstSuccess
; If StartMinionDelayed is 1, then set the service to start delayed
# If StartMinionDelayed is 1, then set the service to start delayed
${If} $StartMinionDelayed == 1
nsExec::Exec "nssm.exe set salt-minion Start SERVICE_DELAYED_AUTO_START"
${EndIf}
; If start-minion is 1, then start the service
# If start-minion is 1, then start the service
${If} $StartMinion == 1
nsExec::Exec 'net start salt-minion'
${EndIf}
@ -413,10 +434,11 @@ FunctionEnd
Function un.onInit
; Load the parameters
# Load the parameters
${GetParameters} $R0
# Uninstaller: Remove Installation Directory
ClearErrors
${GetOptions} $R0 "/delete-install-dir" $R1
IfErrors delete_install_dir_not_found
StrCpy $DeleteInstallDir 1
@ -434,7 +456,7 @@ Section Uninstall
Call un.uninstallSalt
; Remove C:\salt from the Path
# Remove C:\salt from the Path
Push "C:\salt"
Call un.RemoveFromPath
@ -444,27 +466,27 @@ SectionEnd
!macro uninstallSalt un
Function ${un}uninstallSalt
; Make sure we're in the right directory
# Make sure we're in the right directory
${If} $INSTDIR == "c:\salt\bin\Scripts"
StrCpy $INSTDIR "C:\salt"
${EndIf}
; Stop and Remove salt-minion service
# Stop and Remove salt-minion service
nsExec::Exec 'net stop salt-minion'
nsExec::Exec 'sc delete salt-minion'
; Stop and remove the salt-master service
# Stop and remove the salt-master service
nsExec::Exec 'net stop salt-master'
nsExec::Exec 'sc delete salt-master'
; Remove files
# Remove files
Delete "$INSTDIR\uninst.exe"
Delete "$INSTDIR\nssm.exe"
Delete "$INSTDIR\salt*"
Delete "$INSTDIR\vcredist.exe"
RMDir /r "$INSTDIR\bin"
; Remove Registry entries
# Remove Registry entries
DeleteRegKey ${PRODUCT_UNINST_ROOT_KEY} "${PRODUCT_UNINST_KEY}"
DeleteRegKey ${PRODUCT_UNINST_ROOT_KEY} "${PRODUCT_UNINST_KEY_OTHER}"
DeleteRegKey ${PRODUCT_UNINST_ROOT_KEY} "${PRODUCT_CALL_REGKEY}"
@ -474,17 +496,17 @@ Function ${un}uninstallSalt
DeleteRegKey ${PRODUCT_UNINST_ROOT_KEY} "${PRODUCT_MINION_REGKEY}"
DeleteRegKey ${PRODUCT_UNINST_ROOT_KEY} "${PRODUCT_RUN_REGKEY}"
; Automatically close when finished
# Automatically close when finished
SetAutoClose true
; Prompt to remove the Installation directory
# Prompt to remove the Installation directory
${IfNot} $DeleteInstallDir == 1
MessageBox MB_ICONQUESTION|MB_YESNO|MB_DEFBUTTON2 \
"Would you like to completely remove $INSTDIR and all of its contents?" \
/SD IDNO IDNO finished
${EndIf}
; Make sure you're not removing Program Files
# Make sure you're not removing Program Files
${If} $INSTDIR != 'Program Files'
${AndIf} $INSTDIR != 'Program Files (x86)'
RMDir /r "$INSTDIR"
@ -526,7 +548,7 @@ FunctionEnd
Function Trim
Exch $R1 ; Original string
Exch $R1 # Original string
Push $R2
Loop:
@ -558,36 +580,36 @@ Function Trim
FunctionEnd
;------------------------------------------------------------------------------
; StrStr Function
; - find substring in a string
;
; Usage:
; Push "this is some string"
; Push "some"
; Call StrStr
; Pop $0 ; "some string"
;------------------------------------------------------------------------------
#------------------------------------------------------------------------------
# StrStr Function
# - find substring in a string
#
# Usage:
# Push "this is some string"
# Push "some"
# Call StrStr
# Pop $0 ; "some string"
#------------------------------------------------------------------------------
!macro StrStr un
Function ${un}StrStr
Exch $R1 ; $R1=substring, stack=[old$R1,string,...]
Exch ; stack=[string,old$R1,...]
Exch $R2 ; $R2=string, stack=[old$R2,old$R1,...]
Push $R3 ; $R3=strlen(substring)
Push $R4 ; $R4=count
Push $R5 ; $R5=tmp
StrLen $R3 $R1 ; Get the length of the Search String
StrCpy $R4 0 ; Set the counter to 0
Exch $R1 # $R1=substring, stack=[old$R1,string,...]
Exch # stack=[string,old$R1,...]
Exch $R2 # $R2=string, stack=[old$R2,old$R1,...]
Push $R3 # $R3=strlen(substring)
Push $R4 # $R4=count
Push $R5 # $R5=tmp
StrLen $R3 $R1 # Get the length of the Search String
StrCpy $R4 0 # Set the counter to 0
loop:
StrCpy $R5 $R2 $R3 $R4 ; Create a moving window of the string that is
; the size of the length of the search string
StrCmp $R5 $R1 done ; Is the contents of the window the same as
; search string, then done
StrCmp $R5 "" done ; Is the window empty, then done
IntOp $R4 $R4 + 1 ; Shift the windows one character
Goto loop ; Repeat
StrCpy $R5 $R2 $R3 $R4 # Create a moving window of the string that is
# the size of the length of the search string
StrCmp $R5 $R1 done # Is the contents of the window the same as
# search string, then done
StrCmp $R5 "" done # Is the window empty, then done
IntOp $R4 $R4 + 1 # Shift the windows one character
Goto loop # Repeat
done:
StrCpy $R1 $R2 "" $R4
@ -595,7 +617,7 @@ Function ${un}StrStr
Pop $R4
Pop $R3
Pop $R2
Exch $R1 ; $R1=old$R1, stack=[result,...]
Exch $R1 # $R1=old$R1, stack=[result,...]
FunctionEnd
!macroend
@ -603,74 +625,74 @@ FunctionEnd
!insertmacro StrStr "un."
;------------------------------------------------------------------------------
; AddToPath Function
; - Adds item to Path for All Users
; - Overcomes NSIS ReadRegStr limitation of 1024 characters by using Native
; Windows Commands
;
; Usage:
; Push "C:\path\to\add"
; Call AddToPath
;------------------------------------------------------------------------------
#------------------------------------------------------------------------------
# AddToPath Function
# - Adds item to Path for All Users
# - Overcomes NSIS ReadRegStr limitation of 1024 characters by using Native
# Windows Commands
#
# Usage:
# Push "C:\path\to\add"
# Call AddToPath
#------------------------------------------------------------------------------
!define Environ 'HKLM "SYSTEM\CurrentControlSet\Control\Session Manager\Environment"'
Function AddToPath
Exch $0 ; Path to add
Push $1 ; Current Path
Push $2 ; Results of StrStr / Length of Path + Path to Add
Push $3 ; Handle to Reg / Length of Path
Push $4 ; Result of Registry Call
Exch $0 # Path to add
Push $1 # Current Path
Push $2 # Results of StrStr / Length of Path + Path to Add
Push $3 # Handle to Reg / Length of Path
Push $4 # Result of Registry Call
; Open a handle to the key in the registry, handle in $3, Error in $4
# Open a handle to the key in the registry, handle in $3, Error in $4
System::Call "advapi32::RegOpenKey(i 0x80000002, t'SYSTEM\CurrentControlSet\Control\Session Manager\Environment', *i.r3) i.r4"
; Make sure registry handle opened successfully (returned 0)
# Make sure registry handle opened successfully (returned 0)
IntCmp $4 0 0 done done
; Load the contents of path into $1, Error Code into $4, Path length into $2
# Load the contents of path into $1, Error Code into $4, Path length into $2
System::Call "advapi32::RegQueryValueEx(i $3, t'PATH', i 0, i 0, t.r1, *i ${NSIS_MAX_STRLEN} r2) i.r4"
; Close the handle to the registry ($3)
# Close the handle to the registry ($3)
System::Call "advapi32::RegCloseKey(i $3)"
; Check for Error Code 234, Path too long for the variable
IntCmp $4 234 0 +4 +4 ; $4 == ERROR_MORE_DATA
# Check for Error Code 234, Path too long for the variable
IntCmp $4 234 0 +4 +4 # $4 == ERROR_MORE_DATA
DetailPrint "AddToPath Failed: original length $2 > ${NSIS_MAX_STRLEN}"
MessageBox MB_OK \
"You may add C:\salt to the %PATH% for convenience when issuing local salt commands from the command line." \
/SD IDOK
Goto done
; If no error, continue
IntCmp $4 0 +5 ; $4 != NO_ERROR
; Error 2 means the Key was not found
IntCmp $4 2 +3 ; $4 != ERROR_FILE_NOT_FOUND
# If no error, continue
IntCmp $4 0 +5 # $4 != NO_ERROR
# Error 2 means the Key was not found
IntCmp $4 2 +3 # $4 != ERROR_FILE_NOT_FOUND
DetailPrint "AddToPath: unexpected error code $4"
Goto done
StrCpy $1 ""
; Check if already in PATH
Push "$1;" ; The string to search
Push "$0;" ; The string to find
# Check if already in PATH
Push "$1;" # The string to search
Push "$0;" # The string to find
Call StrStr
Pop $2 ; The result of the search
StrCmp $2 "" 0 done ; String not found, try again with ';' at the end
; Otherwise, it's already in the path
Push "$1;" ; The string to search
Push "$0\;" ; The string to find
Pop $2 # The result of the search
StrCmp $2 "" 0 done # String not found, try again with ';' at the end
# Otherwise, it's already in the path
Push "$1;" # The string to search
Push "$0\;" # The string to find
Call StrStr
Pop $2 ; The result
StrCmp $2 "" 0 done ; String not found, continue (add)
; Otherwise, it's already in the path
Pop $2 # The result
StrCmp $2 "" 0 done # String not found, continue (add)
# Otherwise, it's already in the path
; Prevent NSIS string overflow
StrLen $2 $0 ; Length of path to add ($2)
StrLen $3 $1 ; Length of current path ($3)
IntOp $2 $2 + $3 ; Length of current path + path to add ($2)
IntOp $2 $2 + 2 ; Account for the additional ';'
; $2 = strlen(dir) + strlen(PATH) + sizeof(";")
# Prevent NSIS string overflow
StrLen $2 $0 # Length of path to add ($2)
StrLen $3 $1 # Length of current path ($3)
IntOp $2 $2 + $3 # Length of current path + path to add ($2)
IntOp $2 $2 + 2 # Account for the additional ';'
# $2 = strlen(dir) + strlen(PATH) + sizeof(";")
; Make sure the new length isn't over the NSIS_MAX_STRLEN
# Make sure the new length isn't over the NSIS_MAX_STRLEN
IntCmp $2 ${NSIS_MAX_STRLEN} +4 +4 0
DetailPrint "AddToPath: new length $2 > ${NSIS_MAX_STRLEN}"
MessageBox MB_OK \
@ -678,18 +700,18 @@ Function AddToPath
/SD IDOK
Goto done
; Append dir to PATH
# Append dir to PATH
DetailPrint "Add to PATH: $0"
StrCpy $2 $1 1 -1 ; Copy the last character of the existing path
StrCmp $2 ";" 0 +2 ; Check for trailing ';'
StrCpy $1 $1 -1 ; remove trailing ';'
StrCmp $1 "" +2 ; Make sure Path is not empty
StrCpy $0 "$1;$0" ; Append new path at the end ($0)
StrCpy $2 $1 1 -1 # Copy the last character of the existing path
StrCmp $2 ";" 0 +2 # Check for trailing ';'
StrCpy $1 $1 -1 # remove trailing ';'
StrCmp $1 "" +2 # Make sure Path is not empty
StrCpy $0 "$1;$0" # Append new path at the end ($0)
; We can use the NSIS command here. Only 'ReadRegStr' is affected
# We can use the NSIS command here. Only 'ReadRegStr' is affected
WriteRegExpandStr ${Environ} "PATH" $0
; Broadcast registry change to open programs
# Broadcast registry change to open programs
SendMessage ${HWND_BROADCAST} ${WM_WININICHANGE} 0 "STR:Environment" /TIMEOUT=5000
done:
@ -702,16 +724,16 @@ Function AddToPath
FunctionEnd
;------------------------------------------------------------------------------
; RemoveFromPath Function
; - Removes item from Path for All Users
; - Overcomes NSIS ReadRegStr limitation of 1024 characters by using Native
; Windows Commands
;
; Usage:
; Push "C:\path\to\add"
; Call un.RemoveFromPath
;------------------------------------------------------------------------------
#------------------------------------------------------------------------------
# RemoveFromPath Function
# - Removes item from Path for All Users
# - Overcomes NSIS ReadRegStr limitation of 1024 characters by using Native
# Windows Commands
#
# Usage:
# Push "C:\path\to\add"
# Call un.RemoveFromPath
#------------------------------------------------------------------------------
Function un.RemoveFromPath
Exch $0
@ -722,59 +744,59 @@ Function un.RemoveFromPath
Push $5
Push $6
; Open a handle to the key in the registry, handle in $3, Error in $4
# Open a handle to the key in the registry, handle in $3, Error in $4
System::Call "advapi32::RegOpenKey(i 0x80000002, t'SYSTEM\CurrentControlSet\Control\Session Manager\Environment', *i.r3) i.r4"
; Make sure registry handle opened successfully (returned 0)
# Make sure registry handle opened successfully (returned 0)
IntCmp $4 0 0 done done
; Load the contents of path into $1, Error Code into $4, Path length into $2
# Load the contents of path into $1, Error Code into $4, Path length into $2
System::Call "advapi32::RegQueryValueEx(i $3, t'PATH', i 0, i 0, t.r1, *i ${NSIS_MAX_STRLEN} r2) i.r4"
; Close the handle to the registry ($3)
# Close the handle to the registry ($3)
System::Call "advapi32::RegCloseKey(i $3)"
; Check for Error Code 234, Path too long for the variable
IntCmp $4 234 0 +4 +4 ; $4 == ERROR_MORE_DATA
# Check for Error Code 234, Path too long for the variable
IntCmp $4 234 0 +4 +4 # $4 == ERROR_MORE_DATA
DetailPrint "RemoveFromPath: original length $2 > ${NSIS_MAX_STRLEN}"
Goto done
; If no error, continue
IntCmp $4 0 +5 ; $4 != NO_ERROR
; Error 2 means the Key was not found
IntCmp $4 2 +3 ; $4 != ERROR_FILE_NOT_FOUND
# If no error, continue
IntCmp $4 0 +5 # $4 != NO_ERROR
# Error 2 means the Key was not found
IntCmp $4 2 +3 # $4 != ERROR_FILE_NOT_FOUND
DetailPrint "RemoveFromPath: unexpected error code $4"
Goto done
StrCpy $1 ""
; Ensure there's a trailing ';'
StrCpy $5 $1 1 -1 ; Copy the last character of the path
StrCmp $5 ";" +2 ; Check for trailing ';', if found continue
StrCpy $1 "$1;" ; ensure trailing ';'
# Ensure there's a trailing ';'
StrCpy $5 $1 1 -1 # Copy the last character of the path
StrCmp $5 ";" +2 # Check for trailing ';', if found continue
StrCpy $1 "$1;" # ensure trailing ';'
; Check for our directory inside the path
Push $1 ; String to Search
Push "$0;" ; Dir to Find
# Check for our directory inside the path
Push $1 # String to Search
Push "$0;" # Dir to Find
Call un.StrStr
Pop $2 ; The results of the search
StrCmp $2 "" done ; If results are empty, we're done, otherwise continue
Pop $2 # The results of the search
StrCmp $2 "" done # If results are empty, we're done, otherwise continue
; Remove our Directory from the Path
# Remove our Directory from the Path
DetailPrint "Remove from PATH: $0"
StrLen $3 "$0;" ; Get the length of our dir ($3)
StrLen $4 $2 ; Get the length of the return from StrStr ($4)
StrCpy $5 $1 -$4 ; $5 is now the part before the path to remove
StrCpy $6 $2 "" $3 ; $6 is now the part after the path to remove
StrCpy $3 "$5$6" ; Combine $5 and $6
StrLen $3 "$0;" # Get the length of our dir ($3)
StrLen $4 $2 # Get the length of the return from StrStr ($4)
StrCpy $5 $1 -$4 # $5 is now the part before the path to remove
StrCpy $6 $2 "" $3 # $6 is now the part after the path to remove
StrCpy $3 "$5$6" # Combine $5 and $6
; Check for Trailing ';'
StrCpy $5 $3 1 -1 ; Load the last character of the string
StrCmp $5 ";" 0 +2 ; Check for ';'
StrCpy $3 $3 -1 ; remove trailing ';'
# Check for Trailing ';'
StrCpy $5 $3 1 -1 # Load the last character of the string
StrCmp $5 ";" 0 +2 # Check for ';'
StrCpy $3 $3 -1 # remove trailing ';'
; Write the new path to the registry
# Write the new path to the registry
WriteRegExpandStr ${Environ} "PATH" $3
; Broadcast the change to all open applications
# Broadcast the change to all open applications
SendMessage ${HWND_BROADCAST} ${WM_WININICHANGE} 0 "STR:Environment" /TIMEOUT=5000
done:
@ -808,6 +830,7 @@ Function getMinionConfig
confFound:
FileOpen $0 "$INSTDIR\conf\minion" r
ClearErrors
confLoop:
FileRead $0 $1
IfErrors EndOfFile
@ -838,68 +861,69 @@ FunctionEnd
Function updateMinionConfig
ClearErrors
FileOpen $0 "$INSTDIR\conf\minion" "r" ; open target file for reading
GetTempFileName $R0 ; get new temp file name
FileOpen $1 $R0 "w" ; open temp file for writing
FileOpen $0 "$INSTDIR\conf\minion" "r" # open target file for reading
GetTempFileName $R0 # get new temp file name
FileOpen $1 $R0 "w" # open temp file for writing
loop: ; loop through each line
FileRead $0 $2 ; read line from target file
IfErrors done ; end if errors are encountered (end of line)
loop: # loop through each line
FileRead $0 $2 # read line from target file
IfErrors done # end if errors are encountered (end of file)
${If} $MasterHost_State != "" ; if master is empty
${AndIf} $MasterHost_State != "salt" ; and if master is not 'salt'
${StrLoc} $3 $2 "master:" ">" ; where is 'master:' in this line
${If} $3 == 0 ; is it in the first...
${OrIf} $3 == 1 ; or second position (account for comments)
StrCpy $2 "master: $MasterHost_State$\r$\n" ; write the master
${EndIf} ; close if statement
${EndIf} ; close if statement
${If} $MasterHost_State != "" # if master is not empty
${AndIf} $MasterHost_State != "salt" # and if master is not 'salt'
${StrLoc} $3 $2 "master:" ">" # where is 'master:' in this line
${If} $3 == 0 # is it in the first...
${OrIf} $3 == 1 # or second position (account for comments)
StrCpy $2 "master: $MasterHost_State$\r$\n" # write the master
${EndIf} # close if statement
${EndIf} # close if statement
${If} $MinionName_State != "" ; if minion is empty
${AndIf} $MinionName_State != "hostname" ; and if minion is not 'hostname'
${StrLoc} $3 $2 "id:" ">" ; where is 'id:' in this line
${If} $3 == 0 ; is it in the first...
${OrIf} $3 == 1 ; or the second position (account for comments)
StrCpy $2 "id: $MinionName_State$\r$\n" ; change line
${EndIf} ; close if statement
${EndIf} ; close if statement
${If} $MinionName_State != "" # if minion is not empty
${AndIf} $MinionName_State != "hostname" # and if minion is not 'hostname'
${StrLoc} $3 $2 "id:" ">" # where is 'id:' in this line
${If} $3 == 0 # is it in the first...
${OrIf} $3 == 1 # or the second position (account for comments)
StrCpy $2 "id: $MinionName_State$\r$\n" # change line
${EndIf} # close if statement
${EndIf} # close if statement
FileWrite $1 $2 ; write changed or unchanged line to temp file
FileWrite $1 $2 # write changed or unchanged line to temp file
Goto loop
done:
FileClose $0 ; close target file
FileClose $1 ; close temp file
Delete "$INSTDIR\conf\minion" ; delete target file
CopyFiles /SILENT $R0 "$INSTDIR\conf\minion" ; copy temp file to target file
Delete $R0 ; delete temp file
FileClose $0 # close target file
FileClose $1 # close temp file
Delete "$INSTDIR\conf\minion" # delete target file
CopyFiles /SILENT $R0 "$INSTDIR\conf\minion" # copy temp file to target file
Delete $R0 # delete temp file
FunctionEnd
Function parseCommandLineSwitches
; Load the parameters
# Load the parameters
${GetParameters} $R0
; Check for start-minion switches
; /start-service is to be deprecated, so we must check for both
# Check for start-minion switches
# /start-service is to be deprecated, so we must check for both
${GetOptions} $R0 "/start-service=" $R1
${GetOptions} $R0 "/start-minion=" $R2
# Service: Start Salt Minion
${IfNot} $R2 == ""
; If start-minion was passed something, then set it
# If start-minion was passed something, then set it
StrCpy $StartMinion $R2
${ElseIfNot} $R1 == ""
; If start-service was passed something, then set StartMinion to that
# If start-service was passed something, then set StartMinion to that
StrCpy $StartMinion $R1
${Else}
; Otherwise default to 1
# Otherwise default to 1
StrCpy $StartMinion 1
${EndIf}
# Service: Minion Startup Type Delayed
ClearErrors
${GetOptions} $R0 "/start-minion-delayed" $R1
IfErrors start_minion_delayed_not_found
StrCpy $StartMinionDelayed 1
@ -4,6 +4,8 @@ Minion data cache plugin for Consul key/value data store.
.. versionadded:: 2016.11.2
:depends: python-consul >= 0.2.0
It is up to the system administrator to set up and configure the Consul
infrastructure. All that is needed for this plugin is a working Consul agent
with read-write access to the key-value store.
@ -81,8 +83,11 @@ def __virtual__():
'verify': __opts__.get('consul.verify', True),
}
global api
api = consul.Consul(**consul_kwargs)
try:
global api
api = consul.Consul(**consul_kwargs)
except AttributeError:
return (False, "Failed to invoke consul.Consul, please make sure you have python-consul >= 0.2.0 installed")
return __virtualname__
@ -14,7 +14,7 @@ from __future__ import absolute_import
# Import Salt libs
import salt.spm
import salt.utils.parsers as parsers
from salt.utils.verify import verify_log
from salt.utils.verify import verify_log, verify_env
class SPM(parsers.SPMParser):
@ -29,6 +29,10 @@ class SPM(parsers.SPMParser):
ui = salt.spm.SPMCmdlineInterface()
self.parse_args()
self.setup_logfile_logger()
v_dirs = [
self.config['cachedir'],
]
verify_env(v_dirs, self.config['user'],)
verify_log(self.config)
client = salt.spm.SPMClient(ui, self.config)
client.run(self.args)
@ -359,29 +359,19 @@ class SyncClientMixin(object):
# packed into the top level object. The plan is to move away from
# that since the caller knows what is an arg vs a kwarg, but while
# we make the transition we will load "kwargs" using format_call if
# there are no kwargs in the low object passed in
f_call = None
if 'arg' not in low:
# there are no kwargs in the low object passed in.
if 'arg' in low and 'kwarg' in low:
args = low['arg']
kwargs = low['kwarg']
else:
f_call = salt.utils.format_call(
self.functions[fun],
low,
expected_extra_kws=CLIENT_INTERNAL_KEYWORDS
)
args = f_call.get('args', ())
else:
args = low['arg']
if 'kwarg' not in low:
log.critical(
'kwargs must be passed inside the low data within the '
'\'kwarg\' key. See usage of '
'salt.utils.args.parse_input() and '
'salt.minion.load_args_and_kwargs() elsewhere in the '
'codebase.'
)
kwargs = {}
else:
kwargs = low['kwarg']
kwargs = f_call.get('kwargs', {})
# Update the event data with loaded args and kwargs
data['fun_args'] = list(args) + ([kwargs] if kwargs else [])
@ -334,7 +334,7 @@ VALID_OPTS = {
# Whether or not scheduled mine updates should be accompanied by a job return for the job cache
'mine_return_job': bool,
# Schedule a mine update every n number of seconds
# The number of minutes between mine updates.
'mine_interval': int,
# The ipc strategy. (i.e., sockets versus tcp, etc)
@ -569,6 +569,23 @@ VALID_OPTS = {
# False in 2016.3.0
'add_proxymodule_to_opts': bool,
# Merge pillar data into configuration opts.
# As multiple proxies can run on the same server, we may need different
# configuration options for each, while there's one single configuration file.
# The solution is merging the pillar data of each proxy minion into the opts.
'proxy_merge_pillar_in_opts': bool,
# Deep merge of pillar data into configuration opts.
# Evaluated only when `proxy_merge_pillar_in_opts` is True.
'proxy_deep_merge_pillar_in_opts': bool,
# The strategy used when merging pillar into opts.
# Considered only when `proxy_merge_pillar_in_opts` is True.
'proxy_merge_pillar_in_opts_strategy': str,
# Allow enabling mine details using pillar data.
'proxy_mines_pillar': bool,
# In some particular cases, always alive proxies are not beneficial.
# This option can be used in those less dynamic environments:
# the user can request the connection
@ -1637,6 +1654,12 @@ DEFAULT_PROXY_MINION_OPTS = {
'append_minionid_config_dirs': ['cachedir', 'pidfile', 'default_include', 'extension_modules'],
'default_include': 'proxy.d/*.conf',
'proxy_merge_pillar_in_opts': False,
'proxy_deep_merge_pillar_in_opts': False,
'proxy_merge_pillar_in_opts_strategy': 'smart',
'proxy_mines_pillar': True,
# By default, proxies will preserve the connection.
# If this option is set to False,
# the connection with the remote dumb device
@ -1270,10 +1270,10 @@ class RemoteClient(Client):
hash_type = self.opts.get('hash_type', 'md5')
ret['hsum'] = salt.utils.get_hash(path, form=hash_type)
ret['hash_type'] = hash_type
return ret, list(os.stat(path))
return ret
load = {'path': path,
'saltenv': saltenv,
'cmd': '_file_hash_and_stat'}
'cmd': '_file_hash'}
return self.channel.send(load)
def hash_file(self, path, saltenv='base'):
@ -1282,14 +1282,33 @@ class RemoteClient(Client):
master file server prepend the path with salt://<file on server>
otherwise, prepend the file with / for a local file.
'''
return self.__hash_and_stat_file(path, saltenv)[0]
return self.__hash_and_stat_file(path, saltenv)
def hash_and_stat_file(self, path, saltenv='base'):
'''
The same as hash_file, but also return the file's mode, or None if no
mode data is present.
'''
return self.__hash_and_stat_file(path, saltenv)
hash_result = self.hash_file(path, saltenv)
try:
path = self._check_proto(path)
except MinionError as err:
if not os.path.isfile(path):
return hash_result, None
else:
try:
return hash_result, list(os.stat(path))
except Exception:
return hash_result, None
load = {'path': path,
'saltenv': saltenv,
'cmd': '_file_find'}
fnd = self.channel.send(load)
try:
stat_result = fnd.get('stat')
except AttributeError:
stat_result = None
return hash_result, stat_result
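The refactor above splits hashing from stat'ing so `hash_file` no longer drags a stat call along. A standalone sketch of the `(hash_dict, stat_list)` shape `hash_and_stat_file` returns for a local file, using `hashlib` directly in place of `salt.utils.get_hash` (the helper name `hash_and_stat_local` is invented for illustration):

```python
import hashlib
import os
import tempfile

def hash_and_stat_local(path, hash_type='md5'):
    """Return ({'hsum': ..., 'hash_type': ...}, list(os.stat(path))),
    or (None, None) when the file does not exist."""
    if not os.path.isfile(path):
        return None, None
    with open(path, 'rb') as fh:
        digest = hashlib.new(hash_type, fh.read()).hexdigest()
    return {'hsum': digest, 'hash_type': hash_type}, list(os.stat(path))

# Exercise it against a throwaway file
fd, tmp = tempfile.mkstemp()
os.write(fd, b'salt')
os.close(fd)
hashes, stat_result = hash_and_stat_local(tmp)
os.remove(tmp)
```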
def list_env(self, saltenv='base'):
'''

View file

@ -1175,6 +1175,10 @@ _OS_FAMILY_MAP = {
'Raspbian': 'Debian',
'Devuan': 'Debian',
'antiX': 'Debian',
'Kali': 'Debian',
'neon': 'Debian',
'Cumulus': 'Debian',
'Deepin': 'Debian',
'NILinuxRT': 'NILinuxRT',
'NILinuxRT-XFCE': 'NILinuxRT',
'Void': 'Void',

View file

@ -489,7 +489,7 @@ class Key(object):
minions = []
for key, val in six.iteritems(keys):
minions.extend(val)
if not self.opts.get('preserve_minion_cache', False) or not preserve_minions:
if not self.opts.get('preserve_minion_cache', False):
m_cache = os.path.join(self.opts['cachedir'], self.ACC)
if os.path.isdir(m_cache):
for minion in os.listdir(m_cache):
@ -736,7 +736,7 @@ class Key(object):
def delete_key(self,
match=None,
match_dict=None,
preserve_minions=False,
preserve_minions=None,
revoke_auth=False):
'''
Delete public keys. If "match" is passed, it is evaluated as a glob.
@ -774,11 +774,10 @@ class Key(object):
salt.utils.event.tagify(prefix='key'))
except (OSError, IOError):
pass
if preserve_minions:
preserve_minions_list = matches.get('minions', [])
if self.opts.get('preserve_minions') is True:
self.check_minion_cache(preserve_minions=matches.get('minions', []))
else:
preserve_minions_list = []
self.check_minion_cache(preserve_minions=preserve_minions_list)
self.check_minion_cache()
if self.opts.get('rotate_aes_key'):
salt.crypt.dropfile(self.opts['cachedir'], self.opts['user'])
return (
@ -969,16 +968,17 @@ class RaetKey(Key):
minions.extend(val)
m_cache = os.path.join(self.opts['cachedir'], 'minions')
if os.path.isdir(m_cache):
for minion in os.listdir(m_cache):
if minion not in minions:
shutil.rmtree(os.path.join(m_cache, minion))
cache = salt.cache.factory(self.opts)
clist = cache.list(self.ACC)
if clist:
for minion in clist:
if not self.opts.get('preserve_minion_cache', False):
if os.path.isdir(m_cache):
for minion in os.listdir(m_cache):
if minion not in minions and minion not in preserve_minions:
cache.flush('{0}/{1}'.format(self.ACC, minion))
shutil.rmtree(os.path.join(m_cache, minion))
cache = salt.cache.factory(self.opts)
clist = cache.list(self.ACC)
if clist:
for minion in clist:
if minion not in minions and minion not in preserve_minions:
cache.flush('{0}/{1}'.format(self.ACC, minion))
kind = self.opts.get('__role', '') # application kind
if kind not in salt.utils.kinds.APPL_KINDS:
@ -1220,7 +1220,7 @@ class RaetKey(Key):
def delete_key(self,
match=None,
match_dict=None,
preserve_minions=False,
preserve_minions=None,
revoke_auth=False):
'''
Delete public keys. If "match" is passed, it is evaluated as a glob.
@ -1251,7 +1251,10 @@ class RaetKey(Key):
os.remove(os.path.join(self.opts['pki_dir'], status, key))
except (OSError, IOError):
pass
self.check_minion_cache(preserve_minions=matches.get('minions', []))
if self.opts.get('preserve_minions') is True:
self.check_minion_cache(preserve_minions=matches.get('minions', []))
else:
self.check_minion_cache()
return (
self.name_match(match) if match is not None
else self.dict_match(matches)

View file

@ -270,7 +270,7 @@ def raw_mod(opts, name, functions, mod='modules'):
testmod['test.ping']()
'''
loader = LazyLoader(
_module_dirs(opts, mod, 'rawmodule'),
_module_dirs(opts, mod, 'module'),
opts,
tag='rawmodule',
virtual_enable=False,

View file

@ -100,6 +100,7 @@ import salt.defaults.exitcodes
import salt.cli.daemons
import salt.log.setup
import salt.utils.dictupdate
from salt.config import DEFAULT_MINION_OPTS
from salt.defaults import DEFAULT_TARGET_DELIM
from salt.executors import FUNCTION_EXECUTORS
@ -1599,13 +1600,24 @@ class Minion(MinionBase):
minion side execution.
'''
salt.utils.appendproctitle('{0}._thread_multi_return {1}'.format(cls.__name__, data['jid']))
ret = {
'return': {},
'retcode': {},
'success': {}
}
for ind in range(0, len(data['fun'])):
ret['success'][data['fun'][ind]] = False
multifunc_ordered = opts.get('multifunc_ordered', False)
num_funcs = len(data['fun'])
if multifunc_ordered:
ret = {
'return': [None] * num_funcs,
'retcode': [None] * num_funcs,
'success': [False] * num_funcs
}
else:
ret = {
'return': {},
'retcode': {},
'success': {}
}
for ind in range(0, num_funcs):
if not multifunc_ordered:
ret['success'][data['fun'][ind]] = False
try:
if minion_instance.connected and minion_instance.opts['pillar'].get('minion_blackout', False):
# this minion is blacked out. Only allow saltutil.refresh_pillar
@ -1620,12 +1632,20 @@ class Minion(MinionBase):
data['arg'][ind],
data)
minion_instance.functions.pack['__context__']['retcode'] = 0
ret['return'][data['fun'][ind]] = func(*args, **kwargs)
ret['retcode'][data['fun'][ind]] = minion_instance.functions.pack['__context__'].get(
'retcode',
0
)
ret['success'][data['fun'][ind]] = True
if multifunc_ordered:
ret['return'][ind] = func(*args, **kwargs)
ret['retcode'][ind] = minion_instance.functions.pack['__context__'].get(
'retcode',
0
)
ret['success'][ind] = True
else:
ret['return'][data['fun'][ind]] = func(*args, **kwargs)
ret['retcode'][data['fun'][ind]] = minion_instance.functions.pack['__context__'].get(
'retcode',
0
)
ret['success'][data['fun'][ind]] = True
except Exception as exc:
trb = traceback.format_exc()
log.warning(
@ -1633,7 +1653,10 @@ class Minion(MinionBase):
exc
)
)
ret['return'][data['fun'][ind]] = trb
if multifunc_ordered:
ret['return'][ind] = trb
else:
ret['return'][data['fun'][ind]] = trb
ret['jid'] = data['jid']
ret['fun'] = data['fun']
ret['fun_args'] = data['arg']
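The `multifunc_ordered` change above switches the multi-function return from function-name keyed dicts to positional lists, which keeps results unambiguous when the same function appears twice in one job. A minimal sketch of just the return-shape logic (the real return also carries `jid`, `fun`, and `fun_args`):

```python
def build_return(funs, results, multifunc_ordered=False):
    """Assemble a multi-function job return in either shape."""
    num_funcs = len(funs)
    if multifunc_ordered:
        ret = {'return': [None] * num_funcs,
               'retcode': [None] * num_funcs,
               'success': [False] * num_funcs}
    else:
        ret = {'return': {}, 'retcode': {}, 'success': {}}
    for ind in range(num_funcs):
        # Positional index when ordered, function name otherwise
        key = ind if multifunc_ordered else funs[ind]
        ret['return'][key] = results[ind]
        ret['retcode'][key] = 0
        ret['success'][key] = True
    return ret

funs = ['test.ping', 'test.ping']
ordered = build_return(funs, [True, True], multifunc_ordered=True)
keyed = build_return(funs, [True, True])
```

Note how the keyed form silently collapses the two `test.ping` results into one entry, which is exactly what the ordered form avoids.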
@ -2588,6 +2611,8 @@ class SyndicManager(MinionBase):
'''
if kwargs is None:
kwargs = {}
successful = False
# Call for each master
for master, syndic_future in self.iter_master_options(master_id):
if not syndic_future.done() or syndic_future.exception():
log.error('Unable to call {0} on {1}, that syndic is not connected'.format(func, master))
@ -2595,12 +2620,12 @@ class SyndicManager(MinionBase):
try:
getattr(syndic_future.result(), func)(*args, **kwargs)
return
successful = True
except SaltClientError:
log.error('Unable to call {0} on {1}, trying another...'.format(func, master))
self._mark_master_dead(master)
continue
log.critical('Unable to call {0} on any masters!'.format(func))
if not successful:
log.critical('Unable to call {0} on any masters!'.format(func))
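The control-flow fix above replaces an early `return` with a `successful` flag: the call is now attempted against every connected master, and the critical "unable to call on any masters" message fires only when all of them failed. A standalone sketch of the pattern (with `print` standing in for `log.critical`):

```python
def call_on_masters(masters, call):
    """Invoke call(master) for each master; report whether any succeeded."""
    successful = False
    for master in masters:
        try:
            call(master)
            successful = True
        except Exception:
            # A dead master is skipped and the next one is tried.
            continue
    if not successful:
        # Only correct to report total failure once every master was tried.
        print('Unable to call on any masters!')
    return successful

calls = []
all_ok = call_on_masters(['m1', 'm2'], calls.append)

def _fail(master):
    raise RuntimeError('syndic down')

none_ok = call_on_masters(['m1'], _fail)
```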
def _return_pub_syndic(self, values, master_id=None):
'''
@ -3118,6 +3143,26 @@ class ProxyMinion(Minion):
if 'proxy' not in self.opts:
self.opts['proxy'] = self.opts['pillar']['proxy']
if self.opts.get('proxy_merge_pillar_in_opts'):
# Override proxy opts with pillar data when the user required.
self.opts = salt.utils.dictupdate.merge(self.opts,
self.opts['pillar'],
strategy=self.opts.get('proxy_merge_pillar_in_opts_strategy'),
merge_lists=self.opts.get('proxy_deep_merge_pillar_in_opts', False))
elif self.opts.get('proxy_mines_pillar'):
# Even when not required, some details such as mine configuration
# should be merged anyway whenever possible.
if 'mine_interval' in self.opts['pillar']:
self.opts['mine_interval'] = self.opts['pillar']['mine_interval']
if 'mine_functions' in self.opts['pillar']:
general_proxy_mines = self.opts.get('mine_functions', [])
specific_proxy_mines = self.opts['pillar']['mine_functions']
try:
self.opts['mine_functions'] = general_proxy_mines + specific_proxy_mines
except TypeError as terr:
log.error('Unable to merge mine functions from the pillar into the opts for proxy {0}'.format(
self.opts['id']))
fq_proxyname = self.opts['proxy']['proxytype']
# Need to load the modules so they get all the dunder variables

View file

@ -12,6 +12,7 @@ import logging
# Import Salt libs
import salt.utils
import salt.utils.path
# Import 3rd-party libs
import salt.ext.six as six
@ -241,4 +242,4 @@ def _read_link(name):
Throws an OSError if the link does not exist
'''
alt_link_path = '/etc/alternatives/{0}'.format(name)
return os.readlink(alt_link_path)
return salt.utils.path.readlink(alt_link_path)

View file

@ -93,11 +93,15 @@ __virtualname__ = 'pkg'
def __virtual__():
'''
Confirm this module is on a Debian based system
Confirm this module is on a Debian-based system
'''
if __grains__.get('os_family') in ('Kali', 'Debian', 'neon'):
return __virtualname__
elif __grains__.get('os_family', False) == 'Cumulus':
# If your minion is running an OS which is Debian-based but does not have
# an "os_family" grain of Debian, then the proper fix is NOT to check for
# the minion's "os_family" grain here in the __virtual__. The correct fix
# is to add the value from the minion's "os" grain to the _OS_FAMILY_MAP
# dict in salt/grains/core.py, so that we assign the correct "os_family"
# grain to the minion.
if __grains__.get('os_family') == 'Debian':
return __virtualname__
return (False, 'The pkg module could not be loaded: unsupported OS family')
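The lookup the comment above describes can be sketched in a few lines: the minion's "os" grain is translated through `_OS_FAMILY_MAP` to derive "os_family", which is why new Debian derivatives belong in that map rather than in per-module `__virtual__` checks. A toy excerpt of the map with the entries added in this commit:

```python
# Excerpt only; the real _OS_FAMILY_MAP in salt/grains/core.py is much larger.
_OS_FAMILY_MAP = {
    'Kali': 'Debian',
    'neon': 'Debian',
    'Cumulus': 'Debian',
    'Deepin': 'Debian',
}

def os_family(os_grain):
    # Distros without a mapping fall back to their own name as the family.
    return _OS_FAMILY_MAP.get(os_grain, os_grain)
```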

View file

@ -2456,7 +2456,8 @@ def describe_route_table(route_table_id=None, route_table_name=None,
'''
salt.utils.warn_until('Oxygen',
salt.utils.warn_until(
'Neon',
'The \'describe_route_table\' method has been deprecated and '
'replaced by \'describe_route_tables\'.'
)

View file

@ -2027,19 +2027,12 @@ def build_network_settings(**settings):
# Write settings
_write_file_network(network, _DEB_NETWORKING_FILE, True)
# Write hostname to /etc/hostname
# Get hostname and domain from opts
sline = opts['hostname'].split('.', 1)
opts['hostname'] = sline[0]
hostname = '{0}\n'.format(opts['hostname'])
current_domainname = current_network_settings['domainname']
current_searchdomain = current_network_settings['searchdomain']
# Only write the hostname if it has changed
if not opts['hostname'] == current_network_settings['hostname']:
if not ('test' in settings and settings['test']):
# TODO replace with a call to network.mod_hostname instead
_write_file_network(hostname, _DEB_HOSTNAME_FILE)
new_domain = False
if len(sline) > 1:
new_domainname = sline[1]

View file

@ -17,10 +17,10 @@ except ImportError:
from pipes import quote as _cmd_quote
# Import salt libs
import salt.utils
import salt.utils.yast
import salt.utils.preseed
import salt.utils.kickstart
import salt.utils.validate.path
import salt.syspaths
from salt.exceptions import SaltInvocationError
@ -403,6 +403,11 @@ def _bootstrap_deb(
log.error('Required tool debootstrap is not installed.')
return False
if static_qemu and not salt.utils.validate.path.is_executable(static_qemu):
log.error('Required tool qemu not '
'present/readable at: {0}'.format(static_qemu))
return False
if isinstance(pkgs, (list, tuple)):
pkgs = ','.join(pkgs)
if isinstance(exclude_pkgs, (list, tuple)):
@ -427,11 +432,13 @@ def _bootstrap_deb(
__salt__['cmd.run'](deb_args, python_shell=False)
__salt__['cmd.run'](
'cp {qemu} {root}/usr/bin/'.format(
qemu=_cmd_quote(static_qemu), root=_cmd_quote(root)
if static_qemu:
__salt__['cmd.run'](
'cp {qemu} {root}/usr/bin/'.format(
qemu=_cmd_quote(static_qemu), root=_cmd_quote(root)
)
)
)
env = {'DEBIAN_FRONTEND': 'noninteractive',
'DEBCONF_NONINTERACTIVE_SEEN': 'true',
'LC_ALL': 'C',

View file

@ -31,7 +31,7 @@ def __virtual__():
if __grains__['kernel'] in ('Linux', 'OpenBSD', 'NetBSD'):
return __virtualname__
return (False, 'The groupadd execution module cannot be loaded: '
' only available on Linux, OpenBSD and NetBSD')
' only available on Linux, OpenBSD and NetBSD')
def add(name, gid=None, system=False, root=None):
@ -44,12 +44,12 @@ def add(name, gid=None, system=False, root=None):
salt '*' group.add foo 3456
'''
cmd = 'groupadd '
cmd = ['groupadd']
if gid:
cmd += '-g {0} '.format(gid)
cmd.append('-g {0}'.format(gid))
if system and __grains__['kernel'] != 'OpenBSD':
cmd += '-r '
cmd += name
cmd.append('-r')
cmd.append(name)
if root is not None:
cmd.extend(('-R', root))
@ -69,7 +69,7 @@ def delete(name, root=None):
salt '*' group.delete foo
'''
cmd = ('groupdel', name)
cmd = ['groupdel', name]
if root is not None:
cmd.extend(('-R', root))
@ -140,7 +140,7 @@ def chgid(name, gid, root=None):
pre_gid = __salt__['file.group_to_gid'](name)
if gid == pre_gid:
return True
cmd = ('groupmod', '-g', gid, name)
cmd = ['groupmod', '-g', gid, name]
if root is not None:
cmd.extend(('-R', root))
@ -170,15 +170,15 @@ def adduser(name, username, root=None):
if __grains__['kernel'] == 'Linux':
if on_redhat_5:
cmd = ('gpasswd', '-a', username, name)
cmd = ['gpasswd', '-a', username, name]
elif on_suse_11:
cmd = ('usermod', '-A', name, username)
cmd = ['usermod', '-A', name, username]
else:
cmd = ('gpasswd', '--add', username, name)
cmd = ['gpasswd', '--add', username, name]
if root is not None:
cmd.extend(('-Q', root))
else:
cmd = ('usermod', '-G', name, username)
cmd = ['usermod', '-G', name, username]
if root is not None:
cmd.extend(('-R', root))
@ -208,20 +208,20 @@ def deluser(name, username, root=None):
if username in grp_info['members']:
if __grains__['kernel'] == 'Linux':
if on_redhat_5:
cmd = ('gpasswd', '-d', username, name)
cmd = ['gpasswd', '-d', username, name]
elif on_suse_11:
cmd = ('usermod', '-R', name, username)
cmd = ['usermod', '-R', name, username]
else:
cmd = ('gpasswd', '--del', username, name)
cmd = ['gpasswd', '--del', username, name]
if root is not None:
cmd.extend(('-R', root))
retcode = __salt__['cmd.retcode'](cmd, python_shell=False)
elif __grains__['kernel'] == 'OpenBSD':
out = __salt__['cmd.run_stdout']('id -Gn {0}'.format(username),
python_shell=False)
cmd = 'usermod -S '
cmd += ','.join([g for g in out.split() if g != str(name)])
cmd += ' {0}'.format(username)
cmd = ['usermod', '-S']
cmd.append(','.join([g for g in out.split() if g != str(name)]))
cmd.append('{0}'.format(username))
retcode = __salt__['cmd.retcode'](cmd, python_shell=False)
else:
log.error('group.deluser is not yet supported on this platform')
@ -249,13 +249,13 @@ def members(name, members_list, root=None):
if __grains__['kernel'] == 'Linux':
if on_redhat_5:
cmd = ('gpasswd', '-M', members_list, name)
cmd = ['gpasswd', '-M', members_list, name]
elif on_suse_11:
for old_member in __salt__['group.info'](name).get('members'):
__salt__['cmd.run']('groupmod -R {0} {1}'.format(old_member, name), python_shell=False)
cmd = ('groupmod', '-A', members_list, name)
cmd = ['groupmod', '-A', members_list, name]
else:
cmd = ('gpasswd', '--members', members_list, name)
cmd = ['gpasswd', '--members', members_list, name]
if root is not None:
cmd.extend(('-R', root))
retcode = __salt__['cmd.retcode'](cmd, python_shell=False)
@ -270,7 +270,7 @@ def members(name, members_list, root=None):
for user in members_list.split(","):
if user:
retcode = __salt__['cmd.retcode'](
'usermod -G {0} {1}'.format(name, user),
['usermod', '-G', name, user],
python_shell=False)
if not retcode == 0:
break
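The refactor above moves every command from string concatenation to an argv-style list, which is what `cmd.retcode(..., python_shell=False)` expects and which sidesteps shell-quoting issues. A standalone sketch of the `groupadd` case, mirroring the diff (including its choice to keep `-g <gid>` as a single list element):

```python
def build_groupadd_cmd(name, gid=None, system=False, root=None, kernel='Linux'):
    """Build the groupadd argv list, as in group.add above."""
    cmd = ['groupadd']
    if gid:
        cmd.append('-g {0}'.format(gid))
    if system and kernel != 'OpenBSD':
        cmd.append('-r')
    cmd.append(name)
    if root is not None:
        cmd.extend(('-R', root))
    return cmd
```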

View file

@ -318,17 +318,18 @@ class _Section(OrderedDict):
yield '{0}[{1}]{0}'.format(os.linesep, self.name)
sections_dict = OrderedDict()
for name, value in six.iteritems(self):
# Handle Comment Lines
if com_regx.match(name):
yield '{0}{1}'.format(value, os.linesep)
# Handle Sections
elif isinstance(value, _Section):
sections_dict.update({name: value})
# Key / Value pairs
# Adds spaces between the separator
else:
yield '{0}{1}{2}{3}'.format(
name,
(
' {0} '.format(self.sep) if self.sep != ' '
else self.sep
),
' {0} '.format(self.sep) if self.sep != ' ' else self.sep,
value,
os.linesep
)

View file

@ -1,6 +1,9 @@
# -*- coding: utf-8 -*-
'''
Support for Linux File Access Control Lists
The Linux ACL module requires the `getfacl` and `setfacl` binaries.
'''
from __future__ import absolute_import

View file

@ -11,7 +11,6 @@ import logging
# Import salt libs
import salt.utils
from salt.utils import which as _which
from salt.exceptions import CommandNotFoundError, CommandExecutionError
# Import 3rd-party libs
@ -1114,12 +1113,12 @@ def is_fuse_exec(cmd):
salt '*' mount.is_fuse_exec sshfs
'''
cmd_path = _which(cmd)
cmd_path = salt.utils.which(cmd)
# No point in running ldd on a command that doesn't exist
if not cmd_path:
return False
elif not _which('ldd'):
elif not salt.utils.which('ldd'):
raise CommandNotFoundError('ldd')
out = __salt__['cmd.run']('ldd {0}'.format(cmd_path), python_shell=False)

View file

@ -18,6 +18,8 @@ Module to provide redis functionality to Salt
# Import Python libs
from __future__ import absolute_import
from salt.ext.six.moves import zip
from salt.ext import six
from datetime import datetime
# Import third party libs
try:
@ -513,8 +515,14 @@ def lastsave(host=None, port=None, db=None, password=None):
salt '*' redis.lastsave
'''
# Using '%s' with strftime to get the timestamp is not supported by
# Python itself; when it appears to work, it is because the format is
# passed through to the system strftime, which may not support it.
# See: https://stackoverflow.com/a/11743262
server = _connect(host, port, db, password)
return int(server.lastsave().strftime("%s"))
if six.PY2:
return int((server.lastsave() - datetime(1970, 1, 1)).total_seconds())
else:
return int(server.lastsave().timestamp())
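The Python 2 branch above computes the Unix timestamp by hand because `datetime.timestamp()` only exists on Python 3. The manual epoch delta is deterministic for a naive datetime, which also makes it easy to verify:

```python
from datetime import datetime

def to_epoch(dt):
    # Equivalent of dt.timestamp() for a naive UTC datetime, and usable
    # on both Python 2 and 3 (the approach the PY2 branch above takes).
    return int((dt - datetime(1970, 1, 1)).total_seconds())
```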
def llen(key, host=None, port=None, db=None, password=None):

View file

@ -374,8 +374,10 @@ def list_semod():
def _validate_filetype(filetype):
'''
Checks if the given filetype is a valid SELinux filetype specification.
Throws a SaltInvocationError if it isn't.
.. versionadded:: 2017.7.0
Checks if the given filetype is a valid SELinux filetype
specification. Throws a SaltInvocationError if it isn't.
'''
if filetype not in _SELINUX_FILETYPES.keys():
raise SaltInvocationError('Invalid filetype given: {0}'.format(filetype))
@ -384,6 +386,8 @@ def _validate_filetype(filetype):
def _context_dict_to_string(context):
'''
.. versionadded:: 2017.7.0
Converts an SELinux file context from a dict to a string.
'''
return '{sel_user}:{sel_role}:{sel_type}:{sel_level}'.format(**context)
@ -391,6 +395,8 @@ def _context_dict_to_string(context):
def _context_string_to_dict(context):
'''
.. versionadded:: 2017.7.0
Converts an SELinux file context from string to dict.
'''
if not re.match('[^:]+:[^:]+:[^:]+:[^:]+$', context):
@ -405,8 +411,11 @@ def _context_string_to_dict(context):
def filetype_id_to_string(filetype='a'):
'''
Translates SELinux filetype single-letter representation
to a more human-readable version (which is also used in `semanage fcontext -l`).
.. versionadded:: 2017.7.0
Translates SELinux filetype single-letter representation to a more
human-readable version (which is also used in `semanage fcontext
-l`).
'''
_validate_filetype(filetype)
return _SELINUX_FILETYPES.get(filetype, 'error')
@ -414,20 +423,27 @@ def filetype_id_to_string(filetype='a'):
def fcontext_get_policy(name, filetype=None, sel_type=None, sel_user=None, sel_level=None):
'''
Returns the current entry in the SELinux policy list as a dictionary.
Returns None if no exact match was found
.. versionadded:: 2017.7.0
Returns the current entry in the SELinux policy list as a
dictionary. Returns None if no exact match was found.
Returned keys are:
- filespec (the name supplied and matched)
- filetype (the descriptive name of the filetype supplied)
- sel_user, sel_role, sel_type, sel_level (the selinux context)
* filespec (the name supplied and matched)
* filetype (the descriptive name of the filetype supplied)
* sel_user, sel_role, sel_type, sel_level (the selinux context)
For a more in-depth explanation of the selinux context, go to
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Security-Enhanced_Linux/chap-Security-Enhanced_Linux-SELinux_Contexts.html
name: filespec of the file or directory. Regex syntax is allowed.
filetype: The SELinux filetype specification.
Use one of [a, f, d, c, b, s, l, p].
See also `man semanage-fcontext`.
Defaults to 'a' (all files)
name
filespec of the file or directory. Regex syntax is allowed.
filetype
The SELinux filetype specification. Use one of [a, f, d, c, b,
s, l, p]. See also `man semanage-fcontext`. Defaults to 'a'
(all files).
CLI Example:
@ -460,20 +476,34 @@ def fcontext_get_policy(name, filetype=None, sel_type=None, sel_user=None, sel_l
def fcontext_add_or_delete_policy(action, name, filetype=None, sel_type=None, sel_user=None, sel_level=None):
'''
Sets or deletes the SELinux policy for a given filespec and other optional parameters.
Returns the result of the call to semanage.
Note that you don't have to remove an entry before setting a new one for a given
filespec and filetype, as adding one with semanage automatically overwrites a
previously configured SELinux context.
.. versionadded:: 2017.7.0
name: filespec of the file or directory. Regex syntax is allowed.
file_type: The SELinux filetype specification.
Use one of [a, f, d, c, b, s, l, p].
See also ``man semanage-fcontext``.
Defaults to 'a' (all files)
sel_type: SELinux context type. There are many.
sel_user: SELinux user. Use ``semanage login -l`` to determine which ones are available to you
sel_level: The MLS range of the SELinux context.
Sets or deletes the SELinux policy for a given filespec and other
optional parameters.
Returns the result of the call to semanage.
Note that you don't have to remove an entry before setting a new
one for a given filespec and filetype, as adding one with semanage
automatically overwrites a previously configured SELinux context.
name
filespec of the file or directory. Regex syntax is allowed.
file_type
The SELinux filetype specification. Use one of [a, f, d, c, b,
s, l, p]. See also ``man semanage-fcontext``. Defaults to 'a'
(all files).
sel_type
SELinux context type. There are many.
sel_user
SELinux user. Use ``semanage login -l`` to determine which ones
are available to you.
sel_level
The MLS range of the SELinux context.
CLI Example:
@ -499,10 +529,14 @@ def fcontext_add_or_delete_policy(action, name, filetype=None, sel_type=None, se
def fcontext_policy_is_applied(name, recursive=False):
'''
Returns an empty string if the SELinux policy for a given filespec is applied,
returns string with differences in policy and actual situation otherwise.
.. versionadded:: 2017.7.0
name: filespec of the file or directory. Regex syntax is allowed.
Returns an empty string if the SELinux policy for a given filespec
is applied, returns string with differences in policy and actual
situation otherwise.
name
filespec of the file or directory. Regex syntax is allowed.
CLI Example:
@ -519,11 +553,17 @@ def fcontext_policy_is_applied(name, recursive=False):
def fcontext_apply_policy(name, recursive=False):
'''
Applies SELinux policies to filespec using `restorecon [-R] filespec`.
Returns dict with changes if successful, the output of the restorecon command otherwise.
.. versionadded:: 2017.7.0
name: filespec of the file or directory. Regex syntax is allowed.
recursive: Recursively apply SELinux policies.
Applies SELinux policies to filespec using `restorecon [-R]
filespec`. Returns dict with changes if successful, the output of
the restorecon command otherwise.
name
filespec of the file or directory. Regex syntax is allowed.
recursive
Recursively apply SELinux policies.
CLI Example:

View file

@ -740,10 +740,13 @@ def set_auth_key(
with salt.utils.fopen(fconfig, 'ab+') as _fh:
if new_file is False:
# Let's make sure we have a new line at the end of the file
_fh.seek(1024, 2)
if not _fh.read(1024).rstrip(six.b(' ')).endswith(six.b('\n')):
_fh.seek(0, 2)
_fh.write(six.b('\n'))
_fh.seek(0, 2)
if _fh.tell() > 0:
# File isn't empty, check if last byte is a newline
# If not, add one
_fh.seek(-1, 2)
if _fh.read(1) != six.b('\n'):
_fh.write(six.b('\n'))
if six.PY3:
auth_line = auth_line.encode(__salt_system_encoding__)
_fh.write(auth_line)
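The fixed check above seeks to the very last byte instead of sampling a 1024-byte window, so short files are handled correctly. A standalone sketch of the same "ensure trailing newline before appending" logic:

```python
import os
import tempfile

def ensure_trailing_newline(path):
    with open(path, 'ab+') as fh:
        fh.seek(0, 2)              # jump to end of file
        if fh.tell() > 0:
            # File isn't empty: check whether the last byte is a newline
            fh.seek(-1, 2)
            if fh.read(1) != b'\n':
                fh.write(b'\n')

fd, tmp = tempfile.mkstemp()
os.write(fd, b'ssh-rsa AAAA... sample-key')
os.close(fd)
ensure_trailing_newline(tmp)
ensure_trailing_newline(tmp)   # idempotent: no second newline added
with open(tmp, 'rb') as fh:
    data = fh.read()
os.remove(tmp)
```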

View file

@ -168,7 +168,10 @@ def has_settable_hwclock():
salt '*' system.has_settable_hwclock
'''
if salt.utils.which_bin(['hwclock']) is not None:
res = __salt__['cmd.run_all'](['hwclock', '--test', '--systohc'], python_shell=False)
res = __salt__['cmd.run_all'](
['hwclock', '--test', '--systohc'], python_shell=False,
output_loglevel='quiet', ignore_retcode=True
)
return res['retcode'] == 0
return False

View file

@ -199,12 +199,10 @@ def create(path,
for entry in extra_search_dir:
cmd.append('--extra-search-dir={0}'.format(entry))
if never_download is True:
if virtualenv_version_info >= (1, 10):
if virtualenv_version_info >= (1, 10) and virtualenv_version_info < (14, 0, 0):
log.info(
'The virtualenv \'--never-download\' option has been '
'deprecated in virtualenv(>=1.10), as such, the '
'\'never_download\' option to `virtualenv.create()` has '
'also been deprecated and it\'s not necessary anymore.'
'--never-download was deprecated in 1.10.0, but reimplemented in 14.0.0. '
'If this feature is needed, please install a supported virtualenv version.'
)
else:
cmd.append('--never-download')
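The fix above gates `--never-download` on a version window, since the option was deprecated in virtualenv 1.10 but reimplemented in 14.0.0. Python's tuple comparison makes the window check one readable expression:

```python
def never_download_is_deprecated(version_info):
    """True for virtualenv versions where --never-download is a no-op:
    deprecated in 1.10, reimplemented in 14.0.0."""
    return (1, 10) <= version_info < (14, 0, 0)
```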

View file

@ -983,18 +983,6 @@ def install(name=None, refresh=False, pkgs=None, **kwargs):
# Version 1.2.3 will apply to packages foo and bar
salt '*' pkg.install foo,bar version=1.2.3
cache_file (str):
A single file to copy down for use with the installer. Copied to the
same location as the installer. Use this over ``cache_dir`` if there
are many files in the directory and you only need a specific file
and don't want to cache additional files that may reside in the
installer directory. Only applies to files on ``salt://``
cache_dir (bool):
True will copy the contents of the installer directory. This is
useful for installations that are not a single file. Only applies to
directories on ``salt://``
extra_install_flags (str):
Additional install flags that will be appended to the
``install_flags`` defined in the software definition file. Only
@ -1286,7 +1274,7 @@ def install(name=None, refresh=False, pkgs=None, **kwargs):
if use_msiexec:
cmd = msiexec
arguments = ['/i', cached_pkg]
if pkginfo['version_num'].get('allusers', True):
if pkginfo[version_num].get('allusers', True):
arguments.append('ALLUSERS="1"')
arguments.extend(salt.utils.shlex_split(install_flags))
else:

View file

@ -857,8 +857,8 @@ def list_repo_pkgs(*args, **kwargs):
_parse_output(out['stdout'], strict=True)
else:
for repo in repos:
cmd = [_yum(), '--quiet', 'repository-packages', repo,
'list', '--showduplicates']
cmd = [_yum(), '--quiet', '--showduplicates',
'repository-packages', repo, 'list']
if cacheonly:
cmd.append('-C')
# Can't concatenate because args is a tuple, using list.extend()
@ -2723,7 +2723,7 @@ def _parse_repo_file(filename):
for section in parsed._sections:
section_dict = dict(parsed._sections[section])
section_dict.pop('__name__')
section_dict.pop('__name__', None)
config[section] = section_dict
# Try to extract leading comments

View file

@ -405,20 +405,19 @@ class Pillar(object):
self.opts['pillarenv'], ', '.join(self.opts['file_roots'])
)
else:
tops[self.opts['pillarenv']] = [
compile_template(
self.client.cache_file(
self.opts['state_top'],
self.opts['pillarenv']
),
self.rend,
self.opts['renderer'],
self.opts['renderer_blacklist'],
self.opts['renderer_whitelist'],
self.opts['pillarenv'],
_pillar_rend=True,
)
]
top = self.client.cache_file(self.opts['state_top'], self.opts['pillarenv'])
if top:
tops[self.opts['pillarenv']] = [
compile_template(
top,
self.rend,
self.opts['renderer'],
self.opts['renderer_blacklist'],
self.opts['renderer_whitelist'],
self.opts['pillarenv'],
_pillar_rend=True,
)
]
else:
for saltenv in self._get_envs():
if self.opts.get('pillar_source_merging_strategy', None) == "none":

View file

@ -391,7 +391,6 @@ def clean_old_jobs():
Clean out the old jobs from the job cache
'''
if __opts__['keep_jobs'] != 0:
cur = time.time()
jid_root = _job_dir()
if not os.path.exists(jid_root):
@ -421,7 +420,7 @@ def clean_old_jobs():
shutil.rmtree(t_path)
elif os.path.isfile(jid_file):
jid_ctime = os.stat(jid_file).st_ctime
hours_difference = (cur - jid_ctime) / 3600.0
hours_difference = (time.time() - jid_ctime) / 3600.0
if hours_difference > __opts__['keep_jobs'] and os.path.exists(t_path):
# Remove the entire t_path from the original JID dir
shutil.rmtree(t_path)
@ -435,7 +434,7 @@ def clean_old_jobs():
# Checking the time again prevents a possible race condition where
# t_path JID dirs were created, but not yet populated by a jid file.
t_path_ctime = os.stat(t_path).st_ctime
hours_difference = (cur - t_path_ctime) / 3600.0
hours_difference = (time.time() - t_path_ctime) / 3600.0
if hours_difference > __opts__['keep_jobs']:
shutil.rmtree(t_path)
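The change above samples `time.time()` at each comparison rather than once at function entry. The age test itself is just seconds-to-hours arithmetic; a standalone sketch with an injectable clock for testing:

```python
import time

def is_expired(ctime, keep_jobs_hours, now=None):
    """True when the file (created at ctime) is older than keep_jobs_hours."""
    now = time.time() if now is None else now
    hours_difference = (now - ctime) / 3600.0
    return hours_difference > keep_jobs_hours
```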

View file

@ -14,9 +14,12 @@ import shutil
import msgpack
import hashlib
import logging
import pwd
import grp
import sys
try:
import pwd
import grp
except ImportError:
pass
# Import Salt libs
import salt.client
@ -491,10 +494,18 @@ class SPMClient(object):
# No defaults for this in config.py; default to the current running
# user and group
uid = self.opts.get('spm_uid', os.getuid())
gid = self.opts.get('spm_gid', os.getgid())
uname = pwd.getpwuid(uid)[0]
gname = grp.getgrgid(gid)[0]
import salt.utils
if salt.utils.is_windows():
import salt.utils.win_functions
uname = gname = salt.utils.win_functions.get_current_user()
uname_sid = salt.utils.win_functions.get_sid_from_name(uname)
uid = self.opts.get('spm_uid', uname_sid)
gid = self.opts.get('spm_gid', uname_sid)
else:
uid = self.opts.get('spm_uid', os.getuid())
gid = self.opts.get('spm_gid', os.getgid())
uname = pwd.getpwuid(uid)[0]
gname = grp.getgrgid(gid)[0]
# Second pass: install the files
for member in pkg_files:
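The guarded import above keeps SPM importable on Windows, where the `pwd` and `grp` modules do not exist. A standalone sketch of the pattern, with `getpass` standing in for `salt.utils.win_functions` (which is not assumed here):

```python
import os

try:
    import pwd
    import grp
    HAS_PWD = True
except ImportError:
    # Windows: pwd/grp are POSIX-only
    HAS_PWD = False

def current_owner_names():
    """Return (user, group) names for the current process owner."""
    if HAS_PWD:
        return pwd.getpwuid(os.getuid())[0], grp.getgrgid(os.getgid())[0]
    # Fallback for platforms without pwd/grp; the real code resolves a
    # Windows SID via salt.utils.win_functions instead.
    import getpass
    user = getpass.getuser()
    return user, user

uname, gname = current_owner_names()
```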
@ -710,7 +721,7 @@ class SPMClient(object):
raise SPMInvocationError('A path to a directory must be specified')
if args[1] == '.':
repo_path = os.environ['PWD']
repo_path = os.getcwdu()
else:
repo_path = args[1]

View file

@ -3124,7 +3124,7 @@ class BaseHighState(object):
Returns:
{'saltenv': ['state1', 'state2', ...]}
'''
matches = {}
matches = DefaultOrderedDict(OrderedDict)
# pylint: disable=cell-var-from-loop
for saltenv, body in six.iteritems(top):
if self.opts['environment']:

View file

@ -116,9 +116,14 @@ def cert(name,
if res['result'] is None:
ret['changes'] = {}
else:
if not __salt__['acme.has'](name):
new = None
else:
new = __salt__['acme.info'](name)
ret['changes'] = {
'old': old,
'new': __salt__['acme.info'](name)
'new': new
}
return ret

View file

@ -2,6 +2,8 @@
'''
Linux File Access Control Lists
The Linux ACL state module requires the `getfacl` and `setfacl` binaries.
Ensure a Linux ACL is present
.. code-block:: yaml
@ -81,11 +83,12 @@ def present(name, acl_type, acl_name='', perms='', recurse=False):
# applied to the user/group that owns the file, e.g.,
# default:group::rwx would be listed as default:group:root:rwx
# In this case, if acl_name is empty, we really want to search for root
# but still uses '' for other
# We search through the dictionary getfacl returns for the owner of the
# file if acl_name is empty.
if acl_name == '':
_search_name = __current_perms[name].get('comment').get(_acl_type)
_search_name = __current_perms[name].get('comment').get(_acl_type, '')
else:
_search_name = acl_name
@ -150,11 +153,12 @@ def absent(name, acl_type, acl_name='', perms='', recurse=False):
# applied to the user/group that owns the file, e.g.,
# default:group::rwx would be listed as default:group:root:rwx
# In this case, if acl_name is empty, we really want to search for root
# but still uses '' for other
# We search through the dictionary getfacl returns for the owner of the
# file if acl_name is empty.
if acl_name == '':
_search_name = __current_perms[name].get('comment').get(_acl_type)
_search_name = __current_perms[name].get('comment').get(_acl_type, '')
else:
_search_name = acl_name

View file

@ -310,17 +310,27 @@ def module_remove(name):
def fcontext_policy_present(name, sel_type, filetype='a', sel_user=None, sel_level=None):
'''
Makes sure a SELinux policy for a given filespec (name),
filetype and SELinux context type is present.
.. versionadded:: 2017.7.0
name: filespec of the file or directory. Regex syntax is allowed.
sel_type: SELinux context type. There are many.
filetype: The SELinux filetype specification.
Use one of [a, f, d, c, b, s, l, p].
See also `man semanage-fcontext`.
Defaults to 'a' (all files)
sel_user: The SELinux user.
sel_level: The SELinux MLS range
Makes sure a SELinux policy for a given filespec (name), filetype
and SELinux context type is present.
name
filespec of the file or directory. Regex syntax is allowed.
sel_type
SELinux context type. There are many.
filetype
The SELinux filetype specification. Use one of [a, f, d, c, b,
s, l, p]. See also `man semanage-fcontext`. Defaults to 'a'
(all files).
sel_user
The SELinux user.
sel_level
The SELinux MLS range.
'''
ret = {'name': name, 'result': False, 'changes': {}, 'comment': ''}
new_state = {}
@ -383,17 +393,27 @@ def fcontext_policy_present(name, sel_type, filetype='a', sel_user=None, sel_lev
def fcontext_policy_absent(name, filetype='a', sel_type=None, sel_user=None, sel_level=None):
'''
Makes sure an SELinux file context policy for a given filespec (name),
filetype and SELinux context type is absent.
.. versionadded:: 2017.7.0
name: filespec of the file or directory. Regex syntax is allowed.
filetype: The SELinux filetype specification.
Use one of [a, f, d, c, b, s, l, p].
See also `man semanage-fcontext`.
Defaults to 'a' (all files).
sel_type: The SELinux context type. There are many.
sel_user: The SELinux user.
sel_level: The SELinux MLS range
Makes sure an SELinux file context policy for a given filespec
(name), filetype and SELinux context type is absent.
name
filespec of the file or directory. Regex syntax is allowed.
filetype
The SELinux filetype specification. Use one of [a, f, d, c, b,
s, l, p]. See also `man semanage-fcontext`. Defaults to 'a'
(all files).
sel_type
The SELinux context type. There are many.
sel_user
The SELinux user.
sel_level
The SELinux MLS range.
'''
ret = {'name': name, 'result': False, 'changes': {}, 'comment': ''}
new_state = {}
@ -433,7 +453,10 @@ def fcontext_policy_absent(name, filetype='a', sel_type=None, sel_user=None, sel
def fcontext_policy_applied(name, recursive=False):
'''
Checks and makes sure the SELinux policies for a given filespec are applied.
.. versionadded:: 2017.7.0
Checks and makes sure the SELinux policies for a given filespec are
applied.
'''
ret = {'name': name, 'result': False, 'changes': {}, 'comment': ''}

View file

@ -15,6 +15,8 @@ DEVICE="{{name}}"
{%endif%}{% if onparent %}ONPARENT={{onparent}}
{%endif%}{% if ipv4_failure_fatal %}IPV4_FAILURE_FATAL="{{ipv4_failure_fatal}}"
{%endif%}{% if ipaddr %}IPADDR="{{ipaddr}}"
{%endif%}{% if ipaddr_start %}IPADDR_START="{{ipaddr_start}}"
{%endif%}{% if ipaddr_end %}IPADDR_END="{{ipaddr_end}}"
{%endif%}{% if netmask %}NETMASK="{{netmask}}"
{%endif%}{% if prefix %}PREFIX="{{prefix}}"
{%endif%}{% if gateway %}GATEWAY="{{gateway}}"

View file

@ -966,7 +966,14 @@ class DaemonMixIn(six.with_metaclass(MixInMeta, object)):
# We've loaded and merged options into the configuration, it's safe
# to query about the pidfile
if self.check_pidfile():
os.unlink(self.config['pidfile'])
try:
os.unlink(self.config['pidfile'])
except OSError as err:
self.info(
'PIDfile could not be deleted: {0} ({1})'.format(
self.config['pidfile'], err
)
)
def set_pidfile(self):
from salt.utils.process import set_pidfile
@ -2359,6 +2366,16 @@ class SaltKeyOptionParser(six.with_metaclass(OptionParserMeta,
'Default: %default.')
)
self.add_option(
'--preserve-minions',
default=False,
help=('Setting this to True prevents the master from deleting '
'the minion cache when keys are deleted. This may have '
'security implications if compromised minions authenticate '
'with a previously deleted minion ID. '
'Default: %default.')
)
key_options_group = optparse.OptionGroup(
self, 'Key Generation Options'
)
@ -2458,6 +2475,13 @@ class SaltKeyOptionParser(six.with_metaclass(OptionParserMeta,
elif self.options.rotate_aes_key.lower() == 'false':
self.options.rotate_aes_key = False
def process_preserve_minions(self):
if hasattr(self.options, 'preserve_minions') and isinstance(self.options.preserve_minions, str):
if self.options.preserve_minions.lower() == 'true':
self.options.preserve_minions = True
elif self.options.preserve_minions.lower() == 'false':
self.options.preserve_minions = False
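The string-to-bool coercion done by `process_preserve_minions()` above can be sketched in isolation (a minimal standalone helper for illustration, not part of the patch itself):

```python
# Sketch of the coercion optparse string values go through: 'true'/'false'
# strings (any case) become real booleans, everything else passes through.
def coerce_bool(value):
    if isinstance(value, str):
        if value.lower() == 'true':
            return True
        elif value.lower() == 'false':
            return False
    return value

assert coerce_bool('True') is True
assert coerce_bool('FALSE') is False
assert coerce_bool(True) is True  # already a bool, left unchanged
assert coerce_bool('yes') == 'yes'  # unrecognized strings pass through
```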
def process_list(self):
# Filter accepted list arguments as soon as possible
if not self.options.list:

View file

@ -7,12 +7,14 @@ import glob
import logging
# Import salt libs
import salt.client
import salt.runner
import salt.state
import salt.utils
import salt.utils.cache
import salt.utils.event
import salt.utils.process
import salt.wheel
import salt.defaults.exitcodes
# Import 3rd-party libs
@ -21,6 +23,15 @@ import salt.ext.six as six
log = logging.getLogger(__name__)
REACTOR_INTERNAL_KEYWORDS = frozenset([
'__id__',
'__sls__',
'name',
'order',
'fun',
'state',
])
class Reactor(salt.utils.process.SignalHandlingMultiprocessingProcess, salt.state.Compiler):
'''
@ -29,6 +40,10 @@ class Reactor(salt.utils.process.SignalHandlingMultiprocessingProcess, salt.stat
The reactor has the capability to execute pre-programmed executions
as reactions to events
'''
aliases = {
'cmd': 'local',
}
def __init__(self, opts, log_queue=None):
super(Reactor, self).__init__(log_queue=log_queue)
local_minion_opts = opts.copy()
@ -171,6 +186,16 @@ class Reactor(salt.utils.process.SignalHandlingMultiprocessingProcess, salt.stat
return {'status': False, 'comment': 'Reactor does not exist.'}
def resolve_aliases(self, chunks):
'''
Preserve backward compatibility by rewriting the 'state' key in the low
chunks if it is using a legacy type.
'''
for idx, _ in enumerate(chunks):
new_state = self.aliases.get(chunks[idx]['state'])
if new_state is not None:
chunks[idx]['state'] = new_state
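The alias rewrite performed by `resolve_aliases()` can be demonstrated with a minimal standalone sketch (the chunk contents here are made up for the example):

```python
# Legacy 'cmd' reactions are rewritten to 'local'; states with no alias
# entry are left untouched.
aliases = {'cmd': 'local'}

chunks = [
    {'state': 'cmd', 'fun': 'test.ping'},
    {'state': 'runner', 'fun': 'jobs.list_jobs'},
]

for idx, _ in enumerate(chunks):
    new_state = aliases.get(chunks[idx]['state'])
    if new_state is not None:
        chunks[idx]['state'] = new_state

assert chunks[0]['state'] == 'local'   # legacy alias rewritten
assert chunks[1]['state'] == 'runner'  # not aliased, unchanged
```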
def reactions(self, tag, data, reactors):
'''
Render a list of reactor files and returns a reaction struct
@ -191,6 +216,7 @@ class Reactor(salt.utils.process.SignalHandlingMultiprocessingProcess, salt.stat
except Exception as exc:
log.error('Exception trying to compile reactions: {0}'.format(exc), exc_info=True)
self.resolve_aliases(chunks)
return chunks
def call_reactions(self, chunks):
@ -248,12 +274,19 @@ class Reactor(salt.utils.process.SignalHandlingMultiprocessingProcess, salt.stat
class ReactWrap(object):
'''
Create a wrapper that executes low data for the reaction system
Wrapper that executes low data for the Reactor System
'''
# class-wide cache of clients
client_cache = None
event_user = 'Reactor'
reaction_class = {
'local': salt.client.LocalClient,
'runner': salt.runner.RunnerClient,
'wheel': salt.wheel.Wheel,
'caller': salt.client.Caller,
}
def __init__(self, opts):
self.opts = opts
if ReactWrap.client_cache is None:
@ -264,21 +297,49 @@ class ReactWrap(object):
queue_size=self.opts['reactor_worker_hwm'] # queue size for those workers
)
def populate_client_cache(self, low):
'''
Populate the client cache with an instance of the specified type
'''
reaction_type = low['state']
if reaction_type not in self.client_cache:
log.debug('Reactor is populating %s client cache', reaction_type)
if reaction_type in ('runner', 'wheel'):
# Reaction types that run locally on the master want the full
# opts passed.
self.client_cache[reaction_type] = \
self.reaction_class[reaction_type](self.opts)
# The len() function will cause the module functions to load if
# they aren't already loaded. We want to load them so that the
# spawned threads don't need to load them. Loading in the
# spawned threads creates race conditions such as sometimes not
# finding the required function because another thread is in
# the middle of loading the functions.
len(self.client_cache[reaction_type].functions)
else:
# Reactions which use remote pubs only need the conf file when
# instantiating a client instance.
self.client_cache[reaction_type] = \
self.reaction_class[reaction_type](self.opts['conf_file'])
def run(self, low):
'''
Execute the specified function in the specified state by passing the
low data
Execute a reaction by invoking the proper wrapper func
'''
l_fun = getattr(self, low['state'])
self.populate_client_cache(low)
try:
f_call = salt.utils.format_call(l_fun, low)
kwargs = f_call.get('kwargs', {})
if 'arg' not in kwargs:
kwargs['arg'] = []
if 'kwarg' not in kwargs:
kwargs['kwarg'] = {}
l_fun = getattr(self, low['state'])
except AttributeError:
log.error(
'ReactWrap is missing a wrapper function for \'%s\'',
low['state']
)
# TODO: Setting the user doesn't seem to work for actual remote publishes
try:
wrap_call = salt.utils.format_call(l_fun, low)
args = wrap_call.get('args', ())
kwargs = wrap_call.get('kwargs', {})
# TODO: Setting user doesn't seem to work for actual remote pubs
if low['state'] in ('runner', 'wheel'):
# Update called function's low data with event user to
# segregate events fired by reactor and avoid reaction loops
@ -286,80 +347,106 @@ class ReactWrap(object):
# Replace ``state`` kwarg which comes from high data compiler.
# It breaks some runner functions and seems unnecessary.
kwargs['__state__'] = kwargs.pop('state')
# NOTE: if any additional keys are added here, they will also
# need to be added to filter_kwargs()
l_fun(*f_call.get('args', ()), **kwargs)
if 'args' in kwargs:
# New configuration
reactor_args = kwargs.pop('args')
for item in ('arg', 'kwarg'):
if item in low:
log.warning(
'Reactor \'%s\' is ignoring \'%s\' param %s due to '
'presence of \'args\' param. Check the Reactor System '
'documentation for the correct argument format.',
low['__id__'], item, low[item]
)
if low['state'] == 'caller' \
and isinstance(reactor_args, list) \
and not salt.utils.is_dictlist(reactor_args):
# Legacy 'caller' reactors were already using the 'args'
# param, but only supported a list of positional arguments.
# If low['args'] is a list but is *not* a dictlist, then
# this is actually using the legacy configuration. So, put
# the reactor args into kwarg['arg'] so that the wrapper
# interprets them as positional args.
kwargs['arg'] = reactor_args
kwargs['kwarg'] = {}
else:
kwargs['arg'] = ()
kwargs['kwarg'] = reactor_args
if not isinstance(kwargs['kwarg'], dict):
kwargs['kwarg'] = salt.utils.repack_dictlist(kwargs['kwarg'])
if not kwargs['kwarg']:
log.error(
'Reactor \'%s\' failed to execute %s \'%s\': '
'Incorrect argument format, check the Reactor System '
'documentation for the correct format.',
low['__id__'], low['state'], low['fun']
)
return
else:
# Legacy configuration
react_call = {}
if low['state'] in ('runner', 'wheel'):
if 'arg' not in kwargs or 'kwarg' not in kwargs:
# Runner/wheel execute on the master, so we can use
# format_call to get the functions args/kwargs
react_fun = self.client_cache[low['state']].functions.get(low['fun'])
if react_fun is None:
log.error(
'Reactor \'%s\' failed to execute %s \'%s\': '
'function not available',
low['__id__'], low['state'], low['fun']
)
return
react_call = salt.utils.format_call(
react_fun,
low,
expected_extra_kws=REACTOR_INTERNAL_KEYWORDS
)
if 'arg' not in kwargs:
kwargs['arg'] = react_call.get('args', ())
if 'kwarg' not in kwargs:
kwargs['kwarg'] = react_call.get('kwargs', {})
# Execute the wrapper with the proper args/kwargs. kwargs['arg']
# and kwargs['kwarg'] contain the positional and keyword arguments
# that will be passed to the client interface to execute the
# desired runner/wheel/remote-exec/etc. function.
l_fun(*args, **kwargs)
except SystemExit:
log.warning(
'Reactor \'%s\' attempted to exit. Ignored.', low['__id__']
)
except Exception:
log.error(
'Failed to execute {0}: {1}\n'.format(low['state'], l_fun),
exc_info=True
)
def local(self, *args, **kwargs):
'''
Wrap LocalClient for running :ref:`execution modules <all-salt.modules>`
'''
if 'local' not in self.client_cache:
self.client_cache['local'] = salt.client.LocalClient(self.opts['conf_file'])
try:
self.client_cache['local'].cmd_async(*args, **kwargs)
except SystemExit:
log.warning('Attempt to exit reactor. Ignored.')
except Exception as exc:
log.warning('Exception caught by reactor: {0}'.format(exc))
cmd = local
'Reactor \'%s\' failed to execute %s \'%s\'',
low['__id__'], low['state'], low['fun'], exc_info=True
)
def runner(self, fun, **kwargs):
'''
Wrap RunnerClient for executing :ref:`runner modules <all-salt.runners>`
'''
if 'runner' not in self.client_cache:
self.client_cache['runner'] = salt.runner.RunnerClient(self.opts)
# The len() function will cause the module functions to load if
# they aren't already loaded. We want to load them so that the
# spawned threads don't need to load them. Loading in the spawned
# threads creates race conditions such as sometimes not finding
# the required function because another thread is in the middle
# of loading the functions.
len(self.client_cache['runner'].functions)
try:
self.pool.fire_async(self.client_cache['runner'].low, args=(fun, kwargs))
except SystemExit:
log.warning('Attempt to exit in reactor by runner. Ignored')
except Exception as exc:
log.warning('Exception caught by reactor: {0}'.format(exc))
self.pool.fire_async(self.client_cache['runner'].low, args=(fun, kwargs))
def wheel(self, fun, **kwargs):
'''
Wrap Wheel to enable executing :ref:`wheel modules <all-salt.wheel>`
'''
if 'wheel' not in self.client_cache:
self.client_cache['wheel'] = salt.wheel.Wheel(self.opts)
# The len() function will cause the module functions to load if
# they aren't already loaded. We want to load them so that the
# spawned threads don't need to load them. Loading in the spawned
# threads creates race conditions such as sometimes not finding
# the required function because another thread is in the middle
# of loading the functions.
len(self.client_cache['wheel'].functions)
try:
self.pool.fire_async(self.client_cache['wheel'].low, args=(fun, kwargs))
except SystemExit:
log.warning('Attempt to exit in reactor by wheel. Ignored.')
except Exception as exc:
log.warning('Exception caught by reactor: {0}'.format(exc))
self.pool.fire_async(self.client_cache['wheel'].low, args=(fun, kwargs))
def caller(self, fun, *args, **kwargs):
def local(self, fun, tgt, **kwargs):
'''
Wrap Caller to enable executing :ref:`caller modules <all-salt.caller>`
Wrap LocalClient for running :ref:`execution modules <all-salt.modules>`
'''
log.debug("in caller with fun {0} args {1} kwargs {2}".format(fun, args, kwargs))
args = kwargs.get('args', [])
if 'caller' not in self.client_cache:
self.client_cache['caller'] = salt.client.Caller(self.opts['conf_file'])
try:
self.client_cache['caller'].function(fun, *args)
except SystemExit:
log.warning('Attempt to exit reactor. Ignored.')
except Exception as exc:
log.warning('Exception caught by reactor: {0}'.format(exc))
self.client_cache['local'].cmd_async(tgt, fun, **kwargs)
def caller(self, fun, **kwargs):
'''
Wrap LocalCaller to execute remote exec functions locally on the Minion
'''
self.client_cache['caller'].cmd(fun, *kwargs['arg'], **kwargs['kwarg'])

View file

@ -842,7 +842,8 @@ class Schedule(object):
if argspec.keywords:
# this function accepts **kwargs, pack in the publish data
for key, val in six.iteritems(ret):
kwargs['__pub_{0}'.format(key)] = copy.deepcopy(val)
if key != 'kwargs':
kwargs['__pub_{0}'.format(key)] = copy.deepcopy(val)
ret['return'] = self.functions[func](*args, **kwargs)
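The publish-data packing in the hunk above can be illustrated standalone: every key of the job return dict except `kwargs` is deep-copied into the call kwargs under a `__pub_` prefix (the `ret` values below are invented for the example):

```python
import copy

ret = {'jid': '20170919104431', 'fun': 'test.ping', 'kwargs': {'x': 1}}
kwargs = {}

# Pack publish data, skipping the 'kwargs' key to avoid clobbering the
# function's own keyword arguments.
for key, val in ret.items():
    if key != 'kwargs':
        kwargs['__pub_{0}'.format(key)] = copy.deepcopy(val)

assert kwargs == {'__pub_jid': '20170919104431', '__pub_fun': 'test.ping'}
assert '__pub_kwargs' not in kwargs
```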

View file

@ -64,3 +64,14 @@ def is_readable(path):
# The path does not exist
return False
def is_executable(path):
'''
Check if a given path is executable by the current user.
:param path: The path to check
:returns: True or False
'''
return os.access(path, os.X_OK)
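A quick standalone check of the `os.access(path, os.X_OK)` call that `is_executable()` wraps (this assumes a POSIX filesystem where chmod execute bits are honored):

```python
import os
import tempfile

# Create a temp file, toggle its execute bit, and observe os.access().
with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name

os.chmod(path, 0o755)
assert os.access(path, os.X_OK)       # execute bit set -> True

os.chmod(path, 0o644)
assert not os.access(path, os.X_OK)   # no execute bits -> False

os.remove(path)
```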

View file

@ -2,42 +2,39 @@
# Import Python libs
from __future__ import absolute_import
import os
import tempfile
# Import Salt Libs
from salt.cloud.clouds import ec2
from salt.exceptions import SaltCloudSystemExit
# Import Salt Testing Libs
from tests.support.mixins import LoaderModuleMockMixin
from tests.support.unit import TestCase, skipIf
from tests.support.mock import NO_MOCK, NO_MOCK_REASON
from tests.support.mock import NO_MOCK, NO_MOCK_REASON, patch, PropertyMock
@skipIf(NO_MOCK, NO_MOCK_REASON)
class EC2TestCase(TestCase, LoaderModuleMockMixin):
class EC2TestCase(TestCase):
'''
Unit TestCase for salt.cloud.clouds.ec2 module.
'''
def setup_loader_modules(self):
return {ec2: {}}
def test__validate_key_path_and_mode(self):
with tempfile.NamedTemporaryFile() as f:
key_file = f.name
os.chmod(key_file, 0o644)
self.assertRaises(SaltCloudSystemExit,
ec2._validate_key_path_and_mode,
key_file)
os.chmod(key_file, 0o600)
self.assertTrue(ec2._validate_key_path_and_mode(key_file))
os.chmod(key_file, 0o400)
self.assertTrue(ec2._validate_key_path_and_mode(key_file))
# Key file exists
with patch('os.path.exists', return_value=True):
with patch('os.stat') as patched_stat:
# tmp file removed
self.assertRaises(SaltCloudSystemExit,
ec2._validate_key_path_and_mode,
key_file)
type(patched_stat.return_value).st_mode = PropertyMock(return_value=0o644)
self.assertRaises(
SaltCloudSystemExit, ec2._validate_key_path_and_mode, 'key_file')
type(patched_stat.return_value).st_mode = PropertyMock(return_value=0o600)
self.assertTrue(ec2._validate_key_path_and_mode('key_file'))
type(patched_stat.return_value).st_mode = PropertyMock(return_value=0o400)
self.assertTrue(ec2._validate_key_path_and_mode('key_file'))
# Key file does not exist
with patch('os.path.exists', return_value=False):
self.assertRaises(
SaltCloudSystemExit, ec2._validate_key_path_and_mode, 'key_file')

View file

@ -66,30 +66,28 @@ class AlternativesTestCase(TestCase, LoaderModuleMockMixin):
)
def test_show_current(self):
with patch('os.readlink') as os_readlink_mock:
os_readlink_mock.return_value = '/etc/alternatives/salt'
mock = MagicMock(return_value='/etc/alternatives/salt')
with patch('salt.utils.path.readlink', mock):
ret = alternatives.show_current('better-world')
self.assertEqual('/etc/alternatives/salt', ret)
os_readlink_mock.assert_called_once_with(
'/etc/alternatives/better-world'
)
mock.assert_called_once_with('/etc/alternatives/better-world')
with TestsLoggingHandler() as handler:
os_readlink_mock.side_effect = OSError('Hell was not found!!!')
mock.side_effect = OSError('Hell was not found!!!')
self.assertFalse(alternatives.show_current('hell'))
os_readlink_mock.assert_called_with('/etc/alternatives/hell')
mock.assert_called_with('/etc/alternatives/hell')
self.assertIn('ERROR:alternative: hell does not exist',
handler.messages)
def test_check_installed(self):
with patch('os.readlink') as os_readlink_mock:
os_readlink_mock.return_value = '/etc/alternatives/salt'
mock = MagicMock(return_value='/etc/alternatives/salt')
with patch('salt.utils.path.readlink', mock):
self.assertTrue(
alternatives.check_installed(
'better-world', '/etc/alternatives/salt'
)
)
os_readlink_mock.return_value = False
mock.return_value = False
self.assertFalse(
alternatives.check_installed(
'help', '/etc/alternatives/salt'

View file

@ -65,7 +65,8 @@ class TestGemModule(TestCase, LoaderModuleMockMixin):
with patch.dict(gem.__salt__,
{'rvm.is_installed': MagicMock(return_value=False),
'rbenv.is_installed': MagicMock(return_value=True),
'rbenv.do': mock}):
'rbenv.do': mock}),\
patch('salt.utils.is_windows', return_value=False):
gem._gem(['install', 'rails'])
mock.assert_called_once_with(
['gem', 'install', 'rails'],

View file

@ -94,9 +94,11 @@ class GenesisTestCase(TestCase, LoaderModuleMockMixin):
'cmd.run': MagicMock(),
'disk.blkid': MagicMock(return_value={})}):
with patch('salt.modules.genesis.salt.utils.which', return_value=True):
param_set['params'].update(common_parms)
self.assertEqual(genesis.bootstrap(**param_set['params']), None)
genesis.__salt__['cmd.run'].assert_any_call(param_set['cmd'], python_shell=False)
with patch('salt.modules.genesis.salt.utils.validate.path.is_executable',
return_value=True):
param_set['params'].update(common_parms)
self.assertEqual(genesis.bootstrap(**param_set['params']), None)
genesis.__salt__['cmd.run'].assert_any_call(param_set['cmd'], python_shell=False)
with patch.object(genesis, '_bootstrap_pacman', return_value='A') as pacman_patch:
with patch.dict(genesis.__salt__, {'mount.umount': MagicMock(),

View file

@ -118,16 +118,16 @@ class GroupAddTestCase(TestCase, LoaderModuleMockMixin):
'''
os_version_list = [
{'grains': {'kernel': 'Linux', 'os_family': 'RedHat', 'osmajorrelease': '5'},
'cmd': ('gpasswd', '-a', 'root', 'test')},
'cmd': ['gpasswd', '-a', 'root', 'test']},
{'grains': {'kernel': 'Linux', 'os_family': 'Suse', 'osmajorrelease': '11'},
'cmd': ('usermod', '-A', 'test', 'root')},
'cmd': ['usermod', '-A', 'test', 'root']},
{'grains': {'kernel': 'Linux'},
'cmd': ('gpasswd', '--add', 'root', 'test')},
'cmd': ['gpasswd', '--add', 'root', 'test']},
{'grains': {'kernel': 'OTHERKERNEL'},
'cmd': ('usermod', '-G', 'test', 'root')},
'cmd': ['usermod', '-G', 'test', 'root']},
]
for os_version in os_version_list:
@ -145,16 +145,16 @@ class GroupAddTestCase(TestCase, LoaderModuleMockMixin):
'''
os_version_list = [
{'grains': {'kernel': 'Linux', 'os_family': 'RedHat', 'osmajorrelease': '5'},
'cmd': ('gpasswd', '-d', 'root', 'test')},
'cmd': ['gpasswd', '-d', 'root', 'test']},
{'grains': {'kernel': 'Linux', 'os_family': 'Suse', 'osmajorrelease': '11'},
'cmd': ('usermod', '-R', 'test', 'root')},
'cmd': ['usermod', '-R', 'test', 'root']},
{'grains': {'kernel': 'Linux'},
'cmd': ('gpasswd', '--del', 'root', 'test')},
'cmd': ['gpasswd', '--del', 'root', 'test']},
{'grains': {'kernel': 'OpenBSD'},
'cmd': 'usermod -S foo root'},
'cmd': ['usermod', '-S', 'foo', 'root']},
]
for os_version in os_version_list:
@ -180,16 +180,16 @@ class GroupAddTestCase(TestCase, LoaderModuleMockMixin):
'''
os_version_list = [
{'grains': {'kernel': 'Linux', 'os_family': 'RedHat', 'osmajorrelease': '5'},
'cmd': ('gpasswd', '-M', 'foo', 'test')},
'cmd': ['gpasswd', '-M', 'foo', 'test']},
{'grains': {'kernel': 'Linux', 'os_family': 'Suse', 'osmajorrelease': '11'},
'cmd': ('groupmod', '-A', 'foo', 'test')},
'cmd': ['groupmod', '-A', 'foo', 'test']},
{'grains': {'kernel': 'Linux'},
'cmd': ('gpasswd', '--members', 'foo', 'test')},
'cmd': ['gpasswd', '--members', 'foo', 'test']},
{'grains': {'kernel': 'OpenBSD'},
'cmd': 'usermod -G test foo'},
'cmd': ['usermod', '-G', 'test', 'foo']},
]
for os_version in os_version_list:

View file

@ -16,6 +16,7 @@ from tests.support.mock import (
)
# Import Salt Libs
import salt.modules.hosts as hosts
import salt.utils
from salt.ext.six.moves import StringIO
@ -92,8 +93,12 @@ class HostsTestCase(TestCase, LoaderModuleMockMixin):
'''
Tests true if the alias is set
'''
hosts_file = '/etc/hosts'
if salt.utils.is_windows():
hosts_file = r'C:\Windows\System32\Drivers\etc\hosts'
with patch('salt.modules.hosts.__get_hosts_filename',
MagicMock(return_value='/etc/hosts')), \
MagicMock(return_value=hosts_file)), \
patch('os.path.isfile', MagicMock(return_value=False)), \
patch.dict(hosts.__salt__,
{'config.option': MagicMock(return_value=None)}):
@ -139,7 +144,16 @@ class HostsTestCase(TestCase, LoaderModuleMockMixin):
self.close()
def close(self):
data[0] = self.getvalue()
# Don't save unless there's something there. In Windows
# the class gets initialized the first time with mode = w
# which sets the initial value to ''. When the class closes
# it clears out data and causes the test to fail.
# I don't know why it gets initialized with a mode of 'w'
# For the purposes of this test data shouldn't be empty
# This is a problem with this class and not with the hosts
# module
if self.getvalue():
data[0] = self.getvalue()
StringIO.close(self)
expected = '\n'.join((
@ -151,6 +165,7 @@ class HostsTestCase(TestCase, LoaderModuleMockMixin):
mock_opt = MagicMock(return_value=None)
with patch.dict(hosts.__salt__, {'config.option': mock_opt}):
self.assertTrue(hosts.set_host('1.1.1.1', ' '))
self.assertEqual(data[0], expected)
# 'rm_host' function tests: 2
@ -182,9 +197,13 @@ class HostsTestCase(TestCase, LoaderModuleMockMixin):
'''
Tests if specified host entry gets added from the hosts file
'''
hosts_file = '/etc/hosts'
if salt.utils.is_windows():
hosts_file = r'C:\Windows\System32\Drivers\etc\hosts'
with patch('salt.utils.fopen', mock_open()), \
patch('salt.modules.hosts.__get_hosts_filename',
MagicMock(return_value='/etc/hosts')):
MagicMock(return_value=hosts_file)):
mock_opt = MagicMock(return_value=None)
with patch.dict(hosts.__salt__, {'config.option': mock_opt}):
self.assertTrue(hosts.add_host('10.10.10.10', 'Salt1'))

View file

@ -15,38 +15,38 @@ import salt.modules.ini_manage as ini
class IniManageTestCase(TestCase):
TEST_FILE_CONTENT = '''\
# Comment on the first line
# First main option
option1=main1
# Second main option
option2=main2
[main]
# Another comment
test1=value 1
test2=value 2
[SectionB]
test1=value 1B
# Blank line should be above
test3 = value 3B
[SectionC]
# The following option is empty
empty_option=
'''
TEST_FILE_CONTENT = os.linesep.join([
'# Comment on the first line',
'',
'# First main option',
'option1=main1',
'',
'# Second main option',
'option2=main2',
'',
'',
'[main]',
'# Another comment',
'test1=value 1',
'',
'test2=value 2',
'',
'[SectionB]',
'test1=value 1B',
'',
'# Blank line should be above',
'test3 = value 3B',
'',
'[SectionC]',
'# The following option is empty',
'empty_option='
])
maxDiff = None
def setUp(self):
self.tfile = tempfile.NamedTemporaryFile(delete=False, mode='w+')
self.tfile.write(self.TEST_FILE_CONTENT)
self.tfile = tempfile.NamedTemporaryFile(delete=False, mode='w+b')
self.tfile.write(salt.utils.to_bytes(self.TEST_FILE_CONTENT))
self.tfile.close()
def tearDown(self):
@ -121,40 +121,42 @@ empty_option=
})
with salt.utils.fopen(self.tfile.name, 'r') as fp:
file_content = fp.read()
self.assertIn('\nempty_option = \n', file_content,
'empty_option was not preserved')
expected = '{0}{1}{0}'.format(os.linesep, 'empty_option = ')
self.assertIn(expected, file_content, 'empty_option was not preserved')
def test_empty_lines_preserved_after_edit(self):
ini.set_option(self.tfile.name, {
'SectionB': {'test3': 'new value 3B'},
})
expected = os.linesep.join([
'# Comment on the first line',
'',
'# First main option',
'option1 = main1',
'',
'# Second main option',
'option2 = main2',
'',
'[main]',
'# Another comment',
'test1 = value 1',
'',
'test2 = value 2',
'',
'[SectionB]',
'test1 = value 1B',
'',
'# Blank line should be above',
'test3 = new value 3B',
'',
'[SectionC]',
'# The following option is empty',
'empty_option = ',
''
])
with salt.utils.fopen(self.tfile.name, 'r') as fp:
file_content = fp.read()
self.assertEqual('''\
# Comment on the first line
# First main option
option1 = main1
# Second main option
option2 = main2
[main]
# Another comment
test1 = value 1
test2 = value 2
[SectionB]
test1 = value 1B
# Blank line should be above
test3 = new value 3B
[SectionC]
# The following option is empty
empty_option =
''', file_content)
self.assertEqual(expected, file_content)
def test_empty_lines_preserved_after_multiple_edits(self):
ini.set_option(self.tfile.name, {

View file

@ -19,7 +19,7 @@ from tests.support.mock import (
# Import Salt Libs
import salt.utils
from salt.exceptions import CommandExecutionError
from salt.exceptions import CommandExecutionError, CommandNotFoundError
import salt.modules.mount as mount
MOCK_SHELL_FILE = 'A B C D F G\n'
@ -242,15 +242,26 @@ class MountTestCase(TestCase, LoaderModuleMockMixin):
'''
Returns true if the command passed is a fuse mountable application
'''
with patch.object(salt.utils, 'which', return_value=None):
# Return False if fuse doesn't exist
with patch('salt.utils.which', return_value=None):
self.assertFalse(mount.is_fuse_exec('cmd'))
with patch.object(salt.utils, 'which', return_value=True):
self.assertFalse(mount.is_fuse_exec('cmd'))
# Return CommandNotFoundError if fuse exists, but ldd doesn't exist
with patch('salt.utils.which', side_effect=[True, False]):
self.assertRaises(CommandNotFoundError, mount.is_fuse_exec, 'cmd')
mock = MagicMock(side_effect=[1, 0])
with patch.object(salt.utils, 'which', mock):
self.assertFalse(mount.is_fuse_exec('cmd'))
# Return False if fuse exists, ldd exists, but libfuse is not in the
# return
with patch('salt.utils.which', side_effect=[True, True]):
mock = MagicMock(return_value='not correct')
with patch.dict(mount.__salt__, {'cmd.run': mock}):
self.assertFalse(mount.is_fuse_exec('cmd'))
# Return True if fuse exists, ldd exists, and libfuse is in the return
with patch('salt.utils.which', side_effect=[True, True]):
mock = MagicMock(return_value='contains libfuse')
with patch.dict(mount.__salt__, {'cmd.run': mock}):
self.assertTrue(mount.is_fuse_exec('cmd'))
def test_swaps(self):
'''

View file

@ -34,7 +34,8 @@ class PamTestCase(TestCase):
'''
Test if the parsing function works
'''
with patch('salt.utils.fopen', mock_open(read_data=MOCK_FILE)):
with patch('os.path.exists', return_value=True), \
patch('salt.utils.fopen', mock_open(read_data=MOCK_FILE)):
self.assertListEqual(pam.read_file('/etc/pam.d/login'),
[{'arguments': [], 'control_flag': 'ok',
'interface': 'ok', 'module': 'ignore'}])

View file

@ -49,21 +49,24 @@ class PartedTestCase(TestCase, LoaderModuleMockMixin):
def test_virtual_bails_without_parted(self):
'''If parted not in PATH, __virtual__ shouldn't register module'''
with patch('salt.utils.which', lambda exe: not exe == "parted"):
with patch('salt.utils.which', lambda exe: not exe == "parted"),\
patch('salt.utils.is_windows', return_value=False):
ret = parted.__virtual__()
err = (False, 'The parted execution module failed to load parted binary is not in the path.')
self.assertEqual(err, ret)
def test_virtual_bails_without_lsblk(self):
'''If lsblk not in PATH, __virtual__ shouldn't register module'''
with patch('salt.utils.which', lambda exe: not exe == "lsblk"):
with patch('salt.utils.which', lambda exe: not exe == "lsblk"),\
patch('salt.utils.is_windows', return_value=False):
ret = parted.__virtual__()
err = (False, 'The parted execution module failed to load lsblk binary is not in the path.')
self.assertEqual(err, ret)
def test_virtual_bails_without_partprobe(self):
'''If partprobe not in PATH, __virtual__ shouldn't register module'''
with patch('salt.utils.which', lambda exe: not exe == "partprobe"):
with patch('salt.utils.which', lambda exe: not exe == "partprobe"),\
patch('salt.utils.is_windows', return_value=False):
ret = parted.__virtual__()
err = (False, 'The parted execution module failed to load partprobe binary is not in the path.')
self.assertEqual(err, ret)

View file

@ -18,6 +18,7 @@ from tests.support.mock import (
# Import Salt Libs
import salt.modules.pw_group as pw_group
import salt.utils
@skipIf(NO_MOCK, NO_MOCK_REASON)
@ -44,6 +45,7 @@ class PwGroupTestCase(TestCase, LoaderModuleMockMixin):
with patch.dict(pw_group.__salt__, {'cmd.run_all': mock}):
self.assertTrue(pw_group.delete('a'))
@skipIf(salt.utils.is_windows(), 'grp not available on Windows')
def test_info(self):
'''
Tests to return information about a group
@ -57,6 +59,7 @@ class PwGroupTestCase(TestCase, LoaderModuleMockMixin):
with patch.dict(pw_group.grinfo, mock):
self.assertDictEqual(pw_group.info('name'), {})
@skipIf(salt.utils.is_windows(), 'grp not available on Windows')
def test_getent(self):
'''
Tests for return info on all groups

View file

@ -80,15 +80,14 @@ class QemuNbdTestCase(TestCase, LoaderModuleMockMixin):
with patch.dict(qemu_nbd.__salt__, {'cmd.run': mock}):
self.assertEqual(qemu_nbd.init('/srv/image.qcow2'), '')
with patch.object(os.path, 'isfile', mock):
with patch.object(glob, 'glob',
MagicMock(return_value=['/dev/nbd0'])):
with patch.dict(qemu_nbd.__salt__,
{'cmd.run': mock,
'mount.mount': mock,
'cmd.retcode': MagicMock(side_effect=[1, 0])}):
self.assertDictEqual(qemu_nbd.init('/srv/image.qcow2'),
{'{0}/nbd/nbd0/nbd0'.format(tempfile.gettempdir()): '/dev/nbd0'})
with patch.object(os.path, 'isfile', mock),\
patch.object(glob, 'glob', MagicMock(return_value=['/dev/nbd0'])),\
patch.dict(qemu_nbd.__salt__,
{'cmd.run': mock,
'mount.mount': mock,
'cmd.retcode': MagicMock(side_effect=[1, 0])}):
expected = {os.sep.join([tempfile.gettempdir(), 'nbd', 'nbd0', 'nbd0']): '/dev/nbd0'}
self.assertDictEqual(qemu_nbd.init('/srv/image.qcow2'), expected)
# 'clear' function tests: 1
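A side note on the pattern in the hunks above: nested `with patch(...)` blocks are flattened into a single statement using backslash continuations. A sketch of an equivalent approach using `contextlib.ExitStack` (illustrative only, not part of this diff; it scales better when the number of patches varies):

```python
import glob
import os.path
from contextlib import ExitStack
from unittest.mock import patch

with ExitStack() as stack:
    # Each enter_context() replaces one nested `with patch(...)` level.
    stack.enter_context(patch('os.path.isfile', return_value=True))
    stack.enter_context(patch('glob.glob', return_value=['/dev/nbd0']))
    assert os.path.isfile('/definitely/not/a/real/path')
    assert glob.glob('/dev/nbd*') == ['/dev/nbd0']
```

Both patches are undone when the `with` block exits, exactly as with the nested form.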

View file

@ -47,14 +47,19 @@ class SeedTestCase(TestCase, LoaderModuleMockMixin):
'''
Test updating the bootstrap script and copying it to a random location
'''
with patch.dict(seed.__salt__,
{'config.gather_bootstrap_script': MagicMock(return_value='BS_PATH/BS')}):
with patch.object(uuid, 'uuid4', return_value='UUID'):
with patch.object(os.path, 'exists', return_value=True):
with patch.object(os, 'chmod', return_value=None):
with patch.object(shutil, 'copy', return_value=None):
self.assertEqual(seed.prep_bootstrap('MPT'), ('MPT/tmp/UUID/BS', '/tmp/UUID'))
self.assertEqual(seed.prep_bootstrap('/MPT'), ('/MPT/tmp/UUID/BS', '/tmp/UUID'))
with patch.dict(seed.__salt__, {'config.gather_bootstrap_script': MagicMock(return_value=os.path.join('BS_PATH', 'BS'))}),\
patch.object(uuid, 'uuid4', return_value='UUID'),\
patch.object(os.path, 'exists', return_value=True),\
patch.object(os, 'chmod', return_value=None),\
patch.object(shutil, 'copy', return_value=None):
expect = (os.path.join('MPT', 'tmp', 'UUID', 'BS'),
os.sep + os.path.join('tmp', 'UUID'))
self.assertEqual(seed.prep_bootstrap('MPT'), expect)
expect = (os.sep + os.path.join('MPT', 'tmp', 'UUID', 'BS'),
os.sep + os.path.join('tmp', 'UUID'))
self.assertEqual(seed.prep_bootstrap(os.sep + 'MPT'), expect)
def test_apply_(self):
'''

View file

@ -109,10 +109,9 @@ class VirtualenvTestCase(TestCase, LoaderModuleMockMixin):
# Are we logging the deprecation information?
self.assertIn(
'INFO:The virtualenv \'--never-download\' option has been '
'deprecated in virtualenv(>=1.10), as such, the '
'\'never_download\' option to `virtualenv.create()` has '
'also been deprecated and it\'s not necessary anymore.',
'INFO:--never-download was deprecated in 1.10.0, '
'but reimplemented in 14.0.0. If this feature is needed, '
'please install a supported virtualenv version.',
handler.messages
)

View file

@ -95,7 +95,10 @@ class LocalCacheCleanOldJobsTestCase(TestCase, LoaderModuleMockMixin):
local_cache.clean_old_jobs()
# Get the name of the JID directory that was created to test against
jid_dir_name = jid_dir.rpartition('/')[2]
if salt.utils.is_windows():
jid_dir_name = jid_dir.rpartition('\\')[2]
else:
jid_dir_name = jid_dir.rpartition('/')[2]
# Assert the JID directory is still present to be cleaned after keep_jobs interval
self.assertEqual([jid_dir_name], os.listdir(TMP_JID_DIR))

View file

@ -1,74 +1,556 @@
# -*- coding: utf-8 -*-
from __future__ import absolute_import
import time
import shutil
import tempfile
import codecs
import glob
import logging
import os
from contextlib import contextmanager
import textwrap
import yaml
import salt.utils
from salt.utils.process import clean_proc
import salt.loader
import salt.utils.reactor as reactor
from tests.integration import AdaptedConfigurationTestCaseMixin
from tests.support.paths import TMP
from tests.support.unit import TestCase, skipIf
from tests.support.mock import patch, MagicMock
from tests.support.mixins import AdaptedConfigurationTestCaseMixin
from tests.support.mock import (
NO_MOCK,
NO_MOCK_REASON,
patch,
MagicMock,
Mock,
mock_open,
)
REACTOR_CONFIG = '''\
reactor:
- old_runner:
- /srv/reactor/old_runner.sls
- old_wheel:
- /srv/reactor/old_wheel.sls
- old_local:
- /srv/reactor/old_local.sls
- old_cmd:
- /srv/reactor/old_cmd.sls
- old_caller:
- /srv/reactor/old_caller.sls
- new_runner:
- /srv/reactor/new_runner.sls
- new_wheel:
- /srv/reactor/new_wheel.sls
- new_local:
- /srv/reactor/new_local.sls
- new_cmd:
- /srv/reactor/new_cmd.sls
- new_caller:
- /srv/reactor/new_caller.sls
'''
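For reference, the `reactor` mapping above is a YAML list of single-key dicts; `setUpClass` below repacks it into a flat tag → SLS-list dict via `salt.utils.repack_dictlist`. A standalone sketch of that repacking (a simplified reimplementation for illustration, not Salt's actual code):

```python
# Parsed from YAML, the reactor mapping is a list of single-key dicts.
config = [
    {'old_runner': ['/srv/reactor/old_runner.sls']},
    {'new_runner': ['/srv/reactor/new_runner.sls']},
]

def repack(dictlist):
    """Collapse a list of single-key dicts into one tag -> value dict."""
    out = {}
    for item in dictlist:
        for key, value in item.items():
            out[key] = value
    return out

reaction_map = repack(config)
assert reaction_map['old_runner'] == ['/srv/reactor/old_runner.sls']
```

The resulting map is what `list_reactors(tag)` is checked against in `test_list_reactors` below.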
REACTOR_DATA = {
'runner': {'data': {'message': 'This is an error'}},
'wheel': {'data': {'id': 'foo'}},
'local': {'data': {'pkg': 'zsh', 'repo': 'updates'}},
'cmd': {'data': {'pkg': 'zsh', 'repo': 'updates'}},
'caller': {'data': {'path': '/tmp/foo'}},
}
SLS = {
'/srv/reactor/old_runner.sls': textwrap.dedent('''\
raise_error:
runner.error.error:
- name: Exception
- message: {{ data['data']['message'] }}
'''),
'/srv/reactor/old_wheel.sls': textwrap.dedent('''\
remove_key:
wheel.key.delete:
- match: {{ data['data']['id'] }}
'''),
'/srv/reactor/old_local.sls': textwrap.dedent('''\
install_zsh:
local.state.single:
- tgt: test
- arg:
- pkg.installed
- {{ data['data']['pkg'] }}
- kwarg:
fromrepo: {{ data['data']['repo'] }}
'''),
'/srv/reactor/old_cmd.sls': textwrap.dedent('''\
install_zsh:
cmd.state.single:
- tgt: test
- arg:
- pkg.installed
- {{ data['data']['pkg'] }}
- kwarg:
fromrepo: {{ data['data']['repo'] }}
'''),
'/srv/reactor/old_caller.sls': textwrap.dedent('''\
touch_file:
caller.file.touch:
- args:
- {{ data['data']['path'] }}
'''),
'/srv/reactor/new_runner.sls': textwrap.dedent('''\
raise_error:
runner.error.error:
- args:
- name: Exception
- message: {{ data['data']['message'] }}
'''),
'/srv/reactor/new_wheel.sls': textwrap.dedent('''\
remove_key:
wheel.key.delete:
- args:
- match: {{ data['data']['id'] }}
'''),
'/srv/reactor/new_local.sls': textwrap.dedent('''\
install_zsh:
local.state.single:
- tgt: test
- args:
- fun: pkg.installed
- name: {{ data['data']['pkg'] }}
- fromrepo: {{ data['data']['repo'] }}
'''),
'/srv/reactor/new_cmd.sls': textwrap.dedent('''\
install_zsh:
cmd.state.single:
- tgt: test
- args:
- fun: pkg.installed
- name: {{ data['data']['pkg'] }}
- fromrepo: {{ data['data']['repo'] }}
'''),
'/srv/reactor/new_caller.sls': textwrap.dedent('''\
touch_file:
caller.file.touch:
- args:
- name: {{ data['data']['path'] }}
'''),
}
LOW_CHUNKS = {
# Note that the "name" value in the chunk has been overwritten by the
# "name" argument in the SLS. This is one reason why the new schema was
# needed.
'old_runner': [{
'state': 'runner',
'__id__': 'raise_error',
'__sls__': '/srv/reactor/old_runner.sls',
'order': 1,
'fun': 'error.error',
'name': 'Exception',
'message': 'This is an error',
}],
'old_wheel': [{
'state': 'wheel',
'__id__': 'remove_key',
'name': 'remove_key',
'__sls__': '/srv/reactor/old_wheel.sls',
'order': 1,
'fun': 'key.delete',
'match': 'foo',
}],
'old_local': [{
'state': 'local',
'__id__': 'install_zsh',
'name': 'install_zsh',
'__sls__': '/srv/reactor/old_local.sls',
'order': 1,
'tgt': 'test',
'fun': 'state.single',
'arg': ['pkg.installed', 'zsh'],
'kwarg': {'fromrepo': 'updates'},
}],
'old_cmd': [{
'state': 'local', # 'cmd' should be aliased to 'local'
'__id__': 'install_zsh',
'name': 'install_zsh',
'__sls__': '/srv/reactor/old_cmd.sls',
'order': 1,
'tgt': 'test',
'fun': 'state.single',
'arg': ['pkg.installed', 'zsh'],
'kwarg': {'fromrepo': 'updates'},
}],
'old_caller': [{
'state': 'caller',
'__id__': 'touch_file',
'name': 'touch_file',
'__sls__': '/srv/reactor/old_caller.sls',
'order': 1,
'fun': 'file.touch',
'args': ['/tmp/foo'],
}],
'new_runner': [{
'state': 'runner',
'__id__': 'raise_error',
'name': 'raise_error',
'__sls__': '/srv/reactor/new_runner.sls',
'order': 1,
'fun': 'error.error',
'args': [
{'name': 'Exception'},
{'message': 'This is an error'},
],
}],
'new_wheel': [{
'state': 'wheel',
'__id__': 'remove_key',
'name': 'remove_key',
'__sls__': '/srv/reactor/new_wheel.sls',
'order': 1,
'fun': 'key.delete',
'args': [
{'match': 'foo'},
],
}],
'new_local': [{
'state': 'local',
'__id__': 'install_zsh',
'name': 'install_zsh',
'__sls__': '/srv/reactor/new_local.sls',
'order': 1,
'tgt': 'test',
'fun': 'state.single',
'args': [
{'fun': 'pkg.installed'},
{'name': 'zsh'},
{'fromrepo': 'updates'},
],
}],
'new_cmd': [{
'state': 'local',
'__id__': 'install_zsh',
'name': 'install_zsh',
'__sls__': '/srv/reactor/new_cmd.sls',
'order': 1,
'tgt': 'test',
'fun': 'state.single',
'args': [
{'fun': 'pkg.installed'},
{'name': 'zsh'},
{'fromrepo': 'updates'},
],
}],
'new_caller': [{
'state': 'caller',
'__id__': 'touch_file',
'name': 'touch_file',
'__sls__': '/srv/reactor/new_caller.sls',
'order': 1,
'fun': 'file.touch',
'args': [
{'name': '/tmp/foo'},
],
}],
}
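The comment at the top of `LOW_CHUNKS` is the crux: under the old schema a `name` argument in the SLS clobbers the chunk's own `name`, while the new schema carries call arguments in an `args` list of single-key dicts. A sketch of how such a list flattens into keyword arguments (a hypothetical helper for illustration, not Salt's implementation):

```python
def flatten_new_args(args):
    """Merge an 'args' list of single-key dicts into a kwargs dict."""
    kwargs = {}
    for item in args:
        kwargs.update(item)
    return kwargs

# New schema: keyword arguments survive intact, including 'name'.
new_args = [
    {'fun': 'pkg.installed'},
    {'name': 'zsh'},
    {'fromrepo': 'updates'},
]
assert flatten_new_args(new_args) == {
    'fun': 'pkg.installed', 'name': 'zsh', 'fromrepo': 'updates'}

# Old schema equivalent for comparison:
old_arg = ['pkg.installed', 'zsh']      # positional 'arg' list
old_kwarg = {'fromrepo': 'updates'}     # separate 'kwarg' dict
```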
WRAPPER_CALLS = {
'old_runner': (
'error.error',
{
'__state__': 'runner',
'__id__': 'raise_error',
'__sls__': '/srv/reactor/old_runner.sls',
'__user__': 'Reactor',
'order': 1,
'arg': [],
'kwarg': {
'name': 'Exception',
'message': 'This is an error',
},
'name': 'Exception',
'message': 'This is an error',
},
),
'old_wheel': (
'key.delete',
{
'__state__': 'wheel',
'__id__': 'remove_key',
'name': 'remove_key',
'__sls__': '/srv/reactor/old_wheel.sls',
'order': 1,
'__user__': 'Reactor',
'arg': ['foo'],
'kwarg': {},
'match': 'foo',
},
),
'old_local': {
'args': ('test', 'state.single'),
'kwargs': {
'state': 'local',
'__id__': 'install_zsh',
'name': 'install_zsh',
'__sls__': '/srv/reactor/old_local.sls',
'order': 1,
'arg': ['pkg.installed', 'zsh'],
'kwarg': {'fromrepo': 'updates'},
},
},
'old_cmd': {
'args': ('test', 'state.single'),
'kwargs': {
'state': 'local',
'__id__': 'install_zsh',
'name': 'install_zsh',
'__sls__': '/srv/reactor/old_cmd.sls',
'order': 1,
'arg': ['pkg.installed', 'zsh'],
'kwarg': {'fromrepo': 'updates'},
},
},
'old_caller': {
'args': ('file.touch', '/tmp/foo'),
'kwargs': {},
},
'new_runner': (
'error.error',
{
'__state__': 'runner',
'__id__': 'raise_error',
'name': 'raise_error',
'__sls__': '/srv/reactor/new_runner.sls',
'__user__': 'Reactor',
'order': 1,
'arg': (),
'kwarg': {
'name': 'Exception',
'message': 'This is an error',
},
},
),
'new_wheel': (
'key.delete',
{
'__state__': 'wheel',
'__id__': 'remove_key',
'name': 'remove_key',
'__sls__': '/srv/reactor/new_wheel.sls',
'order': 1,
'__user__': 'Reactor',
'arg': (),
'kwarg': {'match': 'foo'},
},
),
'new_local': {
'args': ('test', 'state.single'),
'kwargs': {
'state': 'local',
'__id__': 'install_zsh',
'name': 'install_zsh',
'__sls__': '/srv/reactor/new_local.sls',
'order': 1,
'arg': (),
'kwarg': {
'fun': 'pkg.installed',
'name': 'zsh',
'fromrepo': 'updates',
},
},
},
'new_cmd': {
'args': ('test', 'state.single'),
'kwargs': {
'state': 'local',
'__id__': 'install_zsh',
'name': 'install_zsh',
'__sls__': '/srv/reactor/new_cmd.sls',
'order': 1,
'arg': (),
'kwarg': {
'fun': 'pkg.installed',
'name': 'zsh',
'fromrepo': 'updates',
},
},
},
'new_caller': {
'args': ('file.touch',),
'kwargs': {'name': '/tmp/foo'},
},
}
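`WRAPPER_CALLS` records the exact arguments each client mock should receive; the `TestReactWrap` tests below patch the thread pool or client cache and verify the dispatch with `assert_called_with`. A self-contained sketch of that assertion pattern (the `run` function is a stand-in, not `ReactWrap.run` itself):

```python
from unittest.mock import Mock

pool = Mock()

def run(chunk):
    # Stand-in for ReactWrap.run handing a runner chunk to the pool.
    pool.fire_async(chunk['fun'], args=chunk)

chunk = {'fun': 'error.error', 'order': 1}
run(chunk)
# assert_called_with raises AssertionError if the recorded call differed.
pool.fire_async.assert_called_with('error.error', args=chunk)
assert pool.fire_async.call_count == 1
```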
log = logging.getLogger(__name__)
@contextmanager
def reactor_process(opts, reactor):
opts = dict(opts)
opts['reactor'] = reactor
proc = reactor.Reactor(opts)
proc.start()
try:
if os.environ.get('TRAVIS_PYTHON_VERSION', None) is not None:
# Travis is slow
time.sleep(10)
else:
time.sleep(2)
yield
finally:
clean_proc(proc)
def _args_sideffect(*args, **kwargs):
return args, kwargs
@skipIf(True, 'Skipping until it is clear what this is supposed to be testing and how')
@skipIf(NO_MOCK, NO_MOCK_REASON)
class TestReactor(TestCase, AdaptedConfigurationTestCaseMixin):
def setUp(self):
self.opts = self.get_temp_config('master')
self.tempdir = tempfile.mkdtemp(dir=TMP)
self.sls_name = os.path.join(self.tempdir, 'test.sls')
with salt.utils.fopen(self.sls_name, 'w') as fh:
fh.write('''
update_fileserver:
runner.fileserver.update
''')
'''
Tests for constructing the low chunks to be executed via the Reactor
'''
@classmethod
def setUpClass(cls):
'''
Load the reactor config for mocking
'''
cls.opts = cls.get_temp_config('master')
reactor_config = yaml.safe_load(REACTOR_CONFIG)
cls.opts.update(reactor_config)
cls.reactor = reactor.Reactor(cls.opts)
cls.reaction_map = salt.utils.repack_dictlist(reactor_config['reactor'])
renderers = salt.loader.render(cls.opts, {})
cls.render_pipe = [(renderers[x], '') for x in ('jinja', 'yaml')]
def tearDown(self):
if os.path.isdir(self.tempdir):
shutil.rmtree(self.tempdir)
del self.opts
del self.tempdir
del self.sls_name
@classmethod
def tearDownClass(cls):
del cls.opts
del cls.reactor
del cls.render_pipe
def test_basic(self):
reactor_config = [
{'salt/tagA': ['/srv/reactor/A.sls']},
{'salt/tagB': ['/srv/reactor/B.sls']},
{'*': ['/srv/reactor/all.sls']},
]
wrap = reactor.ReactWrap(self.opts)
with patch.object(reactor.ReactWrap, 'local', MagicMock(side_effect=_args_sideffect)):
ret = wrap.run({'fun': 'test.ping',
'state': 'local',
'order': 1,
'name': 'foo_action',
'__id__': 'foo_action'})
raise Exception(ret)
def test_list_reactors(self):
'''
Ensure that list_reactors() returns the correct list of reactor SLS
files for each tag.
'''
for schema in ('old', 'new'):
for rtype in REACTOR_DATA:
tag = '_'.join((schema, rtype))
self.assertEqual(
self.reactor.list_reactors(tag),
self.reaction_map[tag]
)
def test_reactions(self):
'''
Ensure that the correct reactions are built from the configured SLS
files and tag data.
'''
for schema in ('old', 'new'):
for rtype in REACTOR_DATA:
tag = '_'.join((schema, rtype))
log.debug('test_reactions: processing %s', tag)
reactors = self.reactor.list_reactors(tag)
log.debug('test_reactions: %s reactors: %s', tag, reactors)
# No globbing in our example SLS, and the files don't actually
# exist, so mock glob.glob to just return back the path passed
# to it.
with patch.object(
glob,
'glob',
MagicMock(side_effect=lambda x: [x])):
# The below four mocks are all so that
# salt.template.compile_template() will read the templates
# we've mocked up in the SLS global variable above.
with patch.object(
os.path, 'isfile',
MagicMock(return_value=True)):
with patch.object(
salt.utils, 'is_empty',
MagicMock(return_value=False)):
with patch.object(
codecs, 'open',
mock_open(read_data=SLS[reactors[0]])):
with patch.object(
salt.template, 'template_shebang',
MagicMock(return_value=self.render_pipe)):
reactions = self.reactor.reactions(
tag,
REACTOR_DATA[rtype],
reactors,
)
log.debug(
'test_reactions: %s reactions: %s',
tag, reactions
)
self.assertEqual(reactions, LOW_CHUNKS[tag])
@skipIf(NO_MOCK, NO_MOCK_REASON)
class TestReactWrap(TestCase, AdaptedConfigurationTestCaseMixin):
'''
Tests that we are formulating the wrapper calls properly
'''
@classmethod
def setUpClass(cls):
cls.wrap = reactor.ReactWrap(cls.get_temp_config('master'))
@classmethod
def tearDownClass(cls):
del cls.wrap
def test_runner(self):
'''
Test runner reactions using both the old and new config schema
'''
for schema in ('old', 'new'):
tag = '_'.join((schema, 'runner'))
chunk = LOW_CHUNKS[tag][0]
thread_pool = Mock()
thread_pool.fire_async = Mock()
with patch.object(self.wrap, 'pool', thread_pool):
self.wrap.run(chunk)
thread_pool.fire_async.assert_called_with(
self.wrap.client_cache['runner'].low,
args=WRAPPER_CALLS[tag]
)
def test_wheel(self):
'''
Test wheel reactions using both the old and new config schema
'''
for schema in ('old', 'new'):
tag = '_'.join((schema, 'wheel'))
chunk = LOW_CHUNKS[tag][0]
thread_pool = Mock()
thread_pool.fire_async = Mock()
with patch.object(self.wrap, 'pool', thread_pool):
self.wrap.run(chunk)
thread_pool.fire_async.assert_called_with(
self.wrap.client_cache['wheel'].low,
args=WRAPPER_CALLS[tag]
)
def test_local(self):
'''
Test local reactions using both the old and new config schema
'''
for schema in ('old', 'new'):
tag = '_'.join((schema, 'local'))
chunk = LOW_CHUNKS[tag][0]
client_cache = {'local': Mock()}
client_cache['local'].cmd_async = Mock()
with patch.object(self.wrap, 'client_cache', client_cache):
self.wrap.run(chunk)
client_cache['local'].cmd_async.assert_called_with(
*WRAPPER_CALLS[tag]['args'],
**WRAPPER_CALLS[tag]['kwargs']
)
def test_cmd(self):
'''
Test cmd reactions (alias for 'local') using both the old and new
config schema
'''
for schema in ('old', 'new'):
tag = '_'.join((schema, 'cmd'))
chunk = LOW_CHUNKS[tag][0]
client_cache = {'local': Mock()}
client_cache['local'].cmd_async = Mock()
with patch.object(self.wrap, 'client_cache', client_cache):
self.wrap.run(chunk)
client_cache['local'].cmd_async.assert_called_with(
*WRAPPER_CALLS[tag]['args'],
**WRAPPER_CALLS[tag]['kwargs']
)
def test_caller(self):
'''
Test caller reactions using both the old and new config schema
'''
for schema in ('old', 'new'):
tag = '_'.join((schema, 'caller'))
chunk = LOW_CHUNKS[tag][0]
client_cache = {'caller': Mock()}
client_cache['caller'].cmd = Mock()
with patch.object(self.wrap, 'client_cache', client_cache):
self.wrap.run(chunk)
client_cache['caller'].cmd.assert_called_with(
*WRAPPER_CALLS[tag]['args'],
**WRAPPER_CALLS[tag]['kwargs']
)