[2016.3] Merge forward from 2015.8 to 2016.3 (#32784)

* json encode arguments passed to an execution module function call

this fixes problems where you could pass a string to a module function
and have the yaml decoder, which is used when parsing command line
arguments, change its type entirely. for example:

__salt__['test.echo']('{foo: bar}')

the test.echo function just returns the argument it's given. however,
because it's being called through a salt-call process like this:

salt-call --local test.echo {foo: bar}

salt thinks it's yaml and therefore yaml decodes it. the return value
from the test.echo call above is therefore a dict, not a string.
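A minimal standalone sketch of why JSON-encoding the arguments helps (no Salt involved; the decode behavior described above is illustrated with plain `json`, since JSON is a subset of YAML):

```python
import json

# CLI argument as received for: salt-call --local test.echo {foo: bar}
raw_arg = '{foo: bar}'

# Salt YAML-decodes CLI arguments, so the bare string above would
# silently become the dict {'foo': 'bar'} on the receiving side.
# The fix: JSON-encode each argument when building the salt-call argv.
# The quoted result round-trips back to the original string instead of
# being parsed as a YAML mapping.
encoded = json.dumps(raw_arg)
print(encoded)              # "{foo: bar}"  (now a quoted scalar)
print(json.loads(encoded))  # the original string, type preserved
```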

* Prevent crash if pygit2 package is requesting re-compilation of the e… (#32652)

* Prevent crash if pygit2 package is requesting re-compilation of the entire library on production systems (no *devel packages)

* Fix PEP8: move imports to the top of the file

* Move logger up

* Add log error message in case if exception is not an ImportError

* align OS grains from older SLES with current one (#32649)

* Fixing critical bug to remove only the specified Host instead of the entire Host cluster (#32640)

* yumpkg: Ignore epoch in version comparison for explicit versions without an epoch (#32563)

* yumpkg: Ignore epoch in version comparison for explicit versions without an epoch

Also properly handle comparisons for packages with multiple versions.

Resolves #32229

* Don't attempt downgrade for kernel and its subpackages

Multiple versions are supported since their paths do not conflict.
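The epoch handling can be sketched as a small helper, mirroring the `norm_epoch` lambda added in the yumpkg diff below: if the requested version carries no epoch, strip the epoch from the installed version before comparing.

```python
def norm_epoch(installed, wanted):
    # RPM versions may carry an "epoch:" prefix, e.g. '1:4.2.46-34.el7'.
    # If the user-specified version (wanted) has no epoch, trim the epoch
    # from the installed version so '4.2.46-34.el7' compares as equal.
    return installed.split(':', 1)[-1] if ':' not in wanted else installed

print(norm_epoch('1:4.2.46-34.el7', '4.2.46-34.el7'))    # '4.2.46-34.el7'
print(norm_epoch('1:4.2.46-34.el7', '1:4.2.46-34.el7'))  # unchanged
```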

* Lower log level for pillar cache (#32655)

This shouldn't show up on salt-call runs

* Don't access deprecated Exception.message attribute. (#32556)

* Don't access deprecated Exception.message attribute.

This avoids a deprecation warning message in the logs.
Use the new function salt.exceptions.get_error_message(e) instead.

* Fixed module docs test.
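The replacement helper is tiny; per the diff below, it reads the first exception argument rather than the deprecated `.message` attribute:

```python
def get_error_message(error):
    '''
    Get human readable message from Python Exception
    '''
    # Exception.message was removed in Python 3; args[0] holds the same
    # text when the exception was constructed with a message.
    return error.args[0] if error.args else ''

print(get_error_message(ValueError('boom')))  # 'boom'
print(get_error_message(ValueError()))        # ''
```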

* Fix for issue 32523 (#32672)

* Fix routes for redhat < 6

* Handle a couple of arguments better (Azure) (#32683)

* backporting a fix from develop where the use of splay would result in seconds=0 in the schedule.list when no seconds were specified in the original schedule

* Handle when beacon not configured and we try to enable/disable them (#32692)

* Handle the situation when the beacon is not configured and we try to disable it

* Add a couple more missing returns in the enable & disable functions

* Check dependencies type before applying str operations (#32693)

* Update external auth documentation to list supported matchers. (#32733)

Thanks to #31598, all matchers are supported for eauth configuration.
But we still have no way to use compound matchers in eauth configuration.
Update the documentation to explicitly express this limitation.

* modules.win_dacl: consistent case of dacl constants (#32720)

* Document pillar cache options (#32643)

* Add note about Pillar data cache requirement for Pillar targeting method

* Add `saltutil.refresh_pillar` function to the scheduled Minion jobs

* Minor fixes in docs

* Add note about relations between `pillar_cache` option and Pillar Targeting
to Master config comments with small reformatting

* Document Pillar Cache Options for Salt Master

* Document Minions Targeting with Mine

* Remove `saltutil.refresh_pillar` scheduled persistent job

* Properly handle minion failback failure. (#32749)

* Properly handle minion failback failure.

Initiate a minion restart if all masters are down on __master_disconnect,
just as the minion does on the initial master connect at start.

* Fixed unit test

* Improve documentation on pygit2 versions (#32779)

This adds an explanation of the python-cffi dep added in pygit2 0.21.0,
and recommends 0.20.3 for LTS distros. It also links to the salt-pack
issue which tracks the progress of adding pygit2 to our Debian and
Ubuntu repositories.

* Pylint fix
This commit is contained in:
Nicole Thomas 2016-04-25 15:26:09 -06:00
parent 7141119ff6
commit f9ffcb697a
30 changed files with 330 additions and 122 deletions


@ -560,7 +560,7 @@
# and the first repo to have the file will return it.
# When using the git backend branches and tags are translated into salt
# environments.
# Note: file:// repos will be treated as a remote, so refs you want used must
# Note: file:// repos will be treated as a remote, so refs you want used must
# exist in that repo as *local* refs.
#gitfs_remotes:
# - git://github.com/saltstack/salt-states.git
@ -632,42 +632,41 @@
# A master can cache pillars locally to bypass the expense of having to render them
# for each minion on every request. This feature should only be enabled in cases
# where pillar rendering time is known to be unsatisfactory and any attendent security
# where pillar rendering time is known to be unsatisfactory and any attendant security
# concerns about storing pillars in a master cache have been addressed.
#
# When enabling this feature, be certain to read through the additional pillar_cache_*
# configuration options to fully understand the tuneable parameters and their implications.
# When enabling this feature, be certain to read through the additional ``pillar_cache_*``
# configuration options to fully understand the tunable parameters and their implications.
#
# Note: setting ``pillar_cache: True`` has no effect on targeting Minions with Pillars.
# See https://docs.saltstack.com/en/latest/topics/targeting/pillar.html
#pillar_cache: False
# If and only if a master has set `pillar_cache: True`, the cache TTL controls the amount
# If and only if a master has set ``pillar_cache: True``, the cache TTL controls the amount
# of time, in seconds, before the cache is considered invalid by a master and a fresh
# pillar is recompiled and stored.
#
# pillar_cache_ttl: 3600
#pillar_cache_ttl: 3600
# If an only if a master has set `pillar_cache: True`, one of several storage providers
# can be utililzed.
# If an only if a master has set ``pillar_cache: True``, one of several storage providers
# can be utilized:
#
# `disk`: The default storage backend. This caches rendered pillars to the master cache.
# Rendered pillars are serialized and deserialized as msgpack structures for speed.
# Note that pillars are stored UNENCRYPTED. Ensure that the master cache
# has permissions set appropriately. (Sane defaults are provided.)
# disk: The default storage backend. This caches rendered pillars to the master cache.
# Rendered pillars are serialized and deserialized as ``msgpack`` structures for
# speed. Note that pillars are stored UNENCRYPTED. Ensure that the master cache
# has permissions set appropriately (sane defaults are provided).
#
#`memory`: [EXPERIMENTAL] An optional backend for pillar caches which uses a pure-Python
# in-memory data structure for maximal performance. There are several cavaets,
# however. First, because each master worker contains its own in-memory cache,
# there is no guarantee of cache consistency between minion requests. This
# works best in situations where the pillar rarely if ever changes. Secondly,
# and perhaps more importantly, this means that unencrypted pillars will
# be accessible to any process which can examine the memory of the salt-master!
# This may represent a substantial security risk.
# memory: [EXPERIMENTAL] An optional backend for pillar caches which uses a pure-Python
# in-memory data structure for maximal performance. There are several caveats,
# however. First, because each master worker contains its own in-memory cache,
# there is no guarantee of cache consistency between minion requests. This
# works best in situations where the pillar rarely if ever changes. Secondly,
# and perhaps more importantly, this means that unencrypted pillars will
# be accessible to any process which can examine the memory of the ``salt-master``!
# This may represent a substantial security risk.
#
#pillar_cache_backend: disk
##### Syndic settings #####
##########################################
# The Salt syndic is used to pass commands through a master from a higher


@ -10,7 +10,7 @@ of the Salt system each have a respective configuration file. The
:command:`salt-minion` is configured via the minion configuration file.
.. seealso::
:ref:`example master configuration file <configuration-examples-master>`
:ref:`Example master configuration file <configuration-examples-master>`.
The configuration file for the salt-master is located at
:file:`/etc/salt/master` by default. A notable exception is FreeBSD, where the
@ -20,7 +20,6 @@ options are as follows:
Primary Master Configuration
============================
.. conf_master:: interface
``interface``
@ -1594,7 +1593,6 @@ authenticate is protected by a passphrase.
gitfs_passphrase: mypassphrase
hg: Mercurial Remote File Server Backend
----------------------------------------
@ -2119,13 +2117,13 @@ configuration is the same as :conf_master:`file_roots`:
prod:
- /srv/pillar/prod
.. _master-configuration-ext-pillar:
.. conf_master:: ext_pillar
``ext_pillar``
--------------
.. _master-configuration-ext-pillar:
The ext_pillar option allows for any number of external pillar interfaces to be
called when populating pillar data. The configuration is based on ext_pillar
functions. The available ext_pillar functions can be found herein:
@ -2163,7 +2161,7 @@ ext_pillar.
ext_pillar_first: False
.. _git_pillar-config-opts:
.. _git-pillar-config-opts:
Git External Pillar (git_pillar) Configuration Options
------------------------------------------------------
@ -2353,6 +2351,7 @@ they were created by a different master.
.. __: http://www.gluster.org/
.. _git-ext-pillar-auth-opts:
Git External Pillar Authentication Options
******************************************
@ -2460,10 +2459,15 @@ authenticate is protected by a passphrase.
git_pillar_passphrase: mypassphrase
.. _pillar-merging-opts:
Pillar Merging Options
----------------------
.. conf_master:: pillar_source_merging_strategy
``pillar_source_merging_strategy``
----------------------------------
**********************************
.. versionadded:: 2014.7.0
@ -2472,7 +2476,7 @@ Default: ``smart``
The pillar_source_merging_strategy option allows you to configure merging
strategy between different sources. It accepts 4 values:
* recurse:
* ``recurse``:
it will merge recursively mapping of data. For example, theses 2 sources:
@ -2498,7 +2502,7 @@ strategy between different sources. It accepts 4 values:
element2: True
baz: quux
* aggregate:
* ``aggregate``:
instructs aggregation of elements between sources that use the #!yamlex renderer.
@ -2533,7 +2537,7 @@ strategy between different sources. It accepts 4 values:
- quux
- quux2
* overwrite:
* ``overwrite``:
Will use the behaviour of the 2014.1 branch and earlier.
@ -2563,14 +2567,14 @@ strategy between different sources. It accepts 4 values:
third_key: blah
fourth_key: blah
* smart (default):
* ``smart`` (default):
Guesses the best strategy based on the "renderer" setting.
.. conf_master:: pillar_merge_lists
``pillar_merge_lists``
----------------------
**********************
.. versionadded:: 2015.8.0
@ -2582,6 +2586,83 @@ Recursively merge lists by aggregating them instead of replacing them.
pillar_merge_lists: False
.. _pillar-cache-opts:
Pillar Cache Options
--------------------
.. conf_master:: pillar_cache
``pillar_cache``
****************
.. versionadded:: 2015.8.8
Default: ``False``
A master can cache pillars locally to bypass the expense of having to render them
for each minion on every request. This feature should only be enabled in cases
where pillar rendering time is known to be unsatisfactory and any attendant security
concerns about storing pillars in a master cache have been addressed.
When enabling this feature, be certain to read through the additional ``pillar_cache_*``
configuration options to fully understand the tunable parameters and their implications.
.. code-block:: yaml
pillar_cache: False
.. note::
Setting ``pillar_cache: True`` has no effect on
:ref:`targeting minions with pillar <targeting-pillar>`.
.. conf_master:: pillar_cache_ttl
``pillar_cache_ttl``
********************
.. versionadded:: 2015.8.8
Default: ``3600``
If and only if a master has set ``pillar_cache: True``, the cache TTL controls the amount
of time, in seconds, before the cache is considered invalid by a master and a fresh
pillar is recompiled and stored.
.. conf_master:: pillar_cache_backend
``pillar_cache_backend``
************************
.. versionadded:: 2015.8.8
Default: ``disk``
If an only if a master has set ``pillar_cache: True``, one of several storage providers
can be utilized:
* ``disk`` (default):
The default storage backend. This caches rendered pillars to the master cache.
Rendered pillars are serialized and deserialized as ``msgpack`` structures for speed.
Note that pillars are stored UNENCRYPTED. Ensure that the master cache has permissions
set appropriately (sane defaults are provided).
* ``memory`` [EXPERIMENTAL]:
An optional backend for pillar caches which uses a pure-Python
in-memory data structure for maximal performance. There are several caveats,
however. First, because each master worker contains its own in-memory cache,
there is no guarantee of cache consistency between minion requests. This
works best in situations where the pillar rarely if ever changes. Secondly,
and perhaps more importantly, this means that unencrypted pillars will
be accessible to any process which can examine the memory of the ``salt-master``!
This may represent a substantial security risk.
.. code-block:: yaml
pillar_cache_backend: disk
Syndic Server Settings
======================


@ -60,7 +60,7 @@ The access controls are manifested using matchers in these configurations:
In the above example, fred is able to send commands only to minions which match
the specified glob target. This can be expanded to include other functions for
other minions based on standard targets.
other minions based on standard targets (all matchers are supported except the compound one).
.. code-block:: yaml
@ -84,4 +84,3 @@ unrestricted access to salt commands.
.. note::
Functions are matched using regular expressions.


@ -45,10 +45,10 @@ passed, an empty list must be added:
Mine Functions Aliases
----------------------
Function aliases can be used to provide friendly names, usage intentions or to allow
multiple calls of the same function with different arguments. There is a different
syntax for passing positional and key-value arguments. Mixing positional and
key-value arguments is not supported.
Function aliases can be used to provide friendly names, usage intentions or to
allow multiple calls of the same function with different arguments. There is a
different syntax for passing positional and key-value arguments. Mixing
positional and key-value arguments is not supported.
.. versionadded:: 2014.7.0
@ -115,6 +115,20 @@ stored in a different location. Here is an example of a flat roster containing
of the Minion in question. This results in a non-trivial delay in
retrieving the requested data.
Minions Targeting with Mine
===========================
The ``mine.get`` function supports various methods of :ref:`Minions targeting
<targeting>` to fetch Mine data from particular hosts, such as glob or regular
expression matching on Minion id (name), grains, pillars and :ref:`compound
matches <targeting-compound>`. See the :py:mod:`salt.modules.mine` module
documentation for the reference.
.. note::
Pillar data needs to be cached on Master for pillar targeting to work with
Mine. Read the note in :ref:`relevant section <targeting-pillar>`.
Example
=======
@ -160,7 +174,7 @@ to add them to the pool of load balanced servers.
<...file contents snipped...>
{% for server, addrs in salt['mine.get']('roles:web', 'network.ip_addrs', expr_form='grain').items() %}
{% for server, addrs in salt['mine.get']('roles:web', 'network.ip_addrs', expr_form='grain') | dictsort() %}
server {{ server }} {{ addrs[0] }}:80 check
{% endfor %}


@ -7,6 +7,18 @@ Targeting using Pillar
Pillar data can be used when targeting minions. This allows for ultimate
control and flexibility when targeting minions.
.. note::
To start using Pillar targeting it is required to make a Pillar
data cache on Salt Master for each Minion via following commands:
``salt '*' saltutil.refresh_pillar`` or ``salt '*' saltutil.sync_all``.
Also Pillar data cache will be populated during the
:ref:`highstate <running-highstate>` run. Once Pillar data changes, you
must refresh the cache by running above commands for this targeting
method to work correctly.
Example:
.. code-block:: bash
salt -I 'somekey:specialvalue' test.ping


@ -63,26 +63,38 @@ be used to install it:
If pygit2_ is not packaged for the platform on which the Master is running, the
pygit2_ website has installation instructions here__. Keep in mind however that
following these instructions will install libgit2 and pygit2_ without system
following these instructions will install libgit2_ and pygit2_ without system
packages. Additionally, keep in mind that :ref:`SSH authentication in pygit2
<pygit2-authentication-ssh>` requires libssh2_ (*not* libssh) development
libraries to be present before libgit2 is built. On some distros (debian based)
``pkg-config`` is also required to link libgit2 with libssh2.
libraries to be present before libgit2_ is built. On some Debian-based distros
``pkg-config`` is also required to link libgit2_ with libssh2.
Additionally, version 0.21.0 of pygit2 introduced a dependency on python-cffi_,
which in turn depends on newer releases of libffi_. Upgrading libffi_ is not
advisable as several other applications depend on it, so on older LTS linux
releases pygit2_ 0.20.3 and libgit2_ 0.20.0 is the recommended combination.
While these are not packaged in the official repositories for Debian and
Ubuntu, SaltStack is actively working on adding packages for these to our
repositories_. The progress of this effort can be tracked here__.
.. warning::
pygit2_ is actively developed and :ref:`frequently makes
non-backwards-compatible API changes <pygit2-version-policy>`, even in
minor releases. It is not uncommon for pygit2_ upgrades to result in errors
in Salt. Please take care when upgrading pygit2_, and pay close attention
to the :ref:`changelog <pygit2-changelog>`, keeping an eye out for API
changes. Errors can be reported on the :ref:`SaltStack issue tracker
<saltstack-issue-tracker>`.
to the changelog_, keeping an eye out for API changes. Errors can be
reported on the :ref:`SaltStack issue tracker <saltstack-issue-tracker>`.
.. _pygit2-version-policy: http://www.pygit2.org/install.html#version-numbers
.. _pygit2-changelog: https://github.com/libgit2/pygit2#changelog
.. _changelog: https://github.com/libgit2/pygit2#changelog
.. _saltstack-issue-tracker: https://github.com/saltstack/salt/issues
.. __: http://www.pygit2.org/install.html
.. _libgit2: https://libgit2.github.com/
.. _libssh2: http://www.libssh2.org/
.. _python-cffi: https://pypi.python.org/pypi/cffi
.. _libffi: http://sourceware.org/libffi/
.. _repositories: https://repo.saltstack.com
.. __: https://github.com/saltstack/salt-pack/issues/70
GitPython
---------


@ -110,7 +110,6 @@ the sample configuration file (default values)
recon_max: 5000
recon_randomize: True
- recon_default: the default value the socket should use, i.e. 1000. This value is in
milliseconds. (1000ms = 1 second)
- recon_max: the max value that the socket should use as a delay before trying to reconnect


@ -29,8 +29,8 @@ for any OS with a Bourne shell:
.. code-block:: bash
curl -L https://bootstrap.saltstack.com -o install_salt.sh
sudo sh install_salt.sh
curl -L https://bootstrap.saltstack.com -o bootstrap_salt.sh
sudo sh bootstrap_salt.sh
See the `salt-bootstrap`_ documentation for other one liners. When using `Vagrant`_


@ -533,8 +533,8 @@ This example clearly illustrates that; one, using the YAML renderer by default
is a wise decision and two, unbridled power can be obtained where needed by
using a pure Python SLS.
Running and debugging salt states.
----------------------------------
Running and Debugging Salt States
---------------------------------
Once the rules in an SLS are ready, they should be tested to ensure they
work properly. To invoke these rules, simply execute


@ -49,7 +49,7 @@ try:
except ImportError as exc:
if exc.args[0] != 'No module named _msgpack':
raise
from salt.exceptions import SaltSystemExit
from salt.exceptions import SaltSystemExit, SaltClientError, get_error_message
# Let's instantiate log using salt.log.setup.logging.getLogger() so pylint
@ -96,7 +96,7 @@ class DaemonsMixin(object): # pylint: disable=no-init
:return:
'''
log.exception('Failed to create environment for {d_name}: {reason}'.format(
d_name=self.__class__.__name__, reason=error.message))
d_name=self.__class__.__name__, reason=get_error_message(error)))
self.shutdown(error)
@ -347,6 +347,8 @@ class Minion(parsers.MinionOptionParser, DaemonsMixin): # pylint: disable=no-in
self.verify_hash_type()
self.start_log_info()
self.minion.tune_in()
if self.minion.restart:
raise SaltClientError('Minion could not connect to Master')
except (KeyboardInterrupt, SaltSystemExit) as exc:
log.warn('Stopping the Salt Minion')
if isinstance(exc, KeyboardInterrupt):


@ -103,8 +103,8 @@ class FunctionWrapper(object):
The remote execution function
'''
argv = [cmd]
argv.extend([str(arg) for arg in args])
argv.extend(['{0}={1}'.format(key, val) for key, val in six.iteritems(kwargs)])
argv.extend([json.dumps(arg) for arg in args])
argv.extend(['{0}={1}'.format(key, json.dumps(val)) for key, val in six.iteritems(kwargs)])
single = salt.client.ssh.Single(
self.opts,
argv,


@ -221,9 +221,9 @@ def list_nodes(conn=None, call=None):
ret = {}
nodes = list_nodes_full(conn, call)
for node in nodes:
ret[node] = {}
for prop in ('id', 'image', 'name', 'size', 'state', 'private_ips', 'public_ips'):
ret[node][prop] = nodes[node][prop]
ret[node] = {'name': node}
for prop in ('id', 'image', 'size', 'state', 'private_ips', 'public_ips'):
ret[node][prop] = nodes[node].get(prop)
return ret
@ -585,6 +585,7 @@ def create(vm_):
# Deleting two useless keywords
del vm_kwargs['deployment_slot']
del vm_kwargs['label']
del vm_kwargs['virtual_network_name']
result = conn.add_role(**vm_kwargs)
_wait_for_async(conn, result.request_id)
except Exception as exc:


@ -848,7 +848,7 @@ DEFAULT_MINION_OPTS = {
'environment': None,
'pillarenv': None,
'pillar_opts': False,
# `pillar_cache` and `pillar_ttl`
# ``pillar_cache``, ``pillar_cache_ttl`` and ``pillar_cache_backend``
# are not used on the minion but are unavoidably in the code path
'pillar_cache': False,
'pillar_cache_ttl': 3600,


@ -26,6 +26,13 @@ def _nested_output(obj):
return ret
def get_error_message(error):
'''
Get human readable message from Python Exception
'''
return error.args[0] if error.args else ''
class SaltException(Exception):
'''
Base exception class; all Salt-specific exceptions should subclass this


@ -1271,14 +1271,19 @@ def os_data():
for line in fhr:
if 'enterprise' in line.lower():
grains['lsb_distrib_id'] = 'SLES'
grains['lsb_distrib_codename'] = re.sub(r'\(.+\)', '', line).strip()
elif 'version' in line.lower():
version = re.sub(r'[^0-9]', '', line)
elif 'patchlevel' in line.lower():
patch = re.sub(r'[^0-9]', '', line)
grains['lsb_distrib_release'] = version
if patch:
grains['lsb_distrib_release'] += ' SP' + patch
grains['lsb_distrib_codename'] = 'n.a'
grains['lsb_distrib_release'] += '.' + patch
patchstr = 'SP' + patch
if grains['lsb_distrib_codename'] and patchstr not in grains['lsb_distrib_codename']:
grains['lsb_distrib_codename'] += ' ' + patchstr
if not grains['lsb_distrib_codename']:
grains['lsb_distrib_codename'] = 'n.a'
elif os.path.isfile('/etc/altlinux-release'):
# ALT Linux
grains['lsb_distrib_id'] = 'altlinux'


@ -634,12 +634,12 @@ def grains(opts, force_refresh=False, proxy=None):
print __grains__['id']
'''
# if we hae no grains, lets try loading from disk (TODO: move to decorator?)
cfn = os.path.join(
opts['cachedir'],
'grains.cache.p'
)
if not force_refresh:
if opts.get('grains_cache', False):
cfn = os.path.join(
opts['cachedir'],
'grains.cache.p'
)
if os.path.isfile(cfn):
grains_cache_age = int(time.time() - os.path.getmtime(cfn))
if opts.get('grains_cache_expiration', 300) >= grains_cache_age and not \


@ -806,6 +806,7 @@ class Minion(MinionBase):
self.win_proc = []
self.loaded_base_name = loaded_base_name
self.connected = False
self.restart = False
if io_loop is None:
if HAS_ZMQ:
@ -1854,9 +1855,13 @@ class Minion(MinionBase):
# if eval_master finds a new master for us, self.connected
# will be True again on successful master authentication
master, self.pub_channel = yield self.eval_master(
opts=self.opts,
failed=True)
try:
master, self.pub_channel = yield self.eval_master(
opts=self.opts,
failed=True)
except SaltClientError:
pass
if self.connected:
self.opts['master'] = master
@ -1894,6 +1899,9 @@ class Minion(MinionBase):
schedule=schedule)
else:
self.schedule.delete_job(name='__master_failback', persist=True)
else:
self.restart = True
self.io_loop.stop()
elif package.startswith('__master_connected'):
# handle this event only once. otherwise it will pollute the log
@ -2045,6 +2053,8 @@ class Minion(MinionBase):
if start:
try:
self.io_loop.start()
if self.restart:
self.destroy()
except (KeyboardInterrupt, RuntimeError): # A RuntimeError can be re-raised by Tornado on shutdown
self.destroy()


@ -405,13 +405,16 @@ def enable_beacon(name, **kwargs):
if not name:
ret['comment'] = 'Beacon name is required.'
ret['result'] = False
return ret
if 'test' in kwargs and kwargs['test']:
ret['comment'] = 'Beacon {0} would be enabled.'.format(name)
else:
if name not in list_(return_yaml=True):
_beacons = list_(return_yaml=False)
if name not in _beacons:
ret['comment'] = 'Beacon {0} is not currently configured.'.format(name)
ret['result'] = False
return ret
try:
eventer = salt.utils.event.get_event('minion', opts=__opts__)
@ -455,13 +458,16 @@ def disable_beacon(name, **kwargs):
if not name:
ret['comment'] = 'Beacon name is required.'
ret['result'] = False
return ret
if 'test' in kwargs and kwargs['test']:
ret['comment'] = 'Beacons would be enabled.'
else:
if name not in list_(return_yaml=True):
_beacons = list_(return_yaml=False)
if name not in _beacons:
ret['comment'] = 'Beacon {0} is not currently configured.'.format(name)
ret['result'] = False
return ret
try:
eventer = salt.utils.event.get_event('minion', opts=__opts__)


@ -12,6 +12,7 @@ import json
import logging
import salt.utils
import salt.utils.http
from salt.exceptions import get_error_message
__proxyenabled__ = ['chronos']
@ -109,10 +110,10 @@ def update_job(name, config):
log.debug('update response: %s', response)
return {'success': True}
except Exception as ex:
log.error('unable to update chronos job: %s', ex.message)
log.error('unable to update chronos job: %s', get_error_message(ex))
return {
'exception': {
'message': ex.message,
'message': get_error_message(ex),
}
}


@ -52,7 +52,7 @@ import salt.utils.filebuffer
import salt.utils.files
import salt.utils.atomicfile
import salt.utils.url
from salt.exceptions import CommandExecutionError, SaltInvocationError
from salt.exceptions import CommandExecutionError, SaltInvocationError, get_error_message as _get_error_message
log = logging.getLogger(__name__)
@ -1364,7 +1364,7 @@ def _regex_to_static(src, regex):
try:
src = re.search(regex, src)
except Exception as ex:
raise CommandExecutionError("{0}: '{1}'".format(ex.message, regex))
raise CommandExecutionError("{0}: '{1}'".format(_get_error_message(ex), regex))
return src and src.group() or regex


@ -12,6 +12,7 @@ import json
import logging
import salt.utils
import salt.utils.http
from salt.exceptions import get_error_message
__proxyenabled__ = ['marathon']
@ -114,10 +115,10 @@ def update_app(id, config):
log.debug('update response: %s', response)
return response['dict']
except Exception as ex:
log.error('unable to update marathon app: %s', ex.message)
log.error('unable to update marathon app: %s', get_error_message(ex))
return {
'exception': {
'message': ex.message,
'message': get_error_message(ex),
}
}


@ -22,6 +22,7 @@ import json
# Import salt libs
from salt.ext.six import string_types
from salt.exceptions import get_error_message as _get_error_message
# Import third party libs
@ -428,6 +429,16 @@ def insert(objects, collection, user=None, password=None,
def find(collection, query=None, user=None, password=None,
host=None, port=None, database='admin'):
"""
Find an object or list of objects in a collection
CLI Example:
.. code-block:: bash
salt '*' mongodb.find mycollection '[{"foo": "FOO", "bar": "BAR"}]' <user> <password> <host> <port> <database>
"""
conn = _connect(user, password, host, port, database)
if not conn:
return 'Failed to connect to mongo database'
@ -444,7 +455,7 @@ def find(collection, query=None, user=None, password=None,
ret = col.find(query)
return list(ret)
except pymongo.errors.PyMongoError as err:
log.error("Removing objects failed with error: %s", err)
log.error("Searching objects failed with error: %s", err)
return err
@ -467,7 +478,7 @@ def remove(collection, query=None, user=None, password=None,
try:
query = _to_dict(query)
except Exception as err:
return err.message
return _get_error_message(err)
try:
log.info("Removing %r from %s", query, collection)
@ -476,5 +487,5 @@ def remove(collection, query=None, user=None, password=None,
ret = col.remove(query, w=w)
return "{0} objects removed".format(ret['n'])
except pymongo.errors.PyMongoError as err:
log.error("Removing objects failed with error: %s", err.message)
return err.message
log.error("Removing objects failed with error: %s", _get_error_message(err))
return _get_error_message(err)


@ -26,6 +26,7 @@ from salt.modules.inspectlib.exceptions import (InspectorQueryException,
import salt.utils
import salt.utils.fsutils
from salt.exceptions import CommandExecutionError
from salt.exceptions import get_error_message as _get_error_message
log = logging.getLogger(__name__)
@ -92,7 +93,7 @@ def inspect(mode='all', priority=19, **kwargs):
except InspectorSnapshotException as ex:
raise CommandExecutionError(ex)
except Exception as ex:
log.error(ex.message)
log.error(_get_error_message(ex))
raise Exception(ex)
@ -157,5 +158,5 @@ def query(scope, **kwargs):
except InspectorQueryException as ex:
raise CommandExecutionError(ex)
except Exception as ex:
log.error(ex.message)
log.error(_get_error_message(ex))
raise Exception(ex)


@ -1008,8 +1008,11 @@ def build_routes(iface, **settings):
'''
template = 'rh6_route_eth.jinja'
if __grains__['osrelease'][0] < 6:
template = 'route_eth.jinja'
try:
if int(__grains__['osrelease'][0]) < 6:
template = 'route_eth.jinja'
except ValueError:
pass
log.debug('Template name: ' + template)
iface = iface.lower()


@ -155,7 +155,7 @@ class daclConstants(object):
'THIS FOLDER ONLY': {
'TEXT': 'this file/folder only',
'BITS': win32security.NO_INHERITANCE},
'THIS FOLDER, SUBFOLDERS, and FILES': {
'THIS FOLDER, SUBFOLDERS, AND FILES': {
'TEXT': 'this folder, subfolders, and files',
'BITS': win32security.CONTAINER_INHERIT_ACE |
win32security.OBJECT_INHERIT_ACE},


@@ -1023,7 +1023,10 @@ def install(name=None,
         log.warning('"version" parameter will be ignored for multiple '
                     'package targets')

-    old = list_pkgs()
+    old = list_pkgs(versions_as_list=False)
+    # Use of __context__ means no duplicate work here, just accessing
+    # information already in __context__ from the previous call to list_pkgs()
+    old_as_list = list_pkgs(versions_as_list=True)
     targets = []
     downgrade = []
     to_reinstall = {}
@@ -1095,20 +1098,54 @@ def install(name=None,
             else:
                 pkgstr = pkgpath

-            cver = old.get(pkgname, '')
-            if reinstall and cver \
-                    and salt.utils.compare_versions(ver1=version_num,
-                                                    oper='==',
-                                                    ver2=cver,
-                                                    cmp_func=version_cmp):
-                to_reinstall[pkgname] = pkgstr
-            elif not cver or salt.utils.compare_versions(ver1=version_num,
-                                                         oper='>=',
-                                                         ver2=cver,
-                                                         cmp_func=version_cmp):
-                targets.append(pkgstr)
-            else:
-                downgrade.append(pkgstr)
+            # Lambda to trim the epoch from the currently-installed version if
+            # no epoch is specified in the specified version
+            norm_epoch = lambda x, y: x.split(':', 1)[-1] \
+                if ':' not in y \
+                else x
+            cver = old_as_list.get(pkgname, [])
+            if reinstall and cver:
+                for ver in cver:
+                    ver = norm_epoch(ver, version_num)
+                    if salt.utils.compare_versions(ver1=version_num,
+                                                   oper='==',
+                                                   ver2=ver,
+                                                   cmp_func=version_cmp):
+                        # This version is already installed, so we need to
+                        # reinstall.
+                        to_reinstall[pkgname] = pkgstr
+                        break
+            else:
+                if not cver:
+                    targets.append(pkgstr)
+                else:
+                    for ver in cver:
+                        ver = norm_epoch(ver, version_num)
+                        if salt.utils.compare_versions(ver1=version_num,
+                                                       oper='>=',
+                                                       ver2=ver,
+                                                       cmp_func=version_cmp):
+                            targets.append(pkgstr)
+                            break
+                    else:
+                        if re.match('kernel(-.+)?', name):
+                            # kernel and its subpackages support multiple
+                            # installs as their paths do not conflict.
+                            # Performing a yum/dnf downgrade will be a no-op
+                            # so just do an install instead. It will fail if
+                            # there are other interdependencies that have
+                            # conflicts, and that's OK. We don't want to force
+                            # anything, we just want to properly handle it if
+                            # someone tries to install a kernel/kernel-devel of
+                            # a lower version than the currently-installed one.
+                            # TODO: find a better way to determine if a package
+                            # supports multiple installs.
+                            targets.append(pkgstr)
+                        else:
+                            # None of the currently-installed versions are
+                            # greater than the specified version, so this is a
+                            # downgrade.
+                            downgrade.append(pkgstr)

     def _add_common_args(cmd):
         '''
@@ -1167,7 +1204,7 @@ def install(name=None,
             errors.append(out['stdout'])

     __context__.pop('pkg.list_pkgs', None)
-    new = list_pkgs()
+    new = list_pkgs(versions_as_list=False)
     ret = salt.utils.compare_dicts(old, new)
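The `norm_epoch` lambda introduced above can be exercised on its own. RPM versions may carry an `epoch:` prefix (e.g. `1:4.2.6-1.el7`); when the requested version omits the epoch, the installed version's epoch must be trimmed before the two are compared, otherwise an explicit `pkg.install` of `4.2.6-1.el7` would never match the installed `1:4.2.6-1.el7`:

```python
# x = currently-installed version, y = requested version; strip the
# epoch from x only when y does not specify one.
norm_epoch = lambda x, y: x.split(':', 1)[-1] if ':' not in y else x

print(norm_epoch('1:4.2.6-1.el7', '4.2.6-1.el7'))    # 4.2.6-1.el7
print(norm_epoch('1:4.2.6-1.el7', '1:4.2.6-1.el7'))  # 1:4.2.6-1.el7
print(norm_epoch('4.2.6-1.el7', '4.2.6-1.el7'))      # 4.2.6-1.el7
```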


@@ -49,7 +49,7 @@ def get_pillar(opts, grains, minion_id, saltenv=None, ext=None, env=None, funcs=
         'local': Pillar
     }.get(opts['file_client'], Pillar)
     # If local pillar and we're caching, run through the cache system first
-    log.info('Determining pillar cache')
+    log.debug('Determining pillar cache')
     if opts['pillar_cache']:
         log.info('Compiling pillar from cache')
         log.debug('get_pillar using pillar cache with ext: {0}'.format(ext))


@@ -280,6 +280,8 @@ class SPMClient(object):
         can_has = {}
         cant_has = []
+        if 'dependencies' in formula_def and formula_def['dependencies'] is None:
+            formula_def['dependencies'] = ''
         for dep in formula_def.get('dependencies', '').split(','):
             dep = dep.strip()
             if not dep:

@@ -19,6 +19,18 @@
 import subprocess
 import time
 from datetime import datetime

+# Import salt libs
+import salt.utils
+import salt.utils.itertools
+import salt.utils.url
+import salt.fileserver
+from salt.utils.process import os_is_running as pid_exists
+from salt.exceptions import FileserverConfigError, GitLockError, get_error_message
+from salt.utils.event import tagify
+
+# Import third party libs
+import salt.ext.six as six
+
 VALID_PROVIDERS = ('gitpython', 'pygit2', 'dulwich')

 # Optional per-remote params that can only be used on a per-remote basis, and
 # thus do not have defaults in salt/config.py.
@@ -54,17 +66,8 @@ _INVALID_REPO = (
     'master to continue to use this {2} remote.'
 )

-# Import salt libs
-import salt.utils
-import salt.utils.itertools
-import salt.utils.url
-import salt.fileserver
-from salt.utils.process import os_is_running as pid_exists
-from salt.exceptions import FileserverConfigError, GitLockError
-from salt.utils.event import tagify
-
-# Import third party libs
-import salt.ext.six as six
+log = logging.getLogger(__name__)

 # pylint: disable=import-error
 try:
     import git
@@ -80,8 +83,13 @@ try:
         GitError = pygit2.errors.GitError
     except AttributeError:
         GitError = Exception
-except ImportError:
-    HAS_PYGIT2 = False
+except Exception as err:  # cffi VerificationError also may happen
+    HAS_PYGIT2 = False    # and pygit2 requests re-compilation
+                          # on a production system (!),
+                          # but cffi might be absent as well!
+                          # Therefore just a generic Exception class.
+    if not isinstance(err, ImportError):
+        log.error('Import pygit2 failed: {0}'.format(err))

 try:
     import dulwich.errors
@@ -94,8 +102,6 @@ except ImportError:
     HAS_DULWICH = False
 # pylint: enable=import-error

-log = logging.getLogger(__name__)
-
 # Minimum versions for backend providers
 GITPYTHON_MINVER = '0.3'
 PYGIT2_MINVER = '0.20.3'
@@ -1289,9 +1295,7 @@ class Pygit2(GitProvider):
         try:
             fetch_results = origin.fetch(**fetch_kwargs)
         except GitError as exc:
-            # Using exc.__str__() here to avoid deprecation warning
-            # when referencing exc.message
-            exc_str = exc.__str__().lower()
+            exc_str = get_error_message(exc).lower()
             if 'unsupported url protocol' in exc_str \
                     and isinstance(self.credentials, pygit2.Keypair):
                 log.error(
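The widened `except` around the `pygit2` import generalizes to any optional dependency whose import can fail for reasons other than absence — here, cffi may raise a `VerificationError` at import time when it attempts re-compilation on a system without devel packages. A sketch of the pattern, with a hypothetical module name standing in for `pygit2`:

```python
import logging

log = logging.getLogger(__name__)

try:
    import module_that_does_not_exist_xyz  # hypothetical optional dependency
    HAS_LIB = True
except Exception as err:  # not just ImportError: cffi may raise
    HAS_LIB = False       # VerificationError during the import
    if not isinstance(err, ImportError):
        # A missing optional dependency is normal; any other
        # import-time failure deserves a log entry.
        log.error('Import failed: {0}'.format(err))

print(HAS_LIB)  # False
```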


@@ -113,8 +113,9 @@ class DaemonsStarterTestCase(TestCase, integration.SaltClientTestCaseMixIn):
         '''
         obj = daemons.Minion()
         obj.config = {'user': 'dummy', 'hash_type': alg}
-        for attr in ['minion', 'start_log_info', 'prepare', 'shutdown']:
+        for attr in ['start_log_info', 'prepare', 'shutdown']:
             setattr(obj, attr, MagicMock())
+        setattr(obj, 'minion', MagicMock(restart=False))
         return obj
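The test tweak above matters because a bare `MagicMock()` fabricates a new, truthy child mock for every attribute access, so a check like `if obj.minion.restart:` in the daemon code would always take the restart branch. Passing `restart=False` to the constructor pins the attribute to a real falsy value:

```python
from unittest.mock import MagicMock  # 'from mock import MagicMock' on Python 2

loose = MagicMock()
print(bool(loose.restart))   # True: auto-created attributes are truthy mocks

pinned = MagicMock(restart=False)
print(bool(pinned.restart))  # False: the attribute holds the real value we set
```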