Mirror of https://github.com/saltstack/salt.git (synced 2025-04-17 10:10:20 +00:00)

Merge branch '2017.7' into avoid_unneeded_pip_install

Commit 49a6a8f02e
104 changed files with 3817 additions and 1041 deletions
@@ -10,6 +10,7 @@
driver:
  name: docker
  use_sudo: false
  hostname: salt
  privileged: true
  username: root
  volume:
@@ -67,8 +67,8 @@ Engage SaltStack

`SaltConf`_, **User Groups and Meetups** - SaltStack has a vibrant and `global
community`_ of customers, users, developers and enthusiasts. Connect with other
-Salted folks in your area of the world, or join `SaltConf16`_, the SaltStack
-annual user conference, April 19-21 in Salt Lake City. Please let us know if
+Salted folks in your area of the world, or join `SaltConf18`_, the SaltStack
+annual user conference, September 10-14 in Salt Lake City. Please let us know if
you would like to start a user group or if we should add your existing
SaltStack user group to this list by emailing: info@saltstack.com

@@ -91,7 +91,7 @@ services`_ offerings.

.. _SaltConf: http://www.youtube.com/user/saltstack
.. _global community: http://www.meetup.com/pro/saltstack/
-.. _SaltConf16: http://saltconf.com/
+.. _SaltConf18: http://saltconf.com/
.. _SaltStack education offerings: http://saltstack.com/training/
.. _SaltStack Certified Engineer (SSCE): http://saltstack.com/certification/
.. _SaltStack professional services: http://saltstack.com/services/
@@ -235,13 +235,13 @@
# cause sub minion process to restart.
#auth_safemode: False

-# Ping Master to ensure connection is alive (seconds).
+# Ping Master to ensure connection is alive (minutes).
#ping_interval: 0

# To auto recover minions if master changes IP address (DDNS)
# auth_tries: 10
# auth_safemode: False
-# ping_interval: 90
+# ping_interval: 2
#
# Minions won't know master is missing until a ping fails. After the ping fails,
# the minion will attempt authentication, which will likely fail and cause a restart.
@@ -225,15 +225,16 @@ enclosing brackets ``[`` and ``]``:

Default: ``{}``

-This can be used to control logging levels more specifically. The example sets
-the main salt library at the 'warning' level, but sets ``salt.modules`` to log
-at the ``debug`` level:
+This can be used to control logging levels more specifically, based on log call name. The example sets
+the main salt library at the 'warning' level, sets ``salt.modules`` to log
+at the ``debug`` level, and sets a custom module to the ``all`` level:

.. code-block:: yaml

    log_granular_levels:
      'salt': 'warning'
      'salt.modules': 'debug'
      'salt.loader.saltmaster.ext.module.custom_module': 'all'
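These granular levels map onto Python's hierarchical loggers; a minimal stdlib sketch (the logger names mirror the example above) shows the same inheritance behavior:

```python
import logging

# Mirror the log_granular_levels example with stdlib loggers:
# the 'salt' logger only emits warnings, but the child
# 'salt.modules' logger is opted in to debug output.
logging.getLogger('salt').setLevel(logging.WARNING)
logging.getLogger('salt.modules').setLevel(logging.DEBUG)

# Child loggers without an explicit level inherit from 'salt'.
inherited = logging.getLogger('salt.states').getEffectiveLevel()
granular = logging.getLogger('salt.modules').getEffectiveLevel()
```

Loggers such as ``salt.states`` that are not listed fall back to the nearest configured ancestor, here ``salt`` at warning level.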

External Logging Handlers
-------------------------
@@ -303,6 +303,20 @@ option on the Salt master.

    master_port: 4506

.. conf_minion:: publish_port

``publish_port``
----------------

Default: ``4505``

The port of the master publish server; this needs to coincide with the
publish_port option on the Salt master.

.. code-block:: yaml

    publish_port: 4505

.. conf_minion:: user

``user``

@@ -869,7 +883,7 @@ restart.

Default: ``0``

-Instructs the minion to ping its master(s) every n number of seconds. Used
+Instructs the minion to ping its master(s) every n number of minutes. Used
primarily as a mitigation technique against minion disconnects.

.. code-block:: yaml

@@ -80,12 +80,21 @@ same way as in the above example, only without a top-level ``grains:`` key:

.. note::

-    The content of ``/etc/salt/grains`` is ignored if you specify grains in the minion config.
+    Grains in ``/etc/salt/grains`` are ignored if you specify the same grains in the minion config.

.. note::

    Grains are static. Since they are not often changed, they need a grains
    refresh when they are updated; you can trigger one by calling:
    ``salt minion saltutil.refresh_modules``

.. note::

    You can equally configure static grains for Proxy Minions.
    As multiple Proxy Minion processes can run on the same machine, you need
    to index the files using the Minion ID, under ``/etc/salt/proxy.d/<minion ID>/grains``.
    For example, the grains for the Proxy Minion ``router1`` can be defined
    under ``/etc/salt/proxy.d/router1/grains``, while the grains for the
    Proxy Minion ``switch7`` can be put in ``/etc/salt/proxy.d/switch7/grains``.

Matching Grains in the Top File
===============================

@@ -305,3 +314,9 @@ Syncing grains can be done a number of ways; they are automatically synced when
above) the grains can be manually synced and reloaded by calling the
:mod:`saltutil.sync_grains <salt.modules.saltutil.sync_grains>` or
:mod:`saltutil.sync_all <salt.modules.saltutil.sync_all>` functions.

.. note::

    When :conf_minion:`grains_cache` is set to False, the grains dictionary is built
    and stored in memory on the minion. Every time the minion restarts or
    ``saltutil.refresh_grains`` is run, the grain dictionary is rebuilt from scratch.
@@ -1526,6 +1526,54 @@ Returns:

.. jinja_ref:: jinja-in-files

Escape filters
--------------

.. jinja_ref:: regex_escape

``regex_escape``
----------------

.. versionadded:: 2017.7.0

Allows escaping of strings so they can be interpreted literally by another function.

Example:

.. code-block:: jinja

    regex_escape = {{ 'https://example.com?foo=bar%20baz' | regex_escape }}

will be rendered as:

.. code-block:: text

    regex_escape = https\:\/\/example\.com\?foo\=bar\%20baz
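The filter's behavior corresponds to Python's ``re.escape`` (an assumption based on the rendered output; note the exact set of escaped characters changed in Python 3.7, so the output above reflects older interpreters). A small demonstration of why escaping matters:

```python
import re

# Escape regex metacharacters so the string matches itself literally.
pattern = re.escape('example.com?foo=bar')
literal_match = re.fullmatch(pattern, 'example.com?foo=bar') is not None

# Without escaping, the '.' metacharacter also matches any character...
loose_match = re.fullmatch('example.com', 'exampleXcom') is not None
# ...while the escaped pattern does not.
strict_match = re.fullmatch(re.escape('example.com'), 'exampleXcom') is not None
```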

Set Theory Filters
------------------

.. jinja_ref:: unique

``unique``
----------

.. versionadded:: 2017.7.0

Performs set math using Jinja filters.

Example:

.. code-block:: jinja

    unique = {{ ['foo', 'foo', 'bar'] | unique }}

will be rendered as:

.. code-block:: text

    unique = ['foo', 'bar']
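The ``unique`` filter's order-preserving deduplication can be reproduced in plain Python (a sketch of the behavior, not Salt's actual implementation):

```python
def unique(seq):
    # Preserve first-seen order while dropping duplicates.
    seen = set()
    out = []
    for item in seq:
        if item not in seen:
            seen.add(item)
            out.append(item)
    return out

print(unique(['foo', 'foo', 'bar']))  # → ['foo', 'bar']
```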

Jinja in Files
==============
@@ -57,7 +57,15 @@ Writing Thorium Formulas
========================
Like some other Salt subsystems, Thorium uses its own directory structure. The
default location for this structure is ``/srv/thorium/``, but it can be changed
-using the ``thorium_roots_dir`` setting in the ``master`` configuration file.
+using the ``thorium_roots`` setting in the ``master`` configuration file.
+
+Example ``thorium_roots`` configuration:
+
+.. code-block:: yaml
+
+    thorium_roots:
+      base:
+        - /etc/salt/thorium


The Thorium top.sls File
@@ -186,19 +186,60 @@ class Beacon(object):
        else:
            self.opts['beacons'][name].append({'enabled': enabled_value})

-    def list_beacons(self):
+    def _get_beacons(self,
+                     include_opts=True,
+                     include_pillar=True):
+        '''
+        Return the beacons data structure
+        '''
+        beacons = {}
+        if include_pillar:
+            pillar_beacons = self.opts.get('pillar', {}).get('beacons', {})
+            if not isinstance(pillar_beacons, dict):
+                raise ValueError('Beacons must be of type dict.')
+            beacons.update(pillar_beacons)
+        if include_opts:
+            opts_beacons = self.opts.get('beacons', {})
+            if not isinstance(opts_beacons, dict):
+                raise ValueError('Beacons must be of type dict.')
+            beacons.update(opts_beacons)
+        return beacons
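The merge order matters: because the ``opts`` beacons are applied after the pillar beacons, a beacon defined in both places takes its configuration from ``opts``. A standalone sketch of that precedence (plain dicts standing in for the minion's ``self.opts``):

```python
def merge_beacons(opts, include_opts=True, include_pillar=True):
    # Pillar beacons first, then opts beacons override on key collision.
    beacons = {}
    if include_pillar:
        pillar_beacons = opts.get('pillar', {}).get('beacons', {})
        if not isinstance(pillar_beacons, dict):
            raise ValueError('Beacons must be of type dict.')
        beacons.update(pillar_beacons)
    if include_opts:
        opts_beacons = opts.get('beacons', {})
        if not isinstance(opts_beacons, dict):
            raise ValueError('Beacons must be of type dict.')
        beacons.update(opts_beacons)
    return beacons
```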

    def list_beacons(self,
                     include_pillar=True,
                     include_opts=True):
        '''
        List the beacon items

        include_pillar: Whether to include beacons that are
                        configured in pillar, default is True.

        include_opts: Whether to include beacons that are
                      configured in opts, default is True.
        '''
        beacons = self._get_beacons(include_opts=include_opts,
                                    include_pillar=include_pillar)

        # Fire the complete event back along with the list of beacons
        evt = salt.utils.event.get_event('minion', opts=self.opts)
-        b_conf = self.functions['config.merge']('beacons')
-        self.opts['beacons'].update(b_conf)
-        evt.fire_event({'complete': True, 'beacons': self.opts['beacons']},
+        evt.fire_event({'complete': True, 'beacons': beacons},
                       tag='/salt/minion/minion_beacons_list_complete')

        return True

    def list_available_beacons(self):
        '''
        List the available beacons
        '''
        _beacons = ['{0}'.format(_beacon.replace('.beacon', ''))
                    for _beacon in self.beacons if '.beacon' in _beacon]

        # Fire the complete event back along with the list of beacons
        evt = salt.utils.event.get_event('minion', opts=self.opts)
        evt.fire_event({'complete': True, 'beacons': _beacons},
                       tag='/salt/minion/minion_beacons_list_available_complete')

        return True

    def add_beacon(self, name, beacon_data):
        '''
        Add a beacon item

@@ -207,16 +248,23 @@ class Beacon(object):
        data = {}
        data[name] = beacon_data

-        if name in self.opts['beacons']:
-            log.info('Updating settings for beacon '
-                     'item: {0}'.format(name))
+        if name in self._get_beacons(include_opts=False):
+            comment = 'Cannot update beacon item {0}, ' \
+                      'because it is configured in pillar.'.format(name)
+            complete = False
        else:
-            log.info('Added new beacon item {0}'.format(name))
-        self.opts['beacons'].update(data)
+            if name in self.opts['beacons']:
+                comment = 'Updating settings for beacon ' \
+                          'item: {0}'.format(name)
+            else:
+                comment = 'Added new beacon item: {0}'.format(name)
+            complete = True
+            self.opts['beacons'].update(data)

        # Fire the complete event back along with updated list of beacons
        evt = salt.utils.event.get_event('minion', opts=self.opts)
-        evt.fire_event({'complete': True, 'beacons': self.opts['beacons']},
+        evt.fire_event({'complete': complete, 'comment': comment,
+                        'beacons': self.opts['beacons']},
                       tag='/salt/minion/minion_beacon_add_complete')

        return True
@ -229,15 +277,21 @@ class Beacon(object):
|
|||
data = {}
|
||||
data[name] = beacon_data
|
||||
|
||||
log.info('Updating settings for beacon '
|
||||
'item: {0}'.format(name))
|
||||
self.opts['beacons'].update(data)
|
||||
if name in self._get_beacons(include_opts=False):
|
||||
comment = 'Cannot modify beacon item {0}, ' \
|
||||
'it is configured in pillar.'.format(name)
|
||||
complete = False
|
||||
else:
|
||||
comment = 'Updating settings for beacon ' \
|
||||
'item: {0}'.format(name)
|
||||
complete = True
|
||||
self.opts['beacons'].update(data)
|
||||
|
||||
# Fire the complete event back along with updated list of beacons
|
||||
evt = salt.utils.event.get_event('minion', opts=self.opts)
|
||||
evt.fire_event({'complete': True, 'beacons': self.opts['beacons']},
|
||||
evt.fire_event({'complete': complete, 'comment': comment,
|
||||
'beacons': self.opts['beacons']},
|
||||
tag='/salt/minion/minion_beacon_modify_complete')
|
||||
|
||||
return True
|
||||
|
||||
def delete_beacon(self, name):
|
||||
|
@ -245,13 +299,22 @@ class Beacon(object):
|
|||
Delete a beacon item
|
||||
'''
|
||||
|
||||
if name in self.opts['beacons']:
|
||||
log.info('Deleting beacon item {0}'.format(name))
|
||||
del self.opts['beacons'][name]
|
||||
if name in self._get_beacons(include_opts=False):
|
||||
comment = 'Cannot delete beacon item {0}, ' \
|
||||
'it is configured in pillar.'.format(name)
|
||||
complete = False
|
||||
else:
|
||||
if name in self.opts['beacons']:
|
||||
del self.opts['beacons'][name]
|
||||
comment = 'Deleting beacon item: {0}'.format(name)
|
||||
else:
|
||||
comment = 'Beacon item {0} not found.'.format(name)
|
||||
complete = True
|
||||
|
||||
# Fire the complete event back along with updated list of beacons
|
||||
evt = salt.utils.event.get_event('minion', opts=self.opts)
|
||||
evt.fire_event({'complete': True, 'beacons': self.opts['beacons']},
|
||||
evt.fire_event({'complete': complete, 'comment': comment,
|
||||
'beacons': self.opts['beacons']},
|
||||
tag='/salt/minion/minion_beacon_delete_complete')
|
||||
|
||||
return True
|
||||

@@ -289,11 +352,19 @@ class Beacon(object):
        Enable a beacon
        '''

-        self._update_enabled(name, True)
+        if name in self._get_beacons(include_opts=False):
+            comment = 'Cannot enable beacon item {0}, ' \
+                      'it is configured in pillar.'.format(name)
+            complete = False
+        else:
+            self._update_enabled(name, True)
+            comment = 'Enabling beacon item {0}'.format(name)
+            complete = True

        # Fire the complete event back along with updated list of beacons
        evt = salt.utils.event.get_event('minion', opts=self.opts)
-        evt.fire_event({'complete': True, 'beacons': self.opts['beacons']},
+        evt.fire_event({'complete': complete, 'comment': comment,
+                        'beacons': self.opts['beacons']},
                       tag='/salt/minion/minion_beacon_enabled_complete')

        return True

@@ -303,11 +374,19 @@ class Beacon(object):
        Disable a beacon
        '''

-        self._update_enabled(name, False)
+        if name in self._get_beacons(include_opts=False):
+            comment = 'Cannot disable beacon item {0}, ' \
+                      'it is configured in pillar.'.format(name)
+            complete = False
+        else:
+            self._update_enabled(name, False)
+            comment = 'Disabling beacon item {0}'.format(name)
+            complete = True

        # Fire the complete event back along with updated list of beacons
        evt = salt.utils.event.get_event('minion', opts=self.opts)
-        evt.fire_event({'complete': True, 'beacons': self.opts['beacons']},
+        evt.fire_event({'complete': complete, 'comment': comment,
+                        'beacons': self.opts['beacons']},
                       tag='/salt/minion/minion_beacon_disabled_complete')

        return True
@@ -240,7 +240,7 @@ class SyncClientMixin(object):

    def low(self, fun, low, print_event=True, full_return=False):
        '''
-        Check for deprecated usage and allow until Salt Oxygen.
+        Check for deprecated usage and allow until Salt Fluorine.
        '''
        msg = []
        if 'args' in low:

@@ -251,7 +251,7 @@ class SyncClientMixin(object):
            low['kwarg'] = low.pop('kwargs')

        if msg:
-            salt.utils.warn_until('Oxygen', ' '.join(msg))
+            salt.utils.warn_until('Fluorine', ' '.join(msg))

        return self._low(fun, low, print_event=print_event, full_return=full_return)
@@ -723,6 +723,7 @@ class Single(object):
            self.thin_dir = kwargs['thin_dir']
        elif self.winrm:
            saltwinshell.set_winvars(self)
+            self.python_env = kwargs.get('ssh_python_env')
        else:
            if user:
                thin_dir = DEFAULT_THIN_DIR.replace('%%USER%%', user)

@@ -782,6 +783,10 @@ class Single(object):
        self.serial = salt.payload.Serial(opts)
        self.wfuncs = salt.loader.ssh_wrapper(opts, None, self.context)
        self.shell = salt.client.ssh.shell.gen_shell(opts, **args)
+        if self.winrm:
+            # Determine if Windows client is x86 or AMD64
+            arch, _, _ = self.shell.exec_cmd('powershell $ENV:PROCESSOR_ARCHITECTURE')
+            self.arch = arch.strip()
        self.thin = thin if thin else salt.utils.thin.thin_path(opts['cachedir'])

    def __arg_comps(self):
@@ -6,6 +6,7 @@ Create ssh executor system
from __future__ import absolute_import
# Import python libs
import os
+import time
import copy
import json
import logging

@@ -21,6 +22,8 @@ import salt.loader
import salt.minion
import salt.log
from salt.ext.six import string_types
+import salt.ext.six as six
+from salt.exceptions import SaltInvocationError

__func_alias__ = {
    'apply_': 'apply'
@@ -28,6 +31,47 @@ __func_alias__ = {
log = logging.getLogger(__name__)


def _set_retcode(ret, highstate=None):
    '''
    Set the return code based on the data back from the state system
    '''

    # Set default retcode to 0
    __context__['retcode'] = 0

    if isinstance(ret, list):
        __context__['retcode'] = 1
        return
    if not salt.utils.check_state_result(ret, highstate=highstate):
        __context__['retcode'] = 2


def _check_pillar(kwargs, pillar=None):
    '''
    Check the pillar for errors, refuse to run the state if there are errors
    in the pillar and return the pillar errors
    '''
    if kwargs.get('force'):
        return True
    pillar_dict = pillar if pillar is not None else __pillar__
    if '_errors' in pillar_dict:
        return False
    return True


def _wait(jid):
    '''
    Wait for all previously started state jobs to finish running
    '''
    if jid is None:
        jid = salt.utils.jid.gen_jid()
    states = _prior_running_states(jid)
    while states:
        time.sleep(1)
        states = _prior_running_states(jid)


def _merge_extra_filerefs(*args):
    '''
    Takes a list of filerefs and returns a merged list
@@ -127,6 +171,100 @@ def sls(mods, saltenv='base', test=None, exclude=None, **kwargs):
    return stdout


def running(concurrent=False):
    '''
    Return a list of strings that contain state return data if a state function
    is already running. This function is used to prevent multiple state calls
    from being run at the same time.

    CLI Example:

    .. code-block:: bash

        salt '*' state.running
    '''
    ret = []
    if concurrent:
        return ret
    active = __salt__['saltutil.is_running']('state.*')
    for data in active:
        err = (
            'The function "{0}" is running as PID {1} and was started at '
            '{2} with jid {3}'
        ).format(
            data['fun'],
            data['pid'],
            salt.utils.jid.jid_to_time(data['jid']),
            data['jid'],
        )
        ret.append(err)
    return ret


def _prior_running_states(jid):
    '''
    Return a list of dicts of prior calls to state functions. This function is
    used to queue state calls so only one is run at a time.
    '''

    ret = []
    active = __salt__['saltutil.is_running']('state.*')
    for data in active:
        try:
            data_jid = int(data['jid'])
        except ValueError:
            continue
        if data_jid < int(jid):
            ret.append(data)
    return ret


def _check_queue(queue, kwargs):
    '''
    Utility function to queue the state run if requested
    and to check for conflicts in currently running states
    '''
    if queue:
        _wait(kwargs.get('__pub_jid'))
    else:
        conflict = running(concurrent=kwargs.get('concurrent', False))
        if conflict:
            __context__['retcode'] = 1
            return conflict


def _get_opts(**kwargs):
    '''
    Return a copy of the opts for use, optionally load a local config on top
    '''
    opts = copy.deepcopy(__opts__)

    if 'localconfig' in kwargs:
        return salt.config.minion_config(kwargs['localconfig'], defaults=opts)

    if 'saltenv' in kwargs:
        saltenv = kwargs['saltenv']
        if saltenv is not None and not isinstance(saltenv, six.string_types):
            opts['environment'] = str(kwargs['saltenv'])
        else:
            opts['environment'] = kwargs['saltenv']

    if 'pillarenv' in kwargs:
        pillarenv = kwargs['pillarenv']
        if pillarenv is not None and not isinstance(pillarenv, six.string_types):
            opts['pillarenv'] = str(kwargs['pillarenv'])
        else:
            opts['pillarenv'] = kwargs['pillarenv']

    return opts


def _get_initial_pillar(opts):
    return __pillar__ if __opts__['__cli'] == 'salt-call' \
        and opts['pillarenv'] == __opts__['pillarenv'] \
        else None


def low(data, **kwargs):
    '''
    Execute a single low data call

@@ -199,6 +337,21 @@ def low(data, **kwargs):
    return stdout


def _get_test_value(test=None, **kwargs):
    '''
    Determine the correct value for the test flag.
    '''
    ret = True
    if test is None:
        if salt.utils.test_mode(test=test, **kwargs):
            ret = True
        else:
            ret = __opts__.get('test', None)
    else:
        ret = test
    return ret


def high(data, **kwargs):
    '''
    Execute the compound calls stored in a single set of high data

@@ -615,6 +768,99 @@ def show_lowstate():
    return st_.compile_low_chunks()


def sls_id(id_, mods, test=None, queue=False, **kwargs):
    '''
    Call a single ID from the named module(s) and handle all requisites

    The state ID comes *before* the module ID(s) on the command line.

    id
        ID to call

    mods
        Comma-delimited list of modules to search for given id and its requisites

    .. versionadded:: 2017.7.3

    saltenv : base
        Specify a salt fileserver environment to be used when applying states

    pillarenv
        Specify a Pillar environment to be used when applying states. This
        can also be set in the minion config file using the
        :conf_minion:`pillarenv` option. When neither the
        :conf_minion:`pillarenv` minion config option nor this CLI argument is
        used, all Pillar environments will be merged together.

    CLI Example:

    .. code-block:: bash

        salt '*' state.sls_id my_state my_module

        salt '*' state.sls_id my_state my_module,a_common_module
    '''
    conflict = _check_queue(queue, kwargs)
    if conflict is not None:
        return conflict
    orig_test = __opts__.get('test', None)
    opts = _get_opts(**kwargs)
    opts['test'] = _get_test_value(test, **kwargs)

    # Since this is running a specific ID within a specific SLS file, fall back
    # to the 'base' saltenv if none is configured and none was passed.
    if opts['environment'] is None:
        opts['environment'] = 'base'

    try:
        st_ = salt.state.HighState(opts,
                                   proxy=__proxy__,
                                   initial_pillar=_get_initial_pillar(opts))
    except NameError:
        st_ = salt.state.HighState(opts,
                                   initial_pillar=_get_initial_pillar(opts))

    if not _check_pillar(kwargs, st_.opts['pillar']):
        __context__['retcode'] = 5
        err = ['Pillar failed to render with the following messages:']
        err += __pillar__['_errors']
        return err

    if isinstance(mods, six.string_types):
        split_mods = mods.split(',')
    st_.push_active()
    try:
        high_, errors = st_.render_highstate({opts['environment']: split_mods})
    finally:
        st_.pop_active()
    errors += st_.state.verify_high(high_)
    # Apply requisites to high data
    high_, req_in_errors = st_.state.requisite_in(high_)
    if req_in_errors:
        # This if statement should not be necessary if there were no errors,
        # but it is required to get the unit tests to pass.
        errors.extend(req_in_errors)
    if errors:
        __context__['retcode'] = 1
        return errors
    chunks = st_.state.compile_high_data(high_)
    ret = {}
    for chunk in chunks:
        if chunk.get('__id__', '') == id_:
            ret.update(st_.state.call_chunk(chunk, {}, chunks))

    _set_retcode(ret, highstate=True)
    # Work around Windows multiprocessing bug, set __opts__['test'] back to
    # value from before this function was run.
    __opts__['test'] = orig_test
    if not ret:
        raise SaltInvocationError(
            'No matches for ID \'{0}\' found in SLS \'{1}\' within saltenv '
            '\'{2}\''.format(id_, mods, opts['environment'])
        )
    return ret


def show_sls(mods, saltenv='base', test=None, **kwargs):
    '''
    Display the state data from a specific sls or list of sls files on the
@@ -48,6 +48,10 @@ log = logging.getLogger(__name__)
# The name salt will identify the lib by
__virtualname__ = 'virtualbox'

# If no clone mode is specified in the virtualbox profile,
# default to 0, which was the old default value.
DEFAULT_CLONE_MODE = 0


def __virtual__():
    '''

@@ -85,6 +89,30 @@ def get_configured_provider():
    return configured


def map_clonemode(vm_info):
    """
    Convert the virtualbox config file values for clone_mode into the integers the API requires
    """
    mode_map = {
        'state': 0,
        'child': 1,
        'all': 2
    }

    if not vm_info:
        return DEFAULT_CLONE_MODE

    if 'clonemode' not in vm_info:
        return DEFAULT_CLONE_MODE

    if vm_info['clonemode'] in mode_map:
        return mode_map[vm_info['clonemode']]
    else:
        raise SaltCloudSystemExit(
            "Illegal clonemode for virtualbox profile. Legal values are: {}".format(','.join(mode_map.keys()))
        )
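A dependency-free sketch of the same mapping (``SaltCloudSystemExit`` swapped for ``ValueError`` so it runs standalone) shows how a profile's ``clonemode`` string becomes an API integer:

```python
# Standalone version of the clone-mode mapping, for illustration.
DEFAULT_CLONE_MODE = 0
MODE_MAP = {'state': 0, 'child': 1, 'all': 2}

def map_clonemode(vm_info):
    # A missing profile or missing key falls back to the historical default.
    if not vm_info or 'clonemode' not in vm_info:
        return DEFAULT_CLONE_MODE
    try:
        return MODE_MAP[vm_info['clonemode']]
    except KeyError:
        raise ValueError('Illegal clonemode. Legal values are: {}'
                         .format(','.join(MODE_MAP)))
```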


def create(vm_info):
    """
    Creates a virtual machine from the given VM information.

@@ -102,6 +130,7 @@ def create(vm_info):
    profile: <dict>
    driver: <provider>:<profile>
    clonefrom: <vm_name>
    clonemode: <mode> (default: state, choices: state, child, all)
    }
    @type vm_info dict
    @return dict of resulting vm. !!!Passwords can and should be included!!!

@@ -133,6 +162,9 @@ def create(vm_info):
    key_filename = config.get_cloud_config_value(
        'private_key', vm_info, __opts__, search_global=False, default=None
    )
    clone_mode = map_clonemode(vm_info)
    wait_for_pattern = vm_info['waitforpattern'] if 'waitforpattern' in vm_info.keys() else None
    interface_index = vm_info['interfaceindex'] if 'interfaceindex' in vm_info.keys() else 0

    log.debug("Going to fire event: starting create")
    __utils__['cloud.fire_event'](

@@ -147,7 +179,8 @@ def create(vm_info):
    # to create the virtual machine.
    request_kwargs = {
        'name': vm_info['name'],
-        'clone_from': vm_info['clonefrom']
+        'clone_from': vm_info['clonefrom'],
+        'clone_mode': clone_mode
    }

    __utils__['cloud.fire_event'](

@@ -163,17 +196,17 @@ def create(vm_info):
    # Booting and deploying if needed
    if power:
        vb_start_vm(vm_name, timeout=boot_timeout)
-        ips = vb_wait_for_network_address(wait_for_ip_timeout, machine_name=vm_name)
+        ips = vb_wait_for_network_address(wait_for_ip_timeout, machine_name=vm_name, wait_for_pattern=wait_for_pattern)

        if len(ips):
-            ip = ips[0]
+            ip = ips[interface_index]
            log.info("[ {0} ] IPv4 is: {1}".format(vm_name, ip))
            # ssh or smb using ip and install salt only if deploy is True
            if deploy:
                vm_info['key_filename'] = key_filename
                vm_info['ssh_host'] = ip

-                res = __utils__['cloud.bootstrap'](vm_info)
+                res = __utils__['cloud.bootstrap'](vm_info, __opts__)
                vm_result.update(res)

    __utils__['cloud.fire_event'](
@@ -938,7 +938,7 @@ VALID_OPTS = {

    'queue_dirs': list,

-    # Instructs the minion to ping its master(s) every n number of seconds. Used
+    # Instructs the minion to ping its master(s) every n number of minutes. Used
    # primarily as a mitigation technique against minion disconnects.
    'ping_interval': int,
@@ -586,7 +586,18 @@ class RemoteFuncs(object):
        ret = {}
        if not salt.utils.verify.valid_id(self.opts, load['id']):
            return ret
-        match_type = load.get('tgt_type', 'glob')
+        expr_form = load.get('expr_form')
+        if expr_form is not None and 'tgt_type' not in load:
+            salt.utils.warn_until(
+                u'Neon',
+                u'_mine_get: minion {0} uses pre-Nitrogen API key '
+                u'"expr_form". Accepting for backwards compatibility '
+                u'but this is not guaranteed '
+                u'after the Neon release'.format(load['id'])
+            )
+            match_type = expr_form
+        else:
+            match_type = load.get('tgt_type', 'glob')
        if match_type.lower() == 'pillar':
            match_type = 'pillar_exact'
        if match_type.lower() == 'compound':
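Stripped of the deprecation warning, the key-precedence logic amounts to: honor ``tgt_type`` when present, otherwise fall back to the legacy ``expr_form`` key, defaulting to glob matching. A minimal sketch:

```python
def get_match_type(load):
    # The new-style 'tgt_type' key wins; the legacy 'expr_form'
    # key is accepted only for backwards compatibility.
    expr_form = load.get('expr_form')
    if expr_form is not None and 'tgt_type' not in load:
        return expr_form
    return load.get('tgt_type', 'glob')
```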

@@ -10,6 +10,7 @@ import socket
import ctypes
import os
import ipaddress
import salt.ext.six as six


class sockaddr(ctypes.Structure):

@@ -36,7 +37,7 @@ def inet_pton(address_family, ip_string):
    # This will catch IP Addresses such as 10.1.2
    if address_family == socket.AF_INET:
        try:
-            ipaddress.ip_address(ip_string.decode())
+            ipaddress.ip_address(six.u(ip_string))
        except ValueError:
            raise socket.error('illegal IP address string passed to inet_pton')
        return socket.inet_aton(ip_string)
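The reason for the ``ipaddress`` pre-check is that ``socket.inet_aton`` accepts shorthand forms like ``10.1.2``, while ``ipaddress.ip_address`` insists on a full dotted quad (the ``six.u`` change only makes the text-type conversion work on both Python 2 and 3):

```python
import ipaddress

def is_strict_ipv4(ip_string):
    # ip_address() rejects shorthand that inet_aton() would accept.
    try:
        ipaddress.ip_address(ip_string)
    except ValueError:
        return False
    return True
```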

@@ -716,12 +716,14 @@ def _virtual(osdata):
            pass
    if os.path.isfile('/proc/1/cgroup'):
        try:
            with salt.utils.fopen('/proc/1/cgroup', 'r') as fhr:
-                if ':/lxc/' in fhr.read():
-                    grains['virtual_subtype'] = 'LXC'
-                if ':/docker/' in fhr_contents or ':/system.slice/docker' in fhr_contents:
+                fhr_contents = fhr.read()
+                if ':/lxc/' in fhr_contents:
+                    grains['virtual_subtype'] = 'LXC'
+                else:
+                    if any(x in fhr_contents
+                           for x in (':/system.slice/docker', ':/docker/',
+                                     ':/docker-ce/')):
+                        grains['virtual_subtype'] = 'Docker'
        except IOError:
            pass
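The detection logic boils down to substring checks on the contents of ``/proc/1/cgroup``; factored out for illustration (the sample cgroup lines below are made up):

```python
def detect_container(cgroup_contents):
    # LXC takes precedence; otherwise look for any Docker-style cgroup
    # path, including the docker-ce paths this change adds.
    if ':/lxc/' in cgroup_contents:
        return 'LXC'
    if any(x in cgroup_contents
           for x in (':/system.slice/docker', ':/docker/', ':/docker-ce/')):
        return 'Docker'
    return None
```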

@@ -12,6 +12,7 @@ import logging

# Import salt libs
import salt.utils

__proxyenabled__ = ['*']
log = logging.getLogger(__name__)


@@ -31,16 +32,33 @@ def config():
    if 'conf_file' not in __opts__:
        return {}
    if os.path.isdir(__opts__['conf_file']):
-        gfn = os.path.join(
-            __opts__['conf_file'],
-            'grains'
-        )
+        if salt.utils.is_proxy():
+            gfn = os.path.join(
+                __opts__['conf_file'],
+                'proxy.d',
+                __opts__['id'],
+                'grains'
+            )
+        else:
+            gfn = os.path.join(
+                __opts__['conf_file'],
+                'grains'
+            )
    else:
-        gfn = os.path.join(
-            os.path.dirname(__opts__['conf_file']),
-            'grains'
-        )
+        if salt.utils.is_proxy():
+            gfn = os.path.join(
+                os.path.dirname(__opts__['conf_file']),
+                'proxy.d',
+                __opts__['id'],
+                'grains'
+            )
+        else:
+            gfn = os.path.join(
+                os.path.dirname(__opts__['conf_file']),
+                'grains'
+            )
    if os.path.isfile(gfn):
        log.debug('Loading static grains from %s', gfn)
        with salt.utils.fopen(gfn, 'rb') as fp_:
            try:
                return yaml.safe_load(fp_.read())
||||
|
|
salt/minion.py (221 lines changed)

@@ -862,6 +862,10 @@ class MinionManager(MinionBase):
             failed = False
             while True:
                 try:
+                    if minion.opts.get('beacons_before_connect', False):
+                        minion.setup_beacons(before_connect=True)
+                    if minion.opts.get('scheduler_before_connect', False):
+                        minion.setup_scheduler(before_connect=True)
                     yield minion.connect_master(failed=failed)
                     minion.tune_in(start=False)
                     break

@@ -936,6 +940,7 @@ class Minion(MinionBase):
         # True means the Minion is fully functional and ready to handle events.
         self.ready = False
         self.jid_queue = jid_queue or []
+        self.periodic_callbacks = {}

         if io_loop is None:
             if HAS_ZMQ:

@@ -967,6 +972,19 @@
         # post_master_init
         if not salt.utils.is_proxy():
             self.opts['grains'] = salt.loader.grains(opts)
+        else:
+            if self.opts.get('beacons_before_connect', False):
+                log.warning(
+                    '\'beacons_before_connect\' is not supported '
+                    'for proxy minions. Setting to False'
+                )
+                self.opts['beacons_before_connect'] = False
+            if self.opts.get('scheduler_before_connect', False):
+                log.warning(
+                    '\'scheduler_before_connect\' is not supported '
+                    'for proxy minions. Setting to False'
+                )
+                self.opts['scheduler_before_connect'] = False

         log.info('Creating minion process manager')

@@ -1070,19 +1088,22 @@
             pillarenv=self.opts.get('pillarenv')
         ).compile_pillar()

-        self.functions, self.returners, self.function_errors, self.executors = self._load_modules()
-        self.serial = salt.payload.Serial(self.opts)
-        self.mod_opts = self._prep_mod_opts()
-        self.matcher = Matcher(self.opts, self.functions)
-        self.beacons = salt.beacons.Beacon(self.opts, self.functions)
-        uid = salt.utils.get_uid(user=self.opts.get('user', None))
-        self.proc_dir = get_proc_dir(self.opts['cachedir'], uid=uid)
+        if not self.ready:
+            self._setup_core()
+        elif self.connected and self.opts['pillar']:
+            # The pillar has changed due to the connection to the master.
+            # Reload the functions so that they can use the new pillar data.
+            self.functions, self.returners, self.function_errors, self.executors = self._load_modules()
+            if hasattr(self, 'schedule'):
+                self.schedule.functions = self.functions
+                self.schedule.returners = self.returners

-        self.schedule = salt.utils.schedule.Schedule(
-            self.opts,
-            self.functions,
-            self.returners,
-            cleanup=[master_event(type='alive')])
+        if not hasattr(self, 'schedule'):
+            self.schedule = salt.utils.schedule.Schedule(
+                self.opts,
+                self.functions,
+                self.returners,
+                cleanup=[master_event(type='alive')])

         # add default scheduling jobs to the minions scheduler
         if self.opts['mine_enabled'] and 'mine.update' in self.functions:

@@ -1136,9 +1157,6 @@
                 self.schedule.delete_job(master_event(type='alive', master=self.opts['master']), persist=True)
                 self.schedule.delete_job(master_event(type='failback'), persist=True)

-        self.grains_cache = self.opts['grains']
-        self.ready = True
-
     def _return_retry_timer(self):
         '''
         Based on the minion configuration, either return a randomized timer or

@@ -1896,6 +1914,8 @@
         func = data.get('func', None)
         name = data.get('name', None)
         beacon_data = data.get('beacon_data', None)
+        include_pillar = data.get(u'include_pillar', None)
+        include_opts = data.get(u'include_opts', None)

         if func == 'add':
             self.beacons.add_beacon(name, beacon_data)

@@ -1912,7 +1932,9 @@
         elif func == 'disable_beacon':
             self.beacons.disable_beacon(name)
         elif func == 'list':
-            self.beacons.list_beacons()
+            self.beacons.list_beacons(include_opts, include_pillar)
+        elif func == u'list_available':
+            self.beacons.list_available_beacons()

     def environ_setenv(self, tag, data):
         '''

@@ -2176,6 +2198,118 @@
         except (ValueError, NameError):
             pass

+    def _setup_core(self):
+        '''
+        Set up the core minion attributes.
+        This is safe to call multiple times.
+        '''
+        if not self.ready:
+            # First call. Initialize.
+            self.functions, self.returners, self.function_errors, self.executors = self._load_modules()
+            self.serial = salt.payload.Serial(self.opts)
+            self.mod_opts = self._prep_mod_opts()
+            self.matcher = Matcher(self.opts, self.functions)
+            uid = salt.utils.get_uid(user=self.opts.get('user', None))
+            self.proc_dir = get_proc_dir(self.opts['cachedir'], uid=uid)
+            self.grains_cache = self.opts['grains']
+            self.ready = True
+
+    def setup_beacons(self, before_connect=False):
+        '''
+        Set up the beacons.
+        This is safe to call multiple times.
+        '''
+        self._setup_core()
+
+        loop_interval = self.opts['loop_interval']
+        new_periodic_callbacks = {}
+
+        if 'beacons' not in self.periodic_callbacks:
+            self.beacons = salt.beacons.Beacon(self.opts, self.functions)
+
+            def handle_beacons():
+                # Process Beacons
+                beacons = None
+                try:
+                    beacons = self.process_beacons(self.functions)
+                except Exception:
+                    log.critical('The beacon errored: ', exc_info=True)
+                if beacons and self.connected:
+                    self._fire_master(events=beacons)
+
+            new_periodic_callbacks['beacons'] = tornado.ioloop.PeriodicCallback(handle_beacons, loop_interval * 1000, io_loop=self.io_loop)
+            if before_connect:
+                # Make sure there is a chance for one iteration to occur before connect
+                handle_beacons()
+
+        if 'cleanup' not in self.periodic_callbacks:
+            new_periodic_callbacks['cleanup'] = tornado.ioloop.PeriodicCallback(self._fallback_cleanups, loop_interval * 1000, io_loop=self.io_loop)
+
+        # start all the other callbacks
+        for periodic_cb in six.itervalues(new_periodic_callbacks):
+            periodic_cb.start()
+
+        self.periodic_callbacks.update(new_periodic_callbacks)
+
+    def setup_scheduler(self, before_connect=False):
+        '''
+        Set up the scheduler.
+        This is safe to call multiple times.
+        '''
+        self._setup_core()
+
+        loop_interval = self.opts['loop_interval']
+        new_periodic_callbacks = {}
+
+        if 'schedule' not in self.periodic_callbacks:
+            if 'schedule' not in self.opts:
+                self.opts['schedule'] = {}
+            if not hasattr(self, 'schedule'):
+                self.schedule = salt.utils.schedule.Schedule(
+                    self.opts,
+                    self.functions,
+                    self.returners,
+                    cleanup=[master_event(type='alive')])
+
+            try:
+                if self.opts['grains_refresh_every']:  # If exists and is not zero. In minutes, not seconds!
+                    if self.opts['grains_refresh_every'] > 1:
+                        log.debug(
+                            'Enabling the grains refresher. Will run every {0} minutes.'.format(
+                                self.opts['grains_refresh_every'])
+                        )
+                    else:  # Clean up minute vs. minutes in log message
+                        log.debug(
+                            'Enabling the grains refresher. Will run every {0} minute.'.format(
+                                self.opts['grains_refresh_every'])
+                        )
+                    self._refresh_grains_watcher(
+                        abs(self.opts['grains_refresh_every'])
+                    )
+            except Exception as exc:
+                log.error(
+                    'Exception occurred in attempt to initialize grain refresh routine during minion tune-in: {0}'.format(
+                        exc)
+                )
+
+            # TODO: actually listen to the return and change period
+            def handle_schedule():
+                self.process_schedule(self, loop_interval)
+            new_periodic_callbacks['schedule'] = tornado.ioloop.PeriodicCallback(handle_schedule, 1000, io_loop=self.io_loop)
+
+            if before_connect:
+                # Make sure there is a chance for one iteration to occur before connect
+                handle_schedule()
+
+        if 'cleanup' not in self.periodic_callbacks:
+            new_periodic_callbacks['cleanup'] = tornado.ioloop.PeriodicCallback(self._fallback_cleanups, loop_interval * 1000, io_loop=self.io_loop)
+
+        # start all the other callbacks
+        for periodic_cb in six.itervalues(new_periodic_callbacks):
+            periodic_cb.start()
+
+        self.periodic_callbacks.update(new_periodic_callbacks)
+
     # Main Minion Tune In
     def tune_in(self, start=True):
         '''

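Both new setup methods guard on `self.periodic_callbacks` so that calling them again (once before connect and once from `tune_in`) never registers a duplicate timer. A toy model of that guarded-registration idea, with a hypothetical `CallbackRegistry` class standing in for the minion and tornado:

```python
class CallbackRegistry:
    """Toy model of the guard in setup_beacons()/setup_scheduler():
    a periodic callback is created only when its key is not already
    registered, so repeated setup calls are safe no-ops."""

    def __init__(self):
        self.periodic_callbacks = {}

    def setup(self, key, factory):
        new_periodic_callbacks = {}
        if key not in self.periodic_callbacks:
            new_periodic_callbacks[key] = factory()
        # only callbacks created by *this* call would be started here
        started = sorted(new_periodic_callbacks)
        self.periodic_callbacks.update(new_periodic_callbacks)
        return started


reg = CallbackRegistry()
first = reg.setup('beacons', lambda: object())
second = reg.setup('beacons', lambda: object())  # already registered
```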
@@ -2187,6 +2321,10 @@
         log.debug('Minion \'{0}\' trying to tune in'.format(self.opts['id']))

         if start:
+            if self.opts.get('beacons_before_connect', False):
+                self.setup_beacons(before_connect=True)
+            if self.opts.get('scheduler_before_connect', False):
+                self.setup_scheduler(before_connect=True)
             self.sync_connect_master()
         if self.connected:
             self._fire_master_minion_start()

@@ -2201,31 +2339,9 @@
         # On first startup execute a state run if configured to do so
         self._state_run()

-        loop_interval = self.opts['loop_interval']
+        self.setup_beacons()
+        self.setup_scheduler()

-        try:
-            if self.opts['grains_refresh_every']:  # If exists and is not zero. In minutes, not seconds!
-                if self.opts['grains_refresh_every'] > 1:
-                    log.debug(
-                        'Enabling the grains refresher. Will run every {0} minutes.'.format(
-                            self.opts['grains_refresh_every'])
-                    )
-                else:  # Clean up minute vs. minutes in log message
-                    log.debug(
-                        'Enabling the grains refresher. Will run every {0} minute.'.format(
-                            self.opts['grains_refresh_every'])
-
-                    )
-                self._refresh_grains_watcher(
-                    abs(self.opts['grains_refresh_every'])
-                )
-        except Exception as exc:
-            log.error(
-                'Exception occurred in attempt to initialize grain refresh routine during minion tune-in: {0}'.format(
-                    exc)
-            )
-
-        self.periodic_callbacks = {}
         # schedule the stuff that runs every interval
         ping_interval = self.opts.get('ping_interval', 0) * 60
         if ping_interval > 0 and self.connected:

@@ -2243,30 +2359,7 @@
                 except Exception:
                     log.warning('Attempt to ping master failed.', exc_on_loglevel=logging.DEBUG)
             self.periodic_callbacks['ping'] = tornado.ioloop.PeriodicCallback(ping_master, ping_interval * 1000, io_loop=self.io_loop)
-
-        self.periodic_callbacks['cleanup'] = tornado.ioloop.PeriodicCallback(self._fallback_cleanups, loop_interval * 1000, io_loop=self.io_loop)
-
-        def handle_beacons():
-            # Process Beacons
-            beacons = None
-            try:
-                beacons = self.process_beacons(self.functions)
-            except Exception:
-                log.critical('The beacon errored: ', exc_info=True)
-            if beacons and self.connected:
-                self._fire_master(events=beacons, sync=False)
-
-        self.periodic_callbacks['beacons'] = tornado.ioloop.PeriodicCallback(handle_beacons, loop_interval * 1000, io_loop=self.io_loop)
-
-        # TODO: actually listen to the return and change period
-        def handle_schedule():
-            self.process_schedule(self, loop_interval)
-        if hasattr(self, 'schedule'):
-            self.periodic_callbacks['schedule'] = tornado.ioloop.PeriodicCallback(handle_schedule, 1000, io_loop=self.io_loop)
-
-        # start all the other callbacks
-        for periodic_cb in six.itervalues(self.periodic_callbacks):
-            periodic_cb.start()
+            self.periodic_callbacks['ping'].start()

         # add handler to subscriber
         if hasattr(self, 'pub_channel') and self.pub_channel is not None:

@@ -125,7 +125,7 @@ def cert(name,
         salt 'gitlab.example.com' acme.cert dev.example.com "[gitlab.example.com]" test_cert=True renew=14 webroot=/opt/gitlab/embedded/service/gitlab-rails/public
     '''

-    cmd = [LEA, 'certonly', '--quiet']
+    cmd = [LEA, 'certonly', '--non-interactive']

     cert_file = _cert_file(name, 'cert')
     if not __salt__['file.file_exists'](cert_file):

@@ -169,6 +169,9 @@ def atrm(*args):
     if not args:
         return {'jobs': {'removed': [], 'tag': None}}

+    # Convert all to strings
+    args = [str(arg) for arg in args]
+
     if args[0] == 'all':
         if len(args) > 1:
             opts = list(list(map(str, [j['job'] for j in atq(args[1])['jobs']])))

@@ -178,7 +181,7 @@ def atrm(*args):
             ret = {'jobs': {'removed': opts, 'tag': None}}
     else:
         opts = list(list(map(str, [i['job'] for i in atq()['jobs']
-                                   if i['job'] in args])))
+                                   if str(i['job']) in args])))
         ret = {'jobs': {'removed': opts, 'tag': None}}

     # Shim to produce output similar to what __virtual__() should do

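The `atrm()` fix normalizes every argument to `str` up front, because `at(1)` job IDs may arrive as ints from the CLI while the queue listing yields strings; without the coercion the membership test silently misses jobs. A standalone sketch of the normalization:

```python
def atrm_targets(args, queued_jobs):
    # Sketch of the atrm() fix: coerce everything to str once so that
    # int job IDs from the caller match the string IDs from the queue.
    args = [str(arg) for arg in args]
    return [str(job) for job in queued_jobs if str(job) in args]
```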
@@ -27,12 +27,22 @@ __func_alias__ = {
 }


-def list_(return_yaml=True):
+def list_(return_yaml=True,
+          include_pillar=True,
+          include_opts=True):
     '''
     List the beacons currently configured on the minion

-    :param return_yaml: Whether to return YAML formatted output, default True
-    :return: List of currently configured Beacons.
+    :param return_yaml: Whether to return YAML formatted output,
+                        default True
+
+    :param include_pillar: Whether to include beacons that are
+                           configured in pillar, default is True.
+
+    :param include_opts: Whether to include beacons that are
+                         configured in opts, default is True.
+
+    :return: List of currently configured Beacons.

     CLI Example:

@@ -45,7 +55,10 @@ def list_(return_yaml=True):

     try:
         eventer = salt.utils.event.get_event('minion', opts=__opts__)
-        res = __salt__['event.fire']({'func': 'list'}, 'manage_beacons')
+        res = __salt__['event.fire']({'func': 'list',
+                                      'include_pillar': include_pillar,
+                                      'include_opts': include_opts},
+                                     'manage_beacons')
         if res:
             event_ret = eventer.get_event(tag='/salt/minion/minion_beacons_list_complete', wait=30)
             log.debug('event_ret {0}'.format(event_ret))

@@ -69,6 +82,47 @@ def list_(return_yaml=True):
         return {'beacons': {}}


+def list_available(return_yaml=True):
+    '''
+    List the beacons currently available on the minion
+
+    :param return_yaml: Whether to return YAML formatted output, default True
+    :return: List of currently configured Beacons.
+
+    CLI Example:
+
+    .. code-block:: bash
+
+        salt '*' beacons.list_available
+
+    '''
+    beacons = None
+
+    try:
+        eventer = salt.utils.event.get_event('minion', opts=__opts__)
+        res = __salt__['event.fire']({'func': 'list_available'}, 'manage_beacons')
+        if res:
+            event_ret = eventer.get_event(tag='/salt/minion/minion_beacons_list_available_complete', wait=30)
+            if event_ret and event_ret['complete']:
+                beacons = event_ret['beacons']
+    except KeyError:
+        # Effectively a no-op, since we can't really return without an event system
+        ret = {}
+        ret['result'] = False
+        ret['comment'] = 'Event module not available. Beacon add failed.'
+        return ret
+
+    if beacons:
+        if return_yaml:
+            tmp = {'beacons': beacons}
+            yaml_out = yaml.safe_dump(tmp, default_flow_style=False)
+            return yaml_out
+        else:
+            return beacons
+    else:
+        return {'beacons': {}}
+
+
 def add(name, beacon_data, **kwargs):
     '''
     Add a beacon on the minion

@@ -91,6 +145,10 @@ def add(name, beacon_data, **kwargs):
         ret['comment'] = 'Beacon {0} is already configured.'.format(name)
         return ret

+    if name not in list_available(return_yaml=False):
+        ret['comment'] = 'Beacon "{0}" is not available.'.format(name)
+        return ret
+
     if 'test' in kwargs and kwargs['test']:
         ret['result'] = True
         ret['comment'] = 'Beacon: {0} would be added.'.format(name)

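`list_()` and the new `list_available()` both use the same pattern: fire a request on the minion event bus, then block on a tagged completion event. A toy synchronous stand-in for that round-trip (all class and tag names here are hypothetical; the real Salt version waits with a 30-second timeout):

```python
class ToyEventBus:
    """Toy synchronous stand-in for the fire-then-wait event pattern:
    a request event is handled immediately and its reply is stored
    under a tag that the caller then reads."""

    def __init__(self):
        self.handlers = {}
        self.replies = {}

    def register(self, tag, handler):
        self.handlers[tag] = handler

    def fire(self, data, tag):
        # Mirrors __salt__['event.fire'](data, tag): False when nothing
        # is listening, True when the request was dispatched.
        if tag not in self.handlers:
            return False
        reply_tag, payload = self.handlers[tag](data)
        self.replies[reply_tag] = payload
        return True

    def get_event(self, tag):
        # Mirrors eventer.get_event(tag=..., wait=30), minus the timeout.
        return self.replies.get(tag)


bus = ToyEventBus()
bus.register('manage_beacons',
             lambda data: ('beacons_list_complete',
                           {'complete': True, 'beacons': {'inotify': {}}}))
res = bus.fire({'func': 'list'}, 'manage_beacons')
event_ret = bus.get_event('beacons_list_complete') if res else None
```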
@@ -130,7 +188,10 @@ def add(name, beacon_data, **kwargs):
                 if name in beacons and beacons[name] == beacon_data:
                     ret['result'] = True
                     ret['comment'] = 'Added beacon: {0}.'.format(name)
-                return ret
+                else:
+                    ret['result'] = False
+                    ret['comment'] = event_ret['comment']
+                return ret
     except KeyError:
         # Effectively a no-op, since we can't really return without an event system
         ret['comment'] = 'Event module not available. Beacon add failed.'

@@ -215,7 +276,10 @@ def modify(name, beacon_data, **kwargs):
                 if name in beacons and beacons[name] == beacon_data:
                     ret['result'] = True
                     ret['comment'] = 'Modified beacon: {0}.'.format(name)
-                return ret
+                else:
+                    ret['result'] = False
+                    ret['comment'] = event_ret['comment']
+                return ret
     except KeyError:
         # Effectively a no-op, since we can't really return without an event system
         ret['comment'] = 'Event module not available. Beacon add failed.'

@@ -257,6 +321,9 @@ def delete(name, **kwargs):
                     ret['result'] = True
                     ret['comment'] = 'Deleted beacon: {0}.'.format(name)
                     return ret
+                else:
+                    ret['result'] = False
+                    ret['comment'] = event_ret['comment']
     except KeyError:
         # Effectively a no-op, since we can't really return without an event system
         ret['comment'] = 'Event module not available. Beacon add failed.'

@@ -279,7 +346,7 @@ def save():
     ret = {'comment': [],
            'result': True}

-    beacons = list_(return_yaml=False)
+    beacons = list_(return_yaml=False, include_pillar=False)

     # move this file into an configurable opt
     sfn = '{0}/{1}/beacons.conf'.format(__opts__['config_dir'],

@@ -332,7 +399,7 @@ def enable(**kwargs):
                 else:
                     ret['result'] = False
                     ret['comment'] = 'Failed to enable beacons on minion.'
-        return ret
+                return ret
     except KeyError:
         # Effectively a no-op, since we can't really return without an event system
         ret['comment'] = 'Event module not available. Beacons enable job failed.'

@@ -372,7 +439,7 @@ def disable(**kwargs):
                 else:
                     ret['result'] = False
                     ret['comment'] = 'Failed to disable beacons on minion.'
-        return ret
+                return ret
     except KeyError:
         # Effectively a no-op, since we can't really return without an event system
         ret['comment'] = 'Event module not available. Beacons enable job failed.'

@@ -435,7 +502,10 @@ def enable_beacon(name, **kwargs):
                 else:
                     ret['result'] = False
                     ret['comment'] = 'Failed to enable beacon {0} on minion.'.format(name)
-                return ret
+            else:
+                ret['result'] = False
+                ret['comment'] = event_ret['comment']
+            return ret
     except KeyError:
         # Effectively a no-op, since we can't really return without an event system
         ret['comment'] = 'Event module not available. Beacon enable job failed.'

@@ -488,7 +558,10 @@ def disable_beacon(name, **kwargs):
                 else:
                     ret['result'] = False
                     ret['comment'] = 'Failed to disable beacon on minion.'
-                return ret
+            else:
+                ret['result'] = False
+                ret['comment'] = event_ret['comment']
+            return ret
     except KeyError:
         # Effectively a no-op, since we can't really return without an event system
         ret['comment'] = 'Event module not available. Beacon disable job failed.'

@@ -3127,6 +3127,12 @@ def run_bg(cmd,
     Note that ``env`` represents the environment variables for the command, and
     should be formatted as a dict, or a YAML string which resolves to a dict.

+    .. note::
+
+        If the init system is systemd and the backgrounded task should run even if the salt-minion process
+        is restarted, prepend ``systemd-run --scope`` to the command. This will reparent the process in its
+        own scope separate from salt-minion, and will not be affected by restarting the minion service.
+
     :param str cmd: The command to run. ex: 'ls -lart /home'

     :param str cwd: The current working directory to execute the command in.

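The added note recommends prepending `systemd-run --scope` when a backgrounded command must survive a salt-minion restart. A trivial sketch of building such an argv (plain list handling, not Salt's actual `cmd` module API):

```python
def background_cmd(cmd, survive_minion_restart=False):
    # Per the docstring note: 'systemd-run --scope' reparents the task
    # into its own scope, so restarting salt-minion does not kill it.
    prefix = ['systemd-run', '--scope'] if survive_minion_restart else []
    return prefix + list(cmd)
```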
@@ -147,8 +147,24 @@ def _render_tab(lst):
                 cron['cmd']
                 )
             )
-    for spec in lst['special']:
-        ret.append('{0} {1}\n'.format(spec['spec'], spec['cmd']))
+    for cron in lst['special']:
+        if cron['comment'] is not None or cron['identifier'] is not None:
+            comment = '#'
+            if cron['comment']:
+                comment += ' {0}'.format(
+                    cron['comment'].rstrip().replace('\n', '\n# '))
+            if cron['identifier']:
+                comment += ' {0}:{1}'.format(SALT_CRON_IDENTIFIER,
+                                             cron['identifier'])
+
+            comment += '\n'
+            ret.append(comment)
+        ret.append('{0}{1} {2}\n'.format(
+            cron['commented'] is True and '#DISABLED#' or '',
+            cron['spec'],
+            cron['cmd']
+            )
+        )
     return ret

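The new `_render_tab()` branch writes an optional marker comment (the free-text comment plus `SALT_CRON_IDENTIFIER:<id>`) before each special entry and honors the `commented` flag with a `#DISABLED#` prefix. A standalone sketch of that rendering step:

```python
SALT_CRON_IDENTIFIER = 'SALT_CRON_IDENTIFIER'


def render_special(cron):
    # Sketch of the new special-entry rendering above: an optional
    # '# comment SALT_CRON_IDENTIFIER:id' line, then the (possibly
    # disabled) '@special command' line.
    lines = []
    if cron['comment'] is not None or cron['identifier'] is not None:
        comment = '#'
        if cron['comment']:
            comment += ' {0}'.format(
                cron['comment'].rstrip().replace('\n', '\n# '))
        if cron['identifier']:
            comment += ' {0}:{1}'.format(SALT_CRON_IDENTIFIER,
                                         cron['identifier'])
        lines.append(comment + '\n')
    lines.append('{0}{1} {2}\n'.format(
        '#DISABLED#' if cron['commented'] else '',
        cron['spec'], cron['cmd']))
    return lines
```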
@@ -317,7 +333,15 @@ def list_tab(user):
                 continue
             dat['spec'] = comps[0]
             dat['cmd'] = ' '.join(comps[1:])
+            dat['identifier'] = identifier
+            dat['comment'] = comment
+            dat['commented'] = False
+            if commented_cron_job:
+                dat['commented'] = True
             ret['special'].append(dat)
+            identifier = None
+            comment = None
+            commented_cron_job = False
         elif line.startswith('#'):
             # It's a comment! Catch it!
             comment_line = line.lstrip('# ')

@@ -363,11 +387,17 @@ def list_tab(user):
             ret['pre'].append(line)
     return ret


 # For consistency's sake
 ls = salt.utils.alias_function(list_tab, 'ls')


-def set_special(user, special, cmd):
+def set_special(user,
+                special,
+                cmd,
+                commented=False,
+                comment=None,
+                identifier=None):
     '''
     Set up a special command in the crontab.

@@ -379,11 +409,60 @@ def set_special(user,
     '''
     lst = list_tab(user)
     for cron in lst['special']:
-        if special == cron['spec'] and cmd == cron['cmd']:
+        cid = _cron_id(cron)
+        if _cron_matched(cron, cmd, identifier):
+            test_setted_id = (
+                cron['identifier'] is None
+                and SALT_CRON_NO_IDENTIFIER
+                or cron['identifier'])
+            tests = [(cron['comment'], comment),
+                     (cron['commented'], commented),
+                     (identifier, test_setted_id),
+                     (cron['spec'], special)]
+            if cid or identifier:
+                tests.append((cron['cmd'], cmd))
+            if any([_needs_change(x, y) for x, y in tests]):
+                rm_special(user, cmd, identifier=cid)
+
+                # Use old values when setting the new job if there was no
+                # change needed for a given parameter
+                if not _needs_change(cron['spec'], special):
+                    special = cron['spec']
+                if not _needs_change(cron['commented'], commented):
+                    commented = cron['commented']
+                if not _needs_change(cron['comment'], comment):
+                    comment = cron['comment']
+                if not _needs_change(cron['cmd'], cmd):
+                    cmd = cron['cmd']
+                    if (
+                        cid == SALT_CRON_NO_IDENTIFIER
+                    ):
+                        if identifier:
+                            cid = identifier
+                        if (
+                            cid == SALT_CRON_NO_IDENTIFIER
+                            and cron['identifier'] is None
+                        ):
+                            cid = None
+                        cron['identifier'] = cid
+                    if not cid or (
+                        cid and not _needs_change(cid, identifier)
+                    ):
+                        identifier = cid
+                jret = set_special(user, special, cmd, commented=commented,
+                                   comment=comment, identifier=identifier)
+                if jret == 'new':
+                    return 'updated'
+                else:
+                    return jret
             return 'present'
-    spec = {'spec': special,
-            'cmd': cmd}
-    lst['special'].append(spec)
+    cron = {'spec': special,
+            'cmd': cmd,
+            'identifier': identifier,
+            'comment': comment,
+            'commented': commented}
+    lst['special'].append(cron)

     comdat = _write_cron_lines(user, _render_tab(lst))
     if comdat['retcode']:
         # Failed to commit, return the error

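`set_special()` now reports `'present'` only when every tracked field matches, and otherwise rewrites the entry and reports `'updated'`. A simplified sketch of that decision (`needs_change` here is a simplified stand-in for cron.py's `_needs_change` helper, which has extra special cases):

```python
def needs_change(old, new):
    # Simplified stand-in for cron.py's _needs_change(): a None request
    # means "leave this field as-is".
    return new is not None and old != new


def job_status(cron, special, cmd, comment, commented):
    # 'present' only when nothing would change; otherwise the job would
    # be removed and re-set, and the caller reports 'updated'.
    tests = [(cron['comment'], comment),
             (cron['commented'], commented),
             (cron['spec'], special),
             (cron['cmd'], cmd)]
    if any(needs_change(old, new) for old, new in tests):
        return 'updated'
    return 'present'
```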
@@ -536,7 +615,7 @@ def set_job(user,
     return 'new'


-def rm_special(user, special, cmd):
+def rm_special(user, cmd, special=None, identifier=None):
     '''
     Remove a special cron job for a specified user.

@@ -544,22 +623,28 @@ def rm_special(user, cmd, special=None, identifier=None):

     .. code-block:: bash

-        salt '*' cron.rm_job root @hourly /usr/bin/foo
+        salt '*' cron.rm_special root /usr/bin/foo
     '''
     lst = list_tab(user)
     ret = 'absent'
     rm_ = None
     for ind in range(len(lst['special'])):
-        if lst['special'][ind]['cmd'] == cmd and \
-                lst['special'][ind]['spec'] == special:
-            lst['special'].pop(ind)
-            rm_ = ind
+        if _cron_matched(lst['special'][ind], cmd, identifier=identifier):
+            if special is None:
+                # No special param was specified
+                rm_ = ind
+            else:
+                if lst['special'][ind]['spec'] == special:
+                    rm_ = ind
         if rm_ is not None:
             break
     if rm_ is not None:
+        lst['special'].pop(rm_)
         ret = 'removed'
-        comdat = _write_cron_lines(user, _render_tab(lst))
-        if comdat['retcode']:
-            # Failed to commit
-            return comdat['stderr']
+    comdat = _write_cron_lines(user, _render_tab(lst))
+    if comdat['retcode']:
+        # Failed to commit, return the error
+        return comdat['stderr']
     return ret

@@ -610,6 +695,7 @@ def rm_job(user,
         return comdat['stderr']
     return ret


 rm = salt.utils.alias_function(rm_job, 'rm')

@@ -1861,14 +1861,14 @@ def line(path, content=None, match=None, mode=None, location=None,
     if changed:
         if show_changes:
             with salt.utils.fopen(path, 'r') as fp_:
-                path_content = _splitlines_preserving_trailing_newline(
-                    fp_.read())
-            changes_diff = ''.join(difflib.unified_diff(
-                path_content, _splitlines_preserving_trailing_newline(body)))
+                path_content = fp_.read().splitlines(True)
+            changes_diff = ''.join(difflib.unified_diff(path_content, body.splitlines(True)))
         if __opts__['test'] is False:
             fh_ = None
             try:
-                fh_ = salt.utils.atomicfile.atomic_open(path, 'w')
+                # Make sure we match the file mode from salt.utils.fopen
+                mode = 'wb' if six.PY2 and salt.utils.is_windows() else 'w'
+                fh_ = salt.utils.atomicfile.atomic_open(path, mode)
                 fh_.write(body)
             finally:
                 if fh_:

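The hunk above drops the `_splitlines_preserving_trailing_newline` helper in favor of `str.splitlines(True)`, which keeps each line's trailing newline — exactly what `difflib.unified_diff` expects. A minimal standalone demonstration:

```python
import difflib

old = 'alpha\nbeta\n'
new = 'alpha\ngamma\n'

# splitlines(True) keeps the trailing '\n' on each line, so the joined
# unified_diff output renders cleanly without "no newline" artifacts.
changes_diff = ''.join(difflib.unified_diff(old.splitlines(True),
                                            new.splitlines(True)))
```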
@@ -3368,7 +3368,11 @@ def stats(path, hash_type=None, follow_symlinks=True):
             pstat = os.lstat(path)
         except OSError:
             # Not a broken symlink, just a nonexistent path
-            return ret
+            # NOTE: The file.directory state checks the content of the error
+            # message in this exception. Any changes made to the message for this
+            # exception will reflect the file.directory state as well, and will
+            # likely require changes there.
+            raise CommandExecutionError('Path not found: {0}'.format(path))
     else:
         if follow_symlinks:
             pstat = os.stat(path)

@@ -3832,8 +3836,15 @@ def get_managed(
         parsed_scheme = urlparsed_source.scheme
         parsed_path = os.path.join(
                 urlparsed_source.netloc, urlparsed_source.path).rstrip(os.sep)
+        unix_local_source = parsed_scheme in ('file', '')

-        if parsed_scheme and parsed_scheme.lower() in 'abcdefghijklmnopqrstuvwxyz':
+        if unix_local_source:
+            sfn = parsed_path
+            if not os.path.exists(sfn):
+                msg = 'Local file source {0} does not exist'.format(sfn)
+                return '', {}, msg
+
+        if parsed_scheme and parsed_scheme.lower() in string.ascii_lowercase:
             parsed_path = ':'.join([parsed_scheme, parsed_path])
             parsed_scheme = 'file'

@@ -3841,9 +3852,10 @@ def get_managed(
             source_sum = __salt__['cp.hash_file'](source, saltenv)
             if not source_sum:
                 return '', {}, 'Source file {0} not found'.format(source)
-        elif not source_hash and parsed_scheme == 'file':
+        elif not source_hash and unix_local_source:
             source_sum = _get_local_file_source_sum(parsed_path)
         elif not source_hash and source.startswith(os.sep):
+            # This should happen on Windows
             source_sum = _get_local_file_source_sum(source)
         else:
             if not skip_verify:

@@ -4193,12 +4205,6 @@ def check_perms(name, ret, user, group, mode, follow_symlinks=False):
     # Check permissions
     perms = {}
     cur = stats(name, follow_symlinks=follow_symlinks)
-    if not cur:
-        # NOTE: The file.directory state checks the content of the error
-        # message in this exception. Any changes made to the message for this
-        # exception will reflect the file.directory state as well, and will
-        # likely require changes there.
-        raise CommandExecutionError('{0} does not exist'.format(name))
     perms['luser'] = cur['user']
     perms['lgroup'] = cur['group']
     perms['lmode'] = salt.utils.normalize_mode(cur['mode'])

@@ -4498,11 +4504,18 @@ def check_file_meta(
     '''
     changes = {}
     if not source_sum:
-        source_sum = {}
-    lstats = stats(name, hash_type=source_sum.get('hash_type', None), follow_symlinks=False)
+        source_sum = dict()
+
+    try:
+        lstats = stats(name, hash_type=source_sum.get('hash_type', None),
+                       follow_symlinks=False)
+    except CommandExecutionError:
+        lstats = {}
+
     if not lstats:
         changes['newfile'] = name
         return changes
+
     if 'hsum' in source_sum:
         if source_sum['hsum'] != lstats['sum']:
             if not sfn and source:

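Because `stats()` now raises `CommandExecutionError` instead of returning an empty dict, `check_file_meta()` wraps the call and maps the exception to "this will be a new file". A standalone sketch of that pattern (`detect_new_file` and `stats_fn` are hypothetical names for illustration):

```python
class CommandExecutionError(Exception):
    pass


def detect_new_file(name, stats_fn):
    # Sketch of the check_file_meta() change: a raising stats() is
    # translated into an empty result, which means a new file.
    try:
        lstats = stats_fn(name)
    except CommandExecutionError:
        lstats = {}
    if not lstats:
        return {'newfile': name}
    return {}


def missing(path):
    raise CommandExecutionError('Path not found: {0}'.format(path))
```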
@@ -4741,21 +4754,22 @@ def manage_file(name,
     if source_sum and ('hsum' in source_sum):
         source_sum['hsum'] = source_sum['hsum'].lower()

-    if source and not sfn:
-        # File is not present, cache it
-        sfn = __salt__['cp.cache_file'](source, saltenv)
+    if source:
         if not sfn:
-            return _error(
-                ret, 'Source file \'{0}\' not found'.format(source))
-        htype = source_sum.get('hash_type', __opts__['hash_type'])
-        # Recalculate source sum now that file has been cached
-        source_sum = {
-            'hash_type': htype,
-            'hsum': get_hash(sfn, form=htype)
-        }
+            # File is not present, cache it
+            sfn = __salt__['cp.cache_file'](source, saltenv)
+            if not sfn:
+                return _error(
+                    ret, 'Source file \'{0}\' not found'.format(source))
+            htype = source_sum.get('hash_type', __opts__['hash_type'])
+            # Recalculate source sum now that file has been cached
+            source_sum = {
+                'hash_type': htype,
+                'hsum': get_hash(sfn, form=htype)
+            }

         if keep_mode:
-            if _urlparse(source).scheme in ('salt', 'file') \
-                    or source.startswith('/'):
+            if _urlparse(source).scheme in ('salt', 'file', ''):
                 try:
                     mode = __salt__['cp.stat_file'](source, saltenv=saltenv, octal=True)
                 except Exception as exc:

@@ -4785,7 +4799,7 @@ def manage_file(name,
             # source, and we are not skipping checksum verification, then
             # verify that it matches the specified checksum.
             if not skip_verify \
-                    and _urlparse(source).scheme not in ('salt', ''):
+                    and _urlparse(source).scheme != 'salt':
                 dl_sum = get_hash(sfn, source_sum['hash_type'])
                 if dl_sum != source_sum['hsum']:
                     ret['comment'] = (

@ -4973,8 +4987,6 @@ def manage_file(name,
|
|||
makedirs_(name, user=user, group=group, mode=dir_mode)
|
||||
|
||||
if source:
|
||||
# It is a new file, set the diff accordingly
|
||||
ret['changes']['diff'] = 'New file'
|
||||
# Apply the new file
|
||||
if not sfn:
|
||||
sfn = __salt__['cp.cache_file'](source, saltenv)
|
||||
|
@ -4998,6 +5010,8 @@ def manage_file(name,
|
|||
)
|
||||
ret['result'] = False
|
||||
return ret
|
||||
# It is a new file, set the diff accordingly
|
||||
ret['changes']['diff'] = 'New file'
|
||||
if not os.path.isdir(contain_dir):
|
||||
if makedirs:
|
||||
_set_mode_and_make_dirs(name, dir_mode, mode, user, group)
|
||||
|
|
|
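The `manage_file` hunks above cache the source file and then recalculate its checksum so the verification compares against the freshly cached copy. A minimal sketch of that recompute-and-compare pattern, outside of Salt (the helper names `get_file_hash` and `verify_cached_file` are hypothetical, not Salt APIs):

```python
import hashlib

def get_file_hash(path, hash_type='sha256'):
    # Stream the file in chunks so large files are not loaded into memory.
    digest = hashlib.new(hash_type)
    with open(path, 'rb') as fh:
        for chunk in iter(lambda: fh.read(65536), b''):
            digest.update(chunk)
    return digest.hexdigest()

def verify_cached_file(path, source_sum):
    # Mirrors the diff: recompute with the expected hash_type, then
    # compare against the expected digest ('hsum').
    htype = source_sum.get('hash_type', 'sha256')
    return get_file_hash(path, htype) == source_sum['hsum']
```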
@@ -1,6 +1,13 @@
# -*- coding: utf-8 -*-
'''
Return/control aspects of the grains data

Grains set or altered with this module are stored in the 'grains'
file on the minions. By default, this file is located at: ``/etc/salt/grains``

.. Note::

This does **NOT** override any grains set in the minion config file.
'''

# Import python libs

@@ -222,20 +229,44 @@ def setvals(grains, destructive=False):
raise SaltException('setvals grains must be a dictionary.')
grains = {}
if os.path.isfile(__opts__['conf_file']):
gfn = os.path.join(
os.path.dirname(__opts__['conf_file']),
'grains'
)
if salt.utils.is_proxy():
gfn = os.path.join(
os.path.dirname(__opts__['conf_file']),
'proxy.d',
__opts__['id'],
'grains'
)
else:
gfn = os.path.join(
os.path.dirname(__opts__['conf_file']),
'grains'
)
elif os.path.isdir(__opts__['conf_file']):
gfn = os.path.join(
__opts__['conf_file'],
'grains'
)
if salt.utils.is_proxy():
gfn = os.path.join(
__opts__['conf_file'],
'proxy.d',
__opts__['id'],
'grains'
)
else:
gfn = os.path.join(
__opts__['conf_file'],
'grains'
)
else:
gfn = os.path.join(
os.path.dirname(__opts__['conf_file']),
'grains'
)
if salt.utils.is_proxy():
gfn = os.path.join(
os.path.dirname(__opts__['conf_file']),
'proxy.d',
__opts__['id'],
'grains'
)
else:
gfn = os.path.join(
os.path.dirname(__opts__['conf_file']),
'grains'
)

if os.path.isfile(gfn):
with salt.utils.fopen(gfn, 'rb') as fp_:
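The `setvals` change above picks a different grains file location for proxy minions (`proxy.d/<id>/grains`) versus regular minions, depending on whether `conf_file` is a file or a directory. The branching can be condensed into a small sketch (the function name and signature are illustrative, not Salt's API):

```python
import os

def grains_file_path(conf_file, is_proxy=False, proxy_id=None):
    # conf_file may point at a config file (use its directory) or may
    # already be a directory.
    base = conf_file if os.path.isdir(conf_file) else os.path.dirname(conf_file)
    if is_proxy:
        # Proxy minions keep per-proxy grains under proxy.d/<id>/grains.
        return os.path.join(base, 'proxy.d', proxy_id, 'grains')
    return os.path.join(base, 'grains')
```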
@@ -585,7 +585,8 @@ def _parse_members(settype, members):
def _parse_member(settype, member, strict=False):
subtypes = settype.split(':')[1].split(',')

parts = member.split(' ')
all_parts = member.split(' ', 1)
parts = all_parts[0].split(',')

parsed_member = []
for i in range(len(subtypes)):

@@ -610,8 +611,8 @@ def _parse_member(settype, member, strict=False):

parsed_member.append(part)

if len(parts) > len(subtypes):
parsed_member.append(' '.join(parts[len(subtypes):]))
if len(all_parts) > 1:
parsed_member.append(all_parts[1])

return parsed_member
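The ipset fix above splits off the trailing comment (everything after the first space) with `split(' ', 1)` *before* splitting the member itself, so spaces and commas inside the comment no longer corrupt the parsed fields. A self-contained sketch of that parsing order (simplified signature, hypothetical helper name):

```python
def parse_member(subtypes, member):
    # First isolate the comment: only the first space separates the
    # member fields from free-form comment text.
    all_parts = member.split(' ', 1)
    # Then split the member fields themselves on commas.
    parts = all_parts[0].split(',')
    parsed = parts[:len(subtypes)]
    if len(all_parts) > 1:
        # The comment is kept whole, commas and spaces included.
        parsed.append(all_parts[1])
    return parsed
```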
@@ -70,7 +70,7 @@ def __init__(opts):
def _get_driver(profile):
config = __salt__['config.option']('libcloud_dns')[profile]
cls = get_driver(config['driver'])
args = config
args = config.copy()
del args['driver']
args['key'] = config.get('key')
args['secret'] = config.get('secret', None)
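The libcloud_dns one-liner above matters because `args = config` only aliases the dict: the subsequent `del args['driver']` would silently mutate the shared profile configuration. A shallow `.copy()` keeps the original intact, as this small demonstration shows:

```python
config = {'driver': 'godaddy', 'key': 'k', 'secret': 's'}

# Buggy pattern (old code): `args = config` aliases the same dict, so
# deleting 'driver' would also remove it from the profile config.
# Fixed pattern (new code): work on an independent shallow copy.
args = config.copy()
del args['driver']
```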
@@ -144,17 +144,17 @@ def _parse_acl(acl, user, group):
# Set the permissions fields
octal = 0
vals['permissions'] = {}
if 'r' in comps[2]:
if 'r' in comps[-1]:
octal += 4
vals['permissions']['read'] = True
else:
vals['permissions']['read'] = False
if 'w' in comps[2]:
if 'w' in comps[-1]:
octal += 2
vals['permissions']['write'] = True
else:
vals['permissions']['write'] = False
if 'x' in comps[2]:
if 'x' in comps[-1]:
octal += 1
vals['permissions']['execute'] = True
else:
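The ACL hunk above switches from `comps[2]` to `comps[-1]` so the `rwx` field is read from the last colon-separated component regardless of how many fields precede it, then accumulates the usual 4/2/1 bit weights. A compact sketch of that accumulation (hypothetical helper, not Salt's `_parse_acl` signature):

```python
def perms_to_octal(perm_field):
    # perm_field is the last component of an ACL entry, e.g. 'rw-'.
    perms = {
        'read': 'r' in perm_field,
        'write': 'w' in perm_field,
        'execute': 'x' in perm_field,
    }
    # Sum the conventional permission bit weights: r=4, w=2, x=1.
    octal = 4 * perms['read'] + 2 * perms['write'] + 1 * perms['execute']
    return octal, perms
```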
@@ -19,11 +19,12 @@ import logging
import time

# Import 3rd-party libs
from salt.ext.six.moves import range # pylint: disable=import-error,redefined-builtin
from salt.ext.six.moves import range, map # pylint: disable=import-error,redefined-builtin
from salt.ext.six import string_types

# Import salt libs
import salt.utils
import salt.utils.files
import salt.utils.decorators as decorators
from salt.utils.locales import sdecode as _sdecode
from salt.exceptions import CommandExecutionError, SaltInvocationError

@@ -520,16 +521,72 @@ def get_auto_login():
return False if ret['retcode'] else ret['stdout']


def enable_auto_login(name):
def _kcpassword(password):
'''
Internal function for obfuscating the password used for AutoLogin
This is later written as the contents of the ``/etc/kcpassword`` file

.. versionadded:: 2017.7.3

Adapted from:
https://github.com/timsutton/osx-vm-templates/blob/master/scripts/support/set_kcpassword.py

Args:

password(str):
The password to obfuscate

Returns:
str: The obfuscated password
'''
# The magic 11 bytes - these are just repeated
# 0x7D 0x89 0x52 0x23 0xD2 0xBC 0xDD 0xEA 0xA3 0xB9 0x1F
key = [125, 137, 82, 35, 210, 188, 221, 234, 163, 185, 31]
key_len = len(key)

# Convert each character to a byte
password = list(map(ord, password))

# pad password length out to an even multiple of key length
remainder = len(password) % key_len
if remainder > 0:
password = password + [0] * (key_len - remainder)

# Break the password into chunks the size of len(key) (11)
for chunk_index in range(0, len(password), len(key)):
# Reset the key_index to 0 for each iteration
key_index = 0

# Do an XOR on each character of that chunk of the password with the
# corresponding item in the key
# The length of the password, or the length of the key, whichever is
# smaller
for password_index in range(chunk_index,
min(chunk_index + len(key), len(password))):
password[password_index] = password[password_index] ^ key[key_index]
key_index += 1

# Convert each byte back to a character
password = list(map(chr, password))
return ''.join(password)


def enable_auto_login(name, password):
'''
.. versionadded:: 2016.3.0

Configures the machine to auto login with the specified user

:param str name: The user account use for auto login
Args:

:return: True if successful, False if not
:rtype: bool
name (str): The user account use for auto login

password (str): The password to user for auto login

.. versionadded:: 2017.7.3

Returns:
bool: ``True`` if successful, otherwise ``False``

CLI Example:

@@ -537,6 +594,7 @@ def enable_auto_login(name):

salt '*' user.enable_auto_login stevej
'''
# Make the entry into the defaults file
cmd = ['defaults',
'write',
'/Library/Preferences/com.apple.loginwindow.plist',

@@ -544,6 +602,13 @@ def enable_auto_login(name):
name]
__salt__['cmd.run'](cmd)
current = get_auto_login()

# Create/Update the kcpassword file with an obfuscated password
o_password = _kcpassword(password=password)
with salt.utils.files.set_umask(0o077):
with salt.utils.fopen('/etc/kcpassword', 'w') as fd:
fd.write(o_password)

return current if isinstance(current, bool) else current.lower() == name.lower()


@@ -553,8 +618,8 @@ def disable_auto_login():

Disables auto login on the machine

:return: True if successful, False if not
:rtype: bool
Returns:
bool: ``True`` if successful, otherwise ``False``

CLI Example:

@@ -562,6 +627,11 @@ def disable_auto_login():

salt '*' user.disable_auto_login
'''
# Remove the kcpassword file
cmd = 'rm -f /etc/kcpassword'
__salt__['cmd.run'](cmd)

# Remove the entry from the defaults file
cmd = ['defaults',
'delete',
'/Library/Preferences/com.apple.loginwindow.plist',
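Because `_kcpassword` resets `key_index` at the start of every 11-byte chunk, the chunked inner loop is equivalent to XORing byte `j` of the padded password with `key[j % 11]`. That makes the whole routine expressible in a few lines, and since XOR is its own inverse, applying it twice recovers the original password (plus NUL padding). A condensed, standalone sketch (hypothetical function name):

```python
def kcpassword(password):
    # The repeating 11-byte magic key used by macOS /etc/kcpassword.
    key = [125, 137, 82, 35, 210, 188, 221, 234, 163, 185, 31]
    data = [ord(c) for c in password]
    # Pad out to a whole multiple of the key length with NUL bytes.
    remainder = len(data) % len(key)
    if remainder > 0:
        data += [0] * (len(key) - remainder)
    # XOR each byte against the key, cycling through it.
    return ''.join(chr(b ^ key[i % len(key)]) for i, b in enumerate(data))
```

Round-tripping demonstrates the involution: obfuscating the obfuscated string and stripping the XORed padding yields the original password.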
@@ -29,6 +29,7 @@ log = logging.getLogger(__name__)
from salt.ext import six
import salt.utils.templates
import salt.utils.napalm
import salt.utils.versions
from salt.utils.napalm import proxy_napalm_wrap

# ----------------------------------------------------------------------------------------------------------------------

@@ -228,7 +229,7 @@ def _config_logic(napalm_device,


@proxy_napalm_wrap
def connected(**kwarvs): # pylint: disable=unused-argument
def connected(**kwargs): # pylint: disable=unused-argument
'''
Specifies if the connection to the device succeeded.

@@ -932,6 +933,7 @@ def load_config(filename=None,
debug=False,
replace=False,
inherit_napalm_device=None,
saltenv='base',
**kwargs): # pylint: disable=unused-argument
'''
Applies configuration changes on the device. It can be loaded from a file or from inline string.

@@ -947,10 +949,21 @@ def load_config(filename=None,
To replace the config, set ``replace`` to ``True``.

filename
Path to the file containing the desired configuration. By default is None.
Path to the file containing the desired configuration.
This can be specified using the absolute path to the file,
or using one of the following URL schemes:

- ``salt://``, to fetch the template from the Salt fileserver.
- ``http://`` or ``https://``
- ``ftp://``
- ``s3://``
- ``swift://``

.. versionchanged:: 2017.7.3

text
String containing the desired configuration.
This argument is ignored when ``filename`` is specified.

test: False
Dry run? If set as ``True``, will apply the config, discard and return the changes. Default: ``False``

@@ -970,6 +983,11 @@ def load_config(filename=None,

.. versionadded:: 2016.11.2

saltenv: ``base``
Specifies the Salt environment name.

.. versionadded:: 2017.7.3

:return: a dictionary having the following keys:

* result (bool): if the config was applied successfully. It is ``False`` only in case of failure. In case \

@@ -999,7 +1017,6 @@ def load_config(filename=None,
'diff': '[edit interfaces xe-0/0/5]+ description "Adding a description";'
}
'''

fun = 'load_merge_candidate'
if replace:
fun = 'load_replace_candidate'

@@ -1012,21 +1029,28 @@ def load_config(filename=None,
# compare_config, discard / commit
# which have to be over the same session
napalm_device['CLOSE'] = False # pylint: disable=undefined-variable
if filename:
text = __salt__['cp.get_file_str'](filename, saltenv=saltenv)
if text is False:
# When using salt:// or https://, if the resource is not available,
# it will either raise an exception, or return False.
ret = {
'result': False,
'out': None
}
ret['comment'] = 'Unable to read from {}. Please specify a valid file or text.'.format(filename)
log.error(ret['comment'])
return ret
_loaded = salt.utils.napalm.call(
napalm_device, # pylint: disable=undefined-variable
fun,
**{
'filename': filename,
'config': text
}
)
loaded_config = None
if debug:
if filename:
with salt.utils.fopen(filename) as rfh:
loaded_config = rfh.read()
else:
loaded_config = text
loaded_config = text
return _config_logic(napalm_device, # pylint: disable=undefined-variable
_loaded,
test=test,

@@ -1072,6 +1096,10 @@ def load_template(template_name,

To replace the config, set ``replace`` to ``True``.

.. warning::
The support for native NAPALM templates will be dropped in Salt Fluorine.
Implicitly, the ``template_path`` argument will be removed.

template_name
Identifies path to the template source.
The template can be either stored on the local machine, either remotely.

@@ -1108,6 +1136,9 @@ def load_template(template_name,
in order to find the template, this argument must be provided:
``template_path: /absolute/path/to/``.

.. note::
This argument will be deprecated beginning with release codename ``Fluorine``.

template_hash: None
Hash of the template file. Format: ``{hash_type: 'md5', 'hsum': <md5sum>}``

@@ -1274,7 +1305,11 @@ def load_template(template_name,
'out': None
}
loaded_config = None

if template_path:
salt.utils.versions.warn_until(
'Fluorine',
'Use of `template_path` detected. This argument will be removed in Salt Fluorine.'
)
# prechecks
if template_engine not in salt.utils.templates.TEMPLATE_REGISTRY:
_loaded.update({
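The `load_config` hunk above fetches the file contents up front and turns a `False` return from the fetcher (a missing `salt://` or `http://` resource) into a clean error dict instead of passing `False` along as the config payload. The control flow can be sketched independently of Salt (the function and its `fetch` callable are illustrative):

```python
def read_config_source(fetch, filename=None, text=None):
    # When a filename is given, its contents win over inline text.
    if filename:
        text = fetch(filename)
        if text is False:
            # A missing remote resource returns False rather than raising,
            # so it must be caught before being treated as configuration.
            return {'result': False, 'out': None,
                    'comment': 'Unable to read from {}. '
                               'Please specify a valid file or text.'.format(filename)}
    return {'result': True, 'out': text}
```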
@@ -245,7 +245,7 @@ def install_ruby(ruby, runas=None):

ret = {}
ret = _rbenv_exec(['install', ruby], env=env, runas=runas, ret=ret)
if ret['retcode'] == 0:
if ret is not False and ret['retcode'] == 0:
rehash(runas=runas)
return ret['stderr']
else:
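The rbenv fix above guards the subscript: `_rbenv_exec` can return `False` when rbenv is unavailable, and `False['retcode']` would raise a `TypeError`. The `is not False` check short-circuits before the lookup. A minimal sketch of the guard (hypothetical wrapper name):

```python
def run_install(exec_fn):
    # exec_fn returns either False (tool unavailable) or a result dict.
    ret = exec_fn()
    # Check for False BEFORE subscripting -- `and` short-circuits, so the
    # dict lookup only happens when ret is a real result.
    if ret is not False and ret['retcode'] == 0:
        return ret['stderr']
    return False
```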
@ -24,7 +24,7 @@ Values or Entries
|
|||
Values/Entries are name/data pairs. There can be many values in a key. The
|
||||
(Default) value corresponds to the Key, the rest are their own value pairs.
|
||||
|
||||
:depends: - winreg Python module
|
||||
:depends: - PyWin32
|
||||
'''
|
||||
# When production windows installer is using Python 3, Python 2 code can be removed
|
||||
|
||||
|
@ -35,14 +35,13 @@ from __future__ import unicode_literals
|
|||
import sys
|
||||
import logging
|
||||
from salt.ext.six.moves import range # pylint: disable=W0622,import-error
|
||||
from salt.ext import six
|
||||
|
||||
# Import third party libs
|
||||
try:
|
||||
from salt.ext.six.moves import winreg as _winreg # pylint: disable=import-error,no-name-in-module
|
||||
from win32con import HWND_BROADCAST, WM_SETTINGCHANGE
|
||||
from win32api import RegCreateKeyEx, RegSetValueEx, RegFlushKey, \
|
||||
RegCloseKey, error as win32apiError, SendMessage
|
||||
import win32gui
|
||||
import win32api
|
||||
import win32con
|
||||
import pywintypes
|
||||
HAS_WINDOWS_MODULES = True
|
||||
except ImportError:
|
||||
HAS_WINDOWS_MODULES = False
|
||||
|
@ -60,7 +59,7 @@ __virtualname__ = 'reg'
|
|||
|
||||
def __virtual__():
|
||||
'''
|
||||
Only works on Windows systems with the _winreg python module
|
||||
Only works on Windows systems with the PyWin32
|
||||
'''
|
||||
if not salt.utils.is_windows():
|
||||
return (False, 'reg execution module failed to load: '
|
||||
|
@ -69,106 +68,76 @@ def __virtual__():
|
|||
if not HAS_WINDOWS_MODULES:
|
||||
return (False, 'reg execution module failed to load: '
|
||||
'One of the following libraries did not load: '
|
||||
+ '_winreg, win32gui, win32con, win32api')
|
||||
+ 'win32gui, win32con, win32api')
|
||||
|
||||
return __virtualname__
|
||||
|
||||
|
||||
# winreg in python 2 is hard coded to use codex 'mbcs', which uses
|
||||
# encoding that the user has assign. The function _unicode_to_mbcs
|
||||
# and _unicode_to_mbcs help with this.
|
||||
def _to_mbcs(vdata):
|
||||
'''
|
||||
Converts unicode to to current users character encoding. Use this for values
|
||||
returned by reg functions
|
||||
'''
|
||||
return salt.utils.to_unicode(vdata, 'mbcs')
|
||||
|
||||
|
||||
def _unicode_to_mbcs(instr):
|
||||
def _to_unicode(vdata):
|
||||
'''
|
||||
Converts unicode to to current users character encoding.
|
||||
Converts from current users character encoding to unicode. Use this for
|
||||
parameters being pass to reg functions
|
||||
'''
|
||||
if isinstance(instr, six.text_type):
|
||||
# unicode to windows utf8
|
||||
return instr.encode('mbcs')
|
||||
else:
|
||||
# Assume its byte str or not a str/unicode
|
||||
return instr
|
||||
|
||||
|
||||
def _mbcs_to_unicode(instr):
|
||||
'''
|
||||
Converts from current users character encoding to unicode.
|
||||
When instr has a value of None, the return value of the function
|
||||
will also be None.
|
||||
'''
|
||||
if instr is None or isinstance(instr, six.text_type):
|
||||
return instr
|
||||
else:
|
||||
return six.text_type(instr, 'mbcs')
|
||||
|
||||
|
||||
def _mbcs_to_unicode_wrap(obj, vtype):
|
||||
'''
|
||||
Wraps _mbcs_to_unicode for use with registry vdata
|
||||
'''
|
||||
if vtype == 'REG_BINARY':
|
||||
# We should be able to leave it alone if the user has passed binary data in yaml with
|
||||
# binary !!
|
||||
# In python < 3 this should have type str and in python 3+ this should be a byte array
|
||||
return obj
|
||||
if isinstance(obj, list):
|
||||
return [_mbcs_to_unicode(x) for x in obj]
|
||||
elif isinstance(obj, six.integer_types):
|
||||
return obj
|
||||
else:
|
||||
return _mbcs_to_unicode(obj)
|
||||
return salt.utils.to_unicode(vdata, 'utf-8')
|
||||
|
||||
|
||||
class Registry(object): # pylint: disable=R0903
|
||||
'''
|
||||
Delay '_winreg' usage until this module is used
|
||||
Delay usage until this module is used
|
||||
'''
|
||||
def __init__(self):
|
||||
self.hkeys = {
|
||||
'HKEY_CURRENT_USER': _winreg.HKEY_CURRENT_USER,
|
||||
'HKEY_LOCAL_MACHINE': _winreg.HKEY_LOCAL_MACHINE,
|
||||
'HKEY_USERS': _winreg.HKEY_USERS,
|
||||
'HKCU': _winreg.HKEY_CURRENT_USER,
|
||||
'HKLM': _winreg.HKEY_LOCAL_MACHINE,
|
||||
'HKU': _winreg.HKEY_USERS,
|
||||
'HKEY_CURRENT_USER': win32con.HKEY_CURRENT_USER,
|
||||
'HKEY_LOCAL_MACHINE': win32con.HKEY_LOCAL_MACHINE,
|
||||
'HKEY_USERS': win32con.HKEY_USERS,
|
||||
'HKCU': win32con.HKEY_CURRENT_USER,
|
||||
'HKLM': win32con.HKEY_LOCAL_MACHINE,
|
||||
'HKU': win32con.HKEY_USERS,
|
||||
}
|
||||
self.vtype = {
|
||||
'REG_BINARY': _winreg.REG_BINARY,
|
||||
'REG_DWORD': _winreg.REG_DWORD,
|
||||
'REG_EXPAND_SZ': _winreg.REG_EXPAND_SZ,
|
||||
'REG_MULTI_SZ': _winreg.REG_MULTI_SZ,
|
||||
'REG_SZ': _winreg.REG_SZ
|
||||
'REG_BINARY': win32con.REG_BINARY,
|
||||
'REG_DWORD': win32con.REG_DWORD,
|
||||
'REG_EXPAND_SZ': win32con.REG_EXPAND_SZ,
|
||||
'REG_MULTI_SZ': win32con.REG_MULTI_SZ,
|
||||
'REG_SZ': win32con.REG_SZ,
|
||||
'REG_QWORD': win32con.REG_QWORD
|
||||
}
|
||||
self.opttype = {
|
||||
'REG_OPTION_NON_VOLATILE': _winreg.REG_OPTION_NON_VOLATILE,
|
||||
'REG_OPTION_VOLATILE': _winreg.REG_OPTION_VOLATILE
|
||||
'REG_OPTION_NON_VOLATILE': 0,
|
||||
'REG_OPTION_VOLATILE': 1
|
||||
}
|
||||
# Return Unicode due to from __future__ import unicode_literals
|
||||
self.vtype_reverse = {
|
||||
_winreg.REG_BINARY: 'REG_BINARY',
|
||||
_winreg.REG_DWORD: 'REG_DWORD',
|
||||
_winreg.REG_EXPAND_SZ: 'REG_EXPAND_SZ',
|
||||
_winreg.REG_MULTI_SZ: 'REG_MULTI_SZ',
|
||||
_winreg.REG_SZ: 'REG_SZ',
|
||||
# REG_QWORD isn't in the winreg library
|
||||
11: 'REG_QWORD'
|
||||
win32con.REG_BINARY: 'REG_BINARY',
|
||||
win32con.REG_DWORD: 'REG_DWORD',
|
||||
win32con.REG_EXPAND_SZ: 'REG_EXPAND_SZ',
|
||||
win32con.REG_MULTI_SZ: 'REG_MULTI_SZ',
|
||||
win32con.REG_SZ: 'REG_SZ',
|
||||
win32con.REG_QWORD: 'REG_QWORD'
|
||||
}
|
||||
self.opttype_reverse = {
|
||||
_winreg.REG_OPTION_NON_VOLATILE: 'REG_OPTION_NON_VOLATILE',
|
||||
_winreg.REG_OPTION_VOLATILE: 'REG_OPTION_VOLATILE'
|
||||
0: 'REG_OPTION_NON_VOLATILE',
|
||||
1: 'REG_OPTION_VOLATILE'
|
||||
}
|
||||
# delete_key_recursive uses this to check the subkey contains enough \
|
||||
# as we do not want to remove all or most of the registry
|
||||
self.subkey_slash_check = {
|
||||
_winreg.HKEY_CURRENT_USER: 0,
|
||||
_winreg.HKEY_LOCAL_MACHINE: 1,
|
||||
_winreg.HKEY_USERS: 1
|
||||
win32con.HKEY_CURRENT_USER: 0,
|
||||
win32con.HKEY_LOCAL_MACHINE: 1,
|
||||
win32con.HKEY_USERS: 1
|
||||
}
|
||||
|
||||
self.registry_32 = {
|
||||
True: _winreg.KEY_READ | _winreg.KEY_WOW64_32KEY,
|
||||
False: _winreg.KEY_READ,
|
||||
True: win32con.KEY_READ | win32con.KEY_WOW64_32KEY,
|
||||
False: win32con.KEY_READ,
|
||||
}
|
||||
|
||||
def __getattr__(self, k):
|
||||
|
@ -191,21 +160,16 @@ def _key_exists(hive, key, use_32bit_registry=False):
|
|||
:return: Returns True if found, False if not found
|
||||
:rtype: bool
|
||||
'''
|
||||
|
||||
if PY2:
|
||||
local_hive = _mbcs_to_unicode(hive)
|
||||
local_key = _unicode_to_mbcs(key)
|
||||
else:
|
||||
local_hive = hive
|
||||
local_key = key
|
||||
local_hive = _to_unicode(hive)
|
||||
local_key = _to_unicode(key)
|
||||
|
||||
registry = Registry()
|
||||
hkey = registry.hkeys[local_hive]
|
||||
access_mask = registry.registry_32[use_32bit_registry]
|
||||
|
||||
try:
|
||||
handle = _winreg.OpenKey(hkey, local_key, 0, access_mask)
|
||||
_winreg.CloseKey(handle)
|
||||
handle = win32api.RegOpenKeyEx(hkey, local_key, 0, access_mask)
|
||||
win32api.RegCloseKey(handle)
|
||||
return True
|
||||
except WindowsError: # pylint: disable=E0602
|
||||
return False
|
||||
|
@ -224,7 +188,10 @@ def broadcast_change():
|
|||
salt '*' reg.broadcast_change
|
||||
'''
|
||||
# https://msdn.microsoft.com/en-us/library/windows/desktop/ms644952(v=vs.85).aspx
|
||||
return bool(SendMessage(HWND_BROADCAST, WM_SETTINGCHANGE, 0, 0))
|
||||
_, res = win32gui.SendMessageTimeout(
|
||||
win32con.HWND_BROADCAST, win32con.WM_SETTINGCHANGE, 0, 0,
|
||||
win32con.SMTO_ABORTIFHUNG, 5000)
|
||||
return not bool(res)
|
||||
|
||||
|
||||
def list_keys(hive, key=None, use_32bit_registry=False):
|
||||
|
@ -253,12 +220,8 @@ def list_keys(hive, key=None, use_32bit_registry=False):
|
|||
salt '*' reg.list_keys HKLM 'SOFTWARE'
|
||||
'''
|
||||
|
||||
if PY2:
|
||||
local_hive = _mbcs_to_unicode(hive)
|
||||
local_key = _unicode_to_mbcs(key)
|
||||
else:
|
||||
local_hive = hive
|
||||
local_key = key
|
||||
local_hive = _to_unicode(hive)
|
||||
local_key = _to_unicode(key)
|
||||
|
||||
registry = Registry()
|
||||
hkey = registry.hkeys[local_hive]
|
||||
|
@ -266,12 +229,12 @@ def list_keys(hive, key=None, use_32bit_registry=False):
|
|||
|
||||
subkeys = []
|
||||
try:
|
||||
handle = _winreg.OpenKey(hkey, local_key, 0, access_mask)
|
||||
handle = win32api.RegOpenKeyEx(hkey, local_key, 0, access_mask)
|
||||
|
||||
for i in range(_winreg.QueryInfoKey(handle)[0]):
|
||||
subkey = _winreg.EnumKey(handle, i)
|
||||
for i in range(win32api.RegQueryInfoKey(handle)[0]):
|
||||
subkey = win32api.RegEnumKey(handle, i)
|
||||
if PY2:
|
||||
subkeys.append(_mbcs_to_unicode(subkey))
|
||||
subkeys.append(_to_unicode(subkey))
|
||||
else:
|
||||
subkeys.append(subkey)
|
||||
|
||||
|
@ -312,13 +275,8 @@ def list_values(hive, key=None, use_32bit_registry=False, include_default=True):
|
|||
|
||||
salt '*' reg.list_values HKLM 'SYSTEM\\CurrentControlSet\\Services\\Tcpip'
|
||||
'''
|
||||
|
||||
if PY2:
|
||||
local_hive = _mbcs_to_unicode(hive)
|
||||
local_key = _unicode_to_mbcs(key)
|
||||
else:
|
||||
local_hive = hive
|
||||
local_key = key
|
||||
local_hive = _to_unicode(hive)
|
||||
local_key = _to_unicode(key)
|
||||
|
||||
registry = Registry()
|
||||
hkey = registry.hkeys[local_hive]
|
||||
|
@ -327,37 +285,21 @@ def list_values(hive, key=None, use_32bit_registry=False, include_default=True):
|
|||
values = list()
|
||||
|
||||
try:
|
||||
handle = _winreg.OpenKey(hkey, local_key, 0, access_mask)
|
||||
handle = win32api.RegOpenKeyEx(hkey, local_key, 0, access_mask)
|
||||
|
||||
for i in range(_winreg.QueryInfoKey(handle)[1]):
|
||||
vname, vdata, vtype = _winreg.EnumValue(handle, i)
|
||||
for i in range(win32api.RegQueryInfoKey(handle)[1]):
|
||||
vname, vdata, vtype = win32api.RegEnumValue(handle, i)
|
||||
|
||||
if not vname:
|
||||
vname = "(Default)"
|
||||
|
||||
value = {'hive': local_hive,
|
||||
'key': local_key,
|
||||
'vname': vname,
|
||||
'vdata': vdata,
|
||||
'vname': _to_mbcs(vname),
|
||||
'vdata': _to_mbcs(vdata),
|
||||
'vtype': registry.vtype_reverse[vtype],
|
||||
'success': True}
|
||||
values.append(value)
|
||||
if include_default:
|
||||
# Get the default value for the key
|
||||
value = {'hive': local_hive,
|
||||
'key': local_key,
|
||||
'vname': '(Default)',
|
||||
'vdata': None,
|
||||
'success': True}
|
||||
try:
|
||||
# QueryValueEx returns unicode data
|
||||
vdata, vtype = _winreg.QueryValueEx(handle, '(Default)')
|
||||
if vdata or vdata in [0, '']:
|
||||
value['vtype'] = registry.vtype_reverse[vtype]
|
||||
value['vdata'] = vdata
|
||||
else:
|
||||
value['comment'] = 'Empty Value'
|
||||
except WindowsError: # pylint: disable=E0602
|
||||
value['vdata'] = ('(value not set)')
|
||||
value['vtype'] = 'REG_SZ'
|
||||
values.append(value)
|
||||
except WindowsError as exc: # pylint: disable=E0602
|
||||
log.debug(exc)
|
||||
log.debug(r'Cannot find key: {0}\{1}'.format(hive, key))
|
||||
|
@ -403,30 +345,19 @@ def read_value(hive, key, vname=None, use_32bit_registry=False):
|
|||
|
||||
salt '*' reg.read_value HKEY_LOCAL_MACHINE 'SOFTWARE\Salt' 'version'
|
||||
'''
|
||||
|
||||
# If no name is passed, the default value of the key will be returned
|
||||
# The value name is Default
|
||||
|
||||
# Setup the return array
|
||||
if PY2:
|
||||
ret = {'hive': _mbcs_to_unicode(hive),
|
||||
'key': _mbcs_to_unicode(key),
|
||||
'vname': _mbcs_to_unicode(vname),
|
||||
'vdata': None,
|
||||
'success': True}
|
||||
local_hive = _mbcs_to_unicode(hive)
|
||||
local_key = _unicode_to_mbcs(key)
|
||||
local_vname = _unicode_to_mbcs(vname)
|
||||
local_hive = _to_unicode(hive)
|
||||
local_key = _to_unicode(key)
|
||||
local_vname = _to_unicode(vname)
|
||||
|
||||
else:
|
||||
ret = {'hive': hive,
|
||||
'key': key,
|
||||
'vname': vname,
|
||||
'vdata': None,
|
||||
'success': True}
|
||||
local_hive = hive
|
||||
local_key = key
|
||||
local_vname = vname
|
||||
ret = {'hive': local_hive,
|
||||
'key': local_key,
|
||||
'vname': local_vname,
|
||||
'vdata': None,
|
||||
'success': True}
|
||||
|
||||
if not vname:
|
||||
ret['vname'] = '(Default)'
|
||||
|
@ -436,19 +367,22 @@ def read_value(hive, key, vname=None, use_32bit_registry=False):
|
|||
access_mask = registry.registry_32[use_32bit_registry]
|
||||
|
||||
try:
|
||||
handle = _winreg.OpenKey(hkey, local_key, 0, access_mask)
|
||||
handle = win32api.RegOpenKeyEx(hkey, local_key, 0, access_mask)
|
||||
try:
|
||||
# QueryValueEx returns unicode data
|
||||
vdata, vtype = _winreg.QueryValueEx(handle, local_vname)
|
||||
# RegQueryValueEx returns and accepts unicode data
|
||||
vdata, vtype = win32api.RegQueryValueEx(handle, local_vname)
|
||||
if vdata or vdata in [0, '']:
|
||||
ret['vtype'] = registry.vtype_reverse[vtype]
|
||||
ret['vdata'] = vdata
|
||||
if vtype == 7:
|
||||
ret['vdata'] = [_to_mbcs(i) for i in vdata]
|
||||
else:
|
||||
ret['vdata'] = _to_mbcs(vdata)
|
||||
else:
|
||||
ret['comment'] = 'Empty Value'
|
||||
except WindowsError: # pylint: disable=E0602
|
||||
ret['vdata'] = ('(value not set)')
|
||||
ret['vtype'] = 'REG_SZ'
|
||||
except WindowsError as exc: # pylint: disable=E0602
|
||||
except pywintypes.error as exc: # pylint: disable=E0602
|
||||
log.debug(exc)
|
||||
log.debug('Cannot find key: {0}\\{1}'.format(local_hive, local_key))
|
||||
ret['comment'] = 'Cannot find key: {0}\\{1}'.format(local_hive, local_key)
|
||||
|
@ -555,42 +489,47 @@ def set_value(hive,
|
|||
salt '*' reg.set_value HKEY_LOCAL_MACHINE 'SOFTWARE\\Salt' 'version' '2015.5.2' \\
|
||||
vtype=REG_LIST vdata='[a,b,c]'
|
||||
'''
|
||||
|
||||
if PY2:
|
||||
try:
|
||||
local_hive = _mbcs_to_unicode(hive)
|
||||
local_key = _mbcs_to_unicode(key)
|
||||
local_vname = _mbcs_to_unicode(vname)
|
||||
local_vtype = _mbcs_to_unicode(vtype)
|
||||
local_vdata = _mbcs_to_unicode_wrap(vdata, local_vtype)
|
||||
except TypeError as exc: # pylint: disable=E0602
|
||||
log.error(exc, exc_info=True)
|
||||
return False
|
||||
else:
|
||||
local_hive = hive
|
||||
local_key = key
|
||||
local_vname = vname
|
||||
local_vdata = vdata
|
||||
local_vtype = vtype
|
||||
local_hive = _to_unicode(hive)
|
||||
local_key = _to_unicode(key)
|
||||
local_vname = _to_unicode(vname)
|
||||
local_vtype = _to_unicode(vtype)
|
||||
|
||||
registry = Registry()
|
||||
hkey = registry.hkeys[local_hive]
|
||||
vtype_value = registry.vtype[local_vtype]
|
||||
access_mask = registry.registry_32[use_32bit_registry] | _winreg.KEY_ALL_ACCESS
|
||||
access_mask = registry.registry_32[use_32bit_registry] | win32con.KEY_ALL_ACCESS
|
||||
|
||||
# Check data type and cast to expected type
|
||||
# int will automatically become long on 64bit numbers
|
||||
# https://www.python.org/dev/peps/pep-0237/
|
||||
|
||||
# String Types to Unicode
|
||||
if vtype_value in [1, 2]:
|
||||
local_vdata = _to_unicode(vdata)
|
||||
# Don't touch binary...
|
||||
elif vtype_value == 3:
|
||||
local_vdata = vdata
|
||||
# Make sure REG_MULTI_SZ is a list of strings
|
||||
elif vtype_value == 7:
|
||||
local_vdata = [_to_unicode(i) for i in vdata]
|
||||
# Everything else is int
|
||||
else:
|
||||
local_vdata = int(vdata)
|
||||
|
||||
if volatile:
|
||||
create_options = registry.opttype['REG_OPTION_VOLATILE']
|
||||
else:
|
||||
create_options = registry.opttype['REG_OPTION_NON_VOLATILE']
|
||||
|
||||
try:
|
||||
handle, _ = RegCreateKeyEx(hkey, local_key, access_mask,
|
||||
handle, _ = win32api.RegCreateKeyEx(hkey, local_key, access_mask,
|
||||
Options=create_options)
|
||||
RegSetValueEx(handle, local_vname, 0, vtype_value, local_vdata)
|
||||
RegFlushKey(handle)
|
||||
RegCloseKey(handle)
|
||||
win32api.RegSetValueEx(handle, local_vname, 0, vtype_value, local_vdata)
|
||||
win32api.RegFlushKey(handle)
|
||||
win32api.RegCloseKey(handle)
|
||||
broadcast_change()
|
||||
return True
|
||||
except (win32apiError, SystemError, ValueError, TypeError) as exc: # pylint: disable=E0602
|
||||
except (win32api.error, SystemError, ValueError, TypeError) as exc: # pylint: disable=E0602
|
||||
log.error(exc, exc_info=True)
|
||||
return False
|
||||
|
||||
|
@@ -626,18 +565,14 @@ def delete_key_recursive(hive, key, use_32bit_registry=False):
         salt '*' reg.delete_key_recursive HKLM SOFTWARE\\salt
     '''
 
-    if PY2:
-        local_hive = _mbcs_to_unicode(hive)
-        local_key = _unicode_to_mbcs(key)
-    else:
-        local_hive = hive
-        local_key = key
+    local_hive = _to_unicode(hive)
+    local_key = _to_unicode(key)
 
     # Instantiate the registry object
     registry = Registry()
     hkey = registry.hkeys[local_hive]
     key_path = local_key
-    access_mask = registry.registry_32[use_32bit_registry] | _winreg.KEY_ALL_ACCESS
+    access_mask = registry.registry_32[use_32bit_registry] | win32con.KEY_ALL_ACCESS
 
     if not _key_exists(local_hive, local_key, use_32bit_registry):
         return False
@@ -654,17 +589,17 @@ def delete_key_recursive(hive, key, use_32bit_registry=False):
         i = 0
         while True:
             try:
-                subkey = _winreg.EnumKey(_key, i)
+                subkey = win32api.RegEnumKey(_key, i)
                 yield subkey
                 i += 1
-            except WindowsError:  # pylint: disable=E0602
+            except pywintypes.error:  # pylint: disable=E0602
                 break
 
     def _traverse_registry_tree(_hkey, _keypath, _ret, _access_mask):
         '''
         Traverse the registry tree i.e. dive into the tree
         '''
-        _key = _winreg.OpenKey(_hkey, _keypath, 0, _access_mask)
+        _key = win32api.RegOpenKeyEx(_hkey, _keypath, 0, _access_mask)
         for subkeyname in _subkeys(_key):
             subkeypath = r'{0}\{1}'.format(_keypath, subkeyname)
             _ret = _traverse_registry_tree(_hkey, subkeypath, _ret, access_mask)
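The `_subkeys` generator in the hunk above wraps the Win32 "call with an increasing index until the API raises" enumeration protocol. The same pattern can be sketched without the registry, substituting a plain sequence for the key handle and `IndexError` for `pywintypes.error`:

```python
def enumerate_until_error(seq):
    # Mirrors _subkeys: probe index 0, 1, 2, ... until the backend raises.
    i = 0
    while True:
        try:
            yield seq[i]  # stands in for win32api.RegEnumKey(_key, i)
            i += 1
        except IndexError:  # stands in for pywintypes.error
            break


print(list(enumerate_until_error(['a', 'b', 'c'])))  # → ['a', 'b', 'c']
```

This shape is common for C-style enumeration APIs that signal "no more items" with an error rather than a sentinel value.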
@@ -683,8 +618,8 @@ def delete_key_recursive(hive, key, use_32bit_registry=False):
     # Delete all sub_keys
     for sub_key_path in key_list:
         try:
-            key_handle = _winreg.OpenKey(hkey, sub_key_path, 0, access_mask)
-            _winreg.DeleteKey(key_handle, '')
+            key_handle = win32api.RegOpenKeyEx(hkey, sub_key_path, 0, access_mask)
+            win32api.RegDeleteKey(key_handle, '')
             ret['Deleted'].append(r'{0}\{1}'.format(hive, sub_key_path))
         except WindowsError as exc:  # pylint: disable=E0602
             log.error(exc, exc_info=True)
@@ -723,23 +658,18 @@ def delete_value(hive, key, vname=None, use_32bit_registry=False):
         salt '*' reg.delete_value HKEY_CURRENT_USER 'SOFTWARE\\Salt' 'version'
     '''
 
-    if PY2:
-        local_hive = _mbcs_to_unicode(hive)
-        local_key = _unicode_to_mbcs(key)
-        local_vname = _unicode_to_mbcs(vname)
-    else:
-        local_hive = hive
-        local_key = key
-        local_vname = vname
+    local_hive = _to_unicode(hive)
+    local_key = _to_unicode(key)
+    local_vname = _to_unicode(vname)
 
     registry = Registry()
     hkey = registry.hkeys[local_hive]
-    access_mask = registry.registry_32[use_32bit_registry] | _winreg.KEY_ALL_ACCESS
+    access_mask = registry.registry_32[use_32bit_registry] | win32con.KEY_ALL_ACCESS
 
     try:
-        handle = _winreg.OpenKey(hkey, local_key, 0, access_mask)
-        _winreg.DeleteValue(handle, local_vname)
-        _winreg.CloseKey(handle)
+        handle = win32api.RegOpenKeyEx(hkey, local_key, 0, access_mask)
+        win32api.RegDeleteValue(handle, local_vname)
+        win32api.RegCloseKey(handle)
         broadcast_change()
         return True
     except WindowsError as exc:  # pylint: disable=E0602
@@ -1084,8 +1084,8 @@ def build_routes(iface, **settings):
     log.debug("IPv4 routes:\n{0}".format(opts4))
     log.debug("IPv6 routes:\n{0}".format(opts6))
 
-    routecfg = template.render(routes=opts4)
-    routecfg6 = template.render(routes=opts6)
+    routecfg = template.render(routes=opts4, iface=iface)
+    routecfg6 = template.render(routes=opts6, iface=iface)
 
     if settings['test']:
         routes = _read_temp(routecfg)
@@ -99,17 +99,16 @@ def _set_retcode(ret, highstate=None):
         __context__['retcode'] = 2
 
 
-def _check_pillar(kwargs, pillar=None):
+def _get_pillar_errors(kwargs, pillar=None):
     '''
-    Check the pillar for errors, refuse to run the state if there are errors
-    in the pillar and return the pillar errors
+    Checks all pillars (external and internal) for errors.
+    Returns an error message if there are any, or None.
+
+    :param kwargs: dictionary of options
+    :param pillar: external pillar
+    :return: None or an error message
     '''
-    if kwargs.get('force'):
-        return True
-    pillar_dict = pillar if pillar is not None else __pillar__
-    if '_errors' in pillar_dict:
-        return False
-    return True
+    return None if kwargs.get('force') else (pillar or {}).get('_errors', __pillar__.get('_errors')) or None
 
 
 def _wait(jid):
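The new `_get_pillar_errors` above collapses the old boolean `_check_pillar` into one expression: `None` when forced or clean, otherwise the pillar's `_errors` list. Its decision table can be exercised with plain dicts; here the module-level `__pillar__` is replaced by an explicit `default_pillar` argument, an assumption made only so the sketch is self-contained:

```python
def get_pillar_errors(kwargs, pillar=None, default_pillar=None):
    # None if force is set or no '_errors' were recorded; else the error list.
    default_pillar = default_pillar or {}
    if kwargs.get('force'):
        return None
    return (pillar or {}).get('_errors', default_pillar.get('_errors')) or None
```

Note the two subtleties the one-liner preserves: `force` short-circuits everything, and an empty `_errors` list still yields `None` thanks to the trailing `or None`.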
@@ -411,10 +410,10 @@ def template(tem, queue=False, **kwargs):
                                context=__context__,
                                initial_pillar=_get_initial_pillar(opts))
 
-    if not _check_pillar(kwargs, st_.opts['pillar']):
+    errors = _get_pillar_errors(kwargs, pillar=st_.opts['pillar'])
+    if errors:
         __context__['retcode'] = 5
-        raise CommandExecutionError('Pillar failed to render',
-                                    info=st_.opts['pillar']['_errors'])
+        raise CommandExecutionError('Pillar failed to render', info=errors)
 
     if not tem.endswith('.sls'):
         tem = '{sls}.sls'.format(sls=tem)
@@ -493,6 +492,18 @@ def apply_(mods=None,
         Values passed this way will override Pillar values set via
         ``pillar_roots`` or an external Pillar source.
 
+    exclude
+        Exclude specific states from execution. Accepts a list of sls names, a
+        comma-separated string of sls names, or a list of dictionaries
+        containing ``sls`` or ``id`` keys. Glob-patterns may be used to match
+        multiple states.
+
+        .. code-block:: bash
+
+            salt '*' state.apply exclude=bar,baz
+            salt '*' state.apply exclude=foo*
+            salt '*' state.apply exclude="[{'id': 'id_to_exclude'}, {'sls': 'sls_to_exclude'}]"
+
     queue : False
         Instead of failing immediately when another state run is in progress,
         queue the new state run to begin running once the other has finished.
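The new ``exclude`` documentation above says glob patterns may match multiple sls names. The matching itself happens inside Salt's state compiler, but the documented semantics for string and glob forms can be sketched with the stdlib ``fnmatch`` module (the helper name is invented for the example):

```python
import fnmatch


def filter_excluded(sls_names, exclude):
    # Accept either a comma-separated string or a list, as documented above.
    if isinstance(exclude, str):
        exclude = exclude.split(',')
    return [name for name in sls_names
            if not any(fnmatch.fnmatch(name, pat) for pat in exclude)]


print(filter_excluded(['foo.web', 'bar', 'baz'], 'foo*,bar'))  # → ['baz']
```

The dictionary form (``{'id': ...}`` / ``{'sls': ...}``) is handled separately by the compiler's exclude statement and is not covered by this sketch.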
@@ -758,6 +769,18 @@ def highstate(test=None, queue=False, **kwargs):
 
         .. versionadded:: 2016.3.0
 
+    exclude
+        Exclude specific states from execution. Accepts a list of sls names, a
+        comma-separated string of sls names, or a list of dictionaries
+        containing ``sls`` or ``id`` keys. Glob-patterns may be used to match
+        multiple states.
+
+        .. code-block:: bash
+
+            salt '*' state.highstate exclude=bar,baz
+            salt '*' state.highstate exclude=foo*
+            salt '*' state.highstate exclude="[{'id': 'id_to_exclude'}, {'sls': 'sls_to_exclude'}]"
+
     saltenv
         Specify a salt fileserver environment to be used when applying states
@@ -872,11 +895,10 @@ def highstate(test=None, queue=False, **kwargs):
                                mocked=kwargs.get('mock', False),
                                initial_pillar=_get_initial_pillar(opts))
 
-    if not _check_pillar(kwargs, st_.opts['pillar']):
+    errors = _get_pillar_errors(kwargs, st_.opts['pillar'])
+    if errors:
         __context__['retcode'] = 5
-        err = ['Pillar failed to render with the following messages:']
-        err += __pillar__['_errors']
-        return err
+        return ['Pillar failed to render with the following messages:'] + errors
 
     st_.push_active()
     ret = {}
@@ -935,6 +957,18 @@ def sls(mods, test=None, exclude=None, queue=False, **kwargs):
 
         .. versionadded:: 2016.3.0
 
+    exclude
+        Exclude specific states from execution. Accepts a list of sls names, a
+        comma-separated string of sls names, or a list of dictionaries
+        containing ``sls`` or ``id`` keys. Glob-patterns may be used to match
+        multiple states.
+
+        .. code-block:: bash
+
+            salt '*' state.sls foo,bar,baz exclude=bar,baz
+            salt '*' state.sls foo,bar,baz exclude=ba*
+            salt '*' state.sls foo,bar,baz exclude="[{'id': 'id_to_exclude'}, {'sls': 'sls_to_exclude'}]"
+
     queue : False
         Instead of failing immediately when another state run is in progress,
         queue the new state run to begin running once the other has finished.
@@ -1071,11 +1105,10 @@ def sls(mods, test=None, exclude=None, queue=False, **kwargs):
                                mocked=kwargs.get('mock', False),
                                initial_pillar=_get_initial_pillar(opts))
 
-    if not _check_pillar(kwargs, st_.opts['pillar']):
+    errors = _get_pillar_errors(kwargs, pillar=st_.opts['pillar'])
+    if errors:
         __context__['retcode'] = 5
-        err = ['Pillar failed to render with the following messages:']
-        err += __pillar__['_errors']
-        return err
+        return ['Pillar failed to render with the following messages:'] + errors
 
     orchestration_jid = kwargs.get('orchestration_jid')
     umask = os.umask(0o77)
@@ -1090,7 +1123,6 @@ def sls(mods, test=None, exclude=None, queue=False, **kwargs):
         mods = mods.split(',')
 
     st_.push_active()
     ret = {}
     try:
         high_, errors = st_.render_highstate({opts['environment']: mods})
@@ -1197,11 +1229,10 @@ def top(topfn, test=None, queue=False, **kwargs):
                                pillar_enc=pillar_enc,
                                context=__context__,
                                initial_pillar=_get_initial_pillar(opts))
-    if not _check_pillar(kwargs, st_.opts['pillar']):
+    errors = _get_pillar_errors(kwargs, pillar=st_.opts['pillar'])
+    if errors:
         __context__['retcode'] = 5
-        err = ['Pillar failed to render with the following messages:']
-        err += __pillar__['_errors']
-        return err
+        return ['Pillar failed to render with the following messages:'] + errors
 
     st_.push_active()
     st_.opts['state_top'] = salt.utils.url.create(topfn)
@@ -1259,10 +1290,10 @@ def show_highstate(queue=False, **kwargs):
                                pillar_enc=pillar_enc,
                                initial_pillar=_get_initial_pillar(opts))
 
-    if not _check_pillar(kwargs, st_.opts['pillar']):
+    errors = _get_pillar_errors(kwargs, pillar=st_.opts['pillar'])
+    if errors:
         __context__['retcode'] = 5
-        raise CommandExecutionError('Pillar failed to render',
-                                    info=st_.opts['pillar']['_errors'])
+        raise CommandExecutionError('Pillar failed to render', info=errors)
 
     st_.push_active()
     try:
@@ -1293,10 +1324,10 @@ def show_lowstate(queue=False, **kwargs):
     st_ = salt.state.HighState(opts,
                                initial_pillar=_get_initial_pillar(opts))
 
-    if not _check_pillar(kwargs, st_.opts['pillar']):
+    errors = _get_pillar_errors(kwargs, pillar=st_.opts['pillar'])
+    if errors:
         __context__['retcode'] = 5
-        raise CommandExecutionError('Pillar failed to render',
-                                    info=st_.opts['pillar']['_errors'])
+        raise CommandExecutionError('Pillar failed to render', info=errors)
 
     st_.push_active()
     try:
@@ -1394,11 +1425,10 @@ def sls_id(id_, mods, test=None, queue=False, **kwargs):
     st_ = salt.state.HighState(opts,
                                initial_pillar=_get_initial_pillar(opts))
 
-    if not _check_pillar(kwargs, st_.opts['pillar']):
+    errors = _get_pillar_errors(kwargs, pillar=st_.opts['pillar'])
+    if errors:
         __context__['retcode'] = 5
-        err = ['Pillar failed to render with the following messages:']
-        err += __pillar__['_errors']
-        return err
+        return ['Pillar failed to render with the following messages:'] + errors
 
     if isinstance(mods, six.string_types):
         split_mods = mods.split(',')
@@ -1480,10 +1510,10 @@ def show_low_sls(mods, test=None, queue=False, **kwargs):
 
     st_ = salt.state.HighState(opts, initial_pillar=_get_initial_pillar(opts))
 
-    if not _check_pillar(kwargs, st_.opts['pillar']):
+    errors = _get_pillar_errors(kwargs, pillar=st_.opts['pillar'])
+    if errors:
         __context__['retcode'] = 5
-        raise CommandExecutionError('Pillar failed to render',
-                                    info=st_.opts['pillar']['_errors'])
+        raise CommandExecutionError('Pillar failed to render', info=errors)
 
     if isinstance(mods, six.string_types):
         mods = mods.split(',')
@@ -1567,10 +1597,10 @@ def show_sls(mods, test=None, queue=False, **kwargs):
                                pillar_enc=pillar_enc,
                                initial_pillar=_get_initial_pillar(opts))
 
-    if not _check_pillar(kwargs, st_.opts['pillar']):
+    errors = _get_pillar_errors(kwargs, pillar=st_.opts['pillar'])
+    if errors:
         __context__['retcode'] = 5
-        raise CommandExecutionError('Pillar failed to render',
-                                    info=st_.opts['pillar']['_errors'])
+        raise CommandExecutionError('Pillar failed to render', info=errors)
 
     if isinstance(mods, six.string_types):
         mods = mods.split(',')
@@ -1616,10 +1646,10 @@ def show_top(queue=False, **kwargs):
 
     st_ = salt.state.HighState(opts, initial_pillar=_get_initial_pillar(opts))
 
-    if not _check_pillar(kwargs, st_.opts['pillar']):
+    errors = _get_pillar_errors(kwargs, pillar=st_.opts['pillar'])
+    if errors:
         __context__['retcode'] = 5
-        raise CommandExecutionError('Pillar failed to render',
-                                    info=st_.opts['pillar']['_errors'])
+        raise CommandExecutionError('Pillar failed to render', info=errors)
 
     errors = []
     top_ = st_.get_top()
@@ -337,6 +337,10 @@ def zone_compare(timezone):
     if 'Solaris' in __grains__['os_family'] or 'AIX' in __grains__['os_family']:
         return timezone == get_zone()
 
+    if 'FreeBSD' in __grains__['os_family']:
+        if not os.path.isfile(_get_etc_localtime_path()):
+            return timezone == get_zone()
+
     tzfile = _get_etc_localtime_path()
     zonepath = _get_zone_file(timezone)
     try:
@@ -316,7 +316,7 @@ def get_site_packages(venv):
     ret = __salt__['cmd.exec_code_all'](
         bin_path,
         'from distutils import sysconfig; '
-        'print sysconfig.get_python_lib()'
+        'print(sysconfig.get_python_lib())'
     )
 
     if ret['retcode'] != 0:
@@ -58,7 +58,7 @@ from salt.modules.file import (check_hash,  # pylint: disable=W0611
     lstat, path_exists_glob, write, pardir, join, HASHES, HASHES_REVMAP,
     comment, uncomment, _add_flags, comment_line, _regex_to_static,
     _get_line_indent, apply_template_on_contents, dirname, basename,
-    list_backups_dir)
+    list_backups_dir, _assert_occurrence, _starts_till)
 from salt.modules.file import normpath as normpath_
 
 from salt.utils import namespaced_function as _namespaced_function
@@ -116,7 +116,7 @@ def __virtual__():
         global write, pardir, join, _add_flags, apply_template_on_contents
         global path_exists_glob, comment, uncomment, _mkstemp_copy
         global _regex_to_static, _get_line_indent, dirname, basename
-        global list_backups_dir, normpath_
+        global list_backups_dir, normpath_, _assert_occurrence, _starts_till
 
         replace = _namespaced_function(replace, globals())
        search = _namespaced_function(search, globals())
@@ -179,6 +179,8 @@ def __virtual__():
         basename = _namespaced_function(basename, globals())
         list_backups_dir = _namespaced_function(list_backups_dir, globals())
         normpath_ = _namespaced_function(normpath_, globals())
+        _assert_occurrence = _namespaced_function(_assert_occurrence, globals())
+        _starts_till = _namespaced_function(_starts_till, globals())
 
     else:
         return False, 'Module win_file: Missing Win32 modules'
@@ -789,7 +791,7 @@ def chgrp(path, group):
 
 def stats(path, hash_type='sha256', follow_symlinks=True):
     '''
-    Return a dict containing the stats for a given file
+    Return a dict containing the stats about a given file
 
     Under Windows, `gid` will equal `uid` and `group` will equal `user`.
 
@@ -818,6 +820,8 @@ def stats(path, hash_type='sha256', follow_symlinks=True):
 
         salt '*' file.stats /etc/passwd
     '''
+    # This is to mirror the behavior of file.py. `check_file_meta` expects an
+    # empty dictionary when the file does not exist
     if not os.path.exists(path):
         raise CommandExecutionError('Path not found: {0}'.format(path))
 
@@ -1225,33 +1229,37 @@ def mkdir(path,
 
         path (str): The full path to the directory.
 
-        owner (str): The owner of the directory. If not passed, it will be the
-            account that created the directory, likely SYSTEM
+        owner (str):
+            The owner of the directory. If not passed, it will be the account
+            that created the directory, likely SYSTEM
 
-        grant_perms (dict): A dictionary containing the user/group and the basic
-            permissions to grant, ie: ``{'user': {'perms': 'basic_permission'}}``.
-            You can also set the ``applies_to`` setting here. The default is
-            ``this_folder_subfolders_files``. Specify another ``applies_to`` setting
-            like this:
+        grant_perms (dict):
+            A dictionary containing the user/group and the basic permissions to
+            grant, ie: ``{'user': {'perms': 'basic_permission'}}``. You can also
+            set the ``applies_to`` setting here. The default is
+            ``this_folder_subfolders_files``. Specify another ``applies_to``
+            setting like this:
 
-            .. code-block:: yaml
+            .. code-block:: yaml
 
-                {'user': {'perms': 'full_control', 'applies_to': 'this_folder'}}
+                {'user': {'perms': 'full_control', 'applies_to': 'this_folder'}}
 
-            To set advanced permissions use a list for the ``perms`` parameter, ie:
+            To set advanced permissions use a list for the ``perms`` parameter, ie:
 
-            .. code-block:: yaml
+            .. code-block:: yaml
 
-                {'user': {'perms': ['read_attributes', 'read_ea'], 'applies_to': 'this_folder'}}
+                {'user': {'perms': ['read_attributes', 'read_ea'], 'applies_to': 'this_folder'}}
 
-        deny_perms (dict): A dictionary containing the user/group and
-            permissions to deny along with the ``applies_to`` setting. Use the same
-            format used for the ``grant_perms`` parameter. Remember, deny
-            permissions supersede grant permissions.
+        deny_perms (dict):
+            A dictionary containing the user/group and permissions to deny along
+            with the ``applies_to`` setting. Use the same format used for the
+            ``grant_perms`` parameter. Remember, deny permissions supersede
+            grant permissions.
 
-        inheritance (bool): If True the object will inherit permissions from the
-            parent, if False, inheritance will be disabled. Inheritance setting will
-            not apply to parent directories if they must be created
+        inheritance (bool):
+            If True the object will inherit permissions from the parent, if
+            False, inheritance will be disabled. Inheritance setting will not
+            apply to parent directories if they must be created
 
     Returns:
         bool: True if successful
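The reflowed docstrings above describe two accepted shapes for each entry in ``grant_perms``/``deny_perms``: ``perms`` as a single basic-permission string, or a list of advanced-permission strings, optionally alongside an ``applies_to`` string. A small validator sketch (purely illustrative, not part of win_file) makes those shapes concrete:

```python
def valid_perms_dict(perms_dict):
    # Each user/group maps to {'perms': str or [str, ...], 'applies_to': str?}
    for user, spec in perms_dict.items():
        perms = spec.get('perms')
        if isinstance(perms, str):
            pass  # basic permission, e.g. 'full_control'
        elif isinstance(perms, list) and all(isinstance(p, str) for p in perms):
            pass  # advanced permissions, e.g. ['read_attributes', 'read_ea']
        else:
            return False
        # applies_to defaults to 'this_folder_subfolders_files' per the docs.
        if not isinstance(spec.get('applies_to', 'this_folder_subfolders_files'), str):
            return False
    return True
```

Both documented examples pass this check, e.g. ``{'user': {'perms': 'full_control', 'applies_to': 'this_folder'}}``.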
@@ -1310,33 +1318,37 @@ def makedirs_(path,
 
         path (str): The full path to the directory.
 
-        owner (str): The owner of the directory. If not passed, it will be the
-            account that created the directly, likely SYSTEM
+        owner (str):
+            The owner of the directory. If not passed, it will be the account
+            that created the directory, likely SYSTEM
 
-        grant_perms (dict): A dictionary containing the user/group and the basic
-            permissions to grant, ie: ``{'user': {'perms': 'basic_permission'}}``.
-            You can also set the ``applies_to`` setting here. The default is
-            ``this_folder_subfolders_files``. Specify another ``applies_to`` setting
-            like this:
+        grant_perms (dict):
+            A dictionary containing the user/group and the basic permissions to
+            grant, ie: ``{'user': {'perms': 'basic_permission'}}``. You can also
+            set the ``applies_to`` setting here. The default is
+            ``this_folder_subfolders_files``. Specify another ``applies_to``
+            setting like this:
 
-            .. code-block:: yaml
+            .. code-block:: yaml
 
-                {'user': {'perms': 'full_control', 'applies_to': 'this_folder'}}
+                {'user': {'perms': 'full_control', 'applies_to': 'this_folder'}}
 
-            To set advanced permissions use a list for the ``perms`` parameter, ie:
+            To set advanced permissions use a list for the ``perms`` parameter, ie:
 
-            .. code-block:: yaml
+            .. code-block:: yaml
 
-                {'user': {'perms': ['read_attributes', 'read_ea'], 'applies_to': 'this_folder'}}
+                {'user': {'perms': ['read_attributes', 'read_ea'], 'applies_to': 'this_folder'}}
 
-        deny_perms (dict): A dictionary containing the user/group and
-            permissions to deny along with the ``applies_to`` setting. Use the same
-            format used for the ``grant_perms`` parameter. Remember, deny
-            permissions supersede grant permissions.
+        deny_perms (dict):
+            A dictionary containing the user/group and permissions to deny along
+            with the ``applies_to`` setting. Use the same format used for the
+            ``grant_perms`` parameter. Remember, deny permissions supersede
+            grant permissions.
 
-        inheritance (bool): If True the object will inherit permissions from the
-            parent, if False, inheritance will be disabled. Inheritance setting will
-            not apply to parent directories if they must be created
+        inheritance (bool):
+            If True the object will inherit permissions from the parent, if
+            False, inheritance will be disabled. Inheritance setting will not
+            apply to parent directories if they must be created
 
     .. note::
 
@@ -1421,36 +1433,40 @@ def makedirs_perms(path,
 
         path (str): The full path to the directory.
 
-        owner (str): The owner of the directory. If not passed, it will be the
-            account that created the directory, likely SYSTEM
+        owner (str):
+            The owner of the directory. If not passed, it will be the account
+            that created the directory, likely SYSTEM
 
-        grant_perms (dict): A dictionary containing the user/group and the basic
-            permissions to grant, ie: ``{'user': {'perms': 'basic_permission'}}``.
-            You can also set the ``applies_to`` setting here. The default is
-            ``this_folder_subfolders_files``. Specify another ``applies_to`` setting
-            like this:
+        grant_perms (dict):
+            A dictionary containing the user/group and the basic permissions to
+            grant, ie: ``{'user': {'perms': 'basic_permission'}}``. You can also
+            set the ``applies_to`` setting here. The default is
+            ``this_folder_subfolders_files``. Specify another ``applies_to``
+            setting like this:
 
-            .. code-block:: yaml
+            .. code-block:: yaml
 
-                {'user': {'perms': 'full_control', 'applies_to': 'this_folder'}}
+                {'user': {'perms': 'full_control', 'applies_to': 'this_folder'}}
 
-            To set advanced permissions use a list for the ``perms`` parameter, ie:
+            To set advanced permissions use a list for the ``perms`` parameter, ie:
 
-            .. code-block:: yaml
+            .. code-block:: yaml
 
-                {'user': {'perms': ['read_attributes', 'read_ea'], 'applies_to': 'this_folder'}}
+                {'user': {'perms': ['read_attributes', 'read_ea'], 'applies_to': 'this_folder'}}
 
-        deny_perms (dict): A dictionary containing the user/group and
-            permissions to deny along with the ``applies_to`` setting. Use the same
-            format used for the ``grant_perms`` parameter. Remember, deny
-            permissions supersede grant permissions.
+        deny_perms (dict):
+            A dictionary containing the user/group and permissions to deny along
+            with the ``applies_to`` setting. Use the same format used for the
+            ``grant_perms`` parameter. Remember, deny permissions supersede
+            grant permissions.
 
-        inheritance (bool): If True the object will inherit permissions from the
-            parent, if False, inheritance will be disabled. Inheritance setting will
-            not apply to parent directories if they must be created
+        inheritance (bool):
+            If True the object will inherit permissions from the parent, if
+            False, inheritance will be disabled. Inheritance setting will not
+            apply to parent directories if they must be created
 
     Returns:
-        bool: True if successful, otherwise raise an error
+        bool: True if successful, otherwise raises an error
 
     CLI Example:
 
@@ -1503,45 +1519,54 @@ def check_perms(path,
                 deny_perms=None,
                 inheritance=True):
     '''
-    Set owner and permissions for each directory created.
+    Set owner and permissions for each directory created. Used mostly by the
+    state system.
 
     Args:
 
         path (str): The full path to the directory.
 
-        ret (dict): A dictionary to append changes to and return. If not passed,
-            will create a new dictionary to return.
+        ret (dict):
+            A dictionary to append changes to and return. If not passed, will
+            create a new dictionary to return.
 
-        owner (str): The owner of the directory. If not passed, it will be the
-            account that created the directory, likely SYSTEM
+        owner (str):
+            The owner of the directory. If not passed, it will be the account
+            that created the directory, likely SYSTEM
 
-        grant_perms (dict): A dictionary containing the user/group and the basic
-            permissions to grant, ie: ``{'user': {'perms': 'basic_permission'}}``.
-            You can also set the ``applies_to`` setting here. The default is
-            ``this_folder_subfolders_files``. Specify another ``applies_to`` setting
-            like this:
+        grant_perms (dict):
+            A dictionary containing the user/group and the basic permissions to
+            grant, ie: ``{'user': {'perms': 'basic_permission'}}``. You can also
+            set the ``applies_to`` setting here. The default is
+            ``this_folder_subfolders_files``. Specify another ``applies_to``
+            setting like this:
 
-            .. code-block:: yaml
+            .. code-block:: yaml
 
-                {'user': {'perms': 'full_control', 'applies_to': 'this_folder'}}
+                {'user': {'perms': 'full_control', 'applies_to': 'this_folder'}}
 
-            To set advanced permissions use a list for the ``perms`` parameter, ie:
+            To set advanced permissions use a list for the ``perms`` parameter, ie:
 
-            .. code-block:: yaml
+            .. code-block:: yaml
 
-                {'user': {'perms': ['read_attributes', 'read_ea'], 'applies_to': 'this_folder'}}
+                {'user': {'perms': ['read_attributes', 'read_ea'], 'applies_to': 'this_folder'}}
 
-        deny_perms (dict): A dictionary containing the user/group and
-            permissions to deny along with the ``applies_to`` setting. Use the same
-            format used for the ``grant_perms`` parameter. Remember, deny
-            permissions supersede grant permissions.
+        deny_perms (dict):
+            A dictionary containing the user/group and permissions to deny along
+            with the ``applies_to`` setting. Use the same format used for the
+            ``grant_perms`` parameter. Remember, deny permissions supersede
+            grant permissions.
 
-        inheritance (bool): If True the object will inherit permissions from the
-            parent, if False, inheritance will be disabled. Inheritance setting will
-            not apply to parent directories if they must be created
+        inheritance (bool):
+            If True the object will inherit permissions from the parent, if
+            False, inheritance will be disabled. Inheritance setting will not
+            apply to parent directories if they must be created
 
     Returns:
-        bool: True if successful, otherwise raise an error
+        dict: A dictionary of changes made to the object
+
+    Raises:
+        CommandExecutionError: If the object does not exist
 
     CLI Example:
 
@@ -1556,6 +1581,9 @@ def check_perms(path,
         # Specify advanced attributes with a list
         salt '*' file.check_perms C:\\Temp\\ Administrators "{'jsnuffy': {'perms': ['read_attributes', 'read_ea'], 'applies_to': 'files_only'}}"
     '''
+    if not os.path.exists(path):
+        raise CommandExecutionError('Path not found: {0}'.format(path))
+
     path = os.path.expanduser(path)
 
     if not ret:
@ -619,8 +619,8 @@ class _policy_info(object):
|
|||
},
|
||||
},
|
||||
'RemoteRegistryExactPaths': {
|
||||
'Policy': 'Network access: Remotely accessible registry '
|
||||
'paths',
|
||||
'Policy': 'Network access: Remotely accessible '
|
||||
'registry paths',
|
||||
'lgpo_section': self.security_options_gpedit_path,
|
||||
'Registry': {
|
||||
'Hive': 'HKEY_LOCAL_MACHINE',
|
||||
|
@ -632,8 +632,8 @@ class _policy_info(object):
|
|||
},
|
||||
},
|
||||
'RemoteRegistryPaths': {
|
||||
'Policy': 'Network access: Remotely accessible registry '
|
||||
'paths and sub-paths',
|
||||
'Policy': 'Network access: Remotely accessible '
|
||||
'registry paths and sub-paths',
|
||||
'lgpo_section': self.security_options_gpedit_path,
|
||||
'Registry': {
|
||||
'Hive': 'HKEY_LOCAL_MACHINE',
|
||||
|
@ -644,8 +644,8 @@ class _policy_info(object):
|
|||
},
|
||||
},
|
||||
'RestrictNullSessAccess': {
|
||||
'Policy': 'Network access: Restrict anonymous access to '
|
||||
'Named Pipes and Shares',
|
||||
'Policy': 'Network access: Restrict anonymous access '
|
||||
'to Named Pipes and Shares',
|
||||
'lgpo_section': self.security_options_gpedit_path,
|
||||
'Settings': self.enabled_one_disabled_zero.keys(),
|
||||
'Registry': {
|
||||
|
@ -898,9 +898,9 @@ class _policy_info(object):
|
|||
'Transform': self.enabled_one_disabled_zero_transform,
|
||||
},
|
||||
'CachedLogonsCount': {
|
||||
'Policy': 'Interactive logon: Number of previous logons '
|
||||
'to cache (in case domain controller is not '
|
||||
'available)',
|
||||
'Policy': 'Interactive logon: Number of previous '
|
||||
'logons to cache (in case domain controller '
|
||||
'is not available)',
|
||||
'Settings': {
|
||||
'Function': '_in_range_inclusive',
|
||||
'Args': {'min': 0, 'max': 50}
|
||||
|
@ -915,8 +915,9 @@ class _policy_info(object):
|
|||
},
|
||||
},
|
||||
'ForceUnlockLogon': {
|
||||
-                    'Policy': 'Interactive logon: Require Domain Controller '
-                              'authentication to unlock workstation',
+                    'Policy': 'Interactive logon: Require Domain '
+                              'Controller authentication to unlock '
+                              'workstation',
                     'Settings': self.enabled_one_disabled_zero.keys(),
                     'lgpo_section': self.security_options_gpedit_path,
                     'Registry': {
@@ -983,8 +984,8 @@ class _policy_info(object):
                 },
                 'EnableUIADesktopToggle': {
                     'Policy': 'User Account Control: Allow UIAccess '
-                              'applications to prompt for elevation without '
-                              'using the secure desktop',
+                              'applications to prompt for elevation '
+                              'without using the secure desktop',
                     'Settings': self.enabled_one_disabled_zero.keys(),
                     'lgpo_section': self.security_options_gpedit_path,
                     'Registry': {
@@ -998,8 +999,8 @@ class _policy_info(object):
                 },
                 'ConsentPromptBehaviorAdmin': {
                     'Policy': 'User Account Control: Behavior of the '
-                              'elevation prompt for administrators in Admin '
-                              'Approval Mode',
+                              'elevation prompt for administrators in '
+                              'Admin Approval Mode',
                     'Settings': self.uac_admin_prompt_lookup.keys(),
                     'lgpo_section': self.security_options_gpedit_path,
                     'Registry': {
@@ -1077,7 +1078,7 @@ class _policy_info(object):
                 },
                 'EnableSecureUIAPaths': {
                     'Policy': 'User Account Control: Only elevate UIAccess '
-                              'applicaitons that are installed in secure '
+                              'applications that are installed in secure '
                               'locations',
                     'Settings': self.enabled_one_disabled_zero.keys(),
                     'lgpo_section': self.security_options_gpedit_path,
@@ -1091,8 +1092,8 @@ class _policy_info(object):
                     'Transform': self.enabled_one_disabled_zero_transform,
                 },
                 'EnableLUA': {
-                    'Policy': 'User Account Control: Run all administrators '
-                              'in Admin Approval Mode',
+                    'Policy': 'User Account Control: Run all '
+                              'administrators in Admin Approval Mode',
                     'Settings': self.enabled_one_disabled_zero.keys(),
                     'lgpo_section': self.security_options_gpedit_path,
                     'Registry': {
@@ -1354,8 +1355,8 @@ class _policy_info(object):
                     'Transform': self.enabled_one_disabled_zero_transform,
                 },
                 'EnableForcedLogoff': {
-                    'Policy': 'Microsoft network server: Disconnect clients '
-                              'when logon hours expire',
+                    'Policy': 'Microsoft network server: Disconnect '
+                              'clients when logon hours expire',
                     'Settings': self.enabled_one_disabled_zero.keys(),
                     'lgpo_section': self.security_options_gpedit_path,
                     'Registry': {
@@ -1422,7 +1423,8 @@ class _policy_info(object):
                     'Transform': self.enabled_one_disabled_zero_transform,
                 },
                 'UndockWithoutLogon': {
-                    'Policy': 'Devices: Allow undock without having to log on',
+                    'Policy': 'Devices: Allow undock without having to log '
+                              'on',
                     'Settings': self.enabled_one_disabled_zero.keys(),
                     'lgpo_section': self.security_options_gpedit_path,
                     'Registry': {
@@ -1497,8 +1499,8 @@ class _policy_info(object):
                     },
                 },
                 'SubmitControl': {
-                    'Policy': 'Domain controller: Allow server operators to '
-                              'schedule tasks',
+                    'Policy': 'Domain controller: Allow server operators '
+                              'to schedule tasks',
                     'Settings': self.enabled_one_disabled_zero_strings.keys(),
                     'lgpo_section': self.security_options_gpedit_path,
                     'Registry': {
@@ -1577,8 +1579,8 @@ class _policy_info(object):
                     'Transform': self.enabled_one_disabled_zero_strings_transform,
                 },
                 'SignSecureChannel': {
-                    'Policy': 'Domain member: Digitally sign secure channel '
-                              'data (when possible)',
+                    'Policy': 'Domain member: Digitally sign secure '
+                              'channel data (when possible)',
                     'Settings': self.enabled_one_disabled_zero_strings.keys(),
                     'lgpo_section': self.security_options_gpedit_path,
                     'Registry': {
@@ -2301,7 +2303,7 @@ class _policy_info(object):
                 },
                 'RecoveryConsoleSecurityLevel': {
                     'Policy': 'Recovery console: Allow automatic '
-                              'adminstrative logon',
+                              'administrative logon',
                     'Settings': self.enabled_one_disabled_zero.keys(),
                     'lgpo_section': self.security_options_gpedit_path,
                     'Registry': {
@@ -2433,15 +2435,18 @@ class _policy_info(object):
         '''
         converts a binary 0/1 to Disabled/Enabled
         '''
-        if val is not None:
-            if ord(val) == 0:
-                return 'Disabled'
-            elif ord(val) == 1:
-                return 'Enabled'
-            else:
-                return 'Invalid Value'
-        else:
-            return 'Not Defined'
+        try:
+            if val is not None:
+                if ord(val) == 0:
+                    return 'Disabled'
+                elif ord(val) == 1:
+                    return 'Enabled'
+                else:
+                    return 'Invalid Value'
+            else:
+                return 'Not Defined'
+        except TypeError:
+            return 'Invalid Value'

     @classmethod
     def _binary_enable_zero_disable_one_reverse_conversion(cls, val, **kwargs):
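The try/except hunk above makes the 0/1-byte lookup safe on Python 3, where `ord()` raises `TypeError` for inputs it cannot interpret (an int, or a string longer than one character) instead of silently working as it did on Python 2 byte strings. A standalone sketch of the same pattern (the function name is mine, not Salt's):

```python
def binary_enable_disable(val):
    """Map a one-byte registry value to a display string.

    ord() raises TypeError for anything that is not a single
    character/byte, which the except clause turns into
    'Invalid Value' rather than a traceback.
    """
    try:
        if val is not None:
            if ord(val) == 0:
                return 'Disabled'
            elif ord(val) == 1:
                return 'Enabled'
            else:
                return 'Invalid Value'
        else:
            return 'Not Defined'
    except TypeError:
        return 'Invalid Value'
```

On Python 3, `ord(b'\x01')` works because indexing/iterating bytes yields ints, but a two-byte input such as `b'\x01\x00'` raises `TypeError`, which is exactly the case the patch guards against.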
@@ -3122,12 +3127,13 @@ def _getDataFromRegPolData(search_string, policy_data, return_value_name=False):
     '''
     value = None
     values = []
+    encoded_semicolon = ';'.encode('utf-16-le')
     if return_value_name:
         values = {}
     if search_string:
         registry = Registry()
-        if len(search_string.split('{0};'.format(chr(0)))) >= 3:
-            vtype = registry.vtype_reverse[ord(search_string.split('{0};'.format(chr(0)))[2])]
+        if len(search_string.split(encoded_semicolon)) >= 3:
+            vtype = registry.vtype_reverse[ord(search_string.split(encoded_semicolon)[2].decode('utf-32-le'))]
         else:
             vtype = None
         search_string = re.escape(search_string)
@@ -3135,29 +3141,28 @@ def _getDataFromRegPolData(search_string, policy_data, return_value_name=False):
         matches = [m for m in matches]
         if matches:
             for match in matches:
-                pol_entry = policy_data[match.start():(policy_data.index(']',
+                pol_entry = policy_data[match.start():(policy_data.index(']'.encode('utf-16-le'),
                                                                          match.end())
                                                        )
-                                        ].split('{0};'.format(chr(0)))
+                                        ].split(encoded_semicolon)
                 if len(pol_entry) >= 2:
                     valueName = pol_entry[1]
                 if len(pol_entry) >= 5:
                     value = pol_entry[4]
                     if vtype == 'REG_DWORD' or vtype == 'REG_QWORD':
                         if value:
-                            vlist = list(ord(v) for v in value)
                             if vtype == 'REG_DWORD':
-                                for v in struct.unpack('I', struct.pack('2H', *vlist)):
+                                for v in struct.unpack('I', value):
                                     value = v
                             elif vtype == 'REG_QWORD':
-                                for v in struct.unpack('I', struct.pack('4H', *vlist)):
+                                for v in struct.unpack('Q', value):
                                     value = v
                         else:
                             value = 0
                     elif vtype == 'REG_MULTI_SZ':
-                        value = value.rstrip(chr(0)).split(chr(0))
+                        value = value.decode('utf-16-le').rstrip(chr(0)).split(chr(0))
                     else:
-                        value = value.rstrip(chr(0))
+                        value = value.decode('utf-16-le').rstrip(chr(0))
                     if return_value_name:
                         log.debug('we want value names and the value')
                         values[valueName] = value
@@ -3268,35 +3273,52 @@ def _buildKnownDataSearchString(reg_key, reg_valueName, reg_vtype, reg_data,
    '''
    registry = Registry()
    this_element_value = None
-    expected_string = ''
+    expected_string = b''
+    encoded_semicolon = ';'.encode('utf-16-le')
+    encoded_null = chr(0).encode('utf-16-le')
+    if reg_key:
+        reg_key = reg_key.encode('utf-16-le')
+    if reg_valueName:
+        reg_valueName = reg_valueName.encode('utf-16-le')
    if reg_data and not check_deleted:
        if reg_vtype == 'REG_DWORD':
-            this_element_value = ''
-            for v in struct.unpack('2H', struct.pack('I', int(reg_data))):
-                this_element_value = this_element_value + six.unichr(v)
-        elif reg_vtype == 'REG_QWORD':
-            this_element_value = ''
-            for v in struct.unpack('4H', struct.pack('I', int(reg_data))):
-                this_element_value = this_element_value + six.unichr(v)
+            this_element_value = struct.pack('I', int(reg_data))
+        elif reg_vtype == "REG_QWORD":
+            this_element_value = struct.pack('Q', int(reg_data))
        elif reg_vtype == 'REG_SZ':
-            this_element_value = '{0}{1}'.format(reg_data, chr(0))
+            this_element_value = b''.join([reg_data.encode('utf-16-le'),
+                                           encoded_null])
    if check_deleted:
        reg_vtype = 'REG_SZ'
-        expected_string = u'[{1}{0};**del.{2}{0};{3}{0};{4}{0};{5}{0}]'.format(
-            chr(0),
-            reg_key,
-            reg_valueName,
-            chr(registry.vtype[reg_vtype]),
-            six.unichr(len(' {0}'.format(chr(0)).encode('utf-16-le'))),
-            ' ')
+        expected_string = b''.join(['['.encode('utf-16-le'),
+                                    reg_key,
+                                    encoded_null,
+                                    encoded_semicolon,
+                                    '**del.'.encode('utf-16-le'),
+                                    reg_valueName,
+                                    encoded_null,
+                                    encoded_semicolon,
+                                    chr(registry.vtype[reg_vtype]).encode('utf-32-le'),
+                                    encoded_semicolon,
+                                    six.unichr(len(' {0}'.format(chr(0)).encode('utf-16-le'))).encode('utf-32-le'),
+                                    encoded_semicolon,
+                                    ' '.encode('utf-16-le'),
+                                    encoded_null,
+                                    ']'.encode('utf-16-le')])
    else:
-        expected_string = u'[{1}{0};{2}{0};{3}{0};{4}{0};{5}]'.format(
-            chr(0),
-            reg_key,
-            reg_valueName,
-            chr(registry.vtype[reg_vtype]),
-            six.unichr(len(this_element_value.encode('utf-16-le'))),
-            this_element_value)
+        expected_string = b''.join(['['.encode('utf-16-le'),
+                                    reg_key,
+                                    encoded_null,
+                                    encoded_semicolon,
+                                    reg_valueName,
+                                    encoded_null,
+                                    encoded_semicolon,
+                                    chr(registry.vtype[reg_vtype]).encode('utf-32-le'),
+                                    encoded_semicolon,
+                                    six.unichr(len(this_element_value)).encode('utf-32-le'),
+                                    encoded_semicolon,
+                                    this_element_value,
+                                    ']'.encode('utf-16-le')])
    return expected_string

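The hunk above replaces `str.format` templates with joined UTF-16-LE byte fragments so the same code works on Python 3. A simplified, hypothetical sketch of the `[key;value;type;size;data]` Registry.pol entry it builds (the function name and the `struct.pack('<I', ...)` shorthand are my own; the patch itself writes the 4-byte type/size fields as `chr(code).encode('utf-32-le')`, which produces the same four little-endian bytes):

```python
import struct

def build_regpol_entry(key, value_name, vtype_code, data):
    """Build one Registry.pol body entry as bytes:
    [key\x00;value\x00;type;size;data] with all text UTF-16-LE.

    vtype_code is the numeric registry type (e.g. 4 for REG_DWORD);
    data must already be bytes (e.g. struct.pack('<I', 1) for a DWORD).
    """
    null = '\x00'.encode('utf-16-le')   # 2-byte NUL terminator
    semi = ';'.encode('utf-16-le')      # 2-byte field separator
    return b''.join([
        '['.encode('utf-16-le'),
        key.encode('utf-16-le'), null, semi,
        value_name.encode('utf-16-le'), null, semi,
        struct.pack('<I', vtype_code), semi,   # type as 4-byte LE integer
        struct.pack('<I', len(data)), semi,    # data size in bytes
        data,
        ']'.encode('utf-16-le'),
    ])
```

Because `'utf-32-le'` never emits a BOM, `chr(4).encode('utf-32-le')` equals `struct.pack('<I', 4)`, which is why the patch can use either spelling for the numeric fields.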
@@ -3324,13 +3346,16 @@ def _processValueItem(element, reg_key, reg_valuename, policy, parent_element,
    expected_string = None
    # https://msdn.microsoft.com/en-us/library/dn606006(v=vs.85).aspx
    this_vtype = 'REG_SZ'
-    standard_layout = u'[{1}{0};{2}{0};{3}{0};{4}{0};{5}]'
+    encoded_semicolon = ';'.encode('utf-16-le')
+    encoded_null = chr(0).encode('utf-16-le')
+    if reg_key:
+        reg_key = reg_key.encode('utf-16-le')
+    if reg_valuename:
+        reg_valuename = reg_valuename.encode('utf-16-le')
    if etree.QName(element).localname == 'decimal' and etree.QName(parent_element).localname != 'elements':
        this_vtype = 'REG_DWORD'
        if 'value' in element.attrib:
-            this_element_value = ''
-            for val in struct.unpack('2H', struct.pack('I', int(element.attrib['value']))):
-                this_element_value = this_element_value + six.unichr(val)
+            this_element_value = struct.pack('I', int(element.attrib['value']))
        else:
            msg = ('The {2} child {1} element for the policy with attributes: '
                   '{0} does not have the required "value" attribute. The '
@@ -3345,9 +3370,7 @@ def _processValueItem(element, reg_key, reg_valuename, policy, parent_element,
        # server, so untested/assumed
        this_vtype = 'REG_QWORD'
        if 'value' in element.attrib:
-            this_element_value = ''
-            for val in struct.unpack('4H', struct.pack('I', int(element.attrib['value']))):
-                this_element_value = this_element_value + six.unichr(val)
+            this_element_value = struct.pack('Q', int(element.attrib['value']))
        else:
            msg = ('The {2} child {1} element for the policy with attributes: '
                   '{0} does not have the required "value" attribute. The '
@@ -3359,7 +3382,8 @@ def _processValueItem(element, reg_key, reg_valuename, policy, parent_element,
            return None
    elif etree.QName(element).localname == 'string':
        this_vtype = 'REG_SZ'
-        this_element_value = '{0}{1}'.format(element.text, chr(0))
+        this_element_value = b''.join([element.text.encode('utf-16-le'),
+                                       encoded_null])
    elif etree.QName(parent_element).localname == 'elements':
        standard_element_expected_string = True
        if etree.QName(element).localname == 'boolean':
@@ -3370,22 +3394,19 @@ def _processValueItem(element, reg_key, reg_valuename, policy, parent_element,
                check_deleted = True
            if not check_deleted:
                this_vtype = 'REG_DWORD'
-                this_element_value = chr(1)
+                this_element_value = chr(1).encode('utf-16-le')
            standard_element_expected_string = False
        elif etree.QName(element).localname == 'decimal':
            # https://msdn.microsoft.com/en-us/library/dn605987(v=vs.85).aspx
            this_vtype = 'REG_DWORD'
            requested_val = this_element_value
            if this_element_value is not None:
-                temp_val = ''
-                for v in struct.unpack('2H', struct.pack('I', int(this_element_value))):
-                    temp_val = temp_val + six.unichr(v)
-                this_element_value = temp_val
+                this_element_value = struct.pack('I', int(this_element_value))
            if 'storeAsText' in element.attrib:
                if element.attrib['storeAsText'].lower() == 'true':
                    this_vtype = 'REG_SZ'
                    if requested_val is not None:
-                        this_element_value = str(requested_val)
+                        this_element_value = str(requested_val).encode('utf-16-le')
            if check_deleted:
                this_vtype = 'REG_SZ'
        elif etree.QName(element).localname == 'longDecimal':
@@ -3393,15 +3414,12 @@ def _processValueItem(element, reg_key, reg_valuename, policy, parent_element,
            this_vtype = 'REG_QWORD'
            requested_val = this_element_value
            if this_element_value is not None:
-                temp_val = ''
-                for v in struct.unpack('4H', struct.pack('I', int(this_element_value))):
-                    temp_val = temp_val + six.unichr(v)
-                this_element_value = temp_val
+                this_element_value = struct.pack('Q', int(this_element_value))
            if 'storeAsText' in element.attrib:
                if element.attrib['storeAsText'].lower() == 'true':
                    this_vtype = 'REG_SZ'
                    if requested_val is not None:
-                        this_element_value = str(requested_val)
+                        this_element_value = str(requested_val).encode('utf-16-le')
        elif etree.QName(element).localname == 'text':
            # https://msdn.microsoft.com/en-us/library/dn605969(v=vs.85).aspx
            this_vtype = 'REG_SZ'
@@ -3409,14 +3427,15 @@ def _processValueItem(element, reg_key, reg_valuename, policy, parent_element,
                if element.attrib['expandable'].lower() == 'true':
                    this_vtype = 'REG_EXPAND_SZ'
            if this_element_value is not None:
-                this_element_value = '{0}{1}'.format(this_element_value, chr(0))
+                this_element_value = b''.join([this_element_value.encode('utf-16-le'),
+                                               encoded_null])
        elif etree.QName(element).localname == 'multiText':
            this_vtype = 'REG_MULTI_SZ'
            if this_element_value is not None:
                this_element_value = '{0}{1}{1}'.format(chr(0).join(this_element_value), chr(0))
        elif etree.QName(element).localname == 'list':
            standard_element_expected_string = False
-            del_keys = ''
+            del_keys = b''
            element_valuenames = []
            element_values = this_element_value
            if this_element_value is not None:
@@ -3425,12 +3444,20 @@ def _processValueItem(element, reg_key, reg_valuename, policy, parent_element,
                if element.attrib['additive'].lower() == 'false':
                    # a delete values will be added before all the other
                    # value = data pairs
-                    del_keys = u'[{1}{0};**delvals.{0};{2}{0};{3}{0};{4}{0}]'.format(
-                        chr(0),
-                        reg_key,
-                        chr(registry.vtype[this_vtype]),
-                        chr(len(' {0}'.format(chr(0)).encode('utf-16-le'))),
-                        ' ')
+                    del_keys = b''.join(['['.encode('utf-16-le'),
+                                         reg_key,
+                                         encoded_null,
+                                         encoded_semicolon,
+                                         '**delvals.'.encode('utf-16-le'),
+                                         encoded_null,
+                                         encoded_semicolon,
+                                         chr(registry.vtype[this_vtype]).encode('utf-32-le'),
+                                         encoded_semicolon,
+                                         chr(len(' {0}'.format(chr(0)).encode('utf-16-le'))).encode('utf-32-le'),
+                                         encoded_semicolon,
+                                         ' '.encode('utf-16-le'),
+                                         encoded_null,
+                                         ']'.encode('utf-16-le')])
            if 'expandable' in element.attrib:
                this_vtype = 'REG_EXPAND_SZ'
            if 'explicitValue' in element.attrib and element.attrib['explicitValue'].lower() == 'true':
@@ -3449,61 +3476,103 @@ def _processValueItem(element, reg_key, reg_valuename, policy, parent_element,
                log.debug('element_valuenames == {0} and element_values == {1}'.format(element_valuenames,
                                                                                       element_values))
                for i, item in enumerate(element_valuenames):
-                    expected_string = expected_string + standard_layout.format(
-                        chr(0),
-                        reg_key,
-                        element_valuenames[i],
-                        chr(registry.vtype[this_vtype]),
-                        six.unichr(len('{0}{1}'.format(element_values[i],
-                                                       chr(0)).encode('utf-16-le'))),
-                        '{0}{1}'.format(element_values[i], chr(0)))
+                    expected_string = expected_string + b''.join(['['.encode('utf-16-le'),
+                                                                  reg_key,
+                                                                  encoded_null,
+                                                                  encoded_semicolon,
+                                                                  element_valuenames[i].encode('utf-16-le'),
+                                                                  encoded_null,
+                                                                  encoded_semicolon,
+                                                                  chr(registry.vtype[this_vtype]).encode('utf-32-le'),
+                                                                  encoded_semicolon,
+                                                                  six.unichr(len('{0}{1}'.format(element_values[i],
+                                                                                                 chr(0)).encode('utf-16-le'))).encode('utf-32-le'),
+                                                                  encoded_semicolon,
+                                                                  b''.join([element_values[i].encode('utf-16-le'),
+                                                                            encoded_null]),
+                                                                  ']'.encode('utf-16-le')])
            else:
-                expected_string = del_keys + r'[{1}{0};'.format(chr(0),
-                                                                reg_key)
+                expected_string = del_keys + b''.join(['['.encode('utf-16-le'),
+                                                       reg_key,
+                                                       encoded_null,
+                                                       encoded_semicolon])
        else:
-            expected_string = u'[{1}{0};**delvals.{0};{2}{0};{3}{0};{4}{0}]'.format(
-                chr(0),
-                reg_key,
-                chr(registry.vtype[this_vtype]),
-                chr(len(' {0}'.format(chr(0)).encode('utf-16-le'))),
-                ' ')
+            expected_string = b''.join(['['.encode('utf-16-le'),
+                                        reg_key,
+                                        encoded_null,
+                                        encoded_semicolon,
+                                        '**delvals.'.encode('utf-16-le'),
+                                        encoded_null,
+                                        encoded_semicolon,
+                                        chr(registry.vtype[this_vtype]).encode('utf-32-le'),
+                                        encoded_semicolon,
+                                        chr(len(' {0}'.format(chr(0)))).encode('utf-32-le'),
+                                        encoded_semicolon,
+                                        ' '.encode('utf-16-le'),
+                                        encoded_null,
+                                        ']'.encode('utf-16-le')])
    elif etree.QName(element).localname == 'enum':
        if this_element_value is not None:
            pass

    if standard_element_expected_string and not check_deleted:
        if this_element_value is not None:
-            expected_string = standard_layout.format(
-                chr(0),
-                reg_key,
-                reg_valuename,
-                chr(registry.vtype[this_vtype]),
-                six.unichr(len(this_element_value.encode('utf-16-le'))),
-                this_element_value)
+            expected_string = b''.join(['['.encode('utf-16-le'),
+                                        reg_key,
+                                        encoded_null,
+                                        encoded_semicolon,
+                                        reg_valuename,
+                                        encoded_null,
+                                        encoded_semicolon,
+                                        chr(registry.vtype[this_vtype]).encode('utf-32-le'),
+                                        encoded_semicolon,
+                                        six.unichr(len(this_element_value)).encode('utf-32-le'),
+                                        encoded_semicolon,
+                                        this_element_value,
+                                        ']'.encode('utf-16-le')])
        else:
-            expected_string = u'[{1}{0};{2}{0};{3}{0};'.format(chr(0),
-                                                               reg_key,
-                                                               reg_valuename,
-                                                               chr(registry.vtype[this_vtype]))
+            expected_string = b''.join(['['.encode('utf-16-le'),
+                                        reg_key,
+                                        encoded_null,
+                                        encoded_semicolon,
+                                        reg_valuename,
+                                        encoded_null,
+                                        encoded_semicolon,
+                                        chr(registry.vtype[this_vtype]).encode('utf-32-le'),
+                                        encoded_semicolon])

    if not expected_string:
        if etree.QName(element).localname == "delete" or check_deleted:
            # delete value
-            expected_string = u'[{1}{0};**del.{2}{0};{3}{0};{4}{0};{5}{0}]'.format(
-                chr(0),
-                reg_key,
-                reg_valuename,
-                chr(registry.vtype[this_vtype]),
-                six.unichr(len(' {0}'.format(chr(0)).encode('utf-16-le'))),
-                ' ')
+            expected_string = b''.join(['['.encode('utf-16-le'),
+                                        reg_key,
+                                        encoded_null,
+                                        encoded_semicolon,
+                                        '**del.'.encode('utf-16-le'),
+                                        reg_valuename,
+                                        encoded_null,
+                                        encoded_semicolon,
+                                        chr(registry.vtype[this_vtype]).encode('utf-32-le'),
+                                        encoded_semicolon,
+                                        six.unichr(len(' {0}'.format(chr(0)).encode('utf-16-le'))).encode('utf-32-le'),
+                                        encoded_semicolon,
+                                        ' '.encode('utf-16-le'),
+                                        encoded_null,
+                                        ']'.encode('utf-16-le')])
        else:
-            expected_string = standard_layout.format(
-                chr(0),
-                reg_key,
-                reg_valuename,
-                chr(registry.vtype[this_vtype]),
-                six.unichr(len(this_element_value.encode('utf-16-le'))),
-                this_element_value)
+            expected_string = b''.join(['['.encode('utf-16-le'),
+                                        reg_key,
+                                        encoded_null,
+                                        encoded_semicolon,
+                                        reg_valuename,
+                                        encoded_null,
+                                        encoded_semicolon,
+                                        chr(registry.vtype[this_vtype]).encode('utf-32-le'),
+                                        encoded_semicolon,
+                                        six.unichr(len(this_element_value)).encode('utf-32-le'),
+                                        encoded_semicolon,
+                                        this_element_value,
+                                        ']'.encode('utf-16-le')])
    return expected_string

@@ -3528,17 +3597,16 @@ def _checkAllAdmxPolicies(policy_class,
    full_names = {}
    if policy_filedata:
        log.debug('POLICY CLASS {0} has file data'.format(policy_class))
-        policy_filedata_split = re.sub(r'\]$',
-                                       '',
-                                       re.sub(r'^\[',
-                                              '',
-                                              policy_filedata.replace(module_policy_data.reg_pol_header, ''))
-                                       ).split('][')
-
+        policy_filedata_split = re.sub(salt.utils.to_bytes(r'\]{0}$'.format(chr(0))),
+                                       b'',
+                                       re.sub(salt.utils.to_bytes(r'^\[{0}'.format(chr(0))),
+                                              b'',
+                                              re.sub(re.escape(module_policy_data.reg_pol_header.encode('utf-16-le')), b'', policy_filedata))
+                                       ).split(']['.encode('utf-16-le'))
        for policy_item in policy_filedata_split:
-            policy_item_key = policy_item.split('{0};'.format(chr(0)))[0]
+            policy_item_key = policy_item.split('{0};'.format(chr(0)).encode('utf-16-le'))[0].decode('utf-16-le').lower()
            if policy_item_key:
-                for admx_item in REGKEY_XPATH(admx_policy_definitions, keyvalue=policy_item_key.lower()):
+                for admx_item in REGKEY_XPATH(admx_policy_definitions, keyvalue=policy_item_key):
                    if etree.QName(admx_item).localname == 'policy':
                        if admx_item not in admx_policies:
                            admx_policies.append(admx_item)
@@ -3601,8 +3669,11 @@ def _checkAllAdmxPolicies(policy_class,
                    break
        this_policynamespace = admx_policy.nsmap[admx_policy.prefix]
        if ENABLED_VALUE_XPATH(admx_policy) and this_policy_setting == 'Not Configured':
-            element_only_enabled_disabled = False
-            explicit_enable_disable_value_setting = True
+            # some policies have a disabled list but not an enabled list
+            # added this to address those issues
+            if DISABLED_LIST_XPATH(admx_policy):
+                element_only_enabled_disabled = False
+                explicit_enable_disable_value_setting = True
            if _checkValueItemParent(admx_policy,
                                     this_policyname,
                                     this_key,
@@ -3615,8 +3686,11 @@ def _checkAllAdmxPolicies(policy_class,
                policy_vals[this_policynamespace] = {}
            policy_vals[this_policynamespace][this_policyname] = this_policy_setting
        if DISABLED_VALUE_XPATH(admx_policy) and this_policy_setting == 'Not Configured':
-            element_only_enabled_disabled = False
-            explicit_enable_disable_value_setting = True
+            # some policies have a disabled list but not an enabled list
+            # added this to address those issues
+            if ENABLED_LIST_XPATH(admx_policy):
+                element_only_enabled_disabled = False
+                explicit_enable_disable_value_setting = True
            if _checkValueItemParent(admx_policy,
                                     this_policyname,
                                     this_key,
@@ -3841,7 +3915,7 @@ def _checkAllAdmxPolicies(policy_class,
                                               admx_policy,
                                               elements_item,
                                               check_deleted=False)
-                                ) + r'(?!\*\*delvals\.)',
+                                ) + salt.utils.to_bytes(r'(?!\*\*delvals\.)'),
                                policy_filedata):
                    configured_value = _getDataFromRegPolData(_processValueItem(child_item,
                                                                                child_key,
@@ -4034,7 +4108,6 @@ def _read_regpol_file(reg_pol_path):
    if os.path.exists(reg_pol_path):
        with salt.utils.fopen(reg_pol_path, 'rb') as pol_file:
            returndata = pol_file.read()
-        returndata = returndata.decode('utf-16-le')
    return returndata

@@ -4044,12 +4117,13 @@ def _regexSearchKeyValueCombo(policy_data, policy_regpath, policy_regkey):
    for a policy_regpath and policy_regkey combo
    '''
    if policy_data:
-        specialValueRegex = r'(\*\*Del\.|\*\*DelVals\.){0,1}'
-        _thisSearch = r'\[{1}{0};{3}{2}{0};'.format(
-            chr(0),
-            re.escape(policy_regpath),
-            re.escape(policy_regkey),
-            specialValueRegex)
+        specialValueRegex = salt.utils.to_bytes(r'(\*\*Del\.|\*\*DelVals\.){0,1}')
+        _thisSearch = b''.join([salt.utils.to_bytes(r'\['),
+                                re.escape(policy_regpath),
+                                b'\00;',
+                                specialValueRegex,
+                                re.escape(policy_regkey),
+                                b'\00;'])
        match = re.search(_thisSearch, policy_data, re.IGNORECASE)
        if match:
            return policy_data[match.start():(policy_data.index(']', match.end())) + 1]
@@ -4080,9 +4154,9 @@ def _write_regpol_data(data_to_write,
    if not os.path.exists(policy_file_path):
        ret = __salt__['file.makedirs'](policy_file_path)
    with salt.utils.fopen(policy_file_path, 'wb') as pol_file:
-        if not data_to_write.startswith(reg_pol_header):
+        if not data_to_write.startswith(reg_pol_header.encode('utf-16-le')):
            pol_file.write(reg_pol_header.encode('utf-16-le'))
-        pol_file.write(data_to_write.encode('utf-16-le'))
+        pol_file.write(data_to_write)
    try:
        gpt_ini_data = ''
        if os.path.exists(gpt_ini_path):
@@ -4158,13 +4232,14 @@ def _policyFileReplaceOrAppendList(string_list, policy_data):
    update existing strings or append the strings
    '''
    if not policy_data:
-        policy_data = ''
+        policy_data = b''
    # we are going to clean off the special pre-fixes, so we get only the valuename
-    specialValueRegex = r'(\*\*Del\.|\*\*DelVals\.){0,1}'
+    specialValueRegex = salt.utils.to_bytes(r'(\*\*Del\.|\*\*DelVals\.){0,1}')
    for this_string in string_list:
-        list_item_key = this_string.split('{0};'.format(chr(0)))[0].lstrip('[')
+        list_item_key = this_string.split(b'\00;')[0].lstrip(b'[')
        list_item_value_name = re.sub(specialValueRegex,
-                                      '', this_string.split('{0};'.format(chr(0)))[1],
+                                      b'',
+                                      this_string.split(b'\00;')[1],
                                      flags=re.IGNORECASE)
        log.debug('item value name is {0}'.format(list_item_value_name))
        data_to_replace = _regexSearchKeyValueCombo(policy_data,
@@ -4175,7 +4250,7 @@ def _policyFileReplaceOrAppendList(string_list, policy_data):
            policy_data = policy_data.replace(data_to_replace, this_string)
        else:
            log.debug('appending {0}'.format([this_string]))
-            policy_data = ''.join([policy_data, this_string])
+            policy_data = b''.join([policy_data, this_string])
    return policy_data

@@ -4186,16 +4261,16 @@ def _policyFileReplaceOrAppend(this_string, policy_data, append_only=False):
    '''
    # we are going to clean off the special pre-fixes, so we get only the valuename
    if not policy_data:
-        policy_data = ''
-    specialValueRegex = r'(\*\*Del\.|\*\*DelVals\.){0,1}'
+        policy_data = b''
+    specialValueRegex = salt.utils.to_bytes(r'(\*\*Del\.|\*\*DelVals\.){0,1}')
    item_key = None
    item_value_name = None
    data_to_replace = None
    if not append_only:
-        item_key = this_string.split('{0};'.format(chr(0)))[0].lstrip('[')
+        item_key = this_string.split(b'\00;')[0].lstrip(b'[')
        item_value_name = re.sub(specialValueRegex,
-                                 '',
-                                 this_string.split('{0};'.format(chr(0)))[1],
+                                 b'',
+                                 this_string.split(b'\00;')[1],
                                 flags=re.IGNORECASE)
        log.debug('item value name is {0}'.format(item_value_name))
        data_to_replace = _regexSearchKeyValueCombo(policy_data, item_key, item_value_name)
@@ -4204,7 +4279,7 @@ def _policyFileReplaceOrAppend(this_string, policy_data, append_only=False):
        policy_data = policy_data.replace(data_to_replace, this_string)
    else:
        log.debug('appending {0}'.format([this_string]))
-        policy_data = ''.join([policy_data, this_string])
+        policy_data = b''.join([policy_data, this_string])

    return policy_data

@@ -4222,9 +4297,10 @@ def _writeAdminTemplateRegPolFile(admtemplate_data,
    REGISTRY_FILE_VERSION (u'\x01\00')

    https://msdn.microsoft.com/en-us/library/aa374407(VS.85).aspx
-    [Registry Path<NULL>;Reg Value<NULL>;Reg Type<NULL>;SizeInBytes<NULL>;Data<NULL>]
+    + https://msdn.microsoft.com/en-us/library/cc232696.aspx
+    [Registry Path<NULL>;Reg Value<NULL>;Reg Type;SizeInBytes;Data<NULL>]
    '''
-    existing_data = ''
+    existing_data = b''
    base_policy_settings = {}
    policy_data = _policy_info()
    policySearchXpath = '//ns1:*[@id = "{0}" or @name = "{0}"]'
@@ -4242,8 +4318,8 @@ def _writeAdminTemplateRegPolFile(admtemplate_data,
        for adm_namespace in admtemplate_data:
            for adm_policy in admtemplate_data[adm_namespace]:
                if str(admtemplate_data[adm_namespace][adm_policy]).lower() == 'not configured':
-                    if adm_policy in base_policy_settings[adm_namespace]:
-                        base_policy_settings[adm_namespace].pop(adm_policy)
+                    if base_policy_settings.get(adm_namespace, {}).pop(adm_policy, None) is not None:
+                        log.debug('Policy "{0}" removed'.format(adm_policy))
                else:
                    log.debug('adding {0} to base_policy_settings'.format(adm_policy))
                    if adm_namespace not in base_policy_settings:
@@ -4850,7 +4926,7 @@ def get_policy_info(policy_name,
                            policy_class,
                            ', '.join(policy_data.policies.keys()))
        return ret
-    if policy_name in policy_data.policies[policy_class]:
+    if policy_name in policy_data.policies[policy_class]['policies']:
        ret['policy_aliases'].append(policy_data.policies[policy_class]['policies'][policy_name]['Policy'])
        ret['policy_found'] = True
        ret['message'] = ''

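The `_regexSearchKeyValueCombo` change above matters because on Python 3 a `str` pattern cannot be searched against `bytes` data; once Registry.pol is handled as raw bytes, the regex fragments must be bytes too. A minimal, self-contained illustration with made-up key and value names (not Salt's actual data or helpers):

```python
import re

# UTF-16-LE encoded policy fragment: [SomeKey<NUL>;**Del.SomeValue...
policy_data = '[SomeKey\x00;**Del.SomeValue\x00;'.encode('utf-16-le')

# Optional '**Del.' prefix, pre-encoded so the whole pattern stays bytes.
special = re.escape('**Del.'.encode('utf-16-le'))

pattern = b''.join([re.escape('['.encode('utf-16-le')),
                    re.escape('SomeKey'.encode('utf-16-le')),
                    re.escape('\x00;'.encode('utf-16-le')),
                    b'(?:' + special + b')?',      # prefix may be absent
                    re.escape('SomeValue'.encode('utf-16-le'))])

match = re.search(pattern, policy_data, re.IGNORECASE)
```

Mixing the types (a `str` pattern against `bytes` data) raises `TypeError: cannot use a string pattern on a bytes-like object`, which is the failure mode the patch removes.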
@@ -39,10 +39,11 @@ import logging
 import os
 import re
 import time
+import sys
 from functools import cmp_to_key

 # Import third party libs
-import salt.ext.six as six
+from salt.ext import six
 # pylint: disable=import-error,no-name-in-module
 from salt.ext.six.moves.urllib.parse import urlparse as _urlparse
@@ -50,9 +51,12 @@ from salt.ext.six.moves.urllib.parse import urlparse as _urlparse
 from salt.exceptions import (CommandExecutionError,
                              SaltInvocationError,
                              SaltRenderError)
-import salt.utils
-import salt.utils.pkg
+import salt.utils  # Can be removed once is_true, get_hash, compare_dicts are moved
+import salt.utils.args
+import salt.utils.files
+import salt.utils.path
+import salt.utils.pkg
+import salt.utils.versions
 import salt.syspaths
 import salt.payload
 from salt.exceptions import MinionError
@@ -99,7 +103,7 @@ def latest_version(*names, **kwargs):
        salt '*' pkg.latest_version <package name>
        salt '*' pkg.latest_version <package1> <package2> <package3> ...
    '''
-    if len(names) == 0:
+    if not names:
        return ''

    # Initialize the return dict with empty strings
@@ -124,6 +128,8 @@ def latest_version(*names, **kwargs):
        if name in installed_pkgs:
            log.trace('Determining latest installed version of %s', name)
            try:
+                # installed_pkgs[name] Can be version number or 'Not Found'
+                # 'Not Found' occurs when version number is not found in the registry
                latest_installed = sorted(
                    installed_pkgs[name],
                    key=cmp_to_key(_reverse_cmp_pkg_versions)
@@ -140,6 +146,8 @@ def latest_version(*names, **kwargs):
        # get latest available (from winrepo_dir) version of package
        pkg_info = _get_package_info(name, saltenv=saltenv)
        log.trace('Raw winrepo pkg_info for {0} is {1}'.format(name, pkg_info))
+
+        # latest_available can be version number or 'latest' or even 'Not Found'
        latest_available = _get_latest_pkg_version(pkg_info)
        if latest_available:
            log.debug('Latest available version '
@@ -147,9 +155,9 @@ def latest_version(*names, **kwargs):

        # check, whether latest available version
        # is newer than latest installed version
-        if salt.utils.compare_versions(ver1=str(latest_available),
-                                       oper='>',
-                                       ver2=str(latest_installed)):
+        if compare_versions(ver1=str(latest_available),
+                            oper='>',
+                            ver2=str(latest_installed)):
            log.debug('Upgrade of {0} from {1} to {2} '
                      'is available'.format(name,
                                            latest_installed,
@@ -188,10 +196,9 @@ def upgrade_available(name, **kwargs):
    # same default as latest_version
    refresh = salt.utils.is_true(kwargs.get('refresh', True))

-    current = version(name, saltenv=saltenv, refresh=refresh).get(name)
-    latest = latest_version(name, saltenv=saltenv, refresh=False)
-
-    return compare_versions(latest, '>', current)
+    # if latest_version returns blank, the latest version is already installed or
+    # their is no package definition. This is a salt standard which could be improved.
+    return latest_version(name, saltenv=saltenv, refresh=refresh) != ''


 def list_upgrades(refresh=True, **kwargs):
@@ -222,9 +229,13 @@ def list_upgrades(refresh=True, **kwargs):
    pkgs = {}
    for pkg in installed_pkgs:
        if pkg in available_pkgs:
+            # latest_version() will be blank if the latest version is installed.
+            # or the package name is wrong. Given we check available_pkgs, this
+            # should not be the case of wrong package name.
+            # Note: latest_version() is an expensive way to do this as it
+            # calls list_pkgs each time.
            latest_ver = latest_version(pkg, refresh=False, saltenv=saltenv)
-            install_ver = installed_pkgs[pkg]
-            if compare_versions(latest_ver, '>', install_ver):
+            if latest_ver:
                pkgs[pkg] = latest_ver

    return pkgs
@@ -241,7 +252,7 @@ def list_available(*names, **kwargs):

    saltenv (str): The salt environment to use. Default ``base``.

-    refresh (bool): Refresh package metadata. Default ``True``.
+    refresh (bool): Refresh package metadata. Default ``False``.

    return_dict_always (bool):
        Default ``False`` dict when a single package name is queried.
@@ -264,11 +275,10 @@ def list_available(*names, **kwargs):
        return ''

    saltenv = kwargs.get('saltenv', 'base')
-    refresh = salt.utils.is_true(kwargs.get('refresh', True))
+    refresh = salt.utils.is_true(kwargs.get('refresh', False))
+    _refresh_db_conditional(saltenv, force=refresh)
    return_dict_always = \
        salt.utils.is_true(kwargs.get('return_dict_always', False))

-    _refresh_db_conditional(saltenv, force=refresh)
    if len(names) == 1 and not return_dict_always:
        pkginfo = _get_package_info(names[0], saltenv=saltenv)
|
||||
if not pkginfo:
|
||||
|
@ -293,7 +303,9 @@ def list_available(*names, **kwargs):
|
|||
|
||||
def version(*names, **kwargs):
|
||||
'''
|
||||
Returns a version if the package is installed, else returns an empty string
|
||||
Returns a string representing the package version or an empty string if not
|
||||
installed. If more than one package name is specified, a dict of
|
||||
name/version pairs is returned.
|
||||
|
||||
Args:
|
||||
name (str): One or more package names
|
||||
|
@ -303,10 +315,11 @@ def version(*names, **kwargs):
|
|||
refresh (bool): Refresh package metadata. Default ``False``.
|
||||
|
||||
Returns:
|
||||
str: version string when a single package is specified.
|
||||
dict: The package name(s) with the installed versions.
|
||||
|
||||
.. code-block:: cfg
|
||||
|
||||
{['<version>', '<version>', ]} OR
|
||||
{'<package name>': ['<version>', '<version>', ]}
|
||||
|
||||
CLI Example:
|
||||
|
@ -315,19 +328,25 @@ def version(*names, **kwargs):
|
|||
|
||||
salt '*' pkg.version <package name>
|
||||
salt '*' pkg.version <package name01> <package name02>
|
||||
'''
|
||||
saltenv = kwargs.get('saltenv', 'base')
|
||||
|
||||
installed_pkgs = list_pkgs(refresh=kwargs.get('refresh', False))
|
||||
available_pkgs = get_repo_data(saltenv).get('repo')
|
||||
'''
|
||||
# Standard is return empty string even if not a valid name
|
||||
# TODO: Look at returning an error across all platforms with
|
||||
# CommandExecutionError(msg,info={'errors': errors })
|
||||
# available_pkgs = get_repo_data(saltenv).get('repo')
|
||||
# for name in names:
|
||||
# if name in available_pkgs:
|
||||
# ret[name] = installed_pkgs.get(name, '')
|
||||
|
||||
saltenv = kwargs.get('saltenv', 'base')
|
||||
installed_pkgs = list_pkgs(saltenv=saltenv, refresh=kwargs.get('refresh', False))
|
||||
|
||||
if len(names) == 1:
|
||||
return installed_pkgs.get(names[0], '')
|
||||
|
||||
ret = {}
|
||||
for name in names:
|
||||
if name in available_pkgs:
|
||||
ret[name] = installed_pkgs.get(name, '')
|
||||
else:
|
||||
ret[name] = 'not available'
|
||||
|
||||
ret[name] = installed_pkgs.get(name, '')
|
||||
return ret
|
||||
|
||||
|
||||
|
@ -424,7 +443,7 @@ def _get_reg_software():
|
|||
'(value not set)',
|
||||
'',
|
||||
None]
|
||||
#encoding = locale.getpreferredencoding()
|
||||
|
||||
reg_software = {}
|
||||
|
||||
hive = 'HKLM'
|
||||
|
@ -462,7 +481,7 @@ def _get_reg_software():
|
|||
def _refresh_db_conditional(saltenv, **kwargs):
|
||||
'''
|
||||
Internal use only in this module, has a different set of defaults and
|
||||
returns True or False. And supports check the age of the existing
|
||||
returns True or False. And supports checking the age of the existing
|
||||
generated metadata db, as well as ensure metadata db exists to begin with
|
||||
|
||||
Args:
|
||||
|
@ -476,8 +495,7 @@ def _refresh_db_conditional(saltenv, **kwargs):
|
|||
|
||||
failhard (bool):
|
||||
If ``True``, an error will be raised if any repo SLS files failed to
|
||||
process. If ``False``, no error will be raised, and a dictionary
|
||||
containing the full results will be returned.
|
||||
process.
|
||||
|
||||
Returns:
|
||||
bool: True Fetched or Cache uptodate, False to indicate an issue
|
||||
|
@ -695,8 +713,8 @@ def genrepo(**kwargs):
|
|||
|
||||
verbose (bool):
|
||||
Return verbose data structure which includes 'success_list', a list
|
||||
of all sls files and the package names contained within. Default
|
||||
'False'
|
||||
of all sls files and the package names contained within.
|
||||
Default ``False``.
|
||||
|
||||
failhard (bool):
|
||||
If ``True``, an error will be raised if any repo SLS files failed
|
||||
|
@ -739,11 +757,13 @@ def genrepo(**kwargs):
|
|||
successful_verbose
|
||||
)
|
||||
serial = salt.payload.Serial(__opts__)
|
||||
# TODO: 2016.11 has PY2 mode as 'w+b' develop has 'w+' ? PY3 is 'wb+'
|
||||
# also the reading of this is 'rb' in get_repo_data()
|
||||
mode = 'w+' if six.PY2 else 'wb+'
|
||||
|
||||
with salt.utils.fopen(repo_details.winrepo_file, mode) as repo_cache:
|
||||
repo_cache.write(serial.dumps(ret))
|
||||
# save reading it back again. ! this breaks due to utf8 issues
|
||||
#__context__['winrepo.data'] = ret
|
||||
# For some reason we can not save ret into __context__['winrepo.data'] as this breaks due to utf8 issues
|
||||
successful_count = len(successful_verbose)
|
||||
error_count = len(ret['errors'])
|
||||
if verbose:
|
||||
|
@ -778,7 +798,7 @@ def genrepo(**kwargs):
|
|||
return results
|
||||
|
||||
|
||||
def _repo_process_pkg_sls(file, short_path_name, ret, successful_verbose):
|
||||
def _repo_process_pkg_sls(filename, short_path_name, ret, successful_verbose):
|
||||
renderers = salt.loader.render(__opts__, __salt__)
|
||||
|
||||
def _failed_compile(msg):
|
||||
|
@ -788,7 +808,7 @@ def _repo_process_pkg_sls(file, short_path_name, ret, successful_verbose):
|
|||
|
||||
try:
|
||||
config = salt.template.compile_template(
|
||||
file,
|
||||
filename,
|
||||
renderers,
|
||||
__opts__['renderer'],
|
||||
__opts__.get('renderer_blacklist', ''),
|
||||
|
@ -803,7 +823,6 @@ def _repo_process_pkg_sls(file, short_path_name, ret, successful_verbose):
|
|||
if config:
|
||||
revmap = {}
|
||||
errors = []
|
||||
pkgname_ok_list = []
|
||||
for pkgname, versions in six.iteritems(config):
|
||||
if pkgname in ret['repo']:
|
||||
log.error(
|
||||
|
@ -812,12 +831,12 @@ def _repo_process_pkg_sls(file, short_path_name, ret, successful_verbose):
|
|||
)
|
||||
errors.append('package \'{0}\' already defined'.format(pkgname))
|
||||
break
|
||||
for version, repodata in six.iteritems(versions):
|
||||
for version_str, repodata in six.iteritems(versions):
|
||||
# Ensure version is a string/unicode
|
||||
if not isinstance(version, six.string_types):
|
||||
if not isinstance(version_str, six.string_types):
|
||||
msg = (
|
||||
'package \'{0}\'{{0}}, version number {1} '
|
||||
'is not a string'.format(pkgname, version)
|
||||
'is not a string'.format(pkgname, version_str)
|
||||
)
|
||||
log.error(
|
||||
msg.format(' within \'{0}\''.format(short_path_name))
|
||||
|
@ -829,7 +848,7 @@ def _repo_process_pkg_sls(file, short_path_name, ret, successful_verbose):
|
|||
msg = (
|
||||
'package \'{0}\'{{0}}, repo data for '
|
||||
'version number {1} is not defined as a dictionary '
|
||||
.format(pkgname, version)
|
||||
.format(pkgname, version_str)
|
||||
)
|
||||
log.error(
|
||||
msg.format(' within \'{0}\''.format(short_path_name))
|
||||
|
@ -840,8 +859,6 @@ def _repo_process_pkg_sls(file, short_path_name, ret, successful_verbose):
|
|||
if errors:
|
||||
ret.setdefault('errors', {})[short_path_name] = errors
|
||||
else:
|
||||
if pkgname not in pkgname_ok_list:
|
||||
pkgname_ok_list.append(pkgname)
|
||||
ret.setdefault('repo', {}).update(config)
|
||||
ret.setdefault('name_map', {}).update(revmap)
|
||||
successful_verbose[short_path_name] = config.keys()
|
||||
|
@ -916,7 +933,8 @@ def install(name=None, refresh=False, pkgs=None, **kwargs):
|
|||
to install. (no spaces after the commas)
|
||||
|
||||
refresh (bool):
|
||||
Boolean value representing whether or not to refresh the winrepo db
|
||||
Boolean value representing whether or not to refresh the winrepo db.
|
||||
Default ``False``.
|
||||
|
||||
pkgs (list):
|
||||
A list of packages to install from a software repository. All
|
||||
|
@ -1051,7 +1069,6 @@ def install(name=None, refresh=False, pkgs=None, **kwargs):
|
|||
'''
|
||||
ret = {}
|
||||
saltenv = kwargs.pop('saltenv', 'base')
|
||||
|
||||
refresh = salt.utils.is_true(refresh)
|
||||
# no need to call _refresh_db_conditional as list_pkgs will do it
|
||||
|
||||
|
@ -1072,7 +1089,7 @@ def install(name=None, refresh=False, pkgs=None, **kwargs):
|
|||
for pkg in pkg_params:
|
||||
pkg_params[pkg] = {'version': pkg_params[pkg]}
|
||||
|
||||
if pkg_params is None or len(pkg_params) == 0:
|
||||
if not pkg_params:
|
||||
log.error('No package definition found')
|
||||
return {}
|
||||
|
||||
|
@ -1114,11 +1131,12 @@ def install(name=None, refresh=False, pkgs=None, **kwargs):
|
|||
version_num = str(version_num)
|
||||
|
||||
if not version_num:
|
||||
# following can be version number or latest
|
||||
version_num = _get_latest_pkg_version(pkginfo)
|
||||
|
||||
# Check if the version is already installed
|
||||
if version_num in old.get(pkg_name, '').split(',') \
|
||||
or (old.get(pkg_name) == 'Not Found'):
|
||||
or (old.get(pkg_name, '') == 'Not Found'):
|
||||
# Desired version number already installed
|
||||
ret[pkg_name] = {'current': version_num}
|
||||
continue
|
||||
|
@ -1237,6 +1255,7 @@ def install(name=None, refresh=False, pkgs=None, **kwargs):
|
|||
log.debug('Source hash matches package hash.')
|
||||
|
||||
# Get install flags
|
||||
|
||||
install_flags = pkginfo[version_num].get('install_flags', '')
|
||||
if options and options.get('extra_install_flags'):
|
||||
install_flags = '{0} {1}'.format(
|
||||
|
@ -1244,32 +1263,32 @@ def install(name=None, refresh=False, pkgs=None, **kwargs):
|
|||
options.get('extra_install_flags', '')
|
||||
)
|
||||
|
||||
#Compute msiexec string
|
||||
# Compute msiexec string
|
||||
use_msiexec, msiexec = _get_msiexec(pkginfo[version_num].get('msiexec', False))
|
||||
|
||||
# Build cmd and arguments
|
||||
# cmd and arguments must be separated for use with the task scheduler
|
||||
cmd_shell = os.getenv('ComSpec', '{0}\\system32\\cmd.exe'.format(os.getenv('WINDIR')))
|
||||
if use_msiexec:
|
||||
cmd = msiexec
|
||||
arguments = ['/i', cached_pkg]
|
||||
arguments = '"{0}" /I "{1}"'.format(msiexec, cached_pkg)
|
||||
if pkginfo[version_num].get('allusers', True):
|
||||
arguments.append('ALLUSERS="1"')
|
||||
arguments.extend(salt.utils.shlex_split(install_flags, posix=False))
|
||||
arguments = '{0} ALLUSERS=1'.format(arguments)
|
||||
else:
|
||||
cmd = cached_pkg
|
||||
arguments = salt.utils.shlex_split(install_flags, posix=False)
|
||||
arguments = '"{0}"'.format(cached_pkg)
|
||||
|
||||
if install_flags:
|
||||
arguments = '{0} {1}'.format(arguments, install_flags)
|
||||
|
||||
# Install the software
|
||||
# Check Use Scheduler Option
|
||||
if pkginfo[version_num].get('use_scheduler', False):
|
||||
|
||||
# Create Scheduled Task
|
||||
__salt__['task.create_task'](name='update-salt-software',
|
||||
user_name='System',
|
||||
force=True,
|
||||
action_type='Execute',
|
||||
cmd=cmd,
|
||||
arguments=' '.join(arguments),
|
||||
cmd=cmd_shell,
|
||||
arguments='/s /c "{0}"'.format(arguments),
|
||||
start_in=cache_path,
|
||||
trigger_type='Once',
|
||||
start_date='1975-01-01',
|
||||
|
@ -1311,14 +1330,10 @@ def install(name=None, refresh=False, pkgs=None, **kwargs):
|
|||
log.error('Scheduled Task failed to run')
|
||||
ret[pkg_name] = {'install status': 'failed'}
|
||||
else:
|
||||
|
||||
# Combine cmd and arguments
|
||||
cmd = [cmd]
|
||||
cmd.extend(arguments)
|
||||
|
||||
# Launch the command
|
||||
result = __salt__['cmd.run_all'](cmd,
|
||||
result = __salt__['cmd.run_all']('"{0}" /s /c "{1}"'.format(cmd_shell, arguments),
|
||||
cache_path,
|
||||
output_loglevel='trace',
|
||||
python_shell=False,
|
||||
redirect_stderr=True)
|
||||
if not result['retcode']:
|
||||
|
@ -1397,14 +1412,17 @@ def remove(name=None, pkgs=None, version=None, **kwargs):
|
|||
.. versionadded:: 0.16.0
|
||||
|
||||
Args:
|
||||
name (str): The name(s) of the package(s) to be uninstalled. Can be a
|
||||
single package or a comma delimted list of packages, no spaces.
|
||||
name (str):
|
||||
The name(s) of the package(s) to be uninstalled. Can be a
|
||||
single package or a comma delimited list of packages, no spaces.
|
||||
|
||||
version (str):
|
||||
The version of the package to be uninstalled. If this option is
|
||||
used to to uninstall multiple packages, then this version will be
|
||||
applied to all targeted packages. Recommended using only when
|
||||
uninstalling a single package. If this parameter is omitted, the
|
||||
latest version will be uninstalled.
|
||||
|
||||
pkgs (list):
|
||||
A list of packages to delete. Must be passed as a python list. The
|
||||
``name`` parameter will be ignored if this option is passed.
|
||||
|
@ -1494,7 +1512,6 @@ def remove(name=None, pkgs=None, version=None, **kwargs):
|
|||
removal_targets.append(version_num)
|
||||
|
||||
for target in removal_targets:
|
||||
|
||||
# Get the uninstaller
|
||||
uninstaller = pkginfo[target].get('uninstaller', '')
|
||||
cache_dir = pkginfo[target].get('cache_dir', False)
|
||||
|
@ -1519,6 +1536,7 @@ def remove(name=None, pkgs=None, version=None, **kwargs):
|
|||
# If true, the entire directory will be cached instead of the
|
||||
# individual file. This is useful for installations that are not
|
||||
# single files
|
||||
|
||||
if cache_dir and uninstaller.startswith('salt:'):
|
||||
path, _ = os.path.split(uninstaller)
|
||||
__salt__['cp.cache_dir'](path,
|
||||
|
@ -1541,6 +1559,7 @@ def remove(name=None, pkgs=None, version=None, **kwargs):
|
|||
|
||||
# Compare the hash of the cached installer to the source only if
|
||||
# the file is hosted on salt:
|
||||
# TODO cp.cache_file does cache and hash checking? So why do it again?
|
||||
if uninstaller.startswith('salt:'):
|
||||
if __salt__['cp.hash_file'](uninstaller, saltenv) != \
|
||||
__salt__['cp.hash_file'](cached_pkg):
|
||||
|
@ -1558,14 +1577,13 @@ def remove(name=None, pkgs=None, version=None, **kwargs):
|
|||
else:
|
||||
# Run the uninstaller directly
|
||||
# (not hosted on salt:, https:, etc.)
|
||||
cached_pkg = uninstaller
|
||||
cached_pkg = os.path.expandvars(uninstaller)
|
||||
|
||||
# Fix non-windows slashes
|
||||
cached_pkg = cached_pkg.replace('/', '\\')
|
||||
cache_path, _ = os.path.split(cached_pkg)
|
||||
|
||||
# Get parameters for cmd
|
||||
expanded_cached_pkg = str(os.path.expandvars(cached_pkg))
|
||||
# os.path.expandvars is not required as we run everything through cmd.exe /s /c
|
||||
|
||||
# Get uninstall flags
|
||||
uninstall_flags = pkginfo[target].get('uninstall_flags', '')
|
||||
|
@ -1574,30 +1592,32 @@ def remove(name=None, pkgs=None, version=None, **kwargs):
|
|||
uninstall_flags = '{0} {1}'.format(
|
||||
uninstall_flags, kwargs.get('extra_uninstall_flags', ''))
|
||||
|
||||
#Compute msiexec string
|
||||
# Compute msiexec string
|
||||
use_msiexec, msiexec = _get_msiexec(pkginfo[target].get('msiexec', False))
|
||||
cmd_shell = os.getenv('ComSpec', '{0}\\system32\\cmd.exe'.format(os.getenv('WINDIR')))
|
||||
|
||||
# Build cmd and arguments
|
||||
# cmd and arguments must be separated for use with the task scheduler
|
||||
if use_msiexec:
|
||||
cmd = msiexec
|
||||
arguments = ['/x']
|
||||
arguments.extend(salt.utils.shlex_split(uninstall_flags, posix=False))
|
||||
# Check if uninstaller is set to {guid}, if not we assume its a remote msi file.
|
||||
# which has already been downloaded.
|
||||
arguments = '"{0}" /X "{1}"'.format(msiexec, cached_pkg)
|
||||
else:
|
||||
cmd = expanded_cached_pkg
|
||||
arguments = salt.utils.shlex_split(uninstall_flags, posix=False)
|
||||
arguments = '"{0}"'.format(cached_pkg)
|
||||
|
||||
if uninstall_flags:
|
||||
arguments = '{0} {1}'.format(arguments, uninstall_flags)
|
||||
|
||||
# Uninstall the software
|
||||
# Check Use Scheduler Option
|
||||
if pkginfo[target].get('use_scheduler', False):
|
||||
|
||||
# Create Scheduled Task
|
||||
__salt__['task.create_task'](name='update-salt-software',
|
||||
user_name='System',
|
||||
force=True,
|
||||
action_type='Execute',
|
||||
cmd=cmd,
|
||||
arguments=' '.join(arguments),
|
||||
cmd=cmd_shell,
|
||||
arguments='/s /c "{0}"'.format(arguments),
|
||||
start_in=cache_path,
|
||||
trigger_type='Once',
|
||||
start_date='1975-01-01',
|
||||
|
@ -1610,13 +1630,10 @@ def remove(name=None, pkgs=None, version=None, **kwargs):
|
|||
log.error('Scheduled Task failed to run')
|
||||
ret[pkgname] = {'uninstall status': 'failed'}
|
||||
else:
|
||||
# Build the install command
|
||||
cmd = [cmd]
|
||||
cmd.extend(arguments)
|
||||
|
||||
# Launch the command
|
||||
result = __salt__['cmd.run_all'](
|
||||
cmd,
|
||||
'"{0}" /s /c "{1}"'.format(cmd_shell, arguments),
|
||||
output_loglevel='trace',
|
||||
python_shell=False,
|
||||
redirect_stderr=True)
|
||||
if not result['retcode']:
|
||||
|
@ -1662,11 +1679,13 @@ def purge(name=None, pkgs=None, version=None, **kwargs):
|
|||
|
||||
name (str): The name of the package to be deleted.
|
||||
|
||||
version (str): The version of the package to be deleted. If this option
|
||||
is used in combination with the ``pkgs`` option below, then this
|
||||
version (str):
|
||||
The version of the package to be deleted. If this option is
|
||||
used in combination with the ``pkgs`` option below, then this
|
||||
version will be applied to all targeted packages.
|
||||
|
||||
pkgs (list): A list of packages to delete. Must be passed as a python
|
||||
pkgs (list):
|
||||
A list of packages to delete. Must be passed as a python
|
||||
list. The ``name`` parameter will be ignored if this option is
|
||||
passed.
|
||||
|
||||
|
@ -1800,4 +1819,20 @@ def compare_versions(ver1='', oper='==', ver2=''):
|
|||
|
||||
salt '*' pkg.compare_versions 1.2 >= 1.3
|
||||
'''
|
||||
return salt.utils.compare_versions(ver1, oper, ver2)
|
||||
if not ver1:
|
||||
raise SaltInvocationError('compare_version, ver1 is blank')
|
||||
if not ver2:
|
||||
raise SaltInvocationError('compare_version, ver2 is blank')
|
||||
|
||||
# Support version being the special meaning of 'latest'
|
||||
if ver1 == 'latest':
|
||||
ver1 = str(sys.maxsize)
|
||||
if ver2 == 'latest':
|
||||
ver2 = str(sys.maxsize)
|
||||
# Support version being the special meaning of 'Not Found'
|
||||
if ver1 == 'Not Found':
|
||||
ver1 = '0.0.0.0.0'
|
||||
if ver2 == 'Not Found':
|
||||
ver2 = '0.0.0.0.0'
|
||||
|
||||
return salt.utils.compare_versions(ver1, oper, ver2, ignore_epoch=True)
|
||||
|
|
|
@@ -106,6 +106,13 @@ A REST API for Salt
expire_responses : True
Whether to check for and kill HTTP responses that have exceeded the
default timeout.

.. deprecated:: 2016.11.9, 2017.7.3, Oxygen

The "expire_responses" configuration setting, which corresponds
to the ``timeout_monitor`` setting in CherryPy, is no longer
supported in CherryPy versions >= 12.0.0.

max_request_body_size : ``1048576``
Maximum size for the HTTP request body.
collect_stats : False
@@ -506,6 +513,7 @@ import salt.ext.six as six
# Import Salt libs
import salt
import salt.auth
import salt.exceptions
import salt.utils
import salt.utils.event

@@ -753,11 +761,18 @@ def hypermedia_handler(*args, **kwargs):
except (salt.exceptions.SaltDaemonNotRunning,
salt.exceptions.SaltReqTimeoutError) as exc:
raise cherrypy.HTTPError(503, exc.strerror)
except (cherrypy.TimeoutError, salt.exceptions.SaltClientTimeout):
except salt.exceptions.SaltClientTimeout:
raise cherrypy.HTTPError(504)
except cherrypy.CherryPyException:
raise
except Exception as exc:
# The TimeoutError exception class was removed in CherryPy in 12.0.0, but
# Still check existence of TimeoutError and handle in CherryPy < 12.
# The check was moved down from the SaltClientTimeout error line because
# A one-line if statement throws a BaseException inheritance TypeError.
if hasattr(cherrypy, 'TimeoutError') and isinstance(exc, cherrypy.TimeoutError):
raise cherrypy.HTTPError(504)

import traceback

logger.debug("Error while processing request for: %s",
@@ -2731,8 +2746,6 @@ class API(object):
'server.socket_port': self.apiopts.get('port', 8000),
'server.thread_pool': self.apiopts.get('thread_pool', 100),
'server.socket_queue_size': self.apiopts.get('queue_size', 30),
'engine.timeout_monitor.on': self.apiopts.get(
'expire_responses', True),
'max_request_body_size': self.apiopts.get(
'max_request_body_size', 1048576),
'debug': self.apiopts.get('debug', False),
@@ -2750,6 +2763,14 @@ class API(object):
},
}

if salt.utils.version_cmp(cherrypy.__version__, '12.0.0') < 0:
# CherryPy >= 12.0 no longer supports "timeout_monitor", only set
# this config option when using an older version of CherryPy.
# See Issue #44601 for more information.
conf['global']['engine.timeout_monitor.on'] = self.apiopts.get(
'expire_responses', True
)

if cpstats and self.apiopts.get('collect_stats', False):
conf['/']['tools.cpstats.on'] = True

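The CherryPy hunks above gate the `engine.timeout_monitor.on` setting on the installed CherryPy version. A rough stand-in for that check, with a simplified dotted-integer comparison (the real `salt.utils.version_cmp` handles more version formats, and the CherryPy version string here is an assumed example):

```python
def version_cmp(v1, v2):
    # cmp()-style result for dotted-integer versions: -1, 0 or 1.
    t1 = tuple(int(p) for p in v1.split('.'))
    t2 = tuple(int(p) for p in v2.split('.'))
    return (t1 > t2) - (t1 < t2)

conf = {'global': {}}
cherrypy_version = '11.0.0'  # assumed for illustration

if version_cmp(cherrypy_version, '12.0.0') < 0:
    # Only pre-12 CherryPy still supports the timeout monitor.
    conf['global']['engine.timeout_monitor.on'] = True
```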
@@ -143,6 +143,17 @@ def get_printout(out, opts=None, **kwargs):
# See Issue #29796 for more information.
out = opts['output']

# Handle setting the output when --static is passed.
if not out and opts.get('static'):
if opts.get('output'):
out = opts['output']
elif opts.get('fun', '').split('.')[0] == 'state':
# --static doesn't have an output set at this point, but if we're
# running a state function and "out" hasn't already been set, we
# should set the out variable to "highstate". Otherwise state runs
# are set to "nested" below. See Issue #44556 for more information.
out = 'highstate'

if out == 'text':
out = 'txt'
elif out is None or out == '':

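The `--static` branch added above can be condensed into a small pure function to make the selection order visible: an explicit `output` option wins, otherwise state runs fall back to the highstate outputter. This is a sketch with a hypothetical name, not Salt's actual `get_printout` signature:

```python
def pick_out(out, opts):
    # Condensed form of the --static handling in the hunk above.
    if not out and opts.get('static'):
        if opts.get('output'):
            out = opts['output']
        elif opts.get('fun', '').split('.')[0] == 'state':
            # State runs default to the highstate outputter.
            out = 'highstate'
    if out == 'text':
        out = 'txt'
    return out
```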
@@ -48,7 +48,7 @@ def _load_state():
pck = open(FILENAME, 'r')  # pylint: disable=W8470
DETAILS = pickle.load(pck)
pck.close()
except IOError:
except EOFError:
DETAILS = {}
DETAILS['initialized'] = False
_save_state(DETAILS)

@@ -254,14 +254,14 @@ def returner(ret):
with _get_serv(ret, commit=True) as cur:
sql = '''INSERT INTO salt_returns
(fun, jid, return, id, success, full_ret, alter_time)
VALUES (%s, %s, %s, %s, %s, %s, %s)'''
VALUES (%s, %s, %s, %s, %s, %s, to_timestamp(%s))'''

cur.execute(sql, (ret['fun'], ret['jid'],
psycopg2.extras.Json(ret['return']),
ret['id'],
ret.get('success', False),
psycopg2.extras.Json(ret),
time.strftime('%Y-%m-%d %H:%M:%S %z', time.localtime())))
time.time()))
except salt.exceptions.SaltMasterError:
log.critical('Could not store return with pgjsonb returner. PostgreSQL server unavailable.')

@@ -278,9 +278,9 @@ def event_return(events):
tag = event.get('tag', '')
data = event.get('data', '')
sql = '''INSERT INTO salt_events (tag, data, master_id, alter_time)
VALUES (%s, %s, %s, %s)'''
VALUES (%s, %s, %s, to_timestamp(%s))'''
cur.execute(sql, (tag, psycopg2.extras.Json(data),
__opts__['id'], time.strftime('%Y-%m-%d %H:%M:%S %z', time.localtime())))
__opts__['id'], time.time()))


def save_load(jid, load, minions=None):

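The returner hunks above stop formatting `alter_time` client-side with `time.strftime` and instead bind the raw epoch float, letting PostgreSQL convert it with `to_timestamp(%s)`. A sketch of the new parameter shape, with no database involved (the `psycopg2.extras.Json` wrapping of `data` is omitted and the helper name is ours):

```python
import time

def event_insert(tag, data, master_id, now=None):
    if now is None:
        now = time.time()
    sql = ('INSERT INTO salt_events (tag, data, master_id, alter_time) '
           'VALUES (%s, %s, %s, to_timestamp(%s))')
    # The timestamp is bound as a plain float; the server does the conversion,
    # avoiding locale/timezone ambiguity in a client-formatted string.
    return sql, (tag, data, master_id, now)

sql, params = event_insert('salt/job/1/ret', '{}', 'master01', now=1500000000.0)
```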
@@ -77,7 +77,7 @@ class RosterMatcher(object):
if fnmatch.fnmatch(minion, self.tgt):
data = self.get_data(minion)
if data:
minions[minion] = data
minions[minion] = data.copy()
return minions

def ret_pcre_minions(self):
@@ -89,7 +89,7 @@ class RosterMatcher(object):
if re.match(self.tgt, minion):
data = self.get_data(minion)
if data:
minions[minion] = data
minions[minion] = data.copy()
return minions

def ret_list_minions(self):
@@ -103,7 +103,7 @@ class RosterMatcher(object):
if minion in self.tgt:
data = self.get_data(minion)
if data:
minions[minion] = data
minions[minion] = data.copy()
return minions

def ret_nodegroup_minions(self):
@@ -119,7 +119,7 @@ class RosterMatcher(object):
if minion in nodegroup:
data = self.get_data(minion)
if data:
minions[minion] = data
minions[minion] = data.copy()
return minions

def ret_range_minions(self):
@@ -136,7 +136,7 @@ class RosterMatcher(object):
if minion in range_hosts:
data = self.get_data(minion)
if data:
minions[minion] = data
minions[minion] = data.copy()
return minions

def get_data(self, minion):

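Every matcher in the roster hunks above now returns `data.copy()` rather than the cached dict itself, so callers that mutate the result no longer corrupt the shared roster data. A toy illustration of the aliasing problem being fixed (names are ours, not Salt's):

```python
roster_entry = {'host': '10.0.0.1', 'user': 'root'}

def get_data_shared():
    return roster_entry            # aliases the cached dict

def get_data_copied():
    return roster_entry.copy()     # shallow copy per caller

copied = get_data_copied()
copied['user'] = 'deploy'          # local change, cache untouched
shared = get_data_shared()
shared['user'] = 'admin'           # leaks back into the cache
```

A shallow copy is enough here only as long as callers replace top-level keys; mutating nested values would still be shared.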
@@ -37,7 +37,7 @@ log = logging.getLogger(__name__)

def _ping(tgt, tgt_type, timeout, gather_job_timeout):
client = salt.client.get_local_client(__opts__['conf_file'])
pub_data = client.run_job(tgt, 'test.ping', (), tgt_type, '', timeout, '')
pub_data = client.run_job(tgt, 'test.ping', (), tgt_type, '', timeout, '', listen=True)

if not pub_data:
return pub_data

@@ -686,7 +686,7 @@ class State(object):
except AttributeError:
pillar_enc = str(pillar_enc).lower()
self._pillar_enc = pillar_enc
if initial_pillar is not None:
if initial_pillar:
self.opts['pillar'] = initial_pillar
if self._pillar_override:
self.opts['pillar'] = salt.utils.dictupdate.merge(
@@ -1878,8 +1878,8 @@ class State(object):
sys.modules[self.states[cdata['full']].__module__].__opts__[
'test'] = test

self.state_con.pop('runas')
self.state_con.pop('runas_password')
self.state_con.pop('runas', None)
self.state_con.pop('runas_password', None)

# If format_call got any warnings, let's show them to the user
if 'warnings' in cdata:

@@ -202,7 +202,14 @@ def _check_cron(user,
return 'present'
else:
for cron in lst['special']:
if special == cron['spec'] and cmd == cron['cmd']:
if _cron_matched(cron, cmd, identifier):
if any([_needs_change(x, y) for x, y in
((cron['spec'], special),
(cron['identifier'], identifier),
(cron['cmd'], cmd),
(cron['comment'], comment),
(cron['commented'], commented))]):
return 'update'
return 'present'
return 'absent'

@@ -349,7 +356,12 @@ def present(name,
commented=commented,
identifier=identifier)
else:
data = __salt__['cron.set_special'](user, special, name)
data = __salt__['cron.set_special'](user=user,
special=special,
cmd=name,
comment=comment,
commented=commented,
identifier=identifier)
if data == 'present':
ret['comment'] = 'Cron {0} already present'.format(name)
return ret
@@ -418,7 +430,7 @@ def absent(name,
if special is None:
data = __salt__['cron.rm_job'](user, name, identifier=identifier)
else:
data = __salt__['cron.rm_special'](user, special, name)
data = __salt__['cron.rm_special'](user, name, special=special, identifier=identifier)

if data == 'absent':
ret['comment'] = "Cron {0} already absent".format(name)

@@ -758,7 +758,7 @@ def _check_directory_win(name,
changes = {}

if not os.path.isdir(name):
changes = {'directory': 'new'}
changes = {name: {'directory': 'new'}}
else:
# Check owner
owner = salt.utils.win_dacl.get_owner(name)
@@ -883,7 +883,11 @@ def _check_dir_meta(name,
'''
Check the changes in directory metadata
'''
stats = __salt__['file.stats'](name, None, follow_symlinks)
try:
stats = __salt__['file.stats'](name, None, follow_symlinks)
except CommandExecutionError:
stats = {}

changes = {}
if not stats:
changes['directory'] = 'new'
@@ -2087,6 +2091,9 @@ def managed(name,
'name': name,
'result': True}

if not name:
return _error(ret, 'Destination file name is required')

if mode is not None and salt.utils.is_windows():
return _error(ret, 'The \'mode\' option is not supported on Windows')

@@ -2237,8 +2244,6 @@ def managed(name,
ret['comment'] = 'Error while applying template on contents'
return ret

if not name:
return _error(ret, 'Must provide name to file.managed')
user = _test_owner(kwargs, user=user)
if salt.utils.is_windows():

@@ -2988,7 +2993,7 @@ def directory(name,
ret, _ = __salt__['file.check_perms'](
full, ret, user, group, dir_mode, follow_symlinks)
except CommandExecutionError as exc:
if not exc.strerror.endswith('does not exist'):
if not exc.strerror.startswith('Path not found'):
errors.append(exc.strerror)

if clean:
@@ -3836,11 +3841,11 @@ def replace(name,

If you need to match a literal string that contains regex special
characters, you may want to use salt's custom Jinja filter,
``escape_regex``.
``regex_escape``.

.. code-block:: jinja

{{ 'http://example.com?foo=bar%20baz' | escape_regex }}
{{ 'http://example.com?foo=bar%20baz' | regex_escape }}

repl
The replacement text

@@ -4,10 +4,13 @@ Manage grains on the minion
===========================

This state allows for grains to be set.
Grains set or altered this way are stored in the 'grains'
file on the minions, by default at: /etc/salt/grains

Note: This does NOT override any grains set in the minion file.
Grains set or altered with this module are stored in the 'grains'
file on the minions, By default, this file is located at: ``/etc/salt/grains``

.. Note::

This does **NOT** override any grains set in the minion config file.
'''

# Import Python libs

@@ -132,8 +132,18 @@ def wait_for_successful_query(name, wait_for=300, **kwargs):
    Like query, but repeat and wait until match/match_type or status is fulfilled. State returns result from last
    query state in case of success or if no successful query was made within wait_for timeout.

    name
        The name of the query.

    wait_for
        Total time to wait for requests that succeed.

    request_interval
        Optional interval to delay requests by N seconds to reduce the number of requests sent.

    .. note::

        All other arguments are passed to the http.query state.
    '''
    starttime = time.time()

@@ -141,7 +151,7 @@ def wait_for_successful_query(name, wait_for=300, **kwargs):
        caught_exception = None
        ret = None
        try:
            ret = query(name, wait_for=wait_for, **kwargs)
            ret = query(name, **kwargs)
            if ret['result']:
                return ret
        except Exception as exc:
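The docstring above describes repeating the `http.query` call until it succeeds or `wait_for` elapses, optionally sleeping `request_interval` seconds between attempts. A standalone sketch of that retry shape (a generic helper under assumed names, not the Salt implementation):

```python
import time

def wait_for_success(check, wait_for=300, request_interval=None):
    """Repeat check() until it reports success or the timeout elapses.

    Returns the last result either way, mirroring the documented behavior.
    """
    start = time.time()
    ret = None
    while time.time() - start < wait_for:
        ret = check()
        if ret.get('result'):
            return ret
        if request_interval:
            # Optional delay to reduce the number of requests sent
            time.sleep(request_interval)
    return ret

# Succeeds on the third attempt
attempts = {'n': 0}
def flaky():
    attempts['n'] += 1
    return {'result': attempts['n'] >= 3}

print(wait_for_success(flaky, wait_for=5))  # -> {'result': True}
```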
@@ -41,6 +41,18 @@ def present(name,
            grants:
              foo_db: read
              bar_db: all

    **Example:**

    .. code-block:: yaml

        example user present in influxdb:
          influxdb_user.present:
            - name: example
            - password: somepassword
            - admin: False
            - grants:
                foo_db: read
                bar_db: all
    '''
    create = False
    ret = {'name': name,
@@ -59,8 +59,13 @@ def __init__(opts):
    salt.utils.compat.pack_dunder(__name__)


def state_result(result, message):
    return {'result': result, 'comment': message}
def state_result(name, result, message):
    return {
        'name': name,
        'result': result,
        'changes': {},
        'comment': message
    }


def zone_present(domain, type, profile):

@@ -81,10 +86,10 @@ def zone_present(domain, type, profile):
        type = 'master'
    matching_zone = [z for z in zones if z.domain == domain]
    if len(matching_zone) > 0:
        return state_result(True, "Zone already exists")
        return state_result(domain, True, 'Zone already exists')
    else:
        result = __salt__['libcloud_dns.create_zone'](domain, profile, type)
        return state_result(result, "Created new zone")
        return state_result(domain, result, 'Created new zone')


def zone_absent(domain, profile):

@@ -100,10 +105,10 @@ def zone_absent(domain, profile):
    zones = __salt__['libcloud_dns.list_zones'](profile)
    matching_zone = [z for z in zones if z.domain == domain]
    if len(matching_zone) == 0:
        return state_result(True, "Zone already absent")
        return state_result(domain, True, 'Zone already absent')
    else:
        result = __salt__['libcloud_dns.delete_zone'](matching_zone[0].id, profile)
        return state_result(result, "Deleted zone")
        return state_result(domain, result, 'Deleted zone')


def record_present(name, zone, type, data, profile):

@@ -132,7 +137,7 @@ def record_present(name, zone, type, data, profile):
    try:
        matching_zone = [z for z in zones if z.domain == zone][0]
    except IndexError:
        return state_result(False, "Could not locate zone")
        return state_result(zone, False, 'Could not locate zone')
    records = __salt__['libcloud_dns.list_records'](matching_zone.id, profile)
    matching_records = [record for record in records
                        if record.name == name and

@@ -142,9 +147,9 @@ def record_present(name, zone, type, data, profile):
        result = __salt__['libcloud_dns.create_record'](
            name, matching_zone.id,
            type, data, profile)
        return state_result(result, "Created new record")
        return state_result(name, result, 'Created new record')
    else:
        return state_result(True, "Record already exists")
        return state_result(name, True, 'Record already exists')


def record_absent(name, zone, type, data, profile):

@@ -173,7 +178,7 @@ def record_absent(name, zone, type, data, profile):
    try:
        matching_zone = [z for z in zones if z.domain == zone][0]
    except IndexError:
        return state_result(False, "Zone could not be found")
        return state_result(zone, False, 'Zone could not be found')
    records = __salt__['libcloud_dns.list_records'](matching_zone.id, profile)
    matching_records = [record for record in records
                        if record.name == name and

@@ -186,6 +191,6 @@ def record_absent(name, zone, type, data, profile):
                matching_zone.id,
                record.id,
                profile))
        return state_result(all(result), "Removed {0} records".format(len(result)))
        return state_result(name, all(result), 'Removed {0} records'.format(len(result)))
    else:
        return state_result(True, "Records already absent")
        return state_result(name, True, 'Records already absent')
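The refactored `state_result` helper above now takes the state `name` and returns an empty `changes` dict, so every libcloud_dns state yields the full standard state return dictionary. A minimal standalone copy of the new shape:

```python
def state_result(name, result, message):
    # Standard Salt state return keys: name, result, changes, comment
    return {
        'name': name,
        'result': result,
        'changes': {},
        'comment': message
    }

print(state_result('example.com', True, 'Zone already exists'))
```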
@@ -709,7 +709,7 @@ def edited_conf(name, lxc_conf=None, lxc_conf_unset=None):
    # to keep this function around and cannot officially remove it. Progress of
    # the new function will be tracked in https://github.com/saltstack/salt/issues/35523
    salt.utils.warn_until(
        'Oxygen',
        'Fluorine',
        'This state is unsuitable for setting parameters that appear more '
        'than once in an LXC config file, or parameters which must appear in '
        'a certain order (such as when configuring more than one network '
@@ -25,6 +25,7 @@ log = logging.getLogger(__name__)

# import NAPALM utils
import salt.utils.napalm
import salt.utils.versions

# ----------------------------------------------------------------------------------------------------------------------
# state properties

@@ -133,6 +134,10 @@ def managed(name,

    To replace the config, set ``replace`` to ``True``. This option is recommended to be used with caution!

    .. warning::
        The support for NAPALM native templates will be dropped beginning with Salt Fluorine.
        Implicitly, the ``template_path`` argument will be deprecated and removed.

    template_name
        Identifies the path to the template source. The template can be stored either on the local machine
        or remotely.

@@ -320,7 +325,11 @@ def managed(name,
        }
    }
    '''

    if template_path:
        salt.utils.versions.warn_until(
            'Fluorine',
            'Use of `template_path` detected. This argument will be removed in Salt Fluorine.'
        )
    ret = salt.utils.napalm.default_ret(name)

    # the user can override the flags with the equivalent CLI args
@@ -59,6 +59,7 @@ from __future__ import absolute_import

# Import python libs
import logging
import salt.utils

log = logging.getLogger(__name__)

@@ -186,13 +187,14 @@ def present(name,
                                           use_32bit_registry=use_32bit_registry)

    if vdata == reg_current['vdata'] and reg_current['success']:
        ret['comment'] = '{0} in {1} is already configured'.\
            format(vname if vname else '(Default)', name)
        ret['comment'] = u'{0} in {1} is already configured' \
            ''.format(salt.utils.to_unicode(vname, 'utf-8') if vname else u'(Default)',
                      salt.utils.to_unicode(name, 'utf-8'))
        return ret

    add_change = {'Key': r'{0}\{1}'.format(hive, key),
                  'Entry': '{0}'.format(vname if vname else '(Default)'),
                  'Value': '{0}'.format(vdata)}
                  'Entry': u'{0}'.format(salt.utils.to_unicode(vname, 'utf-8') if vname else u'(Default)'),
                  'Value': salt.utils.to_unicode(vdata, 'utf-8')}

    # Check for test option
    if __opts__['test']:
@@ -65,7 +65,8 @@ def exists(name, index=None):
    '''
    Add the directory to the system PATH at index location

    index: where the directory should be placed in the PATH (default: None)
    index: where the directory should be placed in the PATH (default: None).
        This is 0-indexed, so 0 means to prepend at the very start of the PATH.
    [Note: Providing no index will append directory to PATH and
    will not enforce its location within the PATH.]

@@ -96,7 +97,7 @@ def exists(name, index=None):

    try:
        currIndex = sysPath.index(path)
        if index:
        if index is not None:
            index = int(index)
            if index < 0:
                index = len(sysPath) + index + 1

@@ -115,7 +116,7 @@ def exists(name, index=None):
    except ValueError:
        pass

    if not index:
    if index is None:
        index = len(sysPath)  # put it at the end
    ret['changes']['added'] = '{0} will be added at index {1}'.format(name, index)
    if __opts__['test']:
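The `index` fix above distinguishes `None` (append, don't enforce position) from `0` (prepend), and normalizes negative indexes with `len(sysPath) + index + 1` so that `-1` means the last slot. A standalone sketch of that normalization (a hypothetical helper, not the Salt function itself):

```python
def resolve_index(sys_path, index):
    # None means "append"; negative indexes count from the end (-1 == last slot)
    if index is None:
        return len(sys_path)
    index = int(index)
    if index < 0:
        index = len(sys_path) + index + 1
    return index

path = ['C:\\Windows', 'C:\\Windows\\System32']
print(resolve_index(path, None))  # -> 2 (append at the end)
print(resolve_index(path, 0))     # -> 0 (prepend)
print(resolve_index(path, -1))    # -> 2
```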
@@ -5,5 +5,6 @@
/{{route.netmask}}
{%- endif -%}
{%- if route.gateway %} via {{route.gateway}}
{%- else %} dev {{iface}}
{%- endif %}
{% endfor -%}
@@ -10,7 +10,7 @@ import salt.runner

def cmd(
        name,
        fun=None,
        func=None,
        arg=(),
        **kwargs):
    '''

@@ -22,14 +22,14 @@ def cmd(

        run_cloud:
          runner.cmd:
            - fun: cloud.create
            - func: cloud.create
            - arg:
                - my-ec2-config
                - myinstance

        run_cloud:
          runner.cmd:
            - fun: cloud.create
            - func: cloud.create
            - kwargs:
                provider: my-ec2-config
                instances: myinstance

@@ -38,11 +38,16 @@ def cmd(
           'changes': {},
           'comment': '',
           'result': True}
    if fun is None:
        fun = name
    client = salt.runner.RunnerClient(__opts__)
    low = {'fun': fun,
           'arg': arg,
           'kwargs': kwargs}
    client.cmd_async(low)
    if func is None:
        func = name
    local_opts = {}
    local_opts.update(__opts__)
    local_opts['async'] = True  # ensure this will be run async
    local_opts.update({
        'fun': func,
        'arg': arg,
        'kwarg': kwargs
    })
    runner = salt.runner.Runner(local_opts)
    runner.run()
    return ret
@@ -1143,10 +1143,10 @@ def format_call(fun,
            continue
        extra[key] = copy.deepcopy(value)

    # We'll be showing errors to the users until Salt Oxygen comes out, after
    # We'll be showing errors to the users until Salt Fluorine comes out, after
    # which, errors will be raised instead.
    warn_until(
        'Oxygen',
        'Fluorine',
        'It\'s time to start raising `SaltInvocationError` instead of '
        'returning warnings',
        # Let's not show the deprecation warning on the console, there's no

@@ -1183,7 +1183,7 @@ def format_call(fun,
            '{0}. If you were trying to pass additional data to be used '
            'in a template context, please populate \'context\' with '
            '\'key: value\' pairs. Your approach will work until Salt '
            'Oxygen is out.{1}'.format(
            'Fluorine is out.{1}'.format(
                msg,
                '' if 'full' not in ret else ' Please update your state files.'
            )
@@ -15,6 +15,9 @@ import random
import shutil
import salt.ext.six as six

# Import salt libs
import salt.utils.win_dacl


CAN_RENAME_OPEN_FILE = False
if os.name == 'nt':  # pragma: no cover

@@ -120,6 +123,12 @@ class _AtomicWFile(object):
        self._fh.close()
        if os.path.isfile(self._filename):
            shutil.copymode(self._filename, self._tmp_filename)
            if salt.utils.win_dacl.HAS_WIN32:
                owner = salt.utils.win_dacl.get_owner(self._filename)
                salt.utils.win_dacl.set_owner(self._tmp_filename, owner)
            else:
                st = os.stat(self._filename)
                os.chown(self._tmp_filename, st.st_uid, st.st_gid)
        atomic_rename(self._tmp_filename, self._filename)

    def __exit__(self, exc_type, exc_value, traceback):
@@ -39,6 +39,16 @@ HASHES = {
HASHES_REVMAP = dict([(y, x) for x, y in six.iteritems(HASHES)])


def __clean_tmp(tmp):
    '''
    Remove temporary files
    '''
    try:
        salt.utils.rm_rf(tmp)
    except Exception:
        pass


def guess_archive_type(name):
    '''
    Guess an archive type (tar, zip, or rar) by its file extension

@@ -116,7 +126,15 @@ def copyfile(source, dest, backup_mode='', cachedir=''):
        fstat = os.stat(dest)
    except OSError:
        pass
    shutil.move(tgt, dest)

    # The move could fail if the dest has xattr protections, so delete the
    # temp file in this case
    try:
        shutil.move(tgt, dest)
    except Exception:
        __clean_tmp(tgt)
        raise

    if fstat is not None:
        os.chown(dest, fstat.st_uid, fstat.st_gid)
        os.chmod(dest, fstat.st_mode)

@@ -134,10 +152,7 @@ def copyfile(source, dest, backup_mode='', cachedir=''):
            subprocess.call(cmd, stdout=dev_null, stderr=dev_null)
    if os.path.isfile(tgt):
        # The temp file failed to move
        try:
            os.remove(tgt)
        except Exception:
            pass
        __clean_tmp(tgt)


def rename(src, dst):
@@ -637,11 +637,11 @@ class SerializerExtension(Extension, object):

        .. code-block:: jinja

            escape_regex = {{ 'https://example.com?foo=bar%20baz' | escape_regex }}
            regex_escape = {{ 'https://example.com?foo=bar%20baz' | regex_escape }}

        will be rendered as::

            escape_regex = https\\:\\/\\/example\\.com\\?foo\\=bar\\%20baz
            regex_escape = https\\:\\/\\/example\\.com\\?foo\\=bar\\%20baz

        ** Set Theory Filters **
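The renamed `regex_escape` filter escapes regex metacharacters so a literal string can be embedded safely in a pattern. In plain Python the analogous operation is `re.escape` (shown here only as an illustration; it is not the filter's actual implementation, and modern Python escapes fewer characters than the rendered output above suggests):

```python
import re

# Escaping makes a literal string safe to use inside a regular expression
pattern = re.escape('example.com?foo=bar')

# The escaped '.' and '?' now match only themselves
assert re.match(pattern, 'example.com?foo=bar')
assert not re.match(pattern, 'exampleXcom?foo=bar')
```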
@@ -142,12 +142,12 @@ def get_pidfile(pidfile):
    '''
    Return the pid from a pidfile as an integer
    '''
    with salt.utils.fopen(pidfile) as pdf:
        pid = pdf.read()
    if pid:
    try:
        with salt.utils.fopen(pidfile) as pdf:
            pid = pdf.read().strip()
        return int(pid)
    else:
        return
    except (OSError, IOError, TypeError, ValueError):
        return None


def clean_proc(proc, wait_for_kill=10):
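The `get_pidfile` fix above wraps the whole read in a try/except so a missing, empty, or non-numeric pidfile returns `None` instead of raising. A minimal standalone equivalent using plain `open` (hypothetical helper name):

```python
def get_pid(pidfile):
    # Return the PID as an int, or None if the file is missing, empty,
    # or contains something that is not a number
    try:
        with open(pidfile) as pdf:
            return int(pdf.read().strip())
    except (OSError, IOError, TypeError, ValueError):
        return None

print(get_pid('/nonexistent/pidfile'))  # -> None
```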
@@ -334,6 +334,7 @@ import errno
import random
import yaml
import copy
import weakref

# Import Salt libs
import salt.config

@@ -845,6 +846,47 @@ class Schedule(object):
                    if key is not 'kwargs':
                        kwargs['__pub_{0}'.format(key)] = copy.deepcopy(val)

            # Only include these when running runner modules
            if self.opts['__role'] == 'master':
                jid = salt.utils.jid.gen_jid()
                tag = salt.utils.event.tagify(jid, prefix='salt/scheduler/')

                event = salt.utils.event.get_event(
                    self.opts['__role'],
                    self.opts['sock_dir'],
                    self.opts['transport'],
                    opts=self.opts,
                    listen=False)

                namespaced_event = salt.utils.event.NamespacedEvent(
                    event,
                    tag,
                    print_func=None
                )

                func_globals = {
                    '__jid__': jid,
                    '__user__': salt.utils.get_user(),
                    '__tag__': tag,
                    '__jid_event__': weakref.proxy(namespaced_event),
                }
                self_functions = copy.copy(self.functions)
                salt.utils.lazy.verify_fun(self_functions, func)

                # Inject some useful globals to *all* the function's global
                # namespace only once per module-- not per func
                completed_funcs = []

                for mod_name in six.iterkeys(self_functions):
                    if '.' not in mod_name:
                        continue
                    mod, _ = mod_name.split('.', 1)
                    if mod in completed_funcs:
                        continue
                    completed_funcs.append(mod)
                    for global_key, value in six.iteritems(func_globals):
                        self.functions[mod_name].__globals__[global_key] = value

            ret['return'] = self.functions[func](*args, **kwargs)

            # runners do not provide retcode

@@ -1244,8 +1286,27 @@ class Schedule(object):

                run = False
                seconds = data['_next_fire_time'] - now
                if data['_splay']:
                    seconds = data['_splay'] - now

                if 'splay' in data:
                    # Got "splay" configured, make decision to run a job based on that
                    if not data['_splay']:
                        # Try to add "splay" time only if next job fire time is
                        # still in the future. We should trigger job run
                        # immediately otherwise.
                        splay = _splay(data['splay'])
                        if now < data['_next_fire_time'] + splay:
                            log.debug('schedule.handle_func: Adding splay of '
                                      '{0} seconds to next run.'.format(splay))
                            data['_splay'] = data['_next_fire_time'] + splay
                            if 'when' in data:
                                data['_run'] = True
                        else:
                            run = True

                    if data['_splay']:
                        # The "splay" configuration has been already processed, just use it
                        seconds = data['_splay'] - now

                if seconds <= 0:
                    if '_seconds' in data:
                        run = True

@@ -1264,16 +1325,6 @@ class Schedule(object):
                            run = True
                            data['_run_on_start'] = False
                elif run:
                    if 'splay' in data and not data['_splay']:
                        splay = _splay(data['splay'])
                        if now < data['_next_fire_time'] + splay:
                            log.debug('schedule.handle_func: Adding splay of '
                                      '{0} seconds to next run.'.format(splay))
                            run = False
                            data['_splay'] = data['_next_fire_time'] + splay
                            if 'when' in data:
                                data['_run'] = True

                if 'range' in data:
                    if not _RANGE_SUPPORTED:
                        log.error('Missing python-dateutil. Ignoring job {0}'.format(job))
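The reworked splay handling above computes the splayed fire time once (`data['_splay'] = data['_next_fire_time'] + splay`) and only applies it while the base fire time is still in the future, triggering an immediate run otherwise. A simplified standalone sketch of that decision (hypothetical names, not the `Schedule` class):

```python
import random

def splayed_fire_time(next_fire_time, splay_window, now):
    """Return the fire time with a random splay added, or the original
    fire time when the splayed moment would already be in the past."""
    splay = random.randint(0, splay_window)
    if now < next_fire_time + splay:
        return next_fire_time + splay
    return next_fire_time

t = splayed_fire_time(100, 10, now=50)
assert 100 <= t <= 110
```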
@@ -536,7 +536,14 @@ def win_verify_env(path, dirs, permissive=False, pki_dir='', skip_extra=False):

    # Make sure the file_roots is not set to something unsafe since permissions
    # on that directory are reset
    if not salt.utils.path.safe_path(path=path):

    # `salt.utils.path.safe_path` will consider anything inside `C:\Windows` to
    # be unsafe. In some instances the test suite uses
    # `C:\Windows\Temp\salt-tests-tmpdir\rootdir` as the file_roots. So, we need
    # to consider anything in `C:\Windows\Temp` to be safe
    system_root = os.environ.get('SystemRoot', r'C:\Windows')
    allow_path = '\\'.join([system_root, 'TEMP'])
    if not salt.utils.path.safe_path(path=path, allow_path=allow_path):
        raise CommandExecutionError(
            '`file_roots` set to a possibly unsafe location: {0}'.format(path)
        )
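The fix above passes an `allow_path` of `%SystemRoot%\TEMP` so the test suite's `file_roots` under `C:\Windows\Temp` is accepted while the rest of `C:\Windows` stays rejected. A simplified, hypothetical version of such a prefix check (not the actual `salt.utils.path.safe_path`):

```python
import os

def is_safe_path(path, unsafe_root, allow_path=None):
    # A path is unsafe when it sits under unsafe_root, unless it is
    # explicitly under allow_path (e.g. C:\Windows\TEMP)
    def norm(p):
        return os.path.normcase(os.path.normpath(p))
    path, unsafe_root = norm(path), norm(unsafe_root)
    if allow_path is not None and path.startswith(norm(allow_path)):
        return True
    return not path.startswith(unsafe_root)

print(is_safe_path(r'C:\Windows\Temp\salt-tests-tmpdir', r'C:\Windows',
                   allow_path=r'C:\Windows\Temp'))  # -> True
print(is_safe_path(r'C:\Windows\System32', r'C:\Windows'))  # -> False
```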
@@ -197,7 +197,7 @@ def vb_get_network_adapters(machine_name=None, machine=None):
    return network_adapters


def vb_wait_for_network_address(timeout, step=None, machine_name=None, machine=None):
def vb_wait_for_network_address(timeout, step=None, machine_name=None, machine=None, wait_for_pattern=None):
    '''
    Wait until a machine has a network address to return or quit after the timeout

@@ -209,12 +209,16 @@ def vb_wait_for_network_address(timeout, step=None, machine_name=None, machine=N
    @type machine_name: str
    @param machine:
    @type machine: IMachine
    @type wait_for_pattern: str
    @param wait_for_pattern:
    @type machine: str
    @return:
    @rtype: list
    '''
    kwargs = {
        'machine_name': machine_name,
        'machine': machine
        'machine': machine,
        'wait_for_pattern': wait_for_pattern
    }
    return wait_for(vb_get_network_addresses, timeout=timeout, step=step, default=[], func_kwargs=kwargs)

@@ -251,7 +255,7 @@ def vb_wait_for_session_state(xp_session, state='Unlocked', timeout=10, step=Non
    wait_for(_check_session_state, timeout=timeout, step=step, default=False, func_args=args)


def vb_get_network_addresses(machine_name=None, machine=None):
def vb_get_network_addresses(machine_name=None, machine=None, wait_for_pattern=None):
    '''
    TODO distinguish between private and public addresses

@@ -276,21 +280,38 @@ def vb_get_network_addresses(machine_name=None, machine=None):
        machine = vb_get_box().findMachine(machine_name)

    ip_addresses = []
    # We can't trust virtualbox to give us up to date guest properties if the machine isn't running
    # For some reason it may give us outdated (cached?) values
    log.debug("checking for power on:")
    if machine.state == _virtualboxManager.constants.MachineState_Running:
        try:
            total_slots = int(machine.getGuestPropertyValue('/VirtualBox/GuestInfo/Net/Count'))
        except ValueError:
            total_slots = 0
        for i in range(total_slots):
            try:
                address = machine.getGuestPropertyValue('/VirtualBox/GuestInfo/Net/{0}/V4/IP'.format(i))
                if address:
                    ip_addresses.append(address)
            except Exception as e:
                log.debug(e.message)

        log.debug("got power on:")

        # wait on an arbitrary named property
        # for instance use a dhcp client script to set a property via VBoxControl guestproperty set dhcp_done 1
        if wait_for_pattern and not machine.getGuestPropertyValue(wait_for_pattern):
            log.debug("waiting for pattern:{}:".format(wait_for_pattern))
            return None

        _total_slots = machine.getGuestPropertyValue('/VirtualBox/GuestInfo/Net/Count')

        # upon dhcp the net count drops to 0 and it takes some seconds for it to be set again
        if not _total_slots:
            log.debug("waiting for net count:{}:".format(wait_for_pattern))
            return None

        try:
            total_slots = int(_total_slots)
            for i in range(total_slots):
                try:
                    address = machine.getGuestPropertyValue('/VirtualBox/GuestInfo/Net/{0}/V4/IP'.format(i))
                    if address:
                        ip_addresses.append(address)
                except Exception as e:
                    log.debug(e.message)
        except ValueError as e:
            log.debug(e.message)
            return None

    log.debug("returning ip_addresses:{}:".format(ip_addresses))
    return ip_addresses


@@ -339,6 +360,7 @@ def vb_create_machine(name=None):
def vb_clone_vm(
        name=None,
        clone_from=None,
        clone_mode=0,
        timeout=10000,
        **kwargs
):

@@ -370,7 +392,7 @@ def vb_clone_vm(

    progress = source_machine.cloneTo(
        new_machine,
        0,  # CloneMode
        clone_mode,  # CloneMode
        None  # CloneOptions : None = Full?
    )
@@ -805,6 +805,12 @@ class TestDaemon(object):
                os.path.join(FILES, 'pillar', 'base'),
            ]
        }
        minion_opts['pillar_roots'] = {
            'base': [
                RUNTIME_VARS.TMP_PILLAR_TREE,
                os.path.join(FILES, 'pillar', 'base'),
            ]
        }
        master_opts['file_roots'] = syndic_master_opts['file_roots'] = {
            'base': [
                os.path.join(FILES, 'file', 'base'),

@@ -818,6 +824,19 @@ class TestDaemon(object):
                RUNTIME_VARS.TMP_PRODENV_STATE_TREE
            ]
        }
        minion_opts['file_roots'] = {
            'base': [
                os.path.join(FILES, 'file', 'base'),
                # Let's support runtime created files that can be used like:
                # salt://my-temp-file.txt
                RUNTIME_VARS.TMP_STATE_TREE
            ],
            # Alternate root to test __env__ choices
            'prod': [
                os.path.join(FILES, 'file', 'prod'),
                RUNTIME_VARS.TMP_PRODENV_STATE_TREE
            ]
        }
        master_opts.setdefault('reactor', []).append(
            {
                'salt/minion/*/start': [
@@ -116,6 +116,41 @@ class EC2Test(ShellCase):
        except AssertionError:
            raise

    def test_instance_rename(self):
        '''
        Tests creating and renaming an instance on EC2 (classic)
        '''
        # create the instance
        rename = INSTANCE_NAME + '-rename'
        instance = self.run_cloud('-p ec2-test {0} --no-deploy'.format(INSTANCE_NAME), timeout=500)
        ret_str = '{0}:'.format(INSTANCE_NAME)

        # check if instance returned
        try:
            self.assertIn(ret_str, instance)
        except AssertionError:
            self.run_cloud('-d {0} --assume-yes'.format(INSTANCE_NAME), timeout=500)
            raise

        change_name = self.run_cloud('-a rename {0} newname={1} --assume-yes'.format(INSTANCE_NAME, rename), timeout=500)

        check_rename = self.run_cloud('-a show_instance {0} --assume-yes'.format(rename), [rename])
        exp_results = [' {0}:'.format(rename), ' size:',
                       ' architecture:']
        try:
            for result in exp_results:
                self.assertIn(result, check_rename[0])
        except AssertionError:
            self.run_cloud('-d {0} --assume-yes'.format(INSTANCE_NAME), timeout=500)
            raise

        # delete the instance
        delete = self.run_cloud('-d {0} --assume-yes'.format(rename), timeout=500)
        ret_str = ' shutting-down'

        # check if deletion was performed appropriately
        self.assertIn(ret_str, delete)

    def tearDown(self):
        '''
        Clean up after tests
@@ -1,3 +1,5 @@
localhost:
  host: 127.0.0.1
  port: 2827
  mine_functions:
    test.arg: ['itworked']
@@ -0,0 +1,7 @@
return_changes:
  test.succeed_with_changes:
    - watch_in:
      - test: watch_states

watch_states:
  test.succeed_without_changes
@@ -0,0 +1,7 @@
return_changes:
  test.fail_with_changes:
    - watch_in:
      - test: watch_states

watch_states:
  test.succeed_without_changes
@@ -1,4 +1,5 @@
{% set jinja = 'test' %}
ssh-file-test:
  file.managed:
    - name: /tmp/test
    - name: /tmp/{{ jinja }}
    - contents: 'test'
tests/integration/files/file/prod/non-base-env.sls
Normal file
4
tests/integration/files/file/prod/non-base-env.sls
Normal file
|
@ -0,0 +1,4 @@
|
|||
test_file:
|
||||
file.managed:
|
||||
- name: /tmp/nonbase_env
|
||||
- source: salt://nonbase_env
|
1
tests/integration/files/file/prod/nonbase_env
Normal file
1
tests/integration/files/file/prod/nonbase_env
Normal file
|
@ -0,0 +1 @@
|
|||
it worked - new environment!
|
|
@@ -6,3 +6,6 @@ base:
    - generic
    - blackout
    - sub
  'localhost':
    - generic
    - blackout
@@ -3,12 +3,23 @@

# Import python libs
from __future__ import absolute_import
import getpass
import grp
import pwd
import os
import shutil
import sys

# Posix only
try:
    import grp
    import pwd
except ImportError:
    pass

# Windows only
try:
    import win32file
except ImportError:
    pass

# Import Salt Testing libs
from tests.support.case import ModuleCase
from tests.support.unit import skipIf

@@ -18,6 +29,16 @@ from tests.support.paths import FILES, TMP
import salt.utils


def symlink(source, link_name):
    '''
    Handle symlinks on Windows with Python < 3.2
    '''
    if salt.utils.is_windows():
        win32file.CreateSymbolicLink(link_name, source)
    else:
        os.symlink(source, link_name)


class FileModuleTest(ModuleCase):
    '''
    Validate the file module

@@ -25,27 +46,27 @@ class FileModuleTest(ModuleCase):
    def setUp(self):
        self.myfile = os.path.join(TMP, 'myfile')
        with salt.utils.fopen(self.myfile, 'w+') as fp:
            fp.write('Hello\n')
            fp.write('Hello' + os.linesep)
        self.mydir = os.path.join(TMP, 'mydir/isawesome')
        if not os.path.isdir(self.mydir):
            # left behind... Don't fail because of this!
            os.makedirs(self.mydir)
        self.mysymlink = os.path.join(TMP, 'mysymlink')
        if os.path.islink(self.mysymlink):
        if os.path.islink(self.mysymlink) or os.path.isfile(self.mysymlink):
            os.remove(self.mysymlink)
        os.symlink(self.myfile, self.mysymlink)
        symlink(self.myfile, self.mysymlink)
        self.mybadsymlink = os.path.join(TMP, 'mybadsymlink')
        if os.path.islink(self.mybadsymlink):
        if os.path.islink(self.mybadsymlink) or os.path.isfile(self.mybadsymlink):
            os.remove(self.mybadsymlink)
        os.symlink('/nonexistentpath', self.mybadsymlink)
        symlink('/nonexistentpath', self.mybadsymlink)
        super(FileModuleTest, self).setUp()

    def tearDown(self):
        if os.path.isfile(self.myfile):
            os.remove(self.myfile)
        if os.path.islink(self.mysymlink):
        if os.path.islink(self.mysymlink) or os.path.isfile(self.mysymlink):
            os.remove(self.mysymlink)
        if os.path.islink(self.mybadsymlink):
        if os.path.islink(self.mybadsymlink) or os.path.isfile(self.mybadsymlink):
            os.remove(self.mybadsymlink)
        shutil.rmtree(self.mydir, ignore_errors=True)
        super(FileModuleTest, self).tearDown()

@@ -173,3 +194,20 @@ class FileModuleTest(ModuleCase):
        ret = self.run_function('file.source_list', ['file://' + self.myfile,
                                                     'filehash', 'base'])
        self.assertEqual(list(ret), ['file://' + self.myfile, 'filehash'])

    def test_file_line_changes_format(self):
        '''
        Test file.line changes output formatting.

        Issue #41474
        '''
        ret = self.minion_run('file.line', self.myfile, 'Goodbye',
                              mode='insert', after='Hello')
        self.assertIn('Hello' + os.linesep + '+Goodbye', ret)

    def test_file_line_content(self):
        self.minion_run('file.line', self.myfile, 'Goodbye',
                        mode='insert', after='Hello')
        with salt.utils.fopen(self.myfile, 'r') as fp:
            content = fp.read()
        self.assertEqual(content, 'Hello' + os.linesep + 'Goodbye' + os.linesep)
@@ -14,6 +14,7 @@ from salt.ext.six.moves import range


@skip_if_not_root
@destructiveTest
class GroupModuleTest(ModuleCase):
    '''
    Validate the linux group system module

@@ -39,7 +40,6 @@ class GroupModuleTest(ModuleCase):
            )
        )

    @destructiveTest
    def tearDown(self):
        '''
        Reset to original settings

@@ -57,33 +57,30 @@ class GroupModuleTest(ModuleCase):
            for x in range(size)
        )

    @destructiveTest
    def test_add(self):
        '''
        Test the add group function
        '''
        #add a new group
        # add a new group
        self.assertTrue(self.run_function('group.add', [self._group, self._gid]))
        group_info = self.run_function('group.info', [self._group])
        self.assertEqual(group_info['name'], self._group)
        self.assertEqual(group_info['gid'], self._gid)
        #try adding the group again
        # try adding the group again
        self.assertFalse(self.run_function('group.add', [self._group, self._gid]))

    @destructiveTest
    def test_delete(self):
        '''
        Test the delete group function
        '''
        self.assertTrue(self.run_function('group.add', [self._group]))

        #correct functionality
        # correct functionality
        self.assertTrue(self.run_function('group.delete', [self._group]))

        #group does not exist
        # group does not exist
        self.assertFalse(self.run_function('group.delete', [self._no_group]))

    @destructiveTest
    def test_info(self):
        '''
        Test the info group function

@@ -97,7 +94,6 @@ class GroupModuleTest(ModuleCase):
        self.assertEqual(group_info['gid'], self._gid)
        self.assertIn(self._user, group_info['members'])

    @destructiveTest
    def test_chgid(self):
        '''
        Test the change gid function

@@ -107,7 +103,6 @@ class GroupModuleTest(ModuleCase):
        group_info = self.run_function('group.info', [self._group])
        self.assertEqual(group_info['gid'], self._new_gid)

    @destructiveTest
    def test_adduser(self):
        '''
        Test the add user to group function

@@ -117,14 +112,13 @@ class GroupModuleTest(ModuleCase):
        self.assertTrue(self.run_function('group.adduser', [self._group, self._user]))
        group_info = self.run_function('group.info', [self._group])
        self.assertIn(self._user, group_info['members'])
        #try add a non existing user
        # try add a non existing user
        self.assertFalse(self.run_function('group.adduser', [self._group, self._no_user]))
        #try add a user to non existing group
        # try add a user to non existing group
        self.assertFalse(self.run_function('group.adduser', [self._no_group, self._user]))
        #try add a non existing user to a non existing group
        # try add a non existing user to a non existing group
        self.assertFalse(self.run_function('group.adduser', [self._no_group, self._no_user]))

    @destructiveTest
    def test_deluser(self):
        '''
        Test the delete user from group function

@@ -136,7 +130,6 @@ class GroupModuleTest(ModuleCase):
        group_info = self.run_function('group.info', [self._group])
        self.assertNotIn(self._user, group_info['members'])

    @destructiveTest
    def test_members(self):
        '''
        Test the members function

@@ -150,7 +143,6 @@ class GroupModuleTest(ModuleCase):
        self.assertIn(self._user, group_info['members'])
        self.assertIn(self._user1, group_info['members'])

    @destructiveTest
    def test_getent(self):
        '''
        Test the getent function
@@ -7,12 +7,14 @@
from __future__ import absolute_import
import random
import string
import os

# Import Salt Testing Libs
from tests.support.case import ModuleCase
from tests.support.helpers import destructiveTest, skip_if_not_root

# Import Salt Libs
import salt.utils
from salt.exceptions import CommandExecutionError

# Import 3rd-party libs
@@ -148,6 +150,86 @@ class MacUserModuleTest(ModuleCase):
            self.run_function('user.delete', [CHANGE_USER])
            raise

    def test_mac_user_enable_auto_login(self):
        '''
        Tests mac_user functions that enable auto login
        '''
        # Make sure auto login is disabled before we start
        if self.run_function('user.get_auto_login'):
            self.skipTest('Auto login already enabled')

        try:
            # Does enable return True
            self.assertTrue(
                self.run_function('user.enable_auto_login',
                                  ['Spongebob', 'Squarepants']))

            # Did it set the user entry in the plist file
            self.assertEqual(
                self.run_function('user.get_auto_login'),
                'Spongebob')

            # Did it generate the `/etc/kcpassword` file
            self.assertTrue(os.path.exists('/etc/kcpassword'))

            # Are the contents of the file correct
            test_data = b'.\xf8\'B\xa0\xd9\xad\x8b\xcd\xcdl'
            with salt.utils.fopen('/etc/kcpassword', 'rb') as f:
                file_data = f.read()
            self.assertEqual(test_data, file_data)

            # Does disable return True
            self.assertTrue(self.run_function('user.disable_auto_login'))

            # Does it remove the user entry in the plist file
            self.assertFalse(self.run_function('user.get_auto_login'))

            # Is the `/etc/kcpassword` file removed
            self.assertFalse(os.path.exists('/etc/kcpassword'))

        finally:
            # Make sure auto_login is disabled
            self.assertTrue(self.run_function('user.disable_auto_login'))

            # Make sure autologin is disabled
            if self.run_function('user.get_auto_login'):
                raise Exception('Failed to disable auto login')

    def test_mac_user_disable_auto_login(self):
        '''
        Tests mac_user functions that disable auto login
        '''
        # Make sure auto login is enabled before we start
        # Is there an existing setting
        if self.run_function('user.get_auto_login'):
            self.skipTest('Auto login already enabled')

        try:
            # Enable auto login for the test
            self.run_function('user.enable_auto_login',
                              ['Spongebob', 'Squarepants'])

            # Make sure auto login got set up
            if not self.run_function('user.get_auto_login') == 'Spongebob':
                raise Exception('Failed to enable auto login')

            # Does disable return True
            self.assertTrue(self.run_function('user.disable_auto_login'))

            # Does it remove the user entry in the plist file
            self.assertFalse(self.run_function('user.get_auto_login'))

            # Is the `/etc/kcpassword` file removed
            self.assertFalse(os.path.exists('/etc/kcpassword'))

        finally:
            # Make sure auto login is disabled
            self.assertTrue(self.run_function('user.disable_auto_login'))

            # Make sure auto login is disabled
            if self.run_function('user.get_auto_login'):
                raise Exception('Failed to disable auto login')

    def tearDown(self):
        '''
        Clean up after tests

@@ -586,6 +586,33 @@ class StateModuleTest(ModuleCase, SaltReturnAssertsMixin):
        #result = self.normalize_ret(ret)
        #self.assertEqual(expected_result, result)

    def test_watch_in(self):
        '''
        test watch_in requisite when there is a success
        '''
        ret = self.run_function('state.sls', mods='requisites.watch_in')
        changes = 'test_|-return_changes_|-return_changes_|-succeed_with_changes'
        watch = 'test_|-watch_states_|-watch_states_|-succeed_without_changes'

        self.assertEqual(ret[changes]['__run_num__'], 0)
        self.assertEqual(ret[watch]['__run_num__'], 2)

        self.assertEqual('Watch statement fired.', ret[watch]['comment'])
        self.assertEqual('Something pretended to change',
                         ret[changes]['changes']['testing']['new'])

    def test_watch_in_failure(self):
        '''
        test watch_in requisite when there is a failure
        '''
        ret = self.run_function('state.sls', mods='requisites.watch_in_failure')
        fail = 'test_|-return_changes_|-return_changes_|-fail_with_changes'
        watch = 'test_|-watch_states_|-watch_states_|-succeed_without_changes'

        self.assertEqual(False, ret[fail]['result'])
        self.assertEqual('One or more requisite failed: requisites.watch_in_failure.return_changes',
                         ret[watch]['comment'])

    def normalize_ret(self, ret):
        '''
        Normalize the return to the format that we'll use for result checking
@@ -1240,3 +1267,23 @@ class StateModuleTest(ModuleCase, SaltReturnAssertsMixin):
        self.assertIn(state_id, state_run)
        self.assertEqual(state_run[state_id]['comment'], 'Failure!')
        self.assertFalse(state_run[state_id]['result'])

    def test_state_nonbase_environment(self):
        '''
        test state.sls with saltenv using a nonbase environment
        with a salt source
        '''
        state_run = self.run_function(
            'state.sls',
            mods='non-base-env',
            saltenv='prod'
        )
        state_id = 'file_|-test_file_|-/tmp/nonbase_env_|-managed'
        self.assertEqual(state_run[state_id]['comment'], 'File /tmp/nonbase_env updated')
        self.assertTrue(state_run['file_|-test_file_|-/tmp/nonbase_env_|-managed']['result'])
        self.assertTrue(os.path.isfile('/tmp/nonbase_env'))

    def tearDown(self):
        nonbase_file = '/tmp/nonbase_env'
        if os.path.isfile(nonbase_file):
            os.remove(nonbase_file)

@@ -109,3 +109,59 @@ class OutputReturnTest(ShellCase):
            delattr(self, 'maxDiff')
        else:
            self.maxDiff = old_max_diff

    def test_output_highstate(self):
        '''
        Regression tests for the highstate outputter. Calls a basic state with various
        flags. Each comparison should be identical when successful.
        '''
        # Test basic highstate output. No frills.
        expected = ['minion:', ' ID: simple-ping', ' Function: module.run',
                    ' Name: test.ping', ' Result: True',
                    ' Comment: Module function test.ping executed',
                    ' Changes: ', ' ret:', ' True',
                    'Summary for minion', 'Succeeded: 1 (changed=1)', 'Failed: 0',
                    'Total states run: 1']
        state_run = self.run_salt('"minion" state.sls simple-ping')

        for expected_item in expected:
            self.assertIn(expected_item, state_run)

        # Test highstate output while also passing --out=highstate.
        # This is a regression test for Issue #29796
        state_run = self.run_salt('"minion" state.sls simple-ping --out=highstate')

        for expected_item in expected:
            self.assertIn(expected_item, state_run)

        # Test highstate output when passing --static and running a state function.
        # See Issue #44556.
        state_run = self.run_salt('"minion" state.sls simple-ping --static')

        for expected_item in expected:
            self.assertIn(expected_item, state_run)

        # Test highstate output when passing --static and --out=highstate.
        # See Issue #44556.
        state_run = self.run_salt('"minion" state.sls simple-ping --static --out=highstate')

        for expected_item in expected:
            self.assertIn(expected_item, state_run)

    def test_output_highstate_falls_back_nested(self):
        '''
        Tests outputter when passing --out=highstate with a non-state call. This should
        fall back to "nested" output.
        '''
        expected = ['minion:', ' True']
        ret = self.run_salt('"minion" test.ping --out=highstate')
        self.assertEqual(ret, expected)

    def test_static_simple(self):
        '''
        Tests passing the --static option with a basic test.ping command. This
        should be the "nested" output.
        '''
        expected = ['minion:', ' True']
        ret = self.run_salt('"minion" test.ping --static')
        self.assertEqual(ret, expected)

@@ -441,6 +441,17 @@ class CallTest(ShellCase, testprogram.TestProgramCase, ShellCaseCommonTestsMixin
            log.debug('salt-call output:\n\n%s', '\n'.join(ret))
            self.fail('CLI pillar override not found in pillar data')

    def test_pillar_items_masterless(self):
        '''
        Test to ensure we get expected output
        from pillar.items with salt-call
        '''
        get_items = self.run_call('pillar.items', local=True)
        exp_out = [' - Lancelot', ' - Galahad', ' - Bedevere',
                   ' monty:', ' python']
        for out in exp_out:
            self.assertIn(out, get_items)

    def tearDown(self):
        '''
        Teardown method to remove installed packages
@@ -477,6 +488,21 @@ class CallTest(ShellCase, testprogram.TestProgramCase, ShellCaseCommonTestsMixin
            stdout=stdout, stderr=stderr
        )

    def test_masterless_highstate(self):
        '''
        test state.highstate in masterless mode
        '''
        ret = self.run_call('state.highstate', local=True)

        destpath = os.path.join(TMP, 'testfile')
        exp_out = [' Function: file.managed', ' Result: True',
                   ' ID: {0}'.format(destpath)]

        for out in exp_out:
            self.assertIn(out, ret)

        self.assertTrue(os.path.exists(destpath))

    def test_exit_status_correct_usage(self):
        '''
        Ensure correct exit status when salt-call starts correctly.

@@ -5,6 +5,7 @@ from __future__ import absolute_import
import os
import shutil
import tempfile
import textwrap

# Import Salt Testing libs
from tests.support.case import ShellCase
@@ -56,6 +57,36 @@ class KeyTest(ShellCase, ShellCaseCommonTestsMixin):
            if USERA in user:
                self.run_call('user.delete {0} remove=True'.format(USERA))

    def test_remove_key(self):
        '''
        test salt-key -d usage
        '''
        min_name = 'minibar'
        pki_dir = self.master_opts['pki_dir']
        key = os.path.join(pki_dir, 'minions', min_name)

        with salt.utils.fopen(key, 'w') as fp:
            fp.write(textwrap.dedent('''\
                     -----BEGIN PUBLIC KEY-----
                     MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAoqIZDtcQtqUNs0wC7qQz
                     JwFhXAVNT5C8M8zhI+pFtF/63KoN5k1WwAqP2j3LquTG68WpxcBwLtKfd7FVA/Kr
                     OF3kXDWFnDi+HDchW2lJObgfzLckWNRFaF8SBvFM2dys3CGSgCV0S/qxnRAjrJQb
                     B3uQwtZ64ncJAlkYpArv3GwsfRJ5UUQnYPDEJwGzMskZ0pHd60WwM1gMlfYmNX5O
                     RBEjybyNpYDzpda6e6Ypsn6ePGLkP/tuwUf+q9wpbRE3ZwqERC2XRPux+HX2rGP+
                     mkzpmuHkyi2wV33A9pDfMgRHdln2CLX0KgfRGixUQhW1o+Kmfv2rq4sGwpCgLbTh
                     NwIDAQAB
                     -----END PUBLIC KEY-----
                     '''))

        check_key = self.run_key('-p {0}'.format(min_name))
        self.assertIn('Accepted Keys:', check_key)
        self.assertIn('minibar: -----BEGIN PUBLIC KEY-----', check_key)

        remove_key = self.run_key('-d {0} -y'.format(min_name))

        check_key = self.run_key('-p {0}'.format(min_name))
        self.assertEqual([], check_key)

    def test_list_accepted_args(self):
        '''
        test salt-key -l for accepted arguments

@@ -2,12 +2,13 @@

# Import python libs
from __future__ import absolute_import
import os

# Import Salt Testing libs
from tests.support.case import ShellCase
from tests.support.case import ShellCase, SPMCase


class SPMTest(ShellCase):
class SPMTest(ShellCase, SPMCase):
    '''
    Test spm script
    '''
@@ -29,3 +30,47 @@ class SPMTest(ShellCase):
        output = self.run_spm('doesnotexist')
        for arg in expected_args:
            self.assertIn(arg, ''.join(output))

    def test_spm_assume_yes(self):
        '''
        test spm install with -y arg
        '''
        config = self._spm_config(assume_yes=False)
        self._spm_build_files(config)

        spm_file = os.path.join(config['spm_build_dir'],
                                'apache-201506-2.spm')

        build = self.run_spm('build {0} -c {1}'.format(self.formula_dir,
                                                       self._tmp_spm))

        install = self.run_spm('install {0} -c {1} -y'.format(spm_file,
                                                              self._tmp_spm))

        self.assertTrue(os.path.exists(os.path.join(config['formula_path'],
                                                    'apache', 'apache.sls')))

    def test_spm_force(self):
        '''
        test spm install with -f arg
        '''
        config = self._spm_config(assume_yes=False)
        self._spm_build_files(config)

        spm_file = os.path.join(config['spm_build_dir'],
                                'apache-201506-2.spm')

        build = self.run_spm('build {0} -c {1}'.format(self.formula_dir,
                                                       self._tmp_spm))

        install = self.run_spm('install {0} -c {1} -y'.format(spm_file,
                                                              self._tmp_spm))

        self.assertTrue(os.path.exists(os.path.join(config['formula_path'],
                                                    'apache', 'apache.sls')))

        # check if it forces the install after its already been installed it
        install = self.run_spm('install {0} -c {1} -y -f'.format(spm_file,
                                                                 self._tmp_spm))

        self.assertEqual(['... installing apache'], install)

@@ -4,6 +4,8 @@ salt-ssh testing
'''
# Import Python libs
from __future__ import absolute_import
import os
import shutil

# Import salt testing libs
from tests.support.case import SSHCase
@@ -19,3 +21,21 @@ class SSHTest(SSHCase):
        '''
        ret = self.run_function('test.ping')
        self.assertTrue(ret, 'Ping did not return true')

    def test_thin_dir(self):
        '''
        test to make sure thin_dir is created
        and salt-call file is included
        '''
        thin_dir = self.run_function('config.get', ['thin_dir'], wipe=False)
        os.path.isdir(thin_dir)
        os.path.exists(os.path.join(thin_dir, 'salt-call'))
        os.path.exists(os.path.join(thin_dir, 'running_data'))

    def tearDown(self):
        '''
        make sure to clean up any old ssh directories
        '''
        salt_dir = self.run_function('config.get', ['thin_dir'], wipe=False)
        if os.path.exists(salt_dir):
            shutil.rmtree(salt_dir)

34  tests/integration/ssh/test_mine.py  Normal file
@@ -0,0 +1,34 @@
# -*- coding: utf-8 -*-

# Import Python libs
from __future__ import absolute_import
import os
import shutil

# Import Salt Testing Libs
from tests.support.case import SSHCase
from tests.support.unit import skipIf

# Import Salt Libs
import salt.utils


@skipIf(salt.utils.is_windows(), 'salt-ssh not available on Windows')
class SSHMineTest(SSHCase):
    '''
    testing salt-ssh with mine
    '''
    def test_ssh_mine_get(self):
        '''
        test salt-ssh with mine
        '''
        ret = self.run_function('mine.get', ['localhost test.arg'], wipe=False)
        self.assertEqual(ret['localhost']['args'], ['itworked'])

    def tearDown(self):
        '''
        make sure to clean up any old ssh directories
        '''
        salt_dir = self.run_function('config.get', ['thin_dir'], wipe=False)
        if os.path.exists(salt_dir):
            shutil.rmtree(salt_dir)

41  tests/integration/ssh/test_pillar.py  Normal file
@@ -0,0 +1,41 @@
# -*- coding: utf-8 -*-

# Import Python libs
from __future__ import absolute_import

# Import Salt Testing Libs
from tests.support.case import SSHCase
from tests.support.unit import skipIf

# Import Salt Libs
import salt.utils


@skipIf(salt.utils.is_windows(), 'salt-ssh not available on Windows')
class SSHPillarTest(SSHCase):
    '''
    testing pillar with salt-ssh
    '''
    def test_pillar_items(self):
        '''
        test pillar.items with salt-ssh
        '''
        ret = self.run_function('pillar.items')
        self.assertDictContainsSubset({'monty': 'python'}, ret)
        self.assertDictContainsSubset(
            {'knights': ['Lancelot', 'Galahad', 'Bedevere', 'Robin']},
            ret)

    def test_pillar_get(self):
        '''
        test pillar.get with salt-ssh
        '''
        ret = self.run_function('pillar.get', ['monty'])
        self.assertEqual(ret, 'python')

    def test_pillar_get_doesnotexist(self):
        '''
        test pillar.get when pillar does not exist with salt-ssh
        '''
        ret = self.run_function('pillar.get', ['doesnotexist'])
        self.assertEqual(ret, '')

@@ -42,6 +42,16 @@ class SSHStateTest(SSHCase):
        check_file = self.run_function('file.file_exists', ['/tmp/test'])
        self.assertTrue(check_file)

    def test_state_sls_id(self):
        '''
        test state.sls_id with salt-ssh
        '''
        ret = self.run_function('state.sls_id', ['ssh-file-test', SSH_SLS])
        self._check_dict_ret(ret=ret, val='__sls__', exp_ret=SSH_SLS)

        check_file = self.run_function('file.file_exists', ['/tmp/test'])
        self.assertTrue(check_file)

    def test_state_show_sls(self):
        '''
        test state.show_sls with salt-ssh
@@ -57,7 +67,7 @@ class SSHStateTest(SSHCase):
        test state.show_top with salt-ssh
        '''
        ret = self.run_function('state.show_top')
        self.assertEqual(ret, {u'base': [u'master_tops_test', u'core']})
        self.assertEqual(ret, {u'base': list(set([u'master_tops_test']).union([u'core']))})

    def test_state_single(self):
        '''

@@ -67,17 +67,16 @@ def _test_managed_file_mode_keep_helper(testcase, local=False):
    '''
    DRY helper function to run the same test with a local or remote path
    '''
    rel_path = 'grail/scene33'
    name = os.path.join(TMP, os.path.basename(rel_path))
    grail_fs_path = os.path.join(FILES, 'file', 'base', rel_path)
    grail = 'salt://' + rel_path if not local else grail_fs_path
    name = os.path.join(TMP, 'scene33')
    grail_fs_path = os.path.join(FILES, 'file', 'base', 'grail', 'scene33')
    grail = 'salt://grail/scene33' if not local else grail_fs_path

    # Get the current mode so that we can put the file back the way we
    # found it when we're done.
    grail_fs_mode = os.stat(grail_fs_path).st_mode
    initial_mode = 504  # 0770 octal
    new_mode_1 = 384  # 0600 octal
    new_mode_2 = 420  # 0644 octal
    grail_fs_mode = int(testcase.run_function('file.get_mode', [grail_fs_path]), 8)
    initial_mode = 0o770
    new_mode_1 = 0o600
    new_mode_2 = 0o644

    # Set the initial mode, so we can be assured that when we set the mode
    # to "keep", we're actually changing the permissions of the file to the
@@ -568,6 +567,84 @@ class FileTest(ModuleCase, SaltReturnAssertsMixin):
            if os.path.exists('/tmp/sudoers'):
                os.remove('/tmp/sudoers')

    def test_managed_local_source_with_source_hash(self):
        '''
        Make sure that we enforce the source_hash even with local files
        '''
        name = os.path.join(TMP, 'local_source_with_source_hash')
        local_path = os.path.join(FILES, 'file', 'base', 'grail', 'scene33')
        actual_hash = '567fd840bf1548edc35c48eb66cdd78bfdfcccff'
        # Reverse the actual hash
        bad_hash = actual_hash[::-1]

        def remove_file():
            try:
                os.remove(name)
            except OSError as exc:
                if exc.errno != errno.ENOENT:
                    raise

        def do_test(clean=False):
            for proto in ('file://', ''):
                source = proto + local_path
                log.debug('Trying source %s', source)
                try:
                    ret = self.run_state(
                        'file.managed',
                        name=name,
                        source=source,
                        source_hash='sha1={0}'.format(bad_hash))
                    self.assertSaltFalseReturn(ret)
                    ret = ret[next(iter(ret))]
                    # Shouldn't be any changes
                    self.assertFalse(ret['changes'])
                    # Check that we identified a hash mismatch
                    self.assertIn(
                        'does not match actual checksum', ret['comment'])

                    ret = self.run_state(
                        'file.managed',
                        name=name,
                        source=source,
                        source_hash='sha1={0}'.format(actual_hash))
                    self.assertSaltTrueReturn(ret)
                finally:
                    if clean:
                        remove_file()

        remove_file()
        log.debug('Trying with nonexistant destination file')
        do_test()
        log.debug('Trying with destination file already present')
        with salt.utils.fopen(name, 'w'):
            pass
        try:
            do_test(clean=False)
        finally:
            remove_file()

    def test_managed_local_source_does_not_exist(self):
        '''
        Make sure that we exit gracefully when a local source doesn't exist
        '''
        name = os.path.join(TMP, 'local_source_does_not_exist')
        local_path = os.path.join(FILES, 'file', 'base', 'grail', 'scene99')

        for proto in ('file://', ''):
            source = proto + local_path
            log.debug('Trying source %s', source)
            ret = self.run_state(
                'file.managed',
                name=name,
                source=source)
            self.assertSaltFalseReturn(ret)
            ret = ret[next(iter(ret))]
            # Shouldn't be any changes
            self.assertFalse(ret['changes'])
            # Check that we identified a hash mismatch
            self.assertIn(
                'does not exist', ret['comment'])

    def test_directory(self):
        '''
        file.directory

@@ -585,19 +662,29 @@ class FileTest(ModuleCase, SaltReturnAssertsMixin):
        try:
            tmp_dir = os.path.join(TMP, 'pgdata')
            sym_dir = os.path.join(TMP, 'pg_data')
            os.mkdir(tmp_dir, 0o700)
            os.symlink(tmp_dir, sym_dir)

            ret = self.run_state(
                'file.directory', test=True, name=sym_dir, follow_symlinks=True,
                mode=700
            )
            if IS_WINDOWS:
                self.run_function('file.mkdir', [tmp_dir, 'Administrators'])
            else:
                os.mkdir(tmp_dir, 0o700)

            self.run_function('file.symlink', [tmp_dir, sym_dir])

            if IS_WINDOWS:
                ret = self.run_state(
                    'file.directory', test=True, name=sym_dir,
                    follow_symlinks=True, win_owner='Administrators')
            else:
                ret = self.run_state(
                    'file.directory', test=True, name=sym_dir,
                    follow_symlinks=True, mode=700)

            self.assertSaltTrueReturn(ret)
        finally:
            if os.path.isdir(tmp_dir):
                shutil.rmtree(tmp_dir)
                self.run_function('file.remove', [tmp_dir])
            if os.path.islink(sym_dir):
                os.unlink(sym_dir)
                self.run_function('file.remove', [sym_dir])

    @skip_if_not_root
    @skipIf(IS_WINDOWS, 'Mode not available in Windows')

@@ -1592,25 +1679,24 @@ class FileTest(ModuleCase, SaltReturnAssertsMixin):
        '''
        fname = 'append_issue_1864_makedirs'
        name = os.path.join(TMP, fname)
        try:
            self.assertFalse(os.path.exists(name))
        except AssertionError:
            os.remove(name)

        # Make sure the file is not there to begin with
        if os.path.isfile(name):
            self.run_function('file.remove', [name])

        try:
            # Non existing file get's touched
            if os.path.isfile(name):
                # left over
                os.remove(name)
            ret = self.run_state(
                'file.append', name=name, text='cheese', makedirs=True
            )
            self.assertSaltTrueReturn(ret)
        finally:
            if os.path.isfile(name):
                os.remove(name)
                self.run_function('file.remove', [name])

        # Nested directory and file get's touched
        name = os.path.join(TMP, 'issue_1864', fname)

        try:
            ret = self.run_state(
                'file.append', name=name, text='cheese', makedirs=True
@@ -1618,20 +1704,17 @@ class FileTest(ModuleCase, SaltReturnAssertsMixin):
            self.assertSaltTrueReturn(ret)
        finally:
            if os.path.isfile(name):
                os.remove(name)
                self.run_function('file.remove', [name])

        # Parent directory exists but file does not and makedirs is False
        try:
            # Parent directory exists but file does not and makedirs is False
            ret = self.run_state(
                'file.append', name=name, text='cheese'
            )
            self.assertSaltTrueReturn(ret)
            self.assertTrue(os.path.isfile(name))
        finally:
            shutil.rmtree(
                os.path.join(TMP, 'issue_1864'),
                ignore_errors=True
            )
            self.run_function('file.remove', [os.path.join(TMP, 'issue_1864')])

    def test_prepend_issue_27401_makedirs(self):
        '''

@@ -1966,19 +2049,21 @@ class FileTest(ModuleCase, SaltReturnAssertsMixin):
            ret = self.run_function('state.sls', mods='issue-8343')
            for name, step in six.iteritems(ret):
                self.assertSaltTrueReturn({name: step})

            with salt.utils.fopen(testcase_filedest) as fp_:
                contents = fp_.read().split(os.linesep)
            self.assertEqual(
                ['#-- start salt managed zonestart -- PLEASE, DO NOT EDIT',
                 'foo',
                 '#-- end salt managed zonestart --',
                 '#',
                 '#-- start salt managed zoneend -- PLEASE, DO NOT EDIT',
                 'bar',
                 '#-- end salt managed zoneend --',
                 ''],
                contents
            )

            expected = [
                '#-- start salt managed zonestart -- PLEASE, DO NOT EDIT',
                'foo',
                '#-- end salt managed zonestart --',
                '#',
                '#-- start salt managed zoneend -- PLEASE, DO NOT EDIT',
                'bar',
                '#-- end salt managed zoneend --',
                '']

            self.assertEqual(expected, contents)
        finally:
            if os.path.isdir(testcase_filedest):
                os.unlink(testcase_filedest)

@@ -210,8 +210,10 @@ class ShellTestCase(TestCase, AdaptedConfigurationTestCaseMixin):
        arg_str = '--config-dir {0} {1}'.format(self.get_config_dir(), arg_str)
        return self.run_script('salt-cp', arg_str, with_retcode=with_retcode, catch_stderr=catch_stderr)

    def run_call(self, arg_str, with_retcode=False, catch_stderr=False):
        arg_str = '--config-dir {0} {1}'.format(self.get_config_dir(), arg_str)
    def run_call(self, arg_str, with_retcode=False, catch_stderr=False, local=False):
        arg_str = '{0} --config-dir {1} {2}'.format('--local' if local else '',
                                                    self.get_config_dir(), arg_str)

        return self.run_script('salt-call', arg_str, with_retcode=with_retcode, catch_stderr=catch_stderr)

    def run_cloud(self, arg_str, catch_stderr=False, timeout=None):
@@ -549,11 +551,12 @@ class ShellCase(ShellTestCase, AdaptedConfigurationTestCaseMixin, ScriptPathMixi
                               catch_stderr=catch_stderr,
                               timeout=60)

    def run_call(self, arg_str, with_retcode=False, catch_stderr=False):
    def run_call(self, arg_str, with_retcode=False, catch_stderr=False, local=False):
        '''
        Execute salt-call.
        '''
        arg_str = '--config-dir {0} {1}'.format(self.get_config_dir(), arg_str)
        arg_str = '{0} --config-dir {1} {2}'.format('--local' if local else '',
                                                    self.get_config_dir(), arg_str)
        return self.run_script('salt-call',
                               arg_str,
                               with_retcode=with_retcode,
@@ -625,7 +628,7 @@ class SPMCase(TestCase, AdaptedConfigurationTestCaseMixin):
                description: Formula for installing Apache
            '''))

    def _spm_config(self):
    def _spm_config(self, assume_yes=True):
        self._tmp_spm = tempfile.mkdtemp()
        config = self.get_temp_config('minion', **{
            'spm_logfile': os.path.join(self._tmp_spm, 'log'),
@@ -638,10 +641,10 @@ class SPMCase(TestCase, AdaptedConfigurationTestCaseMixin):
            'spm_db': os.path.join(self._tmp_spm, 'packages.db'),
            'extension_modules': os.path.join(self._tmp_spm, 'modules'),
            'file_roots': {'base': [self._tmp_spm, ]},
            'formula_path': os.path.join(self._tmp_spm, 'spm'),
            'formula_path': os.path.join(self._tmp_spm, 'salt'),
            'pillar_path': os.path.join(self._tmp_spm, 'pillar'),
            'reactor_path': os.path.join(self._tmp_spm, 'reactor'),
            'assume_yes': True,
            'assume_yes': True if assume_yes else False,
            'force': False,
            'verbose': False,
            'cache': 'localfs',
@@ -649,6 +652,16 @@ class SPMCase(TestCase, AdaptedConfigurationTestCaseMixin):
            'spm_repo_dups': 'ignore',
            'spm_share_dir': os.path.join(self._tmp_spm, 'share'),
        })

        import salt.utils
        import yaml

        if not os.path.isdir(config['formula_path']):
            os.makedirs(config['formula_path'])

        with salt.utils.fopen(os.path.join(self._tmp_spm, 'spm'), 'w') as fp:
            fp.write(yaml.dump(config))

        return config

    def _spm_create_update_repo(self, config):

69
tests/unit/daemons/test_masterapi.py
Normal file
69
tests/unit/daemons/test_masterapi.py
Normal file
|
@ -0,0 +1,69 @@
|
# -*- coding: utf-8 -*-

# Import Python libs
from __future__ import absolute_import

# Import Salt libs
import salt.config
import salt.daemons.masterapi as masterapi

# Import Salt Testing Libs
from tests.support.unit import TestCase
from tests.support.mock import (
    patch,
    MagicMock,
)


class FakeCache(object):

    def __init__(self):
        self.data = {}

    def store(self, bank, key, value):
        self.data[bank, key] = value

    def fetch(self, bank, key):
        return self.data[bank, key]


class RemoteFuncsTestCase(TestCase):
    '''
    TestCase for salt.daemons.masterapi.RemoteFuncs class
    '''

    def setUp(self):
        opts = salt.config.master_config(None)
        self.funcs = masterapi.RemoteFuncs(opts)
        self.funcs.cache = FakeCache()

    def test_mine_get(self, tgt_type_key='tgt_type'):
        '''
        Asserts that ``mine_get`` gives the expected results.

        Actually this only tests that:

        - the correct check minions method is called
        - the correct cache key is subsequently used
        '''
        self.funcs.cache.store('minions/webserver', 'mine',
                               dict(ip_addr='2001:db8::1:3'))
        with patch('salt.utils.minions.CkMinions._check_compound_minions',
                   MagicMock(return_value=['webserver'])):
            ret = self.funcs._mine_get(
                {
                    'id': 'requester_minion',
                    'tgt': 'G@roles:web',
                    'fun': 'ip_addr',
                    tgt_type_key: 'compound',
                }
            )
        self.assertDictEqual(ret, dict(webserver='2001:db8::1:3'))

    def test_mine_get_pre_nitrogen_compat(self):
        '''
        Asserts that pre-Nitrogen API key ``expr_form`` is still accepted.

        This is what minions before Nitrogen would issue.
        '''
        self.test_mine_get(tgt_type_key='expr_form')
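The `FakeCache` stub above relies on Python's tuple subscription: `data[bank, key]` is shorthand for `data[(bank, key)]`, which lets a flat dict emulate Salt's two-level cache banks. A minimal standalone sketch of the same pattern (independent of Salt):

```python
class FakeCache(object):
    '''A dict keyed by (bank, key) tuples stands in for a cache interface.'''

    def __init__(self):
        self.data = {}

    def store(self, bank, key, value):
        # data[bank, key] is the same as data[(bank, key)]
        self.data[bank, key] = value

    def fetch(self, bank, key):
        return self.data[bank, key]


cache = FakeCache()
cache.store('minions/webserver', 'mine', {'ip_addr': '2001:db8::1:3'})
assert cache.fetch('minions/webserver', 'mine') == {'ip_addr': '2001:db8::1:3'}
```

Because the fake exposes only `store`/`fetch`, any test that assigns it to `self.funcs.cache` verifies the cache key actually used without touching the filesystem-backed cache.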
@@ -5,6 +5,7 @@

# Import Python libs
from __future__ import absolute_import
import logging
import os

# Import Salt Testing Libs
@@ -25,6 +26,18 @@ import salt.grains.core as core

# Import 3rd-party libs
import salt.ext.six as six

log = logging.getLogger(__name__)

# Globals
IPv4Address = salt.ext.ipaddress.IPv4Address
IPv6Address = salt.ext.ipaddress.IPv6Address
IP4_LOCAL = '127.0.0.1'
IP4_ADD1 = '10.0.0.1'
IP4_ADD2 = '10.0.0.2'
IP6_LOCAL = '::1'
IP6_ADD1 = '2001:4860:4860::8844'
IP6_ADD2 = '2001:4860:4860::8888'


@skipIf(NO_MOCK, NO_MOCK_REASON)
class CoreGrainsTestCase(TestCase, LoaderModuleMockMixin):
@@ -462,3 +475,147 @@ PATCHLEVEL = 3
        self.assertEqual(os_grains.get('osrelease'), os_release_map['osrelease'])
        self.assertListEqual(list(os_grains.get('osrelease_info')), os_release_map['osrelease_info'])
        self.assertEqual(os_grains.get('osmajorrelease'), os_release_map['osmajorrelease'])

    def test_docker_virtual(self):
        '''
        Test if OS grains are parsed correctly in Ubuntu Xenial Xerus
        '''
        with patch.object(os.path, 'isdir', MagicMock(return_value=False)):
            with patch.object(os.path,
                              'isfile',
                              MagicMock(side_effect=lambda x: True if x == '/proc/1/cgroup' else False)):
                for cgroup_substr in (':/system.slice/docker', ':/docker/',
                                      ':/docker-ce/'):
                    cgroup_data = \
                        '10:memory{0}a_long_sha256sum'.format(cgroup_substr)
                    log.debug(
                        'Testing Docker cgroup substring \'%s\'', cgroup_substr)
                    with patch('salt.utils.fopen', mock_open(read_data=cgroup_data)):
                        self.assertEqual(
                            core._virtual({'kernel': 'Linux'}).get('virtual_subtype'),
                            'Docker'
                        )

    def _check_ipaddress(self, value, ip_v):
        '''
        check if ip address in a list is valid
        '''
        for val in value:
            assert isinstance(val, six.string_types)
            ip_method = 'is_ipv{0}'.format(ip_v)
            self.assertTrue(getattr(salt.utils.network, ip_method)(val))

    def _check_empty(self, key, value, empty):
        '''
        if empty is False and value does not exist assert error
        if empty is True and value exists assert error
        '''
        if not empty and not value:
            raise Exception("{0} is empty, expecting a value".format(key))
        elif empty and value:
            raise Exception("{0} is suppose to be empty. value: {1} \
exists".format(key, value))

    @skipIf(not salt.utils.is_linux(), 'System is not Linux')
    def test_fqdn_return(self):
        '''
        test ip4 and ip6 return values
        '''
        net_ip4_mock = [IP4_LOCAL, IP4_ADD1, IP4_ADD2]
        net_ip6_mock = [IP6_LOCAL, IP6_ADD1, IP6_ADD2]

        self._run_fqdn_tests(net_ip4_mock, net_ip6_mock,
                             ip4_empty=False, ip6_empty=False)

    @skipIf(not salt.utils.is_linux(), 'System is not Linux')
    def test_fqdn6_empty(self):
        '''
        test when ip6 is empty
        '''
        net_ip4_mock = [IP4_LOCAL, IP4_ADD1, IP4_ADD2]
        net_ip6_mock = []

        self._run_fqdn_tests(net_ip4_mock, net_ip6_mock,
                             ip4_empty=False)

    @skipIf(not salt.utils.is_linux(), 'System is not Linux')
    def test_fqdn4_empty(self):
        '''
        test when ip4 is empty
        '''
        net_ip4_mock = []
        net_ip6_mock = [IP6_LOCAL, IP6_ADD1, IP6_ADD2]

        self._run_fqdn_tests(net_ip4_mock, net_ip6_mock,
                             ip6_empty=False)

    @skipIf(not salt.utils.is_linux(), 'System is not Linux')
    def test_fqdn_all_empty(self):
        '''
        test when both ip4 and ip6 are empty
        '''
        net_ip4_mock = []
        net_ip6_mock = []

        self._run_fqdn_tests(net_ip4_mock, net_ip6_mock)

    def _run_fqdn_tests(self, net_ip4_mock, net_ip6_mock,
                        ip6_empty=True, ip4_empty=True):

        def _check_type(key, value, ip4_empty, ip6_empty):
            '''
            check type and other checks
            '''
            assert isinstance(value, list)

            if '4' in key:
                self._check_empty(key, value, ip4_empty)
                self._check_ipaddress(value, ip_v='4')
            elif '6' in key:
                self._check_empty(key, value, ip6_empty)
                self._check_ipaddress(value, ip_v='6')

        ip4_mock = [(2, 1, 6, '', (IP4_ADD1, 0)),
                    (2, 3, 0, '', (IP4_ADD2, 0))]
        ip6_mock = [(10, 1, 6, '', (IP6_ADD1, 0, 0, 0)),
                    (10, 3, 0, '', (IP6_ADD2, 0, 0, 0))]

        with patch.dict(core.__opts__, {'ipv6': False}):
            with patch.object(salt.utils.network, 'ip_addrs',
                              MagicMock(return_value=net_ip4_mock)):
                with patch.object(salt.utils.network, 'ip_addrs6',
                                  MagicMock(return_value=net_ip6_mock)):
                    with patch.object(core.socket, 'getaddrinfo', side_effect=[ip4_mock, ip6_mock]):
                        get_fqdn = core.ip_fqdn()
                        ret_keys = ['fqdn_ip4', 'fqdn_ip6', 'ipv4', 'ipv6']
                        for key in ret_keys:
                            value = get_fqdn[key]
                            _check_type(key, value, ip4_empty, ip6_empty)

    @skipIf(not salt.utils.is_linux(), 'System is not Linux')
    def test_dns_return(self):
        '''
        test the return for a dns grain. test for issue:
        https://github.com/saltstack/salt/issues/41230
        '''
        resolv_mock = {'domain': '', 'sortlist': [], 'nameservers':
                       [IPv4Address(IP4_ADD1),
                        IPv6Address(IP6_ADD1)], 'ip4_nameservers':
                       [IPv4Address(IP4_ADD1)],
                       'search': ['test.saltstack.com'], 'ip6_nameservers':
                       [IPv6Address(IP6_ADD1)], 'options': []}
        ret = {'dns': {'domain': '', 'sortlist': [], 'nameservers':
                       [IP4_ADD1, IP6_ADD1], 'ip4_nameservers':
                       [IP4_ADD1], 'search': ['test.saltstack.com'],
                       'ip6_nameservers': [IP6_ADD1], 'options':
                       []}}
        self._run_dns_test(resolv_mock, ret)

    def _run_dns_test(self, resolv_mock, ret):
        with patch.object(salt.utils, 'is_windows',
                          MagicMock(return_value=False)):
            with patch.dict(core.__opts__, {'ipv6': False}):
                with patch.object(salt.utils.dns, 'parse_resolv',
                                  MagicMock(return_value=resolv_mock)):
                    get_dns = core.dns()
                    self.assertEqual(get_dns, ret)
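The `ip4_mock`/`ip6_mock` fixtures fed to the patched `getaddrinfo` above follow the shape of real `socket.getaddrinfo()` results: a list of `(family, type, proto, canonname, sockaddr)` 5-tuples. A sketch unpacking one such entry (the values are illustrative, not from a real lookup):

```python
import socket

# One getaddrinfo()-style result entry, as mimicked by ip4_mock above.
# family=2 is AF_INET, type=1 is SOCK_STREAM, proto=6 is TCP;
# the IPv4 sockaddr is an (address, port) pair.
entry = (2, 1, 6, '', ('10.0.0.1', 0))
family, socktype, proto, canonname, sockaddr = entry

assert family == socket.AF_INET
assert socktype == socket.SOCK_STREAM
assert sockaddr[0] == '10.0.0.1'
```

The IPv6 entries in `ip6_mock` use family 10 (`AF_INET6` on Linux) and a 4-tuple sockaddr `(address, port, flowinfo, scope_id)`, which is why those fixtures carry two extra zeros.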
@@ -8,12 +8,11 @@ from __future__ import absolute_import

# Import Salt Testing libs
from tests.support.mixins import LoaderModuleMockMixin
from tests.support.unit import skipIf, TestCase
from tests.support.unit import TestCase
from tests.support.mock import MagicMock, patch

# Import Salt libs
import salt.modules.disk as disk
import salt.utils

STUB_DISK_USAGE = {
    '/': {'filesystem': None, '1K-blocks': 10000, 'used': 10000, 'available': 10000, 'capacity': 10000},

@@ -141,15 +140,14 @@ class DiskTestCase(TestCase, LoaderModuleMockMixin):
        self.assertEqual(len(args[0].split()), 6)
        self.assertEqual(kwargs, {'python_shell': False})

    @skipIf(not salt.utils.which('sync'), 'sync not found')
    @skipIf(not salt.utils.which('mkfs'), 'mkfs not found')
    def test_format(self):
        '''
        unit tests for disk.format
        '''
        device = '/dev/sdX1'
        mock = MagicMock(return_value=0)
        with patch.dict(disk.__salt__, {'cmd.retcode': mock}):
        with patch.dict(disk.__salt__, {'cmd.retcode': mock}),\
                patch('salt.utils.which', MagicMock(return_value=True)):
            self.assertEqual(disk.format_(device), True)

    def test_fstype(self):

@@ -159,17 +157,18 @@ class DiskTestCase(TestCase, LoaderModuleMockMixin):
        device = '/dev/sdX1'
        fs_type = 'ext4'
        mock = MagicMock(return_value='FSTYPE\n{0}'.format(fs_type))
        with patch.dict(disk.__grains__, {'kernel': 'Linux'}):
            with patch.dict(disk.__salt__, {'cmd.run': mock}):
                self.assertEqual(disk.fstype(device), fs_type)
        with patch.dict(disk.__grains__, {'kernel': 'Linux'}), \
                patch.dict(disk.__salt__, {'cmd.run': mock}), \
                patch('salt.utils.which', MagicMock(return_value=True)):
            self.assertEqual(disk.fstype(device), fs_type)

    @skipIf(not salt.utils.which('resize2fs'), 'resize2fs not found')
    def test_resize2fs(self):
        '''
        unit tests for disk.resize2fs
        '''
        device = '/dev/sdX1'
        mock = MagicMock()
        with patch.dict(disk.__salt__, {'cmd.run_all': mock}):
        with patch.dict(disk.__salt__, {'cmd.run_all': mock}), \
                patch('salt.utils.which', MagicMock(return_value=True)):
            disk.resize2fs(device)
            mock.assert_called_once_with('resize2fs {0}'.format(device), python_shell=False)
@@ -63,6 +63,22 @@ class LinuxAclTestCase(TestCase, LoaderModuleMockMixin):
        linux_acl.getfacl(*self.files, recursive=True)
        self.cmdrun.assert_called_once_with('getfacl --absolute-names -R ' + ' '.join(self.quoted_files), python_shell=False)

    def test_getfacl__effective_acls(self):
        line = 'group:webmaster:r-x #effective:---'
        user = 'root'
        group = 'root'
        expected = {
            'type': 'acl',
            'group': 'webmaster',
            'permissions': {
                'read': False,
                'write': False,
                'execute': False
            },
            'octal': 0,
        }
        self.assertEqual(linux_acl._parse_acl(line, user, group), expected)

    def test_wipefacls_wo_args(self):
        self.assertRaises(CommandExecutionError, linux_acl.wipefacls)
383  tests/unit/modules/test_napalm_network.py  Normal file
@@ -0,0 +1,383 @@
# -*- coding: utf-8 -*-
'''
:codeauthor: :email:`Anthony Shaw <anthonyshaw@apache.org>`
'''

# Import Python Libs
from __future__ import absolute_import

from functools import wraps

# Import Salt Testing Libs
from tests.support.mixins import LoaderModuleMockMixin
from tests.support.unit import TestCase, skipIf
from tests.support.mock import (
    MagicMock,
    NO_MOCK,
    NO_MOCK_REASON
)


# Test data
TEST_FACTS = {
    '__opts__': {},
    'OPTIONAL_ARGS': {},
    'uptime': 'Forever',
    'UP': True,
    'HOSTNAME': 'test-device.com'
}

TEST_ENVIRONMENT = {
    'hot': 'yes'
}

TEST_COMMAND_RESPONSE = {
    'show run': 'all the command output'
}

TEST_TRACEROUTE_RESPONSE = {
    'success': {
        1: {
            'probes': {
                1: {
                    'rtt': 1.123,
                    'ip_address': u'206.223.116.21',
                    'host_name': u'eqixsj-google-gige.google.com'
                }
            }
        }
    }
}

TEST_PING_RESPONSE = {
    'success': {
        'probes_sent': 5,
        'packet_loss': 0,
        'rtt_min': 72.158,
        'rtt_max': 72.433,
        'rtt_avg': 72.268,
        'rtt_stddev': 0.094,
        'results': [
            {
                'ip_address': '1.1.1.1',
                'rtt': 72.248
            }
        ]
    }
}

TEST_ARP_TABLE = [
    {
        'interface': 'MgmtEth0/RSP0/CPU0/0',
        'mac': '5C:5E:AB:DA:3C:F0',
        'ip': '172.17.17.1',
        'age': 1454496274.84
    }
]

TEST_IPADDRS = {
    'FastEthernet8': {
        'ipv4': {
            '10.66.43.169': {
                'prefix_length': 22
            }
        }
    }
}

TEST_INTERFACES = {
    'Management1': {
        'is_up': False,
        'is_enabled': False,
        'description': u'',
        'last_flapped': -1,
        'speed': 1000,
        'mac_address': u'dead:beef:dead',
    }
}

TEST_LLDP_NEIGHBORS = {
    u'Ethernet2': [
        {
            'hostname': u'junos-unittest',
            'port': u'520',
        }
    ]
}

TEST_MAC_TABLE = [
    {
        'mac': '00:1C:58:29:4A:71',
        'interface': 'Ethernet47',
        'vlan': 100,
        'static': False,
        'active': True,
        'moves': 1,
        'last_move': 1454417742.58
    }
]

TEST_RUNNING_CONFIG = {
    'one': 'two'
}

TEST_OPTICS = {
    'et1': {
        'physical_channels': {
            'channel': [
                {
                    'index': 0,
                    'state': {
                        'input_power': {
                            'instant': 0.0,
                            'avg': 0.0,
                            'min': 0.0,
                            'max': 0.0,
                        },
                        'output_power': {
                            'instant': 0.0,
                            'avg': 0.0,
                            'min': 0.0,
                            'max': 0.0,
                        },
                        'laser_bias_current': {
                            'instant': 0.0,
                            'avg': 0.0,
                            'min': 0.0,
                            'max': 0.0,
                        },
                    }
                }
            ]
        }
    }
}


class MockNapalmDevice(object):
    '''Setup a mock device for our tests'''
    def get_facts(self):
        return TEST_FACTS

    def get_environment(self):
        return TEST_ENVIRONMENT

    def get_arp_table(self):
        return TEST_ARP_TABLE

    def get(self, key, default=None, *args, **kwargs):
        try:
            if key == 'DRIVER':
                return self
            return TEST_FACTS[key]
        except KeyError:
            return default

    def cli(self, commands, *args, **kwargs):
        assert commands[0] == 'show run'
        return TEST_COMMAND_RESPONSE

    def traceroute(self, destination, **kwargs):
        assert destination == 'destination.com'
        return TEST_TRACEROUTE_RESPONSE

    def ping(self, destination, **kwargs):
        assert destination == 'destination.com'
        return TEST_PING_RESPONSE

    def get_config(self, retrieve='all'):
        assert retrieve == 'running'
        return TEST_RUNNING_CONFIG

    def get_interfaces_ip(self, **kwargs):
        return TEST_IPADDRS

    def get_interfaces(self, **kwargs):
        return TEST_INTERFACES

    def get_lldp_neighbors_detail(self, **kwargs):
        return TEST_LLDP_NEIGHBORS

    def get_mac_address_table(self, **kwargs):
        return TEST_MAC_TABLE

    def get_optics(self, **kwargs):
        return TEST_OPTICS

    def load_merge_candidate(self, filename=None, config=None):
        assert config == 'new config'
        return TEST_RUNNING_CONFIG

    def load_replace_candidate(self, filename=None, config=None):
        assert config == 'new config'
        return TEST_RUNNING_CONFIG

    def commit_config(self, **kwargs):
        return TEST_RUNNING_CONFIG

    def discard_config(self, **kwargs):
        return TEST_RUNNING_CONFIG

    def compare_config(self, **kwargs):
        return TEST_RUNNING_CONFIG

    def rollback(self, **kwargs):
        return TEST_RUNNING_CONFIG


def mock_proxy_napalm_wrap(func):
    '''
    The proper decorator checks for proxy minions. We don't care
    so just pass back to the origination function
    '''

    @wraps(func)
    def func_wrapper(*args, **kwargs):
        func.__globals__['napalm_device'] = MockNapalmDevice()
        return func(*args, **kwargs)
    return func_wrapper


import salt.utils.napalm as napalm_utils  # NOQA
napalm_utils.proxy_napalm_wrap = mock_proxy_napalm_wrap  # pylint: disable=E9502

import salt.modules.napalm_network as napalm_network  # NOQA


def true(name):
    assert name == 'set_ntp_peers'
    return True


def random_hash(source, method):
    return 12346789


def join(*files):
    return True


def get_managed_file(*args, **kwargs):
    return 'True'


@skipIf(NO_MOCK, NO_MOCK_REASON)
class NapalmNetworkModuleTestCase(TestCase, LoaderModuleMockMixin):

    def setup_loader_modules(self):
        module_globals = {
            '__salt__': {
                'config.option': MagicMock(return_value={
                    'test': {
                        'driver': 'test',
                        'key': '2orgk34kgk34g'
                    }
                }),
                'file.file_exists': true,
                'file.join': join,
                'file.get_managed': get_managed_file,
                'random.hash': random_hash
            }
        }

        return {napalm_network: module_globals}

    def test_connected_pass(self):
        ret = napalm_network.connected()
        assert ret['out'] is True

    def test_facts(self):
        ret = napalm_network.facts()
        assert ret['out'] == TEST_FACTS

    def test_environment(self):
        ret = napalm_network.environment()
        assert ret['out'] == TEST_ENVIRONMENT

    def test_cli_single_command(self):
        '''
        Test that CLI works with 1 arg
        '''
        ret = napalm_network.cli("show run")
        assert ret['out'] == TEST_COMMAND_RESPONSE

    def test_cli_multi_command(self):
        '''
        Test that CLI works with 2 arg
        '''
        ret = napalm_network.cli("show run", "show run")
        assert ret['out'] == TEST_COMMAND_RESPONSE

    def test_traceroute(self):
        ret = napalm_network.traceroute('destination.com')
        assert list(ret['out'].keys())[0] == 'success'

    def test_ping(self):
        ret = napalm_network.ping('destination.com')
        assert list(ret['out'].keys())[0] == 'success'

    def test_arp(self):
        ret = napalm_network.arp()
        assert ret['out'] == TEST_ARP_TABLE

    def test_ipaddrs(self):
        ret = napalm_network.ipaddrs()
        assert ret['out'] == TEST_IPADDRS

    def test_interfaces(self):
        ret = napalm_network.interfaces()
        assert ret['out'] == TEST_INTERFACES

    def test_lldp(self):
        ret = napalm_network.lldp()
        assert ret['out'] == TEST_LLDP_NEIGHBORS

    def test_mac(self):
        ret = napalm_network.mac()
        assert ret['out'] == TEST_MAC_TABLE

    def test_config(self):
        ret = napalm_network.config('running')
        assert ret['out'] == TEST_RUNNING_CONFIG

    def test_optics(self):
        ret = napalm_network.optics()
        assert ret['out'] == TEST_OPTICS

    def test_load_config(self):
        ret = napalm_network.load_config(text='new config')
        assert ret['result']

    def test_load_config_replace(self):
        ret = napalm_network.load_config(text='new config', replace=True)
        assert ret['result']

    def test_load_template(self):
        ret = napalm_network.load_template('set_ntp_peers',
                                           peers=['192.168.0.1'])
        assert ret['out'] is None

    def test_commit(self):
        ret = napalm_network.commit()
        assert ret['out'] == TEST_RUNNING_CONFIG

    def test_discard_config(self):
        ret = napalm_network.discard_config()
        assert ret['out'] == TEST_RUNNING_CONFIG

    def test_compare_config(self):
        ret = napalm_network.compare_config()
        assert ret['out'] == TEST_RUNNING_CONFIG

    def test_rollback(self):
        ret = napalm_network.rollback()
        assert ret['out'] == TEST_RUNNING_CONFIG

    def test_config_changed(self):
        ret = napalm_network.config_changed()
        assert ret == (True, '')

    def test_config_control(self):
        ret = napalm_network.config_control()
        assert ret == (True, '')
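The decorator swap above (`napalm_utils.proxy_napalm_wrap = mock_proxy_napalm_wrap` before `import salt.modules.napalm_network`) works because decoration happens at import time. A standalone sketch of the same injection trick; `passthrough` and `facts` here are hypothetical stand-ins, not Salt code:

```python
import functools

def passthrough(func):
    # Stand-in for mock_proxy_napalm_wrap: inject a fake device into the
    # decorated function's module globals instead of doing real proxy checks.
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        func.__globals__['napalm_device'] = object()
        return func(*args, **kwargs)
    return wrapper

@passthrough
def facts():
    # napalm_device resolves through module globals at call time,
    # so the wrapper's injection above is visible here.
    return napalm_device is not None

assert facts() is True
```

Because the real `@proxy_napalm_wrap` is applied to each function as `salt.modules.napalm_network` is imported, replacing it any later would have no effect; the swap must precede the import, which is why the imports sit mid-file with `# NOQA` markers.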
@@ -698,9 +698,9 @@ class StateTestCase(TestCase, LoaderModuleMockMixin):
            with patch.object(state, '_check_queue', mock):
                self.assertEqual(state.top("reverse_top.sls"), "A")

            mock = MagicMock(side_effect=[False, True, True])
            with patch.object(state, '_check_pillar', mock):
                with patch.dict(state.__pillar__, {"_errors": "E"}):
            mock = MagicMock(side_effect=[['E'], None, None])
            with patch.object(state, '_get_pillar_errors', mock):
                with patch.dict(state.__pillar__, {"_errors": ['E']}):
                    self.assertListEqual(state.top("reverse_top.sls"), ret)

            with patch.dict(state.__opts__, {"test": "A"}):
@@ -857,14 +857,10 @@ class StateTestCase(TestCase, LoaderModuleMockMixin):
                                     True),
                    ["A"])

            mock = MagicMock(side_effect=[False,
                                          True,
                                          True,
                                          True,
                                          True])
            with patch.object(state, '_check_pillar', mock):
            mock = MagicMock(side_effect=[['E', '1'], None, None, None, None])
            with patch.object(state, '_get_pillar_errors', mock):
                with patch.dict(state.__context__, {"retcode": 5}):
                    with patch.dict(state.__pillar__, {"_errors": "E1"}):
                    with patch.dict(state.__pillar__, {"_errors": ['E', '1']}):
                        self.assertListEqual(state.sls("core,edit.vim dev",
                                                       None,
                                                       None,
@@ -979,3 +975,62 @@ class StateTestCase(TestCase, LoaderModuleMockMixin):
            MockJson.flag = False
            with patch('salt.utils.fopen', mock_open()):
                self.assertTrue(state.pkg(tar_file, 0, "md5"))

    def test_get_pillar_errors_CC(self):
        '''
        Test _get_pillar_errors function.
        CC: External clean, Internal clean
        :return:
        '''
        for int_pillar, ext_pillar in [({'foo': 'bar'}, {'fred': 'baz'}),
                                       ({'foo': 'bar'}, None),
                                       ({}, {'fred': 'baz'})]:
            with patch('salt.modules.state.__pillar__', int_pillar):
                for opts, res in [({'force': True}, None),
                                  ({'force': False}, None),
                                  ({}, None)]:
                    assert res == state._get_pillar_errors(kwargs=opts, pillar=ext_pillar)

    def test_get_pillar_errors_EC(self):
        '''
        Test _get_pillar_errors function.
        EC: External erroneous, Internal clean
        :return:
        '''
        errors = ['failure', 'everywhere']
        for int_pillar, ext_pillar in [({'foo': 'bar'}, {'fred': 'baz', '_errors': errors}),
                                       ({}, {'fred': 'baz', '_errors': errors})]:
            with patch('salt.modules.state.__pillar__', int_pillar):
                for opts, res in [({'force': True}, None),
                                  ({'force': False}, errors),
                                  ({}, errors)]:
                    assert res == state._get_pillar_errors(kwargs=opts, pillar=ext_pillar)

    def test_get_pillar_errors_EE(self):
        '''
        Test _get_pillar_errors function.
        EE: External erroneous, Internal erroneous
        :return:
        '''
        errors = ['failure', 'everywhere']
        for int_pillar, ext_pillar in [({'foo': 'bar', '_errors': errors}, {'fred': 'baz', '_errors': errors})]:
            with patch('salt.modules.state.__pillar__', int_pillar):
                for opts, res in [({'force': True}, None),
                                  ({'force': False}, errors),
                                  ({}, errors)]:
                    assert res == state._get_pillar_errors(kwargs=opts, pillar=ext_pillar)

    def test_get_pillar_errors_CE(self):
        '''
        Test _get_pillar_errors function.
        CE: External clean, Internal erroneous
        :return:
        '''
        errors = ['failure', 'everywhere']
        for int_pillar, ext_pillar in [({'foo': 'bar', '_errors': errors}, {'fred': 'baz'}),
                                       ({'foo': 'bar', '_errors': errors}, None)]:
            with patch('salt.modules.state.__pillar__', int_pillar):
                for opts, res in [({'force': True}, None),
                                  ({'force': False}, errors),
                                  ({}, errors)]:
                    assert res == state._get_pillar_errors(kwargs=opts, pillar=ext_pillar)
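The four clean/erroneous combinations pin down a simple contract: pillar `_errors` from either the in-memory pillar or the freshly compiled one fail the run, unless `force` is set. A sketch of that contract (a hypothetical helper mirroring the tested behavior, not Salt's actual implementation):

```python
def get_pillar_errors(kwargs, internal_pillar, ext_pillar=None):
    # force=True suppresses all pillar errors (sketch of the tested contract)
    if kwargs.get('force'):
        return None
    # external (freshly compiled) pillar errors win; fall back to internal
    errors = (ext_pillar or {}).get('_errors') or internal_pillar.get('_errors')
    return errors or None

# CC: both clean -> None
assert get_pillar_errors({}, {'foo': 'bar'}, {'fred': 'baz'}) is None
# EC/CE/EE: any _errors present -> returned, unless forced
assert get_pillar_errors({}, {}, {'_errors': ['boom']}) == ['boom']
assert get_pillar_errors({}, {'_errors': ['boom']}, None) == ['boom']
assert get_pillar_errors({'force': True}, {'_errors': ['boom']}) is None
```

This is why both `test_get_pillar_errors_*` loops always expect `None` when `{'force': True}` is passed, regardless of which pillar carries the errors.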
51  tests/unit/modules/test_win_file.py  Normal file
@@ -0,0 +1,51 @@
# -*- coding: utf-8 -*-
'''
:codeauthor: :email:`Shane Lee <slee@saltstack.com>`
'''
# Import Python Libs
from __future__ import absolute_import
import os

# Import Salt Testing Libs
from tests.support.unit import TestCase, skipIf
from tests.support.mock import (
    patch,
    NO_MOCK,
    NO_MOCK_REASON
)

# Import Salt Libs
import salt.modules.win_file as win_file
from salt.exceptions import CommandExecutionError
import salt.utils


@skipIf(NO_MOCK, NO_MOCK_REASON)
class WinFileTestCase(TestCase):
    '''
    Test cases for salt.modules.win_file
    '''
    FAKE_RET = {'fake': 'ret data'}
    if salt.utils.is_windows():
        FAKE_PATH = os.sep.join(['C:', 'path', 'does', 'not', 'exist'])
    else:
        FAKE_PATH = os.sep.join(['path', 'does', 'not', 'exist'])

    def test_issue_43328_stats(self):
        '''
        Make sure that a CommandExecutionError is raised if the file does NOT
        exist
        '''
        with patch('os.path.exists', return_value=False):
            self.assertRaises(CommandExecutionError,
                              win_file.stats,
                              self.FAKE_PATH)

    def test_issue_43328_check_perms_no_ret(self):
        '''
        Make sure that a CommandExecutionError is raised if the file does NOT
        exist
        '''
        with patch('os.path.exists', return_value=False):
            self.assertRaises(
                CommandExecutionError, win_file.check_perms, self.FAKE_PATH)
70  tests/unit/ssh/test_roster_defaults.py  Normal file
@@ -0,0 +1,70 @@
# -*- coding: utf-8 -*-
'''
Test roster default rendering
'''

# Import python libs
from __future__ import absolute_import
import os
import shutil
import tempfile
import yaml

# Import Salt Testing libs
from tests.support.unit import TestCase
from tests.support.mock import MagicMock, patch
from tests.support.paths import TMP

# Import Salt libs
import salt.roster
import salt.config
import salt.utils

ROSTER = '''
localhost:
  host: 127.0.0.1
  port: 2827
self:
  host: 0.0.0.0
  port: 42
'''


class SSHRosterDefaults(TestCase):
    def test_roster_defaults_flat(self):
        '''
        Test Roster Defaults on the flat roster
        '''
        tempdir = tempfile.mkdtemp(dir=TMP)
        expected = {
            'self': {
                'host': '0.0.0.0',
                'user': 'daniel',
                'port': 42,
            },
            'localhost': {
                'host': '127.0.0.1',
                'user': 'daniel',
                'port': 2827,
            },
        }
        try:
            root_dir = os.path.join(tempdir, 'foo', 'bar')
            os.makedirs(root_dir)
            fpath = os.path.join(root_dir, 'config')
            with salt.utils.fopen(fpath, 'w') as fp_:
                fp_.write(
                    '''
                    roster_defaults:
                      user: daniel
                    '''
                )
            opts = salt.config.master_config(fpath)
            with patch('salt.roster.get_roster_file', MagicMock(return_value=ROSTER)):
                with patch('salt.template.compile_template', MagicMock(return_value=yaml.load(ROSTER))):
                    roster = salt.roster.Roster(opts=opts)
                    self.assertEqual(roster.targets('*', 'glob'), expected)
        finally:
            if os.path.isdir(tempdir):
                shutil.rmtree(tempdir)
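The `expected` dict above encodes the behavior under test: `roster_defaults` values are merged into every roster target, with per-target keys taking precedence. A sketch of that merge using plain dicts (illustrative only, not Salt's roster code):

```python
# Roster targets as parsed from the ROSTER YAML above.
roster = {
    'localhost': {'host': '127.0.0.1', 'port': 2827},
    'self': {'host': '0.0.0.0', 'port': 42},
}

# roster_defaults from the master config; target keys win on conflict,
# because dict(defaults, **data) lets data override defaults.
defaults = {'user': 'daniel'}
targets = {name: dict(defaults, **data) for name, data in roster.items()}

assert targets['self'] == {'host': '0.0.0.0', 'user': 'daniel', 'port': 42}
assert targets['localhost'] == {'host': '127.0.0.1', 'user': 'daniel', 'port': 2827}
```

Since neither target defines `user`, the default `daniel` shows up in both entries of `expected`; a target that set its own `user` would keep it.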
@@ -577,7 +577,7 @@ class TestFileState(TestCase, LoaderModuleMockMixin):
                                             'file.copy': mock_cp,
                                             'file.manage_file': mock_ex,
                                             'cmd.run_all': mock_cmd_fail}):
            comt = ('Must provide name to file.managed')
            comt = ('Destination file name is required')
            ret.update({'comment': comt, 'name': '', 'pchanges': {}})
            self.assertDictEqual(filestate.managed(''), ret)

@@ -743,7 +743,7 @@ class TestFileState(TestCase, LoaderModuleMockMixin):
        mock_check = MagicMock(return_value=(
            None,
            'The directory "{0}" will be changed'.format(name),
            {'directory': 'new'}))
            {name: {'directory': 'new'}}))
        mock_error = CommandExecutionError
        with patch.dict(filestate.__salt__, {'config.manage_mode': mock_t,
                                             'file.user_to_uid': mock_uid,
@@ -801,16 +801,15 @@ class TestFileState(TestCase, LoaderModuleMockMixin):
                                                   group=group),
                                 ret)

            with patch.object(os.path, 'isfile', mock_f):
                with patch.object(os.path, 'isdir', mock_f):
                    with patch.dict(filestate.__opts__, {'test': True}):
                        if salt.utils.is_windows():
                            comt = 'The directory "{0}" will be changed' \
                                   ''.format(name)
                            p_chg = {'directory': 'new'}
                        else:
                            comt = ('The following files will be changed:\n{0}:'
                                    ' directory - new\n'.format(name))
                            p_chg = {'/etc/grub.conf': {'directory': 'new'}}
                        p_chg = {'/etc/grub.conf': {'directory': 'new'}}
                        ret.update({
                            'comment': comt,
                            'result': None,
@@ -131,3 +131,51 @@ class MinionTestCase(TestCase):
                self.assertEqual(minion.jid_queue, [456, 789])
            finally:
                minion.destroy()

    def test_beacons_before_connect(self):
        '''
        Tests that the 'beacons_before_connect' option causes the beacons to be initialized before connect.
        '''
        with patch('salt.minion.Minion.ctx', MagicMock(return_value={})), \
                patch('salt.minion.Minion.sync_connect_master', MagicMock(side_effect=RuntimeError('stop execution'))), \
                patch('salt.utils.process.SignalHandlingMultiprocessingProcess.start', MagicMock(return_value=True)), \
                patch('salt.utils.process.SignalHandlingMultiprocessingProcess.join', MagicMock(return_value=True)):
            mock_opts = copy.copy(salt.config.DEFAULT_MINION_OPTS)
            mock_opts['beacons_before_connect'] = True
            try:
                minion = salt.minion.Minion(mock_opts, io_loop=tornado.ioloop.IOLoop())

                try:
                    minion.tune_in(start=True)
                except RuntimeError:
                    pass

                # Make sure beacons are initialized but the scheduler is not
                self.assertTrue('beacons' in minion.periodic_callbacks)
                self.assertTrue('schedule' not in minion.periodic_callbacks)
            finally:
                minion.destroy()

    def test_scheduler_before_connect(self):
        '''
        Tests that the 'scheduler_before_connect' option causes the scheduler to be initialized before connect.
        '''
        with patch('salt.minion.Minion.ctx', MagicMock(return_value={})), \
                patch('salt.minion.Minion.sync_connect_master', MagicMock(side_effect=RuntimeError('stop execution'))), \
                patch('salt.utils.process.SignalHandlingMultiprocessingProcess.start', MagicMock(return_value=True)), \
                patch('salt.utils.process.SignalHandlingMultiprocessingProcess.join', MagicMock(return_value=True)):
            mock_opts = copy.copy(salt.config.DEFAULT_MINION_OPTS)
            mock_opts['scheduler_before_connect'] = True
            try:
                minion = salt.minion.Minion(mock_opts, io_loop=tornado.ioloop.IOLoop())

                try:
                    minion.tune_in(start=True)
                except RuntimeError:
                    pass

                # Make sure the scheduler is initialized but the beacons are not
                self.assertTrue('schedule' in minion.periodic_callbacks)
                self.assertTrue('beacons' not in minion.periodic_callbacks)
            finally:
                minion.destroy()