Merge branch 'develop' into add_file_tree_environments

pjcreath 2017-12-04 11:46:36 -05:00
commit 518e709f35
117 changed files with 4593 additions and 726 deletions

.github/stale.yml

@ -1,8 +1,8 @@
# Probot Stale configuration file
# Number of days of inactivity before an issue becomes stale
# 890 is approximately 2 years and 5 months
daysUntilStale: 890
# 875 is approximately 2 years and 5 months
daysUntilStale: 875
# Number of days of inactivity before a stale issue is closed
daysUntilClose: 7


@ -1,6 +1,6 @@
---
<% vagrant = system('which vagrant 2>/dev/null >/dev/null') %>
<% version = '2017.7.2' %>
<% version = '2017.7.1' %>
<% platformsfile = ENV['SALT_KITCHEN_PLATFORMS'] || '.kitchen/platforms.yml' %>
<% driverfile = ENV['SALT_KITCHEN_DRIVER'] || '.kitchen/driver.yml' %>
@ -19,6 +19,8 @@ driver:
disable_upstart: false
provision_command:
- echo 'L /run/docker.sock - - - - /docker.sock' > /etc/tmpfiles.d/docker.conf
transport:
name: sftp
<% end %>
sudo: false
@ -164,6 +166,9 @@ suites:
clone_repo: false
salttesting_namespec: salttesting==2017.6.1
- name: py3
excludes:
- centos-6
- ubuntu-14.04
provisioner:
pillars:
top.sls:


@ -1,9 +1,10 @@
# This file is only used for running the test suite with kitchen-salt.
source "https://rubygems.org"
source 'https://rubygems.org'
gem "test-kitchen"
gem "kitchen-salt", :git => 'https://github.com/saltstack/kitchen-salt.git'
gem 'test-kitchen'
gem 'kitchen-salt', :git => 'https://github.com/saltstack/kitchen-salt.git'
gem 'kitchen-sync'
gem 'git'
group :docker do


@ -297,6 +297,11 @@
#batch_safe_limit: 100
#batch_safe_size: 8
# Master stats enables stats events to be fired from the master at close
# to the defined interval
#master_stats: False
#master_stats_event_iter: 60
##### Security settings #####
##########################################


@ -868,6 +868,29 @@ what you are doing! Transports are explained in :ref:`Salt Transports
ret_port: 4606
zeromq: []
.. conf_master:: master_stats
``master_stats``
----------------
Default: False
Turning on master stats enables runtime throughput and statistics events
to be fired from the master event bus. These events report which
functions have been run on the master and how long those runs have
taken, on average, over a given period of time.
.. conf_master:: master_stats_event_iter
``master_stats_event_iter``
---------------------------
Default: 60
The interval, in seconds, at which master_stats events are fired. Events are
only fired in conjunction with the master receiving a request; idle masters
will not fire these events.
.. conf_master:: sock_pool_size
``sock_pool_size``


@ -417,6 +417,7 @@ execution modules
system
system_profiler
systemd
telegram
telemetry
temp
test


@ -404,6 +404,22 @@ The above example will force the minion to use the :py:mod:`systemd
.. __: https://github.com/saltstack/salt/issues/new
Logging Restrictions
--------------------
As a rule, logging should not be done anywhere in a Salt module before it is
loaded. This rule applies to all code that would run before the ``__virtual__()``
function, as well as the code within the ``__virtual__()`` function itself.
If logging statements are made before the virtual function determines if
the module should be loaded, then those logging statements will be called
repeatedly. This clutters up log files unnecessarily.
Exceptions may be considered for logging statements made at the ``trace`` level.
However, it is better to provide the necessary information by another means.
One method is to :ref:`return error information <modules-error-info>` in the
``__virtual__()`` function.
.. _modules-virtual-name:
``__virtualname__``


@ -111,6 +111,8 @@ This code will call the `managed` function in the :mod:`file
<salt.states.file>` state module and pass the arguments ``name`` and ``source``
to it.
.. _state-return-data:
Return Data
===========


@ -5,10 +5,10 @@ Orchestrate Runner
==================
Executing states or highstate on a minion is perfect when you want to ensure that
the minion is configured and running the way you want. Sometimes, however, you want
to configure a set of minions all at once.
For example, if you want to set up a load balancer in front of a cluster of web
servers you can ensure the load balancer is set up first, and then the same
matching configuration is applied consistently across the whole cluster.
@ -222,6 +222,34 @@ To execute with pillar data.
"master": "mymaster"}'
Return Codes in Runner/Wheel Jobs
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. versionadded:: Oxygen
State (``salt.state``) jobs are able to report failure via the :ref:`state
return dictionary <state-return-data>`. Remote execution (``salt.function``)
jobs are able to report failure by setting a ``retcode`` key in the
``__context__`` dictionary. However, runner (``salt.runner``) and wheel
(``salt.wheel``) jobs would only report a ``False`` result when the
runner/wheel function raised an exception. As of the Oxygen release, it is now
possible to set a retcode in runner and wheel functions just as you can do in
remote execution functions. Here is some example pseudocode:
.. code-block:: python

    def myrunner():
        ...
        # do stuff
        ...
        if some_error_condition:
            __context__['retcode'] = 1
        return result
This allows a custom runner/wheel function to report its failure so that
requisites can accurately tell that a job has failed.
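On the consuming side, a client can derive job success from that context value; a minimal sketch of the check (an illustration, not the actual Salt implementation):

```python
def job_success(context):
    # A job is considered successful when no retcode was set in the
    # context, or when the retcode that was set is zero.
    return context.get('retcode', 0) == 0
```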
More Complex Orchestration
~~~~~~~~~~~~~~~~~~~~~~~~~~


@ -25,6 +25,25 @@ by any master tops matches that are not matched via a top file.
To make master tops matches execute first, followed by top file matches, set
the new :conf_minion:`master_tops_first` minion config option to ``True``.
Return Codes for Runner/Wheel Functions
---------------------------------------
When using :ref:`orchestration <orchestrate-runner>`, runner and wheel
functions used to report a ``True`` result if the function ran to completion
without raising an exception. It is now possible to set a return code in the
``__context__`` dictionary, allowing runner and wheel functions to report that
they failed. Here's some example pseudocode:
.. code-block:: python

    def myrunner():
        ...
        # do stuff
        ...
        if some_error_condition:
            __context__['retcode'] = 1
        return result
LDAP via External Authentication Changes
----------------------------------------
In this release of Salt, if LDAP Bind Credentials are supplied, then


@ -78,7 +78,7 @@ UNIX systems
**BSD**:
- OpenBSD (``pip`` installation)
- OpenBSD
- FreeBSD 9/10/11
**SunOS**:
@ -272,66 +272,118 @@ Here's a summary of the command line options:
$ sh bootstrap-salt.sh -h
Usage : bootstrap-salt.sh [options] <install-type> <install-type-args>
Installation types:
- stable (default)
- stable [version] (ubuntu specific)
- daily (ubuntu specific)
- testing (redhat specific)
- git
- stable Install latest stable release. This is the default
install type
- stable [branch] Install latest version on a branch. Only supported
for packages available at repo.saltstack.com
- stable [version] Install a specific version. Only supported for
packages available at repo.saltstack.com
- daily Ubuntu specific: configure SaltStack Daily PPA
- testing RHEL-family specific: configure EPEL testing repo
- git Install from the head of the develop branch
- git [ref] Install from any git ref (such as a branch, tag, or
commit)
Examples:
- bootstrap-salt.sh
- bootstrap-salt.sh stable
- bootstrap-salt.sh stable 2014.7
- bootstrap-salt.sh stable 2017.7
- bootstrap-salt.sh stable 2017.7.2
- bootstrap-salt.sh daily
- bootstrap-salt.sh testing
- bootstrap-salt.sh git
- bootstrap-salt.sh git develop
- bootstrap-salt.sh git v0.17.0
- bootstrap-salt.sh git 8c3fadf15ec183e5ce8c63739850d543617e4357
- bootstrap-salt.sh git 2017.7
- bootstrap-salt.sh git v2017.7.2
- bootstrap-salt.sh git 06f249901a2e2f1ed310d58ea3921a129f214358
Options:
-h Display this message
-v Display script version
-n No colours.
-D Show debug output.
-c Temporary configuration directory
-g Salt repository URL. (default: git://github.com/saltstack/salt.git)
-G Instead of cloning from git://github.com/saltstack/salt.git, clone from https://github.com/saltstack/salt.git (Usually necessary on systems which have the regular git protocol port blocked, where https usually is not)
-k Temporary directory holding the minion keys which will pre-seed
the master.
-s Sleep time used when waiting for daemons to start, restart and when checking
for the services running. Default: 3
-M Also install salt-master
-S Also install salt-syndic
-N Do not install salt-minion
-X Do not start daemons after installation
-C Only run the configuration function. This option automatically
bypasses any installation.
-P Allow pip based installations. On some distributions the required salt
packages or its dependencies are not available as a package for that
distribution. Using this flag allows the script to use pip as a last
resort method. NOTE: This only works for functions which actually
implement pip based installations.
-F Allow copied files to overwrite existing(config, init.d, etc)
-U If set, fully upgrade the system prior to bootstrapping salt
-K If set, keep the temporary files in the temporary directories specified
with -c and -k.
-I If set, allow insecure connections while downloading any files. For
example, pass '--no-check-certificate' to 'wget' or '--insecure' to 'curl'
-A Pass the salt-master DNS name or IP. This will be stored under
${BS_SALT_ETC_DIR}/minion.d/99-master-address.conf
-i Pass the salt-minion id. This will be stored under
${BS_SALT_ETC_DIR}/minion_id
-L Install the Apache Libcloud package if possible(required for salt-cloud)
-p Extra-package to install while installing salt dependencies. One package
per -p flag. You're responsible for providing the proper package name.
-d Disable check_service functions. Setting this flag disables the
'install_<distro>_check_services' checks. You can also do this by
touching /tmp/disable_salt_checks on the target host. Defaults ${BS_FALSE}
-H Use the specified http proxy for the installation
-Z Enable external software source for newer ZeroMQ(Only available for RHEL/CentOS/Fedora/Ubuntu based distributions)
-b Assume that dependencies are already installed and software sources are set up.
If git is selected, git tree is still checked out as dependency step.
-h Display this message
-v Display script version
-n No colours
-D Show debug output
-c Temporary configuration directory
-g Salt Git repository URL. Default: https://github.com/saltstack/salt.git
-w Install packages from downstream package repository rather than
upstream, saltstack package repository. This is currently only
implemented for SUSE.
-k Temporary directory holding the minion keys which will pre-seed
the master.
-s Sleep time used when waiting for daemons to start, restart and when
checking for the services running. Default: 3
-L Also install salt-cloud and required python-libcloud package
-M Also install salt-master
-S Also install salt-syndic
-N Do not install salt-minion
-X Do not start daemons after installation
-d Disables checking if Salt services are enabled to start on system boot.
You can also do this by touching /tmp/disable_salt_checks on the target
host. Default: ${BS_FALSE}
-P Allow pip based installations. On some distributions the required salt
packages or its dependencies are not available as a package for that
distribution. Using this flag allows the script to use pip as a last
resort method. NOTE: This only works for functions which actually
implement pip based installations.
-U If set, fully upgrade the system prior to bootstrapping Salt
-I If set, allow insecure connections while downloading any files. For
example, pass '--no-check-certificate' to 'wget' or '--insecure' to
'curl'. On Debian and Ubuntu, using this option with -U allows to obtain
GnuPG archive keys insecurely if distro has changed release signatures.
-F Allow copied files to overwrite existing (config, init.d, etc)
-K If set, keep the temporary files in the temporary directories specified
with -c and -k
-C Only run the configuration function. Implies -F (forced overwrite).
To overwrite Master or Syndic configs, -M or -S, respectively, must
also be specified. Salt installation will be omitted, but some of the
dependencies could be installed to write configuration with -j or -J.
-A Pass the salt-master DNS name or IP. This will be stored under
${BS_SALT_ETC_DIR}/minion.d/99-master-address.conf
-i Pass the salt-minion id. This will be stored under
${BS_SALT_ETC_DIR}/minion_id
-p Extra-package to install while installing Salt dependencies. One package
per -p flag. You're responsible for providing the proper package name.
-H Use the specified HTTP proxy for all download URLs (including https://).
For example: http://myproxy.example.com:3128
-Z Enable additional package repository for newer ZeroMQ
(only available for RHEL/CentOS/Fedora/Ubuntu based distributions)
-b Assume that dependencies are already installed and software sources are
set up. If git is selected, git tree is still checked out as dependency
step.
-f Force shallow cloning for git installations.
This may result in an "n/a" in the version number.
-l Disable ssl checks. When passed, switches "https" calls to "http" where
possible.
-V Install Salt into virtualenv
(only available for Ubuntu based distributions)
-a Pip install all Python pkg dependencies for Salt. Requires -V to install
all pip pkgs into the virtualenv.
(Only available for Ubuntu based distributions)
-r Disable all repository configuration performed by this script. This
option assumes all necessary repository configuration is already present
on the system.
-R Specify a custom repository URL. Assumes the custom repository URL
points to a repository that mirrors Salt packages located at
repo.saltstack.com. The option passed with -R replaces the
"repo.saltstack.com". If -R is passed, -r is also set. Currently only
works on CentOS/RHEL and Debian based distributions.
-J Replace the Master config file with data passed in as a JSON string. If
a Master config file is found, a reasonable effort will be made to save
the file with a ".bak" extension. If used in conjunction with -C or -F,
no ".bak" file will be created as either of those options will force
a complete overwrite of the file.
-j Replace the Minion config file with data passed in as a JSON string. If
a Minion config file is found, a reasonable effort will be made to save
the file with a ".bak" extension. If used in conjunction with -C or -F,
no ".bak" file will be created as either of those options will force
a complete overwrite of the file.
-q Quiet salt installation from git (setup.py install -q)
-x Changes the python version used to install a git version of salt. Currently
this is considered experimental and has only been tested on Centos 6. This
only works for git installations.
-y Installs a different python version on host. Currently this has only been
tested with Centos 6 and is considered experimental. This will install the
ius repo on the box if disable repo is false. This must be used in conjunction
with -x <pythonversion>. For example:
sh bootstrap.sh -P -y -x python2.7 git v2016.11.3
The above will install python27 and install the git version of salt using the
python2.7 executable. This only works for git and pip installations.


@ -161,6 +161,7 @@ class Master(salt.utils.parsers.MasterOptionParser, DaemonsMixin): # pylint: di
v_dirs,
self.config['user'],
permissive=self.config['permissive_pki_access'],
root_dir=self.config['root_dir'],
sensitive_dirs=[self.config['pki_dir'], self.config['key_dir']],
)
# Clear out syndics from cachedir
@ -281,6 +282,7 @@ class Minion(salt.utils.parsers.MinionOptionParser, DaemonsMixin): # pylint: di
v_dirs,
self.config['user'],
permissive=self.config['permissive_pki_access'],
root_dir=self.config['root_dir'],
sensitive_dirs=[self.config['pki_dir']],
)
except OSError as error:
@ -468,6 +470,7 @@ class ProxyMinion(salt.utils.parsers.ProxyMinionOptionParser, DaemonsMixin): #
v_dirs,
self.config['user'],
permissive=self.config['permissive_pki_access'],
root_dir=self.config['root_dir'],
sensitive_dirs=[self.config['pki_dir']],
)
except OSError as error:
@ -576,6 +579,7 @@ class Syndic(salt.utils.parsers.SyndicOptionParser, DaemonsMixin): # pylint: di
],
self.config['user'],
permissive=self.config['permissive_pki_access'],
root_dir=self.config['root_dir'],
sensitive_dirs=[self.config['pki_dir']],
)
except OSError as error:


@ -32,7 +32,10 @@ class SPM(parsers.SPMParser):
v_dirs = [
self.config['cachedir'],
]
verify_env(v_dirs, self.config['user'],)
verify_env(v_dirs,
self.config['user'],
root_dir=self.config['root_dir'],
)
verify_log(self.config)
client = salt.spm.SPMClient(ui, self.config)
client.run(self.args)


@ -385,7 +385,11 @@ class SyncClientMixin(object):
# Initialize a context for executing the method.
with tornado.stack_context.StackContext(self.functions.context_dict.clone):
data[u'return'] = self.functions[fun](*args, **kwargs)
data[u'success'] = True
try:
data[u'success'] = self.context.get(u'retcode', 0) == 0
except AttributeError:
# Assume a True result if no context attribute
data[u'success'] = True
if isinstance(data[u'return'], dict) and u'data' in data[u'return']:
# some functions can return boolean values
data[u'success'] = salt.utils.state.check_result(data[u'return'][u'data'])


@ -1026,6 +1026,8 @@ class Single(object):
opts_pkg[u'__master_opts__'] = self.context[u'master_opts']
if u'_caller_cachedir' in self.opts:
opts_pkg[u'_caller_cachedir'] = self.opts[u'_caller_cachedir']
if u'known_hosts_file' in self.opts:
opts_pkg[u'known_hosts_file'] = self.opts[u'known_hosts_file']
else:
opts_pkg[u'_caller_cachedir'] = self.opts[u'cachedir']
# Use the ID defined in the roster file


@ -67,7 +67,8 @@ class SaltCloud(salt.utils.parsers.SaltCloudParser):
if self.config['verify_env']:
verify_env(
[os.path.dirname(self.config['conf_file'])],
salt_master_user
salt_master_user,
root_dir=self.config['root_dir'],
)
logfile = self.config['log_file']
if logfile is not None and not logfile.startswith('tcp://') \


@ -80,6 +80,7 @@ def _master_opts(cfg='master'):
cfg = os.environ.get(
'SALT_MASTER_CONFIG', os.path.join(default_dir, cfg))
opts = config.master_config(cfg)
opts['output'] = 'quiet'
return opts
@ -559,9 +560,10 @@ def get_configured_provider(vm_=None):
# in all cases, verify that the linked saltmaster is alive.
if data:
ret = _salt('test.ping', salt_target=data['target'])
if not ret:
raise SaltCloudSystemExit(
if ret:
return data
else:
log.error(
'Configured provider {0} minion: {1} is unreachable'.format(
__active_provider_name__, data['target']))
return data
return False
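The restructured check returns the provider data only when the liveness ping succeeds; the control flow can be sketched independently (the function name and the `ping` callable are hypothetical stand-ins for `_salt('test.ping', ...)`):

```python
def check_provider(data, ping):
    # Return provider data only if its linked salt master answers a
    # ping for the configured target; otherwise report failure.
    if data and ping(data['target']):
        return data
    return False
```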


@ -165,6 +165,10 @@ VALID_OPTS = {
# The master_pubkey_signature must also be set for this.
'master_use_pubkey_signature': bool,
# Enable master stats events to be fired; these events will contain information about
# what commands the master is processing and what the rates of execution are
'master_stats': bool,
'master_stats_event_iter': int,
# The key fingerprint of the higher-level master for the syndic to verify it is talking to the
# intended master
'syndic_finger': str,
@ -1515,6 +1519,8 @@ DEFAULT_MASTER_OPTS = {
'svnfs_saltenv_whitelist': [],
'svnfs_saltenv_blacklist': [],
'max_event_size': 1048576,
'master_stats': False,
'master_stats_event_iter': 60,
'minionfs_env': 'base',
'minionfs_mountpoint': '',
'minionfs_whitelist': [],


@ -42,6 +42,7 @@ import salt.utils.platform
import salt.utils.stringutils
import salt.utils.user
import salt.utils.verify
import salt.utils.versions
from salt.defaults import DEFAULT_TARGET_DELIM
from salt.pillar import git_pillar
from salt.exceptions import FileserverConfigError, SaltMasterError
@ -534,7 +535,7 @@ class RemoteFuncs(object):
return ret
expr_form = load.get('expr_form')
if expr_form is not None and 'tgt_type' not in load:
salt.utils.warn_until(
salt.utils.versions.warn_until(
u'Neon',
u'_mine_get: minion {0} uses pre-Nitrogen API key '
u'"expr_form". Accepting for backwards compatibility '


@ -474,8 +474,14 @@ def _sunos_memdata():
grains['mem_total'] = int(comps[2].strip())
swap_cmd = salt.utils.path.which('swap')
swap_total = __salt__['cmd.run']('{0} -s'.format(swap_cmd)).split()[1]
grains['swap_total'] = int(swap_total) // 1024
swap_data = __salt__['cmd.run']('{0} -s'.format(swap_cmd)).split()
try:
swap_avail = int(swap_data[-2][:-1])
swap_used = int(swap_data[-4][:-1])
swap_total = (swap_avail + swap_used) // 1024
except ValueError:
swap_total = None
grains['swap_total'] = swap_total
return grains
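The new parsing splits the whole `swap -s` line and indexes from the end: the available figure is the second-to-last token and the used figure the fourth-to-last, each with a trailing `k`. A standalone sketch (the sample line in the test is illustrative, not from a real host):

```python
def parse_sunos_swap(swap_line):
    # 'swap -s' output looks like:
    #   total: ...k bytes allocated + ...k reserved = <used>k used, <avail>k available
    tokens = swap_line.split()
    try:
        swap_avail = int(tokens[-2][:-1])  # strip trailing 'k'
        swap_used = int(tokens[-4][:-1])
        return (swap_avail + swap_used) // 1024  # KiB -> MiB
    except (ValueError, IndexError):
        # Unparseable output yields no swap_total, as in the new grains code
        return None
```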
@ -2475,10 +2481,9 @@ def _linux_iqn():
if os.path.isfile(initiator):
with salt.utils.files.fopen(initiator, 'r') as _iscsi:
for line in _iscsi:
if line.find('InitiatorName') != -1:
iqn = line.split('=')
final_iqn = iqn[1].rstrip()
ret.extend([final_iqn])
line = line.strip()
if line.startswith('InitiatorName='):
ret.append(line.split('=', 1)[1])
return ret
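The rewritten loop is stricter than the old `find('InitiatorName')` check: it accepts only lines that begin with `InitiatorName=` after stripping, and splits on the first `=` only, so `=` characters inside the IQN survive. Sketched as a standalone helper:

```python
def parse_iqns(lines):
    # Collect IQNs from iSCSI initiator-file lines, skipping comments
    # and blank lines.
    ret = []
    for line in lines:
        line = line.strip()
        if line.startswith('InitiatorName='):
            ret.append(line.split('=', 1)[1])
    return ret
```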
@ -2492,9 +2497,10 @@ def _aix_iqn():
aixret = __salt__['cmd.run'](aixcmd)
if aixret[0].isalpha():
iqn = aixret.split()
final_iqn = iqn[1].rstrip()
ret.extend([final_iqn])
try:
ret.append(aixret.split()[1].rstrip())
except IndexError:
pass
return ret
@ -2507,8 +2513,7 @@ def _linux_wwns():
for fcfile in glob.glob('/sys/class/fc_host/*/port_name'):
with salt.utils.files.fopen(fcfile, 'r') as _wwn:
for line in _wwn:
line = line.rstrip()
ret.extend([line[2:]])
ret.append(line.rstrip()[2:])
return ret
@ -2532,11 +2537,9 @@ def _windows_iqn():
wmic, namespace, mspath, get))
for line in cmdret['stdout'].splitlines():
if line[0].isalpha():
continue
line = line.rstrip()
ret.extend([line])
if line.startswith('iqn.'):
line = line.rstrip()
ret.append(line.rstrip())
return ret
@ -2551,7 +2554,6 @@ def _windows_wwns():
cmdret = __salt__['cmd.run_ps'](ps_cmd)
for line in cmdret:
line = line.rstrip()
ret.append(line)
ret.append(line.rstrip())
return ret


@ -372,15 +372,18 @@ def tops(opts):
return FilterDictWrapper(ret, u'.top')
def wheels(opts, whitelist=None):
def wheels(opts, whitelist=None, context=None):
'''
Returns the wheels modules
'''
if context is None:
context = {}
return LazyLoader(
_module_dirs(opts, u'wheel'),
opts,
tag=u'wheel',
whitelist=whitelist,
pack={u'__context__': context},
)
@ -836,17 +839,19 @@ def call(fun, **kwargs):
return funcs[fun](*args)
def runner(opts, utils=None):
def runner(opts, utils=None, context=None):
'''
Directly call a function inside a loader directory
'''
if utils is None:
utils = {}
if context is None:
context = {}
ret = LazyLoader(
_module_dirs(opts, u'runners', u'runner', ext_type_dirs=u'runner_dirs'),
opts,
tag=u'runners',
pack={u'__utils__': utils},
pack={u'__utils__': utils, u'__context__': context},
)
# TODO: change from __salt__ to something else, we overload __salt__ too much
ret.pack[u'__salt__'] = ret


@ -16,6 +16,7 @@ import errno
import signal
import stat
import logging
import collections
import multiprocessing
import salt.serializers.msgpack
@ -797,6 +798,7 @@ class MWorker(salt.utils.process.SignalHandlingMultiprocessingProcess):
:return: Master worker
'''
kwargs[u'name'] = name
self.name = name
super(MWorker, self).__init__(**kwargs)
self.opts = opts
self.req_channels = req_channels
@ -804,6 +806,8 @@ class MWorker(salt.utils.process.SignalHandlingMultiprocessingProcess):
self.mkey = mkey
self.key = key
self.k_mtime = 0
self.stats = collections.defaultdict(lambda: {'mean': 0, 'runs': 0})
self.stat_clock = time.time()
# We need __setstate__ and __getstate__ to also pickle 'SMaster.secrets'.
# Otherwise, 'SMaster.secrets' won't be copied over to the spawned process
@ -879,6 +883,19 @@ class MWorker(salt.utils.process.SignalHandlingMultiprocessingProcess):
u'clear': self._handle_clear}[key](load)
raise tornado.gen.Return(ret)
def _post_stats(self, start, cmd):
'''
Calculate the master stats and fire events with stat info
'''
end = time.time()
duration = end - start
self.stats[cmd][u'mean'] = (self.stats[cmd][u'mean'] * (self.stats[cmd][u'runs'] - 1) + duration) / self.stats[cmd][u'runs']
if end - self.stat_clock > self.opts[u'master_stats_event_iter']:
# Fire the event with the stats and wipe the tracker
self.aes_funcs.event.fire_event({u'time': end - self.stat_clock, u'worker': self.name, u'stats': self.stats}, tagify(self.name, u'stats'))
self.stats = collections.defaultdict(lambda: {'mean': 0, 'runs': 0})
self.stat_clock = end
def _handle_clear(self, load):
'''
Process a cleartext command
@ -888,9 +905,16 @@ class MWorker(salt.utils.process.SignalHandlingMultiprocessingProcess):
the command specified in the load's 'cmd' key.
'''
log.trace(u'Clear payload received with command %s', load[u'cmd'])
if load[u'cmd'].startswith(u'__'):
cmd = load[u'cmd']
if cmd.startswith(u'__'):
return False
return getattr(self.clear_funcs, load[u'cmd'])(load), {u'fun': u'send_clear'}
if self.opts[u'master_stats']:
start = time.time()
self.stats[cmd][u'runs'] += 1
ret = getattr(self.clear_funcs, cmd)(load), {u'fun': u'send_clear'}
if self.opts[u'master_stats']:
self._post_stats(start, cmd)
return ret
def _handle_aes(self, data):
'''
@ -903,10 +927,17 @@ class MWorker(salt.utils.process.SignalHandlingMultiprocessingProcess):
if u'cmd' not in data:
log.error(u'Received malformed command %s', data)
return {}
cmd = data[u'cmd']
log.trace(u'AES payload received with command %s', data[u'cmd'])
if data[u'cmd'].startswith(u'__'):
if cmd.startswith(u'__'):
return False
return self.aes_funcs.run_func(data[u'cmd'], data)
if self.opts[u'master_stats']:
start = time.time()
self.stats[cmd][u'runs'] += 1
ret = self.aes_funcs.run_func(data[u'cmd'], data)
if self.opts[u'master_stats']:
self._post_stats(start, cmd)
return ret
def run(self):
'''

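The `_post_stats` bookkeeping above keeps a running mean per command rather than storing every duration. Its incremental update can be sketched in isolation; note that it assumes `runs` was already incremented for the current sample, as the handlers do before calling `_post_stats`:

```python
def update_mean(mean, runs, duration):
    # Incremental (running) mean: 'runs' already counts the new
    # sample, so the previous mean is weighted by runs - 1.
    return (mean * (runs - 1) + duration) / runs
```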

@ -2067,12 +2067,16 @@ class Minion(MinionBase):
self.schedule.run_job(name)
elif func == u'disable_job':
self.schedule.disable_job(name, persist)
elif func == u'postpone_job':
self.schedule.postpone_job(name, data)
elif func == u'reload':
self.schedule.reload(schedule)
elif func == u'list':
self.schedule.list(where)
elif func == u'save_schedule':
self.schedule.save_schedule()
elif func == u'get_next_fire_time':
self.schedule.get_next_fire_time(name)
def manage_beacons(self, tag, data):
'''


@ -125,8 +125,7 @@ def cert(name,
salt 'gitlab.example.com' acme.cert dev.example.com "[gitlab.example.com]" test_cert=True renew=14 webroot=/opt/gitlab/embedded/service/gitlab-rails/public
'''
# cmd = [LEA, 'certonly', '--quiet']
cmd = [LEA, 'certonly']
cmd = [LEA, 'certonly', '--non-interactive']
cert_file = _cert_file(name, 'cert')
if not __salt__['file.file_exists'](cert_file):


@ -214,14 +214,12 @@ def __virtual__():
'''
ret = ansible is not None
msg = not ret and "Ansible is not installed on this system" or None
if msg:
log.warning(msg)
else:
if ret:
global _resolver
global _caller
_resolver = AnsibleModuleResolver(__opts__).resolve().install()
_caller = AnsibleModuleCaller(_resolver)
_set_callables(list())
_set_callables(list())
return ret, msg


@ -441,6 +441,30 @@ def reshard(stream_name, desired_size, force=False,
return r
def list_streams(region=None, key=None, keyid=None, profile=None):
'''
Return a list of all streams visible to the current account
CLI example:
.. code-block:: bash
salt myminion boto_kinesis.list_streams
'''
conn = _get_conn(region=region, key=key, keyid=keyid, profile=profile)
streams = []
exclusive_start_stream_name = ''
while exclusive_start_stream_name is not None:
args = {'ExclusiveStartStreamName': exclusive_start_stream_name} if exclusive_start_stream_name else {}
ret = _execute_with_retries(conn, 'list_streams', **args)
if 'error' in ret:
return ret
ret = ret['result'] if ret and ret.get('result') else {}
streams += ret.get('StreamNames', [])
exclusive_start_stream_name = streams[-1] if ret.get('HasMoreStreams', False) in (True, 'true') else None
return {'result': streams}
def _get_next_open_shard(stream_details, shard_id):
'''
Return the next open shard after shard_id
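The new `list_streams` pages through results by feeding the last stream name back as `ExclusiveStartStreamName` until the service reports no more pages. The loop shape, with a hypothetical page-fetcher standing in for the boto3 `list_streams` call:

```python
def collect_paged(fetch_page):
    # fetch_page(start) -> (names, has_more); start is '' on the
    # first call and the last collected name on subsequent calls.
    streams = []
    start = ''
    while start is not None:
        names, has_more = fetch_page(start)
        streams += names
        start = streams[-1] if has_more else None
    return streams
```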
@ -502,10 +526,12 @@ def _execute_with_retries(conn, function, **kwargs):
else:
# ResourceNotFoundException or InvalidArgumentException
r['error'] = e.response['Error']
log.error(r['error'])
r['result'] = None
return r
r['error'] = "Tried to execute function {0} {1} times, but was unable".format(function, max_attempts)
log.error(r['error'])
return r


@ -552,11 +552,11 @@ def lsattr(path):
raise SaltInvocationError("File or directory does not exist.")
cmd = ['lsattr', path]
result = __salt__['cmd.run'](cmd, python_shell=False)
result = __salt__['cmd.run'](cmd, ignore_retcode=True, python_shell=False)
results = {}
for line in result.splitlines():
if not line.startswith('lsattr'):
if not line.startswith('lsattr: '):
vals = line.split(None, 1)
results[vals[1]] = re.findall(r"[acdijstuADST]", vals[0])
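The tightened prefix check (`'lsattr: '`) filters the error lines the command can now emit under `ignore_retcode=True` without accidentally skipping files whose names merely start with `lsattr`. The per-line parsing can be sketched as:

```python
import re


def parse_lsattr(output):
    # Each valid line is '<flags> <path>'; error lines begin 'lsattr: '.
    # The flags field is reduced to the attribute letters Salt tracks.
    results = {}
    for line in output.splitlines():
        if not line.startswith('lsattr: '):
            vals = line.split(None, 1)
            results[vals[1]] = re.findall(r"[acdijstuADST]", vals[0])
    return results
```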
@ -5203,13 +5203,18 @@ def manage_file(name,
'Replace symbolic link with regular file'
if salt.utils.platform.is_windows():
ret = check_perms(name,
ret,
kwargs.get('win_owner'),
kwargs.get('win_perms'),
kwargs.get('win_deny_perms'),
None,
kwargs.get('win_inheritance'))
# This function resides in win_file.py and will be available
# on Windows. The local function will be overridden
# pylint: disable=E1120,E1121,E1123
ret = check_perms(
path=name,
ret=ret,
owner=kwargs.get('win_owner'),
grant_perms=kwargs.get('win_perms'),
deny_perms=kwargs.get('win_deny_perms'),
inheritance=kwargs.get('win_inheritance', True),
reset=kwargs.get('win_perms_reset', False))
# pylint: enable=E1120,E1121,E1123
else:
ret, _ = check_perms(name, ret, user, group, mode, attrs, follow_symlinks)
@ -5250,13 +5255,15 @@ def manage_file(name,
if salt.utils.platform.is_windows():
# This function resides in win_file.py and will be available
# on Windows. The local function will be overridden
# pylint: disable=E1121
makedirs_(name,
kwargs.get('win_owner'),
kwargs.get('win_perms'),
kwargs.get('win_deny_perms'),
kwargs.get('win_inheritance'))
# pylint: enable=E1121
# pylint: disable=E1120,E1121,E1123
makedirs_(
path=name,
owner=kwargs.get('win_owner'),
grant_perms=kwargs.get('win_perms'),
deny_perms=kwargs.get('win_deny_perms'),
inheritance=kwargs.get('win_inheritance', True),
reset=kwargs.get('win_perms_reset', False))
# pylint: enable=E1120,E1121,E1123
else:
makedirs_(name, user=user, group=group, mode=dir_mode)
@ -5369,13 +5376,18 @@ def manage_file(name,
mode = oct((0o777 ^ mask) & 0o666)
if salt.utils.platform.is_windows():
ret = check_perms(name,
ret,
kwargs.get('win_owner'),
kwargs.get('win_perms'),
kwargs.get('win_deny_perms'),
None,
kwargs.get('win_inheritance'))
# This function resides in win_file.py and will be available
# on Windows. The local function will be overridden
# pylint: disable=E1120,E1121,E1123
ret = check_perms(
path=name,
ret=ret,
owner=kwargs.get('win_owner'),
grant_perms=kwargs.get('win_perms'),
deny_perms=kwargs.get('win_deny_perms'),
inheritance=kwargs.get('win_inheritance', True),
reset=kwargs.get('win_perms_reset', False))
# pylint: enable=E1120,E1121,E1123
else:
ret, _ = check_perms(name, ret, user, group, mode, attrs)


@ -13,6 +13,7 @@ import re
# Import Salt Libs
from salt.exceptions import CommandExecutionError
import salt.utils.path
import salt.utils.versions
log = logging.getLogger(__name__)
@ -635,8 +636,10 @@ def add_port(zone, port, permanent=True, force_masquerade=None):
# This will be deprecated in a future release
if force_masquerade is None:
force_masquerade = True
salt.utils.warn_until('Neon',
'add_port function will no longer force enable masquerading in future releases. Use add_masquerade to enable masquerading.')
salt.utils.versions.warn_until(
'Neon',
'add_port function will no longer force enable masquerading '
'in future releases. Use add_masquerade to enable masquerading.')
# (DEPRECATED) Force enable masquerading
# TODO: remove in future release
@@ -709,8 +712,10 @@ def add_port_fwd(zone, src, dest, proto='tcp', dstaddr='', permanent=True, force
# This will be deprecated in a future release
if force_masquerade is None:
force_masquerade = True
salt.utils.warn_until('Neon',
'add_port_fwd function will no longer force enable masquerading in future releases. Use add_masquerade to enable masquerading.')
salt.utils.versions.warn_until(
'Neon',
'add_port_fwd function will no longer force enable masquerading '
'in future releases. Use add_masquerade to enable masquerading.')
# (DEPRECATED) Force enable masquerading
# TODO: remove in future release


@@ -36,11 +36,12 @@ _IPSET_FAMILIES = {
'ip6': 'inet6',
}
_IPSET_SET_TYPES = [
_IPSET_SET_TYPES = set([
'bitmap:ip',
'bitmap:ip,mac',
'bitmap:port',
'hash:ip',
'hash:mac',
'hash:ip,port',
'hash:ip,port,ip',
'hash:ip,port,net',
@@ -49,32 +50,37 @@ _IPSET_SET_TYPES = [
'hash:net,iface',
'hash:net,port',
'hash:net,port,net',
'hash:ip,mark',
'list:set'
]
])
_CREATE_OPTIONS = {
'bitmap:ip': ['range', 'netmask', 'timeout', 'counters', 'comment'],
'bitmap:ip,mac': ['range', 'timeout', 'counters', 'comment'],
'bitmap:port': ['range', 'timeout', 'counters', 'comment'],
'hash:ip': ['family', 'hashsize', 'maxelem', 'netmask', 'timeout', 'counters', 'comment'],
'hash:net': ['family', 'hashsize', 'maxelem', 'netmask', 'timeout', 'counters', 'comment'],
'hash:net,net': ['family', 'hashsize', 'maxelem', 'netmask', 'timeout', 'counters', 'comment'],
'hash:net,port': ['family', 'hashsize', 'maxelem', 'netmask', 'timeout', 'counters', 'comment'],
'hash:net,port,net': ['family', 'hashsize', 'maxelem', 'netmask', 'timeout', 'counters', 'comment'],
'hash:ip,port,ip': ['family', 'hashsize', 'maxelem', 'netmask', 'timeout', 'counters', 'comment'],
'hash:ip,port,net': ['family', 'hashsize', 'maxelem', 'netmask', 'timeout', 'counters', 'comment'],
'hash:ip,port': ['family', 'hashsize', 'maxelem', 'netmask', 'timeout', 'counters', 'comment'],
'hash:net,iface': ['family', 'hashsize', 'maxelem', 'netmask', 'timeout', 'counters', 'comment'],
'list:set': ['size', 'timeout', 'counters', 'comment'],
'bitmap:ip': set(['range', 'netmask', 'timeout', 'counters', 'comment', 'skbinfo']),
'bitmap:ip,mac': set(['range', 'timeout', 'counters', 'comment', 'skbinfo']),
'bitmap:port': set(['range', 'timeout', 'counters', 'comment', 'skbinfo']),
'hash:ip': set(['family', 'hashsize', 'maxelem', 'netmask', 'timeout', 'counters', 'comment', 'skbinfo']),
'hash:mac': set(['hashsize', 'maxelem', 'timeout', 'counters', 'comment', 'skbinfo']),
'hash:net': set(['family', 'hashsize', 'maxelem', 'netmask', 'timeout', 'counters', 'comment', 'skbinfo']),
'hash:net,net': set(['family', 'hashsize', 'maxelem', 'netmask', 'timeout', 'counters', 'comment', 'skbinfo']),
'hash:net,port': set(['family', 'hashsize', 'maxelem', 'netmask', 'timeout', 'counters', 'comment', 'skbinfo']),
'hash:net,port,net': set(['family', 'hashsize', 'maxelem', 'netmask', 'timeout', 'counters', 'comment', 'skbinfo']),
'hash:ip,port,ip': set(['family', 'hashsize', 'maxelem', 'netmask', 'timeout', 'counters', 'comment', 'skbinfo']),
'hash:ip,port,net': set(['family', 'hashsize', 'maxelem', 'netmask', 'timeout', 'counters', 'comment', 'skbinfo']),
'hash:ip,port': set(['family', 'hashsize', 'maxelem', 'netmask', 'timeout', 'counters', 'comment', 'skbinfo']),
'hash:ip,mark': set(['family', 'markmask', 'hashsize', 'maxelem', 'timeout', 'counters', 'comment', 'skbinfo']),
'hash:net,iface': set(['family', 'hashsize', 'maxelem', 'netmask', 'timeout', 'counters', 'comment', 'skbinfo']),
'list:set': set(['size', 'timeout', 'counters', 'comment']),
}
_CREATE_OPTIONS_WITHOUT_VALUE = set(['comment', 'counters', 'skbinfo'])
_CREATE_OPTIONS_REQUIRED = {
'bitmap:ip': ['range'],
'bitmap:ip,mac': ['range'],
'bitmap:port': ['range'],
'hash:ip': [],
'hash:mac': [],
'hash:net': [],
'hash:net,net': [],
'hash:ip,port': [],
@@ -83,24 +89,27 @@ _CREATE_OPTIONS_REQUIRED = {
'hash:ip,port,net': [],
'hash:net,port,net': [],
'hash:net,iface': [],
'hash:ip,mark': [],
'list:set': []
}
_ADD_OPTIONS = {
'bitmap:ip': ['timeout', 'packets', 'bytes'],
'bitmap:ip,mac': ['timeout', 'packets', 'bytes'],
'bitmap:port': ['timeout', 'packets', 'bytes'],
'hash:ip': ['timeout', 'packets', 'bytes'],
'hash:net': ['timeout', 'nomatch', 'packets', 'bytes'],
'hash:net,net': ['timeout', 'nomatch', 'packets', 'bytes'],
'hash:net,port': ['timeout', 'nomatch', 'packets', 'bytes'],
'hash:net,port,net': ['timeout', 'nomatch', 'packets', 'bytes'],
'hash:ip,port,ip': ['timeout', 'packets', 'bytes'],
'hash:ip,port,net': ['timeout', 'nomatch', 'packets', 'bytes'],
'hash:ip,port': ['timeout', 'nomatch', 'packets', 'bytes'],
'hash:net,iface': ['timeout', 'nomatch', 'packets', 'bytes'],
'list:set': ['timeout', 'packets', 'bytes'],
'bitmap:ip': set(['timeout', 'packets', 'bytes', 'skbmark', 'skbprio', 'skbqueue']),
'bitmap:ip,mac': set(['timeout', 'packets', 'bytes', 'skbmark', 'skbprio', 'skbqueue']),
'bitmap:port': set(['timeout', 'packets', 'bytes', 'skbmark', 'skbprio', 'skbqueue']),
'hash:ip': set(['timeout', 'packets', 'bytes', 'skbmark', 'skbprio', 'skbqueue']),
'hash:mac': set(['timeout', 'packets', 'bytes', 'skbmark', 'skbprio', 'skbqueue']),
'hash:net': set(['timeout', 'nomatch', 'packets', 'bytes', 'skbmark', 'skbprio', 'skbqueue']),
'hash:net,net': set(['timeout', 'nomatch', 'packets', 'bytes', 'skbmark', 'skbprio', 'skbqueue']),
'hash:net,port': set(['timeout', 'nomatch', 'packets', 'bytes', 'skbmark', 'skbprio', 'skbqueue']),
'hash:net,port,net': set(['timeout', 'nomatch', 'packets', 'bytes', 'skbmark', 'skbprio', 'skbqueue']),
'hash:ip,port,ip': set(['timeout', 'packets', 'bytes', 'skbmark', 'skbprio', 'skbqueue']),
'hash:ip,port,net': set(['timeout', 'nomatch', 'packets', 'bytes', 'skbmark', 'skbprio', 'skbqueue']),
'hash:ip,port': set(['timeout', 'nomatch', 'packets', 'bytes', 'skbmark', 'skbprio', 'skbqueue']),
'hash:net,iface': set(['timeout', 'nomatch', 'packets', 'bytes', 'skbmark', 'skbprio', 'skbqueue']),
'hash:ip,mark': set(['timeout', 'packets', 'bytes', 'skbmark', 'skbprio', 'skbqueue']),
'list:set': set(['timeout', 'packets', 'bytes', 'skbmark', 'skbprio', 'skbqueue']),
}
@@ -173,7 +182,10 @@ def new_set(set=None, set_type=None, family='ipv4', comment=False, **kwargs):
for item in _CREATE_OPTIONS[set_type]:
if item in kwargs:
cmd = '{0} {1} {2} '.format(cmd, item, kwargs[item])
if item in _CREATE_OPTIONS_WITHOUT_VALUE:
cmd = '{0} {1} '.format(cmd, item)
else:
cmd = '{0} {1} {2} '.format(cmd, item, kwargs[item])
# Family only valid for certain set types
if 'family' in _CREATE_OPTIONS[set_type]:
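The `new_set` change above now distinguishes flag options (emitted bare) from valued options (emitted as `name value`). A standalone sketch of that rendering rule, with illustrative names rather than the module's own internals:

```python
# Flag options such as 'comment'/'counters'/'skbinfo' take no value on the
# ipset command line; everything else is rendered as "name value".
OPTIONS_WITHOUT_VALUE = {'comment', 'counters', 'skbinfo'}

def render_options(allowed, kwargs):
    cmd = ''
    for item in allowed:
        if item in kwargs:
            if item in OPTIONS_WITHOUT_VALUE:
                cmd = '{0} {1} '.format(cmd, item)
            else:
                cmd = '{0} {1} {2} '.format(cmd, item, kwargs[item])
    return cmd
```
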
@@ -307,7 +319,7 @@ def check_set(set=None, family='ipv4'):
return True
def add(set=None, entry=None, family='ipv4', **kwargs):
def add(setname=None, entry=None, family='ipv4', **kwargs):
'''
Append an entry to the specified set.
@@ -320,14 +332,14 @@ def add(set=None, entry=None, family='ipv4', **kwargs):
salt '*' ipset.add setname 192.168.0.3,AA:BB:CC:DD:EE:FF
'''
if not set:
if not setname:
return 'Error: Set needs to be specified'
if not entry:
return 'Error: Entry needs to be specified'
setinfo = _find_set_info(set)
setinfo = _find_set_info(setname)
if not setinfo:
return 'Error: Set {0} does not exist'.format(set)
return 'Error: Set {0} does not exist'.format(setname)
settype = setinfo['Type']
@@ -335,27 +347,32 @@ def add(set=None, entry=None, family='ipv4', **kwargs):
if 'timeout' in kwargs:
if 'timeout' not in setinfo['Header']:
return 'Error: Set {0} not created with timeout support'.format(set)
return 'Error: Set {0} not created with timeout support'.format(setname)
if 'packets' in kwargs or 'bytes' in kwargs:
if 'counters' not in setinfo['Header']:
return 'Error: Set {0} not created with counters support'.format(set)
return 'Error: Set {0} not created with counters support'.format(setname)
if 'comment' in kwargs:
if 'comment' not in setinfo['Header']:
return 'Error: Set {0} not created with comment support'.format(set)
cmd = '{0} comment "{1}"'.format(cmd, kwargs['comment'])
return 'Error: Set {0} not created with comment support'.format(setname)
if 'comment' not in entry:
cmd = '{0} comment "{1}"'.format(cmd, kwargs['comment'])
if len(set(['skbmark', 'skbprio', 'skbqueue']) & set(kwargs.keys())) > 0:
if 'skbinfo' not in setinfo['Header']:
return 'Error: Set {0} not created with skbinfo support'.format(setname)
for item in _ADD_OPTIONS[settype]:
if item in kwargs:
cmd = '{0} {1} {2}'.format(cmd, item, kwargs[item])
current_members = _find_set_members(set)
current_members = _find_set_members(setname)
if cmd in current_members:
return 'Warn: Entry {0} already exists in set {1}'.format(cmd, set)
return 'Warn: Entry {0} already exists in set {1}'.format(cmd, setname)
# Using -exist to ensure entries are updated if the comment changes
cmd = '{0} add -exist {1} {2}'.format(_ipset_cmd(), set, cmd)
cmd = '{0} add -exist {1} {2}'.format(_ipset_cmd(), setname, cmd)
out = __salt__['cmd.run'](cmd, python_shell=False)
if len(out) == 0:

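The skbinfo guard added to `add()` above tests whether any of the skb* options were supplied by intersecting the option names with `kwargs`; the check in isolation:

```python
# Illustrative: detect whether any skbinfo-only option was supplied,
# mirroring the set-intersection test in the diff above.
SKB_OPTS = {'skbmark', 'skbprio', 'skbqueue'}

def wants_skbinfo(kwargs):
    return bool(SKB_OPTS & set(kwargs))
```
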

@@ -6,9 +6,10 @@ Module for sending messages to Mattermost
:configuration: This module can be used by either passing an api_url and hook
directly or by specifying both in a configuration profile in the salt
master/minion config.
For example:
master/minion config. For example:
.. code-block:: yaml
mattermost:
hook: peWcBiMOS9HrZG15peWcBiMOS9HrZG15
api_url: https://example.com
@@ -35,6 +36,7 @@ __virtualname__ = 'mattermost'
def __virtual__():
'''
Return virtual name of the module.
:return: The virtual name of the module.
'''
return __virtualname__
@@ -43,6 +45,7 @@ def __virtual__():
def _get_hook():
'''
Retrieve and return the configured Mattermost hook
:return: String: the hook string
'''
hook = __salt__['config.get']('mattermost.hook') or \
@@ -56,6 +59,7 @@ def _get_hook():
def _get_api_url():
'''
Retrieve and return the configured Mattermost API URL
:return: String: the api url string
'''
api_url = __salt__['config.get']('mattermost.api_url') or \
@@ -69,6 +73,7 @@ def _get_api_url():
def _get_channel():
'''
Retrieve the configured Mattermost channel
:return: String: the channel string
'''
channel = __salt__['config.get']('mattermost.channel') or \
@@ -80,6 +85,7 @@ def _get_channel():
def _get_username():
'''
Retrieve the configured Mattermost username
:return: String: the username string
'''
username = __salt__['config.get']('mattermost.username') or \
@@ -95,14 +101,18 @@ def post_message(message,
hook=None):
'''
Send a message to a Mattermost channel.
:param channel: The channel name or ID; either will work.
:param username: The username of the poster.
:param message: The message to send to the Mattermost channel.
:param api_url: The Mattermost api url, if not specified in the configuration.
:param hook: The Mattermost hook, if not specified in the configuration.
:return: Boolean if message was sent successfully.
CLI Example:
.. code-block:: bash
salt '*' mattermost.post_message message='Build is done'
'''
if not api_url:

salt/modules/netbox.py

@@ -0,0 +1,126 @@
# -*- coding: utf-8 -*-
'''
NetBox
======
Module to query NetBox
:codeauthor: Zach Moody <zmoody@do.co>
:maturity: new
:depends: pynetbox
The following config should be in the minion config file. In order to
work with ``secrets`` you should provide a token and path to your
private key file:
.. code-block:: yaml
netbox:
url: <NETBOX_URL>
token: <NETBOX_USERNAME_API_TOKEN (OPTIONAL)>
keyfile: </PATH/TO/NETBOX/KEY (OPTIONAL)>
.. versionadded:: Oxygen
'''
from __future__ import absolute_import
import logging
from salt.exceptions import CommandExecutionError
from salt.utils.args import clean_kwargs
log = logging.getLogger(__name__)
try:
import pynetbox
HAS_PYNETBOX = True
except ImportError:
HAS_PYNETBOX = False
AUTH_ENDPOINTS = (
'secrets',
)
def __virtual__():
'''
pynetbox must be installed.
'''
if not HAS_PYNETBOX:
return (
False,
'The netbox execution module cannot be loaded: '
'pynetbox library is not installed.'
)
else:
return True
def _config():
config = __salt__['config.get']('netbox')
if not config:
raise CommandExecutionError(
'NetBox execution module configuration could not be found'
)
return config
def _nb_obj(auth_required=False):
pynb_kwargs = {}
if auth_required:
pynb_kwargs['token'] = _config().get('token')
pynb_kwargs['private_key_file'] = _config().get('keyfile')
return pynetbox.api(_config().get('url'), **pynb_kwargs)
def _strip_url_field(input_dict):
if 'url' in input_dict.keys():
del input_dict['url']
for k, v in input_dict.items():
if isinstance(v, dict):
_strip_url_field(v)
return input_dict
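As a quick illustration of what `_strip_url_field` does to a nested NetBox result — a standalone re-implementation, not the module's own code (the sample payload is made up):

```python
def strip_url_field(d):
    # Recursively drop the 'url' key NetBox attaches to every object,
    # so results stay comparable across NetBox instances.
    d.pop('url', None)
    for value in d.values():
        if isinstance(value, dict):
            strip_url_field(value)
    return d
```
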
def filter(app, endpoint, **kwargs):
'''
Get a list of items from NetBox.
.. code-block:: bash
salt myminion netbox.filter dcim devices status=1 role=router
'''
ret = []
nb = _nb_obj(auth_required=True if app in AUTH_ENDPOINTS else False)
nb_query = getattr(getattr(nb, app), endpoint).filter(
**clean_kwargs(**kwargs)
)
if nb_query:
ret = [_strip_url_field(dict(i)) for i in nb_query]
return sorted(ret)
def get(app, endpoint, id=None, **kwargs):
'''
Get a single item from NetBox.
To get an item based on ID.
.. code-block:: bash
salt myminion netbox.get dcim devices id=123
Or using named arguments that correspond with accepted filters on
the NetBox endpoint.
.. code-block:: bash
salt myminion netbox.get dcim devices name=my-router
'''
nb = _nb_obj(auth_required=True if app in AUTH_ENDPOINTS else False)
if id:
return dict(getattr(getattr(nb, app), endpoint).get(id))
else:
return dict(
getattr(getattr(nb, app), endpoint).get(**clean_kwargs(**kwargs))
)

salt/modules/nexus.py

@@ -0,0 +1,537 @@
# -*- coding: utf-8 -*-
'''
Module for fetching artifacts from Nexus 3.x
.. versionadded:: Oxygen
'''
# Import python libs
from __future__ import absolute_import
import os
import base64
import logging
# Import Salt libs
import salt.utils.files
import salt.ext.six.moves.http_client # pylint: disable=import-error,redefined-builtin,no-name-in-module
from salt.ext.six.moves import urllib # pylint: disable=no-name-in-module
from salt.ext.six.moves.urllib.error import HTTPError, URLError # pylint: disable=no-name-in-module
from salt.exceptions import CommandExecutionError
# Import 3rd party libs
try:
from salt._compat import ElementTree as ET
HAS_ELEMENT_TREE = True
except ImportError:
HAS_ELEMENT_TREE = False
log = logging.getLogger(__name__)
__virtualname__ = 'nexus'
def __virtual__():
'''
Only load if elementtree xml library is available.
'''
if not HAS_ELEMENT_TREE:
return (False, 'Cannot load {0} module: ElementTree library unavailable'.format(__virtualname__))
else:
return True
def get_latest_snapshot(nexus_url, repository, group_id, artifact_id, packaging, target_dir='/tmp', target_file=None, classifier=None, username=None, password=None):
'''
Gets latest snapshot of the given artifact
nexus_url
URL of nexus instance
repository
Snapshot repository in nexus to retrieve artifact from, for example: libs-snapshots
group_id
Group Id of the artifact
artifact_id
Artifact Id of the artifact
packaging
Packaging type (jar,war,ear,etc)
target_dir
Target directory to download artifact to (default: /tmp)
target_file
Target file to download artifact to (by default it is target_dir/artifact_id-snapshot_version.packaging)
classifier
Artifact classifier name (ex: sources,javadoc,etc). Optional parameter.
username
nexus username. Optional parameter.
password
nexus password. Optional parameter.
'''
log.debug('======================== MODULE FUNCTION: nexus.get_latest_snapshot(nexus_url=%s, repository=%s, group_id=%s, artifact_id=%s, packaging=%s, target_dir=%s, classifier=%s)',
nexus_url, repository, group_id, artifact_id, packaging, target_dir, classifier)
headers = {}
if username and password:
headers['Authorization'] = 'Basic {0}'.format(base64.encodestring('{0}:{1}'.format(username, password)).replace('\n', ''))
artifact_metadata = _get_artifact_metadata(nexus_url=nexus_url, repository=repository, group_id=group_id, artifact_id=artifact_id, headers=headers)
version = artifact_metadata['latest_version']
snapshot_url, file_name = _get_snapshot_url(nexus_url=nexus_url, repository=repository, group_id=group_id, artifact_id=artifact_id, version=version, packaging=packaging, classifier=classifier, headers=headers)
target_file = __resolve_target_file(file_name, target_dir, target_file)
return __save_artifact(snapshot_url, target_file, headers)
def get_snapshot(nexus_url, repository, group_id, artifact_id, packaging, version, snapshot_version=None, target_dir='/tmp', target_file=None, classifier=None, username=None, password=None):
'''
Gets snapshot of the desired version of the artifact
nexus_url
URL of nexus instance
repository
Snapshot repository in nexus to retrieve artifact from, for example: libs-snapshots
group_id
Group Id of the artifact
artifact_id
Artifact Id of the artifact
packaging
Packaging type (jar,war,ear,etc)
version
Version of the artifact
target_dir
Target directory to download artifact to (default: /tmp)
target_file
Target file to download artifact to (by default it is target_dir/artifact_id-snapshot_version.packaging)
classifier
Artifact classifier name (ex: sources,javadoc,etc). Optional parameter.
username
nexus username. Optional parameter.
password
nexus password. Optional parameter.
'''
log.debug('======================== MODULE FUNCTION: nexus.get_snapshot(nexus_url=%s, repository=%s, group_id=%s, artifact_id=%s, packaging=%s, version=%s, target_dir=%s, classifier=%s)',
nexus_url, repository, group_id, artifact_id, packaging, version, target_dir, classifier)
headers = {}
if username and password:
headers['Authorization'] = 'Basic {0}'.format(base64.encodestring('{0}:{1}'.format(username, password)).replace('\n', ''))
snapshot_url, file_name = _get_snapshot_url(nexus_url=nexus_url, repository=repository, group_id=group_id, artifact_id=artifact_id, version=version, packaging=packaging, snapshot_version=snapshot_version, classifier=classifier, headers=headers)
target_file = __resolve_target_file(file_name, target_dir, target_file)
return __save_artifact(snapshot_url, target_file, headers)
def get_snapshot_version_string(nexus_url, repository, group_id, artifact_id, packaging, version, classifier=None, username=None, password=None):
'''
Gets the specific version string of a snapshot of the desired version of the artifact
nexus_url
URL of nexus instance
repository
Snapshot repository in nexus to retrieve artifact from, for example: libs-snapshots
group_id
Group Id of the artifact
artifact_id
Artifact Id of the artifact
packaging
Packaging type (jar,war,ear,etc)
version
Version of the artifact
classifier
Artifact classifier name (ex: sources,javadoc,etc). Optional parameter.
username
nexus username. Optional parameter.
password
nexus password. Optional parameter.
'''
log.debug('======================== MODULE FUNCTION: nexus.get_snapshot_version_string(nexus_url=%s, repository=%s, group_id=%s, artifact_id=%s, packaging=%s, version=%s, classifier=%s)',
nexus_url, repository, group_id, artifact_id, packaging, version, classifier)
headers = {}
if username and password:
headers['Authorization'] = 'Basic {0}'.format(base64.encodestring('{0}:{1}'.format(username, password)).replace('\n', ''))
return _get_snapshot_url(nexus_url=nexus_url, repository=repository, group_id=group_id, artifact_id=artifact_id, version=version, packaging=packaging, classifier=classifier, headers=headers, just_get_version_string=True)
def get_latest_release(nexus_url, repository, group_id, artifact_id, packaging, target_dir='/tmp', target_file=None, classifier=None, username=None, password=None):
'''
Gets the latest release of the artifact
nexus_url
URL of nexus instance
repository
Release repository in nexus to retrieve artifact from, for example: libs-releases
group_id
Group Id of the artifact
artifact_id
Artifact Id of the artifact
packaging
Packaging type (jar,war,ear,etc)
target_dir
Target directory to download artifact to (default: /tmp)
target_file
Target file to download artifact to (by default it is target_dir/artifact_id-version.packaging)
classifier
Artifact classifier name (ex: sources,javadoc,etc). Optional parameter.
username
nexus username. Optional parameter.
password
nexus password. Optional parameter.
'''
log.debug('======================== MODULE FUNCTION: nexus.get_latest_release(nexus_url=%s, repository=%s, group_id=%s, artifact_id=%s, packaging=%s, target_dir=%s, classifier=%s)',
nexus_url, repository, group_id, artifact_id, packaging, target_dir, classifier)
headers = {}
if username and password:
headers['Authorization'] = 'Basic {0}'.format(base64.encodestring('{0}:{1}'.format(username, password)).replace('\n', ''))
artifact_metadata = _get_artifact_metadata(nexus_url=nexus_url, repository=repository, group_id=group_id, artifact_id=artifact_id, headers=headers)
version = artifact_metadata['latest_version']
release_url, file_name = _get_release_url(repository, group_id, artifact_id, packaging, version, nexus_url, classifier)
target_file = __resolve_target_file(file_name, target_dir, target_file)
return __save_artifact(release_url, target_file, headers)
def get_release(nexus_url, repository, group_id, artifact_id, packaging, version, target_dir='/tmp', target_file=None, classifier=None, username=None, password=None):
'''
Gets the specified release of the artifact
nexus_url
URL of nexus instance
repository
Release repository in nexus to retrieve artifact from, for example: libs-releases
group_id
Group Id of the artifact
artifact_id
Artifact Id of the artifact
packaging
Packaging type (jar,war,ear,etc)
version
Version of the artifact
target_dir
Target directory to download artifact to (default: /tmp)
target_file
Target file to download artifact to (by default it is target_dir/artifact_id-version.packaging)
classifier
Artifact classifier name (ex: sources,javadoc,etc). Optional parameter.
username
nexus username. Optional parameter.
password
nexus password. Optional parameter.
'''
log.debug('======================== MODULE FUNCTION: nexus.get_release(nexus_url=%s, repository=%s, group_id=%s, artifact_id=%s, packaging=%s, version=%s, target_dir=%s, classifier=%s)',
nexus_url, repository, group_id, artifact_id, packaging, version, target_dir, classifier)
headers = {}
if username and password:
headers['Authorization'] = 'Basic {0}'.format(base64.encodestring('{0}:{1}'.format(username, password)).replace('\n', ''))
release_url, file_name = _get_release_url(repository, group_id, artifact_id, packaging, version, nexus_url, classifier)
target_file = __resolve_target_file(file_name, target_dir, target_file)
return __save_artifact(release_url, target_file, headers)
def __resolve_target_file(file_name, target_dir, target_file=None):
if target_file is None:
target_file = os.path.join(target_dir, file_name)
return target_file
def _get_snapshot_url(nexus_url, repository, group_id, artifact_id, version, packaging, snapshot_version=None, classifier=None, headers=None, just_get_version_string=None):
if headers is None:
headers = {}
has_classifier = classifier is not None and classifier != ""
if snapshot_version is None:
snapshot_version_metadata = _get_snapshot_version_metadata(nexus_url=nexus_url, repository=repository, group_id=group_id, artifact_id=artifact_id, version=version, headers=headers)
if packaging not in snapshot_version_metadata['snapshot_versions']:
error_message = '''Cannot find requested packaging '{packaging}' in the snapshot version metadata.
nexus_url: {nexus_url}
repository: {repository}
group_id: {group_id}
artifact_id: {artifact_id}
packaging: {packaging}
classifier: {classifier}
version: {version}'''.format(
nexus_url=nexus_url,
repository=repository,
group_id=group_id,
artifact_id=artifact_id,
packaging=packaging,
classifier=classifier,
version=version)
raise nexusError(error_message)
if has_classifier and classifier not in snapshot_version_metadata['snapshot_versions']:
error_message = '''Cannot find requested classifier '{classifier}' in the snapshot version metadata.
nexus_url: {nexus_url}
repository: {repository}
group_id: {group_id}
artifact_id: {artifact_id}
packaging: {packaging}
classifier: {classifier}
version: {version}'''.format(
nexus_url=nexus_url,
repository=repository,
group_id=group_id,
artifact_id=artifact_id,
packaging=packaging,
classifier=classifier,
version=version)
raise nexusError(error_message)
snapshot_version = snapshot_version_metadata['snapshot_versions'][packaging]
group_url = __get_group_id_subpath(group_id)
file_name = '{artifact_id}-{snapshot_version}{classifier}.{packaging}'.format(
artifact_id=artifact_id,
snapshot_version=snapshot_version,
packaging=packaging,
classifier=__get_classifier_url(classifier))
snapshot_url = '{nexus_url}/{repository}/{group_url}/{artifact_id}/{version}/{file_name}'.format(
nexus_url=nexus_url,
repository=repository,
group_url=group_url,
artifact_id=artifact_id,
version=version,
file_name=file_name)
log.debug('snapshot_url=%s', snapshot_url)
if just_get_version_string:
return snapshot_version
else:
return snapshot_url, file_name
def _get_release_url(repository, group_id, artifact_id, packaging, version, nexus_url, classifier=None):
group_url = __get_group_id_subpath(group_id)
# for released versions the suffix for the file is same as version
file_name = '{artifact_id}-{version}{classifier}.{packaging}'.format(
artifact_id=artifact_id,
version=version,
packaging=packaging,
classifier=__get_classifier_url(classifier))
release_url = '{nexus_url}/{repository}/{group_url}/{artifact_id}/{version}/{file_name}'.format(
nexus_url=nexus_url,
repository=repository,
group_url=group_url,
artifact_id=artifact_id,
version=version,
file_name=file_name)
log.debug('release_url=%s', release_url)
return release_url, file_name
def _get_artifact_metadata_url(nexus_url, repository, group_id, artifact_id):
group_url = __get_group_id_subpath(group_id)
# for released versions the suffix for the file is same as version
artifact_metadata_url = '{nexus_url}/{repository}/{group_url}/{artifact_id}/maven-metadata.xml'.format(
nexus_url=nexus_url,
repository=repository,
group_url=group_url,
artifact_id=artifact_id)
log.debug('artifact_metadata_url=%s', artifact_metadata_url)
return artifact_metadata_url
def _get_artifact_metadata_xml(nexus_url, repository, group_id, artifact_id, headers):
artifact_metadata_url = _get_artifact_metadata_url(
nexus_url=nexus_url,
repository=repository,
group_id=group_id,
artifact_id=artifact_id
)
try:
request = urllib.request.Request(artifact_metadata_url, None, headers)
artifact_metadata_xml = urllib.request.urlopen(request).read()
except (HTTPError, URLError) as err:
message = 'Could not fetch data from url: {0}. ERROR: {1}'.format(
artifact_metadata_url,
err
)
raise CommandExecutionError(message)
log.debug('artifact_metadata_xml=%s', artifact_metadata_xml)
return artifact_metadata_xml
def _get_artifact_metadata(nexus_url, repository, group_id, artifact_id, headers):
metadata_xml = _get_artifact_metadata_xml(nexus_url=nexus_url, repository=repository, group_id=group_id, artifact_id=artifact_id, headers=headers)
root = ET.fromstring(metadata_xml)
assert group_id == root.find('groupId').text
assert artifact_id == root.find('artifactId').text
versions = root.find('versioning').find('versions')
versionList = []
for version in versions.iter('version'):
versionList.append(version.text)
latest_version = max(versionList)
log.debug('latest version=%s', latest_version)
return {
'latest_version': latest_version
}
# functions for handling snapshots
def _get_snapshot_version_metadata_url(nexus_url, repository, group_id, artifact_id, version):
group_url = __get_group_id_subpath(group_id)
# for released versions the suffix for the file is same as version
snapshot_version_metadata_url = '{nexus_url}/{repository}/{group_url}/{artifact_id}/{version}/maven-metadata.xml'.format(
nexus_url=nexus_url,
repository=repository,
group_url=group_url,
artifact_id=artifact_id,
version=version)
log.debug('snapshot_version_metadata_url=%s', snapshot_version_metadata_url)
return snapshot_version_metadata_url
def _get_snapshot_version_metadata_xml(nexus_url, repository, group_id, artifact_id, version, headers):
snapshot_version_metadata_url = _get_snapshot_version_metadata_url(
nexus_url=nexus_url,
repository=repository,
group_id=group_id,
artifact_id=artifact_id,
version=version
)
try:
request = urllib.request.Request(snapshot_version_metadata_url, None, headers)
snapshot_version_metadata_xml = urllib.request.urlopen(request).read()
except (HTTPError, URLError) as err:
message = 'Could not fetch data from url: {0}. ERROR: {1}'.format(
snapshot_version_metadata_url,
err
)
raise CommandExecutionError(message)
log.debug('snapshot_version_metadata_xml=%s', snapshot_version_metadata_xml)
return snapshot_version_metadata_xml
def _get_snapshot_version_metadata(nexus_url, repository, group_id, artifact_id, version, headers):
metadata_xml = _get_snapshot_version_metadata_xml(nexus_url=nexus_url, repository=repository, group_id=group_id, artifact_id=artifact_id, version=version, headers=headers)
metadata = ET.fromstring(metadata_xml)
assert group_id == metadata.find('groupId').text
assert artifact_id == metadata.find('artifactId').text
assert version == metadata.find('version').text
snapshot_versions = metadata.find('versioning').find('snapshotVersions')
extension_version_dict = {}
for snapshot_version in snapshot_versions:
extension = snapshot_version.find('extension').text
value = snapshot_version.find('value').text
extension_version_dict[extension] = value
if snapshot_version.find('classifier') is not None:
classifier = snapshot_version.find('classifier').text
extension_version_dict[classifier] = value
return {
'snapshot_versions': extension_version_dict
}
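`_get_snapshot_version_metadata` maps each `<extension>` (and optional `<classifier>`) in `maven-metadata.xml` to its timestamped `<value>`. A minimal standalone parse of the same XML shape, with an illustrative sample document:

```python
import xml.etree.ElementTree as ET

# Illustrative maven-metadata.xml fragment, not fetched from a real Nexus.
SAMPLE = '''<metadata>
  <versioning><snapshotVersions>
    <snapshotVersion><extension>jar</extension><value>1.0-20171204.1</value></snapshotVersion>
    <snapshotVersion><classifier>sources</classifier><extension>jar</extension><value>1.0-20171204.1</value></snapshotVersion>
  </snapshotVersions></versioning>
</metadata>'''

def snapshot_versions(xml_text):
    root = ET.fromstring(xml_text)
    out = {}
    for sv in root.find('versioning').find('snapshotVersions'):
        value = sv.find('value').text
        out[sv.find('extension').text] = value
        if sv.find('classifier') is not None:
            out[sv.find('classifier').text] = value
    return out
```
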
def __save_artifact(artifact_url, target_file, headers):
log.debug("__save_artifact(%s, %s)", artifact_url, target_file)
result = {
'status': False,
'changes': {},
'comment': ''
}
if os.path.isfile(target_file):
log.debug("File {0} already exists, checking checksum...".format(target_file))
checksum_url = artifact_url + ".sha1"
checksum_success, artifact_sum, checksum_comment = __download(checksum_url, headers)
if checksum_success:
log.debug("Downloaded SHA1 SUM: %s", artifact_sum)
file_sum = __salt__['file.get_hash'](path=target_file, form='sha1')
log.debug("Target file (%s) SHA1 SUM: %s", target_file, file_sum)
if artifact_sum == file_sum:
result['status'] = True
result['target_file'] = target_file
result['comment'] = 'File {0} already exists, checksum matches with nexus.\n' \
'Checksum URL: {1}'.format(target_file, checksum_url)
return result
else:
result['comment'] = 'File {0} already exists, checksum does not match with nexus!\n'\
'Checksum URL: {1}'.format(target_file, checksum_url)
else:
result['status'] = False
result['comment'] = checksum_comment
return result
log.debug('Downloading: {url} -> {target_file}'.format(url=artifact_url, target_file=target_file))
try:
request = urllib.request.Request(artifact_url, None, headers)
f = urllib.request.urlopen(request)
with salt.utils.files.fopen(target_file, "wb") as local_file:
local_file.write(f.read())
result['status'] = True
result['comment'] = __append_comment(('Artifact downloaded from URL: {0}'.format(artifact_url)), result['comment'])
result['changes']['downloaded_file'] = target_file
result['target_file'] = target_file
except (HTTPError, URLError) as e:
result['status'] = False
result['comment'] = __get_error_comment(e, artifact_url)
return result
def __get_group_id_subpath(group_id):
group_url = group_id.replace('.', '/')
return group_url
def __get_classifier_url(classifier):
has_classifier = classifier is not None and classifier != ""
return "-" + classifier if has_classifier else ""
def __download(request_url, headers):
log.debug('Downloading content from {0}'.format(request_url))
success = False
content = None
comment = None
try:
request = urllib.request.Request(request_url, None, headers)
url = urllib.request.urlopen(request)
content = url.read()
success = True
except HTTPError as e:
comment = __get_error_comment(e, request_url)
return success, content, comment
def __get_error_comment(http_error, request_url):
if http_error.code == salt.ext.six.moves.http_client.NOT_FOUND:
comment = 'HTTP Error 404. Request URL: ' + request_url
elif http_error.code == salt.ext.six.moves.http_client.CONFLICT:
comment = 'HTTP Error 409: Conflict. Requested URL: {0}. \n' \
'This error may be caused by reading a snapshot artifact from a non-snapshot repository.'.format(request_url)
else:
comment = 'HTTP Error {err_code}. Request URL: {url}'.format(err_code=http_error.code, url=request_url)
return comment
def __append_comment(new_comment, current_comment=''):
return current_comment+'\n'+new_comment
class nexusError(Exception):
def __init__(self, value):
super(nexusError, self).__init__()
self.value = value
def __str__(self):
return repr(self.value)
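The module above decides whether to re-download an artifact by comparing the remote `.sha1` file against a locally computed digest of the existing file. A dependency-free sketch of that comparison (the helper name is illustrative, not part of the module):

```python
import hashlib

def sha1_matches(path, expected_hex):
    # Stream the file in chunks so large artifacts never load fully into memory.
    digest = hashlib.sha1()
    with open(path, 'rb') as fh:
        for chunk in iter(lambda: fh.read(65536), b''):
            digest.update(chunk)
    return digest.hexdigest() == expected_hex.strip().lower()
```

If the digests match, the download is skipped, mirroring the early return in the artifact code.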


@ -519,6 +519,22 @@ def get_domain_config():
return __proxy__['panos.call'](query)
def get_dos_blocks():
'''
Show the DoS block-ip table.
CLI Example:
.. code-block:: bash
salt '*' panos.get_dos_blocks
'''
query = {'type': 'op', 'cmd': '<show><dos-block-table><all></all></dos-block-table></show>'}
return __proxy__['panos.call'](query)
def get_fqdn_cache():
'''
Print FQDNs used in rules and their IPs.
@ -846,6 +862,32 @@ def get_lldp_neighbors():
return __proxy__['panos.call'](query)
def get_local_admins():
'''
Show all local administrator accounts.
CLI Example:
.. code-block:: bash
salt '*' panos.get_local_admins
'''
admin_list = get_users_config()
response = []
if 'users' not in admin_list['result']:
return response
if isinstance(admin_list['result']['users']['entry'], list):
for entry in admin_list['result']['users']['entry']:
response.append(entry['name'])
else:
response.append(admin_list['result']['users']['entry']['name'])
return response
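get_local_admins has to handle a quirk of the PAN-OS XML API: a single `<entry>` is decoded as a dict, while several become a list. A small hedged helper (the function name is illustrative) showing the same normalization:

```python
def entry_names(result):
    # PAN-OS returns a dict for one <entry> and a list for many;
    # normalize both shapes into a flat list of names.
    users = result.get('users')
    if not users:
        return []
    entries = users['entry']
    if not isinstance(entries, list):
        entries = [entries]
    return [entry['name'] for entry in entries]
```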
def get_logdb_quota():
'''
Report the logdb quotas.
@ -2157,6 +2199,120 @@ def shutdown():
return __proxy__['panos.call'](query)
def test_fib_route(ip=None,
vr='vr1'):
'''
Perform a route lookup within active route table (fib).
ip (str): The destination IP address to test.
vr (str): The name of the virtual router to test.
CLI Example:
.. code-block:: bash
salt '*' panos.test_fib_route 4.2.2.2
salt '*' panos.test_fib_route 4.2.2.2 my-vr
'''
xpath = "<test><routing><fib-lookup>"
if ip:
xpath += "<ip>{0}</ip>".format(ip)
if vr:
xpath += "<virtual-router>{0}</virtual-router>".format(vr)
xpath += "</fib-lookup></routing></test>"
query = {'type': 'op',
'cmd': xpath}
return __proxy__['panos.call'](query)
def test_security_policy(sourcezone=None,
destinationzone=None,
source=None,
destination=None,
protocol=None,
port=None,
application=None,
category=None,
vsys='1',
allrules=False):
'''
Checks which security policy a connection will match on the device.
sourcezone (str): The source zone matched against the connection.
destinationzone (str): The destination zone matched against the connection.
source (str): The source address. This must be a single IP address.
destination (str): The destination address. This must be a single IP address.
protocol (int): The protocol number for the connection. This is the numerical representation of the protocol.
port (int): The port number for the connection.
application (str): The application that should be matched.
category (str): The category that should be matched.
vsys (int): The numerical representation of the VSYS ID.
allrules (bool): Show all potential match rules until first allow rule.
CLI Example:
.. code-block:: bash
salt '*' panos.test_security_policy sourcezone=trust destinationzone=untrust protocol=6 port=22
salt '*' panos.test_security_policy sourcezone=trust destinationzone=untrust protocol=6 port=22 vsys=2
'''
xpath = "<test><security-policy-match>"
if sourcezone:
xpath += "<from>{0}</from>".format(sourcezone)
if destinationzone:
xpath += "<to>{0}</to>".format(destinationzone)
if source:
xpath += "<source>{0}</source>".format(source)
if destination:
xpath += "<destination>{0}</destination>".format(destination)
if protocol:
xpath += "<protocol>{0}</protocol>".format(protocol)
if port:
xpath += "<destination-port>{0}</destination-port>".format(port)
if application:
xpath += "<application>{0}</application>".format(application)
if category:
xpath += "<category>{0}</category>".format(category)
if allrules:
xpath += "<show-all>yes</show-all>"
xpath += "</security-policy-match></test>"
query = {'type': 'op',
'vsys': "vsys{0}".format(vsys),
'cmd': xpath}
return __proxy__['panos.call'](query)
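Both test functions build an op command by concatenating a tag only when its argument was supplied. That pattern generalizes; this sketch (function name hypothetical) assembles the same kind of XML string:

```python
def build_op_cmd(root, fields):
    # fields is an ordered list of (tag, value) pairs; falsy values are
    # skipped, mirroring the "if sourcezone: xpath += ..." chains above.
    inner = ''.join('<{0}>{1}</{0}>'.format(tag, value)
                    for tag, value in fields if value)
    return '<test><{0}>{1}</{0}></test>'.format(root, inner)
```'
assert build_op_cmd('fib-lookup', [('ip', None), ('virtual-router', 'vr1')]) == '<test><fib-lookup><virtual-router>vr1</virtual-router></fib-lookup></test>'
</test>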
def unlock_admin(username=None):
'''
Unlocks a locked administrator account.


@ -1153,6 +1153,61 @@ def list_upgrades(bin_env=None,
return packages
def is_installed(pkgname=None,
bin_env=None,
user=None,
cwd=None):
'''
Filter the list of installed packages from ``freeze`` and return ``True``
if ``pkgname`` is in the list of installed packages, otherwise ``False``.
.. note::
If the version of pip available is older than 8.0.3, the packages
wheel, setuptools, and distribute will not be reported by this function
even if they are installed. Unlike
:py:func:`pip.freeze <salt.modules.pip.freeze>`, this function always
reports the version of pip which is installed.
CLI Example:
.. code-block:: bash
salt '*' pip.is_installed salt
.. versionadded:: Oxygen
The packages wheel, setuptools, and distribute are included if the
installed pip is new enough.
'''
for line in freeze(bin_env=bin_env, user=user, cwd=cwd):
if line.startswith('-f') or line.startswith('#'):
# ignore -f line as it contains --find-links directory
# ignore comment lines
continue
elif line.startswith('-e hg+not trust'):
# ignore hg + not trust problem
continue
elif line.startswith('-e'):
line = line.split('-e ')[1]
version_, name = line.split('#egg=')
elif len(line.split('===')) >= 2:
name = line.split('===')[0]
version_ = line.split('===')[1]
elif len(line.split('==')) >= 2:
name = line.split('==')[0]
version_ = line.split('==')[1]
else:
logger.error('Can\'t parse line \'{0}\''.format(line))
continue
if pkgname:
if pkgname == name.lower():
return True
return False
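The line-by-line parsing in is_installed can be isolated into a helper. A sketch (helper name illustrative) covering the same ``pip freeze`` line formats:

```python
def parse_freeze_line(line):
    # Returns (name, version) or None for ignorable/unparseable lines.
    if line.startswith('-f') or line.startswith('#'):
        # --find-links directories and comments carry no package info.
        return None
    if line.startswith('-e '):
        # Editable installs look like "-e <url>#egg=<name>".
        tail = line.split('-e ', 1)[1]
        if '#egg=' not in tail:
            return None
        version_, name = tail.split('#egg=', 1)
        return name, version_
    for sep in ('===', '=='):
        if sep in line:
            name, version_ = line.split(sep, 1)
            return name, version_
    return None
```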
def upgrade_available(pkg,
bin_env=None,
user=None,


@ -643,12 +643,18 @@ def _parse_settings_eth(opts, iface_type, enabled, iface):
result[opt] = opts[opt]
if iface_type not in ['bond', 'vlan', 'bridge', 'ipip']:
auto_addr = False
if 'addr' in opts:
if salt.utils.validate.net.mac(opts['addr']):
result['addr'] = opts['addr']
else:
_raise_error_iface(iface, opts['addr'], ['AA:BB:CC:DD:EE:FF'])
elif opts['addr'] == 'auto':
auto_addr = True
elif opts['addr'] != 'none':
_raise_error_iface(iface, opts['addr'], ['AA:BB:CC:DD:EE:FF', 'auto', 'none'])
else:
auto_addr = True
if auto_addr:
# If interface type is slave for bond, not setting hwaddr
if iface_type != 'slave':
ifaces = __salt__['network.interfaces']()
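The hunk above extends the ``addr`` handling to accept a literal MAC address, ``auto``, or ``none``. A standalone sketch of that branching, with a simple regex standing in for ``salt.utils.validate.net.mac``:

```python
import re

# Illustrative stand-in for salt.utils.validate.net.mac
MAC_RE = re.compile(r'^([0-9A-Fa-f]{2}:){5}[0-9A-Fa-f]{2}$')

def parse_addr(value):
    # Mirrors the accepted forms: a literal MAC address, 'auto', or 'none'.
    if value == 'auto':
        return ('auto', None)
    if value == 'none':
        return ('none', None)
    if MAC_RE.match(value):
        return ('mac', value)
    raise ValueError("expected 'AA:BB:CC:DD:EE:FF', 'auto' or 'none', got %r" % value)
```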


@ -474,7 +474,7 @@ def sync_returners(saltenv=None, refresh=True, extmod_whitelist=None, extmod_bla
'''
.. versionadded:: 0.10.0
Sync beacons from ``salt://_returners`` to the minion
Sync returners from ``salt://_returners`` to the minion
saltenv
The fileserver environment from which to sync. To sync from more than
@ -585,6 +585,44 @@ def sync_engines(saltenv=None, refresh=False, extmod_whitelist=None, extmod_blac
return ret
def sync_thorium(saltenv=None, refresh=False, extmod_whitelist=None, extmod_blacklist=None):
'''
.. versionadded:: Oxygen
Sync Thorium modules from ``salt://_thorium`` to the minion
saltenv
The fileserver environment from which to sync. To sync from more than
one environment, pass a comma-separated list.
If not passed, then all environments configured in the :ref:`top files
<states-top>` will be checked for Thorium modules to sync. If no top files are
found, then the ``base`` environment will be synced.
refresh: ``True``
If ``True``, refresh the available execution modules on the minion.
This refresh will be performed even if no new Thorium modules are synced.
Set to ``False`` to prevent this refresh.
extmod_whitelist
comma-separated list of modules to sync
extmod_blacklist
comma-separated list of modules to blacklist based on type
CLI Examples:
.. code-block:: bash
salt '*' saltutil.sync_thorium
salt '*' saltutil.sync_thorium saltenv=base,dev
'''
ret = _sync('thorium', saltenv, extmod_whitelist, extmod_blacklist)
if refresh:
refresh_modules()
return ret
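The extmod_whitelist/extmod_blacklist parameters take comma-separated module names. A hedged sketch of how such filtering could be applied (this is an illustration of the idea, not saltutil's internal logic):

```python
def filter_modules(mods, whitelist=None, blacklist=None):
    # Accept comma-separated strings or lists; the blacklist wins
    # over the whitelist when both name the same module.
    if isinstance(whitelist, str):
        whitelist = [m.strip() for m in whitelist.split(',')]
    if isinstance(blacklist, str):
        blacklist = [m.strip() for m in blacklist.split(',')]
    kept = []
    for mod in mods:
        if blacklist and mod in blacklist:
            continue
        if whitelist and mod not in whitelist:
            continue
        kept.append(mod)
    return kept
```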
def sync_output(saltenv=None, refresh=True, extmod_whitelist=None, extmod_blacklist=None):
'''
Sync outputters from ``salt://_output`` to the minion
@ -628,7 +666,7 @@ def sync_clouds(saltenv=None, refresh=True, extmod_whitelist=None, extmod_blackl
'''
.. versionadded:: 2017.7.0
Sync utility modules from ``salt://_cloud`` to the minion
Sync cloud modules from ``salt://_cloud`` to the minion
saltenv : base
The fileserver environment from which to sync. To sync from more than
@ -864,6 +902,7 @@ def sync_all(saltenv=None, refresh=True, extmod_whitelist=None, extmod_blacklist
ret['log_handlers'] = sync_log_handlers(saltenv, False, extmod_whitelist, extmod_blacklist)
ret['proxymodules'] = sync_proxymodules(saltenv, False, extmod_whitelist, extmod_blacklist)
ret['engines'] = sync_engines(saltenv, False, extmod_whitelist, extmod_blacklist)
ret['thorium'] = sync_thorium(saltenv, False, extmod_whitelist, extmod_blacklist)
if __opts__['file_client'] == 'local':
ret['pillar'] = sync_pillar(saltenv, False, extmod_whitelist, extmod_blacklist)
if refresh:


@ -10,6 +10,7 @@ Module for managing the Salt schedule on a minion
from __future__ import absolute_import
import copy as pycopy
import difflib
import logging
import os
import yaml
@ -23,7 +24,6 @@ from salt.ext import six
__proxyenabled__ = ['*']
import logging
log = logging.getLogger(__name__)
__func_alias__ = {
@ -58,6 +58,7 @@ SCHEDULE_CONF = [
'return_config',
'return_kwargs',
'run_on_start',
'skip_during_range',
]
@ -353,7 +354,7 @@ def build_schedule_item(name, **kwargs):
for item in ['range', 'when', 'once', 'once_fmt', 'cron',
'returner', 'after', 'return_config', 'return_kwargs',
'until', 'run_on_start']:
'until', 'run_on_start', 'skip_during_range']:
if item in kwargs:
schedule[name][item] = kwargs[item]
@ -771,7 +772,7 @@ def disable(**kwargs):
return ret
except KeyError:
# Effectively a no-op, since we can't really return without an event system
ret['comment'] = 'Event module not available. Schedule enable job failed.'
ret['comment'] = 'Event module not available. Schedule disable job failed.'
return ret
@ -951,3 +952,191 @@ def copy(name, target, **kwargs):
ret['minions'] = minions
return ret
return ret
def postpone_job(name, current_time, new_time, **kwargs):
'''
Postpone a job in the minion's schedule
Current time and new time should be specified as Unix timestamps
.. versionadded:: Oxygen
CLI Example:
.. code-block:: bash
salt '*' schedule.postpone_job job current_time new_time
'''
ret = {'comment': [],
'result': True}
if not name:
ret['comment'] = 'Job name is required.'
ret['result'] = False
return ret
if not current_time:
ret['comment'] = 'Job current time is required.'
ret['result'] = False
return ret
else:
if not isinstance(current_time, six.integer_types):
ret['comment'] = 'Job current time must be an integer.'
ret['result'] = False
return ret
if not new_time:
ret['comment'] = 'Job new_time is required.'
ret['result'] = False
return ret
else:
if not isinstance(new_time, six.integer_types):
ret['comment'] = 'Job new time must be an integer.'
ret['result'] = False
return ret
if 'test' in __opts__ and __opts__['test']:
ret['comment'] = 'Job: {0} would be postponed in schedule.'.format(name)
else:
if name in list_(show_all=True, where='opts', return_yaml=False):
event_data = {'name': name,
'time': current_time,
'new_time': new_time,
'func': 'postpone_job'}
elif name in list_(show_all=True, where='pillar', return_yaml=False):
event_data = {'name': name,
'time': current_time,
'new_time': new_time,
'where': 'pillar',
'func': 'postpone_job'}
else:
ret['comment'] = 'Job {0} does not exist.'.format(name)
ret['result'] = False
return ret
try:
eventer = salt.utils.event.get_event('minion', opts=__opts__)
res = __salt__['event.fire'](event_data, 'manage_schedule')
if res:
event_ret = eventer.get_event(tag='/salt/minion/minion_schedule_postpone_job_complete', wait=30)
if event_ret and event_ret['complete']:
schedule = event_ret['schedule']
# check item exists in schedule and is enabled
if name in schedule and schedule[name]['enabled']:
ret['result'] = True
ret['comment'] = 'Postponed Job {0} in schedule.'.format(name)
else:
ret['result'] = False
ret['comment'] = 'Failed to postpone job {0} in schedule.'.format(name)
return ret
except KeyError:
# Effectively a no-op, since we can't really return without an event system
ret['comment'] = 'Event module not available. Schedule postpone job failed.'
return ret
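postpone_job expects ``current_time`` and ``new_time`` as integer Unix timestamps. One way to derive them from a datetime (using UTC so the result is unambiguous):

```python
import calendar
import datetime

def to_unix_utc(dt):
    # Convert a naive UTC datetime to the integer Unix timestamp
    # that schedule.postpone_job expects.
    return calendar.timegm(dt.utctimetuple())
```

For example, ``to_unix_utc(datetime.datetime(2017, 12, 4, 11, 0))`` gives the value to pass on the CLI.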
def skip_job(name, time, **kwargs):
'''
Skip a job in the minion's schedule at the specified time.
The time to skip should be specified as a Unix timestamp
.. versionadded:: Oxygen
CLI Example:
.. code-block:: bash
salt '*' schedule.skip_job job time
'''
ret = {'comment': [],
'result': True}
if not name:
ret['comment'] = 'Job name is required.'
ret['result'] = False
return ret
if not time:
ret['comment'] = 'Job time is required.'
ret['result'] = False
return ret
if 'test' in __opts__ and __opts__['test']:
ret['comment'] = 'Job: {0} would be skipped in schedule.'.format(name)
else:
if name in list_(show_all=True, where='opts', return_yaml=False):
event_data = {'name': name,
'time': time,
'func': 'skip_job'}
elif name in list_(show_all=True, where='pillar', return_yaml=False):
event_data = {'name': name,
'time': time,
'where': 'pillar',
'func': 'skip_job'}
else:
ret['comment'] = 'Job {0} does not exist.'.format(name)
ret['result'] = False
return ret
try:
eventer = salt.utils.event.get_event('minion', opts=__opts__)
res = __salt__['event.fire'](event_data, 'manage_schedule')
if res:
event_ret = eventer.get_event(tag='/salt/minion/minion_schedule_skip_job_complete', wait=30)
if event_ret and event_ret['complete']:
schedule = event_ret['schedule']
# check item exists in schedule and is enabled
if name in schedule and schedule[name]['enabled']:
ret['result'] = True
ret['comment'] = 'Added Skip Job {0} in schedule.'.format(name)
else:
ret['result'] = False
ret['comment'] = 'Failed to skip job {0} in schedule.'.format(name)
return ret
except KeyError:
# Effectively a no-op, since we can't really return without an event system
ret['comment'] = 'Event module not available. Schedule skip job failed.'
return ret
def show_next_fire_time(name, **kwargs):
'''
Show the next fire time for a scheduled job
.. versionadded:: Oxygen
CLI Example:
.. code-block:: bash
salt '*' schedule.show_next_fire_time job_name
'''
ret = {'comment': [],
'result': True}
if not name:
ret['comment'] = 'Job name is required.'
ret['result'] = False
return ret
event_ret = None
try:
event_data = {'name': name, 'func': 'get_next_fire_time'}
eventer = salt.utils.event.get_event('minion', opts=__opts__)
res = __salt__['event.fire'](event_data,
'manage_schedule')
if res:
event_ret = eventer.get_event(tag='/salt/minion/minion_schedule_next_fire_time_complete', wait=30)
except KeyError:
# Effectively a no-op, since we can't really return without an event system
ret = {}
ret['comment'] = 'Event module not available. Schedule show next fire time failed.'
ret['result'] = True
log.debug(ret['comment'])
return ret
return event_ret


@ -43,6 +43,7 @@ from salt.runners.state import orchestrate as _orchestrate
# Import 3rd-party libs
from salt.ext import six
import msgpack
__proxyenabled__ = ['*']
@ -165,6 +166,99 @@ def _snapper_post(opts, jid, pre_num):
log.error('Failed to create snapper pre snapshot for jid: {0}'.format(jid))
def pause(jid, state_id=None, duration=None):
'''
Set up a state id pause. This instructs a running state to pause at a given
state id. This needs to pass in the jid of the running state and can
optionally pass in a duration in seconds. If a state_id is not passed then
the jid referenced will be paused at the beginning of the next state run.
The given state id is the id of a given state execution, so given a state
that looks like this:
.. code-block:: yaml
vim:
pkg.installed: []
The state_id to pass to `pause` is `vim`
CLI Examples:
.. code-block:: bash
salt '*' state.pause 20171130110407769519
salt '*' state.pause 20171130110407769519 vim
salt '*' state.pause 20171130110407769519 vim 20
'''
jid = str(jid)
if state_id is None:
state_id = '__all__'
pause_dir = os.path.join(__opts__[u'cachedir'], 'state_pause')
pause_path = os.path.join(pause_dir, jid)
if not os.path.exists(pause_dir):
try:
os.makedirs(pause_dir)
except OSError:
# File created in the gap
pass
data = {}
if os.path.exists(pause_path):
with salt.utils.files.fopen(pause_path, 'rb') as fp_:
data = msgpack.loads(fp_.read())
if state_id not in data:
data[state_id] = {}
if duration:
data[state_id]['duration'] = int(duration)
with salt.utils.files.fopen(pause_path, 'wb') as fp_:
fp_.write(msgpack.dumps(data))
def resume(jid, state_id=None):
'''
Remove a pause from a jid, allowing it to continue. If the state_id is
not specified then a general pause will be resumed.
The given state_id is the id of a given state execution, so given a state
that looks like this:
.. code-block:: yaml
vim:
pkg.installed: []
The state_id to pass to `resume` is `vim`
CLI Examples:
.. code-block:: bash
salt '*' state.resume 20171130110407769519
salt '*' state.resume 20171130110407769519 vim
'''
jid = str(jid)
if state_id is None:
state_id = '__all__'
pause_dir = os.path.join(__opts__[u'cachedir'], 'state_pause')
pause_path = os.path.join(pause_dir, jid)
if not os.path.exists(pause_dir):
try:
os.makedirs(pause_dir)
except OSError:
# File created in the gap
pass
data = {}
if os.path.exists(pause_path):
with salt.utils.files.fopen(pause_path, 'rb') as fp_:
data = msgpack.loads(fp_.read())
else:
return True
if state_id in data:
data.pop(state_id)
with salt.utils.files.fopen(pause_path, 'wb') as fp_:
fp_.write(msgpack.dumps(data))
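pause and resume share a read-modify-write cycle on a per-jid file keyed by state_id. A dependency-free sketch of the same merge logic (json stands in for msgpack so the example runs anywhere; the helper names are illustrative):

```python
import json
import os

def set_pause(pause_dir, jid, state_id='__all__', duration=None):
    # Merge a pause entry into the jid's pause file, as state.pause does.
    path = os.path.join(pause_dir, str(jid))
    data = {}
    if os.path.exists(path):
        with open(path) as fh:
            data = json.load(fh)
    data.setdefault(state_id, {})
    if duration:
        data[state_id]['duration'] = int(duration)
    with open(path, 'w') as fh:
        json.dump(data, fh)
    return data

def clear_pause(pause_dir, jid, state_id='__all__'):
    # Drop a pause entry and rewrite the file, as state.resume does.
    path = os.path.join(pause_dir, str(jid))
    if not os.path.exists(path):
        return True
    with open(path) as fh:
        data = json.load(fh)
    data.pop(state_id, None)
    with open(path, 'w') as fh:
        json.dump(data, fh)
    return data
```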
def orchestrate(mods,
saltenv='base',
test=None,

salt/modules/telegram.py Normal file

@ -0,0 +1,143 @@
# -*- coding: utf-8 -*-
'''
Module for sending messages via Telegram.
:configuration: In order to send a message via Telegram, certain
configuration is required in /etc/salt/minion on the relevant minions or
in the pillar. Some sample configs might look like::
telegram.chat_id: '123456789'
telegram.token: '00000000:xxxxxxxxxxxxxxxxxxxxxxxx'
'''
from __future__ import absolute_import
# Import Python libs
import logging
from salt.exceptions import SaltInvocationError
# Import 3rd-party libs
try:
import requests
HAS_REQUESTS = True
except ImportError:
HAS_REQUESTS = False
log = logging.getLogger(__name__)
__virtualname__ = 'telegram'
def __virtual__():
'''
Return virtual name of the module.
:return: The virtual name of the module.
'''
if not HAS_REQUESTS:
return False
return __virtualname__
def _get_chat_id():
'''
Retrieve and return the configured Telegram chat id
:return: String: the chat id string
'''
chat_id = __salt__['config.get']('telegram:chat_id') or \
__salt__['config.get']('telegram.chat_id')
if not chat_id:
raise SaltInvocationError('No Telegram chat id found')
return chat_id
def _get_token():
'''
Retrieve and return the configured Telegram token
:return: String: the token string
'''
token = __salt__['config.get']('telegram:token') or \
__salt__['config.get']('telegram.token')
if not token:
raise SaltInvocationError('No Telegram token found')
return token
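Both helpers above try the colon-delimited key first and fall back to the dotted form. A sketch of that lookup pattern against a plain dict (the function name is hypothetical):

```python
def lookup_setting(config, *keys):
    # Return the first truthy value among the candidate keys,
    # e.g. 'telegram:chat_id' then 'telegram.chat_id'.
    for key in keys:
        value = config.get(key)
        if value:
            return value
    raise ValueError('no value found for {0}'.format(' or '.join(keys)))
```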
def post_message(message, chat_id=None, token=None):
'''
Send a message to a Telegram chat.
:param message: The message to send to the Telegram chat.
:param chat_id: (optional) The Telegram chat id.
:param token: (optional) The Telegram API token.
:return: Boolean if message was sent successfully.
CLI Example:
.. code-block:: bash
salt '*' telegram.post_message message="Hello Telegram!"
'''
if not chat_id:
chat_id = _get_chat_id()
if not token:
token = _get_token()
if not message:
log.error('message is a required option.')
return _post_message(message=message, chat_id=chat_id, token=token)
def _post_message(message, chat_id, token):
'''
Send a message to a Telegram chat.
:param chat_id: The chat id.
:param message: The message to send to the telegram chat.
:param token: The Telegram API token.
:return: Boolean if message was sent successfully.
'''
url = 'https://api.telegram.org/bot{0}/sendMessage'.format(token)
parameters = dict()
if chat_id:
parameters['chat_id'] = chat_id
if message:
parameters['text'] = message
try:
response = requests.post(
url,
data=parameters
)
result = response.json()
log.debug(
'Raw response of the telegram request is {0}'.format(response)
)
except Exception:
log.exception(
'Sending telegram api request failed'
)
return False
# Check if the Telegram Bot API returned successfully.
if not result.get('ok', False):
log.debug(
'Sending telegram api request failed due to error {0} ({1})'.format(
result.get('error_code'), result.get('description')
)
)
return False
return True


@ -340,6 +340,10 @@ def zone_compare(timezone):
if 'Solaris' in __grains__['os_family'] or 'AIX' in __grains__['os_family']:
return timezone == get_zone()
if 'FreeBSD' in __grains__['os_family']:
if not os.path.isfile(_get_etc_localtime_path()):
return timezone == get_zone()
tzfile = _get_etc_localtime_path()
zonepath = _get_zone_file(timezone)
try:


@ -1218,41 +1218,55 @@ def mkdir(path,
owner=None,
grant_perms=None,
deny_perms=None,
inheritance=True):
inheritance=True,
reset=False):
'''
Ensure that the directory is available and permissions are set.
Args:
path (str): The full path to the directory.
path (str):
The full path to the directory.
owner (str): The owner of the directory. If not passed, it will be the
account that created the directory, likely SYSTEM
owner (str):
The owner of the directory. If not passed, it will be the account
that created the directory, likely SYSTEM
grant_perms (dict): A dictionary containing the user/group and the basic
permissions to grant, ie: ``{'user': {'perms': 'basic_permission'}}``.
You can also set the ``applies_to`` setting here. The default is
``this_folder_subfolders_files``. Specify another ``applies_to`` setting
like this:
grant_perms (dict):
A dictionary containing the user/group and the basic permissions to
grant, ie: ``{'user': {'perms': 'basic_permission'}}``. You can also
set the ``applies_to`` setting here. The default is
``this_folder_subfolders_files``. Specify another ``applies_to``
setting like this:
.. code-block:: yaml
.. code-block:: yaml
{'user': {'perms': 'full_control', 'applies_to': 'this_folder'}}
{'user': {'perms': 'full_control', 'applies_to': 'this_folder'}}
To set advanced permissions use a list for the ``perms`` parameter, ie:
To set advanced permissions use a list for the ``perms`` parameter,
ie:
.. code-block:: yaml
.. code-block:: yaml
{'user': {'perms': ['read_attributes', 'read_ea'], 'applies_to': 'this_folder'}}
{'user': {'perms': ['read_attributes', 'read_ea'], 'applies_to': 'this_folder'}}
deny_perms (dict): A dictionary containing the user/group and
permissions to deny along with the ``applies_to`` setting. Use the same
format used for the ``grant_perms`` parameter. Remember, deny
permissions supersede grant permissions.
deny_perms (dict):
A dictionary containing the user/group and permissions to deny along
with the ``applies_to`` setting. Use the same format used for the
``grant_perms`` parameter. Remember, deny permissions supersede
grant permissions.
inheritance (bool): If True the object will inherit permissions from the
parent, if False, inheritance will be disabled. Inheritance setting will
not apply to parent directories if they must be created
inheritance (bool):
If True the object will inherit permissions from the parent, if
``False``, inheritance will be disabled. Inheritance setting will
not apply to parent directories if they must be created.
reset (bool):
If ``True`` the existing DACL will be cleared and replaced with the
settings defined in this function. If ``False``, new entries will be
appended to the existing DACL. Default is ``False``.
.. versionadded:: Oxygen
Returns:
bool: True if successful
@ -1289,10 +1303,16 @@ def mkdir(path,
# Set owner
if owner:
salt.utils.win_dacl.set_owner(path, owner)
salt.utils.win_dacl.set_owner(obj_name=path, principal=owner)
# Set permissions
set_perms(path, grant_perms, deny_perms, inheritance)
set_perms(
path=path,
grant_perms=grant_perms,
deny_perms=deny_perms,
inheritance=inheritance,
reset=reset)
except WindowsError as exc:
raise CommandExecutionError(exc)
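The new ``reset`` flag switches mkdir between replacing and extending the directory's DACL. The distinction can be modeled on plain dicts (a simplification; real DACLs are ordered ACE lists handled by salt.utils.win_dacl):

```python
def apply_dacl(existing, new_entries, reset=False):
    # reset=True: discard the existing DACL and use only the new entries.
    # reset=False: append/merge the new entries into the existing DACL.
    if reset:
        return dict(new_entries)
    merged = dict(existing)
    merged.update(new_entries)
    return merged
```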
@ -1303,49 +1323,63 @@ def makedirs_(path,
owner=None,
grant_perms=None,
deny_perms=None,
inheritance=True):
inheritance=True,
reset=False):
'''
Ensure that the parent directory containing this path is available.
Args:
path (str): The full path to the directory.
path (str):
The full path to the directory.
owner (str): The owner of the directory. If not passed, it will be the
account that created the directly, likely SYSTEM
.. note::
grant_perms (dict): A dictionary containing the user/group and the basic
permissions to grant, ie: ``{'user': {'perms': 'basic_permission'}}``.
You can also set the ``applies_to`` setting here. The default is
``this_folder_subfolders_files``. Specify another ``applies_to`` setting
like this:
The path must end with a trailing slash otherwise the
directory(s) will be created up to the parent directory. For
example if path is ``C:\\temp\\test``, then it would be treated
as ``C:\\temp\\`` but if the path ends with a trailing slash
like ``C:\\temp\\test\\``, then it would be treated as
``C:\\temp\\test\\``.
.. code-block:: yaml
owner (str):
The owner of the directory. If not passed, it will be the account
that created the directly, likely SYSTEM
{'user': {'perms': 'full_control', 'applies_to': 'this_folder'}}
grant_perms (dict):
A dictionary containing the user/group and the basic permissions to
grant, ie: ``{'user': {'perms': 'basic_permission'}}``. You can also
set the ``applies_to`` setting here. The default is
``this_folder_subfolders_files``. Specify another ``applies_to``
setting like this:
To set advanced permissions use a list for the ``perms`` parameter, ie:
.. code-block:: yaml
.. code-block:: yaml
{'user': {'perms': 'full_control', 'applies_to': 'this_folder'}}
{'user': {'perms': ['read_attributes', 'read_ea'], 'applies_to': 'this_folder'}}
To set advanced permissions use a list for the ``perms`` parameter, ie:
deny_perms (dict): A dictionary containing the user/group and
permissions to deny along with the ``applies_to`` setting. Use the same
format used for the ``grant_perms`` parameter. Remember, deny
permissions supersede grant permissions.
.. code-block:: yaml
inheritance (bool): If True the object will inherit permissions from the
parent, if False, inheritance will be disabled. Inheritance setting will
not apply to parent directories if they must be created
{'user': {'perms': ['read_attributes', 'read_ea'], 'applies_to': 'this_folder'}}
.. note::
deny_perms (dict):
A dictionary containing the user/group and permissions to deny along
with the ``applies_to`` setting. Use the same format used for the
``grant_perms`` parameter. Remember, deny permissions supersede
grant permissions.
The path must end with a trailing slash otherwise the directory(s) will
be created up to the parent directory. For example if path is
``C:\\temp\\test``, then it would be treated as ``C:\\temp\\`` but if
the path ends with a trailing slash like ``C:\\temp\\test\\``, then it
would be treated as ``C:\\temp\\test\\``.
inheritance (bool):
If True the object will inherit permissions from the parent, if
False, inheritance will be disabled. Inheritance setting will not
apply to parent directories if they must be created.
reset (bool):
If ``True`` the existing DACL will be cleared and replaced with the
settings defined in this function. If ``False``, new entries will be
appended to the existing DACL. Default is ``False``.
.. versionadded:: Oxygen
Returns:
bool: True if successful
@ -1405,7 +1439,13 @@ def makedirs_(path,
for directory_to_create in directories_to_create:
# all directories have the user, group and mode set!!
log.debug('Creating directory: %s', directory_to_create)
mkdir(directory_to_create, owner, grant_perms, deny_perms, inheritance)
mkdir(
path=directory_to_create,
owner=owner,
grant_perms=grant_perms,
deny_perms=deny_perms,
inheritance=inheritance,
reset=reset)
return True
@ -1414,41 +1454,54 @@ def makedirs_perms(path,
owner=None,
grant_perms=None,
deny_perms=None,
inheritance=True):
inheritance=True,
reset=True):
'''
Set owner and permissions for each directory created.
Args:
path (str): The full path to the directory.
path (str):
The full path to the directory.
owner (str): The owner of the directory. If not passed, it will be the
account that created the directory, likely SYSTEM
owner (str):
The owner of the directory. If not passed, it will be the account
that created the directory, likely SYSTEM
grant_perms (dict): A dictionary containing the user/group and the basic
permissions to grant, ie: ``{'user': {'perms': 'basic_permission'}}``.
You can also set the ``applies_to`` setting here. The default is
``this_folder_subfolders_files``. Specify another ``applies_to`` setting
like this:
grant_perms (dict):
A dictionary containing the user/group and the basic permissions to
grant, ie: ``{'user': {'perms': 'basic_permission'}}``. You can also
set the ``applies_to`` setting here. The default is
``this_folder_subfolders_files``. Specify another ``applies_to``
setting like this:
.. code-block:: yaml
.. code-block:: yaml
{'user': {'perms': 'full_control', 'applies_to': 'this_folder'}}
{'user': {'perms': 'full_control', 'applies_to': 'this_folder'}}
To set advanced permissions use a list for the ``perms`` parameter, ie:
To set advanced permissions use a list for the ``perms`` parameter, ie:
.. code-block:: yaml
.. code-block:: yaml
{'user': {'perms': ['read_attributes', 'read_ea'], 'applies_to': 'this_folder'}}
{'user': {'perms': ['read_attributes', 'read_ea'], 'applies_to': 'this_folder'}}
deny_perms (dict): A dictionary containing the user/group and
permissions to deny along with the ``applies_to`` setting. Use the same
format used for the ``grant_perms`` parameter. Remember, deny
permissions supersede grant permissions.
deny_perms (dict):
A dictionary containing the user/group and permissions to deny along
with the ``applies_to`` setting. Use the same format used for the
``grant_perms`` parameter. Remember, deny permissions supersede
grant permissions.
inheritance (bool): If True the object will inherit permissions from the
parent, if False, inheritance will be disabled. Inheritance setting will
not apply to parent directories if they must be created
inheritance (bool):
If ``True`` the object will inherit permissions from the parent, if
``False``, inheritance will be disabled. Inheritance setting will
not apply to parent directories if they must be created
reset (bool):
If ``True`` the existing DACL will be cleared and replaced with the
settings defined in this function. If ``False``, new entries will be
appended to the existing DACL. Default is ``False``.
.. versionadded:: Oxygen
Returns:
bool: True if successful, otherwise raise an error
@ -1482,8 +1535,15 @@ def makedirs_perms(path,
try:
# Create the directory here, set inherited True because this is a
# parent directory, the inheritance setting will only apply to the
# child directory
makedirs_perms(head, owner, grant_perms, deny_perms, True)
# target directory. Reset will be False as we only want to reset
# the permissions on the target directory
makedirs_perms(
path=head,
owner=owner,
grant_perms=grant_perms,
deny_perms=deny_perms,
inheritance=True,
reset=False)
except OSError as exc:
# be happy if someone already created the path
if exc.errno != errno.EEXIST:
@ -1492,7 +1552,13 @@ def makedirs_perms(path,
return {}
# Make the directory
mkdir(path, owner, grant_perms, deny_perms, inheritance)
mkdir(
path=path,
owner=owner,
grant_perms=grant_perms,
deny_perms=deny_perms,
inheritance=inheritance,
reset=reset)
return True
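The recursion above always creates parent directories with inheritance enabled and reset disabled, reserving the caller's settings for the target directory alone. A pure-Python sketch of that planning logic (`plan_makedirs_perms` and the `existing` set are hypothetical stand-ins, not the real win_dacl-backed calls):

```python
import ntpath

def plan_makedirs_perms(path, inheritance, reset, existing=frozenset()):
    """Return [(dir, inheritance, reset), ...] in creation order,
    mirroring makedirs_perms: parents get inheritance=True, reset=False;
    only the target directory gets the caller's settings."""
    head, tail = ntpath.split(path)
    if not tail:
        head, tail = ntpath.split(head)
    plan = []
    if head and tail and head not in existing:
        # Recurse: every ancestor is a "parent", so it inherits and
        # never resets its DACL.
        plan.extend(plan_makedirs_perms(head, True, False, existing))
    plan.append((path, inheritance, reset))
    return plan
```

For example, creating `C:\a\b\c` with `inheritance=False, reset=True` when only `C:\a` exists plans the intermediate `C:\a\b` with the parent defaults and applies the caller's settings only to `C:\a\b\c`.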
@ -1502,66 +1568,64 @@ def check_perms(path,
owner=None,
grant_perms=None,
deny_perms=None,
inheritance=True):
inheritance=True,
reset=False):
'''
Set owner and permissions for each directory created.
Check owner and permissions for the passed directory. This function checks
the permissions and sets them, returning the changes made.
Args:
path (str): The full path to the directory.
path (str):
The full path to the directory.
ret (dict): A dictionary to append changes to and return. If not passed,
will create a new dictionary to return.
ret (dict):
A dictionary to append changes to and return. If not passed, will
create a new dictionary to return.
owner (str): The owner of the directory. If not passed, it will be the
account that created the directory, likely SYSTEM
owner (str):
The owner to set for the directory.
grant_perms (dict): A dictionary containing the user/group and the basic
permissions to grant, ie: ``{'user': {'perms': 'basic_permission'}}``.
You can also set the ``applies_to`` setting here. The default is
``this_folder_subfolders_files``. Specify another ``applies_to`` setting
like this:
grant_perms (dict):
A dictionary containing the user/group and the basic permissions to
check/grant, ie: ``{'user': {'perms': 'basic_permission'}}``.
Default is ``None``.
.. code-block:: yaml
deny_perms (dict):
A dictionary containing the user/group and permissions to
check/deny. Default is ``None``.
{'user': {'perms': 'full_control', 'applies_to': 'this_folder'}}
inheritance (bool):
``True`` will check if inheritance is enabled and enable it. ``False``
will check if inheritance is disabled and disable it. Default is
``True``.
To set advanced permissions use a list for the ``perms`` parameter, ie:
.. code-block:: yaml
{'user': {'perms': ['read_attributes', 'read_ea'], 'applies_to': 'this_folder'}}
deny_perms (dict): A dictionary containing the user/group and
permissions to deny along with the ``applies_to`` setting. Use the same
format used for the ``grant_perms`` parameter. Remember, deny
permissions supersede grant permissions.
inheritance (bool): If True the object will inherit permissions from the
parent, if False, inheritance will be disabled. Inheritance setting will
not apply to parent directories if they must be created
reset (bool):
``True`` will show what permissions will be removed by resetting the
DACL. ``False`` will do nothing. Default is ``False``.
Returns:
bool: True if successful, otherwise raise an error
dict: A dictionary of changes that have been made
CLI Example:
.. code-block:: bash
# To grant the 'Users' group 'read & execute' permissions.
salt '*' file.check_perms C:\\Temp\\ Administrators "{'Users': {'perms': 'read_execute'}}"
# To see changes to ``C:\\Temp`` if the 'Users' group is given 'read & execute' permissions.
salt '*' file.check_perms C:\\Temp\\ {} Administrators "{'Users': {'perms': 'read_execute'}}"
# Locally using salt call
salt-call file.check_perms C:\\Temp\\ Administrators "{'Users': {'perms': 'read_execute', 'applies_to': 'this_folder_only'}}"
salt-call file.check_perms C:\\Temp\\ {} Administrators "{'Users': {'perms': 'read_execute', 'applies_to': 'this_folder_only'}}"
# Specify advanced attributes with a list
salt '*' file.check_perms C:\\Temp\\ Administrators "{'jsnuffy': {'perms': ['read_attributes', 'read_ea'], 'applies_to': 'files_only'}}"
salt '*' file.check_perms C:\\Temp\\ {} Administrators "{'jsnuffy': {'perms': ['read_attributes', 'read_ea'], 'applies_to': 'files_only'}}"
'''
path = os.path.expanduser(path)
if not ret:
ret = {'name': path,
'changes': {},
'pchanges': {},
'comment': [],
'result': True}
orig_comment = ''
@ -1571,14 +1635,16 @@ def check_perms(path,
# Check owner
if owner:
owner = salt.utils.win_dacl.get_name(owner)
current_owner = salt.utils.win_dacl.get_owner(path)
owner = salt.utils.win_dacl.get_name(principal=owner)
current_owner = salt.utils.win_dacl.get_owner(obj_name=path)
if owner != current_owner:
if __opts__['test'] is True:
ret['pchanges']['owner'] = owner
else:
try:
salt.utils.win_dacl.set_owner(path, owner)
salt.utils.win_dacl.set_owner(
obj_name=path,
principal=owner)
ret['changes']['owner'] = owner
except CommandExecutionError:
ret['result'] = False
@ -1586,7 +1652,7 @@ def check_perms(path,
'Failed to change owner to "{0}"'.format(owner))
# Check permissions
cur_perms = salt.utils.win_dacl.get_permissions(path)
cur_perms = salt.utils.win_dacl.get_permissions(obj_name=path)
# Verify Deny Permissions
changes = {}
@ -1594,7 +1660,7 @@ def check_perms(path,
for user in deny_perms:
# Check that user exists:
try:
user_name = salt.utils.win_dacl.get_name(user)
user_name = salt.utils.win_dacl.get_name(principal=user)
except CommandExecutionError:
ret['comment'].append(
'Deny Perms: User "{0}" missing from Target System'.format(user))
@ -1619,7 +1685,11 @@ def check_perms(path,
# Check Perms
if isinstance(deny_perms[user]['perms'], six.string_types):
if not salt.utils.win_dacl.has_permission(
path, user, deny_perms[user]['perms'], 'deny'):
obj_name=path,
principal=user,
permission=deny_perms[user]['perms'],
access_mode='deny',
exact=False):
changes[user] = {'perms': deny_perms[user]['perms']}
else:
for perm in deny_perms[user]['perms']:
@ -1640,9 +1710,10 @@ def check_perms(path,
changes[user]['applies_to'] = applies_to
if changes:
ret['pchanges']['deny_perms'] = {}
ret['changes']['deny_perms'] = {}
for user in changes:
user_name = salt.utils.win_dacl.get_name(user)
user_name = salt.utils.win_dacl.get_name(principal=user)
if __opts__['test'] is True:
ret['pchanges']['deny_perms'][user] = changes[user]
@ -1689,7 +1760,11 @@ def check_perms(path,
try:
salt.utils.win_dacl.set_permissions(
path, user, perms, 'deny', applies_to)
obj_name=path,
principal=user,
permissions=perms,
access_mode='deny',
applies_to=applies_to)
ret['changes']['deny_perms'][user] = changes[user]
except CommandExecutionError:
ret['result'] = False
@ -1703,7 +1778,7 @@ def check_perms(path,
for user in grant_perms:
# Check that user exists:
try:
user_name = salt.utils.win_dacl.get_name(user)
user_name = salt.utils.win_dacl.get_name(principal=user)
except CommandExecutionError:
ret['comment'].append(
'Grant Perms: User "{0}" missing from Target System'.format(user))
@ -1729,12 +1804,19 @@ def check_perms(path,
# Check Perms
if isinstance(grant_perms[user]['perms'], six.string_types):
if not salt.utils.win_dacl.has_permission(
path, user, grant_perms[user]['perms']):
obj_name=path,
principal=user,
permission=grant_perms[user]['perms'],
access_mode='grant'):
changes[user] = {'perms': grant_perms[user]['perms']}
else:
for perm in grant_perms[user]['perms']:
if not salt.utils.win_dacl.has_permission(
path, user, perm, exact=False):
obj_name=path,
principal=user,
permission=perm,
access_mode='grant',
exact=False):
if user not in changes:
changes[user] = {'perms': []}
changes[user]['perms'].append(grant_perms[user]['perms'])
@ -1750,11 +1832,12 @@ def check_perms(path,
changes[user]['applies_to'] = applies_to
if changes:
ret['pchanges']['grant_perms'] = {}
ret['changes']['grant_perms'] = {}
for user in changes:
user_name = salt.utils.win_dacl.get_name(user)
user_name = salt.utils.win_dacl.get_name(principal=user)
if __opts__['test'] is True:
ret['changes']['grant_perms'][user] = changes[user]
ret['pchanges']['grant_perms'][user] = changes[user]
else:
applies_to = None
if 'applies_to' not in changes[user]:
@ -1796,7 +1879,11 @@ def check_perms(path,
try:
salt.utils.win_dacl.set_permissions(
path, user, perms, 'grant', applies_to)
obj_name=path,
principal=user,
permissions=perms,
access_mode='grant',
applies_to=applies_to)
ret['changes']['grant_perms'][user] = changes[user]
except CommandExecutionError:
ret['result'] = False
@ -1806,12 +1893,14 @@ def check_perms(path,
# Check inheritance
if inheritance is not None:
if not inheritance == salt.utils.win_dacl.get_inheritance(path):
if not inheritance == salt.utils.win_dacl.get_inheritance(obj_name=path):
if __opts__['test'] is True:
ret['changes']['inheritance'] = inheritance
ret['pchanges']['inheritance'] = inheritance
else:
try:
salt.utils.win_dacl.set_inheritance(path, inheritance)
salt.utils.win_dacl.set_inheritance(
obj_name=path,
enabled=inheritance)
ret['changes']['inheritance'] = inheritance
except CommandExecutionError:
ret['result'] = False
@ -1819,6 +1908,45 @@ def check_perms(path,
'Failed to set inheritance for "{0}" to '
'{1}'.format(path, inheritance))
# Check reset
# If reset=True, determine which users will be removed as a result
if reset:
for user_name in cur_perms:
if user_name not in grant_perms:
if 'grant' in cur_perms[user_name] and not \
cur_perms[user_name]['grant']['inherited']:
if __opts__['test'] is True:
if 'remove_perms' not in ret['pchanges']:
ret['pchanges']['remove_perms'] = {}
ret['pchanges']['remove_perms'].update(
{user_name: cur_perms[user_name]})
else:
if 'remove_perms' not in ret['changes']:
ret['changes']['remove_perms'] = {}
salt.utils.win_dacl.rm_permissions(
obj_name=path,
principal=user_name,
ace_type='grant')
ret['changes']['remove_perms'].update(
{user_name: cur_perms[user_name]})
if user_name not in deny_perms:
if 'deny' in cur_perms[user_name] and not \
cur_perms[user_name]['deny']['inherited']:
if __opts__['test'] is True:
if 'remove_perms' not in ret['pchanges']:
ret['pchanges']['remove_perms'] = {}
ret['pchanges']['remove_perms'].update(
{user_name: cur_perms[user_name]})
else:
if 'remove_perms' not in ret['changes']:
ret['changes']['remove_perms'] = {}
salt.utils.win_dacl.rm_permissions(
obj_name=path,
principal=user_name,
ace_type='deny')
ret['changes']['remove_perms'].update(
{user_name: cur_perms[user_name]})
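The two symmetric blocks above implement one rule: on reset, any explicit (non-inherited) ACE whose owner is not in the desired grant/deny dicts is removed. A dict-only sketch of that selection (hypothetical ACE shapes, no Windows API calls):

```python
def perms_to_remove(cur_perms, grant_perms, deny_perms):
    """Return {user: [ace_types]} that a reset would strip: explicit
    (non-inherited) ACEs for users absent from the desired perms."""
    removals = {}
    for user, aces in cur_perms.items():
        for ace_type, desired in (('grant', grant_perms), ('deny', deny_perms)):
            ace = aces.get(ace_type)
            # Inherited ACEs come from the parent and are left alone.
            if ace and not ace['inherited'] and user not in desired:
                removals.setdefault(user, []).append(ace_type)
    return removals

cur = {
    'Users': {'grant': {'inherited': True, 'permissions': 'read'}},
    'jsnuffy': {'grant': {'inherited': False, 'permissions': 'full_control'}},
}
print(perms_to_remove(cur, grant_perms={'Users': {}}, deny_perms={}))
# {'jsnuffy': ['grant']}
```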
# Re-add the Original Comment if defined
if isinstance(orig_comment, six.string_types):
if orig_comment:
@ -1830,25 +1958,30 @@ def check_perms(path,
ret['comment'] = '\n'.join(ret['comment'])
# Set result for test = True
if __opts__['test'] is True and ret['changes']:
if __opts__['test'] and (ret['changes'] or ret['pchanges']):
ret['result'] = None
return ret
def set_perms(path, grant_perms=None, deny_perms=None, inheritance=True):
def set_perms(path,
grant_perms=None,
deny_perms=None,
inheritance=True,
reset=False):
'''
Set permissions for the given path
Args:
path (str): The full path to the directory.
path (str):
The full path to the directory.
grant_perms (dict):
A dictionary containing the user/group and the basic permissions to
grant, ie: ``{'user': {'perms': 'basic_permission'}}``. You can also
set the ``applies_to`` setting here. The default is
``this_folder_subfolders_files``. Specify another ``applies_to``
set the ``applies_to`` setting here. The default for ``applies_to``
is ``this_folder_subfolders_files``. Specify another ``applies_to``
setting like this:
.. code-block:: yaml
@ -1863,7 +1996,10 @@ def set_perms(path, grant_perms=None, deny_perms=None, inheritance=True):
{'user': {'perms': ['read_attributes', 'read_ea'], 'applies_to': 'this_folder'}}
To see a list of available attributes and applies to settings see
the documentation for salt.utils.win_dacl
the documentation for salt.utils.win_dacl.
A value of ``None`` will make no changes to the ``grant`` portion of
the DACL. Default is ``None``.
deny_perms (dict):
A dictionary containing the user/group and permissions to deny along
@ -1871,13 +2007,27 @@ def set_perms(path, grant_perms=None, deny_perms=None, inheritance=True):
``grant_perms`` parameter. Remember, deny permissions supersede
grant permissions.
A value of ``None`` will make no changes to the ``deny`` portion of
the DACL. Default is ``None``.
inheritance (bool):
If True the object will inherit permissions from the parent, if
False, inheritance will be disabled. Inheritance setting will not
apply to parent directories if they must be created
If ``True`` the object will inherit permissions from the parent, if
``False``, inheritance will be disabled. Inheritance setting will
not apply to parent directories if they must be created. Default is
``True``.
reset (bool):
If ``True`` the existing DACL will be cleared and replaced with the
settings defined in this function. If ``False``, new entries will be
appended to the existing DACL. Default is ``False``.
.. versionadded:: Oxygen
Returns:
bool: True if successful, otherwise raise an error
bool: True if successful
Raises:
CommandExecutionError: If unsuccessful
CLI Example:
@ -1894,11 +2044,19 @@ def set_perms(path, grant_perms=None, deny_perms=None, inheritance=True):
'''
ret = {}
# Get the DACL for the directory
dacl = salt.utils.win_dacl.dacl(path)
if reset:
# Get an empty DACL
dacl = salt.utils.win_dacl.dacl()
# Get current file/folder permissions
cur_perms = salt.utils.win_dacl.get_permissions(path)
# Get an empty perms dict
cur_perms = {}
else:
# Get the DACL for the directory
dacl = salt.utils.win_dacl.dacl(path)
# Get current file/folder permissions
cur_perms = salt.utils.win_dacl.get_permissions(path)
# Set 'deny' perms if any
if deny_perms is not None:

View file

@ -54,6 +54,7 @@ import salt.utils.args
import salt.utils.data
import salt.utils.files
import salt.utils.hashutils
import salt.utils.path
import salt.utils.pkg
import salt.utils.platform
import salt.utils.versions
@ -646,33 +647,10 @@ def _get_repo_details(saltenv):
# Do some safety checks on the repo_path as its contents can be removed,
# this includes a check for bad coding
system_root = os.environ.get('SystemRoot', r'C:\Windows')
deny_paths = (
r'[a-z]\:\\$', # C:\, D:\, etc
r'\\$', # \
re.escape(system_root) # C:\Windows
)
if not salt.utils.path.safe_path(
path=local_dest,
allow_path='\\'.join([system_root, 'TEMP'])):
# Since the above checks anything in C:\Windows, there are some
# directories we may want to make exceptions for
allow_paths = (
re.escape('\\'.join([system_root, 'TEMP'])), # C:\Windows\TEMP
)
# Check the local_dest to make sure it's not one of the bad paths
good_path = True
for d_path in deny_paths:
if re.match(d_path, local_dest, flags=re.IGNORECASE) is not None:
# Found deny path
good_path = False
# If local_dest is one of the bad paths, check for exceptions
if not good_path:
for a_path in allow_paths:
if re.match(a_path, local_dest, flags=re.IGNORECASE) is not None:
# Found exception
good_path = True
if not good_path:
raise CommandExecutionError(
'Attempting to delete files from a possibly unsafe location: '
'{0}'.format(local_dest)

View file

@ -6,8 +6,7 @@ or for problem solving if your minion is having problems.
.. versionadded:: 0.12.0
:depends: - pythoncom
- wmi
:depends: - wmi
'''
# Import Python Libs

View file

@ -279,11 +279,23 @@ def _get_extra_options(**kwargs):
'''
ret = []
kwargs = salt.utils.args.clean_kwargs(**kwargs)
# Remove already handled options from kwargs
fromrepo = kwargs.pop('fromrepo', '')
repo = kwargs.pop('repo', '')
disablerepo = kwargs.pop('disablerepo', '')
enablerepo = kwargs.pop('enablerepo', '')
disable_excludes = kwargs.pop('disableexcludes', '')
branch = kwargs.pop('branch', '')
for key, value in six.iteritems(kwargs):
if isinstance(key, six.string_types):
if isinstance(value, six.string_types):
log.info('Adding extra option --%s=\'%s\'', key, value)
ret.append('--{0}=\'{1}\''.format(key, value))
elif value is True:
log.info('Adding extra option --%s', key)
ret.append('--{0}'.format(key))
log.info('Adding extra options %s', ret)
return ret
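The loop above only emits options for string values (`--key='value'`) and bare `True` flags (`--key`), after the repo-related kwargs have been popped. A standalone sketch of the same handling (without `salt.utils.args.clean_kwargs`):

```python
def build_extra_options(**kwargs):
    """Turn leftover kwargs into yum-style CLI options: strings become
    --key='value', True becomes a bare --key, everything else is dropped."""
    handled = ('fromrepo', 'repo', 'disablerepo', 'enablerepo',
               'disableexcludes', 'branch')
    ret = []
    for key, value in kwargs.items():
        if key in handled:
            continue  # already consumed by the caller
        if isinstance(value, str):
            ret.append("--{0}='{1}'".format(key, value))
        elif value is True:
            ret.append('--{0}'.format(key))
    return ret

print(build_extra_options(setopt='obsoletes=0', nogpgcheck=True, fromrepo='base'))
# ["--setopt='obsoletes=0'", '--nogpgcheck']
```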

View file

@ -509,7 +509,7 @@ def destroy(zpool, force=False):
'''
ret = {}
ret[zpool] = {}
if not exists(zpool):
if not __salt__['zpool.exists'](zpool):
ret[zpool] = 'storage pool does not exist'
else:
zpool_cmd = _check_zpool()
@ -529,7 +529,7 @@ def destroy(zpool, force=False):
return ret
def scrub(zpool, stop=False):
def scrub(zpool, stop=False, pause=False):
'''
.. versionchanged:: 2016.3.0
@ -539,6 +539,13 @@ def scrub(zpool, stop=False):
name of storage pool
stop : boolean
if true, cancel ongoing scrub
pause : boolean
if true, pause ongoing scrub
.. versionadded:: Oxygen
.. note::
If both pause and stop are true, stop will win.
CLI Example:
@ -548,11 +555,18 @@ def scrub(zpool, stop=False):
'''
ret = {}
ret[zpool] = {}
if exists(zpool):
if __salt__['zpool.exists'](zpool):
zpool_cmd = _check_zpool()
cmd = '{zpool_cmd} scrub {stop}{zpool}'.format(
if stop:
action = '-s '
elif pause:
# NOTE: https://github.com/openzfs/openzfs/pull/407
action = '-p '
else:
action = ''
cmd = '{zpool_cmd} scrub {action}{zpool}'.format(
zpool_cmd=zpool_cmd,
stop='-s ' if stop else '',
action=action,
zpool=zpool
)
res = __salt__['cmd.run_all'](cmd, python_shell=False)
@ -567,7 +581,12 @@ def scrub(zpool, stop=False):
else:
ret[zpool]['error'] = res['stdout']
else:
ret[zpool]['scrubbing'] = True if not stop else False
if stop:
ret[zpool]['scrubbing'] = False
elif pause:
ret[zpool]['scrubbing'] = False
else:
ret[zpool]['scrubbing'] = True
else:
ret[zpool] = 'storage pool does not exist'
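Flag selection for `zpool scrub` follows the note above: `-s` (stop) wins over `-p` (pause), and neither flag starts a scrub. As a tiny sketch of that precedence:

```python
def scrub_action(stop=False, pause=False):
    """Pick the zpool scrub flag: -s (stop) takes precedence over
    -p (pause); neither flag means start a scrub."""
    if stop:
        return '-s '
    if pause:
        return '-p '
    return ''
```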
@ -595,6 +614,9 @@ def create(zpool, *vdevs, **kwargs):
additional pool properties
filesystem_properties : dict
additional filesystem properties
createboot : boolean
.. versionadded:: Oxygen
create a boot partition
CLI Example:
@ -629,7 +651,7 @@ def create(zpool, *vdevs, **kwargs):
ret = {}
# Check if the pool_name is already being used
if exists(zpool):
if __salt__['zpool.exists'](zpool):
ret[zpool] = 'storage pool already exists'
return ret
@ -641,14 +663,21 @@ def create(zpool, *vdevs, **kwargs):
zpool_cmd = _check_zpool()
force = kwargs.get('force', False)
altroot = kwargs.get('altroot', None)
createboot = kwargs.get('createboot', False)
mountpoint = kwargs.get('mountpoint', None)
properties = kwargs.get('properties', None)
filesystem_properties = kwargs.get('filesystem_properties', None)
cmd = '{0} create'.format(zpool_cmd)
# bootsize implies createboot
if properties and 'bootsize' in properties:
createboot = True
# apply extra arguments from kwargs
if force: # force creation
cmd = '{0} -f'.format(cmd)
if createboot: # create boot partition
cmd = '{0} -B'.format(cmd)
if properties: # create "-o property=value" pairs
optlist = []
for prop in properties:
@ -712,7 +741,7 @@ def add(zpool, *vdevs, **kwargs):
ret = {}
# check for pool
if not exists(zpool):
if not __salt__['zpool.exists'](zpool):
ret[zpool] = 'storage pool does not exist'
return ret
@ -765,7 +794,7 @@ def attach(zpool, device, new_device, force=False):
dlist = []
# check for pool
if not exists(zpool):
if not __salt__['zpool.exists'](zpool):
ret[zpool] = 'storage pool does not exist'
return ret
@ -827,7 +856,7 @@ def detach(zpool, device):
dlist = []
# check for pool
if not exists(zpool):
if not __salt__['zpool.exists'](zpool):
ret[zpool] = 'storage pool does not exist'
return ret
@ -848,6 +877,95 @@ def detach(zpool, device):
return ret
def split(zpool, newzpool, **kwargs):
'''
.. versionadded:: Oxygen
Splits devices off ``zpool`` creating ``newzpool``.
.. note::
All vdevs in ``zpool`` must be mirrors. At the time of the split,
``newzpool`` will be a replica of ``zpool``.
zpool : string
name of storage pool
newzpool : string
name of new storage pool
mountpoint : string
sets the mount point for the root dataset
altroot : string
sets altroot for newzpool
properties : dict
additional pool properties for newzpool
CLI Example:
.. code-block:: bash
salt '*' zpool.split datamirror databackup
salt '*' zpool.split datamirror databackup altroot=/backup
.. note::
Zpool properties can be specified at the time of creation of the pool by
passing an additional argument called "properties" and specifying the properties
with their respective values in the form of a python dictionary::
properties="{'property1': 'value1', 'property2': 'value2'}"
Example:
.. code-block:: bash
salt '*' zpool.split datamirror databackup properties="{'readonly': 'on'}"
'''
ret = {}
# Check if the pool_name is already being used
if __salt__['zpool.exists'](newzpool):
ret[newzpool] = 'storage pool already exists'
return ret
if not __salt__['zpool.exists'](zpool):
ret[zpool] = 'storage pool does not exist'
return ret
zpool_cmd = _check_zpool()
altroot = kwargs.get('altroot', None)
properties = kwargs.get('properties', None)
cmd = '{0} split'.format(zpool_cmd)
# apply extra arguments from kwargs
if properties: # create "-o property=value" pairs
optlist = []
for prop in properties:
if isinstance(properties[prop], bool):
value = 'on' if properties[prop] else 'off'
else:
if ' ' in properties[prop]:
value = "'{0}'".format(properties[prop])
else:
value = properties[prop]
optlist.append('-o {0}={1}'.format(prop, value))
opts = ' '.join(optlist)
cmd = '{0} {1}'.format(cmd, opts)
if altroot: # set altroot
cmd = '{0} -R {1}'.format(cmd, altroot)
cmd = '{0} {1} {2}'.format(cmd, zpool, newzpool)
# Split the storage pool
res = __salt__['cmd.run_all'](cmd, python_shell=False)
# Check and see if the pool is available
if res['retcode'] != 0:
ret[newzpool] = res['stderr'] if 'stderr' in res else res['stdout']
else:
ret[newzpool] = 'split off from {0}'.format(zpool)
return ret
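The `-o property=value` rendering used by `split` (and `create`) converts booleans to `on`/`off` and quotes values containing spaces. A standalone sketch of that rendering:

```python
def zpool_property_opts(properties):
    """Render a properties dict as zpool '-o name=value' pairs:
    booleans map to on/off, values containing spaces get quoted."""
    optlist = []
    for prop, val in properties.items():
        if isinstance(val, bool):
            val = 'on' if val else 'off'
        elif ' ' in val:
            val = "'{0}'".format(val)
        optlist.append('-o {0}={1}'.format(prop, val))
    return ' '.join(optlist)

print(zpool_property_opts({'readonly': True, 'comment': 'backup pool'}))
# -o readonly=on -o comment='backup pool'
```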
def replace(zpool, old_device, new_device=None, force=False):
'''
.. versionchanged:: 2016.3.0
@ -878,7 +996,7 @@ def replace(zpool, old_device, new_device=None, force=False):
'''
ret = {}
# Make sure pool is there
if not exists(zpool):
if not __salt__['zpool.exists'](zpool):
ret[zpool] = 'storage pool does not exist'
return ret
@ -991,7 +1109,7 @@ def export(*pools, **kwargs):
return ret
for pool in pools:
if not exists(pool):
if not __salt__['zpool.exists'](pool):
ret[pool] = 'storage pool does not exist'
else:
pool_present.append(pool)
@ -1106,7 +1224,7 @@ def import_(zpool=None, new_name=None, **kwargs):
ret['error'] = res['stderr'] if 'stderr' in res else res['stdout']
else:
if zpool:
ret[zpool if not new_name else new_name] = 'imported' if exists(zpool if not new_name else new_name) else 'not found'
ret[zpool if not new_name else new_name] = 'imported' if __salt__['zpool.exists'](zpool if not new_name else new_name) else 'not found'
else:
ret = True
return ret
@ -1141,7 +1259,7 @@ def online(zpool, *vdevs, **kwargs):
dlist = []
# Check if the pool_name exists
if not exists(zpool):
if not __salt__['zpool.exists'](zpool):
ret[zpool] = 'storage pool does not exist'
return ret
@ -1197,7 +1315,7 @@ def offline(zpool, *vdevs, **kwargs):
ret = {}
# Check if the pool_name exists
if not exists(zpool):
if not __salt__['zpool.exists'](zpool):
ret[zpool] = 'storage pool does not exist'
return ret
@ -1225,6 +1343,50 @@ def offline(zpool, *vdevs, **kwargs):
return ret
def labelclear(device, force=False):
'''
.. versionadded:: Oxygen
Removes ZFS label information from the specified device
.. warning::
The device must not be part of an active pool configuration.
device : string
device
force : boolean
treat exported or foreign devices as inactive
CLI Example:
.. code-block:: bash
salt '*' zpool.labelclear /path/to/dev
'''
ret = {}
zpool_cmd = _check_zpool()
cmd = '{zpool_cmd} labelclear {force}{device}'.format(
zpool_cmd=zpool_cmd,
force='-f ' if force else '',
device=device,
)
# Clear the label from the specified device
res = __salt__['cmd.run_all'](cmd, python_shell=False)
if res['retcode'] != 0:
## NOTE: skip the "use '-f' hint"
res['stderr'] = res['stderr'].split("\n")
if len(res['stderr']) >= 1:
if res['stderr'][0].startswith("use '-f'"):
del res['stderr'][0]
res['stderr'] = "\n".join(res['stderr'])
ret[device] = res['stderr'] if 'stderr' in res and res['stderr'] else res['stdout']
else:
ret[device] = 'cleared'
return ret
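The stderr cleanup above drops zpool's leading `use '-f'` hint so the returned error starts with the actual reason. Sketched standalone:

```python
def strip_force_hint(stderr):
    """Drop zpool's leading "use '-f'" hint line from stderr, mirroring
    the labelclear error cleanup above."""
    lines = stderr.split('\n')
    if lines and lines[0].startswith("use '-f'"):
        del lines[0]  # keep only the substantive error text
    return '\n'.join(lines)

print(strip_force_hint("use '-f' to override\n/dev/sda is a member of pool tank"))
# /dev/sda is a member of pool tank
```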
def reguid(zpool):
'''
.. versionadded:: 2016.3.0

View file

@ -102,6 +102,13 @@ A REST API for Salt
expire_responses : True
Whether to check for and kill HTTP responses that have exceeded the
default timeout.
.. deprecated:: 2016.11.9, 2017.7.3, Oxygen
The "expire_responses" configuration setting, which corresponds
to the ``timeout_monitor`` setting in CherryPy, is no longer
supported in CherryPy versions >= 12.0.0.
max_request_body_size : ``1048576``
Maximum size for the HTTP request body.
collect_stats : False
@ -606,8 +613,10 @@ import yaml
# Import Salt libs
import salt
import salt.auth
import salt.exceptions
import salt.utils.event
import salt.utils.stringutils
import salt.utils.versions
from salt.ext import six
# Import salt-api libs
@ -854,11 +863,18 @@ def hypermedia_handler(*args, **kwargs):
except (salt.exceptions.SaltDaemonNotRunning,
salt.exceptions.SaltReqTimeoutError) as exc:
raise cherrypy.HTTPError(503, exc.strerror)
except (cherrypy.TimeoutError, salt.exceptions.SaltClientTimeout):
except salt.exceptions.SaltClientTimeout:
raise cherrypy.HTTPError(504)
except cherrypy.CherryPyException:
raise
except Exception as exc:
# The TimeoutError exception class was removed in CherryPy 12.0.0, but
# still check existence of TimeoutError and handle in CherryPy < 12.
# The check was moved down from the SaltClientTimeout error line because
# a one-line if statement throws a BaseException inheritance TypeError.
if hasattr(cherrypy, 'TimeoutError') and isinstance(exc, cherrypy.TimeoutError):
raise cherrypy.HTTPError(504)
import traceback
logger.debug("Error while processing request for: %s",
@ -2839,8 +2855,6 @@ class API(object):
'server.socket_port': self.apiopts.get('port', 8000),
'server.thread_pool': self.apiopts.get('thread_pool', 100),
'server.socket_queue_size': self.apiopts.get('queue_size', 30),
'engine.timeout_monitor.on': self.apiopts.get(
'expire_responses', True),
'max_request_body_size': self.apiopts.get(
'max_request_body_size', 1048576),
'debug': self.apiopts.get('debug', False),
@ -2858,6 +2872,14 @@ class API(object):
},
}
if salt.utils.versions.version_cmp(cherrypy.__version__, '12.0.0') < 0:
# CherryPy >= 12.0 no longer supports "timeout_monitor", only set
# this config option when using an older version of CherryPy.
# See Issue #44601 for more information.
conf['global']['engine.timeout_monitor.on'] = self.apiopts.get(
'expire_responses', True
)
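The version gate above can be modeled without CherryPy: parse the version, and only emit the `timeout_monitor` key for releases older than 12.0.0. A sketch using a naive tuple comparison in place of `salt.utils.versions.version_cmp`:

```python
def version_tuple(version):
    """Parse a dotted version string into a comparable int tuple
    (a simplified stand-in for salt.utils.versions.version_cmp)."""
    return tuple(int(part) for part in version.split('.')[:3])

def api_global_conf(cherrypy_version, apiopts):
    conf = {'debug': apiopts.get('debug', False)}
    # timeout_monitor was removed in CherryPy 12.0.0; only configure it
    # on older releases (see Issue #44601).
    if version_tuple(cherrypy_version) < version_tuple('12.0.0'):
        conf['engine.timeout_monitor.on'] = apiopts.get('expire_responses', True)
    return conf

print(api_global_conf('11.0.0', {}))
# {'debug': False, 'engine.timeout_monitor.on': True}
print(api_global_conf('12.0.1', {}))
# {'debug': False}
```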
if cpstats and self.apiopts.get('collect_stats', False):
conf['/']['tools.cpstats.on'] = True

View file

@ -310,25 +310,25 @@ class PillarCache(object):
return fresh_pillar.compile_pillar()
def compile_pillar(self, *args, **kwargs): # Will likely just be pillar_dirs
log.debug('Scanning pillar cache for information about minion {0} and saltenv {1}'.format(self.minion_id, self.saltenv))
log.debug('Scanning pillar cache for information about minion {0} and pillarenv {1}'.format(self.minion_id, self.pillarenv))
log.debug('Scanning cache: {0}'.format(self.cache._dict))
# Check the cache!
if self.minion_id in self.cache: # Keyed by minion_id
# TODO Compare grains, etc?
if self.saltenv in self.cache[self.minion_id]:
if self.pillarenv in self.cache[self.minion_id]:
# We have a cache hit! Send it back.
log.debug('Pillar cache hit for minion {0} and saltenv {1}'.format(self.minion_id, self.saltenv))
return self.cache[self.minion_id][self.saltenv]
log.debug('Pillar cache hit for minion {0} and pillarenv {1}'.format(self.minion_id, self.pillarenv))
return self.cache[self.minion_id][self.pillarenv]
else:
# We found the minion but not the env. Store it.
fresh_pillar = self.fetch_pillar()
self.cache[self.minion_id][self.saltenv] = fresh_pillar
log.debug('Pillar cache miss for saltenv {0} for minion {1}'.format(self.saltenv, self.minion_id))
self.cache[self.minion_id][self.pillarenv] = fresh_pillar
log.debug('Pillar cache miss for pillarenv {0} for minion {1}'.format(self.pillarenv, self.minion_id))
return fresh_pillar
else:
# We haven't seen this minion yet in the cache. Store it.
fresh_pillar = self.fetch_pillar()
self.cache[self.minion_id] = {self.saltenv: fresh_pillar}
self.cache[self.minion_id] = {self.pillarenv: fresh_pillar}
log.debug('Pillar cache miss for minion {0}'.format(self.minion_id))
log.debug('Current pillar cache: {0}'.format(self.cache._dict)) # FIXME hack!
return fresh_pillar
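The cache above is a two-level dict: minion id first, then pillarenv, fetching only on a miss at either level. A minimal model (the `fetch` callable is a hypothetical stand-in for `fetch_pillar`):

```python
class PillarEnvCache(object):
    """Minimal model of the pillar cache above: a two-level dict keyed
    first by minion id, then by pillarenv (not saltenv)."""
    def __init__(self, fetch):
        self._fetch = fetch  # callable(minion_id, pillarenv) -> pillar dict
        self._cache = {}

    def compile_pillar(self, minion_id, pillarenv):
        envs = self._cache.setdefault(minion_id, {})
        if pillarenv not in envs:  # miss: fetch and store
            envs[pillarenv] = self._fetch(minion_id, pillarenv)
        return envs[pillarenv]

calls = []
cache = PillarEnvCache(lambda m, e: calls.append((m, e)) or {'env': e})
cache.compile_pillar('web1', 'base')
cache.compile_pillar('web1', 'base')   # hit: no second fetch
cache.compile_pillar('web1', 'dev')    # new pillarenv: fetch again
print(calls)
# [('web1', 'base'), ('web1', 'dev')]
```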

View file

@ -6,8 +6,11 @@ from __future__ import absolute_import
# Import python libs
import os
import logging
import pickle
import logging
# Import Salt modules
import salt.utils.files
# Import Salt libs
import salt.utils.files
@ -22,7 +25,7 @@ DETAILS = {}
DETAILS['services'] = {'apache': 'running', 'ntp': 'running', 'samba': 'stopped'}
DETAILS['packages'] = {'coreutils': '1.0', 'apache': '2.4', 'tinc': '1.4', 'redbull': '999.99'}
FILENAME = os.tmpnam()
FILENAME = salt.utils.files.mkstemp()
# Want logging!
log = logging.getLogger(__file__)

View file

@ -196,9 +196,7 @@ def __virtual__():
Only return if all the modules are available
'''
if not salt.utils.path.which('racadm'):
log.critical('fx2 proxy minion needs "racadm" to be installed.')
return False
return False, 'fx2 proxy minion needs "racadm" to be installed.'
return True

View file

@ -16,9 +16,21 @@ Dependencies
The ``napalm`` proxy module requires NAPALM_ library to be installed: ``pip install napalm``
Please check Installation_ for complete details.
.. _NAPALM: https://napalm.readthedocs.io
.. _Installation: https://napalm.readthedocs.io/en/latest/installation.html
.. _NAPALM: https://napalm-automation.net/
.. _Installation: http://napalm.readthedocs.io/en/latest/installation/index.html
.. note::
Beginning with Salt release 2017.7.3, it is recommended to use
``napalm`` >= ``2.0.0``. The library has been unified into a monolithic
package, as in opposite to separate packages per driver. For more details
you can check `this document <https://napalm-automation.net/reunification/>`_.
While it will still work with the old packages, bear in mind that the NAPALM
core team will maintain only the main ``napalm`` package.
Moreover, for additional capabilities, the users can always define a
library that extends NAPALM's base capabilities and configure the
``provider`` option (see below).
Pillar
------
@ -59,7 +71,7 @@ always_alive: ``True``
.. versionadded:: 2017.7.0
provider: ``napalm_base``
The module that provides the ``get_network_device`` function.
The library that provides the ``get_network_device`` function.
This option is useful when the user has more specific needs and requires
to extend the NAPALM capabilities using a private library implementation.
The only constraint is that the alternative library needs to have the
@ -129,17 +141,7 @@ from __future__ import absolute_import
import logging
log = logging.getLogger(__file__)
# Import third party lib
try:
# will try to import NAPALM
# https://github.com/napalm-automation/napalm
# pylint: disable=W0611
import napalm_base
# pylint: enable=W0611
HAS_NAPALM = True
except ImportError:
HAS_NAPALM = False
# Import Salt modules
from salt.ext import six
import salt.utils.napalm
@ -163,7 +165,7 @@ DETAILS = {}
def __virtual__():
return HAS_NAPALM or (False, 'Please install the NAPALM library: `pip install napalm`!')
return salt.utils.napalm.virtual(__opts__, 'napalm', __file__)
# ----------------------------------------------------------------------------------------------------------------------
# helper functions -- will not be exported

View file

@ -294,9 +294,11 @@ def get_load(jid):
if not os.path.exists(jid_dir) or not os.path.exists(load_fn):
return {}
serial = salt.payload.Serial(__opts__)
ret = {}
with salt.utils.files.fopen(os.path.join(jid_dir, LOAD_P), 'rb') as rfh:
ret = serial.load(rfh)
if ret is None:
ret = {}
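The `None` guard above matters because an empty serialized payload deserializes to `None`, not `{}`. A sketch of the same normalization using `pickle` in place of `salt.payload.Serial`:

```python
import io
import pickle

def load_payload(buf):
    """Deserialize a job payload, normalizing an empty/None payload to {}
    so callers can always treat the result as a dict."""
    loaded = pickle.load(buf)
    return loaded if loaded is not None else {}

print(load_payload(io.BytesIO(pickle.dumps(None))))
# {}
```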
minions_cache = [os.path.join(jid_dir, MINIONS_P)]
minions_cache.extend(
glob.glob(os.path.join(jid_dir, SYNDIC_MINIONS_P.format('*')))

View file

@ -27,13 +27,6 @@ from __future__ import absolute_import
# Import Python libs
import logging
# Import 3rd-party libs
try:
import requests
HAS_REQUESTS = True
except ImportError:
HAS_REQUESTS = False
# Import Salt Libs
import salt.returners
@ -48,8 +41,6 @@ def __virtual__():
:return: The virtual name of the module.
'''
if not HAS_REQUESTS:
return False
return __virtualname__
@ -61,7 +52,6 @@ def _get_options(ret=None):
:return: Dictionary containing the data and options needed to send
them to telegram.
'''
attrs = {'chat_id': 'chat_id',
'token': 'token'}
@ -81,7 +71,6 @@ def returner(ret):
:param ret: The data to be sent.
:return: Boolean if message was sent successfully.
'''
_options = _get_options(ret)
chat_id = _options.get('chat_id')
@ -105,51 +94,6 @@ def returner(ret):
ret.get('jid'),
returns)
telegram = _post_message(chat_id,
message,
token)
return telegram
def _post_message(chat_id, message, token):
'''
Send a message to a Telegram chat.
:param chat_id: The chat id.
:param message: The message to send to the telegram chat.
:param token: The Telegram API token.
:return: Boolean if message was sent successfully.
'''
url = 'https://api.telegram.org/bot{0}/sendMessage'.format(token)
parameters = dict()
if chat_id:
parameters['chat_id'] = chat_id
if message:
parameters['text'] = message
try:
response = requests.post(
url,
data=parameters
)
result = response.json()
log.debug(
'Raw response of the telegram request is {0}'.format(response))
except Exception:
log.exception(
'Sending telegram api request failed'
)
result = False
if response and 'message_id' in result:
success = True
else:
success = False
log.debug('result {0}'.format(success))
return bool(success)
return __salt__['telegram.post_message'](message,
chat_id=chat_id,
token=token)

View file

@ -43,6 +43,7 @@ class RunnerClient(mixins.SyncClientMixin, mixins.AsyncClientMixin, object):
def __init__(self, opts):
self.opts = opts
self.context = {}
@property
def functions(self):
@ -51,11 +52,13 @@ class RunnerClient(mixins.SyncClientMixin, mixins.AsyncClientMixin, object):
self.utils = salt.loader.utils(self.opts)
# Must be self.functions for mixin to work correctly :-/
try:
self._functions = salt.loader.runner(self.opts, utils=self.utils)
self._functions = salt.loader.runner(
self.opts, utils=self.utils, context=self.context)
except AttributeError:
# Just in case self.utils is still not present (perhaps due to
# problems with the loader), load the runner funcs without them
self._functions = salt.loader.runner(self.opts)
self._functions = salt.loader.runner(
self.opts, context=self.context)
return self._functions

View file

@ -75,6 +75,7 @@ from __future__ import unicode_literals
# Import salt lib
import salt.output
import salt.utils.network
from salt.ext import six
from salt.ext.six.moves import map
@ -812,7 +813,25 @@ def find(addr, best=True, display=_DEFAULT_DISPLAY):
ip = '' # pylint: disable=invalid-name
ipnet = None
results = {}
results = {
'int_net': [],
'int_descr': [],
'int_name': [],
'int_ip': [],
'int_mac': [],
'int_device': [],
'lldp_descr': [],
'lldp_int': [],
'lldp_device': [],
'lldp_mac': [],
'lldp_device_int': [],
'mac_device': [],
'mac_int': [],
'arp_device': [],
'arp_int': [],
'arp_mac': [],
'arp_ip': []
}
if isinstance(addr, int):
results['mac'] = findmac(vlan=addr, display=display)
@ -826,6 +845,8 @@ def find(addr, best=True, display=_DEFAULT_DISPLAY):
except IndexError:
# no problem, let's keep searching
pass
if salt.utils.network.is_ipv6(addr):
mac = False
if not mac:
try:
ip = napalm_helpers.convert(napalm_helpers.ip, addr) # pylint: disable=invalid-name

View file

@ -52,6 +52,7 @@ def sync_all(saltenv='base', extmod_whitelist=None, extmod_blacklist=None):
ret['runners'] = sync_runners(saltenv=saltenv, extmod_whitelist=extmod_whitelist, extmod_blacklist=extmod_blacklist)
ret['wheel'] = sync_wheel(saltenv=saltenv, extmod_whitelist=extmod_whitelist, extmod_blacklist=extmod_blacklist)
ret['engines'] = sync_engines(saltenv=saltenv, extmod_whitelist=extmod_whitelist, extmod_blacklist=extmod_blacklist)
ret['thorium'] = sync_thorium(saltenv=saltenv, extmod_whitelist=extmod_whitelist, extmod_blacklist=extmod_blacklist)
ret['queues'] = sync_queues(saltenv=saltenv, extmod_whitelist=extmod_whitelist, extmod_blacklist=extmod_blacklist)
ret['pillar'] = sync_pillar(saltenv=saltenv, extmod_whitelist=extmod_whitelist, extmod_blacklist=extmod_blacklist)
ret['utils'] = sync_utils(saltenv=saltenv, extmod_whitelist=extmod_whitelist, extmod_blacklist=extmod_blacklist)
@ -303,6 +304,32 @@ def sync_engines(saltenv='base', extmod_whitelist=None, extmod_blacklist=None):
extmod_blacklist=extmod_blacklist)[0]
def sync_thorium(saltenv='base', extmod_whitelist=None, extmod_blacklist=None):
'''
.. versionadded:: Oxygen
Sync Thorium from ``salt://_thorium`` to the master
saltenv: ``base``
The fileserver environment from which to sync. To sync from more than
one environment, pass a comma-separated list.
extmod_whitelist
comma-separated list of modules to sync
extmod_blacklist
comma-separated list of modules to blacklist based on type
CLI Example:
.. code-block:: bash
salt-run saltutil.sync_thorium
'''
return salt.utils.extmods.sync(__opts__, 'thorium', saltenv=saltenv, extmod_whitelist=extmod_whitelist,
extmod_blacklist=extmod_blacklist)[0]
def sync_queues(saltenv='base', extmod_whitelist=None, extmod_blacklist=None):
'''
Sync queue modules from ``salt://_queues`` to the master
@ -381,7 +408,7 @@ def sync_sdb(saltenv='base', extmod_whitelist=None, extmod_blacklist=None):
'''
.. versionadded:: 2017.7.0
Sync utils modules from ``salt://_sdb`` to the master
Sync sdb modules from ``salt://_sdb`` to the master
saltenv : base
The fileserver environment from which to sync. To sync from more than
@ -427,7 +454,7 @@ def sync_cache(saltenv='base', extmod_whitelist=None, extmod_blacklist=None):
'''
.. versionadded:: 2017.7.0
Sync utils modules from ``salt://_cache`` to the master
Sync cache modules from ``salt://_cache`` to the master
saltenv : base
The fileserver environment from which to sync. To sync from more than
@ -453,7 +480,7 @@ def sync_fileserver(saltenv='base', extmod_whitelist=None, extmod_blacklist=None
'''
.. versionadded:: Oxygen
Sync utils modules from ``salt://_fileserver`` to the master
Sync fileserver modules from ``salt://_fileserver`` to the master
saltenv : base
The fileserver environment from which to sync. To sync from more than
@ -479,7 +506,7 @@ def sync_clouds(saltenv='base', extmod_whitelist=None, extmod_blacklist=None):
'''
.. versionadded:: 2017.7.0
Sync utils modules from ``salt://_clouds`` to the master
Sync cloud modules from ``salt://_clouds`` to the master
saltenv : base
The fileserver environment from which to sync. To sync from more than
@ -505,7 +532,7 @@ def sync_roster(saltenv='base', extmod_whitelist=None, extmod_blacklist=None):
'''
.. versionadded:: 2017.7.0
Sync utils modules from ``salt://_roster`` to the master
Sync roster modules from ``salt://_roster`` to the master
saltenv : base
The fileserver environment from which to sync. To sync from more than

View file

@ -22,7 +22,7 @@ def get(uri):
.. code-block:: bash
salt '*' sdb.get sdb://mymemcached/foo
salt-run sdb.get sdb://mymemcached/foo
'''
return salt.utils.sdb.sdb_get(uri, __opts__, __utils__)
@ -37,7 +37,7 @@ def set_(uri, value):
.. code-block:: bash
salt '*' sdb.set sdb://mymemcached/foo bar
salt-run sdb.set sdb://mymemcached/foo bar
'''
return salt.utils.sdb.sdb_set(uri, value, __opts__, __utils__)
@ -52,7 +52,7 @@ def delete(uri):
.. code-block:: bash
salt '*' sdb.delete sdb://mymemcached/foo
salt-run sdb.delete sdb://mymemcached/foo
'''
return salt.utils.sdb.sdb_delete(uri, __opts__, __utils__)

View file

@ -15,6 +15,24 @@ from salt.exceptions import SaltInvocationError
LOGGER = logging.getLogger(__name__)
def set_pause(jid, state_id, duration=None):
'''
Set up a state id pause: instruct a running state to pause at a given
state id. This requires the jid of the running state, and a duration in
seconds can optionally be passed in.
'''
minion = salt.minion.MasterMinion(__opts__)
minion['state.set_pause'](jid, state_id, duration)
def rm_pause(jid, state_id, duration=None):
'''
Remove a pause from a jid, allowing it to continue
'''
minion = salt.minion.MasterMinion(__opts__)
minion['state.rm_pause'](jid, state_id)
def orchestrate(mods,
saltenv='base',
test=None,

View file

@ -1605,7 +1605,24 @@ class State(object):
for ind in items:
if not isinstance(ind, dict):
# Malformed req_in
continue
if ind in high:
_ind_high = [x for x
in high[ind]
if not x.startswith('__')]
ind = {_ind_high[0]: ind}
else:
found = False
for _id in iter(high):
for state in [state for state
in iter(high[_id])
if not state.startswith('__')]:
for j in iter(high[_id][state]):
if isinstance(j, dict) and 'name' in j:
if j['name'] == ind:
ind = {state: _id}
found = True
if not found:
continue
if len(ind) < 1:
continue
pstate = next(iter(ind))
@ -1901,6 +1918,8 @@ class State(object):
if self.mocked:
ret = mock_ret(cdata)
else:
# Check if this low chunk is paused
self.check_pause(low)
# Execute the state function
if not low.get(u'__prereq__') and low.get(u'parallel'):
# run the state call in parallel, but only if not in a prereq
@ -2110,6 +2129,48 @@ class State(object):
return not running[tag][u'result']
return False
def check_pause(self, low):
'''
Check to see if this low chunk has been paused
'''
if not self.jid:
# Can't pause on salt-ssh since we can't track continuous state
return
pause_path = os.path.join(self.opts[u'cachedir'], 'state_pause', self.jid)
start = time.time()
if os.path.isfile(pause_path):
try:
tries = 0
while True:
with salt.utils.files.fopen(pause_path, 'rb') as fp_:
try:
pdat = msgpack.loads(fp_.read())
except msgpack.UnpackValueError:
# Reading race condition
if tries > 10:
# Break out if there are a ton of read errors
return
tries += 1
time.sleep(1)
continue
id_ = low[u'__id__']
key = u''
if id_ in pdat:
key = id_
elif u'__all__' in pdat:
key = u'__all__'
if key:
if u'duration' in pdat[key]:
now = time.time()
if now - start > pdat[key][u'duration']:
return
else:
return
time.sleep(1)
except Exception as exc:
log.error('Failed to read in pause data for file located at: %s (%s)', pause_path, exc)
return
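The pause check above can be condensed into a small sketch. `resolve_pause` and its `pdat` layout (`{state_id: {'duration': N}}`, or an `'__all__'` wildcard key) mirror what `check_pause` reads out of the msgpack pause file; the helper name is illustrative and not part of Salt.

```python
def resolve_pause(pdat, id_, start, now):
    """Mirror the key resolution in State.check_pause: a specific state
    id takes priority over the '__all__' wildcard, and an optional
    'duration' (in seconds) bounds how long the chunk stays paused.
    Returns True while the chunk should keep waiting."""
    key = ''
    if id_ in pdat:
        key = id_
    elif '__all__' in pdat:
        key = '__all__'
    if not key:
        return False  # no pause recorded for this chunk
    if 'duration' in pdat[key]:
        # The pause expires once elapsed time exceeds the duration
        return (now - start) <= pdat[key]['duration']
    return True  # paused until the pause file is removed

pdat = {'install_pkg': {'duration': 30}}
print(resolve_pause(pdat, 'install_pkg', start=0, now=10))  # True: still paused
print(resolve_pause(pdat, 'install_pkg', start=0, now=60))  # False: pause expired
print(resolve_pause(pdat, 'other_state', start=0, now=10))  # False: not paused
```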
def reconcile_procs(self, running):
'''
Check the running dict for processes and resolve them
@ -2579,7 +2640,14 @@ class State(object):
for key, val in six.iteritems(l_dict):
for listen_to in val:
if not isinstance(listen_to, dict):
continue
found = False
for chunk in chunks:
if chunk['__id__'] == listen_to or \
chunk['name'] == listen_to:
listen_to = {chunk['state']: chunk['__id__']}
found = True
if not found:
continue
for lkey, lval in six.iteritems(listen_to):
if (lkey, lval) not in crefs:
rerror = {_l_tag(lkey, lval):
@ -2658,6 +2726,14 @@ class State(object):
except OSError:
log.debug(u'File %s does not exist, no need to cleanup', accum_data_path)
_cleanup_accumulator_data()
if self.jid is not None:
pause_path = os.path.join(self.opts[u'cachedir'], u'state_pause', self.jid)
if os.path.isfile(pause_path):
try:
os.remove(pause_path)
except OSError:
# File is not present, all is well
pass
return ret
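The new listen handling above, which lets a state listen to a bare `__id__` or `name` instead of a `{state: id}` mapping, reduces to a lookup over the low chunks. This standalone helper is a sketch of that logic, not Salt's actual API:

```python
def resolve_listen_target(listen_to, chunks):
    """Rewrite a bare state id (or name) into the {state: __id__}
    mapping form; dicts pass through untouched. Returns None when no
    chunk matches, mirroring the 'if not found: continue' skip."""
    if isinstance(listen_to, dict):
        return listen_to
    for chunk in chunks:
        if chunk['__id__'] == listen_to or chunk['name'] == listen_to:
            return {chunk['state']: chunk['__id__']}
    return None

chunks = [
    {'state': 'service', '__id__': 'httpd_svc', 'name': 'httpd'},
    {'state': 'file', '__id__': 'httpd_conf', 'name': '/etc/httpd.conf'},
]
print(resolve_listen_target('httpd', chunks))        # {'service': 'httpd_svc'}
print(resolve_listen_target({'file': 'x'}, chunks))  # {'file': 'x'}
print(resolve_listen_target('missing', chunks))      # None
```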

View file

@ -85,6 +85,7 @@ def cert(name,
comment += 'would have been renewed'
else:
comment += 'would not have been touched'
ret['result'] = True
ret['comment'] = comment
return ret

View file

@ -13,6 +13,7 @@ import os
import re
import shlex
import stat
import string
import tarfile
from contextlib import closing
@ -771,12 +772,24 @@ def extracted(name,
return ret
urlparsed_source = _urlparse(source_match)
source_hash_basename = urlparsed_source.path or urlparsed_source.netloc
urlparsed_scheme = urlparsed_source.scheme
urlparsed_path = os.path.join(
urlparsed_source.netloc,
urlparsed_source.path).rstrip(os.sep)
source_is_local = urlparsed_source.scheme in salt.utils.files.LOCAL_PROTOS
# urlparsed_scheme will be the drive letter if this is a Windows file path
# This checks for a drive letter as the scheme and changes it to file
if urlparsed_scheme and \
urlparsed_scheme.lower() in string.ascii_lowercase:
urlparsed_path = ':'.join([urlparsed_scheme, urlparsed_path])
urlparsed_scheme = 'file'
source_hash_basename = urlparsed_path or urlparsed_source.netloc
source_is_local = urlparsed_scheme in salt.utils.files.LOCAL_PROTOS
if source_is_local:
# Get rid of "file://" from start of source_match
source_match = os.path.realpath(os.path.expanduser(urlparsed_source.path))
source_match = os.path.realpath(os.path.expanduser(urlparsed_path))
if not os.path.isfile(source_match):
ret['comment'] = 'Source file \'{0}\' does not exist'.format(
salt.utils.url.redact_http_basic_auth(source_match))
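The drive-letter handling added above exists because `urlparse` treats the drive letter of a Windows path as a URL scheme. A minimal sketch of that normalization, using the stdlib `urllib.parse` (the state itself goes through six's `_urlparse` shim), with `normalize_source` as an illustrative name:

```python
import os
import string
from urllib.parse import urlparse

def normalize_source(source):
    """Sketch of the drive-letter fix: urlparse('C:\\salt\\a.tgz')
    reports the drive letter 'c' as the URL scheme, so a single-letter
    scheme is stitched back onto the path and treated as 'file'."""
    parsed = urlparse(source)
    scheme = parsed.scheme
    path = os.path.join(parsed.netloc, parsed.path).rstrip(os.sep)
    if scheme and scheme.lower() in string.ascii_lowercase:
        # Rejoin the drive letter with the rest of the path
        path = ':'.join([scheme, path])
        scheme = 'file'
    return scheme, path

print(normalize_source('C:\\salt\\archive.tgz'))    # ('file', 'c:\\salt\\archive.tgz')
print(normalize_source('file:///tmp/archive.tgz'))  # ('file', '/tmp/archive.tgz')
```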

View file

@ -1498,13 +1498,8 @@ def accept_vpc_peering_connection(name=None, conn_id=None, conn_name=None,
'''
log.debug('Called state to accept VPC peering connection')
pending = __salt__['boto_vpc.is_peering_connection_pending'](
conn_id=conn_id,
conn_name=conn_name,
region=region,
key=key,
keyid=keyid,
profile=profile
)
conn_id=conn_id, conn_name=conn_name, region=region, key=key,
keyid=keyid, profile=profile)
ret = {
'name': name,
@ -1515,30 +1510,25 @@ def accept_vpc_peering_connection(name=None, conn_id=None, conn_name=None,
if not pending:
ret['result'] = True
ret['changes'].update({
'old': 'No pending VPC peering connection found. '
'Nothing to be done.'
})
ret['changes'].update({'old':
'No pending VPC peering connection found. Nothing to be done.'})
return ret
if __opts__['test']:
ret['changes'].update({'old': 'Pending VPC peering connection found '
'and can be accepted'})
ret['changes'].update({'old':
'Pending VPC peering connection found and can be accepted'})
return ret
log.debug('Calling module to accept this VPC peering connection')
result = __salt__['boto_vpc.accept_vpc_peering_connection'](
conn_id=conn_id, name=conn_name, region=region, key=key,
fun = 'boto_vpc.accept_vpc_peering_connection'
log.debug('Calling `{0}()` to accept this VPC peering connection'.format(fun))
result = __salt__[fun](conn_id=conn_id, name=conn_name, region=region, key=key,
keyid=keyid, profile=profile)
if 'error' in result:
ret['comment'] = "Failed to request VPC peering: {0}".format(result['error'])
ret['comment'] = "Failed to accept VPC peering: {0}".format(result['error'])
ret['result'] = False
return ret
ret['changes'].update({
'old': '',
'new': result['msg']
})
ret['changes'].update({'old': '', 'new': result['msg']})
return ret

View file

@ -392,9 +392,9 @@ def absent(name,
The special keyword used in the job (e.g. @reboot, @hourly...).
Quotes must be used, otherwise PyYAML will strip the '@' sign.
'''
### NOTE: The keyword arguments in **kwargs are ignored in this state, but
### cannot be removed from the function definition, otherwise the use
### of unsupported arguments will result in a traceback.
# NOTE: The keyword arguments in **kwargs are ignored in this state, but
# cannot be removed from the function definition, otherwise the use
# of unsupported arguments will result in a traceback.
name = name.strip()
if identifier is False:
@ -566,6 +566,7 @@ def file(name,
user,
group,
mode,
[], # no special attrs for cron
template,
context,
defaults,

View file

@ -761,7 +761,8 @@ def _check_directory_win(name,
win_owner,
win_perms=None,
win_deny_perms=None,
win_inheritance=None):
win_inheritance=None,
win_perms_reset=None):
'''
Check what changes need to be made on a directory
'''
@ -879,6 +880,20 @@ def _check_directory_win(name,
if not win_inheritance == salt.utils.win_dacl.get_inheritance(name):
changes['inheritance'] = win_inheritance
# Check reset
if win_perms_reset:
for user_name in perms:
if user_name not in win_perms:
if 'grant' in perms[user_name] and not perms[user_name]['grant']['inherited']:
if 'remove_perms' not in changes:
changes['remove_perms'] = {}
changes['remove_perms'].update({user_name: perms[user_name]})
if user_name not in win_deny_perms:
if 'deny' in perms[user_name] and not perms[user_name]['deny']['inherited']:
if 'remove_perms' not in changes:
changes['remove_perms'] = {}
changes['remove_perms'].update({user_name: perms[user_name]})
if changes:
return None, 'The directory "{0}" will be changed'.format(name), changes
@ -1566,6 +1581,7 @@ def managed(name,
win_perms=None,
win_deny_perms=None,
win_inheritance=True,
win_perms_reset=False,
**kwargs):
r'''
Manage a given file, this function allows for a file to be downloaded from
@ -2072,6 +2088,13 @@ def managed(name,
.. versionadded:: 2017.7.0
win_perms_reset : False
If ``True`` the existing DACL will be cleared and replaced with the
settings defined in this function. If ``False``, new entries will be
appended to the existing DACL. Default is ``False``.
.. versionadded:: Oxygen
Here's an example using the above ``win_*`` parameters:
.. code-block:: yaml
@ -2314,8 +2337,13 @@ def managed(name,
# Check and set the permissions if necessary
if salt.utils.platform.is_windows():
ret = __salt__['file.check_perms'](
name, ret, win_owner, win_perms, win_deny_perms, None,
win_inheritance)
path=name,
ret=ret,
owner=win_owner,
grant_perms=win_perms,
deny_perms=win_deny_perms,
inheritance=win_inheritance,
reset=win_perms_reset)
else:
ret, _ = __salt__['file.check_perms'](
name, ret, user, group, mode, attrs, follow_symlinks)
@ -2356,8 +2384,13 @@ def managed(name,
if salt.utils.platform.is_windows():
ret = __salt__['file.check_perms'](
name, ret, win_owner, win_perms, win_deny_perms, None,
win_inheritance)
path=name,
ret=ret,
owner=win_owner,
grant_perms=win_perms,
deny_perms=win_deny_perms,
inheritance=win_inheritance,
reset=win_perms_reset)
if isinstance(ret['pchanges'], tuple):
ret['result'], ret['comment'] = ret['pchanges']
@ -2448,6 +2481,7 @@ def managed(name,
win_perms=win_perms,
win_deny_perms=win_deny_perms,
win_inheritance=win_inheritance,
win_perms_reset=win_perms_reset,
encoding=encoding,
encoding_errors=encoding_errors,
**kwargs)
@ -2517,6 +2551,7 @@ def managed(name,
win_perms=win_perms,
win_deny_perms=win_deny_perms,
win_inheritance=win_inheritance,
win_perms_reset=win_perms_reset,
encoding=encoding,
encoding_errors=encoding_errors,
**kwargs)
@ -2590,6 +2625,7 @@ def directory(name,
win_perms=None,
win_deny_perms=None,
win_inheritance=True,
win_perms_reset=False,
**kwargs):
r'''
Ensure that a named directory is present and has the right perms
@ -2751,6 +2787,13 @@ def directory(name,
.. versionadded:: 2017.7.0
win_perms_reset : False
If ``True`` the existing DACL will be cleared and replaced with the
settings defined in this function. If ``False``, new entries will be
appended to the existing DACL. Default is ``False``.
.. versionadded:: Oxygen
Here's an example using the above ``win_*`` parameters:
.. code-block:: yaml
@ -2855,13 +2898,23 @@ def directory(name,
elif force:
# Remove whatever is in the way
if os.path.isfile(name):
os.remove(name)
ret['changes']['forced'] = 'File was forcibly replaced'
if __opts__['test']:
ret['pchanges']['forced'] = 'File was forcibly replaced'
else:
os.remove(name)
ret['changes']['forced'] = 'File was forcibly replaced'
elif __salt__['file.is_link'](name):
__salt__['file.remove'](name)
ret['changes']['forced'] = 'Symlink was forcibly replaced'
if __opts__['test']:
ret['pchanges']['forced'] = 'Symlink was forcibly replaced'
else:
__salt__['file.remove'](name)
ret['changes']['forced'] = 'Symlink was forcibly replaced'
else:
__salt__['file.remove'](name)
if __opts__['test']:
ret['pchanges']['forced'] = 'Directory was forcibly replaced'
else:
__salt__['file.remove'](name)
ret['changes']['forced'] = 'Directory was forcibly replaced'
else:
if os.path.isfile(name):
return _error(
@ -2874,17 +2927,26 @@ def directory(name,
# Check directory?
if salt.utils.platform.is_windows():
presult, pcomment, ret['pchanges'] = _check_directory_win(
name, win_owner, win_perms, win_deny_perms, win_inheritance)
presult, pcomment, pchanges = _check_directory_win(
name=name,
win_owner=win_owner,
win_perms=win_perms,
win_deny_perms=win_deny_perms,
win_inheritance=win_inheritance,
win_perms_reset=win_perms_reset)
else:
presult, pcomment, ret['pchanges'] = _check_directory(
presult, pcomment, pchanges = _check_directory(
name, user, group, recurse or [], dir_mode, clean, require,
exclude_pat, max_depth, follow_symlinks)
if __opts__['test']:
if pchanges:
ret['pchanges'].update(pchanges)
# Don't run through the rest of the function if there are no changes to be
# made
if not ret['pchanges'] or __opts__['test']:
ret['result'] = presult
ret['comment'] = pcomment
ret['changes'] = ret['pchanges']
return ret
if not os.path.isdir(name):
@ -2900,8 +2962,13 @@ def directory(name,
if not os.path.isdir(drive):
return _error(
ret, 'Drive {0} is not mapped'.format(drive))
__salt__['file.makedirs'](name, win_owner, win_perms,
win_deny_perms, win_inheritance)
__salt__['file.makedirs'](
path=name,
owner=win_owner,
grant_perms=win_perms,
deny_perms=win_deny_perms,
inheritance=win_inheritance,
reset=win_perms_reset)
else:
__salt__['file.makedirs'](name, user=user, group=group,
mode=dir_mode)
@ -2910,8 +2977,13 @@ def directory(name,
ret, 'No directory to create {0} in'.format(name))
if salt.utils.platform.is_windows():
__salt__['file.mkdir'](name, win_owner, win_perms, win_deny_perms,
win_inheritance)
__salt__['file.mkdir'](
path=name,
owner=win_owner,
grant_perms=win_perms,
deny_perms=win_deny_perms,
inheritance=win_inheritance,
reset=win_perms_reset)
else:
__salt__['file.mkdir'](name, user=user, group=group, mode=dir_mode)
@ -2925,7 +2997,13 @@ def directory(name,
if not children_only:
if salt.utils.platform.is_windows():
ret = __salt__['file.check_perms'](
name, ret, win_owner, win_perms, win_deny_perms, None, win_inheritance)
path=name,
ret=ret,
owner=win_owner,
grant_perms=win_perms,
deny_perms=win_deny_perms,
inheritance=win_inheritance,
reset=win_perms_reset)
else:
ret, perms = __salt__['file.check_perms'](
name, ret, user, group, dir_mode, None, follow_symlinks)
@ -2996,8 +3074,13 @@ def directory(name,
try:
if salt.utils.platform.is_windows():
ret = __salt__['file.check_perms'](
full, ret, win_owner, win_perms, win_deny_perms, None,
win_inheritance)
path=full,
ret=ret,
owner=win_owner,
grant_perms=win_perms,
deny_perms=win_deny_perms,
inheritance=win_inheritance,
reset=win_perms_reset)
else:
ret, _ = __salt__['file.check_perms'](
full, ret, user, group, file_mode, None, follow_symlinks)
@ -3011,8 +3094,13 @@ def directory(name,
try:
if salt.utils.platform.is_windows():
ret = __salt__['file.check_perms'](
full, ret, win_owner, win_perms, win_deny_perms, None,
win_inheritance)
path=full,
ret=ret,
owner=win_owner,
grant_perms=win_perms,
deny_perms=win_deny_perms,
inheritance=win_inheritance,
reset=win_perms_reset)
else:
ret, _ = __salt__['file.check_perms'](
full, ret, user, group, dir_mode, None, follow_symlinks)
@ -3034,7 +3122,8 @@ def directory(name,
if children_only:
ret['comment'] = u'Directory {0}/* updated'.format(name)
else:
ret['comment'] = u'Directory {0} updated'.format(name)
if ret['changes']:
ret['comment'] = u'Directory {0} updated'.format(name)
if __opts__['test']:
ret['comment'] = 'Directory {0} not updated'.format(name)

View file

@ -82,8 +82,9 @@ import logging
# Import Salt Libs
from salt.exceptions import CommandExecutionError
import salt.utils.path
from salt.output import nested
import salt.utils.path
import salt.utils.versions
log = logging.getLogger(__name__)
@ -231,8 +232,10 @@ def present(name,
# if prune_services == None, set to True and log a deprecation warning
if prune_services is None:
prune_services = True
salt.utils.warn_until('Neon',
'The \'prune_services\' argument default is currently True, but will be changed to True in future releases.')
salt.utils.versions.warn_until(
'Neon',
'The \'prune_services\' argument default is currently True, '
'but will be changed to False in future releases.')
ret = _present(name, block_icmp, prune_block_icmp, default, masquerade, ports, prune_ports,
port_fwd, prune_port_fwd, services, prune_services, interfaces, prune_interfaces,

View file

@ -204,7 +204,7 @@ def present(name, entry=None, family='ipv4', **kwargs):
entry_opts = 'timeout {0} {1}'.format(kwargs['timeout'], entry_opts)
if 'comment' in kwargs and 'comment' not in entry_opts:
entry_opts = '{0} comment "{1}"'.format(entry_opts, kwargs['comment'])
_entry = ' '.join([entry, entry_opts]).strip()
_entry = ' '.join([entry, entry_opts.lstrip()]).strip()
if __salt__['ipset.check'](kwargs['set_name'],
_entry,
@ -221,7 +221,7 @@ def present(name, entry=None, family='ipv4', **kwargs):
kwargs['set_name'],
family)
else:
command = __salt__['ipset.add'](kwargs['set_name'], entry, family, **kwargs)
command = __salt__['ipset.add'](kwargs['set_name'], _entry, family, **kwargs)
if 'Error' not in command:
ret['changes'] = {'locale': name}
ret['comment'] += 'entry {0} added to set {1} for family {2}\n'.format(

159
salt/states/nexus.py Normal file
View file

@ -0,0 +1,159 @@
# -*- coding: utf-8 -*-
'''
This state downloads artifacts from Nexus 3.x.
.. versionadded:: Oxygen
'''
# Import python libs
from __future__ import absolute_import
import logging
log = logging.getLogger(__name__)
__virtualname__ = 'nexus'
def __virtual__():
'''
Set the virtual name for the module
'''
return __virtualname__
def downloaded(name, artifact, target_dir='/tmp', target_file=None):
'''
Ensures that the artifact from nexus exists at the given location. If it doesn't exist, then
it will be downloaded. If it already exists, the checksum of the existing file is checked
against the checksum in nexus. If they differ, the step will fail.
artifact
Details of the artifact to be downloaded from nexus. Various options are:
- nexus_url: URL of the nexus instance
- repository: Repository in nexus
- artifact_id: Artifact ID
- group_id: Group ID
- packaging: Packaging
- classifier: Classifier
- version: Version
One of the following:
- Version to download
- ``latest`` - Download the latest release of this artifact
- ``latest_snapshot`` - Download the latest snapshot for this artifact
- username: nexus username
- password: nexus password
target_dir
Directory where the artifact should be downloaded. By default it is downloaded to the /tmp directory.
target_file
Target file to download the artifact to. By default the file name is resolved by nexus.
An example to download an artifact to a specific file:
.. code-block:: yaml
jboss_module_downloaded:
nexus.downloaded:
- artifact:
nexus_url: http://nexus.intranet.example.com/repository
repository: 'libs-release-local'
artifact_id: 'module'
group_id: 'com.company.module'
packaging: 'jar'
classifier: 'sources'
version: '1.0'
- target_file: /opt/jboss7/modules/com/company/lib/module.jar
Download artifact to the folder (automatically resolves file name):
.. code-block:: yaml
maven_artifact_downloaded:
nexus.downloaded:
- artifact:
nexus_url: http://nexus.intranet.example.com/repository
repository: 'maven-releases'
artifact_id: 'module'
group_id: 'com.company.module'
packaging: 'zip'
classifier: 'dist'
version: '1.0'
- target_dir: /opt/maven/modules/com/company/release
'''
log.debug(" ======================== STATE: nexus.downloaded (name: %s) ", name)
ret = {'name': name,
'result': True,
'changes': {},
'comment': ''}
try:
fetch_result = __fetch_from_nexus(artifact, target_dir, target_file)
except Exception as exc:
ret['result'] = False
ret['comment'] = str(exc)
return ret
log.debug("fetch_result=%s", str(fetch_result))
ret['result'] = fetch_result['status']
ret['comment'] = fetch_result['comment']
ret['changes'] = fetch_result['changes']
log.debug("ret=%s", str(ret))
return ret
def __fetch_from_nexus(artifact, target_dir, target_file):
nexus_url = artifact['nexus_url']
repository = artifact['repository']
group_id = artifact['group_id']
artifact_id = artifact['artifact_id']
packaging = artifact['packaging'] if 'packaging' in artifact else 'jar'
classifier = artifact['classifier'] if 'classifier' in artifact else None
username = artifact['username'] if 'username' in artifact else None
password = artifact['password'] if 'password' in artifact else None
version = artifact['version'] if 'version' in artifact else None
# determine module function to use
if version == 'latest_snapshot':
function = 'nexus.get_latest_snapshot'
version_param = False
elif version == 'latest':
function = 'nexus.get_latest_release'
version_param = False
elif version.endswith('SNAPSHOT'):
function = 'nexus.get_snapshot'
version_param = True
else:
function = 'nexus.get_release'
version_param = True
if version_param:
fetch_result = __salt__[function](nexus_url=nexus_url,
repository=repository,
group_id=group_id,
artifact_id=artifact_id,
packaging=packaging,
classifier=classifier,
target_dir=target_dir,
target_file=target_file,
username=username,
password=password,
version=version)
else:
fetch_result = __salt__[function](nexus_url=nexus_url,
repository=repository,
group_id=group_id,
artifact_id=artifact_id,
packaging=packaging,
classifier=classifier,
target_dir=target_dir,
target_file=target_file,
username=username,
password=password)
return fetch_result
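The version-to-function dispatch in `__fetch_from_nexus` can be isolated into a small helper. `pick_fetch_function` is an illustrative name; the `nexus.*` function names come straight from the state above:

```python
def pick_fetch_function(version):
    """Map the requested version string to the nexus execution-module
    function and whether a 'version' kwarg must be passed along."""
    if version == 'latest_snapshot':
        return 'nexus.get_latest_snapshot', False
    if version == 'latest':
        return 'nexus.get_latest_release', False
    if version.endswith('SNAPSHOT'):
        return 'nexus.get_snapshot', True
    return 'nexus.get_release', True

print(pick_fetch_function('1.0'))           # ('nexus.get_release', True)
print(pick_fetch_function('1.1-SNAPSHOT'))  # ('nexus.get_snapshot', True)
print(pick_fetch_function('latest'))        # ('nexus.get_latest_release', False)
```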

View file

@ -787,28 +787,15 @@ def runner(name, **kwargs):
runner_return = out.get('return')
if isinstance(runner_return, dict) and 'Error' in runner_return:
out['success'] = False
if not out.get('success', True):
cmt = "Runner function '{0}' failed{1}.".format(
name,
' with return {0}'.format(runner_return) if runner_return else '',
)
ret = {
'name': name,
'result': False,
'changes': {},
'comment': cmt,
}
else:
cmt = "Runner function '{0}' executed{1}.".format(
name,
' with return {0}'.format(runner_return) if runner_return else '',
)
ret = {
'name': name,
'result': True,
'changes': {},
'comment': cmt,
}
success = out.get('success', True)
ret = {'name': name,
'changes': {'return': runner_return},
'result': success}
ret['comment'] = "Runner function '{0}' {1}.".format(
name,
'executed' if success else 'failed',
)
ret['__orchestration__'] = True
if 'jid' in out:
@ -1039,15 +1026,21 @@ def wheel(name, **kwargs):
__env__=__env__,
**kwargs)
ret['result'] = True
wheel_return = out.get('return')
if isinstance(wheel_return, dict) and 'Error' in wheel_return:
out['success'] = False
success = out.get('success', True)
ret = {'name': name,
'changes': {'return': wheel_return},
'result': success}
ret['comment'] = "Wheel function '{0}' {1}.".format(
name,
'executed' if success else 'failed',
)
ret['__orchestration__'] = True
if 'jid' in out:
ret['__jid__'] = out['jid']
runner_return = out.get('return')
ret['comment'] = "Wheel function '{0}' executed{1}.".format(
name,
' with return {0}'.format(runner_return) if runner_return else '',
)
return ret

View file

@ -451,10 +451,10 @@ def format_call(fun,
continue
extra[key] = copy.deepcopy(value)
# We'll be showing errors to the users until Salt Oxygen comes out, after
# We'll be showing errors to the users until Salt Fluorine comes out, after
# which, errors will be raised instead.
salt.utils.versions.warn_until(
'Oxygen',
'Fluorine',
'It\'s time to start raising `SaltInvocationError` instead of '
'returning warnings',
# Let's not show the deprecation warning on the console, there's no
@ -491,7 +491,7 @@ def format_call(fun,
'{0}. If you were trying to pass additional data to be used '
'in a template context, please populate \'context\' with '
'\'key: value\' pairs. Your approach will work until Salt '
'Oxygen is out.{1}'.format(
'Fluorine is out.{1}'.format(
msg,
'' if 'full' not in ret else ' Please update your state files.'
)

View file

@ -27,13 +27,14 @@ from jinja2.exceptions import TemplateRuntimeError
from jinja2.ext import Extension
# Import salt libs
from salt.exceptions import TemplateError
import salt.fileclient
import salt.utils.data
import salt.utils.files
import salt.utils.url
import salt.utils.yamldumper
from salt.utils.decorators.jinja import jinja_filter, jinja_test, jinja_global
from salt.utils.odict import OrderedDict
from salt.exceptions import TemplateError
log = logging.getLogger(__name__)
@ -44,18 +45,6 @@ __all__ = [
GLOBAL_UUID = uuid.UUID('91633EBF-1C86-5E33-935A-28061F4B480E')
# To dump OrderedDict objects as regular dicts. Used by the yaml
# template filter.
class OrderedDictDumper(yaml.Dumper): # pylint: disable=W0232
pass
yaml.add_representer(OrderedDict,
yaml.representer.SafeRepresenter.represent_dict,
Dumper=OrderedDictDumper)
class SaltCacheLoader(BaseLoader):
'''
@ -796,8 +785,8 @@ class SerializerExtension(Extension, object):
return Markup(json.dumps(value, sort_keys=sort_keys, indent=indent).strip())
def format_yaml(self, value, flow_style=True):
yaml_txt = yaml.dump(value, default_flow_style=flow_style,
Dumper=OrderedDictDumper).strip()
yaml_txt = salt.utils.yamldumper.safe_dump(
value, default_flow_style=flow_style).strip()
if yaml_txt.endswith('\n...'):
yaml_txt = yaml_txt[:len(yaml_txt)-4]
return Markup(yaml_txt)
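The `format_yaml` change above replaces the module-local `OrderedDictDumper` with `salt.utils.yamldumper.safe_dump`, which performs the same representer registration in one shared place. A minimal sketch of that registration, assuming PyYAML is installed:

```python
from collections import OrderedDict

import yaml  # PyYAML


class OrderedDumper(yaml.SafeDumper):
    """SafeDumper that serializes OrderedDict as a plain YAML mapping
    instead of failing with a RepresenterError."""
    pass


# Reuse the plain-dict representer for OrderedDict, as the removed
# OrderedDictDumper did.
yaml.add_representer(
    OrderedDict,
    yaml.representer.SafeRepresenter.represent_dict,
    Dumper=OrderedDumper)

data = OrderedDict([('a', 1)])
print(yaml.dump(data, Dumper=OrderedDumper, default_flow_style=True).strip())
# {a: 1}
```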

View file

@ -14,6 +14,7 @@ Utils for the NAPALM modules and proxy.
.. versionadded:: 2017.7.0
'''
# Import Python libs
from __future__ import absolute_import
import traceback
@ -22,20 +23,31 @@ import importlib
from functools import wraps
# Import Salt libs
from salt.ext import six as six
import salt.output
import salt.utils.platform
# Import 3rd-party libs
from salt.ext import six
# Import third party libs
try:
# will try to import NAPALM
# https://github.com/napalm-automation/napalm
# pylint: disable=W0611
import napalm_base
import napalm
import napalm.base as napalm_base
# pylint: enable=W0611
HAS_NAPALM = True
HAS_NAPALM_BASE = False # doesn't matter anymore, but needed for the logic below
try:
NAPALM_MAJOR = int(napalm.__version__.split('.')[0])
except AttributeError:
NAPALM_MAJOR = 0
except ImportError:
HAS_NAPALM = False
try:
import napalm_base
HAS_NAPALM_BASE = True
except ImportError:
HAS_NAPALM_BASE = False
try:
# try importing ConnectionClosedException
@ -81,7 +93,7 @@ def virtual(opts, virtualname, filename):
'''
Returns the __virtual__.
'''
if HAS_NAPALM and (is_proxy(opts) or is_minion(opts)):
if ((HAS_NAPALM and NAPALM_MAJOR >= 2) or HAS_NAPALM_BASE) and (is_proxy(opts) or is_minion(opts)):
return virtualname
else:
return (
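
The import logic above prefers napalm 2.x (where `napalm.base` supersedes the standalone `napalm_base` package) and keys the `__virtual__` decision on the parsed major version. A minimal standalone sketch of that version parse, including the `AttributeError` fallback for builds without a `__version__` attribute (`None` stands in for such a module here):

```python
def napalm_major(version_attr):
    # Mirror the hunk above: take the integer before the first dot,
    # falling back to 0 when the version attribute is missing.
    try:
        return int(version_attr.split('.')[0])
    except AttributeError:
        return 0
```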


@ -2157,9 +2157,12 @@ class SaltCMDOptionParser(six.with_metaclass(OptionParserMeta,
self.config['arg'].append([])
else:
self.config['arg'][cmd_index].append(arg)
if len(self.config['fun']) != len(self.config['arg']):
if len(self.config['fun']) > len(self.config['arg']):
self.exit(42, 'Cannot execute compound command without '
'defining all arguments.\n')
elif len(self.config['fun']) < len(self.config['arg']):
self.exit(42, 'Cannot execute compound command with more '
'arguments than commands.\n')
# parse the args and kwargs before sending to the publish
# interface
for i in range(len(self.config['arg'])):
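
The change above splits one strict length check into two directional errors, so the user is told whether a command is missing its arguments or there are stray argument lists. A standalone sketch of the rule (function name hypothetical):

```python
def check_compound(funs, args):
    # One argument list per function: too few lists means a command is
    # missing its args; too many means stray arguments were supplied.
    if len(funs) > len(args):
        return 'Cannot execute compound command without defining all arguments.'
    if len(funs) < len(args):
        return 'Cannot execute compound command with more arguments than commands.'
    return None
```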


@ -344,3 +344,60 @@ def sanitize_win_path(winpath):
elif isinstance(winpath, six.text_type):
winpath = winpath.translate(dict((ord(c), u'_') for c in intab))
return winpath
def safe_path(path, allow_path=None):
r'''
.. versionadded:: 2017.7.3
Checks that the path is safe for modification by Salt. For example, you
wouldn't want to have salt delete the contents of ``C:\Windows``. The
following directories are considered unsafe:
- C:\, D:\, E:\, etc.
- \
- C:\Windows
Args:
path (str): The path to check
allow_path (str, list): A directory or list of directories inside of
path that may be safe. For example: ``C:\Windows\TEMP``
Returns:
bool: True if safe, otherwise False
'''
# Create regex definitions for directories that may be unsafe to modify
system_root = os.environ.get('SystemRoot', 'C:\\Windows')
deny_paths = (
r'[a-z]\:\\$', # C:\, D:\, etc
r'\\$', # \
re.escape(system_root) # C:\Windows
)
# Make allow_path a list
if allow_path and not isinstance(allow_path, list):
allow_path = [allow_path]
# Create regex definition for directories we may want to make exceptions for
allow_paths = list()
if allow_path:
for item in allow_path:
allow_paths.append(re.escape(item))
# Check the path to make sure it's not one of the bad paths
good_path = True
for d_path in deny_paths:
if re.match(d_path, path, flags=re.IGNORECASE) is not None:
# Found deny path
good_path = False
# If local_dest is one of the bad paths, check for exceptions
if not good_path:
for a_path in allow_paths:
if re.match(a_path, path, flags=re.IGNORECASE) is not None:
# Found exception
good_path = True
return good_path
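
Reproducing the deny/allow regex logic above as a self-contained sketch makes the behavior easy to exercise: drive roots, a bare backslash, and anything under `%SystemRoot%` are denied unless an explicit allow entry matches first.

```python
import os
import re

def safe_path(path, allow_path=None):
    # Sketch of the check above: deny drive roots (C:\), a bare
    # backslash, and the system root, unless an allow entry matches.
    system_root = os.environ.get('SystemRoot', 'C:\\Windows')
    deny_paths = (r'[a-z]\:\\$', r'\\$', re.escape(system_root))
    if allow_path and not isinstance(allow_path, list):
        allow_path = [allow_path]
    allow_res = [re.escape(item) for item in (allow_path or [])]
    good = not any(re.match(d, path, flags=re.IGNORECASE) for d in deny_paths)
    if not good:
        # Denied path: check for an explicit exception.
        good = any(re.match(a, path, flags=re.IGNORECASE) for a in allow_res)
    return good
```

Note that the system-root pattern is unanchored, so it also denies subdirectories of `C:\Windows`; that is exactly what the `allow_path` escape hatch is for.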


@ -104,11 +104,12 @@ def dict_search_and_replace(d, old, new, expanded):
def find_value_to_expand(x, v):
a = x
for i in v[2:-1].split(':'):
if a is None:
return v
if i in a:
a = a.get(i)
else:
a = v
return a
return v
return a
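
The fix above makes the expansion bail out with the literal reference both when a key is missing and when the walk reaches a `None` value (previously a `TypeError` risk on `i in a`). A standalone sketch of the fixed function:

```python
def find_value_to_expand(x, v):
    # Walk a colon-separated reference like '${network:dns:srv1}'
    # through nested dicts; return the resolved value, or the literal
    # reference when any step is missing or the walk hits None.
    a = x
    for i in v[2:-1].split(':'):
        if a is None:
            return v
        if i in a:
            a = a.get(i)
        else:
            return v
    return a
```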


@ -89,6 +89,28 @@ localtime.
This will schedule the command: ``state.sls httpd test=True`` at 5:00 PM on
Monday, Wednesday and Friday, and 3:00 PM on Tuesday and Thursday.
The Salt scheduler also allows custom phrases to be used for the `when`
parameter. These `whens` can be stored as either pillar values or
grain values.
.. code-block:: yaml
whens:
tea time: 1:40pm
deployment time: Friday 5:00pm
.. code-block:: yaml
schedule:
job1:
function: state.sls
args:
- httpd
kwargs:
test: True
when:
- 'tea time'
.. code-block:: yaml
schedule:
@ -333,7 +355,6 @@ import logging
import errno
import random
import yaml
import copy
# Import Salt libs
import salt.config
@ -409,6 +430,7 @@ class Schedule(object):
self.proxy = proxy
self.functions = functions
self.standalone = standalone
self.skip_function = None
if isinstance(intervals, dict):
self.intervals = intervals
else:
@ -745,6 +767,69 @@ class Schedule(object):
evt.fire_event({'complete': True},
tag='/salt/minion/minion_schedule_saved')
def postpone_job(self, name, data):
'''
Postpone a job in the scheduler.
Ignores jobs from pillar
'''
time = data['time']
new_time = data['new_time']
# ensure job exists, then disable it
if name in self.opts['schedule']:
if 'skip_explicit' not in self.opts['schedule'][name]:
self.opts['schedule'][name]['skip_explicit'] = []
self.opts['schedule'][name]['skip_explicit'].append(time)
if 'run_explicit' not in self.opts['schedule'][name]:
self.opts['schedule'][name]['run_explicit'] = []
self.opts['schedule'][name]['run_explicit'].append(new_time)
elif name in self._get_schedule(include_opts=False):
log.warning('Cannot modify job {0}, '
'it\'s in the pillar!'.format(name))
# Fire the complete event back along with updated list of schedule
evt = salt.utils.event.get_event('minion', opts=self.opts, listen=False)
evt.fire_event({'complete': True, 'schedule': self._get_schedule()},
tag='/salt/minion/minion_schedule_postpone_job_complete')
def skip_job(self, name, data):
'''
Skip a job at a specific time in the scheduler.
Ignores jobs from pillar
'''
time = data['time']
# ensure job exists, then disable it
if name in self.opts['schedule']:
if 'skip_explicit' not in self.opts['schedule'][name]:
self.opts['schedule'][name]['skip_explicit'] = []
self.opts['schedule'][name]['skip_explicit'].append(time)
elif name in self._get_schedule(include_opts=False):
log.warning('Cannot modify job {0}, '
'it\'s in the pillar!'.format(name))
# Fire the complete event back along with updated list of schedule
evt = salt.utils.event.get_event('minion', opts=self.opts, listen=False)
evt.fire_event({'complete': True, 'schedule': self._get_schedule()},
tag='/salt/minion/minion_schedule_skip_job_complete')
def get_next_fire_time(self, name):
'''
Return the next fire time for the named job in the scheduler.
'''
_next_fire_time = None
schedule = self._get_schedule()
if schedule and name in schedule:
_next_fire_time = schedule[name]['_next_fire_time']
# Fire the complete event back along with updated list of schedule
evt = salt.utils.event.get_event('minion', opts=self.opts, listen=False)
evt.fire_event({'complete': True, 'next_fire_time': _next_fire_time},
tag='/salt/minion/minion_schedule_next_fire_time_complete')
def handle_func(self, multiprocessing_enabled, func, data):
'''
Execute this method in a multiprocess or thread
@ -948,11 +1033,16 @@ class Schedule(object):
# Let's make sure we exit the process!
sys.exit(salt.defaults.exitcodes.EX_GENERIC)
def eval(self):
def eval(self, now=None):
'''
Evaluate and execute the schedule
:param int now: Override the current time with a Unix timestamp
'''
log.trace('==== evaluating schedule =====')
def _splay(splaytime):
'''
Calculate splaytime
@ -974,9 +1064,13 @@ class Schedule(object):
raise ValueError('Schedule must be of type dict.')
if 'enabled' in schedule and not schedule['enabled']:
return
if 'skip_function' in schedule:
self.skip_function = schedule['skip_function']
for job, data in six.iteritems(schedule):
if job == 'enabled' or not data:
continue
if job == 'skip_function' or not data:
continue
if not isinstance(data, dict):
log.error('Scheduled job "{0}" should have a dict value, not {1}'.format(job, type(data)))
continue
@ -1011,7 +1105,8 @@ class Schedule(object):
'_run_on_start' not in data:
data['_run_on_start'] = True
now = int(time.time())
if not now:
now = int(time.time())
if 'until' in data:
if not _WHEN_SUPPORTED:
@ -1065,6 +1160,23 @@ class Schedule(object):
'", "'.join(scheduling_elements)))
continue
if 'run_explicit' in data:
_run_explicit = data['run_explicit']
if isinstance(_run_explicit, six.string_types):
_run_explicit = [_run_explicit]
# Copy the list so we can loop through it
for i in copy.deepcopy(_run_explicit):
if len(_run_explicit) > 1:
if i < now - self.opts['loop_interval']:
_run_explicit.remove(i)
if _run_explicit:
if _run_explicit[0] <= now < (_run_explicit[0] + self.opts['loop_interval']):
run = True
data['_next_fire_time'] = _run_explicit[0]
if True in [True for item in time_elements if item in data]:
if '_seconds' not in data:
interval = int(data.get('seconds', 0))
@ -1153,10 +1265,11 @@ class Schedule(object):
# Copy the list so we can loop through it
for i in copy.deepcopy(_when):
if i < now and len(_when) > 1:
# Remove all missed schedules except the latest one.
# We need it to detect if it was triggered previously.
_when.remove(i)
if len(_when) > 1:
if i < now - self.opts['loop_interval']:
# Remove all missed schedules except the latest one.
# We need it to detect if it was triggered previously.
_when.remove(i)
if _when:
# Grab the first element, which is the next run time or
@ -1258,19 +1371,21 @@ class Schedule(object):
seconds = data['_next_fire_time'] - now
if data['_splay']:
seconds = data['_splay'] - now
if seconds <= 0:
if '_seconds' in data:
if '_seconds' in data:
if seconds <= 0:
run = True
elif 'when' in data and data['_run']:
elif 'when' in data and data['_run']:
if data['_next_fire_time'] <= now <= (data['_next_fire_time'] + self.opts['loop_interval']):
data['_run'] = False
run = True
elif 'cron' in data:
# Reset next scheduled time because it is in the past now,
# and we should trigger the job run, then wait for the next one.
elif 'cron' in data:
# Reset next scheduled time because it is in the past now,
# and we should trigger the job run, then wait for the next one.
if seconds <= 0:
data['_next_fire_time'] = None
run = True
elif seconds == 0:
run = True
elif seconds == 0:
run = True
if '_run_on_start' in data and data['_run_on_start']:
run = True
@ -1312,7 +1427,11 @@ class Schedule(object):
if start <= now <= end:
run = True
else:
run = False
if self.skip_function:
run = True
func = self.skip_function
else:
run = False
else:
log.error('schedule.handle_func: Invalid range, end must be larger than start. \
Ignoring job {0}.'.format(job))
@ -1322,6 +1441,62 @@ class Schedule(object):
Ignoring job {0}.'.format(job))
continue
if 'skip_during_range' in data:
if not _RANGE_SUPPORTED:
log.error('Missing python-dateutil. Ignoring job {0}'.format(job))
continue
else:
if isinstance(data['skip_during_range'], dict):
try:
start = int(time.mktime(dateutil_parser.parse(data['skip_during_range']['start']).timetuple()))
except ValueError:
log.error('Invalid date string for start in skip_during_range. Ignoring job {0}.'.format(job))
continue
try:
end = int(time.mktime(dateutil_parser.parse(data['skip_during_range']['end']).timetuple()))
except ValueError:
log.error('Invalid date string for end in skip_during_range. Ignoring job {0}.'.format(job))
log.error(data)
continue
if end > start:
if start <= now <= end:
if self.skip_function:
run = True
func = self.skip_function
else:
run = False
else:
run = True
else:
log.error('schedule.handle_func: Invalid range, end must be larger than start. \
Ignoring job {0}.'.format(job))
continue
else:
log.error('schedule.handle_func: Invalid, range must be specified as a dictionary. \
Ignoring job {0}.'.format(job))
continue
if 'skip_explicit' in data:
_skip_explicit = data['skip_explicit']
if isinstance(_skip_explicit, six.string_types):
_skip_explicit = [_skip_explicit]
# Copy the list so we can loop through it
for i in copy.deepcopy(_skip_explicit):
if i < now - self.opts['loop_interval']:
_skip_explicit.remove(i)
if _skip_explicit:
if _skip_explicit[0] <= now <= (_skip_explicit[0] + self.opts['loop_interval']):
if self.skip_function:
run = True
func = self.skip_function
else:
run = False
else:
run = True
if not run:
continue
@ -1374,6 +1549,7 @@ class Schedule(object):
finally:
if '_seconds' in data:
data['_next_fire_time'] = now + data['_seconds']
data['_last_run'] = now
data['_splay'] = None
if salt.utils.platform.is_windows():
# Restore our function references.
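
Several of the hunks above gate one-shot timestamps (`run_explicit`, `skip_explicit`) on the same pattern: prune entries older than `now - loop_interval` while keeping at least one, then fire when `now` falls inside the first remaining entry's loop-interval window. A minimal standalone sketch (function name hypothetical):

```python
import copy

def explicit_hit(times, now, loop_interval):
    # Prune stale one-shot timestamps (keeping at least one entry so a
    # previous trigger can still be detected), then report whether
    # 'now' falls in the firing window of the first remaining entry.
    times = list(times)
    for t in copy.copy(times):
        if len(times) > 1 and t < now - loop_interval:
            times.remove(t)
    return bool(times) and times[0] <= now < times[0] + loop_interval
```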


@ -30,9 +30,12 @@ import salt.defaults.exitcodes
import salt.utils.files
import salt.utils.platform
import salt.utils.user
import salt.utils.versions
log = logging.getLogger(__name__)
ROOT_DIR = 'c:\\salt' if salt.utils.platform.is_windows() else '/'
def zmq_version():
'''
@ -194,13 +197,34 @@ def verify_files(files, user):
return True
def verify_env(dirs, user, permissive=False, sensitive_dirs=None, skip_extra=False):
def verify_env(
dirs,
user,
permissive=False,
pki_dir='',
skip_extra=False,
root_dir=ROOT_DIR,
sensitive_dirs=None):
'''
Verify that the named directories are in place and that the environment
can shake the salt
'''
if pki_dir:
salt.utils.versions.warn_until(
'Neon',
'Use of \'pki_dir\' was detected: \'pki_dir\' has been deprecated '
'in favor of \'sensitive_dirs\'. Support for \'pki_dir\' will be '
'removed in Salt Neon.'
)
sensitive_dirs = sensitive_dirs or []
sensitive_dirs.append(pki_dir)
if salt.utils.platform.is_windows():
return win_verify_env(dirs, permissive, sensitive_dirs, skip_extra)
return win_verify_env(root_dir,
dirs,
permissive=permissive,
skip_extra=skip_extra,
sensitive_dirs=sensitive_dirs)
import pwd # after confirming not running Windows
try:
pwnam = pwd.getpwnam(user)
@ -526,18 +550,37 @@ def verify_log(opts):
log.warning('Insecure logging configuration detected! Sensitive data may be logged.')
def win_verify_env(dirs, permissive=False, sensitive_dirs=None, skip_extra=False):
def win_verify_env(
path,
dirs,
permissive=False,
pki_dir='',
skip_extra=False,
sensitive_dirs=None):
'''
Verify that the named directories are in place and that the environment
can shake the salt
'''
if pki_dir:
salt.utils.versions.warn_until(
'Neon',
'Use of \'pki_dir\' was detected: \'pki_dir\' has been deprecated '
'in favor of \'sensitive_dirs\'. Support for \'pki_dir\' will be '
'removed in Salt Neon.'
)
sensitive_dirs = sensitive_dirs or []
sensitive_dirs.append(pki_dir)
import salt.utils.win_functions
import salt.utils.win_dacl
import salt.utils.path
# Get the root path directory where salt is installed
path = dirs[0]
while os.path.basename(path) not in ['salt', 'salt-tests-tmpdir']:
path, base = os.path.split(path)
# Make sure the file_roots is not set to something unsafe since permissions
# on that directory are reset
if not salt.utils.path.safe_path(path=path):
raise CommandExecutionError(
'`file_roots` set to a possibly unsafe location: {0}'.format(path)
)
# Create the root path directory if missing
if not os.path.isdir(path):


@ -1133,9 +1133,14 @@ def get_name(principal):
try:
return win32security.LookupAccountSid(None, sid_obj)[0]
except TypeError:
raise CommandExecutionError(
'Could not find User for {0}'.format(principal))
except (pywintypes.error, TypeError) as exc:
if type(exc) == pywintypes.error:
win_error = win32api.FormatMessage(exc.winerror).rstrip('\n')
message = 'Error resolving {0} ({1})'.format(principal, win_error)
else:
message = 'Error resolving {0}'.format(principal)
raise CommandExecutionError(message)
def get_owner(obj_name):
@ -1173,7 +1178,7 @@ def get_owner(obj_name):
owner_sid = 'S-1-1-0'
else:
raise CommandExecutionError(
'Failed to set permissions: {0}'.format(exc.strerror))
'Failed to get owner: {0}'.format(exc.strerror))
return get_name(win32security.ConvertSidToStringSid(owner_sid))


@ -43,7 +43,8 @@ class WheelClient(salt.client.mixins.SyncClientMixin,
def __init__(self, opts=None):
self.opts = opts
self.functions = salt.loader.wheels(opts)
self.context = {}
self.functions = salt.loader.wheels(opts, context=self.context)
# TODO: remove/deprecate
def call_func(self, fun, **kwargs):


@ -986,7 +986,9 @@ class TestDaemon(object):
RUNTIME_VARS.TMP_PRODENV_STATE_TREE,
TMP,
],
RUNTIME_VARS.RUNNING_TESTS_USER)
RUNTIME_VARS.RUNNING_TESTS_USER,
root_dir=master_opts['root_dir'],
)
cls.master_opts = master_opts
cls.minion_opts = minion_opts


@ -0,0 +1,16 @@
# -*- coding: utf-8 -*-
'''
Runner functions for integration tests
'''
# Import python libs
from __future__ import absolute_import
def failure():
__context__['retcode'] = 1
return False
def success():
return True


@ -0,0 +1,16 @@
# -*- coding: utf-8 -*-
'''
Wheel functions for integration tests
'''
# Import python libs
from __future__ import absolute_import
def failure():
__context__['retcode'] = 1
return False
def success():
return True


@ -0,0 +1,15 @@
test_runner_success:
salt.runner:
- name: runtests_helpers.success
test_runner_failure:
salt.runner:
- name: runtests_helpers.failure
test_wheel_success:
salt.wheel:
- name: runtests_helpers.success
test_wheel_failure:
salt.wheel:
- name: runtests_helpers.failure


@ -0,0 +1,36 @@
successful_changing_state:
cmd.run:
- name: echo "Successful Change"
# mock is installed with salttesting, so it should already be
# present on the system, resulting in no changes
non_changing_state:
pip.installed:
- name: mock
test_listening_change_state:
cmd.run:
- name: echo "Listening State"
- listen:
- successful_changing_state
test_listening_non_changing_state:
cmd.run:
- name: echo "Only run once"
- listen:
- non_changing_state
# test that requisite resolution for listen uses ID declaration.
# test_listening_resolution_one and test_listening_resolution_two
# should both run.
test_listening_resolution_one:
cmd.run:
- name: echo "Successful listen resolution"
- listen:
- successful_changing_state
test_listening_resolution_two:
cmd.run:
- name: echo "Successful listen resolution"
- listen:
- successful_changing_state


@ -0,0 +1,21 @@
changing_state:
cmd.run:
- name: echo "Changed!"
# mock is installed with salttesting, so it should already be
# present on the system, resulting in no changes
non_changing_state:
pip.installed:
- name: mock
test_changing_state:
cmd.run:
- name: echo "Success!"
- onchanges:
- changing_state
test_non_changing_state:
cmd.run:
- name: echo "Should not run"
- onchanges:
- non_changing_state


@ -0,0 +1,19 @@
failing_state:
cmd.run:
- name: asdf
non_failing_state:
cmd.run:
- name: echo "Non-failing state"
test_failing_state:
cmd.run:
- name: echo "Success!"
- onfail:
- failing_state
test_non_failing_state:
cmd.run:
- name: echo "Should not run"
- onfail:
- non_failing_state


@ -0,0 +1,33 @@
# B --+
# |
# C <-+ ----+
# |
# A <-------+
# runs after C
A:
cmd.run:
- name: echo A third
# is running in test mode before C
# C gets executed first if this state modifies something
- prereq_in:
- C
# runs before C
B:
cmd.run:
- name: echo B first
# will test C and be applied only if C changes,
# and then will run before C
- prereq:
- C
C:
cmd.run:
- name: echo C second
# will fail with "The following requisites were not found"
I:
cmd.run:
- name: echo I
- prereq:
- Z


@ -0,0 +1,58 @@
# Complex require/require_in graph
#
# Relative order of C>E is given by the definition order
#
# D (1) <--+
# |
# B (2) ---+ <-+ <-+ <-+
# | | |
# C (3) <--+ --|---|---+
# | | |
# E (4) ---|---|---+ <-+
# | | |
# A (5) ---+ --+ ------+
#
A:
cmd.run:
- name: echo A fifth
- require:
- C
B:
cmd.run:
- name: echo B second
- require_in:
- A
- C
C:
cmd.run:
- name: echo C third
D:
cmd.run:
- name: echo D first
- require_in:
- B
E:
cmd.run:
- name: echo E fourth
- require:
- B
- require_in:
- A
# will fail with "The following requisites were not found"
G:
cmd.run:
- name: echo G
- require:
- Z
# will fail with "The following requisites were not found"
H:
cmd.run:
- name: echo H
- require:
- Z


@ -0,0 +1,55 @@
# None of these states should run
A:
cmd.run:
- name: echo "A"
- onlyif: 'false'
# issue #8235
#B:
# cmd.run:
# - name: echo "B"
# # here used without "-"
# - use:
# cmd: A
C:
cmd.run:
- name: echo "C"
- use:
- A
D:
cmd.run:
- name: echo "D"
- onlyif: 'false'
- use_in:
- E
E:
cmd.run:
- name: echo "E"
# issue 8235
#F:
# cmd.run:
# - name: echo "F"
# - onlyif: return 0
# - use_in:
# cmd: G
#
#G:
# cmd.run:
# - name: echo "G"
# issue xxxx
#H:
# cmd.run:
# - name: echo "H"
# - use:
# - cmd: C
#I:
# cmd.run:
# - name: echo "I"
# - use:
# - cmd: E


@ -0,0 +1,4 @@
test_file:
file.managed:
- name: /tmp/nonbase_env
- source: salt://nonbase_env


@ -0,0 +1 @@
it worked - new environment!


@ -9,9 +9,11 @@ pillars:
default:
network:
dns:
{% if __grains__['os'] == 'should_never_match' %}
srv1: 192.168.0.1
srv2: 192.168.0.2
domain: example.com
{% endif %}
ntp:
srv1: 192.168.10.10
srv2: 192.168.10.20


@ -1,6 +1,6 @@
environment: base
classes:
{% for class in ['default'] %}
{% for class in ['default', 'roles.app'] %}
- {{ class }}
{% endfor %}


@ -93,7 +93,8 @@ class SaltUtilSyncModuleTest(ModuleCase):
'states': [],
'sdb': [],
'proxymodules': [],
'output': []}
'output': [],
'thorium': []}
ret = self.run_function('saltutil.sync_all')
self.assertEqual(ret, expected_return)
@ -113,7 +114,8 @@ class SaltUtilSyncModuleTest(ModuleCase):
'states': [],
'sdb': [],
'proxymodules': [],
'output': []}
'output': [],
'thorium': []}
ret = self.run_function('saltutil.sync_all', extmod_whitelist={'modules': ['salttest']})
self.assertEqual(ret, expected_return)
@ -135,7 +137,8 @@ class SaltUtilSyncModuleTest(ModuleCase):
'states': [],
'sdb': [],
'proxymodules': [],
'output': []}
'output': [],
'thorium': []}
ret = self.run_function('saltutil.sync_all', extmod_blacklist={'modules': ['runtests_decorators']})
self.assertEqual(ret, expected_return)
@ -155,7 +158,8 @@ class SaltUtilSyncModuleTest(ModuleCase):
'states': [],
'sdb': [],
'proxymodules': [],
'output': []}
'output': [],
'thorium': []}
ret = self.run_function('saltutil.sync_all', extmod_whitelist={'modules': ['runtests_decorators']},
extmod_blacklist={'modules': ['runtests_decorators']})
self.assertEqual(ret, expected_return)
@ -186,7 +190,7 @@ class SaltUtilSyncPillarTest(ModuleCase):
'''))
self.run_function('saltutil.refresh_pillar')
self.run_function('test.sleep', [1])
self.run_function('test.sleep', [5])
post_pillar = self.run_function('pillar.raw')
self.assertIn(pillar_key, post_pillar.get(pillar_key, 'didnotwork'))


@ -964,6 +964,65 @@ class StateModuleTest(ModuleCase, SaltReturnAssertsMixin):
#ret = self.run_function('state.sls', mods='requisites.fullsls_prereq')
#self.assertEqual(['sls command can only be used with require requisite'], ret)
def test_requisites_require_no_state_module(self):
'''
Call sls file containing several require_in and require.
Ensure that some of them are failing and that the order is right.
'''
expected_result = {
'cmd_|-A_|-echo A fifth_|-run': {
'__run_num__': 4,
'comment': 'Command "echo A fifth" run',
'result': True,
'changes': True,
},
'cmd_|-B_|-echo B second_|-run': {
'__run_num__': 1,
'comment': 'Command "echo B second" run',
'result': True,
'changes': True,
},
'cmd_|-C_|-echo C third_|-run': {
'__run_num__': 2,
'comment': 'Command "echo C third" run',
'result': True,
'changes': True,
},
'cmd_|-D_|-echo D first_|-run': {
'__run_num__': 0,
'comment': 'Command "echo D first" run',
'result': True,
'changes': True,
},
'cmd_|-E_|-echo E fourth_|-run': {
'__run_num__': 3,
'comment': 'Command "echo E fourth" run',
'result': True,
'changes': True,
},
'cmd_|-G_|-echo G_|-run': {
'__run_num__': 5,
'comment': 'The following requisites were not found:\n'
+ ' require:\n'
+ ' id: Z\n',
'result': False,
'changes': False,
},
'cmd_|-H_|-echo H_|-run': {
'__run_num__': 6,
'comment': 'The following requisites were not found:\n'
+ ' require:\n'
+ ' id: Z\n',
'result': False,
'changes': False,
}
}
ret = self.run_function('state.sls', mods='requisites.require_no_state_module')
result = self.normalize_ret(ret)
self.assertReturnNonEmptySaltType(ret)
self.assertEqual(expected_result, result)
def test_requisites_prereq_simple_ordering_and_errors(self):
'''
Call sls file containing several prereq_in and prereq.
@ -1001,6 +1060,30 @@ class StateModuleTest(ModuleCase, SaltReturnAssertsMixin):
'result': False,
'changes': False}
}
expected_result_simple_no_state_module = {
'cmd_|-A_|-echo A third_|-run': {
'__run_num__': 2,
'comment': 'Command "echo A third" run',
'result': True,
'changes': True},
'cmd_|-B_|-echo B first_|-run': {
'__run_num__': 0,
'comment': 'Command "echo B first" run',
'result': True,
'changes': True},
'cmd_|-C_|-echo C second_|-run': {
'__run_num__': 1,
'comment': 'Command "echo C second" run',
'result': True,
'changes': True},
'cmd_|-I_|-echo I_|-run': {
'__run_num__': 3,
'comment': 'The following requisites were not found:\n'
+ ' prereq:\n'
+ ' id: Z\n',
'result': False,
'changes': False}
}
expected_result_simple2 = {
'cmd_|-A_|-echo A_|-run': {
'__run_num__': 1,
@ -1131,6 +1214,11 @@ class StateModuleTest(ModuleCase, SaltReturnAssertsMixin):
# ret,
# ['A recursive requisite was found, SLS "requisites.prereq_recursion_error" ID "B" ID "A"']
#)
ret = self.run_function('state.sls', mods='requisites.prereq_simple_no_state_module')
result = self.normalize_ret(ret)
self.assertEqual(expected_result_simple_no_state_module, result)
def test_infinite_recursion_sls_prereq(self):
ret = self.run_function('state.sls', mods='requisites.prereq_sls_infinite_recursion')
self.assertSaltTrueReturn(ret)
@ -1167,6 +1255,16 @@ class StateModuleTest(ModuleCase, SaltReturnAssertsMixin):
# + ' ID "A" ID "A"'
#])
def test_requisites_use_no_state_module(self):
'''
Call sls file containing several use_in and use.
'''
ret = self.run_function('state.sls', mods='requisites.use_no_state_module')
self.assertReturnNonEmptySaltType(ret)
for item, descr in six.iteritems(ret):
self.assertEqual(descr['comment'], 'onlyif condition is false')
def test_get_file_from_env_in_top_match(self):
tgt = os.path.join(TMP, 'prod-cheese-file')
try:
@ -1244,6 +1342,16 @@ class StateModuleTest(ModuleCase, SaltReturnAssertsMixin):
expected_result = 'State was not run because none of the onchanges reqs changed'
self.assertIn(expected_result, test_data)
def test_onchanges_requisite_no_state_module(self):
'''
Tests a simple state using the onchanges requisite without state modules
'''
# Only run the state once and keep the return data
state_run = self.run_function('state.sls', mods='requisites.onchanges_simple_no_state_module')
test_data = state_run['cmd_|-test_changing_state_|-echo "Success!"_|-run']['comment']
expected_result = 'Command "echo "Success!"" run'
self.assertIn(expected_result, test_data)
# onfail tests
def test_onfail_requisite(self):
@ -1297,6 +1405,24 @@ class StateModuleTest(ModuleCase, SaltReturnAssertsMixin):
expected_result = 'State was not run because onfail req did not change'
self.assertIn(expected_result, test_data)
def test_onfail_requisite_no_state_module(self):
'''
Tests a simple state using the onfail requisite
'''
# Only run the state once and keep the return data
state_run = self.run_function('state.sls', mods='requisites.onfail_simple_no_state_module')
# First, test the result of the state run when a failure is expected to happen
test_data = state_run['cmd_|-test_failing_state_|-echo "Success!"_|-run']['comment']
expected_result = 'Command "echo "Success!"" run'
self.assertIn(expected_result, test_data)
# Then, test the result of the state run when a failure is not expected to happen
test_data = state_run['cmd_|-test_non_failing_state_|-echo "Should not run"_|-run']['comment']
expected_result = 'State was not run because onfail req did not change'
self.assertIn(expected_result, test_data)
# listen tests
def test_listen_requisite(self):
@ -1358,6 +1484,21 @@ class StateModuleTest(ModuleCase, SaltReturnAssertsMixin):
listener_state = 'cmd_|-listener_test_listening_resolution_two_|-echo "Successful listen resolution"_|-mod_watch'
self.assertIn(listener_state, state_run)
def test_listen_requisite_no_state_module(self):
'''
Tests a simple state using the listen requisite
'''
# Only run the state once and keep the return data
state_run = self.run_function('state.sls', mods='requisites.listen_simple_no_state_module')
# First, test the result of the state run when a listener is expected to trigger
listener_state = 'cmd_|-listener_test_listening_change_state_|-echo "Listening State"_|-mod_watch'
self.assertIn(listener_state, state_run)
# Then, test the result of the state run when a listener should not trigger
absent_state = 'cmd_|-listener_test_listening_non_changing_state_|-echo "Only run once"_|-mod_watch'
self.assertNotIn(absent_state, state_run)
def test_issue_30820_requisite_in_match_by_name(self):
'''
This tests the case where a requisite_in matches by name instead of ID
@ -1477,3 +1618,23 @@ class StateModuleTest(ModuleCase, SaltReturnAssertsMixin):
self.assertIn(state_id, state_run)
self.assertEqual(state_run[state_id]['comment'], 'Failure!')
self.assertFalse(state_run[state_id]['result'])
def test_state_nonbase_environment(self):
'''
test state.sls with saltenv using a nonbase environment
with a salt source
'''
state_run = self.run_function(
'state.sls',
mods='non-base-env',
saltenv='prod'
)
state_id = 'file_|-test_file_|-/tmp/nonbase_env_|-managed'
self.assertEqual(state_run[state_id]['comment'], 'File /tmp/nonbase_env updated')
self.assertTrue(state_run['file_|-test_file_|-/tmp/nonbase_env_|-managed']['result'])
self.assertTrue(os.path.isfile('/tmp/nonbase_env'))
def tearDown(self):
nonbase_file = '/tmp/nonbase_env'
if os.path.isfile(nonbase_file):
os.remove(nonbase_file)


@ -106,6 +106,35 @@ class StateRunnerTest(ShellCase):
for item in out:
self.assertIn(item, ret)
def test_orchestrate_retcode(self):
'''
Test orchestration with nonzero retcode set in __context__
'''
self.run_run('saltutil.sync_runners')
self.run_run('saltutil.sync_wheel')
ret = '\n'.join(self.run_run('state.orchestrate orch.retcode'))
for result in (' ID: test_runner_success\n'
' Function: salt.runner\n'
' Name: runtests_helpers.success\n'
' Result: True',
' ID: test_runner_failure\n'
' Function: salt.runner\n'
' Name: runtests_helpers.failure\n'
' Result: False',
' ID: test_wheel_success\n'
' Function: salt.wheel\n'
' Name: runtests_helpers.success\n'
' Result: True',
' ID: test_wheel_failure\n'
' Function: salt.wheel\n'
' Name: runtests_helpers.failure\n'
' Result: False'):
self.assertIn(result, ret)
def test_orchestrate_target_doesnt_exists(self):
'''
test orchestration when the target doesn't exist


@ -5,6 +5,7 @@ from __future__ import absolute_import
import os
import shutil
import tempfile
import textwrap
# Import Salt Testing libs
from tests.support.case import ShellCase
@ -57,6 +58,36 @@ class KeyTest(ShellCase, ShellCaseCommonTestsMixin):
if USERA in user:
self.run_call('user.delete {0} remove=True'.format(USERA))
def test_remove_key(self):
'''
test salt-key -d usage
'''
min_name = 'minibar'
pki_dir = self.master_opts['pki_dir']
key = os.path.join(pki_dir, 'minions', min_name)
with salt.utils.files.fopen(key, 'w') as fp:
fp.write(textwrap.dedent('''\
-----BEGIN PUBLIC KEY-----
MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAoqIZDtcQtqUNs0wC7qQz
JwFhXAVNT5C8M8zhI+pFtF/63KoN5k1WwAqP2j3LquTG68WpxcBwLtKfd7FVA/Kr
OF3kXDWFnDi+HDchW2lJObgfzLckWNRFaF8SBvFM2dys3CGSgCV0S/qxnRAjrJQb
B3uQwtZ64ncJAlkYpArv3GwsfRJ5UUQnYPDEJwGzMskZ0pHd60WwM1gMlfYmNX5O
RBEjybyNpYDzpda6e6Ypsn6ePGLkP/tuwUf+q9wpbRE3ZwqERC2XRPux+HX2rGP+
mkzpmuHkyi2wV33A9pDfMgRHdln2CLX0KgfRGixUQhW1o+Kmfv2rq4sGwpCgLbTh
NwIDAQAB
-----END PUBLIC KEY-----
'''))
check_key = self.run_key('-p {0}'.format(min_name))
self.assertIn('Accepted Keys:', check_key)
self.assertIn('minibar: -----BEGIN PUBLIC KEY-----', check_key)
remove_key = self.run_key('-d {0} -y'.format(min_name))
check_key = self.run_key('-p {0}'.format(min_name))
self.assertEqual([], check_key)
def test_list_accepted_args(self):
'''
test salt-key -l for accepted arguments


@ -4,6 +4,8 @@ salt-ssh testing
'''
# Import Python libs
from __future__ import absolute_import
import os
import shutil
# Import salt testing libs
from tests.support.case import SSHCase
@ -19,3 +21,21 @@ class SSHTest(SSHCase):
'''
ret = self.run_function('test.ping')
self.assertTrue(ret, 'Ping did not return true')
def test_thin_dir(self):
'''
test to make sure thin_dir is created
and salt-call file is included
'''
thin_dir = self.run_function('config.get', ['thin_dir'], wipe=False)
self.assertTrue(os.path.isdir(thin_dir))
self.assertTrue(os.path.exists(os.path.join(thin_dir, 'salt-call')))
self.assertTrue(os.path.exists(os.path.join(thin_dir, 'running_data')))
def tearDown(self):
'''
make sure to clean up any old ssh directories
'''
salt_dir = self.run_function('config.get', ['thin_dir'], wipe=False)
if os.path.exists(salt_dir):
shutil.rmtree(salt_dir)
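A bare `os.path.isdir(path)` call returns a boolean that is silently discarded, so path checks like these only bite when their results are fed to assertions. A minimal self-contained sketch of the thin-dir layout check (the directory contents are fabricated; a real run would get `thin_dir` from `config.get`):

```python
import os
import shutil
import tempfile

def thin_dir_ok(thin_dir):
    # True only when the deployed thin dir exists and contains the
    # files the test expects to find in it.
    return (os.path.isdir(thin_dir)
            and os.path.exists(os.path.join(thin_dir, 'salt-call'))
            and os.path.exists(os.path.join(thin_dir, 'running_data')))

# Fabricated layout standing in for the thin dir deployed on the target.
thin_dir = tempfile.mkdtemp()
for name in ('salt-call', 'running_data'):
    open(os.path.join(thin_dir, name), 'w').close()

result = thin_dir_ok(thin_dir)
shutil.rmtree(thin_dir)  # mirror the tearDown cleanup
```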


@@ -7,8 +7,13 @@ import shutil
# Import Salt Testing Libs
from tests.support.case import SSHCase
from tests.support.paths import TMP
# Import Salt Libs
from salt.ext import six
SSH_SLS = 'ssh_state_tests'
SSH_SLS_FILE = '/tmp/test'
class SSHStateTest(SSHCase):
@@ -37,6 +42,87 @@ class SSHStateTest(SSHCase):
check_file = self.run_function('file.file_exists', ['/tmp/test'])
self.assertTrue(check_file)
def test_state_show_sls(self):
'''
test state.show_sls with salt-ssh
'''
ret = self.run_function('state.show_sls', [SSH_SLS])
self._check_dict_ret(ret=ret, val='__sls__', exp_ret=SSH_SLS)
check_file = self.run_function('file.file_exists', [SSH_SLS_FILE], wipe=False)
self.assertFalse(check_file)
def test_state_show_top(self):
'''
test state.show_top with salt-ssh
'''
ret = self.run_function('state.show_top')
self.assertEqual(ret, {u'base': [u'core', u'master_tops_test']})
def test_state_single(self):
'''
state.single with salt-ssh
'''
ret_out = {'name': 'itworked',
'result': True,
'comment': 'Success!'}
single = self.run_function('state.single',
['test.succeed_with_changes name=itworked'])
for key, value in six.iteritems(single):
self.assertEqual(value['name'], ret_out['name'])
self.assertEqual(value['result'], ret_out['result'])
self.assertEqual(value['comment'], ret_out['comment'])
def test_show_highstate(self):
'''
state.show_highstate with salt-ssh
'''
high = self.run_function('state.show_highstate')
destpath = os.path.join(TMP, 'testfile')
self.assertTrue(isinstance(high, dict))
self.assertTrue(destpath in high)
self.assertEqual(high[destpath]['__env__'], 'base')
def test_state_high(self):
'''
state.high with salt-ssh
'''
ret_out = {'name': 'itworked',
'result': True,
'comment': 'Success!'}
high = self.run_function('state.high', ['"{"itworked": {"test": ["succeed_with_changes"]}}"'])
for key, value in six.iteritems(high):
self.assertEqual(value['name'], ret_out['name'])
self.assertEqual(value['result'], ret_out['result'])
self.assertEqual(value['comment'], ret_out['comment'])
def test_show_lowstate(self):
'''
state.show_lowstate with salt-ssh
'''
low = self.run_function('state.show_lowstate')
self.assertTrue(isinstance(low, list))
self.assertTrue(isinstance(low[0], dict))
def test_state_low(self):
'''
state.low with salt-ssh
'''
ret_out = {'name': 'itworked',
'result': True,
'comment': 'Success!'}
low = self.run_function('state.low', ['"{"state": "test", "fun": "succeed_with_changes", "name": "itworked"}"'])
for key, value in six.iteritems(low):
self.assertEqual(value['name'], ret_out['name'])
self.assertEqual(value['result'], ret_out['result'])
self.assertEqual(value['comment'], ret_out['comment'])
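The `state.single`, `state.high`, and `state.low` tests all walk the same return shape: a dict keyed by state ID whose values carry `name`, `result`, and `comment`. A small sketch of that check loop (the return payload is fabricated to look like a salt-ssh result; `six.iteritems` is the py2/py3 spelling of `.items()`):

```python
expected = {'name': 'itworked', 'result': True, 'comment': 'Success!'}

# Fabricated return shaped like a state.single result for
# test.succeed_with_changes; the key is the usual state ID tag.
ret = {
    'test_|-itworked_|-itworked_|-succeed_with_changes': {
        'name': 'itworked',
        'result': True,
        'comment': 'Success!',
        'changes': {'testing': {'old': 'Unchanged',
                                'new': 'Something pretended to change'}},
    },
}

def all_match(ret, expected):
    # Same loop as the tests: every per-state value must match on
    # name, result, and comment.
    return all(value[key] == expected[key]
               for value in ret.values()
               for key in ('name', 'result', 'comment'))
```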
def test_state_request_check_clear(self):
'''
test state.request system with salt-ssh
@@ -60,7 +146,7 @@ class SSHStateTest(SSHCase):
run = self.run_function('state.run_request', wipe=False)
check_file = self.run_function('file.file_exists', ['/tmp/test'], wipe=False)
check_file = self.run_function('file.file_exists', [SSH_SLS_FILE], wipe=False)
self.assertTrue(check_file)
def tearDown(self):
@@ -70,3 +156,6 @@ class SSHStateTest(SSHCase):
salt_dir = self.run_function('config.get', ['thin_dir'], wipe=False)
if os.path.exists(salt_dir):
shutil.rmtree(salt_dir)
if os.path.exists(SSH_SLS_FILE):
os.remove(SSH_SLS_FILE)


@@ -7,7 +7,6 @@ from __future__ import absolute_import
# Import Salt Testing libs
from tests.support.case import ModuleCase
from tests.support.unit import skipIf
from tests.support.helpers import destructiveTest
from tests.support.mixins import SaltReturnAssertsMixin
@@ -15,32 +14,58 @@ from tests.support.mixins import SaltReturnAssertsMixin
import salt.utils.path
INIT_DELAY = 5
SERVICE_NAME = 'crond'
@destructiveTest
@skipIf(salt.utils.path.which('crond') is None, 'crond not installed')
class ServiceTest(ModuleCase, SaltReturnAssertsMixin):
'''
Validate the service state
'''
def setUp(self):
self.service_name = 'cron'
cmd_name = 'crontab'
os_family = self.run_function('grains.get', ['os_family'])
if os_family == 'RedHat':
self.service_name = 'crond'
elif os_family == 'Arch':
self.service_name = 'systemd-journald'
cmd_name = 'systemctl'
if salt.utils.path.which(cmd_name) is None:
self.skipTest('{0} is not installed'.format(cmd_name))
def check_service_status(self, exp_return):
'''
helper method to check status of service
'''
check_status = self.run_function('service.status', name=SERVICE_NAME)
check_status = self.run_function('service.status',
name=self.service_name)
if check_status is not exp_return:
self.fail('status of service is not returning correctly')
def test_service_running(self):
'''
test service.running state module
'''
stop_service = self.run_function('service.stop', self.service_name)
self.assertTrue(stop_service)
self.check_service_status(False)
start_service = self.run_state('service.running',
name=self.service_name)
self.assertTrue(start_service)
self.check_service_status(True)
def test_service_dead(self):
'''
test service.dead state module
'''
start_service = self.run_state('service.running', name=SERVICE_NAME)
start_service = self.run_state('service.running',
name=self.service_name)
self.assertSaltTrueReturn(start_service)
self.check_service_status(True)
ret = self.run_state('service.dead', name=SERVICE_NAME)
ret = self.run_state('service.dead', name=self.service_name)
self.assertSaltTrueReturn(ret)
self.check_service_status(False)
@@ -48,11 +73,12 @@ class ServiceTest(ModuleCase, SaltReturnAssertsMixin):
'''
test service.dead state module with init_delay arg
'''
start_service = self.run_state('service.running', name=SERVICE_NAME)
start_service = self.run_state('service.running',
name=self.service_name)
self.assertSaltTrueReturn(start_service)
self.check_service_status(True)
ret = self.run_state('service.dead', name=SERVICE_NAME,
ret = self.run_state('service.dead', name=self.service_name,
init_delay=INIT_DELAY)
self.assertSaltTrueReturn(ret)
self.check_service_status(False)
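The setUp above replaces the hard-coded `crond` with a per-OS-family choice. Isolated from the test harness, that dispatch looks roughly like this (the grain values are the ones the test branches on; the helper itself is illustrative):

```python
def pick_service(os_family):
    # Defaults cover Debian-family hosts; RedHat names the cron daemon
    # crond, and Arch installs no cron by default, so the test falls
    # back to systemd-journald there.
    service_name, cmd_name = 'cron', 'crontab'
    if os_family == 'RedHat':
        service_name = 'crond'
    elif os_family == 'Arch':
        service_name, cmd_name = 'systemd-journald', 'systemctl'
    return service_name, cmd_name
```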


@@ -111,7 +111,9 @@ class AdaptedConfigurationTestCaseMixin(object):
rdict['sock_dir'],
conf_dir
],
RUNTIME_VARS.RUNNING_TESTS_USER)
RUNTIME_VARS.RUNNING_TESTS_USER,
root_dir=rdict['root_dir'],
)
rdict['config_dir'] = conf_dir
rdict['conf_file'] = os.path.join(conf_dir, config_for)


@@ -464,6 +464,55 @@ PATCHLEVEL = 3
self.assertListEqual(list(os_grains.get('osrelease_info')), os_release_map['osrelease_info'])
self.assertEqual(os_grains.get('osmajorrelease'), os_release_map['osmajorrelease'])
def test_windows_iscsi_iqn_grains(self):
cmd_run_mock = MagicMock(
return_value={'stdout': 'iSCSINodeName\niqn.1991-05.com.microsoft:simon-x1\n'}
)
with patch.object(salt.utils.platform, 'is_linux',
MagicMock(return_value=False)):
with patch.object(salt.utils.platform, 'is_windows',
MagicMock(return_value=True)):
with patch.dict(core.__salt__, {'run_all': cmd_run_mock}):
with patch.object(salt.utils.path, 'which',
MagicMock(return_value=True)):
with patch.dict(core.__salt__, {'cmd.run_all': cmd_run_mock}):
_grains = core.iscsi_iqn()
self.assertEqual(_grains.get('iscsi_iqn'),
['iqn.1991-05.com.microsoft:simon-x1'])
def test_aix_iscsi_iqn_grains(self):
cmd_run_mock = MagicMock(
return_value='initiator_name iqn.localhost.hostid.7f000001'
)
with patch.object(salt.utils.platform, 'is_linux',
MagicMock(return_value=False)):
with patch.object(salt.utils.platform, 'is_aix',
MagicMock(return_value=True)):
with patch.dict(core.__salt__, {'cmd.run': cmd_run_mock}):
_grains = core.iscsi_iqn()
self.assertEqual(_grains.get('iscsi_iqn'),
['iqn.localhost.hostid.7f000001'])
def test_linux_iscsi_iqn_grains(self):
_iscsi_file = '## DO NOT EDIT OR REMOVE THIS FILE!\n' \
'## If you remove this file, the iSCSI daemon will not start.\n' \
'## If you change the InitiatorName, existing access control lists\n' \
'## may reject this initiator. The InitiatorName must be unique\n' \
'## for each iSCSI initiator. Do NOT duplicate iSCSI InitiatorNames.\n' \
'InitiatorName=iqn.1993-08.org.debian:01:d12f7aba36\n'
with patch('os.path.isfile', MagicMock(return_value=True)):
with patch('salt.utils.files.fopen', mock_open()) as iscsi_initiator_file:
iscsi_initiator_file.return_value.__iter__.return_value = _iscsi_file.splitlines()
_grains = core.iscsi_iqn()
self.assertEqual(_grains.get('iscsi_iqn'),
['iqn.1993-08.org.debian:01:d12f7aba36'])
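The Linux grain test leans on a `MagicMock` detail: assigning a list to `handle.__iter__.return_value` makes the mocked file handle iterable line by line (the mock wraps the value in `iter()` on each loop). A standalone sketch of the technique — `read_iqn` is a hypothetical stand-in for `core.iscsi_iqn`, not Salt's code:

```python
from unittest.mock import mock_open, patch

CONTENT = ('## DO NOT EDIT OR REMOVE THIS FILE!\n'
           'InitiatorName=iqn.1993-08.org.debian:01:d12f7aba36\n')

def read_iqn(path='/etc/iscsi/initiatorname.iscsi'):
    # Hypothetical parser: collect every InitiatorName= value from the file.
    iqns = []
    with open(path) as fh:
        for line in fh:
            if line.startswith('InitiatorName='):
                iqns.append(line.split('=', 1)[1].strip())
    return iqns

with patch('builtins.open', mock_open(read_data=CONTENT)) as m:
    # Explicitly make iteration yield the file's lines, mirroring the
    # test; newer Pythons also support iteration via read_data alone.
    m.return_value.__iter__.return_value = CONTENT.splitlines(True)
    result = read_iqn()
```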
@skipIf(not salt.utils.platform.is_linux(), 'System is not Linux')
def test_linux_memdata(self):
'''
@@ -477,7 +526,8 @@ PATCHLEVEL = 3
'/proc/meminfo': True
}
_cmd_run_map = {
'dpkg --print-architecture': 'amd64'
'dpkg --print-architecture': 'amd64',
'rpm --eval %{_host_cpu}': 'x86_64'
}
path_exists_mock = MagicMock(side_effect=lambda x: _path_exists_map[x])

Some files were not shown because too many files have changed in this diff.