Merge branch '2016.11' into 'develop'

Conflicts:
  - doc/ref/cache/all/index.rst
  - doc/topics/cache/index.rst
  - salt/cache/localfs.py
  - salt/modules/boto_rds.py
  - salt/roster/cloud.py
  - salt/states/virtualenv_mod.py
  - tests/integration/states/test_archive.py
  - tests/unit/modules/test_dockermod.py
  - tests/unit/states/dockerng_test.py
This commit is contained in:
rallytime 2017-03-28 17:09:30 -06:00
commit 52edbffc85
31 changed files with 575 additions and 154 deletions


@ -510,7 +510,7 @@ are expected to reply from executions.
.. conf_master:: cache
``cache``
---------------------
---------
Default: ``localfs``


@ -4,18 +4,43 @@
Minion Data Cache
=================
.. versionadded:: 2016.11.0
The Minion data cache contains the Salt Mine data, minion grains and minion
pillar info cached on the Salt master. By default Salt uses the `localfs` cache
module to save the data in a msgpack file on the Salt master. Other external
data stores can also be used to store this data such as the `Consul` module.
pillar information cached on the Salt Master. By default, Salt uses the ``localfs`` cache
module to save the data in a ``msgpack`` file on the Salt Master.
.. _pluggable-data-cache:
Pluggable Data Cache
====================
While the default Minion data cache is the ``localfs`` cache, other external
data stores can also be used to store this data such as the ``consul`` module.
To configure a Salt Master to use a different data store, set the :conf_master:`cache`
option in the master config:
.. code-block:: yaml
cache: consul
The pluggable data cache streamlines using various Salt topologies such as a
:ref:`Multi-Master <tutorial-multi-master>` or :ref:`Salt Syndics <syndic>` configuration
by allowing the data stored on the Salt Master about a Salt Minion to be available to
other Salt Syndics or Salt Masters that a Salt Minion is connected to.
Additional minion data cache modules can be easily created by modeling the custom data
store after one of the existing cache modules.
See :ref:`cache modules <all-salt.cache>` for a current list.
.. _configure-minion-data-cache:
Configuring the Minion Data Cache
=================================
The default `localfs` Minion data cache module doesn't require any
The default ``localfs`` Minion data cache module doesn't require any
configuration. External data cache modules with external data stores such as
Consul require a configuration setting in the master config.
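For example, a master configuration using the Consul data store might look like the following. The ``consul.dc`` and ``consul.verify`` values match the defaults documented in the cache module; the ``consul.host`` and ``consul.port`` values are assumptions based on typical Consul agent defaults and should be adjusted for your environment:

```yaml
cache: consul
consul.host: 127.0.0.1
consul.port: 8500
consul.dc: dc1
consul.verify: True
```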


@ -218,3 +218,16 @@ Syndic node.
- :conf_master:`syndic_log_file`: path to the logfile (absolute or not)
- :conf_master:`syndic_pidfile`: path to the pidfile (absolute or not)
- :conf_master:`syndic_wait`: time in seconds to wait on returns from this syndic
Minion Data Cache
=================
Beginning with Salt 2016.11.0, the :ref:`Pluggable Minion Data Cache <pluggable-data-cache>`
was introduced. The minion data cache contains the Salt Mine data, minion grains, and minion
pillar information cached on the Salt Master. By default, Salt uses the ``localfs`` cache
module, but other external data stores can be used instead.
Using a pluggable minion cache module allows the data stored on a Salt Master about
Salt Minions to be replicated on other Salt Masters the Minion is connected to. Please see
the :ref:`Minion Data Cache <cache>` documentation for more information and configuration
examples.


@ -20,6 +20,16 @@ on both masters, and shared files need to be shared manually or use tools like
the git fileserver backend to ensure that the :conf_master:`file_roots` are
kept consistent.
Beginning with Salt 2016.11.0, the :ref:`Pluggable Minion Data Cache <pluggable-data-cache>`
was introduced. The minion data cache contains the Salt Mine data, minion grains, and minion
pillar information cached on the Salt Master. By default, Salt uses the ``localfs`` cache
module, but other external data stores can be used instead.
Using a pluggable minion cache module allows the data stored on a Salt Master about
Salt Minions to be replicated on other Salt Masters the Minion is connected to. Please see
the :ref:`Minion Data Cache <cache>` documentation for more information and configuration
examples.
Summary of Steps
----------------


@ -140,8 +140,8 @@ sudo -H $MAKE install
############################################################################
echo -n -e "\033]0;Build_Env: libsodium\007"
PKGURL="https://download.libsodium.org/libsodium/releases/libsodium-1.0.7.tar.gz"
PKGDIR="libsodium-1.0.7"
PKGURL="https://download.libsodium.org/libsodium/releases/libsodium-1.0.12.tar.gz"
PKGDIR="libsodium-1.0.12"
download $PKGURL


@ -0,0 +1 @@
1e63960da42bcc90945463ae1f5b1355849881dce5bba6d293391f8d6f0932063a5bfd433a071cb184af90ebeab469acc34710587116922144d61f3d7661901b ./libsodium-1.0.12.tar.gz


@ -1 +0,0 @@
21a2991010bc4e6e03d42c6df5443049c99f7622dc68a7bdc3d6d082621a165faab32612280526509d310ad1faefc00aa21c594a384a7fa8b05f4666e82e5e1d ./libsodium-1.0.7.tar.gz

salt/cache/consul.py vendored

@ -2,17 +2,23 @@
'''
Minion data cache plugin for Consul key/value data store.
.. versionadded:: 2016.11.2
It is up to the system administrator to set up and configure the Consul
infrastructure. All that is needed for this plugin is a working Consul agent
with a read-write access to the key-value storae.
with a read-write access to the key-value store.
The related documentation can be found here: https://www.consul.io/docs/index.html
The related documentation can be found in the `Consul documentation`_.
To enable this cache plugin the master will need the python client for
Consul installed that could be easily done with `pip install python-consul`.
To enable this cache plugin, the master will need the python client for
Consul installed. This can be easily installed with pip:
Optionally depending on the Consul agent configuration the following values
could be set in the master config, these are the defaults:
.. code-block:: bash
pip install python-consul
Optionally, depending on the Consul agent configuration, the following values
could be set in the master config. These are the defaults:
.. code-block:: yaml
@ -24,17 +30,19 @@ could be set in the master config, these are the defaults:
consul.dc: dc1
consul.verify: True
Related docs could be found here:
* python-consul: https://python-consul.readthedocs.io/en/latest/#consul
Related docs could be found in the `python-consul documentation`_.
To use the consul as a minion data cache backend set the master `cache` config
value to `consul`:
To use Consul as a minion data cache backend, set the master ``cache`` config
value to ``consul``:
.. code-block:: yaml
cache: consul
.. versionadded:: 2016.11.2
.. _`Consul documentation`: https://www.consul.io/docs/index.html
.. _`python-consul documentation`: https://python-consul.readthedocs.io/en/latest/#consul
'''
from __future__ import absolute_import
import logging


@ -4,10 +4,10 @@ Cache data in filesystem.
.. versionadded:: 2016.11.0
The `localfs` Minion cache module is the default cache module and does not
The ``localfs`` Minion cache module is the default cache module and does not
require any configuration.
Expirations can be set in the relevant config file (``/etc/salt/master`` for
Expiration values can be set in the relevant config file (``/etc/salt/master`` for
the master, ``/etc/salt/cloud`` for Salt Cloud, etc).
'''
from __future__ import absolute_import


@ -53,7 +53,9 @@ class Batch(object):
args.append(self.opts.get('tgt_type', 'glob'))
self.pub_kwargs['yield_pub_data'] = True
ping_gen = self.local.cmd_iter(*args, **self.pub_kwargs)
ping_gen = self.local.cmd_iter(*args,
gather_job_timeout=self.opts['gather_job_timeout'],
**self.pub_kwargs)
# Broadcast to targets
fret = set()
@ -174,6 +176,7 @@ class Batch(object):
ret=self.opts.get('return', ''),
show_jid=show_jid,
verbose=show_verbose,
gather_job_timeout=self.opts['gather_job_timeout'],
**self.eauth)
# add it to our iterators and to the minion_tracker
iters.append(new_iter)


@ -231,7 +231,7 @@ class LocalClient(object):
Return the information about a given job
'''
log.debug('Checking whether jid {0} is still running'.format(jid))
timeout = kwargs.get('gather_job_timeout', self.opts['gather_job_timeout'])
timeout = int(kwargs.get('gather_job_timeout', self.opts['gather_job_timeout']))
pub_data = self.run_job(tgt,
'saltutil.find_job',
@ -561,6 +561,12 @@ class LocalClient(object):
'ret': ret,
'batch': batch,
'raw': kwargs.get('raw', False)}
if 'timeout' in kwargs:
opts['timeout'] = kwargs['timeout']
if 'gather_job_timeout' in kwargs:
opts['gather_job_timeout'] = kwargs['gather_job_timeout']
for key, val in six.iteritems(self.opts):
if key not in opts:
opts[key] = val
@ -1113,7 +1119,7 @@ class LocalClient(object):
if timeout is None:
timeout = self.opts['timeout']
gather_job_timeout = kwargs.get('gather_job_timeout', self.opts['gather_job_timeout'])
gather_job_timeout = int(kwargs.get('gather_job_timeout', self.opts['gather_job_timeout']))
start = int(time.time())
# timeouts per minion, id_ -> timeout time


@ -38,8 +38,6 @@ examples could be set up in the cloud configuration at
.. code-block:: yaml
my-openstack-config:
# The ID of the minion that will execute the salt nova functions
auth_minion: myminion
# The name of the configuration profile to use on said minion
config_profile: my_openstack_profile


@ -1772,6 +1772,8 @@ def ip_fqdn():
info = socket.getaddrinfo(_fqdn, None, socket_type)
ret[key] = list(set(item[4][0] for item in info))
except socket.error:
log.warning('Unable to find IPv{0} record for "{1}" causing a 10 second timeout when rendering grains. '
'Set the dns or /etc/hosts for IPv{0} to clear this.'.format(ipv_num, _fqdn))
ret[key] = []
return ret
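The ``item[4][0]`` indexing in the hunk above pulls the address out of each 5-tuple returned by ``socket.getaddrinfo()``, and the ``set`` removes duplicates across socket types. A minimal sketch with a mocked result list (the sample address is an assumption for illustration):

```python
# Shape of socket.getaddrinfo() results: (family, type, proto, canonname, sockaddr)
# sockaddr for IPv4 is (address, port), so item[4][0] is the address string.
info = [
    (2, 1, 6, '', ('93.184.216.34', 0)),   # SOCK_STREAM entry
    (2, 2, 17, '', ('93.184.216.34', 0)),  # SOCK_DGRAM entry, same address
]

addrs = list(set(item[4][0] for item in info))
print(addrs)  # ['93.184.216.34']
```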


@ -336,6 +336,53 @@ def _netstat_sunos():
return ret
def _netstat_aix():
'''
Return netstat information for AIX
'''
ret = []
## AIX 6.1 - 7.2, appears to ignore addr_family field contents
## for addr_family in ('inet', 'inet6'):
for addr_family in ('inet',):
# Lookup connections
cmd = 'netstat -n -a -f {0} | tail -n +3'.format(addr_family)
out = __salt__['cmd.run'](cmd, python_shell=True)
for line in out.splitlines():
comps = line.split()
if len(comps) < 5:
continue
proto_seen = None
tcp_flag = True
if 'tcp' == comps[0] or 'tcp4' == comps[0]:
proto_seen = 'tcp'
elif 'tcp6' == comps[0]:
proto_seen = 'tcp6'
elif 'udp' == comps[0] or 'udp4' == comps[0]:
proto_seen = 'udp'
tcp_flag = False
elif 'udp6' == comps[0]:
proto_seen = 'udp6'
tcp_flag = False
if tcp_flag:
if len(comps) >= 6:
ret.append({
'proto': proto_seen,
'recv-q': comps[1],
'send-q': comps[2],
'local-address': comps[3],
'remote-address': comps[4],
'state': comps[5]})
else:
if len(comps) >= 5:
ret.append({
'proto': proto_seen,
'local-address': comps[3],
'remote-address': comps[4]})
return ret
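The column parsing above can be sketched against a single hypothetical line of ``netstat -n -a -f inet`` output (the sample line is an assumption; real AIX output may vary):

```python
# Hypothetical AIX netstat line (an assumption for illustration)
line = 'tcp4       0      0  10.0.0.5.22        10.0.0.9.50514     ESTABLISHED'

comps = line.split()
# 'tcp4' is normalized to 'tcp'; TCP rows carry a sixth 'state' column
entry = {
    'proto': 'tcp',
    'recv-q': comps[1],
    'send-q': comps[2],
    'local-address': comps[3],
    'remote-address': comps[4],
    'state': comps[5],
}
print(entry['state'])  # ESTABLISHED
```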
def _netstat_route_linux():
'''
Return netstat routing information for Linux distros
@ -506,6 +553,36 @@ def _netstat_route_sunos():
return ret
def _netstat_route_aix():
'''
Return netstat routing information for AIX
'''
ret = []
cmd = 'netstat -f inet -rn | tail -n +5'
out = __salt__['cmd.run'](cmd, python_shell=True)
for line in out.splitlines():
comps = line.split()
ret.append({
'addr_family': 'inet',
'destination': comps[0],
'gateway': comps[1],
'netmask': '',
'flags': comps[2],
'interface': comps[5] if len(comps) >= 6 else ''})
cmd = 'netstat -f inet6 -rn | tail -n +5'
out = __salt__['cmd.run'](cmd, python_shell=True)
for line in out.splitlines():
comps = line.split()
ret.append({
'addr_family': 'inet6',
'destination': comps[0],
'gateway': comps[1],
'netmask': '',
'flags': comps[2],
'interface': comps[5] if len(comps) >= 6 else ''})
return ret
def netstat():
'''
Return information on open ports and states
@ -520,6 +597,9 @@ def netstat():
.. versionchanged:: 2015.8.0
Added support for SunOS
.. versionchanged:: 2016.11.4
Added support for AIX
CLI Example:
.. code-block:: bash
@ -532,6 +612,8 @@ def netstat():
return _netstat_bsd()
elif __grains__['kernel'] == 'SunOS':
return _netstat_sunos()
elif __grains__['kernel'] == 'AIX':
return _netstat_aix()
raise CommandExecutionError('Not yet supported on this platform')
@ -566,6 +648,22 @@ def active_tcp():
'remote_port': '.'.join(connection['remote-address'].split('.')[-1:])
}
return ret
elif __grains__['kernel'] == 'AIX':
# let's use netstat to mimic Linux output as closely as possible
ret = {}
for connection in _netstat_aix():
## TBD need to deliver AIX output in consumable fashion
if not connection['proto'].startswith('tcp'):
continue
if connection['state'] != 'ESTABLISHED':
continue
ret[len(ret)+1] = {
'local_addr': '.'.join(connection['local-address'].split('.')[:-1]),
'local_port': '.'.join(connection['local-address'].split('.')[-1:]),
'remote_addr': '.'.join(connection['remote-address'].split('.')[:-1]),
'remote_port': '.'.join(connection['remote-address'].split('.')[-1:])
}
return ret
else:
return {}
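The ``'.'.join(...split('.')[:-1])`` idiom in the hunk above splits an address-plus-port string on its last dot, since AIX (like BSD netstat) joins address and port with a dot. A minimal sketch:

```python
local = '10.0.0.5.22'  # netstat joins address and port with a dot

local_addr = '.'.join(local.split('.')[:-1])  # everything before the last dot
local_port = '.'.join(local.split('.')[-1:])  # only the final component

print(local_addr, local_port)  # 10.0.0.5 22
```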
@ -577,6 +675,9 @@ def traceroute(host):
.. versionchanged:: 2015.8.0
Added support for SunOS
.. versionchanged:: 2016.11.4
Added support for AIX
CLI Example:
.. code-block:: bash
@ -593,7 +694,7 @@ def traceroute(host):
out = __salt__['cmd.run'](cmd)
# Parse version of traceroute
if salt.utils.is_sunos():
if salt.utils.is_sunos() or salt.utils.is_aix():
traceroute_version = [0, 0, 0]
else:
cmd2 = 'traceroute --version'
@ -626,8 +727,21 @@ def traceroute(host):
if line.startswith('traceroute'):
continue
if salt.utils.is_aix():
if line.startswith('trying to get source for'):
continue
if line.startswith('source should be'):
continue
if line.startswith('outgoing MTU'):
continue
if line.startswith('fragmentation required'):
continue
if 'Darwin' in str(traceroute_version[1]) or 'FreeBSD' in str(traceroute_version[1]) or \
__grains__['kernel'] == 'SunOS':
__grains__['kernel'] in ('SunOS', 'AIX'):
try:
traceline = re.findall(r'\s*(\d*)\s+(.*)\s+\((.*)\)\s+(.*)$', line)[0]
except IndexError:
@ -729,8 +843,13 @@ def arp():
if comps[0] == 'Host' or comps[1] == '(incomplete)':
continue
ret[comps[1]] = comps[0]
elif __grains__['kernel'] == 'AIX':
if comps[0] in ('bucket', 'There'):
continue
ret[comps[3]] = comps[1].strip('(').strip(')')
else:
ret[comps[3]] = comps[1].strip('(').strip(')')
return ret
@ -1311,6 +1430,9 @@ def routes(family=None):
.. versionchanged:: 2015.8.0
Added support for SunOS (Solaris 10, Illumos, SmartOS)
.. versionchanged:: 2016.11.4
Added support for AIX
CLI Example:
.. code-block:: bash
@ -1330,6 +1452,8 @@ def routes(family=None):
routes_ = _netstat_route_netbsd()
elif __grains__['os'] in ['OpenBSD']:
routes_ = _netstat_route_openbsd()
elif __grains__['os'] in ['AIX']:
routes_ = _netstat_route_aix()
else:
raise CommandExecutionError('Not yet supported on this platform')
@ -1347,6 +1471,9 @@ def default_route(family=None):
.. versionchanged:: 2015.8.0
Added support for SunOS (Solaris 10, Illumos, SmartOS)
.. versionchanged:: 2016.11.4
Added support for AIX
CLI Example:
.. code-block:: bash
@ -1363,7 +1490,7 @@ def default_route(family=None):
default_route['inet'] = ['0.0.0.0', 'default']
default_route['inet6'] = ['::/0', 'default']
elif __grains__['os'] in ['FreeBSD', 'NetBSD', 'OpenBSD', 'MacOS', 'Darwin'] or \
__grains__['kernel'] == 'SunOS':
__grains__['kernel'] in ('SunOS', 'AIX'):
default_route['inet'] = ['default']
default_route['inet6'] = ['default']
else:
@ -1394,6 +1521,9 @@ def get_route(ip):
Added support for SunOS (Solaris 10, Illumos, SmartOS)
Added support for OpenBSD
.. versionchanged:: 2016.11.4
Added support for AIX
CLI Example::
salt '*' network.get_route 10.10.10.10
@ -1479,6 +1609,39 @@ def get_route(ip):
return ret
if __grains__['kernel'] == 'AIX':
# root@la68pp002_pub:~# route -n get 172.29.149.95
# route to: 172.29.149.95
#destination: 172.29.149.95
# gateway: 127.0.0.1
# interface: lo0
#interf addr: 127.0.0.1
# flags: <UP,GATEWAY,HOST,DONE,STATIC>
#recvpipe sendpipe ssthresh rtt,msec rttvar hopcount mtu expire
# 0 0 0 0 0 0 0 -68642
cmd = 'route -n get {0}'.format(ip)
out = __salt__['cmd.run'](cmd, python_shell=False)
ret = {
'destination': ip,
'gateway': None,
'interface': None,
'source': None
}
for line in out.splitlines():
line = line.split(':')
if 'route to' in line[0]:
ret['destination'] = line[1].strip()
if 'gateway' in line[0]:
ret['gateway'] = line[1].strip()
if 'interface' in line[0]:
ret['interface'] = line[1].strip()
if 'interf addr' in line[0]:
ret['source'] = line[1].strip()
return ret
else:
raise CommandExecutionError('Not yet supported on this platform')
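The colon-splitting loop above can be exercised against the sample ``route -n get`` output shown in the commit's comments (trimmed here to the lines the parser consumes):

```python
# Sample AIX `route -n get` output, trimmed from the commit's own comment block
out = """route to: 172.29.149.95
destination: 172.29.149.95
   gateway: 127.0.0.1
 interface: lo0
interf addr: 127.0.0.1"""

ret = {'destination': None, 'gateway': None, 'interface': None, 'source': None}
for line in out.splitlines():
    parts = line.split(':')
    # substring match on the label, value after the colon
    if 'route to' in parts[0]:
        ret['destination'] = parts[1].strip()
    if 'gateway' in parts[0]:
        ret['gateway'] = parts[1].strip()
    if 'interface' in parts[0]:
        ret['interface'] = parts[1].strip()
    if 'interf addr' in parts[0]:
        ret['source'] = parts[1].strip()

print(ret)
```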


@ -371,7 +371,8 @@ def install(pkgs=None, # pylint: disable=R0912,R0913,R0914
env_vars=None,
use_vt=False,
trusted_host=None,
no_cache_dir=False):
no_cache_dir=False,
cache_dir=None):
'''
Install packages with pip
@ -444,8 +445,8 @@ def install(pkgs=None, # pylint: disable=R0912,R0913,R0914
download
Download packages into ``download`` instead of installing them
download_cache
Cache downloaded packages in ``download_cache`` dir
download_cache | cache_dir
Cache downloaded packages in ``download_cache`` or ``cache_dir`` dir
source
Check out ``editable`` packages into ``source`` dir
@ -682,8 +683,10 @@ def install(pkgs=None, # pylint: disable=R0912,R0913,R0914
if download:
cmd.extend(['--download', download])
if download_cache:
cmd.extend(['--download-cache', download_cache])
if download_cache or cache_dir:
cmd.extend(['--cache-dir' if salt.utils.compare_versions(
ver1=version(bin_env), oper='>=', ver2='6.0'
) else '--download-cache', download_cache or cache_dir])
if source:
cmd.extend(['--source', source])
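The version gate above exists because pip 6.0 renamed ``--download-cache`` to ``--cache-dir``. A hedged sketch of the same pattern, with a simplified major-version check standing in for ``salt.utils.compare_versions``:

```python
def cache_flag(pip_version):
    # pip >= 6.0 renamed --download-cache to --cache-dir;
    # a bare major-version check stands in for the real comparison helper.
    major = int(pip_version.split('.')[0])
    return '--cache-dir' if major >= 6 else '--download-cache'

print(cache_flag('6.0.8'))   # --cache-dir
print(cache_flag('1.5.6'))   # --download-cache
```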


@ -240,6 +240,9 @@ def get_service_name(*args):
If arguments are passed, create a dict of Display Names and Service Names
Returns:
dict: A dictionary of display names and service names
CLI Examples:
.. code-block:: bash
@ -268,7 +271,7 @@ def info(name):
Args:
name (str): The name of the service. This is not the display name. Use
``get_service_name`` to find the service name.
``get_service_name`` to find the service name.
Returns:
dict: A dictionary containing information about the service.
@ -484,6 +487,9 @@ def create_win_salt_restart_task():
'''
Create a task in Windows task scheduler to enable restarting the salt-minion
Returns:
bool: ``True`` if successful, otherwise ``False``
CLI Example:
.. code-block:: bash
@ -508,6 +514,9 @@ def execute_salt_restart_task():
'''
Run the Windows Salt restart task
Returns:
bool: ``True`` if successful, otherwise ``False``
CLI Example:
.. code-block:: bash
@ -581,10 +590,10 @@ def modify(name,
Args:
name (str): The name of the service. Can be found using the
``service.get_service_name`` function
``service.get_service_name`` function
bin_path (str): The path to the service executable. Backslashes must be
escaped, eg: C:\\path\\to\\binary.exe
escaped, eg: C:\\path\\to\\binary.exe
exe_args (str): Any arguments required by the service executable
@ -593,7 +602,8 @@ def modify(name,
description (str): The description to display for the service
service_type (str): Specifies the service type. Default is ``own``.
Valid options are as follows:
Valid options are as follows:
- kernel: Driver service
- filesystem: File system driver service
- adapter: Adapter driver service (reserved)
@ -603,6 +613,7 @@ def modify(name,
start_type (str): Specifies the service start type. Valid options are as
follows:
- boot: Device driver that is loaded by the boot loader
- system: Device driver that is started during kernel initialization
- auto: Service that automatically starts
@ -610,11 +621,13 @@ def modify(name,
- disabled: Service cannot be started
start_delayed (bool): Set the service to Auto(Delayed Start). Only valid
if the start_type is set to ``Auto``. If service_type is not passed, but
the service is already set to ``Auto``, then the flag will be set.
if the start_type is set to ``Auto``. If service_type is not passed,
but the service is already set to ``Auto``, then the flag will be
set.
error_control (str): The severity of the error, and action taken, if
this service fails to start. Valid options are as follows:
this service fails to start. Valid options are as follows:
- normal: Error is logged and a message box is displayed
- severe: Error is logged and computer attempts a restart with the
last known good configuration
@ -627,26 +640,28 @@ def modify(name,
belongs
dependencies (list): A list of services or load ordering groups that
must start before this service
must start before this service
account_name (str): The name of the account under which the service
should run. For ``own`` type services this should be in the
``domain\username`` format. The following are examples of valid built-in
service accounts:
should run. For ``own`` type services this should be in the
``domain\username`` format. The following are examples of valid
built-in service accounts:
- NT Authority\\LocalService
- NT Authority\\NetworkService
- NT Authority\\LocalSystem
- .\LocalSystem
account_password (str): The password for the account name specified in
``account_name``. For the above built-in accounts, this can be None.
Otherwise a password must be specified.
``account_name``. For the above built-in accounts, this can be None.
Otherwise a password must be specified.
run_interactive (bool): If this setting is True, the service will be
allowed to interact with the user. Not recommended for services that run
with elevated privileges.
allowed to interact with the user. Not recommended for services that
run with elevated privileges.
Returns (dict): A dictionary of changes made
Returns:
dict: a dictionary of changes made
CLI Example:
@ -882,16 +897,18 @@ def create(name,
name (str): Specifies the service name. This is not the display_name
bin_path (str): Specifies the path to the service binary file.
Backslashes must be escaped, eg: C:\\path\\to\\binary.exe
Backslashes must be escaped, eg: C:\\path\\to\\binary.exe
exe_args (str): Any additional arguments required by the service binary.
display_name (str): the name to be displayed in the service manager
display_name (str): the name to be displayed in the service manager. If
not passed, the ``name`` will be used
description (str): A description of the service
service_type (str): Specifies the service type. Default is ``own``.
Valid options are as follows:
Valid options are as follows:
- kernel: Driver service
- filesystem: File system driver service
- adapter: Adapter driver service (reserved)
@ -900,7 +917,8 @@ def create(name,
- share: Service shares a process with one or more other services
start_type (str): Specifies the service start type. Valid options are as
follows:
follows:
- boot: Device driver that is loaded by the boot loader
- system: Device driver that is started during kernel initialization
- auto: Service that automatically starts
@ -908,12 +926,13 @@ def create(name,
- disabled: Service cannot be started
start_delayed (bool): Set the service to Auto(Delayed Start). Only valid
if the start_type is set to ``Auto``. If service_type is not passed, but
the service is already set to ``Auto``, then the flag will be set.
Default is ``False``
if the start_type is set to ``Auto``. If service_type is not passed,
but the service is already set to ``Auto``, then the flag will be
set. Default is ``False``
error_control (str): The severity of the error, and action taken, if
this service fails to start. Valid options are as follows:
this service fails to start. Valid options are as follows:
- normal (normal): Error is logged and a message box is displayed
- severe: Error is logged and computer attempts a restart with the
last known good configuration
@ -926,24 +945,25 @@ def create(name,
belongs
dependencies (list): A list of services or load ordering groups that
must start before this service
must start before this service
account_name (str): The name of the account under which the service
should run. For ``own`` type services this should be in the
``domain\username`` format. The following are examples of valid built-in
service accounts:
should run. For ``own`` type services this should be in the
``domain\username`` format. The following are examples of valid
built-in service accounts:
- NT Authority\\LocalService
- NT Authority\\NetworkService
- NT Authority\\LocalSystem
- .\\LocalSystem
account_password (str): The password for the account name specified in
``account_name``. For the above built-in accounts, this can be None.
Otherwise a password must be specified.
``account_name``. For the above built-in accounts, this can be None.
Otherwise a password must be specified.
run_interactive (bool): If this setting is True, the service will be
allowed to interact with the user. Not recommended for services that run
with elevated privileges.
allowed to interact with the user. Not recommended for services that
run with elevated privileges.
Returns:
dict: A dictionary containing information about the new service
@ -975,6 +995,9 @@ def create(name,
if display_name is None:
display_name = kwargs.pop('DisplayName')
if display_name is None:
display_name = name
if 'type' in kwargs:
salt.utils.warn_until(
'Oxygen',
@ -1080,7 +1103,9 @@ def create(name,
raise CommandExecutionError(
'Invalid Parameter: start_delayed requires start_type "auto"')
if account_name in ['LocalSystem', 'LocalService', 'NetworkService']:
if account_name in ['LocalSystem', '.\\LocalSystem',
'LocalService', '.\\LocalService',
'NetworkService', '.\\NetworkService']:
account_password = ''
# Connect to Service Control Manager
@ -1144,12 +1169,13 @@ def config(name,
name (str): Specifies the service name. This is not the display_name
bin_path (str): Specifies the path to the service binary file.
Backslashes must be escaped, eg: C:\\path\\to\\binary.exe
Backslashes must be escaped, eg: C:\\path\\to\\binary.exe
display_name (str): the name to be displayed in the service manager
svc_type (str): Specifies the service type. Default is ``own``.
Valid options are as follows:
Valid options are as follows:
- kernel: Driver service
- filesystem: File system driver service
- adapter: Adapter driver service (reserved)
@ -1159,6 +1185,7 @@ def config(name,
start_type (str): Specifies the service start type. Valid options are as
follows:
- boot: Device driver that is loaded by the boot loader
- system: Device driver that is started during kernel initialization
- auto: Service that automatically starts
@ -1166,7 +1193,8 @@ def config(name,
- disabled: Service cannot be started
error (str): The severity of the error, and action taken, if this
service fails to start. Valid options are as follows:
service fails to start. Valid options are as follows:
- normal (normal): Error is logged and a message box is displayed
- severe: Error is logged and computer attempts a restart with the
last known good configuration
@ -1179,22 +1207,24 @@ def config(name,
belongs
depend (list): A list of services or load ordering groups that
must start before this service
must start before this service
obj (str): The name of the account under which the service should run.
For ``own`` type services this should be in the ``domain\username``
format. The following are examples of valid built-in service
accounts:
obj (str): The name of the account under which the service
should run. For ``own`` type services this should be in the
``domain\username`` format. The following are examples of valid built-in
service accounts:
- NT Authority\\LocalService
- NT Authority\\NetworkService
- NT Authority\\LocalSystem
- .\\LocalSystem
password (str): The password for the account name specified in
``account_name``. For the above built-in accounts, this can be None.
Otherwise a password must be specified.
``account_name``. For the above built-in accounts, this can be None.
Otherwise a password must be specified.
Returns:
dict: a dictionary of changes made
CLI Example:


@ -572,33 +572,6 @@ def set_hostname(hostname):
return "successful" in ret
def _lookup_error(number):
'''
Lookup the error based on the passed number
.. versionadded:: 2015.5.7
.. versionadded:: 2015.8.2
:param int number: Number code to lookup
:return: The text that corresponds to the error number
:rtype: str
'''
return_values = {
2: 'Invalid OU or specifying OU is not supported',
5: 'Access is denied',
53: 'The network path was not found',
87: 'The parameter is incorrect',
110: 'The system cannot open the specified object',
1323: 'Unable to update the password',
1326: 'Logon failure: unknown username or bad password',
1355: 'The specified domain either does not exist or could not be contacted',
2224: 'The account already exists',
2691: 'The machine is already joined to the domain',
2692: 'The machine is not currently joined to a domain',
}
return return_values[number]
def join_domain(domain,
username=None,
password=None,
@ -693,7 +666,7 @@ def join_domain(domain,
ret['Restart'] = reboot()
return ret
log.error(_lookup_error(err[0]))
log.error(win32api.FormatMessage(err[0]).rstrip())
return False
@ -782,11 +755,11 @@ def unjoin_domain(username=None,
return ret
else:
log.error(_lookup_error(err[0]))
log.error(win32api.FormatMessage(err[0]).rstrip())
log.error('Failed to join the computer to {0}'.format(workgroup))
return False
else:
log.error(_lookup_error(err[0]))
log.error(win32api.FormatMessage(err[0]).rstrip())
log.error('Failed to unjoin computer from {0}'.format(status['Domain']))
return False


@ -224,7 +224,7 @@ def set_(name, path):
if __opts__['test']:
ret['comment'] = (
'Alternative for {0} will be set to path {1}'
).format(name, current)
).format(name, path)
ret['result'] = None
return ret
__salt__['alternatives.set'](name, path)


@ -157,7 +157,7 @@ def mounted(name,
password=badsecret
extra_ignore_fs_keys
extra_mount_ignore_fs_keys
A dict of filesystem options which should not force a remount. This will update
the internal dictionary. The dict should look like this::
@ -370,6 +370,9 @@ def mounted(name,
if fstype in ['cifs'] and opt.split('=')[0] == 'user':
opt = "username={0}".format(opt.split('=')[1])
if opt.split('=')[0] in mount_ignore_fs_keys.get(fstype, []):
opt = opt.split('=')[0]
# convert uid/gid to numeric value from user/group name
name_id_opts = {'uid': 'user.info',
'gid': 'group.info'}


@ -259,7 +259,8 @@ def installed(name,
env_vars=None,
use_vt=False,
trusted_host=None,
no_cache_dir=False):
no_cache_dir=False,
cache_dir=None):
'''
Make sure the package is installed


@ -2661,6 +2661,7 @@ def mod_aggregate(low, chunks, running):
low chunks and merges them into a single pkgs ref in the present low data
'''
pkgs = []
pkg_type = None
agg_enabled = [
'installed',
'latest',
@ -2683,18 +2684,34 @@ def mod_aggregate(low, chunks, running):
# Check for the same repo
if chunk.get('fromrepo') != low.get('fromrepo'):
continue
# Pull out the pkg names!
if 'pkgs' in chunk:
pkgs.extend(chunk['pkgs'])
chunk['__agg__'] = True
elif 'name' in chunk:
pkgs.append(chunk['name'])
chunk['__agg__'] = True
if pkgs:
if 'pkgs' in low:
low['pkgs'].extend(pkgs)
# Check first if 'sources' was passed so we don't aggregate pkgs
# and sources together.
if 'sources' in chunk:
if pkg_type is None:
pkg_type = 'sources'
if pkg_type == 'sources':
pkgs.extend(chunk['sources'])
chunk['__agg__'] = True
else:
if pkg_type is None:
pkg_type = 'pkgs'
if pkg_type == 'pkgs':
# Pull out the pkg names!
if 'pkgs' in chunk:
pkgs.extend(chunk['pkgs'])
chunk['__agg__'] = True
elif 'name' in chunk:
version = chunk.pop('version', None)
if version is not None:
pkgs.append({chunk['name']: version})
else:
pkgs.append(chunk['name'])
chunk['__agg__'] = True
if pkg_type is not None and pkgs:
if pkg_type in low:
low[pkg_type].extend(pkgs)
else:
low['pkgs'] = pkgs
low[pkg_type] = pkgs
return low
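The change above keeps ``sources`` chunks and ``pkgs``/``name`` chunks from being merged into a single list, and preserves per-chunk versions. A simplified stand-in (not the Salt source) for the aggregation rule:

```python
def aggregate(low, chunks):
    # Simplified sketch of mod_aggregate's pkgs/sources separation:
    # whichever kind appears first wins, and the other kind is left alone.
    pkgs, pkg_type = [], None
    for chunk in chunks:
        if 'sources' in chunk:
            if pkg_type is None:
                pkg_type = 'sources'
            if pkg_type == 'sources':
                pkgs.extend(chunk['sources'])
        else:
            if pkg_type is None:
                pkg_type = 'pkgs'
            if pkg_type == 'pkgs':
                if 'pkgs' in chunk:
                    pkgs.extend(chunk['pkgs'])
                elif 'name' in chunk:
                    version = chunk.get('version')
                    pkgs.append({chunk['name']: version} if version else chunk['name'])
    if pkg_type is not None and pkgs:
        low.setdefault(pkg_type, []).extend(pkgs)
    return low

low = aggregate({}, [{'name': 'vim', 'version': '8.0'}, {'pkgs': ['git', 'curl']}])
print(low)  # {'pkgs': [{'vim': '8.0'}, 'git', 'curl']}
```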


@ -142,9 +142,9 @@ def _absent_test(user, name, enc, comment, options, source, config):
if keys:
comment = ''
for key, status in list(keys.items()):
if status == 'exists':
if status == 'add':
continue
comment += 'Set to {0}: {1}\n'.format(status, key)
comment += 'Set to remove: {0}\n'.format(key)
if comment:
return result, comment
err = sys.modules[

View file

@@ -55,6 +55,8 @@ def managed(name,
no_use_wheel=False,
pip_upgrade=False,
pip_pkgs=None,
pip_no_cache_dir=False,
pip_cache_dir=None,
process_dependency_links=False):
'''
Create a virtualenv and optionally manage it with pip
@@ -287,7 +289,9 @@ def managed(name,
no_deps=no_deps,
proxy=proxy,
use_vt=use_vt,
env_vars=env_vars
env_vars=env_vars,
no_cache_dir=pip_no_cache_dir,
cache_dir=pip_cache_dir
)
ret['result'] &= pip_ret['retcode'] == 0
if pip_ret['retcode'] > 0:

View file

@@ -121,7 +121,14 @@ def add(name, num, minimum=0, maximum=0, ref=None):
- name: myregentry
- num: 5
'''
return calc(name, num, 'add', ref)
return calc(
name=name,
num=num,
oper='add',
minimum=minimum,
maximum=maximum,
ref=ref
)
def mul(name, num, minimum=0, maximum=0, ref=None):
@@ -137,7 +144,14 @@ def mul(name, num, minimum=0, maximum=0, ref=None):
- name: myregentry
- num: 5
'''
return calc(name, num, 'mul', ref)
return calc(
name=name,
num=num,
oper='mul',
minimum=minimum,
maximum=maximum,
ref=ref
)
def mean(name, num, minimum=0, maximum=0, ref=None):
@@ -153,7 +167,14 @@ def mean(name, num, minimum=0, maximum=0, ref=None):
- name: myregentry
- num: 5
'''
return calc(name, num, 'mean', ref)
return calc(
name=name,
num=num,
oper='mean',
minimum=minimum,
maximum=maximum,
ref=ref
)
def median(name, num, minimum=0, maximum=0, ref=None):
@@ -169,7 +190,14 @@ def median(name, num, minimum=0, maximum=0, ref=None):
- name: myregentry
- num: 5
'''
return calc(name, num, 'median', ref)
return calc(
name=name,
num=num,
oper='median',
minimum=minimum,
maximum=maximum,
ref=ref
)
def median_low(name, num, minimum=0, maximum=0, ref=None):
@@ -185,7 +213,14 @@ def median_low(name, num, minimum=0, maximum=0, ref=None):
- name: myregentry
- num: 5
'''
return calc(name, num, 'median_low', ref)
return calc(
name=name,
num=num,
oper='median_low',
minimum=minimum,
maximum=maximum,
ref=ref
)
def median_high(name, num, minimum=0, maximum=0, ref=None):
@@ -201,7 +236,14 @@ def median_high(name, num, minimum=0, maximum=0, ref=None):
- name: myregentry
- num: 5
'''
return calc(name, num, 'median_high', ref)
return calc(
name=name,
num=num,
oper='median_high',
minimum=minimum,
maximum=maximum,
ref=ref
)
def median_grouped(name, num, minimum=0, maximum=0, ref=None):
@@ -218,7 +260,14 @@ def median_grouped(name, num, minimum=0, maximum=0, ref=None):
- name: myregentry
- num: 5
'''
return calc(name, num, 'median_grouped', ref)
return calc(
name=name,
num=num,
oper='median_grouped',
minimum=minimum,
maximum=maximum,
ref=ref
)
def mode(name, num, minimum=0, maximum=0, ref=None):
@@ -234,4 +283,11 @@ def mode(name, num, minimum=0, maximum=0, ref=None):
- name: myregentry
- num: 5
'''
return calc(name, num, 'mode', ref)
return calc(
name=name,
num=num,
oper='mode',
minimum=minimum,
maximum=maximum,
ref=ref
)
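The refactor above switches every ``calc`` call from positional to keyword arguments. The point of the change: in the old call style ``calc(name, num, 'add', ref)``, the ``ref`` value landed in the ``minimum`` parameter slot and ``minimum``/``maximum`` were silently dropped. A small self-contained sketch (this ``calc`` is a stand-in that merely echoes what it received, not Salt's real implementation):

```python
def calc(name, num, oper, minimum=0, maximum=0, ref=None):
    # Stand-in for the real calc: report which parameter got which value.
    return {'name': name, 'oper': oper, 'minimum': minimum,
            'maximum': maximum, 'ref': ref}

# Old call style: ref slides into the ``minimum`` slot.
old = calc('myregentry', 5, 'add', 'someref')
assert old['minimum'] == 'someref' and old['ref'] is None

# New call style (as in the diff): every value reaches the right parameter.
new = calc(name='myregentry', num=5, oper='add',
           minimum=1, maximum=10, ref='someref')
assert new['minimum'] == 1 and new['ref'] == 'someref'
```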

View file

@@ -1787,6 +1787,14 @@ def is_openbsd():
return sys.platform.startswith('openbsd')
@real_memoize
def is_aix():
'''
    Simple function to return whether the host is AIX or not
'''
return sys.platform.startswith('aix')
def is_fcntl_available(check_sunos=False):
'''
Simple function to check if the `fcntl` module is available or not.

View file

@@ -944,10 +944,39 @@ def _get_iface_info(iface):
return None, error_msg
def _hw_addr_aix(iface):
'''
    Return the hardware address (a.k.a. MAC address) for a given interface on AIX.
    The MAC address is not available through the interfaces structure, so
    ``entstat`` is queried instead.
'''
cmd = subprocess.Popen(
'entstat -d {0} | grep \'Hardware Address\''.format(iface),
shell=True,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT).communicate()[0]
if cmd:
comps = cmd.split(' ')
if len(comps) == 3:
mac_addr = comps[2].strip('\'').strip()
return mac_addr
error_msg = ('Interface "{0}" either not available or does not contain a hardware address'.format(iface))
log.error(error_msg)
return error_msg
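``_hw_addr_aix`` above shells out to ``entstat -d`` and splits the matched "Hardware Address" line on spaces, taking the third field as the MAC. A minimal sketch of just that parsing step (function name and sample line are illustrative):

```python
def parse_entstat_mac(line):
    """Extract the MAC from an AIX ``entstat -d`` 'Hardware Address' line.

    Mirrors the split-on-space parsing in the diff above: the line is
    expected to have exactly three space-separated fields, the last one
    being the address.
    """
    comps = line.split(' ')
    if len(comps) == 3:
        return comps[2].strip("'").strip()
    return None

mac = parse_entstat_mac('Hardware Address: 02:60:8c:2e:b7:2f')
```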
def hw_addr(iface):
'''
Return the hardware address (a.k.a. MAC address) for a given interface
.. versionchanged:: 2016.11.4
Added support for AIX
'''
if salt.utils.is_aix():
        return _hw_addr_aix(iface)
iface_info, error = _get_iface_info(iface)
if error is False:
@@ -998,8 +1027,10 @@ def _subnets(proto='inet', interfaces_=None):
if proto == 'inet':
subnet = 'netmask'
dflt_cidr = 32
elif proto == 'inet6':
subnet = 'prefixlen'
dflt_cidr = 128
else:
log.error('Invalid proto {0} calling subnets()'.format(proto))
return
@@ -1009,7 +1040,10 @@ def _subnets(proto='inet', interfaces_=None):
addrs.extend([addr for addr in ip_info.get('secondary', []) if addr.get('type') == proto])
for intf in addrs:
intf = ipaddress.ip_interface('{0}/{1}'.format(intf['address'], intf[subnet]))
if subnet in intf:
intf = ipaddress.ip_interface('{0}/{1}'.format(intf['address'], intf[subnet]))
else:
intf = ipaddress.ip_interface('{0}/{1}'.format(intf['address'], dflt_cidr))
if not intf.is_loopback:
ret.add(intf.network)
return [str(net) for net in sorted(ret)]
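The ``_subnets`` change above adds a fallback: when an address entry carries no ``netmask``/``prefixlen`` key, the default CIDR for the protocol (/32 for IPv4, /128 for IPv6) is assumed instead of raising a ``KeyError``. A self-contained sketch of that branch, using the stdlib ``ipaddress`` module as the diff does (the helper name is illustrative):

```python
import ipaddress


def build_network(addr, info, subnet_key, dflt_cidr):
    """Build the interface's network, falling back to a host route
    (/32 or /128) when no netmask/prefixlen is reported."""
    if subnet_key in info:
        intf = ipaddress.ip_interface('{0}/{1}'.format(addr, info[subnet_key]))
    else:
        intf = ipaddress.ip_interface('{0}/{1}'.format(addr, dflt_cidr))
    return intf.network


net = build_network('192.168.1.10', {'netmask': '255.255.255.0'}, 'netmask', 32)
# No netmask reported: falls back to the /32 host route.
host = build_network('10.0.0.5', {}, 'netmask', 32)
```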

View file

@@ -197,3 +197,20 @@ class ArchiveTest(integration.ModuleCase,
self.assertSaltTrueReturn(ret)
self._check_extracted(UNTAR_FILE)
def test_archive_extracted_with_cmd_unzip_false(self):
'''
        test archive.extracted with the use_cmd_unzip argument set to False
'''
ret = self.run_state('archive.extracted', name=ARCHIVE_DIR,
source=ARCHIVE_TAR_SOURCE,
source_hash=ARCHIVE_TAR_HASH,
use_cmd_unzip=False,
archive_format='tar')
log.debug('ret = %s', ret)
if 'Timeout' in ret:
self.skipTest('Timeout talking to local tornado server.')
self.assertSaltTrueReturn(ret)
self._check_extracted(UNTAR_FILE)

View file

@@ -21,7 +21,13 @@ class BatchTestCase(TestCase):
'''
def setUp(self):
opts = {'batch': '', 'conf_file': {}, 'tgt': '', 'transport': '', 'timeout': 5}
opts = {'batch': '',
'conf_file': {},
'tgt': '',
'transport': '',
'timeout': 5,
'gather_job_timeout': 5}
mock_client = MagicMock()
with patch('salt.client.get_local_client', MagicMock(return_value=mock_client)):
with patch('salt.client.LocalClient.cmd_iter', MagicMock(return_value=[])):

View file

@@ -41,6 +41,8 @@ class DockerTestCase(TestCase, LoaderModuleMockMixin):
def setup_loader_modules(self):
return {docker_mod: {'__context__': {'docker.docker_version': ''}}}
docker_version = docker_mod.docker.version_info
def test_ps_with_host_true(self):
'''
Check that docker.ps called with host is ``True``,
@@ -130,6 +132,8 @@ class DockerTestCase(TestCase, LoaderModuleMockMixin):
@skipIf(_docker_py_version() < (1, 5, 0),
'docker module must be installed to run this test or is too old. >=1.5.0')
@patch('salt.modules.docker._get_docker_py_versioninfo',
MagicMock(return_value=docker_version))
def test_list_networks(self, *args):
'''
test list networks.
@@ -154,8 +158,10 @@ class DockerTestCase(TestCase, LoaderModuleMockMixin):
ids=['01234'],
)
@skipIf(_docker_py_version() < (1, 5, 0),
@skipIf(docker_version < (1, 5, 0),
'docker module must be installed to run this test or is too old. >=1.5.0')
@patch('salt.modules.docker._get_docker_py_versioninfo',
MagicMock(return_value=docker_version))
def test_create_network(self, *args):
'''
test create network.
@@ -180,8 +186,10 @@ class DockerTestCase(TestCase, LoaderModuleMockMixin):
driver='bridge',
)
@skipIf(_docker_py_version() < (1, 5, 0),
@skipIf(docker_version < (1, 5, 0),
'docker module must be installed to run this test or is too old. >=1.5.0')
@patch('salt.modules.docker._get_docker_py_versioninfo',
MagicMock(return_value=docker_version))
def test_remove_network(self, *args):
'''
test remove network.
@@ -200,8 +208,10 @@ class DockerTestCase(TestCase, LoaderModuleMockMixin):
docker_mod.remove_network('foo')
client.remove_network.assert_called_once_with('foo')
@skipIf(_docker_py_version() < (1, 5, 0),
@skipIf(docker_version < (1, 5, 0),
'docker module must be installed to run this test or is too old. >=1.5.0')
@patch('salt.modules.docker._get_docker_py_versioninfo',
MagicMock(return_value=docker_version))
def test_inspect_network(self, *args):
'''
test inspect network.
@@ -220,8 +230,10 @@ class DockerTestCase(TestCase, LoaderModuleMockMixin):
docker_mod.inspect_network('foo')
client.inspect_network.assert_called_once_with('foo')
@skipIf(_docker_py_version() < (1, 5, 0),
@skipIf(docker_version < (1, 5, 0),
'docker module must be installed to run this test or is too old. >=1.5.0')
@patch('salt.modules.docker._get_docker_py_versioninfo',
MagicMock(return_value=docker_version))
def test_connect_container_to_network(self, *args):
'''
        test connect container to network.
@@ -244,8 +256,10 @@ class DockerTestCase(TestCase, LoaderModuleMockMixin):
client.connect_container_to_network.assert_called_once_with(
'container', 'foo', None)
@skipIf(_docker_py_version() < (1, 5, 0),
@skipIf(docker_version < (1, 5, 0),
'docker module must be installed to run this test or is too old. >=1.5.0')
@patch('salt.modules.docker._get_docker_py_versioninfo',
MagicMock(return_value=docker_version))
def test_disconnect_container_from_network(self, *args):
'''
        test disconnect container from network.
@@ -265,8 +279,10 @@ class DockerTestCase(TestCase, LoaderModuleMockMixin):
client.disconnect_container_from_network.assert_called_once_with(
'container', 'foo')
@skipIf(_docker_py_version() < (1, 5, 0),
@skipIf(docker_version < (1, 5, 0),
'docker module must be installed to run this test or is too old. >=1.5.0')
@patch('salt.modules.docker._get_docker_py_versioninfo',
MagicMock(return_value=docker_version))
def test_list_volumes(self, *args):
'''
test list volumes.
@@ -288,8 +304,10 @@ class DockerTestCase(TestCase, LoaderModuleMockMixin):
filters={'dangling': [True]},
)
@skipIf(_docker_py_version() < (1, 5, 0),
@skipIf(docker_version < (1, 5, 0),
'docker module must be installed to run this test or is too old. >=1.5.0')
@patch('salt.modules.docker._get_docker_py_versioninfo',
MagicMock(return_value=docker_version))
def test_create_volume(self, *args):
'''
test create volume.
@@ -315,8 +333,10 @@ class DockerTestCase(TestCase, LoaderModuleMockMixin):
driver_opts={},
)
@skipIf(_docker_py_version() < (1, 5, 0),
@skipIf(docker_version < (1, 5, 0),
'docker module must be installed to run this test or is too old. >=1.5.0')
@patch('salt.modules.docker._get_docker_py_versioninfo',
MagicMock(return_value=docker_version))
def test_remove_volume(self, *args):
'''
test remove volume.
@@ -334,8 +354,10 @@ class DockerTestCase(TestCase, LoaderModuleMockMixin):
docker_mod.remove_volume('foo')
client.remove_volume.assert_called_once_with('foo')
@skipIf(_docker_py_version() < (1, 5, 0),
@skipIf(docker_version < (1, 5, 0),
'docker module must be installed to run this test or is too old. >=1.5.0')
@patch('salt.modules.docker._get_docker_py_versioninfo',
MagicMock(return_value=docker_version))
def test_inspect_volume(self, *args):
'''
test inspect volume.

View file

@@ -469,19 +469,38 @@ class PipTestCase(TestCase, LoaderModuleMockMixin):
python_shell=False,
)
def test_install_download_cache_argument_in_resulting_command(self):
def test_install_download_cache_dir_arguments_in_resulting_command(self):
pkg = 'pep8'
cache_dir_arg_mapping = {
'1.5.6': '--download-cache',
'6.0': '--cache-dir',
}
download_cache = '/tmp/foo'
mock = MagicMock(return_value={'retcode': 0, 'stdout': ''})
with patch.dict(pip.__salt__, {'cmd.run_all': mock}):
pip.install(pkg, download_cache='/tmp/foo')
mock.assert_called_once_with(
['pip', 'install', '--download-cache', download_cache, pkg],
saltenv='base',
runas=None,
use_vt=False,
python_shell=False,
)
for pip_version, cmd_arg in cache_dir_arg_mapping.items():
with patch('salt.modules.pip.version',
MagicMock(return_value=pip_version)):
# test `download_cache` kwarg
pip.install(pkg, download_cache='/tmp/foo')
mock.assert_called_with(
['pip', 'install', cmd_arg, download_cache, pkg],
saltenv='base',
runas=None,
use_vt=False,
python_shell=False,
)
# test `cache_dir` kwarg
pip.install(pkg, cache_dir='/tmp/foo')
mock.assert_called_with(
['pip', 'install', cmd_arg, download_cache, pkg],
saltenv='base',
runas=None,
use_vt=False,
python_shell=False,
)
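The test rewrite above parameterizes the expected flag by pip version: pip older than 6.0 used ``--download-cache``, while 6.0 and later use ``--cache-dir``. The version-to-flag choice the test exercises can be sketched as a tiny helper (the function name is illustrative; Salt's module performs this check internally):

```python
def cache_dir_flag(pip_version):
    """Return the cache-directory CLI flag for a given pip version string.

    pip < 6.0 accepted --download-cache; pip >= 6.0 replaced it with
    --cache-dir. Only the major component matters for the comparison.
    """
    major = int(pip_version.split('.')[0])
    return '--download-cache' if major < 6 else '--cache-dir'
```

For example, ``cache_dir_flag('1.5.6')`` selects ``--download-cache`` and ``cache_dir_flag('6.0')`` selects ``--cache-dir``, matching the ``cache_dir_arg_mapping`` in the test.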
def test_install_source_argument_in_resulting_command(self):
pkg = 'pep8'

View file

@@ -196,7 +196,7 @@ class AlternativesTestCase(TestCase, LoaderModuleMockMixin):
ret.update({'comment': comt})
self.assertDictEqual(alternatives.set_(name, path), ret)
comt = ('Alternative for {0} will be set to path False'
comt = ('Alternative for {0} will be set to path /usr/bin/less'
).format(name)
ret.update({'comment': comt, 'result': None})
with patch.dict(alternatives.__opts__, {'test': True}):