Merge branch 'master' into master-port-49955

Commit 0548fc5856 by Gareth J. Greenaway, 2020-04-22 17:27:39 -07:00 (committed by GitHub; GPG key ID 4AEE18F83AFDEB23)
103 changed files with 6565 additions and 1679 deletions

View file

@ -13,6 +13,10 @@ Versions are `MAJOR.PATCH`.
### Deprecated
### Changed
- [#56751](https://github.com/saltstack/salt/pull/56751) - Backport #49981
- [#56731](https://github.com/saltstack/salt/pull/56731) - Backport #53994
- [#56753](https://github.com/saltstack/salt/pull/56753) - Backport #51095
### Fixed
- [#56237](https://github.com/saltstack/salt/pull/56237) - Fix alphabetical ordering and remove duplicates across all documentation indexes - [@myii](https://github.com/myii)

View file

@ -161,6 +161,8 @@ MOCK_MODULES = [
"jnpr.junos.utils.config",
"jnpr.junos.utils.sw",
"keyring",
"kubernetes",
"kubernetes.config",
"libvirt",
"lxml",
"lxml.etree",

View file

@ -66,6 +66,7 @@ state modules
boto_sqs
boto_vpc
bower
btrfs
cabal
ceph
chef

View file

@ -0,0 +1,6 @@
=================
salt.states.btrfs
=================
.. automodule:: salt.states.btrfs
:members:

View file

@ -232,6 +232,10 @@ There are several corresponding requisite_any statements:
* ``onchanges_any``
* ``onfail_any``
Lastly, ``onfail`` has a special ``onfail_all`` form for cases where `AND`
logic is desired instead of the default `OR` logic of ``onfail``/``onfail_any``
(which are equivalent).
All of the requisites define specific relationships and always work with the
dependency logic defined above.
@ -521,6 +525,46 @@ The ``onfail`` requisite is applied in the same way as ``require`` and ``watch``:
- onfail:
- mount: primary_mount
.. code-block:: yaml
build_site:
cmd.run:
- name: /srv/web/app/build_site
notify-build_failure:
hipchat.send_message:
- room_id: 123456
- message: "Building website fail on {{ salt.grains.get('id') }}"
The default behavior of ``onfail`` when multiple requisites are listed is the
opposite of other requisites in the Salt state engine: it acts like ``any()``
instead of ``all()``. This means that when you list multiple ``onfail``
requisites on a state, the requisite is satisfied if *any* of them fail. If
you instead need *all* logic to be applied, use the ``onfail_all`` form:
.. code-block:: yaml
test_site_a:
cmd.run:
- name: ping -c1 10.0.0.1
test_site_b:
cmd.run:
- name: ping -c1 10.0.0.2
notify_site_down:
hipchat.send_message:
- room_id: 123456
- message: "Both primary and backup sites are down!"
- onfail_all:
- cmd: test_site_a
- cmd: test_site_b
In this contrived example, `notify_site_down` will run only when both 10.0.0.1
and 10.0.0.2 fail to respond to ping.
.. note::
Setting failhard (:ref:`globally <global-failhard>` or in
@ -535,6 +579,8 @@ The ``onfail`` requisite is applied in the same way as ``require`` and ``watch``:
Beginning in the ``2016.11.0`` release of Salt, ``onfail`` uses OR logic for
multiple listed ``onfail`` requisites. Prior to the ``2016.11.0`` release,
``onfail`` used AND logic. See `Issue #22370`_ for more information.
Beginning in the ``Neon`` release of Salt, a new ``onfail_all`` requisite
form is available if AND logic is desired.
.. _Issue #22370: https://github.com/saltstack/salt/issues/22370

View file

@ -87,7 +87,7 @@ the context into the included file is required:
.. code-block:: jinja
{% from 'lib.sls' import test with context %}
Includes must use full paths, like so:
.. code-block:: jinja
@ -649,6 +649,56 @@ Returns:
1, 4
.. jinja_ref:: method_call
``method_call``
---------------
.. versionadded:: Sodium
Returns the result of an object's method call.
Example #1:
.. code-block:: jinja
{{ [1, 2, 1, 3, 4] | method_call('index', 1, 1, 3) }}
Returns:
.. code-block:: text
2
This filter can be used with the `map filter`_ to apply object methods without
using loop constructs or temporary variables.
Example #2:
.. code-block:: jinja
{% set host_list = ['web01.example.com', 'db01.example.com'] %}
{% set host_list_split = [] %}
{% for item in host_list %}
{% do host_list_split.append(item.split('.', 1)) %}
{% endfor %}
{{ host_list_split }}
Example #3:
.. code-block:: jinja
{{ host_list|map('method_call', 'split', '.', 1)|list }}
Return of examples #2 and #3:
.. code-block:: text
[[web01, example.com], [db01, example.com]]
.. _`map filter`: http://jinja.pocoo.org/docs/2.10/templates/#map
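For readers who prefer to reason in Python, the filter is essentially a dynamic
``getattr`` call. A minimal illustrative sketch (not Salt's actual
implementation) that reproduces Example #1:

.. code-block:: python

    def method_call(obj, method_name, *args, **kwargs):
        # Look the method up by name on the object and call it.
        return getattr(obj, method_name)(*args, **kwargs)

    print(method_call([1, 2, 1, 3, 4], "index", 1, 1, 3))  # prints 2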
.. jinja_ref:: is_sorted
``is_sorted``

View file

@ -17,6 +17,16 @@ The old syntax for the mine_function - as a dict, or as a list with dicts that
contain more than exactly one key - is still supported but discouraged in favor
of the more uniform syntax of module.run.
State Execution Module
======================
The :mod:`state.test <salt.modules.state.test>` function
can be used to test a state on a minion. It works by calling the
:mod:`state.apply <salt.modules.state.apply>` function with the ``test`` kwarg forced
to ``True``, so the user does not need to call ``state.apply`` directly. This also
allows you to add the ``state.test`` function to a minion's
``minion_blackout_whitelist`` pillar if you wish to be able to test a state while the
minion is in blackout.
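The wrapper itself is tiny; a minimal sketch of the idea (the full
implementation is in the state execution module hunk later in this commit):

.. code-block:: python

    def test(*args, **kwargs):
        # apply_ is the module's existing state.apply implementation.
        # Force the test kwarg so state.apply never makes changes.
        kwargs["test"] = True
        return apply_(*args, **kwargs)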
New Grains
==========

View file

@ -79,6 +79,11 @@ def communicator(func):
queue.put("ERROR")
queue.put("Exception")
queue.put("{0}\n{1}\n".format(ex, trace))
except SystemExit as ex:
trace = traceback.format_exc()
queue.put("ERROR")
queue.put("System exit")
queue.put("{0}\n{1}\n".format(ex, trace))
return ret
return _call
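For context, a simplified sketch of the decorator shape this hunk extends
(argument names and the exact queue protocol are assumptions, not the real
salt.cloud code): the wrapped function runs in a child process and reports
failures on a multiprocessing queue, and the new branch makes a ``sys.exit()``
in the child visible to the parent instead of silently ending the worker.

.. code-block:: python

    import traceback

    def communicator(func):
        def _call(queue, args, kwargs):
            ret = None
            try:
                ret = func(*args, **kwargs)
            except Exception as ex:  # pylint: disable=broad-except
                queue.put("ERROR")
                queue.put("Exception")
                queue.put("{0}\n{1}\n".format(ex, traceback.format_exc()))
            except SystemExit as ex:
                # New in this change: report the exit instead of losing it.
                queue.put("ERROR")
                queue.put("System exit")
                queue.put("{0}\n{1}\n".format(ex, traceback.format_exc()))
            return ret

        return _call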

View file

@ -331,20 +331,39 @@ def get_resources_vms(call=None, resFilter=None, includeConfig=True):
salt-cloud -f get_resources_vms my-proxmox-config
"""
log.debug("Getting resource: vms.. (filter: %s)", resFilter)
resources = query("get", "cluster/resources")
timeoutTime = time.time() + 60
while True:
log.debug("Getting resource: vms.. (filter: %s)", resFilter)
resources = query("get", "cluster/resources")
ret = {}
badResource = False
for resource in resources:
if "type" in resource and resource["type"] in ["openvz", "qemu", "lxc"]:
try:
name = resource["name"]
except KeyError:
badResource = True
log.debug("No name in VM resource %s", repr(resource))
break
ret = {}
for resource in resources:
if "type" in resource and resource["type"] in ["openvz", "qemu", "lxc"]:
name = resource["name"]
ret[name] = resource
ret[name] = resource
if includeConfig:
# Requested to include the detailed configuration of a VM
ret[name]["config"] = get_vmconfig(
ret[name]["vmid"], ret[name]["node"], ret[name]["type"]
)
if includeConfig:
# Requested to include the detailed configuration of a VM
ret[name]["config"] = get_vmconfig(
ret[name]["vmid"], ret[name]["node"], ret[name]["type"]
)
if time.time() > timeoutTime:
raise SaltCloudExecutionTimeout(
"FAILED to get the proxmox " "resources vms"
)
# Carry on if there wasn't a bad resource return from Proxmox
if not badResource:
break
time.sleep(0.5)
if resFilter is not None:
log.debug("Filter given: %s, returning requested " "resource: nodes", resFilter)
@ -905,6 +924,13 @@ def create_node(vm_, newid):
): # if the property is set, use it for the VM request
postParams[prop] = vm_["clone_" + prop]
try:
int(vm_["clone_from"])
except ValueError:
if ":" in vm_["clone_from"]:
vmhost = vm_["clone_from"].split(":")[0]
vm_["clone_from"] = vm_["clone_from"].split(":")[1]
node = query(
"post",
"nodes/{0}/qemu/{1}/clone".format(vmhost, vm_["clone_from"]),

View file

@ -268,6 +268,10 @@ def create(vm_):
"deploy", vm_, __opts__, default=False
)
# If ssh_host is not set, default to the minion name
if not config.get_cloud_config_value("ssh_host", vm_, __opts__, default=""):
vm_["ssh_host"] = vm_["name"]
if deploy_config:
wol_mac = config.get_cloud_config_value(
"wake_on_lan_mac", vm_, __opts__, default=""

View file

@ -4627,7 +4627,7 @@ def reboot_host(kwargs=None, call=None):
if not host_ref.capability.rebootSupported:
raise SaltCloudSystemExit("Specified host system does not support reboot.")
if not host_ref.runtime.inMaintenanceMode:
if not host_ref.runtime.inMaintenanceMode and not force:
raise SaltCloudSystemExit(
"Specified host system is not in maintenance mode. Specify force=True to "
"force reboot even if there are virtual machines running or other operations "
@ -4715,3 +4715,67 @@ def create_datastore_cluster(kwargs=None, call=None):
return False
return {datastore_cluster_name: "created"}
def shutdown_host(kwargs=None, call=None):
"""
Shut down the specified host system in this VMware environment
.. note::
If the host system is not in maintenance mode, it will not be shut down. If you
want to shut down the host system regardless of whether it is in maintenance mode,
set ``force=True``. Default is ``force=False``.
CLI Example:
.. code-block:: bash
salt-cloud -f shutdown_host my-vmware-config host="myHostSystemName" [force=True]
"""
if call != "function":
raise SaltCloudSystemExit(
"The shutdown_host function must be called with " "-f or --function."
)
host_name = kwargs.get("host") if kwargs and "host" in kwargs else None
force = _str_to_bool(kwargs.get("force")) if kwargs and "force" in kwargs else False
if not host_name:
raise SaltCloudSystemExit("You must specify name of the host system.")
# Get the service instance
si = _get_si()
host_ref = salt.utils.vmware.get_mor_by_property(si, vim.HostSystem, host_name)
if not host_ref:
raise SaltCloudSystemExit("Specified host system does not exist.")
if host_ref.runtime.connectionState == "notResponding":
raise SaltCloudSystemExit(
"Specified host system cannot be shut down in it's current state (not responding)."
)
if not host_ref.capability.rebootSupported:
raise SaltCloudSystemExit("Specified host system does not support shutdown.")
if not host_ref.runtime.inMaintenanceMode and not force:
raise SaltCloudSystemExit(
"Specified host system is not in maintenance mode. Specify force=True to "
"force reboot even if there are virtual machines running or other operations "
"in progress."
)
try:
host_ref.ShutdownHost_Task(force)
except Exception as exc: # pylint: disable=broad-except
log.error(
"Error while shutting down host %s: %s",
host_name,
exc,
# Show the traceback if the debug logging level is enabled
exc_info_on_loglevel=logging.DEBUG,
)
return {host_name: "failed to shut down host"}
return {host_name: "shut down host"}

View file

@ -13,7 +13,6 @@
logger instance uses our ``salt.log.setup.SaltLoggingClass``.
"""
# Import python libs
from __future__ import absolute_import, print_function, unicode_literals
import logging
@ -26,7 +25,6 @@ import time
import traceback
import types
# Import salt libs
# pylint: disable=unused-import
from salt._logging import (
LOG_COLORS,
@ -50,12 +48,8 @@ from salt._logging.impl import (
SaltLogRecord,
)
from salt._logging.impl import set_log_record_factory as setLogRecordFactory
# Import 3rd-party libs
from salt.ext import six
from salt.ext.six.moves.urllib.parse import ( # pylint: disable=import-error,no-name-in-module
urlparse,
)
from salt.ext.six.moves.urllib.parse import urlparse
# pylint: enable=unused-import
@ -881,7 +875,14 @@ def __remove_temp_logging_handler():
logging.captureWarnings(True)
def __global_logging_exception_handler(exc_type, exc_value, exc_traceback):
def __global_logging_exception_handler(
exc_type,
exc_value,
exc_traceback,
_logger=logging.getLogger(__name__),
_stderr=sys.__stderr__,
_format_exception=traceback.format_exception,
):
"""
This function will log all un-handled python exceptions.
"""
@ -890,19 +891,37 @@ def __global_logging_exception_handler(exc_type, exc_value, exc_traceback):
# Stop the logging queue listener thread
if is_mp_logging_listener_configured():
shutdown_multiprocessing_logging_listener()
else:
# Log the exception
logging.getLogger(__name__).error(
"An un-handled exception was caught by salt's global exception "
"handler:\n%s: %s\n%s",
return
# Log the exception
msg = "An un-handled exception was caught by salt's global exception handler:"
try:
msg = "{}\n{}: {}\n{}".format(
msg,
exc_type.__name__,
exc_value,
"".join(
traceback.format_exception(exc_type, exc_value, exc_traceback)
).strip(),
"".join(_format_exception(exc_type, exc_value, exc_traceback)).strip(),
)
# Call the original sys.excepthook
except Exception: # pylint: disable=broad-except
msg = "{}\n{}: {}\n(UNABLE TO FORMAT TRACEBACK)".format(
msg, exc_type.__name__, exc_value,
)
try:
_logger.error(msg)
except Exception: # pylint: disable=broad-except
# Python is shutting down and logging has been set to None already
try:
_stderr.write(msg + "\n")
except Exception: # pylint: disable=broad-except
# We have also lost reference to sys.__stderr__ ?!
print(msg)
# Call the original sys.excepthook
try:
sys.__excepthook__(exc_type, exc_value, exc_traceback)
except Exception: # pylint: disable=broad-except
# Python is shutting down and sys has been set to None already
pass
# Set our own exception handler as the one to use
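For context, the handler above has to tolerate ``logging`` and ``sys`` being
torn down because it is installed as the process-wide ``sys.excepthook`` (per
the comment above), which can fire while the interpreter is shutting down. A
minimal, self-contained sketch of that wiring, with placeholder names:

.. code-block:: python

    import sys

    def _log_unhandled(exc_type, exc_value, exc_traceback):
        # Stand-in for __global_logging_exception_handler in this sketch.
        print("Unhandled exception:", exc_type.__name__, exc_value)

    sys.excepthook = _log_unhandled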

View file

@ -116,6 +116,14 @@ def describe_topic(name, region=None, key=None, keyid=None, profile=None):
ret["Attributes"] = get_topic_attributes(
arn, region=region, key=key, keyid=keyid, profile=profile
)
# Grab extended attributes for the above subscriptions
for sub in range(len(ret["Subscriptions"])):
sub_arn = ret["Subscriptions"][sub]["SubscriptionArn"]
if not sub_arn.startswith("arn:aws:sns:"):
# Sometimes a sub is in e.g. PendingAccept or other
# weird states and doesn't have an ARN yet
log.debug("Subscription with invalid ARN %s skipped...", sub_arn)
continue
return ret
@ -382,6 +390,17 @@ def unsubscribe(SubscriptionArn, region=None, key=None, keyid=None, profile=None
salt myminion boto3_sns.unsubscribe my_subscription_arn region=us-east-1
"""
if not SubscriptionArn.startswith("arn:aws:sns:"):
# Grrr, AWS sent us an ARN that's NOT an ARN....
# This can happen if, for instance, a subscription is left in PendingAcceptance or similar
# Note that anything left in PendingConfirmation will be auto-deleted by AWS after 30 days
# anyway, so this isn't as ugly a hack as it might seem at first...
log.info(
"Invalid subscription ARN `%s` passed - likely a PendingConfirmaton or such. "
"Skipping unsubscribe attempt as it would almost certainly fail...",
SubscriptionArn,
)
return True
subs = list_subscriptions(region=region, key=key, keyid=keyid, profile=profile)
sub = [s for s in subs if s.get("SubscriptionArn") == SubscriptionArn]
if not sub:

View file

@ -264,6 +264,7 @@ def create_function(
.. code-block:: bash
salt myminion boto_lambda.create_function my_function python2.7 my_role my_file.my_function my_function.zip
salt myminion boto_lambda.create_function my_function python2.7 my_role my_file.my_function salt://files/my_function.zip
"""
@ -276,6 +277,13 @@ def create_function(
"Either ZipFile must be specified, or "
"S3Bucket and S3Key must be provided."
)
if "://" in ZipFile: # Looks like a remote URL to me...
dlZipFile = __salt__["cp.cache_file"](path=ZipFile)
if dlZipFile is False:
ret["result"] = False
ret["comment"] = "Failed to cache ZipFile `{0}`.".format(ZipFile)
return ret
ZipFile = dlZipFile
code = {
"ZipFile": _filedata(ZipFile),
}

View file

@ -398,10 +398,20 @@ def convert_to_group_ids(
)
if not group_id:
# Security groups are a big deal - need to fail if any can't be resolved...
raise CommandExecutionError(
"Could not resolve Security Group name "
"{0} to a Group ID".format(group)
)
# But... if we're running in test mode, it may just be that the SG is scheduled
# to be created, and thus WOULD have been there if running "for real"...
if __opts__["test"]:
log.warn(
"Security Group `%s` could not be resolved to an ID. This may "
"cause a failure when not running in test mode.",
group,
)
return []
else:
raise CommandExecutionError(
"Could not resolve Security Group name "
"{0} to a Group ID".format(group)
)
else:
group_ids.append(six.text_type(group_id))
log.debug("security group contents %s post-conversion", group_ids)

View file

@ -4426,7 +4426,7 @@ def extract_hash(
else:
hash_len_expr = six.text_type(hash_len)
filename_separators = string.whitespace + r"\/"
filename_separators = string.whitespace + r"\/*"
if source_hash_name:
if not isinstance(source_hash_name, six.string_types):

View file

@ -58,6 +58,8 @@ def _gluster_output_cleanup(result):
for line in result.splitlines():
if line.startswith("gluster>"):
ret += line[9:].strip()
elif line.startswith("Welcome to gluster prompt"):
pass
else:
ret += line.strip()

View file

@ -136,7 +136,7 @@ def destroy(device):
stop_cmd = ["mdadm", "--stop", device]
zero_cmd = ["mdadm", "--zero-superblock"]
if __salt__["cmd.retcode"](stop_cmd, python_shell=False):
if __salt__["cmd.retcode"](stop_cmd, python_shell=False) == 0:
for number in details["members"]:
zero_cmd.append(details["members"][number]["device"])
__salt__["cmd.retcode"](zero_cmd, python_shell=False)

View file

@ -15,12 +15,7 @@ import socket
# Import salt libs
import salt.utils.decorators.path
import salt.utils.files
import salt.utils.functools
import salt.utils.network
import salt.utils.path
import salt.utils.platform
import salt.utils.stringutils
import salt.utils.validate.net
from salt._compat import ipaddress
from salt.exceptions import CommandExecutionError
@ -37,7 +32,7 @@ def __virtual__():
Only work on POSIX-like systems
"""
# Disable on Windows, a specific file module exists:
if salt.utils.platform.is_windows():
if __utils__["platform.is_windows"]():
return (
False,
"The network execution module cannot be loaded on Windows: use win_network instead.",
@ -57,7 +52,7 @@ def wol(mac, bcast="255.255.255.255", destport=9):
salt '*' network.wol 080027136977 255.255.255.255 7
salt '*' network.wol 08:00:27:13:69:77 255.255.255.255 7
"""
dest = salt.utils.network.mac_str_to_bytes(mac)
dest = __utils__["network.mac_str_to_bytes"](mac)
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
sock.sendto(b"\xff" * 6 + dest * 16, (bcast, int(destport)))
@ -94,14 +89,14 @@ def ping(host, timeout=False, return_boolean=False):
if timeout:
if __grains__["kernel"] == "SunOS":
cmd = "ping -c 4 {1} {0}".format(
timeout, salt.utils.network.sanitize_host(host)
timeout, __utils__["network.sanitize_host"](host)
)
else:
cmd = "ping -W {0} -c 4 {1}".format(
timeout, salt.utils.network.sanitize_host(host)
timeout, __utils__["network.sanitize_host"](host)
)
else:
cmd = "ping -c 4 {0}".format(salt.utils.network.sanitize_host(host))
cmd = "ping -c 4 {0}".format(__utils__["network.sanitize_host"](host))
if return_boolean:
ret = __salt__["cmd.run_all"](cmd)
if ret["retcode"] != 0:
@ -855,7 +850,7 @@ def netstat():
salt '*' network.netstat
"""
if __grains__["kernel"] == "Linux":
if not salt.utils.path.which("netstat"):
if not __utils__["path.which"]("netstat"):
return _ss_linux()
else:
return _netstat_linux()
@ -883,7 +878,7 @@ def active_tcp():
salt '*' network.active_tcp
"""
if __grains__["kernel"] == "Linux":
return salt.utils.network.active_tcp()
return __utils__["network.active_tcp"]()
elif __grains__["kernel"] == "SunOS":
# lets use netstat to mimic linux as close as possible
ret = {}
@ -936,11 +931,11 @@ def traceroute(host):
salt '*' network.traceroute archlinux.org
"""
ret = []
cmd = "traceroute {0}".format(salt.utils.network.sanitize_host(host))
cmd = "traceroute {0}".format(__utils__["network.sanitize_host"](host))
out = __salt__["cmd.run"](cmd)
# Parse version of traceroute
if salt.utils.platform.is_sunos() or salt.utils.platform.is_aix():
if __utils__["platform.is_sunos"]() or __utils__["platform.is_aix"]():
traceroute_version = [0, 0, 0]
else:
version_out = __salt__["cmd.run"]("traceroute --version")
@ -975,7 +970,7 @@ def traceroute(host):
skip_line = True
if line.startswith("traceroute"):
skip_line = True
if salt.utils.platform.is_aix():
if __utils__["platform.is_aix"]():
if line.startswith("trying to get source for"):
skip_line = True
if line.startswith("source should be"):
@ -1073,7 +1068,7 @@ def dig(host):
salt '*' network.dig archlinux.org
"""
cmd = "dig {0}".format(salt.utils.network.sanitize_host(host))
cmd = "dig {0}".format(__utils__["network.sanitize_host"](host))
return __salt__["cmd.run"](cmd)
@ -1125,7 +1120,7 @@ def interfaces():
salt '*' network.interfaces
"""
return salt.utils.network.interfaces()
return __utils__["network.interfaces"]()
def hw_addr(iface):
@ -1138,7 +1133,7 @@ def hw_addr(iface):
salt '*' network.hw_addr eth0
"""
return salt.utils.network.hw_addr(iface)
return __utils__["network.hw_addr"](iface)
# Alias hwaddr to preserve backward compat
@ -1157,7 +1152,7 @@ def interface(iface):
salt '*' network.interface eth0
"""
return salt.utils.network.interface(iface)
return __utils__["network.interface"](iface)
def interface_ip(iface):
@ -1172,7 +1167,7 @@ def interface_ip(iface):
salt '*' network.interface_ip eth0
"""
return salt.utils.network.interface_ip(iface)
return __utils__["network.interface_ip"](iface)
def subnets(interfaces=None):
@ -1186,7 +1181,7 @@ def subnets(interfaces=None):
salt '*' network.subnets
salt '*' network.subnets interfaces=eth1
"""
return salt.utils.network.subnets(interfaces)
return __utils__["network.subnets"](interfaces)
def subnets6():
@ -1199,7 +1194,7 @@ def subnets6():
salt '*' network.subnets
"""
return salt.utils.network.subnets6()
return __utils__["network.subnets6"]()
def in_subnet(cidr):
@ -1212,7 +1207,7 @@ def in_subnet(cidr):
salt '*' network.in_subnet 10.0.0.0/16
"""
return salt.utils.network.in_subnet(cidr)
return __utils__["network.in_subnet"](cidr)
def ip_in_subnet(ip_addr, cidr):
@ -1225,7 +1220,7 @@ def ip_in_subnet(ip_addr, cidr):
salt '*' network.ip_in_subnet 172.17.0.4 172.16.0.0/12
"""
return salt.utils.network.in_subnet(cidr, ip_addr)
return __utils__["network.in_subnet"](cidr, ip_addr)
def convert_cidr(cidr):
@ -1263,7 +1258,7 @@ def calc_net(ip_addr, netmask=None):
.. versionadded:: 2015.8.0
"""
return salt.utils.network.calc_net(ip_addr, netmask)
return __utils__["network.calc_net"](ip_addr, netmask)
def ip_addrs(interface=None, include_loopback=False, cidr=None, type=None):
@ -1275,17 +1270,21 @@ def ip_addrs(interface=None, include_loopback=False, cidr=None, type=None):
which are within that subnet. If 'type' is 'public', then only public
addresses will be returned. Ditto for 'type'='private'.
.. versionchanged:: Sodium
``interface`` can now be a single interface name or a list of
interfaces. Globbing is also supported.
CLI Example:
.. code-block:: bash
salt '*' network.ip_addrs
"""
addrs = salt.utils.network.ip_addrs(
addrs = __utils__["network.ip_addrs"](
interface=interface, include_loopback=include_loopback
)
if cidr:
return [i for i in addrs if salt.utils.network.in_subnet(cidr, [i])]
return [i for i in addrs if __utils__["network.in_subnet"](cidr, [i])]
else:
if type == "public":
return [i for i in addrs if not is_private(i)]
@ -1306,17 +1305,21 @@ def ip_addrs6(interface=None, include_loopback=False, cidr=None):
Providing a CIDR via 'cidr="2000::/3"' will return only the addresses
which are within that subnet.
.. versionchanged:: Sodium
``interface`` can now be a single interface name or a list of
interfaces. Globbing is also supported.
CLI Example:
.. code-block:: bash
salt '*' network.ip_addrs6
"""
addrs = salt.utils.network.ip_addrs6(
addrs = __utils__["network.ip_addrs6"](
interface=interface, include_loopback=include_loopback
)
if cidr:
return [i for i in addrs if salt.utils.network.in_subnet(cidr, [i])]
return [i for i in addrs if __utils__["network.in_subnet"](cidr, [i])]
else:
return addrs
@ -1377,16 +1380,16 @@ def mod_hostname(hostname):
if hostname is None:
return False
hostname_cmd = salt.utils.path.which("hostnamectl") or salt.utils.path.which(
hostname_cmd = __utils__["path.which"]("hostnamectl") or __utils__["path.which"](
"hostname"
)
if salt.utils.platform.is_sunos():
if __utils__["platform.is_sunos"]():
uname_cmd = (
"/usr/bin/uname"
if salt.utils.platform.is_smartos()
else salt.utils.path.which("uname")
if __utils__["platform.is_smartos"]()
else __utils__["path.which"]("uname")
)
check_hostname_cmd = salt.utils.path.which("check-hostname")
check_hostname_cmd = __utils__["path.which"]("check-hostname")
# Grab the old hostname so we know which hostname to change and then
# change the hostname using the hostname command
@ -1401,7 +1404,7 @@ def mod_hostname(hostname):
else:
log.debug("{0} was unable to get hostname".format(hostname_cmd))
o_hostname = __salt__["network.get_hostname"]()
elif not salt.utils.platform.is_sunos():
elif not __utils__["platform.is_sunos"]():
# don't run hostname -f because -f is not supported on all platforms
o_hostname = socket.getfqdn()
else:
@ -1419,57 +1422,57 @@ def mod_hostname(hostname):
)
)
return False
elif not salt.utils.platform.is_sunos():
elif not __utils__["platform.is_sunos"]():
__salt__["cmd.run"]("{0} {1}".format(hostname_cmd, hostname))
else:
__salt__["cmd.run"]("{0} -S {1}".format(uname_cmd, hostname.split(".")[0]))
# Modify the /etc/hosts file to replace the old hostname with the
# new hostname
with salt.utils.files.fopen("/etc/hosts", "r") as fp_:
host_c = [salt.utils.stringutils.to_unicode(_l) for _l in fp_.readlines()]
with __utils__["files.fopen"]("/etc/hosts", "r") as fp_:
host_c = [__utils__["stringutils.to_unicode"](_l) for _l in fp_.readlines()]
with salt.utils.files.fopen("/etc/hosts", "w") as fh_:
with __utils__["files.fopen"]("/etc/hosts", "w") as fh_:
for host in host_c:
host = host.split()
try:
host[host.index(o_hostname)] = hostname
if salt.utils.platform.is_sunos():
if __utils__["platform.is_sunos"]():
# also set a copy of the hostname
host[host.index(o_hostname.split(".")[0])] = hostname.split(".")[0]
except ValueError:
pass
fh_.write(salt.utils.stringutils.to_str("\t".join(host) + "\n"))
fh_.write(__utils__["stringutils.to_str"]("\t".join(host) + "\n"))
# Modify the /etc/sysconfig/network configuration file to set the
# new hostname
if __grains__["os_family"] == "RedHat":
with salt.utils.files.fopen("/etc/sysconfig/network", "r") as fp_:
with __utils__["files.fopen"]("/etc/sysconfig/network", "r") as fp_:
network_c = [
salt.utils.stringutils.to_unicode(_l) for _l in fp_.readlines()
__utils__["stringutils.to_unicode"](_l) for _l in fp_.readlines()
]
with salt.utils.files.fopen("/etc/sysconfig/network", "w") as fh_:
with __utils__["files.fopen"]("/etc/sysconfig/network", "w") as fh_:
for net in network_c:
if net.startswith("HOSTNAME"):
old_hostname = net.split("=", 1)[1].rstrip()
quote_type = salt.utils.stringutils.is_quoted(old_hostname)
quote_type = __utils__["stringutils.is_quoted"](old_hostname)
fh_.write(
salt.utils.stringutils.to_str(
__utils__["stringutils.to_str"](
"HOSTNAME={1}{0}{1}\n".format(
salt.utils.stringutils.dequote(hostname), quote_type
__utils__["stringutils.dequote"](hostname), quote_type
)
)
)
else:
fh_.write(salt.utils.stringutils.to_str(net))
fh_.write(__utils__["stringutils.to_str"](net))
elif __grains__["os_family"] in ("Debian", "NILinuxRT"):
with salt.utils.files.fopen("/etc/hostname", "w") as fh_:
fh_.write(salt.utils.stringutils.to_str(hostname + "\n"))
with __utils__["files.fopen"]("/etc/hostname", "w") as fh_:
fh_.write(__utils__["stringutils.to_str"](hostname + "\n"))
if __grains__["lsb_distrib_id"] == "nilrt":
str_hostname = salt.utils.stringutils.to_str(hostname)
str_hostname = __utils__["stringutils.to_str"](hostname)
nirtcfg_cmd = "/usr/local/natinst/bin/nirtcfg"
nirtcfg_cmd += " --set section=SystemSettings,token='Host_Name',value='{0}'".format(
str_hostname
@ -1479,16 +1482,18 @@ def mod_hostname(hostname):
"Couldn't set hostname to: {0}\n".format(str_hostname)
)
elif __grains__["os_family"] == "OpenBSD":
with salt.utils.files.fopen("/etc/myname", "w") as fh_:
fh_.write(salt.utils.stringutils.to_str(hostname + "\n"))
with __utils__["files.fopen"]("/etc/myname", "w") as fh_:
fh_.write(__utils__["stringutils.to_str"](hostname + "\n"))
# Update /etc/nodename and /etc/defaultdomain on SunOS
if salt.utils.platform.is_sunos():
with salt.utils.files.fopen("/etc/nodename", "w") as fh_:
fh_.write(salt.utils.stringutils.to_str(hostname.split(".")[0] + "\n"))
with salt.utils.files.fopen("/etc/defaultdomain", "w") as fh_:
if __utils__["platform.is_sunos"]():
with __utils__["files.fopen"]("/etc/nodename", "w") as fh_:
fh_.write(__utils__["stringutils.to_str"](hostname.split(".")[0] + "\n"))
with __utils__["files.fopen"]("/etc/defaultdomain", "w") as fh_:
fh_.write(
salt.utils.stringutils.to_str(".".join(hostname.split(".")[1:]) + "\n")
__utils__["stringutils.to_str"](
".".join(hostname.split(".")[1:]) + "\n"
)
)
return True
@ -1535,7 +1540,7 @@ def connect(host, port=None, **kwargs):
):
address = host
else:
address = "{0}".format(salt.utils.network.sanitize_host(host))
address = "{0}".format(__utils__["network.sanitize_host"](host))
try:
if proto == "udp":
@ -1759,7 +1764,7 @@ def routes(family=None):
raise CommandExecutionError("Invalid address family {0}".format(family))
if __grains__["kernel"] == "Linux":
if not salt.utils.path.which("netstat"):
if not __utils__["path.which"]("netstat"):
routes_ = _ip_route_linux()
else:
routes_ = _netstat_route_linux()
@ -1893,7 +1898,7 @@ def get_route(ip):
ret["gateway"] = line[1].strip()
if "interface" in line[0]:
ret["interface"] = line[1].strip()
ret["source"] = salt.utils.network.interface_ip(line[1].strip())
ret["source"] = __utils__["network.interface_ip"](line[1].strip())
return ret
@ -2001,3 +2006,53 @@ def iphexval(ip):
a = ip.split(".")
hexval = ["%02X" % int(x) for x in a] # pylint: disable=E1321
return "".join(hexval)
def ip_networks(interface=None, include_loopback=False, verbose=False):
"""
.. versionadded:: Sodium
Returns a list of IPv4 networks to which the minion belongs.
interface
Restrict results to the specified interface(s). This value can be
either a single interface name or a list of interfaces. Globbing is
also supported.
CLI Example:
.. code-block:: bash
salt '*' network.ip_networks
salt '*' network.ip_networks interface=docker0
salt '*' network.ip_networks interface=docker0,enp*
salt '*' network.ip_networks interface=eth*
"""
return __utils__["network.ip_networks"](
interface=interface, include_loopback=include_loopback, verbose=verbose
)
def ip_networks6(interface=None, include_loopback=False, verbose=False):
"""
.. versionadded:: Sodium
Returns a list of IPv6 networks to which the minion belongs.
interface
Restrict results to the specified interface(s). This value can be
either a single interface name or a list of interfaces. Globbing is
also supported.
CLI Example:
.. code-block:: bash
salt '*' network.ip_networks6
salt '*' network.ip_networks6 interface=docker0
salt '*' network.ip_networks6 interface=docker0,enp*
salt '*' network.ip_networks6 interface=eth*
"""
return __utils__["network.ip_networks6"](
interface=interface, include_loopback=include_loopback, verbose=verbose
)

View file

@ -145,16 +145,16 @@ def _space_delimited_list(value):
"""
validate that a value contains one or more space-delimited values
"""
if isinstance(value, str):
if isinstance(value, six.string_types):
items = value.split(" ")
valid = items and all(items)
else:
valid = hasattr(value, "__iter__") and (value != [])
if valid:
return (True, "space-delimited string")
return True, "space-delimited string"
else:
return (False, "{0} is not a valid list.\n".format(value))
return False, "{0} is not a valid list.\n".format(value)
def _validate_ipv4(value):
@ -398,9 +398,9 @@ def _get_static_info(interface):
return data
def _get_interface_info(interface):
def _get_base_interface_info(interface):
"""
return details about given interface
return base details about given interface
"""
blacklist = {
"tcpip": {"name": [], "type": [], "additional_protocol": False},
@ -416,14 +416,14 @@ def _get_interface_info(interface):
},
"_": {"usb": "sys", "gadget": "uevent", "wlan": "uevent"},
}
data = {
return {
"label": interface.name,
"connectionid": interface.name,
"supported_adapter_modes": _get_possible_adapter_modes(
interface.name, blacklist
),
"adapter_mode": _get_adapter_mode_info(interface.name),
"up": False,
"up": interface.flags & IFF_RUNNING != 0,
"ipv4": {
"supportedrequestmodes": [
"dhcp_linklocal",
@ -435,38 +435,63 @@ def _get_interface_info(interface):
},
"hwaddr": interface.hwaddr[:-1],
}
needed_settings = []
if data["ipv4"]["requestmode"] == "static":
needed_settings += ["IP_Address", "Subnet_Mask", "Gateway", "DNS_Address"]
if data["adapter_mode"] == "ethercat":
needed_settings += ["MasterID"]
settings = _load_config(interface.name, needed_settings)
if interface.flags & IFF_RUNNING != 0:
data["up"] = True
data["ipv4"]["address"] = interface.sockaddrToStr(interface.addr)
data["ipv4"]["netmask"] = interface.sockaddrToStr(interface.netmask)
data["ipv4"]["gateway"] = "0.0.0.0"
data["ipv4"]["dns"] = _get_dns_info()
elif data["ipv4"]["requestmode"] == "static":
data["ipv4"]["address"] = settings["IP_Address"]
data["ipv4"]["netmask"] = settings["Subnet_Mask"]
data["ipv4"]["gateway"] = settings["Gateway"]
data["ipv4"]["dns"] = [settings["DNS_Address"]]
with salt.utils.files.fopen("/proc/net/route", "r") as route_file:
pattern = re.compile(
r"^{interface}\t[0]{{8}}\t([0-9A-Z]{{8}})".format(interface=interface.name),
re.MULTILINE,
def _get_ethercat_interface_info(interface):
"""
return details about given ethercat interface
"""
base_information = _get_base_interface_info(interface)
base_information["ethercat"] = {
"masterid": _load_config(interface.name, ["MasterID"])["MasterID"]
}
return base_information
def _get_tcpip_interface_info(interface):
"""
return details about given tcpip interface
"""
base_information = _get_base_interface_info(interface)
if base_information["ipv4"]["requestmode"] == "static":
settings = _load_config(
interface.name, ["IP_Address", "Subnet_Mask", "Gateway", "DNS_Address"]
)
match = pattern.search(route_file.read())
iface_gateway_hex = None if not match else match.group(1)
if iface_gateway_hex is not None and len(iface_gateway_hex) == 8:
data["ipv4"]["gateway"] = ".".join(
[str(int(iface_gateway_hex[i : i + 2], 16)) for i in range(6, -1, -2)]
)
if data["adapter_mode"] == "ethercat":
data["ethercat"] = {"masterid": settings["MasterID"]}
return data
base_information["ipv4"]["address"] = settings["IP_Address"]
base_information["ipv4"]["netmask"] = settings["Subnet_Mask"]
base_information["ipv4"]["gateway"] = settings["Gateway"]
base_information["ipv4"]["dns"] = [settings["DNS_Address"]]
elif base_information["up"]:
base_information["ipv4"]["address"] = interface.sockaddrToStr(interface.addr)
base_information["ipv4"]["netmask"] = interface.sockaddrToStr(interface.netmask)
base_information["ipv4"]["gateway"] = "0.0.0.0"
base_information["ipv4"]["dns"] = _get_dns_info()
with salt.utils.files.fopen("/proc/net/route", "r") as route_file:
pattern = re.compile(
r"^{interface}\t[0]{{8}}\t([0-9A-Z]{{8}})".format(
interface=interface.name
),
re.MULTILINE,
)
match = pattern.search(route_file.read())
iface_gateway_hex = None if not match else match.group(1)
if iface_gateway_hex is not None and len(iface_gateway_hex) == 8:
base_information["ipv4"]["gateway"] = ".".join(
[str(int(iface_gateway_hex[i : i + 2], 16)) for i in range(6, -1, -2)]
)
return base_information
def _get_interface_info(interface):
"""
return details about given interface
"""
adapter_mode = _get_adapter_mode_info(interface.name)
if adapter_mode == "disabled":
return _get_base_interface_info(interface)
elif adapter_mode == "ethercat":
return _get_ethercat_interface_info(interface)
return _get_tcpip_interface_info(interface)
def _dict_to_string(dictionary):

View file

@ -289,14 +289,91 @@ def refresh_db(failhard=False, **kwargs): # pylint: disable=unused-argument
return ret
def _is_testmode(**kwargs):
"""
Returns whether a test mode (noaction) operation was requested.
"""
return bool(kwargs.get("test") or __opts__.get("test"))
def _append_noaction_if_testmode(cmd, **kwargs):
"""
Adds the --noaction flag to the command if it is running in test mode.
"""
if bool(kwargs.get("test") or __opts__.get("test")):
if _is_testmode(**kwargs):
cmd.append("--noaction")
def _build_install_command_list(cmd_prefix, to_install, to_downgrade, to_reinstall):
"""
Builds a list of install commands to be executed in sequence in order to process
each of the to_install, to_downgrade, and to_reinstall lists.
"""
cmds = []
if to_install:
cmd = copy.deepcopy(cmd_prefix)
cmd.extend(to_install)
cmds.append(cmd)
if to_downgrade:
cmd = copy.deepcopy(cmd_prefix)
cmd.append("--force-downgrade")
cmd.extend(to_downgrade)
cmds.append(cmd)
if to_reinstall:
cmd = copy.deepcopy(cmd_prefix)
cmd.append("--force-reinstall")
cmd.extend(to_reinstall)
cmds.append(cmd)
return cmds
def _parse_reported_packages_from_install_output(output):
"""
Parses the output of "opkg install" to determine what packages would have been
installed by an operation run with the --noaction flag.
We are looking for lines like:
Installing <package> (<version>) on <target>
or
Upgrading <package> from <oldVersion> to <version> on root
"""
reported_pkgs = {}
install_pattern = re.compile(
r"Installing\s(?P<package>.*?)\s\((?P<version>.*?)\)\son\s(?P<target>.*?)"
)
upgrade_pattern = re.compile(
r"Upgrading\s(?P<package>.*?)\sfrom\s(?P<oldVersion>.*?)\sto\s(?P<version>.*?)\son\s(?P<target>.*?)"
)
for line in salt.utils.itertools.split(output, "\n"):
match = install_pattern.match(line)
if match is None:
match = upgrade_pattern.match(line)
if match:
reported_pkgs[match.group("package")] = match.group("version")
return reported_pkgs
def _execute_install_command(cmd, parse_output, errors, parsed_packages):
"""
Executes a command for the install operation.
If the command fails, its error output will be appended to the errors list.
If the command succeeds and parse_output is true, updated packages will be appended
to the parsed_packages dictionary.
"""
out = __salt__["cmd.run_all"](cmd, output_loglevel="trace", python_shell=False)
if out["retcode"] != 0:
if out["stderr"]:
errors.append(out["stderr"])
else:
errors.append(out["stdout"])
elif parse_output:
parsed_packages.update(
_parse_reported_packages_from_install_output(out["stdout"])
)
def install(
name=None, refresh=False, pkgs=None, sources=None, reinstall=False, **kwargs
):
@ -440,24 +517,9 @@ def install(
# This should cause the command to fail.
to_install.append(pkgstr)
cmds = []
if to_install:
cmd = copy.deepcopy(cmd_prefix)
cmd.extend(to_install)
cmds.append(cmd)
if to_downgrade:
cmd = copy.deepcopy(cmd_prefix)
cmd.append("--force-downgrade")
cmd.extend(to_downgrade)
cmds.append(cmd)
if to_reinstall:
cmd = copy.deepcopy(cmd_prefix)
cmd.append("--force-reinstall")
cmd.extend(to_reinstall)
cmds.append(cmd)
cmds = _build_install_command_list(
cmd_prefix, to_install, to_downgrade, to_reinstall
)
if not cmds:
return {}
@ -466,16 +528,17 @@ def install(
refresh_db()
errors = []
is_testmode = _is_testmode(**kwargs)
test_packages = {}
for cmd in cmds:
out = __salt__["cmd.run_all"](cmd, output_loglevel="trace", python_shell=False)
if out["retcode"] != 0:
if out["stderr"]:
errors.append(out["stderr"])
else:
errors.append(out["stdout"])
_execute_install_command(cmd, is_testmode, errors, test_packages)
__context__.pop("pkg.list_pkgs", None)
new = list_pkgs()
if is_testmode:
new = copy.deepcopy(new)
new.update(test_packages)
ret = salt.utils.data.compare_dicts(old, new)
if pkg_type == "file" and reinstall:
@ -513,6 +576,26 @@ def install(
return ret
def _parse_reported_packages_from_remove_output(output):
"""
Parses the output of "opkg remove" to determine what packages would have been
removed by an operation run with the --noaction flag.
We are looking for lines like
Removing <package> (<version>) from <Target>...
"""
reported_pkgs = {}
remove_pattern = re.compile(
r"Removing\s(?P<package>.*?)\s\((?P<version>.*?)\)\sfrom\s(?P<target>.*?)..."
)
for line in salt.utils.itertools.split(output, "\n"):
match = remove_pattern.match(line)
if match:
reported_pkgs[match.group("package")] = ""
return reported_pkgs
def remove(name=None, pkgs=None, **kwargs): # pylint: disable=unused-argument
"""
Remove packages using ``opkg remove``.
@ -576,6 +659,9 @@ def remove(name=None, pkgs=None, **kwargs): # pylint: disable=unused-argument
__context__.pop("pkg.list_pkgs", None)
new = list_pkgs()
if _is_testmode(**kwargs):
reportedPkgs = _parse_reported_packages_from_remove_output(out["stdout"])
new = {k: v for k, v in new.items() if k not in reportedPkgs}
ret = salt.utils.data.compare_dicts(old, new)
rs_result = _get_restartcheck_result(errors)
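To make the new dry-run parsing concrete, here is a small, self-contained check
of the install pattern introduced above against a sample line (the package name
and version are made up for illustration):

.. code-block:: python

    import re

    # Same pattern as _parse_reported_packages_from_install_output above.
    install_pattern = re.compile(
        r"Installing\s(?P<package>.*?)\s\((?P<version>.*?)\)\son\s(?P<target>.*?)"
    )
    match = install_pattern.match("Installing opkg-utils (0.4.2) on root")
    print({match.group("package"): match.group("version")})  # {'opkg-utils': '0.4.2'}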

View file

@ -115,6 +115,11 @@ Saltcheck Keywords
**skip:**
(bool) Optional keyword to skip running the individual test
.. versionadded:: 3000
Multiple assertions can be run against the output of a single ``module_and_function`` call. The ``assertion``,
``expected_return``, ``assertion_section``, and ``assertion_section_delimiter`` keys can be placed in a list under an
``assertions`` key. See the multiple assertions example below.
Sample Cases/Examples
=====================
@ -252,6 +257,22 @@ Example suppressing print results
assertion: assertNotIn
print_result: False
Example with multiple assertions and output_details
---------------------------------------------------
.. code-block:: yaml
multiple_validations:
module_and_function: network.netstat
assertions:
- assertion: assertEqual
assertion_section: "0:program"
expected_return: "systemd-resolve"
- assertion: assertEqual
assertion_section: "0:proto"
expected_return: "udp"
output_details: True
Supported assertions
====================
@ -572,90 +593,75 @@ class SaltCheck(object):
assertLess assertLessEqual
assertEmpty assertNotEmpty""".split()
def _check_assertions(self, dict):
"""Validate assertion keys"""
is_valid = True
assertion = dict.get("assertion", None)
# support old expected-return and newer name normalized expected_return
exp_ret_key = any(
key in dict.keys() for key in ["expected_return", "expected-return"]
)
exp_ret_val = dict.get("expected_return", dict.get("expected-return", None))
if assertion not in self.assertions_list:
log.error("Saltcheck: %s is not in the assertions list", assertion)
is_valid = False
# Only check expected returns for assertions which require them
if assertion not in [
"assertEmpty",
"assertNotEmpty",
"assertTrue",
"assertFalse",
]:
if exp_ret_key is None:
log.error("Saltcheck: missing expected_return")
is_valid = False
if exp_ret_val is None:
log.error("Saltcheck: expected_return missing a value")
is_valid = False
return is_valid
def __is_valid_test(self, test_dict):
"""
Determine if a test contains:
- a test name
- a valid module and function
- a valid assertion
- a valid assertion, or valid grouping under an assertions key
- an expected return value - if assertion type requires it
6 points needed for standard test
4 points needed for test with assertion not requiring expected return
"""
test_errors = []
tots = 0 # need total of >= 6 to be a valid test
log.info("Saltcheck: validating data: %s", test_dict)
is_valid = True
skip = test_dict.get("skip", False)
m_and_f = test_dict.get("module_and_function", None)
assertion = test_dict.get("assertion", None)
# support old expected-return and newer name normalized expected_return
exp_ret_key = any(
key in test_dict.keys() for key in ["expected_return", "expected-return"]
)
exp_ret_val = test_dict.get(
"expected_return", test_dict.get("expected-return", None)
)
log.info("__is_valid_test has test: %s", test_dict)
if skip:
required_total = 0
elif m_and_f in ["saltcheck.state_apply"]:
required_total = 2
elif assertion in [
"assertEmpty",
"assertNotEmpty",
"assertTrue",
"assertFalse",
]:
required_total = 4
# Running a state does not require assertions or checks
if m_and_f == "saltcheck.state_apply":
return is_valid
if test_dict.get("assertions"):
for assertion_group in test_dict.get("assertions"):
is_valid = self._check_assertions(assertion_group)
else:
required_total = 6
is_valid = self._check_assertions(test_dict)
if m_and_f:
tots += 1
module, function = m_and_f.split(".")
if _is_valid_module(module):
tots += 1
else:
test_errors.append("{0} is not a valid module".format(module))
if _is_valid_function(module, function):
tots += 1
else:
test_errors.append("{0} is not a valid function".format(function))
log.info("__is_valid_test has valid m_and_f")
if assertion in self.assertions_list:
log.info("__is_valid_test has valid_assertion")
tots += 1
if not _is_valid_module(module):
is_valid = False
log.error("Saltcheck: %s is not a valid module", module)
if not _is_valid_function(module, function):
is_valid = False
log.error("Saltcheck: %s is not a valid function", function)
else:
test_errors.append("{0} is not in the assertions list".format(assertion))
log.error("Saltcheck: missing module_and_function")
is_valid = False
if exp_ret_key:
tots += 1
else:
test_errors.append("No expected return key")
return is_valid
if exp_ret_val is not None:
tots += 1
else:
test_errors.append("expected_return does not have a value")
# log the test score for debug purposes
log.info("__test score: %s and required: %s", tots, required_total)
if not tots >= required_total:
log.error(
"Test failed with the following test verifications: %s",
", ".join(test_errors),
)
return tots >= required_total
def _call_salt_command(
self,
fun,
args,
kwargs,
assertion_section=None,
assertion_section_delimiter=DEFAULT_TARGET_DELIM,
):
def _call_salt_command(self, fun, args, kwargs):
"""
Generic call of salt Caller command
"""
@ -675,21 +681,127 @@ class SaltCheck(object):
else:
value = salt.client.Caller(mopts=mlocal_opts).cmd(fun)
__opts__["file_client"] = orig_file_client
if isinstance(value, dict) and assertion_section:
return_value = salt.utils.data.traverse_dict_and_list(
value,
return value
def _run_assertions(
self,
mod_and_func,
args,
data,
module_output,
output_details,
assert_print_result,
):
"""
Run assertion against input
"""
value = {}
assertion_section = data.get("assertion_section", None)
assertion_section_delimiter = data.get(
"assertion_section_delimiter", DEFAULT_TARGET_DELIM
)
if assertion_section:
module_output = salt.utils.data.traverse_dict_and_list(
module_output,
assertion_section,
default=False,
delimiter=assertion_section_delimiter,
)
return return_value
if mod_and_func in ["saltcheck.state_apply"]:
assertion = "assertNotEmpty"
else:
return value
assertion = data["assertion"]
expected_return = data.get("expected_return", data.get("expected-return", None))
if assertion not in [
"assertIn",
"assertNotIn",
"assertEmpty",
"assertNotEmpty",
"assertTrue",
"assertFalse",
]:
expected_return = self._cast_expected_to_returned_type(
expected_return, module_output
)
if assertion == "assertEqual":
assertion_desc = "=="
value["status"] = self.__assert_equal(
expected_return, module_output, assert_print_result
)
elif assertion == "assertNotEqual":
assertion_desc = "!="
value["status"] = self.__assert_not_equal(
expected_return, module_output, assert_print_result
)
elif assertion == "assertTrue":
assertion_desc = "True is"
value["status"] = self.__assert_true(module_output)
elif assertion == "assertFalse":
assertion_desc = "False is"
value["status"] = self.__assert_false(module_output)
elif assertion == "assertIn":
assertion_desc = "IN"
value["status"] = self.__assert_in(
expected_return, module_output, assert_print_result
)
elif assertion == "assertNotIn":
assertion_desc = "NOT IN"
value["status"] = self.__assert_not_in(
expected_return, module_output, assert_print_result
)
elif assertion == "assertGreater":
assertion_desc = ">"
value["status"] = self.__assert_greater(expected_return, module_output)
elif assertion == "assertGreaterEqual":
assertion_desc = ">="
value["status"] = self.__assert_greater_equal(
expected_return, module_output
)
elif assertion == "assertLess":
assertion_desc = "<"
value["status"] = self.__assert_less(expected_return, module_output)
elif assertion == "assertLessEqual":
assertion_desc = "<="
value["status"] = self.__assert_less_equal(expected_return, module_output)
elif assertion == "assertEmpty":
assertion_desc = "IS EMPTY"
value["status"] = self.__assert_empty(module_output)
elif assertion == "assertNotEmpty":
assertion_desc = "IS NOT EMPTY"
value["status"] = self.__assert_not_empty(module_output)
else:
value["status"] = "Fail - bad assertion"
if output_details:
if assertion_section:
assertion_section_repr_title = " {0}".format("assertion_section")
assertion_section_repr_value = " {0}".format(assertion_section)
else:
assertion_section_repr_title = ""
assertion_section_repr_value = ""
value[
"module.function [args]{0}".format(assertion_section_repr_title)
] = "{0} {1}{2}".format(
mod_and_func, dumps(args), assertion_section_repr_value,
)
value["saltcheck assertion"] = "{0}{1} {2}".format(
("" if expected_return is None else "{0} ".format(expected_return)),
assertion_desc,
("hidden" if not assert_print_result else module_output),
)
return value
def run_test(self, test_dict):
"""
Run a single saltcheck test
"""
result = {}
start = time.time()
global_output_details = __salt__["config.get"](
"saltcheck_output_details", False
@ -700,10 +812,7 @@ class SaltCheck(object):
if skip:
return {"status": "Skip", "duration": 0.0}
mod_and_func = test_dict["module_and_function"]
assertion_section = test_dict.get("assertion_section", None)
assertion_section_delimiter = test_dict.get(
"assertion_section_delimiter", DEFAULT_TARGET_DELIM
)
args = test_dict.get("args", None)
kwargs = test_dict.get("kwargs", None)
pillar_data = test_dict.get(
@ -718,103 +827,48 @@ class SaltCheck(object):
if kwargs:
kwargs.pop("pillar", None)
if mod_and_func in ["saltcheck.state_apply"]:
assertion = "assertNotEmpty"
else:
assertion = test_dict["assertion"]
expected_return = test_dict.get(
"expected_return", test_dict.get("expected-return", None)
)
assert_print_result = test_dict.get("print_result", True)
actual_return = self._call_salt_command(
mod_and_func,
args,
kwargs,
assertion_section,
assertion_section_delimiter,
)
if assertion not in [
"assertIn",
"assertNotIn",
"assertEmpty",
"assertNotEmpty",
"assertTrue",
"assertFalse",
]:
expected_return = self._cast_expected_to_returned_type(
expected_return, actual_return
)
if assertion == "assertEqual":
assertion_desc = "=="
value = self.__assert_equal(
expected_return, actual_return, assert_print_result
)
elif assertion == "assertNotEqual":
assertion_desc = "!="
value = self.__assert_not_equal(
expected_return, actual_return, assert_print_result
)
elif assertion == "assertTrue":
assertion_desc = "True is"
value = self.__assert_true(actual_return)
elif assertion == "assertFalse":
assertion_desc = "False is"
value = self.__assert_false(actual_return)
elif assertion == "assertIn":
assertion_desc = "IN"
value = self.__assert_in(
expected_return, actual_return, assert_print_result
)
elif assertion == "assertNotIn":
assertion_desc = "NOT IN"
value = self.__assert_not_in(
expected_return, actual_return, assert_print_result
)
elif assertion == "assertGreater":
assertion_desc = ">"
value = self.__assert_greater(expected_return, actual_return)
elif assertion == "assertGreaterEqual":
assertion_desc = ">="
value = self.__assert_greater_equal(expected_return, actual_return)
elif assertion == "assertLess":
assertion_desc = "<"
value = self.__assert_less(expected_return, actual_return)
elif assertion == "assertLessEqual":
assertion_desc = "<="
value = self.__assert_less_equal(expected_return, actual_return)
elif assertion == "assertEmpty":
assertion_desc = "IS EMPTY"
value = self.__assert_empty(actual_return)
elif assertion == "assertNotEmpty":
assertion_desc = "IS NOT EMPTY"
value = self.__assert_not_empty(actual_return)
actual_return = self._call_salt_command(mod_and_func, args, kwargs)
if test_dict.get("assertions"):
for num, assert_group in enumerate(
test_dict.get("assertions"), start=1
):
result["assertion{}".format(num)] = self._run_assertions(
mod_and_func,
args,
assert_group,
actual_return,
output_details,
assert_print_result,
)
# Walk individual assert status results to set the top level status
# key as needed
for k, v in copy.deepcopy(result).items():
if k.startswith("assertion"):
for assert_k, assert_v in result[k].items():
if assert_k.startswith("status"):
if result[k][assert_k] != "Pass":
result["status"] = "Fail"
if not result.get("status"):
result["status"] = "Pass"
else:
value = "Fail - bad assertion"
result.update(
self._run_assertions(
mod_and_func,
args,
test_dict,
actual_return,
output_details,
assert_print_result,
)
)
else:
value = "Fail - invalid test"
result["status"] = "Fail - invalid test"
end = time.time()
result = {}
result["status"] = value
if output_details:
if assertion_section:
assertion_section_repr_title = ".{0}".format("assertion_section")
assertion_section_repr_value = ".{0}".format(assertion_section)
else:
assertion_section_repr_title = ""
assertion_section_repr_value = ""
result[
"module.function [args]{0}".format(assertion_section_repr_title)
] = "{0} {1}{2}".format(
mod_and_func, dumps(args), assertion_section_repr_value,
)
result["saltcheck assertion"] = "{0}{1} {2}".format(
("" if expected_return is None else "{0} ".format(expected_return)),
assertion_desc,
("hidden" if not assert_print_result else actual_return),
)
result["duration"] = round(end - start, 4)
return result

View file

@ -6,6 +6,9 @@ Utility functions for use with or in SLS files
# Import Python libs
from __future__ import absolute_import, print_function, unicode_literals
import os
import textwrap
# Import Salt libs
import salt.exceptions
import salt.loader
@ -243,3 +246,184 @@ def deserialize(serializer, stream_or_string, **mod_kwargs):
"""
kwargs = salt.utils.args.clean_kwargs(**mod_kwargs)
return _get_serialize_fn(serializer, "deserialize")(stream_or_string, **kwargs)
def banner(
width=72,
commentchar="#",
borderchar="#",
blockstart=None,
blockend=None,
title=None,
text=None,
newline=False,
):
"""
Create a standardized comment block to include in a templated file.
A common technique in configuration management is to include a comment
block in managed files, warning users not to modify the file. This
function simplifies and standardizes those comment blocks.
:param width: The width, in characters, of the banner. Default is 72.
:param commentchar: The character to be used in the starting position of
each line. This value should be set to a valid line comment character
for the syntax of the file in which the banner is being inserted.
Multiple character sequences, like '//', are supported.
If the file's syntax does not support line comments (such as XML),
use the ``blockstart`` and ``blockend`` options.
:param borderchar: The character to use in the top and bottom border of
the comment box. Must be a single character.
:param blockstart: The character sequence to use at the beginning of a
block comment. Should be used in conjunction with ``blockend``
:param blockend: The character sequence to use at the end of a
block comment. Should be used in conjunction with ``blockstart``
:param title: The first field of the comment block. This field appears
centered at the top of the box.
:param text: The second field of the comment block. This field appears
left-justified at the bottom of the box.
:param newline: Boolean value to indicate whether the comment block should
end with a newline. Default is ``False``.
**Example 1 - the default banner:**
.. code-block:: jinja
{{ salt['slsutil.banner']() }}
.. code-block:: none
########################################################################
# #
# THIS FILE IS MANAGED BY SALT - DO NOT EDIT #
# #
# The contents of this file are managed by Salt. Any changes to this #
# file may be overwritten automatically and without warning. #
########################################################################
**Example 2 - a Javadoc-style banner:**
.. code-block:: jinja
{{ salt['slsutil.banner'](commentchar=' *', borderchar='*', blockstart='/**', blockend=' */') }}
.. code-block:: none
/**
***********************************************************************
* *
* THIS FILE IS MANAGED BY SALT - DO NOT EDIT *
* *
* The contents of this file are managed by Salt. Any changes to this *
* file may be overwritten automatically and without warning. *
***********************************************************************
*/
**Example 3 - custom text:**
.. code-block:: jinja
{% set copyright='This file may not be copied or distributed without permission of SaltStack, Inc.' %}
{{ salt['slsutil.banner'](title='Copyright 2019 SaltStack, Inc.', text=copyright, width=60) }}
.. code-block:: none
############################################################
# #
# Copyright 2019 SaltStack, Inc. #
# #
# This file may not be copied or distributed without #
# permission of SaltStack, Inc. #
############################################################
"""
if title is None:
title = "THIS FILE IS MANAGED BY SALT - DO NOT EDIT"
if text is None:
text = (
"The contents of this file are managed by Salt. "
"Any changes to this file may be overwritten "
"automatically and without warning."
)
# Set up some typesetting variables
ledge = commentchar.rstrip()
redge = commentchar.strip()
lgutter = ledge + " "
rgutter = " " + redge
textwidth = width - len(lgutter) - len(rgutter)
# Check the width
if textwidth <= 0:
raise salt.exceptions.ArgumentValueError("Width is too small to render banner")
# Define the static elements
border_line = (
commentchar + borderchar[:1] * (width - len(ledge) - len(redge)) + redge
)
spacer_line = commentchar + " " * (width - len(commentchar) * 2) + commentchar
# Create the banner
wrapper = textwrap.TextWrapper(width=textwidth)
block = list()
if blockstart is not None:
block.append(blockstart)
block.append(border_line)
block.append(spacer_line)
for line in wrapper.wrap(title):
block.append(lgutter + line.center(textwidth) + rgutter)
block.append(spacer_line)
for line in wrapper.wrap(text):
block.append(lgutter + line + " " * (textwidth - len(line)) + rgutter)
block.append(border_line)
if blockend is not None:
block.append(blockend)
# Convert list to multi-line string
result = os.linesep.join(block)
# Add a newline character to the end of the banner
if newline:
return result + os.linesep
return result
def boolstr(value, true="true", false="false"):
"""
Convert a boolean value into a string. This function is
intended to be used from within file templates to provide
an easy way to take boolean values stored in Pillars or
Grains, and write them out in the appropriate syntax for
a particular file template.
:param value: The boolean value to be converted
:param true: The value to return if ``value`` is ``True``
:param false: The value to return if ``value`` is ``False``
In this example, a pillar named ``smtp:encrypted`` stores a boolean
value, but the template that uses that value needs ``yes`` or ``no``
to be written, based on the boolean value.
*Note: this is written on two lines for clarity. The same result
could be achieved in one line.*
.. code-block:: jinja
{% set encrypted = salt['pillar.get']('smtp:encrypted', false) %}
use_tls: {{ salt['slsutil.boolstr'](encrypted, 'yes', 'no') }}
Result (assuming the value is ``True``):
.. code-block:: none
use_tls: yes
"""
if value:
return true
return false

View file

@ -60,6 +60,7 @@ __outputter__ = {
"template": "highstate",
"template_str": "highstate",
"apply_": "highstate",
"test": "highstate",
"request": "highstate",
"check_request": "highstate",
"run_request": "highstate",
@ -799,6 +800,21 @@ def apply_(mods=None, **kwargs):
return highstate(**kwargs)
def test(*args, **kwargs):
"""
.. versionadded:: Sodium
Alias for `state.apply` with the kwarg `test` forced to `True`.
This is a nicety to avoid the need to type out `test=True` and the possibility of
a typo causing changes you do not intend.
"""
kwargs["test"] = True
ret = apply_(*args, **kwargs)
return ret
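A minimal sketch (outside Salt, with a stand-in ``apply_``) of why forcing the kwarg is safer than typing ``test=True`` by hand: a typo such as ``tset=True`` would otherwise run a real apply. All names below are illustrative only.

def apply_(*args, **kwargs):
    # stand-in for state.apply: pretend "test" controls a dry run
    return "dry run" if kwargs.get("test", False) else "APPLYING CHANGES"

def test(*args, **kwargs):
    kwargs["test"] = True  # forced, so a typo cannot disable the dry run
    return apply_(*args, **kwargs)

print(apply_("core.vim", tset=True))  # 'APPLYING CHANGES' - the typo went unnoticed
print(test("core.vim", tset=True))    # 'dry run' - the forced kwarg wins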
def request(mods=None, **kwargs):
"""
.. versionadded:: 2015.5.0

View file

@ -73,6 +73,7 @@ import tempfile
# Import Salt libs
import salt.utils.data
import salt.utils.stringutils
# Import 3rd-party libs
# pylint: disable=no-name-in-module,import-error
@ -251,6 +252,13 @@ def _wget(cmd, opts=None, url="http://localhost:8080/manager", timeout=180):
except Exception: # pylint: disable=broad-except
ret["msg"] = "Failed to create HTTP request"
# Force all byte strings to utf-8 strings, for python >= 3.4
for key, value in enumerate(ret["msg"]):
try:
ret["msg"][key] = salt.utils.stringutils.to_unicode(value, "utf-8")
except (UnicodeDecodeError, AttributeError):
pass
if not ret["msg"][0].startswith("OK"):
ret["res"] = False

View file

@ -1701,6 +1701,7 @@ def update(
graphics=None,
live=True,
boot=None,
test=False,
**kwargs
):
"""
@ -1758,6 +1759,10 @@ def update(
.. versionadded:: 3000
:param test: run in dry-run mode if set to True
.. versionadded:: sodium
:return:
Returns a dictionary indicating the status of what has been done. It is structured in
@ -1805,8 +1810,8 @@ def update(
new_desc = ElementTree.fromstring(
_gen_xml(
name,
cpu,
mem,
cpu or 0,
mem or 0,
all_disks,
_get_merged_nics(hypervisor, nic_profile, interfaces),
hypervisor,
@ -1927,11 +1932,15 @@ def update(
item in changes["disk"]["new"]
and source_file
and not os.path.isfile(source_file)
and not test
):
_qemu_image_create(all_disks[idx])
try:
conn.defineXML(salt.utils.stringutils.to_str(ElementTree.tostring(desc)))
if not test:
conn.defineXML(
salt.utils.stringutils.to_str(ElementTree.tostring(desc))
)
status["definition"] = True
except libvirt.libvirtError as err:
conn.close()
@ -1988,7 +1997,7 @@ def update(
for cmd in commands:
try:
ret = getattr(domain, cmd["cmd"])(*cmd["args"])
ret = getattr(domain, cmd["cmd"])(*cmd["args"]) if not test else 0
device_type = cmd["device"]
if device_type in ["cpu", "mem"]:
status[device_type] = not bool(ret)

View file

@ -2615,12 +2615,18 @@ class _policy_info(object):
"lgpo_section": self.account_lockout_policy_gpedit_path,
"Settings": {
"Function": "_in_range_inclusive",
"Args": {"min": 0, "max": 6000000},
"Args": {
"min": 0,
"max": 6000000,
"zero_value": 0xFFFFFFFF,
},
},
"NetUserModal": {"Modal": 3, "Option": "lockout_duration"},
"Transform": {
"Get": "_seconds_to_minutes",
"Put": "_minutes_to_seconds",
"GetArgs": {"zero_value": 0xFFFFFFFF},
"PutArgs": {"zero_value": 0xFFFFFFFF},
},
},
"LockoutThreshold": {
@ -4252,7 +4258,10 @@ class _policy_info(object):
"""
converts a number of seconds to minutes
"""
zero_value = kwargs.get("zero_value", 0)
if val is not None:
if val == zero_value:
return 0
return val / 60
else:
return "Not Defined"
@ -4262,7 +4271,10 @@ class _policy_info(object):
"""
converts number of minutes to seconds
"""
zero_value = kwargs.get("zero_value", 0)
if val is not None:
if val == 0:
return zero_value
return val * 60
else:
return "Not Defined"

View file

@ -19,6 +19,7 @@ from datetime import datetime
# Import Salt libs
import salt.utils.platform
import salt.utils.winapi
from salt.exceptions import ArgumentValueError, CommandExecutionError
from salt.ext.six.moves import range
# Import 3rd Party Libraries
@ -624,6 +625,11 @@ def create_task_from_xml(
Returns:
bool: ``True`` if successful, otherwise ``False``
str: A string with the error message if there is an error
Raises:
ArgumentValueError: If arguments are invalid
CommandExecutionError
CLI Example:
@ -637,7 +643,7 @@ def create_task_from_xml(
return "{0} already exists".format(name)
if not xml_text and not xml_path:
return "Must specify either xml_text or xml_path"
raise ArgumentValueError("Must specify either xml_text or xml_path")
# Create the task service object
with salt.utils.winapi.Com():
@ -665,6 +671,7 @@ def create_task_from_xml(
logon_type = TASK_LOGON_INTERACTIVE_TOKEN
else:
password = None
logon_type = TASK_LOGON_NONE
# Save the task
try:
@ -674,17 +681,43 @@ def create_task_from_xml(
except pythoncom.com_error as error:
hr, msg, exc, arg = error.args # pylint: disable=W0633
error_code = hex(exc[5] + 2 ** 32)
fc = {
-2147216615: "Required element or attribute missing",
-2147216616: "Value incorrectly formatted or out of range",
-2147352571: "Access denied",
0x80041319: "Required element or attribute missing",
0x80041318: "Value incorrectly formatted or out of range",
0x80020005: "Access denied",
0x80041309: "A task's trigger is not found",
0x8004130A: "One or more of the properties required to run this "
"task have not been set",
0x8004130C: "The Task Scheduler service is not installed on this "
"computer",
0x8004130D: "The task object could not be opened",
0x8004130E: "The object is either an invalid task object or is not "
"a task object",
0x8004130F: "No account information could be found in the Task "
"Scheduler security database for the task indicated",
0x80041310: "Unable to establish existence of the account specified",
0x80041311: "Corruption was detected in the Task Scheduler "
"security database; the database has been reset",
0x80041313: "The task object version is either unsupported or invalid",
0x80041314: "The task has been configured with an unsupported "
"combination of account settings and run time options",
0x80041315: "The Task Scheduler Service is not running",
0x80041316: "The task XML contains an unexpected node",
0x80041317: "The task XML contains an element or attribute from an "
"unexpected namespace",
0x8004131A: "The task XML is malformed",
0x0004131C: "The task is registered, but may fail to start. Batch "
"logon privilege needs to be enabled for the task principal",
0x8004131D: "The task XML contains too many nodes of the same type",
}
try:
failure_code = fc[exc[5]]
failure_code = fc[exc[5] + 2 ** 32]
except KeyError:
failure_code = "Unknown Failure: {0}".format(error)
log.debug("Failed to create task: %s", failure_code)
failure_code = "Unknown Failure: {0}".format(error_code)
finally:
log.debug("Failed to create task: %s", failure_code)
raise CommandExecutionError(failure_code)
# Verify creation
return name in list_tasks(location)

View file

@ -209,24 +209,22 @@ def get_zone():
Returns:
str: Timezone in unix format
Raises:
CommandExecutionError: If timezone could not be gathered
CLI Example:
.. code-block:: bash
salt '*' timezone.get_zone
"""
win_zone = __utils__["reg.read_value"](
hive="HKLM",
key="SYSTEM\\CurrentControlSet\\Control\\TimeZoneInformation",
vname="TimeZoneKeyName",
)["vdata"]
# Some data may have null characters. We only need the first portion up to
# the first null character. See the following:
# https://github.com/saltstack/salt/issues/51940
# https://stackoverflow.com/questions/27716746/hklm-system-currentcontrolset-control-timezoneinformation-timezonekeyname-corrup
if "\0" in win_zone:
win_zone = win_zone.split("\0")[0]
return mapper.get_unix(win_zone.lower(), "Unknown")
cmd = ["tzutil", "/g"]
res = __salt__["cmd.run_all"](cmd, python_shell=False)
if res["retcode"] or not res["stdout"]:
raise CommandExecutionError(
"tzutil encountered an error getting timezone", info=res
)
return mapper.get_unix(res["stdout"].lower(), "Unknown")
def get_offset():

View file

@ -1722,7 +1722,7 @@ def install(
cmd.extend(targets)
out = _call_yum(cmd, ignore_retcode=False, redirect_stderr=True)
if out["retcode"] != 0:
errors.append(out["stdout"])
errors.append(out["stderr"])
targets = []
with _temporarily_unhold(to_downgrade, targets):
@ -1733,7 +1733,7 @@ def install(
cmd.extend(targets)
out = _call_yum(cmd)
if out["retcode"] != 0:
errors.append(out["stdout"])
errors.append(out["stderr"])
targets = []
with _temporarily_unhold(to_reinstall, targets):
@ -1744,7 +1744,7 @@ def install(
cmd.extend(targets)
out = _call_yum(cmd)
if out["retcode"] != 0:
errors.append(out["stdout"])
errors.append(out["stderr"])
__context__.pop("pkg.list_pkgs", None)
new = (

View file

@ -2244,7 +2244,7 @@ def list_products(all=False, refresh=False):
OEM_PATH = "/var/lib/suseRegister/OEM"
cmd = list()
if not all:
cmd.append("--disable-repos")
cmd.append("--disable-repositories")
cmd.append("products")
if not all:
cmd.append("-i")

View file

@ -75,6 +75,7 @@ STATE_REQUISITE_KEYWORDS = frozenset(
"onchanges_any",
"onfail",
"onfail_any",
"onfail_all",
"onfail_stop",
"prereq",
"prerequired",
@ -2583,6 +2584,8 @@ class State(object):
present = True
if "onfail_any" in low:
present = True
if "onfail_all" in low:
present = True
if "onchanges" in low:
present = True
if "onchanges_any" in low:
@ -2598,6 +2601,7 @@ class State(object):
"prereq": [],
"onfail": [],
"onfail_any": [],
"onfail_all": [],
"onchanges": [],
"onchanges_any": [],
}
@ -2697,7 +2701,7 @@ class State(object):
else:
if run_dict[tag].get("__state_ran__", True):
req_stats.add("met")
if r_state.endswith("_any"):
if r_state.endswith("_any") or r_state == "onfail":
if "met" in req_stats or "change" in req_stats:
if "fail" in req_stats:
req_stats.remove("fail")
@ -2707,8 +2711,14 @@ class State(object):
if "fail" in req_stats:
req_stats.remove("fail")
if "onfail" in req_stats:
if "fail" in req_stats:
# a met requisite in this case implies a success
if "met" in req_stats:
req_stats.remove("onfail")
if r_state.endswith("_all"):
if "onfail" in req_stats:
# a met requisite in this case implies a failure
if "met" in req_stats:
req_stats.remove("met")
fun_stats.update(req_stats)
if "unmet" in fun_stats:
@ -2720,8 +2730,8 @@ class State(object):
status = "met"
else:
status = "pre"
elif "onfail" in fun_stats and "met" not in fun_stats:
status = "onfail" # all onfail states are OK
elif "onfail" in fun_stats and "onchangesmet" not in fun_stats:
status = "onfail"
elif "onchanges" in fun_stats and "onchangesmet" not in fun_stats:
status = "onchanges"
elif "change" in fun_stats:
@ -3292,6 +3302,49 @@ class State(object):
return self.call_high(high)
class LazyAvailStates(object):
"""
The LazyAvailStates lazily loads the list of states of available
environments.
This is particularly useful when top_file_merging_strategy=same and there
are many environments.
"""
def __init__(self, hs):
self._hs = hs
self._avail = {"base": None}
self._filled = False
def _fill(self):
if self._filled:
return
for saltenv in self._hs._get_envs():
if saltenv not in self._avail:
self._avail[saltenv] = None
self._filled = True
def __contains__(self, saltenv):
if saltenv == "base":
return True
self._fill()
return saltenv in self._avail
def __getitem__(self, saltenv):
if saltenv != "base":
self._fill()
if self._avail[saltenv] is None:
self._avail[saltenv] = self._hs.client.list_states(saltenv)
return self._avail[saltenv]
def items(self):
self._fill()
ret = []
for saltenv in self._avail:
ret.append((saltenv, self.__getitem__(saltenv)))
return ret
class BaseHighState(object):
"""
The BaseHighState is an abstract base class that is the foundation of
@ -3310,12 +3363,9 @@ class BaseHighState(object):
def __gather_avail(self):
"""
Gather the lists of available sls data from the master
Lazily gather the lists of available sls data from the master
"""
avail = {}
for saltenv in self._get_envs():
avail[saltenv] = self.client.list_states(saltenv)
return avail
return LazyAvailStates(self)
def __gen_opts(self, opts):
"""

View file

@ -233,7 +233,9 @@ def topic_present(
subscribe += [sub]
for sub in current_subs:
minimal = {"Protocol": sub["Protocol"], "Endpoint": sub["Endpoint"]}
if minimal not in obfuscated_subs:
if minimal not in obfuscated_subs and sub["SubscriptionArn"].startswith(
"arn:aws:sns:"
):
unsubscribe += [sub["SubscriptionArn"]]
for sub in subscribe:
prot = sub["Protocol"]

View file

@ -429,23 +429,20 @@ def _function_config_present(
func = __salt__["boto_lambda.describe_function"](
FunctionName, region=region, key=key, keyid=keyid, profile=profile
)["function"]
# pylint: disable=possibly-unused-variable
role_arn = _get_role_arn(Role, region, key, keyid, profile)
# pylint: enable=possibly-unused-variable
need_update = False
options = {
"Role": "role_arn",
"Handler": "Handler",
"Description": "Description",
"Timeout": "Timeout",
"MemorySize": "MemorySize",
"Role": _get_role_arn(Role, region, key, keyid, profile),
"Handler": Handler,
"Description": Description,
"Timeout": Timeout,
"MemorySize": MemorySize,
}
for val, var in six.iteritems(options):
if func[val] != locals()[var]:
for key, val in six.iteritems(options):
if func[key] != val:
need_update = True
ret["changes"].setdefault("new", {})[var] = locals()[var]
ret["changes"].setdefault("old", {})[var] = func[val]
ret["changes"].setdefault("old", {})[key] = func[key]
ret["changes"].setdefault("new", {})[key] = val
# VpcConfig returns the extra value 'VpcId' so do a special compare
oldval = func.get("VpcConfig")
if oldval is not None:
@ -508,6 +505,13 @@ def _function_code_present(
)["function"]
update = False
if ZipFile:
if "://" in ZipFile: # Looks like a remote URL to me...
dlZipFile = __salt__["cp.cache_file"](path=ZipFile)
if dlZipFile is False:
ret["result"] = False
ret["comment"] = "Failed to cache ZipFile `{0}`.".format(ZipFile)
return ret
ZipFile = dlZipFile
size = os.path.getsize(ZipFile)
if size == func["CodeSize"]:
sha = hashlib.sha256()
@ -787,13 +791,13 @@ def alias_present(
)["alias"]
need_update = False
options = {"FunctionVersion": "FunctionVersion", "Description": "Description"}
options = {"FunctionVersion": FunctionVersion, "Description": Description}
for val, var in six.iteritems(options):
if _describe[val] != locals()[var]:
for key, val in six.iteritems(options):
if _describe[key] != val:
need_update = True
ret["changes"].setdefault("new", {})[var] = locals()[var]
ret["changes"].setdefault("old", {})[var] = _describe[val]
ret["changes"].setdefault("old", {})[key] = _describe[key]
ret["changes"].setdefault("new", {})[key] = val
if need_update:
ret["comment"] = os.linesep.join(
[ret["comment"], "Alias config to be modified"]
@ -1026,13 +1030,13 @@ def event_source_mapping_present(
)["event_source_mapping"]
need_update = False
options = {"BatchSize": "BatchSize"}
options = {"BatchSize": BatchSize}
for val, var in six.iteritems(options):
if _describe[val] != locals()[var]:
for key, val in six.iteritems(options):
if _describe[key] != val:
need_update = True
ret["changes"].setdefault("new", {})[var] = locals()[var]
ret["changes"].setdefault("old", {})[var] = _describe[val]
ret["changes"].setdefault("old", {})[key] = _describe[key]
ret["changes"].setdefault("new", {})[key] = val
# verify FunctionName against FunctionArn
function_arn = _get_function_arn(
FunctionName, region=region, key=key, keyid=keyid, profile=profile
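The refactor above replaces the locals()-based lookup with a plain desired-vs-described comparison. A generic sketch of that pattern, with a hypothetical ``diff_options`` helper and hand-written data:

def diff_options(described, desired):
    # compare a dict of desired values against the described resource
    changes = {"old": {}, "new": {}}
    need_update = False
    for key, val in desired.items():
        if described.get(key) != val:
            need_update = True
            changes["old"][key] = described.get(key)
            changes["new"][key] = val
    return need_update, changes

described = {"Handler": "app.handler", "Timeout": 3, "MemorySize": 128}
desired = {"Handler": "app.handler", "Timeout": 30, "MemorySize": 128}
print(diff_options(described, desired))
# (True, {'old': {'Timeout': 3}, 'new': {'Timeout': 30}})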

377
salt/states/btrfs.py Normal file
View file

@ -0,0 +1,377 @@
# -*- coding: utf-8 -*-
"""
:maintainer: Alberto Planas <aplanas@suse.com>
:maturity: new
:depends: None
:platform: Linux
"""
from __future__ import absolute_import, print_function, unicode_literals
import functools
import logging
import os.path
import tempfile
import traceback
from salt.exceptions import CommandExecutionError
log = logging.getLogger(__name__)
__virtualname__ = "btrfs"
def _mount(device, use_default):
"""
Mount the device in a temporary place.
"""
opts = "defaults" if use_default else "subvol=/"
dest = tempfile.mkdtemp()
res = __states__["mount.mounted"](
dest, device=device, fstype="btrfs", opts=opts, persist=False
)
if not res["result"]:
log.error("Cannot mount device %s in %s", device, dest)
_umount(dest)
return None
return dest
def _umount(path):
"""
Umount and clean the temporary place.
"""
__states__["mount.unmounted"](path)
__utils__["files.rm_rf"](path)
def _is_default(path, dest, name):
"""
Check if the subvolume is the current default.
"""
subvol_id = __salt__["btrfs.subvolume_show"](path)[name]["subvolume id"]
def_id = __salt__["btrfs.subvolume_get_default"](dest)["id"]
return subvol_id == def_id
def _set_default(path, dest, name):
"""
Set the subvolume as the current default.
"""
subvol_id = __salt__["btrfs.subvolume_show"](path)[name]["subvolume id"]
return __salt__["btrfs.subvolume_set_default"](subvol_id, dest)
def _is_cow(path):
"""
Check if the subvolume is copy on write
"""
dirname = os.path.dirname(path)
return "C" not in __salt__["file.lsattr"](dirname)[path]
def _unset_cow(path):
"""
Disable the copy on write in a subvolume
"""
return __salt__["file.chattr"](path, operator="add", attributes="C")
def __mount_device(action):
"""
Small decorator to make sure that the mount and umount happen in
a transactional way.
"""
@functools.wraps(action)
def wrapper(*args, **kwargs):
name = kwargs["name"]
device = kwargs["device"]
use_default = kwargs.get("use_default", False)
ret = {
"name": name,
"result": False,
"changes": {},
"comment": ["Some error happends during the operation."],
}
try:
if device:
dest = _mount(device, use_default)
if not dest:
msg = "Device {} cannot be mounted".format(device)
ret["comment"].append(msg)
kwargs["__dest"] = dest
ret = action(*args, **kwargs)
except Exception as e: # pylint: disable=broad-except
log.error("""Traceback: {}""".format(traceback.format_exc()))
ret["comment"].append(e)
finally:
if device:
_umount(dest)
return ret
return wrapper
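A generic, runnable sketch of the mount/umount "transaction" idea behind __mount_device, using stand-in mount and umount callables so it works outside of Salt; all names here are illustrative:

import functools

def with_temp_mount(mount, umount):
    def decorator(action):
        @functools.wraps(action)
        def wrapper(*args, **kwargs):
            dest = mount()
            try:
                return action(dest, *args, **kwargs)
            finally:
                umount(dest)  # always clean up, even if the action raised
        return wrapper
    return decorator

@with_temp_mount(lambda: "/tmp/demo-mount", lambda d: print("umount", d))
def work(dest, name):
    return "working on {} in {}".format(name, dest)

print(work("subvol"))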
@__mount_device
def subvolume_created(
name,
device,
qgroupids=None,
set_default=False,
copy_on_write=True,
force_set_default=True,
__dest=None,
):
"""
Makes sure that a btrfs subvolume is present.
name
Name of the subvolume to add
device
Device where to create the subvolume
qgroupids
Add the newly created subvolume to a qgroup. This parameter
is a list
set_default
If True, this new subvolume will be set as default when
mounted, unless subvol option in mount is used
copy_on_write
If false, set the subvolume with chattr +C
force_set_default
If false and the subvolume is already present, it will not
force it as default if ``set_default`` is True
"""
ret = {
"name": name,
"result": False,
"changes": {},
"comment": [],
}
path = os.path.join(__dest, name)
exists = __salt__["btrfs.subvolume_exists"](path)
if exists:
ret["comment"].append("Subvolume {} already present".format(name))
# Resolve first the test case. The check is not complete, but at
# least we will report if a subvolume needs to be created. It can
# happen that the subvolume is there, but we also need to set it
# as default, or persist it in fstab.
if __opts__["test"]:
ret["result"] = None
if not exists:
ret["changes"][name] = "Subvolume {} will be created".format(name)
return ret
if not exists:
# Create the directories where the subvolume lives
_path = os.path.dirname(path)
res = __states__["file.directory"](_path, makedirs=True)
if not res["result"]:
ret["comment"].append("Error creating {} directory".format(_path))
return ret
try:
__salt__["btrfs.subvolume_create"](name, dest=__dest, qgroupids=qgroupids)
except CommandExecutionError:
ret["comment"].append("Error creating subvolume {}".format(name))
return ret
ret["changes"][name] = "Created subvolume {}".format(name)
# If the volume was already present, we can opt-out the check for
# default subvolume.
if (
(not exists or (exists and force_set_default))
and set_default
and not _is_default(path, __dest, name)
):
ret["changes"][name + "_default"] = _set_default(path, __dest, name)
if not copy_on_write and _is_cow(path):
ret["changes"][name + "_no_cow"] = _unset_cow(path)
ret["result"] = True
return ret
@__mount_device
def subvolume_deleted(name, device, commit=False, __dest=None):
"""
Makes sure that a btrfs subvolume is removed.
name
Name of the subvolume to remove
device
Device where to remove the subvolume
commit
Wait until the transaction is over
"""
ret = {
"name": name,
"result": False,
"changes": {},
"comment": [],
}
path = os.path.join(__dest, name)
exists = __salt__["btrfs.subvolume_exists"](path)
if not exists:
ret["comment"].append("Subvolume {} already missing".format(name))
if __opts__["test"]:
ret["result"] = None
if exists:
ret["changes"][name] = "Subvolume {} will be removed".format(name)
return ret
# If commit is set, we wait until all is over
commit = "after" if commit else None
if not exists:
try:
__salt__["btrfs.subvolume_delete"](path, commit=commit)
except CommandExecutionError:
ret["comment"].append("Error removing subvolume {}".format(name))
return ret
ret["changes"][name] = "Removed subvolume {}".format(name)
ret["result"] = True
return ret
def _diff_properties(expected, current):
"""Calculate the difference between the current and the expected
properties
* 'expected' is expressed in a dictionary like: {'property': value}
* 'current' contains the same format returned by 'btrfs.properties'
If a property is not available, a KeyError will be raised.
"""
difference = {}
for _property, value in expected.items():
current_value = current[_property]["value"]
if value is False and current_value == "N/A":
needs_update = False
elif value != current_value:
needs_update = True
else:
needs_update = False
if needs_update:
difference[_property] = value
return difference
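A worked example of the property-diff logic above, with a hand-written "current" structure shaped like the output of btrfs.properties:

expected = {"ro": "true", "compression": False}
current = {
    "ro": {"value": "false"},
    "compression": {"value": "N/A"},
}

difference = {}
for prop, value in expected.items():
    current_value = current[prop]["value"]
    if value is False and current_value == "N/A":
        continue  # a False request matches the unset ("N/A") marker
    if value != current_value:
        difference[prop] = value

print(difference)  # {'ro': 'true'} - only 'ro' actually needs to change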
@__mount_device
def properties(name, device, use_default=False, __dest=None, **properties):
"""
Makes sure that a list of properties are set in a subvolume, file
or device.
name
Name of the object to change
device
Device where the object lives. If None, the device path is taken
directly from ``name``
use_default
If True, this subvolume will be resolved to the default
subvolume assigned during the create operation
properties
Dictionary of properties
Valid properties are 'ro', 'label' or 'compression'. Check the
documentation to see where those properties are valid for each
object.
"""
ret = {
"name": name,
"result": False,
"changes": {},
"comment": [],
}
# 'name' will always contain the name of the object that we want to
# change, but if the object is a device, we do not repeat it again
# in 'device'. This makes 'device' optional in some cases.
if device:
if os.path.isabs(name):
path = os.path.join(__dest, os.path.relpath(name, os.path.sep))
else:
path = os.path.join(__dest, name)
else:
path = name
if not os.path.exists(path):
ret["comment"].append("Object {} not found".format(name))
return ret
# Convert the booleans to lowercase
properties = {
k: v if type(v) is not bool else str(v).lower() for k, v in properties.items()
}
current_properties = {}
try:
current_properties = __salt__["btrfs.properties"](path)
except CommandExecutionError as e:
ret["comment"].append("Error reading properties from {}".format(name))
ret["comment"].append("Current error {}".format(e))
return ret
try:
properties_to_set = _diff_properties(properties, current_properties)
except KeyError:
ret["comment"].append("Some property not found in {}".format(name))
return ret
if __opts__["test"]:
ret["result"] = None
if properties_to_set:
ret["changes"] = properties_to_set
else:
msg = "No properties will be changed in {}".format(name)
ret["comment"].append(msg)
return ret
if properties_to_set:
_properties = ",".join(
"{}={}".format(k, v) for k, v in properties_to_set.items()
)
__salt__["btrfs.properties"](path, set=_properties)
current_properties = __salt__["btrfs.properties"](path)
properties_failed = _diff_properties(properties, current_properties)
if properties_failed:
msg = "Properties {} failed to be changed in {}".format(
properties_failed, name
)
ret["comment"].append(msg)
return ret
ret["comment"].append("Properties changed in {}".format(name))
ret["changes"] = properties_to_set
else:
ret["comment"].append("Properties not changed in {}".format(name))
ret["result"] = True
return ret

View file

@ -162,6 +162,20 @@ def _get_comparison_spec(pkgver):
return oper, verstr
def _check_ignore_epoch(oper, desired_version, ignore_epoch=None):
"""
Conditionally ignore epoch, but only under all of the following
circumstances:
1. No value for ignore_epoch passed to state
2. desired_version has no epoch
3. oper does not contain a "<" or ">"
"""
if ignore_epoch is not None:
return ignore_epoch
return "<" not in oper and ">" not in oper and ":" not in desired_version
def _parse_version_string(version_conditions_string):
"""
Returns a list of two-tuples containing (operator, version).
@ -179,7 +193,7 @@ def _parse_version_string(version_conditions_string):
def _fulfills_version_string(
installed_versions,
version_conditions_string,
ignore_epoch=False,
ignore_epoch=None,
allow_updates=False,
):
"""
@ -196,12 +210,17 @@ def _fulfills_version_string(
>=1.2.3-4, <2.3.4-5
>=1.2.3-4, <2.3.4-5, !=1.2.4-1
ignore_epoch : False
ignore_epoch : None
When a package version contains a non-zero epoch (e.g.
``1:3.14.159-2.el7``, and a specific version of a package is desired,
``1:3.14.159-2.el7``), and a specific version of a package is desired,
set this option to ``True`` to ignore the epoch when comparing
versions.
.. versionchanged:: Sodium
If no value for this argument is passed to the state that calls
this helper function, and ``version_conditions_string`` contains no
epoch or greater-than/less-than, then the epoch will be ignored.
allow_updates : False
Allow the package to be updated outside Salt's control (e.g. auto updates on Windows).
This means a package on the Minion can have a newer version than the latest available in
@ -222,7 +241,7 @@ def _fulfills_version_string(
return False
def _fulfills_version_spec(versions, oper, desired_version, ignore_epoch=False):
def _fulfills_version_spec(versions, oper, desired_version, ignore_epoch=None):
"""
Returns True if any of the installed versions match the specified version,
otherwise returns False
@ -240,7 +259,7 @@ def _fulfills_version_spec(versions, oper, desired_version, ignore_epoch=False):
oper=oper,
ver2=desired_version,
cmp_func=cmp_func,
ignore_epoch=ignore_epoch,
ignore_epoch=_check_ignore_epoch(oper, desired_version, ignore_epoch),
):
return True
return False
@ -262,7 +281,7 @@ def _find_download_targets(
pkgs=None,
normalize=True,
skip_suggestions=False,
ignore_epoch=False,
ignore_epoch=None,
**kwargs
):
"""
@ -423,7 +442,7 @@ def _find_advisory_targets(name=None, advisory_ids=None, **kwargs):
def _find_remove_targets(
name=None, version=None, pkgs=None, normalize=True, ignore_epoch=False, **kwargs
name=None, version=None, pkgs=None, normalize=True, ignore_epoch=None, **kwargs
):
"""
Inspect the arguments to pkg.removed and discover what packages need to
@ -511,7 +530,7 @@ def _find_install_targets(
skip_suggestions=False,
pkg_verify=False,
normalize=True,
ignore_epoch=False,
ignore_epoch=None,
reinstall=False,
refresh=False,
**kwargs
@ -672,7 +691,9 @@ def _find_install_targets(
name in cur_pkgs
and (
version is None
or _fulfills_version_string(cur_pkgs[name], version)
or _fulfills_version_string(
cur_pkgs[name], version, ignore_epoch=ignore_epoch
)
)
)
]
@ -880,7 +901,7 @@ def _find_install_targets(
)
def _verify_install(desired, new_pkgs, ignore_epoch=False, new_caps=None):
def _verify_install(desired, new_pkgs, ignore_epoch=None, new_caps=None):
"""
Determine whether or not the installed packages match what was requested in
the SLS file.
@ -1000,7 +1021,7 @@ def installed(
allow_updates=False,
pkg_verify=False,
normalize=True,
ignore_epoch=False,
ignore_epoch=None,
reinstall=False,
update_holds=False,
**kwargs
@ -1336,33 +1357,17 @@ def installed(
- normalize: False
:param bool ignore_epoch:
When a package version contains an non-zero epoch (e.g.
``1:3.14.159-2.el7``, and a specific version of a package is desired,
set this option to ``True`` to ignore the epoch when comparing
versions. This allows for the following SLS to be used:
.. code-block:: yaml
# Actual vim-enhanced version: 2:7.4.160-1.el7
vim-enhanced:
pkg.installed:
- version: 7.4.160-1.el7
- ignore_epoch: True
Without this option set to ``True`` in the above example, the package
would be installed, but the state would report as failed because the
actual installed version would be ``2:7.4.160-1.el7``. Alternatively,
this option can be left as ``False`` and the full version string (with
epoch) can be specified in the SLS file:
.. code-block:: yaml
vim-enhanced:
pkg.installed:
- version: 2:7.4.160-1.el7
If this option is not explicitly set, and there is no epoch in the
desired package version, the epoch will be implicitly ignored. Set this
argument to ``True`` to explicitly ignore the epoch, and ``False`` to
strictly enforce it.
.. versionadded:: 2015.8.9
.. versionchanged:: Sodium
In prior releases, the default behavior was to strictly enforce
epochs unless this argument was set to ``True``.
|
**MULTIPLE PACKAGE INSTALLATION OPTIONS: (not supported in pkgng)**
@ -2777,7 +2782,7 @@ def _uninstall(
version=None,
pkgs=None,
normalize=True,
ignore_epoch=False,
ignore_epoch=None,
**kwargs
):
"""
@ -2895,9 +2900,7 @@ def _uninstall(
}
def removed(
name, version=None, pkgs=None, normalize=True, ignore_epoch=False, **kwargs
):
def removed(name, version=None, pkgs=None, normalize=True, ignore_epoch=None, **kwargs):
"""
Verify that a package is not installed, calling ``pkg.remove`` if necessary
to remove the package.
@ -2943,34 +2946,18 @@ def removed(
.. versionadded:: 2015.8.0
ignore_epoch : False
When a package version contains an non-zero epoch (e.g.
``1:3.14.159-2.el7``, and a specific version of a package is desired,
set this option to ``True`` to ignore the epoch when comparing
versions. This allows for the following SLS to be used:
.. code-block:: yaml
# Actual vim-enhanced version: 2:7.4.160-1.el7
vim-enhanced:
pkg.removed:
- version: 7.4.160-1.el7
- ignore_epoch: True
Without this option set to ``True`` in the above example, the state
would falsely report success since the actual installed version is
``2:7.4.160-1.el7``. Alternatively, this option can be left as
``False`` and the full version string (with epoch) can be specified in
the SLS file:
.. code-block:: yaml
vim-enhanced:
pkg.removed:
- version: 2:7.4.160-1.el7
ignore_epoch : None
If this option is not explicitly set, and there is no epoch in the
desired package version, the epoch will be implicitly ignored. Set this
argument to ``True`` to explicitly ignore the epoch, and ``False`` to
strictly enforce it.
.. versionadded:: 2015.8.9
.. versionchanged:: Sodium
In prior releases, the default behavior was to strictly enforce
epochs unless this argument was set to ``True``.
Multiple Package Options:
pkgs
@ -3005,7 +2992,7 @@ def removed(
return ret
def purged(name, version=None, pkgs=None, normalize=True, ignore_epoch=False, **kwargs):
def purged(name, version=None, pkgs=None, normalize=True, ignore_epoch=None, **kwargs):
"""
Verify that a package is not installed, calling ``pkg.purge`` if necessary
to purge the package. All configuration files are also removed.
@ -3051,34 +3038,18 @@ def purged(name, version=None, pkgs=None, normalize=True, ignore_epoch=False, **
.. versionadded:: 2015.8.0
ignore_epoch : False
When a package version contains an non-zero epoch (e.g.
``1:3.14.159-2.el7``, and a specific version of a package is desired,
set this option to ``True`` to ignore the epoch when comparing
versions. This allows for the following SLS to be used:
.. code-block:: yaml
# Actual vim-enhanced version: 2:7.4.160-1.el7
vim-enhanced:
pkg.purged:
- version: 7.4.160-1.el7
- ignore_epoch: True
Without this option set to ``True`` in the above example, the state
would falsely report success since the actual installed version is
``2:7.4.160-1.el7``. Alternatively, this option can be left as
``False`` and the full version string (with epoch) can be specified in
the SLS file:
.. code-block:: yaml
vim-enhanced:
pkg.purged:
- version: 2:7.4.160-1.el7
ignore_epoch : None
If this option is not explicitly set, and there is no epoch in the
desired package version, the epoch will be implicitly ignored. Set this
argument to ``True`` to explicitly ignore the epoch, and ``False`` to
strictly enforce it.
.. versionadded:: 2015.8.9
.. versionchanged:: Sodium
In prior releases, the default behavior was to strictly enforce
epochs unless this argument was set to ``True``.
Multiple Package Options:
pkgs
@ -3086,7 +3057,7 @@ def purged(name, version=None, pkgs=None, normalize=True, ignore_epoch=False, **
``name`` parameter will be ignored if this option is passed. It accepts
version numbers as well.
.. versionadded:: 0.16.0
"""
kwargs["saltenv"] = __env__
try:

View file

@ -21,6 +21,7 @@ package managers are APT, DNF, YUM and Zypper. Here is some example SLS:
base:
pkgrepo.managed:
- humanname: Logstash PPA
- name: deb http://ppa.launchpad.net/wolfnet/logstash/ubuntu precise main
- dist: precise
- file: /etc/apt/sources.list.d/logstash.list
@ -37,6 +38,7 @@ package managers are APT, DNF, YUM and Zypper. Here is some example SLS:
base:
pkgrepo.managed:
- humanname: deb-multimedia
- name: deb http://www.deb-multimedia.org stable main
- file: /etc/apt/sources.list.d/deb-multimedia.list
- key_url: salt://deb-multimedia/files/marillat.pub
@ -45,6 +47,7 @@ package managers are APT, DNF, YUM and Zypper. Here is some example SLS:
base:
pkgrepo.managed:
- humanname: Google Chrome
- name: deb http://dl.google.com/linux/chrome/deb/ stable main
- dist: stable
- file: /etc/apt/sources.list.d/chrome-browser.list
@ -91,6 +94,7 @@ import salt.utils.data
import salt.utils.files
import salt.utils.pkg.deb
import salt.utils.pkg.rpm
import salt.utils.versions
# Import salt libs
from salt.exceptions import CommandExecutionError, SaltInvocationError
@ -104,9 +108,7 @@ def __virtual__():
"""
Only load if modifying repos is available for this package type
"""
if "pkg.mod_repo" in __salt__:
return True
return (False, "pkg module could not be loaded")
return "pkg.mod_repo" in __salt__
def managed(name, ppa=None, **kwargs):
@ -230,7 +232,7 @@ def managed(name, ppa=None, **kwargs):
Included to reduce confusion due to YUM/DNF/Zypper's use of the
``enabled`` argument. If this is passed for an APT-based distro, then
the reverse will be passed as ``disabled``. For example, passing
``enabled=False`` will assume ``disabled=True``.
``enabled=False`` will assume ``disabled=False``.
architectures
On apt-based systems, architectures can restrict the available
@ -299,10 +301,8 @@ def managed(name, ppa=None, **kwargs):
on debian based systems.
refresh_db : True
This argument has been deprecated. Please use ``refresh`` instead.
The ``refresh_db`` argument will continue to work to ensure backwards
compatibility, but we recommend using the preferred ``refresh``
argument instead.
.. deprecated:: 2018.3.0
Use ``refresh`` instead.
require_in
Set this to a list of pkg.installed or pkg.latest to trigger the
@ -310,6 +310,12 @@ def managed(name, ppa=None, **kwargs):
packages. Setting a require in the pkg state will not work for this.
"""
if "refresh_db" in kwargs:
salt.utils.versions.warn_until(
"Neon",
"The 'refresh_db' argument to 'pkg.mod_repo' has been "
"renamed to 'refresh'. Support for using 'refresh_db' will be "
"removed in the Neon release of Salt.",
)
kwargs["refresh"] = kwargs.pop("refresh_db")
ret = {"name": name, "changes": {}, "result": None, "comment": ""}
@ -395,7 +401,7 @@ def managed(name, ppa=None, **kwargs):
kwargs.pop(kwarg, None)
try:
pre = __salt__["pkg.get_repo"](repo, ppa_auth=kwargs.get("ppa_auth", None))
pre = __salt__["pkg.get_repo"](repo=repo, **kwargs)
except CommandExecutionError as exc:
ret["result"] = False
ret["comment"] = "Failed to examine repo '{0}': {1}".format(name, exc)
@ -431,7 +437,7 @@ def managed(name, ppa=None, **kwargs):
break
else:
break
elif kwarg == "comps":
elif kwarg in ("comps", "key_url"):
if sorted(sanitizedkwargs[kwarg]) != sorted(pre[kwarg]):
break
elif kwarg == "line" and __grains__["os_family"] == "Debian":
@ -516,7 +522,7 @@ def managed(name, ppa=None, **kwargs):
return ret
try:
post = __salt__["pkg.get_repo"](repo, ppa_auth=kwargs.get("ppa_auth", None))
post = __salt__["pkg.get_repo"](repo=repo, **kwargs)
if pre:
for kwarg in sanitizedkwargs:
if post.get(kwarg) != pre.get(kwarg):
@ -605,7 +611,7 @@ def absent(name, **kwargs):
return ret
try:
repo = __salt__["pkg.get_repo"](name, ppa_auth=kwargs.get("ppa_auth", None))
repo = __salt__["pkg.get_repo"](name, **kwargs)
except CommandExecutionError as exc:
ret["result"] = False
ret["comment"] = "Failed to configure repo '{0}': {1}".format(name, exc)

View file

@ -270,6 +270,198 @@ def powered_off(name, connection=None, username=None, password=None):
)
def defined(
name,
cpu=None,
mem=None,
vm_type=None,
disk_profile=None,
disks=None,
nic_profile=None,
interfaces=None,
graphics=None,
seed=True,
install=True,
pub_key=None,
priv_key=None,
connection=None,
username=None,
password=None,
os_type=None,
arch=None,
boot=None,
update=True,
):
"""
Defines a new guest with the specified arguments, or updates the definition of an existing one.
.. versionadded:: sodium
:param name: name of the virtual machine to define
:param cpu: number of CPUs for the virtual machine to create
:param mem: amount of memory in MiB for the new virtual machine
:param vm_type: force virtual machine type for the new VM. The default value is taken from
the host capabilities. This could be useful for example to use ``'qemu'`` type instead
of the ``'kvm'`` one.
:param disk_profile:
Name of the disk profile to use for the new virtual machine
:param disks:
List of disk to create for the new virtual machine.
See :ref:`init-disk-def` for more details on the items on this list.
:param nic_profile:
Name of the network interfaces profile to use for the new virtual machine
:param interfaces:
List of network interfaces to create for the new virtual machine.
See :ref:`init-nic-def` for more details on the items on this list.
:param graphics:
Graphics device to create for the new virtual machine.
See :ref:`init-graphics-def` for more details on this dictionary
:param saltenv:
Fileserver environment (Default: ``'base'``).
See :mod:`cp module for more details <salt.modules.cp>`
:param seed: ``True`` to seed the disk image. Only used when the ``image`` parameter is provided.
(Default: ``True``)
:param install: install salt minion if absent (Default: ``True``)
:param pub_key: public key to seed with (Default: ``None``)
:param priv_key: private key to seed with (Default: ``None``)
:param seed_cmd: Salt command to execute to seed the image. (Default: ``'seed.apply'``)
:param connection: libvirt connection URI, overriding defaults
:param username: username to connect with, overriding defaults
:param password: password to connect with, overriding defaults
:param os_type:
type of virtualization as found in the ``//os/type`` element of the libvirt definition.
The default value is taken from the host capabilities, with a preference for ``hvm``.
Only used when creating a new virtual machine.
:param arch:
architecture of the virtual machine. The default value is taken from the host capabilities,
but ``x86_64`` is preferred over ``i686``. Only used when creating a new virtual machine.
:param boot:
Specifies kernel for the virtual machine, as well as boot parameters
for the virtual machine. This is an optional parameter, and all of the
keys are optional within the dictionary. If a remote path is provided
to kernel or initrd, salt will handle the downloading of the specified
remote file, and will modify the XML accordingly.
.. code-block:: python
{
'kernel': '/root/f8-i386-vmlinuz',
'initrd': '/root/f8-i386-initrd',
'cmdline': 'console=ttyS0 ks=http://example.com/f8-i386/os/'
}
:param update: set to ``False`` to prevent updating a defined domain. (Default: ``True``)
.. deprecated:: sodium
.. rubric:: Example States
Make sure a virtual machine called ``domain_name`` is defined:
.. code-block:: yaml
domain_name:
virt.defined:
- cpu: 2
- mem: 2048
- disk_profile: prod
- disks:
- name: system
size: 8192
overlay_image: True
pool: default
image: /path/to/image.qcow2
- name: data
size: 16834
- nic_profile: prod
- interfaces:
- name: eth0
mac: 01:23:45:67:89:AB
- name: eth1
type: network
source: admin
- graphics:
type: spice
listen:
type: address
address: 192.168.0.125
"""
ret = {
"name": name,
"changes": {},
"result": True if not __opts__["test"] else None,
"comment": "",
}
try:
if name in __salt__["virt.list_domains"](
connection=connection, username=username, password=password
):
status = {}
if update:
status = __salt__["virt.update"](
name,
cpu=cpu,
mem=mem,
disk_profile=disk_profile,
disks=disks,
nic_profile=nic_profile,
interfaces=interfaces,
graphics=graphics,
live=True,
connection=connection,
username=username,
password=password,
boot=boot,
test=__opts__["test"],
)
ret["changes"][name] = status
if not status.get("definition"):
ret["comment"] = "Domain {0} unchanged".format(name)
ret["result"] = True
elif status.get("errors"):
ret[
"comment"
] = "Domain {0} updated with live update(s) failures".format(name)
else:
ret["comment"] = "Domain {0} updated".format(name)
else:
if not __opts__["test"]:
__salt__["virt.init"](
name,
cpu=cpu,
mem=mem,
os_type=os_type,
arch=arch,
hypervisor=vm_type,
disk=disk_profile,
disks=disks,
nic=nic_profile,
interfaces=interfaces,
graphics=graphics,
seed=seed,
install=install,
pub_key=pub_key,
priv_key=priv_key,
connection=connection,
username=username,
password=password,
boot=boot,
start=False,
)
ret["changes"][name] = {"definition": True}
ret["comment"] = "Domain {0} defined".format(name)
except libvirt.libvirtError as err:
# Something bad happened when defining / updating the VM, report it
ret["comment"] = six.text_type(err)
ret["result"] = False
return ret
def running(
name,
cpu=None,
@ -349,9 +541,10 @@ def running(
:param seed_cmd: Salt command to execute to seed the image. (Default: ``'seed.apply'``)
.. versionadded:: 2019.2.0
:param update: set to ``True`` to update a defined module. (Default: ``False``)
:param update: set to ``True`` to update a defined domain. (Default: ``False``)
.. versionadded:: 2019.2.0
.. deprecated:: sodium
:param connection: libvirt connection URI, overriding defaults
.. versionadded:: 2019.2.0
@ -430,101 +623,62 @@ def running(
address: 192.168.0.125
"""
merged_disks = disks
ret = {
"name": name,
"changes": {},
"result": True,
"comment": "{0} is running".format(name),
}
if not update:
salt.utils.versions.warn_until(
"Aluminium",
"'update' parameter has been deprecated. Future behavior will be the one of update=True"
"It will be removed in {version}.",
)
ret = defined(
name,
cpu=cpu,
mem=mem,
vm_type=vm_type,
disk_profile=disk_profile,
disks=merged_disks,
nic_profile=nic_profile,
interfaces=interfaces,
graphics=graphics,
seed=seed,
install=install,
pub_key=pub_key,
priv_key=priv_key,
os_type=os_type,
arch=arch,
boot=boot,
update=update,
connection=connection,
username=username,
password=password,
)
try:
result = True if not __opts__["test"] else None
if ret["result"] is None or ret["result"]:
changed = ret["changes"][name].get("definition", False)
try:
domain_state = __salt__["virt.vm_state"](name)
if domain_state.get(name) != "running":
action_msg = "started"
if update:
status = __salt__["virt.update"](
if not __opts__["test"]:
__salt__["virt.start"](
name,
cpu=cpu,
mem=mem,
disk_profile=disk_profile,
disks=disks,
nic_profile=nic_profile,
interfaces=interfaces,
graphics=graphics,
live=False,
connection=connection,
username=username,
password=password,
boot=boot,
)
if status["definition"]:
action_msg = "updated and started"
__salt__["virt.start"](name)
ret["changes"][name] = "Domain {0}".format(action_msg)
ret["comment"] = "Domain {0} {1}".format(name, action_msg)
else:
if update:
status = __salt__["virt.update"](
name,
cpu=cpu,
mem=mem,
disk_profile=disk_profile,
disks=disks,
nic_profile=nic_profile,
interfaces=interfaces,
graphics=graphics,
connection=connection,
username=username,
password=password,
boot=boot,
)
ret["changes"][name] = status
if status.get("errors", None):
ret[
"comment"
] = "Domain {0} updated, but some live update(s) failed".format(
name
)
elif not status["definition"]:
ret["comment"] = "Domain {0} exists and is running".format(name)
else:
ret[
"comment"
] = "Domain {0} updated, restart to fully apply the changes".format(
name
)
else:
ret["comment"] = "Domain {0} exists and is running".format(name)
except CommandExecutionError:
__salt__["virt.init"](
name,
cpu=cpu,
mem=mem,
os_type=os_type,
arch=arch,
hypervisor=vm_type,
disk=disk_profile,
disks=disks,
nic=nic_profile,
interfaces=interfaces,
graphics=graphics,
seed=seed,
install=install,
pub_key=pub_key,
priv_key=priv_key,
connection=connection,
username=username,
password=password,
boot=boot,
)
ret["changes"][name] = "Domain defined and started"
ret["comment"] = "Domain {0} defined and started".format(name)
except libvirt.libvirtError as err:
# Something bad happened when starting / updating the VM, report it
ret["comment"] = six.text_type(err)
ret["result"] = False
comment = "Domain {} started".format(name)
if not ret["comment"].endswith("unchanged"):
comment = "{} and started".format(ret["comment"])
ret["comment"] = comment
ret["changes"][name]["started"] = True
elif not changed:
ret["comment"] = "Domain {0} exists and is running".format(name)
except libvirt.libvirtError as err:
# Something bad happened when starting / updating the VM, report it
ret["comment"] = six.text_type(err)
ret["result"] = False
return ret
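The running state now builds on defined and only adds the start step. A hedged sketch of that "running = defined + start" composition, reduced to a stubbed backend dict instead of libvirt; every name here is illustrative:

def defined(name, backend, test=False):
    ret = {"name": name, "result": None if test else True, "changes": {}, "comment": ""}
    if name in backend["domains"]:
        ret["comment"] = "Domain {0} unchanged".format(name)
    else:
        if not test:
            backend["domains"][name] = {"running": False}
        ret["changes"][name] = {"definition": True}
        ret["comment"] = "Domain {0} defined".format(name)
    return ret

def running(name, backend, test=False):
    ret = defined(name, backend, test=test)  # reuse the lower-level state
    if ret["result"] is False:
        return ret
    domain = backend["domains"].get(name, {})
    if not domain.get("running"):
        if not test:
            domain["running"] = True
        ret["changes"].setdefault(name, {})["started"] = True
        ret["comment"] = "{0} and started".format(ret["comment"])
    return ret

backend = {"domains": {}}
print(running("vm01", backend))  # defined and started
print(running("vm01", backend))  # unchanged, already running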
@ -713,6 +867,113 @@ def reverted(
return ret
def network_defined(
name,
bridge,
forward,
vport=None,
tag=None,
ipv4_config=None,
ipv6_config=None,
autostart=True,
connection=None,
username=None,
password=None,
):
"""
Defines a new network with specified arguments.
:param bridge: Bridge name
:param forward: Forward mode (bridge, router, nat)
:param vport: Virtualport type (Default: ``'None'``)
:param tag: Vlan tag (Default: ``'None'``)
:param ipv4_config:
IPv4 network configuration. See the :py:func:`virt.network_define
<salt.modules.virt.network_define>` function's corresponding parameter documentation
for more details on this dictionary.
(Default: None).
:param ipv6_config:
IPv6 network configuration. See the :py:func:`virt.network_define
<salt.modules.virt.network_define>` function's corresponding parameter documentation
for more details on this dictionary.
(Default: None).
:param autostart: Network autostart (default ``'True'``)
:param connection: libvirt connection URI, overriding defaults
:param username: username to connect with, overriding defaults
:param password: password to connect with, overriding defaults
.. versionadded:: sodium
.. code-block:: yaml
network_name:
virt.network_defined
.. code-block:: yaml
network_name:
virt.network_defined:
- bridge: main
- forward: bridge
- vport: openvswitch
- tag: 180
- autostart: True
.. code-block:: yaml
network_name:
virt.network_defined:
- bridge: natted
- forward: nat
- ipv4_config:
cidr: 192.168.42.0/24
dhcp_ranges:
- start: 192.168.42.10
end: 192.168.42.25
- start: 192.168.42.100
end: 192.168.42.150
- autostart: True
"""
ret = {
"name": name,
"changes": {},
"result": True if not __opts__["test"] else None,
"comment": "",
}
try:
info = __salt__["virt.network_info"](
name, connection=connection, username=username, password=password
)
if info and info[name]:
ret["comment"] = "Network {0} exists".format(name)
ret["result"] = True
else:
if not __opts__["test"]:
__salt__["virt.network_define"](
name,
bridge,
forward,
vport=vport,
tag=tag,
ipv4_config=ipv4_config,
ipv6_config=ipv6_config,
autostart=autostart,
start=False,
connection=connection,
username=username,
password=password,
)
ret["changes"][name] = "Network defined"
ret["comment"] = "Network {0} defined".format(name)
except libvirt.libvirtError as err:
ret["result"] = False
ret["comment"] = err.get_error_message()
return ret
def network_running(
name,
bridge,
@ -760,13 +1021,13 @@ def network_running(
.. code-block:: yaml
domain_name:
virt.network_define
network_name:
virt.network_running
.. code-block:: yaml
network_name:
virt.network_define:
virt.network_running:
- bridge: main
- forward: bridge
- vport: openvswitch
@ -776,7 +1037,7 @@ def network_running(
.. code-block:: yaml
network_name:
virt.network_define:
virt.network_running:
- bridge: natted
- forward: nat
- ipv4_config:
@ -789,46 +1050,57 @@ def network_running(
- autostart: True
"""
ret = {"name": name, "changes": {}, "result": True, "comment": ""}
ret = network_defined(
name,
bridge,
forward,
vport=vport,
tag=tag,
ipv4_config=ipv4_config,
ipv6_config=ipv6_config,
autostart=autostart,
connection=connection,
username=username,
password=password,
)
try:
info = __salt__["virt.network_info"](
name, connection=connection, username=username, password=password
)
if info:
if info[name]["active"]:
ret["comment"] = "Network {0} exists and is running".format(name)
else:
__salt__["virt.network_start"](
name, connection=connection, username=username, password=password
)
ret["changes"][name] = "Network started"
ret["comment"] = "Network {0} started".format(name)
else:
__salt__["virt.network_define"](
name,
bridge,
forward,
vport=vport,
tag=tag,
ipv4_config=ipv4_config,
ipv6_config=ipv6_config,
autostart=autostart,
start=True,
connection=connection,
username=username,
password=password,
defined = name in ret["changes"] and ret["changes"][name].startswith(
"Network defined"
)
result = True if not __opts__["test"] else None
if ret["result"] is None or ret["result"]:
try:
info = __salt__["virt.network_info"](
name, connection=connection, username=username, password=password
)
ret["changes"][name] = "Network defined and started"
ret["comment"] = "Network {0} defined and started".format(name)
except libvirt.libvirtError as err:
ret["result"] = False
ret["comment"] = err.get_error_message()
# In the corner case where test=True and the network wasn't defined
# we may not get the network in the info dict and that is normal.
if info.get(name, {}).get("active", False):
ret["comment"] = "{} and is running".format(ret["comment"])
else:
if not __opts__["test"]:
__salt__["virt.network_start"](
name,
connection=connection,
username=username,
password=password,
)
change = "Network started"
if name in ret["changes"]:
change = "{} and started".format(ret["changes"][name])
ret["changes"][name] = change
ret["comment"] = "{} and started".format(ret["comment"])
ret["result"] = result
except libvirt.libvirtError as err:
ret["result"] = False
ret["comment"] = err.get_error_message()
return ret
def pool_running(
def pool_defined(
name,
ptype=None,
target=None,
@ -841,9 +1113,9 @@ def pool_running(
password=None,
):
"""
Defines and starts a new pool with specified arguments.
Defines a new pool with specified arguments.
.. versionadded:: 2019.2.0
.. versionadded:: sodium
:param ptype: libvirt pool type
:param target: full path to the target device or folder. (Default: ``None``)
@ -865,12 +1137,7 @@ def pool_running(
.. code-block:: yaml
pool_name:
virt.pool_define
.. code-block:: yaml
pool_name:
virt.pool_define:
virt.pool_defined:
- ptype: netfs
- target: /mnt/cifs
- permissions:
@ -945,53 +1212,28 @@ def pool_running(
password=password,
)
action = "started"
if info[name]["state"] == "running":
action = "restarted"
action = ""
if info[name]["state"] != "running":
if not __opts__["test"]:
__salt__["virt.pool_stop"](
__salt__["virt.pool_build"](
name,
connection=connection,
username=username,
password=password,
)
action = ", built"
if not __opts__["test"]:
__salt__["virt.pool_build"](
name,
connection=connection,
username=username,
password=password,
)
__salt__["virt.pool_start"](
name,
connection=connection,
username=username,
password=password,
)
autostart_str = ", autostart flag changed" if needs_autostart else ""
ret["changes"][name] = "Pool updated, built{0} and {1}".format(
autostart_str, action
)
ret["comment"] = "Pool {0} updated, built{1} and {2}".format(
name, autostart_str, action
action = (
"{}, autostart flag changed".format(action)
if needs_autostart
else action
)
ret["changes"][name] = "Pool updated{0}".format(action)
ret["comment"] = "Pool {0} updated{1}".format(name, action)
else:
if info[name]["state"] == "running":
ret["comment"] = "Pool {0} unchanged and is running".format(name)
ret["result"] = True
else:
ret["changes"][name] = "Pool started"
ret["comment"] = "Pool {0} started".format(name)
if not __opts__["test"]:
__salt__["virt.pool_start"](
name,
connection=connection,
username=username,
password=password,
)
ret["comment"] = "Pool {0} unchanged".format(name)
ret["result"] = True
else:
needs_autostart = autostart
if not __opts__["test"]:
@ -1018,18 +1260,12 @@ def pool_running(
__salt__["virt.pool_build"](
name, connection=connection, username=username, password=password
)
__salt__["virt.pool_start"](
name, connection=connection, username=username, password=password
)
if needs_autostart:
ret["changes"][name] = "Pool defined, started and marked for autostart"
ret[
"comment"
] = "Pool {0} defined, started and marked for autostart".format(name)
ret["changes"][name] = "Pool defined, marked for autostart"
ret["comment"] = "Pool {0} defined, marked for autostart".format(name)
else:
ret["changes"][name] = "Pool defined and started"
ret["comment"] = "Pool {0} defined and started".format(name)
ret["changes"][name] = "Pool defined"
ret["comment"] = "Pool {0} defined".format(name)
if needs_autostart:
if not __opts__["test"]:
@ -1047,6 +1283,138 @@ def pool_running(
return ret
def pool_running(
name,
ptype=None,
target=None,
permissions=None,
source=None,
transient=False,
autostart=True,
connection=None,
username=None,
password=None,
):
"""
Defines and starts a new pool with specified arguments.
.. versionadded:: 2019.2.0
:param ptype: libvirt pool type
:param target: full path to the target device or folder. (Default: ``None``)
:param permissions:
target permissions. See :ref:`pool-define-permissions` for more details on this structure.
:param source:
dictionary containing keys matching the ``source_*`` parameters in function
:func:`salt.modules.virt.pool_define`.
:param transient:
when set to ``True``, the pool will be automatically undefined after being stopped. (Default: ``False``)
:param autostart:
Whether to start the pool when booting the host. (Default: ``True``)
:param start:
When ``True``, define and start the pool, otherwise the pool will be left stopped.
:param connection: libvirt connection URI, overriding defaults
:param username: username to connect with, overriding defaults
:param password: password to connect with, overriding defaults
.. code-block:: yaml
pool_name:
virt.pool_running
.. code-block:: yaml
pool_name:
virt.pool_running:
- ptype: netfs
- target: /mnt/cifs
- permissions:
- mode: 0770
- owner: 1000
- group: 100
- source:
dir: samba_share
hosts:
- one.example.com
- two.example.com
format: cifs
- autostart: True
"""
ret = pool_defined(
name,
ptype=ptype,
target=target,
permissions=permissions,
source=source,
transient=transient,
autostart=autostart,
connection=connection,
username=username,
password=password,
)
defined = name in ret["changes"] and ret["changes"][name].startswith("Pool defined")
updated = name in ret["changes"] and ret["changes"][name].startswith("Pool updated")
result = True if not __opts__["test"] else None
if ret["result"] is None or ret["result"]:
try:
info = __salt__["virt.pool_info"](
name, connection=connection, username=username, password=password
)
action = "started"
# In the corner case where test=True and the pool wasn't defined
# we may not get our pool in the info dict and that is normal.
is_running = info.get(name, {}).get("state", "stopped") == "running"
if is_running:
if updated:
action = "built, restarted"
if not __opts__["test"]:
__salt__["virt.pool_stop"](
name,
connection=connection,
username=username,
password=password,
)
if not __opts__["test"]:
__salt__["virt.pool_build"](
name,
connection=connection,
username=username,
password=password,
)
else:
action = "already running"
result = True
if not is_running or updated or defined:
if not __opts__["test"]:
__salt__["virt.pool_start"](
name,
connection=connection,
username=username,
password=password,
)
comment = "Pool {0}".format(name)
change = "Pool"
if name in ret["changes"]:
comment = "{0},".format(ret["comment"])
change = "{0},".format(ret["changes"][name])
if action != "already running":
ret["changes"][name] = "{0} {1}".format(change, action)
ret["comment"] = "{0} {1}".format(comment, action)
ret["result"] = result
except libvirt.libvirtError as err:
ret["comment"] = err.get_error_message()
ret["result"] = False
return ret
def pool_deleted(name, purge=False, connection=None, username=None, password=None):
"""
Deletes a virtual storage pool.

View file

@ -26,6 +26,7 @@ IPADDR{{loop.index}}="{{i['ipaddr']}}"
PREFIX{{loop.index}}="{{i['prefix']}}"
{% endfor -%}
{%endif%}{% if gateway %}GATEWAY="{{gateway}}"
{%endif%}{% if arpcheck %}ARPCHECK="{{arpcheck}}"
{%endif%}{% if enable_ipv6 %}IPV6INIT="yes"
{% if ipv6_autoconf %}IPV6_AUTOCONF="{{ipv6_autoconf}}"
{%endif%}{% if dhcpv6c %}DHCPV6C="{{dhcpv6c}}"

View file

@ -1234,25 +1234,30 @@ class AsyncReqMessageClient(object):
# TODO: timeout all in-flight sessions, or error
def close(self):
if self._closing:
try:
if self._closing:
return
except AttributeError:
# We must have been called from __del__
# The python interpreter has nuked most attributes already
return
self._closing = True
if hasattr(self, "stream") and self.stream is not None:
if ZMQ_VERSION_INFO < (14, 3, 0):
# stream.close() doesn't work properly on pyzmq < 14.3.0
if self.stream.socket:
self.stream.socket.close()
self.stream.io_loop.remove_handler(self.stream.socket)
# set this to None, more hacks for messed up pyzmq
self.stream.socket = None
self.socket.close()
else:
self.stream.close()
self.socket = None
self.stream = None
if self.context.closed is False:
self.context.term()
else:
self._closing = True
if hasattr(self, "stream") and self.stream is not None:
if ZMQ_VERSION_INFO < (14, 3, 0):
# stream.close() doesn't work properly on pyzmq < 14.3.0
if self.stream.socket:
self.stream.socket.close()
self.stream.io_loop.remove_handler(self.stream.socket)
# set this to None, more hacks for messed up pyzmq
self.stream.socket = None
self.socket.close()
else:
self.stream.close()
self.socket = None
self.stream = None
if self.context.closed is False:
self.context.term()
def destroy(self):
# Backwards compat

View file

@ -47,7 +47,6 @@ try:
UserPassCredentials,
ServicePrincipalCredentials,
)
from msrestazure.azure_active_directory import MSIAuthentication
from msrestazure.azure_cloud import (
MetadataEndpointError,
get_cloud_from_metadata_endpoint,
@ -123,7 +122,14 @@ def _determine_auth(**kwargs):
kwargs["username"], kwargs["password"], cloud_environment=cloud_env
)
elif "subscription_id" in kwargs:
credentials = MSIAuthentication(cloud_environment=cloud_env)
try:
from msrestazure.azure_active_directory import MSIAuthentication
credentials = MSIAuthentication(cloud_environment=cloud_env)
except ImportError:
raise SaltSystemExit(
msg="MSI authentication support not availabe (requires msrestazure >= 0.4.14)"
)
else:
raise SaltInvocationError(
@ -161,7 +167,7 @@ def get_client(client_type, **kwargs):
if client_type not in client_map:
raise SaltSystemExit(
"The Azure ARM client_type {0} specified can not be found.".format(
msg="The Azure ARM client_type {0} specified can not be found.".format(
client_type
)
)

View file

@ -670,6 +670,11 @@ def symmetric_difference(lst1, lst2):
)
@jinja_filter("method_call")
def method_call(obj, f_name, *f_args, **f_kwargs):
return getattr(obj, f_name, lambda *args, **kwargs: None)(*f_args, **f_kwargs)
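A minimal usage sketch of the ``method_call`` filter added above (the sample values are illustrative, not part of the change): it looks the method up with ``getattr`` and falls back to a no-op that returns ``None`` when the attribute is missing.

.. code-block:: python

    # "split" exists on str, so the method is called with the given argument.
    method_call("a,b,c", "split", ",")       # -> ["a", "b", "c"]

    # A missing method name hits the fallback lambda and returns None.
    method_call("a,b,c", "does_not_exist")   # -> None

In a template the same helper is exposed as the ``method_call`` Jinja filter, e.g. ``{{ "a,b,c" | method_call("split", ",") }}``.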
@jinja2.contextfunction
def show_full_context(ctx):
return salt.utils.data.simple_types_filter(

View file

@ -44,18 +44,30 @@ def store_job(opts, load, event=None, mminion=None):
nocache=load.get("nocache", False)
)
except KeyError:
emsg = "Returner function not found: {0}".format(prep_fstr)
emsg = "Returner '{0}' does not support function prep_jid".format(job_cache)
log.error(emsg)
raise KeyError(emsg)
except Exception: # pylint: disable=broad-except
log.critical(
"The specified '{0}' returner threw a stack trace:\n".format(job_cache),
exc_info=True,
)
# save the load, since we don't have it
saveload_fstr = "{0}.save_load".format(job_cache)
try:
mminion.returners[saveload_fstr](load["jid"], load)
except KeyError:
emsg = "Returner function not found: {0}".format(saveload_fstr)
emsg = "Returner '{0}' does not support function save_load".format(
job_cache
)
log.error(emsg)
raise KeyError(emsg)
except Exception: # pylint: disable=broad-except
log.critical(
"The specified '{0}' returner threw a stack trace:\n".format(job_cache),
exc_info=True,
)
elif salt.utils.jid.is_jid(load["jid"]):
# Store the jid
jidstore_fstr = "{0}.prep_jid".format(job_cache)
@ -65,6 +77,11 @@ def store_job(opts, load, event=None, mminion=None):
emsg = "Returner '{0}' does not support function prep_jid".format(job_cache)
log.error(emsg)
raise KeyError(emsg)
except Exception: # pylint: disable=broad-except
log.critical(
"The specified '{0}' returner threw a stack trace:\n".format(job_cache),
exc_info=True,
)
if event:
# If the return data is invalid, just ignore it
@ -115,8 +132,19 @@ def store_job(opts, load, event=None, mminion=None):
mminion.returners[savefstr](load["jid"], load)
except KeyError as e:
log.error("Load does not contain 'jid': %s", e)
except Exception: # pylint: disable=broad-except
log.critical(
"The specified '{0}' returner threw a stack trace:\n".format(job_cache),
exc_info=True,
)
mminion.returners[fstr](load)
try:
mminion.returners[fstr](load)
except Exception: # pylint: disable=broad-except
log.critical(
"The specified '{0}' returner threw a stack trace:\n".format(job_cache),
exc_info=True,
)
if opts.get("job_cache_store_endtime") and updateetfstr in mminion.returners:
mminion.returners[updateetfstr](load["jid"], endtime)

View file

@ -8,6 +8,7 @@ Define some generic socket functions for network modules
from __future__ import absolute_import, print_function, unicode_literals
import collections
import fnmatch
import itertools
import logging
import os
@ -1272,6 +1273,39 @@ def in_subnet(cidr, addr=None):
return any(ipaddress.ip_address(item) in cidr for item in addr)
def _get_ips(ifaces, proto="inet"):
"""
Accepts a dict of interface data and returns a list of dictionaries
"""
ret = []
for ip_info in six.itervalues(ifaces):
ret.extend(ip_info.get(proto, []))
ret.extend(
[addr for addr in ip_info.get("secondary", []) if addr.get("type") == proto]
)
return ret
def _filter_interfaces(interface=None, interface_data=None):
"""
Gather interface data if not passed in, and optionally filter by the
specified interface name.
"""
ifaces = interface_data if isinstance(interface_data, dict) else interfaces()
if interface is None:
ret = ifaces
else:
interface = salt.utils.args.split_input(interface)
# pylint: disable=not-an-iterable
ret = {
k: v
for k, v in six.iteritems(ifaces)
if any((fnmatch.fnmatch(k, pat) for pat in interface))
}
# pylint: enable=not-an-iterable
return ret
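Because ``_filter_interfaces`` runs the ``interface`` value through ``salt.utils.args.split_input`` and then matches names with ``fnmatch``, both glob patterns and comma-separated lists narrow the result. A hedged sketch with made-up interface data:

.. code-block:: python

    # Illustrative only; assumes the helper is used inside this module.
    fake_ifaces = {"eth0": {}, "eth1": {}, "lo": {}}

    # A glob keeps every interface whose name matches the pattern.
    _filter_interfaces(interface="eth*", interface_data=fake_ifaces)
    # -> {"eth0": {}, "eth1": {}}

    # A comma-separated string is split into individual patterns first.
    _filter_interfaces(interface="eth0,lo", interface_data=fake_ifaces)
    # -> {"eth0": {}, "lo": {}}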
def _ip_addrs(
interface=None, include_loopback=False, interface_data=None, proto="inet"
):
@ -1280,27 +1314,14 @@ def _ip_addrs(
proto = inet|inet6
"""
addrs = _get_ips(_filter_interfaces(interface, interface_data), proto=proto)
ret = set()
for addr in addrs:
addr = ipaddress.ip_address(addr.get("address"))
if not addr.is_loopback or include_loopback:
ret.add(addr)
ifaces = interface_data if isinstance(interface_data, dict) else interfaces()
if interface is None:
target_ifaces = ifaces
else:
target_ifaces = dict(
[(k, v) for k, v in six.iteritems(ifaces) if k == interface]
)
if not target_ifaces:
log.error("Interface {0} not found.".format(interface))
for ip_info in six.itervalues(target_ifaces):
addrs = ip_info.get(proto, [])
addrs.extend(
[addr for addr in ip_info.get("secondary", []) if addr.get("type") == proto]
)
for addr in addrs:
addr = ipaddress.ip_address(addr.get("address"))
if not addr.is_loopback or include_loopback:
ret.add(addr)
return [six.text_type(addr) for addr in sorted(ret)]
@ -1322,6 +1343,82 @@ def ip_addrs6(interface=None, include_loopback=False, interface_data=None):
return _ip_addrs(interface, include_loopback, interface_data, "inet6")
def _ip_networks(
interface=None,
include_loopback=False,
verbose=False,
interface_data=None,
proto="inet",
):
"""
Returns a list of networks to which the minion belongs. The results can be
restricted to a single interface using the ``interface`` argument.
"""
addrs = _get_ips(_filter_interfaces(interface, interface_data), proto=proto)
ret = set()
for addr in addrs:
_ip = addr.get("address")
_net = addr.get("netmask" if proto == "inet" else "prefixlen")
if _ip and _net:
try:
ip_net = ipaddress.ip_network("{0}/{1}".format(_ip, _net), strict=False)
except Exception: # pylint: disable=broad-except
continue
if not ip_net.is_loopback or include_loopback:
ret.add(ip_net)
if not verbose:
return [six.text_type(addr) for addr in sorted(ret)]
verbose_ret = {
six.text_type(x): {
"address": six.text_type(x.network_address),
"netmask": six.text_type(x.netmask),
"num_addresses": x.num_addresses,
"prefixlen": x.prefixlen,
}
for x in ret
}
return verbose_ret
def ip_networks(
interface=None, include_loopback=False, verbose=False, interface_data=None
):
"""
Returns the IPv4 networks to which the minion belongs. Networks will be
returned as a list of network/prefixlen. To get more information about
each network, use verbose=True and a dictionary with more information will
be returned.
"""
return _ip_networks(
interface=interface,
include_loopback=include_loopback,
verbose=verbose,
interface_data=interface_data,
proto="inet",
)
def ip_networks6(
interface=None, include_loopback=False, verbose=False, interface_data=None
):
"""
Returns the IPv6 networks to which the minion belongs. Networks will be
returned as a list of network/prefixlen. To get more information about
each network, use verbose=True and a dictionary with more information will
be returned.
"""
return _ip_networks(
interface=interface,
include_loopback=include_loopback,
verbose=verbose,
interface_data=interface_data,
proto="inet6",
)
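A hedged sketch of what the new helpers are expected to return, using made-up interface data shaped like the output of ``interfaces()`` (the interface name and addresses below are illustrative assumptions):

.. code-block:: python

    fake_ifaces = {
        "eth0": {
            "inet": [{"address": "192.168.1.20", "netmask": "255.255.255.0"}],
            "secondary": [],
        }
    }

    # Each address/netmask pair collapses into its network, so this call
    # is expected to yield ["192.168.1.0/24"].
    ip_networks(interface_data=fake_ifaces)

    # With verbose=True the same network comes back as a dict keyed by
    # "192.168.1.0/24" carrying address, netmask, num_addresses and prefixlen.
    ip_networks(interface_data=fake_ifaces, verbose=True)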
def hex2ip(hex_ip, invert=False):
"""
Convert a hex string to an ip, if a failure occurs the original hex is

View file

@ -560,37 +560,31 @@ def pytest_runtest_setup(item):
# ----- Test Groups Selection --------------------------------------------------------------------------------------->
def get_group_size(total_items, total_groups):
def get_group_size_and_start(total_items, total_groups, group_id):
"""
Return the group size.
Calculate group size and start index.
"""
return int(total_items / total_groups)
base_size = total_items // total_groups
rem = total_items % total_groups
start = base_size * (group_id - 1) + min(group_id - 1, rem)
size = base_size + 1 if group_id <= rem else base_size
return (start, size)
def get_group(items, group_count, group_size, group_id):
def get_group(items, total_groups, group_id):
"""
Return the items that belong to the requested group, along with the items deselected from it.
"""
start = group_size * (group_id - 1)
end = start + group_size
total_items = len(items)
if not 0 < group_id <= total_groups:
raise ValueError("Invalid test-group argument")
if start >= total_items:
pytest.fail(
"Invalid test-group argument. start({})>=total_items({})".format(
start, total_items
)
)
elif start < 0:
pytest.fail("Invalid test-group argument. Start({})<0".format(start))
if group_count == group_id and end < total_items:
# If this is the last group and there are still items to test
# which don't fit in this group based on the group items count
# add them anyway
end = total_items
return items[start:end]
start, size = get_group_size_and_start(len(items), total_groups, group_id)
selected = items[start : start + size]
deselected = items[:start] + items[start + size :]
assert len(selected) + len(deselected) == len(items)
return selected, deselected
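As a sanity check of the arithmetic above (a standalone sketch, not part of the patch): splitting 10 items into 3 groups spreads the remainder across the first groups, so every item lands in exactly one group and the group sizes differ by at most one.

.. code-block:: python

    # Hypothetical standalone copy of the same start/size arithmetic.
    def group_size_and_start(total_items, total_groups, group_id):
        base_size = total_items // total_groups
        rem = total_items % total_groups
        start = base_size * (group_id - 1) + min(group_id - 1, rem)
        size = base_size + 1 if group_id <= rem else base_size
        return start, size

    items = list(range(10))
    chunks = []
    for gid in (1, 2, 3):
        start, size = group_size_and_start(len(items), 3, gid)
        chunks.append(items[start : start + size])

    assert [len(c) for c in chunks] == [4, 3, 3]
    assert sum(chunks, []) == items  # no gaps, no overlap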
@pytest.hookimpl(hookwrapper=True, tryfirst=True)
@ -607,10 +601,11 @@ def pytest_collection_modifyitems(config, items):
total_items = len(items)
group_size = get_group_size(total_items, group_count)
tests_in_group = get_group(items, group_count, group_size, group_id)
tests_in_group, deselected = get_group(items, group_count, group_id)
# Replace all items in the list
items[:] = tests_in_group
if deselected:
config.hook.pytest_deselected(items=deselected)
terminal_reporter = config.pluginmanager.get_plugin("terminalreporter")
terminal_reporter.write(

View file

@ -3,21 +3,17 @@
:codeauthor: Li Kexian <doyenli@tencent.com>
"""
# Import Python Libs
from __future__ import absolute_import, print_function, unicode_literals
import os
# Import Salt Libs
from salt.config import cloud_providers_config
# Import Salt Testing Libs
from tests.support.case import ShellCase
from tests.support.helpers import expensiveTest, generate_random_name
from tests.support.helpers import expensiveTest, random_string
from tests.support.runtests import RUNTIME_VARS
# Create the cloud instance name to be used throughout the tests
INSTANCE_NAME = generate_random_name("CLOUD-TEST-")
INSTANCE_NAME = random_string("CLOUD-TEST-", lowercase=False)
PROVIDER_NAME = "tencentcloud"

View file

@ -3,7 +3,6 @@
Tests for the Openstack Cloud Provider
"""
# Import python libs
from __future__ import absolute_import, print_function, unicode_literals
import logging
@ -11,14 +10,11 @@ import os
import shutil
from time import sleep
# Import Salt Libs
from salt.config import cloud_config, cloud_providers_config
from salt.ext.six.moves import range
from salt.utils.yaml import safe_load
# Import Salt Testing libs
from tests.support.case import ShellCase
from tests.support.helpers import expensiveTest, generate_random_name
from tests.support.helpers import expensiveTest, random_string
from tests.support.paths import FILES
from tests.support.runtests import RUNTIME_VARS
@ -161,9 +157,9 @@ class CloudTest(ShellCase):
# Create the cloud instance name to be used throughout the tests
subclass = self.__class__.__name__.strip("Test")
# Use the first three letters of the subclass, fill with '-' if too short
self._instance_name = generate_random_name(
"cloud-test-{:-<3}-".format(subclass[:3])
).lower()
self._instance_name = random_string(
"cloud-test-{:-<3}-".format(subclass[:3]), uppercase=False
)
return self._instance_name
@property

View file

@ -0,0 +1,54 @@
a:
cmd.run:
- name: exit 0
b:
cmd.run:
- name: exit 0
c:
cmd.run:
- name: exit 0
d:
cmd.run:
- name: exit 1
e:
cmd.run:
- name: exit 1
f:
cmd.run:
- name: exit 1
reqs not met:
cmd.run:
- name: echo itdidntonfail
- onfail_all:
- cmd: a
- cmd: e
reqs also not met:
cmd.run:
- name: echo italsodidnonfail
- onfail_all:
- cmd: a
- cmd: b
- cmd: c
reqs met:
cmd.run:
- name: echo itonfailed
- onfail_all:
- cmd: d
- cmd: e
- cmd: f
reqs also met:
cmd.run:
- name: echo itonfailed
- onfail_all:
- cmd: d
- require:
- cmd: a

View file

@ -2,6 +2,10 @@ a:
cmd.run:
- name: exit 1
pass:
cmd.run:
- name: exit 0
b:
cmd.run:
- name: echo b
@ -23,3 +27,19 @@ d:
- cmd: a
- require:
- cmd: c
e:
cmd.run:
- name: echo e
- onfail:
- cmd: pass
- require:
- cmd: c
f:
cmd.run:
- name: echo f
- onfail:
- cmd: pass
- onchanges:
- cmd: b

View file

@ -5,3 +5,26 @@ echo_test_hello:
kwargs:
assertion: assertEqual
expected_return: 'hello'
test_args:
module_and_function: test.arg
args:
- 1
- "two"
kwargs:
a: "something"
b: "hello"
assertions:
- assertion_section: kwargs:b
expected_return: hello
assertion: assertIn
- assertion: assertEqual
assertion_section: kwargs:a
expected_return: something
- assertion: assertIn
assertion_section: args
expected_return: "two"
- assertion: assertIn
assertion_section: args
expected_return: 1
print_result: True

View file

@ -39,18 +39,21 @@ class LoaderGrainsTest(ModuleCase):
# before trying to get the grains. This test may execute before the
# minion has finished syncing down the files it needs.
module = os.path.join(
RUNTIME_VARS.TMP,
"rootdir",
"cache",
RUNTIME_VARS.RUNTIME_CONFIGS["minion"]["cachedir"],
"files",
"base",
"_grains",
"test_custom_grain2.py",
"custom_grain2.py",
)
tries = 0
while not os.path.exists(module):
tries += 1
if tries > 60:
self.fail(
"Failed to found custom grains module in cache path {}".format(
module
)
)
break
time.sleep(1)
grains = self.run_function("grains.items")

View file

@ -2,9 +2,6 @@
from __future__ import absolute_import, print_function, unicode_literals
import random
import string
import pytest
import salt.utils.files
import salt.utils.platform
@ -12,7 +9,7 @@ import salt.utils.stringutils
from salt.ext import six
from salt.ext.six.moves import range
from tests.support.case import ModuleCase
from tests.support.helpers import destructiveTest, skip_if_not_root
from tests.support.helpers import destructiveTest, random_string, skip_if_not_root
from tests.support.unit import skipIf
if not salt.utils.platform.is_windows():
@ -32,11 +29,11 @@ class GroupModuleTest(ModuleCase):
Get current settings
"""
super(GroupModuleTest, self).setUp()
self._user = self.__random_string()
self._user1 = self.__random_string()
self._no_user = self.__random_string()
self._group = self.__random_string()
self._no_group = self.__random_string()
self._user = random_string("tg-", uppercase=False)
self._user1 = random_string("tg-", uppercase=False)
self._no_user = random_string("tg-", uppercase=False)
self._group = random_string("tg-", uppercase=False)
self._no_group = random_string("tg-", uppercase=False)
self.os_grain = self.run_function("grains.item", ["kernel"])
self._gid = 64989 if "Windows" not in self.os_grain["kernel"] else None
self._new_gid = 64998 if "Windows" not in self.os_grain["kernel"] else None
@ -53,14 +50,6 @@ class GroupModuleTest(ModuleCase):
self.run_function("user.delete", [self._user1])
self.run_function("group.delete", [self._group])
def __random_string(self, size=6):
"""
Generates a random names
"""
return "tg-" + "".join(
random.choice(string.ascii_lowercase + string.digits) for x in range(size)
)
def __get_system_group_gid_range(self):
"""
Returns (SYS_GID_MIN, SYS_GID_MAX)

View file

@ -3,23 +3,20 @@
integration tests for shadow linux
"""
# Import Python libs
from __future__ import absolute_import, print_function, unicode_literals
import os
import random
import string
import salt.modules.linux_shadow as shadow
# Import Salt libs
import salt.modules.linux_shadow
import salt.utils.files
import salt.utils.platform
from salt.ext.six.moves import range
# Import Salt Testing libs
from tests.support.case import ModuleCase
from tests.support.helpers import destructiveTest, flaky, skip_if_not_root
from tests.support.helpers import (
destructiveTest,
flaky,
random_string,
skip_if_not_root,
)
from tests.support.unit import skipIf
@ -38,20 +35,9 @@ class ShadowModuleTest(ModuleCase):
if "ERROR" in self._password:
self.fail("Failed to generate password: {0}".format(self._password))
super(ShadowModuleTest, self).setUp()
os_grain = self.run_function("grains.item", ["kernel"])
if os_grain["kernel"] not in "Linux":
self.skipTest("Test not applicable to '{kernel}' kernel".format(**os_grain))
self._test_user = self.__random_string()
self._no_user = self.__random_string()
self._password = shadow.gen_password("Password1234")
def __random_string(self, size=6):
"""
Generates a random username
"""
return "tu-" + "".join(
random.choice(string.ascii_lowercase + string.digits) for x in range(size)
)
self._no_user = random_string("tu-", uppercase=False)
self._test_user = random_string("tu-", uppercase=False)
self._password = salt.modules.linux_shadow.gen_password("Password1234")
@destructiveTest
@skipIf(True, "SLOWTEST skip")

View file

@ -3,40 +3,20 @@
:codeauthor: Nicole Thomas <nicole@saltstack.com>
"""
# Import Python Libs
from __future__ import absolute_import, print_function, unicode_literals
import random
import string
# Import Salt Libs
from salt.exceptions import CommandExecutionError
from salt.ext import six
# Import 3rd-party libs
from salt.ext.six.moves import range # pylint: disable=import-error,redefined-builtin
# Import Salt Testing Libs
from tests.support.case import ModuleCase
from tests.support.helpers import destructiveTest, skip_if_not_root
from tests.support.helpers import destructiveTest, random_string, skip_if_not_root
from tests.support.unit import skipIf
def __random_string(size=6):
"""
Generates a random username
"""
return "RS-" + "".join(
random.choice(string.ascii_uppercase + string.digits) for x in range(size)
)
# Create group name strings for tests
ADD_GROUP = __random_string()
DEL_GROUP = __random_string()
CHANGE_GROUP = __random_string()
ADD_USER = __random_string()
REP_USER_GROUP = __random_string()
ADD_GROUP = random_string("RS-", lowercase=False)
DEL_GROUP = random_string("RS-", lowercase=False)
CHANGE_GROUP = random_string("RS-", lowercase=False)
ADD_USER = random_string("RS-", lowercase=False)
REP_USER_GROUP = random_string("RS-", lowercase=False)
@destructiveTest

View file

@ -3,35 +3,18 @@
integration tests for mac_shadow
"""
# Import Python libs
from __future__ import absolute_import, print_function, unicode_literals
import datetime
import random
import string
# Import Salt libs
import salt.utils.path
import salt.utils.platform
from salt.ext.six.moves import range
from tests.support.case import ModuleCase
from tests.support.helpers import destructiveTest, skip_if_not_root
# Import Salt Testing libs
from tests.support.helpers import destructiveTest, random_string, skip_if_not_root
from tests.support.unit import skipIf
def __random_string(size=6):
"""
Generates a random username
"""
return "RS-" + "".join(
random.choice(string.ascii_uppercase + string.digits) for x in range(size)
)
TEST_USER = __random_string()
NO_USER = __random_string()
TEST_USER = random_string("RS-", lowercase=False)
NO_USER = random_string("RS-", lowercase=False)
@skip_if_not_root

View file

@ -3,37 +3,26 @@
integration tests for mac_system
"""
# Import python libs
from __future__ import absolute_import, print_function, unicode_literals
import logging
import random
import string
# Import salt libs
import salt.utils.path
import salt.utils.platform
from salt.ext.six.moves import range
# Import Salt Testing libs
from tests.support.case import ModuleCase
from tests.support.helpers import destructiveTest, flaky, skip_if_not_root
from tests.support.helpers import (
destructiveTest,
flaky,
random_string,
skip_if_not_root,
)
from tests.support.unit import skipIf
log = logging.getLogger(__name__)
def __random_string(size=6):
"""
Generates a random username
"""
return "RS-" + "".join(
random.choice(string.ascii_uppercase + string.digits) for x in range(size)
)
SET_COMPUTER_NAME = __random_string()
SET_SUBNET_NAME = __random_string()
SET_COMPUTER_NAME = random_string("RS-", lowercase=False)
SET_SUBNET_NAME = random_string("RS-", lowercase=False)
@skip_if_not_root

View file

@ -3,43 +3,22 @@
:codeauthor: Nicole Thomas <nicole@saltstack.com>
"""
# Import Python Libs
from __future__ import absolute_import, print_function, unicode_literals
import os
import random
import string
import salt.ext.six as six
# Import Salt Libs
import salt.utils.files
from salt.exceptions import CommandExecutionError
# Import 3rd-party libs
from salt.ext.six.moves import range # pylint: disable=import-error,redefined-builtin
# Import Salt Testing Libs
from tests.support.case import ModuleCase
from tests.support.helpers import destructiveTest, skip_if_not_root
from tests.support.helpers import destructiveTest, random_string, skip_if_not_root
from tests.support.unit import skipIf
def __random_string(size=6):
"""
Generates a random username
"""
return "RS-" + "".join(
random.choice(string.ascii_uppercase + string.digits) for x in range(size)
)
# Create user strings for tests
ADD_USER = __random_string()
DEL_USER = __random_string()
PRIMARY_GROUP_USER = __random_string()
CHANGE_USER = __random_string()
ADD_USER = random_string("RS-", lowercase=False)
DEL_USER = random_string("RS-", lowercase=False)
PRIMARY_GROUP_USER = random_string("RS-", lowercase=False)
CHANGE_USER = random_string("RS-", lowercase=False)
@destructiveTest

View file

@ -129,8 +129,11 @@ class Nilrt_ipModuleTest(ModuleCase):
info = self.run_function("ip.get_interfaces_details")
for interface in info["interfaces"]:
self.assertIn("8.8.4.4", interface["ipv4"]["dns"])
self.assertIn("8.8.8.8", interface["ipv4"]["dns"])
if self.os_grain["lsb_distrib_id"] != "nilrt":
self.assertIn("8.8.4.4", interface["ipv4"]["dns"])
self.assertIn("8.8.8.8", interface["ipv4"]["dns"])
else:
self.assertEqual(interface["ipv4"]["dns"], ["8.8.4.4"])
self.assertEqual(interface["ipv4"]["requestmode"], "static")
self.assertEqual(interface["ipv4"]["address"], "192.168.10.4")
self.assertEqual(interface["ipv4"]["netmask"], "255.255.255.0")

View file

@ -7,13 +7,9 @@
"""
from __future__ import absolute_import, print_function, unicode_literals
import random
import string
import pytest
from salt.ext.six.moves import range
from tests.support.case import ModuleCase
from tests.support.helpers import destructiveTest, skip_if_not_root
from tests.support.helpers import destructiveTest, random_string, skip_if_not_root
@pytest.mark.skip_unless_on_freebsd
@ -24,17 +20,11 @@ class PwUserModuleTest(ModuleCase):
if os_grain["kernel"] != "FreeBSD":
self.skipTest("Test not applicable to '{kernel}' kernel".format(**os_grain))
def __random_string(self, size=6):
return "".join(
random.choice(string.ascii_uppercase + string.digits) for x in range(size)
)
@destructiveTest
@skip_if_not_root
def test_groups_includes_primary(self):
# Let's create a user, which usually creates the group matching the
# name
uname = self.__random_string()
# Let's create a user, which usually creates the group matching the name
uname = random_string("PWU-", lowercase=False)
if self.run_function("user.add", [uname]) is not True:
# Skip because creating is not what we're testing here
self.run_function("user.delete", [uname, True, True])
@ -50,7 +40,7 @@ class PwUserModuleTest(ModuleCase):
self.run_function("user.delete", [uname, True, True])
# Now, a weird group id
gname = self.__random_string()
gname = random_string("PWU-", lowercase=False)
if self.run_function("group.add", [gname]) is not True:
self.run_function("group.delete", [gname, True, True])
self.skipTest("Failed to create group")

View file

@ -39,6 +39,7 @@ class SaltcheckModuleTest(ModuleCase):
self.assertDictContainsSubset(
{"status": "Pass"}, ret[0]["validate-saltcheck"]["echo_test_hello"]
)
self.assertDictContainsSubset({"Failed": 0}, ret[1]["TEST RESULTS"])
@skipIf(True, "SLOWTEST skip")
def test_topfile_validation(self):

View file

@ -1069,6 +1069,80 @@ class StateModuleTest(ModuleCase, SaltReturnAssertsMixin):
self.assertReturnNonEmptySaltType(ret)
self.assertEqual(expected_result, result)
@skipIf(True, "SLOWTEST skip")
def test_requisites_onfail_all(self):
"""
Call sls file containing several onfail-all
Ensure that some of them are failing and that the order is right.
"""
expected_result = {
"cmd_|-a_|-exit 0_|-run": {
"__run_num__": 0,
"changes": True,
"comment": 'Command "exit 0" run',
"result": True,
},
"cmd_|-b_|-exit 0_|-run": {
"__run_num__": 1,
"changes": True,
"comment": 'Command "exit 0" run',
"result": True,
},
"cmd_|-c_|-exit 0_|-run": {
"__run_num__": 2,
"changes": True,
"comment": 'Command "exit 0" run',
"result": True,
},
"cmd_|-d_|-exit 1_|-run": {
"__run_num__": 3,
"changes": True,
"comment": 'Command "exit 1" run',
"result": False,
},
"cmd_|-e_|-exit 1_|-run": {
"__run_num__": 4,
"changes": True,
"comment": 'Command "exit 1" run',
"result": False,
},
"cmd_|-f_|-exit 1_|-run": {
"__run_num__": 5,
"changes": True,
"comment": 'Command "exit 1" run',
"result": False,
},
"cmd_|-reqs also met_|-echo itonfailed_|-run": {
"__run_num__": 9,
"changes": True,
"comment": 'Command "echo itonfailed" run',
"result": True,
},
"cmd_|-reqs also not met_|-echo italsodidnonfail_|-run": {
"__run_num__": 7,
"changes": False,
"comment": "State was not run because onfail req did not change",
"result": True,
},
"cmd_|-reqs met_|-echo itonfailed_|-run": {
"__run_num__": 8,
"changes": True,
"comment": 'Command "echo itonfailed" run',
"result": True,
},
"cmd_|-reqs not met_|-echo itdidntonfail_|-run": {
"__run_num__": 6,
"changes": False,
"comment": "State was not run because onfail req did not change",
"result": True,
},
}
ret = self.run_function("state.sls", mods="requisites.onfail_all")
result = self.normalize_ret(ret)
self.assertReturnNonEmptySaltType(ret)
self.assertEqual(expected_result, result)
@skipIf(True, "SLOWTEST skip")
def test_requisites_full_sls(self):
"""
@ -1814,6 +1888,12 @@ class StateModuleTest(ModuleCase, SaltReturnAssertsMixin):
stdout = state_run["cmd_|-d_|-echo d_|-run"]["changes"]["stdout"]
self.assertEqual(stdout, "d")
comment = state_run["cmd_|-e_|-echo e_|-run"]["comment"]
self.assertEqual(comment, "State was not run because onfail req did not change")
stdout = state_run["cmd_|-f_|-echo f_|-run"]["changes"]["stdout"]
self.assertEqual(stdout, "f")
@skipIf(True, "SLOWTEST skip")
def test_multiple_onfail_requisite_with_required_no_run(self):
"""
@ -2274,6 +2354,37 @@ class StateModuleTest(ModuleCase, SaltReturnAssertsMixin):
self.assertEqual(val["comment"], "File {0} updated".format(file_name))
self.assertEqual(val["changes"]["diff"], "New file")
def test_state_test_pillar_false(self):
"""
test state.test forces test kwarg to True even when pillar is set to False
"""
self._add_runtime_pillar(pillar={"test": False})
testfile = os.path.join(RUNTIME_VARS.TMP, "testfile")
comment = "The file {0} is set to be changed\nNote: No changes made, actual changes may\nbe different due to other states.".format(
testfile
)
ret = self.run_function("state.test", ["core"])
for key, val in ret.items():
self.assertEqual(val["comment"], comment)
self.assertEqual(val["changes"], {"newfile": testfile})
def test_state_test_test_false_pillar_false(self):
"""
test state.test forces test kwarg to True even when pillar and kwarg are set
to False
"""
self._add_runtime_pillar(pillar={"test": False})
testfile = os.path.join(RUNTIME_VARS.TMP, "testfile")
comment = "The file {0} is set to be changed\nNote: No changes made, actual changes may\nbe different due to other states.".format(
testfile
)
ret = self.run_function("state.test", ["core"], test=False)
for key, val in ret.items():
self.assertEqual(val["comment"], comment)
self.assertEqual(val["changes"], {"newfile": testfile})
@skipIf(
six.PY3 and salt.utils.platform.is_darwin(), "Test is broken on macosx and PY3"
)

View file

@ -2,15 +2,12 @@
from __future__ import absolute_import, print_function, unicode_literals
import random
import string
import pytest
import salt.utils.platform
from salt.ext.six.moves import range
from tests.support.case import ModuleCase
from tests.support.helpers import (
destructiveTest,
random_string,
requires_system_grains,
skip_if_not_root,
)
@ -28,17 +25,12 @@ class UseraddModuleTestLinux(ModuleCase):
if os_grain["kernel"] not in ("Linux", "Darwin"):
self.skipTest("Test not applicable to '{kernel}' kernel".format(**os_grain))
def __random_string(self, size=6):
return "RS-" + "".join(
random.choice(string.ascii_uppercase + string.digits) for x in range(size)
)
@requires_system_grains
@skipIf(True, "SLOWTEST skip")
def test_groups_includes_primary(self, grains):
# Let's create a user, which usually creates the group matching the
# name
uname = self.__random_string()
uname = random_string("RS-", lowercase=False)
if self.run_function("user.add", [uname]) is not True:
# Skip because creating is not what we're testing here
self.run_function("user.delete", [uname, True, True])
@ -57,7 +49,7 @@ class UseraddModuleTestLinux(ModuleCase):
self.run_function("user.delete", [uname, True, True])
# Now, a weird group id
gname = self.__random_string()
gname = random_string("RS-", lowercase=False)
if self.run_function("group.add", [gname]) is not True:
self.run_function("group.delete", [gname, True, True])
self.skipTest("Failed to create group")
@ -105,14 +97,9 @@ class UseraddModuleTestLinux(ModuleCase):
@skip_if_not_root
@pytest.mark.windows_whitelisted
class UseraddModuleTestWindows(ModuleCase):
def __random_string(self, size=6):
return "RS-" + "".join(
random.choice(string.ascii_uppercase + string.digits) for x in range(size)
)
def setUp(self):
self.user_name = self.__random_string()
self.group_name = self.__random_string()
self.user_name = random_string("RS-", lowercase=False)
self.group_name = random_string("RS-", lowercase=False)
def tearDown(self):
self.run_function("user.delete", [self.user_name, True, True])

View file

@ -3,44 +3,43 @@
Integration tests for the vault execution module
"""
# Import Python Libs
from __future__ import absolute_import, print_function, unicode_literals
import inspect
import logging
import time
# Import Salt Libs
import salt.utils.path
from tests.support.case import ModuleCase
from tests.support.helpers import destructiveTest
from tests.support.paths import FILES
# Import Salt Testing Libs
from tests.support.unit import skipIf
from tests.support.runtests import RUNTIME_VARS
from tests.support.sminion import create_sminion
from tests.support.unit import SkipTest, skipIf
log = logging.getLogger(__name__)
VAULT_BINARY_PATH = salt.utils.path.which("vault")
@destructiveTest
@skipIf(not salt.utils.path.which("dockerd"), "Docker not installed")
@skipIf(not salt.utils.path.which("vault"), "Vault not installed")
@skipIf(not VAULT_BINARY_PATH, "Vault not installed")
class VaultTestCase(ModuleCase):
"""
Test vault module
"""
count = 0
def setUp(self):
"""
SetUp vault container
"""
if self.count == 0:
config = '{"backend": {"file": {"path": "/vault/file"}}, "default_lease_ttl": "168h", "max_lease_ttl": "720h", "disable_mlock": true}'
self.run_state("docker_image.present", name="vault", tag="0.9.6")
self.run_state(
"docker_container.running",
@classmethod
def setUpClass(cls):
cls.sminion = sminion = create_sminion()
config = '{"backend": {"file": {"path": "/vault/file"}}, "default_lease_ttl": "168h", "max_lease_ttl": "720h", "disable_mlock": true}'
sminion.states.docker_image.present(name="vault", tag="0.9.6")
login_attempts = 1
container_created = False
while True:
if container_created:
sminion.states.docker_container.stopped(name="vault")
sminion.states.docker_container.absent(name="vault")
ret = sminion.states.docker_container.running(
name="vault",
image="vault:0.9.6",
port_bindings="8200:8200",
@ -49,38 +48,37 @@ class VaultTestCase(ModuleCase):
"VAULT_LOCAL_CONFIG": config,
},
)
log.debug("docker_container.running return: %s", ret)
container_created = ret["result"]
time.sleep(5)
ret = self.run_function(
"cmd.retcode",
cmd="/usr/local/bin/vault login token=testsecret",
ret = sminion.functions.cmd.run_all(
cmd="{} login token=testsecret".format(VAULT_BINARY_PATH),
env={"VAULT_ADDR": "http://127.0.0.1:8200"},
hide_output=False,
)
if ret != 0:
self.skipTest("unable to login to vault")
ret = self.run_function(
"cmd.retcode",
cmd="/usr/local/bin/vault policy write testpolicy {0}/vault.hcl".format(
FILES
),
env={"VAULT_ADDR": "http://127.0.0.1:8200"},
)
if ret != 0:
self.skipTest("unable to assign policy to vault")
self.count += 1
if ret["retcode"] == 0:
break
log.debug("Vault login failed. Return: %s", ret)
login_attempts += 1
def tearDown(self):
"""
TearDown vault container
"""
if login_attempts >= 3:
raise SkipTest("unable to login to vault")
def count_tests(funcobj):
return inspect.ismethod(funcobj) and funcobj.__name__.startswith("test_")
ret = sminion.functions.cmd.retcode(
cmd="{} policy write testpolicy {}/vault.hcl".format(
VAULT_BINARY_PATH, RUNTIME_VARS.FILES
),
env={"VAULT_ADDR": "http://127.0.0.1:8200"},
)
if ret != 0:
raise SkipTest("unable to assign policy to vault")
numtests = len(inspect.getmembers(VaultTestCase, predicate=count_tests))
if self.count >= numtests:
self.run_state("docker_container.stopped", name="vault")
self.run_state("docker_container.absent", name="vault")
self.run_state("docker_image.absent", name="vault", force=True)
@classmethod
def tearDownClass(cls):
cls.sminion.states.docker_container.stopped(name="vault")
cls.sminion.states.docker_container.absent(name="vault")
cls.sminion.states.docker_image.absent(name="vault", force=True)
cls.sminion = None
@skipIf(True, "SLOWTEST skip")
def test_write_read_secret(self):
@ -151,17 +149,18 @@ class VaultTestCaseCurrent(ModuleCase):
Test vault module against current vault
"""
count = 0
def setUp(self):
"""
SetUp vault container
"""
if self.count == 0:
config = '{"backend": {"file": {"path": "/vault/file"}}, "default_lease_ttl": "168h", "max_lease_ttl": "720h", "disable_mlock": true}'
self.run_state("docker_image.present", name="vault", tag="1.3.1")
self.run_state(
"docker_container.running",
@classmethod
def setUpClass(cls):
cls.sminion = sminion = create_sminion()
config = '{"backend": {"file": {"path": "/vault/file"}}, "default_lease_ttl": "168h", "max_lease_ttl": "720h", "disable_mlock": true}'
sminion.states.docker_image.present(name="vault", tag="1.3.1")
login_attempts = 1
container_created = False
while True:
if container_created:
sminion.states.docker_container.stopped(name="vault")
sminion.states.docker_container.absent(name="vault")
ret = sminion.states.docker_container.running(
name="vault",
image="vault:1.3.1",
port_bindings="8200:8200",
@ -170,38 +169,37 @@ class VaultTestCaseCurrent(ModuleCase):
"VAULT_LOCAL_CONFIG": config,
},
)
log.debug("docker_container.running return: %s", ret)
container_created = ret["result"]
time.sleep(5)
ret = self.run_function(
"cmd.retcode",
cmd="/usr/local/bin/vault login token=testsecret",
ret = sminion.functions.cmd.run_all(
cmd="{} login token=testsecret".format(VAULT_BINARY_PATH),
env={"VAULT_ADDR": "http://127.0.0.1:8200"},
hide_output=False,
)
if ret != 0:
self.skipTest("unable to login to vault")
ret = self.run_function(
"cmd.retcode",
cmd="/usr/local/bin/vault policy write testpolicy {0}/vault.hcl".format(
FILES
),
env={"VAULT_ADDR": "http://127.0.0.1:8200"},
)
if ret != 0:
self.skipTest("unable to assign policy to vault")
self.count += 1
if ret["retcode"] == 0:
break
log.debug("Vault login failed. Return: %s", ret)
login_attempts += 1
def tearDown(self):
"""
TearDown vault container
"""
if login_attempts >= 3:
raise SkipTest("unable to login to vault")
def count_tests(funcobj):
return inspect.ismethod(funcobj) and funcobj.__name__.startswith("test_")
ret = sminion.functions.cmd.retcode(
cmd="{} policy write testpolicy {}/vault.hcl".format(
VAULT_BINARY_PATH, RUNTIME_VARS.FILES
),
env={"VAULT_ADDR": "http://127.0.0.1:8200"},
)
if ret != 0:
raise SkipTest("unable to assign policy to vault")
numtests = len(inspect.getmembers(VaultTestCaseCurrent, predicate=count_tests))
if self.count >= numtests:
self.run_state("docker_container.stopped", name="vault")
self.run_state("docker_container.absent", name="vault")
self.run_state("docker_image.absent", name="vault", force=True)
@classmethod
def tearDownClass(cls):
cls.sminion.states.docker_container.stopped(name="vault")
cls.sminion.states.docker_container.absent(name="vault")
cls.sminion.states.docker_image.absent(name="vault", force=True)
cls.sminion = None
@skipIf(True, "SLOWTEST skip")
def test_write_read_secret_kv2(self):

View file

@ -1,6 +1,5 @@
# -*- coding: utf-8 -*-
# Import Python libs
from __future__ import absolute_import, print_function, unicode_literals
import io
@ -8,14 +7,11 @@ import logging
import os
import re
# Import Salt libs
import salt.utils.files
import salt.utils.platform
import salt.utils.win_reg as reg
# Import Salt Testing libs
from tests.support.case import ModuleCase
from tests.support.helpers import destructiveTest, generate_random_name
from tests.support.helpers import destructiveTest, random_string
from tests.support.runtests import RUNTIME_VARS
from tests.support.unit import skipIf
@ -103,7 +99,7 @@ class WinLgpoTest(ModuleCase):
)
self.assertTrue(ret)
secedit_output_file = os.path.join(
RUNTIME_VARS.TMP, generate_random_name("secedit-output-")
RUNTIME_VARS.TMP, random_string("secedit-output-")
)
secedit_output = self.run_function(
"cmd.run", (), cmd="secedit /export /cfg {0}".format(secedit_output_file)
@ -559,6 +555,27 @@ class WinLgpoTest(ModuleCase):
],
)
@destructiveTest
def test_set_computer_policy_LockoutDuration(self):
"""
Test setting LockoutDuration
"""
# For LockoutDuration to be meaningful, first configure
# LockoutThreshold
self._testSeceditPolicy("LockoutThreshold", 3, [r"^LockoutBadCount = 3"])
# Next set LockoutDuration to a non-zero value, as this is required
# before setting LockoutWindow
self._testSeceditPolicy("LockoutDuration", 60, [r"^LockoutDuration = 60"])
# Now set LockoutWindow to a valid value <= LockoutDuration. If this
# is not set, then the LockoutDuration zero value is ignored by the
# Windows API (leading to a false sense of accomplishment)
self._testSeceditPolicy("LockoutWindow", 60, [r"^ResetLockoutCount = 60"])
# Finally set LockoutDuration back to zero; secedit represents the zero value as -1
self._testSeceditPolicy("LockoutDuration", 0, [r"^LockoutDuration = -1"])
@destructiveTest
def test_set_computer_policy_GuestAccountStatus(self):
"""

View file

@ -0,0 +1,83 @@
# -*- coding: utf-8 -*-
from __future__ import absolute_import, unicode_literals
import salt.modules.win_task as task
import salt.utils.platform
from salt.exceptions import CommandExecutionError
from tests.support.case import ModuleCase
from tests.support.helpers import destructiveTest
from tests.support.unit import skipIf
@skipIf(not salt.utils.platform.is_windows(), "windows test only")
class WinTasksTest(ModuleCase):
"""
Tests for salt.modules.win_task.
"""
@destructiveTest
def test_adding_task_with_xml(self):
"""
Test adding a task using xml
"""
xml_text = r"""
<Task version="1.2" xmlns="http://schemas.microsoft.com/windows/2004/02/mit/task">
<RegistrationInfo>
<Date>2015-06-12T15:59:35.691983</Date>
<Author>System</Author>
</RegistrationInfo>
<Triggers>
<LogonTrigger>
<Enabled>true</Enabled>
<Delay>PT30S</Delay>
</LogonTrigger>
</Triggers>
<Principals>
<Principal id="Author">
<UserId>System</UserId>
<LogonType>InteractiveToken</LogonType>
<RunLevel>HighestAvailable</RunLevel>
</Principal>
</Principals>
<Settings>
<MultipleInstancesPolicy>IgnoreNew</MultipleInstancesPolicy>
<DisallowStartIfOnBatteries>true</DisallowStartIfOnBatteries>
<StopIfGoingOnBatteries>true</StopIfGoingOnBatteries>
<AllowHardTerminate>true</AllowHardTerminate>
<StartWhenAvailable>false</StartWhenAvailable>
<RunOnlyIfNetworkAvailable>false</RunOnlyIfNetworkAvailable>
<IdleSettings>
<StopOnIdleEnd>true</StopOnIdleEnd>
<RestartOnIdle>false</RestartOnIdle>
</IdleSettings>
<AllowStartOnDemand>true</AllowStartOnDemand>
<Enabled>true</Enabled>
<Hidden>false</Hidden>
<RunOnlyIfIdle>false</RunOnlyIfIdle>
<WakeToRun>false</WakeToRun>
<ExecutionTimeLimit>P3D</ExecutionTimeLimit>
<Priority>4</Priority>
</Settings>
<Actions Context="Author">
<Exec>
<Command>echo</Command>
<Arguments>"hello"</Arguments>
</Exec>
</Actions>
</Task>
"""
self.assertEquals(
self.run_function("task.create_task_from_xml", "foo", xml_text=xml_text),
True,
)
all_tasks = self.run_function("task.list_tasks")
self.assertIn("foo", all_tasks)
@destructiveTest
def test_adding_task_with_invalid_xml(self):
"""
Test adding a task using a malformed xml
"""
xml_text = r"""<Malformed"""
with self.assertRaises(CommandExecutionError):
task.create_task_from_xml("foo", xml_text=xml_text)

View file

@ -21,7 +21,6 @@ from tests.support.unit import skipIf
log = logging.getLogger(__name__)
@destructiveTest
@skipIf(not salt.utils.path.which("dockerd"), "Docker not installed")
@skipIf(not salt.utils.path.which("vault"), "Vault not installed")
class VaultTestCase(ModuleCase, ShellCase):
@ -35,6 +34,8 @@ class VaultTestCase(ModuleCase, ShellCase):
"""
SetUp vault container
"""
vault_binary = salt.utils.path.which("vault")
if VaultTestCase.count == 0:
config = '{"backend": {"file": {"path": "/vault/file"}}, "default_lease_ttl": "168h", "max_lease_ttl": "720h"}'
self.run_state("docker_image.present", name="vault", tag="0.9.6")
@ -52,7 +53,7 @@ class VaultTestCase(ModuleCase, ShellCase):
time.sleep(5)
ret = self.run_function(
"cmd.retcode",
cmd="/usr/local/bin/vault login token=testsecret",
cmd="{} login token=testsecret".format(vault_binary),
env={"VAULT_ADDR": "http://127.0.0.1:8200"},
)
login_attempts = 1
@ -74,7 +75,7 @@ class VaultTestCase(ModuleCase, ShellCase):
time.sleep(5)
ret = self.run_function(
"cmd.retcode",
cmd="/usr/local/bin/vault login token=testsecret",
cmd="{} login token=testsecret".format(vault_binary),
env={"VAULT_ADDR": "http://127.0.0.1:8200"},
)
login_attempts += 1
@ -82,8 +83,8 @@ class VaultTestCase(ModuleCase, ShellCase):
self.skipTest("unable to login to vault")
ret = self.run_function(
"cmd.retcode",
cmd="/usr/local/bin/vault policy write testpolicy {0}/vault.hcl".format(
RUNTIME_VARS.FILES
cmd="{} policy write testpolicy {}/vault.hcl".format(
vault_binary, RUNTIME_VARS.FILES
),
env={"VAULT_ADDR": "http://127.0.0.1:8200"},
)

View file

@ -69,6 +69,9 @@ class PkgTest(ModuleCase, SaltReturnAssertsMixin):
elif grains["osmajorrelease"] == 7:
cls._PKG_DOT_TARGETS = ["tomcat-el-2.2-api"]
cls._PKG_EPOCH_TARGETS = ["comps-extras"]
elif grains["osmajorrelease"] == 8:
cls._PKG_DOT_TARGETS = ["vid.stab"]
cls._PKG_EPOCH_TARGETS = ["traceroute"]
elif grains["os_family"] == "Suse":
cls._PKG_TARGETS = ["lynx", "htop"]
if grains["os"] == "SUSE":
@ -140,13 +143,15 @@ class PkgTest(ModuleCase, SaltReturnAssertsMixin):
ret = self.run_state("pkg.removed", name=target)
self.assertSaltTrueReturn(ret)
@skipIf(not _VERSION_SPEC_SUPPORTED, "Version specification not supported")
@requires_salt_states("pkg.installed", "pkg.removed")
@skipIf(True, "SLOWTEST skip")
def test_pkg_002_installed_with_version(self):
"""
This is a destructive test as it installs and then removes a package
"""
if not self._VERSION_SPEC_SUPPORTED:
self.skipTest("Version specification not supported")
target = self._PKG_TARGETS[0]
version = self.latest_version(target)
@ -187,13 +192,15 @@ class PkgTest(ModuleCase, SaltReturnAssertsMixin):
ret = self.run_state("pkg.removed", name=None, pkgs=self._PKG_TARGETS)
self.assertSaltTrueReturn(ret)
@skipIf(not _VERSION_SPEC_SUPPORTED, "Version specification not supported")
@requires_salt_states("pkg.installed", "pkg.removed")
@skipIf(True, "SLOWTEST skip")
def test_pkg_004_installed_multipkg_with_version(self):
"""
This is a destructive test as it installs and then removes two packages
"""
if not self._VERSION_SPEC_SUPPORTED:
self.skipTest("Version specification not supported")
version = self.latest_version(self._PKG_TARGETS[0])
# If this assert fails, we need to find new targets, this test needs to
@ -210,13 +217,15 @@ class PkgTest(ModuleCase, SaltReturnAssertsMixin):
ret = self.run_state("pkg.removed", name=None, pkgs=self._PKG_TARGETS)
self.assertSaltTrueReturn(ret)
@skipIf(not _PKG_32_TARGETS, "No 32 bit packages have been specified for testing")
@requires_salt_modules("pkg.version")
@requires_salt_states("pkg.installed", "pkg.removed")
def test_pkg_005_installed_32bit(self):
"""
This is a destructive test as it installs and then removes a package
"""
if not self._PKG_32_TARGETS:
self.skipTest("No 32 bit packages have been specified for testing")
target = self._PKG_32_TARGETS[0]
# _PKG_TARGETS_32 is only populated for platforms for which Salt has to
@ -235,12 +244,14 @@ class PkgTest(ModuleCase, SaltReturnAssertsMixin):
ret = self.run_state("pkg.removed", name=target)
self.assertSaltTrueReturn(ret)
@skipIf(not _PKG_32_TARGETS, "No 32 bit packages have been specified for testing")
@requires_salt_states("pkg.installed", "pkg.removed")
def test_pkg_006_installed_32bit_with_version(self):
"""
This is a destructive test as it installs and then removes a package
"""
if not self._PKG_32_TARGETS:
self.skipTest("No 32 bit packages have been specified for testing")
target = self._PKG_32_TARGETS[0]
# _PKG_TARGETS_32 is only populated for platforms for which Salt has to
@ -261,10 +272,6 @@ class PkgTest(ModuleCase, SaltReturnAssertsMixin):
ret = self.run_state("pkg.removed", name=target)
self.assertSaltTrueReturn(ret)
@skipIf(
not _PKG_DOT_TARGETS,
'No packages with "." in their name have been configured for',
)
@requires_salt_states("pkg.installed", "pkg.removed")
def test_pkg_007_with_dot_in_pkgname(self=None):
"""
@ -273,6 +280,9 @@ class PkgTest(ModuleCase, SaltReturnAssertsMixin):
This is a destructive test as it installs a package
"""
if not self._PKG_DOT_TARGETS:
self.skipTest('No packages with "." in their name have been specified',)
target = self._PKG_DOT_TARGETS[0]
version = self.latest_version(target)
@ -286,10 +296,6 @@ class PkgTest(ModuleCase, SaltReturnAssertsMixin):
ret = self.run_state("pkg.removed", name=target)
self.assertSaltTrueReturn(ret)
@skipIf(
not _PKG_EPOCH_TARGETS,
'No targets have been configured with "epoch" in the version',
)
@requires_salt_states("pkg.installed", "pkg.removed")
def test_pkg_008_epoch_in_version(self):
"""
@ -298,6 +304,9 @@ class PkgTest(ModuleCase, SaltReturnAssertsMixin):
This is a destructive test as it installs a package
"""
if not self._PKG_EPOCH_TARGETS:
self.skipTest('No targets have been configured with "epoch" in the version')
target = self._PKG_EPOCH_TARGETS[0]
version = self.latest_version(target)
@ -410,13 +419,15 @@ class PkgTest(ModuleCase, SaltReturnAssertsMixin):
"Package {0} is already up-to-date".format(target),
)
@skipIf(not _WILDCARDS_SUPPORTED, "Wildcards in pkg.install are not supported")
@requires_salt_modules("pkg.version")
@requires_salt_states("pkg.installed", "pkg.removed")
def test_pkg_012_installed_with_wildcard_version(self):
"""
This is a destructive test as it installs and then removes a package
"""
if not self._WILDCARDS_SUPPORTED:
self.skipTest("Wildcards in pkg.install are not supported")
target = self._PKG_TARGETS[0]
version = self.run_function("pkg.version", [target])
@ -582,13 +593,51 @@ class PkgTest(ModuleCase, SaltReturnAssertsMixin):
ret = self.run_state("pkg.removed", name=versionlock_pkg)
self.assertSaltTrueReturn(ret)
@skipIf(not _PKG_CAP_TARGETS, "Capability not provided")
@requires_salt_states("pkg.installed", "pkg.removed")
def test_pkg_016_conditionally_ignore_epoch(self):
"""
See
https://github.com/saltstack/salt/issues/56654#issuecomment-615034952
This is a destructive test as it installs a package
"""
if not self._PKG_EPOCH_TARGETS:
self.skipTest('No targets have been configured with "epoch" in the version')
target = self._PKG_EPOCH_TARGETS[0]
# Strip the epoch from the latest available version
version = self.latest_version(target).split(":", 1)[-1]
# If this assert fails, we need to find a new target. This test
# needs to be able to test successful installation of the package, so
# the target needs to not be installed before we run the
# pkg.installed state below
self.assertTrue(version)
# CASE 1: package name passed in "name" param
ret = self.run_state(
"pkg.installed", name=target, version=version, refresh=False
)
self.assertSaltTrueReturn(ret)
ret = self.run_state("pkg.removed", name=target)
self.assertSaltTrueReturn(ret)
# CASE 2: same as case 1 but with "pkgs"
ret = self.run_state(
"pkg.installed", name="foo", pkgs=[{target: version}], refresh=False
)
self.assertSaltTrueReturn(ret)
ret = self.run_state("pkg.removed", name=target)
self.assertSaltTrueReturn(ret)
@requires_salt_modules("pkg.version")
@requires_salt_states("pkg.installed", "pkg.removed")
def test_pkg_cap_001_installed(self):
"""
This is a destructive test as it installs and then removes a package
"""
if not self._PKG_CAP_TARGETS:
self.skipTest("Capability not provided")
target, realpkg = self._PKG_CAP_TARGETS[0]
version = self.run_function("pkg.version", [target])
@ -622,12 +671,14 @@ class PkgTest(ModuleCase, SaltReturnAssertsMixin):
ret = self.run_state("pkg.removed", name=realpkg)
self.assertSaltTrueReturn(ret)
@skipIf(not _PKG_CAP_TARGETS, "Capability not available")
@requires_salt_states("pkg.installed", "pkg.removed")
def test_pkg_cap_002_already_installed(self):
"""
This is a destructive test as it installs and then removes a package
"""
if not self._PKG_CAP_TARGETS:
self.skipTest("Capability not provided")
target, realpkg = self._PKG_CAP_TARGETS[0]
version = self.run_function("pkg.version", [target])
realver = self.run_function("pkg.version", [realpkg])
@ -665,13 +716,17 @@ class PkgTest(ModuleCase, SaltReturnAssertsMixin):
ret = self.run_state("pkg.removed", name=realpkg)
self.assertSaltTrueReturn(ret)
@skipIf(not _PKG_CAP_TARGETS, "Capability not available")
@skipIf(not _VERSION_SPEC_SUPPORTED, "Version specification not supported")
@requires_salt_states("pkg.installed", "pkg.removed")
def test_pkg_cap_003_installed_multipkg_with_version(self):
"""
This is a destructive test as it installs and then removes two packages
"""
if not self._PKG_CAP_TARGETS:
self.skipTest("Capability not provided")
if not self._VERSION_SPEC_SUPPORTED:
self.skipTest("Version specification not supported")
target, realpkg = self._PKG_CAP_TARGETS[0]
version = self.latest_version(target)
realver = self.latest_version(realpkg)
@ -725,7 +780,6 @@ class PkgTest(ModuleCase, SaltReturnAssertsMixin):
)
self.assertSaltTrueReturn(ret)
@skipIf(not _PKG_CAP_TARGETS, "Capability not available")
@requires_salt_modules("pkg.version")
@requires_salt_states("pkg.latest", "pkg.removed")
def test_pkg_cap_004_latest(self):
@ -733,6 +787,9 @@ class PkgTest(ModuleCase, SaltReturnAssertsMixin):
This tests pkg.latest with a package that has no epoch (or a zero
epoch).
"""
if not self._PKG_CAP_TARGETS:
self.skipTest("Capability not provided")
target, realpkg = self._PKG_CAP_TARGETS[0]
version = self.run_function("pkg.version", [target])
realver = self.run_function("pkg.version", [realpkg])
@ -771,13 +828,15 @@ class PkgTest(ModuleCase, SaltReturnAssertsMixin):
ret = self.run_state("pkg.removed", name=realpkg)
self.assertSaltTrueReturn(ret)
@skipIf(not _PKG_CAP_TARGETS, "Capability not available")
@requires_salt_modules("pkg.version")
@requires_salt_states("pkg.installed", "pkg.removed", "pkg.downloaded")
def test_pkg_cap_005_downloaded(self):
"""
This is a destructive test as it installs and then removes a package
"""
if not self._PKG_CAP_TARGETS:
self.skipTest("Capability not provided")
target, realpkg = self._PKG_CAP_TARGETS[0]
version = self.run_function("pkg.version", [target])
realver = self.run_function("pkg.version", [realpkg])
@ -807,13 +866,15 @@ class PkgTest(ModuleCase, SaltReturnAssertsMixin):
)
self.assertSaltTrueReturn(ret)
@skipIf(not _PKG_CAP_TARGETS, "Capability not available")
@requires_salt_modules("pkg.version")
@requires_salt_states("pkg.installed", "pkg.removed", "pkg.uptodate")
def test_pkg_cap_006_uptodate(self):
"""
This is a destructive test as it installs and then removes a package
"""
if not self._PKG_CAP_TARGETS:
self.skipTest("Capability not provided")
target, realpkg = self._PKG_CAP_TARGETS[0]
version = self.run_function("pkg.version", [target])
realver = self.run_function("pkg.version", [realpkg])

View file

@ -10,7 +10,7 @@ import pytest
import salt.utils.platform
import salt.utils.win_reg as reg
from tests.support.case import ModuleCase
from tests.support.helpers import destructiveTest, generate_random_name
from tests.support.helpers import destructiveTest, random_string
from tests.support.mixins import SaltReturnAssertsMixin
from tests.support.unit import skipIf
@ -20,7 +20,7 @@ UNICODE_VALUE_NAME = "Unicode Key \N{TRADE MARK SIGN}"
UNICODE_VALUE = (
"Unicode Value " "\N{COPYRIGHT SIGN},\N{TRADE MARK SIGN},\N{REGISTERED SIGN}"
)
FAKE_KEY = "SOFTWARE\\{0}".format(generate_random_name("SaltTesting-"))
FAKE_KEY = "SOFTWARE\\{0}".format(random_string("SaltTesting-", lowercase=False))
@destructiveTest

View file

@ -917,6 +917,14 @@ class ModuleCase(TestCase, SaltClientTestCaseMixin):
if "f_timeout" in kwargs:
kwargs["timeout"] = kwargs.pop("f_timeout")
client = self.client if master_tgt is None else self.clients[master_tgt]
log.debug(
"Running client.cmd(minion_tgt=%r, function=%r, arg=%r, timeout=%r, kwarg=%r)",
minion_tgt,
function,
arg,
timeout,
kwargs,
)
orig = client.cmd(minion_tgt, function, arg, timeout=timeout, kwarg=kwargs)
if RUNTIME_VARS.PYTEST_SESSION:

View file

@ -11,7 +11,6 @@
"""
# pylint: disable=repr-flag-used-in-string,wrong-import-order
# Import Python libs
from __future__ import absolute_import, print_function, unicode_literals
import base64
@ -35,21 +34,16 @@ import types
import salt.ext.tornado.ioloop
import salt.ext.tornado.web
# Import Salt libs
import salt.utils.files
import salt.utils.platform
import salt.utils.stringutils
import salt.utils.versions
from pytestsalt.utils import get_unused_localhost_port
# Import 3rd-party libs
from salt.ext import six
from salt.ext.six.moves import builtins, range
from tests.support.mock import patch
from tests.support.runtests import RUNTIME_VARS
from tests.support.sminion import create_sminion
# Import Salt Tests Support libs
from tests.support.unit import SkipTest, _id, skip
log = logging.getLogger(__name__)
@ -622,6 +616,9 @@ def requires_network(only_local_network=False):
cls.skipTest("No local network was detected")
return func(cls)
if os.environ.get("NO_INTERNET"):
cls.skipTest("Environment variable NO_INTERNET is set.")
# We are using the google.com DNS records as numerical IPs to avoid
# DNS lookups which could greatly slow down this check
for addr in (
@ -1411,9 +1408,42 @@ def generate_random_name(prefix, size=6):
size
The number of characters to generate. Default: 6.
"""
return prefix + "".join(
random.choice(string.ascii_uppercase + string.digits) for x in range(size)
salt.utils.versions.warn_until_date(
"20220101",
"Please replace your call 'generate_random_name({0})' with 'random_string({0}, lowercase=False)' as "
"'generate_random_name' will be removed after {{date}}".format(prefix),
)
return random_string(prefix, size=size, lowercase=False)
def random_string(prefix, size=6, uppercase=True, lowercase=True, digits=True):
"""
Generates a random string.
.. versionadded:: 3001
Args:
prefix(str): The prefix for the random string
size(int): The size of the random string
uppercase(bool): If true, include uppercased ascii chars in choice sample
lowercase(bool): If true, include lowercased ascii chars in choice sample
digits(bool): If true, include digits in choice sample
Returns:
str: The random string
"""
if not any([uppercase, lowercase, digits]):
raise RuntimeError(
"At least one of 'uppercase', 'lowercase' or 'digits' needs to be true"
)
choices = []
if uppercase:
choices.extend(string.ascii_uppercase)
if lowercase:
choices.extend(string.ascii_lowercase)
if digits:
choices.extend(string.digits)
return prefix + "".join(random.choice(choices) for _ in range(size))
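A minimal usage sketch of the random_string helper defined above; the prefixes and variable names here are illustrative only:

from tests.support.helpers import random_string

# Uppercase letters and digits only, mirroring the old generate_random_name behaviour
fake_key = "SOFTWARE\\{}".format(random_string("SaltTesting-", lowercase=False))

# Default pool (uppercase + lowercase + digits), six random characters after the prefix
job_tag = random_string("salt-job-")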
class Webserver(object):

View file

@ -11,7 +11,7 @@ from salt.cloud.clouds import proxmox
# Import Salt Testing Libs
from tests.support.mixins import LoaderModuleMockMixin
from tests.support.mock import MagicMock, patch
from tests.support.mock import ANY, MagicMock, patch
from tests.support.unit import TestCase
@ -21,6 +21,7 @@ class ProxmoxTest(TestCase, LoaderModuleMockMixin):
proxmox: {
"__utils__": {
"cloud.fire_event": MagicMock(),
"cloud.filter_event": MagicMock(),
"cloud.bootstrap": MagicMock(),
},
"__opts__": {
@ -107,3 +108,35 @@ class ProxmoxTest(TestCase, LoaderModuleMockMixin):
query.assert_any_call(
"post", "nodes/127.0.0.1/qemu/0/config", {"scsi0": "data"}
)
def test_clone(self):
"""
Test create_node when clone_from is given as a numeric VM ID or in host:ID notation
"""
mock_query = MagicMock(return_value="")
with patch(
"salt.cloud.clouds.proxmox._get_properties", MagicMock(return_value=[])
), patch("salt.cloud.clouds.proxmox.query", mock_query):
vm_ = {
"technology": "qemu",
"name": "new2",
"host": "myhost",
"clone": True,
"clone_from": 123,
}
# CASE 1: Numeric ID
result = proxmox.create_node(vm_, ANY)
mock_query.assert_called_once_with(
"post", "nodes/myhost/qemu/123/clone", {"newid": ANY},
)
assert result == {}
# CASE 2: host:ID notation
mock_query.reset_mock()
vm_["clone_from"] = "otherhost:123"
result = proxmox.create_node(vm_, ANY)
mock_query.assert_called_once_with(
"post", "nodes/otherhost/qemu/123/clone", {"newid": ANY},
)
assert result == {}

View file

@ -82,6 +82,31 @@ class SaltifyTestCase(TestCase, LoaderModuleMockMixin):
mock_cmd.assert_called_once_with(vm_, ANY)
self.assertTrue(result)
def test_create_no_ssh_host(self):
"""
Test that ssh_host is set to the vm name if not defined
"""
mock_cmd = MagicMock(return_value=True)
with patch.dict(
"salt.cloud.clouds.saltify.__utils__", {"cloud.bootstrap": mock_cmd}
):
vm_ = {
"deploy": True,
"driver": "saltify",
"name": "new2",
"profile": "testprofile2",
}
result = saltify.create(vm_)
mock_cmd.assert_called_once_with(vm_, ANY)
assert result
# Make sure that ssh_host was added to the vm. Note that this is
# done in two asserts so that the failure is more explicit about
# what is wrong. If ssh_host wasn't inserted in the vm_ dict, the
# failure would be a KeyError, which would be harder to
# troubleshoot.
assert "ssh_host" in vm_
assert vm_["ssh_host"] == "new2"
def test_create_wake_on_lan(self):
"""
Test if wake on lan works

View file

@ -18,7 +18,7 @@ from salt.exceptions import SaltCloudSystemExit
# Import Salt Testing Libs
from tests.support.mixins import LoaderModuleMockMixin
from tests.support.mock import MagicMock, patch
from tests.support.mock import MagicMock, Mock, patch
from tests.support.unit import TestCase, skipIf
# Attempt to import pyVim and pyVmomi libs
@ -428,6 +428,21 @@ class VMwareTestCase(ExtendedTestCase):
SaltCloudSystemExit, vmware.destroy, name=VM_NAME, call="function"
)
def test_shutdown_host_call(self):
"""
Tests that a SaltCloudSystemExit is raised when trying to call shutdown_host
with anything other than --action or -a.
"""
with patch.object(vmware, "_get_si", Mock()), patch(
"salt.utils.vmware.get_mor_by_property", Mock()
):
self.assertRaises(
SaltCloudSystemExit,
vmware.shutdown_host,
kwargs={"host": VM_NAME},
call="action",
)
def test_upgrade_tools_call(self):
"""
Tests that a SaltCloudSystemExit is raised when trying to call upgrade_tools

View file

@ -105,7 +105,8 @@ class AutoKeyTest(TestCase):
@patch_check_permissions()
def test_check_permissions_group_can_write_not_permissive(self):
"""
Assert that a file is accepted, when group can write to it and perkissive_pki_access=False
Assert that a file is accepted, when group can write to it and
permissive_pki_access=False
"""
self.stats["testfile"] = {"mode": gen_permissions("w", "w", ""), "gid": 1}
if salt.utils.platform.is_windows():
@ -116,7 +117,8 @@ class AutoKeyTest(TestCase):
@patch_check_permissions(permissive_pki=True)
def test_check_permissions_group_can_write_permissive(self):
"""
Assert that a file is accepted, when group can write to it and perkissive_pki_access=True
Assert that a file is accepted, when group can write to it and
permissive_pki_access=True
"""
self.stats["testfile"] = {"mode": gen_permissions("w", "w", ""), "gid": 1}
self.assertTrue(self.auto_key.check_permissions("testfile"))
@ -124,8 +126,8 @@ class AutoKeyTest(TestCase):
@patch_check_permissions(uid=0, permissive_pki=True)
def test_check_permissions_group_can_write_permissive_root_in_group(self):
"""
Assert that a file is accepted, when group can write to it, perkissive_pki_access=False,
salt is root and in the file owning group
Assert that a file is accepted, when group can write to it,
permissive_pki_access=False, salt is root and in the file owning group
"""
self.stats["testfile"] = {"mode": gen_permissions("w", "w", ""), "gid": 0}
self.assertTrue(self.auto_key.check_permissions("testfile"))
@ -133,8 +135,9 @@ class AutoKeyTest(TestCase):
@patch_check_permissions(uid=0, permissive_pki=True)
def test_check_permissions_group_can_write_permissive_root_not_in_group(self):
"""
Assert that no file is accepted, when group can write to it, perkissive_pki_access=False,
salt is root and **not** in the file owning group
Assert that no file is accepted, when group can write to it,
permissive_pki_access=False, salt is root and **not** in the file owning
group
"""
self.stats["testfile"] = {"mode": gen_permissions("w", "w", ""), "gid": 1}
if salt.utils.platform.is_windows():

View file

@ -960,9 +960,9 @@ class FileModuleTestCase(TestCase, LoaderModuleMockMixin):
tfile.write(
salt.utils.stringutils.to_bytes(
"rc.conf ef6e82e4006dee563d98ada2a2a80a27\n"
"ead48423703509d37c4a90e6a0d53e143b6fc268 example.tar.gz\n"
"ead48423703509d37c4a90e6a0d53e143b6fc268 example.tar.gz\n"
"fe05bcdcdc4928012781a5f1a2a77cbb5398e106 ./subdir/example.tar.gz\n"
"ad782ecdac770fc6eb9a62e44f90873fb97fb26b foo.tar.bz2\n"
"ad782ecdac770fc6eb9a62e44f90873fb97fb26b *foo.tar.bz2\n"
)
)
tfile.flush()

View file

@ -1,7 +1,4 @@
# -*- coding: utf-8 -*-
"""
:codeauthor: Jayesh Kariya <jayeshk@saltstack.com>
"""
# Import Python Libs
from __future__ import absolute_import, print_function, unicode_literals
@ -10,10 +7,9 @@ import logging
import os.path
import socket
import salt.modules.network as network
# Import Salt Libs
import salt.utils.network
import salt.config
import salt.modules.network as network
import salt.utils.path
from salt._compat import ipaddress
from salt.exceptions import CommandExecutionError
@ -32,7 +28,13 @@ class NetworkTestCase(TestCase, LoaderModuleMockMixin):
"""
def setup_loader_modules(self):
return {network: {}}
opts = salt.config.DEFAULT_MINION_OPTS.copy()
utils = salt.loader.utils(
opts, whitelist=["network", "path", "platform", "stringutils"]
)
return {
network: {"__utils__": utils},
}
def test_wol_bad_mac(self):
"""
@ -68,7 +70,9 @@ class NetworkTestCase(TestCase, LoaderModuleMockMixin):
"""
Test for Performs a ping to a host
"""
with patch.object(salt.utils.network, "sanitize_host", return_value="A"):
with patch.dict(
network.__utils__, {"network.sanitize_host": MagicMock(return_value="A")}
):
mock_all = MagicMock(side_effect=[{"retcode": 1}, {"retcode": 0}])
with patch.dict(network.__salt__, {"cmd.run_all": mock_all}):
self.assertFalse(network.ping("host", return_boolean=True))
@ -99,7 +103,9 @@ class NetworkTestCase(TestCase, LoaderModuleMockMixin):
Test for return a dict containing information on all
of the running TCP connections
"""
with patch.object(salt.utils.network, "active_tcp", return_value="A"):
with patch.dict(
network.__utils__, {"network.active_tcp": MagicMock(return_value="A")}
):
with patch.dict(network.__grains__, {"kernel": "Linux"}):
self.assertEqual(network.active_tcp(), "A")
@ -111,8 +117,9 @@ class NetworkTestCase(TestCase, LoaderModuleMockMixin):
with patch.dict(network.__salt__, {"cmd.run": MagicMock(return_value="")}):
self.assertListEqual(network.traceroute("gentoo.org"), [])
with patch.object(
salt.utils.network, "sanitize_host", return_value="gentoo.org"
with patch.dict(
network.__utils__,
{"network.sanitize_host": MagicMock(return_value="gentoo.org")},
):
with patch.dict(
network.__salt__, {"cmd.run": MagicMock(return_value="")}
@ -123,13 +130,9 @@ class NetworkTestCase(TestCase, LoaderModuleMockMixin):
"""
Test for Performs a DNS lookup with dig
"""
with patch(
"salt.utils.path.which", MagicMock(return_value="dig")
), patch.object(
salt.utils.network, "sanitize_host", return_value="A"
), patch.dict(
network.__salt__, {"cmd.run": MagicMock(return_value="A")}
):
with patch("salt.utils.path.which", MagicMock(return_value="dig")), patch.dict(
network.__utils__, {"network.sanitize_host": MagicMock(return_value="A")}
), patch.dict(network.__salt__, {"cmd.run": MagicMock(return_value="A")}):
self.assertEqual(network.dig("host"), "A")
def test_arp(self):
@ -146,7 +149,9 @@ class NetworkTestCase(TestCase, LoaderModuleMockMixin):
Test for return a dictionary of information about
all the interfaces on the minion
"""
with patch.object(salt.utils.network, "interfaces", return_value={}):
with patch.dict(
network.__utils__, {"network.interfaces": MagicMock(return_value={})}
):
self.assertDictEqual(network.interfaces(), {})
def test_hw_addr(self):
@ -154,28 +159,36 @@ class NetworkTestCase(TestCase, LoaderModuleMockMixin):
Test for return the hardware address (a.k.a. MAC address)
for a given interface
"""
with patch.object(salt.utils.network, "hw_addr", return_value={}):
with patch.dict(
network.__utils__, {"network.hw_addr": MagicMock(return_value={})}
):
self.assertDictEqual(network.hw_addr("iface"), {})
def test_interface(self):
"""
Test for return the inet address for a given interface
"""
with patch.object(salt.utils.network, "interface", return_value={}):
with patch.dict(
network.__utils__, {"network.interface": MagicMock(return_value={})}
):
self.assertDictEqual(network.interface("iface"), {})
def test_interface_ip(self):
"""
Test for return the inet address for a given interface
"""
with patch.object(salt.utils.network, "interface_ip", return_value={}):
with patch.dict(
network.__utils__, {"network.interface_ip": MagicMock(return_value={})}
):
self.assertDictEqual(network.interface_ip("iface"), {})
def test_subnets(self):
"""
Test for returns a list of subnets to which the host belongs
"""
with patch.object(salt.utils.network, "subnets", return_value={}):
with patch.dict(
network.__utils__, {"network.subnets": MagicMock(return_value={})}
):
self.assertDictEqual(network.subnets(), {})
def test_in_subnet(self):
@ -183,20 +196,25 @@ class NetworkTestCase(TestCase, LoaderModuleMockMixin):
Test for returns True if host is within specified
subnet, otherwise False.
"""
with patch.object(salt.utils.network, "in_subnet", return_value={}):
with patch.dict(
network.__utils__, {"network.in_subnet": MagicMock(return_value={})}
):
self.assertDictEqual(network.in_subnet("iface"), {})
def test_ip_addrs(self):
"""
Test for returns a list of IPv4 addresses assigned to the host.
"""
with patch.object(salt.utils.network, "ip_addrs", return_value=["0.0.0.0"]):
with patch.object(salt.utils.network, "in_subnet", return_value=True):
self.assertListEqual(
network.ip_addrs("interface", "include_loopback", "cidr"),
["0.0.0.0"],
)
with patch.dict(
network.__utils__,
{
"network.ip_addrs": MagicMock(return_value=["0.0.0.0"]),
"network.in_subnet": MagicMock(return_value=True),
},
):
self.assertListEqual(
network.ip_addrs("interface", "include_loopback", "cidr"), ["0.0.0.0"]
)
self.assertListEqual(
network.ip_addrs("interface", "include_loopback"), ["0.0.0.0"]
)
@ -205,14 +223,16 @@ class NetworkTestCase(TestCase, LoaderModuleMockMixin):
"""
Test for returns a list of IPv6 addresses assigned to the host.
"""
with patch.object(salt.utils.network, "ip_addrs6", return_value=["A"]):
with patch.dict(
network.__utils__, {"network.ip_addrs6": MagicMock(return_value=["A"])}
):
self.assertListEqual(network.ip_addrs6("int", "include"), ["A"])
def test_get_hostname(self):
"""
Test for Get hostname
"""
with patch.object(network.socket, "gethostname", return_value="A"):
with patch.object(socket, "gethostname", return_value="A"):
self.assertEqual(network.get_hostname(), "A")
def test_mod_hostname(self):
@ -222,14 +242,16 @@ class NetworkTestCase(TestCase, LoaderModuleMockMixin):
self.assertFalse(network.mod_hostname(None))
file_d = "\n".join(["#", "A B C D,E,F G H"])
with patch.object(
salt.utils.path, "which", return_value="hostname"
with patch.dict(
network.__utils__,
{
"path.which": MagicMock(return_value="hostname"),
"files.fopen": mock_open(read_data=file_d),
},
), patch.dict(
network.__salt__, {"cmd.run": MagicMock(return_value=None)}
), patch.dict(
network.__grains__, {"os_family": "A"}
), patch(
"salt.utils.files.fopen", mock_open(read_data=file_d)
):
self.assertTrue(network.mod_hostname("hostname"))
@ -251,7 +273,10 @@ class NetworkTestCase(TestCase, LoaderModuleMockMixin):
ret = "Unable to connect to host (0) on tcp port port"
mock_socket.side_effect = Exception("foo")
with patch.object(salt.utils.network, "sanitize_host", return_value="A"):
with patch.dict(
network.__utils__,
{"network.sanitize_host": MagicMock(return_value="A")},
):
with patch.object(
socket,
"getaddrinfo",
@ -267,7 +292,10 @@ class NetworkTestCase(TestCase, LoaderModuleMockMixin):
mock_socket.settimeout().return_value = None
mock_socket.connect().return_value = None
mock_socket.shutdown().return_value = None
with patch.object(salt.utils.network, "sanitize_host", return_value="A"):
with patch.dict(
network.__utils__,
{"network.sanitize_host": MagicMock(return_value="A")},
):
with patch.object(
socket,
"getaddrinfo",

View file

@ -3,58 +3,55 @@
:synopsis: Unit Tests for Package Management module 'module.opkg'
:platform: Linux
"""
# pylint: disable=import-error,3rd-party-module-not-gated
# Import Python libs
from __future__ import absolute_import, print_function, unicode_literals
import collections
import copy
import salt.modules.opkg as opkg
# Import Salt Libs
from salt.ext import six
# Import Salt Testing Libs
from tests.support.mixins import LoaderModuleMockMixin
from tests.support.mock import MagicMock, patch
from tests.support.unit import TestCase
# pylint: disable=import-error,3rd-party-module-not-gated
OPKG_VIM_INFO = {
"vim": {
"Package": "vim",
"Version": "7.4.769-r0.31",
"Status": "install ok installed",
}
}
OPKG_VIM_FILES = {
"errors": [],
"packages": {
"vim": [
"/usr/bin/view",
"/usr/bin/vim.vim",
"/usr/bin/xxd",
"/usr/bin/vimdiff",
"/usr/bin/rview",
"/usr/bin/rvim",
"/usr/bin/ex",
]
},
}
INSTALLED = {"vim": {"new": "7.4", "old": six.text_type()}}
REMOVED = {"vim": {"new": six.text_type(), "old": "7.4"}}
PACKAGES = {"vim": "7.4"}
class OpkgTestCase(TestCase, LoaderModuleMockMixin):
"""
Test cases for salt.modules.opkg
"""
@classmethod
def setUpClass(cls):
cls.opkg_vim_info = {
"vim": {
"Package": "vim",
"Version": "7.4.769-r0.31",
"Status": "install ok installed",
}
}
cls.opkg_vim_files = {
"errors": [],
"packages": {
"vim": [
"/usr/bin/view",
"/usr/bin/vim.vim",
"/usr/bin/xxd",
"/usr/bin/vimdiff",
"/usr/bin/rview",
"/usr/bin/rvim",
"/usr/bin/ex",
]
},
}
cls.installed = {"vim": {"new": "7.4", "old": ""}}
cls.removed = {"vim": {"new": "", "old": "7.4"}}
cls.packages = {"vim": "7.4"}
@classmethod
def tearDownClass(cls):
cls.opkg_vim_info = (
cls.opkg_vim_files
) = cls.installed = cls.removed = cls.packages = None
def setup_loader_modules(self): # pylint: disable=no-self-use
"""
Tested modules
@ -66,7 +63,7 @@ class OpkgTestCase(TestCase, LoaderModuleMockMixin):
Test - Returns a string representing the package version or an empty string if
not installed.
"""
version = OPKG_VIM_INFO["vim"]["Version"]
version = self.opkg_vim_info["vim"]["Version"]
mock = MagicMock(return_value=version)
with patch.dict(opkg.__salt__, {"pkg_resource.version": mock}):
self.assertEqual(opkg.version(*["vim"]), version)
@ -82,22 +79,22 @@ class OpkgTestCase(TestCase, LoaderModuleMockMixin):
"""
Test - List the files that belong to a package, grouped by package.
"""
std_out = "\n".join(OPKG_VIM_FILES["packages"]["vim"])
std_out = "\n".join(self.opkg_vim_files["packages"]["vim"])
ret_value = {"stdout": std_out}
mock = MagicMock(return_value=ret_value)
with patch.dict(opkg.__salt__, {"cmd.run_all": mock}):
self.assertEqual(opkg.file_dict("vim"), OPKG_VIM_FILES)
self.assertEqual(opkg.file_dict("vim"), self.opkg_vim_files)
def test_file_list(self):
"""
Test - List the files that belong to a package.
"""
std_out = "\n".join(OPKG_VIM_FILES["packages"]["vim"])
std_out = "\n".join(self.opkg_vim_files["packages"]["vim"])
ret_value = {"stdout": std_out}
mock = MagicMock(return_value=ret_value)
files = {
"errors": OPKG_VIM_FILES["errors"],
"files": OPKG_VIM_FILES["packages"]["vim"],
"errors": self.opkg_vim_files["errors"],
"files": self.opkg_vim_files["packages"]["vim"],
}
with patch.dict(opkg.__salt__, {"cmd.run_all": mock}):
self.assertEqual(opkg.file_list("vim"), files)
@ -116,7 +113,7 @@ class OpkgTestCase(TestCase, LoaderModuleMockMixin):
Test - Install packages.
"""
with patch(
"salt.modules.opkg.list_pkgs", MagicMock(side_effect=[{}, PACKAGES])
"salt.modules.opkg.list_pkgs", MagicMock(side_effect=[{}, self.packages])
):
ret_value = {"retcode": 0}
mock = MagicMock(return_value=ret_value)
@ -132,14 +129,15 @@ class OpkgTestCase(TestCase, LoaderModuleMockMixin):
}
}
with patch.multiple(opkg, **patch_kwargs):
self.assertEqual(opkg.install("vim:7.4"), INSTALLED)
self.assertEqual(opkg.install("vim:7.4"), self.installed)
def test_install_noaction(self):
"""
Test - Install packages.
"""
with patch("salt.modules.opkg.list_pkgs", MagicMock(return_value=({}))):
ret_value = {"retcode": 0}
with patch("salt.modules.opkg.list_pkgs", MagicMock(side_effect=({}, {}))):
std_out = "Downloading http://feedserver/feeds/test/vim_7.4_arch.ipk.\n\nInstalling vim (7.4) on root\n"
ret_value = {"retcode": 0, "stdout": std_out}
mock = MagicMock(return_value=ret_value)
patch_kwargs = {
"__salt__": {
@ -153,14 +151,14 @@ class OpkgTestCase(TestCase, LoaderModuleMockMixin):
}
}
with patch.multiple(opkg, **patch_kwargs):
self.assertEqual(opkg.install("vim:7.4", test=True), {})
self.assertEqual(opkg.install("vim:7.4", test=True), self.installed)
def test_remove(self):
"""
Test - Remove packages.
"""
with patch(
"salt.modules.opkg.list_pkgs", MagicMock(side_effect=[PACKAGES, {}])
"salt.modules.opkg.list_pkgs", MagicMock(side_effect=[self.packages, {}])
):
ret_value = {"retcode": 0}
mock = MagicMock(return_value=ret_value)
@ -176,14 +174,18 @@ class OpkgTestCase(TestCase, LoaderModuleMockMixin):
}
}
with patch.multiple(opkg, **patch_kwargs):
self.assertEqual(opkg.remove("vim"), REMOVED)
self.assertEqual(opkg.remove("vim"), self.removed)
def test_remove_noaction(self):
"""
Test - Remove packages.
"""
with patch("salt.modules.opkg.list_pkgs", MagicMock(return_value=({}))):
ret_value = {"retcode": 0}
with patch(
"salt.modules.opkg.list_pkgs",
MagicMock(side_effect=[self.packages, self.packages]),
):
std_out = "\nRemoving vim (7.4) from root...\n"
ret_value = {"retcode": 0, "stdout": std_out}
mock = MagicMock(return_value=ret_value)
patch_kwargs = {
"__salt__": {
@ -197,17 +199,19 @@ class OpkgTestCase(TestCase, LoaderModuleMockMixin):
}
}
with patch.multiple(opkg, **patch_kwargs):
self.assertEqual(opkg.remove("vim:7.4", test=True), {})
self.assertEqual(opkg.remove("vim:7.4", test=True), self.removed)
def test_info_installed(self):
"""
Test - Return the information of the named package(s) installed on the system.
"""
installed = copy.deepcopy(OPKG_VIM_INFO["vim"])
installed = copy.deepcopy(self.opkg_vim_info["vim"])
del installed["Package"]
ordered_info = collections.OrderedDict(sorted(installed.items()))
expected_dict = {"vim": {k.lower(): v for k, v in ordered_info.items()}}
std_out = "\n".join([k + ": " + v for k, v in OPKG_VIM_INFO["vim"].items()])
std_out = "\n".join(
[k + ": " + v for k, v in self.opkg_vim_info["vim"].items()]
)
ret_value = {"stdout": std_out, "retcode": 0}
mock = MagicMock(return_value=ret_value)
with patch.dict(opkg.__salt__, {"cmd.run_all": mock}):

View file

@ -1,16 +1,12 @@
# -*- coding: utf-8 -*-
# Import Python Libs
from __future__ import absolute_import, print_function, unicode_literals
# Import Salt Libs
import salt.modules.reg as reg
import salt.utils.stringutils
import salt.utils.win_reg
from salt.exceptions import CommandExecutionError
# Import Salt Testing Libs
from tests.support.helpers import destructiveTest, generate_random_name
from tests.support.helpers import destructiveTest, random_string
from tests.support.mixins import LoaderModuleMockMixin
from tests.support.mock import MagicMock, patch
from tests.support.unit import TestCase, skipIf
@ -26,7 +22,7 @@ UNICODE_KEY = "Unicode Key \N{TRADE MARK SIGN}"
UNICODE_VALUE = (
"Unicode Value " "\N{COPYRIGHT SIGN},\N{TRADE MARK SIGN},\N{REGISTERED SIGN}"
)
FAKE_KEY = "\\".join(["SOFTWARE", generate_random_name("SaltTesting-")])
FAKE_KEY = "SOFTWARE\\{}".format(random_string("SaltTesting-", lowercase=False))
@skipIf(not HAS_WIN32, "Tests require win32 libraries")

View file

@ -491,6 +491,52 @@ class SaltcheckTestCase(TestCase, LoaderModuleMockMixin):
)
self.assertEqual(returned["status"], "Pass")
def test_run_test_muliassert(self):
"""test"""
with patch.dict(
saltcheck.__salt__,
{
"config.get": MagicMock(return_value=True),
"sys.list_modules": MagicMock(return_value=["test"]),
"sys.list_functions": MagicMock(return_value=["test.echo"]),
"cp.cache_master": MagicMock(return_value=[True]),
},
):
returned = saltcheck.run_test(
test={
"module_and_function": "test.echo",
"assertions": [
{"assertion": "assertEqual", "expected_return": "This works!"},
{"assertion": "assertEqual", "expected_return": "This works!"},
],
"args": ["This works!"],
}
)
self.assertEqual(returned["status"], "Pass")
def test_run_test_muliassert_failure(self):
"""test"""
with patch.dict(
saltcheck.__salt__,
{
"config.get": MagicMock(return_value=True),
"sys.list_modules": MagicMock(return_value=["test"]),
"sys.list_functions": MagicMock(return_value=["test.echo"]),
"cp.cache_master": MagicMock(return_value=[True]),
},
):
returned = saltcheck.run_test(
test={
"module_and_function": "test.echo",
"assertions": [
{"assertion": "assertEqual", "expected_return": "WRONG"},
{"assertion": "assertEqual", "expected_return": "This works!"},
],
"args": ["This works!"],
}
)
self.assertEqual(returned["status"], "Fail")
def test_report_highstate_tests(self):
"""test report_highstate_tests"""
expected_output = {
@ -660,6 +706,34 @@ class SaltcheckTestCase(TestCase, LoaderModuleMockMixin):
val_ret = sc_instance._SaltCheck__is_valid_test(test_dict)
self.assertEqual(val_ret, expected_return)
# Succeed on multiple assertions
test_dict = {
"module_and_function": "test.echo",
"args": ["somearg"],
"assertions": [
{
"assertion": "assertEqual",
"assertion_section": "0:program",
"expected_return": "systemd-resolve",
},
{
"assertion": "assertEqual",
"assertion_section": "0:proto",
"expected_return": "udp",
},
],
}
expected_return = True
with patch.dict(
saltcheck.__salt__,
{
"sys.list_modules": MagicMock(return_value=["test"]),
"sys.list_functions": MagicMock(return_value=["test.echo"]),
},
):
val_ret = sc_instance._SaltCheck__is_valid_test(test_dict)
self.assertEqual(val_ret, expected_return)
def test_sls_path_generation(self):
"""test generation of sls paths"""
with patch.dict(

View file

@ -0,0 +1,61 @@
# -*- coding: utf-8 -*-
from __future__ import absolute_import
import logging
import salt.modules.slsutil as slsutil
from tests.support.unit import TestCase
log = logging.getLogger(__name__)
class SlsUtilTestCase(TestCase):
"""
Test cases for salt.modules.slsutil
"""
def test_banner(self):
"""
Test banner function
"""
self.check_banner()
self.check_banner(width=81)
self.check_banner(width=20)
self.check_banner(commentchar="//", borderchar="-")
self.check_banner(title="title here", text="text here")
self.check_banner(commentchar=" *")
def check_banner(
self,
width=72,
commentchar="#",
borderchar="#",
blockstart=None,
blockend=None,
title=None,
text=None,
newline=True,
):
result = slsutil.banner(
width=width,
commentchar=commentchar,
borderchar=borderchar,
blockstart=blockstart,
blockend=blockend,
title=title,
text=text,
newline=newline,
).splitlines()
for line in result:
self.assertEqual(len(line), width)
self.assertTrue(line.startswith(commentchar))
self.assertTrue(line.endswith(commentchar.strip()))
def test_boolstr(self):
"""
Test boolstr function
"""
self.assertEqual("yes", slsutil.boolstr(True, true="yes", false="no"))
self.assertEqual("no", slsutil.boolstr(False, true="yes", false="no"))

View file

@ -432,6 +432,23 @@ class StateTestCase(TestCase, LoaderModuleMockMixin):
with patch.object(state, "highstate", mock):
self.assertTrue(state.apply_(None))
def test_test(self):
"""
Test to apply states in test mode
"""
with patch.dict(state.__opts__, {"test": False}):
mock = MagicMock(return_value=True)
with patch.object(state, "sls", mock):
self.assertTrue(state.test(True))
mock.assert_called_once_with(True, test=True)
self.assertEqual(state.__opts__["test"], False)
mock = MagicMock(return_value=True)
with patch.object(state, "highstate", mock):
self.assertTrue(state.test(None))
mock.assert_called_once_with(test=True)
self.assertEqual(state.__opts__["test"], False)
def test_list_disabled(self):
"""
Test to list disabled states
@ -1194,6 +1211,14 @@ class StateTestCase(TestCase, LoaderModuleMockMixin):
with self.assertRaisesRegex(CommandExecutionError, lock_msg):
state.apply_(saltenv="base")
# Test "test" with SLS
with self.assertRaisesRegex(CommandExecutionError, lock_msg):
state.test("foo", saltenv="base")
# Test "test" with Highstate
with self.assertRaisesRegex(CommandExecutionError, lock_msg):
state.test(saltenv="base")
# Test highstate
with self.assertRaisesRegex(CommandExecutionError, lock_msg):
state.highstate(saltenv="base")

View file

@ -0,0 +1,72 @@
# -*- coding: utf-8 -*-
# Import future libs
from __future__ import absolute_import, print_function, unicode_literals
# Import 3rd-party libs
from io import BytesIO, StringIO
# Import salt module
import salt.modules.tomcat as tomcat
from salt.ext.six import string_types
from salt.ext.six.moves.urllib.request import (
HTTPBasicAuthHandler as _HTTPBasicAuthHandler,
)
from salt.ext.six.moves.urllib.request import (
HTTPDigestAuthHandler as _HTTPDigestAuthHandler,
)
from salt.ext.six.moves.urllib.request import build_opener as _build_opener
# Import Salt Testing libs
from tests.support.mixins import LoaderModuleMockMixin
from tests.support.mock import MagicMock, patch
from tests.support.unit import TestCase
class TomcatTestCasse(TestCase, LoaderModuleMockMixin):
"""
Test cases for salt.modules.tomcat
"""
def setup_loader_modules(self):
return {tomcat: {}}
def test_tomcat_wget_no_bytestring(self):
responses = {
"string": StringIO("Best response ever\r\nAnd you know it!"),
"bytes": BytesIO(b"Best response ever\r\nAnd you know it!"),
}
string_mock = MagicMock(return_value=responses["string"])
bytes_mock = MagicMock(return_value=responses["bytes"])
with patch(
"salt.modules.tomcat._auth",
MagicMock(
return_value=_build_opener(
_HTTPBasicAuthHandler(), _HTTPDigestAuthHandler()
)
),
):
with patch("salt.modules.tomcat._urlopen", string_mock):
response = tomcat._wget(
"tomcat.wait", url="http://localhost:8080/nofail"
)
for line in response["msg"]:
self.assertIsInstance(line, string_types)
with patch("salt.modules.tomcat._urlopen", bytes_mock):
try:
response = tomcat._wget(
"tomcat.wait", url="http://localhost:8080/nofail"
)
except TypeError as type_error:
if (
type_error.args[0]
== "startswith first arg must be bytes or a tuple of bytes, not str"
):
self.fail("Got back a byte string, should've been a string")
else:
raise type_error
for line in response["msg"]:
self.assertIsInstance(line, string_types)

View file

@ -1336,6 +1336,40 @@ class VirtTestCase(TestCase, LoaderModuleMockMixin):
define_mock = MagicMock(return_value=True)
self.mock_conn.defineXML = define_mock
# No parameter passed case
self.assertEqual(
{
"definition": False,
"disk": {"attached": [], "detached": []},
"interface": {"attached": [], "detached": []},
},
virt.update("my_vm"),
)
# Same parameters passed than in default virt.defined state case
self.assertEqual(
{
"definition": False,
"disk": {"attached": [], "detached": []},
"interface": {"attached": [], "detached": []},
},
virt.update(
"my_vm",
cpu=None,
mem=None,
disk_profile=None,
disks=None,
nic_profile=None,
interfaces=None,
graphics=None,
live=True,
connection=None,
username=None,
password=None,
boot=None,
),
)
# Update vcpus case
setvcpus_mock = MagicMock(return_value=0)
domain_mock.setVcpusFlags = setvcpus_mock

View file

@ -418,7 +418,7 @@ class WinLGPOGetPolicyFromPolicyInfoTestCase(TestCase, LoaderModuleMockMixin):
def test_get_policy_name(self):
result = win_lgpo.get_policy(
policy_name="Network firewall: Public: Settings: Display a " "notification",
policy_name="Network firewall: Public: Settings: Display a notification",
policy_class="machine",
return_value_only=True,
return_full_policy_names=True,
@ -440,7 +440,7 @@ class WinLGPOGetPolicyFromPolicyInfoTestCase(TestCase, LoaderModuleMockMixin):
def test_get_policy_name_full_return(self):
result = win_lgpo.get_policy(
policy_name="Network firewall: Public: Settings: Display a " "notification",
policy_name="Network firewall: Public: Settings: Display a notification",
policy_class="machine",
return_value_only=False,
return_full_policy_names=True,
@ -466,7 +466,7 @@ class WinLGPOGetPolicyFromPolicyInfoTestCase(TestCase, LoaderModuleMockMixin):
def test_get_policy_name_full_return_ids(self):
result = win_lgpo.get_policy(
policy_name="Network firewall: Public: Settings: Display a " "notification",
policy_name="Network firewall: Public: Settings: Display a notification",
policy_class="machine",
return_value_only=False,
return_full_policy_names=False,
@ -891,7 +891,7 @@ class WinLGPOGetPolicyFromPolicyResources(TestCase, LoaderModuleMockMixin):
def setUp(self):
if self.adml_data is None:
self.adml_data = win_lgpo._get_policy_resources("en-US")
self.adml_data = win_lgpo._get_policy_resources(language="en-US")
def test__getAdmlPresentationRefId(self):
ref_id = "LetAppsAccessAccountInfo_Enum"
@ -902,7 +902,7 @@ class WinLGPOGetPolicyFromPolicyResources(TestCase, LoaderModuleMockMixin):
def test__getAdmlPresentationRefId_result_text_is_none(self):
ref_id = "LetAppsAccessAccountInfo_UserInControlOfTheseApps_List"
expected = (
"Put user in control of these specific apps (use Package " "Family Names)"
"Put user in control of these specific apps (use Package Family Names)"
)
result = win_lgpo._getAdmlPresentationRefId(self.adml_data, ref_id)
self.assertEqual(result, expected)

View file

@ -16,7 +16,7 @@ from tests.support.unit import TestCase, skipIf
@skipIf(not salt.utils.platform.is_windows(), "System is not Windows")
class WinTaskTestCase(TestCase):
"""
Test cases for salt.modules.win_task
Test cases for salt.modules.win_task
"""
def test_repeat_interval(self):

View file

@ -7,6 +7,7 @@ from __future__ import absolute_import, print_function, unicode_literals
# Import Salt Libs
import salt.modules.win_timezone as win_timezone
from salt.exceptions import CommandExecutionError
# Import Salt Testing Libs
from tests.support.mixins import LoaderModuleMockMixin
@ -23,37 +24,45 @@ class WinTimezoneTestCase(TestCase, LoaderModuleMockMixin):
def setup_loader_modules(self):
return {win_timezone: {}}
# 'get_zone' function tests: 3
def test_get_zone(self):
def test_get_zone_normal(self):
"""
Test if it gets current timezone (i.e. Asia/Calcutta)
Test if it gets the current timezone (i.e. Asia/Calcutta)
"""
mock_read = MagicMock(
side_effect=[
{"vdata": "India Standard Time"},
{"vdata": "Indian Standard Time"},
]
mock_read_ok = MagicMock(
return_value={
"pid": 78,
"retcode": 0,
"stderr": "",
"stdout": "India Standard Time",
}
)
with patch.dict(win_timezone.__utils__, {"reg.read_value": mock_read}):
with patch.dict(win_timezone.__salt__, {"cmd.run_all": mock_read_ok}):
self.assertEqual(win_timezone.get_zone(), "Asia/Calcutta")
def test_get_zone_unknown(self):
"""
Test get_zone with unknown timezone (i.e. Indian Standard Time)
"""
mock_read_error = MagicMock(
return_value={
"pid": 78,
"retcode": 0,
"stderr": "",
"stdout": "Indian Standard Time",
}
)
with patch.dict(win_timezone.__salt__, {"cmd.run_all": mock_read_error}):
self.assertEqual(win_timezone.get_zone(), "Unknown")
def test_get_zone_null_terminated(self):
def test_get_zone_error(self):
"""
Test if it handles instances where the registry contains null values
Test get_zone when it encounters an error
"""
mock_read = MagicMock(
side_effect=[
{"vdata": "India Standard Time\0\0\0\0"},
{"vdata": "Indian Standard Time\0\0some more junk data\0\0"},
]
mock_read_fatal = MagicMock(
return_value={"pid": 78, "retcode": 1, "stderr": "", "stdout": ""}
)
with patch.dict(win_timezone.__utils__, {"reg.read_value": mock_read}):
self.assertEqual(win_timezone.get_zone(), "Asia/Calcutta")
self.assertEqual(win_timezone.get_zone(), "Unknown")
with patch.dict(win_timezone.__salt__, {"cmd.run_all": mock_read_fatal}):
self.assertRaises(CommandExecutionError, win_timezone.get_zone)
# 'get_offset' function tests: 1
@ -61,9 +70,16 @@ class WinTimezoneTestCase(TestCase, LoaderModuleMockMixin):
"""
Test if it gets the current numeric timezone offset from UTC (i.e. +0530)
"""
mock_read = MagicMock(return_value={"vdata": "India Standard Time"})
mock_read = MagicMock(
return_value={
"pid": 78,
"retcode": 0,
"stderr": "",
"stdout": "India Standard Time",
}
)
with patch.dict(win_timezone.__utils__, {"reg.read_value": mock_read}):
with patch.dict(win_timezone.__salt__, {"cmd.run_all": mock_read}):
self.assertEqual(win_timezone.get_offset(), "+0530")
# 'get_zonecode' function tests: 1
@ -72,9 +88,16 @@ class WinTimezoneTestCase(TestCase, LoaderModuleMockMixin):
"""
Test if it gets the current timezone code (i.e. PST, MDT, etc.)
"""
mock_read = MagicMock(return_value={"vdata": "India Standard Time"})
mock_read = MagicMock(
return_value={
"pid": 78,
"retcode": 0,
"stderr": "",
"stdout": "India Standard Time",
}
)
with patch.dict(win_timezone.__utils__, {"reg.read_value": mock_read}):
with patch.dict(win_timezone.__salt__, {"cmd.run_all": mock_read}):
self.assertEqual(win_timezone.get_zonecode(), "IST")
# 'set_zone' function tests: 1
@ -83,13 +106,20 @@ class WinTimezoneTestCase(TestCase, LoaderModuleMockMixin):
"""
Test if it sets the system timezone.
"""
mock_cmd = MagicMock(
mock_write = MagicMock(
return_value={"pid": 78, "retcode": 0, "stderr": "", "stdout": ""}
)
mock_read = MagicMock(return_value={"vdata": "India Standard Time"})
mock_read = MagicMock(
return_value={
"pid": 78,
"retcode": 0,
"stderr": "",
"stdout": "India Standard Time",
}
)
with patch.dict(win_timezone.__salt__, {"cmd.run_all": mock_cmd}), patch.dict(
win_timezone.__utils__, {"reg.read_value": mock_read}
with patch.dict(win_timezone.__salt__, {"cmd.run_all": mock_write}), patch.dict(
win_timezone.__salt__, {"cmd.run_all": mock_read}
):
self.assertTrue(win_timezone.set_zone("Asia/Calcutta"))
@ -102,9 +132,16 @@ class WinTimezoneTestCase(TestCase, LoaderModuleMockMixin):
the one set in /etc/localtime. Returns True if they match,
and False if not. Mostly useful for running state checks.
"""
mock_read = MagicMock(return_value={"vdata": "India Standard Time"})
mock_read = MagicMock(
return_value={
"pid": 78,
"retcode": 0,
"stderr": "",
"stdout": "India Standard Time",
}
)
with patch.dict(win_timezone.__utils__, {"reg.read_value": mock_read}):
with patch.dict(win_timezone.__salt__, {"cmd.run_all": mock_read}):
self.assertTrue(win_timezone.zone_compare("Asia/Calcutta"))
# 'get_hwclock' function tests: 1

View file

@ -145,8 +145,8 @@ class Base(TestCase, LoaderModuleMockMixin):
salt.utils.path.which_bin(KNOWN_VIRTUALENV_BINARY_NAMES) is None,
"The 'virtualenv' packaged needs to be installed",
)
@requires_network()
class BuildoutTestCase(Base):
@requires_network()
@skipIf(True, "SLOWTEST skip")
def test_onlyif_unless(self):
b_dir = os.path.join(self.tdir, "b")
@ -157,7 +157,6 @@ class BuildoutTestCase(Base):
self.assertTrue(ret["comment"] == "unless condition is true")
self.assertTrue(ret["status"] is True)
@requires_network()
@skipIf(True, "SLOWTEST skip")
def test_salt_callback(self):
@buildout._salt_callback
@ -215,7 +214,6 @@ class BuildoutTestCase(Base):
self.assertTrue(0 == len(buildout.LOG.by_level[l]))
# pylint: enable=invalid-sequence-index
@requires_network()
@skipIf(True, "SLOWTEST skip")
def test_get_bootstrap_url(self):
for path in [
@ -240,7 +238,6 @@ class BuildoutTestCase(Base):
"b2 url for {0}".format(path),
)
@requires_network()
@skipIf(True, "SLOWTEST skip")
def test_get_buildout_ver(self):
for path in [
@ -261,7 +258,6 @@ class BuildoutTestCase(Base):
2, buildout._get_buildout_ver(path), "2 for {0}".format(path)
)
@requires_network()
@skipIf(True, "SLOWTEST skip")
def test_get_bootstrap_content(self):
self.assertEqual(
@ -277,7 +273,6 @@ class BuildoutTestCase(Base):
buildout._get_bootstrap_content(os.path.join(self.tdir, "var", "tb", "2")),
)
@requires_network()
@skipIf(True, "SLOWTEST skip")
def test_logger_clean(self):
buildout.LOG.clear()
@ -296,7 +291,6 @@ class BuildoutTestCase(Base):
not in [len(buildout.LOG.by_level[a]) > 0 for a in buildout.LOG.by_level]
)
@requires_network()
@skipIf(True, "SLOWTEST skip")
def test_logger_loggers(self):
buildout.LOG.clear()
@ -309,7 +303,6 @@ class BuildoutTestCase(Base):
self.assertEqual(buildout.LOG.by_level[i][0], "foo")
self.assertEqual(buildout.LOG.by_level[i][-1], "moo")
@requires_network()
@skipIf(True, "SLOWTEST skip")
def test__find_cfgs(self):
result = sorted(
@ -329,7 +322,6 @@ class BuildoutTestCase(Base):
)
self.assertEqual(result, assertlist)
@requires_network()
def skip_test_upgrade_bootstrap(self):
b_dir = os.path.join(self.tdir, "b")
bpy = os.path.join(b_dir, "bootstrap.py")
@ -357,6 +349,7 @@ class BuildoutTestCase(Base):
salt.utils.path.which_bin(KNOWN_VIRTUALENV_BINARY_NAMES) is None,
"The 'virtualenv' packaged needs to be installed",
)
@requires_network()
class BuildoutOnlineTestCase(Base):
@classmethod
def setUpClass(cls):
@ -426,7 +419,6 @@ class BuildoutOnlineTestCase(Base):
]
)
@requires_network()
@skipIf(True, "TODO this test should probably be fixed")
def test_buildout_bootstrap(self):
b_dir = os.path.join(self.tdir, "b")
@ -477,7 +469,6 @@ class BuildoutOnlineTestCase(Base):
or ("setuptools>=0.7" in comment)
)
@requires_network()
@skipIf(True, "SLOWTEST skip")
def test_run_buildout(self):
if salt.modules.virtualenv_mod.virtualenv_ver(self.ppy_st) >= (20, 0, 0):
@ -493,7 +484,6 @@ class BuildoutOnlineTestCase(Base):
self.assertTrue("Installing a" in out)
self.assertTrue("Installing b" in out)
@requires_network()
@skipIf(True, "SLOWTEST skip")
def test_buildout(self):
if salt.modules.virtualenv_mod.virtualenv_ver(self.ppy_st) >= (20, 0, 0):

View file

@ -304,9 +304,23 @@ class ZypperTestCase(TestCase, LoaderModuleMockMixin):
ref_out = {"retcode": 0, "stdout": get_test_data(filename)}
with patch.dict(
zypper.__salt__, {"cmd.run_all": MagicMock(return_value=ref_out)}
):
cmd_run_all = MagicMock(return_value=ref_out)
mock_call = call(
[
"zypper",
"--non-interactive",
"--xmlout",
"--no-refresh",
"--disable-repositories",
"products",
u"-i",
],
env={"ZYPP_READONLY_HACK": "1"},
output_loglevel="trace",
python_shell=False,
)
with patch.dict(zypper.__salt__, {"cmd.run_all": cmd_run_all}):
products = zypper.list_products()
self.assertEqual(len(products), 7)
self.assertIn(
@ -329,6 +343,7 @@ class ZypperTestCase(TestCase, LoaderModuleMockMixin):
self.assertEqual(
test_data[kwd], sorted([prod.get(kwd) for prod in products])
)
cmd_run_all.assert_has_calls([mock_call])
def test_refresh_db(self):
"""

View file

@ -6,7 +6,6 @@ tests.unit.returners.local_cache_test
Unit tests for the Default Job Cache (local_cache).
"""
# Import Python libs
from __future__ import absolute_import, print_function, unicode_literals
import logging
@ -16,15 +15,11 @@ import tempfile
import time
import salt.returners.local_cache as local_cache
# Import Salt libs
import salt.utils.files
import salt.utils.jid
import salt.utils.job
import salt.utils.platform
from salt.ext import six
# Import Salt Testing libs
from tests.support.mixins import (
AdaptedConfigurationTestCaseMixin,
LoaderModuleMockMixin,
@ -43,7 +38,9 @@ class LocalCacheCleanOldJobsTestCase(TestCase, LoaderModuleMockMixin):
@classmethod
def setUpClass(cls):
cls.TMP_CACHE_DIR = os.path.join(RUNTIME_VARS.TMP, "salt_test_job_cache")
cls.TMP_CACHE_DIR = tempfile.mkdtemp(
prefix="salt_test_job_cache", dir=RUNTIME_VARS.TMP
)
cls.TMP_JID_DIR = os.path.join(cls.TMP_CACHE_DIR, "jobs")
def setup_loader_modules(self):
@ -59,7 +56,7 @@ class LocalCacheCleanOldJobsTestCase(TestCase, LoaderModuleMockMixin):
_make_tmp_jid_dirs replaces it.
"""
if os.path.exists(self.TMP_CACHE_DIR):
shutil.rmtree(self.TMP_CACHE_DIR)
shutil.rmtree(self.TMP_CACHE_DIR, ignore_errors=True)
def test_clean_old_jobs_no_jid_root(self):
"""
@ -229,7 +226,9 @@ class Local_CacheTest(
@classmethod
def setUpClass(cls):
cls.TMP_CACHE_DIR = os.path.join(RUNTIME_VARS.TMP, "rootdir", "cache")
cls.TMP_CACHE_DIR = tempfile.mkdtemp(
prefix="salt_test_local_cache", dir=RUNTIME_VARS.TMP
)
cls.JOBS_DIR = os.path.join(cls.TMP_CACHE_DIR, "jobs")
cls.JID_DIR = os.path.join(
cls.JOBS_DIR,
@ -258,7 +257,7 @@ class Local_CacheTest(
attr_instance = getattr(cls, attrname)
if isinstance(attr_instance, six.string_types):
if os.path.isdir(attr_instance):
shutil.rmtree(attr_instance)
shutil.rmtree(attr_instance, ignore_errors=True)
elif os.path.isfile(attr_instance):
os.unlink(attr_instance)
delattr(cls, attrname)

View file

@ -0,0 +1,771 @@
# -*- coding: utf-8 -*-
"""
:maintainer: Alberto Planas <aplanas@suse.com>
:platform: Linux
"""
# Import Python Libs
from __future__ import absolute_import, print_function, unicode_literals
import pytest
import salt.states.btrfs as btrfs
import salt.utils.platform
from salt.exceptions import CommandExecutionError
# Import Salt Testing Libs
from tests.support.mixins import LoaderModuleMockMixin
from tests.support.mock import MagicMock, patch
from tests.support.unit import TestCase, skipIf
@skipIf(salt.utils.platform.is_windows(), "Non-Windows feature")
class BtrfsTestCase(TestCase, LoaderModuleMockMixin):
"""
Test cases for salt.states.btrfs
"""
def setup_loader_modules(self):
return {btrfs: {"__salt__": {}, "__states__": {}, "__utils__": {}}}
@patch("salt.states.btrfs._umount")
@patch("tempfile.mkdtemp")
def test__mount_fails(self, mkdtemp, umount):
"""
Test mounting a device in a temporary place.
"""
mkdtemp.return_value = "/tmp/xxx"
states_mock = {
"mount.mounted": MagicMock(return_value={"result": False}),
}
with patch.dict(btrfs.__states__, states_mock):
assert btrfs._mount("/dev/sda1", use_default=False) is None
mkdtemp.assert_called_once()
states_mock["mount.mounted"].assert_called_with(
"/tmp/xxx",
device="/dev/sda1",
fstype="btrfs",
opts="subvol=/",
persist=False,
)
umount.assert_called_with("/tmp/xxx")
@patch("salt.states.btrfs._umount")
@patch("tempfile.mkdtemp")
def test__mount(self, mkdtemp, umount):
"""
Test mounting a device in a temporary place.
"""
mkdtemp.return_value = "/tmp/xxx"
states_mock = {
"mount.mounted": MagicMock(return_value={"result": True}),
}
with patch.dict(btrfs.__states__, states_mock):
assert btrfs._mount("/dev/sda1", use_default=False) == "/tmp/xxx"
mkdtemp.assert_called_once()
states_mock["mount.mounted"].assert_called_with(
"/tmp/xxx",
device="/dev/sda1",
fstype="btrfs",
opts="subvol=/",
persist=False,
)
umount.assert_not_called()
@patch("salt.states.btrfs._umount")
@patch("tempfile.mkdtemp")
def test__mount_use_default(self, mkdtemp, umount):
"""
Test mounting a device in a temporary place.
"""
mkdtemp.return_value = "/tmp/xxx"
states_mock = {
"mount.mounted": MagicMock(return_value={"result": True}),
}
with patch.dict(btrfs.__states__, states_mock):
assert btrfs._mount("/dev/sda1", use_default=True) == "/tmp/xxx"
mkdtemp.assert_called_once()
states_mock["mount.mounted"].assert_called_with(
"/tmp/xxx",
device="/dev/sda1",
fstype="btrfs",
opts="defaults",
persist=False,
)
umount.assert_not_called()
def test__umount(self):
"""
Test unmounting and cleaning up the temporary mount point.
"""
states_mock = {
"mount.unmounted": MagicMock(),
}
utils_mock = {
"files.rm_rf": MagicMock(),
}
with patch.dict(btrfs.__states__, states_mock), patch.dict(
btrfs.__utils__, utils_mock
):
btrfs._umount("/tmp/xxx")
states_mock["mount.unmounted"].assert_called_with("/tmp/xxx")
utils_mock["files.rm_rf"].assert_called_with("/tmp/xxx")
def test__is_default_not_default(self):
"""
Test if the subvolume is the current default.
"""
salt_mock = {
"btrfs.subvolume_show": MagicMock(
return_value={"@/var": {"subvolume id": "256"}}
),
"btrfs.subvolume_get_default": MagicMock(return_value={"id": "5"}),
}
with patch.dict(btrfs.__salt__, salt_mock):
assert not btrfs._is_default("/tmp/xxx/@/var", "/tmp/xxx", "@/var")
salt_mock["btrfs.subvolume_show"].assert_called_with("/tmp/xxx/@/var")
salt_mock["btrfs.subvolume_get_default"].assert_called_with("/tmp/xxx")
def test__is_default(self):
"""
Test if the subvolume is the current default.
"""
salt_mock = {
"btrfs.subvolume_show": MagicMock(
return_value={"@/var": {"subvolume id": "256"}}
),
"btrfs.subvolume_get_default": MagicMock(return_value={"id": "256"}),
}
with patch.dict(btrfs.__salt__, salt_mock):
assert btrfs._is_default("/tmp/xxx/@/var", "/tmp/xxx", "@/var")
salt_mock["btrfs.subvolume_show"].assert_called_with("/tmp/xxx/@/var")
salt_mock["btrfs.subvolume_get_default"].assert_called_with("/tmp/xxx")
def test__set_default(self):
"""
Test setting a subvolume as the current default.
"""
salt_mock = {
"btrfs.subvolume_show": MagicMock(
return_value={"@/var": {"subvolume id": "256"}}
),
"btrfs.subvolume_set_default": MagicMock(return_value=True),
}
with patch.dict(btrfs.__salt__, salt_mock):
assert btrfs._set_default("/tmp/xxx/@/var", "/tmp/xxx", "@/var")
salt_mock["btrfs.subvolume_show"].assert_called_with("/tmp/xxx/@/var")
salt_mock["btrfs.subvolume_set_default"].assert_called_with(
"256", "/tmp/xxx"
)
def test__is_cow_not_cow(self):
"""
Test if the subvolume is copy on write.
"""
salt_mock = {
"file.lsattr": MagicMock(return_value={"/tmp/xxx/@/var": ["C"]}),
}
with patch.dict(btrfs.__salt__, salt_mock):
assert not btrfs._is_cow("/tmp/xxx/@/var")
salt_mock["file.lsattr"].assert_called_with("/tmp/xxx/@")
def test__is_cow(self):
"""
Test if the subvolume is copy on write.
"""
salt_mock = {
"file.lsattr": MagicMock(return_value={"/tmp/xxx/@/var": []}),
}
with patch.dict(btrfs.__salt__, salt_mock):
assert btrfs._is_cow("/tmp/xxx/@/var")
salt_mock["file.lsattr"].assert_called_with("/tmp/xxx/@")
def test__unset_cow(self):
"""
Test disabling the subvolume as copy on write.
"""
salt_mock = {
"file.chattr": MagicMock(return_value=True),
}
with patch.dict(btrfs.__salt__, salt_mock):
assert btrfs._unset_cow("/tmp/xxx/@/var")
salt_mock["file.chattr"].assert_called_with(
"/tmp/xxx/@/var", operator="add", attributes="C"
)
@skipIf(salt.utils.platform.is_windows(), "Skip on Windows")
@patch("salt.states.btrfs._umount")
@patch("salt.states.btrfs._mount")
def test_subvolume_created_exists(self, mount, umount):
"""
Test creating a subvolume.
"""
mount.return_value = "/tmp/xxx"
salt_mock = {
"btrfs.subvolume_exists": MagicMock(return_value=True),
}
opts_mock = {
"test": False,
}
with patch.dict(btrfs.__salt__, salt_mock), patch.dict(
btrfs.__opts__, opts_mock
):
assert btrfs.subvolume_created(name="@/var", device="/dev/sda1") == {
"name": "@/var",
"result": True,
"changes": {},
"comment": ["Subvolume @/var already present"],
}
salt_mock["btrfs.subvolume_exists"].assert_called_with("/tmp/xxx/@/var")
mount.assert_called_once()
umount.assert_called_once()
@skipIf(salt.utils.platform.is_windows(), "Skip on Windows")
@patch("salt.states.btrfs._umount")
@patch("salt.states.btrfs._mount")
def test_subvolume_created_exists_test(self, mount, umount):
"""
Test creating a subvolume.
"""
mount.return_value = "/tmp/xxx"
salt_mock = {
"btrfs.subvolume_exists": MagicMock(return_value=True),
}
opts_mock = {
"test": True,
}
with patch.dict(btrfs.__salt__, salt_mock), patch.dict(
btrfs.__opts__, opts_mock
):
assert btrfs.subvolume_created(name="@/var", device="/dev/sda1") == {
"name": "@/var",
"result": None,
"changes": {},
"comment": ["Subvolume @/var already present"],
}
salt_mock["btrfs.subvolume_exists"].assert_called_with("/tmp/xxx/@/var")
mount.assert_called_once()
umount.assert_called_once()
@skipIf(salt.utils.platform.is_windows(), "Skip on Windows")
@patch("salt.states.btrfs._is_default")
@patch("salt.states.btrfs._umount")
@patch("salt.states.btrfs._mount")
def test_subvolume_created_exists_was_default(self, mount, umount, is_default):
"""
Test creating a subvolume.
"""
mount.return_value = "/tmp/xxx"
is_default.return_value = True
salt_mock = {
"btrfs.subvolume_exists": MagicMock(return_value=True),
}
opts_mock = {
"test": False,
}
with patch.dict(btrfs.__salt__, salt_mock), patch.dict(
btrfs.__opts__, opts_mock
):
assert btrfs.subvolume_created(
name="@/var", device="/dev/sda1", set_default=True
) == {
"name": "@/var",
"result": True,
"changes": {},
"comment": ["Subvolume @/var already present"],
}
salt_mock["btrfs.subvolume_exists"].assert_called_with("/tmp/xxx/@/var")
mount.assert_called_once()
umount.assert_called_once()
@skipIf(salt.utils.platform.is_windows(), "Skip on Windows")
@patch("salt.states.btrfs._set_default")
@patch("salt.states.btrfs._is_default")
@patch("salt.states.btrfs._umount")
@patch("salt.states.btrfs._mount")
def test_subvolume_created_exists_set_default(
self, mount, umount, is_default, set_default
):
"""
Test creating a subvolume.
"""
mount.return_value = "/tmp/xxx"
is_default.return_value = False
set_default.return_value = True
salt_mock = {
"btrfs.subvolume_exists": MagicMock(return_value=True),
}
opts_mock = {
"test": False,
}
with patch.dict(btrfs.__salt__, salt_mock), patch.dict(
btrfs.__opts__, opts_mock
):
assert btrfs.subvolume_created(
name="@/var", device="/dev/sda1", set_default=True
) == {
"name": "@/var",
"result": True,
"changes": {"@/var_default": True},
"comment": ["Subvolume @/var already present"],
}
salt_mock["btrfs.subvolume_exists"].assert_called_with("/tmp/xxx/@/var")
mount.assert_called_once()
umount.assert_called_once()
@skipIf(salt.utils.platform.is_windows(), "Skip on Windows")
@patch("salt.states.btrfs._set_default")
@patch("salt.states.btrfs._is_default")
@patch("salt.states.btrfs._umount")
@patch("salt.states.btrfs._mount")
def test_subvolume_created_exists_set_default_no_force(
self, mount, umount, is_default, set_default
):
"""
Test creating a subvolume.
"""
mount.return_value = "/tmp/xxx"
is_default.return_value = False
set_default.return_value = True
salt_mock = {
"btrfs.subvolume_exists": MagicMock(return_value=True),
}
opts_mock = {
"test": False,
}
with patch.dict(btrfs.__salt__, salt_mock), patch.dict(
btrfs.__opts__, opts_mock
):
assert btrfs.subvolume_created(
name="@/var",
device="/dev/sda1",
set_default=True,
force_set_default=False,
) == {
"name": "@/var",
"result": True,
"changes": {},
"comment": ["Subvolume @/var already present"],
}
salt_mock["btrfs.subvolume_exists"].assert_called_with("/tmp/xxx/@/var")
mount.assert_called_once()
umount.assert_called_once()
@skipIf(salt.utils.platform.is_windows(), "Skip on Windows")
@patch("salt.states.btrfs._is_cow")
@patch("salt.states.btrfs._umount")
@patch("salt.states.btrfs._mount")
def test_subvolume_created_exists_no_cow(self, mount, umount, is_cow):
"""
Test creating a subvolume.
"""
mount.return_value = "/tmp/xxx"
is_cow.return_value = False
salt_mock = {
"btrfs.subvolume_exists": MagicMock(return_value=True),
}
opts_mock = {
"test": False,
}
with patch.dict(btrfs.__salt__, salt_mock), patch.dict(
btrfs.__opts__, opts_mock
):
assert btrfs.subvolume_created(
name="@/var", device="/dev/sda1", copy_on_write=False
) == {
"name": "@/var",
"result": True,
"changes": {},
"comment": ["Subvolume @/var already present"],
}
salt_mock["btrfs.subvolume_exists"].assert_called_with("/tmp/xxx/@/var")
mount.assert_called_once()
umount.assert_called_once()
@skipIf(salt.utils.platform.is_windows(), "Skip on Windows")
@patch("salt.states.btrfs._unset_cow")
@patch("salt.states.btrfs._is_cow")
@patch("salt.states.btrfs._umount")
@patch("salt.states.btrfs._mount")
def test_subvolume_created_exists_unset_cow(self, mount, umount, is_cow, unset_cow):
"""
Test creating a subvolume.
"""
mount.return_value = "/tmp/xxx"
is_cow.return_value = True
unset_cow.return_value = True
salt_mock = {
"btrfs.subvolume_exists": MagicMock(return_value=True),
}
opts_mock = {
"test": False,
}
with patch.dict(btrfs.__salt__, salt_mock), patch.dict(
btrfs.__opts__, opts_mock
):
assert btrfs.subvolume_created(
name="@/var", device="/dev/sda1", copy_on_write=False
) == {
"name": "@/var",
"result": True,
"changes": {"@/var_no_cow": True},
"comment": ["Subvolume @/var already present"],
}
salt_mock["btrfs.subvolume_exists"].assert_called_with("/tmp/xxx/@/var")
mount.assert_called_once()
umount.assert_called_once()
@skipIf(salt.utils.platform.is_windows(), "Skip on Windows")
@patch("salt.states.btrfs._umount")
@patch("salt.states.btrfs._mount")
def test_subvolume_created(self, mount, umount):
"""
Test creating a subvolume.
"""
mount.return_value = "/tmp/xxx"
salt_mock = {
"btrfs.subvolume_exists": MagicMock(return_value=False),
"btrfs.subvolume_create": MagicMock(),
}
states_mock = {
"file.directory": MagicMock(return_value={"result": True}),
}
opts_mock = {
"test": False,
}
with patch.dict(btrfs.__salt__, salt_mock), patch.dict(
btrfs.__states__, states_mock
), patch.dict(btrfs.__opts__, opts_mock):
assert btrfs.subvolume_created(name="@/var", device="/dev/sda1") == {
"name": "@/var",
"result": True,
"changes": {"@/var": "Created subvolume @/var"},
"comment": [],
}
salt_mock["btrfs.subvolume_exists"].assert_called_with("/tmp/xxx/@/var")
salt_mock["btrfs.subvolume_create"].assert_called_once()
mount.assert_called_once()
umount.assert_called_once()
@skipIf(salt.utils.platform.is_windows(), "Skip on Windows")
@patch("salt.states.btrfs._umount")
@patch("salt.states.btrfs._mount")
def test_subvolume_created_fails_directory(self, mount, umount):
"""
Test creating a subvolume when the parent directory cannot be created.
"""
mount.return_value = "/tmp/xxx"
salt_mock = {
"btrfs.subvolume_exists": MagicMock(return_value=False),
}
states_mock = {
"file.directory": MagicMock(return_value={"result": False}),
}
opts_mock = {
"test": False,
}
with patch.dict(btrfs.__salt__, salt_mock), patch.dict(
btrfs.__states__, states_mock
), patch.dict(btrfs.__opts__, opts_mock):
assert btrfs.subvolume_created(name="@/var", device="/dev/sda1") == {
"name": "@/var",
"result": False,
"changes": {},
"comment": ["Error creating /tmp/xxx/@ directory"],
}
salt_mock["btrfs.subvolume_exists"].assert_called_with("/tmp/xxx/@/var")
mount.assert_called_once()
umount.assert_called_once()
@skipIf(salt.utils.platform.is_windows(), "Skip on Windows")
@patch("salt.states.btrfs._umount")
@patch("salt.states.btrfs._mount")
def test_subvolume_created_fails(self, mount, umount):
"""
Test creating a subvolume when btrfs.subvolume_create fails.
"""
mount.return_value = "/tmp/xxx"
salt_mock = {
"btrfs.subvolume_exists": MagicMock(return_value=False),
"btrfs.subvolume_create": MagicMock(side_effect=CommandExecutionError),
}
states_mock = {
"file.directory": MagicMock(return_value={"result": True}),
}
opts_mock = {
"test": False,
}
with patch.dict(btrfs.__salt__, salt_mock), patch.dict(
btrfs.__states__, states_mock
), patch.dict(btrfs.__opts__, opts_mock):
assert btrfs.subvolume_created(name="@/var", device="/dev/sda1") == {
"name": "@/var",
"result": False,
"changes": {},
"comment": ["Error creating subvolume @/var"],
}
salt_mock["btrfs.subvolume_exists"].assert_called_with("/tmp/xxx/@/var")
salt_mock["btrfs.subvolume_create"].assert_called_once()
mount.assert_called_once()
umount.assert_called_once()
def test_diff_properties_fails(self):
"""
Test when diff_properties does not find an expected property
"""
expected = {"wrong_property": True}
current = {
"compression": {
"description": "Set/get compression for a file or directory",
"value": "N/A",
},
"label": {"description": "Set/get label of device.", "value": "N/A"},
"ro": {
"description": "Set/get read-only flag or subvolume",
"value": "N/A",
},
}
with pytest.raises(Exception):
btrfs._diff_properties(expected, current)
def test_diff_properties_enable_ro(self):
"""
Test when diff_properties enables a single property
"""
expected = {"ro": True}
current = {
"compression": {
"description": "Set/get compression for a file or directory",
"value": "N/A",
},
"label": {"description": "Set/get label of device.", "value": "N/A"},
"ro": {
"description": "Set/get read-only flag or subvolume",
"value": "N/A",
},
}
assert btrfs._diff_properties(expected, current) == {"ro": True}
def test_diff_properties_only_enable_ro(self):
"""
Test when only some of the expected properties need to change
"""
expected = {"ro": True, "label": "mylabel"}
current = {
"compression": {
"description": "Set/get compression for a file or directory",
"value": "N/A",
},
"label": {"description": "Set/get label of device.", "value": "mylabel"},
"ro": {
"description": "Set/get read-only flag or subvolume",
"value": "N/A",
},
}
assert btrfs._diff_properties(expected, current) == {"ro": True}
def test_diff_properties_disable_ro(self):
"""
Test when diff_properties disables a single property
"""
expected = {"ro": False}
current = {
"compression": {
"description": "Set/get compression for a file or directory",
"value": "N/A",
},
"label": {"description": "Set/get label of device.", "value": "N/A"},
"ro": {
"description": "Set/get read-only flag or subvolume",
"value": True,
},
}
assert btrfs._diff_properties(expected, current) == {"ro": False}
def test_diff_properties_empty_na(self):
"""
Test when the property to disable is already reported as N/A
"""
expected = {"ro": False}
current = {
"compression": {
"description": "Set/get compression for a file or directory",
"value": "N/A",
},
"label": {"description": "Set/get label of device.", "value": "N/A"},
"ro": {
"description": "Set/get read-only flag or subvolume",
"value": "N/A",
},
}
assert btrfs._diff_properties(expected, current) == {}
@patch("salt.states.btrfs._umount")
@patch("salt.states.btrfs._mount")
@patch("os.path.exists")
def test_properties_subvolume_not_exists(self, exists, mount, umount):
"""
Test when subvolume is not present
"""
exists.return_value = False
mount.return_value = "/tmp/xxx"
assert btrfs.properties(name="@/var", device="/dev/sda1") == {
"name": "@/var",
"result": False,
"changes": {},
"comment": ["Object @/var not found"],
}
mount.assert_called_once()
umount.assert_called_once()
@patch("salt.states.btrfs._umount")
@patch("salt.states.btrfs._mount")
@patch("os.path.exists")
def test_properties_default_root_subvolume(self, exists, mount, umount):
"""
Test when root subvolume resolves to another subvolume
"""
exists.return_value = False
mount.return_value = "/tmp/xxx"
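# name "/" is checked as "." relative to the mount point (see the exists assertion below)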
assert btrfs.properties(name="/", device="/dev/sda1") == {
"name": "/",
"result": False,
"changes": {},
"comment": ["Object / not found"],
}
exists.assert_called_with("/tmp/xxx/.")
@patch("os.path.exists")
def test_properties_device_fail(self, exists):
"""
Test when we try to set properties on a device that is not present
"""
exists.return_value = False
assert btrfs.properties(name="/dev/sda1", device=None) == {
"name": "/dev/sda1",
"result": False,
"changes": {},
"comment": ["Object /dev/sda1 not found"],
}
@patch("salt.states.btrfs._umount")
@patch("salt.states.btrfs._mount")
@patch("os.path.exists")
def test_properties_subvolume_fail(self, exists, mount, umount):
"""
Test setting a wrong property in a subvolume
"""
exists.return_value = True
mount.return_value = "/tmp/xxx"
salt_mock = {
"btrfs.properties": MagicMock(
side_effect=[
{
"ro": {
"description": "Set/get read-only flag or subvolume",
"value": "N/A",
},
}
]
),
}
opts_mock = {
"test": False,
}
with patch.dict(btrfs.__salt__, salt_mock), patch.dict(
btrfs.__opts__, opts_mock
):
assert btrfs.properties(
name="@/var", device="/dev/sda1", wrond_property=True
) == {
"name": "@/var",
"result": False,
"changes": {},
"comment": ["Some property not found in @/var"],
}
salt_mock["btrfs.properties"].assert_called_with("/tmp/xxx/@/var")
mount.assert_called_once()
umount.assert_called_once()
@patch("salt.states.btrfs._umount")
@patch("salt.states.btrfs._mount")
@patch("os.path.exists")
def test_properties_enable_ro_subvolume(self, exists, mount, umount):
"""
Test setting a ro property in a subvolume
"""
exists.return_value = True
mount.return_value = "/tmp/xxx"
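# side_effect models three btrfs.properties calls: the initial read, the set call, and the re-read used to report changes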
salt_mock = {
"btrfs.properties": MagicMock(
side_effect=[
{
"ro": {
"description": "Set/get read-only flag or subvolume",
"value": "N/A",
},
},
None,
{
"ro": {
"description": "Set/get read-only flag or subvolume",
"value": "true",
},
},
]
),
}
opts_mock = {
"test": False,
}
with patch.dict(btrfs.__salt__, salt_mock), patch.dict(
btrfs.__opts__, opts_mock
):
assert btrfs.properties(name="@/var", device="/dev/sda1", ro=True) == {
"name": "@/var",
"result": True,
"changes": {"ro": "true"},
"comment": ["Properties changed in @/var"],
}
salt_mock["btrfs.properties"].assert_any_call("/tmp/xxx/@/var")
salt_mock["btrfs.properties"].assert_any_call(
"/tmp/xxx/@/var", set="ro=true"
)
mount.assert_called_once()
umount.assert_called_once()
@patch("salt.states.btrfs._umount")
@patch("salt.states.btrfs._mount")
@patch("os.path.exists")
def test_properties_test(self, exists, mount, umount):
"""
Test setting a property in test mode.
"""
exists.return_value = True
mount.return_value = "/tmp/xxx"
salt_mock = {
"btrfs.properties": MagicMock(
side_effect=[
{
"ro": {
"description": "Set/get read-only flag or subvolume",
"value": "N/A",
},
},
]
),
}
opts_mock = {
"test": True,
}
with patch.dict(btrfs.__salt__, salt_mock), patch.dict(
btrfs.__opts__, opts_mock
):
assert btrfs.properties(name="@/var", device="/dev/sda1", ro=True) == {
"name": "@/var",
"result": None,
"changes": {"ro": "true"},
"comment": [],
}
salt_mock["btrfs.properties"].assert_called_with("/tmp/xxx/@/var")
mount.assert_called_once()
umount.assert_called_once()


@ -21,7 +21,7 @@ import salt.utils.path
# Import salt libs
import salt.version
from salt.modules.virtualenv_mod import KNOWN_BINARY_NAMES
from tests.support.helpers import VirtualEnv, dedent
from tests.support.helpers import VirtualEnv, dedent, requires_network
# Import Salt Testing libs
from tests.support.mixins import LoaderModuleMockMixin, SaltReturnAssertsMixin
@ -419,6 +419,7 @@ class PipStateUtilsTest(TestCase):
@skipIf(
salt.utils.path.which_bin(KNOWN_BINARY_NAMES) is None, "virtualenv not installed"
)
@requires_network()
class PipStateInstallationErrorTest(TestCase):
@skipIf(True, "SLOWTEST skip")
def test_importable_installation_error(self):


@ -0,0 +1,74 @@
# -*- coding: utf-8 -*-
"""
:codeauthor: Tyler Johnson <tjohnson@saltstack.com>
"""
# Import Python libs
from __future__ import absolute_import
# Import Salt Libs
import salt.states.pkgrepo as pkgrepo
# Import Salt Testing Libs
from tests.support.mixins import LoaderModuleMockMixin
from tests.support.mock import MagicMock, patch
from tests.support.unit import TestCase
class PkgrepoTestCase(TestCase, LoaderModuleMockMixin):
"""
Test cases for salt.states.pkgrepo
"""
def setup_loader_modules(self):
return {
pkgrepo: {
"__opts__": {"test": True},
"__grains__": {"os": "", "os_family": ""},
}
}
def test_new_key_url(self):
"""
Test that adding a key_url to a repo that had none triggers a change
"""
kwargs = {
"name": "deb http://mock/ sid main",
"disabled": False,
}
key_url = "http://mock/changed_gpg.key"
with patch.dict(
pkgrepo.__salt__, {"pkg.get_repo": MagicMock(return_value=kwargs)}
):
ret = pkgrepo.managed(key_url=key_url, **kwargs)
self.assertDictEqual(
{"key_url": {"old": None, "new": key_url}}, ret["changes"]
)
def test_update_key_url(self):
"""
Test that changing an existing key_url triggers a change
"""
kwargs = {
"name": "deb http://mock/ sid main",
"gpgcheck": 1,
"disabled": False,
"key_url": "http://mock/gpg.key",
}
changed_kwargs = kwargs.copy()
changed_kwargs["key_url"] = "http://mock/gpg2.key"
with patch.dict(
pkgrepo.__salt__, {"pkg.get_repo": MagicMock(return_value=kwargs)}
):
ret = pkgrepo.managed(**changed_kwargs)
self.assertIn("key_url", ret["changes"], "Expected a change to key_url")
self.assertDictEqual(
{
"key_url": {
"old": kwargs["key_url"],
"new": changed_kwargs["key_url"],
}
},
ret["changes"],
)

File diff suppressed because it is too large


@ -24,6 +24,7 @@ from tests.unit.modules.test_zcbuildout import KNOWN_VIRTUALENV_BINARY_NAMES, Ba
salt.utils.path.which_bin(KNOWN_VIRTUALENV_BINARY_NAMES) is None,
"The 'virtualenv' packaged needs to be installed",
)
@requires_network()
class BuildoutTestCase(Base):
def setup_loader_modules(self):
module_globals = {
@ -41,7 +42,6 @@ class BuildoutTestCase(Base):
# I don't have the time to invest in learning more about buildout,
# and given we don't have support yet, and there are other priorities
# I'm going to punt on this for now - WW
@requires_network()
@skipIf(True, "Buildout is still in beta. Test needs fixing.")
def test_quiet(self):
c_dir = os.path.join(self.tdir, "c")
@ -52,7 +52,6 @@ class BuildoutTestCase(Base):
self.assertFalse("Log summary:" in cret["comment"], cret["comment"])
self.assertTrue(cret["result"], cret["comment"])
@requires_network()
@skipIf(True, "SLOWTEST skip")
def test_error(self):
b_dir = os.path.join(self.tdir, "e")
@ -62,7 +61,6 @@ class BuildoutTestCase(Base):
)
self.assertFalse(ret["result"])
@requires_network()
@skipIf(True, "SLOWTEST skip")
def test_installed(self):
if salt.modules.virtualenv_mod.virtualenv_ver(self.ppy_st) >= (20, 0, 0):


@ -17,9 +17,14 @@ class ClearFuncsTestCase(TestCase):
TestCase for salt.master.ClearFuncs class
"""
def setUp(self):
@classmethod
def setUpClass(cls):
opts = salt.config.master_config(None)
self.clear_funcs = salt.master.ClearFuncs(opts, {})
cls.clear_funcs = salt.master.ClearFuncs(opts, {})
@classmethod
def tearDownClass(cls):
del cls.clear_funcs
# runner tests


@ -452,6 +452,82 @@ class HighStateTestCase(TestCase, AdaptedConfigurationTestCaseMixin):
self.assertEqual(ret, [("somestuff", "cmd")])
class MultiEnvHighStateTestCase(TestCase, AdaptedConfigurationTestCaseMixin):
def setUp(self):
root_dir = tempfile.mkdtemp(dir=RUNTIME_VARS.TMP)
self.base_state_tree_dir = os.path.join(root_dir, "base")
self.other_state_tree_dir = os.path.join(root_dir, "other")
cache_dir = os.path.join(root_dir, "cachedir")
for dpath in (
root_dir,
self.base_state_tree_dir,
self.other_state_tree_dir,
cache_dir,
):
if not os.path.isdir(dpath):
os.makedirs(dpath)
shutil.copy(
os.path.join(RUNTIME_VARS.BASE_FILES, "top.sls"), self.base_state_tree_dir
)
shutil.copy(
os.path.join(RUNTIME_VARS.BASE_FILES, "core.sls"), self.base_state_tree_dir
)
shutil.copy(
os.path.join(RUNTIME_VARS.BASE_FILES, "test.sls"), self.other_state_tree_dir
)
overrides = {}
overrides["root_dir"] = root_dir
overrides["state_events"] = False
overrides["id"] = "match"
overrides["file_client"] = "local"
overrides["file_roots"] = dict(
base=[self.base_state_tree_dir], other=[self.other_state_tree_dir]
)
overrides["cachedir"] = cache_dir
overrides["test"] = False
self.config = self.get_temp_config("minion", **overrides)
self.addCleanup(delattr, self, "config")
self.highstate = salt.state.HighState(self.config)
self.addCleanup(delattr, self, "highstate")
self.highstate.push_active()
def tearDown(self):
self.highstate.pop_active()
def test_lazy_avail_states_base(self):
# list_states not called yet
self.assertEqual(self.highstate.avail._filled, False)
self.assertEqual(self.highstate.avail._avail, {"base": None})
# After getting 'base' env available states
self.highstate.avail["base"] # pylint: disable=pointless-statement
self.assertEqual(self.highstate.avail._filled, False)
self.assertEqual(self.highstate.avail._avail, {"base": ["core", "top"]})
def test_lazy_avail_states_other(self):
# list_states not called yet
self.assertEqual(self.highstate.avail._filled, False)
self.assertEqual(self.highstate.avail._avail, {"base": None})
# After getting 'other' env available states
self.highstate.avail["other"] # pylint: disable=pointless-statement
self.assertEqual(self.highstate.avail._filled, True)
self.assertEqual(self.highstate.avail._avail, {"base": None, "other": ["test"]})
def test_lazy_avail_states_multi(self):
# list_states not called yet
self.assertEqual(self.highstate.avail._filled, False)
self.assertEqual(self.highstate.avail._avail, {"base": None})
# After getting 'base' env available states
self.highstate.avail["base"] # pylint: disable=pointless-statement
self.assertEqual(self.highstate.avail._filled, False)
self.assertEqual(self.highstate.avail._avail, {"base": ["core", "top"]})
# After getting 'other' env available states
self.highstate.avail["other"] # pylint: disable=pointless-statement
self.assertEqual(self.highstate.avail._filled, True)
self.assertEqual(
self.highstate.avail._avail, {"base": ["core", "top"], "other": ["test"]}
)
@skipIf(pytest is None, "PyTest is missing")
class StateReturnsTestCase(TestCase):
"""


@ -28,9 +28,10 @@ from salt.utils.dns import (
lookup,
)
from salt.utils.odict import OrderedDict
from tests.support.mock import MagicMock, patch
# Testing
from tests.support.helpers import requires_network
from tests.support.mock import MagicMock, patch
from tests.support.unit import TestCase, skipIf
@ -311,6 +312,7 @@ class DNSlookupsCase(TestCase):
)
@skipIf(not salt.utils.dns.HAS_NSLOOKUP, "nslookup is not available")
@requires_network()
def test_lookup_with_servers(self):
rights = {
"A": [


@ -4,20 +4,16 @@ These only test the provider selection and verification logic, they do not init
any remotes.
"""
# Import python libs
from __future__ import absolute_import, print_function, unicode_literals
import os
import shutil
from time import time
# Import salt libs
import salt.fileserver.gitfs
import salt.utils.files
import salt.utils.gitfs
import salt.utils.platform
# Import Salt Testing libs
import tests.support.paths
from salt.exceptions import FileserverConfigError
from tests.support.mock import MagicMock, patch
@ -36,11 +32,13 @@ if HAS_PYGIT2:
import pygit2
# GLOBALS
OPTS = {"cachedir": "/tmp/gitfs-test-cache"}
class TestGitFSProvider(TestCase):
def setUp(self):
self.opts = {"cachedir": "/tmp/gitfs-test-cache"}
def tearDown(self):
self.opts = None
def test_provider_case_insensitive(self):
"""
Ensure that both lowercase and non-lowercase values are supported
@ -59,11 +57,11 @@ class TestGitFSProvider(TestCase):
with patch.object(
role_class, "verify_pygit2", MagicMock(return_value=False)
):
args = [OPTS, {}]
args = [self.opts, {}]
kwargs = {"init_remotes": False}
if role_name == "winrepo":
kwargs["cache_root"] = "/tmp/winrepo-dir"
with patch.dict(OPTS, {key: provider}):
with patch.dict(self.opts, {key: provider}):
# Try to create an instance with uppercase letters in
# provider name. If it fails then a
# FileserverConfigError will be raised, so no assert is
@ -98,15 +96,15 @@ class TestGitFSProvider(TestCase):
verify = "verify_pygit2"
mock2 = _get_mock(verify, provider)
with patch.object(role_class, verify, mock2):
args = [OPTS, {}]
args = [self.opts, {}]
kwargs = {"init_remotes": False}
if role_name == "winrepo":
kwargs["cache_root"] = "/tmp/winrepo-dir"
with patch.dict(OPTS, {key: provider}):
with patch.dict(self.opts, {key: provider}):
role_class(*args, **kwargs)
with patch.dict(OPTS, {key: "foo"}):
with patch.dict(self.opts, {key: "foo"}):
# Set the provider name to a known invalid provider
# and make sure it raises an exception.
self.assertRaises(


@ -2,7 +2,6 @@
"""
Tests for salt.utils.jinja
"""
# Import Python libs
from __future__ import absolute_import, print_function, unicode_literals
import ast
@ -13,7 +12,6 @@ import pprint
import re
import tempfile
# Import Salt libs
import salt.config
import salt.loader
@ -37,14 +35,11 @@ from salt.utils.jinja import (
from salt.utils.odict import OrderedDict
from salt.utils.templates import JINJA, render_jinja_tmpl
from tests.support.case import ModuleCase
from tests.support.helpers import flaky
from tests.support.helpers import requires_network
from tests.support.mock import MagicMock, Mock, patch
# Import Salt Testing libs
from tests.support.runtests import RUNTIME_VARS
from tests.support.unit import TestCase, skipIf
# Import 3rd party libs
try:
import timelib # pylint: disable=W0611
@ -127,6 +122,7 @@ class TestSaltCacheLoader(TestCase):
def tearDown(self):
salt.utils.files.rm_rf(self.tempdir)
self.tempdir = self.template_dir = self.opts
def test_searchpath(self):
"""
@ -284,6 +280,7 @@ class TestGetTemplate(TestCase):
def tearDown(self):
salt.utils.files.rm_rf(self.tempdir)
self.tempdir = self.template_dir = self.local_opts = self.local_salt = None
def test_fallback(self):
"""
@ -559,19 +556,6 @@ class TestGetTemplate(TestCase):
dict(opts=self.local_opts, saltenv="test", salt=self.local_salt),
)
@skipIf(six.PY3, "Not applicable to Python 3")
def test_render_with_unicode_syntax_error(self):
with patch.object(builtins, "__salt_system_encoding__", "utf-8"):
template = "hello\n\n{{ bad\n\nfoo한"
expected = r".*---\nhello\n\n{{ bad\n\nfoo\xed\x95\x9c <======================\n---"
self.assertRaisesRegex(
SaltRenderError,
expected,
render_jinja_tmpl,
template,
dict(opts=self.local_opts, saltenv="test", salt=self.local_salt),
)
def test_render_with_utf8_syntax_error(self):
with patch.object(builtins, "__salt_system_encoding__", "utf-8"):
template = "hello\n\n{{ bad\n\nfoo한"
@ -621,9 +605,9 @@ class TestGetTemplate(TestCase):
class TestJinjaDefaultOptions(TestCase):
def __init__(self, *args, **kws):
TestCase.__init__(self, *args, **kws)
self.local_opts = {
@classmethod
def setUpClass(cls):
cls.local_opts = {
"cachedir": os.path.join(RUNTIME_VARS.TMP, "jinja-template-cache"),
"file_buffer_size": 1048576,
"file_client": "local",
@ -642,11 +626,15 @@ class TestJinjaDefaultOptions(TestCase):
),
"jinja_env": {"line_comment_prefix": "##", "line_statement_prefix": "%"},
}
self.local_salt = {
cls.local_salt = {
"myvar": "zero",
"mylist": [0, 1, 2, 3],
}
@classmethod
def tearDownClass(cls):
cls.local_opts = cls.local_salt = None
def test_comment_prefix(self):
template = """
@ -681,9 +669,9 @@ class TestJinjaDefaultOptions(TestCase):
class TestCustomExtensions(TestCase):
def __init__(self, *args, **kws):
super(TestCustomExtensions, self).__init__(*args, **kws)
self.local_opts = {
@classmethod
def setUpClass(cls):
cls.local_opts = {
"cachedir": os.path.join(RUNTIME_VARS.TMP, "jinja-template-cache"),
"file_buffer_size": 1048576,
"file_client": "local",
@ -701,7 +689,7 @@ class TestCustomExtensions(TestCase):
os.path.dirname(os.path.abspath(__file__)), "extmods"
),
}
self.local_salt = {
cls.local_salt = {
# 'dns.A': dnsutil.A,
# 'dns.AAAA': dnsutil.AAAA,
# 'file.exists': filemod.file_exists,
@ -709,6 +697,10 @@ class TestCustomExtensions(TestCase):
# 'file.dirname': filemod.dirname
}
@classmethod
def tearDownClass(cls):
cls.local_opts = cls.local_salt = None
def test_regex_escape(self):
dataset = "foo?:.*/\\bar"
env = Environment(extensions=[SerializerExtension])
@ -721,51 +713,39 @@ class TestCustomExtensions(TestCase):
unique = set(dataset)
env = Environment(extensions=[SerializerExtension])
env.filters.update(JinjaFilter.salt_jinja_filters)
if six.PY3:
rendered = (
env.from_string("{{ dataset|unique }}")
.render(dataset=dataset)
.strip("'{}")
.split("', '")
)
self.assertEqual(sorted(rendered), sorted(list(unique)))
else:
rendered = env.from_string("{{ dataset|unique }}").render(dataset=dataset)
self.assertEqual(rendered, "{0}".format(unique))
rendered = (
env.from_string("{{ dataset|unique }}")
.render(dataset=dataset)
.strip("'{}")
.split("', '")
)
self.assertEqual(sorted(rendered), sorted(list(unique)))
def test_unique_tuple(self):
dataset = ("foo", "foo", "bar")
unique = set(dataset)
env = Environment(extensions=[SerializerExtension])
env.filters.update(JinjaFilter.salt_jinja_filters)
if six.PY3:
rendered = (
env.from_string("{{ dataset|unique }}")
.render(dataset=dataset)
.strip("'{}")
.split("', '")
)
self.assertEqual(sorted(rendered), sorted(list(unique)))
else:
rendered = env.from_string("{{ dataset|unique }}").render(dataset=dataset)
self.assertEqual(rendered, "{0}".format(unique))
rendered = (
env.from_string("{{ dataset|unique }}")
.render(dataset=dataset)
.strip("'{}")
.split("', '")
)
self.assertEqual(sorted(rendered), sorted(list(unique)))
def test_unique_list(self):
dataset = ["foo", "foo", "bar"]
unique = ["foo", "bar"]
env = Environment(extensions=[SerializerExtension])
env.filters.update(JinjaFilter.salt_jinja_filters)
if six.PY3:
rendered = (
env.from_string("{{ dataset|unique }}")
.render(dataset=dataset)
.strip("'[]")
.split("', '")
)
self.assertEqual(rendered, unique)
else:
rendered = env.from_string("{{ dataset|unique }}").render(dataset=dataset)
self.assertEqual(rendered, "{0}".format(unique))
rendered = (
env.from_string("{{ dataset|unique }}")
.render(dataset=dataset)
.strip("'[]")
.split("', '")
)
self.assertEqual(rendered, unique)
def test_serialize_json(self):
dataset = {"foo": True, "bar": 42, "baz": [1, 2, 3], "qux": 2.0}
@ -795,17 +775,7 @@ class TestCustomExtensions(TestCase):
dataset = "str value"
env = Environment(extensions=[SerializerExtension])
rendered = env.from_string("{{ dataset|yaml }}").render(dataset=dataset)
if six.PY3:
self.assertEqual("str value", rendered)
else:
# Due to a bug in the equality handler, this check needs to be split
# up into several different assertions. We need to check that the various
# string segments are present in the rendered value, as well as the
# type of the rendered variable (should be unicode, which is the same as
# six.text_type). This should cover all use cases but also allow the test
# to pass on CentOS 6 running Python 2.7.
self.assertIn("str value", rendered)
self.assertIsInstance(rendered, six.text_type)
self.assertEqual("str value", rendered)
def test_serialize_python(self):
dataset = {"foo": True, "bar": 42, "baz": [1, 2, 3], "qux": 2.0}
@ -976,20 +946,14 @@ class TestCustomExtensions(TestCase):
rendered = env.from_string("{{ data }}").render(data=data)
self.assertEqual(
rendered,
"{u'foo': {u'bar': u'baz', u'qux': 42}}"
if six.PY2
else "{'foo': {'bar': 'baz', 'qux': 42}}",
rendered, "{'foo': {'bar': 'baz', 'qux': 42}}",
)
rendered = env.from_string("{{ data }}").render(
data=[OrderedDict(foo="bar",), OrderedDict(baz=42,)]
)
self.assertEqual(
rendered,
"[{'foo': u'bar'}, {'baz': 42}]"
if six.PY2
else "[{'foo': 'bar'}, {'baz': 42}]",
rendered, "[{'foo': 'bar'}, {'baz': 42}]",
)
def test_set_dict_key_value(self):
@ -1031,10 +995,7 @@ class TestCustomExtensions(TestCase):
),
)
self.assertEqual(
rendered,
"{u'bar': {u'baz': {u'qux': 1, u'quux': 3}}}"
if six.PY2
else "{'bar': {'baz': {'qux': 1, 'quux': 3}}}",
rendered, "{'bar': {'baz': {'qux': 1, 'quux': 3}}}",
)
# Test incorrect usage
@ -1076,10 +1037,7 @@ class TestCustomExtensions(TestCase):
),
)
self.assertEqual(
rendered,
"{u'bar': {u'baz': [1, 2, 42]}}"
if six.PY2
else "{'bar': {'baz': [1, 2, 42]}}",
rendered, "{'bar': {'baz': [1, 2, 42]}}",
)
def test_extend_dict_key_value(self):
@ -1102,10 +1060,7 @@ class TestCustomExtensions(TestCase):
),
)
self.assertEqual(
rendered,
"{u'bar': {u'baz': [1, 2, 42, 43]}}"
if six.PY2
else "{'bar': {'baz': [1, 2, 42, 43]}}",
rendered, "{'bar': {'baz': [1, 2, 42, 43]}}",
)
# Edge cases
rendered = render_jinja_tmpl(
@ -1403,7 +1358,7 @@ class TestCustomExtensions(TestCase):
)
self.assertEqual(rendered, "16777216")
@flaky
@requires_network()
def test_http_query(self):
"""
Test the `http_query` Jinja filter.
@ -1576,6 +1531,45 @@ class TestCustomExtensions(TestCase):
)
self.assertEqual(rendered, "1, 4")
def test_method_call(self):
"""
Test the `method_call` Jinja filter.
"""
rendered = render_jinja_tmpl(
"{{ 6|method_call('bit_length') }}",
dict(opts=self.local_opts, saltenv="test", salt=self.local_salt),
)
self.assertEqual(rendered, "3")
rendered = render_jinja_tmpl(
"{{ 6.7|method_call('is_integer') }}",
dict(opts=self.local_opts, saltenv="test", salt=self.local_salt),
)
self.assertEqual(rendered, "False")
rendered = render_jinja_tmpl(
"{{ 'absaltba'|method_call('strip', 'ab') }}",
dict(opts=self.local_opts, saltenv="test", salt=self.local_salt),
)
self.assertEqual(rendered, "salt")
rendered = render_jinja_tmpl(
"{{ [1, 2, 1, 3, 4]|method_call('index', 1, 1, 3) }}",
dict(opts=self.local_opts, saltenv="test", salt=self.local_salt),
)
self.assertEqual(rendered, "2")
# have to use `dictsort` to keep test result deterministic
rendered = render_jinja_tmpl(
"{{ {}|method_call('fromkeys', ['a', 'b', 'c'], 0)|dictsort }}",
dict(opts=self.local_opts, saltenv="test", salt=self.local_salt),
)
self.assertEqual(rendered, "[('a', 0), ('b', 0), ('c', 0)]")
# missing object method test
rendered = render_jinja_tmpl(
"{{ 6|method_call('bit_width') }}",
dict(opts=self.local_opts, saltenv="test", salt=self.local_salt),
)
self.assertEqual(rendered, "None")
def test_md5(self):
"""
Test the `md5` Jinja filter.


@ -0,0 +1,71 @@
# -*- coding: utf-8 -*-
"""
unit tests for salt.utils.job
"""
# Import Python Libs
from __future__ import absolute_import, print_function, unicode_literals
import salt.minion
# Import Salt Libs
import salt.utils.job as job
# Import 3rd-party libs
from salt.ext import six
# Import Salt Testing Libs
from tests.support.mock import patch
from tests.support.unit import TestCase, skipIf
class MockMasterMinion(object):
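# minimal stand-in for salt.minion.MasterMinion: fixed opts plus no-op "foo" returner callables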
def return_mock_jobs(self):
return self.mock_jobs_cache
opts = {"job_cache": True, "ext_job_cache": None, "master_job_cache": "foo"}
mock_jobs_cache = {}
returners = {
"foo.save_load": lambda *args, **kwargs: True,
"foo.prep_jid": lambda *args, **kwargs: True,
"foo.get_load": lambda *args, **kwargs: True,
"foo.returner": lambda *args, **kwargs: True,
}
def __init__(self, *args, **kwargs):
pass
class JobTest(TestCase):
"""
Validate salt.utils.job
"""
@skipIf(not six.PY3, "Can only assertLogs in PY3")
def test_store_job_exception_handled(self):
"""
test store_job exception handling
"""
for func in ["foo.save_load", "foo.prep_jid", "foo.returner"]:
def raise_exception(*arg, **kwarg):
raise Exception("expected")
with patch.object(
salt.minion, "MasterMinion", MockMasterMinion
), patch.dict(MockMasterMinion.returners, {func: raise_exception}), patch(
"salt.utils.verify.valid_id", return_value=True
):
with self.assertLogs("salt.utils.job", level="CRITICAL") as logged:
job.store_job(
MockMasterMinion.opts,
{
"jid": "20190618090114890985",
"return": {"success": True},
"id": "a",
},
)
self.assertIn(
"The specified 'foo' returner threw a stack trace",
logged.output[0],
)

Some files were not shown because too many files have changed in this diff