Merge branch 'master' into master-port-49955

commit f7804af95f
Daniel Wozniak, 2020-04-21 17:54:32 -07:00, committed by GitHub
40 changed files with 1464 additions and 263 deletions


@ -20,6 +20,7 @@ Versions are `MAJOR.PATCH`.
### Added
- [#56627](https://github.com/saltstack/salt/pull/56627) - Add new salt-ssh set_path option
- [#56792](https://github.com/saltstack/salt/pull/56792) - Backport 51379 : Adds .set_domain_workgroup to win_system
## 3000.1


@ -677,7 +677,9 @@
# The master_roots setting configures a master-only copy of the file_roots dictionary,
# used by the state compiler.
#master_roots: /srv/salt-master
#master_roots:
# base:
# - /srv/salt-master
# When using multiple environments, each with their own top file, the
# default behaviour is an unordered merge. To prevent top files from


@ -2654,14 +2654,18 @@ nothing is ignored.
``master_roots``
----------------
Default: ``/srv/salt-master``
Default: ``''``
A master-only copy of the :conf_master:`file_roots` dictionary, used by the
state compiler.
Example:
.. code-block:: yaml
master_roots: /srv/salt-master
master_roots:
base:
- /srv/salt-master
roots: Master's Local File Server
---------------------------------


@ -69,16 +69,6 @@ dynamic modules when states are run. To disable this behavior set
When dynamic modules are autoloaded via states, only the modules defined in the
same saltenvs as the states currently being run are synced.
It is also possible to use the explicit ``saltutil.sync_*`` :py:mod:`state functions <salt.states.saltutil>`
to sync the modules (previously it was necessary to use the ``module.run`` state):
.. code-block:: yaml
synchronize_modules:
saltutil.sync_modules:
- refresh: True
Sync Via the saltutil Module
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@ -350,7 +340,7 @@ SDB
* :ref:`Writing SDB Modules <sdb-writing-modules>`
SDB is a way to store data that's not associated with a minion. See
:ref:`Storing Data in Other Databases <sdb>`.
Serializer
@ -394,6 +384,12 @@ pkgfiles modules handle the actual installation.
SSH Wrapper
-----------
.. toctree::
:maxdepth: 1
:glob:
ssh_wrapper
Replacement execution modules for :ref:`Salt SSH <salt-ssh>`.
Thorium
@ -420,7 +416,7 @@ the state system.
Util
----
Just utility modules to use with other modules via ``__utils__`` (see
:ref:`Dunder Dictionaries <dunder-dictionaries>`).
Wheel


@ -0,0 +1,63 @@
.. _ssh-wrapper:
===========
SSH Wrapper
===========
Salt-SSH Background
===================
Salt-SSH works by creating a tarball of Salt, a number of Python modules, and a generated
short minion config. It then copies this onto the destination host over SSH, then
uses that host's local Python install to run ``salt-call --local`` with any requested modules.
It does not automatically copy over states or cache files, and since it uses a local file_client,
modules that rely on :py:func:`cp.cache* <salt.modules.cp>` functionality do not work.
SSH Wrapper modules
===================
To support cp modules or other functionality which might not otherwise work in the remote environment,
a wrapper module can be created. These modules run on the salt-master initiating the salt-ssh
command and can include logic to support the needed functionality. SSH Wrapper modules are located in
``salt/client/ssh/wrapper/`` and are named the same as the execution module being extended. Any functions
defined inside of the wrapper module are called from the ``salt-ssh module.function argument``
command rather than executing on the minion.
State Module example
--------------------
Running Salt states on a salt-ssh minion obviously requires the state files themselves. To support this,
a state module wrapper script exists at salt/client/ssh/wrapper/state.py, and includes standard state
functions like :py:func:`apply <salt.modules.state.apply>`, :py:func:`sls <salt.modules.state.sls>`,
and :py:func:`highstate <salt.modules.state.highstate>`. When executing ``salt-ssh minion state.highstate``,
these wrapper functions are used and include the logic to walk the low_state output for that minion to
determine the files used, gather the needed files, tar them together, transfer the tar file to the minion over
SSH, and run a state on the ssh minion. This state then extracts the tar file, applies the needed states
and data, and cleans up the transferred files.
Wrapper Handling
----------------
From the wrapper script, any invocations of ``__salt__['some.module']()`` do not run on the master
which is running the wrapper, but are instead invoked on the minion over SSH.
Should the function being called exist in the wrapper, the wrapper function will be
used instead.
One way of supporting this workflow may be to create a wrapper function which performs the needed file
copy operations. Now that files are resident on the ssh minion, the next step is to run the original
execution module function. But since that function name was already overridden by the wrapper, a
function alias can be created in the original execution module, which can then be called from the
wrapper.
Example
```````
The saltcheck module needs sls and tst files on the minion to function. The invocation of
:py:func:`saltcheck.run_state_tests <salt.modules.saltcheck.run_state_tests>` is run from
the wrapper module, and is responsible for performing the needed file copy. The
:py:func:`saltcheck <salt.modules.saltcheck>` execution module includes an alias line of
``run_state_tests_ssh = salt.utils.functools.alias_function(run_state_tests, 'run_state_tests_ssh')``
which creates an alias of ``run_state_tests`` with the name ``run_state_tests_ssh``. At the end of
the ``run_state_tests`` function in the wrapper module, it then calls
``__salt__['saltcheck.run_state_tests_ssh']()``. Since this function does not exist in the wrapper script,
the call is made on the remote minion, which, now having the needed files, runs as expected.
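As a minimal sketch of the pattern described above (the ``example`` module name and
the file-staging helper are hypothetical, not part of Salt):
.. code-block:: python
    # salt/client/ssh/wrapper/example.py -- hypothetical wrapper module.
    # Runs on the master whenever `salt-ssh <target> example.do_work <path>`
    # is invoked.
    def do_work(path):
        # Stage whatever files the real module needs on the ssh minion
        # (hypothetical helper; any file-transfer mechanism would do).
        _copy_needed_files(path)
        # 'example.do_work_ssh' is not defined in this wrapper, so the call
        # is dispatched to the remote minion. There, the alias created in
        # salt/modules/example.py:
        #   do_work_ssh = salt.utils.functools.alias_function(
        #       do_work, 'do_work_ssh')
        # resolves back to the original implementation.
        return __salt__["example.do_work_ssh"](path)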


@ -250,7 +250,7 @@ done at the CLI:
caller = salt.client.Caller()
ret = called.cmd('event.send',
ret = caller.cmd('event.send',
'myco/event/success',
{ 'success': True,
'message': "It works!" })


@ -1620,7 +1620,12 @@ class LocalClient(object):
yield {
id_: {
"out": "no_return",
"ret": "Minion did not return. [No response]",
"ret": "Minion did not return. [No response]"
"\nThe minions may not have all finished running and any "
"remaining minions will return upon completion. To look "
"up the return data for this job later, run the following "
"command:\n\n"
"salt-run jobs.lookup_jid {0}".format(jid),
"retcode": salt.defaults.exitcodes.EX_GENERIC,
}
}


@ -58,6 +58,9 @@ LEA = salt.utils.path.which_bin(
)
LE_LIVE = "/etc/letsencrypt/live/"
if salt.utils.platform.is_freebsd():
LE_LIVE = "/usr/local" + LE_LIVE
def __virtual__():
"""


@ -34,7 +34,15 @@ def __virtual__():
def cluster_create(
version, name="main", port=None, locale=None, encoding=None, datadir=None
version,
name="main",
port=None,
locale=None,
encoding=None,
datadir=None,
allow_group_access=None,
data_checksums=None,
wal_segsize=None,
):
"""
Adds a cluster to the Postgres server.
@ -53,7 +61,9 @@ def cluster_create(
salt '*' postgres.cluster_create '9.3' locale='fr_FR'
salt '*' postgres.cluster_create '11' data_checksums=True wal_segsize='32'
"""
cmd = [salt.utils.path.which("pg_createcluster")]
if port:
cmd += ["--port", six.text_type(port)]
@ -64,6 +74,15 @@ def cluster_create(
if datadir:
cmd += ["--datadir", datadir]
cmd += [version, name]
# initdb-specific options are passed after '--'
if allow_group_access or data_checksums or wal_segsize:
cmd += ["--"]
if allow_group_access is True:
cmd += ["--allow-group-access"]
if data_checksums is True:
cmd += ["--data-checksums"]
if wal_segsize:
cmd += ["--wal-segsize", wal_segsize]
cmdstr = " ".join([pipes.quote(c) for c in cmd])
ret = __salt__["cmd.run_all"](cmdstr, python_shell=False)
if ret.get("retcode", 0) != 0:

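For illustration, the second CLI example above assembles a command line like the
following (binary path per a stock Debian install; note the initdb-specific options
land after the ``--`` separator):
    # salt '*' postgres.cluster_create '11' data_checksums=True wal_segsize='32'
    /usr/bin/pg_createcluster 11 main -- --data-checksums --wal-segsize 32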

@ -6,10 +6,15 @@ from __future__ import absolute_import, print_function, unicode_literals
import logging
from salt.ext import six
log = logging.getLogger(__name__)
def _analyse_overview_field(content):
"""
Split the field in drbd-overview
"""
if "(" in content:
# Output like "Connected(2*)" or "UpToDate(2*)"
return content.split("(")[0], content.split("(")[0]
@ -20,9 +25,140 @@ def _analyse_overview_field(content):
return content, ""
def _count_spaces_startswith(line):
"""
Count the number of spaces before the first character
"""
if line.split("#")[0].strip() == "":
return None
spaces = 0
for i in line:
if i.isspace():
spaces += 1
else:
return spaces
def _analyse_status_type(line):
"""
Figure out the sections in drbdadm status
"""
spaces = _count_spaces_startswith(line)
if spaces is None:
return ""
switch = {
0: "RESOURCE",
2: {" disk:": "LOCALDISK", " role:": "PEERNODE", " connection:": "PEERNODE"},
4: {" peer-disk:": "PEERDISK"},
}
ret = switch.get(spaces, "UNKNOWN")
# On Python 2 these strings are unicode, so compare against six.text_type rather than str
if isinstance(ret, six.text_type):
return ret
for x in ret:
if x in line:
return ret[x]
return "UNKNOWN"
def _add_res(line):
"""
Analyse the line of local resource of ``drbdadm status``
"""
global resource
fields = line.strip().split()
if resource:
ret.append(resource)
resource = {}
resource["resource name"] = fields[0]
resource["local role"] = fields[1].split(":")[1]
resource["local volumes"] = []
resource["peer nodes"] = []
def _add_volume(line):
"""
Analyse the line of volumes of ``drbdadm status``
"""
section = _analyse_status_type(line)
fields = line.strip().split()
volume = {}
for field in fields:
volume[field.split(":")[0]] = field.split(":")[1]
if section == "LOCALDISK":
resource["local volumes"].append(volume)
else:
# 'PEERDISK'
lastpnodevolumes.append(volume)
def _add_peernode(line):
"""
Analyse the line of peer nodes of ``drbdadm status``
"""
global lastpnodevolumes
fields = line.strip().split()
peernode = {}
peernode["peernode name"] = fields[0]
# Could be role or connection:
peernode[fields[1].split(":")[0]] = fields[1].split(":")[1]
peernode["peer volumes"] = []
resource["peer nodes"].append(peernode)
lastpnodevolumes = peernode["peer volumes"]
def _empty(dummy):
"""
Action of empty line of ``drbdadm status``
"""
def _unknown_parser(line):
"""
Action of unsupported line of ``drbdadm status``
"""
global ret
ret = {"Unknown parser": line}
def _line_parser(line):
"""
Call action for different lines
"""
section = _analyse_status_type(line)
fields = line.strip().split()
switch = {
"": _empty,
"RESOURCE": _add_res,
"PEERNODE": _add_peernode,
"LOCALDISK": _add_volume,
"PEERDISK": _add_volume,
}
func = switch.get(section, _unknown_parser)
func(line)
def overview():
"""
Show the status of the DRBD devices; supports two nodes only.
drbd-overview was removed in drbd-utils-9.6.0,
so use ``status`` instead.
CLI Example:
@ -90,3 +226,58 @@ def overview():
"synched": sync,
}
return ret
# Global parser state shared by status() and its helper functions
ret = []
resource = {}
lastpnodevolumes = None
def status(name="all"):
"""
Use drbdadm to show the status of the DRBD devices;
available since drbd9.
Supports multiple nodes and multiple volumes.
:type name: str
:param name:
Resource name.
:return: drbd status of resource.
:rtype: list(dict(res))
CLI Example:
.. code-block:: bash
salt '*' drbd.status
salt '*' drbd.status name=<resource name>
"""
# Initialize for multiple times test cases
global ret
global resource
ret = []
resource = {}
cmd = ["drbdadm", "status"]
cmd.append(name)
# One possible output: (number of resource/node/vol are flexible)
# resource role:Secondary
# volume:0 disk:Inconsistent
# volume:1 disk:Inconsistent
# drbd-node1 role:Primary
# volume:0 replication:SyncTarget peer-disk:UpToDate done:10.17
# volume:1 replication:SyncTarget peer-disk:UpToDate done:74.08
# drbd-node2 role:Secondary
# volume:0 peer-disk:Inconsistent resync-suspended:peer
# volume:1 peer-disk:Inconsistent resync-suspended:peer
for line in __salt__["cmd.run"](cmd).splitlines():
_line_parser(line)
if resource:
ret.append(resource)
return ret
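A quick sketch of how the indentation-driven dispatch in ``_analyse_status_type``
classifies the sample lines from the comment above (return values follow its
``switch`` tables):
    _analyse_status_type("resource role:Secondary")          # -> "RESOURCE"
    _analyse_status_type("  volume:0 disk:Inconsistent")     # -> "LOCALDISK"
    _analyse_status_type("  drbd-node1 role:Primary")        # -> "PEERNODE"
    _analyse_status_type("    volume:0 peer-disk:UpToDate")  # -> "PEERDISK"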


@ -22,15 +22,17 @@ log = logging.getLogger(__name__)
def __virtual__():
"""
Only works on OpenBSD for now; other systems with pf (macOS, FreeBSD, etc)
need to be tested before enabling them.
Only works on OpenBSD and FreeBSD for now; other systems with pf (macOS,
FreeBSD, etc) need to be tested before enabling them.
"""
if __grains__["os"] == "OpenBSD" and salt.utils.path.which("pfctl"):
tested_oses = ["FreeBSD", "OpenBSD"]
if __grains__["os"] in tested_oses and salt.utils.path.which("pfctl"):
return True
return (
False,
"The pf execution module cannot be loaded: either the system is not OpenBSD or the pfctl binary was not found",
"The pf execution module cannot be loaded: either the OS ({}) is not "
"tested or the pfctl binary was not found".format(__grains__["os"]),
)
@ -102,7 +104,7 @@ def loglevel(level):
level:
Log level. Should be one of the following: emerg, alert, crit, err, warning, notice,
info or debug.
info or debug (OpenBSD); or none, urgent, misc, loud (FreeBSD).
CLI example:
@ -114,7 +116,20 @@ def loglevel(level):
# always made a change.
ret = {"changes": True}
all_levels = ["emerg", "alert", "crit", "err", "warning", "notice", "info", "debug"]
myos = __grains__["os"]
if myos == "FreeBSD":
all_levels = ["none", "urgent", "misc", "loud"]
else:
all_levels = [
"emerg",
"alert",
"crit",
"err",
"warning",
"notice",
"info",
"debug",
]
if level not in all_levels:
raise SaltInvocationError("Unknown loglevel: {0}".format(level))
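A short usage sketch for the per-OS level validation above (minion IDs are
illustrative; the resulting ``pfctl -x <level>`` invocations match the unit
tests further down):
    salt 'fw-freebsd' pf.loglevel urgent   # FreeBSD levels: none, urgent, misc, loud
    salt 'fw-openbsd' pf.loglevel crit     # OpenBSD levels: emerg ... debug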


@ -26,16 +26,14 @@ import salt.utils.locales
import salt.utils.platform
import salt.utils.winapi
from salt.exceptions import CommandExecutionError
# Import 3rd-party Libs
from salt.ext import six
try:
import wmi
import win32net
import pywintypes
import win32api
import win32con
import pywintypes
import win32net
import wmi
from ctypes import windll
HAS_WIN32NET_MODS = True
@ -555,29 +553,6 @@ def get_system_info():
# Lookup dicts for Win32_OperatingSystem
os_type = {1: "Work Station", 2: "Domain Controller", 3: "Server"}
# Connect to WMI
with salt.utils.winapi.Com():
conn = wmi.WMI()
system = conn.Win32_OperatingSystem()[0]
ret = {
"name": get_computer_name(),
"description": system.Description,
"install_date": system.InstallDate,
"last_boot": system.LastBootUpTime,
"os_manufacturer": system.Manufacturer,
"os_name": system.Caption,
"users": system.NumberOfUsers,
"organization": system.Organization,
"os_architecture": system.OSArchitecture,
"primary": system.Primary,
"os_type": os_type[system.ProductType],
"registered_user": system.RegisteredUser,
"system_directory": system.SystemDirectory,
"system_drive": system.SystemDrive,
"os_version": system.Version,
"windows_directory": system.WindowsDirectory,
}
# lookup dicts for Win32_ComputerSystem
domain_role = {
0: "Standalone Workstation",
@ -606,75 +581,92 @@ def get_system_info():
7: "Performance Server",
8: "Maximum",
}
# Must get chassis_sku_number this way for backwards compatibility
# system.ChassisSKUNumber is only available on Windows 10/2016 and newer
product = conn.Win32_ComputerSystemProduct()[0]
ret.update({"chassis_sku_number": product.SKUNumber})
system = conn.Win32_ComputerSystem()[0]
# Get pc_system_type depending on Windows version
if platform.release() in ["Vista", "7", "8"]:
# Types for Vista, 7, and 8
pc_system_type = pc_system_types[system.PCSystemType]
else:
# New types were added with 8.1 and newer
pc_system_types.update({8: "Slate", 9: "Maximum"})
pc_system_type = pc_system_types[system.PCSystemType]
ret.update(
{
"bootup_state": system.BootupState,
"caption": system.Caption,
"chassis_bootup_state": warning_states[system.ChassisBootupState],
"dns_hostname": system.DNSHostname,
"domain": system.Domain,
"domain_role": domain_role[system.DomainRole],
"hardware_manufacturer": system.Manufacturer,
"hardware_model": system.Model,
"network_server_mode_enabled": system.NetworkServerModeEnabled,
"part_of_domain": system.PartOfDomain,
"pc_system_type": pc_system_type,
"power_state": system.PowerState,
"status": system.Status,
"system_type": system.SystemType,
"total_physical_memory": byte_calc(system.TotalPhysicalMemory),
"total_physical_memory_raw": system.TotalPhysicalMemory,
"thermal_state": warning_states[system.ThermalState],
"workgroup": system.Workgroup,
}
)
# Get processor information
processors = conn.Win32_Processor()
ret["processors"] = 0
ret["processors_logical"] = 0
ret["processor_cores"] = 0
ret["processor_cores_enabled"] = 0
ret["processor_manufacturer"] = processors[0].Manufacturer
ret["processor_max_clock_speed"] = (
six.text_type(processors[0].MaxClockSpeed) + "MHz"
)
for system in processors:
ret["processors"] += 1
ret["processors_logical"] += system.NumberOfLogicalProcessors
ret["processor_cores"] += system.NumberOfCores
try:
ret["processor_cores_enabled"] += system.NumberOfEnabledCore
except (AttributeError, TypeError):
pass
if ret["processor_cores_enabled"] == 0:
ret.pop("processor_cores_enabled", False)
system = conn.Win32_BIOS()[0]
ret.update(
{
"hardware_serial": system.SerialNumber,
"bios_manufacturer": system.Manufacturer,
"bios_version": system.Version,
"bios_details": system.BIOSVersion,
"bios_caption": system.Caption,
"bios_description": system.Description,
# Connect to WMI
with salt.utils.winapi.Com():
conn = wmi.WMI()
system = conn.Win32_OperatingSystem()[0]
ret = {
"name": get_computer_name(),
"description": system.Description,
"install_date": system.InstallDate,
"last_boot": system.LastBootUpTime,
"os_manufacturer": system.Manufacturer,
"os_name": system.Caption,
"users": system.NumberOfUsers,
"organization": system.Organization,
"os_architecture": system.OSArchitecture,
"primary": system.Primary,
"os_type": os_type[system.ProductType],
"registered_user": system.RegisteredUser,
"system_directory": system.SystemDirectory,
"system_drive": system.SystemDrive,
"os_version": system.Version,
"windows_directory": system.WindowsDirectory,
}
)
ret["install_date"] = _convert_date_time_string(ret["install_date"])
ret["last_boot"] = _convert_date_time_string(ret["last_boot"])
system = conn.Win32_ComputerSystem()[0]
# Get pc_system_type depending on Windows version
if platform.release() in ["Vista", "7", "8"]:
# Types for Vista, 7, and 8
pc_system_type = pc_system_types[system.PCSystemType]
else:
# New types were added with 8.1 and newer
pc_system_types.update({8: "Slate", 9: "Maximum"})
pc_system_type = pc_system_types[system.PCSystemType]
ret.update(
{
"bootup_state": system.BootupState,
"caption": system.Caption,
"chassis_bootup_state": warning_states[system.ChassisBootupState],
"chassis_sku_number": system.ChassisSKUNumber,
"dns_hostname": system.DNSHostname,
"domain": system.Domain,
"domain_role": domain_role[system.DomainRole],
"hardware_manufacturer": system.Manufacturer,
"hardware_model": system.Model,
"network_server_mode_enabled": system.NetworkServerModeEnabled,
"part_of_domain": system.PartOfDomain,
"pc_system_type": pc_system_type,
"power_state": system.PowerState,
"status": system.Status,
"system_type": system.SystemType,
"total_physical_memory": byte_calc(system.TotalPhysicalMemory),
"total_physical_memory_raw": system.TotalPhysicalMemory,
"thermal_state": warning_states[system.ThermalState],
"workgroup": system.Workgroup,
}
)
# Get processor information
processors = conn.Win32_Processor()
ret["processors"] = 0
ret["processors_logical"] = 0
ret["processor_cores"] = 0
ret["processor_cores_enabled"] = 0
ret["processor_manufacturer"] = processors[0].Manufacturer
ret["processor_max_clock_speed"] = (
six.text_type(processors[0].MaxClockSpeed) + "MHz"
)
for processor in processors:
ret["processors"] += 1
ret["processors_logical"] += processor.NumberOfLogicalProcessors
ret["processor_cores"] += processor.NumberOfCores
ret["processor_cores_enabled"] += processor.NumberOfEnabledCore
bios = conn.Win32_BIOS()[0]
ret.update(
{
"hardware_serial": bios.SerialNumber,
"bios_manufacturer": bios.Manufacturer,
"bios_version": bios.Version,
"bios_details": bios.BIOSVersion,
"bios_caption": bios.Caption,
"bios_description": bios.Description,
}
)
ret["install_date"] = _convert_date_time_string(ret["install_date"])
ret["last_boot"] = _convert_date_time_string(ret["last_boot"])
return ret
@ -742,13 +734,10 @@ def set_hostname(hostname):
salt 'minion-id' system.set_hostname newhostname
"""
curr_hostname = get_hostname()
cmd = "wmic computersystem where name='{0}' call rename name='{1}'".format(
curr_hostname, hostname
)
ret = __salt__["cmd.run"](cmd=cmd)
return "successful" in ret
with salt.utils.winapi.Com():
conn = wmi.WMI()
comp = conn.Win32_ComputerSystem()[0]
return comp.Rename(Name=hostname)
def join_domain(
@ -1034,11 +1023,41 @@ def get_domain_workgroup():
"""
with salt.utils.winapi.Com():
conn = wmi.WMI()
for computer in conn.Win32_ComputerSystem():
if computer.PartOfDomain:
return {"Domain": computer.Domain}
else:
return {"Workgroup": computer.Domain}
for computer in conn.Win32_ComputerSystem():
if computer.PartOfDomain:
return {"Domain": computer.Domain}
else:
return {"Workgroup": computer.Domain}
def set_domain_workgroup(workgroup):
"""
Set the domain or workgroup the computer belongs to.
.. versionadded:: Sodium
Returns:
bool: ``True`` if successful, otherwise ``False``
CLI Example:
.. code-block:: bash
salt 'minion-id' system.set_domain_workgroup LOCAL
"""
if six.PY2:
workgroup = _to_unicode(workgroup)
# Initialize COM
with salt.utils.winapi.Com():
# Grab the first Win32_ComputerSystem object from wmi
conn = wmi.WMI()
comp = conn.Win32_ComputerSystem()[0]
# Now we can join the new workgroup
res = comp.JoinDomainOrWorkgroup(Name=workgroup.upper())
return True if not res[0] else False
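For context, ``JoinDomainOrWorkgroup`` returns a result sequence whose first
element is a Win32 status code, 0 meaning success (hence the ``not res[0]``
test); a sketch:
    res = comp.JoinDomainOrWorkgroup(Name="LOCAL")
    # res[0] == 0  -> joined successfully
    # res[0] != 0  -> Win32 error code (e.g. 5 == access denied)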
def _try_parse_datetime(time_str, fmts):


@ -206,12 +206,17 @@ class Serial(object):
def verylong_encoder(obj, context):
# Make sure we catch recursion here.
objid = id(obj)
if objid in context:
# This instance list needs to correspond to the types recursed
# in the below if/elif chain. Also update
# tests/unit/test_payload.py
if objid in context and isinstance(obj, (dict, list, tuple)):
return "<Recursion on {} with id={}>".format(
type(obj).__name__, id(obj)
)
context.add(objid)
# The isinstance checks in this if/elif chain need to be
# kept in sync with the above recursion check.
if isinstance(obj, dict):
for key, value in six.iteritems(obj.copy()):
obj[key] = verylong_encoder(value, context)

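A sketch of why the container-type guard matters: ``id()`` values are only
unique among objects alive at the same time, and CPython typically merges equal
immutable constants, so a bare ``objid in context`` test can flag scalars that
were never recursive (values are illustrative):
    # Two equal verylong constants in one code object are typically merged
    # by CPython into a single int object, so their ids collide even though
    # nothing here is recursive.
    data = {"a": 2 ** 70, "b": 2 ** 70}
    seen = set()
    for value in data.values():
        if id(value) in seen:
            print("false recursion detected")  # fires without the guard
        seen.add(id(value))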

@ -65,7 +65,6 @@ def get_pillar(
# If local pillar and we're caching, run through the cache system first
log.debug("Determining pillar cache")
if opts["pillar_cache"]:
log.info("Compiling pillar from cache")
log.debug("get_pillar using pillar cache with ext: %s", ext)
return PillarCache(
opts,


@ -242,6 +242,10 @@ def ext_pillar(
# Get the Master's instance info, primarily the region
(_, region) = _get_instance_info()
# If the Minion's region is available, use it instead
if use_grain:
region = __grains__.get("ec2", {}).get("region", region)
try:
conn = boto.ec2.connect_to_region(region)
except boto.exception.AWSConnectionError as exc:


@ -4959,6 +4959,370 @@ def replace(
return ret
def keyvalue(
name,
key=None,
value=None,
key_values=None,
separator="=",
append_if_not_found=False,
prepend_if_not_found=False,
search_only=False,
show_changes=True,
ignore_if_missing=False,
count=1,
uncomment=None,
key_ignore_case=False,
value_ignore_case=False,
):
"""
Key/Value based editing of a file.
.. versionadded:: Sodium
This function differs from ``file.replace`` in that it is able to search for
keys, followed by a customizable separator, and replace the value with the
given value. Should the value be the same as the one already in the file, no
changes will be made.
Either supply both ``key`` and ``value`` parameters, or supply a dictionary
with key / value pairs. It is an error to supply both.
name
Name of the file to search/replace in.
key
Key to search for when ensuring a value. Use in combination with a
``value`` parameter.
value
Value to set for a given key. Use in combination with a ``key``
parameter.
key_values
Dictionary of key / value pairs to search for and ensure values for.
Used to specify multiple key / values at once.
separator : "="
Separator which separates key from value.
append_if_not_found : False
Append the key/value to the end of the file if not found. Note that this
takes precedence over ``prepend_if_not_found``.
prepend_if_not_found : False
Prepend the key/value to the beginning of the file if not found. Note
that ``append_if_not_found`` takes precedence.
show_changes : True
Show a diff of the resulting removals and inserts.
ignore_if_missing : False
Return with success even if the file is not found (or not readable).
count : 1
Number of occurrences to allow (and correct), default is 1. Set to -1 to
replace all, or set to 0 to remove all lines with this key regardless
of its value.
.. note::
Any additional occurrences after ``count`` are removed.
A count of -1 will only replace occurrences that are currently
uncommented. Lines commented out will be left alone.
uncomment : None
Disregard and remove supplied leading characters when finding keys. When
set to None, lines that are commented out are left as they are.
.. note::
The argument to ``uncomment`` is not a prefix string. Rather, it is a
set of characters, each of which is stripped.
key_ignore_case : False
Keys are matched case insensitively. When a value is changed the matched
key is kept as-is.
value_ignore_case : False
Values are checked case insensitively; trying to set e.g. 'Yes' while
the current value is 'yes' will not result in changes when
``value_ignore_case`` is set to True.
An example of using ``file.keyvalue`` to ensure sshd does not allow
for root to login with a password and at the same time setting the
login-gracetime to 1 minute and disabling all forwarding:
.. code-block:: yaml
sshd_config_harden:
file.keyvalue:
- name: /etc/ssh/sshd_config
- key_values:
permitrootlogin: 'without-password'
LoginGraceTime: '1m'
DisableForwarding: 'yes'
- separator: ' '
- uncomment: '# '
- key_ignore_case: True
- append_if_not_found: True
The same example, except for only ensuring PermitRootLogin is set correctly.
Thus being able to use the shorthand ``key`` and ``value`` parameters
instead of ``key_values``.
.. code-block:: yaml
sshd_config_harden:
file.keyvalue:
- name: /etc/ssh/sshd_config
- key: PermitRootLogin
- value: without-password
- separator: ' '
- uncomment: '# '
- key_ignore_case: True
- append_if_not_found: True
.. note::
Notice how the key is not matched case-sensitively, this way it will
correctly identify both 'PermitRootLogin' as well as 'permitrootlogin'.
"""
name = os.path.expanduser(name)
# default return values
ret = {
"name": name,
"changes": {},
"pchanges": {},
"result": None,
"comment": "",
}
if not name:
return _error(ret, "Must provide name to file.keyvalue")
if key is not None and value is not None:
if type(key_values) is dict:
return _error(
ret, "file.keyvalue can not combine key_values with key and value"
)
key_values = {str(key): value}
elif type(key_values) is not dict:
return _error(
ret, "file.keyvalue key and value not supplied and key_values empty"
)
# try to open the file and only return a comment if ignore_if_missing is
# enabled, also mark as an error if not
file_contents = []
try:
with salt.utils.files.fopen(name, "r") as fd:
file_contents = fd.readlines()
except (OSError, IOError):
ret["comment"] = "unable to open {n}".format(n=name)
ret["result"] = True if ignore_if_missing else False
return ret
# used to store diff combinations and check if anything has changed
diff = []
# store the final content of the file in case it needs to be rewritten
content = []
# target format is templated like this
tmpl = "{key}{sep}{value}" + os.linesep
# number of lines changed
changes = 0
# keep track of number of times a key was updated
diff_count = {k: count for k in key_values.keys()}
# read all the lines from the file
for line in file_contents:
test_line = line.lstrip(uncomment)
did_uncomment = len(line) > len(test_line)
if key_ignore_case:
test_line = test_line.lower()
for key, value in key_values.items():
test_key = key.lower() if key_ignore_case else key
# if the line starts with the key
if test_line.startswith(test_key):
# if the testline got uncommented then the real line needs to
# be uncommented too, otherwise there might be separation on
# a character which is part of the comment set
working_line = line.lstrip(uncomment) if did_uncomment else line
# try to separate the line into its components
line_key, line_sep, line_value = working_line.partition(separator)
# if separation was unsuccessful then line_sep is empty so
# no need to keep trying. continue instead
if line_sep != separator:
continue
# start on the premises the key does not match the actual line
keys_match = False
if key_ignore_case:
if line_key.lower() == test_key:
keys_match = True
else:
if line_key == test_key:
keys_match = True
# if the key was found in the line and separation was successful
if keys_match:
# trial and error have shown it's safest to strip whitespace
# from values for the sake of matching
line_value = line_value.strip()
# make sure the value is an actual string at this point
test_value = str(value).strip()
# convert test_value and line_value to lowercase if need be
if value_ignore_case:
line_value = line_value.lower()
test_value = test_value.lower()
# values match if they are equal at this point
values_match = line_value == test_value
# in case a line had its comment removed there are some edge
# cases that need consideration where changes are needed
# regardless of values already matching.
needs_changing = False
if did_uncomment:
# irrespective of a value, if it was commented out and
# changes are still to be made, then it needs to be
# commented in
if diff_count[key] > 0:
needs_changing = True
# but if values did not match but there are really no
# changes expected anymore either then leave this line
elif not values_match:
values_match = True
else:
# a line needs to be removed if it has been seen enough
# times and was not commented out, regardless of value
if diff_count[key] == 0:
needs_changing = True
# then start checking to see if the value needs replacing
if not values_match or needs_changing:
# the old line always needs to go, so that will be
# reflected in the diff (this is the original line from
# the file being read)
diff.append("- {0}".format(line))
line = line[:0]
# any non-zero value means something needs to go back in
# its place. negative values are replacing all lines not
# commented out, positive values are having their count
# reduced by one every replacement
if diff_count[key] != 0:
# rebuild the line using the key and separator found
# and insert the correct value.
line = str(
tmpl.format(key=line_key, sep=line_sep, value=value)
)
# display a comment in case a value got converted
# into a string
if not isinstance(value, str):
diff.append(
"+ {0} (from {1} type){2}".format(
line.rstrip(), type(value).__name__, os.linesep
)
)
else:
diff.append("+ {0}".format(line))
changes += 1
# subtract one from the count if it was larger than 0, so
# next lines are removed. if it is less than 0 then count is
# ignored and all lines will be updated.
if diff_count[key] > 0:
diff_count[key] -= 1
# at this point a continue saves going through the rest of
# the keys to see if they match since this line already
# matched the current key
continue
# with the line having been checked for all keys (or matched before all
# keys needed searching), the line can be added to the content to be
# written once the last checks have been performed
content.append(line)
# if append_if_not_found was requested, then append any key/value pairs
# still having a count left on them
if append_if_not_found:
tmpdiff = []
for key, value in key_values.items():
if diff_count[key] > 0:
line = tmpl.format(key=key, sep=separator, value=value)
tmpdiff.append("+ {0}".format(line))
content.append(line)
changes += 1
if tmpdiff:
tmpdiff.insert(0, "- <EOF>" + os.linesep)
tmpdiff.append("+ <EOF>" + os.linesep)
diff.extend(tmpdiff)
# only if append_if_not_found was not set should prepend_if_not_found be
# considered, benefit of this is that the number of counts left does not
# mean there might be both a prepend and append happening
elif prepend_if_not_found:
did_diff = False
for key, value in key_values.items():
if diff_count[key] > 0:
line = tmpl.format(key=key, sep=separator, value=value)
if not did_diff:
diff.insert(0, " <SOF>" + os.linesep)
did_diff = True
diff.insert(1, "+ {0}".format(line))
content.insert(0, line)
changes += 1
# if a diff was made
if changes > 0:
# return comment of changes if test
if __opts__["test"]:
ret["comment"] = "File {n} is set to be changed ({c} lines)".format(
n=name, c=changes
)
if show_changes:
# For some reason, giving an actual diff even in test=True mode
# will be seen as both a 'changed' and 'unchanged'. this seems to
# match the other modules behaviour though
ret["pchanges"]["diff"] = "".join(diff)
# add changes to comments for now as well because of how
# stateoutputter seems to handle pchanges etc.
# See: https://github.com/saltstack/salt/issues/40208
ret["comment"] += "\nPredicted diff:\n\r\t\t"
ret["comment"] += "\r\t\t".join(diff)
ret["result"] = None
# otherwise return the actual diff lines
else:
ret["comment"] = "Changed {c} lines".format(c=changes)
if show_changes:
ret["changes"]["diff"] = "".join(diff)
else:
ret["result"] = True
return ret
# if not test=true, try and write the file
if not __opts__["test"]:
try:
with salt.utils.files.fopen(name, "w") as fd:
# write all lines to the file which was just truncated
fd.writelines(content)
except (OSError, IOError):
# return an error if the file was not writable
ret["comment"] = "{n} not writable".format(n=name)
ret["result"] = False
return ret
# if all went well, then set result to true
ret["result"] = True
return ret
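A quick way to exercise the new function ad hoc, mirroring the shorthand YAML
example in the docstring (minion ID and target file are illustrative):
    salt 'web1' state.single file.keyvalue name=/etc/ssh/sshd_config \
        key=PermitRootLogin value=without-password separator=' ' \
        uncomment='# ' key_ignore_case=True append_if_not_found=True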
def blockreplace(
name,
marker_start="#-- start managed zone --",


@ -28,7 +28,17 @@ def __virtual__():
return True
def present(version, name, port=None, encoding=None, locale=None, datadir=None):
def present(
version,
name,
port=None,
encoding=None,
locale=None,
datadir=None,
allow_group_access=None,
data_checksums=None,
wal_segsize=None,
):
"""
Ensure that the named cluster is present with the specified properties.
For more information about all of these options see man pg_createcluster(1)
@ -51,6 +61,15 @@ def present(version, name, port=None, encoding=None, locale=None, datadir=None):
datadir
Where the cluster is stored
allow_group_access
Allows users in the same group as the cluster owner to read all cluster files created by initdb
data_checksums
Use checksums on data pages
wal_segsize
Set the WAL segment size, in megabytes
.. versionadded:: 2015.XX
"""
msg = "Cluster {0}/{1} is already present".format(version, name)
@ -87,6 +106,9 @@ def present(version, name, port=None, encoding=None, locale=None, datadir=None):
locale=locale,
encoding=encoding,
datadir=datadir,
allow_group_access=allow_group_access,
data_checksums=data_checksums,
wal_segsize=wal_segsize,
)
if cluster:
msg = "The cluster {0}/{1} has been created"


@ -19,6 +19,9 @@ data directory.
- encoding: UTF8
- locale: C
- runas: postgres
- allow_group_access: True
- data_checksums: True
- wal_segsize: 32
"""
from __future__ import absolute_import, print_function, unicode_literals


@ -300,6 +300,7 @@ def export(
out = __salt__[svn_cmd](cwd, name, basename, user, username, password, rev, *opts)
ret["changes"]["new"] = name
ret["changes"]["comment"] = name + " was Exported to " + target
ret["comment"] = out
return ret


@ -24,8 +24,6 @@ import logging
# Import Salt libs
import salt.utils.functools
import salt.utils.platform
# Import 3rd party libs
from salt.ext import six
log = logging.getLogger(__name__)
@ -36,11 +34,13 @@ __virtualname__ = "system"
def __virtual__():
"""
This only supports Windows
Make sure this is Windows and that the win_system execution module is available
"""
if salt.utils.platform.is_windows() and "system.get_computer_desc" in __salt__:
return __virtualname__
return (False, "system module could not be loaded")
if not salt.utils.platform.is_windows():
return False, "win_system: Only available on Windows"
if "system.get_computer_desc" not in __salt__:
return False, "win_system: win_system execution module not available"
return __virtualname__
def computer_desc(name):
@ -172,6 +172,85 @@ def hostname(name):
return ret
def workgroup(name):
"""
.. versionadded:: Sodium
Manage the workgroup of the computer
Args:
name (str): The workgroup to set
Example:
.. code-block:: yaml
set workgroup:
system.workgroup:
- name: local
"""
ret = {"name": name.upper(), "result": False, "changes": {}, "comment": ""}
# Grab the current domain/workgroup
out = __salt__["system.get_domain_workgroup"]()
current_workgroup = (
out["Domain"]
if "Domain" in out
else out["Workgroup"]
if "Workgroup" in out
else ""
)
# Notify the user if the requested workgroup is the same
if current_workgroup.upper() == name.upper():
ret["result"] = True
ret["comment"] = "Workgroup is already set to '{0}'".format(name.upper())
return ret
# If being run in test-mode, inform the user what is supposed to happen
if __opts__["test"]:
ret["result"] = None
ret["changes"] = {}
ret["comment"] = "Computer will be joined to workgroup '{0}'".format(name)
return ret
# Set our new workgroup, and then immediately ask the machine what it
# is again to validate the change
res = __salt__["system.set_domain_workgroup"](name.upper())
out = __salt__["system.get_domain_workgroup"]()
new_workgroup = (
out["Domain"]
if "Domain" in out
else out["Workgroup"]
if "Workgroup" in out
else ""
)
# Return our results based on the changes
ret = {}
if res and current_workgroup.upper() == new_workgroup.upper():
ret["result"] = True
ret["comment"] = "The new workgroup '{0}' is the same as '{1}'".format(
current_workgroup.upper(), new_workgroup.upper()
)
elif res:
ret["result"] = True
ret["comment"] = "The workgroup has been changed from '{0}' to '{1}'".format(
current_workgroup.upper(), new_workgroup.upper()
)
ret["changes"] = {
"old": current_workgroup.upper(),
"new": new_workgroup.upper(),
}
else:
ret["result"] = False
ret["comment"] = "Unable to join the requested workgroup '{0}'".format(
new_workgroup.upper()
)
return ret
def join_domain(
name,
username=None,


@ -32,7 +32,7 @@ class CacheFactory(object):
@classmethod
def factory(cls, backend, ttl, *args, **kwargs):
log.info("Factory backend: %s", backend)
log.debug("Factory backend: %s", backend)
if backend == "memory":
return CacheDict(ttl, *args, **kwargs)
elif backend == "disk":


@ -44,7 +44,7 @@ def store_job(opts, load, event=None, mminion=None):
nocache=load.get("nocache", False)
)
except KeyError:
emsg = "Returner '{0}' does not support function prep_jid".format(job_cache)
emsg = "Returner function not found: {0}".format(prep_fstr)
log.error(emsg)
raise KeyError(emsg)
@ -53,9 +53,7 @@ def store_job(opts, load, event=None, mminion=None):
try:
mminion.returners[saveload_fstr](load["jid"], load)
except KeyError:
emsg = "Returner '{0}' does not support function save_load".format(
job_cache
)
emsg = "Returner function not found: {0}".format(saveload_fstr)
log.error(emsg)
raise KeyError(emsg)
elif salt.utils.jid.is_jid(load["jid"]):


@ -874,7 +874,7 @@ def ping_all_connected_minions(opts):
else:
tgt = "*"
form = "glob"
client.cmd(tgt, "test.ping", tgt_type=form)
client.cmd_async(tgt, "test.ping", tgt_type=form)
def get_master_key(key_user, opts, skip_perm_errors=False):

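The switch to ``cmd_async`` means the broadcast ping no longer blocks waiting
for returns; a minimal master-side sketch of the difference (illustrative):
    import salt.client
    client = salt.client.LocalClient()
    jid = client.cmd_async("*", "test.ping")  # returns a job id immediately
    # results, if wanted, can be collected later, e.g.:
    #   salt-run jobs.lookup_jid <jid>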

@ -433,7 +433,19 @@ class ReactWrap(object):
# and kwargs['kwarg'] contain the positional and keyword arguments
# that will be passed to the client interface to execute the
# desired runner/wheel/remote-exec/etc. function.
l_fun(*args, **kwargs)
ret = l_fun(*args, **kwargs)
if ret is False:
log.error(
"Reactor '%s' failed to execute %s '%s': "
"TaskPool queue is full! "
"Consider tuning reactor_worker_threads and/or"
" reactor_worker_hwm",
low["__id__"],
low["state"],
low["fun"],
)
except SystemExit:
log.warning("Reactor '%s' attempted to exit. Ignored.", low["__id__"])
except Exception: # pylint: disable=broad-except
@ -449,13 +461,13 @@ class ReactWrap(object):
"""
Wrap RunnerClient for executing :ref:`runner modules <all-salt.runners>`
"""
self.pool.fire_async(self.client_cache["runner"].low, args=(fun, kwargs))
return self.pool.fire_async(self.client_cache["runner"].low, args=(fun, kwargs))
def wheel(self, fun, **kwargs):
"""
Wrap Wheel to enable executing :ref:`wheel modules <all-salt.wheel>`
"""
self.pool.fire_async(self.client_cache["wheel"].low, args=(fun, kwargs))
return self.pool.fire_async(self.client_cache["wheel"].low, args=(fun, kwargs))
def local(self, fun, tgt, **kwargs):
"""


@ -109,6 +109,7 @@ salt/cli/salt.py:
salt/client/*:
- integration.client.test_kwarg
- integration.client.test_runner
- integration.client.test_saltcli
- integration.client.test_standard
salt/cloud/*:
@ -241,6 +242,16 @@ salt/utils/vt.py:
- integration.ssh.test_raw
- integration.ssh.test_state
salt/utils/vt.py:
- integration.cli.test_custom_module
- integration.cli.test_grains
- integration.ssh.test_grains
- integration.ssh.test_jinja_filters
- integration.ssh.test_mine
- integration.ssh.test_pillar
- integration.ssh.test_raw
- integration.ssh.test_state
salt/wheel/*:
- integration.wheel.test_client


@ -14,6 +14,7 @@
"""
from __future__ import absolute_import, print_function, unicode_literals
import logging
import os
import pytest
@ -22,6 +23,8 @@ from tests.support.case import ShellCase, SSHCase
from tests.support.helpers import flaky
from tests.support.unit import skipIf
log = logging.getLogger(__name__)
@pytest.mark.windows_whitelisted
class GrainsTargetingTest(ShellCase):
@ -69,21 +72,15 @@ class GrainsTargetingTest(ShellCase):
with salt.utils.files.fopen(key_file, "a"):
pass
import logging
log = logging.getLogger(__name__)
# ping disconnected minion and ensure it times out and returns with correct message
try:
if salt.utils.platform.is_windows():
cmd_str = '-t 1 -G "id:disconnected" test.ping'
else:
cmd_str = "-t 1 -G 'id:disconnected' test.ping"
ret = ""
for item in self.run_salt(
'-t 1 -G "id:disconnected" test.ping', timeout=40
):
if item != "disconnected:":
ret = item.strip()
break
assert ret == test_ret
finally:
os.unlink(key_file)


@ -106,7 +106,8 @@ class StdTest(ModuleCase):
"""
Test return/messaging on a disconnected minion
"""
test_ret = {"ret": "Minion did not return. [No response]", "out": "no_return"}
test_ret = "Minion did not return. [No response]"
test_out = "no_return"
# Create a minion key, but do not start the "fake" minion. This mimics
# a disconnected minion.
@ -122,8 +123,12 @@ class StdTest(ModuleCase):
num_ret = 0
for ret in cmd_iter:
num_ret += 1
self.assertEqual(ret["disconnected"]["ret"], test_ret["ret"])
self.assertEqual(ret["disconnected"]["out"], test_ret["out"])
assert ret["disconnected"]["ret"].startswith(test_ret), ret[
"disconnected"
]["ret"]
assert ret["disconnected"]["out"] == test_out, ret["disconnected"][
"out"
]
# Ensure that we entered the loop above
self.assertEqual(num_ret, 1)
@ -136,13 +141,13 @@ class StdTest(ModuleCase):
"""
test cmd with missing minion in nodegroup
"""
ret = self.client.cmd(
"minion,ghostminion", "test.ping", tgt_type="list", timeout=self.TIMEOUT
)
self.assertIn("minion", ret)
self.assertIn("ghostminion", ret)
self.assertEqual(True, ret["minion"])
self.assertEqual("Minion did not return. [No response]", ret["ghostminion"])
ret = self.client.cmd("minion,ghostminion", "test.ping", tgt_type="list")
assert "minion" in ret
assert "ghostminion" in ret
assert ret["minion"] is True
assert ret["ghostminion"].startswith(
"Minion did not return. [No response]"
), ret["ghostminion"]
@skipIf(True, "SLOWTEST skip")
def test_missing_minion_nodegroup(self):
@ -150,7 +155,9 @@ class StdTest(ModuleCase):
test cmd with missing minion in nodegroup
"""
ret = self.client.cmd("missing_minion", "test.ping", tgt_type="nodegroup")
self.assertIn("minion", ret)
self.assertIn("ghostminion", ret)
self.assertEqual(True, ret["minion"])
self.assertEqual("Minion did not return. [No response]", ret["ghostminion"])
assert "minion" in ret
assert "ghostminion" in ret
assert ret["minion"] is True
assert ret["ghostminion"].startswith(
"Minion did not return. [No response]"
), ret["ghostminion"]


@ -37,7 +37,7 @@ class DigitalOceanTest(CloudTest):
Tests the return of running the --list-images command for digitalocean
"""
image_list = self.run_cloud("--list-images {0}".format(self.PROVIDER))
self.assertIn("14.04.5 x64", [i.strip() for i in image_list])
self.assertIn("ubuntu-18-04-x64", [i.strip() for i in image_list])
def test_list_locations(self):
"""


@ -1,5 +1,5 @@
digitalocean-test:
provider: digitalocean-config
image: 14.04.5 x64
image: ubuntu-18-04-x64
size: 2GB
script_args: '-P'


@ -30,6 +30,7 @@ class BeaconsAddDeleteTest(ModuleCase):
self.beacons_config_file_path = os.path.join(
self.minion_conf_d_dir, "beacons.conf"
)
self.run_function("beacons.reset", f_timeout=300)
def tearDown(self):
if os.path.isfile(self.beacons_config_file_path):
@ -113,6 +114,7 @@ class BeaconsTest(ModuleCase):
self.__class__.beacons_config_file_path = os.path.join(
self.minion_conf_d_dir, "beacons.conf"
)
self.run_function("beacons.reset", f_timeout=300)
try:
# Add beacon to disable
self.run_function(
@ -247,6 +249,7 @@ class BeaconsWithBeaconTypeTest(ModuleCase):
self.__class__.beacons_config_file_path = os.path.join(
self.minion_conf_d_dir, "beacons.conf"
)
self.run_function("beacons.reset", f_timeout=300)
try:
# Add beacon to disable
self.run_function(
@ -271,6 +274,8 @@ class BeaconsWithBeaconTypeTest(ModuleCase):
"""
Test disabling beacons
"""
ret = self.run_function("beacons.enable", f_timeout=300)
self.assertTrue(ret["result"])
# assert beacon exists
_list = self.run_function("beacons.list", return_yaml=False)
self.assertIn("watch_apache", _list)


@ -23,6 +23,7 @@ from tests.support.case import ModuleCase
from tests.support.helpers import with_tempdir
from tests.support.mixins import SaltReturnAssertsMixin
from tests.support.runtests import RUNTIME_VARS
from tests.support.sminion import create_sminion
from tests.support.unit import skipIf
log = logging.getLogger(__name__)
@ -31,34 +32,6 @@ log = logging.getLogger(__name__)
DEFAULT_ENDING = salt.utils.stringutils.to_bytes(os.linesep)
def trim_line_end(line):
"""
Remove CRLF or LF from the end of line.
"""
if line[-2:] == salt.utils.stringutils.to_bytes("\r\n"):
return line[:-2]
elif line[-1:] == salt.utils.stringutils.to_bytes("\n"):
return line[:-1]
raise Exception("Invalid line ending")
def reline(source, dest, force=False, ending=DEFAULT_ENDING):
"""
Normalize the line endings of a file.
"""
fp, tmp = tempfile.mkstemp()
os.close(fp)
with salt.utils.files.fopen(tmp, "wb") as tmp_fd:
with salt.utils.files.fopen(source, "rb") as fd:
lines = fd.readlines()
for line in lines:
line_noend = trim_line_end(line)
tmp_fd.write(line_noend + ending)
if os.path.exists(dest) and force:
os.remove(dest)
os.rename(tmp, dest)
@pytest.mark.windows_whitelisted
class StateModuleTest(ModuleCase, SaltReturnAssertsMixin):
"""
@ -80,9 +53,16 @@ class StateModuleTest(ModuleCase, SaltReturnAssertsMixin):
fhw.write(line + ending)
destpath = os.path.join(RUNTIME_VARS.BASE_FILES, "testappend", "firstif")
_reline(destpath)
destpath = os.path.join(RUNTIME_VARS.BASE_FILES, "testappend", "secondif")
_reline(destpath)
cls.TIMEOUT = 600 if salt.utils.platform.is_windows() else 10
if salt.utils.platform.is_windows():
cls.TIMEOUT = 600
# Be sure to have everything sync'ed
sminion = create_sminion()
sminion.functions.saltutil.sync_all()
else:
cls.TIMEOUT = 10
@skipIf(True, "SLOWTEST skip")
def test_show_highstate(self):


@ -2794,6 +2794,7 @@ class FileTest(ModuleCase, SaltReturnAssertsMixin):
os.remove(dest)
@destructiveTest
@skip_if_not_root
@skipIf(IS_WINDOWS, "Windows does not report any file modes. Skipping.")
@with_tempfile()
@skipIf(True, "SLOWTEST skip")
@ -2941,7 +2942,7 @@ class FileTest(ModuleCase, SaltReturnAssertsMixin):
temp_file_stats = os.stat(tempfile)
# Normalize the mode
temp_file_mode = six.text_type(oct(stat.S_IMODE(temp_file_stats.st_mode)))
temp_file_mode = str(oct(stat.S_IMODE(temp_file_stats.st_mode)))
temp_file_mode = salt.utils.files.normalize_mode(temp_file_mode)
self.assertEqual(temp_file_mode, "4750")
@ -2994,31 +2995,88 @@ class FileTest(ModuleCase, SaltReturnAssertsMixin):
self.assertEqual(master_data, minion_data)
self.assertSaltTrueReturn(ret)
@with_tempfile()
def test_keyvalue(self, name):
"""
file.keyvalue
"""
content = dedent(
"""\
# This is the sshd server system-wide configuration file. See
# sshd_config(5) for more information.
# The strategy used for options in the default sshd_config shipped with
# OpenSSH is to specify options with their default value where
# possible, but leave them commented. Uncommented options override the
# default value.
#Port 22
#AddressFamily any
#ListenAddress 0.0.0.0
#ListenAddress ::
#HostKey /etc/ssh/ssh_host_rsa_key
#HostKey /etc/ssh/ssh_host_ecdsa_key
#HostKey /etc/ssh/ssh_host_ed25519_key
# Ciphers and keying
#RekeyLimit default none
# Logging
#SyslogFacility AUTH
#LogLevel INFO
# Authentication:
#LoginGraceTime 2m
#PermitRootLogin prohibit-password
#StrictModes yes
#MaxAuthTries 6
#MaxSessions 10
"""
)
with salt.utils.files.fopen(name, "w+") as fp_:
fp_.write(content)
ret = self.run_state(
"file.keyvalue",
name=name,
key="permitrootlogin",
value="no",
separator=" ",
uncomment=" #",
key_ignore_case=True,
)
with salt.utils.files.fopen(name, "r") as fp_:
file_contents = fp_.read()
self.assertNotIn("#PermitRootLogin", file_contents)
self.assertNotIn("prohibit-password", file_contents)
self.assertIn("PermitRootLogin no", file_contents)
self.assertSaltTrueReturn(ret)
@pytest.mark.windows_whitelisted
class BlockreplaceTest(ModuleCase, SaltReturnAssertsMixin):
marker_start = "# start"
marker_end = "# end"
content = dedent(
six.text_type(
"""\
"""\
Line 1 of block
Line 2 of block
"""
)
)
without_block = dedent(
six.text_type(
"""\
"""\
Hello world!
# comment here
"""
)
)
with_non_matching_block = dedent(
six.text_type(
"""\
"""\
Hello world!
# start
@ -3026,22 +3084,18 @@ class BlockreplaceTest(ModuleCase, SaltReturnAssertsMixin):
# end
# comment here
"""
)
)
with_non_matching_block_and_marker_end_not_after_newline = dedent(
six.text_type(
"""\
"""\
Hello world!
# start
No match here# end
# comment here
"""
)
)
with_matching_block = dedent(
six.text_type(
"""\
"""\
Hello world!
# start
@ -3050,11 +3104,9 @@ class BlockreplaceTest(ModuleCase, SaltReturnAssertsMixin):
# end
# comment here
"""
)
)
with_matching_block_and_extra_newline = dedent(
six.text_type(
"""\
"""\
Hello world!
# start
@ -3064,11 +3116,9 @@ class BlockreplaceTest(ModuleCase, SaltReturnAssertsMixin):
# end
# comment here
"""
)
)
with_matching_block_and_marker_end_not_after_newline = dedent(
six.text_type(
"""\
"""\
Hello world!
# start
@ -3076,7 +3126,6 @@ class BlockreplaceTest(ModuleCase, SaltReturnAssertsMixin):
Line 2 of block# end
# comment here
"""
)
)
content_explicit_posix_newlines = "Line 1 of block\n" "Line 2 of block\n"
content_explicit_windows_newlines = "Line 1 of block\r\n" "Line 2 of block\r\n"


@ -33,8 +33,7 @@ from unittest import TestResult
from unittest import TestSuite as _TestSuite
from unittest import TextTestResult as _TextTestResult
from unittest import TextTestRunner as _TextTestRunner
from unittest import expectedFailure, skip
from unittest import skipIf as _skipIf
from unittest import expectedFailure, skip, skipIf
from unittest.case import SkipTest, _id
from salt.ext import six
@ -394,16 +393,6 @@ class TextTestRunner(_TextTestRunner):
resultclass = TextTestResult
def skipIf(skip, reason):
from tests.support.runtests import RUNTIME_VARS
if RUNTIME_VARS.PYTEST_SESSION:
import pytest
return pytest.mark.skipif(skip, reason=reason)
return _skipIf(skip, reason)
__all__ = [
"TestLoader",
"TextTestRunner",


@ -58,6 +58,30 @@ class PostgresClusterTestCase(TestCase, LoaderModuleMockMixin):
)
self.assertEqual(cmdstr, self.cmd_run_all_mock.call_args[0][0])
def test_cluster_create_with_initdb_options(self):
deb_postgres.cluster_create(
"11",
"main",
port="5432",
locale="fr_FR",
encoding="UTF-8",
datadir="/opt/postgresql",
allow_group_access=True,
data_checksums=True,
wal_segsize="32",
)
cmdstr = (
"/usr/bin/pg_createcluster "
"--port 5432 --locale fr_FR --encoding UTF-8 "
"--datadir /opt/postgresql "
"11 main "
"-- "
"--allow-group-access "
"--data-checksums "
"--wal-segsize 32"
)
self.assertEqual(cmdstr, self.cmd_run_all_mock.call_args[0][0])
# XXX version should be a string but from cmdline you get a float
# def test_cluster_create_with_float(self):
# self.assertRaises(AssertionError, deb_postgres.cluster_create,


@ -3,13 +3,9 @@
:codeauthor: Jayesh Kariya <jayeshk@saltstack.com>
"""
# Import Python libs
from __future__ import absolute_import, print_function, unicode_literals
# Import Salt Libs
import salt.modules.drbd as drbd
# Import Salt Testing Libs
from tests.support.mixins import LoaderModuleMockMixin
from tests.support.mock import MagicMock, patch
from tests.support.unit import TestCase
@ -68,3 +64,128 @@ class DrbdTestCase(TestCase, LoaderModuleMockMixin):
)
with patch.dict(drbd.__salt__, {"cmd.run": mock}):
self.assertDictEqual(drbd.overview(), ret)
def test_status(self):
"""
Test if it shows status of the DRBD resources via drbdadm
"""
ret = [
{
"local role": "Primary",
"local volumes": [{"disk": "UpToDate"}],
"peer nodes": [
{
"peer volumes": [
{
"done": "96.47",
"peer-disk": "Inconsistent",
"replication": "SyncSource",
}
],
"peernode name": "opensuse-node2",
"role": "Secondary",
}
],
"resource name": "single",
}
]
mock = MagicMock(
return_value="""
single role:Primary
disk:UpToDate
opensuse-node2 role:Secondary
replication:SyncSource peer-disk:Inconsistent done:96.47
"""
)
with patch.dict(drbd.__salt__, {"cmd.run": mock}):
try: # python2
self.assertItemsEqual(drbd.status(), ret)
except AttributeError: # python3
self.assertCountEqual(drbd.status(), ret)
ret = [
{
"local role": "Primary",
"local volumes": [
{"disk": "UpToDate", "volume": "0"},
{"disk": "UpToDate", "volume": "1"},
],
"peer nodes": [
{
"peer volumes": [
{"peer-disk": "UpToDate", "volume": "0"},
{"peer-disk": "UpToDate", "volume": "1"},
],
"peernode name": "node2",
"role": "Secondary",
},
{
"peer volumes": [
{"peer-disk": "UpToDate", "volume": "0"},
{"peer-disk": "UpToDate", "volume": "1"},
],
"peernode name": "node3",
"role": "Secondary",
},
],
"resource name": "test",
},
{
"local role": "Primary",
"local volumes": [
{"disk": "UpToDate", "volume": "0"},
{"disk": "UpToDate", "volume": "1"},
],
"peer nodes": [
{
"peer volumes": [
{"peer-disk": "UpToDate", "volume": "0"},
{"peer-disk": "UpToDate", "volume": "1"},
],
"peernode name": "node2",
"role": "Secondary",
},
{
"peer volumes": [
{"peer-disk": "UpToDate", "volume": "0"},
{"peer-disk": "UpToDate", "volume": "1"},
],
"peernode name": "node3",
"role": "Secondary",
},
],
"resource name": "res",
},
]
mock = MagicMock(
return_value="""
res role:Primary
volume:0 disk:UpToDate
volume:1 disk:UpToDate
node2 role:Secondary
volume:0 peer-disk:UpToDate
volume:1 peer-disk:UpToDate
node3 role:Secondary
volume:0 peer-disk:UpToDate
volume:1 peer-disk:UpToDate
test role:Primary
volume:0 disk:UpToDate
volume:1 disk:UpToDate
node2 role:Secondary
volume:0 peer-disk:UpToDate
volume:1 peer-disk:UpToDate
node3 role:Secondary
volume:0 peer-disk:UpToDate
volume:1 peer-disk:UpToDate
"""
)
with patch.dict(drbd.__salt__, {"cmd.run": mock}):
try: # python2
self.assertItemsEqual(drbd.status(), ret)
except AttributeError: # python3
self.assertCountEqual(drbd.status(), ret)


@ -1,12 +1,8 @@
# -*- coding: utf-8 -*-
# Import Python libs
from __future__ import absolute_import, print_function, unicode_literals
# Import Salt Libs
import salt.modules.pf as pf
# Import Salt Testing Libs
from tests.support.mixins import LoaderModuleMockMixin
from tests.support.mock import MagicMock, patch
from tests.support.unit import TestCase
@ -64,14 +60,32 @@ class PfTestCase(TestCase, LoaderModuleMockMixin):
with patch.dict(pf.__salt__, {"cmd.run_all": mock_cmd}):
self.assertFalse(pf.disable()["changes"])
def test_loglevel(self):
def test_loglevel_freebsd(self):
"""
Tests setting a loglevel.
"""
ret = {}
ret["retcode"] = 0
mock_cmd = MagicMock(return_value=ret)
with patch.dict(pf.__salt__, {"cmd.run_all": mock_cmd}):
with patch.dict(pf.__salt__, {"cmd.run_all": mock_cmd}), patch.dict(
pf.__grains__, {"os": "FreeBSD"}
):
res = pf.loglevel("urgent")
mock_cmd.assert_called_once_with(
"pfctl -x urgent", output_loglevel="trace", python_shell=False
)
self.assertTrue(res["changes"])
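# OpenBSD's pfctl takes syslog-style debug levels such as "crit",
# while FreeBSD's pf keeps the older "urgent"/"misc"/"loud" names.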
def test_loglevel_openbsd(self):
"""
Tests setting a loglevel on OpenBSD.
"""
ret = {}
ret["retcode"] = 0
mock_cmd = MagicMock(return_value=ret)
with patch.dict(pf.__salt__, {"cmd.run_all": mock_cmd}), patch.dict(
pf.__grains__, {"os": "OpenBSD"}
):
res = pf.loglevel("crit")
mock_cmd.assert_called_once_with(
"pfctl -x crit", output_loglevel="trace", python_shell=False

View file

@ -12,13 +12,138 @@ from datetime import datetime
# Import Salt Libs
import salt.modules.win_system as win_system
import salt.utils.platform
import salt.utils.stringutils
# Import Salt Testing Libs
from tests.support.mixins import LoaderModuleMockMixin
from tests.support.mock import MagicMock, patch
from tests.support.mock import MagicMock, Mock, patch
from tests.support.unit import TestCase, skipIf
try:
import wmi
HAS_WMI = True
except ImportError:
HAS_WMI = False
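# wmi is a Windows-only package; HAS_WMI feeds the skipIf guard on the
# test case below.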
class MockWMI_ComputerSystem(object):
"""
Mock WMI Win32_ComputerSystem Class
"""
BootupState = "Normal boot"
Caption = "SALT SERVER"
ChassisBootupState = 3
ChassisSKUNumber = "3.14159"
DNSHostname = "SALT SERVER"
Domain = "WORKGROUP"
DomainRole = 2
Manufacturer = "Dell Inc."
Model = "Dell 2980"
NetworkServerModeEnabled = True
PartOfDomain = False
PCSystemType = 4
PowerState = 0
Status = "OK"
SystemType = "x64-based PC"
TotalPhysicalMemory = 17078214656
ThermalState = 3
Workgroup = "WORKGROUP"
def __init__(self):
pass
@staticmethod
def Rename(Name):
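# Always truthy: the stub treats every rename as successful.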
return Name == Name
@staticmethod
def JoinDomainOrWorkgroup(Name):
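# The wmi module hands back result codes in a sequence; 0 means success.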
return [0]
class MockWMI_OperatingSystem(object):
"""
Mock WMI Win32_OperatingSystem Class
"""
Description = "Because salt goes EVERYWHERE"
InstallDate = "20110211131800"
LastBootUpTime = "19620612120000"
Manufacturer = "Python"
Caption = "Salty"
NumberOfUsers = 7530000000
Organization = "SaltStack"
OSArchitecture = "Windows"
Primary = True
ProductType = 3
RegisteredUser = "thatch@saltstack.com"
SystemDirectory = "C:\\Windows\\System32"
SystemDrive = "C:\\"
Version = "10.0.17763"
WindowsDirectory = "C:\\Windows"
def __init__(self):
pass
class MockWMI_Processor(object):
"""
Mock WMI Win32_Processor Class
"""
Manufacturer = "Intel"
MaxClockSpeed = 2301
NumberOfLogicalProcessors = 8
NumberOfCores = 4
NumberOfEnabledCore = 4
def __init__(self):
pass
class MockWMI_BIOS(object):
"""
Mock WMI Win32_BIOS Class
"""
SerialNumber = "SALTY2011"
Manufacturer = "Dell Inc."
Version = "DELL - 10283849"
Caption = "A12"
BIOSVersion = [Version, Caption, "ASUS - 3948D"]
Description = Caption
def __init__(self):
pass
class Mockwinapi(object):
"""
Mock winapi class
"""
def __init__(self):
pass
class winapi(object):
"""
Mock winapi class
"""
def __init__(self):
pass
@staticmethod
def Com():
"""
Mock Com method
"""
return True
@skipIf(not HAS_WMI, "WMI only available on Windows")
@skipIf(not salt.utils.platform.is_windows(), "System is not Windows")
class WinSystemTestCase(TestCase, LoaderModuleMockMixin):
"""
@ -27,6 +152,15 @@ class WinSystemTestCase(TestCase, LoaderModuleMockMixin):
def setup_loader_modules(self):
modules_globals = {}
# wmi and pythoncom modules are platform specific...
mock_pythoncom = types.ModuleType(salt.utils.stringutils.to_str("pythoncom"))
sys_modules_patcher = patch.dict("sys.modules", {"pythoncom": mock_pythoncom})
sys_modules_patcher.start()
self.addCleanup(sys_modules_patcher.stop)
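# Stubbing pythoncom in sys.modules lets imports that pull it in
# succeed on non-Windows hosts.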
self.WMI = Mock()
self.addCleanup(delattr, self, "WMI")
modules_globals["wmi"] = wmi
if win_system.HAS_WIN32NET_MODS is False:
win32api = types.ModuleType(
str("win32api") # future lint: disable=blacklisted-function
@ -325,19 +459,42 @@ class WinSystemTestCase(TestCase, LoaderModuleMockMixin):
"""
Test setting a new hostname
"""
cmd_run_mock = MagicMock(return_value="Method execution successful.")
get_hostname = MagicMock(return_value="MINION")
with patch.dict(win_system.__salt__, {"cmd.run": cmd_run_mock}):
with patch.object(win_system, "get_hostname", get_hostname):
win_system.set_hostname("NEW")
with patch("salt.utils", Mockwinapi), patch(
"salt.utils.winapi.Com", MagicMock()
), patch.object(
self.WMI, "Win32_ComputerSystem", return_value=[MockWMI_ComputerSystem()]
), patch.object(
wmi, "WMI", Mock(return_value=self.WMI)
):
self.assertTrue(win_system.set_hostname("NEW"))
cmd_run_mock.assert_called_once_with(
cmd="wmic computersystem where name='MINION' call rename name='NEW'"
)
def test_get_domain_workgroup(self):
"""
Test get_domain_workgroup
"""
with patch("salt.utils", Mockwinapi), patch.object(
wmi, "WMI", Mock(return_value=self.WMI)
), patch("salt.utils.winapi.Com", MagicMock()), patch.object(
self.WMI, "Win32_ComputerSystem", return_value=[MockWMI_ComputerSystem()]
):
self.assertDictEqual(
win_system.get_domain_workgroup(), {"Workgroup": "WORKGROUP"}
)
def test_set_domain_workgroup(self):
"""
Test set_domain_workgroup
"""
with patch("salt.utils", Mockwinapi), patch.object(
wmi, "WMI", Mock(return_value=self.WMI)
), patch("salt.utils.winapi.Com", MagicMock()), patch.object(
self.WMI, "Win32_ComputerSystem", return_value=[MockWMI_ComputerSystem()]
):
self.assertTrue(win_system.set_domain_workgroup("test"))
def test_get_hostname(self):
"""
Test setting a new hostname
Test getting the current hostname
"""
cmd_run_mock = MagicMock(return_value="MINION")
with patch.dict(win_system.__salt__, {"cmd.run": cmd_run_mock}):
@ -346,7 +503,6 @@ class WinSystemTestCase(TestCase, LoaderModuleMockMixin):
cmd_run_mock.assert_called_once_with(cmd="hostname")
@skipIf(not win_system.HAS_WIN32NET_MODS, "Missing win32 libraries")
@skipIf(True, "SLOWTEST skip")
def test_get_system_info(self):
fields = [
"bios_caption",
@ -397,7 +553,22 @@ class WinSystemTestCase(TestCase, LoaderModuleMockMixin):
"windows_directory",
"workgroup",
]
ret = win_system.get_system_info()
with patch("salt.utils", Mockwinapi), patch(
"salt.utils.winapi.Com", MagicMock()
), patch.object(
self.WMI, "Win32_OperatingSystem", return_value=[MockWMI_OperatingSystem()]
), patch.object(
self.WMI, "Win32_ComputerSystem", return_value=[MockWMI_ComputerSystem()]
), patch.object(
self.WMI,
"Win32_Processor",
return_value=[MockWMI_Processor(), MockWMI_Processor()],
), patch.object(
self.WMI, "Win32_BIOS", return_value=[MockWMI_BIOS()]
), patch.object(
wmi, "WMI", Mock(return_value=self.WMI)
):
ret = win_system.get_system_info()
# Make sure all the fields are in the return
for field in fields:
self.assertIn(field, ret)

View file

@ -1,6 +1,5 @@
# -*- coding: utf-8 -*-
# Import python libs
from __future__ import absolute_import, print_function, unicode_literals
import logging
@ -14,8 +13,6 @@ import salt.serializers.json as jsonserializer
import salt.serializers.python as pythonserializer
import salt.serializers.yaml as yamlserializer
import salt.states.file as filestate
# Import salt libs
import salt.utils.files
import salt.utils.json
import salt.utils.platform
@ -24,8 +21,6 @@ import salt.utils.yaml
from salt.exceptions import CommandExecutionError
from salt.ext.six.moves import range
from tests.support.helpers import destructiveTest
# Import Salt Testing libs
from tests.support.mixins import LoaderModuleMockMixin
from tests.support.mock import MagicMock, Mock, call, mock_open, patch
from tests.support.runtests import RUNTIME_VARS

View file

@ -113,7 +113,7 @@ class SvnTestCase(TestCase, LoaderModuleMockMixin):
"new": "salt",
"comment": "salt was Exported to c://salt",
},
"comment": "",
"comment": True,
"name": "salt",
"result": True,
},

View file

@ -10,6 +10,7 @@
# Import python libs
from __future__ import absolute_import, print_function, unicode_literals
import copy
import datetime
import errno
import logging
@ -165,6 +166,38 @@ class PayloadTestCase(TestCase):
odata = payload.loads(sdata)
self.assertTrue("recursion" in odata["data"].lower())
def test_recursive_dump_load_with_identical_non_recursive_types(self):
"""
If identical objects are nested anywhere, they should not be
marked recursive unless they're one of the types we iterate
over.
"""
payload = salt.payload.Serial("msgpack")
repeating = "repeating element"
data = {
"a": "a", # Test CPython implementation detail. Short
"b": "a", # strings are interned.
"c": 13, # So are small numbers.
"d": 13,
"fnord": repeating,
# Let's go for broke and make a crazy nested structure
"repeating": [
[[[[{"one": repeating, "two": repeating}], repeating, 13, "a"]]],
repeating,
repeating,
repeating,
],
}
# We need a nested dictionary to trigger the exception
data["repeating"][0][0][0].append(data)
# If we don't deepcopy the data it gets mutated
sdata = payload.dumps(copy.deepcopy(data))
odata = payload.loads(sdata)
# Delete the recursive piece - it has served its purpose, and the
# recursion test above already verifies the marker behavior.
del odata["repeating"][0][0][0][-1], data["repeating"][0][0][0][-1]
self.assertDictEqual(odata, data)
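# For contrast, a sketch of the genuinely recursive case (assuming the
# marker behavior exercised by the recursion test above): a real cycle
# comes back as a "recursion" marker string, not the original reference.
#     data = {"name": "cycle"}
#     data["data"] = data
#     "recursion" in payload.loads(payload.dumps(data))["data"].lower()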
class SREQTestCase(TestCase):
port = 8845 # TODO: dynamically assign a port?