Merge branch 'master' into backport_53994

Daniel Wozniak 2020-04-21 21:29:01 -07:00 committed by GitHub
commit b9dbff9513
No known key found for this signature in database
GPG key ID: 4AEE18F83AFDEB23
362 changed files with 4108 additions and 737 deletions

View file

@ -13,7 +13,7 @@ Remove this section if not relevant
**[NOTICE] Bug fixes or features added to Salt require tests.**
<!-- Please review the [test documentation](https://docs.saltstack.com/en/master/topics/tutorials/writing_tests.html) for details on how to implement tests into Salt's test suite. -->
- [ ] Docs
- [ ] Changelog - https://docs.saltstack.com/en/latest/topics/development/changelog.html
- [ ] Changelog
- [ ] Tests written/updated
### Commits signed with GPG?

View file

@ -67,15 +67,6 @@ repos:
- --py-version=3.5
- --platform=linux
- id: pip-tools-compile
alias: compile-changelog-requirements
name: Changelog Py3.5 Requirements
files: ^requirements/static/changelog\.in$
args:
- -v
- --py-version=3.5
- --platform=linux
- id: pip-tools-compile
alias: compile-linux-crypto-py3.5-requirements
name: Linux Py3.5 Crypto Requirements
@ -175,15 +166,6 @@ repos:
- --py-version=3.6
- --platform=linux
- id: pip-tools-compile
alias: compile-changelog-requirements
name: Changelog Py3.6 Requirements
files: ^requirements/static/changelog\.in$
args:
- -v
- --py-version=3.6
- --platform=linux
- id: pip-tools-compile
alias: compile-linux-crypto-py3.6-requirements
name: Linux Py3.6 Crypto Requirements
@ -283,15 +265,6 @@ repos:
- --py-version=3.7
- --platform=linux
- id: pip-tools-compile
alias: compile-changelog-requirements
name: Changelog Py3.7 Requirements
files: ^requirements/static/changelog\.in$
args:
- -v
- --py-version=3.7
- --platform=linux
- id: pip-tools-compile
alias: compile-linux-crypto-py3.7-requirements
name: Linux Py3.7 Crypto Requirements

View file

@ -1,3 +1,4 @@
# Changelog
All notable changes to Salt will be documented in this file.
This changelog follows [keepachangelog](https://keepachangelog.com/en/1.0.0/) format, and is intended for human consumption.
@ -5,8 +6,6 @@ This changelog follows [keepachangelog](https://keepachangelog.com/en/1.0.0/) fo
This project versioning is _similar_ to [Semantic Versioning](https://semver.org), and is documented in [SEP 14](https://github.com/saltstack/salt-enhancement-proposals/pull/20/files).
Versions are `MAJOR.PATCH`.
# Changelog
## 3001 - Sodium
### Removed
@ -14,7 +13,9 @@ Versions are `MAJOR.PATCH`.
### Deprecated
### Changed
- [#56731](https://github.com/saltstack/salt/pull/56731) - Backport #53994
- [#56753](https://github.com/saltstack/salt/pull/56753) - Backport 51095
### Fixed
- [#56237](https://github.com/saltstack/salt/pull/56237) - Fix alphabetical ordering and remove duplicates across all documentation indexes - [@myii](https://github.com/myii)
@ -22,6 +23,7 @@ Versions are `MAJOR.PATCH`.
### Added
- [#56627](https://github.com/saltstack/salt/pull/56627) - Add new salt-ssh set_path option
- [#51379](https://github.com/saltstack/salt/pull/56792) - Backport 51379 : Adds .set_domain_workgroup to win_system
## 3000.1

View file

@ -1 +0,0 @@
Add towncrier tool to the Salt project to help manage CHANGELOG.md file.

View file

@ -677,7 +677,9 @@
# The master_roots setting configures a master-only copy of the file_roots dictionary,
# used by the state compiler.
#master_roots: /srv/salt-master
#master_roots:
# base:
# - /srv/salt-master
# When using multiple environments, each with their own top file, the
# default behaviour is an unordered merge. To prevent top files from
@ -1278,7 +1280,7 @@
############################################
# Warning: Failure to set TCP keepalives on the salt-master can result in
# not detecting the loss of a minion when the connection is lost or when
# it's host has been terminated without first closing the socket.
# its host has been terminated without first closing the socket.
# Salt's Presence System depends on this connection status to know if a minion
# is "present".
# ZeroMQ now includes support for configuring SO_KEEPALIVE if supported by

View file

@ -1202,7 +1202,7 @@ syndic_user: salt
############################################
# Warning: Failure to set TCP keepalives on the salt-master can result in
# not detecting the loss of a minion when the connection is lost or when
# it's host has been terminated without first closing the socket.
# its host has been terminated without first closing the socket.
# Salt's Presence System depends on this connection status to know if a minion
# is "present".
# ZeroMQ now includes support for configuring SO_KEEPALIVE if supported by

View file

@ -8,5 +8,5 @@
to the directory/file webserver_1/0.sls
The same applies for any subdirectories; this is especially 'tricky' when
git repos are created. Another command that typically can't render it's
git repos are created. Another command that typically can't render its
output is ```state.show_sls``` of a file in a path that contains a dot.

View file

@ -9456,7 +9456,7 @@ jQuery.fn.extend({
parentOffset = { top: 0, left: 0 },
elem = this[ 0 ];
// fixed elements are offset from window (parentOffset = {top:0, left: 0}, because it is it's only offset parent
// fixed elements are offset from window (parentOffset = {top:0, left: 0}, because it is its only offset parent
if ( jQuery.css( elem, "position" ) === "fixed" ) {
// we assume that getBoundingClientRect is available when computed position is fixed
offset = elem.getBoundingClientRect();

View file

@ -284897,7 +284897,7 @@ new
all
.TP
.B note
If you see the following error, you\(aqll need to upgrade \fBrequests\fP to atleast 2.4.2
If you see the following error, you\(aqll need to upgrade \fBrequests\fP to at least 2.4.2
.UNINDENT
.INDENT 0.0
.INDENT 3.5

View file

@ -607,8 +607,8 @@ be found by analyzing the cache log with ``memcache_debug`` enabled.
Default: ``False``
If cache storage got full, i.e. the items count exceeds the
``memcache_max_items`` value, memcache cleans up it's storage. If this option
set to ``False`` memcache removes the only one oldest value from it's storage.
``memcache_max_items`` value, memcache cleans up its storage. If this option
set to ``False`` memcache removes the only one oldest value from its storage.
If this is set to ``True`` memcache removes all the expired items and also
removes the oldest one if there are no expired items.
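The two cleanup strategies can be sketched as follows (an illustrative model only, not Salt's actual cache code; the cache is modeled here as a dict of ``key -> (stored_at, value)`` and the function name is made up):

```python
import time

def memcache_cleanup(cache, expire_seconds, full_cleanup):
    """Trim a full cache modeled as {key: (stored_at, value)}.

    full_cleanup=False: remove only the single oldest entry.
    full_cleanup=True:  remove every expired entry; if none have
                        expired, remove the oldest entry instead.
    """
    now = time.time()
    oldest = min(cache, key=lambda k: cache[k][0])
    if not full_cleanup:
        del cache[oldest]
        return
    expired = [k for k, (stored, _) in cache.items() if now - stored > expire_seconds]
    for k in expired:
        del cache[k]
    if not expired:
        del cache[oldest]
```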
@ -1468,7 +1468,7 @@ This should still be considered a less than secure option, due to the fact
that trust is based on just the requesting minion id.
.. versionchanged:: 2018.3.0
For security reasons the file must be readonly except for it's owner.
For security reasons the file must be readonly except for its owner.
If :conf_master:`permissive_pki_access` is ``True`` the owning group can also
have write access, but if Salt is running as ``root`` it must be a member of that group.
A less strict requirement also existed in previous versions.
@ -2654,14 +2654,18 @@ nothing is ignored.
``master_roots``
----------------
Default: ``/srv/salt-master``
Default: ``''``
A master-only copy of the :conf_master:`file_roots` dictionary, used by the
state compiler.
Example:
.. code-block:: yaml
master_roots: /srv/salt-master
master_roots:
base:
- /srv/salt-master
roots: Master's Local File Server
---------------------------------
@ -4151,7 +4155,7 @@ branch/tag (or from a per-remote ``env`` parameter), but if set this will
override the process of deriving the env from the branch/tag name. For example,
in the configuration below the ``foo`` branch would be assigned to the ``base``
environment, while the ``bar`` branch would need to explicitly have ``bar``
configured as it's environment to keep it from also being mapped to the
configured as its environment to keep it from also being mapped to the
``base`` environment.
.. code-block:: yaml

View file

@ -21,7 +21,7 @@ As of Salt 0.9.10 it is possible to run Salt as a non-root user. This can be
done by setting the :conf_master:`user` parameter in the master configuration
file, and restarting the ``salt-master`` service.
The minion has it's own :conf_minion:`user` parameter as well, but running the
The minion has its own :conf_minion:`user` parameter as well, but running the
minion as an unprivileged user will keep it from making changes to things like
users, installed packages, etc. unless access controls (sudo, etc.) are set up
on the minion to permit the non-root user to make the needed changes.

View file

@ -69,7 +69,7 @@ Where the args are:
Dictionary containing the load data including ``executor_opts`` passed via
cmdline/API.
``func``, ``args``, ``kwargs``:
Execution module function to be executed and it's arguments. For instance the
Execution module function to be executed and its arguments. For instance the
simplest ``direct_call`` executor just runs it as ``func(*args, **kwargs)``.
``Returns``:
``None`` if the execution sequence must be continued with the next executor.
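The contract above can be sketched as a tiny Python chain (illustrative only; the real executors live under ``salt/executors/`` and receive richer context than shown here):

```python
# Minimal sketch of the executor contract described above.
def direct_call(opts, load, func, args, kwargs):
    """Run the execution module function directly with its arguments."""
    return func(*args, **kwargs)


def run_executors(executors, opts, load, func, args, kwargs):
    """Walk the executor chain; a None return means "try the next executor"."""
    for executor in executors:
        result = executor(opts, load, func, args, kwargs)
        if result is not None:
            return result
    return None
```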

View file

@ -185,7 +185,7 @@ Connection Timeout
==================
There are several stages when deploying Salt where Salt Cloud needs to wait for
something to happen. The VM getting it's IP address, the VM's SSH port is
something to happen. The VM getting its IP address, the VM's SSH port is
available, etc.
If you find that the Salt Cloud defaults are not enough and your deployment

View file

@ -1,87 +0,0 @@
.. _changelog:
=========
Changelog
=========
With the addition of `SEP 01`_ the `keepachangelog`_ format was introduced into
our CHANGELOG.md file. The Salt project is using the `towncrier`_ tool to manage
the CHANGELOG.md file. This tool was adopted because we were previously
managing the file manually, which caused many merge conflicts. It allows us to
add changelog entries as separate files; before a release we simply run
``towncrier --version=<version>`` to compile the changelog.
.. _add-changelog:
How do I add a changelog entry
------------------------------
To add a changelog entry you will need to add a file in the `changelog` directory.
The file name should follow the syntax ``<issue #>.<type>``.
The types are in alignment with keepachangelog:
removed:
any features that have been removed
deprecated:
any features that will soon be removed
changed:
any changes in current existing features
fixed:
any bug fixes
added:
any new features added
For example if you are fixing a bug for issue number #1234 your filename would
look like this: changelog/1234.fixed. The contents of the file should contain
a summary of what you are fixing.
This does require that an issue be linked to all of the types above.
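Concretely, adding that example entry is just creating one small file (a sketch; run from the repository root, and the summary text here is made up):

```python
from pathlib import Path

# Valid suffixes mirror the keepachangelog types listed above.
VALID_TYPES = {"removed", "deprecated", "changed", "fixed", "added"}

entry = Path("changelog") / "1234.fixed"          # <issue #>.<type>
assert entry.suffix.lstrip(".") in VALID_TYPES

entry.parent.mkdir(exist_ok=True)
entry.write_text("Fix the frobnicator so it no longer drops events.\n")
```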
.. _generate-changelog:
How to generate the changelog
-----------------------------
This step is only used when we need to generate the changelog right before releasing.
You should NOT run towncrier on your PR, unless you are preparing the final PR
to update the changelog before a release.
You can run the `towncrier` tool directly or you can use nox to help run the command
and ensure towncrier is installed in a virtual environment. The instructions below
will detail both approaches.
If you want to see what output towncrier will produce before generating the change log
you can run towncrier in draft mode:
.. code-block:: bash
towncrier --draft --version=3001
.. code-block:: bash
nox -e 'changelog(draft=True)' -- 3000.1
Version will need to be set to whichever version we are about to release. Once you are
confident the draft output looks correct you can now generate the changelog by running:
.. code-block:: bash
towncrier --version=3001
.. code-block:: bash
nox -e 'changelog(draft=False)' -- 3000.1
After this is run towncrier will automatically remove all the files in the changelog directory.
.. _`SEP 01`: https://github.com/saltstack/salt-enhancement-proposals/pull/2
.. _`keepachangelog`: https://keepachangelog.com/en/1.0.0/
.. _`towncrier`: https://pypi.org/project/towncrier/

View file

@ -416,7 +416,7 @@ root of the Salt repository.
Bootstrap Script Changes
------------------------
Salt's Bootstrap Script, known as `bootstrap-salt.sh`_ in the Salt repo, has it's own
Salt's Bootstrap Script, known as `bootstrap-salt.sh`_ in the Salt repo, has its own
repository, contributing guidelines, and release cadence.
All changes to the Bootstrap Script should be made to `salt-bootstrap repo`_. Any

View file

@ -69,16 +69,6 @@ dynamic modules when states are run. To disable this behavior set
When dynamic modules are autoloaded via states, only the modules defined in the
same saltenvs as the states currently being run are synced.
Also it is possible to use the explicit ``saltutil.sync_*`` :py:mod:`state functions <salt.states.saltutil>`
to sync the modules (previously it was necessary to use the ``module.run`` state):
.. code-block:: yaml
synchronize_modules:
saltutil.sync_modules:
- refresh: True
Sync Via the saltutil Module
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@ -350,7 +340,7 @@ SDB
* :ref:`Writing SDB Modules <sdb-writing-modules>`
SDB is a way to store data that's not associated with a minion. See
:ref:`Storing Data in Other Databases <sdb>`.
Serializer
@ -394,6 +384,12 @@ pkgfiles modules handle the actual installation.
SSH Wrapper
-----------
.. toctree::
:maxdepth: 1
:glob:
ssh_wrapper
Replacement execution modules for :ref:`Salt SSH <salt-ssh>`.
Thorium
@ -420,7 +416,7 @@ the state system.
Util
----
Just utility modules to use with other modules via ``__utils__`` (see
:ref:`Dunder Dictionaries <dunder-dictionaries>`).
Wheel

View file

@ -0,0 +1,63 @@
.. _ssh-wrapper:
===========
SSH Wrapper
===========
Salt-SSH Background
===================
Salt-SSH works by creating a tarball of salt, a bunch of python modules, and a generated
short minion config. It then copies this onto the destination host over ssh, then
uses that host's local python install to run ``salt-call --local`` with any requested modules.
It does not automatically copy over states or cache files, and since it uses a local file_client,
modules that rely on :py:func:`cp.cache* <salt.modules.cp>` functionality do not work.
SSH Wrapper modules
===================
To support cp modules or other functionality which might not otherwise work in the remote environment,
a wrapper module can be created. These modules are run from the salt-master initiating the salt-ssh
command and can include logic to support the needed functionality. SSH Wrapper modules are located in
/salt/client/ssh/wrapper/ and are named the same as the execution module being extended. Any functions
defined inside of the wrapper module are called when the matching ``salt-ssh module.function``
command runs, rather than executing on the minion.
State Module example
--------------------
Running salt states on a salt-ssh minion obviously requires the state files themselves. To support this,
a state module wrapper script exists at salt/client/ssh/wrapper/state.py, and includes standard state
functions like :py:func:`apply <salt.modules.state.apply>`, :py:func:`sls <salt.modules.state.sls>`,
and :py:func:`highstate <salt.modules.state.highstate>`. When executing ``salt-ssh minion state.highstate``,
these wrapper functions are used and include the logic to walk the low_state output for that minion to
determine files used, gather needed files, tar them together, transfer the tar file to the minion over
ssh, and run a state on the ssh minion. This state then extracts the tar file, applies the needed states
and data, and cleans up the transferred files.
Wrapper Handling
----------------
From the wrapper script, any invocations of ``__salt__['some.module']()`` do not run on the master
which is running the wrapper, but are instead invoked transparently on the minion over ssh.
Should the function being called exist in the wrapper, the wrapper function will be
used instead.
One way of supporting this workflow may be to create a wrapper function which performs the needed file
copy operations. Now that files are resident on the ssh minion, the next step is to run the original
execution module function. But since that function name was already overridden by the wrapper, a
function alias can be created in the original execution module, which can then be called from the
wrapper.
Example
```````
The saltcheck module needs sls and tst files on the minion to function. The invocation of
:py:func:`saltcheck.run_state_tests <salt.modules.saltcheck.run_state_tests>` is run from
the wrapper module, and is responsible for performing the needed file copy. The
:py:func:`saltcheck <salt.modules.saltcheck>` execution module includes an alias line of
``run_state_tests_ssh = salt.utils.functools.alias_function(run_state_tests, 'run_state_tests_ssh')``
which creates an alias of ``run_state_tests`` with the name ``run_state_tests_ssh``. At the end of
the ``run_state_tests`` function in the wrapper module, it then calls
``__salt__['saltcheck.run_state_tests_ssh']()``. Since this function does not exist in the wrapper script,
the call is made on the remote minion, which, now having the needed files, runs as expected.
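The dispatch-and-alias dance can be sketched in plain Python (illustrative: the wrapper/minion split is simulated here with two dictionaries, and ``alias_function`` is a simplified stand-in for ``salt.utils.functools.alias_function``):

```python
def alias_function(func, name):
    """Simplified stand-in for salt.utils.functools.alias_function."""
    def alias(*args, **kwargs):
        return func(*args, **kwargs)
    alias.__name__ = name
    return alias

# "Execution module" side: the real work plus an alias the wrapper never shadows.
def run_state_tests():
    return "ran tests on minion"

execution_module = {
    "saltcheck.run_state_tests": run_state_tests,
    "saltcheck.run_state_tests_ssh": alias_function(run_state_tests, "run_state_tests_ssh"),
}

# "Wrapper" side: copies files, then calls the alias so the call escapes the wrapper.
def wrapper_run_state_tests(salt_dunder):
    # ... copy sls/tst files to the ssh minion here ...
    return salt_dunder["saltcheck.run_state_tests_ssh"]()

wrapper_module = {"saltcheck.run_state_tests": wrapper_run_state_tests}

def dispatch(name):
    """Wrapper functions win; anything else falls through to the "remote" minion."""
    if name in wrapper_module:
        return wrapper_module[name](execution_module)
    return execution_module[name]()
```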

View file

@ -250,7 +250,7 @@ done at the CLI:
caller = salt.client.Caller()
ret = called.cmd('event.send',
ret = caller.cmd('event.send',
'myco/event/success',
{ 'success': True,
'message': "It works!" })

View file

@ -69,7 +69,7 @@ Key Generation
--------------
We have reduced the requirements needed for `salt-key` to generate minion keys.
You're no longer required to have salt configured and it's common directories
You're no longer required to have salt configured and its common directories
created just to generate keys. This might prove useful if you're batch creating
keys to pre-load on minions.

View file

@ -237,7 +237,7 @@ Virtual Terminal
----------------
Sometimes the subprocess module is not good enough, and, in fact, not even
``askpass`` is. This virtual terminal is still in it's infant childhood, needs
``askpass`` is. This virtual terminal is still in its infant childhood, needs
quite some love, and was originally created to replace ``askpass``, but, while
developing it, it immediately proved that it could do so much more. It's
currently used by salt-cloud when bootstrapping salt on clouds which require

View file

@ -723,7 +723,7 @@ Changelog for v2015.5.10..v2015.5.11
* f49cc75049 Set correct type for master_tops config value
* **ISSUE** `#31614`_: (`frizzby`_) salt.utils.http.query() implementation contradicts it's documentation. decode arg (refs: `#31622`_)
* **ISSUE** `#31614`_: (`frizzby`_) salt.utils.http.query() implementation contradicts its documentation. decode arg (refs: `#31622`_)
* **PR** `#31622`_: (`jfindlay`_) doc/topics/tutorials/http: update query decoding docs
@ *2016-03-02 18:23:44 UTC*

View file

@ -1599,7 +1599,7 @@ Changelog for v2015.5.2..v2015.5.3
* b93dc5ef6c Linting!
* 2dd5904119 Fixes an issue where Pagerduty states/modules couldn't find it's profile in the Pillar
* 2dd5904119 Fixes an issue where Pagerduty states/modules couldn't find its profile in the Pillar
* **PR** `#24366`_: (`terminalmage`_) Use yes $'\\n' instead of printf '\\n' for pecl commands
@ *2015-06-03 21:28:58 UTC*

View file

@ -481,7 +481,7 @@ Changelog for v2015.8.10..v2015.8.11
* b7ac6c735a Moved imports to top, out of _get_moto_version function
* 02f9ba99ba Updated version check. Moved check into it's own function
* 02f9ba99ba Updated version check. Moved check into its own function
* d445026c56 Updated test to work with new moto version. Changed strings to unicode

View file

@ -208,12 +208,12 @@ Changelog for v2015.8.3..v2015.8.4
* 5a637420e8 Backport DNF support to 2015.5 branch
* **PR** `#30526`_: (`twangboy`_) Added FlushKey to make sure it's changes are saved to disk
* **PR** `#30526`_: (`twangboy`_) Added FlushKey to make sure its changes are saved to disk
@ *2016-01-22 02:33:13 UTC*
* e366f6a7fd Merge pull request `#30526`_ from twangboy/reg_flushkey
* 23085ffbbb Added FlushKey to make sure it's changes are saved to disk
* 23085ffbbb Added FlushKey to make sure its changes are saved to disk
* **PR** `#30521`_: (`basepi`_) [2015.8] Merge forward from 2015.5 to 2015.8
@ *2016-01-21 23:05:03 UTC*
@ -2990,7 +2990,7 @@ Changelog for v2015.8.3..v2015.8.4
* 7775d65089 Merge pull request `#29178`_ from whytewolf/glance_keystone_profile_fix
* 807dd426a6 Profile not being passed to keystone.endpoint_get in _auth. so if a profiles are being used, then keystone.endpoint_get will not be able to authenticate causing glance to not be able to get it's endpoint.
* 807dd426a6 Profile not being passed to keystone.endpoint_get in _auth. so if a profiles are being used, then keystone.endpoint_get will not be able to authenticate causing glance to not be able to get its endpoint.
.. _`#10157`: https://github.com/saltstack/salt/issues/10157
.. _`#11`: https://github.com/saltstack/salt/issues/11

View file

@ -642,7 +642,7 @@ Changelog for v2016.11.8..v2016.11.9
* 1b12acd303 Check type before casting
* 03fa37b445 Cast vdata to it's proper type
* 03fa37b445 Cast vdata to its proper type
* **PR** `#43863`_: (`nicholasmhughes`_) Atomicfile only copies mode and not user/group perms
@ *2017-11-10 18:47:55 UTC*

View file

@ -1338,7 +1338,7 @@ Changelog for v2016.3.1..v2016.3.2
* b7ac6c735a Moved imports to top, out of _get_moto_version function
* 02f9ba99ba Updated version check. Moved check into it's own function
* 02f9ba99ba Updated version check. Moved check into its own function
* d445026c56 Updated test to work with new moto version. Changed strings to unicode

View file

@ -649,7 +649,7 @@ Changelog for v2016.3.4..v2016.3.5
* cdbd2fbe3c Added limit-output to eliminate false packages
* **ISSUE** `#38174`_: (`NickDubelman`_) [syndic] Why can't a syndic node signal when all of it's minions have returned? (refs: `#38279`_)
* **ISSUE** `#38174`_: (`NickDubelman`_) [syndic] Why can't a syndic node signal when all of its minions have returned? (refs: `#38279`_)
* **ISSUE** `#32400`_: (`rallytime`_) Document Default Config Values (refs: `#38279`_)

View file

@ -136,7 +136,7 @@ Changelog for v2017.7.0..v2017.7.1
* 5b99d45f54 Merge pull request `#42473`_ from rallytime/bp-42436
* 82ed919803 Updating the versions function inside the manage runner to account for when a minion is offline and we are unable to determine it's version.
* 82ed919803 Updating the versions function inside the manage runner to account for when a minion is offline and we are unable to determine its version.
* **ISSUE** `#42381`_: (`zebooka`_) Git.detached broken in 2017.7.0 (refs: `#42399`_)

View file

@ -2041,7 +2041,7 @@ Changelog for v2017.7.1..v2017.7.2
* 09521602c1 Merge pull request `#42436`_ from garethgreenaway/42374_manage_runner_minion_offline
* 0fd39498c0 Updating the versions function inside the manage runner to account for when a minion is offline and we are unable to determine it's version.
* 0fd39498c0 Updating the versions function inside the manage runner to account for when a minion is offline and we are unable to determine its version.
* **ISSUE** `#42427`_: (`grichmond-salt`_) Issue Passing Variables created from load_json as Inline Pillar Between States (refs: `#42435`_)

View file

@ -1501,7 +1501,7 @@ Changelog for v2017.7.2..v2017.7.3
* 1b12acd303 Check type before casting
* 03fa37b445 Cast vdata to it's proper type
* 03fa37b445 Cast vdata to its proper type
* ed8da2450b Merge pull request `#43863`_ from nicholasmhughes/fix-atomicfile-permission-copy

View file

@ -3565,7 +3565,7 @@ Changelog for v2018.3.2..v2018.3.3
* 406efb161e Merge pull request `#48015`_ from garethgreenaway/47546_more_unicode_nonsense
* f457f9cb84 Adding a test to ensure archive.list returns the right results when a tar file contains a file with unicode in it's name.
* f457f9cb84 Adding a test to ensure archive.list returns the right results when a tar file contains a file with unicode in its name.
* 9af49bc595 Ensure member names are decoded before adding to various lists.

View file

@ -3212,7 +3212,7 @@ Changelog for v2018.3.3..v2018.3.4
* 791e3ff Use dwoz/winrm-fs for chunked downloads
* f3999e1 Move vagrant to it's own group
* f3999e1 Move vagrant to its own group
* 0662e37 Merge pull request `#49870`_ from KaiSforza/ci_actually_fail

View file

@ -2620,7 +2620,7 @@ Changelog for v2019.2.0..v2019.2.1
* 0372718 Fix lint issues on salt
* 9eab9f4 Add nox session/env/target to run lint against Salt and it's test suite
* 9eab9f4 Add nox session/env/target to run lint against Salt and its test suite
* 123f771 Lock lint requirements

View file

@ -180,6 +180,7 @@ Results can then be analyzed with `kcachegrind`_ or similar tool.
.. _`kcachegrind`: http://kcachegrind.sourceforge.net/html/Home.html
Make sure you have yappi installed.
On Windows, in the absence of kcachegrind, a simple file-based workflow to create
profiling graphs could use `gprof2dot`_, `graphviz`_ and this batch file:

View file

@ -57,7 +57,7 @@ Dependencies
============
Manipulation of the ESXi host via a Proxy Minion requires the machine running
the Proxy Minion process to have the ESXCLI package (and all of it's dependencies)
the Proxy Minion process to have the ESXCLI package (and all of its dependencies)
and the pyVmomi Python Library to be installed.
ESXi Password

View file

@ -272,7 +272,7 @@ system, such as a database.
data using a returner (instead of the local job cache on disk).
If a master has many accepted keys, it may take a long time to publish a job
because the master much first determine the matching minions and deliver
because the master must first determine the matching minions and deliver
that information back to the waiting client before the job can be published.
To mitigate this, a key cache may be enabled. This will reduce the load

View file

@ -138,7 +138,7 @@ The following configuration is an example, how a complete syslog-ng configuratio
The :py:func:`syslog_ng.reloaded <salt.states.syslog_ng.reloaded>` function can generate syslog-ng configuration from YAML. If the statement (source, destination, parser,
etc.) has a name, this function uses the id as the name, otherwise (log
statement) it's purpose is like a mandatory comment.
statement) its purpose is like a mandatory comment.
After execution this example the syslog\_ng state will generate this
file:

View file

@ -1091,7 +1091,7 @@ def docs_html(session, compress):
if pydir == "py3.4":
session.error("Sphinx only runs on Python >= 3.5")
requirements_file = "requirements/static/docs.in"
distro_constraints = ["requirements/static/{}/docs.txt".format(pydir)]
distro_constraints = ["requirements/static/{}/docs.txt".format(_get_pydir(session))]
install_command = ["--progress-bar=off", "-r", requirements_file]
for distro_constraint in distro_constraints:
install_command.extend(["--constraint", distro_constraint])
@ -1115,7 +1115,7 @@ def docs_man(session, compress, update):
if pydir == "py3.4":
session.error("Sphinx only runs on Python >= 3.5")
requirements_file = "requirements/static/docs.in"
distro_constraints = ["requirements/static/{}/docs.txt".format(pydir)]
distro_constraints = ["requirements/static/{}/docs.txt".format(_get_pydir(session))]
install_command = ["--progress-bar=off", "-r", requirements_file]
for distro_constraint in distro_constraints:
install_command.extend(["--constraint", distro_constraint])
@ -1129,24 +1129,3 @@ def docs_man(session, compress, update):
if compress:
session.run("tar", "-cJvf", "man-archive.tar.xz", "_build/man", external=True)
os.chdir("..")
@nox.session(name="changelog", python="3")
@nox.parametrize("draft", [False, True])
def changelog(session, draft):
"""
Generate salt's changelog
"""
requirements_file = "requirements/static/changelog.in"
distro_constraints = [
"requirements/static/{}/changelog.txt".format(_get_pydir(session))
]
install_command = ["--progress-bar=off", "-r", requirements_file]
for distro_constraint in distro_constraints:
install_command.extend(["--constraint", distro_constraint])
session.install(*install_command, silent=PIP_INSTALL_SILENT)
town_cmd = ["towncrier", "--version={}".format(session.posargs[0])]
if draft:
town_cmd.append("--draft")
session.run(*town_cmd)

View file

@ -16,34 +16,3 @@ line_length = 88
ensure_newline_before_comments=true
skip="salt/ext,tests/kitchen,templates"
[tool.towncrier]
package = "salt"
package_dir = "salt"
filename = "CHANGELOG.md"
directory = "changelog/"
start_string = "# Changelog\n"
[[tool.towncrier.type]]
directory = "removed"
name = "Removed"
showcontent = true
[[tool.towncrier.type]]
directory = "deprecated"
name = "Deprecated"
showcontent = true
[[tool.towncrier.type]]
directory = "changed"
name = "Changed"
showcontent = true
[[tool.towncrier.type]]
directory = "fixed"
name = "Fixed"
showcontent = true
[[tool.towncrier.type]]
directory = "added"
name = "Added"
showcontent = true

View file

@ -1 +0,0 @@
towncrier

View file

@ -1,12 +0,0 @@
#
# This file is autogenerated by pip-compile
# To update, run:
#
# pip-compile -o requirements/static/py3.5/changelog.txt -v requirements/static/changelog.in
#
click==7.1.1 # via towncrier
incremental==17.5.0 # via towncrier
jinja2==2.11.2 # via towncrier
markupsafe==1.1.1 # via jinja2
toml==0.10.0 # via towncrier
towncrier==19.2.0

View file

@ -1,12 +0,0 @@
#
# This file is autogenerated by pip-compile
# To update, run:
#
# pip-compile -o requirements/static/py3.6/changelog.txt -v requirements/static/changelog.in
#
click==7.1.1 # via towncrier
incremental==17.5.0 # via towncrier
jinja2==2.11.2 # via towncrier
markupsafe==1.1.1 # via jinja2
toml==0.10.0 # via towncrier
towncrier==19.2.0

View file

@ -1,12 +0,0 @@
#
# This file is autogenerated by pip-compile
# To update, run:
#
# pip-compile -o requirements/static/py3.7/changelog.txt -v requirements/static/changelog.in
#
click==7.1.1 # via towncrier
incremental==17.5.0 # via towncrier
jinja2==2.11.2 # via towncrier
markupsafe==1.1.1 # via jinja2
toml==0.10.0 # via towncrier
towncrier==19.2.0

View file

@ -1620,7 +1620,12 @@ class LocalClient(object):
yield {
id_: {
"out": "no_return",
"ret": "Minion did not return. [No response]",
"ret": "Minion did not return. [No response]"
"\nThe minions may not have all finished running and any "
"remaining minions will return upon completion. To look "
"up the return data for this job later, run the following "
"command:\n\n"
"salt-run jobs.lookup_jid {0}".format(jid),
"retcode": salt.defaults.exitcodes.EX_GENERIC,
}
}

View file

@ -4059,7 +4059,7 @@ def create_snapshot(name, kwargs=None, call=None):
def revert_to_snapshot(name, kwargs=None, call=None):
"""
Revert virtual machine to it's current snapshot. If no snapshot
Revert virtual machine to its current snapshot. If no snapshot
exists, the state of the virtual machine remains unchanged
.. note::

View file

@ -1869,7 +1869,7 @@ if [ "$ITYPE" = "git" ]; then
if [ "$_POST_NEON_INSTALL" -eq $BS_TRUE ]; then
echo
echowarn "Post Neon git based installations will always install salt"
echowarn "and it's dependencies using pip which will be upgraded to"
echowarn "and its dependencies using pip which will be upgraded to"
echowarn "at least v${_MINIMUM_PIP_VERSION}, and, in case the setuptools version is also"
echowarn "too old, it will be upgraded to at least v${_MINIMUM_SETUPTOOLS_VERSION}"
echo
@ -7222,7 +7222,7 @@ install_macosx_stable_post() {
set +o nounset
# shellcheck disable=SC1091
. /etc/profile
# Revert nounset to it's previous state
# Revert nounset to its previous state
set -o nounset
return 0

View file

@ -66,7 +66,7 @@ def start(cmd, output="json", interval=1):
The script engine will scrape stdout of the
given script and generate an event based on the
presence of the 'tag' key and it's value.
presence of the 'tag' key and its value.
If there is a data obj available, that will also
be fired along with the tag.

View file

@ -120,7 +120,7 @@ class ProcessTest(unittest.TestCase):
# Now kill them normally so they won't be restarted
fetch("/?exit=0", fail_ok=True)
# One process left; watch it's pid change
# One process left; watch its pid change
pid = int(fetch("/").body)
fetch("/?exit=4", fail_ok=True)
pid2 = int(fetch("/").body)

View file

@ -2941,7 +2941,7 @@ class Minion(MinionBase):
def add_periodic_callback(self, name, method, interval=1):
"""
Add a periodic callback to the event loop and call it's start method.
Add a periodic callback to the event loop and call its start method.
If a callback by the given name exists this method returns False
"""
if name in self.periodic_callbacks:

View file

@ -58,6 +58,9 @@ LEA = salt.utils.path.which_bin(
)
LE_LIVE = "/etc/letsencrypt/live/"
if salt.utils.platform.is_freebsd():
LE_LIVE = "/usr/local" + LE_LIVE
def __virtual__():
"""

View file

@ -70,7 +70,7 @@ def uuid(dev=None):
"""
try:
if dev is None:
# take the only directory in /sys/fs/bcache and return it's basename
# take the only directory in /sys/fs/bcache and return its basename
return list(salt.utils.path.os_walk("/sys/fs/bcache/"))[0][1][0]
else:
# basename of the /sys/block/{dev}/bcache/cache symlink target
@ -141,7 +141,7 @@ def detach(dev=None):
Detach a backing device(s) from a cache set
If no dev is given, all backing devices will be detached.
Detaching a backing device will flush it's write cache.
Detaching a backing device will flush its write cache.
This should leave the underlying device in a consistent state, but might take a while.
CLI example:
@ -463,7 +463,7 @@ def config_(dev=None, **kwargs):
def status(stats=False, config=False, internals=False, superblock=False, alldevs=False):
"""
Show the full status of the BCache system and optionally all it's involved devices
Show the full status of the BCache system and optionally all its involved devices
CLI example:

View file

@ -2205,7 +2205,7 @@ def set_volumes_tags(
"""
ret = {"success": True, "comment": "", "changes": {}}
running_states = ("pending", "rebooting", "running", "stopping", "stopped")
### First creeate a dictionary mapping all changes for a given volume to it's volume ID...
### First create a dictionary mapping all changes for a given volume to its volume ID...
tag_sets = {}
for tm in tag_maps:
filters = dict(tm.get("filters", {}))

View file

@ -1029,7 +1029,7 @@ def run(
redirection.
:param bool bg: If ``True``, run command in background and do not await or
deliver it's results
deliver its results
.. versionadded:: 2016.3.0
@ -2460,7 +2460,7 @@ def script(
redirection.
:param bool bg: If True, run script in background and do not await or
deliver it's results
deliver its results
:param dict env: Environment variables to be set prior to execution.
@ -4093,7 +4093,7 @@ def run_bg(
r"""
.. versionadded:: 2016.3.0
Execute the passed command in the background and return it's PID
Execute the passed command in the background and return its PID
.. note::

View file

@ -34,7 +34,15 @@ def __virtual__():
def cluster_create(
version, name="main", port=None, locale=None, encoding=None, datadir=None
version,
name="main",
port=None,
locale=None,
encoding=None,
datadir=None,
allow_group_access=None,
data_checksums=None,
wal_segsize=None,
):
"""
Adds a cluster to the Postgres server.
@ -53,7 +61,9 @@ def cluster_create(
salt '*' postgres.cluster_create '9.3' locale='fr_FR'
salt '*' postgres.cluster_create '11' data_checksums=True wal_segsize='32'
"""
cmd = [salt.utils.path.which("pg_createcluster")]
if port:
cmd += ["--port", six.text_type(port)]
@ -64,6 +74,15 @@ def cluster_create(
if datadir:
cmd += ["--datadir", datadir]
cmd += [version, name]
# initdb-specific options are passed after '--'
if allow_group_access or data_checksums or wal_segsize:
cmd += ["--"]
if allow_group_access is True:
cmd += ["--allow-group-access"]
if data_checksums is True:
cmd += ["--data-checksums"]
if wal_segsize:
cmd += ["--wal-segsize", wal_segsize]
cmdstr = " ".join([pipes.quote(c) for c in cmd])
ret = __salt__["cmd.run_all"](cmdstr, python_shell=False)
if ret.get("retcode", 0) != 0:
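The literal ``--`` separator routes the trailing flags to ``initdb`` rather than to ``pg_createcluster`` itself. A standalone sketch of the same command assembly (using ``shlex.quote`` in place of the module's ``pipes.quote``; locale/encoding/datadir handling omitted):

```python
import shlex

def build_pg_createcluster_cmd(version, name="main", port=None,
                               allow_group_access=None,
                               data_checksums=None, wal_segsize=None):
    """Standalone re-creation of the command assembly above (sketch)."""
    cmd = ["pg_createcluster"]
    if port:
        cmd += ["--port", str(port)]
    cmd += [version, name]
    # initdb-specific options must come after a literal '--' separator
    if allow_group_access or data_checksums or wal_segsize:
        cmd += ["--"]
        if allow_group_access is True:
            cmd += ["--allow-group-access"]
        if data_checksums is True:
            cmd += ["--data-checksums"]
        if wal_segsize:
            cmd += ["--wal-segsize", wal_segsize]
    return " ".join(shlex.quote(c) for c in cmd)

print(build_pg_createcluster_cmd("11", data_checksums=True, wal_segsize="32"))
# → pg_createcluster 11 main -- --data-checksums --wal-segsize 32
```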

View file

@ -6,10 +6,15 @@ from __future__ import absolute_import, print_function, unicode_literals
import logging
from salt.ext import six
log = logging.getLogger(__name__)
def _analyse_overview_field(content):
"""
Split the field in drbd-overview
"""
if "(" in content:
# Output like "Connected(2*)" or "UpToDate(2*)"
return content.split("(")[0], content.split("(")[0]
@ -20,9 +25,140 @@ def _analyse_overview_field(content):
return content, ""
def _count_spaces_startswith(line):
"""
Count the number of spaces before the first character
"""
if line.split("#")[0].strip() == "":
return None
spaces = 0
for i in line:
if i.isspace():
spaces += 1
else:
return spaces
def _analyse_status_type(line):
"""
Figure out the sections in drbdadm status
"""
spaces = _count_spaces_startswith(line)
if spaces is None:
return ""
switch = {
0: "RESOURCE",
2: {" disk:": "LOCALDISK", " role:": "PEERNODE", " connection:": "PEERNODE"},
4: {" peer-disk:": "PEERDISK"},
}
ret = switch.get(spaces, "UNKNOWN")
# isinstance(ret, str) only works when run directly; calls through Salt need six.text_type (unicode)
if isinstance(ret, six.text_type):
return ret
for x in ret:
if x in line:
return ret[x]
return "UNKNOWN"
def _add_res(line):
"""
Parse a local-resource line of ``drbdadm status``
"""
global resource
fields = line.strip().split()
if resource:
ret.append(resource)
resource = {}
resource["resource name"] = fields[0]
resource["local role"] = fields[1].split(":")[1]
resource["local volumes"] = []
resource["peer nodes"] = []
def _add_volume(line):
"""
Parse a volume line of ``drbdadm status``
"""
section = _analyse_status_type(line)
fields = line.strip().split()
volume = {}
for field in fields:
volume[field.split(":")[0]] = field.split(":")[1]
if section == "LOCALDISK":
resource["local volumes"].append(volume)
else:
# 'PEERDISK'
lastpnodevolumes.append(volume)
def _add_peernode(line):
"""
Parse a peer-node line of ``drbdadm status``
"""
global lastpnodevolumes
fields = line.strip().split()
peernode = {}
peernode["peernode name"] = fields[0]
# Could be role or connection:
peernode[fields[1].split(":")[0]] = fields[1].split(":")[1]
peernode["peer volumes"] = []
resource["peer nodes"].append(peernode)
lastpnodevolumes = peernode["peer volumes"]
def _empty(dummy):
"""
No-op for empty lines of ``drbdadm status``
"""
def _unknown_parser(line):
"""
Handle an unsupported line of ``drbdadm status``
"""
global ret
ret = {"Unknown parser": line}
def _line_parser(line):
"""
Dispatch each line to the matching parser action
"""
section = _analyse_status_type(line)
fields = line.strip().split()
switch = {
"": _empty,
"RESOURCE": _add_res,
"PEERNODE": _add_peernode,
"LOCALDISK": _add_volume,
"PEERDISK": _add_volume,
}
func = switch.get(section, _unknown_parser)
func(line)
def overview():
"""
Show the status of the DRBD devices; supports two nodes only.
drbd-overview is removed since drbd-utils-9.6.0,
use status instead.
CLI Example:
@ -90,3 +226,58 @@ def overview():
"synched": sync,
}
return ret
# Module-level state shared by the status() parser functions
ret = []
resource = {}
lastpnodevolumes = None
def status(name="all"):
"""
Using drbdadm to show status of the DRBD devices,
available in the latest drbd9.
Supports multiple nodes and multiple volumes.
:type name: str
:param name:
Resource name.
:return: drbd status of resource.
:rtype: list(dict(res))
CLI Example:
.. code-block:: bash
salt '*' drbd.status
salt '*' drbd.status name=<resource name>
"""
# Re-initialize globals so repeated calls (and test cases) start clean
global ret
global resource
ret = []
resource = {}
cmd = ["drbdadm", "status"]
cmd.append(name)
# One possible output (the number of resources/nodes/volumes may vary):
# resource role:Secondary
# volume:0 disk:Inconsistent
# volume:1 disk:Inconsistent
# drbd-node1 role:Primary
# volume:0 replication:SyncTarget peer-disk:UpToDate done:10.17
# volume:1 replication:SyncTarget peer-disk:UpToDate done:74.08
# drbd-node2 role:Secondary
# volume:0 peer-disk:Inconsistent resync-suspended:peer
# volume:1 peer-disk:Inconsistent resync-suspended:peer
for line in __salt__["cmd.run"](cmd).splitlines():
_line_parser(line)
if resource:
ret.append(resource)
return ret
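The parser above keys its dispatch on the number of leading spaces (0 = resource, 2 = local disk or peer node, 4 = peer disk). A condensed, self-contained sketch of the same idea; the sample output and resource names are illustrative, modeled on the comment in ``status()``:

```python
def parse_drbd_status(text):
    """Parse drbdadm-status-style output by indentation depth (sketch)."""
    resources, peer_vols = [], None
    for line in text.splitlines():
        if not line.strip():
            continue
        depth = len(line) - len(line.lstrip(" "))
        fields = line.split()
        if depth == 0:
            # resource header: "<name> role:<role>"
            resources.append({
                "resource name": fields[0],
                "local role": fields[1].split(":")[1],
                "local volumes": [],
                "peer nodes": [],
            })
        elif depth == 2 and "disk:" in line:
            # local volume line: "volume:N disk:<state>"
            resources[-1]["local volumes"].append(
                dict(f.split(":", 1) for f in fields)
            )
        elif depth == 2:
            # peer node line: "<node> role:..." or "<node> connection:..."
            node = {"peernode name": fields[0], "peer volumes": []}
            node.update(dict(f.split(":", 1) for f in fields[1:]))
            resources[-1]["peer nodes"].append(node)
            peer_vols = node["peer volumes"]
        elif depth == 4:
            # peer volume line, attached to the most recent peer node
            peer_vols.append(dict(f.split(":", 1) for f in fields))
    return resources

sample = (
    "res0 role:Secondary\n"
    "  volume:0 disk:Inconsistent\n"
    "  drbd-node1 role:Primary\n"
    "    volume:0 replication:SyncTarget peer-disk:UpToDate done:10.17\n"
)
parsed = parse_drbd_status(sample)
```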

View file

@ -4426,7 +4426,7 @@ def extract_hash(
else:
hash_len_expr = six.text_type(hash_len)
filename_separators = string.whitespace + r"\/"
filename_separators = string.whitespace + r"\/*"
if source_hash_name:
if not isinstance(source_hash_name, six.string_types):

View file

@ -232,7 +232,7 @@ def _create_element(name, element_type, data, server=None):
def _update_element(name, element_type, data, server=None):
"""
Update an element, including it's properties
Update an element, including its properties
"""
# Urlencode the name (names may have slashes)
name = quote(name, safe="")

View file

@ -58,6 +58,8 @@ def _gluster_output_cleanup(result):
for line in result.splitlines():
if line.startswith("gluster>"):
ret += line[9:].strip()
elif line.startswith("Welcome to gluster prompt"):
pass
else:
ret += line.strip()

View file

@ -176,7 +176,7 @@ def db_create(database, containment="NONE", new_database_options=None, **kwargs)
# cur.execute(sql)
conn.cursor().execute(sql)
except Exception as e: # pylint: disable=broad-except
return "Could not create the login: {0}".format(e)
return "Could not create the database: {0}".format(e)
finally:
if conn:
conn.autocommit(False)
@ -308,7 +308,7 @@ def role_remove(role, **kwargs):
conn.close()
return True
except Exception as e: # pylint: disable=broad-except
return "Could not create the role: {0}".format(e)
return "Could not remove the role: {0}".format(e)
def login_exists(login, domain="", **kwargs):
@ -561,4 +561,4 @@ def user_remove(username, **kwargs):
conn.close()
return True
except Exception as e: # pylint: disable=broad-except
return "Could not create the user: {0}".format(e)
return "Could not remove the user: {0}".format(e)

View file

@ -138,7 +138,7 @@ def __virtual__():
NILRT_RESTARTCHECK_STATE_PATH, exc.errno, exc.strerror
),
)
# modules.dep always exists, make sure it's restart state files also exist
# modules.dep always exists, make sure its restart state files also exist
if not (
os.path.exists(
os.path.join(NILRT_RESTARTCHECK_STATE_PATH, "modules.dep.timestamp")
@ -289,14 +289,91 @@ def refresh_db(failhard=False, **kwargs): # pylint: disable=unused-argument
return ret
def _is_testmode(**kwargs):
"""
Returns whether a test mode (noaction) operation was requested.
"""
return bool(kwargs.get("test") or __opts__.get("test"))
def _append_noaction_if_testmode(cmd, **kwargs):
"""
Adds the --noaction flag to the command when running in test mode.
"""
if bool(kwargs.get("test") or __opts__.get("test")):
if _is_testmode(**kwargs):
cmd.append("--noaction")
def _build_install_command_list(cmd_prefix, to_install, to_downgrade, to_reinstall):
"""
Builds a list of install commands to be executed in sequence in order to process
each of the to_install, to_downgrade, and to_reinstall lists.
"""
cmds = []
if to_install:
cmd = copy.deepcopy(cmd_prefix)
cmd.extend(to_install)
cmds.append(cmd)
if to_downgrade:
cmd = copy.deepcopy(cmd_prefix)
cmd.append("--force-downgrade")
cmd.extend(to_downgrade)
cmds.append(cmd)
if to_reinstall:
cmd = copy.deepcopy(cmd_prefix)
cmd.append("--force-reinstall")
cmd.extend(to_reinstall)
cmds.append(cmd)
return cmds
def _parse_reported_packages_from_install_output(output):
"""
Parses the output of "opkg install" to determine what packages would have been
installed by an operation run with the --noaction flag.
We are looking for lines like:
Installing <package> (<version>) on <target>
or
Upgrading <package> from <oldVersion> to <version> on root
"""
reported_pkgs = {}
install_pattern = re.compile(
r"Installing\s(?P<package>.*?)\s\((?P<version>.*?)\)\son\s(?P<target>.*?)"
)
upgrade_pattern = re.compile(
r"Upgrading\s(?P<package>.*?)\sfrom\s(?P<oldVersion>.*?)\sto\s(?P<version>.*?)\son\s(?P<target>.*?)"
)
for line in salt.utils.itertools.split(output, "\n"):
match = install_pattern.match(line)
if match is None:
match = upgrade_pattern.match(line)
if match:
reported_pkgs[match.group("package")] = match.group("version")
return reported_pkgs
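The two patterns above can be exercised against a synthetic opkg transcript; the package names, versions, and targets below are made up for illustration:

```python
import re

# Patterns mirrored from the module above
install_pattern = re.compile(
    r"Installing\s(?P<package>.*?)\s\((?P<version>.*?)\)\son\s(?P<target>.*?)"
)
upgrade_pattern = re.compile(
    r"Upgrading\s(?P<package>.*?)\sfrom\s(?P<oldVersion>.*?)\sto\s(?P<version>.*?)\son\s(?P<target>.*?)"
)

sample = (
    "Installing vim (8.1) on root\n"
    "Upgrading curl from 7.66.0 to 7.68.0 on root\n"
    "Configuring vim.\n"
)

reported = {}
for line in sample.splitlines():
    match = install_pattern.match(line) or upgrade_pattern.match(line)
    if match:
        reported[match.group("package")] = match.group("version")

print(reported)
```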
def _execute_install_command(cmd, parse_output, errors, parsed_packages):
"""
Executes a command for the install operation.
If the command fails, its error output will be appended to the errors list.
If the command succeeds and parse_output is true, updated packages will be appended
to the parsed_packages dictionary.
"""
out = __salt__["cmd.run_all"](cmd, output_loglevel="trace", python_shell=False)
if out["retcode"] != 0:
if out["stderr"]:
errors.append(out["stderr"])
else:
errors.append(out["stdout"])
elif parse_output:
parsed_packages.update(
_parse_reported_packages_from_install_output(out["stdout"])
)
def install(
name=None, refresh=False, pkgs=None, sources=None, reinstall=False, **kwargs
):
@ -440,24 +517,9 @@ def install(
# This should cause the command to fail.
to_install.append(pkgstr)
cmds = []
if to_install:
cmd = copy.deepcopy(cmd_prefix)
cmd.extend(to_install)
cmds.append(cmd)
if to_downgrade:
cmd = copy.deepcopy(cmd_prefix)
cmd.append("--force-downgrade")
cmd.extend(to_downgrade)
cmds.append(cmd)
if to_reinstall:
cmd = copy.deepcopy(cmd_prefix)
cmd.append("--force-reinstall")
cmd.extend(to_reinstall)
cmds.append(cmd)
cmds = _build_install_command_list(
cmd_prefix, to_install, to_downgrade, to_reinstall
)
if not cmds:
return {}
@ -466,16 +528,17 @@ def install(
refresh_db()
errors = []
is_testmode = _is_testmode(**kwargs)
test_packages = {}
for cmd in cmds:
out = __salt__["cmd.run_all"](cmd, output_loglevel="trace", python_shell=False)
if out["retcode"] != 0:
if out["stderr"]:
errors.append(out["stderr"])
else:
errors.append(out["stdout"])
_execute_install_command(cmd, is_testmode, errors, test_packages)
__context__.pop("pkg.list_pkgs", None)
new = list_pkgs()
if is_testmode:
new = copy.deepcopy(new)
new.update(test_packages)
ret = salt.utils.data.compare_dicts(old, new)
if pkg_type == "file" and reinstall:
@ -513,6 +576,26 @@ def install(
return ret
def _parse_reported_packages_from_remove_output(output):
"""
Parses the output of "opkg remove" to determine what packages would have been
removed by an operation run with the --noaction flag.
We are looking for lines like
Removing <package> (<version>) from <target>...
"""
reported_pkgs = {}
remove_pattern = re.compile(
r"Removing\s(?P<package>.*?)\s\((?P<version>.*?)\)\sfrom\s(?P<target>.*?)..."
)
for line in salt.utils.itertools.split(output, "\n"):
match = remove_pattern.match(line)
if match:
reported_pkgs[match.group("package")] = ""
return reported_pkgs
def remove(name=None, pkgs=None, **kwargs): # pylint: disable=unused-argument
"""
Remove packages using ``opkg remove``.
@ -576,6 +659,9 @@ def remove(name=None, pkgs=None, **kwargs): # pylint: disable=unused-argument
__context__.pop("pkg.list_pkgs", None)
new = list_pkgs()
if _is_testmode(**kwargs):
reportedPkgs = _parse_reported_packages_from_remove_output(out["stdout"])
new = {k: v for k, v in new.items() if k not in reportedPkgs}
ret = salt.utils.data.compare_dicts(old, new)
rs_result = _get_restartcheck_result(errors)

View file

@ -22,15 +22,17 @@ log = logging.getLogger(__name__)
def __virtual__():
"""
Only works on OpenBSD for now; other systems with pf (macOS, FreeBSD, etc)
need to be tested before enabling them.
Only works on OpenBSD and FreeBSD for now; other systems with pf (e.g.
macOS) need to be tested before enabling them.
"""
if __grains__["os"] == "OpenBSD" and salt.utils.path.which("pfctl"):
tested_oses = ["FreeBSD", "OpenBSD"]
if __grains__["os"] in tested_oses and salt.utils.path.which("pfctl"):
return True
return (
False,
"The pf execution module cannot be loaded: either the system is not OpenBSD or the pfctl binary was not found",
"The pf execution module cannot be loaded: either the OS ({}) is not "
"tested or the pfctl binary was not found".format(__grains__["os"]),
)
@ -102,7 +104,7 @@ def loglevel(level):
level:
Log level. Should be one of the following: emerg, alert, crit, err, warning, notice,
info or debug.
info or debug (OpenBSD); or none, urgent, misc, loud (FreeBSD).
CLI example:
@ -114,7 +116,20 @@ def loglevel(level):
# always made a change.
ret = {"changes": True}
all_levels = ["emerg", "alert", "crit", "err", "warning", "notice", "info", "debug"]
myos = __grains__["os"]
if myos == "FreeBSD":
all_levels = ["none", "urgent", "misc", "loud"]
else:
all_levels = [
"emerg",
"alert",
"crit",
"err",
"warning",
"notice",
"info",
"debug",
]
if level not in all_levels:
raise SaltInvocationError("Unknown loglevel: {0}".format(level))

View file

@ -66,7 +66,7 @@ def get(
The value specified by this option will be returned if the desired
pillar key does not exist.
If a default value is specified, then it will be an empty string,
If a default value is not specified, then it will be an empty string,
unless :conf_minion:`pillar_raise_on_missing` is set to ``True``, in
which case an error will be raised.

View file

@ -29,7 +29,7 @@ def __virtual__():
def _parse_args(arg):
"""
yamlify `arg` and ensure it's outermost datatype is a list
yamlify `arg` and ensure its outermost datatype is a list
"""
yaml_args = salt.utils.args.yamlify_arg(arg)

View file

@ -6,6 +6,9 @@ Utility functions for use with or in SLS files
# Import Python libs
from __future__ import absolute_import, print_function, unicode_literals
import os
import textwrap
# Import Salt libs
import salt.exceptions
import salt.loader
@ -243,3 +246,184 @@ def deserialize(serializer, stream_or_string, **mod_kwargs):
"""
kwargs = salt.utils.args.clean_kwargs(**mod_kwargs)
return _get_serialize_fn(serializer, "deserialize")(stream_or_string, **kwargs)
def banner(
width=72,
commentchar="#",
borderchar="#",
blockstart=None,
blockend=None,
title=None,
text=None,
newline=False,
):
"""
Create a standardized comment block to include in a templated file.
A common technique in configuration management is to include a comment
block in managed files, warning users not to modify the file. This
function simplifies and standardizes those comment blocks.
:param width: The width, in characters, of the banner. Default is 72.
:param commentchar: The character to be used in the starting position of
each line. This value should be set to a valid line comment character
for the syntax of the file in which the banner is being inserted.
Multiple character sequences, like '//' are supported.
If the file's syntax does not support line comments (such as XML),
use the ``blockstart`` and ``blockend`` options.
:param borderchar: The character to use in the top and bottom border of
the comment box. Must be a single character.
:param blockstart: The character sequence to use at the beginning of a
block comment. Should be used in conjunction with ``blockend``
:param blockend: The character sequence to use at the end of a
block comment. Should be used in conjunction with ``blockstart``
:param title: The first field of the comment block. This field appears
centered at the top of the box.
:param text: The second field of the comment block. This field appears
left-justified at the bottom of the box.
:param newline: Boolean value to indicate whether the comment block should
end with a newline. Default is ``False``.
**Example 1 - the default banner:**
.. code-block:: jinja
{{ salt['slsutil.banner']() }}
.. code-block:: none
########################################################################
# #
# THIS FILE IS MANAGED BY SALT - DO NOT EDIT #
# #
# The contents of this file are managed by Salt. Any changes to this #
# file may be overwritten automatically and without warning. #
########################################################################
**Example 2 - a Javadoc-style banner:**
.. code-block:: jinja
{{ salt['slsutil.banner'](commentchar=' *', borderchar='*', blockstart='/**', blockend=' */') }}
.. code-block:: none
/**
***********************************************************************
* *
* THIS FILE IS MANAGED BY SALT - DO NOT EDIT *
* *
* The contents of this file are managed by Salt. Any changes to this *
* file may be overwritten automatically and without warning. *
***********************************************************************
*/
**Example 3 - custom text:**
.. code-block:: jinja
{% set copyright='This file may not be copied or distributed without permission of SaltStack, Inc.' %}
{{ salt['slsutil.banner'](title='Copyright 2019 SaltStack, Inc.', text=copyright, width=60) }}
.. code-block:: none
############################################################
# #
# Copyright 2019 SaltStack, Inc. #
# #
# This file may not be copied or distributed without #
# permission of SaltStack, Inc. #
############################################################
"""
if title is None:
title = "THIS FILE IS MANAGED BY SALT - DO NOT EDIT"
if text is None:
text = (
"The contents of this file are managed by Salt. "
"Any changes to this file may be overwritten "
"automatically and without warning."
)
# Set up some typesetting variables
ledge = commentchar.rstrip()
redge = commentchar.strip()
lgutter = ledge + " "
rgutter = " " + redge
textwidth = width - len(lgutter) - len(rgutter)
# Check the width
if textwidth <= 0:
raise salt.exceptions.ArgumentValueError("Width is too small to render banner")
# Define the static elements
border_line = (
commentchar + borderchar[:1] * (width - len(ledge) - len(redge)) + redge
)
spacer_line = commentchar + " " * (width - len(commentchar) * 2) + commentchar
# Create the banner
wrapper = textwrap.TextWrapper(width=textwidth)
block = list()
if blockstart is not None:
block.append(blockstart)
block.append(border_line)
block.append(spacer_line)
for line in wrapper.wrap(title):
block.append(lgutter + line.center(textwidth) + rgutter)
block.append(spacer_line)
for line in wrapper.wrap(text):
block.append(lgutter + line + " " * (textwidth - len(line)) + rgutter)
block.append(border_line)
if blockend is not None:
block.append(blockend)
# Convert list to multi-line string
result = os.linesep.join(block)
# Add a newline character to the end of the banner
if newline:
return result + os.linesep
return result
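The gutter and width arithmetic above can be checked in isolation; this sketch assumes ``width=40`` and ``'#'`` for both the comment and border characters:

```python
import textwrap

# Re-run of the typesetting variables from banner() with assumed values
width, commentchar, borderchar = 40, "#", "#"
ledge, redge = commentchar.rstrip(), commentchar.strip()
lgutter, rgutter = ledge + " ", " " + redge
textwidth = width - len(lgutter) - len(rgutter)  # 36 columns of wrapped text

# Every rendered row comes out exactly `width` characters wide
border_line = commentchar + borderchar[:1] * (width - len(ledge) - len(redge)) + redge
spacer_line = commentchar + " " * (width - len(commentchar) * 2) + commentchar
title_rows = [
    lgutter + line.center(textwidth) + rgutter
    for line in textwrap.TextWrapper(width=textwidth).wrap("DO NOT EDIT")
]
```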
def boolstr(value, true="true", false="false"):
"""
Convert a boolean value into a string. This function is
intended to be used from within file templates to provide
an easy way to take boolean values stored in Pillars or
Grains, and write them out in the appropriate syntax for
a particular file template.
:param value: The boolean value to be converted
:param true: The value to return if ``value`` is ``True``
:param false: The value to return if ``value`` is ``False``
In this example, a pillar named ``smtp:encrypted`` stores a boolean
value, but the template that uses that value needs ``yes`` or ``no``
to be written, based on the boolean value.
*Note: this is written on two lines for clarity. The same result
could be achieved in one line.*
.. code-block:: jinja
{% set encrypted = salt['pillar.get']('smtp:encrypted', false) %}
use_tls: {{ salt['slsutil.boolstr'](encrypted, 'yes', 'no') }}
Result (assuming the value is ``True``):
.. code-block:: none
use_tls: yes
"""
if value:
return true
return false
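A standalone mirror of ``boolstr`` showing the docstring's use case, with the pillar value assumed to be ``True``:

```python
# Mirror of the helper above for a runnable demo
def boolstr(value, true="true", false="false"):
    return true if value else false

encrypted = True  # e.g. the value of a 'smtp:encrypted' pillar
line = "use_tls: " + boolstr(encrypted, "yes", "no")
print(line)
# → use_tls: yes
```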

View file

@ -33,7 +33,7 @@ def __virtual__():
def attr(key, value=None):
"""
Access/write a SysFS attribute.
If the attribute is a symlink, it's destination is returned
If the attribute is a symlink, its destination is returned
:return: value or bool

View file

@ -151,7 +151,7 @@ class Buildable(object):
def build(self):
"""
Builds the textual representation of the whole configuration object
with it's children.
with its children.
"""
header = self.build_header()
body = self.build_body()
@ -457,7 +457,7 @@ def _get_type_id_options(name, configuration):
def _expand_one_key_dictionary(_dict):
"""
Returns the only one key and it's value from a dictionary.
Returns the only one key and its value from a dictionary.
"""
key = next(six.iterkeys(_dict))
value = _dict[key]

View file

@ -7,7 +7,7 @@ Functions to interact with Hashicorp Vault.
:platform: all
:note: If you see the following error, you'll need to upgrade ``requests`` to atleast 2.4.2
:note: If you see the following error, you'll need to upgrade ``requests`` to at least 2.4.2
.. code-block:: text

View file

@ -292,7 +292,7 @@ def grant_access_to_shared_folders_to(name, users=None):
"""
Grant access to auto-mounted shared folders to the users.
User is specified by it's name. To grant access for several users use argument `users`.
User is specified by its name. To grant access for several users use argument `users`.
Access will be denied to the users not listed in `users` argument.
See https://www.virtualbox.org/manual/ch04.html#sf_mount_auto for more details.

View file

@ -10948,7 +10948,7 @@ def register_vm(name, datacenter, placement, vmx_path, service_instance=None):
@gets_service_instance_via_proxy
def power_on_vm(name, datacenter=None, service_instance=None):
"""
Powers on a virtual machine specified by it's name.
Powers on a virtual machine specified by its name.
name
Name of the virtual machine
@ -10989,7 +10989,7 @@ def power_on_vm(name, datacenter=None, service_instance=None):
@gets_service_instance_via_proxy
def power_off_vm(name, datacenter=None, service_instance=None):
"""
Powers off a virtual machine specified by it's name.
Powers off a virtual machine specified by its name.
name
Name of the virtual machine

View file

@ -1201,7 +1201,7 @@ def remove(path, force=False):
else:
for name in os.listdir(path):
item = "{0}\\{1}".format(path, name)
# If it's a normal directory, recurse to remove it's contents
# If it's a normal directory, recurse to remove its contents
remove(item, force)
# rmdir will work now because the directory is empty

View file

@ -2651,7 +2651,7 @@ class _policy_info(object):
########## LEGACY AUDIT POLICIES ##########
# To use these set the following policy to DISABLED
# "Audit: Force audit policy subcategory settings (Windows Vista or later) to override audit policy category settings"
# or it's alias...
# or its alias...
# SceNoApplyLegacyAuditPolicy
"AuditAccountLogon": {
"Policy": "Audit account logon events",
@ -2750,7 +2750,7 @@ class _policy_info(object):
# "Audit: Force audit policy subcategory settings (Windows
# Vista or later) to override audit policy category
# settings"
# or it's alias...
# or its alias...
# SceNoApplyLegacyAuditPolicy
# Account Logon Section
"AuditCredentialValidation": {
@ -6908,7 +6908,7 @@ def _checkAllAdmxPolicies(
# Make sure we're passing the full policy name
# This issue was found when setting the `Allow Telemetry` setting
# All following states would show a change in this setting
# When the state does it's first `lgpo.get` it would return `AllowTelemetry`
# When the state does its first `lgpo.get` it would return `AllowTelemetry`
# On the second run, it would return `Allow Telemetry`
# This makes sure we're always returning the full_name when required
if (
@ -9221,7 +9221,7 @@ def _get_policy_adm_setting(
# Make sure we're passing the full policy name
# This issue was found when setting the `Allow Telemetry` setting
# All following states would show a change in this setting
# When the state does it's first `lgpo.get` it would return `AllowTelemetry`
# When the state does its first `lgpo.get` it would return `AllowTelemetry`
# On the second run, it would return `Allow Telemetry`
# This makes sure we're always returning the full_name when required
if this_policy_name in policy_vals[this_policy_namespace][this_policy_name]:

View file

@ -26,16 +26,14 @@ import salt.utils.locales
import salt.utils.platform
import salt.utils.winapi
from salt.exceptions import CommandExecutionError
# Import 3rd-party Libs
from salt.ext import six
try:
import wmi
import win32net
import pywintypes
import win32api
import win32con
import pywintypes
import win32net
import wmi
from ctypes import windll
HAS_WIN32NET_MODS = True
@ -555,29 +553,6 @@ def get_system_info():
# Lookup dicts for Win32_OperatingSystem
os_type = {1: "Work Station", 2: "Domain Controller", 3: "Server"}
# Connect to WMI
with salt.utils.winapi.Com():
conn = wmi.WMI()
system = conn.Win32_OperatingSystem()[0]
ret = {
"name": get_computer_name(),
"description": system.Description,
"install_date": system.InstallDate,
"last_boot": system.LastBootUpTime,
"os_manufacturer": system.Manufacturer,
"os_name": system.Caption,
"users": system.NumberOfUsers,
"organization": system.Organization,
"os_architecture": system.OSArchitecture,
"primary": system.Primary,
"os_type": os_type[system.ProductType],
"registered_user": system.RegisteredUser,
"system_directory": system.SystemDirectory,
"system_drive": system.SystemDrive,
"os_version": system.Version,
"windows_directory": system.WindowsDirectory,
}
# lookup dicts for Win32_ComputerSystem
domain_role = {
0: "Standalone Workstation",
@ -606,75 +581,92 @@ def get_system_info():
7: "Performance Server",
8: "Maximum",
}
# Must get chassis_sku_number this way for backwards compatibility
# system.ChassisSKUNumber is only available on Windows 10/2016 and newer
product = conn.Win32_ComputerSystemProduct()[0]
ret.update({"chassis_sku_number": product.SKUNumber})
system = conn.Win32_ComputerSystem()[0]
# Get pc_system_type depending on Windows version
if platform.release() in ["Vista", "7", "8"]:
# Types for Vista, 7, and 8
pc_system_type = pc_system_types[system.PCSystemType]
else:
# New types were added with 8.1 and newer
pc_system_types.update({8: "Slate", 9: "Maximum"})
pc_system_type = pc_system_types[system.PCSystemType]
ret.update(
{
"bootup_state": system.BootupState,
"caption": system.Caption,
"chassis_bootup_state": warning_states[system.ChassisBootupState],
"dns_hostname": system.DNSHostname,
"domain": system.Domain,
"domain_role": domain_role[system.DomainRole],
"hardware_manufacturer": system.Manufacturer,
"hardware_model": system.Model,
"network_server_mode_enabled": system.NetworkServerModeEnabled,
"part_of_domain": system.PartOfDomain,
"pc_system_type": pc_system_type,
"power_state": system.PowerState,
"status": system.Status,
"system_type": system.SystemType,
"total_physical_memory": byte_calc(system.TotalPhysicalMemory),
"total_physical_memory_raw": system.TotalPhysicalMemory,
"thermal_state": warning_states[system.ThermalState],
"workgroup": system.Workgroup,
}
)
# Get processor information
processors = conn.Win32_Processor()
ret["processors"] = 0
ret["processors_logical"] = 0
ret["processor_cores"] = 0
ret["processor_cores_enabled"] = 0
ret["processor_manufacturer"] = processors[0].Manufacturer
ret["processor_max_clock_speed"] = (
six.text_type(processors[0].MaxClockSpeed) + "MHz"
)
for system in processors:
ret["processors"] += 1
ret["processors_logical"] += system.NumberOfLogicalProcessors
ret["processor_cores"] += system.NumberOfCores
try:
ret["processor_cores_enabled"] += system.NumberOfEnabledCore
except (AttributeError, TypeError):
pass
if ret["processor_cores_enabled"] == 0:
ret.pop("processor_cores_enabled", False)
system = conn.Win32_BIOS()[0]
ret.update(
{
"hardware_serial": system.SerialNumber,
"bios_manufacturer": system.Manufacturer,
"bios_version": system.Version,
"bios_details": system.BIOSVersion,
"bios_caption": system.Caption,
"bios_description": system.Description,
# Connect to WMI
with salt.utils.winapi.Com():
conn = wmi.WMI()
system = conn.Win32_OperatingSystem()[0]
ret = {
"name": get_computer_name(),
"description": system.Description,
"install_date": system.InstallDate,
"last_boot": system.LastBootUpTime,
"os_manufacturer": system.Manufacturer,
"os_name": system.Caption,
"users": system.NumberOfUsers,
"organization": system.Organization,
"os_architecture": system.OSArchitecture,
"primary": system.Primary,
"os_type": os_type[system.ProductType],
"registered_user": system.RegisteredUser,
"system_directory": system.SystemDirectory,
"system_drive": system.SystemDrive,
"os_version": system.Version,
"windows_directory": system.WindowsDirectory,
}
)
ret["install_date"] = _convert_date_time_string(ret["install_date"])
ret["last_boot"] = _convert_date_time_string(ret["last_boot"])
system = conn.Win32_ComputerSystem()[0]
# Get pc_system_type depending on Windows version
if platform.release() in ["Vista", "7", "8"]:
# Types for Vista, 7, and 8
pc_system_type = pc_system_types[system.PCSystemType]
else:
# New types were added with 8.1 and newer
pc_system_types.update({8: "Slate", 9: "Maximum"})
pc_system_type = pc_system_types[system.PCSystemType]
ret.update(
{
"bootup_state": system.BootupState,
"caption": system.Caption,
"chassis_bootup_state": warning_states[system.ChassisBootupState],
"chassis_sku_number": system.ChassisSKUNumber,
"dns_hostname": system.DNSHostname,
"domain": system.Domain,
"domain_role": domain_role[system.DomainRole],
"hardware_manufacturer": system.Manufacturer,
"hardware_model": system.Model,
"network_server_mode_enabled": system.NetworkServerModeEnabled,
"part_of_domain": system.PartOfDomain,
"pc_system_type": pc_system_type,
"power_state": system.PowerState,
"status": system.Status,
"system_type": system.SystemType,
"total_physical_memory": byte_calc(system.TotalPhysicalMemory),
"total_physical_memory_raw": system.TotalPhysicalMemory,
"thermal_state": warning_states[system.ThermalState],
"workgroup": system.Workgroup,
}
)
# Get processor information
processors = conn.Win32_Processor()
ret["processors"] = 0
ret["processors_logical"] = 0
ret["processor_cores"] = 0
ret["processor_cores_enabled"] = 0
ret["processor_manufacturer"] = processors[0].Manufacturer
ret["processor_max_clock_speed"] = (
six.text_type(processors[0].MaxClockSpeed) + "MHz"
)
for processor in processors:
ret["processors"] += 1
ret["processors_logical"] += processor.NumberOfLogicalProcessors
ret["processor_cores"] += processor.NumberOfCores
ret["processor_cores_enabled"] += processor.NumberOfEnabledCore
bios = conn.Win32_BIOS()[0]
ret.update(
{
"hardware_serial": bios.SerialNumber,
"bios_manufacturer": bios.Manufacturer,
"bios_version": bios.Version,
"bios_details": bios.BIOSVersion,
"bios_caption": bios.Caption,
"bios_description": bios.Description,
}
)
ret["install_date"] = _convert_date_time_string(ret["install_date"])
ret["last_boot"] = _convert_date_time_string(ret["last_boot"])
return ret
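The processor loop above wraps ``NumberOfEnabledCore`` in a try/except because some WMI providers (for example on older Windows releases) do not expose that property at all. A minimal standalone sketch of the same defensive aggregation, with plain objects standing in for WMI instances:

```python
def sum_enabled_cores(processors):
    """Sum an optional WMI property, skipping objects that lack it.

    Sketch of the guarded aggregation used above; ``processors`` is any
    iterable of objects that may or may not expose NumberOfEnabledCore.
    """
    total = 0
    for proc in processors:
        try:
            total += proc.NumberOfEnabledCore
        except (AttributeError, TypeError):
            # Older WMI providers omit the property entirely
            pass
    return total
```

The same pattern applies to any WMI property that is version-dependent.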
@ -742,13 +734,10 @@ def set_hostname(hostname):
salt 'minion-id' system.set_hostname newhostname
"""
curr_hostname = get_hostname()
cmd = "wmic computersystem where name='{0}' call rename name='{1}'".format(
curr_hostname, hostname
)
ret = __salt__["cmd.run"](cmd=cmd)
return "successful" in ret
with salt.utils.winapi.Com():
conn = wmi.WMI()
comp = conn.Win32_ComputerSystem()[0]
return comp.Rename(Name=hostname)
def join_domain(
@ -1034,11 +1023,41 @@ def get_domain_workgroup():
"""
with salt.utils.winapi.Com():
conn = wmi.WMI()
for computer in conn.Win32_ComputerSystem():
if computer.PartOfDomain:
return {"Domain": computer.Domain}
else:
return {"Workgroup": computer.Domain}
def set_domain_workgroup(workgroup):
"""
Set the domain or workgroup the computer belongs to.
.. versionadded:: Sodium
Returns:
bool: ``True`` if successful, otherwise ``False``
CLI Example:
.. code-block:: bash
salt 'minion-id' system.set_domain_workgroup LOCAL
"""
if six.PY2:
workgroup = _to_unicode(workgroup)
# Initialize COM
with salt.utils.winapi.Com():
# Grab the first Win32_ComputerSystem object from wmi
conn = wmi.WMI()
comp = conn.Win32_ComputerSystem()[0]
# Now we can join the new workgroup
res = comp.JoinDomainOrWorkgroup(Name=workgroup.upper())
return not res[0]
def _try_parse_datetime(time_str, fmts):
@ -238,7 +238,7 @@ def _get_date_value(date):
def _reverse_lookup(dictionary, value):
"""
Lookup the key in a dictionary by it's value. Will return the first match.
Lookup the key in a dictionary by its value. Will return the first match.
:param dict dictionary: The dictionary to search
@ -209,24 +209,22 @@ def get_zone():
Returns:
str: Timezone in unix format
Raises:
CommandExecutionError: If timezone could not be gathered
CLI Example:
.. code-block:: bash
salt '*' timezone.get_zone
"""
win_zone = __utils__["reg.read_value"](
hive="HKLM",
key="SYSTEM\\CurrentControlSet\\Control\\TimeZoneInformation",
vname="TimeZoneKeyName",
)["vdata"]
# Some data may have null characters. We only need the first portion up to
# the first null character. See the following:
# https://github.com/saltstack/salt/issues/51940
# https://stackoverflow.com/questions/27716746/hklm-system-currentcontrolset-control-timezoneinformation-timezonekeyname-corrup
if "\0" in win_zone:
win_zone = win_zone.split("\0")[0]
return mapper.get_unix(win_zone.lower(), "Unknown")
cmd = ["tzutil", "/g"]
res = __salt__["cmd.run_all"](cmd, python_shell=False)
if res["retcode"] or not res["stdout"]:
raise CommandExecutionError(
"tzutil encountered an error getting timezone", info=res
)
return mapper.get_unix(res["stdout"].lower(), "Unknown")
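The replacement shells out to ``tzutil /g`` and validates both the return code and stdout before mapping the Windows zone name. A hedged sketch of that pattern, with the process runner injectable so the flow can be exercised off-Windows (``windows_zone_name`` is illustrative, not Salt's API):

```python
import subprocess

def windows_zone_name(run=subprocess.run):
    # Sketch of the tzutil call above: fail loudly when the tool
    # returns a non-zero code or produces no output at all.
    res = run(["tzutil", "/g"], capture_output=True, text=True)
    if res.returncode or not res.stdout:
        raise RuntimeError("tzutil encountered an error getting timezone")
    return res.stdout.strip().lower()
```

Checking ``stdout`` as well as the return code matters because ``tzutil`` can exit 0 with empty output in some failure modes.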
def get_offset():
@ -131,7 +131,7 @@ def add(
directory. Must be the Drive Letter followed by a colon. ie: U:
profile (str, optional): An explicit path to a profile. Can be a UNC or
a folder on the system. If left blank, windows uses it's default
a folder on the system. If left blank, windows uses its default
profile directory.
logonscript (str, optional): Path to a login script to run when the user
@ -17,7 +17,7 @@ class SynchronizingWebsocket(WebSocket):
Class to handle requests sent to this websocket connection.
Each instance of this class represents a Salt websocket connection.
Waits to receive a ``ready`` message from the client.
Calls send on it's end of the pipe to signal to the sender on receipt
Calls send on its end of the pipe to signal to the sender on receipt
of ``ready``.
This class also kicks off initial information probing jobs when clients
@ -193,7 +193,7 @@ class Serial(object):
return tuple(obj)
elif isinstance(obj, CaseInsensitiveDict):
return dict(obj)
# Nothing known exceptions found. Let msgpack raise it's own.
# Nothing known exceptions found. Let msgpack raise its own.
return obj
try:
@ -206,12 +206,17 @@ class Serial(object):
def verylong_encoder(obj, context):
# Make sure we catch recursion here.
objid = id(obj)
if objid in context:
# This instance list needs to correspond to the types recursed
# in the below if/elif chain. Also update
# tests/unit/test_payload.py
if objid in context and isinstance(obj, (dict, list, tuple)):
return "<Recursion on {} with id={}>".format(
type(obj).__name__, id(obj)
)
context.add(objid)
# The isinstance checks in this if/elif chain need to be
# kept in sync with the above recursion check.
if isinstance(obj, dict):
for key, value in six.iteritems(obj.copy()):
obj[key] = verylong_encoder(value, context)
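The comments added in this hunk stress that the ``id()``-based recursion check and the ``isinstance`` chain must list the same container types: ``id()`` values can be recycled by short-lived scalar objects, so only containers that can actually recurse should trip the guard. A self-contained sketch of the idea (not the msgpack-specific encoder):

```python
def encode(obj, context=None):
    # Only dict/list/tuple can actually contain themselves; gating the
    # id() check on the same types avoids false positives from reused ids.
    if context is None:
        context = set()
    if id(obj) in context and isinstance(obj, (dict, list, tuple)):
        return "<Recursion on {} with id={}>".format(type(obj).__name__, id(obj))
    context.add(id(obj))
    if isinstance(obj, dict):
        return {key: encode(value, context) for key, value in obj.items()}
    if isinstance(obj, (list, tuple)):
        return [encode(value, context) for value in obj]
    return obj
```

If the ``isinstance`` chain grows a new container type, the recursion check (and the corresponding unit tests) must grow with it.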
@ -65,7 +65,6 @@ def get_pillar(
# If local pillar and we're caching, run through the cache system first
log.debug("Determining pillar cache")
if opts["pillar_cache"]:
log.info("Compiling pillar from cache")
log.debug("get_pillar using pillar cache with ext: %s", ext)
return PillarCache(
opts,
@ -242,6 +242,10 @@ def ext_pillar(
# Get the Master's instance info, primarily the region
(_, region) = _get_instance_info()
# If the Minion's region is available, use it instead
if use_grain:
region = __grains__.get("ec2", {}).get("region", region)
try:
conn = boto.ec2.connect_to_region(region)
except boto.exception.AWSConnectionError as exc:
@ -152,7 +152,7 @@ you could run functions from this module on any host where an appropriate
version of ``racadm`` is installed, and that host would reach out over the network
and communicate with the chassis.
``Chassis.cmd`` acts as a "shim" between the execution module and the proxy. It's
``Chassis.cmd`` acts as a "shim" between the execution module and the proxy. Its
first parameter is always the function from salt.modules.dracr to execute. If the
function takes more positional or keyword arguments you can append them to the call.
It's this shim that speaks to the chassis through the proxy, arranging for the
@ -350,7 +350,7 @@ __all__ = ["render"]
def render(template, saltenv="base", sls="", tmplpath=None, rendered_sls=None, **kws):
sls = salt.utils.stringutils.to_str(sls)
mod = types.ModuleType(sls)
# Note: mod object is transient. It's existence only lasts as long as
# Note: mod object is transient. Its existence only lasts as long as
# the lowstate data structure that the highstate in the sls file
# is compiled to.
@ -415,7 +415,7 @@ def list_jobs_filter(
def print_job(jid, ext_source=None):
"""
Print a specific job's detail given by it's jid, including the return data.
Print a specific job's detail given by its jid, including the return data.
CLI Example:
@ -210,7 +210,7 @@ def orchestrate_show_sls(
Display the state data from a specific sls, or list of sls files, after
being render using the master minion.
Note, the master minion adds a "_master" suffix to it's minion id.
Note, the master minion adds a "_master" suffix to its minion id.
.. seealso:: The state.show_sls module function
@ -158,7 +158,7 @@ def present(name, subscriptions=None, region=None, key=None, keyid=None, profile
)
if subscription not in _subscriptions:
# Ensure the endpoint is set back to it's original value,
# Ensure the endpoint is set back to its original value,
# in case we starred out a password
subscription["endpoint"] = _endpoint
@ -412,7 +412,7 @@ def dhcp_options_present(
# boto provides no "update_dhcp_options()" functionality, and you can't delete it if
# it's attached, and you can't detach it if it's the only one, so just check if it's
# there or not, and make no effort to validate it's actual settings... :(
# there or not, and make no effort to validate its actual settings... :(
### TODO - add support for multiple sets of DHCP options, and then for "swapping out"
### sets by creating new, mapping, then deleting the old.
r = __salt__["boto_vpc.dhcp_options_exists"](
@ -927,7 +927,7 @@ def syslog_configured(
After a successful parameter set, reset the service. Defaults to ``True``.
reset_syslog_config
Resets the syslog service to it's default settings. Defaults to ``False``.
Resets the syslog service to its default settings. Defaults to ``False``.
If set to ``True``, default settings defined by the list of syslog configs
in ``reset_configs`` will be reset before running any other syslog settings.
@ -133,7 +133,7 @@ Creates a virtual machine with a given configuration.
vm_registered
-------------
Registers a virtual machine with it's configuration file path.
Registers a virtual machine with its configuration file path.
Dependencies
============
@ -4959,6 +4959,370 @@ def replace(
return ret
def keyvalue(
name,
key=None,
value=None,
key_values=None,
separator="=",
append_if_not_found=False,
prepend_if_not_found=False,
search_only=False,
show_changes=True,
ignore_if_missing=False,
count=1,
uncomment=None,
key_ignore_case=False,
value_ignore_case=False,
):
"""
Key/Value based editing of a file.
.. versionadded:: Sodium
This function differs from ``file.replace`` in that it is able to search for
keys, followed by a customizable separator, and replace the value with the
given value. Should the value be the same as the one already in the file, no
changes will be made.
Either supply both ``key`` and ``value`` parameters, or supply a dictionary
with key / value pairs. It is an error to supply both.
name
Name of the file to search/replace in.
key
Key to search for when ensuring a value. Use in combination with a
``value`` parameter.
value
Value to set for a given key. Use in combination with a ``key``
parameter.
key_values
Dictionary of key / value pairs to search for and ensure values for.
Used to specify multiple key / values at once.
separator : "="
Separator which separates key from value.
append_if_not_found : False
Append the key/value to the end of the file if not found. Note that this
takes precedence over ``prepend_if_not_found``.
prepend_if_not_found : False
Prepend the key/value to the beginning of the file if not found. Note
that ``append_if_not_found`` takes precedence.
show_changes : True
Show a diff of the resulting removals and inserts.
ignore_if_missing : False
Return with success even if the file is not found (or not readable).
count : 1
Number of occurrences to allow (and correct), default is 1. Set to -1 to
replace all, or set to 0 to remove all lines with this key regardless
of its value.
.. note::
Any additional occurrences after ``count`` are removed.
A count of -1 will only replace all occurrences that are currently
uncommented already. Lines commented out will be left alone.
uncomment : None
Disregard and remove supplied leading characters when finding keys. When
set to None, lines that are commented out are left as they are.
.. note::
The argument to ``uncomment`` is not a prefix string. Rather, it is a
set of characters, each of which are stripped.
key_ignore_case : False
Keys are matched case insensitively. When a value is changed the matched
key is kept as-is.
value_ignore_case : False
Values are checked case insensitively. For example, trying to set 'Yes'
while the current value is 'yes' will not result in changes when
``value_ignore_case`` is set to True.
An example of using ``file.keyvalue`` to ensure sshd does not allow
root to log in with a password, while also setting the
login grace time to 1 minute and disabling all forwarding:
.. code-block:: yaml
sshd_config_harden:
file.keyvalue:
- name: /etc/ssh/sshd_config
- key_values:
permitrootlogin: 'without-password'
LoginGraceTime: '1m'
DisableForwarding: 'yes'
- separator: ' '
- uncomment: '# '
- key_ignore_case: True
- append_if_not_found: True
The same example, except it only ensures PermitRootLogin is set correctly,
allowing the shorthand ``key`` and ``value`` parameters to be used
instead of ``key_values``.
.. code-block:: yaml
sshd_config_harden:
file.keyvalue:
- name: /etc/ssh/sshd_config
- key: PermitRootLogin
- value: without-password
- separator: ' '
- uncomment: '# '
- key_ignore_case: True
- append_if_not_found: True
.. note::
Notice how the key is not matched case-sensitively; this way it will
correctly identify both 'PermitRootLogin' as well as 'permitrootlogin'.
"""
name = os.path.expanduser(name)
# default return values
ret = {
"name": name,
"changes": {},
"pchanges": {},
"result": None,
"comment": "",
}
if not name:
return _error(ret, "Must provide name to file.keyvalue")
if key is not None and value is not None:
if isinstance(key_values, dict):
return _error(
ret, "file.keyvalue can not combine key_values with key and value"
)
key_values = {str(key): value}
elif not isinstance(key_values, dict):
return _error(
ret, "file.keyvalue key and value not supplied and key_values empty"
)
# try to open the file and only return a comment if ignore_if_missing is
# enabled, also mark as an error if not
file_contents = []
try:
with salt.utils.files.fopen(name, "r") as fd:
file_contents = fd.readlines()
except (OSError, IOError):
ret["comment"] = "unable to open {n}".format(n=name)
ret["result"] = True if ignore_if_missing else False
return ret
# used to store diff combinations and check if anything has changed
diff = []
# store the final content of the file in case it needs to be rewritten
content = []
# target format is templated like this
tmpl = "{key}{sep}{value}" + os.linesep
# number of lines changed
changes = 0
# keep track of number of times a key was updated
diff_count = {k: count for k in key_values.keys()}
# read all the lines from the file
for line in file_contents:
test_line = line.lstrip(uncomment)
did_uncomment = len(line) > len(test_line)
if key_ignore_case:
test_line = test_line.lower()
for key, value in key_values.items():
test_key = key.lower() if key_ignore_case else key
# if the line starts with the key
if test_line.startswith(test_key):
# if the testline got uncommented then the real line needs to
# be uncommented too, otherwise there might be separation on
# a character which is part of the comment set
working_line = line.lstrip(uncomment) if did_uncomment else line
# try to separate the line into its components
line_key, line_sep, line_value = working_line.partition(separator)
# if separation was unsuccessful then line_sep is empty so
# no need to keep trying. continue instead
if line_sep != separator:
continue
# start on the premise the key does not match the actual line
keys_match = False
if key_ignore_case:
if line_key.lower() == test_key:
keys_match = True
else:
if line_key == test_key:
keys_match = True
# if the key was found in the line and separation was successful
if keys_match:
# trial and error has shown it's safest to strip whitespace
# from values for the sake of matching
line_value = line_value.strip()
# make sure the value is an actual string at this point
test_value = str(value).strip()
# convert test_value and line_value to lowercase if need be
if value_ignore_case:
line_value = line_value.lower()
test_value = test_value.lower()
# values match if they are equal at this point
values_match = line_value == test_value
# in case a line had its comment removed there are some edge
# cases that need consideration where changes are needed
# regardless of values already matching.
needs_changing = False
if did_uncomment:
# irrespective of a value, if it was commented out and
# changes are still to be made, then it needs to be
# commented in
if diff_count[key] > 0:
needs_changing = True
# but if values did not match but there are really no
# changes expected anymore either then leave this line
elif not values_match:
values_match = True
else:
# a line needs to be removed if it has been seen enough
# times and was not commented out, regardless of value
if diff_count[key] == 0:
needs_changing = True
# then start checking to see if the value needs replacing
if not values_match or needs_changing:
# the old line always needs to go, so that will be
# reflected in the diff (this is the original line from
# the file being read)
diff.append("- {0}".format(line))
line = line[:0]
# any non-zero value means something needs to go back in
# its place. negative values are replacing all lines not
# commented out, positive values are having their count
# reduced by one every replacement
if diff_count[key] != 0:
# rebuild the line using the key and separator found
# and insert the correct value.
line = str(
tmpl.format(key=line_key, sep=line_sep, value=value)
)
# display a comment in case a value got converted
# into a string
if not isinstance(value, str):
diff.append(
"+ {0} (from {1} type){2}".format(
line.rstrip(), type(value).__name__, os.linesep
)
)
else:
diff.append("+ {0}".format(line))
changes += 1
# subtract one from the count if it was larger than 0, so
# next lines are removed. if it is less than 0 then count is
# ignored and all lines will be updated.
if diff_count[key] > 0:
diff_count[key] -= 1
# at this point a continue saves going through the rest of
# the keys to see if they match since this line already
# matched the current key
continue
# with the line having been checked for all keys (or matched before all
# keys needed searching), the line can be added to the content to be
# written once the last checks have been performed
content.append(line)
# if append_if_not_found was requested, then append any key/value pairs
# still having a count left on them
if append_if_not_found:
tmpdiff = []
for key, value in key_values.items():
if diff_count[key] > 0:
line = tmpl.format(key=key, sep=separator, value=value)
tmpdiff.append("+ {0}".format(line))
content.append(line)
changes += 1
if tmpdiff:
tmpdiff.insert(0, "- <EOF>" + os.linesep)
tmpdiff.append("+ <EOF>" + os.linesep)
diff.extend(tmpdiff)
# only if append_if_not_found was not set should prepend_if_not_found be
# considered, benefit of this is that the number of counts left does not
# mean there might be both a prepend and append happening
elif prepend_if_not_found:
did_diff = False
for key, value in key_values.items():
if diff_count[key] > 0:
line = tmpl.format(key=key, sep=separator, value=value)
if not did_diff:
diff.insert(0, " <SOF>" + os.linesep)
did_diff = True
diff.insert(1, "+ {0}".format(line))
content.insert(0, line)
changes += 1
# if a diff was made
if changes > 0:
# return comment of changes if test
if __opts__["test"]:
ret["comment"] = "File {n} is set to be changed ({c} lines)".format(
n=name, c=changes
)
if show_changes:
# For some reason, giving an actual diff even in test=True mode
# will be seen as both a 'changed' and 'unchanged'. this seems to
# match the other modules' behaviour though
ret["pchanges"]["diff"] = "".join(diff)
# add changes to comments for now as well because of how
# stateoutputter seems to handle pchanges etc.
# See: https://github.com/saltstack/salt/issues/40208
ret["comment"] += "\nPredicted diff:\n\r\t\t"
ret["comment"] += "\r\t\t".join(diff)
ret["result"] = None
# otherwise return the actual diff lines
else:
ret["comment"] = "Changed {c} lines".format(c=changes)
if show_changes:
ret["changes"]["diff"] = "".join(diff)
else:
ret["result"] = True
return ret
# if not test=true, try and write the file
if not __opts__["test"]:
try:
with salt.utils.files.fopen(name, "w") as fd:
# write all lines to the file which was just truncated
fd.writelines(content)
except (OSError, IOError):
# return an error if the file was not writable
ret["comment"] = "{n} not writable".format(n=name)
ret["result"] = False
return ret
# if all went well, then set result to true
ret["result"] = True
return ret
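At its core, the function partitions each line on the separator, compares the key, and rewrites the value while counting replacements. A much-reduced sketch of that loop (count fixed at 1, no comment or case handling; ``ensure_keyvalue`` is illustrative, not part of Salt):

```python
def ensure_keyvalue(lines, key, value, sep="="):
    # Replace the value of the first matching key; append the pair
    # when the key is absent (the append_if_not_found case above).
    out, found = [], False
    for line in lines:
        line_key, line_sep, _ = line.partition(sep)
        if line_sep == sep and line_key.strip() == key and not found:
            out.append("{}{}{}".format(line_key, sep, value))
            found = True
        else:
            out.append(line)
    if not found:
        out.append("{}{}{}".format(key, sep, value))
    return out
```

Using ``str.partition`` rather than ``str.split`` preserves values that themselves contain the separator, which is why the real implementation relies on it as well.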
def blockreplace(
name,
marker_start="#-- start managed zone --",
@ -3,7 +3,7 @@
Manage Glassfish/Payara server
.. versionadded:: Carbon
Management of glassfish using it's RESTful API
Management of glassfish using its RESTful API
You can setup connection parameters like this
.. code-block:: yaml
@ -49,7 +49,7 @@ except ImportError:
def purge_pip():
"""
Purge pip and it's sub-modules
Purge pip and its sub-modules
"""
# Remove references to the loaded pip module above so reloading works
if "pip" not in sys.modules:
@ -28,7 +28,17 @@ def __virtual__():
return True
def present(version, name, port=None, encoding=None, locale=None, datadir=None):
def present(
version,
name,
port=None,
encoding=None,
locale=None,
datadir=None,
allow_group_access=None,
data_checksums=None,
wal_segsize=None,
):
"""
Ensure that the named cluster is present with the specified properties.
For more information about all of these options see man pg_createcluster(1)
@ -51,6 +61,15 @@ def present(version, name, port=None, encoding=None, locale=None, datadir=None):
datadir
Where the cluster is stored
allow_group_access
Allows users in the same group as the cluster owner to read all cluster files created by initdb
data_checksums
Use checksums on data pages
wal_segsize
Set the WAL segment size, in megabytes
.. versionadded:: 2015.XX
"""
msg = "Cluster {0}/{1} is already present".format(version, name)
@ -87,6 +106,9 @@ def present(version, name, port=None, encoding=None, locale=None, datadir=None):
locale=locale,
encoding=encoding,
datadir=datadir,
allow_group_access=allow_group_access,
data_checksums=data_checksums,
wal_segsize=wal_segsize,
)
if cluster:
msg = "The cluster {0}/{1} has been created"
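The three new parameters correspond to ``initdb`` flags that ``pg_createcluster`` forwards after a ``--`` separator. A hedged sketch of how such a command line could be assembled (the flag names are the stock initdb ones; the exact plumbing inside Salt's postgres module may differ):

```python
def build_createcluster_cmd(version, name, allow_group_access=None,
                            data_checksums=None, wal_segsize=None):
    # Options after "--" are passed through to initdb by pg_createcluster.
    cmd = ["pg_createcluster", str(version), name]
    initdb_opts = []
    if allow_group_access:
        initdb_opts.append("--allow-group-access")
    if data_checksums:
        initdb_opts.append("--data-checksums")
    if wal_segsize:
        initdb_opts.append("--wal-segsize={}".format(wal_segsize))
    if initdb_opts:
        cmd += ["--"] + initdb_opts
    return cmd
```

Keeping the passthrough options separate makes it easy to add further initdb flags without touching the base command.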
@ -19,6 +19,9 @@ data directory.
- encoding: UTF8
- locale: C
- runas: postgres
- allow_group_access: True
- data_checksums: True
- wal_segsize: 32
"""
from __future__ import absolute_import, print_function, unicode_literals
@ -93,7 +93,7 @@ def post_message(name, **kwargs):
enough to be displayed side-by-side with other values.
webhook
The identifier of WebHook.
The identifier of WebHook (URL or token).
channel
The channel to use instead of the WebHook default.
@ -300,6 +300,7 @@ def export(
out = __salt__[svn_cmd](cwd, name, basename, user, username, password, rev, *opts)
ret["changes"]["new"] = name
ret["changes"]["comment"] = name + " was Exported to " + target
ret["comment"] = out
return ret
@ -96,7 +96,7 @@ def grant_access_to_shared_folders_to(name, users=None):
"""
Grant access to auto-mounted shared folders to the users.
User is specified by it's name. To grant access for several users use
User is specified by its name. To grant access for several users use
argument `users`.
name
@ -24,8 +24,6 @@ import logging
# Import Salt libs
import salt.utils.functools
import salt.utils.platform
# Import 3rd party libs
from salt.ext import six
log = logging.getLogger(__name__)
@ -36,11 +34,13 @@ __virtualname__ = "system"
def __virtual__():
"""
This only supports Windows
Make sure this is Windows and that the win_system execution module is available
"""
if salt.utils.platform.is_windows() and "system.get_computer_desc" in __salt__:
return __virtualname__
return (False, "system module could not be loaded")
if not salt.utils.platform.is_windows():
return False, "win_system: Only available on Windows"
if "system.get_computer_desc" not in __salt__:
return False, "win_system: win_system execution module not available"
return __virtualname__
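Returning ``(False, reason)`` instead of a bare ``False`` lets the loader record *why* a module was skipped. A toy sketch of how a loader might interpret the possible ``__virtual__`` returns (the real Salt loader is considerably more involved):

```python
def interpret_virtual(result, default_name):
    # __virtual__ may return a name string, True/False, or (False, reason).
    if isinstance(result, tuple):
        loaded, reason = result
        return (bool(loaded), default_name, reason)
    if result is False:
        return (False, default_name, None)
    if result is True:
        return (True, default_name, None)
    return (True, result, None)  # a string loads the module under that name
```

The reason string is what surfaces in logs when a module refuses to load, which is why the rewritten ``__virtual__`` above returns a distinct message for each failed check.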
def computer_desc(name):
@ -172,6 +172,85 @@ def hostname(name):
return ret
def workgroup(name):
"""
.. versionadded:: Sodium
Manage the workgroup of the computer
Args:
name (str): The workgroup to set
Example:
.. code-block:: yaml
set workgroup:
system.workgroup:
- name: local
"""
ret = {"name": name.upper(), "result": False, "changes": {}, "comment": ""}
# Grab the current domain/workgroup
out = __salt__["system.get_domain_workgroup"]()
current_workgroup = (
out["Domain"]
if "Domain" in out
else out["Workgroup"]
if "Workgroup" in out
else ""
)
# Notify the user if the requested workgroup is the same
if current_workgroup.upper() == name.upper():
ret["result"] = True
ret["comment"] = "Workgroup is already set to '{0}'".format(name.upper())
return ret
# If being run in test-mode, inform the user what is supposed to happen
if __opts__["test"]:
ret["result"] = None
ret["changes"] = {}
ret["comment"] = "Computer will be joined to workgroup '{0}'".format(name)
return ret
# Set our new workgroup, and then immediately ask the machine what it
# is again to validate the change
res = __salt__["system.set_domain_workgroup"](name.upper())
out = __salt__["system.get_domain_workgroup"]()
new_workgroup = (
out["Domain"]
if "Domain" in out
else out["Workgroup"]
if "Workgroup" in out
else ""
)
# Return our results based on the changes
if res and current_workgroup.upper() == new_workgroup.upper():
ret["result"] = True
ret["comment"] = "The new workgroup '{0}' is the same as '{1}'".format(
current_workgroup.upper(), new_workgroup.upper()
)
elif res:
ret["result"] = True
ret["comment"] = "The workgroup has been changed from '{0}' to '{1}'".format(
current_workgroup.upper(), new_workgroup.upper()
)
ret["changes"] = {
"old": current_workgroup.upper(),
"new": new_workgroup.upper(),
}
else:
ret["result"] = False
ret["comment"] = "Unable to join the requested workgroup '{0}'".format(
new_workgroup.upper()
)
return ret
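The state follows a check / test / set / verify sequence that is worth seeing in isolation. A sketch with the two execution-module calls injected as plain callables (``get_current`` and ``set_wg`` are stand-ins, not Salt APIs):

```python
def workgroup_state(name, get_current, set_wg, test=False):
    # check -> short-circuit -> test mode -> set -> verify, as above.
    ret = {"name": name.upper(), "result": False, "changes": {}, "comment": ""}
    old = get_current().upper()
    if old == name.upper():
        ret["result"] = True
        ret["comment"] = "Workgroup is already set to '{}'".format(name.upper())
        return ret
    if test:
        ret["result"] = None
        ret["comment"] = "Computer will be joined to workgroup '{}'".format(name)
        return ret
    ok = set_wg(name.upper())
    new = get_current().upper()
    if ok and new == name.upper():
        ret["result"] = True
        ret["changes"] = {"old": old, "new": new}
        ret["comment"] = "The workgroup has been changed from '{}' to '{}'".format(old, new)
    else:
        ret["comment"] = "Unable to join the requested workgroup '{}'".format(name.upper())
    return ret
```

Re-reading the workgroup after setting it, rather than trusting the return code alone, is what makes the state self-verifying.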
def join_domain(
name,
username=None,
@ -26,6 +26,7 @@ IPADDR{{loop.index}}="{{i['ipaddr']}}"
PREFIX{{loop.index}}="{{i['prefix']}}"
{% endfor -%}
{%endif%}{% if gateway %}GATEWAY="{{gateway}}"
{%endif%}{% if arpcheck %}ARPCHECK="{{arpcheck}}"
{%endif%}{% if enable_ipv6 %}IPV6INIT="yes"
{% if ipv6_autoconf %}IPV6_AUTOCONF="{{ipv6_autoconf}}"
{%endif%}{% if dhcpv6c %}DHCPV6C="{{dhcpv6c}}"
@ -896,7 +896,7 @@ def _set_tcp_keepalive(zmq_socket, opts):
Warning: Failure to set TCP keepalives on the salt-master can result in
not detecting the loss of a minion when the connection is lost or when
it's host has been terminated without first closing the socket.
its host has been terminated without first closing the socket.
Salt's Presence System depends on this connection status to know if a minion
is "present".
@ -32,7 +32,7 @@ class CacheFactory(object):
@classmethod
def factory(cls, backend, ttl, *args, **kwargs):
log.info("Factory backend: %s", backend)
log.debug("Factory backend: %s", backend)
if backend == "memory":
return CacheDict(ttl, *args, **kwargs)
elif backend == "disk":
@ -672,7 +672,7 @@ def query(
):
"""
Query DNS for information.
Where `lookup()` returns record data, `query()` tries to interpret the data and return it's results
Where `lookup()` returns record data, `query()` tries to interpret the data and return its results
:param name: name to lookup
:param rdtype: DNS record type
@ -1037,7 +1037,7 @@ def tlsa_rec(rdata):
def service(svc, proto="tcp", domain=None, walk=False, secure=None):
"""
Find an SRV service in a domain or it's parents
Find an SRV service in a domain or its parents
:param svc: service to find (ldap, 389, etc)
:param proto: protocol the service talks (tcp, udp, etc)
:param domain: domain to start search in

Some files were not shown because too many files have changed in this diff Show more