Merge branch '2016.11' into 'develop'

Conflicts:
  - conf/master
  - doc/topics/installation/ubuntu.rst
  - salt/modules/pillar.py
  - salt/netapi/rest_tornado/saltnado.py
  - salt/states/influxdb_user.py
  - salt/utils/minions.py
  - salt/utils/openstack/nova.py
This commit is contained in:
rallytime 2017-01-17 09:50:06 -07:00
commit 5b43a252c9
31 changed files with 857 additions and 639 deletions

View file

@ -400,7 +400,7 @@
# Pass in an alternative location for the salt-ssh roster file
#roster_file: /etc/salt/roster
# Define a locations for roster files so they can be chosen when using Salt API.
# Define a location for roster files so they can be chosen when using Salt API.
# An administrator can place roster files into these locations. Then when
# calling Salt API, parameter 'roster_file' should contain a relative path to
# these locations. That is, "roster_file=/foo/roster" will be resolved as
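The lookup rule this comment describes can be sketched as follows; `resolve_roster` and its arguments are illustrative names for this sketch, not Salt's actual implementation:

```python
import os

# Illustrative sketch (not Salt's real code): a 'roster_file' passed via the
# API is treated as a path relative to each configured roster location, and
# the first existing match wins.
def resolve_roster(roster_file, locations):
    relative = roster_file.lstrip('/')
    for base in locations:
        candidate = os.path.join(base, relative)
        if os.path.isfile(candidate):
            return candidate
    return None
```

So with a configured location of `/etc/salt/roster.d`, passing `roster_file=/foo/roster` would resolve to `/etc/salt/roster.d/foo/roster` if that file exists, and to nothing otherwise.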

View file

@ -1348,22 +1348,6 @@ Enable extra routines for YAML renderer used states containing UTF characters.
yaml_utf8: False
.. conf_master:: test
``test``
--------
Default: ``False``
Set all state calls to only test if they are going to actually make changes
or just post what changes are going to be made.
.. code-block:: yaml
test: False
.. conf_master:: runner_returns
``runner_returns``
------------------

View file

@ -608,7 +608,9 @@ The directory where Unix sockets will be kept.
Default: ``''``
Backup files replaced by file.managed and file.recurse under cachedir.
Make backups of files replaced by ``file.managed`` and ``file.recurse`` state modules under
:conf_minion:`cachedir` in the ``file_backup`` subdirectory, preserving original paths.
Refer to :ref:`File State Backups documentation <file-state-backups>` for more details.
.. code-block:: yaml
@ -1198,6 +1200,20 @@ The default renderer used for local state executions
renderer: yaml_jinja
.. conf_master:: test
``test``
--------
Default: ``False``
Set all state calls to only test if they are going to actually make changes
or just post what changes are going to be made.
.. code-block:: yaml
test: False
.. conf_minion:: state_verbose
``state_verbose``

View file

@ -139,13 +139,13 @@ Running Test Subsections
Instead of running the entire test suite all at once, which can take a long time,
there are several ways to run only specific groups of tests or individual tests:
* Run unit tests only: ``./tests/runtests.py --unit-tests``
* Run unit and integration tests for states: ``./tests/runtests.py --state``
* Run integration tests for an individual module: ``./tests/runtests.py -n integration.modules.virt``
* Run unit tests for an individual module: ``./tests/runtests.py -n unit.modules.virt_test``
* Run :ref:`unit tests only<running-unit-tests-no-daemons>`: ``python tests/runtests.py --unit-tests``
* Run unit and integration tests for states: ``python tests/runtests.py --state``
* Run integration tests for an individual module: ``python tests/runtests.py -n integration.modules.virt``
* Run unit tests for an individual module: ``python tests/runtests.py -n unit.modules.virt_test``
* Run an individual test by using the class and test name (this example is for the
``test_default_kvm_profile`` test in the ``integration.module.virt`` module):
``./tests/runtests.py -n integration.module.virt.VirtTest.test_default_kvm_profile``
``python tests/runtests.py -n integration.module.virt.VirtTest.test_default_kvm_profile``
For more specific examples of how to run various test subsections or individual
tests, please see the :ref:`Test Selection Options <test-selection-options>`
@ -163,14 +163,14 @@ Since the unit tests do not require a master or minion to execute, it is often u
run unit tests individually, or as a whole group, without having to start up the integration testing
daemons. Starting up the master, minion, and syndic daemons takes a lot of time before the tests can
even start running and is unnecessary to run unit tests. To run unit tests without invoking the
integration test daemons, simple remove the ``/tests`` portion of the ``runtests.py`` command:
integration test daemons, simply run the ``runtests.py`` script with the ``--unit`` argument:
.. code-block:: bash
./runtests.py --unit
python tests/runtests.py --unit
All of the other options to run individual tests, entire classes of tests, or entire test modules still
apply.
All of the other options to run individual tests, entire classes of tests, or
entire test modules still apply.
Running Destructive Integration Tests
@ -191,13 +191,14 @@ successfully. Therefore, running destructive tests should be done with caution.
.. note::
Running destructive tests will change the underlying system. Use caution when running destructive tests.
Running destructive tests will change the underlying system.
Use caution when running destructive tests.
To run tests marked as destructive, set the ``--run-destructive`` flag:
.. code-block:: bash
./tests/runtests.py --run-destructive
python tests/runtests.py --run-destructive
Running Cloud Provider Tests
@ -259,13 +260,13 @@ Here's a simple usage example:
.. code-block:: bash
tests/runtests.py --docked=ubuntu-12.04 -v
python tests/runtests.py --docked=ubuntu-12.04 -v
The full `docker`_ container repository can also be provided:
.. code-block:: bash
tests/runtests.py --docked=salttest/ubuntu-12.04 -v
python tests/runtests.py --docked=salttest/ubuntu-12.04 -v
The SaltStack team is creating some containers which will have the necessary

View file

@ -14,11 +14,7 @@ are available in the SaltStack repository.
Instructions are at https://repo.saltstack.com/#ubuntu.
Installation from the Community-Maintained Repository
=====================================================
The PPA is no longer maintained, and shouldn't be used. Please use the Official
Salt Stack repository instead.
.. _ubuntu-install-pkgs:
Install Packages
================

View file

@ -38,7 +38,7 @@ simply by creating a data structure. (And this is exactly how much of Salt's
own internals work!)
.. autoclass:: salt.netapi.NetapiClient
:members: local, local_async, local_batch, local_subset, ssh, ssh_async,
:members: local, local_async, local_subset, ssh, ssh_async,
runner, runner_async, wheel, wheel_async
.. toctree::

View file

@ -0,0 +1,31 @@
============================
Salt 2015.8.13 Release Notes
============================
Version 2015.8.13 is a bugfix release for :ref:`2015.8.0 <release-2015-8-0>`.
Changes for v2015.8.12..v2015.8.13
----------------------------------
Extended changelog courtesy of Todd Stansell (https://github.com/tjstansell/salt-changelogs):
*Generated at: 2017-01-09T21:17:06Z*
Statistics:
- Total Merges: **3**
- Total Issue references: **3**
- Total PR references: **5**
Changes:
* 3428232 Clean up tests and docs for batch execution
* 3d8f3d1 Remove batch execution from NetapiClient and Saltnado
* 97b0f64 Lintfix
* d151666 Add explanation comment
* 62f2c87 Add docstring
* 9b0a786 Explain what it is about and how to configure that
* 5ea3579 Pick up a specified roster file from the configured locations
* 3a8614c Disable custom rosters in API
* c0e5a11 Add roster disable flag

View file

@ -58,7 +58,7 @@ Unfortunately, it can lead to code that looks like the following.
{% endfor %}
This is an example from the author's salt formulae demonstrating misuse of jinja.
Aside from being difficult to read and maintian,
Aside from being difficult to read and maintain,
accessing the logic it contains from a non-jinja renderer,
while probably possible, is a significant barrier!
@ -158,6 +158,6 @@ Conclusion
----------
That was... surprisingly straight-forward.
Now the logic is now available in every renderer, instead of just Jinja.
Now the logic is available in every renderer, instead of just Jinja.
Best of all, it can be maintained in Python,
which is a whole lot easier than Jinja.

View file

@ -11,7 +11,7 @@ idna==2.1
ioflo==1.5.5
ioloop==0.1a0
ipaddress==1.0.16
Jinja2==2.8
Jinja2==2.9.4
libnacl==1.4.5
lxml==3.6.0
Mako==1.0.4

View file

@ -32,7 +32,7 @@ try:
GROUP = dbus.Interface(BUS.get_object(avahi.DBUS_NAME, SERVER.EntryGroupNew()),
avahi.DBUS_INTERFACE_ENTRY_GROUP)
HAS_DBUS = True
except ImportError:
except (ImportError, NameError):
HAS_DBUS = False
except DBusException:
HAS_DBUS = False
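The added ``NameError`` matters because a failed optional import leaves its names unbound, so later references inside the guarded block raise ``NameError`` rather than ``ImportError``. A minimal standalone illustration (``fake_dbus`` is a deliberately nonexistent module standing in for ``dbus``):

```python
HAS_DBUS = False

try:
    import fake_dbus  # deliberately nonexistent, stands in for 'dbus'
except ImportError:
    pass

try:
    # 'fake_dbus' was never bound above, so this raises NameError, not
    # ImportError; catching only ImportError here would crash module loading.
    BUS = fake_dbus.SystemBus()
    HAS_DBUS = True
except (ImportError, NameError):
    HAS_DBUS = False

print(HAS_DBUS)  # False
```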

View file

@ -206,7 +206,7 @@ class Cache(object):
Raises an exception if cache driver detected an error accessing data
in the cache backend (auth, permissions, etc).
'''
fun = '{0}.{1}'.format(self.driver, 'list')
fun = '{0}.{1}'.format(self.driver, 'getlist')
return self.modules[fun](bank)
def contains(self, bank, key=None):
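The rename from ``list`` to ``getlist`` keeps driver function names from shadowing the Python builtin. The string-keyed dispatch that ``Cache`` relies on looks roughly like this (a simplified sketch with a stub in place of the real driver modules):

```python
# Simplified sketch of the '<driver>.<func>' dispatch used by Cache: driver
# functions live in a dict keyed by 'driver.function' and are looked up by
# formatting the key at call time.
def _localfs_getlist(bank):
    # stub standing in for salt.cache.localfs.getlist
    return ['minion1', 'minion2']

modules = {'localfs.getlist': _localfs_getlist}

class Cache(object):
    def __init__(self, driver='localfs'):
        self.driver = driver
        self.modules = modules

    def list(self, bank):
        fun = '{0}.{1}'.format(self.driver, 'getlist')
        return self.modules[fun](bank)

print(Cache().list('minions'))  # ['minion1', 'minion2']
```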

View file

@ -129,7 +129,7 @@ def flush(bank, key=None):
)
def list(bank):
def getlist(bank):
'''
Return an iterable object containing all entries stored in the specified bank.
'''

View file

@ -112,7 +112,7 @@ def flush(bank, key=None):
return True
def list(bank):
def getlist(bank):
'''
Return an iterable object containing all entries stored in the specified bank.
'''

View file

@ -48,6 +48,25 @@ examples could be set up in the cloud configuration at
driver: nova
userdata_file: /tmp/userdata.txt
To use keystoneauth1 instead of keystoneclient, include the ``use_keystoneauth``
option in the provider config.
.. note:: This is required to use keystone v3 for authentication.
.. code-block:: yaml
my-openstack-config:
use_keystoneauth: True
identity_url: 'https://controller:5000/v3'
auth_version: 3
compute_name: nova
compute_region: RegionOne
service_type: compute
tenant: admin
user: admin
password: passwordgoeshere
driver: nova
For local installations that only use private IP address ranges, the
following option may be useful. Using the old syntax:
@ -279,6 +298,7 @@ def get_conn():
kwargs['project_id'] = vm_['tenant']
kwargs['auth_url'] = vm_['identity_url']
kwargs['region_name'] = vm_['compute_region']
kwargs['use_keystoneauth'] = vm_['use_keystoneauth']
if 'password' in vm_:
kwargs['password'] = vm_['password']
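The argument assembly in ``get_conn`` can be sketched as below; note this sketch uses ``dict.get`` with a default so a provider config that omits ``use_keystoneauth`` still works (the key names mirror the diff, the defaulting is this sketch's assumption):

```python
# Illustrative sketch of assembling nova connection kwargs from the provider
# config: required keys are copied directly, optional ones conditionally.
def build_conn_kwargs(vm_):
    kwargs = {
        'project_id': vm_['tenant'],
        'auth_url': vm_['identity_url'],
        'region_name': vm_['compute_region'],
        # .get() keeps the option optional; the diff indexes it directly
        'use_keystoneauth': vm_.get('use_keystoneauth', False),
    }
    if 'password' in vm_:
        kwargs['password'] = vm_['password']
    return kwargs
```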

View file

@ -9,7 +9,7 @@
#
# BUGS: https://github.com/saltstack/salt-bootstrap/issues
#
# COPYRIGHT: (c) 2012-2016 by the SaltStack Team, see AUTHORS.rst for more
# COPYRIGHT: (c) 2012-2017 by the SaltStack Team, see AUTHORS.rst for more
# details.
#
# LICENSE: Apache 2.0
@ -18,7 +18,7 @@
#======================================================================================================================
set -o nounset # Treat unset variables as an error
__ScriptVersion="2016.10.25"
__ScriptVersion="2017.01.10"
__ScriptName="bootstrap-salt.sh"
__ScriptFullName="$0"
@ -309,9 +309,10 @@ __usage() {
-F Allow copied files to overwrite existing (config, init.d, etc)
-K If set, keep the temporary files in the temporary directories specified
with -c and -k
-C Only run the configuration function. This option automatically bypasses
any installation. Implies -F (forced overwrite). To overwrite master or
syndic configs, -M or -S, respectively, must also be specified.
-C Only run the configuration function. Implies -F (forced overwrite).
To overwrite Master or Syndic configs, -M or -S, respectively, must
also be specified. Salt installation will be omitted, but some of the
dependencies could be installed to write configuration with -j or -J.
-A Pass the salt-master DNS name or IP. This will be stored under
\${BS_SALT_ETC_DIR}/minion.d/99-master-address.conf
-i Pass the salt-minion id. This will be stored under
@ -342,12 +343,12 @@ __usage() {
repo.saltstack.com. The option passed with -R replaces the
"repo.saltstack.com". If -R is passed, -r is also set. Currently only
works on CentOS/RHEL based distributions.
-J Replace the Master config file with data passed in as a json string. If
-J Replace the Master config file with data passed in as a JSON string. If
a Master config file is found, a reasonable effort will be made to save
the file with a ".bak" extension. If used in conjunction with -C or -F,
no ".bak" file will be created as either of those options will force
a complete overwrite of the file.
-j Replace the Minion config file with data passed in as a json string. If
-j Replace the Minion config file with data passed in as a JSON string. If
a Minion config file is found, a reasonable effort will be made to save
the file with a ".bak" extension. If used in conjunction with -C or -F,
no ".bak" file will be created as either of those options will force
@ -475,7 +476,7 @@ fi
# Check that we're installing or configuring a master if we're being passed a master config json dict
if [ "$_CUSTOM_MASTER_CONFIG" != "null" ]; then
if [ "$_INSTALL_MASTER" -eq $BS_FALSE ] && [ "$_CONFIG_ONLY" -eq $BS_FALSE ]; then
echoerror "Don't pass a master config json dict (-J) if no master is going to be bootstrapped or configured."
echoerror "Don't pass a master config JSON dict (-J) if no master is going to be bootstrapped or configured."
exit 1
fi
fi
@ -483,7 +484,7 @@ fi
# Check that we're installing or configuring a minion if we're being passed a minion config json dict
if [ "$_CUSTOM_MINION_CONFIG" != "null" ]; then
if [ "$_INSTALL_MINION" -eq $BS_FALSE ] && [ "$_CONFIG_ONLY" -eq $BS_FALSE ]; then
echoerror "Don't pass a minion config json dict (-j) if no minion is going to be bootstrapped or configured."
echoerror "Don't pass a minion config JSON dict (-j) if no minion is going to be bootstrapped or configured."
exit 1
fi
fi
@ -850,7 +851,7 @@ __derive_debian_numeric_version() {
# DESCRIPTION: Strip single or double quotes from the provided string.
#----------------------------------------------------------------------------------------------------------------------
__unquote_string() {
echo "$*" | sed -e "s/^\([\"']\)\(.*\)\1\$/\2/g"
echo "$*" | sed -e "s/^\([\"\']\)\(.*\)\1\$/\2/g"
}
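``__unquote_string`` strips one matching pair of single or double quotes from the ends of a string; the added backslash only changes how the shell quotes the sed program, not the regex itself. The same logic in Python, for reference:

```python
import re

# Strip one matching pair of leading/trailing quotes, mirroring the sed
# expression s/^\(["']\)\(.*\)\1$/\2/ used by __unquote_string(): the
# backreference \1 requires the closing quote to match the opening one.
def unquote_string(s):
    return re.sub(r'^(["\'])(.*)\1$', r'\2', s)

print(unquote_string("'hello'"))    # hello
print(unquote_string('"world"'))    # world
print(unquote_string("unpaired'"))  # unpaired'  (no matching pair, unchanged)
```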
#--- FUNCTION -------------------------------------------------------------------------------------------------------
@ -924,6 +925,8 @@ __gather_linux_system_info() {
DISTRO_NAME=$(lsb_release -si)
if [ "${DISTRO_NAME}" = "Scientific" ]; then
DISTRO_NAME="Scientific Linux"
elif [ "$(echo "$DISTRO_NAME" | grep ^CloudLinux)" != "" ]; then
DISTRO_NAME="Cloud Linux"
elif [ "$(echo "$DISTRO_NAME" | grep ^RedHat)" != "" ]; then
# Let's convert 'CamelCased' to 'Camel Cased'
n=$(__camelcase_split "$DISTRO_NAME")
@ -1037,6 +1040,9 @@ __gather_linux_system_info() {
n="Arch Linux"
v="" # Arch Linux does not provide a version.
;;
cloudlinux )
n="Cloud Linux"
;;
debian )
n="Debian"
v=$(__derive_debian_numeric_version "$v")
@ -1195,12 +1201,6 @@ __ubuntu_derivatives_translation() {
# Mappings
trisquel_6_ubuntu_base="12.04"
linuxmint_13_ubuntu_base="12.04"
linuxmint_14_ubuntu_base="12.10"
#linuxmint_15_ubuntu_base="13.04"
# Bug preventing add-apt-repository from working on Mint 15:
# https://bugs.launchpad.net/linuxmint/+bug/1198751
linuxmint_16_ubuntu_base="13.10"
linuxmint_17_ubuntu_base="14.04"
linuxmint_18_ubuntu_base="16.04"
linaro_12_ubuntu_base="12.04"
@ -1258,15 +1258,12 @@ __ubuntu_codename_translation() {
"14")
DISTRO_CODENAME="trusty"
;;
"15")
if [ -n "$_april" ]; then
DISTRO_CODENAME="vivid"
else
DISTRO_CODENAME="wily"
fi
;;
"16")
DISTRO_CODENAME="xenial"
if [ "$_april" ]; then
DISTRO_CODENAME="xenial"
else
DISTRO_CODENAME="yakkety"
fi
;;
*)
DISTRO_CODENAME="trusty"
@ -1453,6 +1450,14 @@ if ([ "${DISTRO_NAME_L}" != "ubuntu" ] && [ $_PIP_ALL -eq $BS_TRUE ]);then
exit 1
fi
# Starting from Ubuntu 16.10, gnupg-curl has been renamed to gnupg1-curl.
GNUPG_CURL="gnupg-curl"
if ([ "${DISTRO_NAME_L}" = "ubuntu" ] && [ "${DISTRO_VERSION}" = "16.10" ]); then
GNUPG_CURL="gnupg1-curl"
fi
#--- FUNCTION -------------------------------------------------------------------------------------------------------
# NAME: __function_defined
# DESCRIPTION: Checks if a function is defined within this script's scope
@ -1497,7 +1502,7 @@ __apt_get_upgrade_noinput() {
__apt_key_fetch() {
url=$1
__apt_get_install_noinput gnupg-curl || return 1
__apt_get_install_noinput ${GNUPG_CURL} || return 1
# shellcheck disable=SC2086
apt-key adv ${_GPG_ARGS} --fetch-keys "$url"; return $?
@ -1561,6 +1566,10 @@ __yum_install_noinput() {
__git_clone_and_checkout() {
echodebug "Installed git version: $(git --version | awk '{ print $3 }')"
# Turn off SSL verification if -I flag was set for insecure downloads
if [ "$_INSECURE_DL" -eq $BS_TRUE ]; then
export GIT_SSL_NO_VERIFY=1
fi
__SALT_GIT_CHECKOUT_PARENT_DIR=$(dirname "${_SALT_GIT_CHECKOUT_DIR}" 2>/dev/null)
__SALT_GIT_CHECKOUT_PARENT_DIR="${__SALT_GIT_CHECKOUT_PARENT_DIR:-/tmp/git}"
@ -1689,7 +1698,12 @@ __check_end_of_life_versions() {
# Ubuntu versions not supported
#
# < 12.04
if [ "$DISTRO_MAJOR_VERSION" -lt 12 ]; then
# 13.x, 15.x
# 12.10, 14.10
if [ "$DISTRO_MAJOR_VERSION" -lt 12 ] || \
[ "$DISTRO_MAJOR_VERSION" -eq 13 ] || \
[ "$DISTRO_MAJOR_VERSION" -eq 15 ] || \
([ "$DISTRO_MAJOR_VERSION" -lt 16 ] && [ "$DISTRO_MINOR_VERSION" -eq 10 ]); then
echoerror "End of life distributions are not supported."
echoerror "Please consider upgrading to the next stable. See:"
echoerror " https://wiki.ubuntu.com/Releases"
@ -1726,7 +1740,7 @@ __check_end_of_life_versions() {
fedora)
# Fedora lower than 18 are no longer supported
if [ "$DISTRO_MAJOR_VERSION" -lt 18 ]; then
if [ "$DISTRO_MAJOR_VERSION" -lt 23 ]; then
echoerror "End of life distributions are not supported."
echoerror "Please consider upgrading to the next stable. See:"
echoerror " https://fedoraproject.org/wiki/Releases"
@ -2284,49 +2298,30 @@ __enable_universe_repository() {
echodebug "Enabling the universe repository"
# Ubuntu versions higher than 12.04 do not live in the old repositories
if [ "$DISTRO_MAJOR_VERSION" -gt 12 ] || ([ "$DISTRO_MAJOR_VERSION" -eq 12 ] && [ "$DISTRO_MINOR_VERSION" -gt 04 ]); then
add-apt-repository -y "deb http://archive.ubuntu.com/ubuntu $(lsb_release -sc) universe" || return 1
elif [ "$DISTRO_MAJOR_VERSION" -lt 11 ] && [ "$DISTRO_MINOR_VERSION" -lt 10 ]; then
# Below Ubuntu 11.10, the -y flag to add-apt-repository is not supported
add-apt-repository "deb http://old-releases.ubuntu.com/ubuntu $(lsb_release -sc) universe" || return 1
else
add-apt-repository -y "deb http://old-releases.ubuntu.com/ubuntu $(lsb_release -sc) universe" || return 1
fi
add-apt-repository -y "deb http://old-releases.ubuntu.com/ubuntu $(lsb_release -sc) universe" || return 1
add-apt-repository -y "deb http://archive.ubuntu.com/ubuntu $(lsb_release -sc) universe" || return 1
return 0
}
install_ubuntu_deps() {
if [ "$DISTRO_MAJOR_VERSION" -gt 12 ] || ([ "$DISTRO_MAJOR_VERSION" -eq 12 ] && [ "$DISTRO_MINOR_VERSION" -eq 10 ]); then
if [ "$DISTRO_MAJOR_VERSION" -gt 12 ]; then
# Above Ubuntu 12.04 add-apt-repository is in a different package
__apt_get_install_noinput software-properties-common || return 1
else
__apt_get_install_noinput python-software-properties || return 1
fi
if [ $_DISABLE_REPOS -eq $BS_FALSE ]; then
__enable_universe_repository || return 1
# Versions starting with 2015.5.6 and 2015.8.1 are hosted at repo.saltstack.com
if [ "$(echo "$STABLE_REV" | egrep '^(2015\.5|2015\.8|latest|archive\/)')" = "" ]; then
if [ "$(echo "$STABLE_REV" | egrep '^(2015\.5|2015\.8|2016\.3|2016\.11|latest|archive\/)')" = "" ]; then
if [ "$DISTRO_MAJOR_VERSION" -lt 14 ]; then
echoinfo "Installing Python Requests/Chardet from Chris Lea's PPA repository"
if [ "$DISTRO_MAJOR_VERSION" -gt 11 ] || ([ "$DISTRO_MAJOR_VERSION" -eq 11 ] && [ "$DISTRO_MINOR_VERSION" -gt 04 ]); then
# Above Ubuntu 11.04 add a -y flag
add-apt-repository -y "ppa:chris-lea/python-requests" || return 1
add-apt-repository -y "ppa:chris-lea/python-chardet" || return 1
add-apt-repository -y "ppa:chris-lea/python-urllib3" || return 1
add-apt-repository -y "ppa:chris-lea/python-crypto" || return 1
else
add-apt-repository "ppa:chris-lea/python-requests" || return 1
add-apt-repository "ppa:chris-lea/python-chardet" || return 1
add-apt-repository "ppa:chris-lea/python-urllib3" || return 1
add-apt-repository "ppa:chris-lea/python-crypto" || return 1
fi
add-apt-repository -y "ppa:chris-lea/python-requests" || return 1
add-apt-repository -y "ppa:chris-lea/python-chardet" || return 1
add-apt-repository -y "ppa:chris-lea/python-urllib3" || return 1
add-apt-repository -y "ppa:chris-lea/python-crypto" || return 1
fi
fi
@ -2337,7 +2332,7 @@ install_ubuntu_deps() {
# Minimal systems might not have upstart installed, install it
__PACKAGES="upstart"
if [ "$DISTRO_MAJOR_VERSION" -ge 15 ]; then
if [ "$DISTRO_MAJOR_VERSION" -ge 16 ]; then
__PACKAGES="${__PACKAGES} python2.7"
fi
if [ "$_VIRTUALENV_DIR" != "null" ]; then
@ -2349,6 +2344,9 @@ install_ubuntu_deps() {
# requests is still used by many salt modules
__PACKAGES="${__PACKAGES} python-requests"
# YAML module is used for generating custom master/minion configs
__PACKAGES="${__PACKAGES} python-yaml"
# Additionally install procps and pciutils which allows for Docker bootstraps. See 366#issuecomment-39666813
__PACKAGES="${__PACKAGES} procps pciutils"
@ -2365,7 +2363,7 @@ install_ubuntu_deps() {
}
install_ubuntu_stable_deps() {
if ([ "${_SLEEP}" -eq "${__DEFAULT_SLEEP}" ] && [ "$DISTRO_MAJOR_VERSION" -lt 15 ]); then
if [ "${_SLEEP}" -eq "${__DEFAULT_SLEEP}" ] && [ "$DISTRO_MAJOR_VERSION" -lt 16 ]; then
# The user did not pass a custom sleep value as an argument, let's increase the default value
echodebug "On Ubuntu systems we increase the default sleep value to 10."
echodebug "See https://github.com/saltstack/salt/issues/12248 for more info."
@ -2408,12 +2406,12 @@ install_ubuntu_stable_deps() {
fi
# Versions starting with 2015.5.6, 2015.8.1 and 2016.3.0 are hosted at repo.saltstack.com
if [ "$(echo "$STABLE_REV" | egrep '^(2015\.5|2015\.8|2016\.3|latest|archive\/)')" != "" ]; then
if [ "$(echo "$STABLE_REV" | egrep '^(2015\.5|2015\.8|2016\.3|2016\.11|latest|archive\/)')" != "" ]; then
# Workaround for latest non-LTS ubuntu
if [ "$DISTRO_MAJOR_VERSION" -eq 15 ]; then
if [ "$DISTRO_VERSION" = "16.10" ]; then
echowarn "Non-LTS Ubuntu detected, but stable packages requested. Trying packages from latest LTS release. You may experience problems."
UBUNTU_VERSION=14.04
UBUNTU_CODENAME=trusty
UBUNTU_VERSION=16.04
UBUNTU_CODENAME=xenial
else
UBUNTU_VERSION=$DISTRO_VERSION
UBUNTU_CODENAME=$DISTRO_CODENAME
@ -2439,12 +2437,7 @@ install_ubuntu_stable_deps() {
STABLE_PPA="saltstack/salt"
fi
if [ "$DISTRO_MAJOR_VERSION" -gt 11 ] || ([ "$DISTRO_MAJOR_VERSION" -eq 11 ] && [ "$DISTRO_MINOR_VERSION" -gt 04 ]); then
# Above Ubuntu 11.04 add a -y flag
add-apt-repository -y "ppa:$STABLE_PPA" || return 1
else
add-apt-repository "ppa:$STABLE_PPA" || return 1
fi
add-apt-repository -y "ppa:$STABLE_PPA" || return 1
fi
apt-get update
@ -2456,24 +2449,17 @@ install_ubuntu_stable_deps() {
install_ubuntu_daily_deps() {
install_ubuntu_stable_deps || return 1
if [ "$DISTRO_MAJOR_VERSION" -ge 12 ]; then
# Above Ubuntu 11.10 add-apt-repository is in a different package
if [ "$DISTRO_MAJOR_VERSION" -gt 12 ]; then
__apt_get_install_noinput software-properties-common || return 1
else
# Ubuntu 12.04 needs python-software-properties to get add-apt-repository binary
__apt_get_install_noinput python-software-properties || return 1
fi
if [ $_DISABLE_REPOS -eq $BS_FALSE ]; then
__enable_universe_repository || return 1
# for anything up to and including 11.04 do not use the -y option
if [ "$DISTRO_MAJOR_VERSION" -gt 11 ] || ([ "$DISTRO_MAJOR_VERSION" -eq 11 ] && [ "$DISTRO_MINOR_VERSION" -gt 04 ]); then
# Above Ubuntu 11.04 add a -y flag
add-apt-repository -y ppa:saltstack/salt-daily || return 1
else
add-apt-repository ppa:saltstack/salt-daily || return 1
fi
add-apt-repository -y ppa:saltstack/salt-daily || return 1
apt-get update
fi
@ -2486,7 +2472,15 @@ install_ubuntu_daily_deps() {
install_ubuntu_git_deps() {
apt-get update
__apt_get_install_noinput git-core || return 1
if ! __check_command_exists git; then
__apt_get_install_noinput git-core || return 1
fi
if [ "$_INSECURE_DL" -eq $BS_FALSE ] && [ "${_SALT_REPO_URL%%://*}" = "https" ]; then
__apt_get_install_noinput ca-certificates
fi
__git_clone_and_checkout || return 1
__PACKAGES=""
@ -2569,12 +2563,6 @@ install_ubuntu_git() {
}
install_ubuntu_stable_post() {
# Workaround for latest LTS packages on latest ubuntu. Normally packages on
# debian-based systems will automatically start the corresponding daemons
if [ "$DISTRO_MAJOR_VERSION" -lt 15 ]; then
return 0
fi
for fname in api master minion syndic; do
# Skip salt-api since the service should be opt-in and not necessarily started on boot
[ $fname = "api" ] && continue
@ -2607,7 +2595,7 @@ install_ubuntu_git_post() {
[ $fname = "minion" ] && [ "$_INSTALL_MINION" -eq $BS_FALSE ] && continue
[ $fname = "syndic" ] && [ "$_INSTALL_SYNDIC" -eq $BS_FALSE ] && continue
if [ -f /bin/systemctl ] && [ "$DISTRO_MAJOR_VERSION" -ge 15 ]; then
if [ -f /bin/systemctl ] && [ "$DISTRO_MAJOR_VERSION" -ge 16 ]; then
__copyfile "${_SALT_GIT_CHECKOUT_DIR}/pkg/deb/salt-${fname}.service" "/lib/systemd/system/salt-${fname}.service"
# Skip salt-api since the service should be opt-in and not necessarily started on boot
@ -2652,7 +2640,7 @@ install_ubuntu_restart_daemons() {
[ $_START_DAEMONS -eq $BS_FALSE ] && return
# Ensure upstart configs / systemd units are loaded
if [ -f /bin/systemctl ] && [ "$DISTRO_MAJOR_VERSION" -ge 15 ]; then
if [ -f /bin/systemctl ] && [ "$DISTRO_MAJOR_VERSION" -ge 16 ]; then
systemctl daemon-reload
elif [ -f /sbin/initctl ]; then
/sbin/initctl reload-configuration
@ -2667,7 +2655,7 @@ install_ubuntu_restart_daemons() {
[ $fname = "minion" ] && [ "$_INSTALL_MINION" -eq $BS_FALSE ] && continue
[ $fname = "syndic" ] && [ "$_INSTALL_SYNDIC" -eq $BS_FALSE ] && continue
if [ -f /bin/systemctl ] && [ "$DISTRO_MAJOR_VERSION" -ge 15 ]; then
if [ -f /bin/systemctl ] && [ "$DISTRO_MAJOR_VERSION" -ge 16 ]; then
echodebug "There's systemd support while checking salt-$fname"
systemctl stop salt-$fname > /dev/null 2>&1
systemctl start salt-$fname.service
@ -2711,7 +2699,7 @@ install_ubuntu_check_services() {
[ $fname = "master" ] && [ "$_INSTALL_MASTER" -eq $BS_FALSE ] && continue
[ $fname = "syndic" ] && [ "$_INSTALL_SYNDIC" -eq $BS_FALSE ] && continue
if [ -f /bin/systemctl ] && [ "$DISTRO_MAJOR_VERSION" -ge 15 ]; then
if [ -f /bin/systemctl ] && [ "$DISTRO_MAJOR_VERSION" -ge 16 ]; then
__check_services_systemd salt-$fname || return 1
elif [ -f /sbin/initctl ] && [ -f /etc/init/salt-${fname}.conf ]; then
__check_services_upstart salt-$fname || return 1
@ -2755,6 +2743,9 @@ install_debian_deps() {
__PACKAGES="procps pciutils"
__PIP_PACKAGES=""
# YAML module is used for generating custom master/minion configs
__PACKAGES="${__PACKAGES} python-yaml"
# shellcheck disable=SC2086
__apt_get_install_noinput ${__PACKAGES} || return 1
@ -2817,7 +2808,7 @@ install_debian_7_deps() {
fi
# Versions starting with 2015.8.7 and 2016.3.0 are hosted at repo.saltstack.com
if [ "$(echo "$STABLE_REV" | egrep '^(2015\.8|2016\.3|latest|archive\/201[5-6]\.)')" != "" ]; then
if [ "$(echo "$STABLE_REV" | egrep '^(2015\.8|2016\.3|2016\.11|latest|archive\/201[5-6]\.)')" != "" ]; then
# amd64 is just a part of repository URI, 32-bit pkgs are hosted under the same location
SALTSTACK_DEBIAN_URL="${HTTP_VAL}://repo.saltstack.com/apt/debian/${DISTRO_MAJOR_VERSION}/${__REPO_ARCH}/${STABLE_REV}"
echo "deb $SALTSTACK_DEBIAN_URL wheezy main" > "/etc/apt/sources.list.d/saltstack.list"
@ -2841,6 +2832,9 @@ install_debian_7_deps() {
# Additionally install procps and pciutils which allows for Docker bootstraps. See 366#issuecomment-39666813
__PACKAGES='procps pciutils'
# YAML module is used for generating custom master/minion configs
__PACKAGES="${__PACKAGES} python-yaml"
# shellcheck disable=SC2086
__apt_get_install_noinput ${__PACKAGES} || return 1
@ -2896,7 +2890,7 @@ install_debian_8_deps() {
fi
# Versions starting with 2015.5.6, 2015.8.1 and 2016.3.0 are hosted at repo.saltstack.com
if [ "$(echo "$STABLE_REV" | egrep '^(2015\.5|2015\.8|2016\.3|latest|archive\/201[5-6]\.)')" != "" ]; then
if [ "$(echo "$STABLE_REV" | egrep '^(2015\.5|2015\.8|2016\.3|2016\.11|latest|archive\/201[5-6]\.)')" != "" ]; then
SALTSTACK_DEBIAN_URL="${HTTP_VAL}://repo.saltstack.com/apt/debian/${DISTRO_MAJOR_VERSION}/${__REPO_ARCH}/${STABLE_REV}"
echo "deb $SALTSTACK_DEBIAN_URL jessie main" > "/etc/apt/sources.list.d/saltstack.list"
@ -2920,9 +2914,8 @@ install_debian_8_deps() {
# shellcheck disable=SC2086
__apt_get_install_noinput ${__PACKAGES} || return 1
if [ "$_UPGRADE_SYS" -eq $BS_TRUE ]; then
__apt_get_upgrade_noinput || return 1
fi
# YAML module is used for generating custom master/minion configs
__PACKAGES="${__PACKAGES} python-yaml"
if [ "${_EXTRA_PACKAGES}" != "" ]; then
echoinfo "Installing the following extra packages as requested: ${_EXTRA_PACKAGES}"
@ -2938,10 +2931,14 @@ install_debian_git_deps() {
__apt_get_install_noinput git || return 1
fi
if [ "$_INSECURE_DL" -eq $BS_FALSE ] && [ "${_SALT_REPO_URL%%://*}" = "https" ]; then
__apt_get_install_noinput ca-certificates
fi
__git_clone_and_checkout || return 1
__PACKAGES="libzmq3 libzmq3-dev lsb-release python-apt python-backports.ssl-match-hostname python-crypto"
__PACKAGES="${__PACKAGES} python-jinja2 python-msgpack python-requests python-tornado"
__PACKAGES="${__PACKAGES} python-jinja2 python-msgpack python-requests"
__PACKAGES="${__PACKAGES} python-tornado python-yaml python-zmq"
if [ "$_INSTALL_CLOUD" -eq $BS_TRUE ]; then
@ -2975,9 +2972,14 @@ install_debian_8_git_deps() {
__apt_get_install_noinput git || return 1
fi
if [ "$_INSECURE_DL" -eq $BS_FALSE ] && [ "${_SALT_REPO_URL%%://*}" = "https" ]; then
__apt_get_install_noinput ca-certificates
fi
__git_clone_and_checkout || return 1
__PACKAGES='libzmq3 libzmq3-dev lsb-release python-apt python-crypto python-jinja2 python-msgpack python-requests python-yaml python-zmq'
__PACKAGES="libzmq3 libzmq3-dev lsb-release python-apt python-crypto python-jinja2 python-msgpack"
__PACKAGES="${__PACKAGES} python-requests python-systemd python-yaml python-zmq"
if [ "$_INSTALL_CLOUD" -eq $BS_TRUE ]; then
# Install python-libcloud if asked to
@ -3184,16 +3186,7 @@ install_debian_check_services() {
# Fedora Install Functions
#
FEDORA_PACKAGE_MANAGER="yum"
__fedora_get_package_manager() {
if [ "$DISTRO_MAJOR_VERSION" -ge 22 ] || __check_command_exists dnf; then
FEDORA_PACKAGE_MANAGER="dnf"
fi
}
install_fedora_deps() {
__fedora_get_package_manager
if [ $_DISABLE_REPOS -eq $BS_FALSE ]; then
if [ "$_ENABLE_EXTERNAL_ZMQ_REPOS" -eq $BS_TRUE ]; then
@ -3203,32 +3196,25 @@ install_fedora_deps() {
__install_saltstack_copr_salt_repository || return 1
fi
__PACKAGES="yum-utils PyYAML libyaml python-crypto python-jinja2 python-zmq"
if [ "$DISTRO_MAJOR_VERSION" -ge 23 ]; then
__PACKAGES="${__PACKAGES} python2-msgpack python2-requests"
else
__PACKAGES="${__PACKAGES} python-msgpack python-requests"
fi
__PACKAGES="yum-utils PyYAML libyaml python-crypto python-jinja2 python-zmq python2-msgpack python2-requests"
# shellcheck disable=SC2086
$FEDORA_PACKAGE_MANAGER install -y ${__PACKAGES} || return 1
dnf install -y ${__PACKAGES} || return 1
if [ "$_UPGRADE_SYS" -eq $BS_TRUE ]; then
$FEDORA_PACKAGE_MANAGER -y update || return 1
dnf -y update || return 1
fi
if [ "${_EXTRA_PACKAGES}" != "" ]; then
echoinfo "Installing the following extra packages as requested: ${_EXTRA_PACKAGES}"
# shellcheck disable=SC2086
$FEDORA_PACKAGE_MANAGER install -y ${_EXTRA_PACKAGES} || return 1
dnf install -y ${_EXTRA_PACKAGES} || return 1
fi
return 0
}
install_fedora_stable() {
__fedora_get_package_manager
__PACKAGES=""
if [ "$_INSTALL_CLOUD" -eq $BS_TRUE ];then
@ -3245,7 +3231,7 @@ install_fedora_stable() {
fi
# shellcheck disable=SC2086
$FEDORA_PACKAGE_MANAGER install -y ${__PACKAGES} || return 1
dnf install -y ${__PACKAGES} || return 1
return 0
}
@ -3267,11 +3253,15 @@ install_fedora_stable_post() {
}
install_fedora_git_deps() {
__fedora_get_package_manager
if [ "$_INSECURE_DL" -eq $BS_FALSE ] && [ "${_SALT_REPO_URL%%://*}" = "https" ]; then
dnf install -y ca-certificates || return 1
fi
install_fedora_deps || return 1
if ! __check_command_exists git; then
$FEDORA_PACKAGE_MANAGER install -y git || return 1
dnf install -y git || return 1
fi
__git_clone_and_checkout || return 1
@ -3299,7 +3289,7 @@ install_fedora_git_deps() {
fi
# shellcheck disable=SC2086
$FEDORA_PACKAGE_MANAGER install -y ${__PACKAGES} || return 1
dnf install -y ${__PACKAGES} || return 1
if [ "${__PIP_PACKAGES}" != "" ]; then
# shellcheck disable=SC2086,SC2090
@ -3449,7 +3439,13 @@ __install_saltstack_rhel_repository() {
repo_url="repo.saltstack.com"
fi
base_url="${HTTP_VAL}://${repo_url}/yum/redhat/\$releasever/\$basearch/${repo_rev}/"
# Cloud Linux $releasever = 7.x, which doesn't exist in repo.saltstack.com, so we need this to be "7"
if [ "${DISTRO_NAME}" = "Cloud Linux" ] && [ "${DISTRO_MAJOR_VERSION}" = "7" ]; then
base_url="${HTTP_VAL}://${repo_url}/yum/redhat/${DISTRO_MAJOR_VERSION}/\$basearch/${repo_rev}/"
else
base_url="${HTTP_VAL}://${repo_url}/yum/redhat/\$releasever/\$basearch/${repo_rev}/"
fi
fetch_url="${HTTP_VAL}://${repo_url}/yum/redhat/${DISTRO_MAJOR_VERSION}/${CPU_ARCH_L}/${repo_rev}/"
if [ "${DISTRO_MAJOR_VERSION}" -eq 5 ]; then
@ -3528,14 +3524,23 @@ install_centos_stable_deps() {
__PACKAGES="yum-utils chkconfig"
if [ "${_EXTRA_PACKAGES}" != "" ]; then
echoinfo "Also installing the following extra packages as requested: ${_EXTRA_PACKAGES}"
__PACKAGES="${__PACKAGES} ${_EXTRA_PACKAGES}"
# YAML module is used for generating custom master/minion configs
if [ "$DISTRO_MAJOR_VERSION" -eq 5 ]; then
__PACKAGES="${__PACKAGES} python26-PyYAML"
else
__PACKAGES="${__PACKAGES} PyYAML"
fi
# shellcheck disable=SC2086
__yum_install_noinput ${__PACKAGES} || return 1
if [ "${_EXTRA_PACKAGES}" != "" ]; then
echoinfo "Installing the following extra packages as requested: ${_EXTRA_PACKAGES}"
# shellcheck disable=SC2086
__yum_install_noinput ${_EXTRA_PACKAGES} || return 1
fi
return 0
}
@ -3574,7 +3579,7 @@ install_centos_stable_post() {
[ $fname = "syndic" ] && [ "$_INSTALL_SYNDIC" -eq $BS_FALSE ] && continue
if [ -f /bin/systemctl ]; then
/usr/systemctl is-enabled salt-${fname}.service > /dev/null 2>&1 || (
/bin/systemctl is-enabled salt-${fname}.service > /dev/null 2>&1 || (
/bin/systemctl preset salt-${fname}.service > /dev/null 2>&1 &&
/bin/systemctl enable salt-${fname}.service > /dev/null 2>&1
)
@ -3593,6 +3598,14 @@ install_centos_stable_post() {
}
install_centos_git_deps() {
if [ "$_INSECURE_DL" -eq $BS_FALSE ] && [ "${_SALT_REPO_URL%%://*}" = "https" ]; then
if [ "$DISTRO_MAJOR_VERSION" -gt 5 ]; then
__yum_install_noinput ca-certificates || return 1
else
__yum_install_noinput "openssl.${CPU_ARCH_L}" || return 1
fi
fi
install_centos_stable_deps || return 1
if ! __check_command_exists git; then
@ -3604,10 +3617,10 @@ install_centos_git_deps() {
__PACKAGES=""
if [ "$DISTRO_MAJOR_VERSION" -eq 5 ]; then
__PACKAGES="${__PACKAGES} python26-PyYAML python26 python26-requests"
__PACKAGES="${__PACKAGES} python26-crypto python26-jinja2 python26-msgpack python26-tornado python26-zmq"
__PACKAGES="${__PACKAGES} python26 python26-crypto python26-jinja2 python26-msgpack python26-requests"
__PACKAGES="${__PACKAGES} python26-tornado python26-zmq"
else
__PACKAGES="${__PACKAGES} PyYAML python-crypto python-futures python-msgpack python-zmq python-jinja2"
__PACKAGES="${__PACKAGES} python-crypto python-futures python-msgpack python-zmq python-jinja2"
__PACKAGES="${__PACKAGES} python-requests python-tornado"
fi
@ -4082,6 +4095,69 @@ install_scientific_linux_check_services() {
#
#######################################################################################################################
#######################################################################################################################
#
# CloudLinux Install Functions
#
install_cloud_linux_stable_deps() {
install_centos_stable_deps || return 1
return 0
}
install_cloud_linux_git_deps() {
install_centos_git_deps || return 1
return 0
}
install_cloud_linux_testing_deps() {
install_centos_testing_deps || return 1
return 0
}
install_cloud_linux_stable() {
install_centos_stable || return 1
return 0
}
install_cloud_linux_git() {
install_centos_git || return 1
return 0
}
install_cloud_linux_testing() {
install_centos_testing || return 1
return 0
}
install_cloud_linux_stable_post() {
install_centos_stable_post || return 1
return 0
}
install_cloud_linux_git_post() {
install_centos_git_post || return 1
return 0
}
install_cloud_linux_testing_post() {
install_centos_testing_post || return 1
return 0
}
install_cloud_linux_restart_daemons() {
install_centos_restart_daemons || return 1
return 0
}
install_cloud_linux_check_services() {
install_centos_check_services || return 1
return 0
}
#
# End of CloudLinux Install Functions
#
#######################################################################################################################
#######################################################################################################################
#
# Amazon Linux AMI Install Functions
@ -4089,6 +4165,10 @@ install_scientific_linux_check_services() {
install_amazon_linux_ami_deps() {
# We need to install yum-utils before doing anything else when installing on
# Amazon Linux ECS-optimized images. See issue #974.
yum -y install yum-utils
ENABLE_EPEL_CMD=""
if [ $_DISABLE_REPOS -eq $BS_TRUE ]; then
ENABLE_EPEL_CMD="--enablerepo=${_EPEL_REPO}"
@ -4133,6 +4213,10 @@ _eof
}
install_amazon_linux_ami_git_deps() {
if [ "$_INSECURE_DL" -eq $BS_FALSE ] && [ "${_SALT_REPO_URL%%://*}" = "https" ]; then
yum -y install ca-certificates || return 1
fi
install_amazon_linux_ami_deps || return 1
ENABLE_EPEL_CMD=""
@ -4238,6 +4322,9 @@ install_arch_linux_stable_deps() {
pacman-db-upgrade || return 1
fi
# YAML module is used for generating custom master/minion configs
pacman -Sy --noconfirm --needed python2-yaml
if [ "$_UPGRADE_SYS" -eq $BS_TRUE ]; then
pacman -Syyu --noconfirm --needed || return 1
fi
@ -4262,7 +4349,7 @@ install_arch_linux_git_deps() {
fi
pacman -R --noconfirm python2-distribute
pacman -Sy --noconfirm --needed python2-crypto python2-setuptools python2-jinja \
python2-markupsafe python2-msgpack python2-psutil python2-yaml \
python2-markupsafe python2-msgpack python2-psutil \
python2-pyzmq zeromq python2-requests python2-systemd || return 1
__git_clone_and_checkout || return 1
@ -4293,7 +4380,7 @@ install_arch_linux_stable() {
pacman -S --noconfirm --needed bash || return 1
pacman -Su --noconfirm || return 1
# We can now resume regular salt update
pacman -Syu --noconfirm salt-zmq || return 1
pacman -Syu --noconfirm salt || return 1
return 0
}
@ -4515,6 +4602,10 @@ install_freebsd_9_stable_deps() {
# shellcheck disable=SC2086
/usr/local/sbin/pkg install ${FROM_FREEBSD} -y swig || return 1
# YAML module is used for generating custom master/minion configs
# shellcheck disable=SC2086
/usr/local/sbin/pkg install ${FROM_FREEBSD} -y py27-yaml || return 1
if [ "${_EXTRA_PACKAGES}" != "" ]; then
echoinfo "Installing the following extra packages as requested: ${_EXTRA_PACKAGES}"
# shellcheck disable=SC2086
@ -5027,8 +5118,7 @@ __ZYPPER_REQUIRES_REPLACE_FILES=-1
__set_suse_pkg_repo() {
suse_pkg_url_path="${DISTRO_REPO}/systemsmanagement:saltstack.repo"
if [ "$_DOWNSTREAM_PKG_REPO" -eq $BS_TRUE ]; then
# FIXME: cleartext download over insecure protocol (HTTP)
suse_pkg_url_base="http://download.opensuse.org/repositories/systemsmanagement:saltstack"
suse_pkg_url_base="http://download.opensuse.org/repositories/systemsmanagement:/saltstack"
else
suse_pkg_url_base="${HTTP_VAL}://repo.saltstack.com/opensuse"
fi
@ -5127,6 +5217,10 @@ install_opensuse_stable_deps() {
}
install_opensuse_git_deps() {
if [ "$_INSECURE_DL" -eq $BS_FALSE ] && [ "${_SALT_REPO_URL%%://*}" = "https" ]; then
__zypper_install ca-certificates || return 1
fi
install_opensuse_stable_deps || return 1
if ! __check_command_exists git; then
@ -5917,7 +6011,7 @@ config_salt() {
# Copy the minions configuration if found
# Explicitly check for custom master config to avoid moving the minion config
elif [ -f "$_TEMP_CONFIG_DIR/minion" ] && [ "$_CUSTOM_MASTER_CONFIG" = "null" ]; then
__movefile "$_TEMP_CONFIG_DIR/minion" "$_SALT_ETC_DIR" "$_CONFIG_ONLY" || return 1
__movefile "$_TEMP_CONFIG_DIR/minion" "$_SALT_ETC_DIR" "$_FORCE_OVERWRITE" || return 1
CONFIGURED_ANYTHING=$BS_TRUE
fi
@ -6008,9 +6102,6 @@ config_salt() {
exit 0
fi
# Create default logs directory if not exists
mkdir -p /var/log/salt
return 0
}
#
@ -6116,7 +6207,7 @@ for FUNC_NAME in $(__strip_duplicates "$DEP_FUNC_NAMES"); do
done
echodebug "DEPS_INSTALL_FUNC=${DEPS_INSTALL_FUNC}"
# Let's get the minion config function
# Let's get the Salt config function
CONFIG_FUNC_NAMES="config_${DISTRO_NAME_L}${PREFIXED_DISTRO_MAJOR_VERSION}_${ITYPE}_salt"
CONFIG_FUNC_NAMES="$CONFIG_FUNC_NAMES config_${DISTRO_NAME_L}${PREFIXED_DISTRO_MAJOR_VERSION}${PREFIXED_DISTRO_MINOR_VERSION}_${ITYPE}_salt"
CONFIG_FUNC_NAMES="$CONFIG_FUNC_NAMES config_${DISTRO_NAME_L}${PREFIXED_DISTRO_MAJOR_VERSION}_salt"
@ -6265,6 +6356,16 @@ if [ "$_CUSTOM_MASTER_CONFIG" != "null" ] || [ "$_CUSTOM_MINION_CONFIG" != "null
if [ "$_TEMP_CONFIG_DIR" = "null" ]; then
_TEMP_CONFIG_DIR="$_SALT_ETC_DIR"
fi
if [ "$_CONFIG_ONLY" -eq $BS_TRUE ]; then
# Execute function to satisfy dependencies for configuration step
echoinfo "Running ${DEPS_INSTALL_FUNC}()"
$DEPS_INSTALL_FUNC
if [ $? -ne 0 ]; then
echoerror "Failed to run ${DEPS_INSTALL_FUNC}()!!!"
exit 1
fi
fi
fi
# Configure Salt
@ -6277,7 +6378,21 @@ if [ "$CONFIG_SALT_FUNC" != "null" ] && [ "$_TEMP_CONFIG_DIR" != "null" ]; then
fi
fi
# Pre-Seed master keys
# Drop the master address if passed
if [ "$_SALT_MASTER_ADDRESS" != "null" ]; then
[ ! -d "$_SALT_ETC_DIR/minion.d" ] && mkdir -p "$_SALT_ETC_DIR/minion.d"
cat <<_eof > $_SALT_ETC_DIR/minion.d/99-master-address.conf
master: $_SALT_MASTER_ADDRESS
_eof
fi
# Drop the minion id if passed
if [ "$_SALT_MINION_ID" != "null" ]; then
[ ! -d "$_SALT_ETC_DIR" ] && mkdir -p "$_SALT_ETC_DIR"
echo "$_SALT_MINION_ID" > "$_SALT_ETC_DIR/minion_id"
fi
# Pre-seed master keys
if [ "$PRESEED_MASTER_FUNC" != "null" ] && [ "$_TEMP_KEYS_DIR" != "null" ]; then
echoinfo "Running ${PRESEED_MASTER_FUNC}()"
$PRESEED_MASTER_FUNC
@ -6298,29 +6413,6 @@ if [ "$_CONFIG_ONLY" -eq $BS_FALSE ]; then
fi
fi
# Ensure that the cachedir exists
# (Workaround for https://github.com/saltstack/salt/issues/6502)
if [ "$_INSTALL_MINION" -eq $BS_TRUE ]; then
if [ ! -d "${_SALT_CACHE_DIR}/minion/proc" ]; then
echodebug "Creating salt's cachedir"
mkdir -p "${_SALT_CACHE_DIR}/minion/proc"
fi
fi
# Drop the master address if passed
if [ "$_SALT_MASTER_ADDRESS" != "null" ]; then
[ ! -d "$_SALT_ETC_DIR/minion.d" ] && mkdir -p "$_SALT_ETC_DIR/minion.d"
cat <<_eof > $_SALT_ETC_DIR/minion.d/99-master-address.conf
master: $_SALT_MASTER_ADDRESS
_eof
fi
# Drop the minion id if passed
if [ "$_SALT_MINION_ID" != "null" ]; then
[ ! -d "$_SALT_ETC_DIR" ] && mkdir -p "$_SALT_ETC_DIR"
echo "$_SALT_MINION_ID" > "$_SALT_ETC_DIR/minion_id"
fi
# Run any post install function. Only execute function if not in config mode only
if [ "$POST_INSTALL_FUNC" != "null" ] && [ "$_CONFIG_ONLY" -eq $BS_FALSE ]; then
echoinfo "Running ${POST_INSTALL_FUNC}()"

View file

@ -518,11 +518,6 @@ VALID_OPTS = {
# http://api.zeromq.org/3-2:zmq-setsockopt
'pub_hwm': int,
# ZMQ HWM for SaltEvent pub socket
'salt_event_pub_hwm': int,
# ZMQ HWM for EventPublisher pub socket
'event_publisher_pub_hwm': int,
# IPC buffer size
# Refs https://github.com/saltstack/salt/issues/34215
'ipc_write_buffer': int,
@ -1202,10 +1197,6 @@ DEFAULT_MINION_OPTS = {
'sudo_user': '',
'http_request_timeout': 1 * 60 * 60.0, # 1 hour
'http_max_body': 100 * 1024 * 1024 * 1024, # 100GB
# ZMQ HWM for SaltEvent pub socket - different for minion vs. master
'salt_event_pub_hwm': 2000,
# ZMQ HWM for EventPublisher pub socket - different for minion vs. master
'event_publisher_pub_hwm': 1000,
'event_match_type': 'startswith',
'minion_restart_command': [],
'pub_ret': True,
@ -1226,10 +1217,6 @@ DEFAULT_MASTER_OPTS = {
'publish_port': 4505,
'zmq_backlog': 1000,
'pub_hwm': 1000,
# ZMQ HWM for SaltEvent pub socket - different for minion vs. master
'salt_event_pub_hwm': 2000,
# ZMQ HWM for EventPublisher pub socket - different for minion vs. master
'event_publisher_pub_hwm': 1000,
'auth_mode': 1,
'user': 'root',
'worker_threads': 5,

View file

@ -745,12 +745,13 @@ class RemoteFuncs(object):
return False
if 'events' in load:
for event in load['events']:
self.event.fire_event(event, event['tag']) # old dup event
if 'data' in event:
event_data = event['data']
else:
event_data = event
self.event.fire_event(event_data, event['tag']) # old dup event
if load.get('pretag') is not None:
if 'data' in event:
self.event.fire_event(event['data'], tagify(event['tag'], base=load['pretag']))
else:
self.event.fire_event(event, tagify(event['tag'], base=load['pretag']))
self.event.fire_event(event_data, tagify(event['tag'], base=load['pretag']))
else:
tag = load['tag']
self.event.fire_event(load, tag)

View file

@ -134,17 +134,21 @@ def items(*args, **kwargs):
.. versionadded:: 2015.5.0
pillarenv
Pass a specific pillar environment from which to compile pillar data.
If not specified, then the minion's :conf_minion:`pillarenv` option is
used, and if that also is not specified then all configured pillar
environments will be merged into a single pillar dictionary and
returned.
.. versionadded:: 2016.11.2
saltenv
Included only for compatibility with
:conf_minion:`pillarenv_from_saltenv`, and is otherwise ignored.
.. versionadded:: Nitrogen
pillarenv
Pass a specific pillar environment from which to compile pillar data.
.. versionadded:: Nitrogen
CLI Example:
.. code-block:: bash
@ -169,7 +173,8 @@ def items(*args, **kwargs):
__grains__,
opts['id'],
saltenv=pillarenv,
pillar=kwargs.get('pillar'))
pillar=kwargs.get('pillar'),
pillarenv=kwargs.get('pillarenv') or __opts__['pillarenv'])
return pillar.compile_pillar()

View file

@ -74,11 +74,6 @@ CERT_DEFAULTS = {
'serial_bits': 64,
'algorithm': 'sha256'
}
PEM_RE = re.compile(
r"\s*(?P<pem_header>-----(?:.+?)-----)\s+"
r"(?P<pem_body>.+?)\s+(?P<pem_footer>-----(?:.+?)-----)\s*",
re.DOTALL
)
def __virtual__():
@ -146,11 +141,13 @@ def _new_extension(name, value, critical=0, issuer=None, _pyfree=1):
lhash = None
except AttributeError:
lhash = M2Crypto.m2.x509v3_lhash() # pylint: disable=no-member
ctx = M2Crypto.m2.x509v3_set_conf_lhash(lhash) # pylint: disable=no-member
#ctx not zeroed
ctx = M2Crypto.m2.x509v3_set_conf_lhash(
lhash) # pylint: disable=no-member
# ctx not zeroed
_fix_ctx(ctx, issuer)
x509_ext_ptr = M2Crypto.m2.x509v3_ext_conf(lhash, ctx, name, value) # pylint: disable=no-member
#ctx,lhash freed
x509_ext_ptr = M2Crypto.m2.x509v3_ext_conf(
lhash, ctx, name, value) # pylint: disable=no-member
# ctx,lhash freed
if x509_ext_ptr is None:
raise M2Crypto.X509.X509Error(
@ -353,18 +350,28 @@ def _get_certificate_obj(cert):
return M2Crypto.X509.load_cert_string(text)
def _get_private_key_obj(private_key):
def _get_private_key_obj(private_key, passphrase=None):
'''
Returns a private key object based on PEM text.
'''
private_key = _text_or_file(private_key)
private_key = get_pem_entry(private_key, pem_type='RSA PRIVATE KEY')
rsaprivkey = M2Crypto.RSA.load_key_string(private_key)
rsaprivkey = M2Crypto.RSA.load_key_string(
private_key, callback=_passphrase_callback(passphrase))
evpprivkey = M2Crypto.EVP.PKey()
evpprivkey.assign_rsa(rsaprivkey)
return evpprivkey
def _passphrase_callback(passphrase):
'''
Returns a callback function used to supply a passphrase for private keys
'''
def f(*args):
return passphrase
return f
def _get_request_obj(csr):
'''
Returns a CSR object based on PEM text.
@ -397,6 +404,8 @@ def _make_regex(pem_type):
'''
return re.compile(
r"\s*(?P<pem_header>-----BEGIN {0}-----)\s+"
r"(?:(?P<proc_type>Proc-Type: 4,ENCRYPTED)\s*)?"
r"(?:(?P<dek_info>DEK-Info: (?:DES-[3A-Z\-]+,[0-9A-F]{{16}}|[0-9A-Z\-]+,[0-9A-F]{{32}}))\s*)?"
r"(?P<pem_body>.+?)\s+(?P<pem_footer>"
r"-----END {1}-----)\s*".format(pem_type, pem_type),
re.DOTALL
@ -424,18 +433,21 @@ def get_pem_entry(text, pem_type=None):
MIICyzCC Ar8CAQI...-----END CERTIFICATE REQUEST"
'''
text = _text_or_file(text)
# Replace encoded newlines
text = text.replace('\\n', '\n')
_match = None
if len(text.splitlines()) == 1 and text.startswith('-----') and text.endswith('-----'):
if len(text.splitlines()) == 1 and text.startswith(
'-----') and text.endswith('-----'):
# mine.get returns the PEM on a single line, we fix this
pem_fixed = []
pem_temp = text
while len(pem_temp) > 0:
if pem_temp.startswith('-----'):
# Grab ----(.*)---- blocks
pem_fixed.append(pem_temp[:pem_temp.index('-----', 5)+5])
pem_temp = pem_temp[pem_temp.index('-----', 5)+5:]
pem_fixed.append(pem_temp[:pem_temp.index('-----', 5) + 5])
pem_temp = pem_temp[pem_temp.index('-----', 5) + 5:]
else:
# grab base64 chunks
if pem_temp[:64].count('-') == 0:
@ -446,39 +458,34 @@ def get_pem_entry(text, pem_type=None):
pem_temp = pem_temp[pem_temp.index('-'):]
text = "\n".join(pem_fixed)
if not pem_type:
# Find using a regex iterator, pick the first match
for _match in PEM_RE.finditer(text):
if _match:
break
if not _match:
raise salt.exceptions.SaltInvocationError(
'PEM text not valid:\n{0}'.format(text)
)
_match_dict = _match.groupdict()
pem_header = _match_dict['pem_header']
pem_footer = _match_dict['pem_footer']
pem_body = _match_dict['pem_body']
else:
_dregex = _make_regex('[0-9A-Z ]+')
errmsg = 'PEM text not valid:\n{0}'.format(text)
if pem_type:
_dregex = _make_regex(pem_type)
for _match in _dregex.finditer(text):
if _match:
break
if not _match:
raise salt.exceptions.SaltInvocationError(
'PEM does not contain a single entry of type {0}:\n'
'{1}'.format(pem_type, text)
)
_match_dict = _match.groupdict()
pem_header = _match_dict['pem_header']
pem_footer = _match_dict['pem_footer']
pem_body = _match_dict['pem_body']
errmsg = ('PEM does not contain a single entry of type {0}:\n'
'{1}'.format(pem_type, text))
for _match in _dregex.finditer(text):
if _match:
break
if not _match:
raise salt.exceptions.SaltInvocationError(errmsg)
_match_dict = _match.groupdict()
pem_header = _match_dict['pem_header']
proc_type = _match_dict['proc_type']
dek_info = _match_dict['dek_info']
pem_footer = _match_dict['pem_footer']
pem_body = _match_dict['pem_body']
# Remove all whitespace from body
pem_body = ''.join(pem_body.split())
# Generate correctly formatted pem
ret = pem_header + '\n'
if proc_type:
ret += proc_type + '\n'
if dek_info:
ret += dek_info + '\n' + '\n'
for i in range(0, len(pem_body), 64):
ret += pem_body[i:i + 64] + '\n'
ret += pem_footer + '\n'
@ -651,7 +658,7 @@ def read_crl(crl):
return crlparsed
def get_public_key(key, asObj=False):
def get_public_key(key, passphrase=None, asObj=False):
'''
Returns a string containing the public key in PEM format.
@ -689,7 +696,8 @@ def get_public_key(key, asObj=False):
rsa = csr.get_pubkey().get_rsa()
if (text.startswith('-----BEGIN PRIVATE KEY-----') or
text.startswith('-----BEGIN RSA PRIVATE KEY-----')):
rsa = M2Crypto.RSA.load_key_string(text)
rsa = M2Crypto.RSA.load_key_string(
text, callback=_passphrase_callback(passphrase))
if asObj:
evppubkey = M2Crypto.EVP.PKey()
@ -700,7 +708,7 @@ def get_public_key(key, asObj=False):
return bio.read_all()
def get_private_key_size(private_key):
def get_private_key_size(private_key, passphrase=None):
'''
Returns the bit length of a private key in PEM format.
@ -713,7 +721,7 @@ def get_private_key_size(private_key):
salt '*' x509.get_private_key_size /etc/pki/mycert.key
'''
return _get_private_key_obj(private_key).size() * 8
return _get_private_key_obj(private_key, passphrase).size() * 8
def write_pem(text, path, overwrite=True, pem_type=None):
@ -770,7 +778,12 @@ def write_pem(text, path, overwrite=True, pem_type=None):
return 'PEM written to {0}'.format(path)
def create_private_key(path=None, text=False, bits=2048, verbose=True):
def create_private_key(path=None,
text=False,
bits=2048,
passphrase=None,
cipher='aes_128_cbc',
verbose=True):
'''
Creates a private key in PEM format.
@ -785,6 +798,12 @@ def create_private_key(path=None, text=False, bits=2048, verbose=True):
bits:
Length of the private key in bits. Default 2048
passphrase:
Passphrase for encrypting the private key
cipher:
Cipher for encrypting the private key. Has no effect if passphrase is None.
verbose:
Provide visual feedback on stdout. Default True
@ -812,7 +831,12 @@ def create_private_key(path=None, text=False, bits=2048, verbose=True):
rsa = M2Crypto.RSA.gen_key(bits, M2Crypto.m2.RSA_F4, _callback_func)
# pylint: enable=no-member
bio = M2Crypto.BIO.MemoryBuffer()
rsa.save_key_bio(bio, cipher=None)
if passphrase is None:
cipher = None
rsa.save_key_bio(
bio,
cipher=cipher,
callback=_passphrase_callback(passphrase))
if path:
return write_pem(
@ -953,10 +977,20 @@ def create_crl( # pylint: disable=too-many-arguments,too-many-locals
get_pem_entry(signing_private_key))
try:
crltext = crl.export(cert, key, OpenSSL.crypto.FILETYPE_PEM, days=days_valid, digest=bytes(digest))
crltext = crl.export(
cert,
key,
OpenSSL.crypto.FILETYPE_PEM,
days=days_valid,
digest=bytes(digest))
except TypeError:
log.warning('Error signing crl with specified digest. Are you using pyopenssl 0.15 or newer? The default md5 digest will be used.')
crltext = crl.export(cert, key, OpenSSL.crypto.FILETYPE_PEM, days=days_valid)
log.warning(
'Error signing crl with specified digest. Are you using pyopenssl 0.15 or newer? The default md5 digest will be used.')
crltext = crl.export(
cert,
key,
OpenSSL.crypto.FILETYPE_PEM,
days=days_valid)
if text:
return crltext
@ -1127,6 +1161,9 @@ def create_certificate(
certificate, and the public key matching ``signing_private_key`` will
be used to create the certificate.
signing_private_key_passphrase:
Passphrase used to decrypt the signing_private_key.
signing_cert:
A certificate matching the private key that will be used to sign this
certificate. This is used to populate the issuer values in the
@ -1149,6 +1186,10 @@ def create_certificate(
to the certificate, subject or extension information in the CSR will
be lost.
public_key_passphrase:
If the public key is supplied as a private key, this is the passphrase
used to decrypt it.
csr:
A file or PEM string containing a certificate signing request. This
will be used to supply the subject, extensions and public key of a
@ -1298,6 +1339,8 @@ def create_certificate(
raise salt.exceptions.SaltInvocationError(
'Either path or text must be specified, not both.')
if 'public_key_passphrase' not in kwargs:
kwargs['public_key_passphrase'] = None
if ca_server:
if 'signing_policy' not in kwargs:
raise salt.exceptions.SaltInvocationError(
@ -1311,13 +1354,15 @@ def create_certificate(
if 'public_key' in kwargs:
# Strip newlines to make passing through as cli functions easier
kwargs['public_key'] = get_public_key(
kwargs['public_key']).replace('\n', '')
kwargs['public_key'],
passphrase=kwargs['public_key_passphrase']).replace('\n', '')
# Remove system entries in kwargs
# Including listen_in and preqrequired because they are not included
# in STATE_INTERNAL_KEYWORDS
# for salt 2014.7.2
for ignore in list(_STATE_INTERNAL_KEYWORDS) + ['listen_in', 'preqrequired', '__prerequired__']:
for ignore in list(_STATE_INTERNAL_KEYWORDS) + \
['listen_in', 'preqrequired', '__prerequired__']:
kwargs.pop(ignore, None)
cert_txt = __salt__['publish.publish'](
@ -1382,7 +1427,8 @@ def create_certificate(
cert.set_subject(csr.get_subject())
csrexts = read_csr(kwargs['csr'])['X509v3 Extensions']
cert.set_pubkey(get_public_key(kwargs['public_key'], asObj=True))
cert.set_pubkey(get_public_key(kwargs['public_key'],
passphrase=kwargs['public_key_passphrase'], asObj=True))
subject = cert.get_subject()
@ -1431,13 +1477,19 @@ def create_certificate(
cert.add_ext(ext)
if 'signing_private_key_passphrase' not in kwargs:
kwargs['signing_private_key_passphrase'] = None
if 'testrun' in kwargs and kwargs['testrun'] is True:
cert_props = read_certificate(cert)
cert_props['Issuer Public Key'] = get_public_key(
kwargs['signing_private_key'])
kwargs['signing_private_key'],
passphrase=kwargs['signing_private_key_passphrase'])
return cert_props
if not verify_private_key(kwargs['signing_private_key'], signing_cert):
if not verify_private_key(private_key=kwargs['signing_private_key'],
passphrase=kwargs[
'signing_private_key_passphrase'],
public_key=signing_cert):
raise salt.exceptions.SaltInvocationError(
'signing_private_key: {0} '
'does not match signing_cert: {1}'.format(
@ -1447,7 +1499,8 @@ def create_certificate(
)
cert.sign(
_get_private_key_obj(kwargs['signing_private_key']),
_get_private_key_obj(kwargs['signing_private_key'],
passphrase=kwargs['signing_private_key_passphrase']),
kwargs['algorithm']
)
@ -1461,7 +1514,7 @@ def create_certificate(
else:
prepend = ''
write_pem(text=cert.as_pem(), path=os.path.join(kwargs['copypath'],
prepend + kwargs['serial_number']+'.crt'),
prepend + kwargs['serial_number'] + '.crt'),
pem_type='CERTIFICATE')
if path:
@ -1521,7 +1574,8 @@ def create_csr(path=None, text=False, **kwargs):
if 'private_key' not in kwargs and 'public_key' in kwargs:
kwargs['private_key'] = kwargs['public_key']
log.warning("OpenSSL no longer allows working with non-signed CSRs. A private_key must be specified. Attempting to use public_key as private_key")
log.warning(
"OpenSSL no longer allows working with non-signed CSRs. A private_key must be specified. Attempting to use public_key as private_key")
if 'private_key' not in kwargs:
raise salt.exceptions.SaltInvocationError('private_key is required')
@ -1529,7 +1583,10 @@ def create_csr(path=None, text=False, **kwargs):
if 'public_key' not in kwargs:
kwargs['public_key'] = kwargs['private_key']
csr.set_pubkey(get_public_key(kwargs['public_key'], asObj=True))
if 'public_key_passphrase' not in kwargs:
kwargs['public_key_passphrase'] = None
csr.set_pubkey(get_public_key(kwargs['public_key'],
passphrase=kwargs['public_key_passphrase'], asObj=True))
# pylint: disable=unused-variable
for entry, num in six.iteritems(subject.nid):
@ -1567,7 +1624,8 @@ def create_csr(path=None, text=False, **kwargs):
csr.add_extensions(extstack)
csr.sign(_get_private_key_obj(kwargs['private_key']), kwargs['algorithm'])
csr.sign(_get_private_key_obj(kwargs['private_key'],
passphrase=kwargs['public_key_passphrase']), kwargs['algorithm'])
if path:
return write_pem(
@ -1579,7 +1637,7 @@ def create_csr(path=None, text=False, **kwargs):
return csr.as_pem()
def verify_private_key(private_key, public_key):
def verify_private_key(private_key, public_key, passphrase=None):
'''
Verify that 'private_key' matches 'public_key'
@ -1591,6 +1649,9 @@ def verify_private_key(private_key, public_key):
The public key to verify, can be a string or path to a PEM formatted
certificate, csr, or another private key.
passphrase:
Passphrase to decrypt the private key.
CLI Example:
.. code-block:: bash
@ -1598,10 +1659,12 @@ def verify_private_key(private_key, public_key):
salt '*' x509.verify_private_key private_key=/etc/pki/myca.key \\
public_key=/etc/pki/myca.crt
'''
return bool(get_public_key(private_key) == get_public_key(public_key))
return bool(get_public_key(private_key, passphrase)
== get_public_key(public_key))
def verify_signature(certificate, signing_pub_key=None):
def verify_signature(certificate, signing_pub_key=None,
signing_pub_key_passphrase=None):
'''
Verify that ``certificate`` has been signed by ``signing_pub_key``
@ -1613,6 +1676,9 @@ def verify_signature(certificate, signing_pub_key=None):
The public key to verify, can be a string or path to a PEM formatted
certificate, csr, or private key.
signing_pub_key_passphrase:
Passphrase to the signing_pub_key if it is an encrypted private key.
CLI Example:
.. code-block:: bash
@ -1623,7 +1689,8 @@ def verify_signature(certificate, signing_pub_key=None):
cert = _get_certificate_obj(certificate)
if signing_pub_key:
signing_pub_key = get_public_key(signing_pub_key, asObj=True)
signing_pub_key = get_public_key(signing_pub_key,
passphrase=signing_pub_key_passphrase, asObj=True)
return bool(cert.verify(pkey=signing_pub_key) == 1)

View file

@ -98,18 +98,6 @@ class NetapiClient(object):
local = salt.client.get_local_client(mopts=self.opts)
return local.cmd(*args, **kwargs)
def local_batch(self, *args, **kwargs):
'''
Run :ref:`execution modules <all-salt.modules>` against batches of minions
Wraps :py:meth:`salt.client.LocalClient.cmd_batch`
:return: Returns the result from the execution module for each batch of
returns
'''
local = salt.client.get_local_client(mopts=self.opts)
return local.cmd_batch(*args, **kwargs)
def local_subset(self, *args, **kwargs):
'''
Run :ref:`execution modules <all-salt.modules>` against subsets of minions

View file

@ -191,7 +191,6 @@ a return like::
# Import Python libs
import time
import math
import fnmatch
import logging
from copy import copy
@ -230,7 +229,6 @@ logger = logging.getLogger()
# # all of these require coordinating minion stuff
# - "local" (done)
# - "local_async" (done)
# - "local_batch" (done)
# # master side
# - "runner" (done)
@ -252,7 +250,6 @@ class SaltClientsMixIn(object):
SaltClientsMixIn.__saltclients = {
'local': local_client.run_job_async,
# not the actual client we'll use, but it's what we'll use to get args
'local_batch': local_client.cmd_batch,
'local_async': local_client.run_job_async,
'runner': salt.runner.RunnerClient(opts=self.application.opts).cmd_async,
'runner_async': None, # empty, since we use the same client as `runner`
@ -390,30 +387,6 @@ class EventListener(object):
del self.timeout_map[future]
# TODO: move to a utils function within salt-- the batching stuff is a bit tied together
def get_batch_size(batch, num_minions):
'''
Return the batch size that you should have
batch: string
num_minions: int
'''
# figure out how many we can keep in flight
partition = lambda x: float(x) / 100.0 * num_minions
try:
if '%' in batch:
res = partition(float(batch.strip('%')))
if res < 1:
return int(math.ceil(res))
else:
return int(res)
else:
return int(batch)
except ValueError:
print(('Invalid batch data sent: {0}\nData must be in the form '
'of %10, 10% or 3').format(batch))
class BaseSaltAPIHandler(tornado.web.RequestHandler, SaltClientsMixIn): # pylint: disable=W0223
ct_out_map = (
('application/json', json.dumps),
@ -809,7 +782,7 @@ class SaltAPIHandler(BaseSaltAPIHandler, SaltClientsMixIn): # pylint: disable=W
Content-Type: application/json
Content-Length: 83
{"clients": ["local", "local_batch", "local_async", "runner", "runner_async"], "return": "Welcome"}
{"clients": ["local", "local_async", "runner", "runner_async"], "return": "Welcome"}
'''
ret = {"clients": list(self.saltclients.keys()),
"return": "Welcome"}
@ -927,57 +900,6 @@ class SaltAPIHandler(BaseSaltAPIHandler, SaltClientsMixIn): # pylint: disable=W
self.write(self.serialize({'return': ret}))
self.finish()
@tornado.gen.coroutine
def _disbatch_local_batch(self, chunk):
'''
Disbatch local client batched commands
'''
f_call = salt.utils.format_call(self.saltclients['local_batch'], chunk)
# ping all the minions (to see who we have to talk to)
# Don't catch any exception, since we won't know what to do, we'll
# let the upper level deal with this one
ping_ret = yield self._disbatch_local({'tgt': chunk['tgt'],
'fun': 'test.ping',
'tgt_type': f_call['kwargs']['tgt_type']})
chunk_ret = {}
if not isinstance(ping_ret, dict):
raise tornado.gen.Return(chunk_ret)
minions = list(ping_ret.keys())
maxflight = get_batch_size(f_call['kwargs']['batch'], len(minions))
inflight_futures = []
# override the tgt_type
f_call['kwargs']['tgt_type'] = 'list'
# do this batch
while len(minions) > 0 or len(inflight_futures) > 0:
# if you have more to go, lets disbatch jobs
while len(inflight_futures) < maxflight and len(minions) > 0:
minion_id = minions.pop(0)
batch_chunk = dict(chunk)
batch_chunk['tgt'] = [minion_id]
batch_chunk['tgt_type'] = 'list'
future = self._disbatch_local(batch_chunk)
inflight_futures.append(future)
# if we have nothing to wait for, don't wait
if len(inflight_futures) == 0:
continue
# wait until someone is done
finished_future = yield Any(inflight_futures)
try:
b_ret = finished_future.result()
except TimeoutException:
break
chunk_ret.update(b_ret)
inflight_futures.remove(finished_future)
raise tornado.gen.Return(chunk_ret)
@tornado.gen.coroutine
def _disbatch_local(self, chunk):
'''

View file

@ -222,8 +222,23 @@ per-remote parameter:
- production https://gitserver/git-pillar.git:
- env: prod
If ``__env__`` is specified as the branch name, then git_pillar will use the
branch specified by :conf_master:`git_pillar_base`:
If ``__env__`` is specified as the branch name, then git_pillar will decide
which branch to use based on the following criteria:
- If the minion has a :conf_minion:`pillarenv` configured, it will use that
pillar environment. (2016.11.2 and later)
- Otherwise, if the minion has an ``environment`` configured, it will use that
environment.
- Otherwise, the master's :conf_master:`git_pillar_base` will be used.
.. note::
The use of :conf_minion:`environment` to choose the pillar environment
dates from a time before the :conf_minion:`pillarenv` parameter was added.
In a future release, it will be ignored and either the minion's
:conf_minion:`pillarenv` or the master's :conf_master:`git_pillar_base`
will be used.
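The fallback order described above can be sketched as a simple chained lookup. The helper name below is hypothetical, shown only to illustrate the precedence; the real logic lives inside git_pillar itself:

```python
def resolve_pillar_env(opts):
    # Prefer the minion's pillarenv, then its environment,
    # then fall back to the master's git_pillar_base.
    return (opts.get('pillarenv')
            or opts.get('environment')
            or opts.get('git_pillar_base'))
```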
Here's an example of using ``__env__`` as the git_pillar environment:
.. code-block:: yaml
@ -388,6 +403,10 @@ def ext_pillar(minion_id, repo, pillar_dirs):
# the pillar top.sls is sourced from the correct location.
pillar_roots = [pillar_dir]
pillar_roots.extend([x for x in all_dirs if x != pillar_dir])
if env == '__env__':
env = opts.get('pillarenv') \
or opts.get('environment') \
or opts.get('git_pillar_base')
opts['pillar_roots'] = {env: pillar_roots}
local_pillar = Pillar(opts, __grains__, minion_id, env)

View file

@ -297,14 +297,20 @@ def call(method, **params):
# either not connected
# either unable to execute the command
err_tb = traceback.format_exc()  # capture the full traceback so it can be logged for debugging
comment = 'Cannot execute "{method}" on {device}{port} as {user}. Reason: {error}!'.format(
device=NETWORK_DEVICE.get('HOSTNAME', '[unspecified hostname]'),
port=(':{port}'.format(port=NETWORK_DEVICE.get('OPTIONAL_ARGS', {}).get('port'))
if NETWORK_DEVICE.get('OPTIONAL_ARGS', {}).get('port') else ''),
user=NETWORK_DEVICE.get('USERNAME', ''),
method=method,
error=error
)
if isinstance(error, NotImplementedError):
comment = '{method} is not implemented for the NAPALM {driver} driver!'.format(
method=method,
driver=NETWORK_DEVICE.get('DRIVER_NAME')
)
else:
comment = 'Cannot execute "{method}" on {device}{port} as {user}. Reason: {error}!'.format(
device=NETWORK_DEVICE.get('HOSTNAME', '[unspecified hostname]'),
port=(':{port}'.format(port=NETWORK_DEVICE.get('OPTIONAL_ARGS', {}).get('port'))
if NETWORK_DEVICE.get('OPTIONAL_ARGS', {}).get('port') else ''),
user=NETWORK_DEVICE.get('USERNAME', ''),
method=method,
error=error
)
log.error(comment)
log.error(err_tb)
return {

View file

@ -113,12 +113,12 @@ There is more documentation about this feature in the
Special files can be managed via the ``mknod`` function. This function will
create and enforce the permissions on a special file. The function supports the
creation of character devices, block devices, and fifo pipes. The function will
creation of character devices, block devices, and FIFO pipes. The function will
create the directory structure up to the special file if it is needed on the
minion. The function will not overwrite or operate on (change major/minor
numbers) existing special files with the exception of user, group, and
permissions. In most cases the creation of some special files requires root
permisisons on the minion. This would require that the minion to be run as the
permissions on the minion. This would require that the minion be run as the
root user. Here is an example of a character device:
.. code-block:: yaml
@ -1489,7 +1489,8 @@ def managed(name,
Default context passed to the template.
backup
Overrides the default backup mode for this specific file.
Overrides the default backup mode for this specific file. See
:ref:`backup_mode documentation <file-state-backups>` for more details.
show_changes
Output a unified diff of the old file and the new file. If ``False``
@ -2782,6 +2783,10 @@ def recurse(name,
Set this to True if empty directories should also be created
(default is False)
backup
Overrides the default backup mode for all replaced files. See
:ref:`backup_mode documentation <file-state-backups>` for more details.
include_pat
When copying, include only this pattern from the source. Default
is glob match; if prefixed with 'E@', then regexp match.

View file

@ -111,8 +111,8 @@ def _compute_diff(configured, expected):
remove_usernames = configured_users - expected_users
common_usernames = expected_users & configured_users
add = dict((username, expected.get(username)) for (username, _) in add_usernames)
remove = dict((username, configured.get(username)) for (username, _) in remove_usernames)
add = dict((username, expected.get(username)) for username in add_usernames)
remove = dict((username, configured.get(username)) for username in remove_usernames)
update = {}
for username in common_usernames:
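The fix above matters because set differences yield bare username strings, so the old ``for (username, _) in ...`` unpacking would try to unpack each string into two characters and fail (or silently misbehave) for most usernames. A self-contained sketch of the corrected diff logic, with hypothetical sample data:

```python
def compute_user_diff(configured, expected):
    # Both arguments map username -> user properties.
    configured_users = set(configured)
    expected_users = set(expected)
    add_usernames = expected_users - configured_users
    remove_usernames = configured_users - expected_users
    # Iterate the usernames directly; they are plain strings, not pairs.
    add = dict((username, expected.get(username)) for username in add_usernames)
    remove = dict((username, configured.get(username)) for username in remove_usernames)
    return add, remove
```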

View file

@ -63,13 +63,6 @@ the mine where it can be easily retrieved by other minions.
/etc/pki/issued_certs:
file.directory: []
/etc/pki/ca.key:
x509.private_key_managed:
- bits: 4096
- backup: True
- require:
- file: /etc/pki
/etc/pki/ca.crt:
x509.certificate_managed:
- signing_private_key: /etc/pki/ca.key
@ -84,8 +77,12 @@ the mine where it can be easily retrieved by other minions.
- days_valid: 3650
- days_remaining: 0
- backup: True
- managed_private_key:
name: /etc/pki/ca.key
bits: 4096
backup: True
- require:
- x509: /etc/pki/ca.key
- file: /etc/pki
mine.send:
module.run:
@ -142,10 +139,6 @@ This state creates a private key then requests a certificate signed by ca accord
.. code-block:: yaml
/etc/pki/www.key:
x509.private_key_managed:
- bits: 4096
/etc/pki/www.crt:
x509.certificate_managed:
- ca_server: ca
@ -154,6 +147,10 @@ This state creates a private key then requests a certificate signed by ca accord
- CN: www.example.com
- days_remaining: 30
- backup: True
- managed_private_key:
name: /etc/pki/www.key
bits: 4096
backup: True
'''
@ -190,7 +187,8 @@ def _revoked_to_list(revs):
list_ = []
for rev in revs:
for rev_name, props in six.iteritems(rev): # pylint: disable=unused-variable
for rev_name, props in six.iteritems(
rev): # pylint: disable=unused-variable
dict_ = {}
for prop in props:
for propname, val in six.iteritems(prop):
@ -202,11 +200,46 @@ def _revoked_to_list(revs):
return list_
def _get_file_args(name, **kwargs):
valid_file_args = ['user',
'group',
'mode',
'makedirs',
'dir_mode',
'backup',
'create',
'follow_symlinks',
'check_cmd']
file_args = {}
extra_args = {}
for k, v in kwargs.items():
if k in valid_file_args:
file_args[k] = v
else:
extra_args[k] = v
file_args['name'] = name
return file_args, extra_args
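The helper above partitions keyword arguments into those understood by ``file.managed`` and everything else. A standalone copy for illustration (the argument values below are made up):

```python
# Keyword arguments that file.managed accepts, per the helper above.
VALID_FILE_ARGS = ['user', 'group', 'mode', 'makedirs', 'dir_mode',
                   'backup', 'create', 'follow_symlinks', 'check_cmd']

def get_file_args(name, **kwargs):
    # Split kwargs: file.managed-compatible args vs. everything else.
    file_args = {}
    extra_args = {}
    for k, v in kwargs.items():
        if k in VALID_FILE_ARGS:
            file_args[k] = v
        else:
            extra_args[k] = v
    file_args['name'] = name
    return file_args, extra_args
```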
def _check_private_key(name, bits=2048, passphrase=None, new=False):
current_bits = 0
if os.path.isfile(name):
try:
current_bits = __salt__['x509.get_private_key_size'](
private_key=name, passphrase=passphrase)
except salt.exceptions.SaltInvocationError:
pass
return current_bits == bits and not new
def private_key_managed(name,
bits=2048,
passphrase=None,
cipher='aes_128_cbc',
new=False,
backup=False,
verbose=True,):
verbose=True,
**kwargs):
'''
Manage a private key's existence.
@ -216,14 +249,15 @@ def private_key_managed(name,
bits:
Key length in bits. Default 2048.
passphrase:
Passphrase for encrypting the private key.
cipher:
Cipher for encrypting the private key.
new:
Always create a new key. Defaults to False.
Combining new with :mod:`prereq <salt.states.requsities.preqreq>` can allow key rotation
whenever a new certificiate is generated.
backup:
When replacing an existing file, backup the old file on the minion.
Default is False.
Combining new with :mod:`prereq <salt.states.requisites.prereq>`, or using it as part of a ``managed_private_key``, can allow key rotation whenever a new certificate is generated.
verbose:
Provide visual feedback on stdout, dots while key is generated.
@ -231,6 +265,9 @@ def private_key_managed(name,
.. versionadded:: 2016.11.0
kwargs:
Any kwargs supported by file.managed are supported.
Example:
The Jinja templating in this example ensures a private key is generated if the file doesn't exist.
@ -247,45 +284,24 @@ def private_key_managed(name,
- x509: /etc/pki/www.crt
{%- endif %}
'''
ret = {'name': name, 'changes': {}, 'result': False, 'comment': ''}
current_bits = 0
if os.path.isfile(name):
try:
current_bits = __salt__['x509.get_private_key_size'](private_key=name)
current = "{0} bit private key".format(current_bits)
except salt.exceptions.SaltInvocationError:
current = '{0} is not a valid Private Key.'.format(name)
file_args, kwargs = _get_file_args(name, **kwargs)
new_key = False
if _check_private_key(name, bits, passphrase, new):
file_args['contents'] = __salt__['x509.get_pem_entry'](
name, pem_type='RSA PRIVATE KEY')
else:
current = '{0} does not exist.'.format(name)
new_key = True
file_args['contents'] = __salt__['x509.create_private_key'](
text=True, bits=bits, passphrase=passphrase, cipher=cipher, verbose=verbose)
if current_bits == bits and not new:
ret['result'] = True
ret['comment'] = 'The Private key is already in the correct state'
return ret
ret['changes'] = {
'old': current,
'new': "{0} bit private key".format(bits)}
if __opts__['test'] is True:
ret['result'] = None
ret['comment'] = 'The Private Key "{0}" will be updated.'.format(name)
return ret
if os.path.isfile(name) and backup:
bkroot = os.path.join(__opts__['cachedir'], 'file_backup')
salt.utils.backup_minion(name, bkroot)
ret['comment'] = __salt__['x509.create_private_key'](
path=name, bits=bits, verbose=verbose)
ret['result'] = True
ret = __states__['file.managed'](**file_args)
if ret['changes'] and new_key:
ret['changes'] = 'New private key generated'
return ret
def csr_managed(name,
backup=False,
**kwargs):
'''
Manage a Certificate Signing Request
@ -297,6 +313,9 @@ def csr_managed(name,
The properties to be added to the certificate request, including items like subject, extensions
and public key. See above for valid properties.
kwargs:
Any arguments supported by :state:`file.managed <salt.states.file.managed>` are supported.
Example:
.. code-block:: yaml
@ -310,45 +329,23 @@ def csr_managed(name,
- L: Salt Lake City
- keyUsage: 'critical dataEncipherment'
'''
ret = {'name': name, 'changes': {}, 'result': False, 'comment': ''}
old = __salt__['x509.read_csr'](name)
file_args, kwargs = _get_file_args(name, **kwargs)
file_args['contents'] = __salt__['x509.create_csr'](text=True, **kwargs)
if os.path.isfile(name):
try:
current = __salt__['x509.read_csr'](csr=name)
except salt.exceptions.SaltInvocationError:
current = '{0} is not a valid CSR.'.format(name)
else:
current = '{0} does not exist.'.format(name)
new_csr = __salt__['x509.create_csr'](text=True, **kwargs)
new = __salt__['x509.read_csr'](csr=new_csr)
if current == new:
ret['result'] = True
ret['comment'] = 'The CSR is already in the correct state'
return ret
ret['changes'] = {
'old': current,
'new': new, }
if __opts__['test'] is True:
ret['result'] = None
ret['comment'] = 'The CSR {0} will be updated.'.format(name)
if os.path.isfile(name) and backup:
bkroot = os.path.join(__opts__['cachedir'], 'file_backup')
salt.utils.backup_minion(name, bkroot)
ret['comment'] = __salt__['x509.write_pem'](text=new_csr, path=name, pem_type="CERTIFICATE REQUEST")
ret['result'] = True
ret = __states__['file.managed'](**file_args)
if ret['changes']:
new = __salt__['x509.read_csr'](file_args['contents'])
if old != new:
ret['changes'] = {"Old": old, "New": new}
return ret
def certificate_managed(name,
days_remaining=90,
backup=False,
managed_private_key=None,
append_certs=None,
**kwargs):
'''
Manage a Certificate
@ -360,12 +357,14 @@ def certificate_managed(name,
The minimum number of days remaining when the certificate should be recreated. Default is 90. A
value of 0 disables automatic renewal.
backup:
When replacing an existing file, backup the old file on the minion. Default is False.
managed_private_key:
Manages the private key corresponding to the certificate. All of the arguments supported by :state:`x509.private_key_managed <salt.states.x509.private_key_managed>` are supported. If ``name`` is not specified or is the same as the name of the certificate, the private key and certificate will be written together in the same file.
append_certs:
A list of certificates to be appended to the managed file.
kwargs:
Any arguments supported by :mod:`x509.create_certificate <salt.modules.x509.create_certificate>`
are supported.
Any arguments supported by :mod:`x509.create_certificate <salt.modules.x509.create_certificate>` or :state:`file.managed <salt.states.file.managed>` are supported.
Examples:
@ -400,11 +399,42 @@ def certificate_managed(name,
- backup: True
'''
ret = {'name': name, 'changes': {}, 'result': False, 'comment': ''}
if 'path' in kwargs:
name = kwargs.pop('path')
file_args, kwargs = _get_file_args(name, **kwargs)
rotate_private_key = False
new_private_key = False
if managed_private_key:
private_key_args = {
'name': name,
'new': False,
'bits': 2048,
'passphrase': None,
'cipher': 'aes_128_cbc',
'verbose': True
}
private_key_args.update(managed_private_key)
kwargs['public_key_passphrase'] = private_key_args['passphrase']
if private_key_args['new']:
rotate_private_key = True
private_key_args['new'] = False
if _check_private_key(private_key_args['name'],
private_key_args['bits'],
private_key_args['passphrase'],
private_key_args['new']):
private_key = __salt__['x509.get_pem_entry'](
private_key_args['name'], pem_type='RSA PRIVATE KEY')
else:
new_private_key = True
private_key = __salt__['x509.create_private_key'](text=True, bits=private_key_args['bits'], passphrase=private_key_args[
'passphrase'], cipher=private_key_args['cipher'], verbose=private_key_args['verbose'])
kwargs['public_key'] = private_key
current_days_remaining = 0
current_comp = {}
@ -418,7 +448,7 @@ def certificate_managed(name,
try:
current_comp['X509v3 Extensions']['authorityKeyIdentifier'] = (
re.sub(r'serial:([0-9A-F]{2}:)*[0-9A-F]{2}', 'serial:--',
current_comp['X509v3 Extensions']['authorityKeyIdentifier']))
current_comp['X509v3 Extensions']['authorityKeyIdentifier']))
except KeyError:
pass
current_comp.pop('Not Before')
@ -427,8 +457,8 @@ def certificate_managed(name,
current_comp.pop('SHA-256 Finger Print')
current_notafter = current_comp.pop('Not After')
current_days_remaining = (
datetime.datetime.strptime(current_notafter, '%Y-%m-%d %H:%M:%S') -
datetime.datetime.now()).days
datetime.datetime.strptime(current_notafter, '%Y-%m-%d %H:%M:%S') -
datetime.datetime.now()).days
if days_remaining == 0:
days_remaining = current_days_remaining - 1
except salt.exceptions.SaltInvocationError:
@ -437,7 +467,8 @@ def certificate_managed(name,
current = '{0} does not exist.'.format(name)
if 'ca_server' in kwargs and 'signing_policy' not in kwargs:
raise salt.exceptions.SaltInvocationError('signing_policy must be specified if ca_server is.')
raise salt.exceptions.SaltInvocationError(
'signing_policy must be specified if ca_server is.')
new = __salt__['x509.create_certificate'](testrun=True, **kwargs)
@ -450,7 +481,7 @@ def certificate_managed(name,
try:
new_comp['X509v3 Extensions']['authorityKeyIdentifier'] = (
re.sub(r'serial:([0-9A-F]{2}:)*[0-9A-F]{2}', 'serial:--',
new_comp['X509v3 Extensions']['authorityKeyIdentifier']))
new_comp['X509v3 Extensions']['authorityKeyIdentifier']))
except KeyError:
pass
new_comp.pop('Not Before')
@ -462,28 +493,58 @@ def certificate_managed(name,
else:
new_comp = new
new_certificate = False
if (current_comp == new_comp and
current_days_remaining > days_remaining and
__salt__['x509.verify_signature'](name, new_issuer_public_key)):
ret['result'] = True
ret['comment'] = 'The certificate is already in the correct state'
return ret
certificate = __salt__['x509.get_pem_entry'](
name, pem_type='CERTIFICATE')
else:
if rotate_private_key and not new_private_key:
new_private_key = True
private_key = __salt__['x509.create_private_key'](
text=True, bits=private_key_args['bits'], verbose=private_key_args['verbose'])
kwargs['public_key'] = private_key
new_certificate = True
certificate = __salt__['x509.create_certificate'](text=True, **kwargs)
ret['changes'] = {
'old': current,
'new': new, }
file_args['contents'] = ''
private_ret = {}
if managed_private_key:
if private_key_args['name'] == name:
file_args['contents'] = private_key
else:
private_file_args = copy.deepcopy(file_args)
unique_private_file_args, _ = _get_file_args(**private_key_args)
private_file_args.update(unique_private_file_args)
private_file_args['contents'] = private_key
private_ret = __states__['file.managed'](**private_file_args)
if not private_ret['result']:
return private_ret
if __opts__['test'] is True:
ret['result'] = None
ret['comment'] = 'The certificate {0} will be updated.'.format(name)
return ret
file_args['contents'] += certificate
if os.path.isfile(name) and backup:
bkroot = os.path.join(__opts__['cachedir'], 'file_backup')
salt.utils.backup_minion(name, bkroot)
if not append_certs:
append_certs = []
for append_cert in append_certs:
file_args[
'contents'] += __salt__['x509.get_pem_entry'](append_cert, pem_type='CERTIFICATE')
ret['comment'] = __salt__['x509.create_certificate'](path=name, **kwargs)
ret['result'] = True
file_args['show_changes'] = False
ret = __states__['file.managed'](**file_args)
if ret['changes']:
ret['changes'] = {'Certificate': ret['changes']}
else:
ret['changes'] = {}
if private_ret and private_ret['changes']:
ret['changes']['Private Key'] = private_ret['changes']
if new_private_key:
ret['changes']['Private Key'] = 'New private key generated'
if new_certificate:
ret['changes']['Certificate'] = {
'Old': current,
'New': __salt__['x509.read_certificate'](certificate=certificate)}
return ret
@ -496,7 +557,7 @@ def crl_managed(name,
digest="",
days_remaining=30,
include_expired=False,
backup=False,):
**kwargs):
'''
Manage a Certificate Revocation List
@ -530,8 +591,8 @@ def crl_managed(name,
include_expired:
Include expired certificates in the CRL. Default is ``False``.
backup:
When replacing an existing file, backup the old file on the minion. Default is False.
kwargs:
Any arguments supported by :state:`file.managed <salt.states.file.managed>` are supported.
Example:
@ -552,8 +613,6 @@ def crl_managed(name,
- revocation_date: 2015-02-25 00:00:00
- reason: cessationOfOperation
'''
ret = {'name': name, 'changes': {}, 'result': False, 'comment': ''}
if revoked is None:
revoked = []
@ -569,8 +628,8 @@ def crl_managed(name,
current_comp.pop('Last Update')
current_notafter = current_comp.pop('Next Update')
current_days_remaining = (
datetime.datetime.strptime(current_notafter, '%Y-%m-%d %H:%M:%S') -
datetime.datetime.now()).days
datetime.datetime.strptime(current_notafter, '%Y-%m-%d %H:%M:%S') -
datetime.datetime.now()).days
if days_remaining == 0:
days_remaining = current_days_remaining - 1
except salt.exceptions.SaltInvocationError:
@ -579,43 +638,35 @@ def crl_managed(name,
current = '{0} does not exist.'.format(name)
new_crl = __salt__['x509.create_crl'](text=True, signing_private_key=signing_private_key,
signing_cert=signing_cert, revoked=revoked, days_valid=days_valid, digest=digest, include_expired=include_expired)
signing_cert=signing_cert, revoked=revoked, days_valid=days_valid, digest=digest, include_expired=include_expired)
new = __salt__['x509.read_crl'](crl=new_crl)
new_comp = new.copy()
new_comp.pop('Last Update')
new_comp.pop('Next Update')
file_args, kwargs = _get_file_args(name, **kwargs)
    crl_changed = False
    if (current_comp == new_comp and
            current_days_remaining > days_remaining and
            __salt__['x509.verify_crl'](name, signing_cert)):
        file_args['contents'] = __salt__[
            'x509.get_pem_entry'](name, pem_type='X509 CRL')
    else:
        # Track the change with a separate flag so the PEM text held in
        # new_crl is not clobbered before it is written out below.
        crl_changed = True
        file_args['contents'] = new_crl
        ret['result'] = True
        ret['comment'] = 'The crl is already in the correct state'
        return ret
    ret['changes'] = {
        'old': current,
        'new': new, }
    if __opts__['test'] is True:
        ret['result'] = None
        ret['comment'] = 'The crl {0} will be updated.'.format(name)
        return ret
    if os.path.isfile(name) and backup:
        bkroot = os.path.join(__opts__['cachedir'], 'file_backup')
        salt.utils.backup_minion(name, bkroot)
    ret['comment'] = __salt__['x509.write_pem'](text=new_crl, path=name, pem_type='X509 CRL')
    ret['result'] = True
    ret = __states__['file.managed'](**file_args)
    if crl_changed:
        ret['changes'] = {'Old': current, 'New': __salt__[
            'x509.read_crl'](crl=new_crl)}
return ret
def pem_managed(name,
text,
backup=False):
backup=False,
**kwargs):
'''
Manage the contents of a PEM file directly with the content in text, ensuring correct formatting.
@ -625,37 +676,10 @@ def pem_managed(name,
text:
The PEM formatted text to write.
backup:
When replacing an existing file, backup the old file on the minion. Default is False.
kwargs:
Any arguments supported by :state:`file.managed <salt.states.file.managed>` are supported.
'''
ret = {'name': name, 'changes': {}, 'result': False, 'comment': ''}
file_args, kwargs = _get_file_args(name, **kwargs)
file_args['contents'] = __salt__['x509.get_pem_entry'](text=text)
new = __salt__['x509.get_pem_entry'](text=text)
try:
with salt.utils.fopen(name) as fp_:
current = fp_.read()
except (OSError, IOError):
current = '{0} does not exist or is unreadable'.format(name)
if new == current:
ret['result'] = True
ret['comment'] = 'The file is already in the correct state'
return ret
ret['changes']['new'] = new
ret['changes']['old'] = current
if __opts__['test'] is True:
ret['result'] = None
ret['comment'] = 'The file {0} will be updated.'.format(name)
return ret
if os.path.isfile(name) and backup:
bkroot = os.path.join(__opts__['cachedir'], 'file_backup')
salt.utils.backup_minion(name, bkroot)
ret['comment'] = __salt__['x509.write_pem'](text=text, path=name)
ret['result'] = True
return ret
return __states__['file.managed'](**file_args)

View file

@ -680,7 +680,9 @@ class GitProvider(object):
Resolve dynamically-set branch
'''
if self.branch == '__env__':
target = self.opts.get('environment') or 'base'
target = self.opts.get('pillarenv') \
or self.opts.get('environment') \
or 'base'
return self.opts['{0}_base'.format(self.role)] \
if target == 'base' \
else target
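The branch-resolution change above can be read as: prefer ``pillarenv``, then ``environment``, then the per-role base branch. A minimal standalone sketch (the function name is hypothetical; in the source this is a method on ``GitProvider``):

```python
def resolve_env_branch(opts, role):
    # When the configured branch is __env__, pick the target from the
    # minion's pillarenv, then environment, then fall back to 'base',
    # which maps to the per-role base branch (e.g. git_pillar_base).
    target = opts.get('pillarenv') or opts.get('environment') or 'base'
    return opts['{0}_base'.format(role)] if target == 'base' else target
```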

View file

@ -634,6 +634,8 @@ class CkMinions(object):
make sure everyone has checked back in.
'''
try:
if expr is None:
expr = ''
check_func = getattr(self, '_check_{0}_minions'.format(tgt_type), None)
if tgt_type in ('grain',
'grain_pcre',
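The added guard normalizes a ``None`` target expression to an empty string before the per-type check functions (which expect a string) are dispatched via ``getattr``. A stripped-down, hypothetical dispatcher illustrating the same pattern:

```python
class MinionChecker(object):
    # Hypothetical mini-version of the dispatch pattern shown above.
    def _check_glob_minions(self, expr):
        # Stand-in matcher: just tag the expression.
        return 'glob:' + expr

    def check_minions(self, expr, tgt_type='glob'):
        if expr is None:
            # Downstream matchers assume a string expression.
            expr = ''
        check_func = getattr(self, '_check_{0}_minions'.format(tgt_type), None)
        if check_func is None:
            return []
        return check_func(expr)
```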

View file

@ -6,9 +6,9 @@ Nova class
# Import Python libs
from __future__ import absolute_import, with_statement
from distutils.version import LooseVersion # pylint: disable=no-name-in-module,import-error
import time
import inspect
import logging
import time
# Import third party libs
import salt.ext.six as six
@ -26,6 +26,14 @@ try:
HAS_NOVA = True
except ImportError:
pass
HAS_KEYSTONEAUTH = False
try:
import keystoneauth1.loading
import keystoneauth1.session
HAS_KEYSTONEAUTH = True
except ImportError:
pass
# pylint: enable=import-error
# Import salt libs
@ -169,6 +177,15 @@ def get_entry(dict_, key, value, raise_error=True):
return {}
def get_entry_multi(dict_, pairs, raise_error=True):
for entry in dict_:
if all([entry[key] == value for key, value in pairs]):
return entry
if raise_error is True:
raise SaltCloudSystemExit('Unable to find {0} in {1}.'.format(pairs, dict_))
return {}
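The new ``get_entry_multi`` helper generalizes ``get_entry`` to match on several key/value pairs at once, which the keystone v3 setup uses to pick an endpoint by both region and interface. A standalone copy for illustration, with ``LookupError`` substituted for the Salt-specific ``SaltCloudSystemExit`` and made-up catalog data:

```python
def get_entry_multi(dict_, pairs, raise_error=True):
    # Return the first entry whose values match every (key, value) pair.
    for entry in dict_:
        if all([entry[key] == value for key, value in pairs]):
            return entry
    if raise_error is True:
        # The real helper raises SaltCloudSystemExit here.
        raise LookupError('Unable to find {0} in {1}.'.format(pairs, dict_))
    return {}
```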
def sanatize_novaclient(kwargs):
variables = (
'username', 'api_key', 'project_id', 'auth_url', 'insecure',
@ -201,11 +218,79 @@ class SaltNova(object):
region_name=None,
password=None,
os_auth_plugin=None,
use_keystoneauth=False,
**kwargs
):
'''
Set up nova credentials
'''
if all([use_keystoneauth, HAS_KEYSTONEAUTH]):
self._new_init(username=username,
project_id=project_id,
auth_url=auth_url,
region_name=region_name,
password=password,
os_auth_plugin=os_auth_plugin,
**kwargs)
else:
self._old_init(username=username,
project_id=project_id,
auth_url=auth_url,
region_name=region_name,
password=password,
os_auth_plugin=os_auth_plugin,
**kwargs)
def _new_init(self, username, project_id, auth_url, region_name, password, os_auth_plugin, auth=None, **kwargs):
if auth is None:
auth = {}
loader = keystoneauth1.loading.get_plugin_loader(os_auth_plugin or 'password')
self.client_kwargs = kwargs.copy()
self.kwargs = auth.copy()
if not self.extensions:
if hasattr(OpenStackComputeShell, '_discover_extensions'):
self.extensions = OpenStackComputeShell()._discover_extensions('2.0')
else:
self.extensions = client.discover_extensions('2.0')
for extension in self.extensions:
extension.run_hooks('__pre_parse_args__')
self.client_kwargs['extensions'] = self.extensions
self.kwargs['username'] = username
self.kwargs['project_name'] = project_id
self.kwargs['auth_url'] = auth_url
self.kwargs['password'] = password
if auth_url.endswith('3'):
self.kwargs['user_domain_name'] = kwargs.get('user_domain_name', 'default')
self.kwargs['project_domain_name'] = kwargs.get('project_domain_name', 'default')
self.client_kwargs['region_name'] = region_name
self.client_kwargs['service_type'] = 'compute'
if hasattr(self, 'extensions'):
# needs an object, not a dictionary
self.kwargstruct = KwargsStruct(**self.client_kwargs)
for extension in self.extensions:
extension.run_hooks('__post_parse_args__', self.kwargstruct)
self.client_kwargs = self.kwargstruct.__dict__
# Requires novaclient version >= 2.6.1
self.version = str(kwargs.get('version', 2))
self.client_kwargs = sanatize_novaclient(self.client_kwargs)
options = loader.load_from_options(**self.kwargs)
self.session = keystoneauth1.session.Session(auth=options)
conn = client.Client(version=self.version, session=self.session, **self.client_kwargs)
self.kwargs['auth_token'] = conn.client.session.get_token()
self.catalog = conn.client.session.get('/auth/catalog', endpoint_filter={'service_type': 'identity'}).json().get('catalog', [])
if conn.client.get_endpoint(service_type='identity').endswith('v3'):
self._v3_setup(region_name)
else:
self._v2_setup(region_name)
def _old_init(self, username, project_id, auth_url, region_name, password, os_auth_plugin, **kwargs):
self.kwargs = kwargs.copy()
if not self.extensions:
if hasattr(OpenStackComputeShell, '_discover_extensions'):
@ -259,6 +344,33 @@ class SaltNova(object):
self.kwargs['auth_token'] = conn.client.auth_token
self.catalog = conn.client.service_catalog.catalog['access']['serviceCatalog']
self._v2_setup(region_name)
def _v3_setup(self, region_name):
if region_name is not None:
servers_endpoints = get_entry(self.catalog, 'type', 'compute')['endpoints']
self.kwargs['bypass_url'] = get_entry_multi(
servers_endpoints,
[('region', region_name), ('interface', 'public')]
)['url']
self.compute_conn = client.Client(version=self.version, session=self.session, **self.client_kwargs)
volume_endpoints = get_entry(self.catalog, 'type', 'volume', raise_error=False).get('endpoints', {})
if volume_endpoints:
if region_name is not None:
self.kwargs['bypass_url'] = get_entry_multi(
volume_endpoints,
[('region', region_name), ('interface', 'public')]
)['url']
self.volume_conn = client.Client(version=self.version, session=self.session, **self.client_kwargs)
if hasattr(self, 'extensions'):
self.expand_extensions()
else:
self.volume_conn = None
def _v2_setup(self, region_name):
if region_name is not None:
servers_endpoints = get_entry(self.catalog, 'type', 'compute')['endpoints']
self.kwargs['bypass_url'] = get_entry(

View file

@ -48,13 +48,12 @@ class TestSaltAPIHandler(SaltnadoTestCase):
)
self.assertEqual(response.code, 200)
response_obj = json.loads(response.body)
self.assertEqual(response_obj['clients'],
['runner',
'runner_async',
'local_async',
'local',
'local_batch']
)
self.assertItemsEqual(response_obj['clients'],
['runner',
'runner_async',
'local_async',
'local']
)
self.assertEqual(response_obj['return'], 'Welcome')
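Switching from ``assertEqual`` to ``assertItemsEqual`` makes the client-list check order-insensitive: the two sequences only need the same elements with the same multiplicities. That comparison is equivalent to comparing multisets, sketched here with ``collections.Counter``:

```python
from collections import Counter

def items_equal(actual, expected):
    # Order-insensitive comparison with multiplicity, matching the
    # semantics of unittest's assertItemsEqual/assertCountEqual.
    return Counter(actual) == Counter(expected)
```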
def test_post_no_auth(self):
@ -152,68 +151,6 @@ class TestSaltAPIHandler(SaltnadoTestCase):
)
self.assertEqual(response.code, 400)
# local_batch tests
@skipIf(True, 'to be reenabled when #23623 is merged')
def test_simple_local_batch_post(self):
'''
Basic post against local_batch
'''
low = [{'client': 'local_batch',
'tgt': '*',
'fun': 'test.ping',
}]
response = self.fetch('/',
method='POST',
body=json.dumps(low),
headers={'Content-Type': self.content_type_map['json'],
saltnado.AUTH_TOKEN_HEADER: self.token['token']},
connect_timeout=30,
request_timeout=30,
)
response_obj = json.loads(response.body)
self.assertEqual(response_obj['return'], [{'minion': True, 'sub_minion': True}])
# local_batch tests
@skipIf(True, 'to be reenabled when #23623 is merged')
def test_full_local_batch_post(self):
'''
Test full parallelism of local_batch
'''
low = [{'client': 'local_batch',
'tgt': '*',
'fun': 'test.ping',
'batch': '100%',
}]
response = self.fetch('/',
method='POST',
body=json.dumps(low),
headers={'Content-Type': self.content_type_map['json'],
saltnado.AUTH_TOKEN_HEADER: self.token['token']},
connect_timeout=30,
request_timeout=30,
)
response_obj = json.loads(response.body)
self.assertEqual(response_obj['return'], [{'minion': True, 'sub_minion': True}])
def test_simple_local_batch_post_no_tgt(self):
'''
Local_batch testing with no tgt
'''
low = [{'client': 'local_batch',
'tgt': 'minion_we_dont_have',
'fun': 'test.ping',
}]
response = self.fetch('/',
method='POST',
body=json.dumps(low),
headers={'Content-Type': self.content_type_map['json'],
saltnado.AUTH_TOKEN_HEADER: self.token['token']},
connect_timeout=30,
request_timeout=30,
)
response_obj = json.loads(response.body)
self.assertEqual(response_obj['return'], [{}])
# local_async tests
def test_simple_local_async_post(self):
low = [{'client': 'local_async',
@ -435,7 +372,7 @@ class TestMinionSaltAPIHandler(SaltnadoTestCase):
make sure you get an error
'''
# get a token for this test
low = [{'client': 'local_batch',
low = [{'client': 'local',
'tgt': '*',
'fun': 'test.ping',
}]

View file

@ -30,6 +30,7 @@ filemod.__opts__ = {
'file_roots': {'base': 'tmp'},
'pillar_roots': {'base': 'tmp'},
'cachedir': 'tmp',
'grains': {},
}
filemod.__grains__ = {'kernel': 'Linux'}