fish shell completion: merge

This commit is contained in:
Roman Inflianskas 2014-09-13 14:45:20 +04:00
commit ef2b3d81b9
56 changed files with 5919 additions and 367 deletions

View file

@ -218,6 +218,7 @@ Full list of builtin execution modules
svn
swift
sysbench
syslog_ng
sysmod
system
systemd

View file

@ -0,0 +1,6 @@
======================
salt.modules.syslog_ng
======================
.. automodule:: salt.modules.syslog_ng
:members:

View file

@ -159,7 +159,7 @@ the module is loaded with the name of the string.
This means that the package manager modules can be presented as the ``pkg`` module
regardless of what the actual module is named.
Since ``__virtual__`` is called before the module is loaded, ``__salt__ `` will be
Since ``__virtual__`` is called before the module is loaded, ``__salt__`` will be
unavailable as it will not have been packed into the module at this point in time.
The package manager modules are among the best example of using the ``__virtual__``
@ -168,7 +168,7 @@ function. Some examples:
- :blob:`pacman.py <salt/modules/pacman.py>`
- :blob:`yumpkg.py <salt/modules/yumpkg.py>`
- :blob:`aptpkg.py <salt/modules/aptpkg.py>`
- :blob:`at.py` <salt/modules/at.py>`
- :blob:`at.py <salt/modules/at.py>`
.. note::
Modules which return a string from ``__virtual__`` that is already used by a module that

View file

@ -0,0 +1,6 @@
=================
salt.pillar.pepa
=================
.. automodule:: salt.pillar.pepa
:members:

View file

@ -123,6 +123,7 @@ Full list of builtin state modules
supervisord
svn
sysctl
syslog_ng
test
timezone
tomcat

View file

@ -0,0 +1,6 @@
=====================
salt.states.syslog_ng
=====================
.. automodule:: salt.states.syslog_ng
:members:

View file

@ -100,7 +100,7 @@ the discrete states are split or groups into separate sls files:
- sls: network
In this example, the httpd service running state will not be applied
(i.e., the httpd service will not be started) unless both the https package is
(i.e., the httpd service will not be started) unless both the httpd package is
installed AND the network state is satisfied.
.. note:: Requisite matching

View file

@ -45,7 +45,7 @@ returner system. To configure the master job cache, set up an external returner
database based on the instructions included with each returner and then simply
add the following configuration to the master configuration file:
.. code_block:: yaml
.. code-block:: yaml
master_job_cache: mysql
@ -63,6 +63,6 @@ described in the specific returner documentation. Ensure that the returner
database is accessible from the minions, and set the `ext_job_cache` setting
in the master configuration file:
.. code_block:: yaml
.. code-block:: yaml
ext_job_cache: redis

View file

@ -219,7 +219,7 @@ New Salt-Cloud Providers
- :mod:`Aliyun ECS Cloud <salt.cloud.clouds.aliyun>`
- :mod:`LXC Containers <salt.cloud.clouds.lxc>`
- :mod:`Proxmox KVM Containers <salt.cloud.clouds.proxmox>`
- :mod:`Proxmox (OpenVZ containers & KVM) <salt.cloud.clouds.proxmox>`
Deprecations

View file

@ -3,7 +3,10 @@
Syslog-ng usage
===============
The syslog\_ng state modul is to generate syslog-ng
Overview
--------
The syslog\_ng state module is for generating syslog-ng
configurations. You can do the following things:
- generate syslog-ng configuration from YAML,
@ -16,130 +19,199 @@ configuration, get the version and other information about syslog-ng.
Configuration
-------------
The following configuration is an example, how a complete syslog-ng
state configuration looks like:
Users can create syslog-ng configuration statements with the
:py:func:`syslog_ng.config <salt.states.syslog_ng.config>` function. It requires
a `name` and a `config` parameter. The `name` parameter determines the name of
the generated statement and the `config` parameter holds a parsed YAML structure.
A statement can be declared in the following forms (both are equivalent):
.. code-block:: yaml
source.s_localhost:
syslog_ng.config:
- config:
- tcp:
- ip: "127.0.0.1"
- port: 1233
.. code-block:: yaml
s_localhost:
syslog_ng.config:
- config:
source:
- tcp:
- ip: "127.0.0.1"
- port: 1233
The first one is called the short form, because it requires less typing. Users can use lists
and dictionaries to specify their configuration. The format is quite self-describing, and
there are more examples in the Examples section at the end of this document.
Quotation
---------
The quotation can be tricky sometimes, but here are some rules to follow:
* when a string is meant to be ``"string"`` in the generated configuration, it should be written as ``'"string"'`` in the YAML document
* similarly, users should write ``"'string'"`` to get ``'string'`` in the generated configuration
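For instance, the quotation rules above can be illustrated with a small state (the ``s_quoting_demo`` source name and log path are hypothetical, shown here only to demonstrate the quoting):

```yaml
# Illustrative sketch: the YAML value '"/var/log/messages"' arrives in the
# generated syslog-ng configuration as "/var/log/messages" (double-quoted).
source.s_quoting_demo:
  syslog_ng.config:
    - config:
      - file:
        - '"/var/log/messages"'
```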
Full example
------------
The following configuration is an example of what a complete syslog-ng configuration looks like:
.. code-block:: yaml
# Set the location of the configuration file
"/home/tibi/install/syslog-ng/etc/syslog-ng.conf":
syslog_ng.set_config_file
set_location:
module.run:
- name: syslog_ng.set_config_file
- m_name: "/home/tibi/install/syslog-ng/etc/syslog-ng.conf"
# The syslog-ng and syslog-ng-ctl binaries are here. You needn't use
# The syslog-ng and syslog-ng-ctl binaries are here. You needn't use
# this method if these binaries can be found in a directory in your PATH.
"/home/tibi/install/syslog-ng/sbin":
syslog_ng.set_binary_path
set_bin_path:
module.run:
- name: syslog_ng.set_binary_path
- m_name: "/home/tibi/install/syslog-ng/sbin"
# Writes the first lines into the config file, also erases its previous
# content
"3.6":
syslog_ng.write_version
write_version:
module.run:
- name: syslog_ng.write_version
- m_name: "3.6"
# There is a shorter form to set the above variables
set_variables:
module.run:
- name: syslog_ng.set_parameters
- version: "3.6"
- binary_path: "/home/tibi/install/syslog-ng/sbin"
- config_file: "/home/tibi/install/syslog-ng/etc/syslog-ng.conf"
# Some global options
global_options:
options.global_options:
syslog_ng.config:
- config:
options:
- time_reap: 30
- mark_freq: 10
- keep_hostname: "yes"
- time_reap: 30
- mark_freq: 10
- keep_hostname: "yes"
s_localhost:
source.s_localhost:
syslog_ng.config:
- config:
source:
- tcp:
- ip: "127.0.0.1"
- port: 1233
- tcp:
- ip: "127.0.0.1"
- port: 1233
d_log_server:
destination.d_log_server:
syslog_ng.config:
- config:
destination:
- tcp:
- "127.0.0.1"
- port: 1234
- tcp:
- "127.0.0.1"
- port: 1234
l_log_to_central_server:
log.l_log_to_central_server:
syslog_ng.config:
- config:
log:
- source: s_localhost
- destination: d_log_server
- source: s_localhost
- destination: d_log_server
some_comment:
syslog_ng.write_config:
module.run:
- name: syslog_ng.write_config
- config: |
# Multi line
# comment
auto_start_or_reload:
{% set pids = salt["ps.pgrep"]("syslog-ng") %}
{% if pids == None or pids|length == 0 %}
syslog_ng.started:
- user: tibi
{% else %}
syslog_ng.reloaded
{% endif %}
# Another way to use comments or existing configuration snippets
config.other_comment_form:
syslog_ng.config:
- config: |
# Multi line
# comment
#auto_stop:
# syslog_ng.stopped
The ``3.6``, ``s_devlog``, ``d_log_server``, etc. are identifiers. The
second lines in each block are functions and their first parameter is
their id. The ``- config`` is the second named parameter of the
``syslog_ng.config`` function. This function can generate the syslog-ng
configuration from YAML. If the statement (source, destination, parser,
The :py:func:`syslog_ng.config <salt.states.syslog_ng.config>` function can generate syslog-ng configuration from YAML. If the statement (source, destination, parser,
etc.) has a name, this function uses the id as the name; otherwise (log
statement) its purpose is like a mandatory comment.
You can use ``set_binary_path`` to set the directory which contains the
syslog-ng and syslog-ng-ctl binaries. If this directory is in your PATH,
you don't need to use this function.
Under ``auto_start_or_reload`` you can see a Jinja template. If
syslog-ng isn't running it will start it, otherwise reload it. It uses
the process name ``syslog-ng`` to determine its running state. I suggest
that you use ``service`` state if it's available on your system.
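Using the ``service`` state instead might look like the following minimal sketch (assuming syslog-ng is installed as a system service named ``syslog-ng``):

```yaml
# Sketch: let the generic service state manage running/reloading
# instead of the Jinja-based start-or-reload block above.
syslog-ng:
  service.running:
    - enable: True
    - reload: True
```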
After executing this example, the syslog\_ng state will generate this
file:
.. code-block:: text
#Generated by Salt on 2014-06-19 16:53:11
@version: 3.6
#Generated by Salt on 2014-08-18 00:11:11
@version: 3.6
options {
time_reap(30);
mark_freq(10);
keep_hostname(yes);
};
options {
time_reap(
30
);
mark_freq(
10
);
keep_hostname(
yes
);
};
source s_localhost {
tcp(
ip("127.0.0.1"),
port(1233)
);
};
destination d_log_server {
tcp(
"127.0.0.1",
port(1234)
);
};
source s_localhost {
tcp(
ip(
127.0.0.1
),
port(
1233
)
);
};
log {
source(s_localhost);
destination(d_log_server);
};
# Multi line
# comment
destination d_log_server {
tcp(
127.0.0.1,
port(
1234
)
);
};
log {
source(
s_localhost
);
destination(
d_log_server
);
};
# Multi line
# comment
# Multi line
# comment
Users can include arbitrary text in the generated configuration
using the ``write_config`` function.
using the ``config`` statement (see the example above).
Syslog_ng module functions
--------------------------
You can use :py:func:`syslog_ng.set_binary_path <salt.modules.syslog_ng.set_binary_path>`
to set the directory which contains the
syslog-ng and syslog-ng-ctl binaries. If this directory is in your PATH,
you don't need to use this function. There is also a :py:func:`syslog_ng.set_config_file <salt.modules.syslog_ng.set_config_file>`
function to set the location of the configuration file.
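A minimal sketch of calling one of these module functions from a state file, using ``module.run`` as in the full example above (the path is hypothetical):

```yaml
# Only needed when the syslog-ng binaries are not on the PATH.
set_bin_path:
  module.run:
    - name: syslog_ng.set_binary_path
    - m_name: /opt/syslog-ng/sbin
```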
Examples
--------
@ -165,7 +237,7 @@ Simple source
- config:
source:
- file:
- file: "/var/log/apache/access.log"
- file: '"/var/log/apache/access.log"'
- follow_freq : 1
- flags:
- no-parse
@ -180,12 +252,26 @@ OR
- config:
source:
- file:
- "/var/log/apache/access.log"
- '"/var/log/apache/access.log"'
- follow_freq : 1
- flags:
- no-parse
- validate-utf8
OR
.. code-block:: yaml
source.s_tail:
syslog_ng.config:
- config:
- file:
- '"/var/log/apache/access.log"'
- follow_freq : 1
- flags:
- no-parse
- validate-utf8
Complex source
~~~~~~~~~~~~~~
@ -228,7 +314,7 @@ Filter
- config:
filter:
- match:
- "@json:"
- '"@json:"'
Template
~~~~~~~~
@ -251,7 +337,7 @@ Template
- config:
template:
- template:
- "$ISODATE $HOST $MSG\n"
- '"$ISODATE $HOST $MSG\n"'
- template_escape:
- "no"
@ -274,8 +360,8 @@ Rewrite
- config:
rewrite:
- set:
- "${.json.message}"
- value : "$MESSAGE"
- '"${.json.message}"'
- value : '"$MESSAGE"'
Global options
~~~~~~~~~~~~~~
@ -353,7 +439,7 @@ Log
- rewrite: r_set_message_to_MESSAGE
- destination:
- file:
- "/tmp/json-input.log"
- '"/tmp/json-input.log"'
- template: t_gsoc2014
- flags: final
- channel:
@ -366,4 +452,3 @@ Log
- file:
- "/tmp/all.log"
- template: t_gsoc2014

View file

@ -15,7 +15,7 @@ complete -c salt -x -l args-separator -d "Set the special
complete -c salt -f -l async -d "Run the salt command but don't wait for a reply"
complete -c salt -f -s C -l compound -d "The compound target option allows for multiple target types to be evaluated, allowing for greater granularity in target matching. The compound target is space delimited, targets other than globs are preceded with an identifier matching the specific targets argument type: salt \"G@os:RedHat and webser* or E@database.*\""
complete -c salt -f -s S -l ipcidr -d "Match based on Subnet (CIDR notation) or IPv4 address."
complete -c salt -f -s T -l make-token -d "Generate and save an authentication token for re-use. Thetoken is generated and made available for the period defined in the Salt Master."
complete -c salt -f -s T -l make-token -d "Generate and save an authentication token for re-use. The token is generated and made available for the period defined in the Salt Master."
complete -c salt -x -l password -d "Password for external authentication"
complete -c salt -f -s I -l pillar -d "Instead of using shell globs to evaluate the target use a pillar value to identify targets, the syntax for the target is the pillar key followed by a glob expression: \"role:production*\""
complete -c salt -f -l show-timeout -d "Display minions that timeout without the additional output of --verbose"

View file

@ -103,6 +103,7 @@ for program in $salt_programs_return
complete -c $program -x -l return -d "Set an alternative return method. By default salt will send the return data from the command back to the master, but the return data can be redirected into any number of systems, databases or applications."
end
# convenience functions
function __fish_salt_log
echo $argv >&2

View file

@ -4,6 +4,7 @@ BUILD_DIR=build/output/salt
rm -rf dist/ $BUILD_DIR &&\
cp $PKG_DIR/_syspaths.py salt/ &&\
python2.7 setup.py sdist &&\
python2.7 setup.py bdist &&\
python2.7 setup.py bdist_esky &&\
rm salt/_syspaths.py &&\

View file

@ -1331,7 +1331,9 @@ class LocalClient(object):
payload_kwargs['kwargs'] = kwargs
# If we have a salt user, add it to the payload
if self.salt_user:
if self.opts['order_masters'] and 'user' in kwargs:
payload_kwargs['user'] = kwargs['user']
elif self.salt_user:
payload_kwargs['user'] = self.salt_user
# If we're a syndication master, pass the timeout

View file

@ -451,22 +451,37 @@ class SSH(object):
print(msg)
print('-' * len(msg) + '\n')
print('')
sret = {}
outputter = self.opts.get('output', 'nested')
for ret in self.handle_ssh():
host = ret.keys()[0]
self.cache_job(jid, host, ret[host])
ret = self.key_deploy(host, ret)
outputter = ret[host].get('out', self.opts.get('output', 'nested'))
p_data = {host: ret[host].get('return', {})}
salt.output.display_output(
p_data,
outputter,
self.opts)
if not isinstance(ret[host], dict):
p_data = {host: ret[host]}
if 'return' not in ret[host]:
p_data = ret
else:
outputter = ret[host].get('out', self.opts.get('output', 'nested'))
p_data = {host: ret[host].get('return', {})}
if self.opts.get('static'):
sret.update(p_data)
else:
salt.output.display_output(
p_data,
outputter,
self.opts)
if self.event:
self.event.fire_event(
ret,
salt.utils.event.tagify(
[jid, 'ret', host],
'job'))
if self.opts.get('static'):
salt.output.display_output(
sret,
outputter,
self.opts)
class Single(object):

View file

@ -39,7 +39,10 @@ log = logging.getLogger(__name__)
def dropfile(cachedir, user=None):
'''
Set an aes dropfile to update the publish session key
Set an AES dropfile to update the publish session key
A dropfile is checked periodically by master workers to determine
if AES key rotation has occurred.
'''
dfnt = os.path.join(cachedir, '.dfnt')
dfn = os.path.join(cachedir, '.dfn')
@ -89,7 +92,15 @@ def dropfile(cachedir, user=None):
def gen_keys(keydir, keyname, keysize, user=None):
'''
Generate a keypair for use with salt
Generate an RSA keypair for use with salt
:param str keydir: The directory to write the keypair to
:param str keyname: The type of salt server for which this key should be written (i.e. 'master' or 'minion')
:param int keysize: The number of bits in the key
:param str user: The user on the system who should own this keypair
:rtype: str
:return: Path on the filesystem to the RSA private key
'''
base = os.path.join(keydir, keyname)
priv = '{0}.pem'.format(base)
@ -172,7 +183,7 @@ def gen_signature(priv_path, pub_path, sign_path):
class MasterKeys(dict):
'''
The Master Keys class is used to manage the public key pair used for
The Master Keys class is used to manage the RSA public key pair used for
authentication by the master.
It also generates a signing key-pair if enabled with master_sign_key_name.
@ -272,6 +283,13 @@ class Auth(object):
the master server from a minion.
'''
def __init__(self, opts):
'''
Init an Auth instance
:param dict opts: Options for this server
:return: Auth instance
:rtype: Auth
'''
self.opts = opts
self.token = Crypticle.generate_key_string()
self.serial = salt.payload.Serial(self.opts)
@ -286,7 +304,10 @@ class Auth(object):
def get_keys(self):
'''
Returns a key objects for the minion
Return the keypair object for the minion.
:rtype: M2Crypto.RSA.RSA
:return: The RSA keypair
'''
# Make sure all key parent directories are accessible
user = self.opts.get('user', 'root')
@ -308,6 +329,10 @@ class Auth(object):
'''
Encrypt a string with the minion private key to verify identity
with the master.
:param str clear_tok: A plaintext token to encrypt
:return: Encrypted token
:rtype: str
'''
return self.get_keys().private_encrypt(clear_tok, 5)
@ -316,6 +341,9 @@ class Auth(object):
Generates the payload used to authenticate with the master
server. This payload consists of the passed in id_ and the ssh
public key to encrypt the AES key sent back form the master.
:return: Payload dictionary
:rtype: dict
'''
payload = {}
payload['enc'] = 'clear'
@ -335,12 +363,25 @@ class Auth(object):
def decrypt_aes(self, payload, master_pub=True):
'''
This function is used to decrypt the aes seed phrase returned from
the master server, the seed phrase is decrypted with the ssh rsa
This function is used to decrypt the AES seed phrase returned from
the master server. The seed phrase is decrypted with the SSH RSA
host key.
Pass in the encrypted aes key.
Returns the decrypted aes seed key, a string
Pass in the encrypted AES key.
Returns the decrypted AES seed key, a string
:param dict payload: The incoming payload. This is a dictionary which may have the following keys:
'aes': The shared AES key
'enc': The format of the message. ('clear', 'pub', etc)
'publish_port': The TCP port which published the message
'token': The encrypted token used to verify the message.
'pub_key': The public key of the sender.
:rtype: str
:return: The decrypted AES seed key
'''
if self.opts.get('auth_trb', False):
log.warning(
@ -377,8 +418,11 @@ class Auth(object):
def verify_pubkey_sig(self, message, sig):
'''
wraps the verify_signature method so we have
additional checks and return a bool
Wraps the verify_signature method so we have
additional checks.
:rtype: bool
:return: Success or failure of public key verification
'''
if self.opts['master_sign_key_name']:
path = os.path.join(self.opts['pki_dir'],
@ -404,7 +448,7 @@ class Auth(object):
else:
log.error('Failed to verify the signature of the message because '
'the verification key-pairs name is not defined. Please '
'make sure, master_sign_key_name is defined.')
'make sure that master_sign_key_name is defined.')
return False
def verify_signing_master(self, payload):
@ -426,8 +470,15 @@ class Auth(object):
def check_auth_deps(self, payload):
'''
checks if both master and minion either sign (master) and
verify (minion). If one side does not, it should fail
Checks if both master and minion either sign (master) and
verify (minion). If one side does not, it should fail.
:param dict payload: The incoming payload. This is a dictionary which may have the following keys:
'aes': The shared AES key
'enc': The format of the message. ('clear', 'pub', 'aes')
'publish_port': The TCP port which published the message
'token': The encrypted token used to verify the message.
'pub_key': The RSA public key of the sender.
'''
# master and minion sign and verify
if 'pub_sig' in payload and self.opts['verify_master_pubkey_sign']:
@ -453,8 +504,18 @@ class Auth(object):
def extract_aes(self, payload, master_pub=True):
'''
return the aes key received from the master
when the minion has been successfully authed
Return the AES key received from the master after the minion has been
successfully authenticated.
:param dict payload: The incoming payload. This is a dictionary which may have the following keys:
'aes': The shared AES key
'enc': The format of the message. ('clear', 'pub', etc)
'publish_port': The TCP port which published the message
'token': The encrypted token used to verify the message.
'pub_key': The RSA public key of the sender.
:rtype: str
:return: The shared AES key received from the master.
'''
if master_pub:
try:
@ -477,6 +538,16 @@ class Auth(object):
def verify_master(self, payload):
'''
Verify that the master is the same one that was previously accepted.
:param dict payload: The incoming payload. This is a dictionary which may have the following keys:
'aes': The shared AES key
'enc': The format of the message. ('clear', 'pub', etc)
'publish_port': The TCP port which published the message
'token': The encrypted token used to verify the message.
'pub_key': The RSA public key of the sender.
:rtype: str
:return: An empty string on verification failure. On success, the decrypted AES message in the payload.
'''
m_pub_fn = os.path.join(self.opts['pki_dir'], self.mpub)
if os.path.isfile(m_pub_fn) and not self.opts['open_mode']:
@ -536,6 +607,16 @@ class Auth(object):
Send a sign in request to the master, sets the key information and
returns a dict containing the master publish interface to bind to
and the decrypted aes key for transport decryption.
:param int timeout: Number of seconds to wait before timing out the sign-in request
:param bool safe: If True, do not raise an exception on timeout. Retry instead.
:param int tries: The number of times to try to authenticate before giving up.
:raises SaltReqTimeoutError: If the sign-in request has timed out and ``safe`` is not set
:return: Return a string on failure indicating the reason for failure. On success, return a dictionary
with the publication port and the shared AES key.
'''
auth = {}
@ -733,7 +814,10 @@ class SAuth(Auth):
Authenticate with the master, this method breaks the functional
paradigm, it will update the master information from a fresh sign
in, signing in can occur as often as needed to keep up with the
revolving master aes key.
revolving master AES key.
:rtype: Crypticle
:returns: A crypticle used for encryption operations
'''
acceptance_wait_time = self.opts['acceptance_wait_time']
acceptance_wait_time_max = self.opts['acceptance_wait_time_max']

View file

@ -9,6 +9,7 @@ import imp
import sys
import salt
import logging
import inspect
import tempfile
import time
@ -656,12 +657,12 @@ class Loader(object):
setattr(mod, pack['name'], pack['value'])
# Call a module's initialization method if it exists
if hasattr(mod, '__init__'):
if callable(mod.__init__):
try:
mod.__init__(self.opts)
except TypeError:
pass
module_init = getattr(mod, '__init__', None)
if inspect.isfunction(module_init):
try:
module_init(self.opts)
except TypeError:
pass
funcs = {}
module_name = mod.__name__[mod.__name__.rindex('.') + 1:]
if getattr(mod, '__load__', False) is not False:
@ -675,23 +676,21 @@ class Loader(object):
if attr.startswith('_'):
# private functions are skipped
continue
if callable(getattr(mod, attr)):
func = getattr(mod, attr)
if hasattr(func, '__bases__'):
if 'BaseException' in func.__bases__:
# the callable object is an exception, don't load it
continue
func = getattr(mod, attr)
if not inspect.isfunction(func):
# Not a function!? Skip it!!!
continue
# Let's get the function name.
# If the module has the __func_alias__ attribute, it must be a
# dictionary mapping in the form of (key -> value):
# <real-func-name> -> <desired-func-name>
#
# It defaults, of course, to the found callable attribute name
# if no alias is defined.
funcname = getattr(mod, '__func_alias__', {}).get(attr, attr)
funcs['{0}.{1}'.format(module_name, funcname)] = func
self._apply_outputter(func, mod)
# Let's get the function name.
# If the module has the __func_alias__ attribute, it must be a
# dictionary mapping in the form of (key -> value):
# <real-func-name> -> <desired-func-name>
#
# It defaults, of course, to the found callable attribute name
# if no alias is defined.
funcname = getattr(mod, '__func_alias__', {}).get(attr, attr)
funcs['{0}.{1}'.format(module_name, funcname)] = func
self._apply_outputter(func, mod)
if not hasattr(mod, '__salt__'):
mod.__salt__ = functions
try:
@ -747,12 +746,12 @@ class Loader(object):
setattr(mod, pack['name'], pack['value'])
# Call a module's initialization method if it exists
if hasattr(mod, '__init__'):
if callable(mod.__init__):
try:
mod.__init__(self.opts)
except TypeError:
pass
module_init = getattr(mod, '__init__', None)
if inspect.isfunction(module_init):
try:
module_init(self.opts)
except TypeError:
pass
# Trim the full pathname to just the module
# this will be the short name that other salt modules and state
@ -960,42 +959,38 @@ class Loader(object):
# log messages omitted for obviousness
continue
if callable(getattr(mod, attr)):
# check to make sure this is callable
func = getattr(mod, attr)
if isinstance(func, type):
# skip callables that might be exceptions
if any(['Error' in func.__name__,
'Exception' in func.__name__]):
continue
func = getattr(mod, attr)
if not inspect.isfunction(func):
# Not a function!? Skip it!!!
continue
# now that callable passes all the checks, add it to the
# library of available functions of this type
# Once confirmed that "func" is a function, add it to the
# library of available functions
# Let's get the function name.
# If the module has the __func_alias__ attribute, it must
# be a dictionary mapping in the form of (key -> value):
# <real-func-name> -> <desired-func-name>
#
# It defaults, of course, to the found callable attribute
# name if no alias is defined.
funcname = getattr(mod, '__func_alias__', {}).get(
attr, attr
)
# Let's get the function name.
# If the module has the __func_alias__ attribute, it must
# be a dictionary mapping in the form of (key -> value):
# <real-func-name> -> <desired-func-name>
#
# It defaults, of course, to the found callable attribute
# name if no alias is defined.
funcname = getattr(mod, '__func_alias__', {}).get(
attr, attr
)
# functions are namespaced with their module name, unless
# the module_name is None (this is a special case added for
# pyobjects), in which case just the function name is used
if module_name is None:
module_func_name = funcname
else:
module_func_name = '{0}.{1}'.format(module_name, funcname)
# functions are namespaced with their module name, unless
# the module_name is None (this is a special case added for
# pyobjects), in which case just the function name is used
if module_name is None:
module_func_name = funcname
else:
module_func_name = '{0}.{1}'.format(module_name, funcname)
funcs[module_func_name] = func
log.trace(
'Added {0} to {1}'.format(module_func_name, self.tag)
)
self._apply_outputter(func, mod)
funcs[module_func_name] = func
log.trace(
'Added {0} to {1}'.format(module_func_name, self.tag)
)
self._apply_outputter(func, mod)
return funcs
def process_virtual(self, mod, module_name):
@ -1025,7 +1020,7 @@ class Loader(object):
# if they are not intended to run on the given platform or are missing
# dependencies.
try:
if hasattr(mod, '__virtual__') and callable(mod.__virtual__):
if hasattr(mod, '__virtual__') and inspect.isfunction(mod.__virtual__):
if self.opts.get('virtual_timer', False):
start = time.time()
virtual = mod.__virtual__()

View file

@ -1881,6 +1881,7 @@ class Syndic(Minion):
data['ret'],
data['jid'],
data['to'],
{'user': data.get('user', '')},
**kwargs)
def _setsockopts(self):
@ -2331,8 +2332,6 @@ class Matcher(object):
'''
def __init__(self, opts, functions=None):
self.opts = opts
if functions is None:
functions = salt.loader.minion_mods(self.opts)
self.functions = functions
def confirm_top(self, match, data, nodegroups=None):
@ -2412,6 +2411,8 @@ class Matcher(object):
'''
Match based on the local data store on the minion
'''
if self.functions is None:
self.functions = salt.loader.minion_mods(self.opts)
comps = tgt.split(':')
if len(comps) < 2:
return False

View file

@ -1225,9 +1225,8 @@ def exec_code(lang, code, cwd=None):
codefile = salt.utils.mkstemp()
with salt.utils.fopen(codefile, 'w+t') as fp_:
fp_.write(code)
cmd = '{0} {1}'.format(lang, codefile)
ret = run(cmd, cwd=cwd)
cmd = [lang, codefile]
ret = run(cmd, cwd=cwd, python_shell=False)
os.remove(codefile)
return ret

salt/modules/gpg.py (new file, 1012 lines)

File diff suppressed because it is too large

View file

@ -4,7 +4,9 @@ Module for handling openstack keystone calls.
:optdepends: - keystoneclient Python adapter
:configuration: This module is not usable until the following are specified
either in a pillar or in the minion's config file::
either in a pillar or in the minion's config file:
.. code-block:: yaml
keystone.user: admin
keystone.password: verybadpass
@ -12,14 +14,17 @@ Module for handling openstack keystone calls.
keystone.tenant_id: f80919baedab48ec8931f200c65a50df
keystone.auth_url: 'http://127.0.0.1:5000/v2.0/'
OR (for token based authentication)
OR (for token based authentication)
.. code-block:: yaml
keystone.token: 'ADMIN'
keystone.endpoint: 'http://127.0.0.1:35357/v2.0'
If configuration for multiple openstack accounts is required, they can be
set up as different configuration profiles:
For example::
set up as different configuration profiles. For example:
.. code-block:: yaml
openstack1:
keystone.user: admin
@ -37,7 +42,9 @@ Module for handling openstack keystone calls.
With this configuration in place, any of the keystone functions can make use
of a configuration profile by declaring it explicitly.
For example::
For example:
.. code-block:: bash
salt '*' keystone.tenant_list profile=openstack1
'''
@ -119,8 +126,8 @@ def ec2_credentials_create(user_id=None, name=None,
salt '*' keystone.ec2_credentials_create name=admin tenant=admin
salt '*' keystone.ec2_credentials_create \
user_id=c965f79c4f864eaaa9c3b41904e67082 \
tenant_id=722787eb540849158668370dc627ec5f
user_id=c965f79c4f864eaaa9c3b41904e67082 \
tenant_id=722787eb540849158668370dc627ec5f
'''
kstone = auth(profile, **connection_args)
@ -153,10 +160,9 @@ def ec2_credentials_delete(user_id=None, name=None, access_key=None,
.. code-block:: bash
salt '*' keystone.ec2_credentials_delete \
860f8c2c38ca4fab989f9bc56a061a64
access_key=5f66d2f24f604b8bb9cd28886106f442
860f8c2c38ca4fab989f9bc56a061a64 access_key=5f66d2f24f604b8bb9cd28886106f442
salt '*' keystone.ec2_credentials_delete name=admin \
access_key=5f66d2f24f604b8bb9cd28886106f442
access_key=5f66d2f24f604b8bb9cd28886106f442
'''
kstone = auth(profile, **connection_args)
@ -421,7 +427,7 @@ def service_create(name, service_type, description=None, profile=None,
.. code-block:: bash
salt '*' keystone.service_create nova compute \
'OpenStack Compute Service'
'OpenStack Compute Service'
'''
kstone = auth(profile, **connection_args)
service = kstone.services.create(name, service_type, description)
@ -861,9 +867,9 @@ def user_role_add(user_id=None, user=None, tenant_id=None,
.. code-block:: bash
salt '*' keystone.user_role_add \
user_id=298ce377245c4ec9b70e1c639c89e654 \
tenant_id=7167a092ece84bae8cead4bf9d15bb3b \
role_id=ce377245c4ec9b70e1c639c89e8cead4
user_id=298ce377245c4ec9b70e1c639c89e654 \
tenant_id=7167a092ece84bae8cead4bf9d15bb3b \
role_id=ce377245c4ec9b70e1c639c89e8cead4
salt '*' keystone.user_role_add user=admin tenant=admin role=admin
'''
kstone = auth(profile, **connection_args)
@ -910,9 +916,9 @@ def user_role_remove(user_id=None, user=None, tenant_id=None,
.. code-block:: bash
salt '*' keystone.user_role_remove \
user_id=298ce377245c4ec9b70e1c639c89e654 \
tenant_id=7167a092ece84bae8cead4bf9d15bb3b \
role_id=ce377245c4ec9b70e1c639c89e8cead4
user_id=298ce377245c4ec9b70e1c639c89e654 \
tenant_id=7167a092ece84bae8cead4bf9d15bb3b \
role_id=ce377245c4ec9b70e1c639c89e8cead4
salt '*' keystone.user_role_remove user=admin tenant=admin role=admin
'''
kstone = auth(profile, **connection_args)
@ -957,8 +963,8 @@ def user_role_list(user_id=None, tenant_id=None, user_name=None,
.. code-block:: bash
salt '*' keystone.user_role_list \
user_id=298ce377245c4ec9b70e1c639c89e654 \
tenant_id=7167a092ece84bae8cead4bf9d15bb3b
user_id=298ce377245c4ec9b70e1c639c89e654 \
tenant_id=7167a092ece84bae8cead4bf9d15bb3b
salt '*' keystone.user_role_list user_name=admin tenant_name=admin
'''
kstone = auth(profile, **connection_args)

View file

@ -117,10 +117,10 @@ def cloud_init_interface(name, vm_=None, **kwargs):
gateway
network gateway for the container
additional_ips
additionnal ips which will be wired on the main bridge (br0)
additional ips which will be wired on the main bridge (br0)
which is connected to internet.
Be aware that you may use manual virtual mac addresses
providen by you provider (online, ovh, etc).
provided by your provider (online, ovh, etc).
This is a list of mappings ``{ip: '', mac: '',netmask:''}``
Set gateway to ``None`` and an interface with a gateway
to escape from another interface that's eth0.
@ -316,7 +316,7 @@ def _lxc_profile(profile):
Profiles can be defined in the config or pillar, e.g.:
Profile can be a string to be retrieven in config
Profile can be a string to be retrieved in config
or a mapping.
If it is a mapping and it contains a name, the name will
@ -595,21 +595,14 @@ class _LXCConfig(object):
self._filter_data(i)
def as_string(self):
chunks = []
def _process(item):
sep = ' = '
if not item[0]:
sep = ''
chunks.append('{0[0]}{1}{0[1]}'.format(item, sep))
map(_process, self.data)
chunks = ('{0[0]}{1}{0[1]}'.format(item, (' = ' if item[0] else '')) for item in self.data)
return '\n'.join(chunks) + '\n'
def write(self):
if self.path:
content = self.as_string()
# 2 step rendering to be sure not to open/wipe the config
# before as_string suceeds.
# before as_string succeeds.
with open(self.path, 'w') as fic:
fic.write(content)
fic.flush()
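The rewritten ``as_string`` above collapses the helper function into a single generator expression; a standalone sketch of that formatting (a hypothetical helper, not the class method itself):

```python
def config_as_string(data):
    # Each (key, value) pair becomes 'key = value'; pairs with an empty
    # key (raw lines such as comments) pass through unchanged.
    chunks = ('{0[0]}{1}{0[1]}'.format(item, (' = ' if item[0] else ''))
              for item in data)
    return '\n'.join(chunks) + '\n'
```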
@ -992,7 +985,7 @@ def init(name,
changes['350_dns'] = 'DNS updated\n'
if not cret['result']:
ret['result'] = False
changes['350_dns'] = 'DNS provisionning error\n'
changes['350_dns'] = 'DNS provisioning error\n'
try:
lxcret = int(
__salt__['lxc.run_cmd'](
@ -2057,7 +2050,7 @@ def bootstrap(name, config=None, approve_key=True,
__salt__['lxc.stop'](name)
elif prior_state == 'frozen':
__salt__['lxc.freeze'](name)
# mark seeded upon sucessful install
# mark seeded upon successful install
if res:
__salt__['lxc.run_cmd'](
name, 'sh -c \'touch "{0}";\''.format(SEED_MARKER))
@ -2106,7 +2099,7 @@ def run_cmd(name, cmd, no_start=False, preserve_state=True,
use_vt
use saltstack utils.vt to stream output to console
keep_env
A list of env vars to preserve. May be passed as commma-delimited list.
A list of env vars to preserve. May be passed as comma-delimited list.
Defaults to http_proxy,https_proxy.
.. note::

View file

@ -105,7 +105,7 @@ def persist(name, value, config='/etc/sysctl.conf'):
nlines.append(line)
continue
else:
rest = line.split('=', 1)
key, rest = line.split('=', 1)
if rest.startswith('"'):
_, rest_v, rest = rest.split('"', 2)
elif rest.startswith('\''):
@ -115,7 +115,8 @@ def persist(name, value, config='/etc/sysctl.conf'):
rest = rest[len(rest_v):]
if rest_v == value:
return 'Already set'
nlines.append('{0}={1}\n'.format(name, value))
new_line = '{0}={1}{2}'.format(key, value, rest)
nlines.append(new_line)
edited = True
if not edited:
nlines.append('{0}={1}\n'.format(name, value))
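The fix above keeps the text after the value (trailing comments, whitespace) instead of emitting a bare ``name=value`` line; here is a self-contained sketch of that rewrite logic (``rewrite_sysctl_line`` is a hypothetical helper, not part of the module):

```python
def rewrite_sysctl_line(line, name, value):
    key, rest = line.split('=', 1)
    if key.strip() != name:
        return line  # not the key being managed
    # Peel off the current value, honoring optional quoting
    if rest.startswith('"'):
        _, rest_v, rest = rest.split('"', 2)
    elif rest.startswith('\''):
        _, rest_v, rest = rest.split('\'', 2)
    else:
        rest_v = rest.split()[0]
        rest = rest[len(rest_v):]
    if rest_v == value:
        return line  # already set
    # Substitute only the value; everything after it is preserved
    return '{0}={1}{2}'.format(key, value, rest)
```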

View file

@ -1549,3 +1549,237 @@ def owner_to(dbname,
password=password,
maintenance_db=dbname)
return cmdret
# Schema related actions
def schema_create(dbname, name, owner=None,
user=None,
db_user=None, db_password=None,
db_host=None, db_port=None):
'''
Creates a Postgres schema.
CLI Example:
.. code-block:: bash
salt '*' postgres.schema_create dbname name owner='owner' \\
user='user' \\
db_user='user' db_password='password'
db_host='hostname' db_port='port'
'''
# check if schema exists
if schema_exists(dbname, name,
db_user=db_user, db_password=db_password,
db_host=db_host, db_port=db_port):
log.info('{0!r} already exists in {1!r}'.format(name, dbname))
return False
sub_cmd = 'CREATE SCHEMA {0}'.format(name)
if owner is not None:
sub_cmd = '{0} AUTHORIZATION {1}'.format(sub_cmd, owner)
ret = _psql_prepare_and_run(['-c', sub_cmd],
user=db_user, password=db_password,
port=db_port, host=db_host,
maintenance_db=dbname, runas=user)
return ret['retcode'] == 0
def schema_remove(dbname, name,
user=None,
db_user=None, db_password=None,
db_host=None, db_port=None):
'''
Removes a schema from the Postgres server.
CLI Example:
.. code-block:: bash
salt '*' postgres.schema_remove dbname schemaname
dbname
Database name we work on
schemaname
The schema's name we'll remove
user
System user all operations should be performed on behalf of
db_user
database username if different from config or default
db_password
user password if any password for a specified user
db_host
Database host if different from config or default
db_port
Database port if different from config or default
'''
# check if schema exists
if not schema_exists(dbname, name,
db_user=db_user, db_password=db_password,
db_host=db_host, db_port=db_port):
log.info('Schema {0!r} does not exist in {1!r}'.format(name, dbname))
return False
# schema exists, proceed
sub_cmd = 'DROP SCHEMA {0}'.format(name)
_psql_prepare_and_run(
['-c', sub_cmd],
runas=user,
maintenance_db=dbname,
host=db_host, user=db_user, port=db_port, password=db_password)
if not schema_exists(dbname, name,
db_user=db_user, db_password=db_password,
db_host=db_host, db_port=db_port):
return True
else:
log.info('Failed to delete schema {0!r}.'.format(name))
return False
def schema_exists(dbname, name,
db_user=None, db_password=None,
db_host=None, db_port=None):
'''
Checks if a schema exists on the Postgres server.
CLI Example:
.. code-block:: bash
salt '*' postgres.schema_exists dbname schemaname
dbname
Database name we query on
name
Schema name we look for
db_user
database username if different from config or default
db_password
user password if any password for a specified user
db_host
Database host if different from config or default
db_port
Database port if different from config or default
'''
return bool(
schema_get(dbname, name,
db_user=db_user,
db_host=db_host,
db_port=db_port,
db_password=db_password))
def schema_get(dbname, name,
db_user=None, db_password=None,
db_host=None, db_port=None):
'''
Return a dict with information about schemas in a database.
CLI Example:
.. code-block:: bash
salt '*' postgres.schema_get dbname name
dbname
Database name we query on
name
Schema name we look for
db_user
database username if different from config or default
db_password
user password if any password for a specified user
db_host
Database host if different from config or default
db_port
Database port if different from config or default
'''
all_schemas = schema_list(dbname,
db_user=db_user,
db_host=db_host,
db_port=db_port,
db_password=db_password)
try:
return all_schemas.get(name, None)
except AttributeError:
log.error('Could not retrieve Postgres schema. Is Postgres running?')
return False
def schema_list(dbname,
db_user=None, db_password=None,
db_host=None, db_port=None):
'''
Return a dict with information about schemas in a Postgres database.
CLI Example:
.. code-block:: bash
salt '*' postgres.schema_list dbname
dbname
Database name we query on
db_user
database username if different from config or default
db_password
user password if any password for a specified user
db_host
Database host if different from config or default
db_port
Database port if different from config or default
'''
ret = {}
query = (''.join([
'SELECT '
'pg_namespace.nspname as "name",'
'pg_namespace.nspacl as "acl", '
'pg_roles.rolname as "owner" '
'FROM pg_namespace '
'LEFT JOIN pg_roles ON pg_roles.oid = pg_namespace.nspowner '
]))
rows = psql_query(query,
host=db_host,
user=db_user,
port=db_port,
maintenance_db=dbname,
password=db_password)
for row in rows:
retrow = {}
for key in ('owner', 'acl'):
retrow[key] = row[key]
ret[row['name']] = retrow
return ret
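``schema_list`` reduces each row returned by ``psql_query`` to an owner/acl pair keyed by schema name; the reshaping can be sketched on its own (the rows below are hypothetical sample data, not real query output):

```python
def rows_to_schema_map(rows):
    # Keep only 'owner' and 'acl' per schema, keyed by schema name
    ret = {}
    for row in rows:
        ret[row['name']] = {key: row[key] for key in ('owner', 'acl')}
    return ret

sample_rows = [
    {'name': 'public', 'owner': 'postgres', 'acl': '{postgres=UC/postgres}'},
    {'name': 'reporting', 'owner': 'report_user', 'acl': None},
]
```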

View file

@ -15,6 +15,7 @@ import datetime
import tempfile
# Import salt libs
import salt.config
import salt.utils
import salt.state
import salt.payload
@ -26,6 +27,7 @@ __proxyenabled__ = ['*']
__outputter__ = {
'sls': 'highstate',
'pkg': 'highstate',
'top': 'highstate',
'single': 'highstate',
'highstate': 'highstate',
@ -187,7 +189,10 @@ def high(data, queue=False, **kwargs):
def template(tem, queue=False, **kwargs):
'''
Execute the information stored in a template file on the minion
Execute the information stored in a template file on the minion.
This function does not ask a master for an SLS file to render but
instead directly processes the file at the provided path on the minion.
CLI Example:
@ -198,8 +203,14 @@ def template(tem, queue=False, **kwargs):
conflict = _check_queue(queue, kwargs)
if conflict is not None:
return conflict
st_ = salt.state.State(__opts__)
ret = st_.call_template(tem)
st_ = salt.state.HighState(__opts__)
if not tem.endswith('.sls'):
tem = '{sls}.sls'.format(sls=tem)
high, errors = st_.render_state(tem, None, '', None, local=True)
if errors:
__context__['retcode'] = 1
return errors
ret = st_.state.call_high(high)
_set_retcode(ret)
return ret
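The change above means a template name without an ``.sls`` extension gets one appended before rendering; the extension handling itself is trivial (shown here as a hypothetical standalone helper):

```python
def ensure_sls_ext(tem):
    # Append '.sls' only when the path does not already carry it
    if not tem.endswith('.sls'):
        tem = '{sls}.sls'.format(sls=tem)
    return tem
```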
@ -223,7 +234,10 @@ def template_str(tem, queue=False, **kwargs):
return ret
def highstate(test=None, queue=False, **kwargs):
def highstate(test=None,
queue=False,
localconfig=None,
**kwargs):
'''
Retrieve the state data from the salt master for this minion and execute it
@ -241,6 +255,11 @@ def highstate(test=None, queue=False, **kwargs):
This option starts a new thread for each queued state run so use this
option sparingly.
localconfig: ``None``
Instead of using the running minion opts, load ``localconfig`` and merge that
with the running minion opts. This allows you to create "roots" of
salt directories (with their own minion config, pillars, file_roots) to
run highstate out of.
CLI Example:
@ -260,6 +279,9 @@ def highstate(test=None, queue=False, **kwargs):
orig_test = __opts__.get('test', None)
opts = copy.deepcopy(__opts__)
if localconfig:
opts = salt.config.minion_config(localconfig, defaults=opts)
if test is None:
if salt.utils.test_mode(test=test, **kwargs):
opts['test'] = True

1190
salt/modules/syslog_ng.py Normal file

File diff suppressed because it is too large Load diff

View file

@ -72,9 +72,11 @@ def __catalina_home():
'''
locations = ['/usr/share/tomcat*', '/opt/tomcat']
for location in locations:
catalina_home = glob.glob(location)
if catalina_home:
return catalina_home[-1]
folders = glob.glob(location)
if folders:
for catalina_home in folders:
if os.path.isdir(catalina_home + "/bin"):
return catalina_home
return False

View file

@ -342,12 +342,10 @@ def get_resource_path(venv, package_or_requirement, resource_name):
salt '*' virtualenv.get_resource_path /path/to/my/venv my_package my/resource.xml
'''
if not salt.utils.verify.safe_py_code(venv):
return ''
if not salt.utils.verify.safe_py_code(package_or_requirement):
return ''
raise salt.exceptions.CommandExecutionError
if not salt.utils.verify.safe_py_code(resource_name):
return ''
raise salt.exceptions.CommandExecutionError
bin_path = os.path.join(venv, 'bin/python')
if not os.path.exists(bin_path):
@ -366,12 +364,10 @@ def get_resource_content(venv, package_or_requirement, resource_name):
salt '*' virtualenv.get_resource_content /path/to/my/venv my_package my/resource.xml
'''
if not salt.utils.verify.safe_py_code(venv):
return ''
if not salt.utils.verify.safe_py_code(package_or_requirement):
return ''
raise salt.exceptions.CommandExecutionError
if not salt.utils.verify.safe_py_code(resource_name):
return ''
raise salt.exceptions.CommandExecutionError
bin_path = os.path.join(venv, 'bin/python')
if not os.path.exists(bin_path):

View file

@ -40,63 +40,77 @@ class NestDisplay(object):
def __init__(self):
self.colors = salt.utils.get_colors(__opts__.get('color'))
def ustring(self,
indent,
color,
msg,
prefix='',
suffix='',
endc=None,
encoding='utf-8'):
if endc is None:
endc = self.colors['ENDC']
try:
return u'{0}{1}{2}{3}{4}{5}\n'.format(
indent, color, prefix, msg, endc, suffix)
except UnicodeDecodeError:
return u'{0}{1}{2}{3}{4}{5}\n'.format(
indent, color, prefix, msg.decode(encoding), endc, suffix)
def display(self, ret, indent, prefix, out):
'''
Recursively iterate down through data structures to determine output
'''
strip_colors = __opts__.get('strip_colors', True)
if ret is None or ret is True or ret is False:
out += u'{0}{1}{2}{3}{4}\n'.format(
' ' * indent,
self.colors['YELLOW'],
prefix,
ret,
self.colors['ENDC'])
# Number includes all python numbers types (float, int, long, complex, ...)
out += self.ustring(
' ' * indent,
self.colors['YELLOW'],
ret,
prefix=prefix)
# Number includes all python numbers types
# (float, int, long, complex, ...)
elif isinstance(ret, Number):
out += u'{0}{1}{2}{3}{4}\n'.format(
' ' * indent,
self.colors['YELLOW'],
prefix,
ret,
self.colors['ENDC'])
out += self.ustring(
' ' * indent,
self.colors['YELLOW'],
ret,
prefix=prefix)
elif isinstance(ret, string_types):
lines = re.split(r'\r?\n', ret)
for line in lines:
if strip_colors:
line = salt.output.strip_esc_sequence(line)
out += u'{0}{1}{2}{3}{4}\n'.format(
' ' * indent,
self.colors['GREEN'],
prefix,
line,
self.colors['ENDC'])
out += self.ustring(
' ' * indent,
self.colors['GREEN'],
line,
prefix=prefix)
elif isinstance(ret, list) or isinstance(ret, tuple):
for ind in ret:
if isinstance(ind, (list, tuple, dict)):
out += u'{0}{1}|_{2}\n'.format(
' ' * indent,
self.colors['GREEN'],
self.colors['ENDC'])
out += self.ustring(' ' * indent,
self.colors['GREEN'],
'|_')
prefix = '' if isinstance(ind, dict) else '- '
out = self.display(ind, indent + 2, prefix, out)
else:
out = self.display(ind, indent, '- ', out)
elif isinstance(ret, dict):
if indent:
out += u'{0}{1}{2}{3}\n'.format(
' ' * indent,
self.colors['CYAN'],
'-' * 10,
self.colors['ENDC'])
out += self.ustring(
' ' * indent,
self.colors['CYAN'],
'-' * 10)
for key in sorted(ret):
val = ret[key]
out += u'{0}{1}{2}{3}{4}:\n'.format(
' ' * indent,
self.colors['CYAN'],
prefix,
key,
self.colors['ENDC'])
out += self.ustring(
' ' * indent,
self.colors['CYAN'],
key,
suffix=":",
prefix=prefix)
out = self.display(val, indent + 4, '', out)
return out
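The repeated format calls above are now funneled through the new ``ustring`` helper; a simplified standalone version (the real method reads the default ``endc`` from ``self.colors``, and the decode fallback targets Python 2 byte strings):

```python
def ustring(indent, color, msg, prefix='', suffix='', endc='',
            encoding='utf-8'):
    # One shared layout: indent, color code, prefix, message,
    # end-color code, suffix.  Byte strings that fail to format
    # are decoded first.
    try:
        return u'{0}{1}{2}{3}{4}{5}\n'.format(
            indent, color, prefix, msg, endc, suffix)
    except UnicodeDecodeError:
        return u'{0}{1}{2}{3}{4}{5}\n'.format(
            indent, color, prefix, msg.decode(encoding), endc, suffix)
```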

View file

@ -1,20 +1,280 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
'''
Configuration templating using Hierarchical substitution and Jinja.
Pepa
====
Documentation: https://github.com/mickep76/pepa
Configuration templating for SaltStack using Hierarchical substitution and Jinja.
Configuring Pepa
================
.. code-block:: yaml
extension_modules: /srv/salt/ext
ext_pillar:
- pepa:
resource: host # Name of resource directory and sub-key in pillars
sequence: # Sequence used for hierarchical substitution
- hostname: # Name of key
name: input # Alias used for template directory
base_only: True # Only use templates from Base environment, i.e. no staging
- default:
- environment:
- location..region:
name: region
- location..country:
name: country
- location..datacenter:
name: datacenter
- roles:
- osfinger:
name: os
- hostname:
name: override
base_only: True
subkey: True # Create a sub-key in pillars, named after the resource in this case [host]
subkey_only: True # Only create a sub-key, and leave the top level untouched
pepa_roots: # Base directory for each environment
base: /srv/pepa/base # Path for base environment
dev: /srv/pepa/base # Associate dev with base
qa: /srv/pepa/qa
prod: /srv/pepa/prod
# Use a different delimiter for nested dictionaries, defaults to '..' since some keys may use '.' in the name
#pepa_delimiter: ..
# Supply Grains for Pepa, this should **ONLY** be used for testing or validation
#pepa_grains:
# environment: dev
# Supply Pillar for Pepa, this should **ONLY** be used for testing or validation
#pepa_pillars:
# saltversion: 0.17.4
# Enable debug for Pepa, and keep Salt on warning
#log_level: debug
#log_granular_levels:
# salt: warning
# salt.loaded.ext.pillar.pepa: debug
Pepa can also be used in Master-less SaltStack setup.
Command line
============
.. code-block:: bash
usage: pepa.py [-h] [-c CONFIG] [-d] [-g GRAINS] [-p PILLAR] [-n] [-v]
hostname
positional arguments:
hostname Hostname
optional arguments:
-h, --help show this help message and exit
-c CONFIG, --config CONFIG
Configuration file
-d, --debug Print debug info
-g GRAINS, --grains GRAINS
Input Grains as YAML
-p PILLAR, --pillar PILLAR
Input Pillar as YAML
-n, --no-color No color output
-v, --validate Validate output
Templates
=========
Templates are configuration for a host or software that can use information from Grains or Pillars. These can then be used for hierarchical substitution.
**Example File:** host/input/test_example_com.yaml
.. code-block:: yaml
location..region: emea
location..country: nl
location..datacenter: foobar
environment: dev
roles:
- salt.master
network..gateway: 10.0.0.254
network..interfaces..eth0..hwaddr: 00:20:26:a1:12:12
network..interfaces..eth0..dhcp: False
network..interfaces..eth0..ipv4: 10.0.0.3
network..interfaces..eth0..netmask: 255.255.255.0
network..interfaces..eth0..fqdn: {{ hostname }}
cobbler..profile: fedora-19-x86_64
As you see in this example you can use Jinja directly inside the template.
**Example File:** host/region/amer.yaml
.. code-block:: yaml
network..dns..servers:
- 10.0.0.1
- 10.0.0.2
time..ntp..servers:
- ntp1.amer.example.com
- ntp2.amer.example.com
- ntp3.amer.example.com
time..timezone: America/Chihuahua
yum..mirror: yum.amer.example.com
Each template is named after the value of the key, lowercased, with all extended characters replaced by underscores.
**Example:**
osfinger: Fedora-19
**Would become:**
fedora_19.yaml
Nested dictionaries
===================
In order to create nested dictionaries as output you can use double dot **".."** as a delimiter. You can change this using "pepa_delimiter"; we chose double dot since a single dot is already used by key names in some modules, and using ":" requires quoting in the YAML.
**Example:**
.. code-block:: yaml
network..dns..servers:
- 10.0.0.1
- 10.0.0.2
network..dns..options:
- timeout:2
- attempts:1
- ndots:1
network..dns..search:
- example.com
**Would become:**
.. code-block:: yaml
network:
dns:
servers:
- 10.0.0.1
- 10.0.0.2
options:
- timeout:2
- attempts:1
- ndots:1
search:
- example.com
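The delimiter expansion described above is a plain split-and-descend over each key; a minimal sketch (not Pepa's internal implementation):

```python
def expand_keys(flat, delimiter='..'):
    # Split each key on the delimiter and build nested dictionaries
    tree = {}
    for key, value in flat.items():
        node = tree
        parts = key.split(delimiter)
        for part in parts[:-1]:
            node = node.setdefault(part, {})
        node[parts[-1]] = value
    return tree
```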
Operators
=========
Operators can be used to merge/unset a list/hash or set the key as immutable, so it can't be changed.
=========== ================================================
Operator Description
=========== ================================================
merge() Merge list or hash
unset() Unset key
immutable() Set the key as immutable, so it can't be changed
imerge() Set immutable and merge
iunset() Set immutable and unset
=========== ================================================
**Example:**
.. code-block:: yaml
network..dns..search..merge():
- foobar.com
- dummy.nl
owner..immutable(): Operations
host..printers..unset():
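Operator handling boils down to stripping a trailing ``op()`` token from the key and dispatching on it; a simplified sketch of that dispatch (not Pepa's actual function, which also logs and type-checks merges):

```python
def apply_operator(output, immutable, key, value):
    # Split 'some..key..merge()' into ('some..key', 'merge()')
    rkey, sep, operator = key.rpartition('..')
    if not sep or not operator.endswith('()'):
        rkey, operator = key, None
    if rkey in immutable:
        return  # immutable keys are never changed
    if operator in ('merge()', 'imerge()'):
        if operator == 'imerge()':
            immutable[rkey] = True
        if isinstance(value, dict):
            output.setdefault(rkey, {}).update(value)
        else:
            output.setdefault(rkey, []).extend(value)
    elif operator in ('unset()', 'iunset()'):
        if operator == 'iunset()':
            immutable[rkey] = True
        output.pop(rkey, None)
    elif operator == 'immutable()':
        immutable[rkey] = True
        output[rkey] = value
    else:
        output[rkey] = value
```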
Validation
==========
Since it's very hard to test Jinja as is, the best approach is to run all the permutations of input and validate the output, i.e. Unit Testing.
To facilitate this in Pepa we use YAML, Jinja and `Cerberus <https://github.com/nicolaiarocci/cerberus>`_.
Schema
======
This is a validation schema for network configuration; as you can see, it can be customized with Jinja just like Pepa templates.
This was designed to be run as a build job in Jenkins or similar tool. You can provide Grains/Pillar input using either the config file or command line arguments.
**File Example: host/validation/network.yaml**
.. code-block:: yaml
network..dns..search:
type: list
allowed:
- example.com
network..dns..options:
type: list
allowed: ['timeout:2', 'attempts:1', 'ndots:1']
network..dns..servers:
type: list
schema:
regex: ^([0-9]{1,3}\\.){3}[0-9]{1,3}$
network..gateway:
type: string
regex: ^([0-9]{1,3}\\.){3}[0-9]{1,3}$
{% if network.interfaces is defined %}
{% for interface in network.interfaces %}
network..interfaces..{{ interface }}..dhcp:
type: boolean
network..interfaces..{{ interface }}..fqdn:
type: string
regex: ^([a-z0-9]([a-z0-9-]{0,61}[a-z0-9])?\\.)+[a-zA-Z]{2,6}$
network..interfaces..{{ interface }}..hwaddr:
type: string
regex: ^([0-9a-f]{1,2}\\:){5}[0-9a-f]{1,2}$
network..interfaces..{{ interface }}..ipv4:
type: string
regex: ^([0-9]{1,3}\\.){3}[0-9]{1,3}$
network..interfaces..{{ interface }}..netmask:
type: string
regex: ^([0-9]{1,3}\\.){3}[0-9]{1,3}$
{% endfor %}
{% endif %}
Links
=====
For more examples and information see <https://github.com/mickep76/pepa>.
'''
__author__ = 'Michael Persson <michael.ake.persson@gmail.com>'
__copyright__ = 'Copyright (c) 2013 Michael Persson'
__license__ = 'Apache License, Version 2.0'
__version__ = '0.6.4'
__version__ = '0.6.5'
# Import python libs
import logging
import sys
import glob
import yaml
import jinja2
import re
from os.path import isfile, join
# Only used when called from a terminal
log = None
@ -28,6 +288,7 @@ if __name__ == '__main__':
parser.add_argument('-g', '--grains', help='Input Grains as YAML')
parser.add_argument('-p', '--pillar', help='Input Pillar as YAML')
parser.add_argument('-n', '--no-color', action='store_true', help='No color output')
parser.add_argument('-v', '--validate', action='store_true', help='Validate output')
args = parser.parse_args()
LOG_LEVEL = logging.WARNING
@ -60,28 +321,15 @@ __opts__ = {
'pepa_roots': {
'base': '/srv/salt'
},
'pepa_delimiter': '..'
'pepa_delimiter': '..',
'pepa_validate': False
}
# Import libraries
import yaml
import jinja2
import re
try:
from os.path import isfile, join
HAS_OS_PATH = True
except ImportError:
HAS_OS_PATH = False
def __virtual__():
'''
Only return if all the modules are available
'''
if not HAS_OS_PATH:
return False
return True
@ -178,8 +426,12 @@ def ext_pillar(minion_id, pillar, resource, sequence, subkey=False, subkey_only=
log.warning('Key {0} is immutable, changes are not allowed'.format(key))
elif rkey in immutable:
log.warning("Key {0} is immutable, changes are not allowed".format(rkey))
elif operator == 'merge()':
log.debug("Merge key {0}: {1}".format(rkey, results[key]))
elif operator == 'merge()' or operator == 'imerge()':
if operator == 'merge()':
log.debug("Merge key {0}: {1}".format(rkey, results[key]))
else:
log.debug("Set immutable and merge key {0}: {1}".format(rkey, results[key]))
immutable[rkey] = True
if rkey in output and type(results[key]) != type(output[rkey]):
log.warning('You can''t merge different types for key {0}'.format(rkey))
elif type(results[key]) is dict:
@ -188,34 +440,18 @@ def ext_pillar(minion_id, pillar, resource, sequence, subkey=False, subkey_only=
output[rkey].extend(results[key])
else:
log.warning('Unsupported type need to be list or dict for key {0}'.format(rkey))
elif operator == 'unset()':
log.debug("Unset key {0}".format(rkey))
try:
elif operator == 'unset()' or operator == 'iunset()':
if operator == 'unset()':
log.debug("Unset key {0}".format(rkey))
else:
log.debug("Set immutable and unset key {0}".format(rkey))
immutable[rkey] = True
if rkey in output:
del output[rkey]
except KeyError:
pass
elif operator == 'immutable()':
log.debug("Set immutable and substitute key {0}: {1}".format(rkey, results[key]))
immutable[rkey] = True
output[rkey] = results[key]
elif operator == 'imerge()':
log.debug("Set immutable and merge key {0}: {1}".format(rkey, results[key]))
immutable[rkey] = True
if rkey in output and type(results[key]) != type(output[rkey]):
log.warning('You can''t merge different types for key {0}'.format(rkey))
elif type(results[key]) is dict:
output[rkey].update(results[key])
elif type(results[key]) is list:
output[rkey].extend(results[key])
else:
log.warning('Unsupported type need to be list or dict for key {0}'.format(rkey))
elif operator == 'iunset()':
log.debug("Set immutable and unset key {0}".format(rkey))
immutable[rkey] = True
try:
del output[rkey]
except KeyError:
pass
elif operator is not None:
log.warning('Unsupported operator {0}, skipping key {1}'.format(operator, rkey))
else:
@ -231,8 +467,46 @@ def ext_pillar(minion_id, pillar, resource, sequence, subkey=False, subkey_only=
pillar_data[resource] = tree.copy()
else:
pillar_data = tree
if __opts__['pepa_validate']:
pillar_data['pepa_keys'] = output.copy()
return pillar_data
def validate(output, resource):
'''
Validate Pepa templates
'''
try:
import cerberus
except ImportError:
log.critical('You need module cerberus in order to use validation')
return
roots = __opts__['pepa_roots']
valdir = join(roots['base'], resource, 'validate')
all_schemas = {}
pepa_schemas = []
for fn in glob.glob(valdir + '/*.yaml'):
log.info("Loading schema: {0}".format(fn))
template = jinja2.Template(open(fn).read())
data = output
data['grains'] = __grains__.copy()
data['pillar'] = __pillar__.copy()
schema = yaml.load(template.render(data))
all_schemas.update(schema)
pepa_schemas.append(fn)
val = cerberus.Validator()
if not val.validate(output['pepa_keys'], all_schemas):
for ekey, error in val.errors.items():
log.warning('Validation failed for key {0}: {1}'.format(ekey, error))
output['pepa_schema_keys'] = all_schemas
output['pepa_schemas'] = pepa_schemas
# Only used when called from a terminal
if __name__ == '__main__':
# Load configuration file
@ -263,9 +537,16 @@ if __name__ == '__main__':
if args.pillar:
__pillar__.update(yaml.load(args.pillar))
# Validate or not
if args.validate:
__opts__['pepa_validate'] = True
# Print results
result = ext_pillar(args.hostname, __pillar__, __opts__['ext_pillar'][loc]['pepa']['resource'], __opts__['ext_pillar'][loc]['pepa']['sequence'])
if __opts__['pepa_validate']:
validate(result, __opts__['ext_pillar'][loc]['pepa']['resource'])
yaml.dumper.SafeDumper.ignore_aliases = lambda self, data: True
if not args.no_color:
try:

View file

@ -1,23 +1,31 @@
# -*- coding: utf-8 -*-
'''
Simple returner for CouchDB. Optional configuration
settings are listed below, along with sane defaults.
settings are listed below, along with sane defaults:
couchdb.db: 'salt'
couchdb.url: 'http://salt:5984/'
.. code-block:: yaml
couchdb.db: 'salt'
couchdb.url: 'http://salt:5984/'
Alternative configuration values can be used by prefacing the configuration.
Any values not found in the alternative configuration will be pulled from
the default location::
the default location:
alternative.couchdb.db: 'salt'
alternative.couchdb.url: 'http://salt:5984/'
.. code-block:: yaml
To use the couchdb returner, append '--return couchdb' to the salt command. ex:
alternative.couchdb.db: 'salt'
alternative.couchdb.url: 'http://salt:5984/'
To use the couchdb returner, append ``--return couchdb`` to the salt command. Example:
.. code-block:: bash
salt '*' test.ping --return couchdb
To use the alternative configuration, append '--return_config alternative' to the salt command. ex:
To use the alternative configuration, append ``--return_config alternative`` to the salt command. Example:
.. code-block:: bash
salt '*' test.ping --return couchdb --return_config alternative
'''

View file

@ -35,9 +35,14 @@ class Roster(object):
Used to manage a roster of minions allowing the master to become outwardly
minion aware
'''
def __init__(self, opts, backends=None):
def __init__(self, opts, backends='flat'):
self.opts = opts
self.backends = backends
if isinstance(backends, list):
self.backends = backends
else:
self.backends = backends.split(',')
if not backends:
self.backends = ['flat']
self.rosters = salt.loader.roster(opts)
def _gen_back(self):
@ -51,8 +56,6 @@ class Roster(object):
if fun in self.rosters:
back.add(backend)
return back
for roster in self.rosters:
back.add(roster.split('.')[0])
return sorted(back)
def targets(self, tgt, tgt_type):
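The constructor change above accepts a list, a comma-delimited string, or a falsy value; the normalization can be sketched as a standalone function (hypothetical; the real code assigns ``self.backends``):

```python
def normalize_backends(backends='flat'):
    # Lists pass through; strings are split on commas; anything falsy
    # falls back to the default 'flat' roster.
    if isinstance(backends, list):
        result = backends
    else:
        result = backends.split(',') if backends else []
    if not result:
        result = ['flat']
    return result
```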

75
salt/roster/cache.py Normal file
View file

@ -0,0 +1,75 @@
# -*- coding: utf-8 -*-
'''
Use the minion cache on the master to derive IP addresses based on minion ID.
Currently it only contains logic to return an IPv4 address; it does not handle IPv6
or authentication (passwords, keys, etc).
It is possible to configure this roster to prefer a particular type of IP over
another. To configure the order, set the roster_order in the master config
file. The default for this is:
.. code-block:: yaml
roster_order:
- public
- private
- local
'''
# Import python libs
import os.path
import msgpack
# Import Salt libs
import salt.loader
import salt.utils
import salt.utils.cloud
import salt.utils.validate.net
from salt import syspaths
def targets(tgt, tgt_type='glob', **kwargs): # pylint: disable=W0613
'''
Return the target from the master's minion data cache, preferring IP
types in the order given by the roster_order config option
'''
cache = os.path.join(syspaths.CACHE_DIR, 'master', 'minions', tgt, 'data.p')
if not os.path.exists(cache):
return {}
roster_order = __opts__.get('roster_order', (
'public', 'private', 'local'
))
with salt.utils.fopen(cache, 'r') as fh_:
cache_data = msgpack.load(fh_)
ipv4 = cache_data.get('grains', {}).get('ipv4', [])
preferred_ip = extract_ipv4(roster_order, ipv4)
if preferred_ip is None:
return {}
return {
tgt: {
'host': preferred_ip,
}
}
def extract_ipv4(roster_order, ipv4):
'''
Extract the preferred IP address from the ipv4 grain
'''
for ip_type in roster_order:
for ip_ in ipv4:
if not salt.utils.validate.net.ipv4_addr(ip_):
continue
if ip_type == 'local' and ip_.startswith('127.'):
return ip_
elif ip_type == 'private' and not salt.utils.cloud.is_public_ip(ip_):
return ip_
elif ip_type == 'public' and salt.utils.cloud.is_public_ip(ip_):
return ip_
return None
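The preference walk in ``extract_ipv4`` can be reproduced with the stdlib ``ipaddress`` module standing in for ``salt.utils.cloud.is_public_ip`` (a rough sketch; the private/public split here is an approximation of the real helper):

```python
import ipaddress


def pick_ip(roster_order, addrs):
    # Walk the configured order; within each class, return the first
    # address of that class.  Invalid addresses are skipped.
    for ip_type in roster_order:
        for addr in addrs:
            try:
                ip = ipaddress.ip_address(addr)
            except ValueError:
                continue
            if ip_type == 'local' and addr.startswith('127.'):
                return addr
            if (ip_type == 'private' and ip.is_private
                    and not addr.startswith('127.')):
                return addr
            if ip_type == 'public' and not ip.is_private:
                return addr
    return None
```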

View file

@ -714,7 +714,10 @@ class State(object):
# In case a package has been installed into the current python
# process 'site-packages', the 'site' module needs to be reloaded in
# order for the newly installed package to be importable.
reload(site)
try:
reload(site)
except RuntimeError:
log.error('Error encountered during module reload. Modules were not reloaded.')
self.load_modules()
if not self.opts.get('local', False) and self.opts.get('multiprocessing', True):
self.functions['saltutil.refresh_modules']()
@ -2346,14 +2349,22 @@ class BaseHighState(object):
self.state.opts['pillar'] = self.state._gather_pillar()
self.state.module_refresh()
def render_state(self, sls, saltenv, mods, matches):
def render_state(self, sls, saltenv, mods, matches, local=False):
'''
Render a state file and retrieve all of the include states
'''
err = ''
errors = []
state_data = self.client.get_state(sls, saltenv)
fn_ = state_data.get('dest', False)
if not local:
state_data = self.client.get_state(sls, saltenv)
fn_ = state_data.get('dest', False)
else:
fn_ = sls
if not os.path.isfile(fn_):
errors.append(
'Specified SLS {0} on local filesystem cannot '
'be found.'.format(sls)
)
if not fn_:
errors.append(
'Specified SLS {0} in saltenv {1} is not '
@ -2381,7 +2392,10 @@ class BaseHighState(object):
exc_info_on_loglevel=logging.DEBUG
)
errors.append('{0}\n{1}'.format(msg, traceback.format_exc()))
mods.add('{0}:{1}'.format(saltenv, sls))
try:
mods.add('{0}:{1}'.format(saltenv, sls))
except AttributeError:
pass
if state:
if not isinstance(state, dict):
errors.append(

View file

@ -54,6 +54,7 @@ def extracted(name,
- source: https://github.com/downloads/Graylog2/graylog2-server/graylog2-server-0.9.6p1.tar.gz
- source_hash: md5=499ae16dcae71eeb7c3a30c75ea7a1a6
- archive_format: tar
- tar_options: v
- if_missing: /opt/graylog2-server-0.9.6p1/
name
@ -75,14 +76,16 @@ def extracted(name,
previously extracted.
tar_options
Only used for tar format, it need to be the tar argument specific to
this archive, such as 'J' for LZMA.
Required if used with ``archive_format: tar``, otherwise optional.
It needs to be the tar argument specific to the archive being extracted,
such as 'J' for LZMA or 'v' to verbosely list files processed.
Using this option means that the tar executable on the target will
be used, which is less platform independent.
Main operators like -x, --extract, --get, -c, etc. and -f/--file
Main operators like -x, --extract, --get, -c and -f/--file
**should not be used** here.
If this option is not set, then the Python tarfile module is used.
The tarfile module supports gzip and bz2 in Python 2.
If ``archive_format`` is ``zip`` or ``rar`` and this option is not set,
then the Python tarfile module is used. The tarfile module supports gzip
and bz2 in Python 2.
keep
Keep the archive in the minion's cache

View file

@ -787,7 +787,7 @@ def running(name, container=None, port_bindings=None, binds=None,
.. code-block:: yaml
- dns:
- volumes_from:
- name_other_container
network_mode

View file

@ -0,0 +1,149 @@
# -*- coding: utf-8 -*-
'''
Management of PostgreSQL schemas
================================
The postgres_schema module is used to create and manage Postgres schemas.
.. code-block:: yaml
public:
postgres_schema.present 'dbname' 'name'
'''
# Import Python libs
import logging
log = logging.getLogger(__name__)
def __virtual__():
'''
Only load if the postgres module is present
'''
return 'postgres.schema_exists' in __salt__
def present(dbname, name,
owner=None,
db_user=None, db_password=None,
db_host=None, db_port=None):
'''
Ensure that the named schema is present in the database.
dbname
The name of the database to work on
name
The name of the schema to manage
db_user
database username if different from config or default
db_password
Password for the specified user, if any
db_host
Database host if different from config or default
db_port
Database port if different from config or default
'''
ret = {'dbname': dbname,
'name': name,
'changes': {},
'result': True,
'comment': 'Schema {0} is already present in '
'database {1}'.format(name, dbname)}
db_args = {
'db_user': db_user,
'db_password': db_password,
'db_host': db_host,
'db_port': db_port
}
# check if schema exists
schema_attr = __salt__['postgres.schema_get'](dbname, name, **db_args)
cret = None
# The schema is not present, make it!
if schema_attr is None:
cret = __salt__['postgres.schema_create'](dbname,
name,
owner=owner,
**db_args)
else:
msg = 'Schema {0} already exists in database {1}'
cret = None
if cret:
msg = 'Schema {0} has been created in database {1}'
ret['result'] = True
ret['changes'][name] = 'Present'
elif cret is not None:
msg = 'Failed to create schema {0} in database {1}'
ret['result'] = False
else:
msg = 'Schema {0} already exists in database {1}'
ret['result'] = True
ret['comment'] = msg.format(name, dbname)
return ret
def absent(dbname, name,
db_user=None, db_password=None,
db_host=None, db_port=None):
'''
Ensure that the named schema is absent
dbname
The name of the database to work on
name
The name of the schema to remove
db_user
database username if different from config or default
db_password
Password for the specified user, if any
db_host
Database host if different from config or default
db_port
Database port if different from config or default
'''
ret = {'name': name,
'dbname': dbname,
'changes': {},
'result': True,
'comment': ''}
db_args = {
'db_user': db_user,
'db_password': db_password,
'db_host': db_host,
'db_port': db_port
}
# check if schema exists and remove it
if __salt__['postgres.schema_exists'](dbname, name, **db_args):
if __salt__['postgres.schema_remove'](dbname, name, **db_args):
ret['comment'] = 'Schema {0} has been removed' \
' from database {1}'.format(name, dbname)
ret['changes'][name] = 'Absent'
return ret
else:
ret['result'] = False
ret['comment'] = 'Schema {0} failed to be removed'.format(name)
return ret
else:
ret['comment'] = 'Schema {0} is not present in database {1},' \
' so it cannot be removed'.format(name, dbname)
return ret

View file

@ -69,6 +69,7 @@ def present(name, value, vtype='REG_DWORD', reflection=True):
return ret
def absent(name):
'''
Remove a registry key

122
salt/states/syslog_ng.py Normal file
View file

@ -0,0 +1,122 @@
# -*- coding: utf-8 -*-
'''
State module for syslog_ng
==========================
:maintainer: Tibor Benke <btibi@sch.bme.hu>
:maturity: new
:depends: cmd, ps, syslog_ng
:platform: all
Users can generate syslog-ng configuration files from YAML format or use
plain ones and reload, start, or stop their syslog-ng by using this module.
Details
-------
The service module is not available on all systems, so this module includes
:mod:`syslog_ng.reloaded <salt.states.syslog_ng.reloaded>`,
:mod:`syslog_ng.stopped <salt.states.syslog_ng.stopped>`,
and :mod:`syslog_ng.started <salt.states.syslog_ng.started>` functions.
If the service module is available on the computers, users should use that.
Users can generate syslog-ng configuration with the
:mod:`syslog_ng.config <salt.states.syslog_ng.config>` function.
For more information see :doc:`syslog-ng state usage </topics/tutorials/syslog_ng-state-usage>`.
Syslog-ng configuration file format
-----------------------------------
The syntax of a configuration snippet in syslog-ng.conf:
::
object_type object_id {<options>};
These constructions are also called statements. Options appear inside them:
::
option(parameter1, parameter2); option2(parameter1, parameter2);
You can find more information about syslog-ng's configuration syntax in the
Syslog-ng Admin guide:
http://www.balabit.com/sites/default/files/documents/syslog-ng-ose-3.5-guides/en/syslog-ng-ose-v3.5-guide-admin/html-single/index.html#syslog-ng.conf.5
'''
from __future__ import generators, print_function, with_statement
import logging
log = logging.getLogger(__name__)
def config(name,
config,
write=True):
'''
Builds syslog-ng configuration.
name : the id of the Salt document
config : the parsed YAML code
write : if True, it writes the config into the configuration file,
otherwise just returns it
'''
return __salt__['syslog_ng.config'](name, config, write)
def stopped(name=None):
'''
Kills syslog-ng.
'''
return __salt__['syslog_ng.stop'](name)
def started(name=None,
user=None,
group=None,
chroot=None,
caps=None,
no_caps=False,
pidfile=None,
enable_core=False,
fd_limit=None,
verbose=False,
debug=False,
trace=False,
yydebug=False,
persist_file=None,
control=None,
worker_threads=None,
*args,
**kwargs):
'''
Ensures that syslog-ng is started via the given parameters.
Users shouldn't use this function if the service module is available on
their system.
'''
return __salt__['syslog_ng.start'](name=name,
user=user,
group=group,
chroot=chroot,
caps=caps,
no_caps=no_caps,
pidfile=pidfile,
enable_core=enable_core,
fd_limit=fd_limit,
verbose=verbose,
debug=debug,
trace=trace,
yydebug=yydebug,
persist_file=persist_file,
control=control,
worker_threads=worker_threads)
def reloaded(name):
'''
Reloads syslog-ng.
'''
return __salt__['syslog_ng.reload'](name)

View file

@ -0,0 +1,30 @@
DEVICE="{{name}}"
{% if addr %}HWADDR="{{addr}}"
{%endif%}{% if userctl %}USERCTL="{{userctl}}"
{%endif%}{% if master %}MASTER="{{master}}"
{%endif%}{% if slave %}SLAVE="{{slave}}"
{%endif%}{% if vlan %}VLAN="{{vlan}}"
{%endif%}{% if devtype %}TYPE="{{devtype}}"
{%endif%}{% if proto %}BOOTPROTO="{{proto}}"
{%endif%}{% if onboot %}ONBOOT="{{onboot}}"
{%endif%}{% if onparent %}ONPARENT={{onparent}}
{%endif%}{% if ipaddr %}IPADDR="{{ipaddr}}"
{%endif%}{% if netmask %}NETMASK="{{netmask}}"
{%endif%}{% if gateway %}GATEWAY="{{gateway}}"
{%endif%}{% if enable_ipv6 %}IPV6INIT="yes"
{% if ipv6_autoconf %}IPV6_AUTOCONF="{{ipv6_autoconf}}"
{%endif%}{% if ipv6addr %}IPV6ADDR="{{ipv6addr}}"
{%endif%}{% if ipv6gateway %}IPV6_DEFAULTGW="{{ipv6gateway}}"
{%endif%}{%endif%}{% if srcaddr %}SRCADDR="{{srcaddr}}"
{%endif%}{% if peerdns %}PEERDNS="{{peerdns}}"
{%endif%}{% if defroute %}DEFROUTE="{{defroute}}"
{%endif%}{% if bridge %}BRIDGE="{{bridge}}"
{%endif%}{% if delay %}DELAY="{{delay}}"
{%endif%}{% if my_inner_ipaddr %}MY_INNER_IPADDR={{my_inner_ipaddr}}
{%endif%}{% if my_outer_ipaddr %}MY_OUTER_IPADDR={{my_outer_ipaddr}}
{%endif%}{%if bonding %}BONDING_OPTS="{%for item in bonding %}{{item}}={{bonding[item]}} {%endfor%}"
{%endif%}{% if ethtool %}ETHTOOL_OPTS="{%for item in ethtool %}{{item}} {{ethtool[item]}} {%endfor%}"
{%endif%}{% if domain %}DOMAIN="{{ domain|join(' ') }}"
{% endif %}{% for server in dns -%}
DNS{{loop.index}}="{{server}}"
{% endfor -%}

377
salt/utils/aws.py Normal file
View file

@ -0,0 +1,377 @@
# -*- coding: utf-8 -*-
'''
Connection library for AWS
.. versionadded:: Lithium
This is a base library used by a number of AWS services.
:depends: requests
'''
# Import Python libs
import sys
import time
import binascii
import datetime
import hashlib
import hmac
import logging
import urllib
import urlparse
import requests
# Import Salt libs
import salt.utils.xmlutil as xml
from salt._compat import ElementTree as ET
LOG = logging.getLogger(__name__)
DEFAULT_LOCATION = 'us-east-1'
DEFAULT_AWS_API_VERSION = '2013-10-15'
AWS_RETRY_CODES = [
'RequestLimitExceeded',
'InsufficientInstanceCapacity',
'InternalError',
'Unavailable',
'InsufficientAddressCapacity',
'InsufficientReservedInstanceCapacity',
]
def sig2(method, endpoint, params, provider, aws_api_version):
'''
Sign a query against AWS services using Signature Version 2 Signing
Process. This is documented at:
http://docs.aws.amazon.com/general/latest/gr/signature-version-2.html
'''
timenow = datetime.datetime.utcnow()
timestamp = timenow.strftime('%Y-%m-%dT%H:%M:%SZ')
params_with_headers = params.copy()
params_with_headers['AWSAccessKeyId'] = provider.get('id', None)
params_with_headers['SignatureVersion'] = '2'
params_with_headers['SignatureMethod'] = 'HmacSHA256'
params_with_headers['Timestamp'] = '{0}'.format(timestamp)
params_with_headers['Version'] = aws_api_version
keys = sorted(params_with_headers.keys())
values = map(params_with_headers.get, keys)
querystring = urllib.urlencode(list(zip(keys, values)))
canonical = '{0}\n{1}\n/\n{2}'.format(
method.encode('utf-8'),
endpoint.encode('utf-8'),
querystring.encode('utf-8'),
)
hashed = hmac.new(provider['key'], canonical, hashlib.sha256)
sig = binascii.b2a_base64(hashed.digest())
params_with_headers['Signature'] = sig.strip()
return params_with_headers
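The V2 canonical string built by ``sig2`` above is just the method, endpoint, path, and sorted query string joined by newlines, then HMAC-SHA256-signed and base64-encoded. A standalone sketch with made-up credentials and parameters (the real code also URL-encodes the pairs and adds ``AWSAccessKeyId``, ``Timestamp``, etc.):

```python
import base64
import hashlib
import hmac

# Made-up parameters; sorted by key as the signing process requires.
params = {
    'Action': 'DescribeInstances',
    'SignatureMethod': 'HmacSHA256',
    'SignatureVersion': '2',
}
querystring = '&'.join(
    '{0}={1}'.format(k, params[k]) for k in sorted(params)
)
canonical = 'GET\nec2.us-east-1.amazonaws.com\n/\n' + querystring
signature = base64.b64encode(
    hmac.new(b'secret', canonical.encode('utf-8'), hashlib.sha256).digest()
).strip()
```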
def sig4(method, endpoint, params, provider, aws_api_version, location,
product='ec2', uri='/', requesturl=None):
'''
Sign a query against AWS services using Signature Version 4 Signing
Process. This is documented at:
http://docs.aws.amazon.com/general/latest/gr/sigv4_signing.html
http://docs.aws.amazon.com/general/latest/gr/sigv4-signed-request-examples.html
http://docs.aws.amazon.com/general/latest/gr/sigv4-create-canonical-request.html
'''
timenow = datetime.datetime.utcnow()
timestamp = timenow.strftime('%Y-%m-%dT%H:%M:%SZ')
params_with_headers = params.copy()
params_with_headers['Version'] = aws_api_version
keys = sorted(params_with_headers.keys())
values = map(params_with_headers.get, keys)
querystring = urllib.urlencode(list(zip(keys, values)))
amzdate = timenow.strftime('%Y%m%dT%H%M%SZ')
datestamp = timenow.strftime('%Y%m%d')
canonical_headers = 'host:{0}\nx-amz-date:{1}\n'.format(
endpoint,
amzdate,
)
signed_headers = 'host;x-amz-date'
algorithm = 'AWS4-HMAC-SHA256'
# Create payload hash (hash of the request body content). For GET
# requests, the payload is an empty string ('').
payload_hash = hashlib.sha256('').hexdigest()
# Combine elements to create the canonical request
canonical_request = '\n'.join((
method,
uri,
querystring,
canonical_headers,
signed_headers,
payload_hash
))
# Create the string to sign
credential_scope = '/'.join((
datestamp, location, product, 'aws4_request'
))
string_to_sign = '\n'.join((
algorithm,
amzdate,
credential_scope,
hashlib.sha256(canonical_request).hexdigest()
))
# Create the signing key using the function defined above.
signing_key = _sig_key(
provider.get('key', None),
datestamp,
location,
product
)
# Sign the string_to_sign using the signing_key
signature = hmac.new(
signing_key,
string_to_sign.encode('utf-8'),
hashlib.sha256).hexdigest()
# Add signing information to the request
authorization_header = (
'{0} Credential={1}/{2}, SignedHeaders={3}, Signature={4}'
).format(
algorithm,
provider.get('id', None),
credential_scope,
signed_headers,
signature,
)
headers = {
'x-amz-date': amzdate,
'Authorization': authorization_header
}
requesturl = '{0}?{1}'.format(requesturl, querystring)
return headers, requesturl
def _sign(key, msg):
'''
Key derivation functions. See:
http://docs.aws.amazon.com/general/latest/gr/signature-v4-examples.html#signature-v4-examples-python
'''
return hmac.new(key, msg.encode('utf-8'), hashlib.sha256).digest()
def _sig_key(key, date_stamp, regionName, serviceName):
'''
Get a signature key. See:
http://docs.aws.amazon.com/general/latest/gr/signature-v4-examples.html#signature-v4-examples-python
'''
kDate = _sign(('AWS4' + key).encode('utf-8'), date_stamp)
kRegion = _sign(kDate, regionName)
kService = _sign(kRegion, serviceName)
kSigning = _sign(kService, 'aws4_request')
return kSigning
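The derivation chain in ``_sig_key`` can be exercised on its own; the secret, date, region, and service below are made-up example inputs, not real credentials:

```python
import hashlib
import hmac

def _sign(key, msg):
    # One HMAC-SHA256 step of the AWS4 key derivation
    return hmac.new(key, msg.encode('utf-8'), hashlib.sha256).digest()

# Chain: secret -> date key -> region key -> service key -> signing key
k_date = _sign(('AWS4' + 'example-secret').encode('utf-8'), '20140913')
k_region = _sign(k_date, 'us-east-1')
k_service = _sign(k_region, 'ec2')
signing_key = _sign(k_service, 'aws4_request')
```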
def query(params=None, setname=None, requesturl=None, location=None,
return_url=False, return_root=False, opts=None, provider=None,
endpoint=None, product='ec2', sigver='2'):
'''
Perform a query against AWS services using Signature Version 2 Signing
Process. This is documented at:
http://docs.aws.amazon.com/general/latest/gr/signature-version-2.html
Regions and endpoints are documented at:
http://docs.aws.amazon.com/general/latest/gr/rande.html
Default ``product`` is ``ec2``. Valid ``product`` names are:
.. code-block:: yaml
- autoscaling (Auto Scaling)
- cloudformation (CloudFormation)
- ec2 (Elastic Compute Cloud)
- elasticache (ElastiCache)
- elasticbeanstalk (Elastic BeanStalk)
- elasticloadbalancing (Elastic Load Balancing)
- elasticmapreduce (Elastic MapReduce)
- iam (Identity and Access Management)
- importexport (Import/Export)
- monitoring (CloudWatch)
- rds (Relational Database Service)
- sdb (SimpleDB)
- sns (Simple Notification Service)
- sqs (Simple Queue Service)
'''
if params is None:
params = {}
if opts is None:
opts = {}
if provider is None:
function = opts.get('function', ())
providers = opts.get('providers', {})
prov_dict = providers.get(function[1], None)
if prov_dict is not None:
driver = prov_dict.keys()[0]
provider = prov_dict[driver]
service_url = provider.get('service_url', 'amazonaws.com')
if not location:
location = get_location(opts, provider)
if endpoint is None:
if not requesturl:
endpoint = provider.get(
'endpoint',
'{0}.{1}.{2}'.format(product, location, service_url)
)
requesturl = 'https://{0}/'.format(endpoint)
else:
endpoint = urlparse.urlparse(requesturl).netloc
if endpoint == '':
endpoint_err = ('Could not find a valid endpoint in the '
'requesturl: {0}. Looking for something '
'like https://some.aws.endpoint/?args').format(
requesturl
)
LOG.error(endpoint_err)
if return_url is True:
return {'error': endpoint_err}, requesturl
return {'error': endpoint_err}
LOG.debug('Using AWS endpoint: {0}'.format(endpoint))
method = 'GET'
aws_api_version = provider.get(
'aws_api_version', provider.get(
'{0}_api_version'.format(product),
DEFAULT_AWS_API_VERSION
)
)
if sigver == '4':
headers, requesturl = sig4(
method, endpoint, params, provider, aws_api_version, location, product, requesturl=requesturl
)
params_with_headers = {}
else:
params_with_headers = sig2(
method, endpoint, params, provider, aws_api_version
)
headers = {}
attempts = 5
while attempts > 0:
LOG.debug('AWS Request: {0}'.format(requesturl))
LOG.trace('AWS Request Parameters: {0}'.format(params_with_headers))
try:
result = requests.get(requesturl, headers=headers, params=params_with_headers)
LOG.debug(
'AWS Response Status Code: {0}'.format(
result.status_code
)
)
LOG.trace(
'AWS Response Text: {0}'.format(
result.text
)
)
result.raise_for_status()
break
except requests.exceptions.HTTPError as exc:
root = ET.fromstring(exc.response.content)
data = xml.to_dict(root)
# check to see if we should retry the query
err_code = data.get('Errors', {}).get('Error', {}).get('Code', '')
if attempts > 0 and err_code and err_code in AWS_RETRY_CODES:
attempts -= 1
LOG.error(
'AWS Response Status Code and Error: [{0} {1}] {2}; '
'Attempts remaining: {3}'.format(
exc.response.status_code, exc, data, attempts
)
)
# Wait a bit before continuing to prevent throttling
time.sleep(2)
continue
LOG.error(
'AWS Response Status Code and Error: [{0} {1}] {2}'.format(
exc.response.status_code, exc, data
)
)
if return_url is True:
return {'error': data}, requesturl
return {'error': data}
else:
LOG.error(
'AWS Response Status Code and Error: [{0} {1}] {2}'.format(
exc.response.status_code, exc, data
)
)
if return_url is True:
return {'error': data}, requesturl
return {'error': data}
response = result.text
root = ET.fromstring(response)
items = root[1]
if return_root is True:
items = root
if setname:
if sys.version_info < (2, 7):
children_len = len(root.getchildren())
else:
children_len = len(root)
for item in range(0, children_len):
comps = root[item].tag.split('}')
if comps[1] == setname:
items = root[item]
ret = []
for item in items:
ret.append(xml.to_dict(item))
if return_url is True:
return ret, requesturl
return ret
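The retry loop inside ``query`` follows a generic pattern: retry transient errors a fixed number of times, pausing between attempts to avoid throttling. A self-contained sketch of just that pattern (the error class, codes, and helper names here are stand-ins, not Salt's):

```python
import time

def retry(func, retryable_codes, attempts=5, pause=2):
    # Call func(), retrying while the raised error carries a retryable code.
    while True:
        try:
            return func()
        except Exception as exc:
            code = getattr(exc, 'code', None)
            attempts -= 1
            if attempts > 0 and code in retryable_codes:
                time.sleep(pause)  # back off briefly before the next attempt
                continue
            raise

class Throttled(Exception):
    code = 'RequestLimitExceeded'

calls = []

def flaky():
    # Fails twice with a retryable error, then succeeds.
    calls.append(1)
    if len(calls) < 3:
        raise Throttled()
    return 'ok'

result = retry(flaky, ('RequestLimitExceeded',), pause=0)
```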
def get_location(opts, provider=None):
'''
Return the region to use, in this order:
opts['location']
provider['location']
DEFAULT_LOCATION
'''
return opts.get(
'location', provider.get(
'location', DEFAULT_LOCATION
)
)

View file

@ -1024,14 +1024,14 @@ def deploy_script(host,
# Minion configuration
if minion_pem:
sftp_file('{0}/minion.pem'.format(tmp_dir), minion_pem, kwargs)
sftp_file('{0}/minion.pem'.format(tmp_dir), minion_pem, ssh_kwargs)
ret = root_cmd('chmod 600 {0}/minion.pem'.format(tmp_dir),
tty, sudo, **ssh_kwargs)
if ret:
raise SaltCloudSystemExit(
'Cant set perms on {0}/minion.pem'.format(tmp_dir))
if minion_pub:
sftp_file('{0}/minion.pub'.format(tmp_dir), minion_pub, kwargs)
sftp_file('{0}/minion.pub'.format(tmp_dir), minion_pub, ssh_kwargs)
if minion_conf:
if not isinstance(minion_conf, dict):
@ -1057,7 +1057,7 @@ def deploy_script(host,
# Master configuration
if master_pem:
sftp_file('{0}/master.pem'.format(tmp_dir), master_pem, kwargs)
sftp_file('{0}/master.pem'.format(tmp_dir), master_pem, ssh_kwargs)
ret = root_cmd('chmod 600 {0}/master.pem'.format(tmp_dir),
tty, sudo, **ssh_kwargs)
if ret:
@ -1065,7 +1065,7 @@ def deploy_script(host,
'Cant set perms on {0}/master.pem'.format(tmp_dir))
if master_pub:
sftp_file('{0}/master.pub'.format(tmp_dir), master_pub, kwargs)
sftp_file('{0}/master.pub'.format(tmp_dir), master_pub, ssh_kwargs)
if master_conf:
if not isinstance(master_conf, dict):
@ -1116,7 +1116,7 @@ def deploy_script(host,
rpath = os.path.join(
preseed_minion_keys_tempdir, minion_id
)
sftp_file(rpath, minion_key, kwargs)
sftp_file(rpath, minion_key, ssh_kwargs)
if ssh_kwargs['username'] != 'root':
root_cmd(
@ -1134,7 +1134,7 @@ def deploy_script(host,
if script:
# got strange escaping issues with sudoer, going onto a
# subshell fixes that
sftp_file('{0}/deploy.sh'.format(tmp_dir), script, kwargs)
sftp_file('{0}/deploy.sh'.format(tmp_dir), script, ssh_kwargs)
ret = root_cmd(
('sh -c "( chmod +x \\"{0}/deploy.sh\\" )";'
'exit $?').format(tmp_dir),

View file

@ -45,7 +45,10 @@ def _read_proc_file(path, opts):
data = serial.loads(buf)
else:
# Proc file is empty, remove
os.remove(path)
try:
os.remove(path)
except OSError:
pass
return None
if not isinstance(data, dict):
# Invalid serial object
@ -53,19 +56,28 @@ def _read_proc_file(path, opts):
if not salt.utils.process.os_is_running(data['pid']):
# The process is no longer running, clear out the file and
# continue
os.remove(path)
try:
os.remove(path)
except OSError:
pass
return None
if opts['multiprocessing']:
if data.get('pid') == pid:
return None
else:
if data.get('pid') != pid:
os.remove(path)
try:
os.remove(path)
except OSError:
pass
return None
if data.get('jid') == current_thread:
return None
if not data.get('jid') in [x.name for x in threading.enumerate()]:
os.remove(path)
try:
os.remove(path)
except OSError:
pass
return None
return data

View file

@ -1509,7 +1509,7 @@ class SaltCMDOptionParser(OptionParser, ConfigDirMixIn, MergeConfigMixIn,
default=False,
dest='mktoken',
action='store_true',
help=('Generate and save an authentication token for re-use. The'
help=('Generate and save an authentication token for re-use. The '
'token is generated and made available for the period '
'defined in the Salt Master.')
)
@ -2193,7 +2193,7 @@ class SaltSSHOptionParser(OptionParser, ConfigDirMixIn, MergeConfigMixIn,
self.add_option(
'--roster',
dest='roster',
default='',
default='flat',
help=('Define which roster system to use, this defines if a '
'database backend, scanner, or custom roster system is '
'used. Default is the flat file roster.')
@ -2242,6 +2242,13 @@ class SaltSSHOptionParser(OptionParser, ConfigDirMixIn, MergeConfigMixIn,
action='store_true',
help=('Turn on command verbosity, display jid')
)
self.add_option(
'-s', '--static',
default=False,
action='store_true',
help=('Return the data from minions as a group after they '
'all return.')
)
auth_group = optparse.OptionGroup(
self, 'Authentication Options',

368
tests/buildpackage.py Executable file
View file

@ -0,0 +1,368 @@
# -*- coding: utf-8 -*-
# Maintainer: Erik Johnson (https://github.com/terminalmage)
#
# WARNING: This script will recursively remove the build and artifact
# directories.
#
# This script is designed for speed; therefore, it does not use mock and does not
# run tests. It *will* install the build deps on the machine running the script.
#
import errno
import glob
import logging
import os
import re
import shutil
import subprocess
import sys
from optparse import OptionParser, OptionGroup
logging.QUIET = 0
logging.GARBAGE = 1
logging.TRACE = 5
logging.addLevelName(logging.QUIET, 'QUIET')
logging.addLevelName(logging.TRACE, 'TRACE')
logging.addLevelName(logging.GARBAGE, 'GARBAGE')
LOG_LEVELS = {
'all': logging.NOTSET,
'debug': logging.DEBUG,
'error': logging.ERROR,
'critical': logging.CRITICAL,
'garbage': logging.GARBAGE,
'info': logging.INFO,
'quiet': logging.QUIET,
'trace': logging.TRACE,
'warning': logging.WARNING,
}
log = logging.getLogger(__name__)
# FUNCTIONS
def _abort(msgs):
'''
Unrecoverable error, pull the plug
'''
if not isinstance(msgs, list):
msgs = [msgs]
for msg in msgs:
log.error(msg)
sys.stderr.write(msg + '\n\n')
sys.stderr.write('Build failed. See log file for further details.\n')
sys.exit(1)
# HELPER FUNCTIONS
def _init():
'''
Parse CLI options.
'''
parser = OptionParser()
parser.add_option('--platform',
dest='platform',
help='Platform (\'os\' grain)')
parser.add_option('--log-level',
dest='log_level',
default='warning',
help='Control verbosity of logging. Default: %default')
# All arguments dealing with file paths (except for platform-specific ones
# like those for SPEC files) should be placed in this group so that
# relative paths are properly expanded.
path_group = OptionGroup(parser, 'File/Directory Options')
path_group.add_option('--source-dir',
default='/testing',
help='Source directory. Must be a git checkout. '
'(default: %default)')
path_group.add_option('--build-dir',
default='/tmp/salt-buildpackage',
help='Build root, will be removed if it exists '
'prior to running script. (default: %default)')
path_group.add_option('--artifact-dir',
default='/tmp/salt-packages',
help='Location where build artifacts should be '
'placed for Jenkins to retrieve them '
'(default: %default)')
parser.add_option_group(path_group)
# This group should also consist of nothing but file paths, which will be
# normalized below.
rpm_group = OptionGroup(parser, 'RPM-specific File/Directory Options')
rpm_group.add_option('--spec',
dest='spec_file',
default='/tmp/salt.spec',
help='Spec file to use as a template to build RPM. '
'(default: %default)')
parser.add_option_group(rpm_group)
opts = parser.parse_args()[0]
# Expand any relative paths
for group in (path_group, rpm_group):
for path_opt in [opt.dest for opt in group.option_list]:
path = getattr(opts, path_opt)
if not os.path.isabs(path):
# Expand ~ or ~user
path = os.path.expanduser(path)
if not os.path.isabs(path):
# Still not absolute, resolve '..'
path = os.path.realpath(path)
# Update attribute with absolute path
setattr(opts, path_opt, path)
# Sanity checks
problems = []
if not opts.platform:
problems.append('Platform (\'os\' grain) required')
if not os.path.isdir(opts.source_dir):
problems.append('Source directory {0} not found'
.format(opts.source_dir))
try:
shutil.rmtree(opts.build_dir)
except OSError as exc:
if exc.errno not in (errno.ENOENT, errno.ENOTDIR):
problems.append('Unable to remove pre-existing destination '
'directory {0}: {1}'.format(opts.build_dir, exc))
finally:
try:
os.makedirs(opts.build_dir)
except OSError as exc:
problems.append('Unable to create destination directory {0}: {1}'
.format(opts.build_dir, exc))
try:
shutil.rmtree(opts.artifact_dir)
except OSError as exc:
if exc.errno not in (errno.ENOENT, errno.ENOTDIR):
problems.append('Unable to remove pre-existing artifact directory '
'{0}: {1}'.format(opts.artifact_dir, exc))
finally:
try:
os.makedirs(opts.artifact_dir)
except OSError as exc:
problems.append('Unable to create artifact directory {0}: {1}'
.format(opts.artifact_dir, exc))
# Create log file in the artifact dir so it is sent back to master if the
# job fails
opts.log_file = os.path.join(opts.artifact_dir, 'salt-buildpackage.log')
if problems:
_abort(problems)
return opts
def _move(src, dst):
'''
Wrapper around shutil.move()
'''
try:
os.remove(os.path.join(dst, os.path.basename(src)))
except OSError as exc:
if exc.errno != errno.ENOENT:
_abort(exc)
try:
shutil.move(src, dst)
except shutil.Error as exc:
_abort(exc)
def _run_command(args):
log.info('Running command: {0}'.format(args))
proc = subprocess.Popen(args,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE)
stdout, stderr = proc.communicate()
if stdout:
log.debug('Command output: \n{0}'.format(stdout))
if stderr:
log.error(stderr)
log.info('Return code: {0}'.format(proc.returncode))
return stdout, stderr, proc.returncode
def _make_sdist(opts, python_bin='python'):
os.chdir(opts.source_dir)
stdout, stderr, rcode = _run_command([python_bin, 'setup.py', 'sdist'])
if rcode == 0:
# Find the sdist with the most recently-modified metadata
sdist_path = max(
glob.iglob(os.path.join(opts.source_dir, 'dist', 'salt-*.tar.gz')),
key=os.path.getctime
)
log.info('sdist is located at {0}'.format(sdist_path))
return sdist_path
else:
_abort('Failed to create sdist')
# BUILDER FUNCTIONS
def build_centos(opts):
'''
Build an RPM
'''
log.info('Building CentOS RPM')
log.info('Detecting major release')
try:
with open('/etc/redhat-release', 'r') as fp_:
redhat_release = fp_.read().strip()
major_release = int(redhat_release.split()[2].split('.')[0])
except (ValueError, IndexError):
_abort('Unable to determine major release from /etc/redhat-release '
'contents: {0!r}'.format(redhat_release))
except IOError as exc:
_abort('{0}'.format(exc))
log.info('major_release: {0}'.format(major_release))
define_opts = [
'--define',
'_topdir {0}'.format(os.path.join(opts.build_dir))
]
build_reqs = ['rpm-build']
if major_release == 5:
python_bin = 'python26'
define_opts.extend(['--define', 'dist .el5'])
build_reqs.extend(['python26-devel'])
elif major_release == 6:
build_reqs.extend(['python-devel'])
elif major_release == 7:
build_reqs.extend(['python-devel', 'systemd-units'])
else:
_abort('Unsupported major release: {0}'.format(major_release))
# Install build deps
_run_command(['yum', '-y', 'install'] + build_reqs)
# Make the sdist
try:
sdist = _make_sdist(opts, python_bin=python_bin)
except NameError:
sdist = _make_sdist(opts)
# Example tarball names:
# - Git checkout: salt-2014.7.0rc1-1584-g666602e.tar.gz
# - Tagged release: salt-2014.7.0.tar.gz
tarball_re = re.compile(r'^salt-([^-]+)(?:-(\d+)-(g[0-9a-f]+))?\.tar\.gz$')
try:
base, offset, oid = tarball_re.match(os.path.basename(sdist)).groups()
except AttributeError:
_abort('Unable to extract version info from sdist filename {0!r}'
.format(sdist))
if offset is None:
salt_pkgver = salt_srcver = base
else:
salt_pkgver = '.'.join((base, offset, oid))
salt_srcver = '-'.join((base, offset, oid))
log.info('salt_pkgver: {0}'.format(salt_pkgver))
log.info('salt_srcver: {0}'.format(salt_srcver))
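The regex above handles both tarball shapes listed in the comment; using the same example names:

```python
import re

tarball_re = re.compile(r'^salt-([^-]+)(?:-(\d+)-(g[0-9a-f]+))?\.tar\.gz$')

# Tagged release: the offset and object-id groups stay unset
tagged = tarball_re.match('salt-2014.7.0.tar.gz').groups()

# Git checkout: base version, commit offset, abbreviated object id
checkout = tarball_re.match('salt-2014.7.0rc1-1584-g666602e.tar.gz').groups()
```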
# Setup build environment
for build_dir in 'BUILD BUILDROOT RPMS SOURCES SPECS SRPMS'.split():
path = os.path.join(opts.build_dir, build_dir)
try:
os.makedirs(path)
except OSError:
pass
if not os.path.isdir(path):
_abort('Unable to make directory: {0}'.format(path))
# Get sources into place
build_sources_path = os.path.join(opts.build_dir, 'SOURCES')
rpm_sources_path = os.path.join(opts.source_dir, 'pkg', 'rpm')
_move(sdist, build_sources_path)
for src in ('salt-master', 'salt-syndic', 'salt-minion', 'salt-api',
'salt-master.service', 'salt-syndic.service',
'salt-minion.service', 'salt-api.service',
'README.fedora', 'logrotate.salt'):
shutil.copy(os.path.join(rpm_sources_path, src), build_sources_path)
# Prepare SPEC file
spec_path = os.path.join(opts.build_dir, 'SPECS', 'salt.spec')
with open(opts.spec_file, 'r') as spec:
spec_lines = spec.read().splitlines()
with open(spec_path, 'w') as fp_:
for line in spec_lines:
if line.startswith('%global srcver '):
line = '%global srcver {0}'.format(salt_srcver)
elif line.startswith('Version: '):
line = 'Version: {0}'.format(salt_pkgver)
fp_.write(line + '\n')
# Do the thing
cmd = ['rpmbuild', '-ba']
cmd.extend(define_opts)
cmd.append(spec_path)
stdout, stderr, rcode = _run_command(cmd)
if rcode != 0:
_abort('Build failed.')
packages = glob.glob(
os.path.join(
opts.build_dir,
'RPMS',
'noarch',
'salt-*{0}*.noarch.rpm'.format(salt_pkgver)
)
)
packages.extend(
glob.glob(
os.path.join(
opts.build_dir,
'SRPMS',
'salt-{0}*.src.rpm'.format(salt_pkgver)
)
)
)
return packages
# MAIN
if __name__ == '__main__':
opts = _init()
print('Starting {0} build. Progress will be logged to {1}.'
.format(opts.platform, opts.log_file))
# Setup logging
log_format = '%(asctime)s.%(msecs)03d %(levelname)s: %(message)s'
log_datefmt = '%H:%M:%S'
log_level = LOG_LEVELS[opts.log_level] \
if opts.log_level in LOG_LEVELS \
else LOG_LEVELS['warning']
logging.basicConfig(filename=opts.log_file,
format=log_format,
datefmt=log_datefmt,
level=LOG_LEVELS[opts.log_level])
if opts.log_level not in LOG_LEVELS:
log.error('Invalid log level {0!r}, falling back to \'warning\''
.format(opts.log_level))
# Build for the specified platform
if not opts.platform:
_abort('Platform required')
elif opts.platform.lower() == 'centos':
artifacts = build_centos(opts)
else:
_abort('Unsupported platform {0!r}'.format(opts.platform))
msg = ('Build complete. Artifacts will be stored in {0}'
.format(opts.artifact_dir))
log.info(msg)
print(msg) # pylint: disable=C0325
for artifact in artifacts:
shutil.copy(artifact, opts.artifact_dir)
log.info('Copied {0} to artifact directory'.format(artifact))
log.info('Done!')

View file

@ -987,7 +987,7 @@ class ModuleCase(TestCase, SaltClientTestCaseMixIn):
# Try to match stalled state functions
orig[minion_tgt] = self._check_state_return(
orig[minion_tgt], func=function
orig[minion_tgt]
)
return orig[minion_tgt]
@ -1017,7 +1017,7 @@ class ModuleCase(TestCase, SaltClientTestCaseMixIn):
self.get_config_file_path('sub_minion')
)
def _check_state_return(self, ret, func='state.single'):
def _check_state_return(self, ret):
if isinstance(ret, dict):
# This is the supposed return format for state calls
return ret

View file

@ -0,0 +1,128 @@
# -*- coding: utf-8 -*-
'''
:codeauthor: :email:`Nicole Thomas <nicole@saltstack.com>`
'''
# Import Python Libs
import os
import random
import string
# Import Salt Libs
import integration
from salt.config import cloud_providers_config
# Import Salt Testing Libs
from salttesting.helpers import ensure_in_syspath, expensiveTest
ensure_in_syspath('../../../')
def __random_name(size=6):
'''
Generates a random cloud instance name
'''
return 'CLOUD-TEST-' + ''.join(
random.choice(string.ascii_uppercase + string.digits)
for x in range(size)
)
# Create the cloud instance name to be used throughout the tests
INSTANCE_NAME = __random_name()
class EC2Test(integration.ShellCase):
'''
Integration tests for the EC2 cloud provider in Salt-Cloud
'''
@expensiveTest
def setUp(self):
'''
Sets up the test requirements
'''
super(EC2Test, self).setUp()
# check if appropriate cloud provider and profile files are present
profile_str = 'ec2-config:'
provider = 'ec2'
providers = self.run_cloud('--list-providers')
if profile_str not in providers:
self.skipTest(
'Configuration file for {0} was not found. Check {0}.conf files '
'in tests/integration/files/conf/cloud.*.d/ to run these tests.'
.format(provider)
)
# check if id, key, keyname, securitygroup, private_key, location,
# and provider are present
path = os.path.join(integration.FILES,
'conf',
'cloud.providers.d',
provider + '.conf')
config = cloud_providers_config(path)
id = config['ec2-config']['ec2']['id']
key = config['ec2-config']['ec2']['key']
keyname = config['ec2-config']['ec2']['keyname']
sec_group = config['ec2-config']['ec2']['securitygroup']
private_key = config['ec2-config']['ec2']['private_key']
location = config['ec2-config']['ec2']['location']
conf_items = [id, key, keyname, sec_group, private_key, location]
missing_conf_item = []
for item in conf_items:
if item == '':
missing_conf_item.append(item)
if missing_conf_item:
self.skipTest(
'An id, key, keyname, security group, private key, and location must '
'be provided to run these tests. One or more of these elements is '
'missing. Check tests/integration/files/conf/cloud.providers.d/{0}.conf'
.format(provider)
)
def test_instance(self):
'''
Tests creating and deleting an instance on EC2 (classic)
'''
# create the instance
instance = self.run_cloud('-p ec2-test {0}'.format(INSTANCE_NAME))
ret_str = ' {0}'.format(INSTANCE_NAME)
# check if instance returned with salt installed
try:
self.assertIn(ret_str, instance)
except AssertionError:
self.run_cloud('-d {0} --assume-yes'.format(INSTANCE_NAME))
raise
# delete the instance
delete = self.run_cloud('-d {0} --assume-yes'.format(INSTANCE_NAME))
ret_str = ' True'
# check if deletion was performed appropriately
self.assertIn(ret_str, delete)
def tearDown(self):
'''
Clean up after tests
'''
query = self.run_cloud('--query')
ret_str = ' {0}:'.format(INSTANCE_NAME)
# if test instance is still present, delete it
if ret_str in query:
self.run_cloud('-d {0} --assume-yes'.format(INSTANCE_NAME))
if __name__ == '__main__':
from integration import run_tests
run_tests(EC2Test)

View file

@ -1,9 +1,5 @@
# vim: filetype=yaml sw=2 ts=2 fenc=utf-8 et
Ubuntu-13.04-AMD64:
image: ami-c30360aa
ec2-test:
provider: ec2-config
size: Micro Instance
ssh_username: ubuntu
securitygroup:
- default
image: ami-b06a98d8
size: t1.micro
ssh_username: ec2-user

View file

@ -1,10 +1,8 @@
# vim: filetype=yaml sw=2 ts=2 fenc=utf-8 et
---
ec2-config:
id: AAAAAABBBBBCCCCCDDDDDDFFFFF
key: AAAAAABBBBBCCCCCDDDDDDFFFFF
provider: ec2
keyname: salttest
securitygroup: default
private_key: salttest
id: ''
key: ''
keyname: ''
securitygroup: ''
private_key: ''
location: ''

View file

@ -10,6 +10,7 @@ This script is intended to be shell-centric!!
# Import python libs
from __future__ import print_function
import glob
import os
import re
import sys
@ -67,6 +68,12 @@ def build_pillar_data(options):
pillar['bootstrap_salt_url'] = options.bootstrap_salt_url
if options.bootstrap_salt_commit is not None:
pillar['bootstrap_salt_commit'] = options.bootstrap_salt_commit
if options.package_source_dir:
pillar['package_source_dir'] = options.package_source_dir
if options.package_build_dir:
pillar['package_build_dir'] = options.package_build_dir
if options.package_artifact_dir:
pillar['package_artifact_dir'] = options.package_artifact_dir
if options.pillar:
pillar.update(dict(options.pillar))
return yaml.dump(pillar, default_flow_style=True, indent=0, width=sys.maxint).rstrip()
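The pillar mapping assembled above is serialized in flow style so it fits on a single line when interpolated into a `salt ... pillar="{...}"` command. A standalone sketch of that rendering (the keys here are illustrative, not the script's actual pillar):

```python
import yaml

# Illustrative pillar data, mirroring what build_pillar_data() assembles
pillar = {
    'test_git_commit': 'abc123',
    'package_artifact_dir': '/tmp/salt-packages',
}

# default_flow_style=True keeps the whole mapping on one line, so it can
# be passed inline as pillar="{...}" on the salt command line
rendered = yaml.dump(pillar, default_flow_style=True).rstrip()
print(rendered)
```

The `width=sys.maxint` in the original serves the same goal under Python 2: it prevents PyYAML from wrapping the flow-style mapping onto multiple lines.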
@ -310,6 +317,52 @@ def download_remote_logs(options):
time.sleep(0.25)
def download_packages(options):
print('Downloading packages...')
sys.stdout.flush()
workspace = options.workspace
vm_name = options.download_packages
for fglob in ('salt-*.rpm',
'salt-*.deb',
'salt-*.pkg.xz',
'salt-buildpackage.log'):
for fname in glob.glob(os.path.join(workspace, fglob)):
if os.path.isfile(fname):
os.unlink(fname)
cmds = [
('salt {{0}} archive.tar czf {0}.tar.gz sources=\'*.*\' cwd={0}'
.format(options.package_artifact_dir)),
'salt {{0}} cp.push {0}.tar.gz'.format(options.package_artifact_dir),
('tar -C {{2}} -xzf /var/cache/salt/master/minions/{{1}}/files{0}.tar.gz'
.format(options.package_artifact_dir)),
]
for cmd in cmds:
cmd = cmd.format(build_minion_target(options, vm_name), vm_name, workspace)
print('Running CMD: {0}'.format(cmd))
sys.stdout.flush()
proc = NonBlockingPopen(
cmd,
shell=True,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
stream_stds=True
)
proc.poll_and_read_until_finish()
proc.communicate()
if proc.returncode != 0:
print(
'\nFailed to execute command. Exit code: {0}'.format(
proc.returncode
)
)
time.sleep(0.25)
def run(opts):
'''
RUN!
@ -322,6 +375,7 @@ def run(opts):
if opts.download_remote_reports:
opts.download_coverage_report = vm_name
opts.download_unittest_reports = vm_name
opts.download_packages = vm_name
if opts.bootstrap_salt_commit is not None:
if opts.bootstrap_salt_url is None:
@ -687,6 +741,37 @@ def run(opts):
# Anything else, raise the exception
raise
if retcode == 0:
# Build packages
time.sleep(3)
cmd = (
'salt -t 1800 {target} state.sls buildpackage pillar="{pillar}" --no-color'.format(
pillar=build_pillar_data(opts),
target=build_minion_target(opts, vm_name),
)
)
print('Running CMD: {0}'.format(cmd))
sys.stdout.flush()
proc = subprocess.Popen(
cmd,
shell=True,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
)
stdout, stderr = proc.communicate()
if stdout:
print(stdout)
sys.stdout.flush()
if stderr:
print(stderr)
sys.stderr.flush()
# Download packages only if the script ran and was successful
if 'Build complete' in stdout:
download_packages(opts)
if opts.download_remote_reports:
# Download unittest reports
download_unittest_reports(opts)
@ -826,6 +911,30 @@ def parse():
action='store_true',
help='Run the cloud provider tests only.'
)
parser.add_option(
'--build-packages',
default=True,
action='store_true',
help='Run buildpackage.py to create packages off of the git build.'
)
# These next three options are ignored if --build-packages is False
parser.add_option(
'--package-source-dir',
default='/testing',
help='Directory where the salt source code checkout is found '
'(default: %default)',
)
parser.add_option(
'--package-build-dir',
default='/tmp/salt-buildpackage',
help='Build root for automated package builds (default: %default)',
)
parser.add_option(
'--package-artifact-dir',
default='/tmp/salt-packages',
help='Location on the minion from which packages should be '
'retrieved (default: %default)',
)
options, args = parser.parse_args()

tests/pkg/rpm/salt.spec Normal file
View file

@ -0,0 +1,310 @@
# Maintainer: Erik Johnson (https://github.com/terminalmage)
#
# This is a modified version of the spec file, which supports git builds. It
# should be kept more or less up-to-date with upstream changes.
#
# Please contact the maintainer before submitting any pull requests for this
# spec file.
%if ! (0%{?rhel} >= 6 || 0%{?fedora} > 12)
%global with_python26 1
%define pybasever 2.6
%define __python_ver 26
%define __python %{_bindir}/python%{?pybasever}
%endif
%{!?python_sitelib: %global python_sitelib %(%{__python} -c "from distutils.sysconfig import get_python_lib; print(get_python_lib())")}
%{!?python_sitearch: %global python_sitearch %(%{__python} -c "from distutils.sysconfig import get_python_lib; print(get_python_lib(1))")}
%global srcver REPLACE_ME
Name: salt
Version: REPLACE_ME
Release: 1%{?dist}
Summary: A parallel remote execution system
Group: System Environment/Daemons
License: ASL 2.0
URL: http://saltstack.org/
Source0: %{name}-%{srcver}.tar.gz
Source1: %{name}-master
Source2: %{name}-syndic
Source3: %{name}-minion
Source4: %{name}-api
Source5: %{name}-master.service
Source6: %{name}-syndic.service
Source7: %{name}-minion.service
Source8: %{name}-api.service
Source9: README.fedora
Source10: logrotate.salt
BuildRoot: %{_tmppath}/%{name}-%{version}-%{release}-root-%(%{__id_u} -n)
BuildArch: noarch
%ifarch %{ix86} x86_64
Requires: dmidecode
%endif
Requires: pciutils
Requires: yum-utils
Requires: sshpass
%if 0%{?with_python26}
BuildRequires: python26-devel
Requires: python26-m2crypto
Requires: python26-crypto
Requires: python26-jinja2
Requires: python26-msgpack
Requires: python26-PyYAML
Requires: python26-zmq
Requires: python26-requests
%else
BuildRequires: python-devel
Requires: m2crypto
Requires: python-crypto
Requires: python-zmq
Requires: python-jinja2
Requires: PyYAML
Requires: python-msgpack
Requires: python-requests
%endif
%if ! (0%{?rhel} >= 7 || 0%{?fedora} >= 15)
Requires(post): chkconfig
Requires(preun): chkconfig
Requires(preun): initscripts
Requires(postun): initscripts
%else
%if 0%{?systemd_preun:1}
Requires(post): systemd-units
Requires(preun): systemd-units
Requires(postun): systemd-units
%endif
BuildRequires: systemd-units
Requires: systemd-python
%endif
%description
Salt is a distributed remote execution system used to execute commands and
query data. It was developed to bring together the best solutions in the
world of remote execution and make them better, faster and more malleable.
Salt accomplishes this through its ability to handle large loads of
information and to manage not just dozens but hundreds or even thousands of
individual servers quickly, through a simple and manageable interface.
%package -n salt-master
Summary: Management component for salt, a parallel remote execution system
Group: System Environment/Daemons
Requires: salt = %{version}-%{release}
%description -n salt-master
The Salt master is the central server to which all minions connect.
%package -n salt-minion
Summary: Client component for salt, a parallel remote execution system
Group: System Environment/Daemons
Requires: salt = %{version}-%{release}
%description -n salt-minion
Salt minion is queried and controlled from the master.
%prep
%setup -n %{name}-%{srcver}
%build
%install
rm -rf %{buildroot}
#cd $RPM_BUILD_DIR/%{name}-%{version}/%{name}-%{version}
%{__python} setup.py install -O1 --root %{buildroot}
install -d -m 0755 %{buildroot}%{_var}/cache/salt
%if ! (0%{?rhel} >= 7 || 0%{?fedora} >= 15)
mkdir -p %{buildroot}%{_initrddir}
install -p %{SOURCE1} %{buildroot}%{_initrddir}/
install -p %{SOURCE2} %{buildroot}%{_initrddir}/
install -p %{SOURCE3} %{buildroot}%{_initrddir}/
install -p %{SOURCE4} %{buildroot}%{_initrddir}/
%else
mkdir -p %{buildroot}%{_unitdir}
install -p -m 0644 %{SOURCE5} %{buildroot}%{_unitdir}/
install -p -m 0644 %{SOURCE6} %{buildroot}%{_unitdir}/
install -p -m 0644 %{SOURCE7} %{buildroot}%{_unitdir}/
install -p -m 0644 %{SOURCE8} %{buildroot}%{_unitdir}/
%endif
install -p %{SOURCE9} .
mkdir -p %{buildroot}%{_sysconfdir}/logrotate.d/
install -p %{SOURCE10} %{buildroot}%{_sysconfdir}/logrotate.d/salt
mkdir -p %{buildroot}%{_sysconfdir}/salt/
install -p -m 0640 conf/minion %{buildroot}%{_sysconfdir}/salt/minion
install -p -m 0640 conf/master %{buildroot}%{_sysconfdir}/salt/master
%clean
rm -rf %{buildroot}
%files
%defattr(-,root,root,-)
%doc LICENSE
%{python_sitelib}/%{name}/*
%{python_sitelib}/%{name}-*-py?.?.egg-info
%{_sysconfdir}/logrotate.d/salt
%{_var}/cache/salt
%doc %{_mandir}/man7/salt.7.*
%doc README.fedora
%files -n salt-minion
%defattr(-,root,root)
%doc %{_mandir}/man1/salt-call.1.*
%doc %{_mandir}/man1/salt-minion.1.*
%{_bindir}/salt-minion
%{_bindir}/salt-call
%if ! (0%{?rhel} >= 7 || 0%{?fedora} >= 15)
%attr(0755, root, root) %{_initrddir}/salt-minion
%else
%{_unitdir}/salt-minion.service
%endif
%config(noreplace) %{_sysconfdir}/salt/minion
%files -n salt-master
%defattr(-,root,root)
%doc %{_mandir}/man1/salt.1.*
%doc %{_mandir}/man1/salt-api.1.*
%doc %{_mandir}/man1/salt-cloud.1.*
%doc %{_mandir}/man1/salt-cp.1.*
%doc %{_mandir}/man1/salt-key.1.*
%doc %{_mandir}/man1/salt-master.1.*
%doc %{_mandir}/man1/salt-run.1.*
%doc %{_mandir}/man1/salt-ssh.1.*
%doc %{_mandir}/man1/salt-syndic.1.*
%{_bindir}/salt
%{_bindir}/salt-api
%{_bindir}/salt-cloud
%{_bindir}/salt-cp
%{_bindir}/salt-key
%{_bindir}/salt-master
%{_bindir}/salt-run
%{_bindir}/salt-ssh
%{_bindir}/salt-syndic
%{_bindir}/salt-unity
%if ! (0%{?rhel} >= 7 || 0%{?fedora} >= 15)
%attr(0755, root, root) %{_initrddir}/salt-master
%attr(0755, root, root) %{_initrddir}/salt-syndic
%attr(0755, root, root) %{_initrddir}/salt-api
%else
%{_unitdir}/salt-master.service
%{_unitdir}/salt-syndic.service
%{_unitdir}/salt-api.service
%endif
%config(noreplace) %{_sysconfdir}/salt/master
# less than RHEL 7 / Fedora 15
# not sure if RHEL 7 will use systemd yet
%if ! (0%{?rhel} >= 7 || 0%{?fedora} >= 15)
%preun -n salt-master
if [ $1 -eq 0 ] ; then
/sbin/service salt-master stop >/dev/null 2>&1
/sbin/service salt-syndic stop >/dev/null 2>&1
/sbin/chkconfig --del salt-master
/sbin/chkconfig --del salt-syndic
fi
%preun -n salt-minion
if [ $1 -eq 0 ] ; then
/sbin/service salt-minion stop >/dev/null 2>&1
/sbin/chkconfig --del salt-minion
fi
%post -n salt-master
/sbin/chkconfig --add salt-master
/sbin/chkconfig --add salt-syndic
%post -n salt-minion
/sbin/chkconfig --add salt-minion
%postun -n salt-master
if [ "$1" -ge "1" ] ; then
/sbin/service salt-master condrestart >/dev/null 2>&1 || :
/sbin/service salt-syndic condrestart >/dev/null 2>&1 || :
fi
%postun -n salt-minion
if [ "$1" -ge "1" ] ; then
/sbin/service salt-minion condrestart >/dev/null 2>&1 || :
fi
%else
%preun -n salt-master
%if 0%{?systemd_preun:1}
%systemd_preun salt-master.service
%else
if [ $1 -eq 0 ] ; then
# Package removal, not upgrade
/bin/systemctl --no-reload disable salt-master.service > /dev/null 2>&1 || :
/bin/systemctl stop salt-master.service > /dev/null 2>&1 || :
/bin/systemctl --no-reload disable salt-syndic.service > /dev/null 2>&1 || :
/bin/systemctl stop salt-syndic.service > /dev/null 2>&1 || :
fi
%endif
%preun -n salt-minion
%if 0%{?systemd_preun:1}
%systemd_preun salt-minion.service
%else
if [ $1 -eq 0 ] ; then
# Package removal, not upgrade
/bin/systemctl --no-reload disable salt-minion.service > /dev/null 2>&1 || :
/bin/systemctl stop salt-minion.service > /dev/null 2>&1 || :
fi
%endif
%post -n salt-master
%if 0%{?systemd_post:1}
%systemd_post salt-master.service
%else
/bin/systemctl daemon-reload &>/dev/null || :
%endif
%post -n salt-minion
%if 0%{?systemd_post:1}
%systemd_post salt-minion.service
%else
/bin/systemctl daemon-reload &>/dev/null || :
%endif
%postun -n salt-master
%if 0%{?systemd_post:1}
%systemd_postun salt-master.service
%else
/bin/systemctl daemon-reload &>/dev/null
[ $1 -gt 0 ] && /bin/systemctl try-restart salt-master.service &>/dev/null || :
[ $1 -gt 0 ] && /bin/systemctl try-restart salt-syndic.service &>/dev/null || :
%endif
%postun -n salt-minion
%if 0%{?systemd_post:1}
%systemd_postun salt-minion.service
%else
/bin/systemctl daemon-reload &>/dev/null
[ $1 -gt 0 ] && /bin/systemctl try-restart salt-minion.service &>/dev/null || :
%endif
%endif

View file

@ -923,7 +923,7 @@ class ConfigTestCase(TestCase, integration.AdaptedConfigurationTestCaseMixIn):
'''
config = sconfig.cloud_config(self.get_config_file_path('cloud'))
self.assertIn('ec2-config', config['providers'])
self.assertIn('Ubuntu-13.04-AMD64', config['profiles'])
self.assertIn('ec2-test', config['profiles'])
# <---- Salt Cloud Configuration Tests ---------------------------------------------

View file

@ -27,6 +27,13 @@ test_list_db_csv = (
'test_db,postgres,LATIN1,en_US,en_US,,pg_default'
)
test_list_schema_csv = (
'name,owner,acl\n'
'public,postgres,"{postgres=UC/postgres,=UC/postgres}"\n'
'pg_toast,postgres,""'
)
if NO_MOCK is False:
SALT_STUB = {
'config.option': Mock(),
@ -666,6 +673,136 @@ class PostgresTestCase(TestCase):
'foo', 'bar', True),
'md596948aad3fcae80c08a35c9b5958cd89')
@patch('salt.modules.postgres._run_psql',
Mock(return_value={'retcode': None,
'stdout': test_list_schema_csv}))
def test_schema_list(self):
ret = postgres.schema_list(
'maint_db',
db_user='testuser',
db_host='testhost',
db_port='testport',
db_password='foo'
)
self.assertDictEqual(ret, {
'public': {'acl': '{postgres=UC/postgres,=UC/postgres}',
'owner': 'postgres'},
'pg_toast': {'acl': '', 'owner': 'postgres'}
})
@patch('salt.modules.postgres._run_psql',
Mock(return_value={'retcode': None}))
@patch('salt.modules.postgres.psql_query',
Mock(return_value=[
{
'name': 'public',
'acl': '{postgres=UC/postgres,=UC/postgres}',
'owner': 'postgres'
}]))
def test_schema_exists(self):
ret = postgres.schema_exists(
'template1',
'public'
)
self.assertTrue(ret)
@patch('salt.modules.postgres._run_psql',
Mock(return_value={'retcode': None}))
@patch('salt.modules.postgres.psql_query',
Mock(return_value=[
{
'name': 'public',
'acl': '{postgres=UC/postgres,=UC/postgres}',
'owner': 'postgres'
}]))
def test_schema_get(self):
ret = postgres.schema_get(
'template1',
'public'
)
self.assertTrue(ret)
@patch('salt.modules.postgres._run_psql',
Mock(return_value={'retcode': None}))
@patch('salt.modules.postgres.psql_query',
Mock(return_value=[
{
'name': 'public',
'acl': '{postgres=UC/postgres,=UC/postgres}',
'owner': 'postgres'
}]))
def test_schema_get_again(self):
ret = postgres.schema_get(
'template1',
'pg_toast'
)
self.assertFalse(ret)
@patch('salt.modules.postgres._run_psql',
Mock(return_value={'retcode': None}))
@patch('salt.modules.postgres.schema_exists', Mock(return_value=False))
def test_schema_create(self):
postgres.schema_create(
'test_db',
'test_schema',
user='user',
db_host='test_host',
db_port='test_port',
db_user='test_user',
db_password='test_password'
)
postgres._run_psql.assert_called_once_with(
"/usr/bin/pgsql --no-align --no-readline --no-password "
"--username test_user "
"--host test_host --port test_port "
"--dbname test_db -c 'CREATE SCHEMA test_schema'",
host='test_host', port='test_port',
password='test_password', user='test_user', runas='user')
@patch('salt.modules.postgres.schema_exists', Mock(return_value=True))
def test_schema_create2(self):
ret = postgres.schema_create('test_db',
'test_schema',
user='user',
db_host='test_host',
db_port='test_port',
db_user='test_user',
db_password='test_password'
)
self.assertFalse(ret)
@patch('salt.modules.postgres._run_psql',
Mock(return_value={'retcode': None}))
@patch('salt.modules.postgres.schema_exists', Mock(return_value=True))
def test_schema_remove(self):
postgres.schema_remove(
'test_db',
'test_schema',
user='user',
db_host='test_host',
db_port='test_port',
db_user='test_user',
db_password='test_password'
)
postgres._run_psql.assert_called_once_with(
"/usr/bin/pgsql --no-align --no-readline --no-password "
"--username test_user "
"--host test_host --port test_port "
"--dbname test_db -c 'DROP SCHEMA test_schema'",
host='test_host', port='test_port',
password='test_password', user='test_user', runas='user')
@patch('salt.modules.postgres.schema_exists', Mock(return_value=False))
def test_schema_remove2(self):
ret = postgres.schema_remove('test_db',
'test_schema',
user='user',
db_host='test_host',
db_port='test_port',
db_user='test_user',
db_password='test_password'
)
self.assertFalse(ret)
if __name__ == '__main__':
from integration import run_tests

View file

@ -0,0 +1,285 @@
# -*- coding: utf-8 -*-
'''
Test module for syslog_ng
'''
# Import Salt Testing libs
import salt
from salttesting import skipIf, TestCase
from salttesting.helpers import ensure_in_syspath
from salttesting.mock import NO_MOCK, NO_MOCK_REASON, MagicMock, patch
from textwrap import dedent
ensure_in_syspath('../../')
from salt.modules import syslog_ng
syslog_ng.__salt__ = {}
syslog_ng.__opts__ = {}
_VERSION = "3.6.0alpha0"
_MODULES = ("syslogformat,json-plugin,basicfuncs,afstomp,afsocket,cryptofuncs,"
"afmongodb,dbparser,system-source,affile,pseudofile,afamqp,"
"afsocket-notls,csvparser,linux-kmsg-format,afuser,confgen,afprog")
VERSION_OUTPUT = """syslog-ng {0}
Installer-Version: {0}
Revision:
Compile-Date: Apr 4 2014 20:26:18
Error opening plugin module; module='afsocket-tls', error='/home/tibi/install/syslog-ng/lib/syslog-ng/libafsocket-tls.so: undefined symbol: tls_context_setup_session'
Available-Modules: {1}
Enable-Debug: on
Enable-GProf: off
Enable-Memtrace: off
Enable-IPv6: on
Enable-Spoof-Source: off
Enable-TCP-Wrapper: off
Enable-Linux-Caps: off""".format(_VERSION, _MODULES)
STATS_OUTPUT = """SourceName;SourceId;SourceInstance;State;Type;Number
center;;received;a;processed;0
destination;#anon-destination0;;a;processed;0
destination;#anon-destination1;;a;processed;0
source;s_gsoc2014;;a;processed;0
center;;queued;a;processed;0
global;payload_reallocs;;a;processed;0
global;sdata_updates;;a;processed;0
global;msg_clones;;a;processed;0"""
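The fixture above mimics the semicolon-delimited CSV that `syslog-ng-ctl stats` prints. A minimal, hypothetical sketch of turning that output into per-row dicts (not part of the module under test):

```python
import csv
import io

# A shortened copy of the syslog-ng-ctl stats fixture above
stats = ("SourceName;SourceId;SourceInstance;State;Type;Number\n"
         "center;;received;a;processed;0\n"
         "source;s_gsoc2014;;a;processed;0")

# DictReader keys each data row by the header line's field names
rows = list(csv.DictReader(io.StringIO(stats), delimiter=';'))
print(rows[0]['SourceName'], rows[0]['Number'])
```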
_SYSLOG_NG_NOT_INSTALLED_RETURN_VALUE = {
"retcode": -1, "stderr":
"Unable to execute the command 'syslog-ng'. It is not in the PATH."
}
_SYSLOG_NG_CTL_NOT_INSTALLED_RETURN_VALUE = {
"retcode": -1, "stderr":
"Unable to execute the command 'syslog-ng-ctl'. It is not in the PATH."
}
@skipIf(NO_MOCK, NO_MOCK_REASON)
class SyslogNGTestCase(TestCase):
def test_statement_without_options(self):
s = syslog_ng.Statement("source", "s_local", options=[])
b = s.build()
self.assertEqual(dedent(
"""\
source s_local {
};
"""), b)
def test_non_empty_statement(self):
o1 = syslog_ng.Option("file")
o2 = syslog_ng.Option("tcp")
s = syslog_ng.Statement("source", "s_local", options=[o1, o2])
b = s.build()
self.assertEqual(dedent(
"""\
source s_local {
file(
);
tcp(
);
};
"""), b)
def test_option_with_parameters(self):
o1 = syslog_ng.Option("file")
p1 = syslog_ng.SimpleParameter('"/var/log/messages"')
p2 = syslog_ng.SimpleParameter()
p3 = syslog_ng.TypedParameter()
p3.type = "tls"
p2.value = '"/var/log/syslog"'
o1.add_parameter(p1)
o1.add_parameter(p2)
o1.add_parameter(p3)
b = o1.build()
self.assertEqual(dedent(
"""\
file(
"/var/log/messages",
"/var/log/syslog",
tls(
)
);
"""), b)
def test_parameter_with_values(self):
p = syslog_ng.TypedParameter()
p.type = "tls"
v1 = syslog_ng.TypedParameterValue()
v1.type = 'key_file'
v2 = syslog_ng.TypedParameterValue()
v2.type = 'cert_file'
p.add_value(v1)
p.add_value(v2)
b = p.build()
self.assertEqual(dedent(
"""\
tls(
key_file(
),
cert_file(
)
)"""), b)
def test_value_with_arguments(self):
t = syslog_ng.TypedParameterValue()
t.type = 'key_file'
a1 = syslog_ng.Argument('"/opt/syslog-ng/etc/syslog-ng/key.d/syslog-ng.key"')
a2 = syslog_ng.Argument('"/opt/syslog-ng/etc/syslog-ng/key.d/syslog-ng.key"')
t.add_argument(a1)
t.add_argument(a2)
b = t.build()
self.assertEqual(dedent(
'''\
key_file(
"/opt/syslog-ng/etc/syslog-ng/key.d/syslog-ng.key"
"/opt/syslog-ng/etc/syslog-ng/key.d/syslog-ng.key"
)'''), b)
def test_end_to_end_statement_generation(self):
s = syslog_ng.Statement('source', 's_tls')
o = syslog_ng.Option('tcp')
ip = syslog_ng.TypedParameter('ip')
ip.add_value(syslog_ng.SimpleParameterValue("'192.168.42.2'"))
o.add_parameter(ip)
port = syslog_ng.TypedParameter('port')
port.add_value(syslog_ng.SimpleParameterValue(514))
o.add_parameter(port)
tls = syslog_ng.TypedParameter('tls')
key_file = syslog_ng.TypedParameterValue('key_file')
key_file.add_argument(syslog_ng.Argument('"/opt/syslog-ng/etc/syslog-ng/key.d/syslog-ng.key"'))
cert_file = syslog_ng.TypedParameterValue('cert_file')
cert_file.add_argument(syslog_ng.Argument('"/opt/syslog-ng/etc/syslog-ng/cert.d/syslog-ng.cert"'))
peer_verify = syslog_ng.TypedParameterValue('peer_verify')
peer_verify.add_argument(syslog_ng.Argument('optional-untrusted'))
tls.add_value(key_file)
tls.add_value(cert_file)
tls.add_value(peer_verify)
o.add_parameter(tls)
s.add_child(o)
b = s.build()
self.assertEqual(dedent(
'''\
source s_tls {
tcp(
ip(
'192.168.42.2'
),
port(
514
),
tls(
key_file(
"/opt/syslog-ng/etc/syslog-ng/key.d/syslog-ng.key"
),
cert_file(
"/opt/syslog-ng/etc/syslog-ng/cert.d/syslog-ng.cert"
),
peer_verify(
optional-untrusted
)
)
);
};
'''), b)
def test_version(self):
mock_return_value = {"retcode": 0, 'stdout': VERSION_OUTPUT}
expected_output = {"retcode": 0, "stdout": "3.6.0alpha0"}
mock_args = "syslog-ng -V"
self._assert_template(mock_args,
mock_return_value,
function_to_call=syslog_ng.version,
expected_output=expected_output)
def test_stats(self):
mock_return_value = {"retcode": 0, 'stdout': STATS_OUTPUT}
expected_output = {"retcode": 0, "stdout": STATS_OUTPUT}
mock_args = "syslog-ng-ctl stats"
self._assert_template(mock_args,
mock_return_value,
function_to_call=syslog_ng.stats,
expected_output=expected_output)
def test_modules(self):
mock_return_value = {"retcode": 0, 'stdout': VERSION_OUTPUT}
expected_output = {"retcode": 0, "stdout": _MODULES}
mock_args = "syslog-ng -V"
self._assert_template(mock_args,
mock_return_value,
function_to_call=syslog_ng.modules,
expected_output=expected_output)
def test_config_test_ok(self):
mock_return_value = {"retcode": 0, "stderr": "", "stdout": "Syslog-ng startup text..."}
mock_args = "syslog-ng --syntax-only"
self._assert_template(mock_args,
mock_return_value,
function_to_call=syslog_ng.config_test,
expected_output=mock_return_value)
def test_config_test_fails(self):
mock_return_value = {"retcode": 1, 'stderr': "Syntax error...", "stdout": ""}
mock_args = "syslog-ng --syntax-only"
self._assert_template(mock_args,
mock_return_value,
function_to_call=syslog_ng.config_test,
expected_output=mock_return_value)
def test_config_test_cfgfile(self):
cfgfile = "/path/to/syslog-ng.conf"
mock_return_value = {"retcode": 1, 'stderr': "Syntax error...", "stdout": ""}
mock_args = "syslog-ng --syntax-only --cfgfile={0}".format(cfgfile)
self._assert_template(mock_args,
mock_return_value,
function_to_call=syslog_ng.config_test,
function_args={"cfgfile": cfgfile},
expected_output=mock_return_value)
def _assert_template(self,
mock_function_args,
mock_return_value,
function_to_call,
expected_output,
function_args=None):
if function_args is None:
function_args = {}
installed = True
if not salt.utils.which("syslog-ng"):
installed = False
if "syslog-ng-ctl" in mock_function_args:
expected_output = _SYSLOG_NG_CTL_NOT_INSTALLED_RETURN_VALUE
else:
expected_output = _SYSLOG_NG_NOT_INSTALLED_RETURN_VALUE
mock_function = MagicMock(return_value=mock_return_value)
with patch.dict(syslog_ng.__salt__, {'cmd.run_all': mock_function}):
got = function_to_call(**function_args)
self.assertEqual(expected_output, got)
if installed:
self.assertTrue(mock_function.called)
self.assertEqual(len(mock_function.call_args), 2)
mock_param = mock_function.call_args
self.assertTrue(mock_param[0][0].endswith(mock_function_args))
if __name__ == '__main__':
from integration import run_tests
run_tests(SyslogNGTestCase, needs_daemon=False)

View file

@ -16,12 +16,14 @@ from salt.states import (
postgres_user,
postgres_group,
postgres_extension,
postgres_schema,
)
MODS = (
postgres_database,
postgres_user,
postgres_group,
postgres_extension,
postgres_schema,
)
@ -483,6 +485,82 @@ class PostgresExtensionTestCase(TestCase):
)
@skipIf(NO_MOCK, NO_MOCK_REASON)
@patch.multiple(postgres_schema,
__grains__={'os_family': 'Linux'},
__salt__=SALT_STUB)
@patch('salt.utils.which', Mock(return_value='/usr/bin/pgsql'))
class PostgresSchemaTestCase(TestCase):
@patch.dict(SALT_STUB, {
'postgres.schema_get': Mock(return_value=None),
'postgres.schema_create': MagicMock(),
})
def test_present_creation(self):
ret = postgres_schema.present('dbname', 'foo')
self.assertEqual(
ret,
{'comment': 'Schema foo has been created in database dbname',
'changes': {'foo': 'Present'},
'dbname': 'dbname',
'name': 'foo',
'result': True}
)
self.assertEqual(SALT_STUB['postgres.schema_create'].call_count, 1)
@patch.dict(SALT_STUB, {
'postgres.schema_get': Mock(return_value={'foo':
{'acl': '',
'owner': 'postgres'}
}),
'postgres.schema_create': MagicMock(),
})
def test_present_nocreation(self):
ret = postgres_schema.present('dbname', 'foo')
self.assertEqual(
ret,
{'comment': 'Schema foo already exists in database dbname',
'changes': {},
'dbname': 'dbname',
'name': 'foo',
'result': True}
)
self.assertEqual(SALT_STUB['postgres.schema_create'].call_count, 0)
@patch.dict(SALT_STUB, {
'postgres.schema_exists': Mock(return_value=True),
'postgres.schema_remove': MagicMock(),
})
def test_absent_remove(self):
ret = postgres_schema.absent('dbname', 'foo')
self.assertEqual(
ret,
{'comment': 'Schema foo has been removed from database dbname',
'changes': {'foo': 'Absent'},
'dbname': 'dbname',
'name': 'foo',
'result': True}
)
self.assertEqual(SALT_STUB['postgres.schema_remove'].call_count, 1)
@patch.dict(SALT_STUB, {
'postgres.schema_exists': Mock(return_value=False),
'postgres.schema_remove': MagicMock(),
})
def test_absent_noremove(self):
ret = postgres_schema.absent('dbname', 'foo')
self.assertEqual(
ret,
{'comment': 'Schema foo is not present in database dbname,'
' so it cannot be removed',
'changes': {},
'dbname': 'dbname',
'name': 'foo',
'result': True}
)
self.assertEqual(SALT_STUB['postgres.schema_remove'].call_count, 0)
if __name__ == '__main__':
from integration import run_tests
run_tests(PostgresExtensionTestCase, needs_daemon=False)

View file

@ -0,0 +1,388 @@
# -*- coding: utf-8 -*-
'''
Test module for syslog_ng state
'''
import yaml
import re
import tempfile
import os
from salttesting import skipIf, TestCase
from salttesting.helpers import ensure_in_syspath
from salttesting.mock import NO_MOCK, NO_MOCK_REASON, MagicMock, patch
ensure_in_syspath('../../')
from salt.states import syslog_ng
from salt.modules import syslog_ng as syslog_ng_module
syslog_ng.__salt__ = {}
syslog_ng_module.__salt__ = {}
syslog_ng_module.__opts__ = {'test': False}
SOURCE_1_CONFIG = {
"id": "s_tail",
"config": (
"""
source:
- file:
- '"/var/log/apache/access.log"'
- follow_freq : 1
- flags:
- no-parse
- validate-utf8
""")
}
SOURCE_1_EXPECTED = (
"""
source s_tail {
file(
"/var/log/apache/access.log",
follow_freq(1),
flags(no-parse, validate-utf8)
);
};
"""
)
SOURCE_2_CONFIG = {
"id": "s_gsoc2014",
"config": (
"""
source:
- tcp:
- ip: '"0.0.0.0"'
- port: 1234
- flags: no-parse
"""
)
}
SOURCE_2_EXPECTED = (
"""
source s_gsoc2014 {
tcp(
ip("0.0.0.0"),
port(1234),
flags(no-parse)
);
};"""
)
FILTER_1_CONFIG = {
"id": "f_json",
"config": (
"""
filter:
- match:
- '"@json:"'
"""
)
}
FILTER_1_EXPECTED = (
"""
filter f_json {
match(
"@json:"
);
};
"""
)
TEMPLATE_1_CONFIG = {
"id": "t_demo_filetemplate",
"config": (
"""
template:
- template:
- '"$ISODATE $HOST $MSG\n"'
- template_escape:
- "no"
"""
)
}
TEMPLATE_1_EXPECTED = (
"""
template t_demo_filetemplate {
template(
"$ISODATE $HOST $MSG "
);
template_escape(
no
);
};
"""
)
REWRITE_1_CONFIG = {
"id": "r_set_message_to_MESSAGE",
"config": (
"""
rewrite:
- set:
- '"${.json.message}"'
- value : '"$MESSAGE"'
"""
)
}
REWRITE_1_EXPECTED = (
"""
rewrite r_set_message_to_MESSAGE {
set(
"${.json.message}",
value("$MESSAGE")
);
};
"""
)
LOG_1_CONFIG = {
"id": "l_gsoc2014",
"config": (
"""
log:
- source: s_gsoc2014
- junction:
- channel:
- filter: f_json
- parser: p_json
- rewrite: r_set_json_tag
- rewrite: r_set_message_to_MESSAGE
- destination:
- file:
- '"/tmp/json-input.log"'
- template: t_gsoc2014
- flags: final
- channel:
- filter: f_not_json
- parser:
- syslog-parser: []
- rewrite: r_set_syslog_tag
- flags: final
- destination:
- file:
- '"/tmp/all.log"'
- template: t_gsoc2014
"""
)
}
LOG_1_EXPECTED = (
"""
log {
source(s_gsoc2014);
junction {
channel {
filter(f_json);
parser(p_json);
rewrite(r_set_json_tag);
rewrite(r_set_message_to_MESSAGE);
destination {
file(
"/tmp/json-input.log",
template(t_gsoc2014)
);
};
flags(final);
};
channel {
filter(f_not_json);
parser {
syslog-parser(
);
};
rewrite(r_set_syslog_tag);
flags(final);
};
};
destination {
file(
"/tmp/all.log",
template(t_gsoc2014)
);
};
};
"""
)
OPTIONS_1_CONFIG = {
"id": "global_options",
"config": (
"""
options:
- time_reap: 30
- mark_freq: 10
- keep_hostname: "yes"
"""
)
}
OPTIONS_1_EXPECTED = (
"""
options {
time_reap(30);
mark_freq(10);
keep_hostname(yes);
};
"""
)
SHORT_FORM_CONFIG = {
"id": "source.s_gsoc",
"config": (
"""
- tcp:
- ip: '"0.0.0.0"'
- port: 1234
- flags: no-parse
"""
)
}
SHORT_FORM_EXPECTED = (
"""
source s_gsoc {
tcp(
ip(
"0.0.0.0"
),
port(
1234
),
flags(
no-parse
)
);
};
"""
)
GIVEN_CONFIG = {
'id': "config.some_name",
'config': (
""" |
source s_gsoc {
tcp(
ip(
"0.0.0.0"
),
port(
1234
),
flags(
no-parse
)
);
};
"""
)
}
_SALT_VAR_WITH_MODULE_METHODS = {
'syslog_ng.config': syslog_ng_module.config,
'syslog_ng.start': syslog_ng_module.start,
'syslog_ng.reload': syslog_ng_module.reload,
'syslog_ng.stop': syslog_ng_module.stop,
'syslog_ng.write_version': syslog_ng_module.write_version,
'syslog_ng.write_config': syslog_ng_module.write_config
}
def remove_whitespaces(source):
return re.sub(r"\s+", "", source.strip())
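The test cases below compare generated configs against the expected fixtures whitespace-insensitively via the helper above; a quick standalone illustration of that normalization:

```python
import re

def remove_whitespaces(source):
    # Strip every run of whitespace so layout differences don't matter
    return re.sub(r"\s+", "", source.strip())

generated = "source s_tail {\n    file(\n    );\n};"
print(remove_whitespaces(generated))  # sources_tail{file();};
```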


@skipIf(NO_MOCK, NO_MOCK_REASON)
# @skipIf(syslog_ng.__virtual__() is False, 'Syslog-ng must be installed')
class SyslogNGTestCase(TestCase):

    def test_generate_source_config(self):
        self._config_generator_template(SOURCE_1_CONFIG, SOURCE_1_EXPECTED)

    def test_generate_log_config(self):
        self._config_generator_template(LOG_1_CONFIG, LOG_1_EXPECTED)

    def test_generate_tcp_source_config(self):
        self._config_generator_template(SOURCE_2_CONFIG, SOURCE_2_EXPECTED)

    def test_generate_filter_config(self):
        self._config_generator_template(FILTER_1_CONFIG, FILTER_1_EXPECTED)

    def test_generate_template_config(self):
        self._config_generator_template(TEMPLATE_1_CONFIG, TEMPLATE_1_EXPECTED)

    def test_generate_rewrite_config(self):
        self._config_generator_template(REWRITE_1_CONFIG, REWRITE_1_EXPECTED)

    def test_generate_global_options_config(self):
        self._config_generator_template(OPTIONS_1_CONFIG, OPTIONS_1_EXPECTED)

    def test_generate_short_form_statement(self):
        self._config_generator_template(SHORT_FORM_CONFIG, SHORT_FORM_EXPECTED)

    def test_generate_given_config(self):
        self._config_generator_template(GIVEN_CONFIG, SHORT_FORM_EXPECTED)

    def _config_generator_template(self, yaml_input, expected):
        # safe_load is sufficient for these plain configs and avoids
        # constructing arbitrary Python objects from the YAML
        parsed_yaml_config = yaml.safe_load(yaml_input["config"])
        config_id = yaml_input["id"]
        with patch.dict(syslog_ng.__salt__, _SALT_VAR_WITH_MODULE_METHODS):
            got = syslog_ng.config(config_id, config=parsed_yaml_config, write=False)
            config = got["changes"]["new"]
            self.assertEqual(remove_whitespaces(expected), remove_whitespaces(config))
            self.assertFalse(got["result"])

    def test_write_config(self):
        yaml_inputs = (
            SOURCE_2_CONFIG, SOURCE_1_CONFIG, FILTER_1_CONFIG, TEMPLATE_1_CONFIG,
            REWRITE_1_CONFIG, LOG_1_CONFIG
        )
        expected_outputs = (
            SOURCE_2_EXPECTED, SOURCE_1_EXPECTED, FILTER_1_EXPECTED, TEMPLATE_1_EXPECTED,
            REWRITE_1_EXPECTED, LOG_1_EXPECTED
        )
        config_file_fd, config_file_name = tempfile.mkstemp()
        os.close(config_file_fd)
        with patch.dict(syslog_ng.__salt__, _SALT_VAR_WITH_MODULE_METHODS):
            syslog_ng_module.set_config_file(config_file_name)
            syslog_ng_module.write_version("3.6")
            syslog_ng_module.write_config(config='@include "scl.conf"')
            for yaml_input in yaml_inputs:
                parsed_yaml_config = yaml.safe_load(yaml_input["config"])
                syslog_ng.config(yaml_input["id"], config=parsed_yaml_config, write=True)
            with open(config_file_name, "r") as f:
                written_config = f.read()
            config_without_whitespaces = remove_whitespaces(written_config)
            for expected in expected_outputs:
                self.assertIn(remove_whitespaces(expected), config_without_whitespaces)
            syslog_ng_module.set_config_file("")
        os.remove(config_file_name)

    def test_started_state_generate_valid_cli_command(self):
        mock_func = MagicMock(return_value={"retcode": 0, "stdout": "", "pid": 1000})
        with patch.dict(syslog_ng.__salt__, _SALT_VAR_WITH_MODULE_METHODS):
            with patch.dict(syslog_ng_module.__salt__, {'cmd.run_all': mock_func}):
                got = syslog_ng.started(user="joe", group="users", enable_core=True)
                command = got["changes"]["new"]
                self.assertTrue(
                    command.endswith(
                        "syslog-ng --user=joe --group=users "
                        "--enable-core --cfgfile=/etc/syslog-ng.conf"
                    )
                )


if __name__ == '__main__':
    from integration import run_tests
    run_tests(SyslogNGTestCase, needs_daemon=False)