Mirror of https://github.com/saltstack/salt.git (synced 2025-04-16 17:50:20 +00:00)
Fix numerous typos found via Lintian
Thanks https://github.com/Debian/lintian/tree/master/data/spelling !
This commit is contained in:
  parent c0747eb938
  commit 177c168a21
34 changed files with 64 additions and 64 deletions
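The commit message credits Lintian's spelling-corrections data for finding these typos. As a minimal sketch of how such a list can be applied (the `CORRECTIONS` entries below are a few illustrative pairs in the style of Lintian's `data/spelling` list, not the real file, and `find_typos` is a hypothetical helper, not part of Lintian or Salt):

```python
import re

# Illustrative subset of known misspellings, keyed typo -> correction.
# Lintian keeps a much larger list of such pairs.
CORRECTIONS = {
    "incomming": "incoming",
    "seperate": "separate",
    "retreived": "retrieved",
    "dictionnary": "dictionary",
}

def find_typos(text, corrections=CORRECTIONS):
    """Return (line_number, typo, correction) for every known typo in text."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        # Scan whole words only, so e.g. "separated" is not flagged.
        for word in re.findall(r"[A-Za-z]+", line):
            if word.lower() in corrections:
                hits.append((lineno, word, corrections[word.lower()]))
    return hits

if __name__ == "__main__":
    sample = "# Time in minutes that a incomming public key\n# kept in a seperate file"
    for lineno, typo, fix in find_typos(sample):
        print("line {0}: {1} -> {2}".format(lineno, typo, fix))
```

Running a scanner like this over a source tree yields exactly the kind of mechanical one-word replacements that make up the 64 changed lines below.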
@@ -150,7 +150,7 @@
 # public keys from the minions. Note that this is insecure.
 #auto_accept: False
 
-# Time in minutes that a incomming public key with a matching name found in
+# Time in minutes that a incoming public key with a matching name found in
 # pki_dir/minion_autosign/keyid is automatically accepted. Expired autosign keys
 # are removed when the master checks the minion_autosign directory.
 # 0 equals no timeout

@@ -136,15 +136,15 @@
 # Number of consecutive SaltReqTimeoutError that are acceptable when trying to authenticate.
 #auth_tries: 7
 
-# If authentication failes due to SaltReqTimeoutError during a ping_interval,
-# cause sub minion proccess to restart.
+# If authentication fails due to SaltReqTimeoutError during a ping_interval,
+# cause sub minion process to restart.
 #auth_safemode: False
 
 # Ping Master to ensure connection is alive (minutes).
 # TODO: perhaps could update the scheduler to raise Exception in main thread after /mine_interval (60 minutes)/ fails
 #ping_interval: 90
 
-# If you don't have any problems with syn-floods, dont bother with the
+# If you don't have any problems with syn-floods, don't bother with the
 # three recon_* settings described below, just leave the defaults!
 #
 # The ZeroMQ pull-socket that binds to the masters publishing interface tries

@@ -438,7 +438,7 @@
 #state_output: full
 #
 # The state_output_diff setting changes whether or not the output from
-# sucessful states is returned. Useful when even the terse output of these
+# successful states is returned. Useful when even the terse output of these
 # states is cluttering the logs. Set it to True to ignore them.
 #state_output_diff: False
 #

@@ -141,4 +141,4 @@ Driver Support
 
 - Container creation
 - Image listing (LXC templates)
-- Running container informations (IP addresses, etc.)
+- Running container information (IP addresses, etc.)

@@ -201,7 +201,7 @@ required: name or instance_id, volume_id and device.
 
 Show a Volume
 -------------
-The details about an existing volume may be retreived.
+The details about an existing volume may be retrieved.
 
 .. code-block:: bash
 

@@ -329,7 +329,7 @@ Packages management under Windows 2003
 
 On windows Server 2003, you need to install optional component "wmi windows
 installer provider" to have full list of installed packages. If you don't have
-this, salt-minion can't report some installed softwares.
+this, salt-minion can't report some installed packages.
 
 
 .. _http://csa-net.dk/salt: http://csa-net.dk/salt

@@ -54,7 +54,7 @@ and each event tag has a list of reactor SLS files to be run.
 
 
 Reactor sls files are similar to state and pillar sls files. They are
-by default yaml + Jinja templates and are passed familar context variables.
+by default yaml + Jinja templates and are passed familiar context variables.
 
 They differ because of the addition of the ``tag`` and ``data`` variables.
 

@@ -73,13 +73,13 @@ and other syndics that are bound to them further down in the hierarchy. When
 events and job return data are generated by minions, they aggregated back,
 through the same syndic(s), to the master which issued the command.
 
-The master sitting at the top of the hierachy (the Master of Masters) will *not*
+The master sitting at the top of the hierarchy (the Master of Masters) will *not*
 be running the ``salt-syndic`` daemon. It will have the ``salt-master``
 daemon running, and optionally, the ``salt-minion`` daemon. Each syndic
 connected to an upper-level master will have both the ``salt-master`` and the
 ``salt-syndic`` daemon running, and optionally, the ``salt-minion`` daemon.
 
-Nodes on the lowest points of the hierarchy (minions which do not propogate
+Nodes on the lowest points of the hierarchy (minions which do not propagate
 data to another level) will only have the ``salt-minion`` daemon running. There
 is no need for either ``salt-master`` or ``salt-syndic`` to be running on a
 standard minion.

@@ -77,7 +77,7 @@ Using RAET in Salt
 Using RAET in Salt is easy, the main difference is that the core dependencies
 change, instead of needing pycrypto, M2Crypto, ZeroMQ and PYZMQ, the packages
 libsodium, pynacl and ioflo are required. Encryption is handled very cleanly
-by libsodium and pynacl, while the queueing and flow control is handled by
+by libsodium and pynacl, while the queuing and flow control is handled by
 ioflo. Distribution packages are forthcoming, but libsodium can be easily
 installed from source, or many distributions do ship packages for it.
 The pynacl and ioflo packages can be easily installed from pypi, distribution

@@ -100,7 +100,7 @@ be installed a git clone:
 
 Once installed, modify the configuration files for the minion and master to
 set the transport to raet (the file_buffer_size and id need to be set to
-adress known bugs in the unreleased code as of this writing):
+address known bugs in the unreleased code as of this writing):
 
 ``/etc/salt/master``:
 

@@ -95,7 +95,7 @@ statement) it's purpose is like a mandatory comment.
 
 You can use ``set_binary_path`` to set the directory which contains the
 syslog-ng and syslog-ng-ctl binaries. If this directory is in your PATH,
-you dont't need to use this function.
+you don't need to use this function.
 
 Under ``auto_start_or_reload`` you can see a Jinja template. If
 syslog-ng isn't running it will start it, otherwise reload it. It uses

@@ -5,7 +5,7 @@ then invoking thin.
 
 This is not intended to be instantiated as a module, rather it is a
 helper script used by salt.client.ssh.Single. It is here, in a
-seperate file, for convenience of development.
+separate file, for convenience of development.
 '''
 
 import optparse

@@ -81,7 +81,7 @@ def enter_mainloop(target,
                    queue=None):
     '''Manage a multiprocessing pool
 
-    - If the queue does not output anything, the pool runs indefinitvly
+    - If the queue does not output anything, the pool runs indefinitely
 
     - If the queue returns KEYBOARDINT or ERROR, this will kill the pool
       totally calling terminate & join and ands with a SaltCloudSystemExit

@@ -99,14 +99,14 @@ def enter_mainloop(target,
         pool size if you did not provide yourself a pool
     callback
         a boolean taking a string in argument which returns True to
-        signal that 'target' is finnished and we need to join
+        signal that 'target' is finished and we need to join
         the pool
     queue
         A custom multiproccessing queue in case you want to do
         extra stuff and need it later in your program
     args
-        positionnal arguments to call the function with
-        if you dont want to use pool.map
+        positional arguments to call the function with
+        if you don't want to use pool.map
 
     mapped_args
         a list of one or more arguments combinations to call the function with

@@ -217,7 +217,7 @@ class CloudClient(object):
             for _profile in [a for a in opts.get('profiles', {})]:
                 if not _profile == profile:
                     opts['profiles'].pop(_profile)
-            # if profile is specified and we have enougth info about providers
+            # if profile is specified and we have enough info about providers
             # also filter them to speedup methods like
             # __filter_non_working_providers
             providers = [a.get('provider', '').split(':')[0]

@@ -573,7 +573,7 @@ class Cloud(object):
         for driver, details in drivers.iteritems():
             # If driver has function list_nodes_min, just replace it
             # with query param to check existing vms on this driver
-            # for minimum information, Othwise still use query param.
+            # for minimum information, Otherwise still use query param.
             if 'selected_query_option' not in opts:
                 if '{0}.list_nodes_min'.format(driver) in self.clouds:
                     this_query = 'list_nodes_min'

@@ -630,10 +630,10 @@ class Cloud(object):
                 # that matches the provider specified in the profile.
                 # This solves the issues when many providers return the
                 # same instance. For example there may be one provider for
-                # each avaliablity zone in amazon in the same region, but
+                # each availability zone in amazon in the same region, but
                 # the search returns the same instance for each provider
                 # because amazon returns all instances in a region, not
-                # avaliabilty zone.
+                # availability zone.
                 if profile:
                     if alias not in \
                             self.opts['profiles'][profile]['provider'].split(

@@ -623,7 +623,7 @@ def create(vm_):
     log.debug('VM {0} is now running'.format(public_ip))
     vm_['ssh_host'] = public_ip
 
-    # The instance is booted and accessable, let's Salt it!
+    # The instance is booted and accessible, let's Salt it!
     ret = salt.utils.cloud.bootstrap(vm_, __opts__)
     ret.update(data.__dict__)
 

@@ -1789,7 +1789,7 @@ def wait_for_instance(
                 gateway=ssh_gateway_config
             ):
                 # If a known_hosts_file is configured, this instance will not be
-                # accessable until it has a host key. Since this is provided on
+                # accessible until it has a host key. Since this is provided on
                 # supported instances by cloud-init, and viewable to us only from the
                 # console output (which may take several minutes to become available,
                 # we have some more waiting to do here.

@@ -714,7 +714,7 @@ def create_node(vm_):
     newnode['vmid'] = _get_next_vmid()
 
     for prop in ('cpuunits', 'description', 'memory', 'onboot'):
-        if prop in vm_: # if the propery is set, use it for the VM request
+        if prop in vm_: # if the property is set, use it for the VM request
             newnode[prop] = vm_[prop]
 
     if vm_['technology'] == 'openvz':

@@ -724,12 +724,12 @@ def create_node(vm_):
 
         # optional VZ settings
        for prop in ('cpus', 'disk', 'ip_address', 'nameserver', 'password', 'swap', 'poolid'):
-            if prop in vm_: # if the propery is set, use it for the VM request
+            if prop in vm_: # if the property is set, use it for the VM request
                 newnode[prop] = vm_[prop]
     elif vm_['technology'] == 'qemu':
         # optional Qemu settings
         for prop in ('acpi', 'cores', 'cpu', 'pool'):
-            if prop in vm_: # if the propery is set, use it for the VM request
+            if prop in vm_: # if the property is set, use it for the VM request
                 newnode[prop] = vm_[prop]
 
     # The node is ready. Lets request it to be added

@@ -88,7 +88,7 @@ echoinfo() {
 
 #--- FUNCTION -------------------------------------------------------------------------------------------------------
 # NAME: echowarn
-# DESCRIPTION: Echo warning informations to stdout.
+# DESCRIPTION: Echo warning information to stdout.
 #----------------------------------------------------------------------------------------------------------------------
 echowarn() {
     printf "${YC} * WARN${EC}: %s\n" "$@";

@@ -520,8 +520,8 @@ class MultiMinion(MinionBase):
         for master in masters:
             minion = minions[master]
             # if we haven't connected yet, lets attempt some more.
-            # make sure to keep seperate auth_wait times, since these
-            # are seperate masters
+            # make sure to keep separate auth_wait times, since these
+            # are separate masters
             if 'generator' not in minion:
                 if time.time() - minion['auth_wait'] > minion['last']:
                     minion['last'] = time.time()

@@ -1784,7 +1784,7 @@ class Syndic(Minion):
     def __init__(self, opts, **kwargs):
         self._syndic_interface = opts.get('interface')
         self._syndic = True
-        # force auth_safemode True because Syndic dont support autorestart
+        # force auth_safemode True because Syndic don't support autorestart
         opts['auth_safemode'] = True
         opts['loop_interval'] = 1
         super(Syndic, self).__init__(opts, **kwargs)

@@ -2232,7 +2232,7 @@ class MultiSyndic(MinionBase):
                     self.event_forward_timeout < time.time()):
                 self._forward_events()
             # We don't handle ZMQErrors like the other minions
-            # I've put explicit handling around the recieve calls
+            # I've put explicit handling around the receive calls
             # in the process_*_socket methods. If we see any other
             # errors they may need some kind of handling so log them
             # for now.

@@ -269,7 +269,7 @@ def _get_client(version=None, timeout=None):
 
     client = docker.Client(**kwargs)
     if not version:
-        # set version that match docker deamon
+        # set version that match docker daemon
         client._version = client.version()['ApiVersion']
 
     # try to authenticate the client using credentials

@@ -525,7 +525,7 @@ def _config_list(conf_tuples=None, **kwargs):
 
 def _get_veths(net_data):
     '''Parse the nic setup inside lxc conf tuples back
-    to a dictionnary indexed by network interface'''
+    to a dictionary indexed by network interface'''
     if isinstance(net_data, dict):
         net_data = net_data.items()
     nics = salt.utils.odict.OrderedDict()

@@ -142,7 +142,7 @@ __ssl_options__ = __ssl_options_parameterized__ + [
 # quote_identifier. This is not the same as escaping '%' to '\%' or '_' to '\%'
 # when using a LIKE query (example in db_exists), as this escape is there to
 # avoid having _ or % characters interpreted in LIKE queries. The string parted
-# of the first query could become (still used with args dictionnary for myval):
+# of the first query could become (still used with args dictionary for myval):
 # 'SELECT * FROM {0} WHERE bar=%(myval)s'.format(quote_identifier('user input'))
 #
 # Check integration tests if you find a hole in theses strings and escapes rules

@@ -315,7 +315,7 @@ def _grant_to_tokens(grant):
 
     :param grant: An un-parsed MySQL GRANT statement str, like
         "GRANT SELECT, ALTER, LOCK TABLES ON `mydb`.* TO 'testuser'@'localhost'"
-        or a dictionnary with 'qry' and 'args' keys for 'user' and 'host'.
+        or a dictionary with 'qry' and 'args' keys for 'user' and 'host'.
     :return:
         A Python dict with the following keys/values:
             - user: MySQL User

@@ -327,7 +327,7 @@ def _grant_to_tokens(grant):
     dict_mode = False
     if isinstance(grant, dict):
         dict_mode = True
-        # Everything coming in dictionnary form was made for a MySQLdb execute
+        # Everything coming in dictionary form was made for a MySQLdb execute
         # call and contain a '%%' escaping of '%' characters for MySQLdb
         # that we should remove here.
         grant_sql = grant.get('qry', 'undefined').replace('%%', '%')

@@ -484,7 +484,7 @@ def _execute(cur, qry, args=None):
     query. For example '%' characters on the query must be encoded as '%%' and
     will be restored as '%' when arguments are applied. But when there're no
     arguments the '%%' is not managed. We cannot apply Identifier quoting in a
-    predictible way if the query are not always applying the same filters. So
+    predictable way if the query are not always applying the same filters. So
     this wrapper ensure this escape is not made if no arguments are used.
     '''
     if args is None or args == {}:

@@ -1019,7 +1019,7 @@ def user_exists(user,
     '''
     dbc = _connect(**connection_args)
     # Did we fail to connect with the user we are checking
-    # Its password might have previousely change with the same command/state
+    # Its password might have previously change with the same command/state
     if dbc is None \
             and __context__['mysql.error'] \
                 .startswith("MySQL Error 1045: Access denied for user '{0}'@".format(user)) \

@@ -1126,7 +1126,7 @@ def create_metadata(name,
                     password=None,
                     runas=None):
     '''
-    Get lifecycle informations about an extension
+    Get lifecycle information about an extension
 
     CLI Example:
 

@@ -498,7 +498,7 @@ def install(name=None, refresh=False, pkgs=None, saltenv='base', **kwargs):
     if salt.utils.is_true(refresh):
         refresh_db()
 
-    # Ignore pkg_type from parse_targets, Windows does not suport the "sources"
+    # Ignore pkg_type from parse_targets, Windows does not support the "sources"
     # argument
     pkg_params = __salt__['pkg_resource.parse_targets'](name,
                                                         pkgs,

@@ -750,7 +750,7 @@ def bootstrap(directory='.',
             gid = __salt__['user.info'](runas)['gid']
             os.chown('bootstrap.py', uid, gid)
         except (IOError, OSError) as exc:
-            # dont block here, try to execute it if can pass
+            # don't block here, try to execute it if can pass
             _logger.error('BUILDOUT bootstrap permissions error:'
                           ' {0}'.format(exc),
                           exc_info=_logger.isEnabledFor(logging.DEBUG))

@@ -357,7 +357,7 @@ class BaseSaltAPIHandler(tornado.web.RequestHandler, SaltClientsMixIn):
                 yaml.safe_load, default_flow_style=False),
             'text/yaml': functools.partial(
                 yaml.safe_load, default_flow_style=False),
-            # because people are terrible and dont mean what they say
+            # because people are terrible and don't mean what they say
             'text/plain': json.loads
         }
 

@@ -285,7 +285,7 @@ def _listeners_present(
     ret = {'result': None, 'comment': '', 'changes': {}}
     lb = __salt__['boto_elb.get_elb_config'](name, region, key, keyid, profile)
     if not lb:
-        msg = '{0} ELB configuration could not be retreived.'.format(name)
+        msg = '{0} ELB configuration could not be retrieved.'.format(name)
         ret['comment'] = msg
         ret['result'] = False
         return ret

@@ -286,7 +286,7 @@ def _rules_present(
     sg = __salt__['boto_secgroup.get_config'](name, None, region, key, keyid,
                                               profile, vpc_id)
     if not sg:
-        msg = '{0} security group configuration could not be retreived.'
+        msg = '{0} security group configuration could not be retrieved.'
         ret['comment'] = msg.format(name)
         ret['result'] = False
         return ret

@@ -70,7 +70,7 @@ def run(name,
         grain to store the output (need output=grain)
 
     key:
-        the specified grain will be treated as a dictionnary, the result
+        the specified grain will be treated as a dictionary, the result
         of this state will be stored under the specified key.
 
     overwrite:

@@ -905,7 +905,7 @@ def installed(
                 changes[change_name]['old'] += '\n'
                 changes[change_name]['old'] += '{0}'.format(i['changes']['old'])
 
-    # Any requested packages that were not targetted for install or reinstall
+    # Any requested packages that were not targeted for install or reinstall
     if not_modified:
         if sources:
             summary = ', '.join(not_modified)

@@ -394,7 +394,7 @@ def installed(name, categories=None, includes=None, retries=10):
 
     name:
         if categories is left empty, it will be assumed that you are passing the category option
-        through the name. These are seperate because you can only have one name, but can have
+        through the name. These are separate because you can only have one name, but can have
         multiple categories.
 
     categories:

@@ -407,7 +407,7 @@ def installed(name, categories=None, includes=None, retries=10):
         Update Rollups
 
     includes:
-        a list of features of the updates to cull by. availble features:
+        a list of features of the updates to cull by. available features:
         'UI' - User interaction required, skipped by default
         'downloaded' - Already downloaded, skipped by default (downloading)
         'present' - Present on computer, included by default (installing)

@@ -466,7 +466,7 @@ def downloaded(name, categories=None, includes=None, retries=10):
 
     name:
         if categories is left empty, it will be assumed that you are passing the category option
-        through the name. These are seperate because you can only have one name, but can have
+        through the name. These are separate because you can only have one name, but can have
         multiple categories.
 
     categories:

@@ -479,7 +479,7 @@ def downloaded(name, categories=None, includes=None, retries=10):
         Update Rollups
 
     includes:
-        a list of features of the updates to cull by. availble features:
+        a list of features of the updates to cull by. available features:
         'UI' - User interaction required, skipped by default
         'downloaded' - Already downloaded, skipped by default (downloading)
         'present' - Present on computer, included by default (installing)

@@ -1250,7 +1250,7 @@ class IPv4Network(_BaseV4, _BaseNet):
           '192.168.1.1'
           '192.168.1.1/255.255.255.255'
           '192.168.1.1/32'
-        are also functionaly equivalent. That is to say, failing to
+        are also functionally equivalent. That is to say, failing to
         provide a subnetmask will create an object with a mask of /32.
 
         If the mask (portion after the / in the argument) is given in

@@ -221,7 +221,7 @@ class Schedule(object):
         python data-structures to make sure, you pass correct dictionaries.
         '''
 
-        # we dont do any checking here besides making sure its a dict.
+        # we don't do any checking here besides making sure its a dict.
         # eval() already does for us and raises errors accordingly
         if not isinstance(data, dict):
             raise ValueError('Scheduled jobs have to be of type dict.')

@@ -55,7 +55,7 @@ _master_options=(
     '(-T --make-token)'{-T,--make-token}'[Generate and save an authentication token for re-use.]'
     "--return[Set an alternative return method.]:Returners:_path_files -W '$salt_dir/returners' -g '[^_]*.py(\:r)'"
     '(-d --doc --documentation)'{-d,--doc,--documentation}"[Return the documentation for the specified module]:Module:_path_files -W '$salt_dir/modules' -g '[^_]*.py(\:r)'"
-    '--args-separator[Set the special argument used as a delimiter between command arguments of compound commands.]:Arg seperator:'
+    '--args-separator[Set the special argument used as a delimiter between command arguments of compound commands.]:Arg separator:'
 )
 
 _minion_options=(

@@ -111,7 +111,7 @@ class Boto_SecgroupTestCase(TestCase):
     def test_get_group_id_ec2_classic(self):
         '''
         tests that given a name of a group in EC2-Classic that the correct
-        group id will be retreived
+        group id will be retrieved
         '''
         group_name = _random_group_name()
         group_description = 'test_get_group_id_ec2_classic'

@@ -134,7 +134,7 @@ class Boto_SecgroupTestCase(TestCase):
     def test_get_group_id_ec2_vpc(self):
         '''
        tests that given a name of a group in EC2-VPC that the correct
-        group id will be retreived
+        group id will be retrieved
         '''
         group_name = _random_group_name()
         group_description = 'test_get_group_id_ec2_vpc'

@@ -33,10 +33,10 @@ class GrainsModuleTestCase(TestCase):
         res = grainsmod.filter_by(dict1, grain='xxx', default='C')
         self.assertEqual(res, {'D': {'E': 'F', 'G': 'H'}})
 
-        # add a merge dictionnary, F disapears
+        # add a merge dictionary, F disappears
         res = grainsmod.filter_by(dict1, grain='xxx', merge=mdict, default='C')
         self.assertEqual(res, {'D': {'E': 'I', 'G': 'H'}, 'J': 'K'})
-        # dict1 was altered, restablish
+        # dict1 was altered, reestablish
         dict1 = {'A': 'B', 'C': {'D': {'E': 'F', 'G': 'H'}}}
 
         # default is not present in dict1, check we only have merge in result

@@ -64,13 +64,13 @@ class GrainsModuleTestCase(TestCase):
         self.assertEqual(res, {'D': {'E': 'F', 'G': 'H'}})
         res = grainsmod.filter_by(dict1, merge=mdict, default='C')
         self.assertEqual(res, {'D': {'E': 'I', 'G': 'H'}, 'J': 'K'})
-        # dict1 was altered, restablish
+        # dict1 was altered, reestablish
         dict1 = {'A': 'B', 'C': {'D': {'E': 'F', 'G': 'H'}}}
         res = grainsmod.filter_by(dict1, merge=mdict, default='Z')
         self.assertEqual(res, mdict)
         res = grainsmod.filter_by(dict1, default='Z')
         self.assertIs(res, None)
-        # this one is in fact a traceback in updatedict, merging a string with a dictionnary
+        # this one is in fact a traceback in updatedict, merging a string with a dictionary
         self.assertRaises(
             TypeError,
             grainsmod.filter_by,

@@ -87,11 +87,11 @@ class GrainsModuleTestCase(TestCase):
         self.assertEqual(res, {'D': {'E': 'F', 'G': 'H'}})
         res = grainsmod.filter_by(dict1, merge=mdict, default='A')
         self.assertEqual(res, {'D': {'E': 'I', 'G': 'H'}, 'J': 'K'})
-        # dict1 was altered, restablish
+        # dict1 was altered, reestablish
         dict1 = {'A': 'B', 'MockedOS': {'D': {'E': 'F', 'G': 'H'}}}
         res = grainsmod.filter_by(dict1, merge=mdict, default='Z')
         self.assertEqual(res, {'D': {'E': 'I', 'G': 'H'}, 'J': 'K'})
-        # dict1 was altered, restablish
+        # dict1 was altered, reestablish
         dict1 = {'A': 'B', 'MockedOS': {'D': {'E': 'F', 'G': 'H'}}}
         res = grainsmod.filter_by(dict1, default='Z')
         self.assertEqual(res, {'D': {'E': 'F', 'G': 'H'}})

@@ -68,7 +68,7 @@ class BaseCherryPyTestCase(TestCase):
 
         * Responses are dispatched to a mounted application's
           page handler, if found. This is the reason why you
-          must indicate which app you are targetting with
+          must indicate which app you are targeting with
          this request by specifying its mount point.
 
         You can simulate various request settings by setting