Merge branch 'develop' into high-out-by_id

Nicole Thomas 2017-09-28 13:27:47 -04:00 committed by GitHub
commit 3f77571dad
97 changed files with 11862 additions and 645 deletions


@ -692,6 +692,12 @@
# for a full explanation.
#multiprocessing: True
# Limit the maximum number of processes or threads created by salt-minion.
# This is useful to avoid resource exhaustion in case the minion receives more
# publications than it is able to handle, as it limits the number of spawned
# processes or threads. -1 is the default and disables the limit.
#process_count_max: -1
##### Logging settings #####
##########################################


@ -2423,6 +2423,23 @@ executed in a thread.
multiprocessing: True
.. conf_minion:: process_count_max
``process_count_max``
---------------------
.. versionadded:: Oxygen
Default: ``-1``
Limit the maximum number of processes or threads created by ``salt-minion``.
This is useful to avoid resource exhaustion in case the minion receives more
publications than it is able to handle, as it limits the number of spawned
processes or threads. ``-1`` is the default and disables the limit.
.. code-block:: yaml
process_count_max: -1
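For example, to cap the minion at a fixed number of concurrent job processes (the value ``20`` below is purely illustrative):

.. code-block:: yaml

    process_count_max: 20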
.. _minion-logging-settings:


@ -25,6 +25,9 @@ configuration:
- web*:
- test.*
- pkg.*
# Allow managers to use saltutil module functions
manager_.*:
- saltutil.*
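For illustration, such a regex entry sits alongside ordinary user entries. A sketch, assuming the surrounding example is an ``external_auth`` block using the ``pam`` backend (the ``fred`` entry and the backend name are assumptions, not part of the change; only the ``manager_.*`` entry comes from the lines above):

.. code-block:: yaml

    external_auth:
      pam:
        fred:
          - test.ping
        # Any user whose name matches manager_.* may call saltutil functions
        manager_.*:
          - saltutil.*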
Permission Issues
-----------------


@ -1,5 +1,5 @@
salt.runners.auth module
========================
salt.runners.auth
=================
.. automodule:: salt.runners.auth
:members:


@ -1,5 +1,5 @@
salt.runners.event module
=========================
salt.runners.event
==================
.. automodule:: salt.runners.event
:members:


@ -1,5 +1,5 @@
salt.runners.smartos_vmadm module
=================================
salt.runners.smartos_vmadm
==========================
.. automodule:: salt.runners.smartos_vmadm
:members:


@ -1,5 +1,5 @@
salt.runners.vistara module
===========================
salt.runners.vistara
====================
.. automodule:: salt.runners.vistara
:members:

File diff suppressed because it is too large.


@ -117,6 +117,194 @@ file. For example:
These commands will run in sequence **before** the bootstrap script is executed.
New pillar/master_tops module called saltclass
----------------------------------------------
This module clones the behaviour of reclass (http://reclass.pantsfullofunix.net/), without the need for an external application, and adds several features to improve flexibility.
Saltclass lets you define your nodes in simple ``yaml`` files (``.yml``) through hierarchical class inheritance, with the possibility to override pillars down the tree.
**Features**
- Define your nodes through hierarchical class inheritance
- Reuse your reclass data with minimal modifications
- applications => states
- parameters => pillars
- Use Jinja templating in your yaml definitions
- Access to the following Salt objects in Jinja
- ``__opts__``
- ``__salt__``
- ``__grains__``
- ``__pillars__``
- ``minion_id``
- Choose how to merge or override your lists using the ``^`` character (see examples)
- Expand variables with ``${}``, with the possibility to escape them if needed with ``\${}`` (see examples)
- Ignore a missing node/class and simply return an empty result instead of breaking the pillar module completely - the problem will be logged
An example subset of data is available here: http://git.mauras.ch/salt/saltclass/src/master/examples
========================== ===========
Terms usable in yaml files Description
========================== ===========
classes                    A list of classes that will be processed in order
states                     A list of states that will be returned by the master_tops function
pillars                    A YAML dictionary that will be returned by the ext_pillar function
environment                Node saltenv that will be used by master_tops
========================== ===========
A class consists of:
- zero or more parent classes
- zero or more states
- any number of pillars
A child class can override pillars from a parent class.
A node definition is a class in itself with an added ``environment`` parameter for ``saltenv`` definition.
**class names**
Class names mimic the Salt way of defining states and pillar files.
This means that the ``default.users`` class name will correspond to one of these:
- ``<saltclass_path>/classes/default/users.yml``
- ``<saltclass_path>/classes/default/users/init.yml``
**Saltclass tree**
A saltclass tree would look like this:
.. code-block:: text
<saltclass_path>
├── classes
│ ├── app
│ │ ├── borgbackup.yml
│ │ └── ssh
│ │ └── server.yml
│ ├── default
│ │ ├── init.yml
│ │ ├── motd.yml
│ │ └── users.yml
│ ├── roles
│ │ ├── app.yml
│ │ └── nginx
│ │ ├── init.yml
│ │ └── server.yml
│ └── subsidiaries
│ ├── gnv.yml
│ ├── qls.yml
│ └── zrh.yml
└── nodes
├── geneva
│ └── gnv.node1.yml
├── lausanne
│ ├── qls.node1.yml
│ └── qls.node2.yml
├── node127.yml
└── zurich
├── zrh.node1.yml
├── zrh.node2.yml
└── zrh.node3.yml
**Examples**
``<saltclass_path>/nodes/lausanne/qls.node1.yml``
.. code-block:: yaml
environment: base
classes:
{% for class in ['default'] %}
- {{ class }}
{% endfor %}
- subsidiaries.{{ __grains__['id'].split('.')[0] }}
``<saltclass_path>/classes/default/init.yml``
.. code-block:: yaml
classes:
- default.users
- default.motd
states:
- openssh
pillars:
default:
network:
dns:
srv1: 192.168.0.1
srv2: 192.168.0.2
domain: example.com
ntp:
srv1: 192.168.10.10
srv2: 192.168.10.20
``<saltclass_path>/classes/subsidiaries/gnv.yml``
.. code-block:: yaml
pillars:
default:
network:
sub: Geneva
dns:
srv1: 10.20.0.1
srv2: 10.20.0.2
srv3: 192.168.1.1
domain: gnv.example.com
users:
adm1:
uid: 1210
gid: 1210
gecos: 'Super user admin1'
homedir: /srv/app/adm1
adm3:
uid: 1203
gid: 1203
gecos: 'Super user admin3'
Variable expansions:
Escaped variables are rendered as is - ``${test}``
Missing variables are rendered as is - ``${net:dns:srv2}``
.. code-block:: yaml
pillars:
app:
config:
dns:
srv1: ${default:network:dns:srv1}
srv2: ${net:dns:srv2}
uri: https://application.domain/call?\${test}
prod_parameters:
- p1
- p2
- p3
pkg:
- app-core
- app-backend
List override:
Not using ``^`` as the first entry will simply merge the lists
.. code-block:: yaml
pillars:
app:
pkg:
- ^
- app-frontend
**Known limitation**
Currently you can't have both a variable and an escaped variable in the same string, as the escaped one will not be rendered correctly - ``\${xx}`` will stay as-is instead of being rendered as ``${xx}``.
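A minimal illustration of this limitation, reusing hypothetical pillar keys in the style of the examples above:

.. code-block:: yaml

    pillars:
      app:
        config:
          # Fine on its own: the escaped variable renders as '${example}'
          doc: 'See \${example} for details'
          # Known limitation: mixing an expansion and an escaped variable
          # in one string - the escaped part stays as '\${example}'
          uri: https://${default:network:dns:srv1}/\${example}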
Newer PyWinRM Versions
----------------------


@ -369,46 +369,13 @@ class LoadAuth(object):
eauth_config = self.opts['external_auth'][eauth]
if not groups:
groups = []
group_perm_keys = [item for item in eauth_config if item.endswith('%')] # The configured auth groups
# First we need to know if the user is allowed to proceed via any of their group memberships.
group_auth_match = False
for group_config in group_perm_keys:
if group_config.rstrip('%') in groups:
group_auth_match = True
break
# If a group_auth_match is set it means only that we have a
# user which matches at least one or more of the groups defined
# in the configuration file.
external_auth_in_db = False
for entry in eauth_config:
if entry.startswith('^'):
external_auth_in_db = True
break
# If neither a catchall, a named membership or a group
# membership is found, there is no need to continue. Simply
# deny the user access.
if not ((name in eauth_config) |
('*' in eauth_config) |
group_auth_match | external_auth_in_db):
# Auth successful, but no matching user found in config
log.warning('Authorization failure occurred.')
return None
# We now have an authenticated session and it is time to determine
# what the user has access to.
auth_list = []
if name in eauth_config:
auth_list = eauth_config[name]
elif '*' in eauth_config:
auth_list = eauth_config['*']
if group_auth_match:
auth_list = self.ckminions.fill_auth_list_from_groups(
auth_list = self.ckminions.fill_auth_list(
eauth_config,
groups,
auth_list)
name,
groups)
auth_list = self.__process_acl(load, auth_list)


@ -481,18 +481,17 @@ def list_(bank):
Lists entries stored in the specified bank.
'''
redis_server = _get_redis_server()
bank_keys_redis_key = _get_bank_keys_redis_key(bank)
bank_keys = None
bank_redis_key = _get_bank_redis_key(bank)
try:
bank_keys = redis_server.smembers(bank_keys_redis_key)
banks = redis_server.smembers(bank_redis_key)
except (RedisConnectionError, RedisResponseError) as rerr:
mesg = 'Cannot list the Redis cache key {rkey}: {rerr}'.format(rkey=bank_keys_redis_key,
mesg = 'Cannot list the Redis cache key {rkey}: {rerr}'.format(rkey=bank_redis_key,
rerr=rerr)
log.error(mesg)
raise SaltCacheError(mesg)
if not bank_keys:
if not banks:
return []
return list(bank_keys)
return list(banks)
def contains(bank, key):
@ -500,15 +499,11 @@ def contains(bank, key):
Checks if the specified bank contains the specified key.
'''
redis_server = _get_redis_server()
bank_keys_redis_key = _get_bank_keys_redis_key(bank)
bank_keys = None
bank_redis_key = _get_bank_redis_key(bank)
try:
bank_keys = redis_server.smembers(bank_keys_redis_key)
return redis_server.sismember(bank_redis_key, key)
except (RedisConnectionError, RedisResponseError) as rerr:
mesg = 'Cannot retrieve the Redis cache key {rkey}: {rerr}'.format(rkey=bank_keys_redis_key,
mesg = 'Cannot retrieve the Redis cache key {rkey}: {rerr}'.format(rkey=bank_redis_key,
rerr=rerr)
log.error(mesg)
raise SaltCacheError(mesg)
if not bank_keys:
return False
return key in bank_keys


@ -3543,12 +3543,11 @@ def list_nodes_min(location=None, call=None):
for instance in instances:
if isinstance(instance['instancesSet']['item'], list):
for item in instance['instancesSet']['item']:
state = item['instanceState']['name']
name = _extract_name_tag(item)
id = item['instanceId']
items = instance['instancesSet']['item']
else:
item = instance['instancesSet']['item']
items = [instance['instancesSet']['item']]
for item in items:
state = item['instanceState']['name']
name = _extract_name_tag(item)
id = item['instanceId']


@ -101,7 +101,7 @@ __virtualname__ = 'libvirt'
log = logging.getLogger(__name__)
def libvirt_error_handler(ctx, error):
def libvirt_error_handler(ctx, error): # pylint: disable=unused-argument
'''
Redirect stderr prints from libvirt to salt logging.
'''


@ -7,6 +7,7 @@ XenServer Cloud Driver
The XenServer driver is designed to work with a Citrix XenServer.
Requires XenServer SDK
(can be downloaded from https://www.citrix.com/downloads/xenserver/product-software/ )
Place a copy of the XenAPI.py in the Python site-packages folder.
@ -157,13 +158,27 @@ def _get_session():
default=False,
search_global=False
)
try:
session = XenAPI.Session(url, ignore_ssl=ignore_ssl)
log.debug('url: {} user: {} password: {}, originator: {}'.format(
url,
user,
'XXX-pw-redacted-XXX',
originator))
session.xenapi.login_with_password(user, password, api_version, originator)
session.xenapi.login_with_password(
user, password, api_version, originator)
except XenAPI.Failure as ex:
pool_master_addr = str(ex.__dict__['details'][1])
slash_parts = url.split('/')
new_url = '/'.join(slash_parts[:2]) + '/' + pool_master_addr
session = XenAPI.Session(new_url)
log.debug('session is -> url: {} user: {} password: {}, originator:{}'.format(
new_url,
user,
'XXX-pw-redacted-XXX',
originator))
session.xenapi.login_with_password(
user, password, api_version, originator)
return session
@ -182,9 +197,14 @@ def list_nodes():
for vm in vms:
record = session.xenapi.VM.get_record(vm)
if not record['is_a_template'] and not record['is_control_domain']:
ret[record['name_label']] = {
'id': record['uuid'],
'image': record['other_config']['base_template_name'],
try:
base_template_name = record['other_config']['base_template_name']
except Exception:
base_template_name = None
log.debug('VM {} does not have the base_template_name attribute'.format(
record['name_label']))
ret[record['name_label']] = {'id': record['uuid'],
'image': base_template_name,
'name': record['name_label'],
'size': record['memory_dynamic_max'],
'state': record['power_state'],
@ -296,10 +316,17 @@ def list_nodes_full(session=None):
for vm in vms:
record = session.xenapi.VM.get_record(vm)
if not record['is_a_template'] and not record['is_control_domain']:
# deal with cases where the VM doesn't have 'base_template_name' attribute
try:
base_template_name = record['other_config']['base_template_name']
except Exception:
base_template_name = None
log.debug('VM {} does not have the base_template_name attribute'.format(
record['name_label']))
vm_cfg = session.xenapi.VM.get_record(vm)
vm_cfg['id'] = record['uuid']
vm_cfg['name'] = record['name_label']
vm_cfg['image'] = record['other_config']['base_template_name']
vm_cfg['image'] = base_template_name
vm_cfg['size'] = None
vm_cfg['state'] = record['power_state']
vm_cfg['private_ips'] = get_vm_ip(record['name_label'], session)
@ -455,8 +482,14 @@ def show_instance(name, session=None, call=None):
vm = _get_vm(name, session=session)
record = session.xenapi.VM.get_record(vm)
if not record['is_a_template'] and not record['is_control_domain']:
try:
base_template_name = record['other_config']['base_template_name']
except Exception:
base_template_name = None
log.debug('VM {} does not have the base_template_name attribute'.format(
record['name_label']))
ret = {'id': record['uuid'],
'image': record['other_config']['base_template_name'],
'image': base_template_name,
'name': record['name_label'],
'size': record['memory_dynamic_max'],
'state': record['power_state'],
@ -716,7 +749,7 @@ def _copy_vm(template=None, name=None, session=None, sr=None):
'''
Create VM by copy
This is faster and should be used if source and target are
This is slower and should be used if source and target are
NOT in the same storage repository
template = object reference


@ -337,6 +337,9 @@ VALID_OPTS = {
# Whether or not processes should be forked when needed. The alternative is to use threading.
'multiprocessing': bool,
# Maximum number of concurrently active processes at any given point in time
'process_count_max': int,
# Whether or not the salt minion should run scheduled mine updates
'mine_enabled': bool,
@ -746,6 +749,10 @@ VALID_OPTS = {
'fileserver_limit_traversal': bool,
'fileserver_verify_config': bool,
# Optionally apply '*' permissions to any user. By default '*' is a fallback case that is
# applied only if the user wasn't matched by other matchers.
'permissive_acl': bool,
# Optionally enables keeping the calculated user's auth list in the token file.
'keep_acl_in_token': bool,
@ -1258,6 +1265,7 @@ DEFAULT_MINION_OPTS = {
'auto_accept': True,
'autosign_timeout': 120,
'multiprocessing': True,
'process_count_max': -1,
'mine_enabled': True,
'mine_return_job': False,
'mine_interval': 60,
@ -1526,6 +1534,7 @@ DEFAULT_MASTER_OPTS = {
'external_auth': {},
'token_expire': 43200,
'token_expire_user_override': False,
'permissive_acl': False,
'keep_acl_in_token': False,
'eauth_acl_module': '',
'eauth_tokens': 'localfs',

salt/config/schemas/esxi.py (new file, 219 lines added)

@ -0,0 +1,219 @@
# -*- coding: utf-8 -*-
'''
:codeauthor: :email:`Alexandru Bleotu (alexandru.bleotu@morganstanley.com)`
salt.config.schemas.esxi
~~~~~~~~~~~~~~~~~~~~~~~~
ESXi host configuration schemas
'''
# Import Python libs
from __future__ import absolute_import
# Import Salt libs
from salt.utils.schema import (DefinitionsSchema,
Schema,
ComplexSchemaItem,
ArrayItem,
IntegerItem,
BooleanItem,
StringItem,
OneOfItem)
class VMwareScsiAddressItem(StringItem):
pattern = r'vmhba\d+:C\d+:T\d+:L\d+'
class DiskGroupDiskScsiAddressItem(ComplexSchemaItem):
'''
Schema item of an ESXi host disk group containing disk SCSI addresses
'''
title = 'Diskgroup Disk Scsi Address Item'
description = 'ESXi host diskgroup item containing disk SCSI addresses'
cache_scsi_addr = VMwareScsiAddressItem(
title='Cache Disk Scsi Address',
description='Specifies the SCSI address of the cache disk',
required=True)
capacity_scsi_addrs = ArrayItem(
title='Capacity Scsi Addresses',
description='Array with the SCSI addresses of the capacity disks',
items=VMwareScsiAddressItem(),
min_items=1)
class DiskGroupDiskIdItem(ComplexSchemaItem):
'''
Schema item of an ESXi host disk group containing disk ids
'''
title = 'Diskgroup Disk Id Item'
description = 'ESXi host diskgroup item containing disk ids'
cache_id = StringItem(
title='Cache Disk Id',
description='Specifies the id of the cache disk',
pattern=r'[^\s]+')
capacity_ids = ArrayItem(
title='Capacity Disk Ids',
description='Array with the ids of the capacity disks',
items=StringItem(pattern=r'[^\s]+'),
min_items=1)
class DiskGroupsDiskScsiAddressSchema(DefinitionsSchema):
'''
Schema of ESXi host diskgroups containing disk SCSI addresses
'''
title = 'Diskgroups Disk Scsi Address Schema'
description = 'ESXi host diskgroup schema containing disk SCSI addresses'
diskgroups = ArrayItem(
title='Diskgroups',
description='List of diskgroups in an ESXi host',
min_items=1,
items=DiskGroupDiskScsiAddressItem(),
required=True)
erase_disks = BooleanItem(
title='Erase Diskgroup Disks',
required=True)
class DiskGroupsDiskIdSchema(DefinitionsSchema):
'''
Schema of ESXi host diskgroups containing disk ids
'''
title = 'Diskgroups Disk Id Schema'
description = 'ESXi host diskgroup schema containing disk ids'
diskgroups = ArrayItem(
title='DiskGroups',
description='List of disk groups in an ESXi host',
min_items=1,
items=DiskGroupDiskIdItem(),
required=True)
class VmfsDatastoreDiskIdItem(ComplexSchemaItem):
'''
Schema item of a VMFS datastore referencing a backing disk id
'''
title = 'VMFS Datastore Disk Id Item'
description = 'VMFS datastore item referencing a backing disk id'
name = StringItem(
title='Name',
description='Specifies the name of the VMFS datastore',
required=True)
backing_disk_id = StringItem(
title='Backing Disk Id',
description=('Specifies the id of the disk backing the VMFS '
'datastore'),
pattern=r'[^\s]+',
required=True)
vmfs_version = IntegerItem(
title='VMFS Version',
description='VMFS version',
enum=[1, 2, 3, 5])
class VmfsDatastoreDiskScsiAddressItem(ComplexSchemaItem):
'''
Schema item of a VMFS datastore referencing a backing disk SCSI address
'''
title = 'VMFS Datastore Disk Scsi Address Item'
description = 'VMFS datastore item referencing a backing disk SCSI address'
name = StringItem(
title='Name',
description='Specifies the name of the VMFS datastore',
required=True)
backing_disk_scsi_addr = VMwareScsiAddressItem(
title='Backing Disk Scsi Address',
description=('Specifies the SCSI address of the disk backing the VMFS '
'datastore'),
required=True)
vmfs_version = IntegerItem(
title='VMFS Version',
description='VMFS version',
enum=[1, 2, 3, 5])
class VmfsDatastoreSchema(DefinitionsSchema):
'''
Schema of a VMFS datastore
'''
title = 'VMFS Datastore Schema'
description = 'Schema of a VMFS datastore'
datastore = OneOfItem(
items=[VmfsDatastoreDiskScsiAddressItem(),
VmfsDatastoreDiskIdItem()],
required=True)
class HostCacheSchema(DefinitionsSchema):
'''
Schema of ESXi host cache
'''
title = 'Host Cache Schema'
description = 'Schema of the ESXi host cache'
enabled = BooleanItem(
title='Enabled',
required=True)
datastore = VmfsDatastoreDiskScsiAddressItem(required=True)
swap_size = StringItem(
title='Host cache swap size (in GB or %)',
pattern=r'(\d+GiB)|(([0-9]|([1-9][0-9])|100)%)',
required=True)
erase_backing_disk = BooleanItem(
title='Erase Backup Disk',
required=True)
class SimpleHostCacheSchema(Schema):
'''
Simplified Schema of ESXi host cache
'''
title = 'Simple Host Cache Schema'
description = 'Simplified schema of the ESXi host cache'
enabled = BooleanItem(
title='Enabled',
required=True)
datastore_name = StringItem(title='Datastore Name',
required=True)
swap_size_MiB = IntegerItem(title='Host cache swap size in MiB',
minimum=1)
class EsxiProxySchema(Schema):
'''
Schema of the esxi proxy input
'''
title = 'Esxi Proxy Schema'
description = 'Esxi proxy schema'
additional_properties = False
proxytype = StringItem(required=True,
enum=['esxi'])
host = StringItem(pattern=r'[^\s]+') # Used when connecting directly
vcenter = StringItem(pattern=r'[^\s]+') # Used when connecting via a vCenter
esxi_host = StringItem()
username = StringItem()
passwords = ArrayItem(min_items=1,
items=StringItem(),
unique_items=True)
mechanism = StringItem(enum=['userpass', 'sspi'])
# TODO Should be changed when anyOf is supported for schemas
domain = StringItem()
principal = StringItem()
protocol = StringItem()
port = IntegerItem(minimum=1)


@ -14,6 +14,8 @@ from __future__ import absolute_import
# Import Salt libs
from salt.utils.schema import (Schema,
ArrayItem,
IntegerItem,
StringItem)
@ -31,3 +33,25 @@ class VCenterEntitySchema(Schema):
vcenter = StringItem(title='vCenter',
description='Specifies the vcenter hostname',
required=True)
class VCenterProxySchema(Schema):
'''
Schema for the configuration for the proxy to connect to a VCenter.
'''
title = 'VCenter Proxy Connection Schema'
description = 'Schema that describes the connection to a VCenter'
additional_properties = False
proxytype = StringItem(required=True,
enum=['vcenter'])
vcenter = StringItem(required=True, pattern=r'[^\s]+')
mechanism = StringItem(required=True, enum=['userpass', 'sspi'])
username = StringItem()
passwords = ArrayItem(min_items=1,
items=StringItem(),
unique_items=True)
domain = StringItem()
principal = StringItem(default='host')
protocol = StringItem(default='https')
port = IntegerItem(minimum=1)


@ -170,6 +170,14 @@ def clean_old_jobs(opts):
def mk_key(opts, user):
if HAS_PWD:
uid = None
try:
uid = pwd.getpwnam(user).pw_uid
except KeyError:
# User doesn't exist in the system
if opts['client_acl_verify']:
return None
if salt.utils.platform.is_windows():
# The username may contain '\' if it is in Windows
# 'DOMAIN\username' format. Fix this for the keyfile path.
@ -197,9 +205,9 @@ def mk_key(opts, user):
# Write access is necessary since on subsequent runs, if the file
# exists, it needs to be written to again. Windows enforces this.
os.chmod(keyfile, 0o600)
if HAS_PWD:
if HAS_PWD and uid is not None:
try:
os.chown(keyfile, pwd.getpwnam(user).pw_uid, -1)
os.chown(keyfile, uid, -1)
except OSError:
# The master is not being run as root and can therefore not
# chown the key file
@ -214,27 +222,26 @@ def access_keys(opts):
'''
# TODO: Need a way to get all available users for systems not supported by pwd module.
# For now users pattern matching will not work for publisher_acl.
users = []
keys = {}
publisher_acl = opts['publisher_acl']
acl_users = set(publisher_acl.keys())
if opts.get('user'):
acl_users.add(opts['user'])
acl_users.add(salt.utils.get_user())
for user in acl_users:
log.info('Preparing the %s key for local communication', user)
key = mk_key(opts, user)
if key is not None:
keys[user] = key
# Check other users matching ACL patterns
if opts['client_acl_verify'] and HAS_PWD:
log.profile('Beginning pwd.getpwall() call in masterarpi access_keys function')
for user in pwd.getpwall():
users.append(user.pw_name)
log.profile('End pwd.getpwall() call in masterarpi access_keys function')
for user in acl_users:
log.info('Preparing the %s key for local communication', user)
keys[user] = mk_key(opts, user)
# Check other users matching ACL patterns
if HAS_PWD:
for user in users:
user = user.pw_name
if user not in keys and salt.utils.check_whitelist_blacklist(user, whitelist=acl_users):
keys[user] = mk_key(opts, user)
log.profile('End pwd.getpwall() call in masterarpi access_keys function')
return keys


@ -442,6 +442,18 @@ class VMwareObjectRetrievalError(VMwareSaltError):
'''
class VMwareObjectExistsError(VMwareSaltError):
'''
Used when a VMware object exists
'''
class VMwareObjectNotFoundError(VMwareSaltError):
'''
Used when a VMware object was not found
'''
class VMwareApiError(VMwareSaltError):
'''
Used when representing a generic VMware API error


@ -16,6 +16,7 @@ import os
import json
import socket
import sys
import glob
import re
import platform
import logging
@ -65,6 +66,7 @@ __salt__ = {
'cmd.run_all': salt.modules.cmdmod._run_all_quiet,
'smbios.records': salt.modules.smbios.records,
'smbios.get': salt.modules.smbios.get,
'cmd.run_ps': salt.modules.cmdmod.powershell,
}
log = logging.getLogger(__name__)
@ -2472,3 +2474,119 @@ def default_gateway():
except Exception as exc:
pass
return grains
def fc_wwn():
'''
Return a list of Fibre Channel HBA WWNs
'''
grains = {}
grains['fc_wwn'] = False
if salt.utils.platform.is_linux():
grains['fc_wwn'] = _linux_wwns()
elif salt.utils.platform.is_windows():
grains['fc_wwn'] = _windows_wwns()
return grains
def iscsi_iqn():
'''
Return iSCSI IQN
'''
grains = {}
grains['iscsi_iqn'] = False
if salt.utils.platform.is_linux():
grains['iscsi_iqn'] = _linux_iqn()
elif salt.utils.platform.is_windows():
grains['iscsi_iqn'] = _windows_iqn()
elif salt.utils.platform.is_aix():
grains['iscsi_iqn'] = _aix_iqn()
return grains
def _linux_iqn():
'''
Return iSCSI IQN from a Linux host.
'''
ret = []
initiator = '/etc/iscsi/initiatorname.iscsi'
if os.path.isfile(initiator):
with salt.utils.files.fopen(initiator, 'r') as _iscsi:
for line in _iscsi:
if line.find('InitiatorName') != -1:
iqn = line.split('=')
ret.extend([iqn[1]])
return ret
def _aix_iqn():
'''
Return iSCSI IQN from an AIX host.
'''
ret = []
aixcmd = 'lsattr -E -l iscsi0 | grep initiator_name'
aixret = __salt__['cmd.run'](aixcmd)
if aixret[0].isalpha():
iqn = aixret.split()
ret.extend([iqn[1]])
return ret
def _linux_wwns():
'''
Return Fibre Channel port WWNs from a Linux host.
'''
ret = []
for fcfile in glob.glob('/sys/class/fc_host/*/port_name'):
with salt.utils.files.fopen(fcfile, 'r') as _wwn:
for line in _wwn:
ret.extend([line[2:]])
return ret
def _windows_iqn():
'''
Return iSCSI IQN from a Windows host.
'''
ret = []
wmic = salt.utils.path.which('wmic')
if not wmic:
return ret
namespace = r'\\root\WMI'
mspath = 'MSiSCSIInitiator_MethodClass'
get = 'iSCSINodeName'
cmdret = __salt__['cmd.run_all'](
'{0} /namespace:{1} path {2} get {3} /format:table'.format(
wmic, namespace, mspath, get))
for line in cmdret['stdout'].splitlines():
if line[0].isalpha():
continue
ret.extend([line])
return ret
def _windows_wwns():
'''
Return Fibre Channel port WWNs from a Windows host.
'''
ps_cmd = r'Get-WmiObject -class MSFC_FibrePortHBAAttributes -namespace "root\WMI" | Select -Expandproperty Attributes | %{($_.PortWWN | % {"{0:x2}" -f $_}) -join ""}'
ret = []
cmdret = __salt__['cmd.run_ps'](ps_cmd)
for line in cmdret:
ret.append(line)
return ret


@ -1333,6 +1333,7 @@ class Minion(MinionBase):
self._send_req_async(load, timeout, callback=lambda f: None) # pylint: disable=unexpected-keyword-arg
return True
@tornado.gen.coroutine
def _handle_decoded_payload(self, data):
'''
Override this method if you wish to handle the decoded data
@ -1365,6 +1366,15 @@ class Minion(MinionBase):
self.functions, self.returners, self.function_errors, self.executors = self._load_modules()
self.schedule.functions = self.functions
self.schedule.returners = self.returners
process_count_max = self.opts.get('process_count_max')
if process_count_max > 0:
process_count = len(salt.utils.minion.running(self.opts))
while process_count >= process_count_max:
log.warn("Maximum number of processes reached while executing jid {0}, waiting...".format(data['jid']))
yield tornado.gen.sleep(10)
process_count = len(salt.utils.minion.running(self.opts))
# We stash an instance references to allow for the socket
# communication in Windows. You can't pickle functions, and thus
# python needs to be able to reconstruct the reference on the other


@ -42,6 +42,7 @@ from __future__ import absolute_import
import logging
import json
import yaml
import time
# Import salt libs
from salt.ext import six
@ -2148,6 +2149,7 @@ def list_entities_for_policy(policy_name, path_prefix=None, entity_filter=None,
salt myminion boto_iam.list_entities_for_policy mypolicy
'''
conn = _get_conn(region=region, key=key, keyid=keyid, profile=profile)
retries = 30
params = {}
for arg in ('path_prefix', 'entity_filter'):
@ -2155,6 +2157,7 @@ def list_entities_for_policy(policy_name, path_prefix=None, entity_filter=None,
params[arg] = locals()[arg]
policy_arn = _get_policy_arn(policy_name, region, key, keyid, profile)
while retries:
try:
allret = {
'policy_groups': [],
@ -2166,9 +2169,13 @@ def list_entities_for_policy(policy_name, path_prefix=None, entity_filter=None,
v.extend(ret.get('list_entities_for_policy_response', {}).get('list_entities_for_policy_result', {}).get(k))
return allret
except boto.exception.BotoServerError as e:
log.debug(e)
msg = 'Failed to list {0} policy entities.'
log.error(msg.format(policy_name))
if e.error_code == 'Throttling':
log.debug("Throttled by AWS API, will retry in 5 seconds...")
time.sleep(5)
retries -= 1
continue
log.error('Failed to list {0} policy entities: {1}'.format(policy_name, e.message))
return {}
return {}


@ -505,8 +505,15 @@ def update_parameter_group(name, parameters, apply_method="pending-reboot",
param_list = []
for key, value in six.iteritems(parameters):
item = (key, value, apply_method)
item = odict.OrderedDict()
item.update({'ParameterName': key})
item.update({'ApplyMethod': apply_method})
if type(value) is bool:
item.update({'ParameterValue': 'on' if value else 'off'})
else:
item.update({'ParameterValue': str(value)})
param_list.append(item)
if not len(param_list):
return {'results': False}
@ -843,6 +850,7 @@ def describe_parameters(name, Source=None, MaxRecords=None, Marker=None,
'message': 'Could not establish a connection to RDS'}
kwargs = {}
kwargs.update({'DBParameterGroupName': name})
for key in ('Marker', 'Source'):
if locals()[key] is not None:
kwargs[key] = str(locals()[key])
@ -850,21 +858,18 @@ def describe_parameters(name, Source=None, MaxRecords=None, Marker=None,
if locals()['MaxRecords'] is not None:
kwargs['MaxRecords'] = int(locals()['MaxRecords'])
r = conn.describe_db_parameters(DBParameterGroupName=name, **kwargs)
pag = conn.get_paginator('describe_db_parameters')
pit = pag.paginate(**kwargs)
if not r:
return {'result': False,
'message': 'Failed to get RDS parameters for group {0}.'
.format(name)}
results = r['Parameters']
keys = ['ParameterName', 'ParameterValue', 'Description',
'Source', 'ApplyType', 'DataType', 'AllowedValues',
'IsModifieable', 'MinimumEngineVersion', 'ApplyMethod']
parameters = odict.OrderedDict()
ret = {'result': True}
for result in results:
for p in pit:
for result in p['Parameters']:
data = odict.OrderedDict()
for k in keys:
data[k] = result.get(k)


@ -599,9 +599,14 @@ def exists(vpc_id=None, name=None, cidr=None, tags=None, region=None, key=None,
try:
vpc_ids = _find_vpcs(vpc_id=vpc_id, vpc_name=name, cidr=cidr, tags=tags,
region=region, key=key, keyid=keyid, profile=profile)
except BotoServerError as err:
boto_err = salt.utils.boto.get_error(err)
if boto_err.get('aws', {}).get('code') == 'InvalidVpcID.NotFound':
# VPC was not found: handle the error and return False.
return {'exists': False}
return {'error': boto_err}
return {'exists': bool(vpc_ids)}
except BotoServerError as e:
return {'error': salt.utils.boto.get_error(e)}
def create(cidr_block, instance_tenancy=None, vpc_name=None,
@ -723,12 +728,22 @@ def describe(vpc_id=None, vpc_name=None, region=None, key=None,
try:
conn = _get_conn(region=region, key=key, keyid=keyid, profile=profile)
vpc_id = check_vpc(vpc_id, vpc_name, region, key, keyid, profile)
except BotoServerError as err:
boto_err = salt.utils.boto.get_error(err)
if boto_err.get('aws', {}).get('code') == 'InvalidVpcID.NotFound':
# VPC was not found: handle the error and return None.
return {'vpc': None}
return {'error': boto_err}
if not vpc_id:
return {'vpc': None}
filter_parameters = {'vpc_ids': vpc_id}
try:
vpcs = conn.get_all_vpcs(**filter_parameters)
except BotoServerError as err:
return {'error': salt.utils.boto.get_error(err)}
if vpcs:
vpc = vpcs[0] # Found!
@ -742,9 +757,6 @@ def describe(vpc_id=None, vpc_name=None, region=None, key=None,
else:
return {'vpc': None}
except BotoServerError as e:
return {'error': salt.utils.boto.get_error(e)}
def describe_vpcs(vpc_id=None, name=None, cidr=None, tags=None,
region=None, key=None, keyid=None, profile=None):
@ -809,7 +821,7 @@ def _find_subnets(subnet_name=None, vpc_id=None, cidr=None, tags=None, conn=None
Given subnet properties, find and return matching subnet ids
'''
if not any(subnet_name, tags, cidr):
if not any([subnet_name, tags, cidr]):
raise SaltInvocationError('At least one of the following must be '
'specified: subnet_name, cidr or tags.')
@ -927,25 +939,31 @@ def subnet_exists(subnet_id=None, name=None, subnet_name=None, cidr=None,
try:
conn = _get_conn(region=region, key=key, keyid=keyid, profile=profile)
filter_parameters = {'filters': {}}
except BotoServerError as err:
return {'error': salt.utils.boto.get_error(err)}
filter_parameters = {'filters': {}}
if subnet_id:
filter_parameters['subnet_ids'] = [subnet_id]
if subnet_name:
filter_parameters['filters']['tag:Name'] = subnet_name
if cidr:
filter_parameters['filters']['cidr'] = cidr
if tags:
for tag_name, tag_value in six.iteritems(tags):
filter_parameters['filters']['tag:{0}'.format(tag_name)] = tag_value
if zones:
filter_parameters['filters']['availability_zone'] = zones
try:
subnets = conn.get_all_subnets(**filter_parameters)
except BotoServerError as err:
boto_err = salt.utils.boto.get_error(err)
if boto_err.get('aws', {}).get('code') == 'InvalidSubnetID.NotFound':
# Subnet was not found: handle the error and return False.
return {'exists': False}
return {'error': boto_err}
log.debug('The filters criteria {0} matched the following subnets:{1}'.format(filter_parameters, subnets))
if subnets:
log.info('Subnet {0} exists.'.format(subnet_name or subnet_id))
@ -953,8 +971,6 @@ def subnet_exists(subnet_id=None, name=None, subnet_name=None, cidr=None,
else:
log.info('Subnet {0} does not exist.'.format(subnet_name or subnet_id))
return {'exists': False}
except BotoServerError as e:
return {'error': salt.utils.boto.get_error(e)}
def get_subnet_association(subnets, region=None, key=None, keyid=None,


@ -56,3 +56,7 @@ def cmd(command, *args, **kwargs):
proxy_cmd = proxy_prefix + '.ch_config'
return __proxy__[proxy_cmd](command, *args, **kwargs)
def get_details():
return __proxy__['esxi.get_details']()


@ -2318,14 +2318,14 @@ def replace(path,
if not_found_content is None:
not_found_content = repl
if prepend_if_not_found:
new_file.insert(0, not_found_content + b'\n')
new_file.insert(0, not_found_content + salt.utils.to_bytes(os.linesep))
else:
# append_if_not_found
# Make sure we have a newline at the end of the file
if 0 != len(new_file):
if not new_file[-1].endswith(b'\n'):
new_file[-1] += b'\n'
new_file.append(not_found_content + b'\n')
if not new_file[-1].endswith(salt.utils.to_bytes(os.linesep)):
new_file[-1] += salt.utils.to_bytes(os.linesep)
new_file.append(not_found_content + salt.utils.to_bytes(os.linesep))
has_changes = True
if not dry_run:
try:
@ -2336,9 +2336,9 @@ def replace(path,
raise CommandExecutionError("Exception: {0}".format(exc))
# write new content in the file while avoiding partial reads
try:
fh_ = salt.utils.atomicfile.atomic_open(path, 'w')
fh_ = salt.utils.atomicfile.atomic_open(path, 'wb')
for line in new_file:
fh_.write(salt.utils.stringutils.to_str(line))
fh_.write(salt.utils.stringutils.to_bytes(line))
finally:
fh_.close()
@ -2508,9 +2508,10 @@ def blockreplace(path,
try:
fi_file = fileinput.input(path,
inplace=False, backup=False,
bufsize=1, mode='r')
bufsize=1, mode='rb')
for line in fi_file:
line = salt.utils.to_str(line)
result = line
if marker_start in line:
@ -2523,14 +2524,24 @@ def blockreplace(path,
# end of block detected
in_block = False
# Check for multi-line '\n' terminated content as split will
# introduce an unwanted additional new line.
if content and content[-1] == '\n':
content = content[:-1]
# Handle situations where there may be multiple types
# of line endings in the same file. Separate the content
# into lines. Account for Windows-style line endings
# using os.linesep, then by linux-style line endings
# using '\n'
split_content = []
for linesep_line in content.split(os.linesep):
for content_line in linesep_line.split('\n'):
split_content.append(content_line)
# Trim any trailing new lines to avoid unwanted
# additional new lines
while not split_content[-1]:
split_content.pop()
# push new block content in file
for cline in content.split('\n'):
new_file.append(cline + '\n')
for content_line in split_content:
new_file.append(content_line + os.linesep)
done = True
@ -2558,25 +2569,25 @@ def blockreplace(path,
if not done:
if prepend_if_not_found:
# add the markers and content at the beginning of file
new_file.insert(0, marker_end + '\n')
new_file.insert(0, marker_end + os.linesep)
if append_newline is True:
new_file.insert(0, content + '\n')
new_file.insert(0, content + os.linesep)
else:
new_file.insert(0, content)
new_file.insert(0, marker_start + '\n')
new_file.insert(0, marker_start + os.linesep)
done = True
elif append_if_not_found:
# Make sure we have a newline at the end of the file
if 0 != len(new_file):
if not new_file[-1].endswith('\n'):
new_file[-1] += '\n'
if not new_file[-1].endswith(os.linesep):
new_file[-1] += os.linesep
# add the markers and content at the end of file
new_file.append(marker_start + '\n')
new_file.append(marker_start + os.linesep)
if append_newline is True:
new_file.append(content + '\n')
new_file.append(content + os.linesep)
else:
new_file.append(content)
new_file.append(marker_end + '\n')
new_file.append(marker_end + os.linesep)
done = True
else:
raise CommandExecutionError(
@ -2607,9 +2618,9 @@ def blockreplace(path,
# write new content in the file while avoiding partial reads
try:
fh_ = salt.utils.atomicfile.atomic_open(path, 'w')
fh_ = salt.utils.atomicfile.atomic_open(path, 'wb')
for line in new_file:
fh_.write(line)
fh_.write(salt.utils.to_bytes(line))
finally:
fh_.close()
@ -3749,6 +3760,14 @@ def source_list(source, source_hash, saltenv):
single_src = next(iter(single))
single_hash = single[single_src] if single[single_src] else source_hash
urlparsed_single_src = _urlparse(single_src)
# Fix this for Windows
if salt.utils.is_windows():
# urlparse doesn't handle a local Windows path without the
# protocol indicator (file://). The scheme will be the
# drive letter instead of the protocol. So, we'll add the
# protocol and re-parse
if urlparsed_single_src.scheme.lower() in string.ascii_lowercase:
urlparsed_single_src = _urlparse('file://' + single_src)
proto = urlparsed_single_src.scheme
if proto == 'salt':
path, senv = salt.utils.url.parse(single_src)
@ -3760,10 +3779,15 @@ def source_list(source, source_hash, saltenv):
elif proto.startswith('http') or proto == 'ftp':
ret = (single_src, single_hash)
break
elif proto == 'file' and os.path.exists(urlparsed_single_src.path):
elif proto == 'file' and (
os.path.exists(urlparsed_single_src.netloc) or
os.path.exists(urlparsed_single_src.path) or
os.path.exists(os.path.join(
urlparsed_single_src.netloc,
urlparsed_single_src.path))):
ret = (single_src, single_hash)
break
elif single_src.startswith('/') and os.path.exists(single_src):
elif single_src.startswith(os.sep) and os.path.exists(single_src):
ret = (single_src, single_hash)
break
elif isinstance(single, six.string_types):
@ -3774,14 +3798,26 @@ def source_list(source, source_hash, saltenv):
ret = (single, source_hash)
break
urlparsed_src = _urlparse(single)
if salt.utils.is_windows():
# urlparse doesn't handle a local Windows path without the
# protocol indicator (file://). The scheme will be the
# drive letter instead of the protocol. So, we'll add the
# protocol and re-parse
if urlparsed_src.scheme.lower() in string.ascii_lowercase:
urlparsed_src = _urlparse('file://' + single)
proto = urlparsed_src.scheme
if proto == 'file' and os.path.exists(urlparsed_src.path):
if proto == 'file' and (
os.path.exists(urlparsed_src.netloc) or
os.path.exists(urlparsed_src.path) or
os.path.exists(os.path.join(
urlparsed_src.netloc,
urlparsed_src.path))):
ret = (single, source_hash)
break
elif proto.startswith('http') or proto == 'ftp':
ret = (single, source_hash)
break
elif single.startswith('/') and os.path.exists(single):
elif single.startswith(os.sep) and os.path.exists(single):
ret = (single, source_hash)
break
if ret is None:
@ -4281,7 +4317,8 @@ def extract_hash(hash_fn,
def check_perms(name, ret, user, group, mode, attrs=None, follow_symlinks=False):
'''
Check the permissions on files, modify attributes and chown if needed
Check the permissions on files, modify attributes and chown if needed. File
attributes are only verified if lsattr(1) is installed.
CLI Example:
@ -4293,6 +4330,7 @@ def check_perms(name, ret, user, group, mode, attrs=None, follow_symlinks=False)
``follow_symlinks`` option added
'''
name = os.path.expanduser(name)
lsattr_cmd = salt.utils.path.which('lsattr')
if not ret:
ret = {'name': name,
@ -4318,7 +4356,7 @@ def check_perms(name, ret, user, group, mode, attrs=None, follow_symlinks=False)
perms['lmode'] = salt.utils.normalize_mode(cur['mode'])
is_dir = os.path.isdir(name)
if not salt.utils.platform.is_windows() and not is_dir:
if not salt.utils.platform.is_windows() and not is_dir and lsattr_cmd:
# List attributes on file
perms['lattrs'] = ''.join(lsattr(name)[name])
# Remove attributes on file so changes can be enforced.
@ -4429,7 +4467,7 @@ def check_perms(name, ret, user, group, mode, attrs=None, follow_symlinks=False)
if __opts__['test'] is True and ret['changes']:
ret['result'] = None
if not salt.utils.platform.is_windows() and not is_dir:
if not salt.utils.platform.is_windows() and not is_dir and lsattr_cmd:
# Replace attributes on file if it had been removed
if perms['lattrs']:
chattr(name, operator='add', attributes=perms['lattrs'])


@ -101,8 +101,6 @@ def _construct_yaml_str(self, node):
Construct for yaml
'''
return self.construct_scalar(node)
YamlLoader.add_constructor(u'tag:yaml.org,2002:str',
_construct_yaml_str)
YamlLoader.add_constructor(u'tag:yaml.org,2002:timestamp',
_construct_yaml_str)


@ -83,7 +83,7 @@ def __virtual__():
return False, 'python kubernetes library not found'
if not salt.utils.is_windows():
if not salt.utils.platform.is_windows():
@contextmanager
def _time_limit(seconds):
def signal_handler(signum, frame):
@ -713,7 +713,7 @@ def delete_deployment(name, namespace='default', **kwargs):
namespace=namespace,
body=body)
mutable_api_response = api_response.to_dict()
if not salt.utils.is_windows():
if not salt.utils.platform.is_windows():
try:
with _time_limit(POLLING_TIME_LIMIT):
while show_deployment(name, namespace) is not None:


@ -68,9 +68,7 @@ class _Puppet(object):
self.vardir = 'C:\\ProgramData\\PuppetLabs\\puppet\\var'
self.rundir = 'C:\\ProgramData\\PuppetLabs\\puppet\\run'
self.confdir = 'C:\\ProgramData\\PuppetLabs\\puppet\\etc'
self.useshell = True
else:
self.useshell = False
self.puppet_version = __salt__['cmd.run']('puppet --version')
if 'Enterprise' in self.puppet_version:
self.vardir = '/var/opt/lib/pe-puppet'
@ -106,7 +104,10 @@ class _Puppet(object):
' --{0} {1}'.format(k, v) for k, v in six.iteritems(self.kwargs)]
)
return '{0} {1}'.format(cmd, args)
# Ensure that the puppet call will return 0 in case of exit code 2
if salt.utils.platform.is_windows():
return 'cmd /V:ON /c {0} {1} ^& if !ERRORLEVEL! EQU 2 (EXIT 0) ELSE (EXIT /B)'.format(cmd, args)
return '({0} {1}) || test $? -eq 2'.format(cmd, args)
def arguments(self, args=None):
'''
@ -169,12 +170,7 @@ def run(*args, **kwargs):
puppet.kwargs.update(salt.utils.args.clean_kwargs(**kwargs))
ret = __salt__['cmd.run_all'](repr(puppet), python_shell=puppet.useshell)
if ret['retcode'] in [0, 2]:
ret['retcode'] = 0
else:
ret['retcode'] = 1
ret = __salt__['cmd.run_all'](repr(puppet), python_shell=True)
return ret


@ -27,6 +27,20 @@ Installation Prerequisites
pip install purestorage
- Configure Pure Storage FlashArray authentication. Use one of the following
three methods.
1) From the minion config
.. code-block:: yaml
pure_tags:
fa:
san_ip: management vip or hostname for the FlashArray
api_token: A valid api token for the FlashArray being managed
2) From environment (PUREFA_IP and PUREFA_API)
3) From the pillar (PUREFA_IP and PUREFA_API)
:maintainer: Simon Dodsley (simon@purestorage.com)
:maturity: new
:requires: purestorage
@ -195,7 +209,7 @@ def snap_create(name, suffix=None):
Will return False if the volume selected to snap does not exist.
.. versionadded:: 2017.7.3
.. versionadded:: Oxygen
name : string
name of volume to snapshot
@ -231,7 +245,7 @@ def snap_delete(name, suffix=None, eradicate=False):
Will return False if selected snapshot does not exist.
.. versionadded:: 2017.7.3
.. versionadded:: Oxygen
name : string
name of volume
@ -273,7 +287,7 @@ def snap_eradicate(name, suffix=None):
Will return False if the snapshot is not in a deleted state.
.. versionadded:: 2017.7.3
.. versionadded:: Oxygen
name : string
name of volume
@ -306,7 +320,7 @@ def volume_create(name, size=None):
Will return False if volume already exists.
.. versionadded:: 2017.7.3
.. versionadded:: Oxygen
name : string
name of volume (truncated to 63 characters)
@ -344,7 +358,7 @@ def volume_delete(name, eradicate=False):
Will return False if the volume doesn't exist or is already in a deleted state.
.. versionadded:: 2017.7.3
.. versionadded:: Oxygen
name : string
name of volume
@ -383,7 +397,7 @@ def volume_eradicate(name):
Will return False if the volume is not in a deleted state.
.. versionadded:: 2017.7.3
.. versionadded:: Oxygen
name : string
name of volume
@ -413,7 +427,7 @@ def volume_extend(name, size):
Will return False if new size is less than or equal to existing size.
.. versionadded:: 2017.7.3
.. versionadded:: Oxygen
name : string
name of volume
@ -451,7 +465,7 @@ def snap_volume_create(name, target, overwrite=False):
Will return False if target volume already exists and
overwrite is not specified, or selected snapshot doesn't exist.
.. versionadded:: 2017.7.3
.. versionadded:: Oxygen
name : string
name of volume snapshot
@ -497,7 +511,7 @@ def volume_clone(name, target, overwrite=False):
Will return False if source volume doesn't exist, or
target volume already exists and overwrite not specified.
.. versionadded:: 2017.7.3
.. versionadded:: Oxygen
name : string
name of volume
@ -541,7 +555,7 @@ def volume_attach(name, host):
Host and volume must exist or else will return False.
.. versionadded:: 2017.7.3
.. versionadded:: Oxygen
name : string
name of volume
@ -574,7 +588,7 @@ def volume_detach(name, host):
Will return False if either host or volume do not exist, or
if selected volume isn't already connected to the host.
.. versionadded:: 2017.7.3
.. versionadded:: Oxygen
name : string
name of volume
@ -608,7 +622,7 @@ def host_create(name, iqn=None, wwn=None):
Fibre Channel parameters are not in a valid format.
See Pure Storage FlashArray documentation.
.. versionadded:: 2017.7.3
.. versionadded:: Oxygen
name : string
name of host (truncated to 63 characters)
@ -659,7 +673,7 @@ def host_update(name, iqn=None, wwn=None):
by another host, or are not in a valid format.
See Pure Storage FlashArray documentation.
.. versionadded:: 2017.7.3
.. versionadded:: Oxygen
name : string
name of host
@ -699,7 +713,7 @@ def host_delete(name):
Will return False if the host doesn't exist.
.. versionadded:: 2017.7.3
.. versionadded:: Oxygen
name : string
name of host
@ -735,7 +749,7 @@ def hg_create(name, host=None, volume=None):
Will return False if hostgroup already exists, or if
named host or volume do not exist.
.. versionadded:: 2017.7.3
.. versionadded:: Oxygen
name : string
name of hostgroup (truncated to 63 characters)
@ -791,7 +805,7 @@ def hg_update(name, host=None, volume=None):
Will return False if the hostgroup doesn't exist, or the host
or volume do not exist.
.. versionadded:: 2017.7.3
.. versionadded:: Oxygen
name : string
name of hostgroup
@ -837,7 +851,7 @@ def hg_delete(name):
Will return False if the hostgroup is already in a deleted state.
.. versionadded:: 2017.7.3
.. versionadded:: Oxygen
name : string
name of hostgroup
@ -875,7 +889,7 @@ def hg_remove(name, volume=None, host=None):
Will return False if the hostgroup does not exist, or the named host or volume are
not in the hostgroup.
.. versionadded:: 2017.7.3
.. versionadded:: Oxygen
name : string
name of hostgroup
@ -936,7 +950,7 @@ def pg_create(name, hostgroup=None, host=None, volume=None, enabled=True):
hostgroups, hosts or volumes
* Named type for protection group does not exist
.. versionadded:: 2017.7.3
.. versionadded:: Oxygen
name : string
name of protection group
@ -1029,7 +1043,7 @@ def pg_update(name, hostgroup=None, host=None, volume=None):
* Incorrect type selected for current protection group type
* Specified type does not exist
.. versionadded:: 2017.7.3
.. versionadded:: Oxygen
name : string
name of protection group
@ -1119,7 +1133,7 @@ def pg_delete(name, eradicate=False):
Will return False if protection group is already in a deleted state.
.. versionadded:: 2017.7.3
.. versionadded:: Oxygen
name : string
name of protection group
@ -1156,7 +1170,7 @@ def pg_eradicate(name):
Will return False if protection group is not in a deleted state.
.. versionadded:: 2017.7.3
.. versionadded:: Oxygen
name : string
name of protection group
@ -1188,7 +1202,7 @@ def pg_remove(name, hostgroup=None, host=None, volume=None):
* Protection group does not exist
* Specified type is not currently associated with the protection group
.. versionadded:: 2017.7.3
.. versionadded:: Oxygen
name : string
name of hostgroup


@ -464,7 +464,7 @@ def fcontext_get_policy(name, filetype=None, sel_type=None, sel_user=None, sel_l
cmd_kwargs['filetype'] = '[[:alpha:] ]+' if filetype is None else filetype_id_to_string(filetype)
cmd = 'semanage fcontext -l | egrep ' + \
"'^{filespec}{spacer}{filetype}{spacer}{sel_user}:{sel_role}:{sel_type}:{sel_level}$'".format(**cmd_kwargs)
current_entry_text = __salt__['cmd.shell'](cmd)
current_entry_text = __salt__['cmd.shell'](cmd, ignore_retcode=True)
if current_entry_text == '':
return None
ret = {}


@ -132,7 +132,7 @@ def procs():
uind = 0
pind = 0
cind = 0
plines = __salt__['cmd.run'](__grains__['ps']).splitlines()
plines = __salt__['cmd.run'](__grains__['ps'], python_shell=True).splitlines()
guide = plines.pop(0).split()
if 'USER' in guide:
uind = guide.index('USER')
@ -1417,7 +1417,7 @@ def pid(sig):
'''
cmd = __grains__['ps']
output = __salt__['cmd.run_stdout'](cmd)
output = __salt__['cmd.run_stdout'](cmd, python_shell=True)
pids = ''
for line in output.splitlines():

salt/modules/vcenter.py (new file, 29 lines added)

@ -0,0 +1,29 @@
# -*- coding: utf-8 -*-
'''
Module used to access the vcenter proxy connection methods
'''
from __future__ import absolute_import
# Import python libs
import logging
import salt.utils
log = logging.getLogger(__name__)
__proxyenabled__ = ['vcenter']
# Define the module's virtual name
__virtualname__ = 'vcenter'
def __virtual__():
'''
Only work on proxy
'''
if salt.utils.is_proxy():
return __virtualname__
return False
def get_details():
return __proxy__['vcenter.get_details']()

File diff suppressed because it is too large.


@ -110,7 +110,7 @@ def available(software=True,
Include software updates in the results (default is True)
drivers (bool):
Include driver updates in the results (default is False)
Include driver updates in the results (default is True)
summary (bool):
- True: Return a summary of updates available for each category.


@ -1347,6 +1347,7 @@ def install(name=None,
to_install = []
to_downgrade = []
to_reinstall = []
_available = {}
# The above three lists will be populated with tuples containing the
# package name and the string being used for this particular package
# modification. The reason for this method is that the string we use for


@ -77,6 +77,9 @@ def __virtual__():
) == 0:
return 'zfs'
if __grains__['kernel'] == 'OpenBSD':
return False
_zfs_fuse = lambda f: __salt__['service.' + f]('zfs-fuse')
if _zfs_fuse('available') and (_zfs_fuse('status') or _zfs_fuse('start')):
return 'zfs'


@ -343,14 +343,15 @@ def ext_pillar(minion_id,
if minion_id in match:
ngroup_dir = os.path.join(
nodegroups_dir, str(nodegroup))
ngroup_pillar.update(
ngroup_pillar = salt.utils.dictupdate.merge(ngroup_pillar,
_construct_pillar(ngroup_dir,
follow_dir_links,
keep_newline,
render_default,
renderer_blacklist,
renderer_whitelist,
template)
template),
strategy='recurse'
)
else:
if debug is True:


@ -374,20 +374,20 @@ def __virtual__():
return False
def ext_pillar(minion_id, repo):
def ext_pillar(minion_id, pillar, *repos): # pylint: disable=unused-argument
'''
Checkout the ext_pillar sources and compile the resulting pillar SLS
'''
opts = copy.deepcopy(__opts__)
opts['pillar_roots'] = {}
opts['__git_pillar'] = True
pillar = salt.utils.gitfs.GitPillar(opts)
pillar.init_remotes(repo, PER_REMOTE_OVERRIDES, PER_REMOTE_ONLY)
git_pillar = salt.utils.gitfs.GitPillar(opts)
git_pillar.init_remotes(repos, PER_REMOTE_OVERRIDES, PER_REMOTE_ONLY)
if __opts__.get('__role') == 'minion':
# If masterless, fetch the remotes. We'll need to remove this once
# we make the minion daemon able to run standalone.
pillar.fetch_remotes()
pillar.checkout()
git_pillar.fetch_remotes()
git_pillar.checkout()
ret = {}
merge_strategy = __opts__.get(
'pillar_source_merging_strategy',
@ -397,7 +397,14 @@ def ext_pillar(minion_id, repo):
'pillar_merge_lists',
False
)
for pillar_dir, env in six.iteritems(pillar.pillar_dirs):
for pillar_dir, env in six.iteritems(git_pillar.pillar_dirs):
# Map env if env == '__env__' before checking the env value
if env == '__env__':
env = opts.get('pillarenv') \
or opts.get('environment') \
or opts.get('git_pillar_base')
log.debug('__env__ maps to %s', env)
# If pillarenv is set, only grab pillars with that match pillarenv
if opts['pillarenv'] and env != opts['pillarenv']:
log.debug(
@ -406,7 +413,7 @@ def ext_pillar(minion_id, repo):
env, pillar_dir, opts['pillarenv']
)
continue
if pillar_dir in pillar.pillar_linked_dirs:
if pillar_dir in git_pillar.pillar_linked_dirs:
log.debug(
'git_pillar is skipping processing on %s as it is a '
'mounted repo', pillar_dir
@ -418,12 +425,6 @@ def ext_pillar(minion_id, repo):
'env \'%s\'', pillar_dir, env
)
if env == '__env__':
env = opts.get('pillarenv') \
or opts.get('environment') \
or opts.get('git_pillar_base')
log.debug('__env__ maps to %s', env)
pillar_roots = [pillar_dir]
if __opts__['git_pillar_includes']:
@ -433,7 +434,7 @@ def ext_pillar(minion_id, repo):
# list, so that its top file is sourced from the correct
# location and not from another git_pillar remote.
pillar_roots.extend(
[d for (d, e) in six.iteritems(pillar.pillar_dirs)
[d for (d, e) in six.iteritems(git_pillar.pillar_dirs)
if env == e and d != pillar_dir]
)


@ -90,7 +90,8 @@ class POSTGRESExtPillar(SqlBaseExtPillar):
conn = psycopg2.connect(host=_options['host'],
user=_options['user'],
password=_options['pass'],
dbname=_options['db'])
dbname=_options['db'],
port=_options['port'])
cursor = conn.cursor()
try:
yield cursor


@ -0,0 +1,163 @@
# -*- coding: utf-8 -*-
'''
Provide external pillar data from RethinkDB
.. versionadded:: Oxygen
:depends: rethinkdb (on the salt-master)
salt master rethinkdb configuration
===================================
These variables must be configured in your master configuration file.
* ``rethinkdb.host`` - The RethinkDB server. Defaults to ``'salt'``
* ``rethinkdb.port`` - The port the RethinkDB server listens on.
Defaults to ``'28015'``
* ``rethinkdb.database`` - The database to connect to.
Defaults to ``'salt'``
* ``rethinkdb.username`` - The username for connecting to RethinkDB.
Defaults to ``''``
* ``rethinkdb.password`` - The password for connecting to RethinkDB.
Defaults to ``''``
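A minimal master configuration sketch using these options (all values are placeholders):

.. code-block:: yaml

    rethinkdb.host: rethink.example.com
    rethinkdb.port: 28015
    rethinkdb.database: salt
    rethinkdb.username: salt_master
    rethinkdb.password: supersecret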
salt-master ext_pillar configuration
====================================
The ext_pillar function arguments are given in single line dictionary notation.
.. code-block:: yaml
ext_pillar:
- rethinkdb: {table: ext_pillar, id_field: minion_id, field: pillar_root, pillar_key: external_pillar}
In the example above, the following happens:
* The salt-master will look for external pillars in the 'ext_pillar' table
on the RethinkDB host
* The minion id will be matched against the 'minion_id' field
* Pillars will be retrieved from the nested field 'pillar_root'
* Found pillars will be merged inside a key called 'external_pillar'
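As a sketch, a matching document in the 'ext_pillar' table could look like this (the field names follow the example above; the minion id and pillar contents are hypothetical):

.. code-block:: yaml

    minion_id: web01.example.com
    pillar_root:
      app:
        version: 1.2.3
        workers: 4

With that configuration, minion ``web01.example.com`` would receive the contents of 'pillar_root' nested under the 'external_pillar' pillar key.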
Module Documentation
====================
'''
from __future__ import absolute_import
# Import python libraries
import logging
# Import 3rd party libraries
try:
import rethinkdb
HAS_RETHINKDB = True
except ImportError:
HAS_RETHINKDB = False
__virtualname__ = 'rethinkdb'
__opts__ = {
'rethinkdb.host': 'salt',
'rethinkdb.port': '28015',
'rethinkdb.database': 'salt',
'rethinkdb.username': None,
'rethinkdb.password': None
}
def __virtual__():
if not HAS_RETHINKDB:
return False
return True
# Configure logging
log = logging.getLogger(__name__)
def ext_pillar(minion_id,
pillar,
table='pillar',
id_field=None,
field=None,
pillar_key=None):
'''
Collect minion external pillars from a RethinkDB database
Arguments:
* `table`: The RethinkDB table containing external pillar information.
Defaults to ``'pillar'``
* `id_field`: Field in document containing the minion id.
If blank then we assume the table index matches minion ids
* `field`: Specific field in the document used for pillar data, if blank
then the entire document will be used
* `pillar_key`: The salt-master will nest found external pillars under
this key before merging into the minion pillars. If blank, external
pillars will be merged at top level
'''
host = __opts__['rethinkdb.host']
port = __opts__['rethinkdb.port']
database = __opts__['rethinkdb.database']
username = __opts__['rethinkdb.username']
password = __opts__['rethinkdb.password']
log.debug('Connecting to {0}:{1} as user \'{2}\' for RethinkDB ext_pillar'
.format(host, port, username))
# Connect to the database
conn = rethinkdb.connect(host=host,
port=port,
db=database,
user=username,
password=password)
data = None
try:
if id_field:
log.debug('ext_pillar.rethinkdb: looking up pillar. '
'table: {0}, field: {1}, minion: {2}'.format(
table, id_field, minion_id))
if field:
data = rethinkdb.table(table).filter(
{id_field: minion_id}).pluck(field).run(conn)
else:
data = rethinkdb.table(table).filter(
{id_field: minion_id}).run(conn)
else:
log.debug('ext_pillar.rethinkdb: looking up pillar. '
'table: {0}, field: id, minion: {1}'.format(
table, minion_id))
if field:
data = rethinkdb.table(table).get(minion_id).pluck(field).run(
conn)
else:
data = rethinkdb.table(table).get(minion_id).run(conn)
finally:
if conn.is_open():
conn.close()
if data.items:
# Return nothing if multiple documents are found for a minion
if len(data.items) > 1:
log.error('ext_pillar.rethinkdb: ambiguous documents found for '
'minion {0}'.format(minion_id))
return {}
else:
result = data.items.pop()
if pillar_key:
return {pillar_key: result}
return result
else:
# No document found in the database
log.debug('ext_pillar.rethinkdb: no document found')
return {}

62
salt/pillar/saltclass.py Normal file
View file

@ -0,0 +1,62 @@
# -*- coding: utf-8 -*-
'''
SaltClass Pillar Module
.. code-block:: yaml
ext_pillar:
- saltclass:
- path: /srv/saltclass
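As a rough sketch only (the directory layout and key names below follow the saltclass
conventions and are not part of this module's code), a node definition under the
configured path could look like:
.. code-block:: yaml

    # /srv/saltclass/nodes/minion1.yml
    environment: base
    classes:
      - roles.webserver
    pillars:
      app:
        port: 8080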
'''
# import python libs
from __future__ import absolute_import
import salt.utils.saltclass as sc
import logging
log = logging.getLogger(__name__)
def __virtual__():
'''
This module has no external dependencies
'''
return True
def ext_pillar(minion_id, pillar, *args, **kwargs):
'''
The node definitions path will be retrieved from args - or set to the default -
then added to the 'salt_data' dict that is passed to the 'get_pillars' function.
The 'salt_data' dict is a convenient way to pass all the required data to the function.
It contains:
- __opts__
- __salt__
- __grains__
- __pillar__
- minion_id
- path
If successful, the function will return a pillar dict for minion_id.
'''
# If path has not been set, make a default
for i in args:
if 'path' not in i:
path = '/srv/saltclass'
i['path'] = path
log.warning('path variable unset, using default: {0}'.format(path))
else:
path = i['path']
# Create a dict that will contain our salt dicts to pass it to reclass
salt_data = {
'__opts__': __opts__,
'__salt__': __salt__,
'__grains__': __grains__,
'__pillar__': pillar,
'minion_id': minion_id,
'path': path
}
return sc.get_pillars(minion_id, salt_data)

View file

@ -273,13 +273,22 @@ for standing up an ESXi host from scratch.
# Import Python Libs
from __future__ import absolute_import
import logging
import os
# Import Salt Libs
from salt.exceptions import SaltSystemExit
from salt.exceptions import SaltSystemExit, InvalidConfigError
from salt.config.schemas.esxi import EsxiProxySchema
from salt.utils.dictupdate import merge
# This must be present or the Salt loader won't load this module.
__proxyenabled__ = ['esxi']
# External libraries
try:
import jsonschema
HAS_JSONSCHEMA = True
except ImportError:
HAS_JSONSCHEMA = False
# Variables are scoped to this module so we can have persistent data
# across calls to fns in here.
@ -288,7 +297,6 @@ DETAILS = {}
# Set up logging
log = logging.getLogger(__file__)
# Define the module's virtual name
__virtualname__ = 'esxi'
@ -297,7 +305,7 @@ def __virtual__():
'''
Only load if the ESXi execution module is available.
'''
if 'vsphere.system_info' in __salt__:
if HAS_JSONSCHEMA:
return __virtualname__
return False, 'The ESXi Proxy Minion module did not load.'
@ -309,17 +317,32 @@ def init(opts):
ESXi devices, the host, login credentials, and, if configured,
the protocol and port are cached.
'''
if 'host' not in opts['proxy']:
log.critical('No \'host\' key found in pillar for this proxy.')
log.debug('Initting esxi proxy module in process \'{}\''
''.format(os.getpid()))
log.debug('Validating esxi proxy input')
schema = EsxiProxySchema.serialize()
log.trace('esxi_proxy_schema = {}'.format(schema))
proxy_conf = merge(opts.get('proxy', {}), __pillar__.get('proxy', {}))
log.trace('proxy_conf = {0}'.format(proxy_conf))
try:
jsonschema.validate(proxy_conf, schema)
except jsonschema.exceptions.ValidationError as exc:
raise InvalidConfigError(exc)
DETAILS['proxytype'] = proxy_conf['proxytype']
if ('host' not in proxy_conf) and ('vcenter' not in proxy_conf):
log.critical('Neither \'host\' nor \'vcenter\' keys found in pillar '
'for this proxy.')
return False
if 'username' not in opts['proxy']:
if 'host' in proxy_conf:
# We have started the proxy by connecting directly to the host
if 'username' not in proxy_conf:
log.critical('No \'username\' key found in pillar for this proxy.')
return False
if 'passwords' not in opts['proxy']:
if 'passwords' not in proxy_conf:
log.critical('No \'passwords\' key found in pillar for this proxy.')
return False
host = opts['proxy']['host']
host = proxy_conf['host']
# Get the correct login details
try:
@ -332,9 +355,66 @@ def init(opts):
DETAILS['host'] = host
DETAILS['username'] = username
DETAILS['password'] = password
DETAILS['protocol'] = opts['proxy'].get('protocol', 'https')
DETAILS['port'] = opts['proxy'].get('port', '443')
DETAILS['credstore'] = opts['proxy'].get('credstore')
DETAILS['protocol'] = proxy_conf.get('protocol')
DETAILS['port'] = proxy_conf.get('port')
return True
if 'vcenter' in proxy_conf:
vcenter = proxy_conf['vcenter']
if not proxy_conf.get('esxi_host'):
log.critical('No \'esxi_host\' key found in pillar for this proxy.')
return False
DETAILS['esxi_host'] = proxy_conf['esxi_host']
# We have started the proxy by connecting via the vCenter
if 'mechanism' not in proxy_conf:
log.critical('No \'mechanism\' key found in pillar for this proxy.')
return False
mechanism = proxy_conf['mechanism']
# Save mandatory fields in cache
for key in ('vcenter', 'mechanism'):
DETAILS[key] = proxy_conf[key]
if mechanism == 'userpass':
if 'username' not in proxy_conf:
log.critical('No \'username\' key found in pillar for this '
'proxy.')
return False
if 'passwords' not in proxy_conf or \
len(proxy_conf['passwords']) < 1:
log.critical('Mechanism is set to \'userpass\' , but no '
'\'passwords\' key found in pillar for this '
'proxy.')
return False
for key in ('username', 'passwords'):
DETAILS[key] = proxy_conf[key]
elif mechanism == 'sspi':
if 'domain' not in proxy_conf:
log.critical('Mechanism is set to \'sspi\' , but no '
'\'domain\' key found in pillar for this proxy.')
return False
if 'principal' not in proxy_conf:
log.critical('Mechanism is set to \'sspi\' , but no '
'\'principal\' key found in pillar for this '
'proxy.')
return False
for key in ('domain', 'principal'):
DETAILS[key] = proxy_conf[key]
if mechanism == 'userpass':
# Get the correct login details
log.debug('Retrieving credentials and testing vCenter connection'
' for mechanism \'userpass\'')
try:
username, password = find_credentials(DETAILS['vcenter'])
DETAILS['password'] = password
except SaltSystemExit as err:
log.critical('Error: {0}'.format(err))
return False
# Save optional
DETAILS['protocol'] = proxy_conf.get('protocol', 'https')
DETAILS['port'] = proxy_conf.get('port', '443')
DETAILS['credstore'] = proxy_conf.get('credstore')
def grains():
@ -358,8 +438,9 @@ def grains_refresh():
def ping():
'''
Check to see if the host is responding. Returns False if the host didn't
respond, True otherwise.
Returns True if the connection is to be done via a vCenter (no connection is
attempted). When connecting directly to an ESXi host, checks to see if the
host is responding.
CLI Example:
@ -367,7 +448,12 @@ def ping():
salt esxi-host test.ping
'''
# find_credentials(DETAILS['host'])
if DETAILS.get('esxi_host'):
return True
else:
# TODO Check connection if mechanism is SSPI
if DETAILS['mechanism'] == 'userpass':
find_credentials(DETAILS['host'])
try:
__salt__['vsphere.system_info'](host=DETAILS['host'],
username=DETAILS['username'],
@ -375,7 +461,6 @@ def ping():
except SaltSystemExit as err:
log.warning(err)
return False
return True
@ -461,3 +546,14 @@ def _grains(host, protocol=None, port=None):
port=port)
GRAINS_CACHE.update(ret)
return GRAINS_CACHE
def is_connected_via_vcenter():
return True if 'vcenter' in DETAILS else False
def get_details():
'''
Return the proxy details
'''
return DETAILS

338
salt/proxy/vcenter.py Normal file
View file

@ -0,0 +1,338 @@
# -*- coding: utf-8 -*-
'''
Proxy Minion interface module for managing VMWare vCenters.
:codeauthor: :email:`Rod McKenzie (roderick.mckenzie@morganstanley.com)`
:codeauthor: :email:`Alexandru Bleotu (alexandru.bleotu@morganstanley.com)`
Dependencies
============
- pyVmomi Python Module
pyVmomi
-------
PyVmomi can be installed via pip:
.. code-block:: bash
pip install pyVmomi
.. note::
Version 6.0 of pyVmomi has some problems with SSL error handling on certain
versions of Python. If using version 6.0 of pyVmomi, Python 2.6,
Python 2.7.9, or newer must be present. This is due to an upstream dependency
in pyVmomi 6.0 that is not supported in Python versions 2.7 to 2.7.8. If the
version of Python is not in the supported range, you will need to install an
earlier version of pyVmomi. See `Issue #29537`_ for more information.
.. _Issue #29537: https://github.com/saltstack/salt/issues/29537
Based on the note above, to install an earlier version of pyVmomi than the
version currently listed in PyPi, run the following:
.. code-block:: bash
pip install pyVmomi==5.5.0.2014.1.1
The 5.5.0.2014.1.1 is a known stable version that this original ESXi State
Module was developed against.
Configuration
=============
To use this proxy module, please use one of the following configurations:
.. code-block:: yaml
proxy:
proxytype: vcenter
vcenter: <ip or dns name of parent vcenter>
username: <vCenter username>
mechanism: userpass
passwords:
- first_password
- second_password
- third_password
proxy:
proxytype: vcenter
vcenter: <ip or dns name of parent vcenter>
username: <vCenter username>
domain: <user domain>
mechanism: sspi
principal: <host kerberos principal>
proxytype
^^^^^^^^^
The ``proxytype`` key and value pair is critical, as it tells Salt which
interface to load from the ``proxy`` directory in Salt's install hierarchy,
or from ``/srv/salt/_proxy`` on the Salt Master (if you have created your
own proxy module, for example). To use this Proxy Module, set this to
``vcenter``.
vcenter
^^^^^^^
The location of the VMware vCenter server (host or IP). Required.
username
^^^^^^^^
The username used to login to the vcenter, such as ``root``.
Required only for userpass.
mechanism
^^^^^^^^^
The mechanism used to connect to the vCenter server. Supported values are
``userpass`` and ``sspi``. Required.
passwords
^^^^^^^^^
A list of passwords to be used to try to log in to the vCenter server. At least
one password in this list is required if the mechanism is ``userpass``.
The proxy integration will try the passwords listed in order.
domain
^^^^^^
User domain. Required if mechanism is ``sspi``
principal
^^^^^^^^^
Kerberos principal. Required if the mechanism is ``sspi``.
protocol
^^^^^^^^
If the vCenter is not using the default protocol, set this value to an
alternate protocol. Default is ``https``.
port
^^^^
If the vCenter server is not using the default port, set this value to an
alternate port. Default is ``443``.
Salt Proxy
----------
After your pillar is in place, you can test the proxy. The proxy can run on
any machine that has network connectivity to your Salt Master and to the
vCenter server in the pillar. SaltStack recommends that the machine running the
salt-proxy process also run a regular minion, though it is not strictly
necessary.
On the machine that will run the proxy, make sure there is an ``/etc/salt/proxy``
file with at least the following in it:
.. code-block:: yaml
master: <ip or hostname of salt-master>
You can then start the salt-proxy process with:
.. code-block:: bash
salt-proxy --proxyid <id of the cluster>
You may want to add ``-l debug`` to run the above in the foreground in
debug mode just to make sure everything is OK.
Next, accept the key for the proxy on your salt-master, just like you
would for a regular minion:
.. code-block:: bash
salt-key -a <id you gave the vcenter host>
You can confirm that the pillar data is in place for the proxy:
.. code-block:: bash
salt <id> pillar.items
And now you should be able to ping the ESXi host to make sure it is
responding:
.. code-block:: bash
salt <id> test.ping
At this point you can execute one-off commands against the vcenter. For
example, you can check whether the proxy can actually connect to the vCenter:
.. code-block:: bash
salt <id> vsphere.test_vcenter_connection
Note that you don't need to provide credentials or an ip/hostname. Salt
knows to use the credentials you stored in Pillar.
It's important to understand how this particular proxy works.
:mod:`Salt.modules.vsphere </ref/modules/all/salt.modules.vsphere>` is a
standard Salt execution module.
If you pull up the docs for it you'll see
that almost every function in the module takes credentials and targets either
a vcenter or a host. When credentials and a host aren't passed, Salt runs commands
through ``pyVmomi`` against the local machine. If you wanted, you could run
functions from this module on any host where an appropriate version of
``pyVmomi`` is installed, and that host would reach out over the network
and communicate with the ESXi host.
'''
# Import Python Libs
from __future__ import absolute_import
import logging
import os
# Import Salt Libs
import salt.exceptions
from salt.config.schemas.vcenter import VCenterProxySchema
from salt.utils.dictupdate import merge
# This must be present or the Salt loader won't load this module.
__proxyenabled__ = ['vcenter']
# External libraries
try:
import jsonschema
HAS_JSONSCHEMA = True
except ImportError:
HAS_JSONSCHEMA = False
# Variables are scoped to this module so we can have persistent data
# across calls to fns in here.
DETAILS = {}
# Set up logging
log = logging.getLogger(__name__)
# Define the module's virtual name
__virtualname__ = 'vcenter'
def __virtual__():
'''
Only load if the vsphere execution module is available.
'''
if HAS_JSONSCHEMA:
return __virtualname__
return False, 'The vcenter proxy module did not load.'
def init(opts):
'''
This function gets called when the proxy starts up.
For login the protocol and port are cached.
'''
log.info('Initting vcenter proxy module in process {0}'
''.format(os.getpid()))
log.trace('VCenter Proxy Validating vcenter proxy input')
schema = VCenterProxySchema.serialize()
log.trace('schema = {}'.format(schema))
proxy_conf = merge(opts.get('proxy', {}), __pillar__.get('proxy', {}))
log.trace('proxy_conf = {0}'.format(proxy_conf))
try:
jsonschema.validate(proxy_conf, schema)
except jsonschema.exceptions.ValidationError as exc:
raise salt.exceptions.InvalidConfigError(exc)
# Save mandatory fields in cache
for key in ('vcenter', 'mechanism'):
DETAILS[key] = proxy_conf[key]
# Additional validation
if DETAILS['mechanism'] == 'userpass':
if 'username' not in proxy_conf:
raise salt.exceptions.InvalidConfigError(
'Mechanism is set to \'userpass\' , but no '
'\'username\' key found in proxy config')
if 'passwords' not in proxy_conf:
raise salt.exceptions.InvalidConfigError(
'Mechanism is set to \'userpass\' , but no '
'\'passwords\' key found in proxy config')
for key in ('username', 'passwords'):
DETAILS[key] = proxy_conf[key]
else:
if 'domain' not in proxy_conf:
raise salt.exceptions.InvalidConfigError(
'Mechanism is set to \'sspi\' , but no '
'\'domain\' key found in proxy config')
if 'principal' not in proxy_conf:
raise salt.exceptions.InvalidConfigError(
'Mechanism is set to \'sspi\' , but no '
'\'principal\' key found in proxy config')
for key in ('domain', 'principal'):
DETAILS[key] = proxy_conf[key]
# Save optional
DETAILS['protocol'] = proxy_conf.get('protocol')
DETAILS['port'] = proxy_conf.get('port')
# Test connection
if DETAILS['mechanism'] == 'userpass':
# Get the correct login details
log.info('Retrieving credentials and testing vCenter connection for '
'mechanism \'userpass\'')
try:
username, password = find_credentials()
DETAILS['password'] = password
except salt.exceptions.SaltSystemExit as err:
log.critical('Error: {0}'.format(err))
return False
return True
def ping():
'''
Returns True.
CLI Example:
.. code-block:: bash
salt vcenter test.ping
'''
return True
def shutdown():
'''
Shutdown the connection to the proxy device. For this proxy,
shutdown is a no-op.
'''
log.debug('VCenter proxy shutdown() called...')
def find_credentials():
'''
Cycle through all the possible credentials and return the first one that
works.
'''
# if the username and password were already found don't go through the
# connection process again
if 'username' in DETAILS and 'password' in DETAILS:
return DETAILS['username'], DETAILS['password']
passwords = __pillar__['proxy']['passwords']
for password in passwords:
DETAILS['password'] = password
if not __salt__['vsphere.test_vcenter_connection']():
# We are unable to authenticate
continue
# If we have data returned from above, we've successfully authenticated.
return DETAILS['username'], password
# We've reached the end of the list without successfully authenticating.
raise salt.exceptions.VMwareConnectionError('Cannot complete login due to '
'incorrect credentials.')
def get_details():
'''
Function that returns the cached details
'''
return DETAILS

View file

@ -77,10 +77,25 @@ def serialize(obj, **options):
raise SerializationError(error)
class EncryptedString(str):
yaml_tag = u'!encrypted'
@staticmethod
def yaml_constructor(loader, tag, node):
return EncryptedString(loader.construct_scalar(node))
@staticmethod
def yaml_dumper(dumper, data):
return dumper.represent_scalar(EncryptedString.yaml_tag, data.__str__())
class Loader(BaseLoader): # pylint: disable=W0232
'''Overrides Loader so as not to pollute the legacy Loader'''
pass
Loader.add_multi_constructor(EncryptedString.yaml_tag, EncryptedString.yaml_constructor)
Loader.add_multi_constructor('tag:yaml.org,2002:null', Loader.construct_yaml_null)
Loader.add_multi_constructor('tag:yaml.org,2002:bool', Loader.construct_yaml_bool)
Loader.add_multi_constructor('tag:yaml.org,2002:int', Loader.construct_yaml_int)
@ -100,6 +115,7 @@ class Dumper(BaseDumper): # pylint: disable=W0232
'''Overrides Dumper so as not to pollute the legacy Dumper'''
pass
Dumper.add_multi_representer(EncryptedString, EncryptedString.yaml_dumper)
Dumper.add_multi_representer(type(None), Dumper.represent_none)
Dumper.add_multi_representer(str, Dumper.represent_str)
if six.PY2:

View file

@ -414,7 +414,7 @@ def extracted(name,
.. versionadded:: 2017.7.3
keep : True
Same as ``keep_source``.
Same as ``keep_source``, kept for backward-compatibility.
.. note::
If both ``keep_source`` and ``keep`` are used, ``keep`` will be
@ -648,6 +648,21 @@ def extracted(name,
# Remove pub kwargs as they're irrelevant here.
kwargs = salt.utils.args.clean_kwargs(**kwargs)
if 'keep_source' in kwargs and 'keep' in kwargs:
ret.setdefault('warnings', []).append(
'Both \'keep_source\' and \'keep\' were used. Since these both '
'do the same thing, \'keep\' was ignored.'
)
keep_source = bool(kwargs.pop('keep_source'))
kwargs.pop('keep')
elif 'keep_source' in kwargs:
keep_source = bool(kwargs.pop('keep_source'))
elif 'keep' in kwargs:
keep_source = bool(kwargs.pop('keep'))
else:
# Neither was passed, default is True
keep_source = True
if 'keep_source' in kwargs and 'keep' in kwargs:
ret.setdefault('warnings', []).append(
'Both \'keep_source\' and \'keep\' were used. Since these both '

View file

@ -697,7 +697,10 @@ def parameter_present(name, db_parameter_group_family, description, parameters=N
changed = {}
for items in parameters:
for k, value in items.items():
params[k] = value
if type(value) is bool:
params[k] = 'on' if value else 'off'
else:
params[k] = str(value)
logging.debug('Parameters from user are : {0}.'.format(params))
options = __salt__['boto_rds.describe_parameters'](name=name, region=region, key=key, keyid=keyid, profile=profile)
if not options.get('result'):
@ -705,8 +708,8 @@ def parameter_present(name, db_parameter_group_family, description, parameters=N
ret['comment'] = os.linesep.join([ret['comment'], 'Failed to get parameters for group {0}.'.format(name)])
return ret
for parameter in options['parameters'].values():
if parameter['ParameterName'] in params and str(params.get(parameter['ParameterName'])) != str(parameter['ParameterValue']):
logging.debug('Values that are being compared are {0}:{1} .'.format(params.get(parameter['ParameterName']), parameter['ParameterValue']))
if parameter['ParameterName'] in params and params.get(parameter['ParameterName']) != str(parameter['ParameterValue']):
logging.debug('Values that are being compared for {0} are {1}:{2} .'.format(parameter['ParameterName'], params.get(parameter['ParameterName']), parameter['ParameterValue']))
changed[parameter['ParameterName']] = params.get(parameter['ParameterName'])
if len(changed) > 0:
if __opts__['test']:
@ -715,9 +718,9 @@ def parameter_present(name, db_parameter_group_family, description, parameters=N
return ret
update = __salt__['boto_rds.update_parameter_group'](name, parameters=changed, apply_method=apply_method, tags=tags, region=region,
key=key, keyid=keyid, profile=profile)
if not update:
if 'error' in update:
ret['result'] = False
ret['comment'] = os.linesep.join([ret['comment'], 'Failed to change parameters {0} for group {1}.'.format(changed, name)])
ret['comment'] = os.linesep.join([ret['comment'], 'Failed to change parameters {0} for group {1}:'.format(changed, name), update['error']['message']])
return ret
ret['changes']['Parameters'] = changed
ret['comment'] = os.linesep.join([ret['comment'], 'Parameters {0} for group {1} are changed.'.format(changed, name)])

717
salt/states/dvs.py Normal file
View file

@ -0,0 +1,717 @@
# -*- coding: utf-8 -*-
'''
Manage VMware distributed virtual switches (DVSs) and their distributed virtual
portgroups (DVportgroups).
:codeauthor: :email:`Alexandru Bleotu <alexandru.bleotu@morganstaley.com>`
Examples
========
Several settings can be changed for DVSs and DVportgroups. Here are two examples
covering all of the settings; fewer settings can be used as well.
DVS
---
.. code-block:: python
'name': 'dvs1',
'max_mtu': 1000,
'uplink_names': [
'dvUplink1',
'dvUplink2',
'dvUplink3'
],
'capability': {
'portgroup_operation_supported': false,
'operation_supported': true,
'port_operation_supported': false
},
'lacp_api_version': 'multipleLag',
'contact_email': 'foo@email.com',
'product_info': {
'version': '6.0.0',
'vendor': 'VMware, Inc.',
'name': 'DVS'
},
'network_resource_management_enabled': true,
'contact_name': 'me@email.com',
'infrastructure_traffic_resource_pools': [
{
'reservation': 0,
'limit': 1000,
'share_level': 'high',
'key': 'management',
'num_shares': 100
},
{
'reservation': 0,
'limit': -1,
'share_level': 'normal',
'key': 'faultTolerance',
'num_shares': 50
},
{
'reservation': 0,
'limit': 32000,
'share_level': 'normal',
'key': 'vmotion',
'num_shares': 50
},
{
'reservation': 10000,
'limit': -1,
'share_level': 'normal',
'key': 'virtualMachine',
'num_shares': 50
},
{
'reservation': 0,
'limit': -1,
'share_level': 'custom',
'key': 'iSCSI',
'num_shares': 75
},
{
'reservation': 0,
'limit': -1,
'share_level': 'normal',
'key': 'nfs',
'num_shares': 50
},
{
'reservation': 0,
'limit': -1,
'share_level': 'normal',
'key': 'hbr',
'num_shares': 50
},
{
'reservation': 8750,
'limit': 15000,
'share_level': 'high',
'key': 'vsan',
'num_shares': 100
},
{
'reservation': 0,
'limit': -1,
'share_level': 'normal',
'key': 'vdp',
'num_shares': 50
}
],
'link_discovery_protocol': {
'operation': 'listen',
'protocol': 'cdp'
},
'network_resource_control_version': 'version3',
'description': 'Managed by Salt. Random settings.'
Note: The mandatory attribute is: ``name``.
Portgroup
---------
.. code-block:: python
'security_policy': {
'allow_promiscuous': true,
'mac_changes': false,
'forged_transmits': true
},
'name': 'vmotion-v702',
'out_shaping': {
'enabled': true,
'average_bandwidth': 1500,
'burst_size': 4096,
'peak_bandwidth': 1500
},
'num_ports': 128,
'teaming': {
'port_order': {
'active': [
'dvUplink2'
],
'standby': [
'dvUplink1'
]
},
'notify_switches': false,
'reverse_policy': true,
'rolling_order': false,
'policy': 'failover_explicit',
'failure_criteria': {
'check_error_percent': true,
'full_duplex': false,
'check_duplex': false,
'percentage': 50,
'check_speed': 'minimum',
'speed': 20,
'check_beacon': true
}
},
'type': 'earlyBinding',
'vlan_id': 100,
'description': 'Managed by Salt. Random settings.'
Note: The mandatory attributes are: ``name``, ``type``.
Dependencies
============
- pyVmomi Python Module
pyVmomi
-------
PyVmomi can be installed via pip:
.. code-block:: bash
pip install pyVmomi
.. note::
Version 6.0 of pyVmomi has some problems with SSL error handling on certain
versions of Python. If using version 6.0 of pyVmomi, Python 2.7.9,
or newer must be present. This is due to an upstream dependency
in pyVmomi 6.0 that is not supported in Python versions 2.7 to 2.7.8. If the
version of Python is not in the supported range, you will need to install an
earlier version of pyVmomi. See `Issue #29537`_ for more information.
.. _Issue #29537: https://github.com/saltstack/salt/issues/29537
Based on the note above, to install an earlier version of pyVmomi than the
version currently listed in PyPi, run the following:
.. code-block:: bash
pip install pyVmomi==5.5.0.2014.1.1
The 5.5.0.2014.1.1 is a known stable version that this original ESXi State
Module was developed against.
'''
# Import Python Libs
from __future__ import absolute_import
import logging
import traceback
import sys
# Import Salt Libs
import salt.exceptions
from salt.ext.six.moves import range
# Import Third Party Libs
try:
from pyVmomi import VmomiSupport
HAS_PYVMOMI = True
except ImportError:
HAS_PYVMOMI = False
# Get Logging Started
log = logging.getLogger(__name__)
def __virtual__():
if not HAS_PYVMOMI:
return False, 'State module did not load: pyVmomi not found'
# We check the supported vim versions to infer the pyVmomi version
if 'vim25/6.0' in VmomiSupport.versionMap and \
sys.version_info > (2, 7) and sys.version_info < (2, 7, 9):
return False, ('State module did not load: Incompatible versions '
'of Python and pyVmomi present. See Issue #29537.')
return 'dvs'
def mod_init(low):
'''
Init function
'''
return True
def _get_datacenter_name():
'''
Returns the datacenter name configured on the proxy
Supported proxies: esxcluster, esxdatacenter
'''
proxy_type = __salt__['vsphere.get_proxy_type']()
details = None
if proxy_type == 'esxcluster':
details = __salt__['esxcluster.get_details']()
elif proxy_type == 'esxdatacenter':
details = __salt__['esxdatacenter.get_details']()
if not details:
raise salt.exceptions.CommandExecutionError(
'details for proxy type \'{0}\' not loaded'.format(proxy_type))
return details['datacenter']
def dvs_configured(name, dvs):
'''
Configures a DVS.
Creates a new DVS, if it doesn't exist in the provided datacenter or
reconfigures it if configured differently.
dvs
DVS dict representation (see module sysdocs)
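A minimal SLS sketch (assuming an ``esxdatacenter`` or ``esxcluster`` proxy is
configured; the DVS name and values are illustrative, and any subset of the keys
shown in the module docstring can be supplied):
.. code-block:: yaml

    dvs1-configured:
      dvs.dvs_configured:
        - dvs:
            name: dvs1
            max_mtu: 9000
            description: Managed by Salt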
'''
datacenter_name = _get_datacenter_name()
dvs_name = dvs['name'] if dvs.get('name') else name
log.info('Running state {0} for DVS \'{1}\' in datacenter '
'\'{2}\''.format(name, dvs_name, datacenter_name))
changes_required = False
ret = {'name': name, 'changes': {}, 'result': None, 'comment': None}
comments = []
changes = {}
changes_required = False
try:
#TODO dvs validation
si = __salt__['vsphere.get_service_instance_via_proxy']()
dvss = __salt__['vsphere.list_dvss'](dvs_names=[dvs_name],
service_instance=si)
if not dvss:
changes_required = True
if __opts__['test']:
comments.append('State {0} will create a new DVS '
'\'{1}\' in datacenter \'{2}\''
''.format(name, dvs_name, datacenter_name))
log.info(comments[-1])
else:
dvs['name'] = dvs_name
__salt__['vsphere.create_dvs'](dvs_dict=dvs,
dvs_name=dvs_name,
service_instance=si)
comments.append('Created a new DVS \'{0}\' in datacenter '
'\'{1}\''.format(dvs_name, datacenter_name))
log.info(comments[-1])
changes.update({'dvs': {'new': dvs}})
else:
# DVS already exists. Checking various aspects of the config
props = ['description', 'contact_email', 'contact_name',
'lacp_api_version', 'link_discovery_protocol',
'max_mtu', 'network_resource_control_version',
'network_resource_management_enabled']
log.trace('DVS \'{0}\' found in datacenter \'{1}\'. Checking '
'for any updates in '
'{2}'.format(dvs_name, datacenter_name, props))
props_to_original_values = {}
props_to_updated_values = {}
current_dvs = dvss[0]
for prop in props:
if prop in dvs and dvs[prop] != current_dvs.get(prop):
props_to_original_values[prop] = current_dvs.get(prop)
props_to_updated_values[prop] = dvs[prop]
# Simple infrastructure traffic resource control compare doesn't
# work because num_shares is optional if share_level is not custom
# We need to do a dedicated compare for this property
infra_prop = 'infrastructure_traffic_resource_pools'
original_infra_res_pools = []
updated_infra_res_pools = []
if infra_prop in dvs:
if not current_dvs.get(infra_prop):
updated_infra_res_pools = dvs[infra_prop]
else:
for idx in range(len(dvs[infra_prop])):
if 'num_shares' not in dvs[infra_prop][idx] and \
current_dvs[infra_prop][idx]['share_level'] != \
'custom' and \
'num_shares' in current_dvs[infra_prop][idx]:
del current_dvs[infra_prop][idx]['num_shares']
if dvs[infra_prop][idx] != \
current_dvs[infra_prop][idx]:
original_infra_res_pools.append(
current_dvs[infra_prop][idx])
updated_infra_res_pools.append(
dict(dvs[infra_prop][idx]))
if updated_infra_res_pools:
props_to_original_values[
'infrastructure_traffic_resource_pools'] = \
original_infra_res_pools
props_to_updated_values[
'infrastructure_traffic_resource_pools'] = \
updated_infra_res_pools
if props_to_updated_values:
if __opts__['test']:
changes_string = ''
for p in props_to_updated_values:
if p == 'infrastructure_traffic_resource_pools':
changes_string += \
'\tinfrastructure_traffic_resource_pools:\n'
for idx in range(len(props_to_updated_values[p])):
d = props_to_updated_values[p][idx]
s = props_to_original_values[p][idx]
changes_string += \
('\t\t{0} from \'{1}\' to \'{2}\'\n'
''.format(d['key'], s, d))
else:
changes_string += \
('\t{0} from \'{1}\' to \'{2}\'\n'
''.format(p, props_to_original_values[p],
props_to_updated_values[p]))
comments.append(
'State dvs_configured will update DVS \'{0}\' '
'in datacenter \'{1}\':\n{2}'
''.format(dvs_name, datacenter_name, changes_string))
log.info(comments[-1])
else:
__salt__['vsphere.update_dvs'](
dvs_dict=props_to_updated_values,
dvs=dvs_name,
service_instance=si)
comments.append('Updated DVS \'{0}\' in datacenter \'{1}\''
''.format(dvs_name, datacenter_name))
log.info(comments[-1])
changes.update({'dvs': {'new': props_to_updated_values,
'old': props_to_original_values}})
__salt__['vsphere.disconnect'](si)
except salt.exceptions.CommandExecutionError as exc:
log.error('Error: {0}\n{1}'.format(exc, traceback.format_exc()))
if si:
__salt__['vsphere.disconnect'](si)
if not __opts__['test']:
ret['result'] = False
ret.update({'comment': str(exc),
'result': False if not __opts__['test'] else None})
return ret
if not comments:
# We have no changes
ret.update({'comment': ('DVS \'{0}\' in datacenter \'{1}\' is '
'correctly configured. Nothing to be done.'
''.format(dvs_name, datacenter_name)),
'result': True})
else:
ret.update({'comment': '\n'.join(comments)})
if __opts__['test']:
ret.update({'pchanges': changes,
'result': None})
else:
ret.update({'changes': changes,
'result': True})
return ret
def _get_diff_dict(dict1, dict2):
'''
Returns a dictionary with the diffs between two dictionaries
It will ignore any key that doesn't exist in dict2
'''
ret_dict = {}
for p in dict2.keys():
if p not in dict1:
ret_dict.update({p: {'val1': None, 'val2': dict2[p]}})
elif dict1[p] != dict2[p]:
if isinstance(dict1[p], dict) and isinstance(dict2[p], dict):
sub_diff_dict = _get_diff_dict(dict1[p], dict2[p])
if sub_diff_dict:
ret_dict.update({p: sub_diff_dict})
else:
ret_dict.update({p: {'val1': dict1[p], 'val2': dict2[p]}})
return ret_dict
def _get_val2_dict_from_diff_dict(diff_dict):
'''
Returns a dictionary with the values stored in val2 of a diff dict.
'''
ret_dict = {}
for p in diff_dict.keys():
if not isinstance(diff_dict[p], dict):
raise ValueError('Unexpected diff dict \'{0}\''.format(diff_dict))
if 'val2' in diff_dict[p].keys():
ret_dict.update({p: diff_dict[p]['val2']})
else:
ret_dict.update(
{p: _get_val2_dict_from_diff_dict(diff_dict[p])})
return ret_dict
def _get_val1_dict_from_diff_dict(diff_dict):
'''
Returns a dictionary with the values stored in val1 of a diff dict.
'''
ret_dict = {}
for p in diff_dict.keys():
if not isinstance(diff_dict[p], dict):
raise ValueError('Unexpected diff dict \'{0}\''.format(diff_dict))
if 'val1' in diff_dict[p].keys():
ret_dict.update({p: diff_dict[p]['val1']})
else:
ret_dict.update(
{p: _get_val1_dict_from_diff_dict(diff_dict[p])})
return ret_dict
def _get_changes_from_diff_dict(diff_dict):
'''
Returns a list of string messages describing the differences in a diff dict.
Each inner message is tabulated one tab deeper
'''
changes_strings = []
for p in diff_dict.keys():
if not isinstance(diff_dict[p], dict):
raise ValueError('Unexpected diff dict \'{0}\''.format(diff_dict))
if sorted(diff_dict[p].keys()) == ['val1', 'val2']:
# Some string formatting
from_str = diff_dict[p]['val1']
if isinstance(diff_dict[p]['val1'], str):
from_str = '\'{0}\''.format(diff_dict[p]['val1'])
elif isinstance(diff_dict[p]['val1'], list):
from_str = '\'{0}\''.format(', '.join(diff_dict[p]['val1']))
to_str = diff_dict[p]['val2']
if isinstance(diff_dict[p]['val2'], str):
to_str = '\'{0}\''.format(diff_dict[p]['val2'])
elif isinstance(diff_dict[p]['val2'], list):
to_str = '\'{0}\''.format(', '.join(diff_dict[p]['val2']))
changes_strings.append('{0} from {1} to {2}'.format(
p, from_str, to_str))
else:
sub_changes = _get_changes_from_diff_dict(diff_dict[p])
if sub_changes:
changes_strings.append('{0}:'.format(p))
changes_strings.extend(['\t{0}'.format(c)
for c in sub_changes])
return changes_strings
def portgroups_configured(name, dvs, portgroups):
'''
Configures portgroups on a DVS.
Creates/updates/removes portgroups in a provided DVS
dvs
Name of the DVS
portgroups
Portgroup dict representations (see module sysdocs)
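A minimal SLS sketch (names and values are illustrative); note that existing
portgroups not listed here, other than the uplink portgroup, would be removed
by this state:
.. code-block:: yaml

    dvs1-portgroups:
      dvs.portgroups_configured:
        - dvs: dvs1
        - portgroups:
            - name: vmotion-pg
              type: earlyBinding
              vlan_id: 100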
'''
datacenter = _get_datacenter_name()
log.info('Running state {0} on DVS \'{1}\', datacenter '
'\'{2}\''.format(name, dvs, datacenter))
changes_required = False
ret = {'name': name, 'changes': {}, 'result': None, 'comment': None,
'pchanges': {}}
comments = []
changes = {}
changes_required = False
try:
#TODO portgroups validation
si = __salt__['vsphere.get_service_instance_via_proxy']()
current_pgs = __salt__['vsphere.list_dvportgroups'](
dvs=dvs, service_instance=si)
expected_pg_names = []
for pg in portgroups:
pg_name = pg['name']
expected_pg_names.append(pg_name)
del pg['name']
log.info('Checking pg \'{0}\''.format(pg_name))
filtered_current_pgs = \
[p for p in current_pgs if p.get('name') == pg_name]
if not filtered_current_pgs:
changes_required = True
if __opts__['test']:
comments.append('State {0} will create a new portgroup '
'\'{1}\' in DVS \'{2}\', datacenter '
'\'{3}\''.format(name, pg_name, dvs,
datacenter))
else:
__salt__['vsphere.create_dvportgroup'](
portgroup_dict=pg, portgroup_name=pg_name, dvs=dvs,
service_instance=si)
comments.append('Created a new portgroup \'{0}\' in DVS '
'\'{1}\', datacenter \'{2}\''
''.format(pg_name, dvs, datacenter))
log.info(comments[-1])
changes.update({pg_name: {'new': pg}})
else:
# Portgroup already exists. Checking the config
log.trace('Portgroup \'{0}\' found in DVS \'{1}\', datacenter '
'\'{2}\'. Checking for any updates.'
''.format(pg_name, dvs, datacenter))
current_pg = filtered_current_pgs[0]
diff_dict = _get_diff_dict(current_pg, pg)
if diff_dict:
changes_required = True
if __opts__['test']:
changes_strings = \
_get_changes_from_diff_dict(diff_dict)
log.trace('changes_strings = '
'{0}'.format(changes_strings))
comments.append(
'State {0} will update portgroup \'{1}\' in '
'DVS \'{2}\', datacenter \'{3}\':\n{4}'
''.format(name, pg_name, dvs, datacenter,
'\n'.join(['\t{0}'.format(c) for c in
changes_strings])))
else:
__salt__['vsphere.update_dvportgroup'](
portgroup_dict=pg, portgroup=pg_name, dvs=dvs,
service_instance=si)
comments.append('Updated portgroup \'{0}\' in DVS '
'\'{1}\', datacenter \'{2}\''
''.format(pg_name, dvs, datacenter))
log.info(comments[-1])
changes.update(
{pg_name: {'new':
_get_val2_dict_from_diff_dict(diff_dict),
'old':
_get_val1_dict_from_diff_dict(diff_dict)}})
# Add the uplink portgroup to the expected pg names
uplink_pg = __salt__['vsphere.list_uplink_dvportgroup'](
dvs=dvs, service_instance=si)
expected_pg_names.append(uplink_pg['name'])
# Remove any extra portgroups
for current_pg in current_pgs:
if current_pg['name'] not in expected_pg_names:
changes_required = True
if __opts__['test']:
comments.append('State {0} will remove '
'the portgroup \'{1}\' from DVS \'{2}\', '
'datacenter \'{3}\''
''.format(name, current_pg['name'], dvs,
datacenter))
else:
__salt__['vsphere.remove_dvportgroup'](
portgroup=current_pg['name'], dvs=dvs,
service_instance=si)
comments.append('Removed the portgroup \'{0}\' from DVS '
'\'{1}\', datacenter \'{2}\''
''.format(current_pg['name'], dvs,
datacenter))
log.info(comments[-1])
changes.update({current_pg['name']:
{'old': current_pg}})
__salt__['vsphere.disconnect'](si)
except salt.exceptions.CommandExecutionError as exc:
log.error('Error: {0}\n{1}'.format(exc, traceback.format_exc()))
if si:
__salt__['vsphere.disconnect'](si)
if not __opts__['test']:
ret['result'] = False
ret.update({'comment': exc.strerror,
'result': False if not __opts__['test'] else None})
return ret
if not changes_required:
# We have no changes
ret.update({'comment': ('All portgroups in DVS \'{0}\', datacenter '
'\'{1}\' exist and are correctly configured. '
'Nothing to be done.'.format(dvs, datacenter)),
'result': True})
else:
ret.update({'comment': '\n'.join(comments)})
if __opts__['test']:
ret.update({'pchanges': changes,
'result': None})
else:
ret.update({'changes': changes,
'result': True})
return ret
def uplink_portgroup_configured(name, dvs, uplink_portgroup):
'''
Configures the uplink portgroup on a DVS. The state assumes there is only
one uplink portgroup.
dvs
Name of the DVS
uplink_portgroup
Uplink portgroup dict representations (see module sysdocs)
'''
datacenter = _get_datacenter_name()
log.info('Running {0} on DVS \'{1}\', datacenter \'{2}\''
''.format(name, dvs, datacenter))
changes_required = False
ret = {'name': name, 'changes': {}, 'result': None, 'comment': None,
'pchanges': {}}
comments = []
changes = {}
changes_required = False
try:
#TODO portgroups validation
si = __salt__['vsphere.get_service_instance_via_proxy']()
current_uplink_portgroup = __salt__['vsphere.list_uplink_dvportgroup'](
dvs=dvs, service_instance=si)
log.trace('current_uplink_portgroup = '
'{0}'.format(current_uplink_portgroup))
diff_dict = _get_diff_dict(current_uplink_portgroup, uplink_portgroup)
if diff_dict:
changes_required = True
if __opts__['test']:
changes_strings = \
_get_changes_from_diff_dict(diff_dict)
log.trace('changes_strings = '
'{0}'.format(changes_strings))
comments.append(
'State {0} will update the '
'uplink portgroup in DVS \'{1}\', datacenter '
'\'{2}\':\n{3}'
''.format(name, dvs, datacenter,
'\n'.join(['\t{0}'.format(c) for c in
changes_strings])))
else:
__salt__['vsphere.update_dvportgroup'](
portgroup_dict=uplink_portgroup,
portgroup=current_uplink_portgroup['name'],
dvs=dvs,
service_instance=si)
comments.append('Updated the uplink portgroup in DVS '
'\'{0}\', datacenter \'{1}\''
''.format(dvs, datacenter))
log.info(comments[-1])
changes.update(
{'uplink_portgroup':
{'new': _get_val2_dict_from_diff_dict(diff_dict),
'old': _get_val1_dict_from_diff_dict(diff_dict)}})
__salt__['vsphere.disconnect'](si)
except salt.exceptions.CommandExecutionError as exc:
log.error('Error: {0}\n{1}'.format(exc, traceback.format_exc()))
if si:
__salt__['vsphere.disconnect'](si)
if not __opts__['test']:
ret['result'] = False
ret.update({'comment': exc.strerror,
'result': False if not __opts__['test'] else None})
return ret
if not changes_required:
# We have no changes
ret.update({'comment': ('Uplink portgroup in DVS \'{0}\', datacenter '
'\'{1}\' is correctly configured. '
'Nothing to be done.'.format(dvs, datacenter)),
'result': True})
else:
ret.update({'comment': '\n'.join(comments)})
if __opts__['test']:
ret.update({'pchanges': changes,
'result': None})
else:
ret.update({'changes': changes,
'result': True})
return ret

View file

@ -90,20 +90,47 @@ ESXi Proxy Minion, please refer to the
configuration examples, dependency installation instructions, how to run remote
execution functions against ESXi hosts via a Salt Proxy Minion, and a larger state
example.
'''
# Import Python Libs
from __future__ import absolute_import
import logging
import sys
import re
# Import Salt Libs
from salt.ext import six
import salt.utils.files
from salt.exceptions import CommandExecutionError
from salt.exceptions import CommandExecutionError, InvalidConfigError, \
VMwareObjectRetrievalError, VMwareSaltError, VMwareApiError, \
ArgumentValueError
from salt.utils.decorators import depends
from salt.config.schemas.esxi import DiskGroupsDiskScsiAddressSchema, \
HostCacheSchema
# External libraries
try:
import jsonschema
HAS_JSONSCHEMA = True
except ImportError:
HAS_JSONSCHEMA = False
# Get Logging Started
log = logging.getLogger(__name__)
try:
from pyVmomi import VmomiSupport
# We check the supported vim versions to infer the pyVmomi version
if 'vim25/6.0' in VmomiSupport.versionMap and \
sys.version_info > (2, 7) and sys.version_info < (2, 7, 9):
log.error('pyVmomi not loaded: Incompatible versions '
'of Python. See Issue #29537.')
raise ImportError()
HAS_PYVMOMI = True
except ImportError:
HAS_PYVMOMI = False
def __virtual__():
return 'esxi.cmd' in __salt__
@ -998,6 +1025,577 @@ def syslog_configured(name,
return ret
@depends(HAS_PYVMOMI)
@depends(HAS_JSONSCHEMA)
def diskgroups_configured(name, diskgroups, erase_disks=False):
'''
Configures the disk groups to use for vsan.
It will do the following:
(1) checks if all disks in the diskgroup spec exist and errors if they
don't
(2) creates diskgroups with the correct disk configurations if the diskgroup
(identified by the cache disk canonical name) doesn't exist
(3) adds extra capacity disks to the existing diskgroup
State input example
-------------------
.. code:: python
{
'cache_scsi_addr': 'vmhba1:C0:T0:L0',
'capacity_scsi_addrs': [
'vmhba2:C0:T0:L0',
'vmhba3:C0:T0:L0',
'vmhba4:C0:T0:L0',
]
}
name
Mandatory state name.
diskgroups
Disk group representation containing scsi disk addresses.
SCSI addresses are expected for the disks in the diskgroup.
erase_disks
Specifies whether to erase all partitions on all disks member of the
disk group before the disk group is created. Default value is False.
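A minimal SLS sketch, assuming an ESXi proxy minion and illustrative SCSI addresses:
.. code:: yaml

    vsan-diskgroups:
      esxi.diskgroups_configured:
        - diskgroups:
            - cache_scsi_addr: vmhba1:C0:T0:L0
              capacity_scsi_addrs:
                - vmhba2:C0:T0:L0
                - vmhba3:C0:T0:L0
        - erase_disks: False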
'''
proxy_details = __salt__['esxi.get_details']()
hostname = proxy_details['host'] if not proxy_details.get('vcenter') \
else proxy_details['esxi_host']
log.info('Running state {0} for host \'{1}\''.format(name, hostname))
# Variable used to return the result of the invocation
ret = {'name': name, 'result': None, 'changes': {},
'pchanges': {}, 'comment': None}
# Signals if errors have been encountered
errors = False
# Signals if changes are required
changes = False
comments = []
diskgroup_changes = {}
si = None
try:
log.trace('Validating diskgroups_configured input')
schema = DiskGroupsDiskScsiAddressSchema.serialize()
try:
jsonschema.validate({'diskgroups': diskgroups,
'erase_disks': erase_disks}, schema)
except jsonschema.exceptions.ValidationError as exc:
raise InvalidConfigError(exc)
si = __salt__['vsphere.get_service_instance_via_proxy']()
host_disks = __salt__['vsphere.list_disks'](service_instance=si)
if not host_disks:
raise VMwareObjectRetrievalError(
'No disks retrieved from host \'{0}\''.format(hostname))
scsi_addr_to_disk_map = {d['scsi_address']: d for d in host_disks}
log.trace('scsi_addr_to_disk_map = {0}'.format(scsi_addr_to_disk_map))
existing_diskgroups = \
__salt__['vsphere.list_diskgroups'](service_instance=si)
cache_disk_to_existing_diskgroup_map = \
{dg['cache_disk']: dg for dg in existing_diskgroups}
except CommandExecutionError as err:
log.error('Error: {0}'.format(err))
if si:
__salt__['vsphere.disconnect'](si)
ret.update({
'result': False if not __opts__['test'] else None,
'comment': str(err)})
return ret
# Iterate through all of the disk groups
for idx, dg in enumerate(diskgroups):
# Check for cache disk
if not dg['cache_scsi_addr'] in scsi_addr_to_disk_map:
comments.append('No cache disk with scsi address \'{0}\' was '
'found.'.format(dg['cache_scsi_addr']))
log.error(comments[-1])
errors = True
continue
# Check for capacity disks
cache_disk_id = scsi_addr_to_disk_map[dg['cache_scsi_addr']]['id']
cache_disk_display = '{0} (id:{1})'.format(dg['cache_scsi_addr'],
cache_disk_id)
bad_scsi_addrs = []
capacity_disk_ids = []
capacity_disk_displays = []
for scsi_addr in dg['capacity_scsi_addrs']:
if scsi_addr not in scsi_addr_to_disk_map:
bad_scsi_addrs.append(scsi_addr)
continue
capacity_disk_ids.append(scsi_addr_to_disk_map[scsi_addr]['id'])
capacity_disk_displays.append(
'{0} (id:{1})'.format(scsi_addr, capacity_disk_ids[-1]))
if bad_scsi_addrs:
comments.append('Error in diskgroup #{0}: capacity disks with '
'scsi addresses {1} were not found.'
''.format(idx,
', '.join(['\'{0}\''.format(a)
for a in bad_scsi_addrs])))
log.error(comments[-1])
errors = True
continue
if not cache_disk_to_existing_diskgroup_map.get(cache_disk_id):
# A new diskgroup needs to be created
log.trace('erase_disks = {0}'.format(erase_disks))
if erase_disks:
if __opts__['test']:
comments.append('State {0} will '
'erase all disks of disk group #{1}; '
'cache disk: \'{2}\', '
'capacity disk(s): {3}.'
''.format(name, idx, cache_disk_display,
', '.join(
['\'{}\''.format(a) for a in
capacity_disk_displays])))
else:
# Erase disk group disks
for disk_id in [cache_disk_id] + capacity_disk_ids:
__salt__['vsphere.erase_disk_partitions'](
disk_id=disk_id, service_instance=si)
comments.append('Erased disks of diskgroup #{0}; '
'cache disk: \'{1}\', capacity disk(s): '
'{2}'.format(
idx, cache_disk_display,
', '.join(['\'{0}\''.format(a) for a in
capacity_disk_displays])))
log.info(comments[-1])
if __opts__['test']:
comments.append('State {0} will create '
'the disk group #{1}; cache disk: \'{2}\', '
'capacity disk(s): {3}.'
.format(name, idx, cache_disk_display,
', '.join(['\'{0}\''.format(a) for a in
capacity_disk_displays])))
log.info(comments[-1])
changes = True
continue
try:
__salt__['vsphere.create_diskgroup'](cache_disk_id,
capacity_disk_ids,
safety_checks=False,
service_instance=si)
except VMwareSaltError as err:
comments.append('Error creating disk group #{0}: '
'{1}.'.format(idx, err))
log.error(comments[-1])
errors = True
continue
comments.append('Created disk group #\'{0}\'.'.format(idx))
log.info(comments[-1])
diskgroup_changes[str(idx)] = \
{'new': {'cache': cache_disk_display,
'capacity': capacity_disk_displays}}
changes = True
continue
# The diskgroup exists; checking the capacity disks
log.debug('Disk group #{0} exists. Checking capacity disks: '
'{1}.'.format(idx, capacity_disk_displays))
existing_diskgroup = \
cache_disk_to_existing_diskgroup_map.get(cache_disk_id)
existing_capacity_disk_displays = \
['{0} (id:{1})'.format([d['scsi_address'] for d in host_disks
if d['id'] == disk_id][0], disk_id)
for disk_id in existing_diskgroup['capacity_disks']]
# Populate added disks and removed disks and their displays
added_capacity_disk_ids = []
added_capacity_disk_displays = []
removed_capacity_disk_ids = []
removed_capacity_disk_displays = []
for disk_id in capacity_disk_ids:
if disk_id not in existing_diskgroup['capacity_disks']:
disk_scsi_addr = [d['scsi_address'] for d in host_disks
if d['id'] == disk_id][0]
added_capacity_disk_ids.append(disk_id)
added_capacity_disk_displays.append(
'{0} (id:{1})'.format(disk_scsi_addr, disk_id))
for disk_id in existing_diskgroup['capacity_disks']:
if disk_id not in capacity_disk_ids:
disk_scsi_addr = [d['scsi_address'] for d in host_disks
if d['id'] == disk_id][0]
removed_capacity_disk_ids.append(disk_id)
removed_capacity_disk_displays.append(
'{0} (id:{1})'.format(disk_scsi_addr, disk_id))
log.debug('Disk group #{0}: existing capacity disk ids: {1}; added '
'capacity disk ids: {2}; removed capacity disk ids: {3}'
''.format(idx, existing_capacity_disk_displays,
added_capacity_disk_displays,
removed_capacity_disk_displays))
#TODO revisit this when removing capacity disks is supported
if removed_capacity_disk_ids:
comments.append(
'Error removing capacity disk(s) {0} from disk group #{1}; '
'operation is not supported.'
''.format(', '.join(['\'{0}\''.format(id) for id in
removed_capacity_disk_displays]), idx))
log.error(comments[-1])
errors = True
continue
if added_capacity_disk_ids:
# Capacity disks need to be added to disk group
# Building a string representation of the capacity disks
# that need to be added
s = ', '.join(['\'{0}\''.format(id) for id in
added_capacity_disk_displays])
if __opts__['test']:
comments.append('State {0} will add '
'capacity disk(s) {1} to disk group #{2}.'
''.format(name, s, idx))
log.info(comments[-1])
changes = True
continue
try:
__salt__['vsphere.add_capacity_to_diskgroup'](
cache_disk_id,
added_capacity_disk_ids,
safety_checks=False,
service_instance=si)
except VMwareSaltError as err:
comments.append('Error adding capacity disk(s) {0} to '
'disk group #{1}: {2}.'.format(s, idx, err))
log.error(comments[-1])
errors = True
continue
com = ('Added capacity disk(s) {0} to disk group #{1}'
''.format(s, idx))
log.info(com)
comments.append(com)
diskgroup_changes[str(idx)] = \
{'new': {'cache': cache_disk_display,
'capacity': capacity_disk_displays},
'old': {'cache': cache_disk_display,
'capacity': existing_capacity_disk_displays}}
changes = True
continue
# No capacity needs to be added
s = ('Disk group #{0} is correctly configured. Nothing to be done.'
''.format(idx))
log.info(s)
comments.append(s)
__salt__['vsphere.disconnect'](si)
#Build the final return message
result = (True if not (changes or errors) else # no changes/errors
None if __opts__['test'] else # running in test mode
False if errors else True) # found errors; defaults to True
ret.update({'result': result,
'comment': '\n'.join(comments)})
if changes:
if __opts__['test']:
ret['pchanges'] = diskgroup_changes
elif changes:
ret['changes'] = diskgroup_changes
return ret
@depends(HAS_PYVMOMI)
@depends(HAS_JSONSCHEMA)
def host_cache_configured(name, enabled, datastore, swap_size='100%',
dedicated_backing_disk=False,
erase_backing_disk=False):
'''
Configures the host cache used for swapping.
It will do the following:
(1) checks if backing disk exists
(2) creates the VMFS datastore if it doesn't exist (the datastore partition
will be created and will use the entire disk)
(3) raises an error if dedicated_backing_disk is True and partitions
already exist on the backing disk
(4) configures host_cache to use a portion of the datastore for caching
(either a specific size or a percentage of the datastore)
State input examples
--------------------
Percentage swap size (can't be 100%)
.. code:: python
{
'enabled': true,
'datastore': {
'backing_disk_scsi_addr': 'vmhba0:C0:T0:L0',
'vmfs_version': 5,
'name': 'hostcache'
}
'dedicated_backing_disk': false
'swap_size': '98%',
}
Fixed swap size
.. code:: python
{
'enabled': true,
'datastore': {
'backing_disk_scsi_addr': 'vmhba0:C0:T0:L0',
'vmfs_version': 5,
'name': 'hostcache'
}
'dedicated_backing_disk': true
'swap_size': '10GiB',
}
name
Mandatory state name.
enabled
Specifies whether the host cache is enabled.
datastore
Specifies the host cache datastore.
swap_size
Specifies the size of the host cache swap. Can be a percentage or a
value in GiB. Default value is ``100%``.
dedicated_backing_disk
Specifies whether the backing disk is dedicated to the host cache, which
means it must have no other partitions. Default is False.
erase_backing_disk
Specifies whether to erase all partitions on the backing disk before
the datastore is created. Default value is False.
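A minimal SLS sketch mirroring the state input examples above (values are illustrative):
.. code:: yaml

    host-cache:
      esxi.host_cache_configured:
        - enabled: True
        - datastore:
            backing_disk_scsi_addr: vmhba0:C0:T0:L0
            vmfs_version: 5
            name: hostcache
        - swap_size: 98%
        - erase_backing_disk: False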
'''
log.trace('enabled = {0}'.format(enabled))
log.trace('datastore = {0}'.format(datastore))
log.trace('swap_size = {0}'.format(swap_size))
log.trace('erase_backing_disk = {0}'.format(erase_backing_disk))
# Variable used to return the result of the invocation
proxy_details = __salt__['esxi.get_details']()
hostname = proxy_details['host'] if not proxy_details.get('vcenter') \
else proxy_details['esxi_host']
log.trace('hostname = {0}'.format(hostname))
log.info('Running host_cache_swap_configured for host '
'\'{0}\''.format(hostname))
ret = {'name': hostname, 'comment': 'Default comments',
'result': None, 'changes': {}, 'pchanges': {}}
result = None if __opts__['test'] else True # We assume success
needs_setting = False
comments = []
changes = {}
si = None
try:
log.debug('Validating host_cache_configured input')
schema = HostCacheSchema.serialize()
try:
jsonschema.validate({'enabled': enabled,
'datastore': datastore,
'swap_size': swap_size,
'erase_backing_disk': erase_backing_disk},
schema)
except jsonschema.exceptions.ValidationError as exc:
raise InvalidConfigError(exc)
m = re.match(r'(\d+)(%|GiB)', swap_size)
swap_size_value = int(m.group(1))
swap_type = m.group(2)
log.trace('swap_size_value = {0}; swap_type = {1}'.format(
swap_size_value, swap_type))
si = __salt__['vsphere.get_service_instance_via_proxy']()
host_cache = __salt__['vsphere.get_host_cache'](service_instance=si)
# Check enabled
if host_cache['enabled'] != enabled:
changes.update({'enabled': {'old': host_cache['enabled'],
'new': enabled}})
needs_setting = True
# Check datastores
existing_datastores = None
if host_cache.get('datastore'):
existing_datastores = \
__salt__['vsphere.list_datastores_via_proxy'](
datastore_names=[datastore['name']],
service_instance=si)
# Retrieve backing disks
existing_disks = __salt__['vsphere.list_disks'](
scsi_addresses=[datastore['backing_disk_scsi_addr']],
service_instance=si)
if not existing_disks:
raise VMwareObjectRetrievalError(
'Disk with scsi address \'{0}\' was not found in host \'{1}\''
''.format(datastore['backing_disk_scsi_addr'], hostname))
backing_disk = existing_disks[0]
backing_disk_display = '{0} (id:{1})'.format(
backing_disk['scsi_address'], backing_disk['id'])
log.trace('backing_disk = {0}'.format(backing_disk_display))
existing_datastore = None
if not existing_datastores:
# Check if disk needs to be erased
if erase_backing_disk:
if __opts__['test']:
comments.append('State {0} will erase '
'the backing disk \'{1}\' on host \'{2}\'.'
''.format(name, backing_disk_display,
hostname))
log.info(comments[-1])
else:
# Erase disk
__salt__['vsphere.erase_disk_partitions'](
disk_id=backing_disk['id'], service_instance=si)
comments.append('Erased backing disk \'{0}\' on host '
'\'{1}\'.'.format(backing_disk_display,
hostname))
log.info(comments[-1])
# Create the datastore
if __opts__['test']:
comments.append('State {0} will create '
'the datastore \'{1}\', with backing disk '
'\'{2}\', on host \'{3}\'.'
''.format(name, datastore['name'],
backing_disk_display, hostname))
log.info(comments[-1])
else:
if dedicated_backing_disk:
# Check backing disk doesn't already have partitions
partitions = __salt__['vsphere.list_disk_partitions'](
disk_id=backing_disk['id'], service_instance=si)
log.trace('partitions = {0}'.format(partitions))
# We will ignore the mbr partitions
non_mbr_partitions = [p for p in partitions
if p['format'] != 'mbr']
if len(non_mbr_partitions) > 0:
raise VMwareApiError(
'Backing disk \'{0}\' has unexpected partitions'
''.format(backing_disk_display))
__salt__['vsphere.create_vmfs_datastore'](
datastore['name'], existing_disks[0]['id'],
datastore['vmfs_version'], service_instance=si)
comments.append('Created vmfs datastore \'{0}\', backed by '
'disk \'{1}\', on host \'{2}\'.'
''.format(datastore['name'],
backing_disk_display, hostname))
log.info(comments[-1])
changes.update(
{'datastore':
{'new': {'name': datastore['name'],
'backing_disk': backing_disk_display}}})
existing_datastore = \
__salt__['vsphere.list_datastores_via_proxy'](
datastore_names=[datastore['name']],
service_instance=si)[0]
needs_setting = True
else:
# Check datastore is backed by the correct disk
if not existing_datastores[0].get('backing_disk_ids'):
raise VMwareSaltError('Datastore \'{0}\' doesn\'t have a '
'backing disk'
''.format(datastore['name']))
if backing_disk['id'] not in \
existing_datastores[0]['backing_disk_ids']:
raise VMwareSaltError(
'Datastore \'{0}\' is not backed by the correct disk: '
'expected \'{1}\'; got {2}'
''.format(
datastore['name'], backing_disk['id'],
', '.join(
['\'{0}\''.format(disk) for disk in
existing_datastores[0]['backing_disk_ids']])))
comments.append('Datastore \'{0}\' already exists on host \'{1}\' '
'and is backed by disk \'{2}\'. Nothing to be '
'done.'.format(datastore['name'], hostname,
backing_disk_display))
existing_datastore = existing_datastores[0]
log.trace('existing_datastore = {0}'.format(existing_datastore))
log.info(comments[-1])
if existing_datastore:
# The following comparisons can be done if the existing_datastore
# is set; it may not be set if running in test mode
#
# We support percent as well as GiB; we will convert the size
# to MiB, in multiples of 1024 (VMware SDK limitation)
if swap_type == '%':
# Percentage swap size
# Convert from bytes to MiB
raw_size_MiB = (swap_size_value/100.0) * \
(existing_datastore['capacity']/1024/1024)
else:
raw_size_MiB = swap_size_value * 1024
log.trace('raw_size = {0}MiB'.format(raw_size_MiB))
swap_size_MiB = int(raw_size_MiB/1024)*1024
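# Illustration (hypothetical numbers, not from the state input): with a
# 200 GiB datastore and swap_size='10%', raw_size_MiB is 0.10 * 204800 = 20480;
# with swap_size='4GiB' it is 4 * 1024 = 4096. The int(raw_size_MiB/1024)*1024
# above then rounds the value down to the nearest multiple of 1024 MiB.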
log.trace('adjusted swap_size = {0}MiB'.format(swap_size_MiB))
existing_swap_size_MiB = 0
m = re.match(r'(\d+)MiB', host_cache.get('swap_size')) if \
host_cache.get('swap_size') else None
if m:
# if swap_size from the host is set and has an expected value
# we are going to parse it to get the number of MiBs
existing_swap_size_MiB = int(m.group(1))
if not existing_swap_size_MiB == swap_size_MiB:
needs_setting = True
changes.update(
{'swap_size':
{'old': '{}GiB'.format(existing_swap_size_MiB/1024),
'new': '{}GiB'.format(swap_size_MiB/1024)}})
if needs_setting:
if __opts__['test']:
comments.append('State {0} will configure '
'the host cache on host \'{1}\' to: {2}.'
''.format(name, hostname,
{'enabled': enabled,
'datastore_name': datastore['name'],
'swap_size': swap_size}))
else:
if (existing_datastore['capacity'] / 1024.0**2) < \
swap_size_MiB:
raise ArgumentValueError(
'Capacity of host cache datastore \'{0}\' ({1} MiB) is '
'smaller than the required swap size ({2} MiB)'
''.format(existing_datastore['name'],
existing_datastore['capacity'] / 1024.0**2,
swap_size_MiB))
__salt__['vsphere.configure_host_cache'](
enabled,
datastore['name'],
swap_size_MiB=swap_size_MiB,
service_instance=si)
comments.append('Host cache configured on host '
'\'{0}\'.'.format(hostname))
else:
comments.append('Host cache on host \'{0}\' is already correctly '
'configured. Nothing to be done.'.format(hostname))
result = True
__salt__['vsphere.disconnect'](si)
log.info(comments[-1])
ret.update({'comment': '\n'.join(comments),
'result': result})
if __opts__['test']:
ret['pchanges'] = changes
else:
ret['changes'] = changes
return ret
except CommandExecutionError as err:
log.error('Error: {0}.'.format(err))
if si:
__salt__['vsphere.disconnect'](si)
ret.update({
'result': False if not __opts__['test'] else None,
'comment': '{}.'.format(err)})
return ret
def _lookup_syslog_config(config):
'''
Helper function that looks up syslog_config keys available from

View file

@ -6637,6 +6637,28 @@ def cached(name,
else:
pre_hash = None
def _try_cache(path, checksum):
'''
This helper is not needed anymore in develop as the fileclient in the
develop branch now has means of skipping a download if the existing
hash matches one passed to cp.cache_file. Remove this helper and the
code that invokes it, once we have merged forward into develop.
'''
if not path or not checksum:
return True
form = salt.utils.files.HASHES_REVMAP.get(len(checksum))
if form is None:
# Shouldn't happen, an invalid checksum length should be caught
# before we get here. But in the event this gets through, don't let
# it cause any trouble, and just return True.
return True
try:
return salt.utils.get_hash(path, form=form) != checksum
except (IOError, OSError, ValueError):
# Again, shouldn't happen, but don't let invalid input/permissions
# in the call to get_hash blow this up.
return True
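# Illustration of the helper above (path and checksum are invented):
# _try_cache('/var/cache/salt/minion/files/base/foo.conf', '<md5 of that file>')
# returns False when the cached copy already hashes to the given checksum
# (the checksum length is mapped back to a hash type via HASHES_REVMAP), so
# the cp.cache_file call below is skipped; a missing argument, a hash
# mismatch, or a read error makes it return True and the file is re-cached.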
# Cache the file. Note that this will not actually download the file if
# either of the following is true:
# 1. source is a salt:// URL and the fileserver determines that the hash
@ -6645,6 +6667,10 @@ def cached(name,
# matches the cached copy.
# Remote, non salt:// sources _will_ download if a copy of the file was
# not already present in the minion cache.
if _try_cache(local_copy, source_sum.get('hsum')):
# The _try_cache helper is obsolete in the develop branch. Once merged
# forward, remove the helper as well as this if statement, and dedent
# the below block.
try:
local_copy = __salt__['cp.cache_file'](
name,

View file

@ -79,8 +79,6 @@ def _construct_yaml_str(self, node):
Construct for yaml
'''
return self.construct_scalar(node)
YamlLoader.add_constructor(u'tag:yaml.org,2002:str',
_construct_yaml_str)
YamlLoader.add_constructor(u'tag:yaml.org,2002:timestamp',
_construct_yaml_str)

501
salt/states/pbm.py Normal file
View file

@ -0,0 +1,501 @@
# -*- coding: utf-8 -*-
'''
Manages VMware storage policies
(called pbm because the vCenter endpoint is /pbm)
Examples
========
Storage policy
--------------
.. code-block:: python
{
"name": "salt_storage_policy"
"description": "Managed by Salt. Random capability values.",
"resource_type": "STORAGE",
"subprofiles": [
{
"capabilities": [
{
"setting": {
"type": "scalar",
"value": 2
},
"namespace": "VSAN",
"id": "hostFailuresToTolerate"
},
{
"setting": {
"type": "scalar",
"value": 2
},
"namespace": "VSAN",
"id": "stripeWidth"
},
{
"setting": {
"type": "scalar",
"value": true
},
"namespace": "VSAN",
"id": "forceProvisioning"
},
{
"setting": {
"type": "scalar",
"value": 50
},
"namespace": "VSAN",
"id": "proportionalCapacity"
},
{
"setting": {
"type": "scalar",
"value": 0
},
"namespace": "VSAN",
"id": "cacheReservation"
}
],
"name": "Rule-Set 1: VSAN",
"force_provision": null
}
],
}
Dependencies
============
- pyVmomi Python Module
pyVmomi
-------
PyVmomi can be installed via pip:
.. code-block:: bash
pip install pyVmomi
.. note::
Version 6.0 of pyVmomi has some problems with SSL error handling on certain
versions of Python. If using version 6.0 of pyVmomi, Python 2.6,
Python 2.7.9, or newer must be present. This is due to an upstream dependency
in pyVmomi 6.0 that is not supported in Python versions 2.7 to 2.7.8. If the
version of Python is not in the supported range, you will need to install an
earlier version of pyVmomi. See `Issue #29537`_ for more information.
.. _Issue #29537: https://github.com/saltstack/salt/issues/29537
'''
# Import Python Libs
from __future__ import absolute_import
import logging
import copy
import sys
# Import Salt Libs
from salt.exceptions import CommandExecutionError, ArgumentValueError
from salt.utils.dictdiffer import recursive_diff
from salt.utils.listdiffer import list_diff
# External libraries
try:
from pyVmomi import VmomiSupport
HAS_PYVMOMI = True
except ImportError:
HAS_PYVMOMI = False
# Get Logging Started
log = logging.getLogger(__name__)
def __virtual__():
if not HAS_PYVMOMI:
return False, 'State module did not load: pyVmomi not found'
# We check the supported vim versions to infer the pyVmomi version
if 'vim25/6.0' in VmomiSupport.versionMap and \
sys.version_info > (2, 7) and sys.version_info < (2, 7, 9):
return False, ('State module did not load: Incompatible versions '
'of Python and pyVmomi present. See Issue #29537.')
return True
def mod_init(low):
'''
Init function
'''
return True
def default_vsan_policy_configured(name, policy):
'''
Configures the default VSAN policy on a vCenter.
The state assumes there is only one default VSAN policy on a vCenter.
policy
Dict representation of a policy
'''
# TODO Refactor when recurse_differ supports list_differ
# It's going to make the whole thing much easier
policy_copy = copy.deepcopy(policy)
proxy_type = __salt__['vsphere.get_proxy_type']()
log.trace('proxy_type = {0}'.format(proxy_type))
# All allowed proxies have a shim execution module with the same
# name which implements a get_details function
# All allowed proxies have a vcenter detail
vcenter = __salt__['{0}.get_details'.format(proxy_type)]()['vcenter']
log.info('Running {0} on vCenter '
'\'{1}\''.format(name, vcenter))
log.trace('policy = {0}'.format(policy))
changes_required = False
ret = {'name': name, 'changes': {}, 'result': None, 'comment': None,
'pchanges': {}}
comments = []
changes = {}
changes_required = False
si = None
try:
#TODO policy schema validation
si = __salt__['vsphere.get_service_instance_via_proxy']()
current_policy = __salt__['vsphere.list_default_vsan_policy'](si)
log.trace('current_policy = {0}'.format(current_policy))
# Building all diffs between the current and expected policy
# XXX We simplify the comparison by assuming we have at most 1
# sub_profile
if policy.get('subprofiles'):
if len(policy['subprofiles']) > 1:
raise ArgumentValueError('Multiple sub_profiles ({0}) are not '
                         'supported in the input policy'
                         ''.format(len(policy['subprofiles'])))
subprofile = policy['subprofiles'][0]
current_subprofile = current_policy['subprofiles'][0]
capabilities_differ = list_diff(current_subprofile['capabilities'],
subprofile.get('capabilities', []),
key='id')
del policy['subprofiles']
if subprofile.get('capabilities'):
del subprofile['capabilities']
del current_subprofile['capabilities']
# Get the subprofile diffs without the capability keys
subprofile_differ = recursive_diff(current_subprofile,
dict(subprofile))
del current_policy['subprofiles']
policy_differ = recursive_diff(current_policy, policy)
if policy_differ.diffs or capabilities_differ.diffs or \
subprofile_differ.diffs:
if 'name' in policy_differ.new_values or \
'description' in policy_differ.new_values:
raise ArgumentValueError(
'\'name\' and \'description\' of the default VSAN policy '
'cannot be updated')
changes_required = True
if __opts__['test']:
str_changes = []
if policy_differ.diffs:
str_changes.extend([change for change in
policy_differ.changes_str.split('\n')])
if subprofile_differ.diffs or capabilities_differ.diffs:
str_changes.append('subprofiles:')
if subprofile_differ.diffs:
str_changes.extend(
[' {0}'.format(change) for change in
subprofile_differ.changes_str.split('\n')])
if capabilities_differ.diffs:
str_changes.append(' capabilities:')
str_changes.extend(
[' {0}'.format(change) for change in
capabilities_differ.changes_str2.split('\n')])
comments.append(
'State {0} will update the default VSAN policy on '
'vCenter \'{1}\':\n{2}'
''.format(name, vcenter, '\n'.join(str_changes)))
else:
__salt__['vsphere.update_storage_policy'](
policy=current_policy['name'],
policy_dict=policy_copy,
service_instance=si)
comments.append('Updated the default VSAN policy in vCenter '
'\'{0}\''.format(vcenter))
log.info(comments[-1])
new_values = policy_differ.new_values
new_values['subprofiles'] = [subprofile_differ.new_values]
new_values['subprofiles'][0]['capabilities'] = \
capabilities_differ.new_values
if not new_values['subprofiles'][0]['capabilities']:
del new_values['subprofiles'][0]['capabilities']
if not new_values['subprofiles'][0]:
del new_values['subprofiles']
old_values = policy_differ.old_values
old_values['subprofiles'] = [subprofile_differ.old_values]
old_values['subprofiles'][0]['capabilities'] = \
capabilities_differ.old_values
if not old_values['subprofiles'][0]['capabilities']:
del old_values['subprofiles'][0]['capabilities']
if not old_values['subprofiles'][0]:
del old_values['subprofiles']
changes.update({'default_vsan_policy':
{'new': new_values,
'old': old_values}})
log.trace(changes)
__salt__['vsphere.disconnect'](si)
except CommandExecutionError as exc:
log.error('Error: {}'.format(exc))
if si:
__salt__['vsphere.disconnect'](si)
if not __opts__['test']:
ret['result'] = False
ret.update({'comment': exc.strerror,
'result': False if not __opts__['test'] else None})
return ret
if not changes_required:
# We have no changes
ret.update({'comment': ('Default VSAN policy in vCenter '
'\'{0}\' is correctly configured. '
'Nothing to be done.'.format(vcenter)),
'result': True})
else:
ret.update({'comment': '\n'.join(comments)})
if __opts__['test']:
ret.update({'pchanges': changes,
'result': None})
else:
ret.update({'changes': changes,
'result': True})
return ret
def storage_policies_configured(name, policies):
'''
Configures storage policies on a vCenter.
policies
List of dict representation of the required storage policies
'''
comments = []
changes = []
changes_required = False
ret = {'name': name, 'changes': {}, 'result': None, 'comment': None,
'pchanges': {}}
log.trace('policies = {0}'.format(policies))
si = None
try:
proxy_type = __salt__['vsphere.get_proxy_type']()
log.trace('proxy_type = {0}'.format(proxy_type))
# All allowed proxies have a shim execution module with the same
# name which implements a get_details function
# All allowed proxies have a vcenter detail
vcenter = __salt__['{0}.get_details'.format(proxy_type)]()['vcenter']
log.info('Running state \'{0}\' on vCenter '
'\'{1}\''.format(name, vcenter))
si = __salt__['vsphere.get_service_instance_via_proxy']()
current_policies = __salt__['vsphere.list_storage_policies'](
policy_names=[policy['name'] for policy in policies],
service_instance=si)
log.trace('current_policies = {0}'.format(current_policies))
# TODO Refactor when recurse_differ supports list_differ
# It's going to make the whole thing much easier
for policy in policies:
policy_copy = copy.deepcopy(policy)
filtered_policies = [p for p in current_policies
if p['name'] == policy['name']]
current_policy = filtered_policies[0] \
if filtered_policies else None
if not current_policy:
changes_required = True
if __opts__['test']:
comments.append('State {0} will create the storage policy '
'\'{1}\' on vCenter \'{2}\''
''.format(name, policy['name'], vcenter))
else:
__salt__['vsphere.create_storage_policy'](
policy['name'], policy, service_instance=si)
comments.append('Created storage policy \'{0}\' on '
'vCenter \'{1}\''.format(policy['name'],
vcenter))
changes.append({'new': policy, 'old': None})
log.trace(comments[-1])
# Continue with next
continue
# Building all diffs between the current and expected policy
# XXX We simplify the comparison by assuming we have at most 1
# sub_profile
if policy.get('subprofiles'):
if len(policy['subprofiles']) > 1:
raise ArgumentValueError('Multiple sub_profiles ({0}) are not '
                         'supported in the input policy'
                         ''.format(len(policy['subprofiles'])))
subprofile = policy['subprofiles'][0]
current_subprofile = current_policy['subprofiles'][0]
capabilities_differ = list_diff(current_subprofile['capabilities'],
subprofile.get('capabilities', []),
key='id')
del policy['subprofiles']
if subprofile.get('capabilities'):
del subprofile['capabilities']
del current_subprofile['capabilities']
# Get the subprofile diffs without the capability keys
subprofile_differ = recursive_diff(current_subprofile,
dict(subprofile))
del current_policy['subprofiles']
policy_differ = recursive_diff(current_policy, policy)
if policy_differ.diffs or capabilities_differ.diffs or \
subprofile_differ.diffs:
changes_required = True
if __opts__['test']:
str_changes = []
if policy_differ.diffs:
str_changes.extend(
[change for change in
policy_differ.changes_str.split('\n')])
if subprofile_differ.diffs or \
capabilities_differ.diffs:
str_changes.append('subprofiles:')
if subprofile_differ.diffs:
str_changes.extend(
[' {0}'.format(change) for change in
subprofile_differ.changes_str.split('\n')])
if capabilities_differ.diffs:
str_changes.append(' capabilities:')
str_changes.extend(
[' {0}'.format(change) for change in
capabilities_differ.changes_str2.split('\n')])
comments.append(
'State {0} will update the storage policy \'{1}\''
' on vCenter \'{2}\':\n{3}'
''.format(name, policy['name'], vcenter,
'\n'.join(str_changes)))
else:
__salt__['vsphere.update_storage_policy'](
policy=current_policy['name'],
policy_dict=policy_copy,
service_instance=si)
comments.append('Updated the storage policy \'{0}\''
'in vCenter \'{1}\''
''.format(policy['name'], vcenter))
log.info(comments[-1])
# Build new/old values to report what was changed
new_values = policy_differ.new_values
new_values['subprofiles'] = [subprofile_differ.new_values]
new_values['subprofiles'][0]['capabilities'] = \
capabilities_differ.new_values
if not new_values['subprofiles'][0]['capabilities']:
del new_values['subprofiles'][0]['capabilities']
if not new_values['subprofiles'][0]:
del new_values['subprofiles']
old_values = policy_differ.old_values
old_values['subprofiles'] = [subprofile_differ.old_values]
old_values['subprofiles'][0]['capabilities'] = \
capabilities_differ.old_values
if not old_values['subprofiles'][0]['capabilities']:
del old_values['subprofiles'][0]['capabilities']
if not old_values['subprofiles'][0]:
del old_values['subprofiles']
changes.append({'new': new_values,
'old': old_values})
else:
# No diffs found - no updates required
comments.append('Storage policy \'{0}\' is up to date. '
'Nothing to be done.'.format(policy['name']))
__salt__['vsphere.disconnect'](si)
except CommandExecutionError as exc:
log.error('Error: {0}'.format(exc))
if si:
__salt__['vsphere.disconnect'](si)
if not __opts__['test']:
ret['result'] = False
ret.update({'comment': exc.strerror,
'result': False if not __opts__['test'] else None})
return ret
if not changes_required:
# We have no changes
ret.update({'comment': ('All storage policies in vCenter '
                        '\'{0}\' are correctly configured. '
'Nothing to be done.'.format(vcenter)),
'result': True})
else:
ret.update({'comment': '\n'.join(comments)})
if __opts__['test']:
ret.update({'pchanges': {'storage_policies': changes},
'result': None})
else:
ret.update({'changes': {'storage_policies': changes},
'result': True})
return ret
def default_storage_policy_assigned(name, policy, datastore):
'''
Assigns a default storage policy to a datastore
policy
Name of storage policy
datastore
Name of datastore
'''
log.info('Running state {0} for policy \'{1}\', datastore \'{2}\'.'
''.format(name, policy, datastore))
changes = {}
changes_required = False
ret = {'name': name, 'changes': {}, 'result': None, 'comment': None,
'pchanges': {}}
si = None
try:
si = __salt__['vsphere.get_service_instance_via_proxy']()
existing_policy = \
__salt__['vsphere.list_default_storage_policy_of_datastore'](
datastore=datastore, service_instance=si)
if existing_policy['name'] == policy:
comment = ('Storage policy \'{0}\' is already assigned to '
'datastore \'{1}\'. Nothing to be done.'
''.format(policy, datastore))
else:
changes_required = True
changes = {
'default_storage_policy': {'old': existing_policy['name'],
'new': policy}}
if __opts__['test']:
comment = ('State {0} will assign storage policy \'{1}\' to '
'datastore \'{2}\'.').format(name, policy,
datastore)
else:
__salt__['vsphere.assign_default_storage_policy_to_datastore'](
policy=policy, datastore=datastore, service_instance=si)
comment = ('Storage policy \'{0}\' was assigned to datastore '
           '\'{1}\'.').format(policy, datastore)
log.info(comment)
except CommandExecutionError as exc:
log.error('Error: {}'.format(exc))
if si:
__salt__['vsphere.disconnect'](si)
ret.update({'comment': exc.strerror,
'result': False if not __opts__['test'] else None})
return ret
ret['comment'] = comment
if changes_required:
if __opts__['test']:
ret.update({'result': None,
'pchanges': changes})
else:
ret.update({'result': True,
'changes': changes})
else:
ret['result'] = True
return ret

View file

@ -84,10 +84,12 @@ def installed(name, updates=None):
Args:
name (str): The identifier of a single update to install.
name (str):
The identifier of a single update to install.
updates (list): A list of identifiers for updates to be installed.
Overrides ``name``. Default is None.
updates (list):
A list of identifiers for updates to be installed. Overrides
``name``. Default is None.
.. note:: Identifiers can be the GUID, the KB number, or any part of the
Title of the Microsoft update. GUIDs and KBs are the preferred method
@ -121,7 +123,7 @@ def installed(name, updates=None):
# Install multiple updates
install_updates:
wua.installed:
- name:
- updates:
- KB3194343
- 28cf1b09-2b1a-458c-9bd1-971d1b26b211
'''
@ -215,10 +217,12 @@ def removed(name, updates=None):
Args:
name (str): The identifier of a single update to uninstall.
name (str):
The identifier of a single update to uninstall.
updates (list): A list of identifiers for updates to be removed.
Overrides ``name``. Default is None.
updates (list):
A list of identifiers for updates to be removed. Overrides ``name``.
Default is None.
.. note:: Identifiers can be the GUID, the KB number, or any part of the
Title of the Microsoft update. GUIDs and KBs are the preferred method
@ -329,3 +333,172 @@ def removed(name, updates=None):
ret['comment'] = 'Updates removed successfully'
return ret
def uptodate(name,
software=True,
drivers=False,
skip_hidden=False,
skip_mandatory=False,
skip_reboot=True,
categories=None,
severities=None,):
'''
Ensure Microsoft Updates that match the passed criteria are installed.
Updates will be downloaded if needed.
This state allows you to update a system without specifying a specific
update to apply. All matching updates will be installed.
Args:
name (str):
The name has no functional value and is only used as a tracking
reference
software (bool):
Include software updates in the results (default is True)
drivers (bool):
Include driver updates in the results (default is False)
skip_hidden (bool):
Skip updates that have been hidden. Default is False.
skip_mandatory (bool):
Skip mandatory updates. Default is False.
skip_reboot (bool):
Skip updates that require a reboot. Default is True.
categories (list):
Specify the categories to list. Must be passed as a list. All
categories returned by default.
Categories include the following:
* Critical Updates
* Definition Updates
* Drivers (make sure you set drivers=True)
* Feature Packs
* Security Updates
* Update Rollups
* Updates
* Windows 7
* Windows 8.1
* Windows 8.1 drivers
* Windows 8.1 and later drivers
* Windows Defender
severities (list):
Specify the severities to include. Must be passed as a list. All
severities returned by default.
Severities include the following:
* Critical
* Important
Returns:
dict: A dictionary containing the results of the update
Example:
.. code-block:: yaml
# Update the system using the state defaults
update_system:
wua.uptodate
# Update the drivers
update_drivers:
wua.uptodate:
- software: False
- drivers: True
- skip_reboot: False
# Apply all critical updates
update_critical:
wua.uptodate:
- severities:
- Critical
'''
ret = {'name': name,
'changes': {},
'result': True,
'comment': ''}
wua = salt.utils.win_update.WindowsUpdateAgent()
available_updates = wua.available(
skip_hidden=skip_hidden, skip_installed=True,
skip_mandatory=skip_mandatory, skip_reboot=skip_reboot,
software=software, drivers=drivers, categories=categories,
severities=severities)
# No updates found
if available_updates.count() == 0:
ret['comment'] = 'No updates found'
return ret
updates = list(available_updates.list().keys())
# Search for updates
install_list = wua.search(updates)
# List of updates to download
download = salt.utils.win_update.Updates()
for item in install_list.updates:
if not salt.utils.is_true(item.IsDownloaded):
download.updates.Add(item)
# List of updates to install
install = salt.utils.win_update.Updates()
for item in install_list.updates:
if not salt.utils.is_true(item.IsInstalled):
install.updates.Add(item)
# Return comment of changes if test.
if __opts__['test']:
ret['result'] = None
ret['comment'] = 'Updates will be installed:'
for update in install.updates:
ret['comment'] += '\n'
ret['comment'] += ': '.join(
[update.Identity.UpdateID, update.Title])
return ret
# Download updates
wua.download(download)
# Install updates
wua.install(install)
# Refresh windows update info
wua.refresh()
post_info = wua.updates().list()
# Verify the installation
for item in install.list():
if not salt.utils.is_true(post_info[item]['Installed']):
ret['changes']['failed'] = {
item: {'Title': post_info[item]['Title'][:40] + '...',
'KBs': post_info[item]['KBs']}
}
ret['result'] = False
else:
ret['changes']['installed'] = {
item: {'Title': post_info[item]['Title'][:40] + '...',
'NeedsReboot': post_info[item]['NeedsReboot'],
'KBs': post_info[item]['KBs']}
}
if ret['changes'].get('failed', False):
ret['comment'] = 'Updates failed'
else:
ret['comment'] = 'Updates installed successfully'
return ret

69
salt/tops/saltclass.py Normal file
View file

@ -0,0 +1,69 @@
# -*- coding: utf-8 -*-
'''
SaltClass master_tops Module
.. code-block:: yaml
master_tops:
saltclass:
path: /srv/saltclass
'''
# import python libs
from __future__ import absolute_import
import logging
import salt.utils.saltclass as sc
log = logging.getLogger(__name__)
def __virtual__():
'''
Only run if properly configured
'''
if __opts__['master_tops'].get('saltclass'):
return True
return False
def top(**kwargs):
'''
Node definitions path will be retrieved from __opts__ - or set to default -
then added to 'salt_data' dict that is passed to the 'get_tops' function.
'salt_data' dict is a convenient way to pass all the required data to the function
It contains:
- __opts__
- empty __salt__
- __grains__
- empty __pillar__
- minion_id
- path
If successful, the function will return a top dict for minion_id
'''
# If path has not been set, make a default
_opts = __opts__['master_tops']['saltclass']
if 'path' not in _opts:
path = '/srv/saltclass'
log.warning('path variable unset, using default: {0}'.format(path))
else:
path = _opts['path']
# Create a dict that will contain our salt objects
# to send to get_tops function
if 'id' not in kwargs['opts']:
log.warning('Minion id not found - Returning empty dict')
return {}
else:
minion_id = kwargs['opts']['id']
salt_data = {
'__opts__': kwargs['opts'],
'__salt__': {},
'__grains__': kwargs['grains'],
'__pillar__': {},
'minion_id': minion_id,
'path': path
}
return sc.get_tops(minion_id, salt_data)

View file

@ -217,7 +217,7 @@ class RecursiveDictDiffer(DictDiffer):
Each inner difference is tabulated two space deeper
'''
changes_strings = []
for p in diff_dict.keys():
for p in sorted(diff_dict.keys()):
if sorted(diff_dict[p].keys()) == ['new', 'old']:
# Some string formatting
old_value = diff_dict[p]['old']
@ -267,7 +267,7 @@ class RecursiveDictDiffer(DictDiffer):
keys.append('{0}{1}'.format(prefix, key))
return keys
return _added(self._diffs, prefix='')
return sorted(_added(self._diffs, prefix=''))
def removed(self):
'''
@ -290,7 +290,7 @@ class RecursiveDictDiffer(DictDiffer):
prefix='{0}{1}.'.format(prefix, key)))
return keys
return _removed(self._diffs, prefix='')
return sorted(_removed(self._diffs, prefix=''))
def changed(self):
'''
@ -338,7 +338,7 @@ class RecursiveDictDiffer(DictDiffer):
return keys
return _changed(self._diffs, prefix='')
return sorted(_changed(self._diffs, prefix=''))
def unchanged(self):
'''
@ -363,7 +363,7 @@ class RecursiveDictDiffer(DictDiffer):
prefix='{0}{1}.'.format(prefix, key)))
return keys
return _unchanged(self.current_dict, self._diffs, prefix='')
return sorted(_unchanged(self.current_dict, self._diffs, prefix=''))
@property
def diffs(self):

View file

@ -485,6 +485,8 @@ def safe_filename_leaf(file_basename):
windows is \\ / : * ? " < > | posix is /
.. versionadded:: 2017.7.2
:codeauthor: Damon Atkins <https://github.com/damon-atkins>
'''
def _replace(re_obj):
return urllib.quote(re_obj.group(0), safe=u'')
@ -497,18 +499,26 @@ def safe_filename_leaf(file_basename):
return re.sub(u'[\\\\:/*?"<>|]', _replace, file_basename, flags=re.UNICODE)
def safe_filepath(file_path_name):
def safe_filepath(file_path_name, dir_sep=None):
'''
Input the full path and filename, splits on directory separator and calls safe_filename_leaf for
each part of the path.
each part of the path. dir_sep allows the caller to force the directory separator to a particular character
.. versionadded:: 2017.7.2
:codeauthor: Damon Atkins <https://github.com/damon-atkins>
'''
if not dir_sep:
dir_sep = os.sep
# Normally if file_path_name or dir_sep is Unicode then the output will be Unicode
# This code ensures the output type is the same as file_path_name
if not isinstance(file_path_name, six.text_type) and isinstance(dir_sep, six.text_type):
dir_sep = dir_sep.encode('ascii') # This should not be executed under PY3
# splitdrive only set drive on windows platform
(drive, path) = os.path.splitdrive(file_path_name)
path = os.sep.join([safe_filename_leaf(file_section) for file_section in file_path_name.rsplit(os.sep)])
path = dir_sep.join([safe_filename_leaf(file_section) for file_section in path.rsplit(dir_sep)])
if drive:
return os.sep.join([drive, path])
else:
path = dir_sep.join([drive, path])
return path
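# Rough usage sketch (path invented; the exact characters quoted depend on
# the platform branch of safe_filename_leaf, only partly shown in this hunk):
# safe_filepath(u'/tmp/some:dir/bad"name.txt') splits on '/', percent-quotes
# the unsafe characters in each leaf (e.g. ':' -> '%3A', '"' -> '%22') and
# rejoins, giving u'/tmp/some%3Adir/bad%22name.txt'; passing dir_sep='\\'
# applies the same treatment to a Windows-style path.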

View file

@ -966,6 +966,31 @@ class CkMinions(object):
auth_list.append(matcher)
return auth_list
def fill_auth_list(self, auth_provider, name, groups, auth_list=None, permissive=None):
'''
Returns a list of authorisation matchers that a user is eligible for.
This list is a combination of the provided personal matchers plus the
matchers of any group the user is in.
'''
if auth_list is None:
auth_list = []
if permissive is None:
permissive = self.opts.get('permissive_acl')
name_matched = False
for match in auth_provider:
if match == '*' and not permissive:
continue
if match.endswith('%'):
if match.rstrip('%') in groups:
auth_list.extend(auth_provider[match])
else:
if salt.utils.expr_match(match, name):
name_matched = True
auth_list.extend(auth_provider[match])
if not permissive and not name_matched and '*' in auth_provider:
auth_list.extend(auth_provider['*'])
return auth_list
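# Hypothetical illustration of the matching rules above (ACL invented for
# this sketch): with auth_provider = {'fred': ['test.*'],
# 'managers%': ['saltutil.*'], '*': ['grains.items']}, a user named 'fred'
# in the 'managers' group collects ['test.*', 'saltutil.*']; a user matching
# no named entry additionally receives the '*' entry as a fallback, while
# with permissive_acl enabled the '*' entry is applied to every user instead.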
def wheel_check(self, auth_list, fun, args):
'''
Check special API permissions
@ -982,6 +1007,8 @@ class CkMinions(object):
'''
Check special API permissions
'''
if not auth_list:
return False
if form != 'cloud':
comps = fun.split('.')
if len(comps) != 2:

329
salt/utils/pbm.py Normal file
View file

@ -0,0 +1,329 @@
# -*- coding: utf-8 -*-
'''
Library for VMware Storage Policy management (via the pbm endpoint)
This library is used to manage the various policies available in VMware
:codeauthor: Alexandru Bleotu <alexandru.bleotu@morganstaley.com>
Dependencies
~~~~~~~~~~~~
- pyVmomi Python Module
pyVmomi
-------
PyVmomi can be installed via pip:
.. code-block:: bash
pip install pyVmomi
.. note::
Version 6.0 of pyVmomi has some problems with SSL error handling on certain
versions of Python. If using version 6.0 of pyVmomi, Python 2.6,
Python 2.7.9, or newer must be present. This is due to an upstream dependency
in pyVmomi 6.0 that is not supported in Python versions 2.7 to 2.7.8. If the
version of Python is not in the supported range, you will need to install an
earlier version of pyVmomi. See `Issue #29537`_ for more information.
.. _Issue #29537: https://github.com/saltstack/salt/issues/29537
Based on the note above, to install an earlier version of pyVmomi than the
version currently listed in PyPi, run the following:
.. code-block:: bash
pip install pyVmomi==5.5.0.2014.1.1
'''
# Import Python Libs
from __future__ import absolute_import
import logging
# Import Salt Libs
import salt.utils.vmware
from salt.exceptions import VMwareApiError, VMwareRuntimeError, \
VMwareObjectRetrievalError
try:
from pyVmomi import pbm, vim, vmodl
HAS_PYVMOMI = True
except ImportError:
HAS_PYVMOMI = False
# Get Logging Started
log = logging.getLogger(__name__)
def __virtual__():
'''
Only load if PyVmomi is installed.
'''
if HAS_PYVMOMI:
return True
else:
return False, 'Missing dependency: The salt.utils.pbm module ' \
'requires the pyvmomi library'
def get_profile_manager(service_instance):
'''
Returns a profile manager
service_instance
Service instance to the host or vCenter
'''
stub = salt.utils.vmware.get_new_service_instance_stub(
service_instance, ns='pbm/2.0', path='/pbm/sdk')
pbm_si = pbm.ServiceInstance('ServiceInstance', stub)
try:
profile_manager = pbm_si.RetrieveContent().profileManager
except vim.fault.NoPermission as exc:
log.exception(exc)
raise VMwareApiError('Not enough permissions. Required privilege: '
'{0}'.format(exc.privilegeId))
except vim.fault.VimFault as exc:
log.exception(exc)
raise VMwareApiError(exc.msg)
except vmodl.RuntimeFault as exc:
log.exception(exc)
raise VMwareRuntimeError(exc.msg)
return profile_manager
def get_placement_solver(service_instance):
'''
Returns a placement solver
service_instance
Service instance to the host or vCenter
'''
stub = salt.utils.vmware.get_new_service_instance_stub(
service_instance, ns='pbm/2.0', path='/pbm/sdk')
pbm_si = pbm.ServiceInstance('ServiceInstance', stub)
try:
placement_solver = pbm_si.RetrieveContent().placementSolver
except vim.fault.NoPermission as exc:
log.exception(exc)
raise VMwareApiError('Not enough permissions. Required privilege: '
'{0}'.format(exc.privilegeId))
except vim.fault.VimFault as exc:
log.exception(exc)
raise VMwareApiError(exc.msg)
except vmodl.RuntimeFault as exc:
log.exception(exc)
raise VMwareRuntimeError(exc.msg)
return placement_solver
def get_capability_definitions(profile_manager):
'''
Returns a list of all capability definitions.
profile_manager
Reference to the profile manager.
'''
res_type = pbm.profile.ResourceType(
resourceType=pbm.profile.ResourceTypeEnum.STORAGE)
try:
cap_categories = profile_manager.FetchCapabilityMetadata(res_type)
except vim.fault.NoPermission as exc:
log.exception(exc)
raise VMwareApiError('Not enough permissions. Required privilege: '
'{0}'.format(exc.privilegeId))
except vim.fault.VimFault as exc:
log.exception(exc)
raise VMwareApiError(exc.msg)
except vmodl.RuntimeFault as exc:
log.exception(exc)
raise VMwareRuntimeError(exc.msg)
cap_definitions = []
for cat in cap_categories:
cap_definitions.extend(cat.capabilityMetadata)
return cap_definitions
def get_policies_by_id(profile_manager, policy_ids):
'''
Returns a list of policies with the specified ids.
profile_manager
Reference to the profile manager.
policy_ids
List of policy ids to retrieve.
'''
try:
return profile_manager.RetrieveContent(policy_ids)
except vim.fault.NoPermission as exc:
log.exception(exc)
raise VMwareApiError('Not enough permissions. Required privilege: '
'{0}'.format(exc.privilegeId))
except vim.fault.VimFault as exc:
log.exception(exc)
raise VMwareApiError(exc.msg)
except vmodl.RuntimeFault as exc:
log.exception(exc)
raise VMwareRuntimeError(exc.msg)
def get_storage_policies(profile_manager, policy_names=None,
get_all_policies=False):
'''
Returns a list of the storage policies, filtered by name.
profile_manager
Reference to the profile manager.
policy_names
List of policy names to filter by.
Default is None.
get_all_policies
Flag specifying to return all policies, regardless of the specified
filter.
'''
res_type = pbm.profile.ResourceType(
resourceType=pbm.profile.ResourceTypeEnum.STORAGE)
try:
policy_ids = profile_manager.QueryProfile(res_type)
except vim.fault.NoPermission as exc:
log.exception(exc)
raise VMwareApiError('Not enough permissions. Required privilege: '
'{0}'.format(exc.privilegeId))
except vim.fault.VimFault as exc:
log.exception(exc)
raise VMwareApiError(exc.msg)
except vmodl.RuntimeFault as exc:
log.exception(exc)
raise VMwareRuntimeError(exc.msg)
log.trace('policy_ids = {0}'.format(policy_ids))
# More policies are returned so we need to filter again
policies = [p for p in get_policies_by_id(profile_manager, policy_ids)
if p.resourceType.resourceType ==
pbm.profile.ResourceTypeEnum.STORAGE]
if get_all_policies:
return policies
if not policy_names:
policy_names = []
return [p for p in policies if p.name in policy_names]
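# Rough usage sketch chaining the helpers in this module (host and
# credentials are assumptions, not part of this file):
#
#     import salt.utils.vmware
#     import salt.utils.pbm
#
#     si = salt.utils.vmware.get_service_instance(host, username, password)
#     profile_manager = salt.utils.pbm.get_profile_manager(si)
#     policies = salt.utils.pbm.get_storage_policies(
#         profile_manager, policy_names=['salt_storage_policy'])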
def create_storage_policy(profile_manager, policy_spec):
'''
Creates a storage policy.
profile_manager
Reference to the profile manager.
policy_spec
Policy update spec.
'''
try:
profile_manager.Create(policy_spec)
except vim.fault.NoPermission as exc:
log.exception(exc)
raise VMwareApiError('Not enough permissions. Required privilege: '
'{0}'.format(exc.privilegeId))
except vim.fault.VimFault as exc:
log.exception(exc)
raise VMwareApiError(exc.msg)
except vmodl.RuntimeFault as exc:
log.exception(exc)
raise VMwareRuntimeError(exc.msg)
def update_storage_policy(profile_manager, policy, policy_spec):
'''
Updates a storage policy.
profile_manager
Reference to the profile manager.
policy
Reference to the policy to be updated.
policy_spec
Policy update spec.
'''
try:
profile_manager.Update(policy.profileId, policy_spec)
except vim.fault.NoPermission as exc:
log.exception(exc)
raise VMwareApiError('Not enough permissions. Required privilege: '
'{0}'.format(exc.privilegeId))
except vim.fault.VimFault as exc:
log.exception(exc)
raise VMwareApiError(exc.msg)
except vmodl.RuntimeFault as exc:
log.exception(exc)
raise VMwareRuntimeError(exc.msg)
def get_default_storage_policy_of_datastore(profile_manager, datastore):
'''
Returns the default storage policy reference assigned to a datastore.
profile_manager
Reference to the profile manager.
datastore
Reference to the datastore.
'''
# Retrieve all datastores visible
hub = pbm.placement.PlacementHub(
hubId=datastore._moId, hubType='Datastore')
log.trace('placement_hub = {0}'.format(hub))
try:
policy_id = profile_manager.QueryDefaultRequirementProfile(hub)
except vim.fault.NoPermission as exc:
log.exception(exc)
raise VMwareApiError('Not enough permissions. Required privilege: '
'{0}'.format(exc.privilegeId))
except vim.fault.VimFault as exc:
log.exception(exc)
raise VMwareApiError(exc.msg)
except vmodl.RuntimeFault as exc:
log.exception(exc)
raise VMwareRuntimeError(exc.msg)
policy_refs = get_policies_by_id(profile_manager, [policy_id])
if not policy_refs:
raise VMwareObjectRetrievalError('Storage policy with id \'{0}\' was '
'not found'.format(policy_id))
return policy_refs[0]
def assign_default_storage_policy_to_datastore(profile_manager, policy,
datastore):
'''
Assigns a storage policy as the default policy to a datastore.
profile_manager
Reference to the profile manager.
policy
Reference to the policy to be assigned.
datastore
Reference to the datastore.
'''
placement_hub = pbm.placement.PlacementHub(
hubId=datastore._moId, hubType='Datastore')
log.trace('placement_hub = {0}'.format(placement_hub))
try:
profile_manager.AssignDefaultRequirementProfile(policy.profileId,
[placement_hub])
except vim.fault.NoPermission as exc:
log.exception(exc)
raise VMwareApiError('Not enough permissions. Required privilege: '
'{0}'.format(exc.privilegeId))
except vim.fault.VimFault as exc:
log.exception(exc)
raise VMwareApiError(exc.msg)
except vmodl.RuntimeFault as exc:
log.exception(exc)
raise VMwareRuntimeError(exc.msg)

296
salt/utils/saltclass.py Normal file
View file

@ -0,0 +1,296 @@
# -*- coding: utf-8 -*-
from __future__ import absolute_import
import os
import re
import logging
from salt.ext.six import iteritems
import yaml
from jinja2 import FileSystemLoader, Environment
log = logging.getLogger(__name__)
# Renders jinja from a template file
def render_jinja(_file, salt_data):
j_env = Environment(loader=FileSystemLoader(os.path.dirname(_file)))
j_env.globals.update({
'__opts__': salt_data['__opts__'],
'__salt__': salt_data['__salt__'],
'__grains__': salt_data['__grains__'],
'__pillar__': salt_data['__pillar__'],
'minion_id': salt_data['minion_id'],
})
j_render = j_env.get_template(os.path.basename(_file)).render()
return j_render
# Renders yaml from rendered jinja
def render_yaml(_file, salt_data):
return yaml.safe_load(render_jinja(_file, salt_data))
# Returns a dict from a class yaml definition
def get_class(_class, salt_data):
l_files = []
saltclass_path = salt_data['path']
straight = '{0}/classes/{1}.yml'.format(saltclass_path, _class)
sub_straight = '{0}/classes/{1}.yml'.format(saltclass_path,
_class.replace('.', '/'))
sub_init = '{0}/classes/{1}/init.yml'.format(saltclass_path,
_class.replace('.', '/'))
for root, dirs, files in os.walk('{0}/classes'.format(saltclass_path)):
for l_file in files:
l_files.append('{0}/{1}'.format(root, l_file))
if straight in l_files:
return render_yaml(straight, salt_data)
if sub_straight in l_files:
return render_yaml(sub_straight, salt_data)
if sub_init in l_files:
return render_yaml(sub_init, salt_data)
log.warning('{0}: Class definition not found'.format(_class))
return {}
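# Resolution example (class name taken from the example files): the class
# 'app.ssh.server' is looked up, in order, as
# {path}/classes/app.ssh.server.yml, then {path}/classes/app/ssh/server.yml,
# then {path}/classes/app/ssh/server/init.yml; the first existing file wins.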
# Return environment
def get_env_from_dict(exp_dict_list):
environment = ''
for s_class in exp_dict_list:
if 'environment' in s_class:
environment = s_class['environment']
return environment
# Merge dict b into a
def dict_merge(a, b, path=None):
if path is None:
path = []
for key in b:
if key in a:
if isinstance(a[key], list) and isinstance(b[key], list):
if b[key][0] == '^':
b[key].pop(0)
a[key] = b[key]
else:
a[key].extend(b[key])
elif isinstance(a[key], dict) and isinstance(b[key], dict):
dict_merge(a[key], b[key], path + [str(key)])
elif a[key] == b[key]:
pass
else:
a[key] = b[key]
else:
a[key] = b[key]
return a
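# Minimal illustration of the list handling above (values invented):
# dict_merge({'pkg': ['app-core']}, {'pkg': ['app-backend']}) extends the
# list, giving {'pkg': ['app-core', 'app-backend']}, while
# dict_merge({'pkg': ['app-core']}, {'pkg': ['^', 'app-backend']}) treats the
# leading '^' as an override marker and yields {'pkg': ['app-backend']}.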
# Recursive search and replace in a dict
def dict_search_and_replace(d, old, new, expanded):
for (k, v) in iteritems(d):
if isinstance(v, dict):
dict_search_and_replace(d[k], old, new, expanded)
if v == old:
d[k] = new
return d
# Retrieve original value from ${xx:yy:zz} to be expanded
def find_value_to_expand(x, v):
a = x
for i in v[2:-1].split(':'):
if i in a:
a = a.get(i)
else:
a = v
return a
return a
# Return a dict that contains expanded variables if found
def expand_variables(a, b, expanded, path=None):
if path is None:
b = a.copy()
path = []
for (k, v) in iteritems(a):
if isinstance(v, dict):
expand_variables(v, b, expanded, path + [str(k)])
else:
if isinstance(v, str):
vre = re.search(r'(^|.)\$\{.*?\}', v)
if vre:
re_v = vre.group(0)
if re_v.startswith('\\'):
v_new = v.replace(re_v, re_v.lstrip('\\'))
b = dict_search_and_replace(b, v, v_new, expanded)
expanded.append(k)
elif not re_v.startswith('$'):
v_expanded = find_value_to_expand(b, re_v[1:])
v_new = v.replace(re_v[1:], v_expanded)
b = dict_search_and_replace(b, v, v_new, expanded)
expanded.append(k)
else:
v_expanded = find_value_to_expand(b, re_v)
b = dict_search_and_replace(b, v, v_expanded, expanded)
expanded.append(k)
return b
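# Minimal illustration of the expansion rules above (values invented):
# given {'default': {'domain': 'example.com'},
#        'uri': 'https://${default:domain}/api',
#        'raw': 'literal \${default:domain}'},
# expand_variables() replaces '${default:domain}' with 'example.com' in
# 'uri', while the backslash-escaped '\${default:domain}' in 'raw' is left
# unexpanded (only the escaping backslash is stripped).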
def expand_classes_in_order(minion_dict,
salt_data,
seen_classes,
expanded_classes,
classes_to_expand):
# Get classes to expand from minion dictionary
if not classes_to_expand and 'classes' in minion_dict:
classes_to_expand = minion_dict['classes']
# Now loop on list to recursively expand them
for klass in classes_to_expand:
if klass not in seen_classes:
seen_classes.append(klass)
expanded_classes[klass] = get_class(klass, salt_data)
# Fix corner case where class is loaded but doesn't contain anything
if expanded_classes[klass] is None:
expanded_classes[klass] = {}
# Now replace class element in classes_to_expand by expansion
if 'classes' in expanded_classes[klass]:
l_id = classes_to_expand.index(klass)
classes_to_expand[l_id:l_id] = expanded_classes[klass]['classes']
expand_classes_in_order(minion_dict,
salt_data,
seen_classes,
expanded_classes,
classes_to_expand)
else:
expand_classes_in_order(minion_dict,
salt_data,
seen_classes,
expanded_classes,
classes_to_expand)
# We may have duplicates here and we want to remove them
tmp = []
for t_element in classes_to_expand:
if t_element not in tmp:
tmp.append(t_element)
classes_to_expand = tmp
# Now that we've retrieved every class in order,
# let's return an ordered list of dicts
ord_expanded_classes = []
ord_expanded_states = []
for ord_klass in classes_to_expand:
ord_expanded_classes.append(expanded_classes[ord_klass])
# And be smart and sort out states list
# Address the corner case where states is empty in a class definition
if 'states' in expanded_classes[ord_klass] and expanded_classes[ord_klass]['states'] is None:
expanded_classes[ord_klass]['states'] = {}
if 'states' in expanded_classes[ord_klass]:
ord_expanded_states.extend(expanded_classes[ord_klass]['states'])
# Add our minion dict as final element but check if we have states to process
if 'states' in minion_dict and minion_dict['states'] is None:
minion_dict['states'] = []
if 'states' in minion_dict:
ord_expanded_states.extend(minion_dict['states'])
ord_expanded_classes.append(minion_dict)
return ord_expanded_classes, classes_to_expand, ord_expanded_states
def expanded_dict_from_minion(minion_id, salt_data):
_file = ''
saltclass_path = salt_data['path']
# Start
for root, dirs, files in os.walk('{0}/nodes'.format(saltclass_path)):
for minion_file in files:
if minion_file == '{0}.yml'.format(minion_id):
_file = os.path.join(root, minion_file)
# Load the minion_id definition if existing, else an empty dict
node_dict = {}
if _file:
node_dict[minion_id] = render_yaml(_file, salt_data)
else:
log.warning('{0}: Node definition not found'.format(minion_id))
node_dict[minion_id] = {}
# Get 2 ordered lists:
# expanded_classes: A list of all the dicts
# classes_list: List of all the classes
expanded_classes, classes_list, states_list = expand_classes_in_order(
node_dict[minion_id],
salt_data, [], {}, [])
# Here merge the pillars together
pillars_dict = {}
for exp_dict in expanded_classes:
if 'pillars' in exp_dict:
dict_merge(pillars_dict, exp_dict)
return expanded_classes, pillars_dict, classes_list, states_list
def get_pillars(minion_id, salt_data):
# Get 2 dicts and 2 lists
# expanded_classes: Full list of expanded dicts
# pillars_dict: dict containing merged pillars in order
# classes_list: All classes processed in order
# states_list: All states listed in order
(expanded_classes,
pillars_dict,
classes_list,
states_list) = expanded_dict_from_minion(minion_id, salt_data)
# Retrieve environment
environment = get_env_from_dict(expanded_classes)
# Expand ${} variables in merged dict
# pillars key shouldn't exist if we haven't found any minion_id ref
if 'pillars' in pillars_dict:
pillars_dict_expanded = expand_variables(pillars_dict['pillars'], {}, [])
else:
pillars_dict_expanded = expand_variables({}, {}, [])
# Build the final pillars dict
pillars_dict = {}
pillars_dict['__saltclass__'] = {}
pillars_dict['__saltclass__']['states'] = states_list
pillars_dict['__saltclass__']['classes'] = classes_list
pillars_dict['__saltclass__']['environment'] = environment
pillars_dict['__saltclass__']['nodename'] = minion_id
pillars_dict.update(pillars_dict_expanded)
return pillars_dict
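# Hypothetical shape of the dict returned above (values taken from the
# example class/node files; ordering follows class expansion order and
# other pillar data is omitted):
# {
#     '__saltclass__': {
#         'states': ['openssh', 'user_mgt'],
#         'classes': ['default.users', 'default.motd', 'default'],
#         'environment': 'base',
#         'nodename': 'zrh.node3',
#     },
#     'default': {'network': {...}, 'users': {...}},
# }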
def get_tops(minion_id, salt_data):
# Get 2 dicts and 2 lists
# expanded_classes: Full list of expanded dicts
# pillars_dict: dict containing merged pillars in order
# classes_list: All classes processed in order
# states_list: All states listed in order
(expanded_classes,
pillars_dict,
classes_list,
states_list) = expanded_dict_from_minion(minion_id, salt_data)
# Retrieve environment
environment = get_env_from_dict(expanded_classes)
# Build final top dict
tops_dict = {}
tops_dict[environment] = states_list
return tops_dict
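# Hypothetical result of get_tops for the example node files: with
# environment 'base' and states ['openssh', 'user_mgt'], the returned top
# dict is simply {'base': ['openssh', 'user_mgt']}.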

File diff suppressed because it is too large Load diff

View file

@ -49,7 +49,8 @@ import logging
import ssl
# Import Salt Libs
from salt.exceptions import VMwareApiError, VMwareRuntimeError
from salt.exceptions import VMwareApiError, VMwareRuntimeError, \
VMwareObjectRetrievalError
import salt.utils.vmware
try:
@ -129,6 +130,308 @@ def get_vsan_cluster_config_system(service_instance):
return vc_mos['vsan-cluster-config-system']
def get_vsan_disk_management_system(service_instance):
'''
Returns a vim.VimClusterVsanVcDiskManagementSystem object
service_instance
Service instance to the host or vCenter
'''
#TODO Replace when better connection mechanism is available
#For Python 2.7.9 and later, the default SSL context has a more strict
#connection handshaking rule. We may need to turn off the hostname checking
#and client side cert verification
context = None
if sys.version_info[:3] > (2, 7, 8):
context = ssl.create_default_context()
context.check_hostname = False
context.verify_mode = ssl.CERT_NONE
stub = service_instance._stub
vc_mos = vsanapiutils.GetVsanVcMos(stub, context=context)
return vc_mos['vsan-disk-management-system']
def get_host_vsan_system(service_instance, host_ref, hostname=None):
'''
Returns a host's vsan system
service_instance
Service instance to the host or vCenter
host_ref
Reference to ESXi host
hostname
Name of ESXi host. Default value is None.
'''
if not hostname:
hostname = salt.utils.vmware.get_managed_object_name(host_ref)
traversal_spec = vmodl.query.PropertyCollector.TraversalSpec(
path='configManager.vsanSystem',
type=vim.HostSystem,
skip=False)
objs = salt.utils.vmware.get_mors_with_properties(
service_instance, vim.HostVsanSystem, property_list=['config.enabled'],
container_ref=host_ref, traversal_spec=traversal_spec)
if not objs:
raise VMwareObjectRetrievalError('Host\'s \'{0}\' VSAN system was '
'not retrieved'.format(hostname))
log.trace('[{0}] Retrieved VSAN system'.format(hostname))
return objs[0]['object']
def create_diskgroup(service_instance, vsan_disk_mgmt_system,
host_ref, cache_disk, capacity_disks):
'''
Creates a disk group
service_instance
Service instance to the host or vCenter
vsan_disk_mgmt_system
vim.VimClusterVsanVcDiskManagementSystem representing the vSAN disk
management system retrieved from the vsan endpoint.
host_ref
vim.HostSystem object representing the target host the disk group will
be created on
cache_disk
The vim.HostScsiDisk to be used as a cache disk. It must be an SSD disk.
capacity_disks
List of vim.HostScsiDisk objects representing the disks to be used as
capacity disks. Can be either ssd or non-ssd. There must be a minimum
of 1 capacity disk in the list.
'''
hostname = salt.utils.vmware.get_managed_object_name(host_ref)
cache_disk_id = cache_disk.canonicalName
log.debug('Creating a new disk group with cache disk \'{0}\' on host '
'\'{1}\''.format(cache_disk_id, hostname))
log.trace('capacity_disk_ids = {0}'.format([c.canonicalName for c in
capacity_disks]))
spec = vim.VimVsanHostDiskMappingCreationSpec()
spec.cacheDisks = [cache_disk]
spec.capacityDisks = capacity_disks
# All capacity disks must be either ssd or non-ssd (mixed disks are not
# supported)
spec.creationType = 'allFlash' if getattr(capacity_disks[0], 'ssd') \
else 'hybrid'
spec.host = host_ref
try:
task = vsan_disk_mgmt_system.InitializeDiskMappings(spec)
except vim.fault.NoPermission as exc:
log.exception(exc)
raise VMwareApiError('Not enough permissions. Required privilege: '
'{0}'.format(exc.privilegeId))
except vim.fault.VimFault as exc:
log.exception(exc)
raise VMwareApiError(exc.msg)
except vmodl.fault.MethodNotFound as exc:
log.exception(exc)
raise VMwareRuntimeError('Method \'{0}\' not found'.format(exc.method))
except vmodl.RuntimeFault as exc:
log.exception(exc)
raise VMwareRuntimeError(exc.msg)
_wait_for_tasks([task], service_instance)
return True
def add_capacity_to_diskgroup(service_instance, vsan_disk_mgmt_system,
host_ref, diskgroup, new_capacity_disks):
'''
Adds capacity disk(s) to a disk group.
service_instance
Service instance to the host or vCenter
vsan_disk_mgmt_system
vim.VimClusterVsanVcDiskManagementSystem representing the vSAN disk
management system retrieved from the vsan endpoint.
host_ref
vim.HostSystem object representing the target host the disk group will
be created on
diskgroup
The vsan.HostDiskMapping object representing the host's diskgroup where
the additional capacity needs to be added
new_capacity_disks
List of vim.HostScsiDisk objects representing the disks to be added as
capacity disks. Can be either ssd or non-ssd. There must be a minimum
of 1 new capacity disk in the list.
'''
hostname = salt.utils.vmware.get_managed_object_name(host_ref)
cache_disk = diskgroup.ssd
cache_disk_id = cache_disk.canonicalName
log.debug('Adding capacity to disk group with cache disk \'{0}\' on host '
'\'{1}\''.format(cache_disk_id, hostname))
log.trace('new_capacity_disk_ids = {0}'.format([c.canonicalName for c in
new_capacity_disks]))
spec = vim.VimVsanHostDiskMappingCreationSpec()
spec.cacheDisks = [cache_disk]
spec.capacityDisks = new_capacity_disks
# All new capacity disks must be either ssd or non-ssd (mixed disks are not
# supported); also they need to match the type of the existing capacity
# disks; we assume disks are already validated
spec.creationType = 'allFlash' if getattr(new_capacity_disks[0], 'ssd') \
else 'hybrid'
spec.host = host_ref
try:
task = vsan_disk_mgmt_system.InitializeDiskMappings(spec)
except vim.fault.NoPermission as exc:
log.exception(exc)
raise VMwareApiError('Not enough permissions. Required privilege: '
'{0}'.format(exc.privilegeId))
except vim.fault.VimFault as exc:
log.exception(exc)
raise VMwareApiError(exc.msg)
except vmodl.fault.MethodNotFound as exc:
log.exception(exc)
raise VMwareRuntimeError('Method \'{0}\' not found'.format(exc.method))
except vmodl.RuntimeFault as exc:
raise VMwareRuntimeError(exc.msg)
_wait_for_tasks([task], service_instance)
return True
def remove_capacity_from_diskgroup(service_instance, host_ref, diskgroup,
capacity_disks, data_evacuation=True,
hostname=None,
host_vsan_system=None):
'''
Removes capacity disk(s) from a disk group.
service_instance
Service instance to the host or vCenter
host_vsan_system
ESXi host's VSAN system
host_ref
Reference to the ESXi host
diskgroup
The vsan.HostDiskMapping object representing the host's diskgroup from
where the capacity needs to be removed
capacity_disks
List of vim.HostScsiDisk objects representing the capacity disks to be
removed. Can be either ssd or non-ssd. There must be a minimum
of 1 capacity disk in the list.
data_evacuation
Specifies whether to gracefully evacuate the data on the capacity disks
before removing them from the disk group. Default value is True.
hostname
Name of ESXi host. Default value is None.
host_vsan_system
ESXi host's VSAN system. Default value is None.
'''
if not hostname:
hostname = salt.utils.vmware.get_managed_object_name(host_ref)
cache_disk = diskgroup.ssd
cache_disk_id = cache_disk.canonicalName
log.debug('Removing capacity from disk group with cache disk \'{0}\' on '
'host \'{1}\''.format(cache_disk_id, hostname))
log.trace('capacity_disk_ids = {0}'.format([c.canonicalName for c in
capacity_disks]))
if not host_vsan_system:
host_vsan_system = get_host_vsan_system(service_instance,
host_ref, hostname)
# Set to evacuate all data before removing the disks
maint_spec = vim.HostMaintenanceSpec()
maint_spec.vsanMode = vim.VsanHostDecommissionMode()
if data_evacuation:
maint_spec.vsanMode.objectAction = \
vim.VsanHostDecommissionModeObjectAction.evacuateAllData
else:
maint_spec.vsanMode.objectAction = \
vim.VsanHostDecommissionModeObjectAction.noAction
try:
task = host_vsan_system.RemoveDisk_Task(disk=capacity_disks,
maintenanceSpec=maint_spec)
except vim.fault.NoPermission as exc:
log.exception(exc)
raise VMwareApiError('Not enough permissions. Required privilege: '
'{0}'.format(exc.privilegeId))
except vim.fault.VimFault as exc:
log.exception(exc)
raise VMwareApiError(exc.msg)
except vmodl.RuntimeFault as exc:
log.exception(exc)
raise VMwareRuntimeError(exc.msg)
salt.utils.vmware.wait_for_task(task, hostname, 'remove_capacity')
return True
def remove_diskgroup(service_instance, host_ref, diskgroup, hostname=None,
host_vsan_system=None, erase_disk_partitions=False,
data_accessibility=True):
'''
Removes a disk group.
service_instance
Service instance to the host or vCenter
host_ref
Reference to the ESXi host
diskgroup
The vsan.HostDiskMapping object representing the host's diskgroup from
where the capacity needs to be removed
hostname
Name of ESXi host. Default value is None.
host_vsan_system
ESXi host's VSAN system. Default value is None.
data_accessibility
Specifies whether to ensure data accessibility. Default value is True.
'''
if not hostname:
hostname = salt.utils.vmware.get_managed_object_name(host_ref)
cache_disk_id = diskgroup.ssd.canonicalName
log.debug('Removing disk group with cache disk \'{0}\' on '
'host \'{1}\''.format(cache_disk_id, hostname))
if not host_vsan_system:
host_vsan_system = get_host_vsan_system(
service_instance, host_ref, hostname)
# Set to evacuate all data before removing the disks
maint_spec = vim.HostMaintenanceSpec()
maint_spec.vsanMode = vim.VsanHostDecommissionMode()
object_action = vim.VsanHostDecommissionModeObjectAction
if data_accessibility:
maint_spec.vsanMode.objectAction = \
object_action.ensureObjectAccessibility
else:
maint_spec.vsanMode.objectAction = object_action.noAction
try:
task = host_vsan_system.RemoveDiskMapping_Task(
mapping=[diskgroup], maintenanceSpec=maint_spec)
except vim.fault.NoPermission as exc:
log.exception(exc)
raise VMwareApiError('Not enough permissions. Required privilege: '
'{0}'.format(exc.privilegeId))
except vim.fault.VimFault as exc:
log.exception(exc)
raise VMwareApiError(exc.msg)
except vmodl.RuntimeFault as exc:
log.exception(exc)
raise VMwareRuntimeError(exc.msg)
salt.utils.vmware.wait_for_task(task, hostname, 'remove_diskgroup')
log.debug('Removed disk group with cache disk \'{0}\' '
'on host \'{1}\''.format(cache_disk_id, hostname))
return True
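# Editor's note (illustrative sketch, not part of the module): removing a whole
# diskgroup while keeping its data accessible, using the signature defined above:
#
#     remove_diskgroup(si, host_ref, diskgroup, data_accessibility=True)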
def get_cluster_vsan_info(cluster_ref):
'''
Returns the extended cluster vsan configuration object

View file

@ -0,0 +1,6 @@
classes:
- app.ssh.server
pillars:
sshd:
root_access: yes

View file

@ -0,0 +1,4 @@
pillars:
sshd:
root_access: no
ssh_port: 22

View file

@ -0,0 +1,17 @@
classes:
- default.users
- default.motd
states:
- openssh
pillars:
default:
network:
dns:
srv1: 192.168.0.1
srv2: 192.168.0.2
domain: example.com
ntp:
srv1: 192.168.10.10
srv2: 192.168.10.20

View file

@ -0,0 +1,3 @@
pillars:
motd:
text: "Welcome to {{ __grains__['id'] }} system located in ${default:network:sub}"

View file

@ -0,0 +1,16 @@
states:
- user_mgt
pillars:
default:
users:
adm1:
uid: 1201
gid: 1201
gecos: 'Super user admin1'
homedir: /home/adm1
adm2:
uid: 1202
gid: 1202
gecos: 'Super user admin2'
homedir: /home/adm2

View file

@ -0,0 +1,21 @@
states:
- app
pillars:
app:
config:
dns:
srv1: ${default:network:dns:srv1}
srv2: ${default:network:dns:srv2}
uri: https://application.domain/call?\${test}
prod_parameters:
- p1
- p2
- p3
pkg:
- app-core
- app-backend
# Safe minion_id matching
{% if minion_id == 'zrh.node3' %}
safe_pillar: '_only_ zrh.node3 will see this pillar and this cannot be overridden like grains'
{% endif %}

View file

@ -0,0 +1,7 @@
states:
- nginx_deployment
pillars:
nginx:
pkg:
- nginx

View file

@ -0,0 +1,7 @@
classes:
- roles.nginx
pillars:
nginx:
pkg:
- nginx-module

View file

@ -0,0 +1,20 @@
pillars:
default:
network:
sub: Geneva
dns:
srv1: 10.20.0.1
srv2: 10.20.0.2
srv3: 192.168.1.1
domain: gnv.example.com
users:
adm1:
uid: 1210
gid: 1210
gecos: 'Super user admin1'
homedir: /srv/app/adm1
adm3:
uid: 1203
gid: 1203
gecos: 'Super user admin3'
homedir: /home/adm3

View file

@ -0,0 +1,17 @@
classes:
- app.ssh.server
- roles.nginx.server
pillars:
default:
network:
sub: Lausanne
dns:
srv1: 10.10.0.1
domain: qls.example.com
users:
nginx_adm:
uid: 250
gid: 200
gecos: 'Nginx admin user'
homedir: /srv/www

View file

@ -0,0 +1,24 @@
classes:
- roles.app
# This should validate that we process a class only once
- app.borgbackup
# As this one should not be processed again
# (it would otherwise override, in turn, the overrides from app.borgbackup)
- app.ssh.server
pillars:
default:
network:
sub: Zurich
dns:
srv1: 10.30.0.1
srv2: 10.30.0.2
domain: zrh.example.com
ntp:
srv1: 10.0.0.127
users:
adm1:
uid: 250
gid: 250
gecos: 'Super user admin1'
homedir: /srv/app/1

View file

@ -0,0 +1,6 @@
environment: base
classes:
{% for class in ['default'] %}
- {{ class }}
{% endfor %}
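# Editor's note: after Jinja rendering, the node definition above reduces to
#   environment: base
#   classes:
#     - default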

View file

@ -98,13 +98,13 @@ class Nilrt_ipModuleTest(ModuleCase):
def test_static_all(self):
interfaces = self.__interfaces()
for interface in interfaces:
result = self.run_function('ip.set_static_all', [interface, '192.168.10.4', '255.255.255.0', '192.168.10.1', '8.8.4.4 my.dns.com'])
result = self.run_function('ip.set_static_all', [interface, '192.168.10.4', '255.255.255.0', '192.168.10.1', '8.8.4.4 8.8.8.8'])
self.assertTrue(result)
info = self.run_function('ip.get_interfaces_details')
for interface in info['interfaces']:
self.assertIn('8.8.4.4', interface['ipv4']['dns'])
self.assertIn('my.dns.com', interface['ipv4']['dns'])
self.assertIn('8.8.8.8', interface['ipv4']['dns'])
self.assertEqual(interface['ipv4']['requestmode'], 'static')
self.assertEqual(interface['ipv4']['address'], '192.168.10.4')
self.assertEqual(interface['ipv4']['netmask'], '255.255.255.0')

View file

@ -341,7 +341,8 @@ class GitPillarTestBase(GitTestBase, LoaderModuleMockMixin):
with patch.dict(git_pillar.__opts__, ext_pillar_opts):
return git_pillar.ext_pillar(
'minion',
ext_pillar_opts['ext_pillar'][0]['git'],
{},
*ext_pillar_opts['ext_pillar'][0]['git']
)
def make_repo(self, root_dir, user='root'):

View file

@ -12,6 +12,7 @@
# Python libs
from __future__ import absolute_import
import sys
# Salt libs
import salt.config
@ -45,14 +46,32 @@ class StatusBeaconTestCase(TestCase, LoaderModuleMockMixin):
def test_empty_config(self, *args, **kwargs):
config = {}
ret = status.beacon(config)
self.assertEqual(sorted(list(ret[0]['data'])), sorted(['loadavg', 'meminfo', 'cpustats', 'vmstats', 'time']))
if sys.platform.startswith('win'):
expected = []
else:
expected = sorted(['loadavg', 'meminfo', 'cpustats', 'vmstats', 'time'])
self.assertEqual(sorted(list(ret[0]['data'])), expected)
def test_deprecated_dict_config(self):
config = {'time': ['all']}
ret = status.beacon(config)
self.assertEqual(list(ret[0]['data']), ['time'])
if sys.platform.startswith('win'):
expected = []
else:
expected = ['time']
self.assertEqual(list(ret[0]['data']), expected)
def test_list_config(self):
config = [{'time': ['all']}]
ret = status.beacon(config)
self.assertEqual(list(ret[0]['data']), ['time'])
if sys.platform.startswith('win'):
expected = []
else:
expected = ['time']
self.assertEqual(list(ret[0]['data']), expected)

View file

@ -63,7 +63,7 @@ class LocalFuncsTestCase(TestCase):
u'message': u'A command invocation error occurred: Check syntax.'}}
with patch('salt.auth.LoadAuth.authenticate_token', MagicMock(return_value=mock_token)), \
patch('salt.auth.LoadAuth.get_auth_list', MagicMock(return_value=[])):
patch('salt.auth.LoadAuth.get_auth_list', MagicMock(return_value=['testing'])):
ret = self.local_funcs.runner(load)
self.assertDictEqual(mock_ret, ret)
@ -93,7 +93,7 @@ class LocalFuncsTestCase(TestCase):
self.assertDictEqual(mock_ret, ret)
def test_runner_eauth_salt_invocation_errpr(self):
def test_runner_eauth_salt_invocation_error(self):
'''
Asserts that an EauthAuthenticationError is returned when the user authenticates, but the
command is malformed.
@ -102,7 +102,7 @@ class LocalFuncsTestCase(TestCase):
mock_ret = {u'error': {u'name': u'SaltInvocationError',
u'message': u'A command invocation error occurred: Check syntax.'}}
with patch('salt.auth.LoadAuth.authenticate_eauth', MagicMock(return_value=True)), \
patch('salt.auth.LoadAuth.get_auth_list', MagicMock(return_value=[])):
patch('salt.auth.LoadAuth.get_auth_list', MagicMock(return_value=['testing'])):
ret = self.local_funcs.runner(load)
self.assertDictEqual(mock_ret, ret)
@ -146,7 +146,7 @@ class LocalFuncsTestCase(TestCase):
u'message': u'A command invocation error occurred: Check syntax.'}}
with patch('salt.auth.LoadAuth.authenticate_token', MagicMock(return_value=mock_token)), \
patch('salt.auth.LoadAuth.get_auth_list', MagicMock(return_value=[])):
patch('salt.auth.LoadAuth.get_auth_list', MagicMock(return_value=['testing'])):
ret = self.local_funcs.wheel(load)
self.assertDictEqual(mock_ret, ret)
@ -176,7 +176,7 @@ class LocalFuncsTestCase(TestCase):
self.assertDictEqual(mock_ret, ret)
def test_wheel_eauth_salt_invocation_errpr(self):
def test_wheel_eauth_salt_invocation_error(self):
'''
Asserts that an EauthAuthenticationError is returned when the user authenticates, but the
command is malformed.
@ -185,7 +185,7 @@ class LocalFuncsTestCase(TestCase):
mock_ret = {u'error': {u'name': u'SaltInvocationError',
u'message': u'A command invocation error occurred: Check syntax.'}}
with patch('salt.auth.LoadAuth.authenticate_eauth', MagicMock(return_value=True)), \
patch('salt.auth.LoadAuth.get_auth_list', MagicMock(return_value=[])):
patch('salt.auth.LoadAuth.get_auth_list', MagicMock(return_value=['testing'])):
ret = self.local_funcs.wheel(load)
self.assertDictEqual(mock_ret, ret)

View file

@ -152,6 +152,8 @@ class DiskTestCase(TestCase, LoaderModuleMockMixin):
with patch.dict(disk.__salt__, {'cmd.retcode': mock}):
self.assertEqual(disk.format_(device), True)
@skipIf(not salt.utils.which('lsblk') and not salt.utils.which('df'),
'lsblk or df not found')
def test_fstype(self):
'''
unit tests for disk.fstype

View file

@ -70,7 +70,7 @@ class EnvironTestCase(TestCase, LoaderModuleMockMixin):
Set multiple salt process environment variables from a dict.
Returns a dict.
'''
mock_environ = {'key': 'value'}
mock_environ = {'KEY': 'value'}
with patch.dict(os.environ, mock_environ):
self.assertFalse(environ.setenv('environ'))
@ -83,7 +83,7 @@ class EnvironTestCase(TestCase, LoaderModuleMockMixin):
with patch.dict(os.environ, mock_environ):
mock_setval = MagicMock(return_value=None)
with patch.object(environ, 'setval', mock_setval):
self.assertEqual(environ.setenv({}, False, True, False)['key'],
self.assertEqual(environ.setenv({}, False, True, False)['KEY'],
None)
def test_get(self):

View file

@ -10,7 +10,7 @@ import textwrap
# Import Salt Testing libs
from tests.support.mixins import LoaderModuleMockMixin
from tests.support.paths import TMP
from tests.support.unit import TestCase
from tests.support.unit import TestCase, skipIf
from tests.support.mock import MagicMock, patch
# Import Salt libs
@ -92,42 +92,53 @@ class FileReplaceTestCase(TestCase, LoaderModuleMockMixin):
'repl': 'baz=\\g<value>',
'append_if_not_found': True,
}
base = 'foo=1\nbar=2'
expected = '{base}\n{repl}\n'.format(base=base, **args)
base = os.linesep.join(['foo=1', 'bar=2'])
# File ending with a newline, no match
with tempfile.NamedTemporaryFile(mode='w+') as tfile:
tfile.write(base + '\n')
with tempfile.NamedTemporaryFile('w+b', delete=False) as tfile:
tfile.write(salt.utils.to_bytes(base + os.linesep))
tfile.flush()
filemod.replace(tfile.name, **args)
expected = os.linesep.join([base, 'baz=\\g<value>']) + os.linesep
with salt.utils.files.fopen(tfile.name) as tfile2:
self.assertEqual(tfile2.read(), expected)
os.remove(tfile.name)
# File not ending with a newline, no match
with tempfile.NamedTemporaryFile('w+') as tfile:
tfile.write(base)
with tempfile.NamedTemporaryFile('w+b', delete=False) as tfile:
tfile.write(salt.utils.to_bytes(base))
tfile.flush()
filemod.replace(tfile.name, **args)
with salt.utils.files.fopen(tfile.name) as tfile2:
self.assertEqual(tfile2.read(), expected)
os.remove(tfile.name)
# A newline should not be added in empty files
with tempfile.NamedTemporaryFile('w+') as tfile:
with tempfile.NamedTemporaryFile('w+b', delete=False) as tfile:
pass
filemod.replace(tfile.name, **args)
expected = args['repl'] + os.linesep
with salt.utils.files.fopen(tfile.name) as tfile2:
self.assertEqual(tfile2.read(), args['repl'] + '\n')
self.assertEqual(tfile2.read(), expected)
os.remove(tfile.name)
# Using not_found_content, rather than repl
with tempfile.NamedTemporaryFile('w+') as tfile:
args['not_found_content'] = 'baz=3'
expected = '{base}\n{not_found_content}\n'.format(base=base, **args)
tfile.write(base)
with tempfile.NamedTemporaryFile('w+b', delete=False) as tfile:
tfile.write(salt.utils.to_bytes(base))
tfile.flush()
args['not_found_content'] = 'baz=3'
expected = os.linesep.join([base, 'baz=3']) + os.linesep
filemod.replace(tfile.name, **args)
with salt.utils.files.fopen(tfile.name) as tfile2:
self.assertEqual(tfile2.read(), expected)
os.remove(tfile.name)
# not appending if matches
with tempfile.NamedTemporaryFile('w+') as tfile:
base = 'foo=1\n#baz=42\nbar=2\n'
expected = 'foo=1\nbaz=42\nbar=2\n'
tfile.write(base)
with tempfile.NamedTemporaryFile('w+b', delete=False) as tfile:
base = os.linesep.join(['foo=1', 'baz=42', 'bar=2'])
tfile.write(salt.utils.to_bytes(base))
tfile.flush()
expected = base
filemod.replace(tfile.name, **args)
with salt.utils.files.fopen(tfile.name) as tfile2:
self.assertEqual(tfile2.read(), expected)
@ -250,25 +261,26 @@ class FileBlockReplaceTestCase(TestCase, LoaderModuleMockMixin):
del self.tfile
def test_replace_multiline(self):
new_multiline_content = (
"Who's that then?\nWell, how'd you become king,"
"then?\nWe found them. I'm not a witch.\nWe shall"
"say 'Ni' again to you, if you do not appease us."
)
new_multiline_content = os.linesep.join([
"Who's that then?",
"Well, how'd you become king, then?",
"We found them. I'm not a witch.",
"We shall say 'Ni' again to you, if you do not appease us."
])
filemod.blockreplace(self.tfile.name,
'#-- START BLOCK 1',
'#-- END BLOCK 1',
new_multiline_content,
backup=False)
with salt.utils.files.fopen(self.tfile.name, 'r') as fp:
with salt.utils.files.fopen(self.tfile.name, 'rb') as fp:
filecontent = fp.read()
self.assertIn('#-- START BLOCK 1'
+ "\n" + new_multiline_content
+ "\n"
+ '#-- END BLOCK 1', filecontent)
self.assertNotIn('old content part 1', filecontent)
self.assertNotIn('old content part 2', filecontent)
self.assertIn(salt.utils.to_bytes(
os.linesep.join([
'#-- START BLOCK 1', new_multiline_content, '#-- END BLOCK 1'])),
filecontent)
self.assertNotIn(b'old content part 1', filecontent)
self.assertNotIn(b'old content part 2', filecontent)
def test_replace_append(self):
new_content = "Well, I didn't vote for you."
@ -295,10 +307,12 @@ class FileBlockReplaceTestCase(TestCase, LoaderModuleMockMixin):
backup=False,
append_if_not_found=True)
with salt.utils.files.fopen(self.tfile.name, 'r') as fp:
self.assertIn('#-- START BLOCK 2'
+ "\n" + new_content
+ '#-- END BLOCK 2', fp.read())
with salt.utils.files.fopen(self.tfile.name, 'rb') as fp:
self.assertIn(salt.utils.to_bytes(
os.linesep.join([
'#-- START BLOCK 2',
'{0}#-- END BLOCK 2'.format(new_content)])),
fp.read())
def test_replace_append_newline_at_eof(self):
'''
@ -312,27 +326,33 @@ class FileBlockReplaceTestCase(TestCase, LoaderModuleMockMixin):
'content': 'baz',
'append_if_not_found': True,
}
block = '{marker_start}\n{content}{marker_end}\n'.format(**args)
expected = base + '\n' + block
block = os.linesep.join(['#start', 'baz#stop']) + os.linesep
# File ending with a newline
with tempfile.NamedTemporaryFile(mode='w+') as tfile:
tfile.write(base + '\n')
with tempfile.NamedTemporaryFile(mode='w+b', delete=False) as tfile:
tfile.write(salt.utils.to_bytes(base + os.linesep))
tfile.flush()
filemod.blockreplace(tfile.name, **args)
expected = os.linesep.join([base, block])
with salt.utils.files.fopen(tfile.name) as tfile2:
self.assertEqual(tfile2.read(), expected)
os.remove(tfile.name)
# File not ending with a newline
with tempfile.NamedTemporaryFile(mode='w+') as tfile:
tfile.write(base)
with tempfile.NamedTemporaryFile(mode='w+b', delete=False) as tfile:
tfile.write(salt.utils.to_bytes(base))
tfile.flush()
filemod.blockreplace(tfile.name, **args)
with salt.utils.files.fopen(tfile.name) as tfile2:
self.assertEqual(tfile2.read(), expected)
os.remove(tfile.name)
# A newline should not be added in empty files
with tempfile.NamedTemporaryFile(mode='w+') as tfile:
with tempfile.NamedTemporaryFile(mode='w+b', delete=False) as tfile:
pass
filemod.blockreplace(tfile.name, **args)
with salt.utils.files.fopen(tfile.name) as tfile2:
self.assertEqual(tfile2.read(), block)
os.remove(tfile.name)
def test_replace_prepend(self):
new_content = "Well, I didn't vote for you."
@ -347,10 +367,11 @@ class FileBlockReplaceTestCase(TestCase, LoaderModuleMockMixin):
prepend_if_not_found=False,
backup=False
)
with salt.utils.files.fopen(self.tfile.name, 'r') as fp:
self.assertNotIn(
'#-- START BLOCK 2' + "\n"
+ new_content + '#-- END BLOCK 2',
with salt.utils.files.fopen(self.tfile.name, 'rb') as fp:
self.assertNotIn(salt.utils.to_bytes(
os.linesep.join([
'#-- START BLOCK 2',
'{0}#-- END BLOCK 2'.format(new_content)])),
fp.read())
filemod.blockreplace(self.tfile.name,
@ -359,12 +380,12 @@ class FileBlockReplaceTestCase(TestCase, LoaderModuleMockMixin):
backup=False,
prepend_if_not_found=True)
with salt.utils.files.fopen(self.tfile.name, 'r') as fp:
with salt.utils.files.fopen(self.tfile.name, 'rb') as fp:
self.assertTrue(
fp.read().startswith(
'#-- START BLOCK 2'
+ "\n" + new_content
+ '#-- END BLOCK 2'))
fp.read().startswith(salt.utils.to_bytes(
os.linesep.join([
'#-- START BLOCK 2',
'{0}#-- END BLOCK 2'.format(new_content)]))))
def test_replace_partial_marked_lines(self):
filemod.blockreplace(self.tfile.name,
@ -481,6 +502,7 @@ class FileModuleTestCase(TestCase, LoaderModuleMockMixin):
}
}
@skipIf(salt.utils.is_windows(), 'SED is not available on Windows')
def test_sed_limit_escaped(self):
with tempfile.NamedTemporaryFile(mode='w+') as tfile:
tfile.write(SED_CONTENT)
@ -505,37 +527,40 @@ class FileModuleTestCase(TestCase, LoaderModuleMockMixin):
newlines at end of file.
'''
# File ending with a newline
with tempfile.NamedTemporaryFile(mode='w+') as tfile:
tfile.write('foo\n')
with tempfile.NamedTemporaryFile(mode='w+b', delete=False) as tfile:
tfile.write(salt.utils.to_bytes('foo' + os.linesep))
tfile.flush()
filemod.append(tfile.name, 'bar')
expected = os.linesep.join(['foo', 'bar']) + os.linesep
with salt.utils.files.fopen(tfile.name) as tfile2:
self.assertEqual(tfile2.read(), 'foo\nbar\n')
self.assertEqual(tfile2.read(), expected)
# File not ending with a newline
with tempfile.NamedTemporaryFile(mode='w+') as tfile:
tfile.write('foo')
with tempfile.NamedTemporaryFile(mode='w+b', delete=False) as tfile:
tfile.write(salt.utils.to_bytes('foo'))
tfile.flush()
filemod.append(tfile.name, 'bar')
with salt.utils.files.fopen(tfile.name) as tfile2:
self.assertEqual(tfile2.read(), 'foo\nbar\n')
# A newline should not be added in empty files
with tempfile.NamedTemporaryFile(mode='w+') as tfile:
with salt.utils.fopen(tfile.name) as tfile2:
self.assertEqual(tfile2.read(), expected)
# A newline should be added in empty files
with tempfile.NamedTemporaryFile(mode='w+b', delete=False) as tfile:
filemod.append(tfile.name, 'bar')
with salt.utils.files.fopen(tfile.name) as tfile2:
self.assertEqual(tfile2.read(), 'bar\n')
self.assertEqual(tfile2.read(), 'bar' + os.linesep)
def test_extract_hash(self):
'''
Check various hash file formats.
'''
# With file name
with tempfile.NamedTemporaryFile(mode='w+') as tfile:
tfile.write(
with tempfile.NamedTemporaryFile(mode='w+b', delete=False) as tfile:
tfile.write(salt.utils.to_bytes(
'rc.conf ef6e82e4006dee563d98ada2a2a80a27\n'
'ead48423703509d37c4a90e6a0d53e143b6fc268 example.tar.gz\n'
'fe05bcdcdc4928012781a5f1a2a77cbb5398e106 ./subdir/example.tar.gz\n'
'ad782ecdac770fc6eb9a62e44f90873fb97fb26b foo.tar.bz2\n'
)
))
tfile.flush()
result = filemod.extract_hash(tfile.name, '', '/rc.conf')
@ -614,9 +639,10 @@ class FileModuleTestCase(TestCase, LoaderModuleMockMixin):
# Hash only, no file name (Maven repo checksum format)
# Since there is no name match, the first checksum in the file will
# always be returned, never the second.
with tempfile.NamedTemporaryFile(mode='w+') as tfile:
tfile.write('ead48423703509d37c4a90e6a0d53e143b6fc268\n'
'ad782ecdac770fc6eb9a62e44f90873fb97fb26b\n')
with tempfile.NamedTemporaryFile(mode='w+b', delete=False) as tfile:
tfile.write(salt.utils.to_bytes(
'ead48423703509d37c4a90e6a0d53e143b6fc268\n'
'ad782ecdac770fc6eb9a62e44f90873fb97fb26b\n'))
tfile.flush()
for hash_type in ('', 'sha1', 'sha256'):
@ -778,6 +804,7 @@ class FileBasicsTestCase(TestCase, LoaderModuleMockMixin):
self.addCleanup(os.remove, self.myfile)
self.addCleanup(delattr, self, 'myfile')
@skipIf(salt.utils.is_windows(), 'os.symlink is not available on Windows')
def test_symlink_already_in_desired_state(self):
os.symlink(self.tfile.name, self.directory + '/a_link')
self.addCleanup(os.remove, self.directory + '/a_link')

View file

@ -94,7 +94,7 @@ class HostsTestCase(TestCase, LoaderModuleMockMixin):
Tests true if the alias is set
'''
hosts_file = '/etc/hosts'
if salt.utils.is_windows():
if salt.utils.platform.is_windows():
hosts_file = r'C:\Windows\System32\Drivers\etc\hosts'
with patch('salt.modules.hosts.__get_hosts_filename',
@ -198,7 +198,7 @@ class HostsTestCase(TestCase, LoaderModuleMockMixin):
Tests if specified host entry gets added from the hosts file
'''
hosts_file = '/etc/hosts'
if salt.utils.is_windows():
if salt.utils.platform.is_windows():
hosts_file = r'C:\Windows\System32\Drivers\etc\hosts'
with patch('salt.utils.files.fopen', mock_open()), \

View file

@ -99,14 +99,15 @@ class KubernetesTestCase(TestCase, LoaderModuleMockMixin):
def test_delete_deployments(self):
'''
Tests deployment creation.
Tests deployment deletion
:return:
'''
with patch('salt.modules.kubernetes.kubernetes') as mock_kubernetes_lib:
with patch('salt.modules.kubernetes.show_deployment', Mock(return_value=None)):
with patch.dict(kubernetes.__salt__, {'config.option': Mock(return_value="")}):
mock_kubernetes_lib.client.V1DeleteOptions = Mock(return_value="")
mock_kubernetes_lib.client.ExtensionsV1beta1Api.return_value = Mock(
**{"delete_namespaced_deployment.return_value.to_dict.return_value": {'code': 200}}
**{"delete_namespaced_deployment.return_value.to_dict.return_value": {'code': ''}}
)
self.assertEqual(kubernetes.delete_deployment("test"), {'code': 200})
self.assertTrue(

View file

@ -50,10 +50,12 @@ class PoudriereTestCase(TestCase, LoaderModuleMockMixin):
'''
Test if it make jail ``jname`` pkgng aware.
'''
ret1 = 'Could not create or find required directory /tmp/salt'
ret2 = 'Looks like file /tmp/salt/salt-make.conf could not be created'
ret3 = {'changes': 'Created /tmp/salt/salt-make.conf'}
mock = MagicMock(return_value='/tmp/salt')
temp_dir = os.path.join('tmp', 'salt')
conf_file = os.path.join('tmp', 'salt', 'salt-make.conf')
ret1 = 'Could not create or find required directory {0}'.format(temp_dir)
ret2 = 'Looks like file {0} could not be created'.format(conf_file)
ret3 = {'changes': 'Created {0}'.format(conf_file)}
mock = MagicMock(return_value=temp_dir)
mock_true = MagicMock(return_value=True)
with patch.dict(poudriere.__salt__, {'config.option': mock,
'file.write': mock_true}):

View file

@ -639,6 +639,14 @@ class _GetProxyConnectionDetailsTestCase(TestCase, LoaderModuleMockMixin):
'mechanism': 'fake_mechanism',
'principal': 'fake_principal',
'domain': 'fake_domain'}
self.vcenter_details = {'vcenter': 'fake_vcenter',
'username': 'fake_username',
'password': 'fake_password',
'protocol': 'fake_protocol',
'port': 'fake_port',
'mechanism': 'fake_mechanism',
'principal': 'fake_principal',
'domain': 'fake_domain'}
def tearDown(self):
for attrname in ('esxi_host_details', 'esxi_vcenter_details',
@ -693,6 +701,17 @@ class _GetProxyConnectionDetailsTestCase(TestCase, LoaderModuleMockMixin):
'fake_protocol', 'fake_port', 'fake_mechanism',
'fake_principal', 'fake_domain'), ret)
def test_vcenter_proxy_details(self):
with patch('salt.modules.vsphere.get_proxy_type',
MagicMock(return_value='vcenter')):
with patch.dict(vsphere.__salt__,
{'vcenter.get_details': MagicMock(
return_value=self.vcenter_details)}):
ret = vsphere._get_proxy_connection_details()
self.assertEqual(('fake_vcenter', 'fake_username', 'fake_password',
'fake_protocol', 'fake_port', 'fake_mechanism',
'fake_principal', 'fake_domain'), ret)
def test_unsupported_proxy_details(self):
with patch('salt.modules.vsphere.get_proxy_type',
MagicMock(return_value='unsupported')):
@ -890,7 +909,7 @@ class GetServiceInstanceViaProxyTestCase(TestCase, LoaderModuleMockMixin):
}
def test_supported_proxies(self):
supported_proxies = ['esxi', 'esxcluster', 'esxdatacenter']
supported_proxies = ['esxi', 'esxcluster', 'esxdatacenter', 'vcenter']
for proxy_type in supported_proxies:
with patch('salt.modules.vsphere.get_proxy_type',
MagicMock(return_value=proxy_type)):
@ -933,7 +952,7 @@ class DisconnectTestCase(TestCase, LoaderModuleMockMixin):
}
def test_supported_proxies(self):
supported_proxies = ['esxi', 'esxcluster', 'esxdatacenter']
supported_proxies = ['esxi', 'esxcluster', 'esxdatacenter', 'vcenter']
for proxy_type in supported_proxies:
with patch('salt.modules.vsphere.get_proxy_type',
MagicMock(return_value=proxy_type)):
@ -974,7 +993,7 @@ class TestVcenterConnectionTestCase(TestCase, LoaderModuleMockMixin):
}
def test_supported_proxies(self):
supported_proxies = ['esxi', 'esxcluster', 'esxdatacenter']
supported_proxies = ['esxi', 'esxcluster', 'esxdatacenter', 'vcenter']
for proxy_type in supported_proxies:
with patch('salt.modules.vsphere.get_proxy_type',
MagicMock(return_value=proxy_type)):
@ -1049,7 +1068,7 @@ class ListDatacentersViaProxyTestCase(TestCase, LoaderModuleMockMixin):
}
def test_supported_proxies(self):
supported_proxies = ['esxcluster', 'esxdatacenter']
supported_proxies = ['esxcluster', 'esxdatacenter', 'vcenter']
for proxy_type in supported_proxies:
with patch('salt.modules.vsphere.get_proxy_type',
MagicMock(return_value=proxy_type)):
@ -1127,7 +1146,7 @@ class CreateDatacenterTestCase(TestCase, LoaderModuleMockMixin):
}
def test_supported_proxies(self):
supported_proxies = ['esxdatacenter']
supported_proxies = ['esxdatacenter', 'vcenter']
for proxy_type in supported_proxies:
with patch('salt.modules.vsphere.get_proxy_type',
MagicMock(return_value=proxy_type)):
@ -1339,12 +1358,15 @@ class _GetProxyTargetTestCase(TestCase, LoaderModuleMockMixin):
def setUp(self):
attrs = (('mock_si', MagicMock()),
('mock_dc', MagicMock()),
('mock_cl', MagicMock()))
('mock_cl', MagicMock()),
('mock_root', MagicMock()))
for attr, mock_obj in attrs:
setattr(self, attr, mock_obj)
self.addCleanup(delattr, self, attr)
attrs = (('mock_get_datacenter', MagicMock(return_value=self.mock_dc)),
('mock_get_cluster', MagicMock(return_value=self.mock_cl)))
('mock_get_cluster', MagicMock(return_value=self.mock_cl)),
('mock_get_root_folder',
MagicMock(return_value=self.mock_root)))
for attr, mock_obj in attrs:
setattr(self, attr, mock_obj)
self.addCleanup(delattr, self, attr)
@ -1360,7 +1382,8 @@ class _GetProxyTargetTestCase(TestCase, LoaderModuleMockMixin):
MagicMock(return_value=(None, None, None, None, None, None, None,
None, 'datacenter'))),
('salt.utils.vmware.get_datacenter', self.mock_get_datacenter),
('salt.utils.vmware.get_cluster', self.mock_get_cluster))
('salt.utils.vmware.get_cluster', self.mock_get_cluster),
('salt.utils.vmware.get_root_folder', self.mock_get_root_folder))
for module, mock_obj in patches:
patcher = patch(module, mock_obj)
patcher.start()
@ -1409,3 +1432,10 @@ class _GetProxyTargetTestCase(TestCase, LoaderModuleMockMixin):
MagicMock(return_value='esxdatacenter')):
ret = vsphere._get_proxy_target(self.mock_si)
self.assertEqual(ret, self.mock_dc)
def test_vcenter_proxy_return(self):
with patch('salt.modules.vsphere.get_proxy_type',
MagicMock(return_value='vcenter')):
ret = vsphere._get_proxy_target(self.mock_si)
self.mock_get_root_folder.assert_called_once_with(self.mock_si)
self.assertEqual(ret, self.mock_root)

View file

@ -0,0 +1,43 @@
# -*- coding: utf-8 -*-
# Import python libs
from __future__ import absolute_import
import os
# Import Salt Testing libs
from tests.support.mixins import LoaderModuleMockMixin
from tests.support.unit import TestCase, skipIf
from tests.support.mock import NO_MOCK, NO_MOCK_REASON
# Import Salt Libs
import salt.pillar.saltclass as saltclass
base_path = os.path.dirname(os.path.realpath(__file__))
fake_minion_id = 'fake_id'
fake_pillar = {}
fake_args = ({'path': '{0}/../../integration/files/saltclass/examples'.format(base_path)})
fake_opts = {}
fake_salt = {}
fake_grains = {}
@skipIf(NO_MOCK, NO_MOCK_REASON)
class SaltclassPillarTestCase(TestCase, LoaderModuleMockMixin):
'''
Tests for salt.pillar.saltclass
'''
def setup_loader_modules(self):
return {saltclass: {'__opts__': fake_opts,
'__salt__': fake_salt,
'__grains__': fake_grains
}}
def _runner(self, expected_ret):
full_ret = saltclass.ext_pillar(fake_minion_id, fake_pillar, fake_args)
parsed_ret = full_ret['__saltclass__']['classes']
self.assertListEqual(parsed_ret, expected_ret)
def test_succeeds(self):
ret = ['default.users', 'default.motd', 'default']
self._runner(ret)
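# Editor's note (illustrative sketch): the same call shape works outside the test
# harness; the minion id and path below are placeholders:
#
#     import salt.pillar.saltclass as saltclass
#     ret = saltclass.ext_pillar('zrh.node3', {},
#                                {'path': '/srv/saltclass/examples'})
#     classes = ret['__saltclass__']['classes']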

View file

@ -97,7 +97,7 @@ class LocalCacheCleanOldJobsTestCase(TestCase, LoaderModuleMockMixin):
local_cache.clean_old_jobs()
# Get the name of the JID directory that was created to test against
if salt.utils.is_windows():
if salt.utils.platform.is_windows():
jid_dir_name = jid_dir.rpartition('\\')[2]
else:
jid_dir_name = jid_dir.rpartition('/')[2]

View file

@ -18,6 +18,7 @@ import salt.serializers.yaml as yaml
import salt.serializers.yamlex as yamlex
import salt.serializers.msgpack as msgpack
import salt.serializers.python as python
from salt.serializers.yaml import EncryptedString
from salt.serializers import SerializationError
from salt.utils.odict import OrderedDict
@ -43,10 +44,11 @@ class TestSerializers(TestCase):
@skipIf(not yaml.available, SKIP_MESSAGE % 'yaml')
def test_serialize_yaml(self):
data = {
"foo": "bar"
"foo": "bar",
"encrypted_data": EncryptedString("foo")
}
serialized = yaml.serialize(data)
assert serialized == '{foo: bar}', serialized
assert serialized == '{encrypted_data: !encrypted foo, foo: bar}', serialized
deserialized = yaml.deserialize(serialized)
assert deserialized == data, deserialized
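# Editor's note: in block-style YAML the round-tripped document above reads
#     encrypted_data: !encrypted foo
#     foo: bar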

View file

@ -63,7 +63,7 @@ class ClearFuncsTestCase(TestCase):
u'message': u'A command invocation error occurred: Check syntax.'}}
with patch('salt.auth.LoadAuth.authenticate_token', MagicMock(return_value=mock_token)), \
patch('salt.auth.LoadAuth.get_auth_list', MagicMock(return_value=[])):
patch('salt.auth.LoadAuth.get_auth_list', MagicMock(return_value=['testing'])):
ret = self.clear_funcs.runner(clear_load)
self.assertDictEqual(mock_ret, ret)
@ -93,7 +93,7 @@ class ClearFuncsTestCase(TestCase):
self.assertDictEqual(mock_ret, ret)
def test_runner_eauth_salt_invocation_errpr(self):
def test_runner_eauth_salt_invocation_error(self):
'''
Asserts that an EauthAuthenticationError is returned when the user authenticates, but the
command is malformed.
@ -102,7 +102,7 @@ class ClearFuncsTestCase(TestCase):
mock_ret = {u'error': {u'name': u'SaltInvocationError',
u'message': u'A command invocation error occurred: Check syntax.'}}
with patch('salt.auth.LoadAuth.authenticate_eauth', MagicMock(return_value=True)), \
patch('salt.auth.LoadAuth.get_auth_list', MagicMock(return_value=[])):
patch('salt.auth.LoadAuth.get_auth_list', MagicMock(return_value=['testing'])):
ret = self.clear_funcs.runner(clear_load)
self.assertDictEqual(mock_ret, ret)
@ -155,7 +155,7 @@ class ClearFuncsTestCase(TestCase):
u'message': u'A command invocation error occurred: Check syntax.'}}
with patch('salt.auth.LoadAuth.authenticate_token', MagicMock(return_value=mock_token)), \
patch('salt.auth.LoadAuth.get_auth_list', MagicMock(return_value=[])):
patch('salt.auth.LoadAuth.get_auth_list', MagicMock(return_value=['testing'])):
ret = self.clear_funcs.wheel(clear_load)
self.assertDictEqual(mock_ret, ret)
@ -185,7 +185,7 @@ class ClearFuncsTestCase(TestCase):
self.assertDictEqual(mock_ret, ret)
def test_wheel_eauth_salt_invocation_errpr(self):
def test_wheel_eauth_salt_invocation_error(self):
'''
Asserts that an EauthAuthenticationError is returned when the user authenticates, but the
command is malformed.
@ -194,7 +194,7 @@ class ClearFuncsTestCase(TestCase):
mock_ret = {u'error': {u'name': u'SaltInvocationError',
u'message': u'A command invocation error occurred: Check syntax.'}}
with patch('salt.auth.LoadAuth.authenticate_eauth', MagicMock(return_value=True)), \
patch('salt.auth.LoadAuth.get_auth_list', MagicMock(return_value=[])):
patch('salt.auth.LoadAuth.get_auth_list', MagicMock(return_value=['testing'])):
ret = self.clear_funcs.wheel(clear_load)
self.assertDictEqual(mock_ret, ret)

View file

@ -18,6 +18,7 @@ import salt.utils.event as event
from salt.exceptions import SaltSystemExit
import salt.syspaths
import tornado
from salt.ext.six.moves import range
__opts__ = {}
@ -69,7 +70,7 @@ class MinionTestCase(TestCase):
mock_jid_queue = [123]
try:
minion = salt.minion.Minion(mock_opts, jid_queue=copy.copy(mock_jid_queue), io_loop=tornado.ioloop.IOLoop())
ret = minion._handle_decoded_payload(mock_data)
ret = minion._handle_decoded_payload(mock_data).result()
self.assertEqual(minion.jid_queue, mock_jid_queue)
self.assertIsNone(ret)
finally:
@ -98,7 +99,7 @@ class MinionTestCase(TestCase):
# Call the _handle_decoded_payload function and update the mock_jid_queue to include the new
# mock_jid. The mock_jid should have been added to the jid_queue since the mock_jid wasn't
# previously included. The minion's jid_queue attribute and the mock_jid_queue should be equal.
minion._handle_decoded_payload(mock_data)
minion._handle_decoded_payload(mock_data).result()
mock_jid_queue.append(mock_jid)
self.assertEqual(minion.jid_queue, mock_jid_queue)
finally:
@ -126,8 +127,54 @@ class MinionTestCase(TestCase):
# Call the _handle_decoded_payload function and check that the queue is smaller by one item
# and contains the new jid
minion._handle_decoded_payload(mock_data)
minion._handle_decoded_payload(mock_data).result()
self.assertEqual(len(minion.jid_queue), 2)
self.assertEqual(minion.jid_queue, [456, 789])
finally:
minion.destroy()
def test_process_count_max(self):
'''
Tests that the _handle_decoded_payload function does not spawn more than the configured number of processes,
as per process_count_max.
'''
with patch('salt.minion.Minion.ctx', MagicMock(return_value={})), \
patch('salt.utils.process.SignalHandlingMultiprocessingProcess.start', MagicMock(return_value=True)), \
patch('salt.utils.process.SignalHandlingMultiprocessingProcess.join', MagicMock(return_value=True)), \
patch('salt.utils.minion.running', MagicMock(return_value=[])), \
patch('tornado.gen.sleep', MagicMock(return_value=tornado.concurrent.Future())):
process_count_max = 10
mock_opts = salt.config.DEFAULT_MINION_OPTS
mock_opts['minion_jid_queue_hwm'] = 100
mock_opts["process_count_max"] = process_count_max
try:
io_loop = tornado.ioloop.IOLoop()
minion = salt.minion.Minion(mock_opts, jid_queue=[], io_loop=io_loop)
# mock gen.sleep to throw a special Exception when called, so that we detect it
class SleepCalledException(Exception):
"""Thrown when sleep is called"""
pass
tornado.gen.sleep.return_value.set_exception(SleepCalledException())
# up until process_count_max: gen.sleep does not get called, processes are started normally
for i in range(process_count_max):
mock_data = {'fun': 'foo.bar',
'jid': i}
io_loop.run_sync(lambda data=mock_data: minion._handle_decoded_payload(data))
self.assertEqual(salt.utils.process.SignalHandlingMultiprocessingProcess.start.call_count, i + 1)
self.assertEqual(len(minion.jid_queue), i + 1)
salt.utils.minion.running.return_value += [i]
# above process_count_max: gen.sleep does get called, JIDs are created but no new processes are started
mock_data = {'fun': 'foo.bar',
'jid': process_count_max + 1}
self.assertRaises(SleepCalledException,
lambda: io_loop.run_sync(lambda: minion._handle_decoded_payload(mock_data)))
self.assertEqual(salt.utils.process.SignalHandlingMultiprocessingProcess.start.call_count,
process_count_max)
self.assertEqual(len(minion.jid_queue), process_count_max + 1)
finally:
minion.destroy()

View file

@ -49,7 +49,7 @@ class RecursiveDictDifferTestCase(TestCase):
def test_changed_without_ignore_unset_values(self):
self.recursive_diff.ignore_unset_values = False
self.assertEqual(self.recursive_diff.changed(),
['a.c', 'a.e', 'a.g', 'a.f', 'h', 'i'])
['a.c', 'a.e', 'a.f', 'a.g', 'h', 'i'])
def test_unchanged(self):
self.assertEqual(self.recursive_diff.unchanged(),
@ -89,7 +89,7 @@ class RecursiveDictDifferTestCase(TestCase):
'a:\n'
' c from 2 to 4\n'
' e from \'old_value\' to \'new_value\'\n'
' g from nothing to \'new_key\'\n'
' f from \'old_key\' to nothing\n'
' g from nothing to \'new_key\'\n'
'h from nothing to \'new_key\'\n'
'i from nothing to None')

View file

@ -32,34 +32,43 @@ class ListDictDifferTestCase(TestCase):
continue
def test_added(self):
self.assertEqual(self.list_diff.added,
[{'key': 5, 'value': 'foo5', 'int_value': 105}])
self.assertEqual(len(self.list_diff.added), 1)
self.assertDictEqual(self.list_diff.added[0],
{'key': 5, 'value': 'foo5', 'int_value': 105})
def test_removed(self):
self.assertEqual(self.list_diff.removed,
[{'key': 3, 'value': 'foo3', 'int_value': 103}])
self.assertEqual(len(self.list_diff.removed), 1)
self.assertDictEqual(self.list_diff.removed[0],
{'key': 3, 'value': 'foo3', 'int_value': 103})
def test_diffs(self):
self.assertEqual(self.list_diff.diffs,
[{2: {'int_value': {'new': 112, 'old': 102}}},
self.assertEqual(len(self.list_diff.diffs), 3)
self.assertDictEqual(self.list_diff.diffs[0],
{2: {'int_value': {'new': 112, 'old': 102}}})
self.assertDictEqual(self.list_diff.diffs[1],
# Added items
{5: {'int_value': {'new': 105, 'old': NONE},
'key': {'new': 5, 'old': NONE},
'value': {'new': 'foo5', 'old': NONE}}},
'value': {'new': 'foo5', 'old': NONE}}})
self.assertDictEqual(self.list_diff.diffs[2],
# Removed items
{3: {'int_value': {'new': NONE, 'old': 103},
'key': {'new': NONE, 'old': 3},
'value': {'new': NONE, 'old': 'foo3'}}}])
'value': {'new': NONE, 'old': 'foo3'}}})
def test_new_values(self):
self.assertEqual(self.list_diff.new_values,
[{'key': 2, 'int_value': 112},
{'key': 5, 'value': 'foo5', 'int_value': 105}])
self.assertEqual(len(self.list_diff.new_values), 2)
self.assertDictEqual(self.list_diff.new_values[0],
{'key': 2, 'int_value': 112})
self.assertDictEqual(self.list_diff.new_values[1],
{'key': 5, 'value': 'foo5', 'int_value': 105})
def test_old_values(self):
self.assertEqual(self.list_diff.old_values,
[{'key': 2, 'int_value': 102},
{'key': 3, 'value': 'foo3', 'int_value': 103}])
self.assertEqual(len(self.list_diff.old_values), 2)
self.assertDictEqual(self.list_diff.old_values[0],
{'key': 2, 'int_value': 102})
self.assertDictEqual(self.list_diff.old_values[1],
{'key': 3, 'value': 'foo3', 'int_value': 103})
def test_changed_all(self):
self.assertEqual(self.list_diff.changed(selection='all'),
@ -78,11 +87,3 @@ class ListDictDifferTestCase(TestCase):
'\twill be removed\n'
'\tidentified by key 5:\n'
'\twill be added\n')
def test_changes_str2(self):
self.assertEqual(self.list_diff.changes_str2,
' key=2 (updated):\n'
' int_value from 102 to 112\n'
' key=3 (removed)\n'
' key=5 (added): {\'int_value\': 105, \'key\': 5, '
'\'value\': \'foo5\'}')

View file

@ -958,5 +958,47 @@ class SaltAPIParserTestCase(LogSettingsParserTests):
self.addCleanup(delattr, self, 'parser')
@skipIf(NO_MOCK, NO_MOCK_REASON)
class DaemonMixInTestCase(TestCase):
'''
Tests the PIDfile deletion in the DaemonMixIn.
'''
def setUp(self):
'''
Setting up
'''
# Set PID
self.pid = '/some/fake.pid'
# Setup mixin
self.mixin = salt.utils.parsers.DaemonMixIn()
self.mixin.info = None
self.mixin.config = {}
self.mixin.config['pidfile'] = self.pid
def test_pid_file_deletion(self):
'''
PIDfile deletion without exception.
'''
with patch('os.unlink', MagicMock()) as os_unlink:
with patch('os.path.isfile', MagicMock(return_value=True)):
with patch.object(self.mixin, 'info', MagicMock()):
self.mixin._mixin_before_exit()
assert self.mixin.info.call_count == 0
assert os_unlink.call_count == 1
def test_pid_file_deletion_with_oserror(self):
'''
PIDfile deletion with exception
'''
with patch('os.unlink', MagicMock(side_effect=OSError())) as os_unlink:
with patch('os.path.isfile', MagicMock(return_value=True)):
with patch.object(self.mixin, 'info', MagicMock()):
self.mixin._mixin_before_exit()
assert os_unlink.call_count == 1
self.mixin.info.assert_called_with(
'PIDfile could not be deleted: {0}'.format(self.pid))
# Hide the class from unittest framework when it searches for TestCase classes in the module
del LogSettingsParserTests

View file

@ -0,0 +1,664 @@
# -*- coding: utf-8 -*-
'''
:codeauthor: :email:`Alexandru Bleotu <alexandru.bleotu@morganstanley.com>`
Tests functions in salt.utils.pbm
'''
# Import python libraries
from __future__ import absolute_import
import logging
# Import Salt testing libraries
from tests.support.unit import TestCase, skipIf
from tests.support.mock import NO_MOCK, NO_MOCK_REASON, patch, MagicMock, \
PropertyMock
# Import Salt libraries
from salt.exceptions import VMwareApiError, VMwareRuntimeError, \
VMwareObjectRetrievalError
from salt.ext.six.moves import range
import salt.utils.pbm
try:
from pyVmomi import vim, vmodl, pbm
HAS_PYVMOMI = True
except ImportError:
HAS_PYVMOMI = False
# Get Logging Started
log = logging.getLogger(__name__)
@skipIf(NO_MOCK, NO_MOCK_REASON)
@skipIf(not HAS_PYVMOMI, 'The \'pyvmomi\' library is missing')
class GetProfileManagerTestCase(TestCase):
'''Tests for salt.utils.pbm.get_profile_manager'''
def setUp(self):
self.mock_si = MagicMock()
self.mock_stub = MagicMock()
self.mock_prof_mgr = MagicMock()
self.mock_content = MagicMock()
self.mock_pbm_si = MagicMock(
RetrieveContent=MagicMock(return_value=self.mock_content))
type(self.mock_content).profileManager = \
PropertyMock(return_value=self.mock_prof_mgr)
patches = (
('salt.utils.vmware.get_new_service_instance_stub',
MagicMock(return_value=self.mock_stub)),
('salt.utils.pbm.pbm.ServiceInstance',
MagicMock(return_value=self.mock_pbm_si)))
for mod, mock in patches:
patcher = patch(mod, mock)
patcher.start()
self.addCleanup(patcher.stop)
def tearDown(self):
for attr in ('mock_si', 'mock_stub', 'mock_content',
'mock_pbm_si', 'mock_prof_mgr'):
delattr(self, attr)
def test_get_new_service_stub(self):
mock_get_new_service_stub = MagicMock()
with patch('salt.utils.vmware.get_new_service_instance_stub',
mock_get_new_service_stub):
salt.utils.pbm.get_profile_manager(self.mock_si)
mock_get_new_service_stub.assert_called_once_with(
self.mock_si, ns='pbm/2.0', path='/pbm/sdk')
def test_pbm_si(self):
mock_get_pbm_si = MagicMock()
with patch('salt.utils.pbm.pbm.ServiceInstance',
mock_get_pbm_si):
salt.utils.pbm.get_profile_manager(self.mock_si)
mock_get_pbm_si.assert_called_once_with('ServiceInstance',
self.mock_stub)
def test_return_profile_manager(self):
ret = salt.utils.pbm.get_profile_manager(self.mock_si)
self.assertEqual(ret, self.mock_prof_mgr)
def test_profile_manager_raises_no_permissions(self):
exc = vim.fault.NoPermission()
exc.privilegeId = 'Fake privilege'
type(self.mock_content).profileManager = PropertyMock(side_effect=exc)
with self.assertRaises(VMwareApiError) as excinfo:
salt.utils.pbm.get_profile_manager(self.mock_si)
self.assertEqual(excinfo.exception.strerror,
'Not enough permissions. Required privilege: '
'Fake privilege')
def test_profile_manager_raises_vim_fault(self):
exc = vim.fault.VimFault()
exc.msg = 'VimFault msg'
type(self.mock_content).profileManager = PropertyMock(side_effect=exc)
with self.assertRaises(VMwareApiError) as excinfo:
salt.utils.pbm.get_profile_manager(self.mock_si)
self.assertEqual(excinfo.exception.strerror, 'VimFault msg')
def test_profile_manager_raises_runtime_fault(self):
exc = vmodl.RuntimeFault()
exc.msg = 'RuntimeFault msg'
type(self.mock_content).profileManager = PropertyMock(side_effect=exc)
with self.assertRaises(VMwareRuntimeError) as excinfo:
salt.utils.pbm.get_profile_manager(self.mock_si)
self.assertEqual(excinfo.exception.strerror, 'RuntimeFault msg')
@skipIf(NO_MOCK, NO_MOCK_REASON)
@skipIf(not HAS_PYVMOMI, 'The \'pyvmomi\' library is missing')
class GetPlacementSolverTestCase(TestCase):
'''Tests for salt.utils.pbm.get_placement_solver'''
def setUp(self):
self.mock_si = MagicMock()
self.mock_stub = MagicMock()
self.mock_prof_mgr = MagicMock()
self.mock_content = MagicMock()
self.mock_pbm_si = MagicMock(
RetrieveContent=MagicMock(return_value=self.mock_content))
type(self.mock_content).placementSolver = \
PropertyMock(return_value=self.mock_prof_mgr)
patches = (
('salt.utils.vmware.get_new_service_instance_stub',
MagicMock(return_value=self.mock_stub)),
('salt.utils.pbm.pbm.ServiceInstance',
MagicMock(return_value=self.mock_pbm_si)))
for mod, mock in patches:
patcher = patch(mod, mock)
patcher.start()
self.addCleanup(patcher.stop)
def tearDown(self):
for attr in ('mock_si', 'mock_stub', 'mock_content',
'mock_pbm_si', 'mock_prof_mgr'):
delattr(self, attr)
def test_get_new_service_stub(self):
mock_get_new_service_stub = MagicMock()
with patch('salt.utils.vmware.get_new_service_instance_stub',
mock_get_new_service_stub):
salt.utils.pbm.get_placement_solver(self.mock_si)
mock_get_new_service_stub.assert_called_once_with(
self.mock_si, ns='pbm/2.0', path='/pbm/sdk')
def test_pbm_si(self):
mock_get_pbm_si = MagicMock()
with patch('salt.utils.pbm.pbm.ServiceInstance',
mock_get_pbm_si):
salt.utils.pbm.get_placement_solver(self.mock_si)
mock_get_pbm_si.assert_called_once_with('ServiceInstance',
self.mock_stub)
def test_return_profile_manager(self):
ret = salt.utils.pbm.get_placement_solver(self.mock_si)
self.assertEqual(ret, self.mock_prof_mgr)
def test_placement_solver_raises_no_permissions(self):
exc = vim.fault.NoPermission()
exc.privilegeId = 'Fake privilege'
type(self.mock_content).placementSolver = PropertyMock(side_effect=exc)
with self.assertRaises(VMwareApiError) as excinfo:
salt.utils.pbm.get_placement_solver(self.mock_si)
self.assertEqual(excinfo.exception.strerror,
'Not enough permissions. Required privilege: '
'Fake privilege')
def test_placement_solver_raises_vim_fault(self):
exc = vim.fault.VimFault()
exc.msg = 'VimFault msg'
type(self.mock_content).placementSolver = PropertyMock(side_effect=exc)
with self.assertRaises(VMwareApiError) as excinfo:
salt.utils.pbm.get_placement_solver(self.mock_si)
self.assertEqual(excinfo.exception.strerror, 'VimFault msg')
def test_placement_solver_raises_runtime_fault(self):
exc = vmodl.RuntimeFault()
exc.msg = 'RuntimeFault msg'
type(self.mock_content).placementSolver = PropertyMock(side_effect=exc)
with self.assertRaises(VMwareRuntimeError) as excinfo:
salt.utils.pbm.get_placement_solver(self.mock_si)
self.assertEqual(excinfo.exception.strerror, 'RuntimeFault msg')
@skipIf(NO_MOCK, NO_MOCK_REASON)
@skipIf(not HAS_PYVMOMI, 'The \'pyvmomi\' library is missing')
class GetCapabilityDefinitionsTestCase(TestCase):
'''Tests for salt.utils.pbm.get_capability_definitions'''
def setUp(self):
self.mock_res_type = MagicMock()
self.mock_cap_cats = [MagicMock(capabilityMetadata=['fake_cap_meta1',
'fake_cap_meta2']),
MagicMock(capabilityMetadata=['fake_cap_meta3'])]
self.mock_prof_mgr = MagicMock(
FetchCapabilityMetadata=MagicMock(return_value=self.mock_cap_cats))
patches = (
('salt.utils.pbm.pbm.profile.ResourceType',
MagicMock(return_value=self.mock_res_type)),)
for mod, mock in patches:
patcher = patch(mod, mock)
patcher.start()
self.addCleanup(patcher.stop)
def tearDown(self):
for attr in ('mock_res_type', 'mock_cap_cats', 'mock_prof_mgr'):
delattr(self, attr)
def test_get_res_type(self):
mock_get_res_type = MagicMock()
with patch('salt.utils.pbm.pbm.profile.ResourceType',
mock_get_res_type):
salt.utils.pbm.get_capability_definitions(self.mock_prof_mgr)
mock_get_res_type.assert_called_once_with(
resourceType=pbm.profile.ResourceTypeEnum.STORAGE)
def test_fetch_capabilities(self):
salt.utils.pbm.get_capability_definitions(self.mock_prof_mgr)
self.mock_prof_mgr.FetchCapabilityMetadata.assert_called_once_with(
self.mock_res_type)
def test_fetch_capabilities_raises_no_permissions(self):
exc = vim.fault.NoPermission()
exc.privilegeId = 'Fake privilege'
self.mock_prof_mgr.FetchCapabilityMetadata = \
MagicMock(side_effect=exc)
with self.assertRaises(VMwareApiError) as excinfo:
salt.utils.pbm.get_capability_definitions(self.mock_prof_mgr)
self.assertEqual(excinfo.exception.strerror,
'Not enough permissions. Required privilege: '
'Fake privilege')
def test_fetch_capabilities_raises_vim_fault(self):
exc = vim.fault.VimFault()
exc.msg = 'VimFault msg'
self.mock_prof_mgr.FetchCapabilityMetadata = \
MagicMock(side_effect=exc)
with self.assertRaises(VMwareApiError) as excinfo:
salt.utils.pbm.get_capability_definitions(self.mock_prof_mgr)
self.assertEqual(excinfo.exception.strerror, 'VimFault msg')
def test_fetch_capabilities_raises_runtime_fault(self):
exc = vmodl.RuntimeFault()
exc.msg = 'RuntimeFault msg'
self.mock_prof_mgr.FetchCapabilityMetadata = \
MagicMock(side_effect=exc)
with self.assertRaises(VMwareRuntimeError) as excinfo:
salt.utils.pbm.get_capability_definitions(self.mock_prof_mgr)
self.assertEqual(excinfo.exception.strerror, 'RuntimeFault msg')
def test_return_cap_definitions(self):
ret = salt.utils.pbm.get_capability_definitions(self.mock_prof_mgr)
self.assertEqual(ret, ['fake_cap_meta1', 'fake_cap_meta2',
'fake_cap_meta3'])
@skipIf(NO_MOCK, NO_MOCK_REASON)
@skipIf(not HAS_PYVMOMI, 'The \'pyvmomi\' library is missing')
class GetPoliciesByIdTestCase(TestCase):
'''Tests for salt.utils.pbm.get_policies_by_id'''
def setUp(self):
self.policy_ids = MagicMock()
self.mock_policies = MagicMock()
self.mock_prof_mgr = MagicMock(
RetrieveContent=MagicMock(return_value=self.mock_policies))
def tearDown(self):
for attr in ('policy_ids', 'mock_policies', 'mock_prof_mgr'):
delattr(self, attr)
def test_retrieve_policies(self):
salt.utils.pbm.get_policies_by_id(self.mock_prof_mgr, self.policy_ids)
self.mock_prof_mgr.RetrieveContent.assert_called_once_with(
self.policy_ids)
def test_retrieve_policies_raises_no_permissions(self):
exc = vim.fault.NoPermission()
exc.privilegeId = 'Fake privilege'
self.mock_prof_mgr.RetrieveContent = MagicMock(side_effect=exc)
with self.assertRaises(VMwareApiError) as excinfo:
salt.utils.pbm.get_policies_by_id(self.mock_prof_mgr, self.policy_ids)
self.assertEqual(excinfo.exception.strerror,
'Not enough permissions. Required privilege: '
'Fake privilege')
def test_retrieve_policies_raises_vim_fault(self):
exc = vim.fault.VimFault()
exc.msg = 'VimFault msg'
self.mock_prof_mgr.RetrieveContent = MagicMock(side_effect=exc)
with self.assertRaises(VMwareApiError) as excinfo:
salt.utils.pbm.get_policies_by_id(self.mock_prof_mgr, self.policy_ids)
self.assertEqual(excinfo.exception.strerror, 'VimFault msg')
def test_retrieve_policies_raises_runtime_fault(self):
exc = vmodl.RuntimeFault()
exc.msg = 'RuntimeFault msg'
self.mock_prof_mgr.RetrieveContent = MagicMock(side_effect=exc)
with self.assertRaises(VMwareRuntimeError) as excinfo:
salt.utils.pbm.get_policies_by_id(self.mock_prof_mgr, self.policy_ids)
self.assertEqual(excinfo.exception.strerror, 'RuntimeFault msg')
def test_return_policies(self):
ret = salt.utils.pbm.get_policies_by_id(self.mock_prof_mgr, self.policy_ids)
self.assertEqual(ret, self.mock_policies)
@skipIf(NO_MOCK, NO_MOCK_REASON)
@skipIf(not HAS_PYVMOMI, 'The \'pyvmomi\' library is missing')
class GetStoragePoliciesTestCase(TestCase):
'''Tests for salt.utils.pbm.get_storage_policies'''
def setUp(self):
self.mock_res_type = MagicMock()
self.mock_policy_ids = MagicMock()
self.mock_prof_mgr = MagicMock(
QueryProfile=MagicMock(return_value=self.mock_policy_ids))
# Policies
self.mock_policies = []
for i in range(4):
mock_obj = MagicMock(resourceType=MagicMock(
resourceType=pbm.profile.ResourceTypeEnum.STORAGE))
mock_obj.name = 'fake_policy{0}'.format(i)
self.mock_policies.append(mock_obj)
patches = (
('salt.utils.pbm.pbm.profile.ResourceType',
MagicMock(return_value=self.mock_res_type)),
('salt.utils.pbm.get_policies_by_id',
MagicMock(return_value=self.mock_policies)))
for mod, mock in patches:
patcher = patch(mod, mock)
patcher.start()
self.addCleanup(patcher.stop)
def tearDown(self):
for attr in ('mock_res_type', 'mock_policy_ids', 'mock_policies',
'mock_prof_mgr'):
delattr(self, attr)
def test_get_res_type(self):
mock_get_res_type = MagicMock()
with patch('salt.utils.pbm.pbm.profile.ResourceType',
mock_get_res_type):
salt.utils.pbm.get_storage_policies(self.mock_prof_mgr)
mock_get_res_type.assert_called_once_with(
resourceType=pbm.profile.ResourceTypeEnum.STORAGE)
def test_retrieve_policy_ids(self):
mock_retrieve_policy_ids = MagicMock(return_value=self.mock_policy_ids)
self.mock_prof_mgr.QueryProfile = mock_retrieve_policy_ids
salt.utils.pbm.get_storage_policies(self.mock_prof_mgr)
mock_retrieve_policy_ids.assert_called_once_with(self.mock_res_type)
def test_retrieve_policy_ids_raises_no_permissions(self):
exc = vim.fault.NoPermission()
exc.privilegeId = 'Fake privilege'
self.mock_prof_mgr.QueryProfile = MagicMock(side_effect=exc)
with self.assertRaises(VMwareApiError) as excinfo:
salt.utils.pbm.get_storage_policies(self.mock_prof_mgr)
self.assertEqual(excinfo.exception.strerror,
'Not enough permissions. Required privilege: '
'Fake privilege')
def test_retrieve_policy_ids_raises_vim_fault(self):
exc = vim.fault.VimFault()
exc.msg = 'VimFault msg'
self.mock_prof_mgr.QueryProfile = MagicMock(side_effect=exc)
with self.assertRaises(VMwareApiError) as excinfo:
salt.utils.pbm.get_storage_policies(self.mock_prof_mgr)
self.assertEqual(excinfo.exception.strerror, 'VimFault msg')
def test_retrieve_policy_ids_raises_runtime_fault(self):
exc = vmodl.RuntimeFault()
exc.msg = 'RuntimeFault msg'
self.mock_prof_mgr.QueryProfile = MagicMock(side_effect=exc)
with self.assertRaises(VMwareRuntimeError) as excinfo:
salt.utils.pbm.get_storage_policies(self.mock_prof_mgr)
self.assertEqual(excinfo.exception.strerror, 'RuntimeFault msg')
def test_get_policies_by_id(self):
mock_get_policies_by_id = MagicMock(return_value=self.mock_policies)
with patch('salt.utils.pbm.get_policies_by_id',
mock_get_policies_by_id):
salt.utils.pbm.get_storage_policies(self.mock_prof_mgr)
mock_get_policies_by_id.assert_called_once_with(
self.mock_prof_mgr, self.mock_policy_ids)
def test_return_all_policies(self):
ret = salt.utils.pbm.get_storage_policies(self.mock_prof_mgr,
get_all_policies=True)
self.assertEqual(ret, self.mock_policies)
def test_return_filtered_policies(self):
ret = salt.utils.pbm.get_storage_policies(
self.mock_prof_mgr, policy_names=['fake_policy1', 'fake_policy3'])
self.assertEqual(ret, [self.mock_policies[1], self.mock_policies[3]])
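# Editor's note (illustrative sketch): the helpers exercised above compose as
# follows, assuming a service instance from the usual salt.utils.vmware
# connection helpers:
#
#     profile_manager = salt.utils.pbm.get_profile_manager(service_instance)
#     policies = salt.utils.pbm.get_storage_policies(
#         profile_manager, policy_names=['fake_policy1', 'fake_policy3'])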
@skipIf(NO_MOCK, NO_MOCK_REASON)
@skipIf(not HAS_PYVMOMI, 'The \'pyvmomi\' library is missing')
class CreateStoragePolicyTestCase(TestCase):
'''Tests for salt.utils.pbm.create_storage_policy'''
def setUp(self):
self.mock_policy_spec = MagicMock()
self.mock_prof_mgr = MagicMock()
def tearDown(self):
for attr in ('mock_policy_spec', 'mock_prof_mgr'):
delattr(self, attr)
def test_create_policy(self):
salt.utils.pbm.create_storage_policy(self.mock_prof_mgr,
self.mock_policy_spec)
self.mock_prof_mgr.Create.assert_called_once_with(
self.mock_policy_spec)
def test_create_policy_raises_no_permissions(self):
exc = vim.fault.NoPermission()
exc.privilegeId = 'Fake privilege'
self.mock_prof_mgr.Create = MagicMock(side_effect=exc)
with self.assertRaises(VMwareApiError) as excinfo:
salt.utils.pbm.create_storage_policy(self.mock_prof_mgr,
self.mock_policy_spec)
self.assertEqual(excinfo.exception.strerror,
'Not enough permissions. Required privilege: '
'Fake privilege')
def test_create_policy_raises_vim_fault(self):
exc = vim.fault.VimFault()
exc.msg = 'VimFault msg'
self.mock_prof_mgr.Create = MagicMock(side_effect=exc)
with self.assertRaises(VMwareApiError) as excinfo:
salt.utils.pbm.create_storage_policy(self.mock_prof_mgr,
self.mock_policy_spec)
self.assertEqual(excinfo.exception.strerror, 'VimFault msg')
def test_create_policy_raises_runtime_fault(self):
exc = vmodl.RuntimeFault()
exc.msg = 'RuntimeFault msg'
self.mock_prof_mgr.Create = MagicMock(side_effect=exc)
with self.assertRaises(VMwareRuntimeError) as excinfo:
salt.utils.pbm.create_storage_policy(self.mock_prof_mgr,
self.mock_policy_spec)
self.assertEqual(excinfo.exception.strerror, 'RuntimeFault msg')
@skipIf(NO_MOCK, NO_MOCK_REASON)
@skipIf(not HAS_PYVMOMI, 'The \'pyvmomi\' library is missing')
class UpdateStoragePolicyTestCase(TestCase):
'''Tests for salt.utils.pbm.update_storage_policy'''
def setUp(self):
self.mock_policy_spec = MagicMock()
self.mock_policy = MagicMock()
self.mock_prof_mgr = MagicMock()
def tearDown(self):
for attr in ('mock_policy_spec', 'mock_policy', 'mock_prof_mgr'):
delattr(self, attr)
def test_update_policy(self):
salt.utils.pbm.update_storage_policy(
self.mock_prof_mgr, self.mock_policy, self.mock_policy_spec)
self.mock_prof_mgr.Update.assert_called_once_with(
self.mock_policy.profileId, self.mock_policy_spec)
def test_update_policy_raises_no_permissions(self):
exc = vim.fault.NoPermission()
exc.privilegeId = 'Fake privilege'
self.mock_prof_mgr.Update = MagicMock(side_effect=exc)
with self.assertRaises(VMwareApiError) as excinfo:
salt.utils.pbm.update_storage_policy(
self.mock_prof_mgr, self.mock_policy, self.mock_policy_spec)
self.assertEqual(excinfo.exception.strerror,
'Not enough permissions. Required privilege: '
'Fake privilege')
def test_update_policy_raises_vim_fault(self):
exc = vim.fault.VimFault()
exc.msg = 'VimFault msg'
self.mock_prof_mgr.Update = MagicMock(side_effect=exc)
with self.assertRaises(VMwareApiError) as excinfo:
salt.utils.pbm.update_storage_policy(
self.mock_prof_mgr, self.mock_policy, self.mock_policy_spec)
self.assertEqual(excinfo.exception.strerror, 'VimFault msg')
def test_update_policy_raises_runtime_fault(self):
exc = vmodl.RuntimeFault()
exc.msg = 'RuntimeFault msg'
self.mock_prof_mgr.Update = MagicMock(side_effect=exc)
with self.assertRaises(VMwareRuntimeError) as excinfo:
salt.utils.pbm.update_storage_policy(
self.mock_prof_mgr, self.mock_policy, self.mock_policy_spec)
self.assertEqual(excinfo.exception.strerror, 'RuntimeFault msg')
@skipIf(NO_MOCK, NO_MOCK_REASON)
@skipIf(not HAS_PYVMOMI, 'The \'pyvmomi\' library is missing')
class GetDefaultStoragePolicyOfDatastoreTestCase(TestCase):
'''Tests for salt.utils.pbm.get_default_storage_policy_of_datastore'''
def setUp(self):
self.mock_ds = MagicMock(_moId='fake_ds_moid')
self.mock_hub = MagicMock()
self.mock_policy_id = 'fake_policy_id'
self.mock_prof_mgr = MagicMock(
QueryDefaultRequirementProfile=MagicMock(
return_value=self.mock_policy_id))
self.mock_policy_refs = [MagicMock()]
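# Each (dotted path, mock) pair below is patched for the duration of the test;
# addCleanup guarantees the patchers are stopped even if an assertion fails.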
patches = (
('salt.utils.pbm.pbm.placement.PlacementHub',
MagicMock(return_value=self.mock_hub)),
('salt.utils.pbm.get_policies_by_id',
MagicMock(return_value=self.mock_policy_refs)))
for mod, mock in patches:
patcher = patch(mod, mock)
patcher.start()
self.addCleanup(patcher.stop)
def tearDown(self):
for attr in ('mock_ds', 'mock_hub', 'mock_policy_id', 'mock_prof_mgr',
'mock_policy_refs'):
delattr(self, attr)
def test_get_placement_hub(self):
mock_get_placement_hub = MagicMock()
with patch('salt.utils.pbm.pbm.placement.PlacementHub',
mock_get_placement_hub):
salt.utils.pbm.get_default_storage_policy_of_datastore(
self.mock_prof_mgr, self.mock_ds)
mock_get_placement_hub.assert_called_once_with(
hubId='fake_ds_moid', hubType='Datastore')
def test_query_default_requirement_profile(self):
mock_query_prof = MagicMock(return_value=self.mock_policy_id)
self.mock_prof_mgr.QueryDefaultRequirementProfile = \
mock_query_prof
salt.utils.pbm.get_default_storage_policy_of_datastore(
self.mock_prof_mgr, self.mock_ds)
mock_query_prof.assert_called_once_with(self.mock_hub)
def test_query_default_requirement_profile_raises_no_permissions(self):
exc = vim.fault.NoPermission()
exc.privilegeId = 'Fake privilege'
self.mock_prof_mgr.QueryDefaultRequirementProfile = \
MagicMock(side_effect=exc)
with self.assertRaises(VMwareApiError) as excinfo:
salt.utils.pbm.get_default_storage_policy_of_datastore(
self.mock_prof_mgr, self.mock_ds)
self.assertEqual(excinfo.exception.strerror,
'Not enough permissions. Required privilege: '
'Fake privilege')
def test_query_default_requirement_profile_raises_vim_fault(self):
exc = vim.fault.VimFault()
exc.msg = 'VimFault msg'
self.mock_prof_mgr.QueryDefaultRequirementProfile = \
MagicMock(side_effect=exc)
with self.assertRaises(VMwareApiError) as excinfo:
salt.utils.pbm.get_default_storage_policy_of_datastore(
self.mock_prof_mgr, self.mock_ds)
self.assertEqual(excinfo.exception.strerror, 'VimFault msg')
def test_query_default_requirement_profile_raises_runtime_fault(self):
exc = vmodl.RuntimeFault()
exc.msg = 'RuntimeFault msg'
self.mock_prof_mgr.QueryDefaultRequirementProfile = \
MagicMock(side_effect=exc)
with self.assertRaises(VMwareRuntimeError) as excinfo:
salt.utils.pbm.get_default_storage_policy_of_datastore(
self.mock_prof_mgr, self.mock_ds)
self.assertEqual(excinfo.exception.strerror, 'RuntimeFault msg')
def test_get_policies_by_id(self):
mock_get_policies_by_id = MagicMock()
with patch('salt.utils.pbm.get_policies_by_id',
mock_get_policies_by_id):
salt.utils.pbm.get_default_storage_policy_of_datastore(
self.mock_prof_mgr, self.mock_ds)
mock_get_policies_by_id.assert_called_once_with(
self.mock_prof_mgr, [self.mock_policy_id])
def test_no_policy_refs(self):
mock_get_policies_by_id = MagicMock()
with patch('salt.utils.pbm.get_policies_by_id',
MagicMock(return_value=None)):
with self.assertRaises(VMwareObjectRetrievalError) as excinfo:
salt.utils.pbm.get_default_storage_policy_of_datastore(
self.mock_prof_mgr, self.mock_ds)
self.assertEqual(excinfo.exception.strerror,
'Storage policy with id \'fake_policy_id\' was not '
'found')
def test_return_policy_ref(self):
mock_get_policies_by_id = MagicMock()
ret = salt.utils.pbm.get_default_storage_policy_of_datastore(
self.mock_prof_mgr, self.mock_ds)
self.assertEqual(ret, self.mock_policy_refs[0])
@skipIf(NO_MOCK, NO_MOCK_REASON)
@skipIf(not HAS_PYVMOMI, 'The \'pyvmomi\' library is missing')
class AssignDefaultStoragePolicyToDatastoreTestCase(TestCase):
'''Tests for salt.utils.pbm.assign_default_storage_policy_to_datastore'''
def setUp(self):
self.mock_ds = MagicMock(_moId='fake_ds_moid')
self.mock_policy = MagicMock()
self.mock_hub = MagicMock()
self.mock_prof_mgr = MagicMock()
patches = (
('salt.utils.pbm.pbm.placement.PlacementHub',
MagicMock(return_value=self.mock_hub)),)
for mod, mock in patches:
patcher = patch(mod, mock)
patcher.start()
self.addCleanup(patcher.stop)
def tearDown(self):
for attr in ('mock_ds', 'mock_hub', 'mock_policy', 'mock_prof_mgr'):
delattr(self, attr)
def test_get_placement_hub(self):
mock_get_placement_hub = MagicMock()
with patch('salt.utils.pbm.pbm.placement.PlacementHub',
mock_get_placement_hub):
salt.utils.pbm.assign_default_storage_policy_to_datastore(
self.mock_prof_mgr, self.mock_policy, self.mock_ds)
mock_get_placement_hub.assert_called_once_with(
hubId='fake_ds_moid', hubType='Datastore')
def test_assign_default_requirement_profile(self):
mock_assign_prof = MagicMock()
self.mock_prof_mgr.AssignDefaultRequirementProfile = \
mock_assign_prof
salt.utils.pbm.assign_default_storage_policy_to_datastore(
self.mock_prof_mgr, self.mock_policy, self.mock_ds)
mock_assign_prof.assert_called_once_with(
self.mock_policy.profileId, [self.mock_hub])
def test_assign_default_requirement_profile_raises_no_permissions(self):
exc = vim.fault.NoPermission()
exc.privilegeId = 'Fake privilege'
self.mock_prof_mgr.AssignDefaultRequirementProfile = \
MagicMock(side_effect=exc)
with self.assertRaises(VMwareApiError) as excinfo:
salt.utils.pbm.assign_default_storage_policy_to_datastore(
self.mock_prof_mgr, self.mock_policy, self.mock_ds)
self.assertEqual(excinfo.exception.strerror,
'Not enough permissions. Required privilege: '
'Fake privilege')
def test_assign_default_requirement_profile_raises_vim_fault(self):
exc = vim.fault.VimFault()
exc.msg = 'VimFault msg'
self.mock_prof_mgr.AssignDefaultRequirementProfile = \
MagicMock(side_effect=exc)
with self.assertRaises(VMwareApiError) as excinfo:
salt.utils.pbm.assign_default_storage_policy_to_datastore(
self.mock_prof_mgr, self.mock_policy, self.mock_ds)
self.assertEqual(excinfo.exception.strerror, 'VimFault msg')
def test_assign_default_requirement_profile_raises_runtime_fault(self):
exc = vmodl.RuntimeFault()
exc.msg = 'RuntimeFault msg'
self.mock_prof_mgr.AssignDefaultRequirementProfile = \
MagicMock(side_effect=exc)
with self.assertRaises(VMwareRuntimeError) as excinfo:
salt.utils.pbm.assign_default_storage_policy_to_datastore(
self.mock_prof_mgr, self.mock_policy, self.mock_ds)
self.assertEqual(excinfo.exception.strerror, 'RuntimeFault msg')


@@ -13,6 +13,7 @@ import ssl
import sys
# Import Salt testing libraries
from tests.support.mixins import LoaderModuleMockMixin
from tests.support.unit import TestCase, skipIf
from tests.support.mock import NO_MOCK, NO_MOCK_REASON, patch, MagicMock, call, \
PropertyMock
@@ -852,6 +853,96 @@ class IsConnectionToAVCenterTestCase(TestCase):
excinfo.exception.strerror)
@skipIf(NO_MOCK, NO_MOCK_REASON)
@skipIf(not HAS_PYVMOMI, 'The \'pyvmomi\' library is missing')
class GetNewServiceInstanceStub(TestCase, LoaderModuleMockMixin):
'''Tests for salt.utils.vmware.get_new_service_instance_stub'''
def setup_loader_modules(self):
return {salt.utils.vmware: {
'__virtual__': MagicMock(return_value='vmware'),
'sys': MagicMock(),
'ssl': MagicMock()}}
def setUp(self):
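# The fake stub cookie apparently mimics the '<name>"<session id>' shape of vCenter
# session cookies: the tests below expect the part after the double quote
# ('fake_cookie') to end up as the vcSessionCookie in the request context, while the
# newly created stub reuses the full cookie string.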
self.mock_stub = MagicMock(
host='fake_host:1000',
cookie='ignore"fake_cookie')
self.mock_si = MagicMock(
_stub=self.mock_stub)
self.mock_ret = MagicMock()
self.mock_new_stub = MagicMock()
self.context_dict = {}
patches = (('salt.utils.vmware.VmomiSupport.GetRequestContext',
MagicMock(
return_value=self.context_dict)),
('salt.utils.vmware.SoapStubAdapter',
MagicMock(return_value=self.mock_new_stub)))
for mod, mock in patches:
patcher = patch(mod, mock)
patcher.start()
self.addCleanup(patcher.stop)
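# ssl.create_default_context only exists from Python 2.7.9 onwards, so version_info
# is mocked; the tests below check that an SSL context (with hostname checking and
# certificate verification disabled) is built on 2.7.9+ and that sslContext=None is
# passed to SoapStubAdapter on older interpreters.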
type(salt.utils.vmware.sys).version_info = \
PropertyMock(return_value=(2, 7, 9))
self.mock_context = MagicMock()
self.mock_create_default_context = \
MagicMock(return_value=self.mock_context)
salt.utils.vmware.ssl.create_default_context = \
self.mock_create_default_context
def tearDown(self):
for attr in ('mock_stub', 'mock_si', 'mock_ret', 'mock_new_stub',
'context_dict', 'mock_context',
'mock_create_default_context'):
delattr(self, attr)
def test_ssl_default_context_loaded(self):
salt.utils.vmware.get_new_service_instance_stub(
self.mock_si, 'fake_path')
self.mock_create_default_context.assert_called_once_with()
self.assertFalse(self.mock_context.check_hostname)
self.assertEqual(self.mock_context.verify_mode,
salt.utils.vmware.ssl.CERT_NONE)
def test_ssl_default_context_not_loaded(self):
type(salt.utils.vmware.sys).version_info = \
PropertyMock(return_value=(2, 7, 8))
salt.utils.vmware.get_new_service_instance_stub(
self.mock_si, 'fake_path')
self.assertEqual(self.mock_create_default_context.call_count, 0)
def test_session_cookie_in_context(self):
salt.utils.vmware.get_new_service_instance_stub(
self.mock_si, 'fake_path')
self.assertEqual(self.context_dict['vcSessionCookie'], 'fake_cookie')
def test_get_new_stub(self):
mock_get_new_stub = MagicMock()
with patch('salt.utils.vmware.SoapStubAdapter', mock_get_new_stub):
salt.utils.vmware.get_new_service_instance_stub(
self.mock_si, 'fake_path', 'fake_ns', 'fake_version')
mock_get_new_stub.assert_called_once_with(
host='fake_host', ns='fake_ns', path='fake_path',
version='fake_version', poolSize=0, sslContext=self.mock_context)
def test_get_new_stub_2_7_8_python(self):
type(salt.utils.vmware.sys).version_info = \
PropertyMock(return_value=(2, 7, 8))
mock_get_new_stub = MagicMock()
with patch('salt.utils.vmware.SoapStubAdapter', mock_get_new_stub):
salt.utils.vmware.get_new_service_instance_stub(
self.mock_si, 'fake_path', 'fake_ns', 'fake_version')
mock_get_new_stub.assert_called_once_with(
host='fake_host', ns='fake_ns', path='fake_path',
version='fake_version', poolSize=0, sslContext=None)
def test_new_stub_returned(self):
ret = salt.utils.vmware.get_new_service_instance_stub(
self.mock_si, 'fake_path')
self.assertEqual(self.mock_new_stub.cookie, 'ignore"fake_cookie')
self.assertEqual(ret, self.mock_new_stub)
@skipIf(NO_MOCK, NO_MOCK_REASON)
@skipIf(not HAS_PYVMOMI, 'The \'pyvmomi\' library is missing')
class GetServiceInstanceFromManagedObjectTestCase(TestCase):


@@ -0,0 +1,784 @@
# -*- coding: utf-8 -*-
'''
:codeauthor: :email:`Alexandru Bleotu <alexandru.bleotu@morganstanley.com>`
Tests for dvs related functions in salt.utils.vmware
'''
# Import python libraries
from __future__ import absolute_import
import logging
# Import Salt testing libraries
from tests.support.unit import TestCase, skipIf
from tests.support.mock import NO_MOCK, NO_MOCK_REASON, patch, MagicMock, call
from salt.exceptions import VMwareObjectRetrievalError, VMwareApiError, \
ArgumentValueError, VMwareRuntimeError
# Import Salt libraries
import salt.utils.vmware as vmware
# Import Third Party Libs
try:
from pyVmomi import vim, vmodl
HAS_PYVMOMI = True
except ImportError:
HAS_PYVMOMI = False
# Get Logging Started
log = logging.getLogger(__name__)
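# FakeTaskClass serves as the spec for task mocks so that the mocked task reports a
# concrete __class__; the wait_for_task assertions below can then verify the exact
# task type string that the utility functions are expected to pass along.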
class FakeTaskClass(object):
pass
@skipIf(NO_MOCK, NO_MOCK_REASON)
@skipIf(not HAS_PYVMOMI, 'The \'pyvmomi\' library is missing')
class GetDvssTestCase(TestCase):
def setUp(self):
self.mock_si = MagicMock()
self.mock_dc_ref = MagicMock()
self.mock_traversal_spec = MagicMock()
self.mock_items = [{'object': MagicMock(),
'name': 'fake_dvs1'},
{'object': MagicMock(),
'name': 'fake_dvs2'},
{'object': MagicMock(),
'name': 'fake_dvs3'}]
self.mock_get_mors = MagicMock(return_value=self.mock_items)
patches = (
('salt.utils.vmware.get_managed_object_name',
MagicMock()),
('salt.utils.vmware.get_mors_with_properties',
self.mock_get_mors),
('salt.utils.vmware.get_service_instance_from_managed_object',
MagicMock(return_value=self.mock_si)),
('salt.utils.vmware.vmodl.query.PropertyCollector.TraversalSpec',
MagicMock(return_value=self.mock_traversal_spec)))
for mod, mock in patches:
patcher = patch(mod, mock)
patcher.start()
self.addCleanup(patcher.stop)
def tearDown(self):
for attr in ('mock_si', 'mock_dc_ref', 'mock_traversal_spec',
'mock_items', 'mock_get_mors'):
delattr(self, attr)
def test_get_managed_object_name_call(self):
mock_get_managed_object_name = MagicMock()
with patch('salt.utils.vmware.get_managed_object_name',
mock_get_managed_object_name):
vmware.get_dvss(self.mock_dc_ref)
mock_get_managed_object_name.assert_called_once_with(self.mock_dc_ref)
def test_traversal_spec(self):
mock_traversal_spec = MagicMock(return_value='traversal_spec')
with patch(
'salt.utils.vmware.vmodl.query.PropertyCollector.TraversalSpec',
mock_traversal_spec):
vmware.get_dvss(self.mock_dc_ref)
mock_traversal_spec.assert_has_calls(
[call(path='childEntity', skip=False, type=vim.Folder),
call(path='networkFolder', skip=True, type=vim.Datacenter,
selectSet=['traversal_spec'])])
def test_get_mors_with_properties(self):
vmware.get_dvss(self.mock_dc_ref)
self.mock_get_mors.assert_called_once_with(
self.mock_si, vim.DistributedVirtualSwitch,
container_ref=self.mock_dc_ref, property_list=['name'],
traversal_spec=self.mock_traversal_spec)
def test_get_no_dvss(self):
ret = vmware.get_dvss(self.mock_dc_ref)
self.assertEqual(ret, [])
def test_get_all_dvss(self):
ret = vmware.get_dvss(self.mock_dc_ref, get_all_dvss=True)
self.assertEqual(ret, [i['object'] for i in self.mock_items])
def test_filtered_all_dvss(self):
ret = vmware.get_dvss(self.mock_dc_ref,
dvs_names=['fake_dvs1', 'fake_dvs3', 'no_dvs'])
self.assertEqual(ret, [self.mock_items[0]['object'],
self.mock_items[2]['object']])
@skipIf(NO_MOCK, NO_MOCK_REASON)
@skipIf(not HAS_PYVMOMI, 'The \'pyvmomi\' library is missing')
class GetNetworkFolderTestCase(TestCase):
def setUp(self):
self.mock_si = MagicMock()
self.mock_dc_ref = MagicMock()
self.mock_traversal_spec = MagicMock()
self.mock_entries = [{'object': MagicMock(),
'name': 'fake_netw_folder'}]
self.mock_get_mors = MagicMock(return_value=self.mock_entries)
patches = (
('salt.utils.vmware.get_managed_object_name',
MagicMock(return_value='fake_dc')),
('salt.utils.vmware.get_service_instance_from_managed_object',
MagicMock(return_value=self.mock_si)),
('salt.utils.vmware.vmodl.query.PropertyCollector.TraversalSpec',
MagicMock(return_value=self.mock_traversal_spec)),
('salt.utils.vmware.get_mors_with_properties',
self.mock_get_mors))
for mod, mock in patches:
patcher = patch(mod, mock)
patcher.start()
self.addCleanup(patcher.stop)
def tearDown(self):
for attr in ('mock_si', 'mock_dc_ref', 'mock_traversal_spec',
'mock_entries', 'mock_get_mors'):
delattr(self, attr)
def test_get_managed_object_name_call(self):
mock_get_managed_object_name = MagicMock()
with patch('salt.utils.vmware.get_managed_object_name',
mock_get_managed_object_name):
vmware.get_network_folder(self.mock_dc_ref)
mock_get_managed_object_name.assert_called_once_with(self.mock_dc_ref)
def test_traversal_spec(self):
mock_traversal_spec = MagicMock(return_value='traversal_spec')
with patch(
'salt.utils.vmware.vmodl.query.PropertyCollector.TraversalSpec',
mock_traversal_spec):
vmware.get_network_folder(self.mock_dc_ref)
mock_traversal_spec.assert_called_once_with(
path='networkFolder', skip=False, type=vim.Datacenter)
def test_get_mors_with_properties(self):
vmware.get_network_folder(self.mock_dc_ref)
self.mock_get_mors.assert_called_once_with(
self.mock_si, vim.Folder, container_ref=self.mock_dc_ref,
property_list=['name'], traversal_spec=self.mock_traversal_spec)
def test_get_no_network_folder(self):
with patch('salt.utils.vmware.get_mors_with_properties',
MagicMock(return_value=[])):
with self.assertRaises(VMwareObjectRetrievalError) as excinfo:
vmware.get_network_folder(self.mock_dc_ref)
self.assertEqual(excinfo.exception.strerror,
'Network folder in datacenter \'fake_dc\' wasn\'t '
'retrieved')
def test_get_network_folder(self):
ret = vmware.get_network_folder(self.mock_dc_ref)
self.assertEqual(ret, self.mock_entries[0]['object'])
@skipIf(NO_MOCK, NO_MOCK_REASON)
@skipIf(not HAS_PYVMOMI, 'The \'pyvmomi\' library is missing')
class CreateDvsTestCase(TestCase):
def setUp(self):
self.mock_dc_ref = MagicMock()
self.mock_dvs_create_spec = MagicMock()
self.mock_task = MagicMock(spec=FakeTaskClass)
self.mock_netw_folder = \
MagicMock(CreateDVS_Task=MagicMock(
return_value=self.mock_task))
self.mock_wait_for_task = MagicMock()
patches = (
('salt.utils.vmware.get_managed_object_name',
MagicMock(return_value='fake_dc')),
('salt.utils.vmware.get_network_folder',
MagicMock(return_value=self.mock_netw_folder)),
('salt.utils.vmware.wait_for_task', self.mock_wait_for_task))
for mod, mock in patches:
patcher = patch(mod, mock)
patcher.start()
self.addCleanup(patcher.stop)
def tearDown(self):
for attr in ('mock_dc_ref', 'mock_dvs_create_spec',
'mock_task', 'mock_netw_folder', 'mock_wait_for_task'):
delattr(self, attr)
def test_get_managed_object_name_call(self):
mock_get_managed_object_name = MagicMock()
with patch('salt.utils.vmware.get_managed_object_name',
mock_get_managed_object_name):
vmware.create_dvs(self.mock_dc_ref, 'fake_dvs')
mock_get_managed_object_name.assert_called_once_with(self.mock_dc_ref)
def test_no_dvs_create_spec(self):
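# With no create spec supplied, create_dvs is expected to build a vim.DVSCreateSpec
# containing a vim.VMwareDVSConfigSpec, set the requested name on the config spec,
# and hand the assembled spec to CreateDVS_Task.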
mock_spec = MagicMock(configSpec=None)
mock_config_spec = MagicMock()
mock_dvs_create_spec = MagicMock(return_value=mock_spec)
mock_vmware_dvs_config_spec = \
MagicMock(return_value=mock_config_spec)
with patch('salt.utils.vmware.vim.DVSCreateSpec',
mock_dvs_create_spec):
with patch('salt.utils.vmware.vim.VMwareDVSConfigSpec',
mock_vmware_dvs_config_spec):
vmware.create_dvs(self.mock_dc_ref, 'fake_dvs')
mock_dvs_create_spec.assert_called_once_with()
mock_vmware_dvs_config_spec.assert_called_once_with()
self.assertEqual(mock_spec.configSpec, mock_config_spec)
self.assertEqual(mock_config_spec.name, 'fake_dvs')
self.mock_netw_folder.CreateDVS_Task.assert_called_once_with(mock_spec)
def test_get_network_folder(self):
mock_get_network_folder = MagicMock()
with patch('salt.utils.vmware.get_network_folder',
mock_get_network_folder):
vmware.create_dvs(self.mock_dc_ref, 'fake_dvs')
mock_get_network_folder.assert_called_once_with(self.mock_dc_ref)
def test_create_dvs_task_passed_in_spec(self):
vmware.create_dvs(self.mock_dc_ref, 'fake_dvs',
dvs_create_spec=self.mock_dvs_create_spec)
self.mock_netw_folder.CreateDVS_Task.assert_called_once_with(
self.mock_dvs_create_spec)
def test_create_dvs_task_raises_no_permission(self):
exc = vim.fault.NoPermission()
exc.privilegeId = 'Fake privilege'
self.mock_netw_folder.CreateDVS_Task = MagicMock(side_effect=exc)
with self.assertRaises(VMwareApiError) as excinfo:
vmware.create_dvs(self.mock_dc_ref, 'fake_dvs',
dvs_create_spec=self.mock_dvs_create_spec)
self.assertEqual(excinfo.exception.strerror,
'Not enough permissions. Required privilege: '
'Fake privilege')
def test_create_dvs_task_raises_vim_fault(self):
exc = vim.fault.VimFault()
exc.msg = 'VimFault msg'
self.mock_netw_folder.CreateDVS_Task = MagicMock(side_effect=exc)
with self.assertRaises(VMwareApiError) as excinfo:
vmware.create_dvs(self.mock_dc_ref, 'fake_dvs',
dvs_create_spec=self.mock_dvs_create_spec)
self.assertEqual(excinfo.exception.strerror, 'VimFault msg')
def test_create_dvs_task_raises_runtime_fault(self):
exc = vmodl.RuntimeFault()
exc.msg = 'RuntimeFault msg'
self.mock_netw_folder.CreateDVS_Task = MagicMock(side_effect=exc)
with self.assertRaises(VMwareRuntimeError) as excinfo:
vmware.create_dvs(self.mock_dc_ref, 'fake_dvs',
dvs_create_spec=self.mock_dvs_create_spec)
self.assertEqual(excinfo.exception.strerror, 'RuntimeFault msg')
def test_wait_for_tasks(self):
vmware.create_dvs(self.mock_dc_ref, 'fake_dvs',
dvs_create_spec=self.mock_dvs_create_spec)
self.mock_wait_for_task.assert_called_once_with(
self.mock_task, 'fake_dvs',
'<class \'unit.utils.vmware.test_dvs.FakeTaskClass\'>')
@skipIf(NO_MOCK, NO_MOCK_REASON)
@skipIf(not HAS_PYVMOMI, 'The \'pyvmomi\' library is missing')
class UpdateDvsTestCase(TestCase):
def setUp(self):
self.mock_task = MagicMock(spec=FakeTaskClass)
self.mock_dvs_ref = MagicMock(
ReconfigureDvs_Task=MagicMock(return_value=self.mock_task))
self.mock_dvs_spec = MagicMock()
self.mock_wait_for_task = MagicMock()
patches = (
('salt.utils.vmware.get_managed_object_name',
MagicMock(return_value='fake_dvs')),
('salt.utils.vmware.wait_for_task', self.mock_wait_for_task))
for mod, mock in patches:
patcher = patch(mod, mock)
patcher.start()
self.addCleanup(patcher.stop)
def tearDown(self):
for attr in ('mock_dvs_ref', 'mock_task', 'mock_dvs_spec',
'mock_wait_for_task'):
delattr(self, attr)
def test_get_managed_object_name_call(self):
mock_get_managed_object_name = MagicMock()
with patch('salt.utils.vmware.get_managed_object_name',
mock_get_managed_object_name):
vmware.update_dvs(self.mock_dvs_ref, self.mock_dvs_spec)
mock_get_managed_object_name.assert_called_once_with(self.mock_dvs_ref)
def test_reconfigure_dvs_task(self):
vmware.update_dvs(self.mock_dvs_ref, self.mock_dvs_spec)
self.mock_dvs_ref.ReconfigureDvs_Task.assert_called_once_with(
self.mock_dvs_spec)
def test_reconfigure_dvs_task_raises_no_permission(self):
exc = vim.fault.NoPermission()
exc.privilegeId = 'Fake privilege'
self.mock_dvs_ref.ReconfigureDvs_Task = MagicMock(side_effect=exc)
with self.assertRaises(VMwareApiError) as excinfo:
vmware.update_dvs(self.mock_dvs_ref, self.mock_dvs_spec)
self.assertEqual(excinfo.exception.strerror,
'Not enough permissions. Required privilege: '
'Fake privilege')
def test_reconfigure_dvs_task_raises_vim_fault(self):
exc = vim.fault.VimFault()
exc.msg = 'VimFault msg'
self.mock_dvs_ref.ReconfigureDvs_Task = MagicMock(side_effect=exc)
with self.assertRaises(VMwareApiError) as excinfo:
vmware.update_dvs(self.mock_dvs_ref, self.mock_dvs_spec)
self.assertEqual(excinfo.exception.strerror, 'VimFault msg')
def test_reconfigure_dvs_task_raises_runtime_fault(self):
exc = vmodl.RuntimeFault()
exc.msg = 'RuntimeFault msg'
self.mock_dvs_ref.ReconfigureDvs_Task = MagicMock(side_effect=exc)
with self.assertRaises(VMwareRuntimeError) as excinfo:
vmware.update_dvs(self.mock_dvs_ref, self.mock_dvs_spec)
self.assertEqual(excinfo.exception.strerror, 'RuntimeFault msg')
def test_wait_for_tasks(self):
vmware.update_dvs(self.mock_dvs_ref, self.mock_dvs_spec)
self.mock_wait_for_task.assert_called_once_with(
self.mock_task, 'fake_dvs',
'<class \'unit.utils.vmware.test_dvs.FakeTaskClass\'>')
@skipIf(NO_MOCK, NO_MOCK_REASON)
@skipIf(not HAS_PYVMOMI, 'The \'pyvmomi\' library is missing')
class SetDvsNetworkResourceManagementEnabledTestCase(TestCase):
def setUp(self):
self.mock_enabled = MagicMock()
self.mock_dvs_ref = MagicMock(
EnableNetworkResourceManagement=MagicMock())
patches = (
('salt.utils.vmware.get_managed_object_name',
MagicMock(return_value='fake_dvs')),)
for mod, mock in patches:
patcher = patch(mod, mock)
patcher.start()
self.addCleanup(patcher.stop)
def tearDown(self):
for attr in ('mock_dvs_ref', 'mock_enabled'):
delattr(self, attr)
def test_get_managed_object_name_call(self):
mock_get_managed_object_name = MagicMock()
with patch('salt.utils.vmware.get_managed_object_name',
mock_get_managed_object_name):
vmware.set_dvs_network_resource_management_enabled(
self.mock_dvs_ref, self.mock_enabled)
mock_get_managed_object_name.assert_called_once_with(self.mock_dvs_ref)
def test_enable_network_resource_management(self):
vmware.set_dvs_network_resource_management_enabled(
self.mock_dvs_ref, self.mock_enabled)
self.mock_dvs_ref.EnableNetworkResourceManagement.assert_called_once_with(
enable=self.mock_enabled)
def test_enable_network_resource_management_raises_no_permission(self):
exc = vim.fault.NoPermission()
exc.privilegeId = 'Fake privilege'
self.mock_dvs_ref.EnableNetworkResourceManagement = \
MagicMock(side_effect=exc)
with self.assertRaises(VMwareApiError) as excinfo:
vmware.set_dvs_network_resource_management_enabled(
self.mock_dvs_ref, self.mock_enabled)
self.assertEqual(excinfo.exception.strerror,
'Not enough permissions. Required privilege: '
'Fake privilege')
def test_enable_network_resource_management_raises_vim_fault(self):
exc = vim.fault.VimFault()
exc.msg = 'VimFault msg'
self.mock_dvs_ref.EnableNetworkResourceManagement = \
MagicMock(side_effect=exc)
with self.assertRaises(VMwareApiError) as excinfo:
vmware.set_dvs_network_resource_management_enabled(
self.mock_dvs_ref, self.mock_enabled)
self.assertEqual(excinfo.exception.strerror, 'VimFault msg')
def test_enable_network_resource_management_raises_runtime_fault(self):
exc = vmodl.RuntimeFault()
exc.msg = 'RuntimeFault msg'
self.mock_dvs_ref.EnableNetworkResourceManagement = \
MagicMock(side_effect=exc)
with self.assertRaises(VMwareRuntimeError) as excinfo:
vmware.set_dvs_network_resource_management_enabled(
self.mock_dvs_ref, self.mock_enabled)
self.assertEqual(excinfo.exception.strerror, 'RuntimeFault msg')
@skipIf(NO_MOCK, NO_MOCK_REASON)
@skipIf(not HAS_PYVMOMI, 'The \'pyvmomi\' library is missing')
class GetDvportgroupsTestCase(TestCase):
def setUp(self):
self.mock_si = MagicMock()
self.mock_dc_ref = MagicMock(spec=vim.Datacenter)
self.mock_dvs_ref = MagicMock(spec=vim.DistributedVirtualSwitch)
self.mock_traversal_spec = MagicMock()
self.mock_items = [{'object': MagicMock(),
'name': 'fake_pg1'},
{'object': MagicMock(),
'name': 'fake_pg2'},
{'object': MagicMock(),
'name': 'fake_pg3'}]
self.mock_get_mors = MagicMock(return_value=self.mock_items)
patches = (
('salt.utils.vmware.get_managed_object_name',
MagicMock()),
('salt.utils.vmware.get_mors_with_properties',
self.mock_get_mors),
('salt.utils.vmware.get_service_instance_from_managed_object',
MagicMock(return_value=self.mock_si)),
('salt.utils.vmware.vmodl.query.PropertyCollector.TraversalSpec',
MagicMock(return_value=self.mock_traversal_spec)))
for mod, mock in patches:
patcher = patch(mod, mock)
patcher.start()
self.addCleanup(patcher.stop)
def tearDown(self):
for attr in ('mock_si', 'mock_dc_ref', 'mock_dvs_ref',
'mock_traversal_spec', 'mock_items', 'mock_get_mors'):
delattr(self, attr)
def test_unsupported_parent(self):
with self.assertRaises(ArgumentValueError) as excinfo:
vmware.get_dvportgroups(MagicMock())
self.assertEqual(excinfo.exception.strerror,
'Parent has to be either a datacenter, or a '
'distributed virtual switch')
def test_get_managed_object_name_call(self):
mock_get_managed_object_name = MagicMock()
with patch('salt.utils.vmware.get_managed_object_name',
mock_get_managed_object_name):
vmware.get_dvportgroups(self.mock_dc_ref)
mock_get_managed_object_name.assert_called_once_with(self.mock_dc_ref)
def test_traversal_spec_datacenter_parent(self):
mock_traversal_spec = MagicMock(return_value='traversal_spec')
with patch(
'salt.utils.vmware.vmodl.query.PropertyCollector.TraversalSpec',
mock_traversal_spec):
vmware.get_dvportgroups(self.mock_dc_ref)
mock_traversal_spec.assert_has_calls(
[call(path='childEntity', skip=False, type=vim.Folder),
call(path='networkFolder', skip=True, type=vim.Datacenter,
selectSet=['traversal_spec'])])
def test_traversal_spec_dvs_parent(self):
mock_traversal_spec = MagicMock(return_value='traversal_spec')
with patch(
'salt.utils.vmware.vmodl.query.PropertyCollector.TraversalSpec',
mock_traversal_spec):
vmware.get_dvportgroups(self.mock_dvs_ref)
mock_traversal_spec.assert_called_once_with(
path='portgroup', skip=False, type=vim.DistributedVirtualSwitch)
def test_get_mors_with_properties(self):
vmware.get_dvportgroups(self.mock_dvs_ref)
self.mock_get_mors.assert_called_once_with(
self.mock_si, vim.DistributedVirtualPortgroup,
container_ref=self.mock_dvs_ref, property_list=['name'],
traversal_spec=self.mock_traversal_spec)
def test_get_no_pgs(self):
ret = vmware.get_dvportgroups(self.mock_dvs_ref)
self.assertEqual(ret, [])
def test_get_all_pgs(self):
ret = vmware.get_dvportgroups(self.mock_dvs_ref,
get_all_portgroups=True)
self.assertEqual(ret, [i['object'] for i in self.mock_items])
def test_filtered_pgs(self):
ret = vmware.get_dvportgroups(self.mock_dvs_ref,
portgroup_names=['fake_pg1', 'fake_pg3', 'no_pg'])
self.assertEqual(ret, [self.mock_items[0]['object'],
self.mock_items[2]['object']])
@skipIf(NO_MOCK, NO_MOCK_REASON)
@skipIf(not HAS_PYVMOMI, 'The \'pyvmomi\' library is missing')
class GetUplinkDvportgroupTestCase(TestCase):
def setUp(self):
self.mock_si = MagicMock()
self.mock_dvs_ref = MagicMock(spec=vim.DistributedVirtualSwitch)
self.mock_traversal_spec = MagicMock()
self.mock_items = [{'object': MagicMock(),
'tag': [MagicMock(key='fake_tag')]},
{'object': MagicMock(),
'tag': [MagicMock(key='SYSTEM/DVS.UPLINKPG')]}]
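# Only the portgroup tagged 'SYSTEM/DVS.UPLINKPG' should be returned as the uplink
# portgroup; the first entry acts as a decoy.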
self.mock_get_mors = MagicMock(return_value=self.mock_items)
patches = (
('salt.utils.vmware.get_managed_object_name',
MagicMock(return_value='fake_dvs')),
('salt.utils.vmware.get_mors_with_properties',
self.mock_get_mors),
('salt.utils.vmware.get_service_instance_from_managed_object',
MagicMock(return_value=self.mock_si)),
('salt.utils.vmware.vmodl.query.PropertyCollector.TraversalSpec',
MagicMock(return_value=self.mock_traversal_spec)))
for mod, mock in patches:
patcher = patch(mod, mock)
patcher.start()
self.addCleanup(patcher.stop)
def tearDown(self):
for attr in ('mock_si', 'mock_dvs_ref', 'mock_traversal_spec',
'mock_items', 'mock_get_mors'):
delattr(self, attr)
def test_get_managed_object_name_call(self):
mock_get_managed_object_name = MagicMock()
with patch('salt.utils.vmware.get_managed_object_name',
mock_get_managed_object_name):
vmware.get_uplink_dvportgroup(self.mock_dvs_ref)
mock_get_managed_object_name.assert_called_once_with(self.mock_dvs_ref)
def test_traversal_spec(self):
mock_traversal_spec = MagicMock(return_value='traversal_spec')
with patch(
'salt.utils.vmware.vmodl.query.PropertyCollector.TraversalSpec',
mock_traversal_spec):
vmware.get_uplink_dvportgroup(self.mock_dvs_ref)
mock_traversal_spec.assert_called_once_with(
path='portgroup', skip=False, type=vim.DistributedVirtualSwitch)
def test_get_mors_with_properties(self):
vmware.get_uplink_dvportgroup(self.mock_dvs_ref)
self.mock_get_mors.assert_called_once_with(
self.mock_si, vim.DistributedVirtualPortgroup,
container_ref=self.mock_dvs_ref, property_list=['tag'],
traversal_spec=self.mock_traversal_spec)
def test_get_no_uplink_pg(self):
with patch('salt.utils.vmware.get_mors_with_properties',
MagicMock(return_value=[])):
with self.assertRaises(VMwareObjectRetrievalError) as excinfo:
vmware.get_uplink_dvportgroup(self.mock_dvs_ref)
self.assertEqual(excinfo.exception.strerror,
'Uplink portgroup of DVS \'fake_dvs\' wasn\'t found')
def test_get_uplink_pg(self):
ret = vmware.get_uplink_dvportgroup(self.mock_dvs_ref)
self.assertEqual(ret, self.mock_items[1]['object'])
@skipIf(NO_MOCK, NO_MOCK_REASON)
@skipIf(not HAS_PYVMOMI, 'The \'pyvmomi\' library is missing')
class CreateDvportgroupTestCase(TestCase):
def setUp(self):
self.mock_pg_spec = MagicMock()
self.mock_task = MagicMock(spec=FakeTaskClass)
self.mock_dvs_ref = \
MagicMock(CreateDVPortgroup_Task=MagicMock(
return_value=self.mock_task))
self.mock_wait_for_task = MagicMock()
patches = (
('salt.utils.vmware.get_managed_object_name',
MagicMock(return_value='fake_dvs')),
('salt.utils.vmware.wait_for_task', self.mock_wait_for_task))
for mod, mock in patches:
patcher = patch(mod, mock)
patcher.start()
self.addCleanup(patcher.stop)
def tearDown(self):
for attr in ('mock_pg_spec', 'mock_dvs_ref', 'mock_task',
'mock_wait_for_task'):
delattr(self, attr)
def test_get_managed_object_name_call(self):
mock_get_managed_object_name = MagicMock()
with patch('salt.utils.vmware.get_managed_object_name',
mock_get_managed_object_name):
vmware.create_dvportgroup(self.mock_dvs_ref, self.mock_pg_spec)
mock_get_managed_object_name.assert_called_once_with(self.mock_dvs_ref)
def test_create_dvportgroup_task(self):
vmware.create_dvportgroup(self.mock_dvs_ref, self.mock_pg_spec)
self.mock_dvs_ref.CreateDVPortgroup_Task.assert_called_once_with(
self.mock_pg_spec)
def test_create_dvportgroup_task_raises_no_permission(self):
exc = vim.fault.NoPermission()
exc.privilegeId = 'Fake privilege'
self.mock_dvs_ref.CreateDVPortgroup_Task = MagicMock(side_effect=exc)
with self.assertRaises(VMwareApiError) as excinfo:
vmware.create_dvportgroup(self.mock_dvs_ref, self.mock_pg_spec)
self.assertEqual(excinfo.exception.strerror,
'Not enough permissions. Required privilege: '
'Fake privilege')
def test_create_dvportgroup_task_raises_vim_fault(self):
exc = vim.fault.VimFault()
exc.msg = 'VimFault msg'
self.mock_dvs_ref.CreateDVPortgroup_Task = MagicMock(side_effect=exc)
with self.assertRaises(VMwareApiError) as excinfo:
vmware.create_dvportgroup(self.mock_dvs_ref, self.mock_pg_spec)
self.assertEqual(excinfo.exception.strerror, 'VimFault msg')
def test_create_dvportgroup_task_raises_runtime_fault(self):
exc = vmodl.RuntimeFault()
exc.msg = 'RuntimeFault msg'
self.mock_dvs_ref.CreateDVPortgroup_Task = MagicMock(side_effect=exc)
with self.assertRaises(VMwareRuntimeError) as excinfo:
vmware.create_dvportgroup(self.mock_dvs_ref, self.mock_pg_spec)
self.assertEqual(excinfo.exception.strerror, 'RuntimeFault msg')
def test_wait_for_tasks(self):
vmware.create_dvportgroup(self.mock_dvs_ref, self.mock_pg_spec)
self.mock_wait_for_task.assert_called_once_with(
self.mock_task, 'fake_dvs',
'<class \'unit.utils.vmware.test_dvs.FakeTaskClass\'>')
@skipIf(NO_MOCK, NO_MOCK_REASON)
@skipIf(not HAS_PYVMOMI, 'The \'pyvmomi\' library is missing')
class UpdateDvportgroupTestCase(TestCase):
def setUp(self):
self.mock_pg_spec = MagicMock()
self.mock_task = MagicMock(spec=FakeTaskClass)
self.mock_pg_ref = \
MagicMock(ReconfigureDVPortgroup_Task=MagicMock(
return_value=self.mock_task))
self.mock_wait_for_task = MagicMock()
patches = (
('salt.utils.vmware.get_managed_object_name',
MagicMock(return_value='fake_pg')),
('salt.utils.vmware.wait_for_task', self.mock_wait_for_task))
for mod, mock in patches:
patcher = patch(mod, mock)
patcher.start()
self.addCleanup(patcher.stop)
def tearDown(self):
for attr in ('mock_pg_spec', 'mock_pg_ref', 'mock_task',
'mock_wait_for_task'):
delattr(self, attr)
def test_get_managed_object_name_call(self):
mock_get_managed_object_name = MagicMock()
with patch('salt.utils.vmware.get_managed_object_name',
mock_get_managed_object_name):
vmware.update_dvportgroup(self.mock_pg_ref, self.mock_pg_spec)
mock_get_managed_object_name.assert_called_once_with(self.mock_pg_ref)
def test_reconfigure_dvportgroup_task(self):
vmware.update_dvportgroup(self.mock_pg_ref, self.mock_pg_spec)
self.mock_pg_ref.ReconfigureDVPortgroup_Task.assert_called_once_with(
self.mock_pg_spec)
def test_reconfigure_dvportgroup_task_raises_no_permission(self):
exc = vim.fault.NoPermission()
exc.privilegeId = 'Fake privilege'
self.mock_pg_ref.ReconfigureDVPortgroup_Task = \
MagicMock(side_effect=exc)
with self.assertRaises(VMwareApiError) as excinfo:
vmware.update_dvportgroup(self.mock_pg_ref, self.mock_pg_spec)
self.assertEqual(excinfo.exception.strerror,
'Not enough permissions. Required privilege: '
'Fake privilege')
def test_reconfigure_dvportgroup_task_raises_vim_fault(self):
exc = vim.fault.VimFault()
exc.msg = 'VimFault msg'
self.mock_pg_ref.ReconfigureDVPortgroup_Task = \
MagicMock(side_effect=exc)
with self.assertRaises(VMwareApiError) as excinfo:
vmware.update_dvportgroup(self.mock_pg_ref, self.mock_pg_spec)
self.assertEqual(excinfo.exception.strerror, 'VimFault msg')
def test_reconfigure_dvportgroup_task_raises_runtime_fault(self):
exc = vmodl.RuntimeFault()
exc.msg = 'RuntimeFault msg'
self.mock_pg_ref.ReconfigureDVPortgroup_Task = \
MagicMock(side_effect=exc)
with self.assertRaises(VMwareRuntimeError) as excinfo:
vmware.update_dvportgroup(self.mock_pg_ref, self.mock_pg_spec)
self.assertEqual(excinfo.exception.strerror, 'RuntimeFault msg')
def test_wait_for_tasks(self):
vmware.update_dvportgroup(self.mock_pg_ref, self.mock_pg_spec)
self.mock_wait_for_task.assert_called_once_with(
self.mock_task, 'fake_pg',
'<class \'unit.utils.vmware.test_dvs.FakeTaskClass\'>')
@skipIf(NO_MOCK, NO_MOCK_REASON)
@skipIf(not HAS_PYVMOMI, 'The \'pyvmomi\' library is missing')
class RemoveDvportgroupTestCase(TestCase):
def setUp(self):
self.mock_task = MagicMock(spec=FakeTaskClass)
self.mock_pg_ref = \
MagicMock(Destroy_Task=MagicMock(
return_value=self.mock_task))
self.mock_wait_for_task = MagicMock()
patches = (
('salt.utils.vmware.get_managed_object_name',
MagicMock(return_value='fake_pg')),
('salt.utils.vmware.wait_for_task', self.mock_wait_for_task))
for mod, mock in patches:
patcher = patch(mod, mock)
patcher.start()
self.addCleanup(patcher.stop)
def tearDown(self):
for attr in ('mock_pg_ref', 'mock_task', 'mock_wait_for_task'):
delattr(self, attr)
def test_get_managed_object_name_call(self):
mock_get_managed_object_name = MagicMock()
with patch('salt.utils.vmware.get_managed_object_name',
mock_get_managed_object_name):
vmware.remove_dvportgroup(self.mock_pg_ref)
mock_get_managed_object_name.assert_called_once_with(self.mock_pg_ref)
def test_destroy_task(self):
vmware.remove_dvportgroup(self.mock_pg_ref)
self.mock_pg_ref.Destroy_Task.assert_called_once_with()
def test_destroy_task_raises_no_permission(self):
exc = vim.fault.NoPermission()
exc.privilegeId = 'Fake privilege'
self.mock_pg_ref.Destroy_Task = MagicMock(side_effect=exc)
with self.assertRaises(VMwareApiError) as excinfo:
vmware.remove_dvportgroup(self.mock_pg_ref)
self.assertEqual(excinfo.exception.strerror,
'Not enough permissions. Required privilege: '
'Fake privilege')
def test_destroy_task_raises_vim_fault(self):
exc = vim.fault.VimFault()
exc.msg = 'VimFault msg'
self.mock_pg_ref.Destroy_Task = MagicMock(side_effect=exc)
with self.assertRaises(VMwareApiError) as excinfo:
vmware.remove_dvportgroup(self.mock_pg_ref)
self.assertEqual(excinfo.exception.strerror, 'VimFault msg')
def test_destroy_task_raises_runtime_fault(self):
exc = vmodl.RuntimeFault()
exc.msg = 'RuntimeFault msg'
self.mock_pg_ref.Destroy_Task = MagicMock(side_effect=exc)
with self.assertRaises(VMwareRuntimeError) as excinfo:
vmware.remove_dvportgroup(self.mock_pg_ref)
self.assertEqual(excinfo.exception.strerror, 'RuntimeFault msg')
def test_wait_for_tasks(self):
vmware.remove_dvportgroup(self.mock_pg_ref)
self.mock_wait_for_task.assert_called_once_with(
self.mock_task, 'fake_pg',
'<class \'unit.utils.vmware.test_dvs.FakeTaskClass\'>')


@@ -14,6 +14,7 @@ from tests.support.unit import TestCase, skipIf
from tests.support.mock import NO_MOCK, NO_MOCK_REASON, patch, MagicMock
# Import Salt libraries
from salt.exceptions import ArgumentValueError
import salt.utils.vmware
# Import Third Party Libs
try:
@@ -54,6 +55,14 @@ class GetHostsTestCase(TestCase):
self.mock_prop_hosts = [self.mock_prop_host1, self.mock_prop_host2,
self.mock_prop_host3]
def test_cluster_no_datacenter(self):
with self.assertRaises(ArgumentValueError) as excinfo:
salt.utils.vmware.get_hosts(self.mock_si,
cluster_name='fake_cluster')
self.assertEqual(excinfo.exception.strerror,
'Must specify the datacenter when specifying the '
'cluster')
def test_get_si_no_datacenter_no_cluster(self):
mock_get_mors = MagicMock()
mock_get_root_folder = MagicMock(return_value=self.mock_root_folder)
@@ -124,23 +133,20 @@
self.assertEqual(res, [])
def test_filter_cluster(self):
cluster1 = vim.ClusterComputeResource('fake_good_cluster')
cluster2 = vim.ClusterComputeResource('fake_bad_cluster')
# Mock cluster1.name and cluster2.name
cluster1._stub = MagicMock(InvokeAccessor=MagicMock(
return_value='fake_good_cluster'))
cluster2._stub = MagicMock(InvokeAccessor=MagicMock(
return_value='fake_bad_cluster'))
self.mock_prop_host1['parent'] = cluster2
self.mock_prop_host2['parent'] = cluster1
self.mock_prop_host3['parent'] = cluster1
self.mock_prop_host1['parent'] = vim.ClusterComputeResource('cluster')
self.mock_prop_host2['parent'] = vim.ClusterComputeResource('cluster')
self.mock_prop_host3['parent'] = vim.Datacenter('dc')
mock_get_cl_name = MagicMock(
side_effect=['fake_bad_cluster', 'fake_good_cluster'])
with patch('salt.utils.vmware.get_mors_with_properties',
MagicMock(return_value=self.mock_prop_hosts)):
res = salt.utils.vmware.get_hosts(self.mock_si,
datacenter_name='fake_datacenter',
cluster_name='fake_good_cluster',
get_all_hosts=True)
self.assertEqual(res, [self.mock_host2, self.mock_host3])
with patch('salt.utils.vmware.get_managed_object_name',
mock_get_cl_name):
res = salt.utils.vmware.get_hosts(
self.mock_si, datacenter_name='fake_datacenter',
cluster_name='fake_good_cluster', get_all_hosts=True)
self.assertEqual(mock_get_cl_name.call_count, 2)
self.assertEqual(res, [self.mock_host2])
def test_no_hosts(self):
with patch('salt.utils.vmware.get_mors_with_properties',


@@ -264,14 +264,14 @@ class GetDatastoresTestCase(TestCase):
mock_reference,
get_all_datastores=True)
mock_traversal_spec_init.assert_called([
mock_traversal_spec_init.assert_has_calls([
call(path='datastore',
skip=False,
type=vim.Datacenter),
call(path='childEntity',
selectSet=['traversal'],
skip=False,
type=vim.Folder),
call(path='datastore',
skip=False,
type=vim.Datacenter)])
type=vim.Folder)])
def test_unsupported_reference_type(self):
class FakeClass(object):
@@ -379,7 +379,7 @@ class RenameDatastoreTestCase(TestCase):
with self.assertRaises(VMwareApiError) as excinfo:
salt.utils.vmware.rename_datastore(self.mock_ds_ref,
'fake_new_name')
self.assertEqual(excinfo.exception.message, 'vim_fault')
self.assertEqual(excinfo.exception.strerror, 'vim_fault')
def test_rename_datastore_raise_runtime_fault(self):
exc = vmodl.RuntimeFault()
@@ -388,7 +388,7 @@ class RenameDatastoreTestCase(TestCase):
with self.assertRaises(VMwareRuntimeError) as excinfo:
salt.utils.vmware.rename_datastore(self.mock_ds_ref,
'fake_new_name')
self.assertEqual(excinfo.exception.message, 'runtime_fault')
self.assertEqual(excinfo.exception.strerror, 'runtime_fault')
def test_rename_datastore(self):
salt.utils.vmware.rename_datastore(self.mock_ds_ref, 'fake_new_name')